And here we are.
This is the first post on this blog after I migrated off WordPress for a static solution.
At first, I wanted to set things up on Amazon Web Services (AWS), which was an adventure. There are lots of online posts about how to do this, but Amazon’s services change quickly and there was often outdated information. For instance, Amazon had a wizard that led you through setting up a static site, which I clicked on. It helpfully handled a lot of grunt work, but now I was out of sync with all of the guides. Oh well.
I think things are confusing partly because there are four AWS components all interacting to make a static site happen:
- Amazon S3: Simple Storage Service. Stores your files and serves them, but from a really long URL. Amazon has a command-line client, `aws`, that will let you push things by typing something like `aws s3 sync --acl "public-read" --sse "AES256" public/ s3://aws-website-blogverosite-zld83`.
- Amazon Certificate Manager: Does what you think it does, manages certificates. It will also issue you an HTTPS certificate for a domain you control by emailing you.
- Amazon CloudFront: Distributes content through the content delivery network and serves your website from a reasonable domain. You can load a certificate onto it if it’s issued from Amazon Certificate Manager in the `us-east-1` region, and serve it using SNI for free.
- Amazon Route 53: DNS service, which lets you point your domain to the CloudFront servers.
(I really should understand this better than I do after taking 6.033; hopefully I got the details right.)
I actually ended up abandoning AWS for the other popular static-site-serving option, GitHub Pages + CloudFlare, once I realized that my setup wasn’t actually less sketchy just because all of its components were under Amazon. As I understand it, you can get that sweet (search-engine-rank-boosting) green padlock on your GitHub Pages site by getting CloudFlare to serve it to users over HTTPS, but behind the scenes there is still no HTTPS between GitHub Pages and the CloudFlare servers. Similarly, in the above setup I think there is still no HTTPS between Amazon S3 and Amazon CloudFront, so these setups still don’t achieve end-to-end HTTPS. Still, I tried and have the padlock to show for it… For me the tiebreaker ended up being that I knew GitHub Pages supported extensionless permalinks. I’m kind of picky about URLs. I also couldn’t quite figure out how 404 pages work exactly, although I didn’t really try that hard. I never got to setting the “error” page, but my site seemed to be giving 403s, which seemed wrong; http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html suggests you only get 404s for missing objects if the requester has been granted `s3:ListBucket` (otherwise S3 hides the miss behind a 403).
As for site generation, I went with Hugo because it was the second-most popular static site generator on the most popular list of static site generators I could find, staticgen.com, and I wasn’t satisfied with Jekyll’s filename convention. I wanted flexibility instead of simplicity for this site. Even then, there were enough things I wanted to change that I hacked on it, sent two pull requests (one to shell out to Pandoc and one to support custom permalinks for categories and tags), and locally merged a third (extensionless permalinks).
This was essentially my first actual experience writing Go (*ahem* Golang, for the search engines), which was also an adventure. My previous impressions of Go weren’t great and I had resisted learning it for a while, but I thought most of that was just because it was a language for a different niche of programmers and projects than the one I was in: big projects for big teams. Bashing Go is a fairly common pastime; there’s even a GitHub repo for collecting articles complaining about Go. After actually digging into a Go codebase, I think my overall feeling went from “Go is bad” to “Go could have been so much better”. Go does a lot of things well and scratches a lot of itches that previous languages just don’t. There are pointers, but you can’t shoot yourself in the foot with them, which is already transformational; Go handles random-access integer-indexed data structures with slices instead. The syntax is mostly crisp and concise (I haven’t made up my mind about the way one declares methods on types yet, but right now I think I like it). Functions are first-class objects with concise syntax and types. The opinionated tooling built into the `go` executable makes getting started easy. Thanks to Google’s backing, the ecosystem is great. This is to say nothing of concurrency, which is supposed to be Go’s big selling point but which I haven’t even used yet.
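To make a couple of those points concrete, here’s a tiny sketch of my own (the names are invented for illustration, nothing to do with Hugo): a method hung off a plain named type, a slice as the everyday random-access structure, and a function passed around as a value.

```go
package main

import (
	"fmt"
	"strings"
)

// Tag is a plain named type; the method below hangs off it directly,
// no class declaration required.
type Tag string

// Slug lowercases a tag and replaces spaces with hyphens.
func (t Tag) Slug() string {
	return strings.ToLower(strings.ReplaceAll(string(t), " ", "-"))
}

// apply takes a function as an ordinary argument and maps it over a slice.
func apply(tags []Tag, f func(Tag) string) []string {
	out := make([]string, 0, len(tags))
	for _, t := range tags {
		out = append(out, f(t))
	}
	return out
}

func main() {
	tags := []Tag{"Static Sites", "Go"} // a slice: the everyday random-access structure
	fmt.Println(apply(tags, Tag.Slug))  // [static-sites go]
}
```

The method expression `Tag.Slug` in the last line is just an ordinary `func(Tag) string` value, which is what I mean by first-class functions with concise types.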
Still, pretty much all of the reasons I didn’t like Go before stand. To be fair, I think a lot of the reasons I dislike Go, including most of the reasons I had a negative impression of it, boil down to the fact that I’m just not a fan of removing language features because they can be abused, usually to write unreadable or unmaintainable code. I don’t think I’ll ever get over the lack of ternary expressions. A more advanced type system would also be nice (I will never get tired of showing people the Canadian Aboriginal Syllabics thing). But from my couple-hour dip into a big Go codebase, I haven’t acutely missed these features, so I won’t go into those reasons here.
I feel like the biggest missed opportunity is the pesky `nil`, or in more memorable terms the “billion dollar mistake” of null references. I’d prefer the full-blown functor `T -> Option[T]`, where you can create a nullable version of any type (James Iry makes this case here), but if I had just been given a hardwired single-level nullable, I would have grumbled a little and moved on. As it is, `nil` exists almost-but-not-quite-everywhere and I still don’t understand how it works. Apparently it inhabits pointers, interfaces, maps, slices, channels and function types, but a `string` can’t be `nil` even though strings seem to be basically immutable `byte` slices; so where everywhere else you can (or kind of have to) treat `nil` as a sentinel, for `string` you’re stuck with the empty string, crossing your fingers and hoping that you’ll never need to accept both a null value and the empty string as valid values for the same use case. You can take the length of a `nil` slice, and you can index into a `nil` map with any key and get the zero value of the map’s value type; but you can also make empty slices and empty maps that are not `nil` yet behave exactly the same as long as you only read from them. Things can even have a `nil` value but not a `nil` type or something, so now you have this weird nil error. What? I understand the semantics of JavaScript’s `null` and `undefined` better than this.
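Here’s a minimal sketch (a toy of my own, not from any real codebase) of the behaviors I’m grumbling about, ending with the typed-nil surprise:

```go
package main

import "fmt"

type myError struct{}

func (*myError) Error() string { return "boom" }

// mightFail returns a *myError that is nil; wrapped in the error
// interface, the result is famously not equal to nil.
func mightFail() error {
	var e *myError // nil pointer
	return e
}

func main() {
	var s []int          // nil slice
	var m map[string]int // nil map

	fmt.Println(len(s), s == nil) // 0 true: taking the length of a nil slice is fine
	fmt.Println(m["missing"])     // 0: reading a nil map gives the zero value of the value type

	es, em := []int{}, map[string]int{} // empty but non-nil
	fmt.Println(es == nil, em == nil)   // false false, yet they read exactly like the nil ones

	fmt.Println(mightFail() == nil) // false: nil value inside, non-nil type
}
```

That last line is the weird one: the interface value carries a `*myError` type with a `nil` value inside, so it compares unequal to `nil` even though the pointer it holds is nil.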
Then there’s the case-for-visibility feature (uppercase for exported identifiers, lowercase for unexported identifiers), which is even one of the Go team’s favorite features, even though in the same answer they note that they kind of went for supporting anybody who wanted to use Unicode identifiers and then ended up screwing those people over if their preferred language doesn’t have uppercase and lowercase letters. I’m sure some of my reaction is just unfamiliarity from reading, uh, every other language under the sun, but I think there are a few objective drawbacks. The small things: I have to think a lot harder about naming and remembering the names of things if the name has the same visibility as the type. The mitigation technique seems to be a convention of using short or single-letter variable names, which is mystifying to me: what are we doing, programming with punch cards in 1960? If I want to declare `struct`s that will be magically parsed from JSON or YAML or whatever, I have to think carefully about whatever case transformations the engine will want to do instead of just writing what I mean. Changing the visibility of an identifier can only be accomplished by changing every instance of that identifier in your code, and it shuts the door to more granular permissions. For what gain? I suppose it’s kind of useful to be able to know the visibility of an identifier just by glancing at it, but then one loses the ability to differentiate between other categorizations of identifiers by convention or by syntactic rule. My initial reaction was that now I can’t tell if something is a type or a variable, but upon further reflection I guess in Go you can usually tell them apart from syntactic context. What I miss more is telling whether something is a constant or a variable. Oh wait, Go doesn’t have constants…
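For the JSON point specifically, here’s a small illustrative sketch (the type and field names are made up): fields have to be exported, i.e. capitalized, before `encoding/json` will touch them, and then struct tags map them back to the lowercase keys you actually meant.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Post's fields must be exported (capitalized) for encoding/json to see
// them at all; the struct tags map them back to the lowercase keys I
// actually wanted to write.
type Post struct {
	Title string   `json:"title"`
	Tags  []string `json:"tags"`
	draft bool     // unexported: silently invisible to the JSON machinery
}

func main() {
	data := []byte(`{"title": "And here we are", "tags": ["meta", "go"], "draft": true}`)
	var p Post
	if err := json.Unmarshal(data, &p); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", p) // {Title:And here we are Tags:[meta go] draft:false}
}
```

The unexported `draft` field is skipped without complaint, tag or no tag, which is exactly the kind of case-driven surprise I mean.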
My last comment is neither praise nor a complaint, because I still don’t know how to feel about it: you specify the format that `Time.Format` uses by writing out the following magical reference date in the format you want.

`Mon Jan 2 15:04:05 -0700 MST 2006`

Month 1, day 2, hour 3 (PM), minute 4, second 5, year (20)06, time zone UTC−7. The great thing about this is that you can format a date by just looking up the short reference date and formatting it yourself, and you can also intuitively read off most of a time format just by looking at it, without staring at a table of `strftime` codes.
The awful thing about this is that a time format is super ambiguous if you don’t look up the reference date: it can totally look like the format you think it is but be slightly off, or totally nonsensical. Is `01/02/2006` month/day/year or day/month/year? What about `05/15/2006` even? Second/24-hour hour/year? What? The ambiguity between MM/DD and DD/MM and many other choices arises because the reference day is ≤ 12, but solving this by choosing a reference day ≥ 13 would remove functionality from the library, because then you couldn’t choose whether to zero-pad (or space-pad) the day by formatting the date as `2` or `02`.
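A quick sketch of both the nice part and the trap (the date and layouts here are arbitrary ones I picked):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.Date(2017, time.April, 9, 18, 30, 7, 0, time.UTC)

	// The layout string is just the reference date written the way you want your output to look.
	fmt.Println(t.Format("Mon Jan 2 15:04:05 2006")) // Sun Apr 9 18:30:07 2017

	// The trap: these layouts look similar but mean very different things.
	fmt.Println(t.Format("01/02/2006")) // 04/09/2017 (month/day/year)
	fmt.Println(t.Format("05/15/2006")) // 07/18/2017 (second/hour/year!)

	// Padding is chosen by how you write the reference day: "2", "02", or "_2".
	fmt.Println(t.Format("2 Jan"), "vs", t.Format("02 Jan")) // 9 Apr vs 09 Apr
}
```

Swapping `2` for `02` (or `_2`) in the layout is exactly the padding control that a reference day ≥ 13 would have destroyed.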
Some other notes: The blog theme/CSS is based on Hugo Nuo; it was pretty useful to have scaffolding to base everything on, but I modified it pretty heavily and I don’t know if it’s still recognizable as such. Pandoc syntax highlighting will produce HTML with suitable markup and classes, but you need to supply your own CSS. It seems the best way to do this is to grab the CSS from the output of `pandoc -s` on a trivial highlighted code block, perhaps with your preferred `--highlight-style` (I chose `tango`).
I think that’s most of what I have to say about the technical details of this new blog. There are a lot of missing features, of course, most notably comments, but given their rarity on the old blog I don’t think I’m in a rush to get them working.
As for the content, you might be able to tell that I haven’t ported all of the posts from my old blog. I took some kind of transitive closure of a handful of recent posts just to have somewhere to start, and as I get free time, I’ll probably keep porting more posts (that I want to) and also merge in more technical posts on other subjects from other places once I figure out a categorization system I’m happy with. I might even sneakily merge in a few old drafts I never posted on my old blog. Finally, I have a few drafts of CTF writeups that might be up Soon™. Stay tuned or something.