I board the plane to Addis. I am the sixth person on the plane, but I get to my seat (12A) and it is taken.
No problem. I sit in 13A instead. A man sits next to me; he is friendly. His ticket says 13A, but he makes no mention of the fact that I’m in his seat. He receives a phone call.
Someone comes with the ticket for 13C, and says ‘Excuse me, you’re in my seat’. But 13C is on the phone, and doesn’t speak much English, and most importantly doesn’t care. After all, it’s just a seat on a fifty minute flight. I explain to the new man that nobody cares, and if he doesn’t mind, he’s best off just taking any seat somewhere else. But he insists. He calls the flight attendant.
The flight attendant asks for the ticket of the man in 13C. He produces 13A. She asks me for my ticket. I produce 12A. She asks 12A for his ticket. He produces 12J. Someone is in 12J too, and it appears that she has a very good reason for being in 12J, judging from the length of the conversation.
The flight attendant doesn’t know what to do, and escalates the matter to her colleague.
Meanwhile, the man is all in on 13C. It is rightfully his! It was ordained to him by a small stub of paper printed only an hour earlier. He allows other passengers through, apologises for being in the way, hovers over the seat, waits, expects the flight attendants to sort it out. My unfazed friend in 13C is still on the phone.
Now there are two flight attendants trying to convince the Rightful Holder of 13C to sit somewhere else, but by now all the good seats are gone. He gets mad - nothing is happening, and nobody feels much sympathy for anyone except the flight attendants.
And then, the flight attendants solve the puzzle by asking me to move. I agree; it’s just a seat on a fifty minute flight, after all. But in that exact instant, the man storms off down the aisle, and sits grumpily at the back of the plane.
I went to Lofoten in Norway for a four-night getaway with some friends this week. Here are some photos.
I took all of these on an Olympus EM-10 (original, there’s a newer Mk II now) with an Olympus 9-18 mm f4-5.6 lens, some of them with a polarizing filter.
This was my first time taking photos with such a wide lens – I bought a 25mm (50mm equiv.) prime lens a few months after I bought the EM-10 two years ago, and I’ve used it ever since. I felt like the colours weren’t quite as vivid with this lens, but it was pretty nice to be able to capture some sweeping landscapes. The setup is also ludicrously small and light, which makes it perfect to stick in a backpack and scramble up mountains (e.g. Reinebringen).
Getting there, where to stay etc.
I’m not by any means claiming that this is the best way to do it, but it worked really well for us:
The southern part of Lofoten is less built up than the north. We stayed at Toppøy Rorbuer (run by the same people as Sakrisøy Rorbuer, linked), and loved the relative isolation. Svolvær, on the other hand, is probably easier to organise / has more tours etc, but it’s definitely a tourist town.
Shoulder season was a great time to go. It wasn’t too crowded, and we managed to book a cabin only a month out. You can see the northern lights at night (though not as strongly as later in the year), and still get around 11 hours of daylight.
Fly into Bodø, organise car rental at the airport, and take the car on the ferry to Moskenes. It’s much easier to get around with a car, and the roads are amazing fun to drive (we saw a number of buses around, but I’d want to do more research before relying on them). Renting a car in the south itself is a little more fiddly, and the alternative is to pick one up in Svolvær, a two-hour drive from where we were staying.
Cook your own meals, mostly. Eating out is expensive (NOK 300+ / AUD 50+ / EUR 35+ / USD 35+ for a main meal). But do try tørrfisk (stockfish) and have a glass of aquavit sometime.
I wrapped up at Google last week, but I spent the months before I left working on an Australian Election tracker map, which was completely unlike any project I’d worked on before.
We first assembled the team for this about 10 weeks before the election, from a number of teams across Google Sydney. By the time we actually started building anything, it was about six weeks out from the election.
Team photos. On the left, you can see me stuffing T-shirts into Barry, the friendly office bean-bag shark.
The website had two main goals - to provide information on where to vote, and to provide live results after the official data feed went live on the day.
We used Google App Engine to serve everything. App Engine was super easy to use - being able to write code and have it scale automatically is a huge win to me.
The backend was written in Go. In addition to the HTML / JS / CSS for the web app, it served static geographical data, which is publicly available from the Australian Electoral Commission (AEC).
For data sources that needed to update live, we used Firebase. This included live election results, and information on which polling booths had sausage sizzles and cake stands (it’s an Australian tradition).
We wrote the frontend in Dart, using Angular 2 as the framework to tie everything together.
I spent most of my time working on the geographical data backend and building out the web UI, so in this article I’m going to focus on those aspects.
Serving geospatial data
The representation of election results on a map was one of the most interesting differentiators between Google’s election map and other coverage. It also turned out to be surprisingly challenging to implement.
We were able to assume that the geographical data was static - once the electorates and polling places had been defined, they weren’t going to change often. This meant that we could build them directly into whatever serving infrastructure we used, which made scaling easier.
We started by investigating Carto (formerly CartoDB), but found that it was a lot of extra infrastructure for the small amount of it we wanted to use. We basically would’ve ended up falling back to their SQL API for queries, so we might as well just run PostGIS on a PostgreSQL database instead1.
PostGIS, however, is a little tricky to set up. We got it running with the sample AEC data inside a Docker container, with the intention of serving static geographical data from a cluster running on Google Container Engine.
This meant that we’d also have to configure App Engine to talk to our PostGIS instances, and scale them separately, which adds another dimension of complexity. One of our engineers wondered if there was another way, and knocked up a prototype solution directly in the Go app using go-shp (to read the shapefiles from the AEC) and rtreego (for lookups). This turned out to be a significantly less complex solution: the code loads the shapefiles once on startup, then serves them from memory for the lifetime of the service.
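The serve-from-memory pattern is easy to sketch. The real service parsed the AEC shapefiles with go-shp and indexed the polygons with rtreego; the stdlib-only sketch below is mine (types and names like `Electorate` and `electorateFor` are illustrative, not from the project) and shows just the core lookup: load polygons once, then answer “which electorate contains this point?” with a ray-casting test. In the real setup an R-tree bounding-box query would prune the candidates before this exact test runs.

```go
package main

import "fmt"

// Point is a longitude/latitude pair.
type Point struct{ X, Y float64 }

// Electorate pairs a name with a boundary ring. (The real service
// loaded these from AEC shapefiles via go-shp and indexed them with
// rtreego; this sketch just keeps a slice in memory.)
type Electorate struct {
	Name string
	Ring []Point
}

// contains is the standard even-odd ray-casting point-in-polygon test.
func (e Electorate) contains(p Point) bool {
	in := false
	n := len(e.Ring)
	for i, j := 0, n-1; i < n; j, i = i, i+1 {
		a, b := e.Ring[i], e.Ring[j]
		// Count crossings of a horizontal ray from p to the right.
		if (a.Y > p.Y) != (b.Y > p.Y) &&
			p.X < (b.X-a.X)*(p.Y-a.Y)/(b.Y-a.Y)+a.X {
			in = !in
		}
	}
	return in
}

// electorateFor scans every polygon; in the real service an R-tree
// query would narrow this down to a handful of candidates first.
func electorateFor(all []Electorate, p Point) (string, bool) {
	for _, e := range all {
		if e.contains(p) {
			return e.Name, true
		}
	}
	return "", false
}

func main() {
	// Two toy "electorates": unit squares side by side.
	all := []Electorate{
		{"WestVille", []Point{{0, 0}, {1, 0}, {1, 1}, {0, 1}}},
		{"EastVille", []Point{{1, 0}, {2, 0}, {2, 1}, {1, 1}}},
	}
	name, _ := electorateFor(all, Point{1.5, 0.5})
	fmt.Println(name) // EastVille
}
```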
The AEC shape data worked out to around 46 MB for all of Australia, so we ran some manual preprocessing with Mapshaper to simplify the polygons down to more efficient sizes for each zoom level. With this scheme, we kept the page load under 500 kB.
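We used Mapshaper as a tool rather than writing our own simplification, so there’s no project code to show here, but the underlying idea — drop vertices whose removal barely changes the shape — can be sketched with a classic Douglas-Peucker pass (Mapshaper’s default algorithm is actually Visvalingam simplification; this is just an illustration, and all the names are mine):

```go
package main

import (
	"fmt"
	"math"
)

// Point is a 2D coordinate pair.
type Point struct{ X, Y float64 }

// perpDist is the perpendicular distance from p to the line through a and b.
func perpDist(p, a, b Point) float64 {
	dx, dy := b.X-a.X, b.Y-a.Y
	l := math.Hypot(dx, dy)
	if l == 0 {
		return math.Hypot(p.X-a.X, p.Y-a.Y)
	}
	return math.Abs(dx*(a.Y-p.Y)-(a.X-p.X)*dy) / l
}

// simplify is a plain Douglas-Peucker pass: keep the endpoints,
// recurse around any vertex that deviates from the chord by more
// than tol, and drop everything that doesn't.
func simplify(pts []Point, tol float64) []Point {
	if len(pts) < 3 {
		return pts
	}
	maxDist, idx := 0.0, 0
	last := len(pts) - 1
	for i := 1; i < last; i++ {
		if d := perpDist(pts[i], pts[0], pts[last]); d > maxDist {
			maxDist, idx = d, i
		}
	}
	if maxDist <= tol {
		return []Point{pts[0], pts[last]}
	}
	left := simplify(pts[:idx+1], tol)
	right := simplify(pts[idx:], tol)
	// Merge, dropping the duplicated split point.
	return append(append([]Point{}, left[:len(left)-1]...), right...)
}

func main() {
	line := []Point{{0, 0}, {1, 0.01}, {2, 0}, {3, 1}}
	fmt.Println(simplify(line, 0.1)) // drops the near-collinear middle point
}
```

A looser tolerance at low zoom levels and a tighter one at high zoom is what keeps the payload small without visibly degrading the map.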
This turned out to be super performant - one engineer estimated that each instance could handle roughly 4× the load of the Python servers he’d written for a previous, similar project. We squeezed out even better performance thanks to some clever caching design by other engineers on our team.
Frontend: Angular 2 and Dart
I should preface this section by saying that I don’t have a lot of experience with web frontend development. If I’ve said something that doesn’t sound right to you, please let me know - I’m keen to learn :)
Hemicycles and half-donuts
Angular isn’t without its frustrations though: there are some things that Angular’s abstractions just don’t work for. I implemented this rather fabulous component:
Fun fact - this seating arrangement is officially called a Hemicycle. I didn’t know this until the BBC used it on their Brexit real-time results feed (though this seems like a weird choice for a Yes/No vote with no quota). Our version of the hemicycle is thus known throughout the code as the half-donut component:
Anyway, using Angular for this got a little gnarly. There’s a lot to this component:
Combine certain parties into representative coalitions
Sort all parties from most to least votes
Allocate them to the left or right of the hemicycle based on whether they’re a “left-leaning” or “right-leaning” party2.
Draw a donut slice (yes, that’s what it’s called in the code) for each party. Each slice has to have the correct starting and ending angles.
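The steps above aren’t complicated in isolation. The real component was written in Dart; here’s a hypothetical Go sketch of steps 2–4 (sorting, left/right allocation, and angle computation — the types and names are mine, not from the codebase):

```go
package main

import (
	"fmt"
	"sort"
)

// Party and Slice are illustrative types, not from the real project.
type Party struct {
	Name    string
	Seats   int
	Leaning string // "left" or "right"
}

type Slice struct {
	Name       string
	Start, End float64 // degrees across the 180° hemicycle
}

// buildSlices sorts parties by seat count, then packs left-leaning
// parties in from 0° and right-leaning parties in from 180°, with
// each slice's span proportional to its share of total seats — so
// the smallest parties end up meeting in the middle.
func buildSlices(parties []Party, total int) []Slice {
	sort.Slice(parties, func(i, j int) bool {
		return parties[i].Seats > parties[j].Seats
	})
	var out []Slice
	lo, hi := 0.0, 180.0
	for _, p := range parties {
		span := 180.0 * float64(p.Seats) / float64(total)
		if p.Leaning == "left" {
			out = append(out, Slice{p.Name, lo, lo + span})
			lo += span
		} else {
			out = append(out, Slice{p.Name, hi - span, hi})
			hi -= span
		}
	}
	return out
}

func main() {
	// Toy numbers, not real election results.
	parties := []Party{
		{"Reds", 3, "left"},
		{"Blues", 1, "right"},
	}
	fmt.Println(buildSlices(parties, 4))
}
```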
Rendering is the part where the abstraction felt a little leaky, and Angular felt like more of a toolkit to get the job done rather than a comprehensive solution to “do things right”. As I write this section, I’m starting to wonder if Angular’s actually about as good as it can get here, and the web is just a bit of a mess.
My first rendering algorithm was:
The HalfDonutComponent generates a set of DonutSlices, each having a starting angle, an ending angle, and a fill colour
The template for the HalfDonutComponent feeds the DonutSlices into a DonutSlicePipe, which generates the appropriate SVG path. Unfortunately, there’s no way to specify arcs by angle in SVG - the arc command takes X,Y endpoint coordinates instead (whyyyyyyyyyy?). This meant I had to do more trigonometry than I would’ve liked.
This definitely worked, but in the interest of keeping the code simple and maintainable (and at the suggestion of a colleague), I changed it to the version embedded above (if you’re inclined, open the inspector and have a look at the generated SVG).
The new rendering algorithm:
The HalfDonutComponent generates a set of DonutSlices, each having an ending angle only.
There is a single half-donut path, which is reused multiple times.
The component creates multiple copies of this half-donut path, colours them differently for each party, and uses transform="rotate(angle, X-center, Y-center)" to angle them into the right positions (no complicated trigonometry!).
Right-aligned parties get rotated with a negative angle instead of a positive angle.
Clip the SVG, so that the area under the half-donut isn’t visible.
This produces a pretty nice result for a lot less code:
Dart felt like it was getting in the way as much as it was helping. Dart’s object streaming works really well with live updates from Firebase - it’s possible to just rebind parts of the UI as things change. Dart’s error messages, however, were often hard to understand, and debugging was complicated because Dart is only natively supported in Dartium, a custom build of Chromium. Builds took a surprisingly long time, and optional typing is very error-prone - we had a hard-to-diagnose JS error related to it that took weeks to find, mostly because we hadn’t enabled the relevant static analysis on build. In my opinion, this shouldn’t be necessary - language behaviour that changes between development and production builds is a bad idea. I think I’d like to try TypeScript next time.
I should note that if you were ok with using Carto for web map rendering as well, then it works really well as an integrated suite. That’s the approach that Democracy Sausage were using. ↩
Australian politics is mostly a two-party affair (but decreasingly so!). The Australian Labor Party is usually represented on the left of the hemicycle and the coalition of Liberal, National, Liberal-National, and Country Liberal parties (‘The Coalition’) is usually represented on the right. We needed to do something with the minor parties that win seats, so we put smaller slices in the middle, but if a party gets fewer than three seats, it’s aggregated into an ‘Other’ component. ↩
For all user-generated content on the Internet, there’s a signal-to-noise ratio. My theory is that in situations where the abstraction doesn’t work, there’s very little signal – it’s hard to create meaningful content when the context is mostly nonsensical. As it turns out, the noise can be pretty funny:
★★★★★ It's great to feel welcomed by my people in a foreign country. Dno't change a thing friendly canadians. https://t.co/oRRgeHN93t
Reviews of embassies fit the bill for this because reviews were designed to help individuals make comparisons between competitors. Reviews work well for any situation where there’s product differentiation – cafés, restaurants, and software are classic examples, which is why they’re rated so frequently. They work less well for pure commodities – at a petrol station or a supermarket, you’d only write a review if something really bad happened. They work terribly for things that aren’t competitive – for example, bridges (“The supporting cables used to vibrate, then they put in extra supporting cables. Good looking bridge, actually quite pleasant to walk across.”).
To me, embassies are the epitome of non-competitiveness - they’re a requirement to get a visa, but people don’t choose countries to go to based on reviews of embassies. In the words of one particularly frustrated reviewer:
On the other side of the coin, the way people interact with strict systems and handle rules and restrictions is a fundamental act of self-expression.
Indeed, the reason why all kinds of games are interesting – board games, sports, video games, whatever – is because by exploring the restricted problem space, you’re learning more about yourself, and about those who explore with you1.
The language used around the AlphaGo vs Lee Sedol match earlier this year really brought this to the front of the tech community’s mind – Go players knew that there was something inhuman about the very moves that AlphaGo played. Whether this was beautiful, or sad, or just weird is a matter of taste, but there’s no doubt that the moves were fundamentally inhuman.
The review box on Google Maps, like a game, is a bounded system – a review is attached to a place, it has a writing prompt, the user is to give a rating out of five stars.
On these fringes of what the product was originally designed for, and within the confines of the system, people self-express what’s important to them in the weirdest ways:
★★★★★ WITH THE SUCCESSES OF DUBAI AND OTHER URBAN MIRACLES SAUDI ARABIA BEATS SOME OF THE GUINNESS WORLD RECORDS. https://t.co/UCVK3D647B
The tweets are reviews taken directly from Google Maps, chosen at random from more than 11,000 embassies and 900 cities globally. They go out four times a day, are (usually) in English, and are unfiltered - though sometimes the reviews are automatically trimmed down to fit in a tweet.
I have no idea where I got this idea from, but I remember reading that chess is widely perceived as a form of self-expression for this reason. Interviews with Jamin Warren and Robin Hunicke in Offscreen issues #13 and #14 respectively get pretty close to the concept. ↩
♣️ 🅾️ 😳
🍒 👎 🐀
CLOCK FACE FIVE-THIRTY
CLUB SUIT O BUTTON FLUSHED FACE
CHERRIES THUMBS DOWN RAT
I made an absurd little thing that generates haikus from the official unicode emoji descriptions.
I came across the idea after seeing a bunch of bots on twitter recently – particularly @WizardGenerator and a few bots from @tinysubversions. After reading something that said that the best way to get started was to take a concept, boil it down to something as minimally viable as possible, and ship it, I decided to have a go.
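I haven’t shown the generator’s code here, but its core check — does a candidate line land on exactly five or seven syllables? — can be sketched naively by counting vowel groups. This is a hypothetical Go sketch, not the real thing, and the counting is deliberately crude (it miscounts silent ‘e’s; a serious generator would want a pronunciation dictionary like CMUdict):

```go
package main

import (
	"fmt"
	"strings"
)

// syllables counts vowel groups in an uppercase word — a crude
// stand-in for real syllable counting.
func syllables(word string) int {
	n, prevVowel := 0, false
	for _, r := range word {
		isVowel := strings.ContainsRune("AEIOUY", r)
		if isVowel && !prevVowel {
			n++ // a new run of vowels starts a new "syllable"
		}
		prevVowel = isVowel
	}
	if n == 0 {
		n = 1 // treat every word as at least one syllable
	}
	return n
}

// lineSyllables totals the count for a candidate haiku line, so a
// generator can keep drawing emoji descriptions until a line lands
// exactly on 5 or 7.
func lineSyllables(line string) int {
	total := 0
	for _, w := range strings.Fields(line) {
		total += syllables(w)
	}
	return total
}

func main() {
	fmt.Println(lineSyllables("CHERRIES THUMBS DOWN RAT")) // 5 — a valid closing line
}
```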
I thought it was funny to juxtapose something ancient, and old, and introspective, and personal, and effortful against something modern, and ephemeral, and mass-consumed.
Real haikus make you think; this is like the bastard limerick of haikus.
Also, I’ve always been interested in emoji (they feature on the homepage of my website). It’s utterly bizarre that something so modern has been somewhat standardised into language through the Unicode spec.
Emojis and meaning
Emoji illustrations have implicit meaning by virtue of being pictures, but a lot of the meaning now comes from the fact that Apple mainstreamed emoji first. The dancing lady is classy and exuberant, but the Unicode spec just says “DANCER” (with the note also used for “let’s party”). It makes no mention of pose, gender, the red dress, or a particular dance style. Just “DANCER”. I’ve seen PERSON WITH FOLDED HANDS used as a high-five. The Japanese INFORMATION DESK PERSON emoji is often used to mean sassy. People interpret the FACE WITH TEARS OF JOY emoji as just “crying” (with hilarious/disastrous results).
So for something so ubiquitous, it’s amazing that nobody knows what these symbols formally mean. Maybe the problem is that they were mobile-first for so long - you can’t include a formal description when the keyboard has to fit on a 4” screen. Maybe the entries in the Unicode specification were under-specified. Maybe, in seeking to make emoji memorable and fun, Apple’s engineers were a little too liberal in their interpretation of the spec.
Either way, we’re in a situation where emoji have a richness and layering of meaning. Not of the same quality as words, but hey, at least they’re mainstream and accessible.
HEY THE SPEC IS IN ALL CAPS FOR SOME REASON
It’s also extremely entertaining to me that the UNICODE DESCRIPTIONS are always UPPERCASE. It’s like the spec is yelling at us: “HEY, YOU! that’s a DOG FACE.” “DEAR READER! This is a MAN IN BUSINESS SUIT LEVITATING.”
So juxtaposing something ancient, and nuanced, and subtle, against something modern, and really really blunt and TIRELESSLY YELLING really appealed to me.