Moon Base Alpha and the Comlock

It is the year 1977. I am in third grade. German television is showing Moon Base Alpha (Space: 1999), and for the first time, I take a look into the future.

Commander John Koenig and Doctor Helena Russell

The show is set in the year 1999, and among the most spectacular achievements – apart from the laser guns you can set to kill or stun – is the comlock. That is a device the inhabitants of the moon base carry with them on their belts, and which allows them to do two things: open the electric doors of the moon base, and communicate. For that purpose, the comlock has a tiny screen built into its upper side, which allows you to see the other person.

Comlock

I am deeply fascinated by these devices and wonder whether something like that might indeed exist in the far future. I dream about it. I build a comlock out of Lego.

Of course the film crew does not have any comlocks either. Many years later I will read that even the most spectacularly small cathode-ray tube of the time – and you can clearly see that the comlocks have ray-tube monitors – did not fit into the comlock. Close-ups were therefore shot using a model that was significantly larger than the props the actors wore on their belts.

The tiny screens almost never show anything except talking heads. Actually, the computers in the moon base communicate with the inhabitants either via printers (though there is never a trash can in sight for all the paper they produce), or via artificial voices, engaging the humans in mysterious dialog. All that is enough to make me and my friends utterly excited and long for the future.

Kano and Doctor Bergman

What would we have said if we had known that, in the unfathomable future of thirty, forty years later, we would not carry around comlocks but fluorescent glass panes, and that all knowledge of mankind would be accessible through them, plus communication with any human who happens to live in the same century. Oh, and video calling also works, but it’s about the most boring thing you could do with these glass panes.

I guess we would have acknowledged that this unfathomably far future does make good on what we expect from it, and continued to play longingly with our Lego.

(Note: This was originally a German article, written for the German blog Techniktagebuch. But there is no fair use in German copyright law, and so we could not include the images – it does not matter that the material is from a British television programme, or even that the blog is hosted in the US; it is enough that a German audience is addressed. So I decided I’d rather address an English audience and include the images under fair use. We would probably not have expected that from the future.)

Tragic, alright

This is my contribution to Writing a Thousand Deaths, a project by German publisher Christiane Frohmann. The text is originally in German; I translated it into English with a little help from A. Jesse Jiryu Davis.

I felt dizzy and somehow it didn’t go away. I tried sitting down quietly somewhere and closing my eyes, but that didn’t help either – quite the opposite, I seemed to lose control even more that way. I thought that if I thought in the wrong direction now, my head would tear off.

I was afraid I’d hurt myself if I fell down somewhere around here or hit something. I was afraid I’d pee my pants again and the new wallet, which I’d bought only last week, would be soaked in urine and wrecked as well. So just to be careful I took it out of my pocket and placed it on the table beside me, but then I was afraid it would be stolen or simply lost if I ended up unconscious again and woke up in the intensive care unit.

I realized only later, and slowly, that the problem wasn’t any of these things I was afraid of, but rather that ever-increasing fear itself, coming out of nowhere and for no reason. What could happen anyway? A new seizure? More frequent seizures? Danger to my professional future? A seizure I would experience consciously, in whole or in part, even if I couldn’t remember it later? Which would maybe feel as if you’d gotten yourself onto a roller coaster three sizes too violent. Or a sky-dive, when you fall and the falling doesn’t stop. Of course, something might also rupture in the brain and then I’d be dead the next moment. Tragic, alright, but nothing you’d have to run around crying in fear of.

That fear inside an airplane, knocking on the door ever so mildly during stronger turbulence – I could contain that pretty well by reminding myself how petit bourgeois it is. That far too exaggerated clinging to your own life. As if the universe owed me my existence. To realize that, if turbulence ripped off one of our wings now, I couldn’t do anything about it anyway – and to ask whether I’d really want to spend those last two minutes of my life clinging to my seat like a coward.

Not really. So there.

Some Exciting Personal News

It’s funny how Jay Rosen introduces each of his recent Facebook posts with “I’ve got some exciting personal news”, and then goes on to explain that he’s only trying to trick the newsfeed algorithm.

If he were writing on a web page, it would be the equivalent of writing “sex, sex, sex” somewhere on the page in order to push it higher in Google’s search results. (Or perhaps, more realistically, of maintaining automated link farms to boost the page’s rank.)

With Google, we’ve gotten used to the fact that search results are produced by invisible ranking algorithms. This includes the fact that these algorithms can only work if their details are not known to the public. We realize that SEO is a shady practice, and there’s a pretty broad consensus that “the best way to get your page high up in search results is to have great content”.

In other words, we’re fine with Google doing it, because it works well.

Facebook doesn’t have that bonus. We’re irritated because things disappear from our newsfeed that we think should be there. The algorithms don’t work too well – yet.

But is it even necessary to filter the newsfeed? As a long-time Twitter user, I’m not sure. There’s a feeling of control, of knowing exactly what’s going on, when you are entirely responsible for your own timeline. What Facebook gets admirably right, though, is introducing me to stuff I didn’t have on my radar yet. It shows up in my feed because a friend commented on it or liked it. This is highly selective – I’m not seeing all of my friends’ activity – but what I see is usually right on the mark.

There have been recent reports of people manipulating their newsfeed by liking just about everything, or the other way round, liking nothing at all, and then being astonished at how their feed changed as a consequence. This reminds me of the steering wheel on an ocean ship. Sure, if you turn that thing, the ship turns. That’s what it’s designed for.

I would hope that the algorithms do get better. I would rather not have a platform that leaves me alone with my friends, but one that points me to things that widen my horizon. Given the sheer size of our information universe, it surely takes algorithms to do that.

(Originally posted on Facebook.)

Notes on Ingress

1. We are flipping bits on Google’s servers, but we are only allowed to do so if our phones report that we are currently at a given location. That’s all it is. But that’s all it takes to create a powerful illusion – or perhaps I shouldn’t call it an illusion, because that would be a term of the physical world. It’s the blending of virtual reality and physical reality, such that I can almost see the links and the fields stretching over the buildings in my neighborhood. I am running around frantically from portal to portal, trying to make it in time before the checkpoint. I’m hooked.

2. The fact that we can only make certain game moves while we are at a given physical location, and that we cannot tamper with the location-reporting mechanism in our phones, is a game rule just like any other, albeit a technically enforced one. Perhaps in all games where the players are not physically in the same location, and don’t know each other personally, the rules need to be enforced by technical means.

3. Of course there is spoofing. It wrecks the game. Not much to be said about it.

4. Pretty much the only thing that Google doesn’t allow us to use is computers. The only ways to interact with the game are the Scanner app (which is visually nice but not very practical) and a really pathetic display of the game state on ingress.com, literally just a toy map. Even the attempt to augment that map and turn it into a more precise and practical tool is considered illegal and is barely tolerated.

5. A great game would allow arbitrary API access to the game state, while still enforcing move legality (the position-reporting bit – I’m sure that can be done). It would allow players to use all the computing power and UI design ideas they can come up with to enhance their game. (A toy sketch of such a check follows these notes.)

6. There is no need to worry that this would only benefit one team, because that team happens to have much better programs than the other. There is no way half the world could keep something secret from the other half of the world.

7. The Enlightened are the Resistance. Probably due to popular movie culture, or because of a basic human instinct, far more players choose the Resistance as their faction than the Enlightened. Which means there is, practically everywhere, a crushing dominance of the Resistance. Which, in turn, means that if you want the experience of being a rebel and an underdog, of having to use your wits and creativity to stand up against a much more powerful opponent that seems to dominate the entire world: choose the Enlightened.

8. One of the more fascinating aspects of Ingress is that the goal of the game is not clear. Is it leveling up? Creating farms? Recruiting new players? Having fun? There has been a recent push by Niantic to establish MU – the size of the fields that a faction creates and maintains – as the goal, but there is little agreement among the players that this actually is it. Which doesn’t seem to curb people’s enthusiasm and even passion to play the game.
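
To make note 5 concrete, here is a toy sketch, in Java, of what such a server-side legality check could look like. All names and the distance threshold are hypothetical – this illustrates the principle, not any actual Niantic code: the server accepts a move only if the reported position is within action range of the portal, no matter which client or API submitted it.

// Toy sketch of a server-side legality check (hypothetical names and threshold).
private static final double MAX_RANGE_METERS = 40.0;  // Ingress-like action radius

// Accept a move only if the reported position is within range of the portal,
// regardless of which client or API submitted it.
public boolean isMoveLegal(double playerLat, double playerLon,
                           double portalLat, double portalLon) {
  return distanceMeters(playerLat, playerLon, portalLat, portalLon)
         <= MAX_RANGE_METERS;
}

// Haversine distance between two points on Earth, in meters.
private double distanceMeters(double lat1, double lon1,
                              double lat2, double lon2) {
  double r = 6371000.0;  // mean Earth radius
  double dLat = Math.toRadians(lat2 - lat1);
  double dLon = Math.toRadians(lon2 - lon1);
  double a = Math.sin(dLat/2) * Math.sin(dLat/2)
           + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
           * Math.sin(dLon/2) * Math.sin(dLon/2);
  return 2 * r * Math.asin(Math.sqrt(a));
}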

How I’m Using the Five-Star System for Books

In an attempt to make my rating of books more systematic, here’s how I’ve been using, and intend to further use, the five-star system.

***** A book that makes deep and lasting contributions to fundamental questions which are relevant to me. This book will henceforth be inseparable from my thinking, and I will return to it often. Alternatively, a book which, if I had never read another book before, would get me to start writing.

**** A strong book that has been significantly beneficial for me to read. I will return to it once in a while and continue to be influenced by it.

*** A so-so book. Nothing particularly outstanding, but no fundamental shortcomings either. I am likely not going to return to this book in my thinking and might forget about it before too long.

** A book that made no positive contributions to my thinking and has fundamental shortcomings, but I finished it nonetheless.

* A book that was so unbearable to read that I did not finish it.

These are personal criteria, and that’s intentional. If I stumbled upon an objectively outstanding book that happened to be irrelevant to me, I would give it a low rating. Usually that doesn’t happen, though, as my choice of which books to read likely filters out such books in the first place.

My Current Position

The most important feature of the now-deceased Google Latitude, for me, was the ability to publish your real-time location on your own web site, also known as the “public location badge”. I had set this up under a private URL which I gave to only a couple of friends. Actually, and totally unexpectedly for me, this turned into the most intimate link between me and my parents that had existed in decades. And I was not alone: this thread brought up experiences from a surprising number of people who had found similarly intimate and, shall I say, human uses for Latitude.

This is no more. It was given the axe by Google like many other services, and supposedly replaced by an integration with Google+ that is, as of this writing, not even close to the features that Latitude had. I searched high and low but there is no service in sight that could fill the gap.

So I decided to roll my own. I created a simple mash-up of my Foursquare feed and the Google Maps API, which displays my last few check-ins nicely laid out on a map with a timeline on the left. (I’m actually surprised that Foursquare doesn’t offer a feature like this.)

Map of Foursquare check-ins

The one downside of this compared to Latitude is that it’s not real-time, always-on, since it shows only my check-ins. I mitigated this by setting up automatic check-ins at the places where I most frequently show up: home and work. (The Android app FsIntents lets you do that easily; iOS solutions also exist.)

Compared to Latitude, my solution also has a few very nice upsides:

  • It shows semantic locations – in other words, it gives an idea of what I’m doing, rather than just where I am.
  • It shows history, rather than just a single position as the location badge did.
  • It is a real map, rather than just a static image. It is zoomable and scrollable, satellite-switchable, and even street view is in there.

It’s implemented as a simple PHP script, named pub4sq.php, which can be downloaded from GitHub. Instructions on how to customize it for your own feed are in the file. I’ll also take the opportunity to make the map of my own feed public: you can henceforth find my reasonably current position at drmirror.net/pos.

The Cost of Creating Objects in Java: A Quick Sanity Check

Experienced Java programmers are often careful not to create unnecessary objects. This is not about the overall object structure of an application, but about tight, inner loops, where careless programming can easily lead to huge numbers of objects being created and then thrown away in a very short time.

I got my fingers burned with this in the early days of Java, version 1.1 or 1.2, when I had to read a few million records from a text file and put them into an in-memory data structure. The loop went something like this:

while (true) {
  String line = input.readLine();                        // a new String per line
  if (line == null) break;
  StringTokenizer tok = new StringTokenizer(line, " ");  // a new tokenizer per line
  String s1 = tok.nextToken();                           // a new String per token
  String s2 = tok.nextToken();
  ...
}

You can see how each line is returned as a new String object, then a StringTokenizer is created for each line, and the individual tokens are returned as new, individual String objects again. For a few million input lines, that’s x-times-a-few-million temporary objects. On that ancient 1.1 or 1.2 JVM, the program would simply not finish, because those millions of objects flooded the heap and the garbage collector couldn’t keep up. So I rewrote my code to use a single byte array buffer and simply avoid the creation of objects wherever possible. Bam! It finished in a few seconds.
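
In outline, the rewritten loop looked something like this (a sketch of the technique, not the original code; InputStream and IOException are from java.io):

public void readRecords(InputStream in) throws IOException {
  byte[] buf = new byte[64 * 1024];  // one reusable buffer, allocated once
  int len;
  while ((len = in.read(buf)) != -1) {
    for (int i = 0; i < len; i++) {
      // examine buf[i]: track field separators and line breaks here,
      // accumulating values directly into primitive variables –
      // no per-line or per-token objects are created
    }
  }
}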

Things have changed a lot since then. The garbage collectors in a modern Java 6 or Java 7 VM are so good that they can easily deal with millions of temporary objects thrown at them. Still… you would think that creating an object – which means talking to heap management, getting some storage reserved, initializing it, and so on – is not exactly a trivial operation, certainly more expensive than a method call or an integer increment? When optimizing a tight, inner loop, perhaps it might still be a good idea to avoid creating too many temporary objects…?

I looked around on the net and came across this entry on Stack Exchange, where an innocent programmer asked pretty much the same question (“people told me I should avoid creating objects — really?”) and received an energetic, sarcastic lecture from another programmer, meant to settle the issue once and for all:

Your colleague has no idea what they are talking about. Your most expensive operation would be listening to them, they wasted your time mis-directing you to information at least a decade out of date as well as you having to spend time posting here and researching the Internet for the truth.

Hopefully they are just ignorantly regurgitating something they heard or read from a decade ago and don’t know any better. I would take anything else they say as suspect as well, this should be a well known fallacy by anyone that keeps up to date either way.

[...]

Object creation in Java due to its memory allocation strategies is faster than C++ in most cases and for all practical purposes compared to everything else in the JVM can be considered “free”.

And another programmer went on:

Actually, due to the memory management strategies that the Java language (or any other managed language) makes possible, object creation is little more than incrementing a pointer in a block of memory called the young generation. It’s much faster than C, where a search for free memory has to be done.

Now that sounds intriguing (and a bit intimidating – I almost did not dare think about the question anymore after that lecture). Object creation is just a pointer increment nowadays?

I decided to do a quick sanity check — not an exhaustive, scientific study — based on what I happened to be working on at the moment. This comes from h3270, a web-to-host adapter I have been involved with for a number of years. The task at hand was to convert a hexadecimal representation of a UTF-8 character into a Java char (UTF-16). For example, the string “61” would be converted to the character ‘a’, and “e282ac” to the euro sign ‘€’. This conversion would have to be done for each character on a terminal screen, altogether several thousand times per screen — so here’s our tight loop.

A naive approach is to create a byte array from the hex string and then to create a string from the byte array using the UTF-8 charset, from which we take the first (and only) character and return it (the function value() used below simply converts a hex digit into a number from 0 to 15):

public char decodeChar1 (String source) throws UnsupportedEncodingException {
  byte[] b = new byte[source.length() / 2];
  for (int i=0; i<b.length; i++) {
    int val = value(source.charAt(i*2)) * 16 + value(source.charAt(i*2+1));
    b[i] = (byte)val;
  }
  return new String(b, "UTF-8").charAt(0);
}
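
(The helper value() is not part of the listing above; a straightforward version would be:)

private int value(char c) {
  if (c >= '0' && c <= '9') return c - '0';
  if (c >= 'a' && c <= 'f') return c - 'a' + 10;
  if (c >= 'A' && c <= 'F') return c - 'A' + 10;
  throw new IllegalArgumentException("not a hex digit: " + c);
}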

decodeChar1 creates two objects per call: the byte array b and the result string from which we return only the first character. If object creation were essentially free nowadays, that would be fine, but it happens that Java offers a different API precisely to avoid creating these objects. Here’s another solution, using a CharsetDecoder and re-usable buffers:

private CharsetDecoder charsetDecoder = Charset.forName("UTF-8").newDecoder();
private ByteBuffer codeBuffer = ByteBuffer.allocate(6); // max utf8 encoding length for a single character
private CharBuffer charBuffer = CharBuffer.allocate(1);

public char decodeChar2 (String source) {
  codeBuffer.clear();
  for (int i=0; i<source.length(); i+=2) {
    int val = value(source.charAt(i)) * 16 + value(source.charAt(i+1));
    codeBuffer.put((byte)val);
  }
  codeBuffer.rewind();
  charBuffer.clear();
  charsetDecoder.reset();
  charsetDecoder.decode(codeBuffer, charBuffer, true);
  charsetDecoder.flush (charBuffer);
  return charBuffer.get(0);
}

This code looks more complicated (and it is), but it avoids all object creation inside the method. (And if you walk through the library code, you realize that both versions actually use the same API underneath; the only meaningful difference is that the second version does not create any objects while decoding.)

Here are the running times for 10 million iterations on a 2 GHz Linux laptop under Java 1.4 and Java 6:

              Java 1.4   Java 6
decodeChar1   9.5 s      2.8 s
decodeChar2   3.3 s      1.4 s

That is a factor of 2-3 between the version that creates objects and the version that doesn’t. We also see that the JVM has indeed become much better between Java 1.4 and 6, but it remains a fact that object creation is a non-trivial operation, certainly not comparable to a pointer increment or even a method call. A back-of-the-envelope calculation puts it somewhere in the 100 nanosecond range for this example: on Java 6, the two extra objects per call account for (2.8 s − 1.4 s) / 10 million calls = 140 ns, or about 70 ns per object; the same calculation for Java 1.4 gives roughly 300 ns per object. That is one or two orders of magnitude more than a pointer increment on a 2 GHz processor. (These numbers are consistent with what other people report for an object creation in Java, at least to the order of magnitude.)
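
The measurement itself needs nothing fancy; a loop along these lines (a simplified sketch, not the exact benchmark code – JVM warm-up is ignored here) is enough:

// Time 10 million conversions of the euro sign.
// decodeChar1 throws UnsupportedEncodingException, so the enclosing
// method must declare or handle it.
long start = System.currentTimeMillis();
for (int i = 0; i < 10000000; i++) {
  decodeChar1("e282ac");   // or decodeChar2("e282ac") for the second row
}
long elapsed = System.currentTimeMillis() - start;
System.out.println((elapsed / 1000.0) + " s");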

In conclusion:

  1. Yes, object creation has a non-trivial, measurable cost in Java, and avoiding object creation is therefore a reasonable optimization technique for tight, inner loops.

  2. This has hopefully been clear all along: For the large-scale structure of an object-oriented program, this is completely irrelevant. At the macroscopic level, structure is much more important than a few nanoseconds, and therefore objects should be used to the fullest degree everywhere, except in those tiny spots where nanosecond-level optimization really is relevant.

Nail me to the Past: Publishing and Updating your Tweet Archive

I have recently published my tweet archive here on this site (drmirror.net/tweets) and it has turned out to be one of the most useful online tools I’ve discovered in quite a while. Because I usually tweet about everything I find interesting, the archive is an excellent means (and sometimes the only means) to re-discover something I’ve come across in the past.

I have also promised to update my tweet archive regularly (you can download a fresh one from Twitter once a week). When I tried this today for the first time, I realized that browser caching of JavaScript can get in the way. Since the archive is implemented entirely in JavaScript, browsers will not pick up the new version by themselves, not even if you restart the browser or hold Shift while clicking the reload button. (Users would have to actually clear their entire browser cache to pick up the new archive version.)

The most elegant solution to this problem seems to be Google’s mod_pagespeed Apache module. It sits as an output filter in your web server and performs all sorts of optimizations on the content it serves, including auto-versioning of JavaScript modules. This way, you can simply upload and unpack a new version of your tweet archive on the server, and mod_pagespeed will magically make sure that updated JavaScript modules are renamed, forcing clients to reload them. It actually seems to work completely by itself, without any configuration of mod_pagespeed required. (At one point, when I tried going back and forth between versions of my tweet archive, the served content seemed to get out of sync, but I could fix that by clearing the mod_pagespeed cache under /var/cache/mod_pagespeed.)
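
For reference, enabling the module takes little more than this in the Apache configuration (the module path shown here is an assumption – it varies by distribution):

# Load and switch on mod_pagespeed.
LoadModule pagespeed_module /usr/lib/apache2/modules/mod_pagespeed.so
ModPagespeed on
# The default ("core") filter set already includes cache extension,
# which renames updated resources so that stale browser caches are bypassed.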

Two Smartphones and Public Service

This should not go unmentioned.

On New Year’s Eve, as I was traveling through Sweden to be with my loved ones, I missed my train connection in Eskilstuna. I was supposed to go to Arboga, to catch the long-distance train to Gothenburg, but my limited Swedish caused me to miss one crucial announcement: that only the first car of the train onto which I had just hopped would go to Arboga.

It was 6 pm and here I was, suddenly stranded in the middle of the Swedish countryside. A conductor who was more helpful than knowledgeable suggested I immediately take the next train going in even remotely the right direction, so I took one that left at 18.09 towards Katrineholm.

Nye

And here I fell into the hands of the conductor who is the reason I’m writing this. Probably in his late twenties, he was equipped with not one but two smartphones, a big Samsung and an iPhone. With these, and a plethora of apps between which he kept jockeying at frantic speed, he used the next 45 minutes not only to fulfill his regular duties as a conductor, but also to figure out the following:

  • There was no way I could catch up with my train to Gothenburg, via this or any other train. We would be in Katrineholm at 18.57, and there was no connection that could reach Hallsberg by 19.40, which was when my train towards Gothenburg would be passing through there. And there was no other connection to Gothenburg that evening either.
  • A quick check on Google Maps revealed that not even driving seemed possible: one hour ten minutes from Katrineholm to Hallsberg – no way to bring that down to the required 40 minutes. Or was there?
  • He double-checked by actually calling the taxi stand at Katrineholm station. No way. You could do it in one hour, they said, but not 40 minutes.
  • It began to dawn on me that the only way I could be with my loved ones by midnight would be to rent a car. My conductor came to the same conclusion. Within minutes, he had located and contacted not one, but two places that would rent me a car at 7pm on New Year’s Eve, allowing me to compare prices and figure out how long I’d need it.

By the time we reached Katrineholm, my New Year’s Panic had turned into admiration and gratitude. Falling into my New York City habits, I offered him a generous tip, which he so firmly refused that it left absolutely no doubt: This was a public service.

On Public Archives

Whatever is said in public ought to be archived in public.

I don’t mean the option for each of us to get our tweets, posts, and photos out of the silos to which we handed them. I mean publicly searchable archives, the prerequisite of cultural memory.

Of course that’s not happening anytime soon. (The fact that Twitter is archived at the Library of Congress is only a painful reminder of what’s actually missing. Or has that deal also been cancelled? I wouldn’t be surprised.)

It is something that should be mandated by government. Not happening anytime soon either, of course.

The question is, what would need to happen to our society for this to be established? Will we have a lost century, or only a few lost decades?