Public Parts: A Personal Review

Jeff Jarvis, Public Parts: How Sharing in the Digital Age Improves the Way We Work and Live. Simon & Schuster, 2011.

I had been looking forward to this book for months. When it came out, I was glad I didn’t have to camp in front of a bookstore to get my copy — it was delivered wirelessly to my e-reader a few seconds after publication. And yet, having read it, I cannot deny a mild sense of disappointment.

I feel a bit like the choir being preached to. I’m on Twitter, on Facebook, on Google+, and a host of other online services. I publish my precise physical location online, and I’ve got my own blog. I haven’t yet written about my penis online, but I would feel little restraint about doing so if the occasion arose. Oh, and perhaps I should mention that I’m German and quite comfortable sitting naked in the sauna.

Whenever a new service comes out that invites me to make further parts of my life public, my initial feeling is not so much anxiety as curiosity. I’m eager to try things out.

So maybe I’m not quite the intended audience of the book, which might be those who are still tip-toeing into the new kind of public sphere that is developing, or those who are critical of it.

Alas, I don’t think this works. In my experience, irrespective of age, social background, or even culture, people fall squarely into two camps: those who are curious about publicness, and try things out, and those who are not. If someone belongs to the second camp, I have found that no reasoning whatsoever, no carefully compiled list of advantages, and no enthusiasm could persuade them to venture into publicness beyond a half-hearted first attempt that quickly fades.

»It is futile to try and explain a thought to someone for whom a hint is not enough«, said Nicolás Gómez Dávila. And one of my German Twitter acquaintances quipped: »The digital divide is not between us and those who don’t get it, it is between us and those who couldn’t care less.«

I would, therefore, consider it still very much up in the air whether the desire to share, which Jarvis so enthusiastically celebrates in his book, is really a fundamental human instinct that is only inhibited because we did not grow up in an environment that enabled it, or whether it is just a trait of a rather limited group of people.

For those like me who are in the publicness camp, the book is what Germans might call a »Konsensschmöker« — a consensus page-turner. It is exciting to read, not least because it kindles one’s sense of being part of all sorts of fascinating developments, the outcome of which we probably cannot even begin to imagine. »We ain’t seen nothing yet«, as Jarvis exclaims. And he quotes Leah Marcus with what became one of my favorite passages: »Renaissances happen infrequently enough that they should be enjoyed in the process. I, for one, await the Cyberspace Renaissance with great interest, and hope to live to see its zenith.«

This nicely sums up the spirit of the book. Unfortunately, beyond this enthusiasm, it offered me little additional food for thought.

A View of the WTC

For what it’s worth, here’s my view of the new World Trade Center Tower from my usual walk to the subway. It’s the tiny silhouette in the center with the two cranes on top forming a »V«.

[Photo Img_20110730_135901: the new World Trade Center tower, a tiny silhouette on the skyline, seen from my walk to the subway]

I always get a bit of a lump in my throat when I see it. From all the drawings and computer renderings of it that I’ve seen, I cannot help but feel it will be an ugly, ugly building.

When I was a little boy growing up in Germany in the nineteen seventies, the Twin Towers were fascinating to me. I saw them in books, I saw them on TV, and the parents of a friend had actually travelled to New York and brought home a Super 8 movie of Manhattan and these Towers. I wondered if I would ever be so rich that I could afford to go there.

It certainly was a boyish fascination I had with these highest buildings in the world. But when I found myself on a field in Scotland a little while ago, in a circle of upright stones that had been standing there longer than the pyramids of Egypt, it occurred to me that there really is something to such vertical structures. Something I would call, for lack of a better word: human — partly ingenious, partly foolish, maybe just plain phallic, but certainly human in all of that.

By the time I first came to New York, the Towers no longer existed.

And it turns out they’re not needed anymore. Businesses are no longer as dependent on their geographic location as they were in the seventies; there is no need to concentrate everything on that small, narrow island of Manhattan — or anywhere else, for that matter. At the price that office space in the new World Trade Center will have to command if the building is to be profitable, it looks like few companies will bother setting up shop there.

The right answer to this, I think, would have been splendour. Something the initial design by Daniel Libeskind had: a glass tower with a garden inside, reaching up to those symbolic 1,776 feet, its shape echoing the Statue of Liberty across the harbor — now that would have been something. The design they agreed on instead is about as ugly as US politics, and as inspiring as the abandonment of human space flight.

I could be wrong. When the Twin Towers were initially built, everybody seems to have hated them as well. The new building, when it is actually, physically there, might acquire a radiance of its own. And the United States, for that matter, has been able to reinvent itself several times over the course of its history. It remains to be seen whether it can do so again.

The Two Sides of Circles

I’m puzzled by the circles in plus. (Funny how, about ten days ago, nobody would have known what I was talking about.)

Circles are a good tool for reading. They allow you to divide your stream into a number of channels so you can more easily focus your attention.

But circles are bad for sharing. For one thing, if you want to keep something private to a certain group of people, circles provide a deceptive, if not dangerous, illusion that you can. We all know it can’t be guaranteed, but less technical people might still be lulled into the old fallacy that anything can remain private on the internet.

And there are other concerns. For example, someone suggested that bilingual plussers such as me should put contacts into circles according to their language. One circle for English, one for German, and the like. When you share something, you share it to the language circle in which the posting is written — this way, people are not bothered with content they do not understand. The problem is: Who am I to make that call? How should I know what languages people speak, and what languages they want to be bothered with? Most Germans are fine with an occasional post in English. And I have even met Americans who will not flee in panic when facing a post in a language they do not understand.

It would be much better to let the reader decide. The language in which a post is written can be detected automatically rather easily. Users could then be given an option to filter what languages they want to see. Or even better, posts could be auto-translated for users who want that. They should be clearly marked as auto-translated, of course, and maybe it should only be done after an explicit push of a button. You’ve got the technology, Google — please plug it in!
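
To sketch what I mean by letting the reader decide (this is my own illustration, not anything Google has built): detect each post’s language and filter on the reading side. The snippet below uses the third-party langdetect package; the sample posts and the wanted_languages set are made up for the example.

```python
# A minimal sketch of reader-side language filtering -- my own illustration,
# not an existing Google+ feature. Requires the third-party package
# `langdetect` (pip install langdetect).
from langdetect import detect

def filter_by_language(posts, wanted_languages):
    """Yield only the posts written in one of the reader's chosen languages."""
    for post in posts:
        try:
            language = detect(post)   # returns an ISO 639-1 code, e.g. 'en' or 'de'
        except Exception:
            continue                  # too short or ambiguous to classify: skip it
        if language in wanted_languages:
            yield post

posts = [
    "Die Sprache eines Beitrags lässt sich recht einfach automatisch erkennen.",
    "The reader, not the author, should decide which languages show up in the stream.",
]
for post in filter_by_language(posts, wanted_languages={"en"}):
    print(post)
```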

But the problem, exemplified here with languages, extends everywhere. I see people on plus who explain that they keep technical, geeky stuff to a circle of geeks. Photography stuff to a circle of photographers. Movie stuff to a circle of movie lovers. All of these posts and conversations are thereby removed from the public. This precludes any chance of people listening in on these conversations, perhaps joining, extending their reach. It takes away from one of the most important aspects of the internet — a redefinition of what the public sphere is.

I would encourage people to share publicly. I would encourage Google to give us excellent search and filter tools on the reading side.

I’m sure you’ve got what it takes, Google.

Optimized for Coziness: Initial thoughts on Google+

Everybody’s excited about Google+, and I, too, could not wait until the somewhat arcane invitation process finally let me in, after about 24 hours of trying, waiting, and trying.

I’m always fascinated by how seemingly minor adjustments of parameters bring about entirely new forms of communication. »Why don’t you just make a phone call?«, we asked when the text message was invented. But of course, a text was something very different from a phone call – more indirect, and strangely more constrained, yet also more concentrated than verbal communication. »Why don’t you just send an e-mail?« But of course, a text is quite different from an e-mail, because it reaches the recipient instantaneously, usually alerting them with an audible signal to its arrival.

When blogs, Twitter, and Facebook came along, the power of one-to-many was given to individuals. Everybody could suddenly speak in such a way that the entire world might hear it. Of these mechanisms, Twitter seems the most fascinating to me, with its asymmetric attention structure (you don’t need to follow me if I follow you), and, most of all, its 140 character limit. This is so outright brilliant it could only have been invented by accident.

In this world, where individuals can suddenly talk to the whole planet, the length limit allows for a manageable economy of attention. You can’t talk my ear off unless I specifically allow you to by clicking the link you put in your tweet. But even more important is that the 140 character limit forces people to think hard about what exactly they are going to say. The result is that Twitter is an oasis of wit, of fun, of brilliance at times, and altogether an intelligent medium.

Facebook handles these two critical areas exactly the other way round, and thus gets them wrong. Listening to somebody is tied to the (in this case awkward) concept of »friendship«, which means it is symmetrical: you have to friend me if I want to friend you. This causes people to hang around with, and listen to, those they already know, rather than extend their reach. Facebook is the people you went to school with – Twitter is the people you wish you went to school with, as somebody put it brilliantly on, of course, Twitter. And besides, there is no length limit on Facebook posts (well, there is one, but a much larger one than on Twitter). There is thus a lot less effort involved in creating a Facebook post. People need to think less, and it shows. Facebook, on average, feels dull and boring.

In light of this, the interesting question to me is what communication structure Google+ establishes. What mode of communication does it suggest, and what does it encourage?

Well, firstly, it gets the asymmetry of attention right. I can put you in a circle, but you don’t need to do anything in return. You will notice that you have captured my attention, which is an important part of extending one’s communication reach, but other than that, you can ignore me for now, or forever. G+ does attempt to make this more fine-grained, however, allowing me to put people into different circles: for friends, for family, or for strangers from out there on the net. I can then choose which of these circles I want to share with, unless I choose to make my posts public for everybody.

I have little use for these circles, I think. For me, the whole point of communicating in the network is to publish, to speak to, potentially, everybody, and to let listeners decide whether they find what I have to say interesting. Circles and selective sharing might make G+ more family-friendly and group-cozy, but they add nothing to the fascinating new patterns of communication that are emerging and redefining the very concept of what the public sphere is.

The other important characteristic of G+ is that there is no length limit. It therefore does not enforce the radical discipline of Twitter, and contributions are bound to be more verbose, less concentrated, less to the point, and consequently less witty and less brilliant.

Comparing my Twitter timeline to my G+ stream, Twitter is an amazingly concentrated source of highly relevant information, very efficiently organized, augmented with precise pointers to places outside of it. G+, by comparison, is lots of white space and outright clumsiness: »xyz originally shared this post«. It does not help that re-sharing an article doubles it in the streams of all those who already saw the first version, something that Twitter’s built-in retweet feature got right a long time ago. Sharing a photo drops it right into the stream itself, pushing everything else away, and sharing a simple link produces a hard-to-understand mess of no less than three different paragraphs. A search function for streams is something we can only hope for (rather optimistically perhaps, given that this thing was created by Google). Chronological ordering of the stream? Let’s hope for that too.

Some of these quirks might be ironed out soon, but that does not change my general impression that G+, for all the admirable slickness of its UI, does not come across as an efficient engine for exchanging information on a global scale.

Ultimately, the value of a medium depends on the quality of the information that it produces, or to put it more colorfully: the information that chooses to live in it. Surprises will happen. It is doubtful that Gutenberg, when he invented the printing press, envisioned the Critique of Pure Reason being enabled by it. When Twitter’s creators first played with their tiny little text messages, they surely did not expect it would become such an excellent tool for the emerging communication structures of the 21st century. When something as stupid as Facebook was created, who would have thought that almost a billion people would be just fine with it?

I will, therefore, remain curious about G+, excited even, and will surely be hanging around there. For the time being, however, I really wish there were well-built bridges to and from Twitter.

Quo vadis, New York Times?

I am more than willing to pay for online news, but after many recent disappointments with the New York Times, I really doubt whether this is where my money should be going.

I was stunned by their behavior in the Wikileaks case, including but not limited to their reporting on Julian Assange, and more recently by their coverage of the Fukushima nuclear crisis.

Do I want a newspaper that only tells me what I already know, what I already think, or what I want to hear? No. I want to be challenged by the newspaper I am reading, forced to think outside the box, confronted with views that may not be my own.

But more than anything, I want the newspaper that I read and pay for to stand for the essential values of journalism — independence from government, and critical rationality. I’m beginning to turn to the New York Times only when I want to see counterexamples to these.

Bulkheads of Abstraction

Introducing a new abstraction is like closing the bulkheads on a leaking submarine.

I tweeted this a few days ago (in German), and I think it was one of my better tweets. I wasn’t entirely sure what I meant by it when I wrote it, but after some thinking it appears that the metaphor is a really good one. Here are some of the reasons why:

  1. You’d better be quick. If you wait too long to close the bulkhead, the water is everywhere and you might as well not bother.
  2. There’s gonna be some spillage on your side of the bulkhead as well. That’s just inevitable. Gotta mop that up afterwards.
  3. There may be some people on the other side of the bulkhead who don’t much like what you’re doing.
  4. You’d better hope there isn’t another leak somewhere else.
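
To make the metaphor a little more concrete in code (a toy example of my own, not anything from the original tweet): the »leak« is a quirky legacy module whose direct calls have spread through the codebase, and the »bulkhead« is one small interface that all new code goes through, so the quirks stop spreading.

```python
# A toy illustration of the bulkhead metaphor; all names here are invented
# for the example. The "leak" is a quirky legacy storage module, the
# "bulkhead" is one small interface that new code must go through.
from abc import ABC, abstractmethod

# --- the leaking side: quirky old code whose calls have spread everywhere ---
_legacy_store = {}

def legacy_put(key, data, flags=0):
    _legacy_store[key.upper()] = data      # quirk: silently upper-cases keys

def legacy_get(key):
    return _legacy_store[key.upper()]

# --- the bulkhead: the only storage API new code is allowed to see ----------
class Storage(ABC):
    @abstractmethod
    def save(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def load(self, key: str) -> bytes: ...

class LegacyStorage(Storage):
    """Keeps the old module's quirks on its own side of the bulkhead."""

    def save(self, key: str, data: bytes) -> None:
        legacy_put(key, data)

    def load(self, key: str) -> bytes:
        return legacy_get(key)

store: Storage = LegacyStorage()
store.save("report", b"quarterly figures")
print(store.load("report"))

# The "spillage": callers that still use legacy_put/legacy_get directly are
# on the wrong side of the bulkhead and have to be mopped up one by one.
```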

Sustainable Energy, very British

I’ve just read the 10-page summary of David MacKay’s book on Sustainable Energy. It appears to be an excellent book that I recommend to anyone interested in the energy future of our civilization. It is full of calm, rational thinking and solid numerical evidence, and written in such a compelling and entertaining way that it is hard to put down.

There is one aspect that strikes me as odd, however, and it happens to be the central assumption of the book: it sets out to show if and how the United Kingdom could switch entirely to its own renewable sources of energy. It turns out that this is pretty much impossible — a significant portion would have to be imported from outside, preferably in the form of solar power harvested in deserts. Given that direct sunlight is by far the largest part of the Earth’s energy income, and that the United Kingdom is not particularly blessed with a large area or a favourable position near the equator, I think this result is pretty much self-evident. The whole idea of trying to make do on your own and import as little as possible from the outside is an island mentality that strikes me as very British. But of course we know and love our British friends for that.

I would like to repeat what I spelled out in greater detail in another blog post just recently: mankind has much, much more energy available than it could possibly use. MacKay’s book makes the switch to other forms of energy appear more difficult than it actually is. If we think globally, and learn to consider the Earth as a single spaceship inhabited by humanity as a whole, things look very different. From this perspective, the path of action that our civilization needs to take appears to me far more logical, even self-evident.

At least this is what I think. And now I will buy and read the whole book.

Energy: The Real Thing and the Substitutes

(A German version of this post is available.)

My understanding of energy, the world, and our civilization has been influenced, more than anything else, by the books of R. Buckminster Fuller. They have been real eye-openers for me. Today, Fuller’s perspective and line of reasoning are becoming more and more mainstream, even common sense. But still, in discussions, I frequently encounter people who are just as stunned and baffled by Fuller’s way of looking at things as I was when I first came across it.

So I decided to write a small summary of it, backed up with some links that substantiate the facts Fuller made me aware of.

There is no energy crisis, only a crisis of ignorance. — R. Buckminster Fuller

Almost all of our energy comes from a single source: the sun. While scientists on Earth are still trying in vain to light the fire of nuclear fusion, we have a working fusion reactor right outside our windows. It is located at a comfortable safety distance of 150 million kilometers from Earth, and we are shielded from the dangerous parts of its radiation by the Earth’s magnetic field and atmosphere (the same magnetic field that gives rise to the Van Allen Belts). This reactor is so huge that it emits billions and billions of times more energy than our civilization could possibly ever use. Even the tiny fraction of that energy which hits our small blue marble called Earth is several thousand times more than our current world energy consumption.
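
The arithmetic behind that »several thousand times« is easy to check. Here is a back-of-the-envelope version; the round figures for the solar constant, the Earth’s radius, and world energy consumption are my own inputs, not numbers from Fuller.

```python
import math

# Back-of-the-envelope check of the "several thousand times" claim.
# All inputs are round figures of my own, not numbers from Fuller.
solar_constant = 1361        # W/m^2, sunlight arriving above the atmosphere
earth_radius = 6.371e6       # m
world_consumption = 17e12    # W, roughly the current rate of primary energy use

# The Earth intercepts sunlight over its cross-sectional disc.
intercepted = solar_constant * math.pi * earth_radius**2

print(f"Sunlight intercepted by Earth: {intercepted:.2e} W")                       # ~1.7e17 W
print(f"Ratio to world consumption: {intercepted / world_consumption:,.0f} to 1")  # ~10,000 to 1
```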

Practically all sources of energy that we know of are more or less indirect forms of that solar energy. Wind is air that is differentially heated by the sun; if we put propeller blades into that air stream, we are using the atmosphere as a kind of giant turbine, driven by the sun. Hydroelectric power — currents of water flowing downhill — is the energy of water that was heated and vaporised by the sun, lifted up into the atmosphere, and then fell as rain or snow at elevations higher than where it originally evaporated.

Fossil fuels (coal, gas, and oil) are the concentrated remains of photosynthesis. Fire is the sun unwinding from a tree’s log, as Buckminster Fuller put it. It’s a stored form of solar energy. In fact, it’s a highly concentrated form of solar energy that took millions of years to produce. That’s why it is so remarkably energy-dense, and why it is so easy to unlock the energy from that form of storage. The reserves of fossil fuels are finite, however, and their amount pales compared to what the sun delivers to our doorstep every single day. It’s one of my favourite numbers: the amount of energy stored in all the remaining fossil fuels in the Earth’s crust equals about twenty days of sunshine.

Fossil fuels, therefore, can be considered a kind of kick starter for a civilization: very easy to activate and use, but only in very limited supply. There seems to be just about enough of them so that a civilization can develop means to tap into the real source of energy: the sun itself.

Photovoltaic cells built with today’s technology, covering an area about the size of Germany or Pennsylvania, would meet the current world energy consumption if located near the equator. Move them away from the equator a bit, make the area a bit larger and spread it out around the globe, and our energy needs are provided for. It is true that some problems remain to be solved: better short-term storage for electricity needs to be developed, for example, so that solar power can be made more readily available on the night side of Earth, and some of our technology needs to be converted to run on electricity rather than combustion engines. All of that is conceivable, and doable with very little extension of our current technological means.
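
A rough sizing check of that claim, using Germany’s land area and deliberately simple assumptions of my own (panel efficiency, an equatorial capacity factor, and a round figure for world consumption):

```python
# Rough check of the "area about the size of Germany" claim. The efficiency,
# capacity factor, and consumption figures are simple round assumptions of
# my own, not numbers from the book or from Fuller.
area_germany = 357_000 * 1e6    # m^2  (~357,000 km^2)
peak_irradiance = 1000          # W/m^2 at the surface with the sun overhead
panel_efficiency = 0.20         # roughly today's photovoltaic cells
capacity_factor = 0.25          # night, weather, sun angle; optimistic near the equator
world_consumption = 17e12       # W, roughly the current rate of primary energy use

average_output = area_germany * peak_irradiance * panel_efficiency * capacity_factor
print(f"Average output: {average_output / 1e12:.0f} TW")        # ~18 TW
print(f"Meets world consumption: {average_output >= world_consumption}")
```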

What we call nuclear power, by contrast, is a rather awkward way to unlock energy from matter. The fuel for nuclear fission reactors (uranium) is also a form of stored stellar energy: uranium is a heavy element, bred over billions of years in the explosive deaths of earlier generations of stars. We can unlock the energy from that kind of storage, too, but it results in highly toxic waste that we currently have no way of dealing with, except burying it as deep as possible in the Earth and forgetting about it. That doesn’t sound very convincing to me.

It is true that technology might some day allow us to solve the problem of nuclear waste. My impression, however, is that the technological gap we need to bridge to make solar energy viable is much smaller than the one we would have to bridge to solve the problem of nuclear waste. Given that we are practically drowning in a form of energy that shines directly at us, I think it is self-evident which technology is the more reasonable one to invest in.

When I grew up in the nineteen seventies and eighties, I was under the impression that there was »real energy«, the kind of energy that kept Daddy’s car running and the house warm in the winter, but that this real energy might some day run out, and then we would be screwed, having to make do with some weak substitutes like solar panels and windmills, which would be nowhere near as efficient as the real energy. These substitutes even come with a strange name, renewable energy, which really does make them sound like a poor substitute for the real thing.

I have since learned that it is just the opposite. Fossil fuels, which have kept the industrialized world running from its beginning, should be called what they are: preliminary energy, a kind of kick starter to advance to the next level. The real thing is the sun, our big fusion reactor in the sky, which will be with us for billions of years to come. The energy that we derive from this source, as directly as possible, is what deserves the name real energy, direct energy, or maybe simply: energy.

As I said above, I owe this line of reasoning mostly to the books of R. Buckminster Fuller. As a starting point, I recommend his masterwork, Critical Path, particularly the Introduction. This is where his ideas are best developed, the sum of a lifetime’s thinking. I also recommend his earlier book, the Operating Manual for Spaceship Earth, which is available in full text online, although it does not quite reach the level of excellence found in Critical Path.

For a very thorough review of Critical Path, including long passages from the book itself, go here.

Powerpoint Files Changing in Transit

I think this is a story worth telling.

At the site of my current customer, we use Mediawiki for internal knowledge management — the same software that runs Wikipedia. I have set up the installation and added quite a few special features to it. It’s a very nice software package to work with.

A few days ago, the head of our group tried to upload a Powerpoint file into the Wiki, but it wouldn’t work. The Mediawiki software complained that although the extension of the file was .ppt, it didn’t look like a Powerpoint file (wrong MIME type: application/zip instead of application/vnd.ms-powerpoint). I asked him to send me the file via e-mail. I tried to upload it, and for me it worked just fine. We switched off the MIME check in Mediawiki, and then he could upload the file as well. But it was a different file that arrived at the server — about 4 kB shorter than the one I had uploaded.

We were baffled. We suspected a bug in his web browser, since he was using an older version of Firefox than I was. Maybe an incompatibility between the browser and the Apache web server, a problem during compression negotiation, or a problem with PHP, which ultimately handles the upload and writes the file to disk on the server. Searching the Internet for any problems in this area yielded no results, however. It looked like a complete mystery.

To make a long story short, after two days of research we found out what had happened. The Powerpoint file that my customer tried to upload was indeed not recognized by Mediawiki’s MIME detection. It was not a problem with the file upload, which worked flawlessly. But when he sent me the file via e-mail (from Outlook), it was a different file that arrived in my inbox. The file had silently been re-encoded by Outlook: it was now 4 kB larger, and a byte-by-byte comparison revealed that several blocks inside the file had changed (not just been added or dropped). The resulting file was considered a valid Powerpoint file by Mediawiki. The MIME type was correct, and I could upload it without problems.
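
For the curious, this kind of content sniffing is easy to reproduce. The sketch below is not Mediawiki’s actual code (that is PHP), just an illustration of the check involved: legacy binary Office files are OLE2 compound documents with a fixed eight-byte signature, whereas the Office 2007 formats (.pptx and friends) are ZIP archives, which is why a file whose content and .ppt extension disagree shows up as application/zip.

```python
# Not Mediawiki's actual check, just a sketch of the kind of content
# sniffing involved. Legacy .ppt files are OLE2 compound documents;
# .pptx files are ZIP archives, which is where application/zip comes from.
OLE2_MAGIC = bytes.fromhex("d0cf11e0a1b11ae1")   # binary Office formats (.ppt, .doc, .xls)
ZIP_MAGIC = b"PK\x03\x04"                        # Office 2007+ containers (.pptx, .docx)

def sniff_office_type(path):
    with open(path, "rb") as f:
        head = f.read(8)
    if head.startswith(OLE2_MAGIC):
        return "application/vnd.ms-powerpoint"   # plausible for a genuine old-style .ppt
    if head.startswith(ZIP_MAGIC):
        return "application/zip"                 # what Mediawiki reported for the rejected file
    return "application/octet-stream"

# e.g. sniff_office_type("slides.ppt") returning "application/zip" would explain
# a rejected upload despite the .ppt file extension.
```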

What I want to know is this: How on earth does an e-mail program dare to change a file that I’m sending as an attachment?

Don’t get me wrong on this: all software has bugs, and sometimes embarrassing ones. Software that I have written has them too. But this behaviour of Outlook/Powerpoint reveals, to me, a system philosophy that I find totally unacceptable. An e-mail program is simply not supposed to do that, and it breaks a fundamental assumption that the user has about the software he’s working with.

I am once more glad that whenever I buy a new computer, my first action is to wipe every single Microsoft bit from the hard disk. I’m a happy Ubuntu user. No, it’s not perfect, and sometimes not as polished as Microsoft’s or Apple’s software (although getting ever closer to it). But it is the kind of software that an IT professional can work with, without his sense of logic being insulted.

For reference, these are the software versions involved:

  • Mediawiki 1.15.0
  • Microsoft Office Outlook 2003 (11.8313.8221) SP3
  • The Powerpoint files were originally written in Office 2007 (.pptx), and then saved in Office 2000 format (not quite sure if that’s a meaningful version identification).