The Curse of the Splinternet

This week’s New Scientist featured a particularly fear-mongering article about internet security. Entitled “Age of the Splinternet”, it at first appears to be a paean to the importance of net neutrality. But the subtext quickly becomes clear: the internet is a great place to be, but anything can and will go wrong, even in fantastical, sci-fi doomsday scenarios …

The author begins with an illuminating history lesson on the structure of the internet. The underlying system of routers, dating back to the 1960s, was initially designed by the military as a fault-tolerant network, able to withstand a nuclear blast. The lack of central command and the presence of autonomous nodes resulted in a decentralised, self-sustaining mesh, able to route traffic around a fault without human input.
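
To see that self-healing in miniature, here is a toy sketch in Python: a handful of nodes, a breadth-first search for a route, and a recomputation when a node drops out. Real routers use protocols such as OSPF and BGP rather than anything this naive, and the node names are made up, but the principle - no central command, just route around the hole - is the same.

```python
from collections import deque

def shortest_path(graph, src, dst):
    # Breadth-first search over adjacency lists; returns a node list or None.
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# A toy mesh: every node has more than one way to reach every other.
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

print(shortest_path(graph, "A", "D"))  # ['A', 'B', 'D']

# Simulate node B failing: remove it and every link pointing at it.
graph.pop("B")
for links in graph.values():
    if "B" in links:
        links.remove("B")

print(shortest_path(graph, "A", "D"))  # reroutes via C: ['A', 'C', 'D']
```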

The article goes on to praise the open, anonymous nature of the internet, making it difficult (though not impossible) for repressive regimes to censor the information their citizens can access. Then the author shifts gear, and a dark side to this openness is revealed. We are warned that companies such as Apple, Google and Amazon are starting to - can you imagine it - “fragment” the web to support their own products and interests.

Perhaps it should not be such a shock that business and commerce continue to operate through self-interest, even on the internet. There is no problem with the way Apple restrict the apps users can install: if they didn’t do it, someone else would. The motivation is that a large proportion of users want things to “just work”, and are happier on the “less choice, more reliability” side of the equation.

The existence of the iPhone brought its flip-side - Android - into being, with its comparatively open policy. As long as business happens on the internet, it will want to manipulate things to make a profit; nothing new there. The “internet” as a whole is unblemished by that fact.

The cloud is described as a single point of failure. For example, when Amazon’s EC2 service croaked, businesses that relied on it were offline for the duration. This ignores the fact that “the cloud” is, by definition, precisely the reverse of a “single point”: it is distributed and ought to be redundant (the EC2 outage was a bit of a freak occurrence, caused by a network engineer running the wrong command and taking out the whole farm). There is - again - no threat here. The internet - like any social system - will grow and evolve based on the demands of its users. EC2 makes up a tiny part of the internet and, like any other service, has advantages and disadvantages; nobody has to use it.
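
For what it’s worth, that redundancy is something any client can exploit today. Here is a minimal sketch - the endpoint URLs are hypothetical, not real services - of a client that fails over between replicas in different regions, so one farm croaking is an inconvenience rather than an outage:

```python
import urllib.request

# Hypothetical replica list: these URLs are illustrative, not real services.
ENDPOINTS = [
    "https://us-east.example.com/api/status",
    "https://eu-west.example.com/api/status",
    "https://ap-south.example.com/api/status",
]

def fetch_with_failover(endpoints, timeout=5):
    """Try each replica in turn; one region going down is not fatal."""
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError:
            continue  # this replica is unreachable; fall through to the next
    raise RuntimeError("all replicas unreachable")

# Usage: data = fetch_with_failover(ENDPOINTS)
```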

The very next paragraph does an about-turn by admitting that the cloud actually is distributed, spreading your data among many locations, and that this too is a problem. An example of this “threat” is the hack of RSA, which led to an intrusion into Lockheed Martin’s computers. It’s not clear exactly what the problem is here, other than that specialised groups of people depend on each other to get things done, and sometimes they mess up.

Beyond this point, we are taken on a ghost-train ride into sheer speculation and technical folly. “Imagine being a heart patient and having your pacemaker hacked”, the author warns. Even with the “evils” of the internet, we still know how to put up a firewall on our home PCs, and surely the same can be done for medical devices. We may pick up a virus by randomly browsing the web, but pacemakers will never be so general-purpose; through common sense and good engineering, they will always be highly limited in functionality. If a pacemaker were somehow monitored over the internet, it would not be hard to isolate that monitoring function from the critical control system.
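
That isolation is not exotic engineering, either. A minimal sketch of the idea, with entirely hypothetical names and numbers: the network-facing code only ever sees an immutable, read-only snapshot of the device’s state, and there is simply no inbound command path to hack.

```python
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class Telemetry:
    # Immutable snapshot: the network side can read this, but cannot
    # reach back into the control loop through it.
    heart_rate_bpm: int
    battery_pct: float

class PacingController:
    """Stands in for the safety-critical control loop (hypothetical)."""
    def __init__(self):
        self._rate = 70
        self._battery = 98.5

    def snapshot(self) -> Telemetry:
        return Telemetry(self._rate, self._battery)

def export_for_monitoring(controller: PacingController) -> str:
    # The only thing that crosses the network boundary is a serialised
    # copy of the snapshot - there is no inbound command path at all.
    t = controller.snapshot()
    return json.dumps({"heart_rate_bpm": t.heart_rate_bpm,
                       "battery_pct": t.battery_pct})

print(export_for_monitoring(PacingController()))
```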

If we’re not already quaking in our boots and reaching for the “off” switch on our broadband modems, a similar medical doomsday scenario is presented. What if the glucose levels of a diabetic patient were monitored and controlled over the internet? Wouldn’t that get hacked? Again, this is pure speculation divorced from real constraints. The fault-tolerance requirements of medical systems are far more stringent than those of general consumer devices. Not to mention that nobody would design a system which gambles a human life on network availability.
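
A sketch of what that means in practice (all names and thresholds here are invented for illustration): the network call is treated as an optional extra, and the device always has a conservative local fallback, so a dead connection degrades the service rather than the patient.

```python
def remote_dose_advice(glucose_mg_dl):
    """Hypothetical network call; may fail or time out at any moment."""
    raise TimeoutError("network unavailable")

def local_dose_advice(glucose_mg_dl):
    # Conservative on-device fallback: a fixed, safe baseline.
    return 1.0 if glucose_mg_dl > 180 else 0.0

def compute_dose(glucose_mg_dl):
    # The network is an optimisation, never a dependency: if the remote
    # service is unreachable, the device falls back to its local logic.
    try:
        return remote_dose_advice(glucose_mg_dl)
    except (TimeoutError, OSError):
        return local_dose_advice(glucose_mg_dl)

print(compute_dose(200))  # 1.0 - local fallback, no network required
```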

Anonymity is blamed for the ability of online criminals to operate with impunity. I guess the humble balaclava, or simply “hiding from the police”, aren’t sophisticated enough tactics to make it into New Scientist.

Are we offered any solutions to these terrors of the modern age? Actually, yes. The first is a good old “internet licence”, along with some kind of hardware identification system. Although the author recognises the technical challenges of such an idea, we must also consider the near-certainty of the system being circumvented by those it intends to control. DRM comes to mind as an example of an identification and control system which simply doesn’t work, and is prohibitively expensive to fix after the fact.

In summary, it would be fair to say that the article tries to balance encouraging the openness of the internet against preventing its misuse. But a prominent message prevails nonetheless: the internet is a dangerous place and must therefore be controlled. The real solution lies in basic common sense and best practice.

Armed with fully patched software and treading wisely, we have no cause for concern. Businesses will continue to bend the internet to their own ends (as they do in the real world). Criminals will continue to attempt to exploit it (as they do in the real world). We can’t prevent these things from happening, but we can use our heads to drastically reduce our chances of going splat on the information superhighway.