By David Lounsbury, The Open Group
There has been a lot of talk recently in the U.S. press, on TV, and in government circles about the “hijacking” of U.S. Internet traffic.
The problem as I understand it is that, whether by accident or by intent, a Chinese core router advertised via the Border Gateway Protocol (BGP) that it had the best route for a large set of Internet destinations. Moving traffic over the best advertised route is fundamental to Internet robustness and efficiency, so when other routers saw this apparently better route, they took it. The result was that traffic between the U.S. and other nations’ destinations was routed through China.
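To see why routers “took” the route, it helps to remember that route selection is automatic: given two announcements covering the same address, a router prefers the more specific one. The Python sketch below, using entirely hypothetical prefixes and AS numbers, illustrates just the longest-prefix-match part of that logic (real BGP selection weighs many more attributes):

```python
import ipaddress

# Hypothetical routing table: prefix -> who originated the announcement.
routing_table = {
    ipaddress.ip_network("203.0.113.0/24"): "AS64500 (legitimate origin)",
}

def best_route(dest, table):
    """Pick the most specific (longest-prefix) route covering dest."""
    matches = [net for net in table if dest in net]
    return max(matches, key=lambda net: net.prefixlen) if matches else None

dest = ipaddress.ip_address("203.0.113.50")
print(routing_table[best_route(dest, routing_table)])  # AS64500 (legitimate origin)

# A rogue router advertises a more specific /25 covering the same space.
# Any router that accepts the announcement now prefers it automatically.
routing_table[ipaddress.ip_network("203.0.113.0/25")] = "AS64666 (hijacker)"
print(routing_table[best_route(dest, routing_table)])  # AS64666 (hijacker)
```

Note that nothing in this selection logic asks whether the announcer is authorized to originate the prefix; the algorithm is doing exactly what it was designed to do.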
This has happened before, according to a U.S.-China Economic and Security Review Commission report entitled “Internet Routing Processes Susceptible to Manipulation.” There was also a hijacking of YouTube’s traffic in February 2008, apparently originating in Pakistan. There have been many other examples of bad routes, some potentially riskier, getting published.
Unsurprisingly, the involvement of China concerns a lot of people in the U.S. and generates calls for investigations into our cybersecurity. The problem is that our instinctive model of control over where our data is and how it flows doesn’t work in our hyperconnected world anymore.
It is important to note that this was the routing system operating as designed, not the result of some hidden defect, so testing would not have found any problems. In fact, the tools could have given a false sense of assurance by “proving” the systems were operating correctly. Partitioning the net during a confirmed attack might also resolve the problem, but in this case that would mean no or reduced connectivity to China, which would be a commercial (and potentially a diplomatic) disaster. Confirming what really constitutes an attack is itself a problem: Hanlon’s Razor (“Never attribute to malice that which is adequately explained by stupidity”) applies to the cyber world as well. Locking down the routes or requiring manual control would work, but only at the cost of reduced efficiency and robustness of the Internet. Published best practices could help by giving you some idea of whether the networks you peer with are handling routing responsibly, as the sketch below suggests, but it again comes down to having a framework for trusting your networking partners.
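As a rough illustration of what one such practice, origin filtering, might look like mechanically, consider this sketch. The authorization table and AS numbers are hypothetical; real operators build such filters from IRR databases or RPKI route-origin authorizations rather than a hand-written dictionary:

```python
# Hypothetical authorization table: prefix -> ASes allowed to originate it.
authorized_origins = {
    "203.0.113.0/24": {64500},
    "198.51.100.0/24": {64501},
}

def accept_announcement(prefix: str, origin_as: int) -> bool:
    """Reject announcements whose origin AS is not authorized for the prefix."""
    allowed = authorized_origins.get(prefix)
    if allowed is None:
        return False  # unknown prefix: a policy decision; drop it here
    return origin_as in allowed

print(accept_announcement("203.0.113.0/24", 64500))  # True  (legitimate)
print(accept_announcement("203.0.113.0/24", 64666))  # False (bogus origin)
```

Note the design choice of dropping unknown prefixes: it means a hijacker’s more specific announcement (a /25 carved out of an authorized /24) also fails the lookup, but it only works if the authorization data is complete and kept current, which is exactly the trust-framework problem again.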
This highlights what may be the core dilemma in public cybersecurity: establishing the balance between boundaryless, economical sharing of information via the Internet (which favors open protocols and routing over the most cost-effective networks) and security (which means not using routes you don’t control, no matter what the cost). So far, economics and efficiency have won on the Internet. Managers often pose the old tradeoff: “You can have cheap, quick, or good; pick two.” On the Internet, we may need to start asking, “You can have reliable, fast, or safe; pick two.”
It isn’t an easy problem. The short-term solution probably lies in best practices for operators, and in increased vigilance and monitoring of the overall Internet routing configuration. Longer term, there may be opportunities to improve the security and controls on the routing protocols themselves, with more formal testing and evidence of conformance. However, the real long-term solution to secure exchange of information in a boundaryless world lies not in relying on the security of the pipes or the perimeter, but in improving the trust and security of the data itself, so you can know it is safe from spying and tampering no matter what route it flows over. Security needs to be associated with data and people, not with the connections and routers that carry it; or, as the Jericho Forum’s fifth commandment puts it, “All devices must be capable of maintaining their security policy on an untrusted network.”
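As a toy illustration of securing the data rather than the pipe, the sketch below attaches a message authentication code to a payload so the receiver can detect tampering no matter which routers carried it. The key and message are hypothetical, and a real system would add encryption for confidentiality and a proper key-exchange mechanism (or public-key signatures) rather than a bare pre-shared secret:

```python
import hmac
import hashlib

key = b"shared-secret-key"      # hypothetical pre-shared key, sender & receiver
message = b"payroll batch 1337"

# Sender computes an authentication tag bound to the message content.
tag = hmac.new(key, message, hashlib.sha256).digest()

# ... message and tag traverse any number of untrusted routers ...

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

print(verify(key, message, tag))                   # True: arrived intact
print(verify(key, message + b" tampered", tag))    # False: modified in transit
```

The point is that the verification result depends only on the data and the keys the endpoints hold, not on any property of the path in between.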
What do you see as the key problems and potential solutions in balancing boundarylessness and cybersecurity?
Dave Lounsbury is The Open Group’s Chief Technology Officer, previously VP of Collaboration Services. Dave holds three U.S. patents and is based in the U.S.