
Security & architecture: Convergence, or never the twain shall meet?

By Jim Hietala, The Open Group

Our Security Forum chairman, Mike Jerbic, introduced a concept to The Open Group several months ago that is worth thinking about. Oversimplifying his ideas a bit: his first point is that much of what’s done in architecture is about designing for intention — that is, thinking about the intended function and goals of information systems, and architecting with these in mind. His second, related point has been that in information security management, much of what we do tends to be reactive, and tends to be about dealing with the unintended consequences (variance) of poor architectures and poor software development practices. Consider a few examples:

Signature-based antivirus, which relies upon malware being seen in the wild, captured, and signatures being distributed to A/V software around the world to pattern-match and stop the specific attack. Highly reactive (a minimal sketch of this pattern matching follows these examples). The same is true for signature-based IDS/IPS, as well as anomaly-based systems.

Data Loss (or Leak) Prevention, which for the most part tries to spot sensitive corporate information being exfiltrated from a corporate network. Also very reactive.

Vulnerability management, which is almost entirely reactive. The cycle of “Scan my systems, find vulnerabilities, patch or remediate, and repeat” exists entirely to find the weak spots in our environments. This cycle almost ensures that more variance will be headed our way in the future, as each new patch potentially brings with it uncertainty and variance in the form of new bugs and vulnerabilities.
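To make the reactive nature of these tools concrete, here is a minimal sketch of signature-based detection, assuming purely hypothetical byte patterns (this is not any real A/V engine): the scanner can only flag what it has already been given, so malware that has never been captured and fingerprinted sails through.

```python
# Minimal sketch of signature-based scanning (hypothetical signatures, not a real engine).
# The patterns stand in for the definitions an A/V vendor distributes only after
# the malware has been seen in the wild and captured.

KNOWN_SIGNATURES = {
    "EvilDownloader.A": b"\xde\xad\xbe\xef\x13\x37",
    "MacroWorm.B": b"AutoOpen:payload",
}

def scan(data: bytes) -> list[str]:
    """Return the names of any known signatures found in the data."""
    return [name for name, sig in KNOWN_SIGNATURES.items() if sig in data]

# A sample containing a known pattern is caught...
print(scan(b"header \xde\xad\xbe\xef\x13\x37 trailer"))  # ['EvilDownloader.A']

# ...but a brand-new variant, never captured and fingerprinted, is not.
print(scan(b"novel, never-before-seen payload"))         # []
```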

The fact that each of these security technology categories even exists has everything to do with poor architectural decisions made in years gone by, or with inadequate ongoing software development and QA practices.

Intention versus variance. Architects tend to be good at the former; security professionals have (of necessity) had to be good at managing the consequences of the latter.

Can the disciplines of architecture and information security do a better job of co-existence? What would that look like? Can we get to the point where security is truly “built in” versus “bolted on”?

What do you think?

P.S. The Open Group has numerous initiatives in the area of security architecture. Look for an updated Enterprise Security Architecture publication from us in the next 30 days; we also have ongoing projects to align TOGAF™ and SABSA, and to develop a Cloud Security Reference Architecture. If there are other areas of security architecture where you’d like to see guidance developed, please contact us.

An IT security industry veteran, Jim Hietala is Vice President of Security at The Open Group, where he is responsible for security programs and standards activities. He holds the CISSP and GSEC certifications. Jim is based in the U.S.



Cybersecurity in a boundaryless world

By David Lounsbury, The Open Group

There has been a lot of talk recently in the United States, in the print and TV press and in government, about the “hijacking” of U.S. Internet traffic.

The problem as I understand it is that (whether by accident or intent) a Chinese core router advertised that it had the best route to a large set of Internet destinations. Moving traffic over the best available route is a fundamental algorithm for Internet robustness and efficiency, so when other routers saw this apparently better route, they took it. This resulted in traffic between the U.S. and other nations’ destinations being routed through China.
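To see why this is normal operation rather than a bug, consider a rough sketch of the forwarding decision. The Python fragment below (a simplification; real BGP path selection weighs many more attributes, and the prefixes and network names here are hypothetical) shows longest-prefix-match route selection: the moment a more specific route for a destination is advertised, every router that hears it prefers it.

```python
import ipaddress

# Simplified route table: advertised prefix -> origin network (hypothetical names).
routes = {
    ipaddress.ip_network("8.0.0.0/8"): "AS-legitimate",
}

def best_route(dest: str):
    """Longest-prefix match: the most specific advertised route wins."""
    addr = ipaddress.ip_address(dest)
    candidates = [(net, origin) for net, origin in routes.items() if addr in net]
    return max(candidates, key=lambda c: c[0].prefixlen, default=None)

print(best_route("8.8.8.8"))  # traffic flows via AS-legitimate

# Another router advertises a more specific prefix covering the same space...
routes[ipaddress.ip_network("8.8.8.0/24")] = "AS-elsewhere"

# ...and, by design, the apparently better route is now preferred everywhere.
print(best_route("8.8.8.8"))  # traffic now flows via AS-elsewhere
```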

This has happened before, according to a U.S.-China Economic and Security Review Commission report entitled “Internet Routing Processes Susceptible to Manipulation.” There was also a hijacking of YouTube traffic in February 2008, apparently originating in Pakistan. There have been many other examples of bad routes — some potentially riskier — getting published.

Unsurprisingly, the involvement of China concerns a lot of people in the U.S., and generates calls for investigations regarding our cybersecurity. The problem is that our instinctive model of control over where our data is and how it flows doesn’t work in our hyperconnected world anymore.

It is important to note that this was proper technical operation, not the result of some hidden defect, so testing would not have found any problems. In fact, the tools could have given a false sense of assurance by “proving” the systems were operating correctly. Partitioning the net during a confirmed attack might also resolve the problem — but in this case that would mean no or reduced connectivity to China, which would be a commercial (and potentially a diplomatic) disaster. Confirming what really constitutes an attack is also a problem — Hanlon’s Razor (“Never attribute to malice that which is adequately explained by stupidity”) applies to the cyber world as well. Locking down the routes or requiring manual control would work, but only at the cost of reduced efficiency and robustness of the Internet. Published best practices could help by giving you some idea of whether the networks you peer with are following sound routing procedures — but it again comes down to having a framework for trusting your networking partners.
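As a concrete illustration of the trade-off, “locking down the routes” amounts to checking advertisements against a manually maintained allow-list, roughly as sketched below (hypothetical prefixes and network names; real-world analogues include prefix filters and RPKI origin validation). The loss of flexibility is immediately visible: a legitimate new route is rejected right alongside the hijack.

```python
import ipaddress

# Hypothetical allow-list: each prefix and the only origin permitted to announce it.
EXPECTED_ORIGINS = {
    ipaddress.ip_network("8.8.8.0/24"): "AS-legitimate",
}

def accept_advertisement(prefix: str, origin: str) -> bool:
    """Accept a route only if the (prefix, origin) pair is on the allow-list."""
    return EXPECTED_ORIGINS.get(ipaddress.ip_network(prefix)) == origin

print(accept_advertisement("8.8.8.0/24", "AS-legitimate"))  # True: expected route
print(accept_advertisement("8.8.8.0/24", "AS-elsewhere"))   # False: hijack blocked
print(accept_advertisement("9.9.9.0/24", "AS-new-peer"))    # False: a valid new route is lost too
```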

This highlights what may be the core dilemma in public cybersecurity: striking the balance between boundaryless, economical sharing of information via the Internet (which favors open protocols and routing over the most cost-effective networks) and security (which means not using routes you don’t control, no matter what the cost). So far, economics and efficiency have won on the Internet. Managers are used to hearing, “You can have cheap, quick, or good — pick two.” On the Internet, we may need to start saying, “You can have reliable, fast, or safe — pick two.”

It isn’t an easy problem. The short-term solution probably lies in best practices for operators, and in increased vigilance and monitoring of the overall Internet routing configuration. Longer term, there may be opportunities to improve the security of and controls on the routing protocols, along with more formal testing and evidence of conformance. However, the real long-term solution to secure exchange of information in a boundaryless world lies not in relying on the security of the pipes or the perimeter, but in improving the trust and security of the data itself, so you can know it is safe from spying and tampering no matter what route it flows over. Security needs to be associated with data and people, not with the connections and routers that carry it — or, as the Jericho Forum’s fifth commandment puts it, “All devices must be capable of maintaining their security policy on an untrusted network.”
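As one minimal sketch of what it means for security to travel with the data rather than with the pipes, the fragment below uses only Python’s standard-library hmac module (assuming a key already shared out of band) to make tampering detectable no matter which routers carried the message; a real deployment would add encryption and proper key management.

```python
import hashlib
import hmac

SHARED_KEY = b"out-of-band shared secret"  # assumption: exchanged securely beforehand

def protect(message: bytes) -> bytes:
    """Prefix an HMAC tag so the data itself carries its integrity protection."""
    tag = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return tag + message

def verify(wire: bytes) -> bytes:
    """Check the tag; fail if the data was altered on any hop along any route."""
    tag, message = wire[:32], wire[32:]
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message was tampered with in transit")
    return message

wire = protect(b"routing-independent payload")
print(verify(wire))  # intact, regardless of the path taken

try:
    verify(wire[:32] + b"altered payload bytes")
except ValueError as err:
    print(err)  # tampering detected by the data's own protection
```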

What do you see as the key problems and potential solutions in balancing boundarylessness and cybersecurity?

Dave Lounsbury is The Open Group‘s Chief Technology Officer; he was previously VP of Collaboration Services. Dave holds three U.S. patents and is based in the U.S.

