Monthly Archives: December 2010

IT: The professionals

By Steve Philp, The Open Group

The European Commission (EC) recently warned of a potential 350,000-plus shortfall in IT practitioners in the region by 2015 and criticised the UK for failing to adequately promote professionalism in the industry. According to EC principal administrator André Richier, although Europe has approximately four million IT practitioners, 50 per cent are not IT degree-qualified.

While the EC raises some interesting points about the education of those entering the field of IT, it’s important not to lose sight of what really matters – ensuring IT executives are continually improving and developing their skills and capabilities.

Developments in technology are moving faster than ever and bringing about major changes to the lives of IT professionals. Today, for instance, it’s crucial that IT professionals are not just technical experts but are also able to speak the language of business and ensure the work of the IT function is closely aligned with business objectives. This is particularly so when it comes to cloud computing, where pressure is mounting for IT teams to clearly articulate the benefits the technology can offer the business.

Business decision makers aren’t interested in the details of cloud computing implementation but do want to know that IT teams understand their situation and are well placed to solve the challenges they face.  In short, they want to know important IT decisions being made in their business are in the hands of true professionals.

Certification can act as an important mark of professional standards and inspire confidence by verifying the qualities and skills IT executives have with regard to the effective deployment, implementation and operation of IT solutions. It’s these factors that led to the launch of The Open Group’s IT Specialist Certification (ITSC) Programme. The programme is peer-reviewed, vendor-neutral and global, ensuring IT executives can use it to distinguish their skills regardless of the organisation they work for. As such, it guarantees a professional standard, assuring business leaders that the IT professionals they have in place can help address the challenges they face. Given the current pressures to do more with less and the rising importance of IT to business, expect to see certification rise in importance in the months ahead.

Steve Philp is the Marketing Director for the IT Architect and IT Specialist certification programs at The Open Group. Over the past 20 years, Steve has worked predominantly in sales, marketing and general management roles within the IT training industry. Based in Reading, UK, he joined The Open Group in 2008 to promote and develop the organization’s skills- and experience-based IT certifications.

1 Comment

Filed under Certifications, Enterprise Architecture

The Trusted Technology Forum: Best practices for securing the global technology supply chain

By Mary Ann Davidson, Oracle

Hello, I am Mary Ann Davidson. I am the Chief Security Officer for Oracle and I want to talk about The Open Group Trusted Technology Provider Framework (O-TTPF). What, you may ask, is that? The Trusted Technology Forum (OTTF) is an effort within The Open Group to develop a body of practices related to software and hardware manufacturing — the O-TTPF — that will address procurers’ supply chain risk management concerns.

That’s a mouthful, isn’t it? Putting it in layman’s terms, if you are an entity purchasing hardware and software for mission-critical systems, you want to know that your supplier has reasonable practices as to how they build and maintain their products that addresses specific (and I would argue narrow, more on which below) supply chain risks. The supplier ought to be doing “reasonable and prudent” practices to mitigate those risks and to be able to tell their buyers, “here is what I did.” Better industry practices related to supply chain risks with more transparency to buyers are both, in general, good things.

Real-world solutions

One of the things I particularly appreciate is that the O-TTPF is being developed by, among others, actual builders of software and hardware. So many of the “supply chain risk frameworks” I’ve seen to date appear to have been developed by people who have no actual software development and/or hardware manufacturing expertise. I think we all know that even well-intentioned and smart people without direct subject matter experience who want to “solve a problem” will often not solve the right problem, or will mandate remedies that may be ineffective, expensive and lacking the always-needed dose of “real world pragmatism.” In my opinion, an ounce of “pragmatic and implementable” beats a pound of “in a perfect world with perfect information and unlimited resources” any day of the week.

I know this from my own program management office in software assurance. When my team develops good ideas to improve software, we always vet them with our security leads in development, to try to achieve consensus and buy-in in some key areas:

  • Are our ideas good?
  • Can they be implemented?  Specifically, is our proposal the best way to solve the stated problem?
  • Given the differences in development organizations and differences in technology, is there a body of good practices that development can draw from rather than require a single practice for everyone?

That last point is a key one. There is almost never a single “best practice” that everybody on the planet should adhere to, in almost any area of life. The reality is that there are often a number of ways to get to a positive outcome, and the nature of business – particularly, the competitiveness and innovation that enables business – depends on flexibility. The OTTF is outcomes-focused and “body of practice” oriented, because there is no single best way to build hardware and software, and there is no single, monolithic supply chain risk management practice that will work for everybody or is appropriate for everybody.

It’s perhaps a stretch, but consider baking a pie. There is – last time I checked – no International Organization for Standardization (ISO) standard for how to bake a cherry pie (and God forbid there ever is one). Some people cream butter and sugar together before adding flour. Other people dump everything in a food processor. (I buy pre-made piecrusts and skip this step.) Some people add a little liqueur to the cherries for a kick; other people just open a can of cherries and dump it in the piecrust. There are no standards organization smackdowns over two-crust vs. one-crust pies, or over whether a crumble topping or a pastry crust constitutes a “standards-compliant cherry pie.” Pie consumers want to know that the baker used reasonable ingredients – piecrust and cherries – that none of the ingredients were bad, and that the baker didn’t allow any errant flies to wander into the dough or the filling. But the buyer should not be specifying exactly how the baker makes the pie or exactly how they keep flies out of the pie (or they can bake it themselves). The only thing that prescribing a single “best” way to bake a cherry pie will lead to is a chronic shortage of really good cherry pies and a glut of tasteless and mediocre ones.

Building on standards

Another positive aspect of the O-TTPF is that it is intended to build upon and incorporate existing standards – such as the international Common Criteria – rather than replace them. Incorporating and referring to existing standards is important because supply chain risk is not the same thing as software assurance — though they are related. For example, many companies evaluate one or more products, but not all products they produce. Therefore, even to the extent their CC evaluations incorporate a validation of the “security of the software development environment,” it is related to a product, and not necessarily to the overall corporate development environment. More importantly, one of the best things about the Common Criteria is that it is an existing ISO standard (ISO/IEC 15408:2005) and, thanks to the Common Criteria Recognition Arrangement (CCRA), a vendor can do a single evaluation accepted in many countries. Having to re-evaluate the same product in multiple locations – or having to do a “supply chain certification” that covers the same sorts of areas that the CC covers – would be wasteful and expensive. The O-TTPF builds on but does not replace existing standards.

Another positive: the focus I see on “solving the right problems.” Too many supply chain risk discussions fail to define “supply chain risk” and, in particular, treat every possible concern with a product as a supply chain risk. (If I buy a car that turns out to be a lemon, is it a supply chain risk problem? Or just a “lemon”?) For example, consider a system integrator who took a bunch of components and glued them together without delivering the resultant system in a locked-down configuration. The weak configuration is not, per se, a supply chain risk, though arguably it is poor security practice, and I’d also say it’s a weak software assurance practice. With regard to the OTTF, we defined a “supply chain attack” as (paraphrased) an attempt to deliberately subvert the manufacturing process, rather than the exploitation of defects that happened to be in the product. Every product has defects, some are security defects, and some of those are caused by coding errors. That is very different – profoundly different – from someone deliberately putting a back door in code. The former is a software assurance problem; the latter is a supply chain attack.

Why does this matter? Because supply chain risk – real supply chain risk, not every single concern either a vendor or a customer could have about a product – needs focus if we are to address it. As has been said about priorities, if everything is priority number one, then nothing is. In particular, if everything is “a supply chain risk,” then we can’t focus our efforts and home in on a reasonable, achievable, practical and implementable set – “set” meaning “multiple avenues that lead to positive outcomes” – of practices that can lead to better supply chain practices for all, and a higher degree of confidence among purchasers.

Considering the nature of the challenges that the OTTF is trying to address, and the nature of the challenges our industry faces, I am pleased that Oracle is participating in the OTTF. I look forward to working with peers – and consumers of technology – to help improve everyone’s supply chain risk management practices and the confidence of consumers of our technologies.

Mary Ann Davidson is the Chief Security Officer at Oracle Corporation, responsible for Oracle product security, as well as security evaluations, assessments and incident handling. She has been named one of Information Security’s top five “Women of Vision,” is a Fed100 award recipient from Federal Computer Week and was recently named to the Information Systems Security Association Hall of Fame. She has testified on the issue of cybersecurity multiple times to the US Congress. Ms. Davidson has a B.S.M.E. from the University of Virginia and an M.B.A. from the Wharton School of the University of Pennsylvania. She has also served as a commissioned officer in the U.S. Navy Civil Engineer Corps. She is active in The Open Group Trusted Technology Forum and writes a blog at Oracle.

6 Comments

Filed under Cybersecurity, Supply chain risk

EA: Being fit is overrated

By Garry Doherty, The Open Group

Charles Darwin is usually misquoted. He didn’t mention anything about “survival of the fittest;” what he really said was, “It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change.” And that’s what EA is all about, is it not?

In nature, complexity abounds; but nothing is ever more complex than it needs to be… a real lesson for Enterprise Architects.

So, like evolution, TOGAF™ is a catalyst for appropriate change, creating the infrastructure and momentum behind business strategy, simplifying the organization and ensuring that it is never more complex than it actually needs to be. To slightly misquote Turing Award winner Alan Perlis: “Fools ignore complexity, pragmatists suffer it, but geniuses remove it.”

Managing complexity is the key

Of course, I’m not suggesting that Enterprise Architects are geniuses (though I’ve met a few who could credibly apply for the position), but without a control mechanism to ensure that appropriate change happens in the way it needs to happen, rising complexity (and therefore risk) will strangle the organization.

TOGAF™ is much more than just a method for doing EA; it must genuinely be seen in the broader context of a natural process. Admittedly, evolution has had a bit of a head start, but the outcomes of TOGAF™ and evolution are similar. And just as natural evolution never stops, neither must that of TOGAF™.

Garry Doherty is an experienced product marketer and product manager with a background in the IT and telecommunications industries. Garry is the TOGAF™ Product Manager and the ArchiMate® Forum Director at The Open Group. Garry is based in the U.K.

5 Comments

Filed under Enterprise Architecture, TOGAF®

The Newest from SOA: The SOA Ontology Technical Standard

By Heather Kreger, IBM

The Open Group just announced the availability of The Open Group SOA Ontology Technical Standard.

Ontology?? Sounds very ‘semantic Web,’ doesn’t it? Just smacks of reasoning engines. What on earth do architects using SOA want with reasoning engines?

Actually, Ontologies are misunderstood — an Ontology is simply the definition of a set of concepts and the relationships between them for a particular domain — in this case, the domain is SOA.

They don’t HAVE to be used for reasoning… or the semantic Web. And they are more than a simple glossary, which only defines terms, because they also define the relationships between those terms — something important for SOA, we thought. It’s also important to note that they are more formal than Reference Models, usually by providing representations in OWL (just in case you want to use popular ontology tools and reasoners).
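To make “concepts plus relationships” concrete, here is a minimal, hypothetical sketch in Python using the rdflib library. The namespace, class and property names below are illustrative placeholders only, not the terms actually defined in the standard; consult the published SOA Ontology for the real definitions.

```python
# Minimal, hypothetical ontology fragment built with rdflib.
# All names below are placeholders, not the standard's actual terms.
from rdflib import Graph, Namespace, Literal, RDF, RDFS, OWL

SOA = Namespace("http://example.org/soa#")   # placeholder namespace
g = Graph()
g.bind("soa", SOA)

# Concepts (OWL classes)
for cls in (SOA.Service, SOA.ServiceContract, SOA.Task):
    g.add((cls, RDF.type, OWL.Class))

# A relationship between concepts -- the part a plain glossary cannot capture
g.add((SOA.hasContract, RDF.type, OWL.ObjectProperty))
g.add((SOA.hasContract, RDFS.domain, SOA.Service))
g.add((SOA.hasContract, RDFS.range, SOA.ServiceContract))

# An individual service and its contract
g.add((SOA.OrderService, RDF.type, SOA.Service))
g.add((SOA.OrderSLA, RDF.type, SOA.ServiceContract))
g.add((SOA.OrderService, SOA.hasContract, SOA.OrderSLA))
g.add((SOA.OrderService, RDFS.label, Literal("Order processing service")))

print(g.serialize(format="turtle"))
```

Because the result is plain OWL/RDF, the same fragment can be loaded into any of the popular ontology editors or reasoners mentioned above.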

What would an architect do with THIS ontology?

It can be used simply to read and understand the key concepts of SOA and, more importantly, to agree with others – in your company and between organizations – on a shared set of definitions and a common understanding of those concepts. Making sure you are ‘speaking the same language’ is essential for any architect to be able to communicate effectively with IT, business, and marketing professionals within the enterprise as well as with vendors and suppliers outside the enterprise. This common language can help ensure that you can ask the right questions and interpret the answers you get unambiguously.

It can be used as a basis for the models for the SOA solution as well. In fact, this is happening in the SOA repository standard under development in OASIS, S-RAMP, where they have used the SOA Ontology as the foundational business model for registry/repository integration.

The Ontology can also be augmented with additional related domain-specific ontologies; for example, on Governance or Business Process Management… or even in a vertical industry like retail where ARTS is developing service models. In fact, we, the SOA Ontology project, tried to define the minimum, absolutely core concepts needed for SOA and allow other domain experts to define additional details for Policy, Process, Service Contract, etc.

This Ontology was developed to be consistent with existing and developing SOA standards, including OMG’s SoaML and BPMN and those in The Open Group SOA Work Group: the SOA Governance Framework, OSIMM, and the SOA Reference Architecture. It may seem it would have been good to have developed this standard before now, but the good news is that it is grounded in extensive real-world experience developing, deploying and communicating about SOA solutions over the past five years. The Ontology reflects the lessons learned about what terms NOT to use to avoid confusion, and how best to distinguish among some common and often overused concepts like service composition, process, service contracts, and policy and their roles in SOA.

Have a look at the new SOA Ontology and see if it can help you in your communications for SOA. It’s available to you free at this link: http://www.opengroup.org/bookstore/catalog/c104.htm

Heather Kreger is IBM’s lead architect for Smarter Planet, Policy, and SOA Standards in the IBM Software Group, with 15 years of standards experience. She has led the development of standards for Cloud, SOA, Web services, Management and Java in numerous standards organizations, including W3C, OASIS, DMTF, and The Open Group. Heather is currently co-chair for The Open Group’s SOA Work Group and liaison for The Open Group SOA and Cloud Work Groups to ISO/IEC JTC1 SC7 SOA SG and INCITS DAPS38 (US TAG to ISO/IEC JTC 1 SC38). Heather is also the author of numerous articles and specifications, as well as the book Java and JMX: Building Manageable Systems, and most recently was co-editor of Navigating the SOA Open Standards Landscape Around Architecture.

6 Comments

Filed under Cloud/SOA

Security & architecture: Convergence, or never the twain shall meet?

By Jim Hietala, The Open Group

Our Security Forum chairman, Mike Jerbic, introduced a concept to The Open Group several months ago that is worth thinking a little about. Oversimplifying his ideas a bit, the first point is that much of what’s done in architecture is about designing for intention — that is, thinking about the intended function and goals of information systems, and architecting with these in mind. His second related point has been that in information security management, much of what we do tends to be reactive, and tends to be about dealing with the unintended consequences (variance) of poor architectures and poor software development practices. Consider a few examples:

  • Signature-based antivirus, which relies upon malware being seen in the wild, captured, and signatures being distributed to A/V software around the world to pattern-match and stop that specific attack. Highly reactive. The same is true for signature-based IDS/IPS or anomaly-based systems. (A toy sketch of why this is reactive follows this list.)
  • Data Loss (or Leak) Prevention, which for the most part tries to spot sensitive corporate information being exfiltrated from a corporate network. Also very reactive.
  • Vulnerability management, which is almost entirely reactive. The cycle of “scan my systems, find vulnerabilities, patch or remediate, and repeat” exists entirely to find the weak spots in our environments. This cycle almost ensures that more variance will be headed our way in the future, as each new patch potentially brings with it uncertainty and variance in the form of new bugs and vulnerabilities.
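As a toy illustration of how signature-based detection is reactive by construction, consider the hypothetical sketch below; the “signatures” are invented for illustration only.

```python
# Toy sketch: signature matching only catches what has already been seen,
# captured, and distributed. The "signatures" below are invented examples.
KNOWN_SIGNATURES = {
    b"\xde\xad\xbe\xef",   # hypothetical byte pattern from malware caught in the wild
    b"EVIL_MACRO_V1",
}

def scan(payload: bytes) -> bool:
    """Return True if the payload matches any known signature."""
    return any(sig in payload for sig in KNOWN_SIGNATURES)

print(scan(b"header EVIL_MACRO_V1 trailer"))   # True  -- known threat, caught
print(scan(b"header EVIL_MACRO_V2 trailer"))   # False -- novel variant sails through
                                                # until a new signature ships
```

Nothing in this loop anticipates the unknown; the defence only improves after an attack has already happened somewhere, which is exactly the reactive posture described above.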

The fact that each of these security technology categories even exists has everything to do with poor architectural decisions made in years gone by, or with inadequate ongoing software development and QA practices.

Intention versus variance. Architects tend to be good at the former; security professionals have (of necessity) had to be good at managing the consequences of the latter.

Can the disciplines of architecture and information security do a better job of co-existence? What would that look like? Can we get to the point where security is truly “built in” versus “bolted on”?

What do you think?

P.S. The Open Group has numerous initiatives in the area of security architecture. Look for an updated Enterprise Security Architecture publication from us in the next 30 days; plus we have ongoing projects to align TOGAF™ and SABSA, and to develop a Cloud Security Reference Architecture. If there are other areas where you’d like to see guidance developed in the area of security architecture, please contact us.

An IT security industry veteran, Jim Hietala is Vice President of Security at The Open Group, where he is responsible for security programs and standards activities. He holds the CISSP and GSEC certifications. Jim is based in the U.S.

Comments Off

Filed under Cybersecurity

Cybersecurity in a boundaryless world

By David Lounsbury, The Open Group

There has been a lot of talk recently in the United States in the print, TV and government press about the “hijacking” of U.S. Internet traffic.

The problem as I understand it is that (whether by accident or intent) a Chinese core routing server advertised it had the best route for a large set of Internet routes. Moving traffic over the best route is a fundamental algorithm for Internet robustness and efficiency, so when other routers saw this apparently better route, they took it. This resulted in traffic between the U.S. and other nations’ destinations being routed through China.
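A simplified way to see why such an advertisement attracts traffic: routers generally prefer the most specific matching prefix, breaking ties on attributes such as AS-path length, so a new announcement that looks “better” wins automatically. The sketch below is a toy model of that selection rule, not a real BGP implementation, and the prefixes and AS numbers are invented.

```python
# Toy model of route selection: prefer the most specific (longest) prefix,
# then the shortest AS path. Not a real BGP implementation; all values invented.
import ipaddress

# (prefix, AS path) -- the routes a router currently knows about
routes = [
    (ipaddress.ip_network("203.0.113.0/24"), ["AS64500", "AS64501"]),
]

def best_route(dest, table):
    candidates = [(net, path) for net, path in table if dest in net]
    # longest prefix wins; ties broken by shortest AS path
    return max(candidates, key=lambda r: (r[0].prefixlen, -len(r[1])))

dest = ipaddress.ip_address("203.0.113.7")
print(best_route(dest, routes))    # normal path via AS64500/AS64501

# A new, more specific advertisement automatically "wins" selection,
# whether it was announced by accident or by intent.
routes.append((ipaddress.ip_network("203.0.113.0/25"), ["AS64666"]))
print(best_route(dest, routes))    # traffic now flows toward AS64666
```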

This has happened before, according to a U.S.-China Economic and Security Review Commission report entitled “Internet Routing Processes Susceptible to Manipulation.” There was also a hijacking of YouTube traffic in February 2008, apparently originating in Pakistan. There have been many other examples of bad routes — some potentially riskier — being published.

Unsurprisingly, the involvement of China concerns a lot of people in the U.S., and generates calls for investigations regarding our cybersecurity. The problem is that our instinctive model of control over where our data is and how it flows doesn’t work in our hyperconnected world anymore.

It is important to note that this was proper technical operation, not a result of some hidden defect. So testing would not have found any problems. In fact, the tools could have given a false sense of assurance by “proving” the systems were operating correctly. Partitioning the net during a confirmed attack might also resolve the problem — but in this case that would mean no or reduced connectivity to China, which would be a commercial (and potentially a diplomatic) disaster.  Confirming what really constitutes an attack is also a problem – Hanlon’s Razor (“Never attribute to malice that which is adequately explained by stupidity”) applies to the cyber world as well. Locking down the routes or requiring manual control would work, but only at the cost of reduced efficiency and robustness of the Internet.  Best practices could help by giving you some idea of whether the networks you peer with are following best practices on routing — but it again comes down to having a framework to trust your networking partners.

This highlights what may be the core dilemma in public cybersecurity: striking the balance between boundaryless and economical sharing of information via the Internet (which favors open protocols and routing over the most cost-effective networks) and security (which means not using routes you don’t control, no matter what the cost). So far, economics and efficiency have won on the Internet. Managers often ask the question, “You can have cheap, quick, or good — pick two.” On the Internet, we may need to start asking the question, “You can have reliable, fast or safe — pick two.”

It isn’t an easy problem. The short-term solution probably lies in best practices for the operators, and increased vigilance and monitoring of the overall Internet routing configuration. Longer term, there may be some opportunities for improvements to the security and controls on the routing protocols, and more formal testing and evidence of conformance. However, the real long-term solution to secure exchange of information in a boundaryless world lies in not relying on the security of the pipes or the perimeter, but in improving the trust and security of the data itself, so you can know it is safe from spying and tampering no matter what route it flows over. Security needs to be associated with data and people, not the connections and routers that carry it — or, as the Jericho Forum’s 9th commandment puts it, “All devices must be capable of maintaining their security policy on an untrusted network.”
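A minimal sketch of what “security that travels with the data” could look like, using only Python’s standard library: the sender attaches an integrity tag to the payload, and the receiver can verify it regardless of which routers carried it. Key management is deliberately simplified here; a real deployment would also encrypt the payload and provision keys properly.

```python
# Minimal sketch: protect the data itself rather than trusting the pipe.
# Key handling is deliberately simplified for illustration.
import hashlib
import hmac
import os

shared_key = os.urandom(32)   # in reality: provisioned or exchanged securely

def seal(message: bytes, key: bytes):
    """Attach an integrity tag that travels with the data."""
    tag = hmac.new(key, message, hashlib.sha256).digest()
    return message, tag

def verify(message: bytes, tag: bytes, key: bytes) -> bool:
    """The receiver checks the tag, regardless of the route the data took."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg, tag = seal(b"quarterly results", shared_key)
print(verify(msg, tag, shared_key))                     # True  -- intact
print(verify(b"tampered in transit", tag, shared_key))  # False -- tampering detected
```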

What do you see as the key problems and potential solutions in balancing boundarylessness and cybersecurity?

Dave Lounsbury is The Open Group‘s Chief Technology Officer, previously VP of Collaboration Services. Dave holds three U.S. patents and is based in the U.S.

Comments Off

Filed under Cybersecurity