In an intriguing technical twist, the "Aurora" attacks (so named by McAfee, after a fragment found in the malware executable) were initiated by taking control of the PCs of unsuspecting employees, apparently through "spear phishing" emails sent to specific users. When the users clicked links in those messages, their machines were infiltrated and then remote-controlled to further penetrate the internal networks of the target companies.
Some early reports suggested that the infected links pointed to malicious PDF files, and that a vulnerability in Adobe Acrobat/Reader was the entry point for the attack. That theory has since been debunked, and the actual vector identified as a previously unknown flaw in Internet Explorer, triggered via JavaScript (Microsoft Security Advisory 979352).
Adobe patched the Acrobat/Reader JavaScript flaw (APSA09-07) on January 12, 2010, in versions 8.2 and 9.3. However, the Adobe patch was released a week after the Aurora attack, so it doesn't settle the break-in whodunit one way or the other. (The patch does, however, allow up-to-date Adobe Acrobat/Reader installations to re-enable JavaScript execution and/or to avoid having to implement a blacklist framework for filtering malicious code.) Incidentally, disabling JavaScript has no impact on the operation of the FileOpen software, which loads into the Adobe viewer via the C/C++ interface, i.e. at a lower level of the Acrobat API.
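For readers who prefer to keep JavaScript disabled until they have patched, the setting can be toggled in the viewer itself (Edit > Preferences > JavaScript) or, in managed Windows environments, pushed out via the registry. The commands below are an illustrative sketch for Adobe Reader 9.x on Windows; the exact key paths vary by product and version, so consult Adobe's administration documentation before deploying:

```bat
:: Illustrative sketch for Adobe Reader 9.x on Windows (paths vary by version).
:: Disable JavaScript execution for the current user:
reg add "HKCU\Software\Adobe\Acrobat Reader\9.0\JSPrefs" /v bEnableJS /t REG_DWORD /d 0 /f

:: Alternatively, Adobe's JavaScript Blacklist Framework can block specific
:: API calls rather than all scripting; the APSA09-07 interim mitigation
:: blacklisted the vulnerable DocMedia.newPlayer method:
reg add "HKLM\SOFTWARE\Policies\Adobe\Acrobat Reader\9.0\FeatureLockDown\cJavaScriptPerms" /v tBlackList /t REG_SZ /d "DocMedia.newPlayer" /f
```

The blacklist approach is the gentler of the two for users whose workflows depend on form scripting, since it leaves the rest of the JavaScript engine enabled.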
Regardless of whose vulnerability was exploited to gain the beachhead, the fact remains that the attackers penetrated the internal networks of Google and dozens of other name-brand software and internet companies, set up encrypted channels for the extraction of data, and operated for a period of days or weeks before being expelled. The attackers may have stolen or modified source code from any of the targeted companies (Google's announcement refers to "the theft of intellectual property from Google", though Adobe's announcement suggests that nothing of value was taken), and in doing so have probably paved the way for even more sophisticated intrusions down the road.
Although most of the networks penetrated by the Aurora hack were private corporate environments, not public hosting providers, the event has sparked wide-ranging debate about the risks of storing critical data on servers "in the cloud". At least one commentary from a widely-respected author calls the breach so damaging that henceforth "none of our stuff on Google's servers is safe."
If indeed the attackers were able to extract or modify Google's source code - not just for Gmail, presumably, but also the various applications providing hosted document authoring, management and storage, etc. - then the amount of data at risk is vast, and there may indeed be reason to think twice before storing important corporate data in that hosted environment, if not in hosted environments generally.
This is not meant to suggest that other companies can better defend themselves from attack than can Google, with all of its resources and security expertise, only that the centralized storage of any valuable good creates incentive for theft. In a physical world it is possible to scale security proportionally to the value of the goods - castles, Fort Knox, missile silos - but doing the same for data is much harder. Data security at a major enterprise like Google comes to resemble airport security, an imperfect struggle to avoid the worst outcome. In hosted computing, like air traffic, scale creates risk: because Google aggregates so much usage, a breach of Google's security (or Facebook's, etc.) is potentially an attack on the data of thousands of companies and millions of users.
In the next post, we'll discuss how FileOpen's document security software was designed from the start to avoid the aggregation of documents, user data and decryption keys all in one place.
Sanford Bingham
President, FileOpen Systems Inc.
www.fileopen.com