The Expert's View with Robert Bigman

Preventing Another OPM-Type Breach

Former CIA CISO on Practical Defensive Measures
Robert Bigman

Imagine that the Office of Personnel Management ran a shipshape, Federal Information Security Management Act-compliant system that adopted every National Institute of Standards and Technology security guideline. Let's assume OPM fully patched its software, employed the latest and greatest cybersecurity intelligence, "proxied" all sessions through a firewall and encrypted its stored data.

Would all these measures stop China's determined People's Liberation Army from stealing 4.2 million U.S. government personnel files (including mine)? Would they inhibit sophisticated cybercriminal syndicates in China or Russia?

Of course not.

The real issue surrounding the OPM breach has not been discussed in the thousands of sentences of commentary written about the hack. The problem isn't a lack of security standards, commercial security products and services or even cybersecurity competence; it's the fact that the existing collection of commercial hardware, firmware and software employed to provide IT services contains inherent design and implementation vulnerabilities. As we add millions of lines of poorly secured code every day to our existing base of vulnerable applications and operating systems, the next zero-day exploit is being written and tested.

Even if OPM were second to none at cybersecurity, those pilfered, juicy SF-86 security clearance application files never had a chance once they came upon the digital radar screen of the Chinese government. With few exceptions, almost any TCP/IP Internet-accessible system can be had if the value of the target exceeds the technical and political cost to obtain it. While security measures like two-factor authentication, next-generation firewalls and data encryption help reduce risk, everyone familiar with the technical aspects of the Stuxnet/Flame/Duqu/Regin/Gauss family of exploits understands that the offenders are decades ahead of the defenders. More worrisome, the gap is widening, and these sophisticated families of exploits are now in the hands of far less sophisticated hackers.

Addressing the 'Trustability' Problem

The problem is "trustability." What sophisticated hackers understand - and our policymakers don't - is that regardless of the number of security products and services deployed, Internet-connected systems remain vulnerable to exploitation. Success is only a matter of time and possibly the right zero-day payload. The only solution to this dilemma is to raise the "trustability" level of our computer systems high enough to make even sophisticated hacking riskier and easier to detect. As they say in the military: Reduce the attack surface.

Organizations can greatly reduce their risk by logically isolating their networks from the bad guys. As depicted in the diagram below, organizations should isolate access to the Internet by using a demilitarized zone that breaks the TCP/IP connection before it reaches the internal network and allows Internet access only via a virtual desktop infrastructure, or VDI, logical client reached over a VLAN from the DMZ. All Internet connections terminate in the DMZ domain, and data can be moved into the organization's internal network only via a one-way physical diode.
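To make the zone boundaries concrete, here is a minimal sketch in Python of the kind of flow policy such a design implies. The zone names, protocol labels and the allow-list itself are illustrative assumptions, not a reference configuration.

```python
# Hypothetical flow policy for the three zones described above.
# Only explicitly listed (source, destination, protocol) triples are permitted.
ALLOWED_FLOWS = {
    ("internal", "dmz"): {"vdi-display"},    # thin screen/keyboard channel over the VLAN
    ("dmz", "internet"): {"https", "dns"},   # all Internet sessions terminate in the DMZ
    ("dmz", "internal"): {"diode-transfer"}, # one-way file drop through the data diode
    # Note: there is no ("internal", "internet") entry -- no direct path out.
}

def flow_permitted(src_zone: str, dst_zone: str, protocol: str) -> bool:
    """Return True only if the flow is on the explicit allow-list."""
    return protocol in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

assert flow_permitted("dmz", "internet", "https")
assert not flow_permitted("internal", "internet", "https")  # no direct outbound access
assert not flow_permitted("internal", "dmz", "smb")         # bulk data cannot leave the internal network
```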

This configuration essentially eliminates the data exfiltration risk. Instead of having to secure and monitor every endpoint in the organization, it narrows the job to securing and monitoring the DMZ domain alone.

Even if a hacker managed to reach the internal network through the VDI protocol, the one-way diode into the internal network physically prevents data from being sent back out to the DMZ. Accordingly, an organization needs to monitor data movement only at the DMZ to ensure compliance with digital rights policies.
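The one-way property is what does the work here. As a rough illustration, the following Python sketch emulates the diode's transfer pattern in software: the DMZ side only sends and the internal side only receives, so there is no channel through which a compromised internal host could push data back out. Real diodes enforce this in hardware, with the return path physically absent; the host name and port below are assumptions for the example.

```python
import socket

DIODE_LANDING = ("internal-landing-host.example", 40000)  # hypothetical receiver on the internal network

def diode_send(payload: bytes) -> None:
    """DMZ side: fire-and-forget datagrams; no reply is ever read, mirroring the missing return path."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, DIODE_LANDING)

def diode_receive() -> None:
    """Internal side: receive only; the socket never transmits, so nothing can flow back to the DMZ."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("0.0.0.0", 40000))
        while True:
            data, _addr = s.recvfrom(65535)
            # Hand the payload to content inspection / digital rights checks here.
            print(f"received {len(data)} bytes from the DMZ")
```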

[Diagram: Creating a DMZ. Source: Robert Bigman]

To strengthen such an approach, the U.S. government and industry must forge a new partnership to establish standards that exploit security features already present in existing computer architectures. While there will never be a 100 percent secure computer system, we need to dramatically reduce the attack surface by requiring the use of more trusted firmware and software, like those espoused by the Trusted Computing Group community.

The government and industry should agree on detailed trusted system security standards that fully exploit the existing Trusted Platform Module and microprocessor trusted execution technologies, such as Intel's Trusted Execution Technology, which verifies the integrity of the firmware and creates an unforgeable hash key summary of the hardware and software configuration and of everything loaded into memory.
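For readers unfamiliar with how that hash summary is built, here is a minimal Python sketch of the measured-launch idea behind TPM platform configuration registers: each component is hashed as it loads, and the hash is folded into a running value that can only be extended, never rewound. The component list and the use of SHA-256 are illustrative assumptions, not the TXT specification.

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    """TPM-style extend: new PCR = SHA-256(old PCR || SHA-256(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

pcr = bytes(32)  # registers start at all zeros
for component in (b"firmware image", b"bootloader", b"os kernel", b"configuration blob"):
    pcr = extend(pcr, component)

print("configuration digest:", pcr.hex())
# Changing any component, or the order in which components load, yields a
# different digest -- which is what makes the summary effectively unforgeable.
```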

Within three years, standards should be established to take advantage of hardware-enforced virtualization technology to isolate every process running on the system. Microsoft has started down this path with Windows 10 Device Guard, which uses the hardware virtualization layer to protect its configuration parameters from malware running within the operating system.

Seeking a Long-Term Strategy

The public-private partnership also needs to develop a long-term strategy that promotes increasing trustability within our commercial operating systems, programming languages and network protocols. Much like the trusted computing base initiative of the 1980s, a new partnership should begin with a specific set of trusted system engineering principles, such as those in NIST Special Publication 800-160, and expand to modernize the operating system security principles of the Trusted Computer System Evaluation Criteria, which date back to 1983. These principles should serve as a foundation for requirements covering secure programming languages; secure two-factor identification and authentication; and trusted networking protocol standards.

Establishing standards won't mean anything if the government and critical infrastructure industry members don't use them and drive IT vendors to produce and support products based on them. Mustering the political will to make these changes will be as challenging as establishing the technical security standards themselves.

While Congress justifiably rakes OPM over the coals, lawmakers need to understand that the remedy is not more public scolding of officials, cyber data sharing or even cyberthreat information-sharing legislation. These measures do almost nothing to prevent the next significant penetration of a government agency or industry organization.

With the Internet of Things being built on the very same "untrustable" platforms, Congress must fund a new initiative that requires the government to develop a Manhattan Project-style partnership with industry to achieve the goals of a trusted computing environment. Anything less and we will continue to experience OPM-size hacks while spending more money on Band-Aid solutions that do not address the root cause of the problem.



About the Author

Robert Bigman

Former CISO, Central Intelligence Agency

Robert Bigman, former chief information security officer at the Central Intelligence Agency, is chief executive officer of the IT security consultancy 2BSecure LLC.



