FBI's paid-for iPhone hack should be barred, say ex-govt officials

Cybersecurity bods argue for formalizing zero-day disclosure rules

The FBI's purchase of a hack to get into the San Bernardino shooter's iPhone should not have been allowed.

That's according to a new paper from two former US government cybersecurity officials, Ari Schwartz and Rob Knake.

In their paper [PDF] they dig into the current vulnerability equities process (VEP), disclosed in 2014, which the US government uses to decide whether to disclose critical security holes. They argue that it needs to be formalized.

While the VEP provides a useful guide, it is only an informal practice. On a topic as important as software vulnerabilities, the authors argue, the government should have a formal policy, and one subject to public review and comment.

Although the question of whether to disclose a security hole is complex, it is not so complex as to preclude a clear set of rules, say Knake and Schwartz. They don't agree with Bruce Schneier's argument that all zero-day holes should be disclosed immediately regardless of their potential value; instead they highlight a possible case where disclosure would result in the loss of valuable intelligence in an ongoing investigation.

Nope

That does not include the FBI's $1.2m purchase of a hack, however. One of the paper's recommendations is that government agencies be "prohibited from entering into non-disclosure agreements with vulnerability researchers and resellers" – which is what the FBI did in buying access to the San Bernardino shooter's phone from an unnamed third party and then claiming it cannot disclose how it did so.

The recommendation goes on: "When the government purchases a zero day vulnerability or a tool to exploit such a vulnerability, the seller should be legally obligated to foreswear reselling the vulnerability or tool to a third party. The government must have exclusive rights to the vulnerability or tool. If it does not obtain these rights, including the right to disclose the vulnerability, it runs the risk that it could be sold or shared with other actors working against the national security interest of the United States."

In terms of vulnerabilities that the government already has knowledge of, the paper argues that there need to be "clear limits on and adequate oversight of the decision to retain and use such a vulnerability."

In other words, the system needs to move from one where the FBI and NSA can decide whether to keep knowledge of critical security holes to themselves for future use, to one where they have to periodically and formally justify keeping it under wraps.

"If a law enforcement agency has an ongoing investigation on a suspect and the only information is coming through communications legally intercepted through a previously unknown vulnerability, the balance may very well be for the agency to keep the vulnerability, at least until the end of the investigation," the paper argues.

The current VEP "functions as intended," they argue, but they point out that without being formalized, the process could be thrown out by a future administration. The paper also recognizes that some steps within the VEP would likely need to remain classified, but says the "high-level criteria" used to inform decisions "should be subject to public debate and scrutiny."

The paper also argues that broad data on use of the VEP could be published without giving away too much detail. It calls for aggregate figures on the number of vulnerabilities disclosed (as opposed to retained) and the length of time that vulnerabilities are kept before disclosure.

Responsibility for deciding how and when vulnerabilities should be disclosed should also be moved from the NSA to the Department of Homeland Security, the authors recommend. ®


Biting the hand that feeds IT © 1998–2017