When “full disclosure” equals collusion, users are in danger


Author: Joe Barr

Gone are the days when “full disclosure” meant the immediate public release of information about vulnerabilities or exploits uncovered by security researchers. Whatever it means today is the result of a collaboration — some might call it collusion — between the researcher or firm finding the flaw and the vendor or project responsible for the code. Recent patches from Apple illustrate the dangers of this practice when proprietary software is involved.

Last week, Apple announced three security patches for its wireless component across virtually its entire platform line.

The first patch (CVE-2006-3507) is for two stack overflow vulnerabilities in AirPort, Apple's wireless driver. The second patch (CVE-2006-3508) fixes a heap buffer overflow in AirPort. The third patch (CVE-2006-3509) addresses an integer overflow in AirPort code that handles third-party wireless card connections. All are ranked as "high" severity in the National Vulnerability Database.

According to Apple, there are no known exploits for any of these vulnerabilities. Of course, this is the same firm that denied its customers were at risk from wireless vulnerabilities last month.

One bad Apple spoils the barrel

The problem is that Apple’s claims that there are no known exploits are false. Not only have exploits been found, they’ve been demonstrated, explained, and widely publicized. They first surfaced at least as early as June, when presentations were booked to demonstrate them. They are, after all, the heart and soul of the “faux disclosure” controversy surrounding Maynor and Ellch’s presentation at Black Hat and DEFCON last month.

Nowadays, a vendor or project is typically told about a flaw privately and given time to fix it before any public disclosure is made. Set aside whether this arrangement is better than the old practice of immediate publication, and set aside the debate over whom it allegedly benefits, the vendor or its customers. For it to work as designed, both parties must bring a minimum amount of integrity to the table. In theory, users get the optimum level of security when a company keeps their exposure quiet only until a patch is ready to close the hole.

But consider what happens when a vendor deliberately silences one side of the "collaboration" through legal threats, imposing an involuntary "cone of silence" on the researchers, while at the same time issuing public lies about the affair. That is what Ellch hinted strongly had occurred, in an email to a security mailing list on September 3, and what Washington Post writer Brian Krebs reported immediately following the Black Hat presentation. The whole shoddy collaboration falls apart like a house of cards, and users get the worst of both ends of the stick.

In the Washington Post story linked to above where Apple denied the vulnerabilities, Krebs wrote, "Apple today issued a statement strongly refuting claims put forth by researchers at SecureWorks that Apple's Macbook computer contains a wireless-security flaw that could let attackers hijack the machines remotely." The NIST site claims all three patches are for "locally exploitable" vulnerabilities. It may be that Apple is playing on the definition of remote versus local exploit in its claims.

According to Donnie Werner of Zone-H.Org, all three patches are to close the door on remote, not local, exploits. He explained that local exploits usually require the rights of a local user of the machine being attacked, which is definitely not the case with these.

The good news is that if you’re using free/open source software, you’re largely immune not only to the vulnerabilities with a lifespan as long as these, but to the depraved indifference of proprietary firms which value their ad campaigns above the security of their customers. The transparency of open source software makes the denial game impossible and long delays inexcusable.

Category:

  • Security