Dublin native James Joyce famously wrote that “mistakes are the portals of discovery.” LinuxCon 2015 keynote speaker Leigh Honeywell grabbed hold of the same theme here in Dublin, reminding hundreds of open source professionals that “you’re going to make mistakes; you’re going to introduce security bugs.” The goal, said Honeywell, who works as a senior security engineer at Slack Technologies, shouldn’t be the all-out elimination of these mistakes. Instead, security engineers should strive to make different mistakes next time around.
Evoking our collectively painful memories of the Heartbleed vulnerability, Honeywell discussed the need to think through scenarios in advance, without making futile and frustrating attempts to get security plans exactly right. “There are always going to be a zillion different ways to respond,” she said. “The software that many of you work on is unimaginably, unknowably complex. Any large codebase will end up with dark, scary corners.”
What’s more, said Honeywell, the work of defenders is always harder than the work of attackers. While an attacker just needs to find one bug to succeed, security engineers have to find or at least mitigate all of them. “They only have to be right once. We have to be right over and over again.”
If it sounds hard, that’s because it is. “You think Dungeons & Dragons is nerdy,” she quipped. “Come talk to me after this keynote about tabletop incident response drills.”
So, how can we secure an open future? Given the challenges, is it even possible? The first step, says Honeywell, is to remember that complex systems always fail. Referencing psychologist James Reason’s “Swiss Cheese Model of Accident Causation,” Honeywell called the bugs in open source software “the holes in the cheese.” Since they’ll never entirely go away, it’s the job of security engineers to “make the holes slightly smaller, make fewer of them, and make sure they don’t all line up.”
But that doesn’t mean we can’t keep software both open and secure -- we just need to approach security failures systemically. To do this, Honeywell’s suggestions included:
Think like an attacker -- Ask yourself, “If I had control of this input, how would I use it to get in trouble?”
Trust your gut and ask for help -- “If you’ve got bad vibes about a piece of code, say something -- ask for a code review or additional testing,” says Honeywell. “And if you do get shot down for raising fears about the safety of some code, that’s useful information, too.”
Embrace a culture of blamelessness -- Managers should assume that their people want to write secure code, says Honeywell. And they should create a culture where errors can be addressed without fear of punishment or retribution, treating them instead as opportunities for learning and growth.
Be polite -- When Honeywell shared an image of a puffed-up cat flashing its sharp teeth and asked if anyone who worked in open source communities felt like they were trying to pet that cat, hands raised throughout the auditorium. It shouldn’t be that way, said Honeywell. “Polite conversation leads to more secure software.”
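To make the “think like an attacker” suggestion concrete, here is a minimal sketch (my illustration, not an example from the talk) of asking “If I had control of this input, how would I use it to get in trouble?” It contrasts a query that pastes attacker-controlled text into SQL with a parameterized one, using an in-memory SQLite database and hypothetical table and function names:

```python
import sqlite3

# Toy database with a couple of rows to leak.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice'), ('mallory')")

def find_user_unsafe(name):
    # Attacker-controlled `name` is spliced straight into the SQL text.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats `name` as data, never as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

# A hostile input rewrites the unsafe query into "match every row".
hostile = "' OR '1'='1"
print(find_user_unsafe(hostile))  # leaks all users
print(find_user_safe(hostile))    # matches nothing: []
```

The hole never disappears entirely, in Reason’s terms, but parameterization makes it smaller and keeps it from lining up with attacker-controlled input.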
There’s no doubt, concluded Honeywell, that writing secure open software is difficult. One of the primary solutions, however, is actually quite simple. “We’ve got to work together, compassionately and diligently, if we are to have any hope of doing it well.”