
7 Things to Consider Before Fuzzing a Large Open Source Project


One of the best practices for secure development is dynamic analysis. Among the various dynamic analysis techniques, fuzzing has been highly popular since its invention and a multitude of fuzzing tools of varying sophistication have been developed. It can be enormously fun to take the latest fuzzing tool and see just how many ways you can crash your favorite application. But what if you are a developer of a large project which does not lend itself to being fuzzed easily? How should you approach dynamic analysis in this case?

1. Decide your goals

First, decide whether you are looking only for security issues or for all types of correctness issues. Fuzzing finds a lot of low-severity issues which may never be encountered in normal use. These may look exactly like security vulnerabilities, with the only difference being that no trust boundary is crossed. For example, if you fuzz a tool that only ever expects input to come from the output of a trusted tool, you may find lots of crashes which will never be encountered in normal usage. Are there other ways to get corrupt input into the tool? If so, you have found a security vulnerability; if not, you have found a low-priority correctness issue which may never get fixed. Will the project be willing to address all of the issues found, or only the security issues? You can save yourself a lot of time and frustration by setting expectations for dynamic analysis up front.

2. Understand your trust boundaries

Understand (and document) where the error checking should occur

It is easy for security boffins like me to create a mental model of strong security whereby every function defensively checks every input. Sadly for us, the real world is more complex than that. This type of hypervigilance is hugely wasteful and therefore never survives in production. We have to work a little harder to establish a correct mental model of the security boundaries for the project. It is necessary to understand where in the program's control flow the checks should be made and where they can be omitted.

3. Segment your project based on interface

Different fuzzers have different specialities. Segment your project into buckets appropriate for the different types of fuzzers based on the interface – file, network, API.

4. Explore existing tools

New fuzzing tools are being developed all the time and old tools are getting new capabilities. Take a fresh look at some of the most popular tools and see whether or not they can help with even a subset of your project. David Birdwell recently added network fuzzing to a derivative of American Fuzzy Lop which is worth checking out. Hanno Böck has written useful tutorials on how to use some common fuzzing tools at The Fuzzing Project.
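To make concrete what tools like American Fuzzy Lop do at their simplest level, here is a minimal, illustrative sketch of mutation-based fuzzing in Python. It is not a substitute for AFL's coverage-guided approach; the `mutate`, `fuzz`, and target names are all hypothetical, invented for this example.

```python
import random

def mutate(data: bytes, n_flips: int = 8) -> bytes:
    """Return a copy of data with a few randomly chosen bytes altered,
    mimicking the simplest mutation strategy a fuzzer applies to a seed input."""
    buf = bytearray(data)
    for _ in range(n_flips):
        pos = random.randrange(len(buf))
        buf[pos] ^= random.randrange(1, 256)  # XOR with a nonzero value so the byte changes
    return bytes(buf)

def fuzz(target, seed: bytes, iterations: int = 1000):
    """Feed mutated inputs to `target` and collect every input that crashed it."""
    crashes = []
    for _ in range(iterations):
        sample = mutate(seed)
        try:
            target(sample)
        except Exception:
            crashes.append(sample)
    return crashes
```

A real fuzzer adds the pieces that make this effective in practice: coverage feedback to steer mutations, a corpus of interesting inputs, and crash deduplication.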

5. Write your own tools

When faced with the question of how to perform dynamic analysis on a large mixed language project which did not lend itself to existing tools, I turned to David A. Wheeler to see how he would approach this problem. Dr. Wheeler recommends, “I’d consider writing a fuzzer specific to the project’s APIs & generate random inputs based on them, and adding lots of assertions that are at least enabled during fuzzing. If you know your API (or can introspect it), creating a specific fuzzer is pretty easy – grab your random number generator, set up an isolated container or VM for the fireworks, and go.”
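A minimal sketch of that advice in Python might look like the following: introspect a function's signature, generate random arguments for each parameter, and rely on the assertions inside the code under test to flag contract violations. Everything here (the `GENERATORS` table, `fuzz_api`, and the toy `clamp` function) is a hypothetical illustration, not an existing library.

```python
import inspect
import random

# Hypothetical per-type input generators; a real project would extend these
# to cover its own argument types.
GENERATORS = {
    int: lambda: random.randint(-2**31, 2**31 - 1),
    str: lambda: "".join(chr(random.randrange(32, 127))
                         for _ in range(random.randrange(64))),
    bytes: lambda: bytes(random.randrange(256)
                         for _ in range(random.randrange(64))),
}

def fuzz_api(func, iterations: int = 1000):
    """Introspect func's type annotations, call it with random arguments,
    and record any exception as a potential bug to triage."""
    sig = inspect.signature(func)
    failures = []
    for _ in range(iterations):
        args = {name: GENERATORS[param.annotation]()
                for name, param in sig.parameters.items()}
        try:
            func(**args)
        except Exception as exc:
            failures.append((args, exc))
    return failures

def clamp(value: int, limit: int) -> int:
    """Toy API under test: the assertion enforces the documented contract,
    and is the kind of check Dr. Wheeler suggests enabling during fuzzing."""
    result = min(value, limit)
    assert result <= limit, "clamp() must never exceed its limit"
    return result
```

As the quote suggests, run something like this inside an isolated container or VM, since random inputs can trigger destructive behavior in code that touches the filesystem or network.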

6. Is fuzzing really worth it?

A common critique of fuzzing tools is that after you run them for a while they stop finding bugs. This is a good thing! Just as you wouldn’t throw out your automated test suite because it finds so few regressions, you shouldn’t use this rationale to avoid fuzzing your project. If your fuzzing tools are no longer finding bugs, congratulations! It’s time to celebrate. And now, on to finding the more difficult bugs.

7. Sounds like a lot of work

Do you really expect me to do all that? Just give me the name of a good tool to run. (American Fuzzy Lop) You don’t have to do all of this, at least not at once. If you find a tool that works with your project to cover even a subset of it, then you can just start running it. You will wind up figuring out whether the project developers (or you) are willing to fix low priority issues, and where the project’s trust boundaries are, along the way. You may find a crash, generate a patch, and submit it to the project only to find that it gets rejected because the incorrect input generated by the fuzzer can never reach that part of the project, so adding your check is too expensive. Whichever approach you take, do the people following in your footsteps a favor and write it down. Sure, it will get outdated, but it makes for a fun read and helps people coming along behind you to stand on your shoulders.

One final reminder, if you are fuzzing someone else’s project and you have any suspicion that you have found a security vulnerability, remember to use the project’s security vulnerability reporting process!

Emily Ratliff is senior director of infrastructure security for the Core Infrastructure Initiative at The Linux Foundation. Ratliff is a Linux, system and cloud security expert with more than 20 years’ experience. Most recently she worked as a security engineer for AMD and logged nearly 15 years at IBM.

MEM 5.0 Aims to Simplify OpenStack Management

Midokura has released Midokura Enterprise MidoNet (MEM) 5.0, a network virtualization product designed for Infrastructure as a Service (IaaS) clouds. MEM 5.0 builds on Midokura’s open source, highly scalable, network virtualization system — MidoNet — to support network virtualization deployments with enhanced tools for OpenStack operators.

According to the announcement, “MEM 5.0 offers an intelligent, software-based network abstraction layer between the hosts and the physical network, allowing operators to build isolated networks in software overlaying pre-existing, hardware-based network infrastructure.”

This type of network function virtualization (NFV), which lets companies move network services traditionally carried out by proprietary, dedicated hardware into the cloud, has seen huge growth over the past few years. A fivefold increase in revenue, to $11.6 billion, is projected for the NFV and Software-Defined Networking (SDN) market by 2019.

Pino de Candia, CTO of Midokura, explains the technology explosion this way: “The value proposition for SDN was initially for disaggregation of the silicon-based networking equipment and decoupling it from the network operating systems. This strategy was certainly attractive to web-scale companies like Facebook, Google, and Amazon, who need the flexibility and cost efficiency of commodity “white box” switches. Those companies essentially want to replicate the formula from Open Compute to Open Networking.”  

However, he says that a more compelling use case for SDN/NFV in the enterprise is security. “Virtualizing network nodes and implementing fine-grain security at the virtual machine and container level is driving the use case for NFV-enabled security.” (For more details, see Pino de Candia’s recent Huffington Post article, “Why the Cloud Has a Security Problem.”)

Open source also plays a role in this infrastructure change. De Candia says, “Foundational technologies like OpenStack and Docker are becoming the standard for building infrastructure clouds and platform as a service (PaaS), respectively.” MidoNet itself is built on open source elements including Apache Zookeeper, Cassandra, and ELK stack. “Building on open source helps our developers bring innovations far more quickly into MidoNet,” he says.

MEM 5.0 specifically offers more security and simplicity, along with more support for Docker. Additionally, OpenStack users benefit from advanced data visualizations that provide up-to-date details about the state of the virtual network. “Operational tools are generally geared towards configurations, monitoring in OpenStack, but they offer no visibility into encapsulated traffic…. MEM 5.0 builds upon our popular technology to meet this need, making OpenStack far simpler to manage, operate, and also troubleshoot,” de Candia says.

MEM 5.0 emphasizes the simplification of OpenStack management, which is often viewed as complex. A recent report commissioned by openSUSE, for example, cited difficulties associated with deploying OpenStack as a factor in slowing OpenStack adoption.

De Candia, however, says, “From our vantage point, getting started and running OpenStack can be done with a lean and agile team. Gone are the days when you needed a dedicated systems admin for Linux and Windows, another certified network admin or storage admin. A single cross-functional operations team with Linux competency is all you need. Companies are choosing OpenStack for consistency of operations, centralized policies, and management — IT silos have been shown to be inefficient and costly to manage.”

 

Learn more:

Check out these new courses from the Linux Foundation to improve your OpenStack skills:

Essentials of OpenStack Administration is a classroom-based training course aimed primarily at those who are deploying applications and infrastructure on OpenStack.

OpenStack Administration Fundamentals is a self-paced course that provides everything you need to know to administer public and private clouds with OpenStack.

Canonical Releases Major Linux Kernel Updates for All Supported Ubuntu OSes

Canonical, the company behind the world’s most popular free operating system, Ubuntu Linux, has published multiple Ubuntu Security Notices to inform users about major kernel updates for all of its supported Ubuntu OSes.

For all systems, the update addresses a use-after-free vulnerability in the Linux kernel’s AF_UNIX implementation, which could have allowed a local attacker to expose sensitive information or crash the host system in a denial-of-service (DoS) attack via crafted epoll_ctl calls. It also fixes a security flaw in the Linux kernel’s Kernel-based Virtual Machine (KVM) implementation that could lead to DoS attacks.

Installing Laravel on Ubuntu for Apache

Laravel is a very popular open source PHP framework aimed at easy development of applications. If you are looking for a new PHP framework, Laravel is worth a try. The following guide will allow you to run Laravel on an Ubuntu 15.10 based Apache server.

Read more at HowtoForge

Google Doubles Cloud Compute Local SSD Capacity: Now It’s 3TB per VM

Google Cloud Compute Engine customers running big databases can now attach up to 3TB of high IOPS local solid-state drive (SSD) to a single virtual machine.

The new capacity, which Google has launched in beta, doubles the previous limit of four 375GB local SSD partitions per machine to eight partitions, amounting to a total of 3TB compared with the previous 1.5TB limit. Local SSDs are physically attached to the host server and offer higher performance and lower latency storage than Google’s cheaper persistent disk storage.
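The arithmetic behind those totals is straightforward, and a quick check confirms the figures in the announcement line up:

```python
# Each Compute Engine local SSD partition is 375 GB; the previous limit was
# four partitions per VM, and the beta raises that to eight.
PARTITION_GB = 375

old_total_gb = 4 * PARTITION_GB  # previous per-VM maximum
new_total_gb = 8 * PARTITION_GB  # new per-VM maximum in beta

assert old_total_gb == 1500      # 1.5 TB, the old limit
assert new_total_gb == 3000      # 3 TB, the new limit
assert new_total_gb == 2 * old_total_gb  # capacity exactly doubles
```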

Read more at ZDNet News

SDN in Enterprise Moving Past the Hype, Survey Finds

More businesses are using or planning to adopt SDN, and they have a broad array of vendor options to choose from, a Quinstreet Enterprise survey says. 

Software-defined networking and other network virtualization technologies continue to move beyond the hype phase as more end users launch deployments and more vendors roll out more offerings, according to a new survey looking at the space. Thirty-nine percent of survey respondents said they are either currently using software-defined networking (SDN) in their environments or will deploy the technology within the next 12 months,…

Read more at eWeek

Using IPv6 with Linux? You’ve Likely Been Visited by Shodan and Other Scanners

One of the benefits of the next-generation Internet protocol known as IPv6 is the enhanced privacy it offers over its IPv4 predecessor. With a staggering 2^128 (or about 3.4×10^38) theoretical addresses available, its IP pool is immune to the types of systematic scans that criminal hackers and researchers routinely perform to locate vulnerable devices and networks with IPv4 addresses. What’s more, IPv6 addresses can contain regularly changing, partially randomized extensions. Together, the IPv6 features cloak devices in a quasi anonymity that’s not possible with IPv4….
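The scale of that address pool is easy to verify: IPv6 addresses are 128 bits wide, so the theoretical pool is 2^128. A quick back-of-the-envelope calculation also shows why exhaustive scanning, routine for IPv4's 32-bit space, is hopeless for IPv6 (the probe rate below is an illustrative assumption, not a figure from the article):

```python
# IPv4 addresses are 32 bits, IPv6 addresses are 128 bits.
ipv4_pool = 2 ** 32
ipv6_pool = 2 ** 128

assert ipv6_pool == 340282366920938463463374607431768211456
assert 3.4e38 < ipv6_pool < 3.41e38  # roughly 3.4 x 10**38, as stated

# Assume a scanner sending one million probes per second (hypothetical rate):
probes_per_second = 1_000_000
ipv4_scan_seconds = ipv4_pool // probes_per_second   # about 72 minutes
ipv6_scan_years = ipv6_pool // probes_per_second // (365 * 24 * 3600)
# ipv6_scan_years is on the order of 10**25 — utterly infeasible.
```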

Shodan—the vulnerability search engine that indexes Internet-connected devices—has been quietly contributing NTP services for months to the cluster of volunteer time servers known as the NTP Pool Project.

Read more at Ars Technica

Illumos Continues To Let OpenSolaris Live On

It’s been more than five years since the launch of Illumos as the concerted, community-based effort around the OpenSolaris code-base. This truly-open Solaris stack continues to be at the heart of OpenIndiana, SmartOS, Dyson, and other operating systems.

Daniel McDonald, one of those involved with Illumos via his work on OmniOS, presented this past weekend in Brussels at FOSDEM 2016 about the state of Illumos.

Read more at Phoronix

IEEE Anti-Malware Support Service Goes Live

Through the collaborative effort of major players in the computer security industry, organizations now have two new tools for better malware detection.  As readers know all too well, the computer security industry today faces many challenges in detecting malware and effectively vetting the supply chain on behalf of providers and consumers of anti-malware software….

The IEEE-SA’s AMSS comprises two main services: the Clean File Metadata Exchange (CMX) and the Taggant System…

Read more at Dark Reading

Default Settings in Apache May Decloak Tor Hidden Services

Websites that rely on the Tor anonymity service to cloak their server address may be leaking their geographic location and other sensitive information thanks to a setting that’s turned on by default in many releases of Apache, the world’s most widely used Web server.

The information leak has long been known to careful administrators who take the time to read Tor documentation, but that hasn’t prevented some Tor hidden services from falling victim to it. 

Read more at Ars Technica