
Top Lessons For Open Source Pros From License Compliance Failures

The following is adapted from Open Source Compliance in the Enterprise by Ibrahim Haddad, PhD.

In the past few years, several cases of non-compliance with open source licenses have made their way to the public eye. Increasingly, the legal disposition towards non-compliance has lessons to teach open source professionals. Here are my four top takeaways, gleaned from the years I’ve worked in open source.

1. Ensure Compliance Prior to Product Shipment/Service Launch

The most important lesson from non-compliance cases is that the companies involved ultimately had to comply with the terms of the license(s) in question, and that the costs of addressing the problem after the fact have categorically exceeded those of basic compliance. It is therefore simply smart to ensure compliance before a product ships or a service launches.

It is important to acknowledge that compliance is not just a legal department exercise. All facets of the company must be involved in ensuring proper compliance and contributing to correct open source consumption and, when necessary, redistribution.

This involvement includes establishing and maintaining consistent compliance policies and procedures as well as ensuring that the licenses of all the software components in use (proprietary, third-party, and open source) can co-exist before shipment or deployment.

To that effect, companies need to implement an end-to-end open source management infrastructure that will allow them to:

• Identify all open source used in products, presented in services, and/or used internally

• Perform architectural reviews to verify if and how open source license obligations are extending to proprietary and third-party software components

• Collect the applicable open source licenses for review by the legal department

• Develop open source use and distribution policies and procedures

• Mitigate risks through architecture design and engineering practices

2. Non-Compliance is Expensive

Most of the public cases related to non-compliance have involved GPL source code. Those disputes reached a settlement agreement that included one or more of these terms:

• Take necessary action to become compliant

• Appoint a Compliance Officer to monitor and ensure compliance

• Notify previous recipients of the product that the product contains open source software and inform them of their rights with respect to that software

• Publish licensing notice on company website

• Provide additional notices in product publications

• Make available the source code including any modifications applied to it (specific to the GPL/LGPL family of licenses)

• Cease binary distribution of the open source software in question until it has released complete corresponding source code or make it available to the specific clients affected by the non-compliance

• In some cases, pay an undisclosed amount of financial consideration to the plaintiffs

Furthermore, the companies whose compliance has been successfully challenged have incurred costs that included:

• Discovery and diligence costs in response to the compliance inquiry, where the company had to investigate the allegations and perform due diligence on the source code in question

• Outside and in-house legal costs

• Damage to brand, reputation, and credibility

In almost all cases, the failure to comply with open source license obligations has also resulted in public embarrassment, negative press, and damaged relations with the open source community.

3. Relationships Matter

For companies using open source software in their commercial products, it is recommended to develop and maintain a good relationship with the members of the open source communities that create and sustain the open source code they consume. The communities of open source projects expect companies to honor the licenses of the open source software they include in their products. Taking steps in this direction, combined with an open and honest relationship, is very valuable.

4. Training is Important

Training is an essential building block in a compliance program, to ensure that employees have a good understanding of the policies governing the use of open source software. All personnel involved with software need to understand the company’s policies and procedures. Companies often provide such education through formal and informal training sessions.

Learn more in the free “Compliance Basics for Developers” course from The Linux Foundation.

Read the other articles in the series:

An Introduction to Open Source Compliance in the Enterprise

Open Compliance in the Enterprise: Why Have an Open Source Compliance Program?

Open Source Compliance in the Enterprise: Benefits and Risks

3 Common Open Source IP Compliance Failures and How to Avoid Them

Don’t Let Serverless Applications Dodge Performance Monitoring

Serverless applications abstract the app from the underlying infrastructure. And that changes the IT team’s approach to application performance monitoring.

Serverless applications aren’t for everyone, as they make monitoring more difficult. While scaling and cost savings may be worth it for some developers, serverless apps come with higher test requirements and different monitoring strategies than traditional applications. The best way to ensure serverless applications function as intended is to have consistent back-end tests. While this may not anticipate every scenario, it is a good way to prevent any sort of regression and guarantee that code is operating within expectations in production.

Read more at TechTarget

Trimming Power on Remote IoT Projects

At last October’s Embedded Linux Conference Europe, Brent Roman, an embedded software engineer at the Monterey Bay Aquarium Research Institute (MBARI), described the two-decade evolution of MBARI’s Linux-controlled Environmental Sample Processor (ESP). Roman’s lessons in reducing power consumption on the remotely deployed, sensor-driven device are applicable to a wide range of remote Internet of Things projects. The take-home lesson: it’s not only about saving power on the device but also about the communications links.

The ESP is designed to quickly identify potential health hazards, such as toxic algae blooms. The microbiological analysis device is anchored offshore and moored at a position between two and 30 meters underwater where algae is typically found. It then performs chemical and genetic assays on water samples. Results are transferred over cable using RS-232 to a 2G modem mounted on a float, which transmits the data back to shore. There are 25 ESP units in existence at oceanographic labs around the world, with four currently deployed.

Over the years, Roman and his team have tried to reduce power consumption to extend the duration between costly site visits. Initially, the batteries lasted only about a month, but over time, this was extended to up to six months.

Before the ESP, the only way to determine the quantity and types of algae in Monterey Bay — and whether they were secreting neurotoxins — was to take samples from a boat and then return to shore for study. The analysis took several days from the point of sampling, by which time the ecosystem, including humans, might already be at risk. The ESP’s wirelessly connected automated sampling system provides a much earlier warning while also enabling more extensive oceanographic research.

Development on the ESP began in 1996, with the initial testing in 2002. The key innovation was the development of a system that coordinates the handling of identically sized filter pucks stored on a carousel. “Some pucks filter raw water, some preserve samples, and others facilitate processing for genetic identification,” said Roman. “We built a robot that acts as a jukebox for shifting these pucks around.”

The data transmitted back to headquarters consists of monochrome images of the samples. “Fiduciary marks allow us to correlate spots with specific algae and other species such as deep water bacteria and human pathogens,” said Roman.

The ESP unit consists of 10 small servo motors to control the jukebox, plus eight rotary valves and 20 solenoid valves. It runs entirely on 360 D Cell batteries. “D Cells are really energy dense, with as much energy per Watt as Li-Ion, but are much safer, cheaper, and more recyclable,” said Roman. But the total energy budget was still 6 kWh.

The original system launched in 2002 ran Linux 2.4 on a Technologic Systems TS-7200 computer-on-module with a 200MHz ARM9 processor, 64MB RAM, and 16MB NOR flash. To extend battery life, Roman first focused on reducing the power load of a separate microcontroller that drove the servo motors.

“When we started building the system in 2002 we discovered that the microcontrollers that were designed for DC servos required buses like CAN and RS-485 with quiescent power draw in the Watt range,” said Roman. “With 10 servos, we would have a 10W quiescent load, which would quickly blow our energy budget. So we developed our own microcontroller using a multi-master I2C bus that we got down to 70mW per every two motors, or less than half a Watt for motor control.”

Even then, the ESP’s total 3W idle power draw limited the device to 70 days between recharge instead of their initial 180-day goal. At first, 70 days was plenty. “We were lucky if we lasted a few weeks before something jammed or leaked and we had to go out and repair it,” said Roman. “But after a few years it became more reliable, and we needed longer battery life.”

The core problem was that the device used up considerable power to keep the system partially awake. “We have to wait for the algae to come to the ESP, which means waiting until the sensors tell us we should take a sample,” said Roman. “Also, if the scientists spot a possible algae bloom in a satellite photo, they may want to radio the device to fire off a sample. The waiting game was killing us.”

In 2014, MBARI updated the system to run Linux 2.6 on a lower power PC/104 carrier board they designed in house. The board integrated a more efficient Embedded Artists LPC3141 module with a 270MHz ARM9 CPU, 64MB RAM, and 256MB NAND.

The ESP design remained the same. An I2C serial bus links to the servo controllers, which are turned off during idle. Three RS-232 links connect to the sensors, and also communicate with the float’s 500mW cell modem. “RS-232 uses very little power and you can run it beyond recommended limits at up to 20 meters,” said Roman.

In 2014, when they mounted the more power-efficient LPC3141-based carrier board as a drop-in replacement, the computer’s idle draw was reduced from almost 2.5W to 0.25W. Overall, the ESP system dropped from 3W to 1W idle power, which extended battery life to 205 days, or almost seven months.
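
The arithmetic behind those duration figures is straightforward; a rough sketch against the roughly 6 kWh battery budget mentioned earlier (an upper bound, since active sampling also draws power, which is why actual deployments come in somewhat lower):

```javascript
// Idle-only deployment duration: battery budget (Wh) divided by idle draw (W),
// converted from hours to days. Active sampling costs extra, so real
// deployments last less than this estimate.
function idleDays(budgetWh, idleWatts) {
  return budgetWh / idleWatts / 24;
}

console.log(Math.round(idleDays(6000, 3))); // ~83 days at the original 3 W idle
console.log(Math.round(idleDays(6000, 1))); // 250 days at the improved 1 W idle
```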

The ESP enters rougher water

Monterey Bay is sufficiently sheltered to permit mooring the ESP about 10 meters below the surface. In more exposed ocean locations, however, the device needs to sit deeper to avoid “being pummeled by the waves,” said Roman.

MBARI has collaborated with other oceanographic research institutions to modify the device accordingly. Woods Hole Oceanographic Institution (WHOI), for example, began deploying ESPs off the coast of Maine. “WHOI needed a larger mooring about 25 meters below the surface,” said Roman. “The problem was that the algae were still way up above it, so they used a stretch rubber hose to pump the water down to the ESP.”

The greater distance required a switch from RS-232 to DSL, which boosted idle power draw to more than 8W. “Even when we retrofitted these units with the lower power CPU board, they only dropped from 8W to 6W, or only 60 days duration,” said Roman.

The Scripps Institute in La Jolla, California had a similar problem, as they were launching the ESP in the exposed coastal waters of the Pacific. Scripps similarly opted for a stretch hose, but used more power efficient RS-422 instead of DSL. This traveled farther than RS-232, supporting both the 10-meter stretch hose and the 65-meter link to the float.

RS-422 draws more current than RS-232, however, limiting them to 85 days. Roman considered a plan to suspend the CPU image to RAM. However, since the CPU was already very power efficient, “the energy you’re using to keep the RAM image refreshed is a fairly big part of the total, so we would have only gained 15 days,” said Roman. He also considered suspending to disk, but decided against it due to flash wear issues, and the fact that the Linux 2.6 ARM kernel they used did not support disk hibernation.

Ultimately, tradeoffs in functionality were required. “For Scripps, we made the whole system power on based on time rather than sensor input, so we could shut down the power until it received a radio command,” said Roman.

Due to the need to keep the radio on, even this yielded only enough power for 140 days. Roman dug into the AT command set of the 2G modems and found a deep-sleep standby option that essentially uses the modems as pagers. The solution reduced power from 500 mW to 100 mW for the modem, or 200 mW overall.

The University of Washington came up with an entirely different solution to enable a deeper ESP mooring. “Rather than using an expensive stretch hose, they tried a 40-meter Cat5 cable to the surface, enabling an Ethernet connection that was more than 100 times faster than RS-232,” said Roman. This setup required a computer at both ends, however, as well as a cellular router that added 2-3 Watts.

Roman then came up with the idea to run USB signals over Cat5, avoiding the need for additional computers and routers while still enabling high-bandwidth communications. For this deployment, he used an Icron 1850 Cat5 USB extender, which he says works reliably at over 50 meters. The extender adds 400mW, plus another 150mW for the hub on the ESP.

Roman also described future plans to add energy harvesting to recharge the batteries. So far, putting a solar panel on the float seems to be the best solution due to the ease of maintenance. The downside to a solar panel is that the wind can more easily tip over the float. A larger float might help.

In summarizing all these projects, Roman concluded that reducing power consumption was a more complex problem than they had imagined. “We worried a lot about active consumption, but should have spent more time on passive, which we finally addressed,” said Roman. “But the real lesson was how important it was to look at the communications power consumption.”

For additional details, watch the full video below:

Embedded Linux Conference + OpenIoT Summit North America will be held on February 21-23, 2017 in Portland, Oregon. Check out over 130 sessions on the Linux kernel, embedded development & systems, and the latest on the open Internet of Things.

Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the attendee registration price. Register now>>

Tracking Environmental Data with TPS and Keen.IO

About the Application

  • Keen.IO is a cloud platform that provides data collection and analysis services via the REST API. Keen offers SDKs for many programming languages, including browser-side JavaScript and Node.js.

  • In this tutorial we used Tibbit #30 (ambient humidity and temperature meter) for collecting environmental data and storing it on the Keen.IO server.

  • A very simple in-browser application fetches data from Keen.IO and displays charts in the browser window (the charts are retrieved live from Keen.IO; they are not static).

  • Data collection and representation are separated into device.js and server.js applications.

What you need

Hardware

Proposed Tibbit configuration

Onboard Software

  • Node.js V6.x.x (pre-installed during production)

External Services

  • Keen.IO (you should create an account)

GitHub Repository

Name: keenio-tutorial

Repository page: https://github.com/tibbotech/keenio-tutorial

Clone URL: https://github.com/tibbotech/keenio-tutorial.git

Updated At: Fri Oct 14 2016

Setting Up External Services

  • Create a Keen.IO account (free for up to 50000 events per month);
  • Add your project;
  • Click on the title of the project to open an overview tab:

keenio-screen-create-project.png

  • Receive your projectID and API keys (you need read and write keys):

Store the keys securely, especially the master one

keenio-screen-get-keys.png

Node.js Application

Configuration and Installation

git clone https://github.com/tibbotech/keenio-tutorial.git
cd keenio-tutorial
npm install .
  • Launch the app:
node device

Comments in the code explain how it works

// requires Keen.IO client module
const Keen = require('keen.io');

// requires Tibbo's humidity/temperature meter module and initializes it on the I2C line of socket S5
const humTempMeter = require('@tibbo-tps/tibbit-30').init("S5");

// Binds the client to your account
const client = Keen.configure({
    projectId: "57066...........fe6a1279",
    writeKey: "0d2b95d4aa686e8274aa40604109d59c5..............4501378b3c193c3286608"
});

// Every minute..
setInterval(function(){
    // ..reads the environmental data from the meter..
    var data = humTempMeter.getData();

    // ..checks out if everything's correct..
    if(data.status === 1){
        var payload = {
            hum: data.humidity,
            temp: data.temperature
        };

        // ..and submits them to your event collection.
        client.addEvent("humtemp",  payload, function(err){
            if(err !== null){
                console.log(err);
            }
        });
    }
},60000);

Web Interface

Installation

  • The web interface application can be installed on your PC, a remote server, or executed on the same LTPS device (as a separate process)
  • Install the application (skip if running on the same LTPS):
git clone https://github.com/tibbotech/keenio-tutorial.git
cd keenio-tutorial
npm install .
  • Launch:
node server

Comments in the code explain how it works

browser.js

angular
    .module('tutorials',['nvd3'])
    .controller("nodejs-keen",['$scope',function($scope){
        var client = new Keen({
            projectId: "5706............fe6a1279",
            readKey: "29ec96c5e..........746871b0923"
        });

        // Configures NVD3 charting engine
        $scope.options = {
            chart: {
                type: 'lineChart',
                height: 300,
                margin: {
                    top: 20,
                    right: 20,
                    bottom: 40,
                    left: 55
                },
                x: function (d) {
                    return d.x;
                },
                y: function (d) {
                    return d.y;
                },
                useInteractiveGuideline: true,
                xAxis: {
                    axisLabel: 'Time',
                    tickFormat: function (d) {
                        return d3.time.format("%X")(new Date(d));
                    }
                }
            }
        };

        $scope.temperature = [
            {
                values: [],
                key: 'Temperature',
                color: 'red'
            }
        ];

        $scope.humidity = [
            {
                values: [],
                key: 'Humidity',
                color: 'blue'
            }
        ];

        // Defines Keen.IO query
        var query = new Keen.Query("multi_analysis", {
            event_collection: "humtemp",
            timeframe: {
                start : "2016-04-09T00:00:00.000Z",
                end : "2016-04-11T00:00:00.000Z"
            },
            interval: "hourly",
            analyses: {
                temp : {
                    analysis_type : "average",
                    target_property: "temp"
                },
                hum : {
                    analysis_type : "average",
                    target_property: "hum"
                }
            }
        });

        Keen.ready(function(){
            // Executes the query..
            client.run(query, function(err, res){

                // ..transforms the received data to be accepted by NVD3..
                res.result.forEach(function(record){
                    var timestamp = new Date(record.timeframe.end);
                    $scope.temperature[0].values.push({
                        x: timestamp,
                        y: record.value.temp.toFixed(2)
                    });
                    $scope.humidity[0].values.push({
                        x: timestamp,
                        y: record.value.hum.toFixed(2)
                    });
                });

                // ..and does rendering
                $scope.$apply();
            });
        });
    }]);

index.html

<html>
    <head>
        <link href="http://cdnjs.cloudflare.com/ajax/libs/nvd3/1.8.2/nv.d3.min.css" rel="stylesheet" type="text/css">

        <!-- d3 must load before nvd3, and both before the angular-nvd3 directive -->
        <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.4.2/angular.min.js"></script>
        <script src="http://cdnjs.cloudflare.com/ajax/libs/d3/3.5.16/d3.min.js" type="text/javascript"></script>
        <script src="http://cdnjs.cloudflare.com/ajax/libs/nvd3/1.8.2/nv.d3.min.js" type="text/javascript"></script>
        <script src="https://cdnjs.cloudflare.com/ajax/libs/angular-nvd3/1.0.6/angular-nvd3.min.js" type="text/javascript"></script>
        <script src="http://cdnjs.cloudflare.com/ajax/libs/keen-js/3.4.0/keen.min.js" type="text/javascript"></script>

        <script src="browser.js" type="text/javascript"></script>
    </head>

    <!-- ng-app must name the module defined in browser.js ("tutorials") -->
    <body ng-app="tutorials" ng-controller="nodejs-keen">
        <h3>Temperature, C</h3>
        <nvd3 options="options" data="temperature"></nvd3>

        <h3>Humidity, %</h3>
        <nvd3 options="options" data="humidity"></nvd3>
    </body>
</html>


Automation Is the Key for Agile API Documentation

The very idea of documentation often evokes yawns, followed by images of old-school, top-down Waterfall project management. Documentation is often seen as an albatross around the neck of an agile process built on rapid iteration, learning from customers, and pivoting.

Yet, particularly in the API space, the most important thing developers consider is the documentation…

Read more at The New Stack

The First Steps to Analyzing Data in R

Analyzing the right data is crucial to an analytics project’s success. Most of the time, data from transactional systems or from other sources such as surveys, social media, and sensors is not ready to be analyzed directly. Data has to be mixed and matched, massaged, and preprocessed into a form that can be analyzed. Without this, the data being analyzed and reported on becomes meaningless, and small discrepancies can make a significant difference in outcomes that affect an organization’s bottom-line performance.

With R being one of the most preferred tools for Data Science and Machine Learning, we’ll discuss some data management techniques using it.

Read more at DZone

The Growing Ecosystem Around Open Networking Hardware

At Facebook, we believe that hardware and software should be decoupled and evolve independently. We create hardware that can run multiple software options, and we develop software that supports a variety of hardware devices — enabling us to build more efficient, flexible, and scalable infrastructure. By building our data centers with fully open and disaggregated devices, we can upgrade the hardware and software at any time and evolve quickly.

We contribute these hardware designs to the Open Compute Project and the Telecom Infra Project, and we open-source many of our software components. We want to share technologies like Wedge 100, Backpack, Voyager, FBOSS, and OpenBMC, because we know that openness and collaboration help everyone move faster. This ethos also led us to open a lab space for companies to validate their software on open hardware, which can help smaller companies without the resources to develop custom solutions choose the hardware and software that work best for them. We’ve seen a lot of traction over the past five months, and there are now more options at every layer of the stack.

Read more at Code Facebook 

What You Need to Know About the Tech Pay Gap and Job Posts

There’s a long tradition in which employers ask candidates in job posts and applications for salary requirements or for their salary history. The practice is so common, most professionals don’t give the question a second thought. But asking for salary history in a job post can actually perpetuate the wage gap.

That’s why Massachusetts signed a new equal pay act into law that prohibits employers from asking job candidates about salary history in application materials and the interview. The reasoning behind the act is that when compensation is based on past numbers, it only perpetuates past disparities, especially because women typically earn less than men in their first job.

In addition, people tend to make judgments and assumptions about a candidate’s value based on what they earned in a previous position, whether they mean to or not. 

Read more at TechCrunch

Tips and Tricks for Making VM Migration More Secure

A challenge for any cloud installation is the constant tradeoff of availability versus security. In general, the more fluid your cloud system (i.e., making virtualized resources available on demand more quickly and easily), the more your system becomes open to certain cyberattacks. This tradeoff is perhaps most acute during active virtual machine (VM) migration, when a VM is moved from one physical host to another transparently, without disruption of the VM’s operations. Live virtual machine migration is a crucial operation in the day-to-day management of modern cloud environments.

For the most part, VM migration as executed by modern hypervisors, both commercial proprietary and open source, satisfies the security requirements of a typical private cloud. Certain cloud systems, however, may require additional security. For example, consider a system that must provide greater guarantees that the virtual resources and virtual machine operations on a single platform are isolated between different (and possibly competing) organizations. For these more restrictive cloud installations, VM migration becomes a potential weak link in a company’s security profile. What do these advanced security threats look like?

Advanced cyberattacks that are specific to VM migration

  • Spoofing: Mimicking a server to gain unauthorized access. This is essentially a variation of the traditional man-in-the-middle (MITM) attack.

  • Thrashing: A sophisticated denial-of-service (DOS) attack in which the attacker deliberately disrupts the migration process at calculated intervals, so that the migration is continually restarted over and over, consuming extra compute and network resources. Alternatively, on systems where automatic migration is part of an orchestration-level load-balancing strategy, the attacker attempts to force repeated VM migrations to burden system resources.

  • Smash and Grab: Forcing a VM image, either at the source or destination server host, into a bad state for the purpose of disrupting operations or exfiltrating data.

  • Bait and Switch: Creating a deliberate failure at a precise moment in the migration process so that a valid copy of a virtual machine is present on both the source and destination host servers simultaneously. The intent of this attack is to exfiltrate information without detection.

Before addressing a possible remedy to each of these attacks, a few more details regarding VM migration are necessary. First, in this context we are only concerned about active VM migration, or migration that does not interrupt the operation of migrating VMs. Migration of a VM that has been halted or powered down is not considered here. Also, the approaches we describe here are limited to the hypervisor and its associated toolstack. Any security profile must also include the hardware platform and network infrastructure, of course, but for this discussion these facets of the system are out of scope.

Lastly, the type of storage employed by the cloud has a big impact on VM migration. Networked storage, via protocols such as iSCSI, NFS, or FibreChannel, is more complicated to configure and maintain than local server storage, but it simplifies (and speeds up) migration because the VM image itself typically does not need to be copied across a network to a separate physical storage device.

Note, however, that the attacks listed above are not solved by using networked storage. While the risk is somewhat alleviated by reducing exposure of the VM image to the network, and the timing window to exploit the migration process is significantly shortened with shared storage, these attacks remain viable. State data and VM metadata still must be passed over the network between server hosts, and the attacks outlined above rely on the corruption of the migration process itself.

With the ground rules understood, let’s dive into a basic approach to address each of the migration cyberattacks.

How to address VM migration cyberattacks

Spoofing: Man-in-the-middle attacks are well studied, and modern hypervisors should already integrate the proper authentication protocols into their migration processes to prevent this class of attack. The most common variations of Xen, for example, include public key infrastructure support for mutual authentication via certificate authorities or shared keys to guard against MITM attacks. For any new installation, it is worth verifying that proper authentication is available and configured correctly.
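
In Node.js terms (a sketch of the general pattern, not how any particular hypervisor implements it), mutual authentication boils down to both endpoints presenting certificates signed by a CA the cluster trusts and rejecting unverified peers:

```javascript
// Builds TLS options for a mutually authenticated channel: the server demands
// a client certificate and drops peers the cluster CA cannot verify.
function mutualAuthOptions(ca, cert, key) {
  return {
    ca,                        // cluster-internal certificate authority (PEM)
    cert,                      // this host's certificate (PEM)
    key,                       // this host's private key (PEM)
    requestCert: true,         // demand the peer's certificate...
    rejectUnauthorized: true   // ...and refuse connections that fail verification
  };
}

// Usage (with real PEM material):
//   const tls = require('tls');
//   tls.createServer(mutualAuthOptions(caPem, certPem, keyPem), onChannel);
```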

Thrashing: External DOS attacks are usually best addressed outside of the hypervisor, within the network infrastructure. Systems that use orchestration software to automate VM migration for load balancing, or even defensive purposes, should be configured to guard against DOS attacks as well. Automated migration requests should be throttled to prevent network contention and avoid overloading a single host.
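
A throttle of that kind can be as simple as a sliding-window counter. The sketch below is illustrative only (orchestration frameworks usually offer built-in rate limits); it defers any migration request beyond a fixed number per time window:

```javascript
// A minimal sliding-window throttle for automated migration requests.
class MigrationThrottle {
  constructor(maxPerWindow, windowMs) {
    this.maxPerWindow = maxPerWindow;
    this.windowMs = windowMs;
    this.timestamps = [];   // start times of recent migrations
  }

  // Returns true if the migration may proceed now, false if it should be deferred.
  tryAcquire(now = Date.now()) {
    // Drop requests that have aged out of the window, then check capacity.
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.maxPerWindow) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

For example, `new MigrationThrottle(2, 60000)` allows at most two migrations per minute; anything beyond that is deferred rather than allowed to pile up on the network.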

Smash and Grab: This attack attempts to disrupt the migration process at an opportune moment so that the VM state data is corrupted or forced out of sync with the VM image at the source or destination server, rendering the VM either temporarily or permanently disabled. A smash-and-grab attack could behave like a DOS attack over the network, or could be enacted by malware in the hypervisor. In both incarnations, this attack tests how well the migration process recovers from intermittent failures, and how well the migration process can be rolled back to a prior stable state.

The robustness of a migration recovery scheme varies greatly from one hypervisor toolstack to another. Regardless, there are a few steps that can minimize this type of threat:

  • First, before initiating a migration, you should regularly create snapshots of the important VM images, so that you always have a stable image to go back to in case disaster strikes. This is common practice but bears repeating because it is often forgotten.

  • Second, many hypervisor toolstacks support customization of the migration process via scripts. For example, the XAPI toolstack has hooks for pre- and post-migration scripts, where an industrious system designer or admin can insert their own recovery support at both the source and destination hosts. Support can be added to validate that the migration succeeded, for instance, and to bolster recovery procedures if a failure has occurred. These scripts also provide a means of adding special configuration support for specific VMs after a successful migration, such as adjusting permissions or updating runtime parameters.

  • Third, and perhaps most importantly, when a VM migration completes, either successfully or unsuccessfully, it is likely that an old footprint of the migrated image is still hanging around. For successful migrations, a footprint will remain on the source server, while failed migration attempts often result in a residual footprint remaining on the destination server. While the hypervisor toolstack may delete the old VM image file after completion, very few toolstacks actually erase the image from disk storage. The bits representing the VM image are still on disk, and are vulnerable to exfiltration by malware. For systems with high security requirements, it is therefore necessary to extend the toolstack to zeroize (i.e., fill with zeroes) the blocks on disk previously representing an old VM image, or fill the blocks with random values. Depending on the storage device, hardware support may exist for this as well.

Bait and Switch: We can approach the bait-and-switch attack as a variation of the smash-and-grab attack, and the mitigation of this threat is the same. For the bait-and-switch attack to succeed, a residual copy of the aborted VM migration attempt must remain on the destination server. If the VM footprint is immediately scrubbed from the disk, the risk of this attack is also greatly reduced.

In summary, it is important to understand the security implications of active migration. The system’s behavior under specific attack scenarios and failure conditions during the migration process must be taken fully into account when designing and configuring a modern cloud environment. Luckily, if you follow the steps above, you may be able to avoid the security risks associated with VM migration.

John Shackleton is a principal research scientist at Adventium Labs, where he is the technical lead for a series of research and development projects focused on virtualization and system security, in both commercial and government computing environments.

Stay one step ahead of malicious hackers with The Linux Foundation’s Linux Security Fundamentals course. Download the sample chapter today!

Hitachi Increases Investment in Open Source With Linux Foundation Platinum Membership

We are thrilled to announce that Hitachi has become the latest Linux Foundation Platinum member, joining existing Platinum members Cisco, Fujitsu, Huawei, IBM, Intel, Microsoft, NEC, Oracle, Qualcomm and Samsung. Hitachi has been a supporter of The Linux Foundation and Linux since 2000, and was previously a Linux Foundation Gold member. The company decided to upgrade its membership to Platinum in order to further support The Linux Foundation’s work, and open source development in general.

Hitachi is already a member of numerous Linux Foundation projects, such as Automotive Grade Linux, Civil Infrastructure Platform, Cloud Foundry Foundation, Core Infrastructure Initiative, Hyperledger and OpenDaylight. Platinum membership will enable Hitachi to help contribute further to these and other important open source projects.

Linux Foundation Platinum members have demonstrated a sincere dedication to open source by joining at the highest level. As a Platinum member, Hitachi will pay a $500,000 annual membership fee to support The Linux Foundation’s open source projects and initiatives. The company will also now occupy one of 14 seats on the Linux Foundation Board of Directors that are reserved for Platinum members.