Mistakes and missteps plague enterprise security. The Verizon 2017 Data Breach Investigations Report (DBIR) offers nuggets on what organizations must stop doing – now.
Datasets from the report show that some security teams may still be operating under false assumptions about what it takes to keep their organizations secure.
For starters, the same security standards don’t apply across all vertical industries, says Suzanne Widup, a senior consultant for the Verizon RISK Team and co-author of the Verizon DBIR.
The automotive electronics industry is now, more than ever, facing cybersecurity, connectivity, and software time-to-market challenges. Recently, in fact, vehicles have been hacked in several ways (e.g., physical and remote unlocking/control) and by different means (e.g., the CAN bus, OBD-II, emulated cellular networks).
Consequently, car makers such as BMW, Fiat Chrysler Automobiles, General Motors, Nissan, and Tesla Motors have scrambled to shut down connected-car services, mail updates to users on USB sticks, deliver software updates remotely, or, in the worst cases, go through vehicle safety recall procedures to fix vulnerabilities.
Michele Paolino
The first reason for such security problems is that cybersecurity, although widely recognized as a concern, has not been a top priority for designers and developers of automotive electronic systems. The second reason, which is more difficult to tackle, is that these systems are increasingly complex and difficult to maintain. Hundreds of sensors, actuators, and Electronic Control Units (ECUs) from different manufacturers, with heterogeneous connectivity requirements, are orchestrated in a distributed way through communication protocols based on broadcast messages, with very weak use of encryption and authentication mechanisms.
To make things worse, future cars are expected to be always connected, producing terabytes of data per day and thus requiring high-bandwidth/low-latency connectivity for both safety-critical functions (Vehicle to Vehicle, Vehicle to Infrastructure, etc.) and infotainment (video/audio streaming, social networks, etc.). Network Functions Virtualization (NFV), the standardization effort that aims to reshape future telecom networks around the concept of virtualization, and 5G, the proposed next wireless communication standard, are multiplying standardization efforts in the direction of hyper-connected cars. However, in this context, interoperability at all levels will be of utmost importance to make things really happen in terms of usability, quality of service, and security.
Another dilemma is how cars with a lifecycle of about 15 years can coexist with connected services whose lifecycles are a fraction of that time. For instance, Spotify, YouTube, Google Maps, and Twitter did not even exist 15 years ago and might not exist in the same form 15 years from now.
Today, automotive software systems need to adapt much more quickly to new requirements from users and manufacturers, as well as from legal authorities. A big difference from the past is that all of this has to happen during the lifecycle of the very same car.
This is not only about infotainment, as shown by Volkswagen’s Dieselgate, for which a huge deployment of software updates was mandated by law.
As a result, to realize smart connected vehicles and to tackle cybersecurity, connectivity, and software time-to-market challenges, the automotive industry needs a hardware and software architecture that guarantees security, simplified systems management, high processing/networking performance, open standards, interoperability, and flexibility.
These requirements fit perfectly with open source virtualization, which is able to provide strong isolation (helping to address cybersecurity requirements), limited overhead (achieving almost-native performance), openness (leveraging open standards, licenses, and code to speed up application time to market and shorten the lifetime of vulnerabilities), and consolidation (contributing to reduced costs and easier maintenance). This is what makes open source virtualization a smart connected vehicle enabler, and for this reason I believe it should be considered in any future automotive solution design.
AGL Virtualization Expert Group
This is why, in late 2016, I proposed to start the design and development of an open source virtualization solution for Automotive Grade Linux (AGL), the most important open source automotive project under The Linux Foundation umbrella, which targets the development of an industry reference software stack based on open technologies.
The proposal resulted in the creation of the AGL Virtualization Expert Group (EG-VIRT), which aims to integrate virtualization into the AGL distribution without targeting a specific technology, instead building an open infrastructure able to support different potential solutions.
With this in mind, a number of ambitious tasks need to be considered, first and foremost the choice of the target hypervisor(s). In fact, different architectures and implementations are available: unikernels (e.g., Rump kernel) are extremely thin but usually run simplified applications built for a specific purpose; containers (e.g., Docker) do not need hardware virtualization extensions but are strongly coupled with the host kernel; partitioning hypervisors (e.g., Jailhouse) benefit from very simple implementations but provide no overcommitment and need modified guests; and Type-1/Type-2 hypervisors (e.g., Xen or KVM) are mature technologies that provide strong isolation and flexibility at the cost of slightly higher overhead.
On top of this, we must also provide an open source solution for GPU virtualization and for hypervisor/OS certification, both needed to have a real impact on a market where existing solutions are mostly based on Type-1 hypervisors (either completely closed or based on open source projects like Xen). Other solutions combine different virtualization technologies; for example, Virtual Open Systems pairs a system partitioner based on ARM TrustZone (VOSYSmonitor) with the Type-2 hypervisor KVM.
However, virtualization also brings new opportunities, one of the most important being (virtual) ECU interconnection. Running multiple ECUs in the same system means new virtual communication mechanisms must be created, and this could be the right chance to redefine both physical and virtual ECU interconnection in a way that offers stronger security and higher bandwidth.
Briefly, although some say future cars will look similar to modern smartphones, I believe the reality is more complex than that: smart connected vehicles are much closer to NFV systems, where a network of virtual ECUs (functions) works together through virtualization-based consolidation.
Conclusion
For the growing AGL/EG-VIRT community, the challenges outlined above are not impossible to address. In fact, there are multiple examples of open source projects that have created innovation in a disruptive way.
In the meantime, EG-VIRT has taken on the challenge and has already started its activity, focusing on the implementation of a proof of concept based on a KVM-enabled AGL distribution on ARM. From this activity, a first set of patches has been published by Virtual Open Systems and will be demonstrated at the Automotive Linux Summit 2017 in Tokyo during my presentation “How to Introduce Virtualization in AGL? Objectives, Plans and Targets for AGL EG-VIRT.” You are all invited to join the event and the online discussion!
The Automotive Linux Summit 2017, held May 31 – June 2 in Tokyo, gathers the most innovative minds from automotive expertise and open source excellence to drive the future of embedded devices in the automotive arena.
Today, the Cloud Native Computing Foundation (CNCF) Technical Oversight Committee (TOC) voted to accept CNI (Container Networking Interface) as the 10th hosted project alongside Kubernetes, Prometheus, OpenTracing, Fluentd, Linkerd, gRPC, CoreDNS, containerd, and rkt.
Container-based applications are rapidly moving into production. Just as Kubernetes allows enterprise developers to run containers en masse across thousands of machines, containers at scale also need to be networked.
The CNI project defines a network interface and was created by multiple companies and projects, including CoreOS, Red Hat OpenShift, Apache Mesos, Cloud Foundry, Kubernetes, Kurma, and rkt. First proposed by CoreOS to define a common interface between network plugins and container execution, CNI is designed to be a minimal specification concerned only with the network connectivity of containers and the removal of allocated resources when a container is deleted.
“The CNCF TOC wanted to tackle the basic primitives of cloud native and formed a working group around cloud native networking,” said Ken Owens, TOC project sponsor and CTO at Cisco. “CNI has become the de facto network interface today and has several interoperable solutions in production. Adopting CNI for the CNCF’s initial network interface specification for connectivity and portability is our primary order of business. With support from CNCF, our work group is in an excellent position to continue our work and look at models, patterns, and policy frameworks.”
“Interfaces really need to be as simple as possible. What CNI offers is a nearly trivial interface against which to develop new plugins. Hopefully this fosters new ideas and new ways of integrating containers and other network technologies,” said Tim Hockin, Principal Software Engineer at Google. “CNCF is a great place to nurture efforts like CNI, but CNI is still young, and it almost certainly needs fine-tuning to be as air-tight as it should be. At this level of the stack, networking is one of those technologies that should be ‘boring’ – it needs to work, and work well, in all environments.”
Used by companies like Ticketmaster, Concur, CDK Global, and BMW, CNI now powers Kubernetes network plugins and has been adopted by the community and many product vendors for this use case. The CNI repo includes a basic example of connecting Docker containers to CNI networks.
“CoreOS created CNI years ago to enable simple container networking interoperability across container solutions and compute environments. Today CNI has a thriving community of third-party networking solutions users can choose from that plug into the Kubernetes container infrastructure,” said Brandon Philips, CTO of CoreOS. “And since CoreOS Tectonic uses pure-upstream Kubernetes in an Enterprise Ready configuration we help customers deploy CNI-based networking solutions that are right for their environment whether on-prem or in the cloud.”
Automated Network Provisioning in Containerized Environments
CNI has three main components:
CNI Specification: defines an API between runtimes and network plugins for container network setup. No more, no less.
Plugins: provide network setup for a variety of use cases and serve as reference examples of plugins conforming to the CNI specification.
Library: provides a Go implementation of the CNI specification that runtimes can use to more easily consume CNI.
The CNI specification and libraries exist so that plugins can be written to configure network interfaces in Linux containers. The plugins support adding container network interfaces to networks and removing them again. The specification is defined by a JSON schema, and the project’s templated code makes it straightforward to create a CNI plugin for an existing container networking project, as well as a good framework for creating a new container networking project from scratch.
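To make this concrete, here is a minimal sketch of a CNI network configuration for the standard bridge plugin with host-local IPAM (the file name, network name, bridge name, and subnet are illustrative assumptions, not details from this announcement). CNI-aware runtimes conventionally read such files from /etc/cni/net.d:

    # Write a minimal CNI network configuration (illustrative values throughout).
    cat > /etc/cni/net.d/10-mynet.conf <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "mynet",
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16"
      }
    }
    EOF

With a configuration like this in place, the runtime invokes the bridge plugin to attach each new container to the cni0 bridge, and the host-local IPAM plugin hands out an address from the declared subnet.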
“As early supporters and contributors to the Kubernetes CNI design and implementation efforts, Red Hat is pleased that the Cloud Native Computing Foundation has decided to add CNI as a hosted project and to help extend CNI adoption. Once again, the power of cross-community, open source collaboration has delivered a specification that can help enable faster container innovation. Red Hat OpenShift Container Platform embraced CNI both to create a CNI plugin for its default OpenShift SDN solution based on Open vSwitch, and to allow for easier replacement by other third party CNI-compatible networking plugins. CNI is now the recommended way to enable networking solutions for OpenShift. Other projects like the Open Virtual Networking (OVN) project have used CNI to integrate more cleanly and quickly with Kubernetes. As CNI gets widely adopted, the integration can automatically extend to other popular frameworks.” — Diane Mueller, Director, Community Development Red Hat OpenShift
“CNI provides a much needed common interface between network layer plugins and container execution,” said Chris Aniszczyk, COO of Cloud Native Computing Foundation. “Many of our members and projects have adopted CNI, including Kubernetes and rkt. CNI works with all the major container networking runtimes.”
As a CNCF hosted project, CNI will be part of a neutral community aligned with technical interests and will receive help in defining an initial guideline for a network interface specification focused on the connectivity and portability of cloud native application patterns. CNCF will also assist with CNI marketing and documentation efforts.
“The CNCF network working group’s first objective of curating and promoting a networking project for adoption was a straightforward task – CNI’s ubiquity across the container ecosystem is unquestioned,” said Lee Calcote, Sr. Director, Technology Strategy at SolarWinds. “The real challenge is addressing the remaining void around higher-level network services. We’re preparing to set forth on this task, and on defining and promoting common cloud-native networking models.” Anyone interested in seeing CNI in action should check out Calcote’s talk on container networking at Velocity on June 21.
To join or learn more about the Kubernetes SIGs and Working Groups, including the Networking SIG, click here. To join the CNCF Networking WG, click here.
The New Stack provides comprehensive coverage of the Kubernetes open source container orchestration engine, and we’re looking to invest in the community even further. In 2016, our survey reported on “The Present State of Container Orchestration,” but a lot has changed in the last year. While Kubernetes’ current mind-share surpasses that of many competitors, the container wars are far from over. The success of Kubernetes depends on the satisfaction of early adopters and documentation of its success across many different use cases.
That’s where you come into the picture! We are surveying people who have already evaluated Kubernetes. We’re looking to better understand how and why those early adopters chose Kubernetes, and how they’re currently using it. The results will form the foundation of an upcoming e-book series on The State of the Kubernetes Ecosystem.
I’ve been a Node fan since 2012, when Kevin Griffin and I shifted our bootstrapped startup to it from ASP.NET. I’m no expert (like the ETA shop is), but I’ve used Node and Docker long enough to learn the happy path for Developers + Operations.
So I made you this with ❤️
This project turns on all the Buttery Goodness of Docker and Docker Compose so your Node app will develop and run best in a container, both for development and for production.
In his 1957 book Parkinson’s Law, and Other Studies in Administration, the naval historian and author C. Northcote Parkinson writes of a fictional committee meeting during which, after a two-and-a-half-minute nondiscussion on whether to build a nuclear reactor worth US $10 million, the members spend 45 minutes discussing the power plant’s bike shed, worth $2,350. From this he coined Parkinson’s Law of Triviality: “Time spent on any item of the agenda will be in inverse proportion to the sum involved.”
Using Parkinson’s example, the programmer Poul-Henning Kamp popularized the term bikeshedding: frequent, detailed discussions on a minor issue conducted while major issues are being ignored or postponed. The functional opposite of bikeshedding is trystorming, which refers to rapidly and repeatedly prototyping or implementing new products and processes. In a bikeshedding culture, ideas get only a short discussion before being put off “for further study.”
One of the many features of Chef is something called a Data Bag. Simply put, a data bag lets you store a blob of JSON-based data on a Chef server that is shared across your Chef environments. If you have organization-level data that must be shared across environments rather than kept unique to each, this is a great, easy system for storing and retrieving it. For this article, my example is the list of our network blocks at DNSimple. We have quite a bit of address space for the amount of hardware we have deployed, and we share this data in various cookbooks so we know which systems are on our network. This comes in handy when we want to put in firewall rules that allow only traffic from within our own networks, for example.
As I mentioned earlier, a data bag is basically a bucket into which you put blobs of JSON data, each known as a data bag item.
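As a hedged sketch of how this looks in practice (the bag name networks, the item name blocks, and the CIDR values below are illustrative, not DNSimple’s actual data), you could create the bag and its item with knife:

    # Describe the shared network blocks as a data bag item;
    # the "id" field must match the item name.
    cat > blocks.json <<'EOF'
    {
      "id": "blocks",
      "cidrs": ["203.0.113.0/24", "198.51.100.0/24"]
    }
    EOF

    # Create the bag on the Chef server and upload the item.
    knife data bag create networks
    knife data bag from file networks blocks.json

In a recipe, calling data_bag_item('networks', 'blocks') then returns the item as a hash, so every cookbook in every environment sees the same list of network blocks.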
Libral provides a uniform management API across system resources and serves as a solid foundation for scripting management tasks and building configuration-management systems.
Linux, in keeping with Unix traditions, doesn’t have a comprehensive systems management API. Instead, management is done through a variety of special-purpose tools and APIs, all with their own conventions and idiosyncrasies. That makes scripting even simple systems-management tasks difficult and brittle.
For example, changing the login shell of the “app” user is done by running usermod -s /sbin/nologin app. This works great until it is attempted on a system that does not have an app user. To fix the ensuing failure, the enterprising script writer might resort to something like the following sketch:
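    # One plausible version of the workaround (a sketch, not the article’s
    # original snippet): create the user if missing, otherwise fix its shell.
    if getent passwd app > /dev/null; then
        usermod -s /sbin/nologin app
    else
        useradd -s /sbin/nologin app
    fi

Every special case like this makes the script longer and more brittle, which is exactly the problem a uniform management API such as Libral’s is meant to solve.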
Less than two weeks ago, the WannaCry ransomware attack compromised thousands of computers, causing considerable losses to big companies and individuals alike. That, along with other widespread vulnerabilities found in recent years (such as the Shellshock bug), highlights the importance of staying on top of your mission-critical systems.
Although vulnerabilities often target one specific operating system or software component, examining the traffic that goes in and out of your network can be a significant help to protect the assets you are responsible for.
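As a first pass, even a simple packet capture can reveal anomalies. Here is a hedged sketch using tcpdump (the interface name eth0 is an assumption; substitute your own):

    # Capture traffic on eth0 without resolving hostnames or ports (-nn),
    # excluding our own SSH session to keep the output readable.
    sudo tcpdump -i eth0 -nn 'not port 22'

Unexpected ports, destinations, or traffic volumes in output like this are often the first visible sign that something on the network deserves a closer look.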
In previous excerpts of the new, self-paced Containers Fundamentals course from The Linux Foundation, we discussed what containers are and are not. Here, we’ll take a brief look at the history of containers, which includes chroot, FreeBSD jails, Solaris zones, and systemd-nspawn.
Chroot was first introduced in 1979, during development of Seventh Edition Unix (also called Version 7), and was added to BSD in 1982. In 2000, FreeBSD extended chroot to FreeBSD Jails. Then, in the early 2000s, Solaris introduced the concept of zones, which virtualized the operating system services.
With chroot, you can change the apparent root directory for the currently running process and its children. After configuring chroot, subsequent commands run with respect to the new root (/). With chroot, we can confine processes only at the filesystem level; they still share resources such as users, the hostname, and the IP address. FreeBSD Jails extended the chroot model by virtualizing users, the network subsystem, and more.
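As a quick hedged illustration (the path /srv/newroot is an assumption, and the directory must already contain a minimal root filesystem with a shell and its libraries):

    # Start a shell whose apparent root directory is /srv/newroot;
    # the shell and its children cannot see files outside that tree.
    sudo chroot /srv/newroot /bin/sh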
systemd-nspawn has not been around as long as chroot and Jails, but it can be used to create containers, which are then managed by systemd. On modern Linux operating systems, systemd is used as the init system to bootstrap user space and subsequently manage all processes.
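A minimal hedged example (the path /srv/mycontainer is illustrative and must hold a bootable OS tree, such as one created with debootstrap):

    # Boot (-b) a container from the OS tree in the directory given by -D;
    # systemd supervises it like any other service.
    sudo systemd-nspawn -b -D /srv/mycontainer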
This training course, presented mainly in video format, is aimed at those who are new to containers and covers the basics of container runtimes, container storage and networking, Dockerfiles, Docker APIs, and more.
You can learn more in the sample course video below, presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook: