
Civil Infrastructure Platform Sets Out to Save Civilization

“The Civil Infrastructure Platform is the most conservative of The Linux Foundation projects,” began Yoshitake Kobayashi at the recent Embedded Linux Conference in Portland. Yet, if any eyelids started fluttering shut in anticipation of an afternoon nap, they quickly opened when he added: “It may also be the most important to the future of civilization.”

The Linux Foundation launched the Civil Infrastructure Platform (CIP) project in April 2016 to develop base layer, open source industrial-grade software for civil infrastructure projects, starting with a 10-year Super Long-Term Support (SLTS) Linux kernel built around the LTS kernel. CIP expects to add other similarly reusable software building blocks that meet the safety and reliability requirements of industrial and civil infrastructure. CIP supports electrical and power grids, water and sewage facilities, oil and gas plants, and rail, shipping and transportation systems, among other applications.  

“Our civilization’s infrastructure already runs on Linux,” said Kobayashi, a CIP contributor and Senior Manager of Open Source Technology at Toshiba’s Software Development and Engineering Center. “Our power plants run on Linux. If they stop working, it’s serious.”

CIP’s Open Source Base Layer (OSBL) may not be disruptive technology, but its aim is to bring disruptive tech more quickly and affordably into projects whose lifespans extend to a half century or more. The goal is to reduce development and testing time so that the latest clean energy equipment, IoT monitors, AI edge computing, and smart city technology can come online more quickly and be updated in a timely manner.

With standardization, open source licensing, and greater reuse of software, CIP plans to reduce duplication of effort and project costs, as well as ease maintenance and improve reliability. “We can provide the stability needed by infrastructure by using Linux,” said Kobayashi.

In many ways, CIP is like The Linux Foundation’s Automotive Grade Linux (AGL) project in that it’s trying to more quickly introduce the latest technologies into a traditional industry with long lead times. In this case, however, the development times and product and maintenance lifespans can last decades.

Kobayashi explained that a power plant has a life cycle of 25-60 years. The technology takes 3-5 years to develop, plus up to four years for customer-specific extensions, 6-8 years of supply time, and 15+ years of hardware maintenance after the last shipment.

“Things change a lot in 60 years, such as IoT, which requires security management and industrial grade devices,” said Kobayashi. Yet, bringing these technologies online is slowed by rampant duplication of effort. “In civil infrastructure, you typically have many companies doing industrial grade development and long-term support even if their business areas are quite similar. There’s a lot of duplication.”

In his talk, Kobayashi gave an overview of CIP’s first two years and shared plans for the future. CIP’s founding members – Codethink, Hitachi, Plat’Home, Siemens, and Toshiba – have since been joined by Renesas, which last October announced that the Linux stack for its Arm-based RZ/G SoCs had been upgraded to use CIP’s 10-year SLTS kernel. In December, CIP was joined by Moxa.

Upstream first, backport later

Unlike AGL, CIP is not developing and maintaining a full Linux distribution. The OSBL is aligned closely with Debian, but it’s also designed to be usable with other Linux distributions.

Kobayashi emphasized that CIP is working closely with the upstream community. “We created a kernel maintenance policy where the most important principle is ‘upstream first.’ All features have to be in the upstream kernel before backporting to the CIP kernel.” Kobayashi added that out-of-tree drivers are unsupported by CIP.

The CIP project initially focused on the SLTS kernel, maintained by Codethink’s Ben Hutchings. New builds have come every 4-6 weeks, adding features such as security patch management.

The most recent build, Linux 4.4.120-cip20 (released Mar. 9 and based on the linux-stable git tree), adds Meltdown and Spectre fixes as well as backported patches, such as support for the Renesas RZ/G SoCs and the Siemens IoT2000 gateway. It also includes Kernel Self Protection Project features, including ASLR for user-space processes, GCC’s undefined behavior sanitizer (UBSAN), and faster page poisoning.

Over the last year, the project has focused on real-time support. The first CIP SLTS real-time kernel was released in early January based on Stable RT Linux with PREEMPT-RT. The problem here is that Real Time Linux is not yet fully upstream. “We need it immediately, so we are trying to help the RTL project by becoming a Gold member,” said Kobayashi.

More recent projects have included the creation of an environment for testing kernels called Board at Desk (B@D), based on KernelCI and LAVA. The current focus is kernel testing, but CIP plans to eventually test the entire OSBL platform.

CIP is also developing a CIP Core implementation with minimal filesystem images for SLTS, designed for creating and testing installable images. The project is currently defining component versions for its CIP Core package, which “is difficult because you have to go upstream,” said Kobayashi. “We decided to use Debian as the primary reference distribution, so CIP Core package components will be selected from Debian packages. We have begun to support the Debian-LTS project at the Platinum level.”

CIP has created a build environment for CIP Core based on Debian’s native-build system. The environment supports the Renesas RZ/G-based iwg20m, which appears to be another name for iWave’s iW-RainboW-G20M-Qseven module. Other targets include the BeagleBone Black, Intel’s Cyclone V FPGA SoC, and QEMUx86.

The main challenge in aligning CIP with Debian is that Debian-LTS support “is only five years but we need 10 years,” said Kobayashi. In addition, while CIP’s build environment supports both native builds and cross builds, Debian itself does not currently support cross-building. However, a Debian-cross (CrossToolchains) project is under development.

Next up: Cybersecurity and Y2038 protections

The CIP SLTS kernel and OSBL platform will have a major release every 2-3 years, so a new release can be expected in 2019. Potential additions include support for the ISA/IEC-62443 cybersecurity standard for industrial automation and control. “We think we can help developers gain certification, but we are not planning to develop procedures or certification schemes,” said Kobayashi.

CIP is also planning workarounds for the Y2038 bug. A Y2K-like computer clock crisis looms in 2038 because systems that store time as a signed 32-bit count of seconds since January 1, 1970 will overflow on January 19, 2038, wrapping the clock back to 1901.
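To make the failure mode concrete, here is a minimal Python sketch (illustrative only, not CIP code) showing where a signed 32-bit clock runs out and how it wraps:

```python
import struct
from datetime import datetime, timezone

# A signed 32-bit time_t counts seconds since the Unix epoch (1970-01-01).
# Its maximum value, 2**31 - 1, is reached on January 19, 2038.
max_time = 2**31 - 1
print(datetime.fromtimestamp(max_time, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00

# One second later the counter overflows: reinterpreting the bits as a
# signed 32-bit integer wraps the clock back to December 1901.
wrapped = struct.unpack("<i", struct.pack("<I", max_time + 1))[0]
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))
# 1901-12-13 20:45:52+00:00
```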

The v2 release will also add some functional safety and software update code, and potentially add support for The Linux Foundation’s EdgeX Foundry IoT edge computing middleware standard. The main issue here, said Kobayashi, is that unlike EdgeX, CIP’s OSBL does not support Java. Debian does, however, so there may be a fix.

Kobayashi concluded by emphasizing that “kernel version alignment is important” to CIP. At the Open Source Summit Japan (June 20-22), CIP is hosting a face-to-face (F2F) meeting with participants from LTS/LTSI, AGL, and Debian.

The slide deck and 47-minute video of “Civil Infrastructure Platform: Industrial Grade Open Source Base-Layer” are now available.

Follow this Minikube Tutorial to Brew Up a Kubernetes Home Lab

This Minikube tutorial enables admins to work with Kubernetes without additional equipment, software, or a significant time investment. Home labs isolate new technology from vital live infrastructure in production environments.

Follow the installation steps here, run kubectl commands in the Kubernetes lab and then access the application workloads within it.

A Minikube Kubernetes cluster, complete with workload containers, is prebuilt and runs inside a single VM on the user’s computer. Minikube runs on Linux, Windows and macOS and can use a variety of hypervisors for its VM.

Minikube kubectl command lines run directly on the home lab computer, and Kubernetes-run applications are accessible there as well.
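To give a flavor of how such a lab can be scripted, here is a minimal sketch using the official Kubernetes Python client (an assumption of this example, not part of the TechTarget tutorial; it presumes `pip install kubernetes` and a cluster already started with `minikube start`):

```python
# Quick smoke test of a local Minikube cluster from the host machine.
from kubernetes import client, config

# Minikube writes its credentials into ~/.kube/config, which the
# default loader picks up without extra configuration.
config.load_kube_config()

v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces().items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```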

Read more at TechTarget

Hyperledger Sawtooth: A Milestone for Blockchain Technology

Blockchain technology — which encompasses smart contracts and distributed ledgers — can be used to record promises, trades, and transactions of many types. Countless organizations, ranging from IBM to Wells Fargo and the London Stock Exchange Group, are partnering to drive the technology forward, and The Linux Foundation’s Hyperledger Project is an open source collaborative effort aimed at advancing cross-industry blockchain technologies. Recently, the project announced the arrival of Hyperledger Sawtooth 1.0, a major milestone for the Hyperledger community and the second blockchain framework to reach production-ready status.

In conjunction with the release, Brian Behlendorf, Executive Director, Hyperledger, and Dan Middleton, Intel’s Head of Technology, Blockchain and Distributed Ledger Program, hosted a webinar, titled “Hyperledger Sawtooth v1.0: Market Significance & Technical Overview.” The webinar is now available as a video replay (registration required).

Read more at The Linux Foundation

Through the Looking Glass: Security and the SRE

Even as modern software becomes increasingly distributed, rapidly iterative, and predominantly stateless, today’s approach to security remains largely preventative and anchored to point-in-time state. It lacks the rapid, iterative feedback loops that have made modern product delivery successful. The same feedback loops should exist between changes in product environments and the mechanisms employed to keep them secure. Security measures should be iterative and agile enough to change their behavior as often as the software ecosystem in which they operate.

Security controls are typically designed with a particular state in mind (i.e., production release on Day 0). Meanwhile, the system ecosystem that surrounds these controls changes rapidly every day: microservices, machines, and other components are spinning up and down; component changes occur multiple times a day through continuous delivery; and external APIs are constantly changing on their own delivery schedules. Security tools and methods must be flexible enough to match the constant change and iteration in the environment. Without a security feedback loop, the system will eventually drift into security failure, just as a system without a development feedback loop drifts into unreliable operational readiness.

Read more at OpenSource.com

Juniper’s OpenContrail SDN Rebranded as Tungsten Fabric

Sometimes, rebranding is a good thing. Juniper Networks’ OpenContrail was an excellent open-source software-defined network (SDN) program. But it was perceived as being too much under Juniper’s thumb to draw many outside developers. Realizing this, Juniper spun OpenContrail out into a community-controlled project under The Linux Foundation. That left the name, so Juniper and the community decided to rebrand it: Tungsten Fabric.

Like its direct ancestor, Tungsten Fabric is a scalable, multi-cloud networking platform. It provides a single point of control, observability, and analytics for networking and security. The program is also integrated with many cloud technology stacks, including Kubernetes, Mesos, VMware, and OpenStack.

Read more at ZDNet

How to Monitor Network Protocol Traffic on your Data Center Linux Servers

With Linux in your data centers, you value the ability to monitor different network protocols on your servers. With this gathered information, you can troubleshoot issues or tweak your servers so that they outperform the original specs. Most Linux administrators will recall ntop, which was the de facto standard, text-based tool for monitoring network protocols. That tool has been deprecated in favor of ntopng (ntop next generation). This particular tool takes ntop to the next level by giving it a web-based interface that is far more powerful and easier to use. I’m going to walk you through the process of installing ntopng on the Ubuntu Server 16.04 platform. The process does require you to install via the command line, so be prepared to type a bit, or copy and paste.

Installation

We’ll be installing the stable build, as opposed to installing the outdated version from the standard repository. The steps for this are as follows:

Read more at Tech Republic

12 Kubernetes Distributions Leading the Container Revolution

Kubernetes has become the project to turn to if you need container orchestration at scale. The open source container orchestration system out of Google is well-regarded, well-supported, and evolving fast.

Kubernetes is also sprawling, complex, and difficult to set up and configure. Not only that, but much of the heavy lifting is left to the end user. The best approach, therefore, isn’t to grab the bits and try to go it alone, but to seek out a complete container solution that includes Kubernetes as a supported, maintained component.

Here I’ve listed the 12 most prominent Kubernetes offerings—what amount to distributions that incorporate Kubernetes plus container tools, in the same sense that various vendors offer distributions of the Linux kernel and its userland.

Read more at InfoWorld

4 New Training Courses Help You Keep Pace with Open Source Networking

Open source networking is transforming how enterprises today develop, deploy, and scale their networks and services. To help you keep pace with the evolution taking place in enterprise networking, The Linux Foundation has expanded its training offerings to include four new open networking courses:

  • Introduction to ONAP (LFS163x)

  • NFV Acceleration: An Introduction to OPNFV (LFS164x)

  • ONAP Fundamentals (LFS263)

  • OPNFV Fundamentals (LFS264)

These courses, which provide both introductory and advanced knowledge of ONAP and OPNFV technologies, are open for immediate enrollment.

In the Introduction to ONAP course (LFS163x), taught by Amar Kapadia, you will learn how the ONAP platform uses SDN and NFV to orchestrate and automate physical and virtual network services. In this free course, Kapadia, the author of Understanding OPNFV, offers a high-level look at the ONAP project and provides a guide for participating in and benefiting from the ONAP community.

The free NFV Acceleration: An Introduction to OPNFV (LFS164x) course offers an introduction to the OPNFV project. It describes the various challenges that OPNFV solves and provides an overview of related projects and industry use cases.

ONAP Fundamentals (LFS263) provides basic hands-on knowledge of the ONAP project. The course includes lab exercises to run on the Google Cloud Platform to help you achieve a deeper understanding of ONAP’s functional areas.

OPNFV Fundamentals (LFS264) introduces students to the basics of OPNFV. Starting with an overview of NFV and OPNFV technology, the course looks at challenges that OPNFV solves and then discusses integrating and testing OPNFV projects. The course also includes deployment and testing exercises to run on the Google Cloud Platform.

Pre-enrollment discount

The LFS163x and LFS164x courses are now available for free on edX.org. LFS263 and LFS264 are open for pre-enrollment through The Linux Foundation, with the courses becoming fully available in May. Purchases for those courses made during the pre-enrollment period reflect a discounted fee of $99 each ($199 standard).

Protecting Code Integrity with PGP — Part 7: Protecting Online Accounts

So far in this tutorial series, we’ve provided practical guidelines for using PGP, including basic concepts and steps for generating and protecting your keys.  If you missed the previous articles, you can catch up below. In this final article, we offer additional guidance for protecting your online accounts, which is of paramount importance today.

Part 1: Basic Concepts and Tools

Part 2: Generating Your Master Key

Part 3: Generating PGP Subkeys

Part 4: Moving Your Master Key to Offline Storage

Part 5: Moving Subkeys to a Hardware Device

Part 6: Using PGP with Git

Checklist

  • Get a U2F-capable device (ESSENTIAL)

  • Enable 2-factor authentication for your online accounts (ESSENTIAL)

    • GitHub/GitLab

    • Google

    • Social media

  • Use U2F as the primary mechanism, with TOTP as a fallback (ESSENTIAL)

Considerations

You may have noticed how a lot of your online developer identity is tied to your email address. If someone can gain access to your mailbox, they would be able to do a lot of damage to you personally, and to your reputation as a free software developer. Protecting your email accounts is just as important as protecting your PGP keys.

Two-factor authentication with Fido U2F

Two-factor authentication is a mechanism to improve account security by requiring a physical token in addition to a username and password. The goal is to make sure that even if someone steals your password (via keylogging, shoulder surfing, or other means), they still wouldn’t be able to gain access to your account without having in their possession a specific physical device (“something you have” factor).

The most widely known mechanisms for 2-factor authentication are:

  • SMS-based verification

  • Time-based One-Time Passwords (TOTP) via a smartphone app, such as the “Google Authenticator” or similar solutions

  • Hardware tokens supporting Fido U2F

SMS-based verification is easiest to configure, but it has two important downsides: it is useless in areas without signal (e.g., most building basements), and it can be defeated if the attacker is able to intercept or divert SMS messages, for example by cloning your SIM card.

TOTP-based multi-factor authentication offers more protection than SMS, but has important scaling downsides (there are only so many tokens you can add to your smartphone app before finding the correct one becomes unwieldy). Plus, there’s no avoiding the fact that your secret key ends up stored on the smartphone itself — which is a complex, globally connected device that may or may not have been receiving timely security patches from the manufacturer.
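To see why the shared secret has to live on the phone, consider this minimal Python sketch of the RFC 6238 TOTP algorithm (the base32 secret is a made-up example; anyone who holds it can generate identical codes):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32.upper())
    # The moving factor is the number of 30-second steps since the epoch.
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, "sha1").digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset given by
    # the low nibble of the last byte, then keep the low 31 bits.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Hypothetical shared secret; real ones are provisioned via QR code.
print(totp("JBSWY3DPEHPK3PXP"))
```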

Most importantly, neither TOTP nor SMS methods protect you from phishing attacks — if the phisher is able to steal both your account password and the 2-factor token, they can replay them on the legitimate site and gain access to your account.

Fido U2F is a standard developed specifically to provide a mechanism for 2-factor authentication and to combat credential phishing. The U2F protocol will store each site’s unique key on the USB token and will prevent you from accidentally giving the attacker both your password and your one-time token if you try to use it on anything other than the legitimate website.

Both Chrome and Firefox support U2F 2-factor authentication, and hopefully other browsers will soon follow.

Get a token capable of Fido U2F

There are many options available for hardware tokens with Fido U2F support, but if you’re already ordering a smartcard-capable physical device, then your best option is a Yubikey 4, which supports both.

Enable 2-factor authentication on your online accounts

You definitely want to enable this option on the email provider you are using (especially if it is Google, which has excellent support for U2F). Other sites where this functionality should be enabled are:

  • GitHub: it probably occurred to you when you uploaded your PGP public key that if anyone else is able to gain access to your account, they can replace your key with their own. If you publish code on GitHub, you should take care of your account security by protecting it with U2F-backed authentication.

  • GitLab: for the same reasons as above.

  • Google: if you have a Google account, you will be surprised how many sites allow logging in with Google authentication instead of site-specific credentials.

  • Facebook: same as above; many online sites offer the option to authenticate using a Facebook account. You should 2-factor protect your Facebook account even if you do not use it.

  • Other sites, as you deem necessary. See dongleauth.info for inspiration.

Configure TOTP failover, if possible

Many sites will allow you to configure multiple 2-factor mechanisms, and the recommended setup is:

  • U2F token as the primary mechanism

  • TOTP phone app as the secondary mechanism

This way, even if you lose your U2F token, you should be able to regain access to your account. Alternatively, you can enroll multiple U2F tokens (e.g., a second, cheaper token that only does U2F, kept as a backup).

Further reading

By this point you have accomplished the following important tasks:

  1. Created your developer identity and protected it using PGP cryptography.

  2. Configured your environment so your identity is not easily stolen by moving your master key offline and your subkeys to an external hardware device.

  3. Configured your git environment to ensure that anyone using your project is able to verify the integrity of the repository and its entire history.

  4. Secured your online accounts using 2-factor authentication.

You are already in a good place, but you should also read up on the following topics:

  • How to secure your team communication (see the document in this repository). Decisions regarding your project development and governance require just as much careful protection as any committed code, if not more so. Make sure that your team communication is trusted and that the integrity of all decisions is verified.

  • How to secure your workstation (see the document in this repository). Your goal is to minimize risky behaviour that would cause your project code to be contaminated, or your developer identity to be stolen.

  • How to write secure code (see various documentation related to the programming languages and libraries used by your project). Bad, insecure code is still bad, insecure code even if there is a PGP signature on the commit that introduced it.

TLS 1.3 Is Approved: Here’s How It Could Make the Entire Internet Safer

The IETF has finally given the okay to the TLS 1.3 protocol, which will speed up secure connections and make snooping harder for attackers.

  • TLS 1.3 has been approved for use, which will make all secure internet connections faster and safer.
  • The security and speed improvements brought by TLS 1.3 are due to the elimination of unnecessary handshake steps and the forced use of newer encryption methods.

Transport Layer Security (TLS) version 1.3 has been approved by the Internet Engineering Task Force (IETF), making it the new industry standard for secure connections.
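For readers who want to verify what their own connections negotiate, here is a small Python sketch (it assumes Python 3.7+ built against OpenSSL 1.1.1 or later; www.example.org stands in for any TLS 1.3-capable host):

```python
import socket
import ssl

HOST = "www.example.org"  # stand-in for any TLS 1.3-capable server

# Refuse anything older than TLS 1.3, then report what was negotiated.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

with socket.create_connection((HOST, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        print(tls.version())  # "TLSv1.3" on success
        print(tls.cipher())   # e.g. ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)
```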

Read more at Tech Republic