
Tracking NFV Performance in the Data Center

Network function virtualization (NFV) is clearly on the rise, with an increasing number of production deployments across carriers worldwide. Operators are looking to create nimble, software-led topologies that can deliver services on-demand and reduce operational costs. From a data center performance standpoint, there’s a problem: Traditional IT virtualization approaches that have worked for cloud and enterprise data centers can’t cost-effectively support the I/O-centric and latency-sensitive workloads that carriers require.

NFV, as the name suggests, involves abstracting the underlying hardware from specific network functionalities. Where a stack was once siloed on a proprietary piece of hardware, virtual functions are created in software and can run on x86 servers in a data center. Workloads can be shifted around as needed, and network resources are spun up on demand by whichever workload requests them. This fluid, just-in-time approach to provisioning services has significant upside in the carrier world, where over-provisioned pools of resources have always been the norm, and where hardware-tied infrastructure has historically made “service agility” an oxymoron. But there’s a bugbear ruining this rosy future-view: data center performance concerns.

Read more at SDxCentral

Introducing Minishift – Run OpenShift Locally

We are happy to introduce you to Minishift, providing a better user experience than our original Atomic Developer Bundle (ADB). We have shifted our development effort from ADB to Minishift, both to improve user experience, and to address the issues caused by depending on Vagrant. We’ll explain this more in a later blog post.

Minishift is a CLI tool that helps you run OpenShift locally by running a single-node cluster inside a VM. You can try out OpenShift or develop with it, day-to-day, on your local host.
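If you want a feel for the workflow, a minimal session looks something like the sketch below. The commands come from the Minishift CLI; exact output, VM drivers, and flags vary by host OS and Minishift version, and the developer login shown is the default development user:

$ minishift start          # boot a single-node OpenShift cluster inside a VM
$ eval $(minishift oc-env) # put the bundled oc client on your PATH
$ oc login -u developer    # log in to the local cluster
$ minishift stop           # shut the VM down when you are done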

Read more at Project Atomic

China Hits Milestone in Developing Quantum Computer ‘to Eclipse All Others’

A team of scientists from eastern China has built an early form of quantum computer that they say is faster than one of the first generations of conventional computers, developed in the 1940s.

The researchers at the University of Science and Technology of China at Hefei in Anhui province built the machine as part of efforts to develop and highlight the future use of quantum computers.

The devices make use of the way particles interact at a subatomic level to perform calculations, unlike conventional computers, which use electronic gates, switches, and binary code.

Read more at South China Morning Post

CII Project Advances Linux Kernel Security as Firm Ends Free Patches

There has been some public discussion in the last week regarding the decision by Open Source Security Inc. and the creators of the Grsecurity® patches for the Linux kernel to cease making these patches freely available to users who are not paid subscribers to their service. While we at the Core Infrastructure Initiative (CII) would have preferred them to keep these patches freely available, the decision is absolutely theirs to make.

From the point of view of the CII, we would much rather have security capabilities such as those offered by Grsecurity® in the main upstream kernel rather than available as a patch that needs to be applied by the user. That said, we fully understand that there is a lot of work involved in upstreaming extensive patches such as these and we will not criticise the Grsecurity® team for not doing so. Instead we will continue to support work to make the kernel as secure as possible.

CII exists to support work that improves the security of critical open source components. In a Linux system, a flaw in the kernel can open up the opportunity for security problems in any or all of the components, so it is in some sense the most critical component we have. Unsurprisingly, we have always been keen to support work that will make it more secure, and we plan to do even more going forward.

Over the past few years the CII has been funding the Kernel Self Protection Project, the aim of which is to ensure that the kernel fails safely rather than just running safely. Many of the threads of this project were ported from the GPL-licensed code created by the PaX and Grsecurity® teams while others were inspired by some of their design work. This is exactly the way that open source development can both nurture and spread innovation. Below is a list of some of the kernel security projects that the CII has supported.

One of the larger kernel security projects that the CII has supported was the work performed by Emese Renfy on the plugin infrastructure for gcc. This architecture enables security improvements to be delivered in a modular way, and Emese also worked on the constify, latent_entropy, structleak, and initify plugins:

  • Constify automatically applies const to structures which consist of function pointer members.

  • The Latent Entropy plugin mitigates the problem of the kernel having too little entropy during and after boot for generating crypto keys. This plugin mixes random values into the latent_entropy global variable in functions marked by the __latent_entropy attribute. The value of this global variable is added to the kernel entropy pool to increase the entropy.

  • The Structleak plugin zero-initializes any structures that contain a __user attribute. This can prevent some classes of information exposure. For example, the exposure of siginfo in CVE-2013-2141 would have been blocked by this plugin.

  • Initify extends the kernel mechanism to free up code and data memory that is only used during kernel or module initialization. This plugin will teach the compiler to find more such code and data that can be freed after initialization, thereby reducing memory usage. It also moves string constants used in initialization into their own sections so they can also be freed.
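To give a sense of how these plugins surface to someone building a kernel: those that have been merged upstream appear as Kconfig options, so a builder can switch them on at configuration time. This is only a sketch, and it assumes a kernel tree recent enough to carry the plugin infrastructure; not all of the plugins listed above had been merged upstream at the time of writing:

$ cd linux                                  # your kernel source tree
$ ./scripts/config --enable GCC_PLUGINS     # enable the gcc plugin infrastructure
$ ./scripts/config --enable GCC_PLUGIN_LATENT_ENTROPY
$ ./scripts/config --enable GCC_PLUGIN_STRUCTLEAK
$ make olddefconfig                         # resolve any dependent options
$ make                                      # rebuild with the plugins active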

Another current project that the CII is supporting is the work by David Windsor on HARDENED_ATOMIC and HARDENED_USERCOPY.

HARDENED_ATOMIC is a kernel self-protection mechanism that greatly helps with the prevention of use-after-free bugs. It is based on work done by Kees Cook and the PaX Team. David has been adding new data types for reference counts and statistics so that these do not need to use the main atomic_t type.

The overall hardened usercopy feature is extensive and has many sub-components. The main part David is working on is called slab cache whitelisting. Basically, hardened usercopy adds checks to the Linux kernel to make sure that buffer overflows do not occur whenever data is copied to or from userspace. It does this by verifying the size of the source and destination buffers, the location of these buffers in memory, and other properties.

One of the ways it does this is, by default, to deny copying from kernel slabs unless they are explicitly marked as being allowed to be copied. Slabs are areas of memory that hold frequently used kernel objects. These objects, by virtue of being frequently used, are allocated and freed many times. Rather than calling the kernel allocator each time it needs a new object, the kernel simply takes one from a slab, and rather than freeing these objects, it returns them to the appropriate slab. The work David is doing adds the ability to mark slabs as being “copyable.” This is called “whitelisting” a slab.
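You can check whether the kernel you are running was built with the hardened usercopy checks. This quick sketch assumes a distribution that ships its kernel config under /boot, which most do:

$ grep HARDENED_USERCOPY /boot/config-$(uname -r)

If the feature was enabled at build time, you should see CONFIG_HARDENED_USERCOPY=y in the output.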

We also have two new projects starting, where we are working with a senior member of the kernel security team mentoring a younger developer. The first of these projects is under Julia Lawall, who is based at the Université Pierre-et-Marie-Curie in Paris and who is mentoring Bhumika Goyal, an Indian student who will travel to Paris for the three months of the project. Bhumika will be working on ‘constification’ – systematically ensuring that those values that should not change are defined as constants.

The second project is under Peter Senna Tschudin, who is based in Switzerland and is mentoring Gustavo Silva, from Mexico. Gustavo will be working on the issues found by running the Coverity static analysis tool over the kernel. Running a tool like Coverity over a very large body of code like the Linux kernel produces a very large number of results; many of these are false positives, and many of the others are very similar to each other. To work through the long list more rapidly, Peter and Gustavo intend to use the Semantic Patch Language (SmPL) to write patches that fix whole classes of issues detected by Coverity. The goal is to get the kernel source to a state where the static analysis scan yields very few warnings. Then, when new code is added that causes a warning, it will stand out prominently, making the results of future analysis much more valuable.
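To make the SmPL approach concrete, here is a minimal sketch in the style of such tree-wide cleanups: the classic semantic patch that folds a kmalloc() followed by memset() into a single kzalloc() call. The file name and target directory are illustrative; the SmPL syntax and the spatch invocation follow standard Coccinelle usage:

$ cat > kzalloc.cocci << 'EOF'
@@
expression E1, E2, E3;
@@
- E1 = kmalloc(E2, E3);
- memset(E1, 0, E2);
+ E1 = kzalloc(E2, E3);
EOF
$ spatch --sp-file kzalloc.cocci --dir drivers/ --in-place

One semantic patch like this can rewrite every matching call site in the tree, which is exactly what makes the approach faster than fixing Coverity findings one by one.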

The Kernel Self Protection Project keeps a list of projects that they believe would be beneficial to the security of the kernel. The team has been working through this list and if you are interested in helping to make the Linux kernel more secure then we encourage you to get involved. Sign up to the mailing lists, get involved in the discussions and if you are up for it then write some code. If you have specific security projects that you want to work on and you need some support in order to be able to do so then do get in touch with the CII. Supporting this sort of work is our job and we are standing by for your call!

Automotive Grade Linux Looks Forward to Daring Dab and Electric Eel in 2017

After working for seven years at Tier 1 automotive suppliers that were members of the GENIVI project, Walt Miner, the Community Manager for the Linux Foundation’s Automotive Grade Linux (AGL) project, understands the challenges of herding the car industry toward a common, open source computing standard. At the recent Embedded Linux Conference, Miner provided an AGL update and summarized AGL’s Yocto Project based Unified Code Base (UCB) for automotive infotainment, including the recent UCB 3.0 “Charming Chinook” release.

Recent membership wins for the project include Suzuki and Daimler AG (Mercedes-Benz). And, at the end of April, AGL announced six more new members, bringing the total to 96: ARCCORE, BayLibre, IoT.bzh, Nexius, SELTECH, and Voicebox.

In addition to Suzuki and Daimler AG, other automotive manufacturer members include Ford, Honda, Jaguar Land Rover, Mazda, Mitsubishi Motors, Nissan, Subaru, and Toyota. Joining AGL doesn’t necessarily mean these companies will release cars with UCB-compliant in-vehicle infotainment (IVI) systems. However, Miner says at least one UCB-enabled model is expected to hit the streets in 2018.

“Our goal is to build a single platform for the entire automotive industry that benefits Tier 1s, OEMs, and service providers so everyone has a strong base to start writing applications,” said Miner. “We want to reduce fragmentation both in open source and proprietary automotive solutions.”

Miner said that AGL has several advantages over the GENIVI Alliance spec, parts of which have been rolled into UCB along with a much larger chunk of Tizen’s automotive code. Miner previously worked for two Tier 1s, but despite being GENIVI members, “they never collaborated” with other Tier 1s, he said.

“By contrast, at AGL, we have Tier 1s collaborating in real time on the same software. We have had hackathons and integration sessions where we had 35 to 40 people from 20 to 25 companies working on the same code. In 2016, we had a total of 1,795 commits just on the master branch, from 45 committers and 24 companies.”

AGL is a “code first” organization, said Miner. Instead of writing specs and hoping vendors stick to them, AGL has developed an actual Linux distribution that can bring Tier 1s and auto manufacturers “70 to 80 percent toward developing a product that ends up in a vehicle,” he added.

By comparison, “GENIVI provided function catalogs that were supposed to be common across the industry, but the catalogs were incomplete, so all the manufacturers went off and specified their own proprietary extensions,” said Miner. “We found we were constantly reimplementing these ‘standard’ function catalogs, and we could not reuse them going from manufacturer to manufacturer.”

Miner went on to describe the development cadence for the AGL project, which follows its Yocto Project base by about nine months. He also discussed new features in UCB 3.0 Charming Chinook, including application packaging and widget installation, as well as a switch to systemd for application control. There’s a new template for application framework service binder APIs, as well as an SDK for app developers. Reference apps are available for home screen, media player, settings, AM/FM, and HVAC.

Official reference platforms now include the Renesas R-Car 2 Porter board, Minnowboard Turbot, Intel Joule, TI Jacinto 6 Vayu board, and QEMU. There are also emerging community BSP “best effort” projects from third parties, including the Raspberry Pi 2/3, NXP i.MX6 SABRE board, and a DragonBoard.

Miner played a video of AGL director Dan Cauchy demonstrating UCB 3.0 at January’s CES show. The demo revealed new functionality such as displaying navigation turn-by-turn instructions on the instrument cluster for reduced distraction, as well as multimedia playing over the MOST ring using “the first open source MOST device driver in history,” according to Cauchy.

Finally, Miner described some of the activities in AGL’s six expert groups: application framework and security, connectivity, UI and graphics, CI and automated test (CIAT), navigation, and virtualization. He also surveyed some new features coming in the Yocto 2.2-based Daring Dab release in July. These include secure signaling and notifications, smart device link, and application framework improvements such as service binders for navigation, speech, browser, and CAN.

In December, AGL hopes to release Electric Eel, a release that will add back ends for AGL reference apps working in both Qt 5 and HTML5. Other planned improvements include APIs available as application framework service binders, IC and telematics profiles, more complete documentation, and an expanded binder API capability for RTOS interoperability.

Future UCB versions will move beyond the IVI screen and instrument cluster. “AGL is the only organization planning to address all the software in the vehicle, including HUD, telematics/connected car, ADAS, functional safety, and autonomous driving,” said Miner.

As AGL moves into telematics, there are complications due to the need to interface with legacy, often proprietary technologies. “The vehicle signal architecture we’re working on will abstract the CAN or MOST layers in a secure manner so applications don’t need to know anything about the native CAN,” said Miner. “Microchip has been working on native CAN drivers for AGL, but the messaging and vehicle topology is proprietary, so we’ve asked OEMs to provide typical and worst-case network topologies in terms of things like message rates. We can then build a simulator based on that topology.”

More on these future directions should be on tap at the Automotive Linux Summit held May 31 to June 2 in Tokyo.

You can watch the full video below:

https://www.youtube.com/watch?v=Ub8bNo9yM_4&list=PLbzoR-pLrL6pSlkQDW7RpnNLuxPq6WVUR

Connect with the Linux community at Open Source Summit North America on September 11-13. Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the all-access attendee registration price. Register now to save over $300!

Building Linux Firewalls With Good Old Iptables: Part 1

Of course, we still need firewalls on our computers, even though that is not a subject we hear much about in these modern times. There are Linux and BSD firewalls, prefab firewalls on commercial hardware from small to large (most likely based on an open source firewall), and a multitude of GUI helpers. In this two-part series, we will learn how to run iptables from the command line, and then how to set up a firewall for an individual PC and a firewall for a LAN.

Pointy-Clicky Meh

I don’t think those commercial products with their own special interfaces, or those GUI helpers, really help all that much, because you still need knowledge beyond pointy-clicky. You need to know at least the basics of TCP/IP, and then iptables will make sense to you. I will show you how to configure your firewall by bypassing the fripperies and using plain old unadorned iptables. Iptables is part of netfilter, and I am still, after all these years, fuzzy on exactly what netfilter and iptables are. The netfilter.org site says: “netfilter.org is home to the software of the packet filtering framework inside the Linux 2.4.x and later kernel series. Software commonly associated with netfilter.org is iptables.” It’s enough for me to know that iptables is native to the Linux kernel, so you always have it. Also, it’s strong and stable, so once you learn it, your knowledge will always be valid.

Iptables supports both IPv4 and IPv6. It inspects all IP packet headers passing through your system and routes the packets according to the rules you have defined. It may forward them to another computer, or drop them, or alter them and send them on their way. It does not inspect payload, only headers. Packets must traverse tables and chains, and there are three built-in tables: filter, NAT, and mangle. Chains are lists of the rules you have defined, and the action a rule applies to a matching packet (ACCEPT, DROP, and so on) is called its target. These are easier to understand in action, which we shall get to presently.

Iptables tracks state, which makes it more efficient and more secure. You can think of it as remembering which packets are already permitted on an existing connection, so it uses ephemeral ports rather than requiring great gobs of permanent holes in your firewall to allow for all the different IP protocols. Of course, it doesn’t really remember, but rather reads packet headers to determine which packets belong in a particular sequence.
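If you are curious, you can peek at the connection-tracking table on a running system, provided your kernel exposes it through /proc (older kernels used /proc/net/ip_conntrack instead, and the file only appears once the conntrack modules are loaded):

# cat /proc/net/nf_conntrack

Each line describes one tracked connection, including the protocol, the connection state, and the addresses and ports on each side.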

A brief digression: the current Linux kernel is well into the 4.x series, and the netfilter documentation still references 2.x kernels. Note that ipchains and ipfwadm — the ancestors of iptables — have been obsolete for years, so we only need to talk about iptables.

Distro Defaults Bye

Your first task is to find out whether your Linux distribution starts a firewall by default, how to turn it on and off, and whether it uses iptables or something else. Most likely it’s iptables. Conflicting rules are less fun than they sound, so copy and save any existing configuration you want to keep, disable the firewall, and start over.
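A simple way to copy and save the existing configuration is with iptables-save, which dumps the current rules to stdout in a format that iptables-restore can load back later:

# iptables-save > /root/iptables-backup.rules
# ip6tables-save > /root/ip6tables-backup.rules

Restore either file later with iptables-restore < /root/iptables-backup.rules (or the ip6tables-restore equivalent).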

Command Line, Hear Us Roar

In part 1, we’ll run some basic rules, and learn a bit about how iptables works. When you run these rules from the command line they are not persistent and do not survive reboots, so you can safely test all manner of mad combinations without hurting anything.
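If you want to wipe the slate clean between experiments, flush all rules and delete any empty user-defined chains. Be careful doing this over a remote connection: if your chain policies are set to DROP, flushing the rules can lock you out.

# iptables -F
# iptables -X

Repeat with ip6tables to clear your IPv6 rules as well.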

Check your iptables version:

$ iptables --version
iptables v1.6.0

Take a few minutes to read man iptables. It is a helpful document, and you will be happy you studied it. It provides an excellent overview of the structure of iptables and its features, including Mandatory Access Control (MAC) networking rules, which are used by SELinux, what the built-in tables do, how routing operates, and the commands for doing stuff and finding stuff.

Let’s list all active rules. This example shows there are no active rules, and a blanket ACCEPT policy, so iptables, in effect, is turned off:

# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source    destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source    destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source    destination    

The filter table is the default, and the most used. This example blocks all incoming (INPUT) packets that originate from a particular network. You could leave out -t filter, but it’s a good practice to make everything explicit. These examples follow the syntax iptables [-t table] {-A|-C|-D} chain rule-specification:

# iptables -t filter -A INPUT -s 192.0.2.0/24 -j DROP

This example drops all packets from an IPv6 network:

# ip6tables -t filter -A INPUT -s 2001:db8::/32 -j DROP

These example networks are officially set aside for examples in documentation; see RFC5737 and RFC3849.

Now you can see your new rules:

# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
DROP       all  --  192.0.2.0/24         anywhere
[...]
       
# ip6tables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
DROP       all      2001:db8::/32        anywhere
[...]

Remove these rules with the -D switch:

# iptables -t filter -D INPUT -s 192.0.2.0/24 -j DROP
# ip6tables -t filter -D INPUT -s 2001:db8::/32 -j DROP

Define Policy

Trying to write individual rules for all contingencies is for people who have nothing else to do, so iptables supports policies for the built-in chains. These are the most commonly used policies:

# iptables -P INPUT DROP
# iptables -P FORWARD DROP
# iptables -P OUTPUT ACCEPT

Run iptables -L to compare. This applies the principle of “deny all, allow only as needed.” A chain’s policy is the default action, applied only in the absence of any matching rules: all incoming and forwarded packets are dropped, and all outgoing packets are allowed. But policy alone is not enough, and you still need a set of rules. You must always allow localhost:

# iptables -A INPUT -i lo -j ACCEPT

You probably want some two-way communication, so this allows return traffic from connections you initiated, such as visiting Web sites and checking email:

# iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

This demonstrates connection tracking, the wonderful feature that allows you to write many fewer rules to cover a multitude of situations. Run ip6tables to apply the same rules to your IPv6 sessions.
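For example, the IPv6 equivalents of the policies and rules above look like this; the syntax is identical, only the command changes:

# ip6tables -P INPUT DROP
# ip6tables -P FORWARD DROP
# ip6tables -P OUTPUT ACCEPT
# ip6tables -A INPUT -i lo -j ACCEPT
# ip6tables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT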

Kernel Modules

Iptables relies on a number of kernel modules, which are loaded automatically when you run these commands. You can see them with lsmod:

$ lsmod
Module                  Size  Used by
nf_conntrack_ipv6      20480  1
nf_defrag_ipv6         36864  1 nf_conntrack_ipv6
ip6table_filter        16384  1
ip6_tables             28672  1 ip6table_filter
nf_conntrack_ipv4      16384  1
nf_defrag_ipv4         16384  1 nf_conntrack_ipv4
xt_conntrack           16384  1
nf_conntrack          106496  2 xt_conntrack,nf_conntrack_ipv4
iptable_filter         16384  1
ip_tables              24576  1 iptable_filter
x_tables               36864  4 ip_tables,xt_tcpudp,xt_conntrack,iptable_filter

That’s it for today. Remember to check out man iptables, and come back next week to see two example iptables scripts for lone PCs and for your LAN.

Read Part 2 of Building Linux Firewalls with Iptables


Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Red Hat’s New Products Centered Around Cloud Computing, Containers

Red Hat has made a number of announcements at its user group conference, Red Hat Summit. These included OpenShift.io, which facilitates the creation of software-as-a-service applications; pre-built application runtimes for creating OpenShift-based workloads; an index to help enterprises build more reliable container-based computing environments; an update to the Red Hat Gluster storage virtualization platform that allows it to be used in an AWS computing environment; and, of course, a Red Hat/Amazon Web Services partnership.

Red Hat summarized the announcements as follows:

  • OpenShift.io. A free, end-to-end, SaaS development environment for cloud-native apps built with popular open source code, built for modern dev teams using the latest technology. Built from technologies including Eclipse Che, OpenShift.io includes collaboration tools for remote teams to analyze and assign work. Code is automatically containerized and easily deployed to OpenShift.

Read more at Virtualization Review

Now that HTTPS Is Almost Everywhere, What About IPv6?

Let’s Encrypt launched April 12, 2016 with the intent to support and encourage sites to enable HTTPS everywhere (sometimes referred to as SSL everywhere, even though the web is steadily moving toward TLS as the preferred protocol). As of the end of February 2017, EFF estimates that half the web is now encrypted. Now, certainly not all of that is attributable to EFF and Let’s Encrypt. After all, I have data from well before that date indicating that a majority of F5 customers, in the 70% range, enabled HTTPS on client-facing services. So clearly folks were supporting HTTPS before Let’s Encrypt launched its efforts, but given the significant number of certificates* it has issued, the effort is not without measurable success.

On Sept 11, 2006, ICANN “ratified a global policy for the allocation of IPv6 addresses by the Internet Assigned Numbers Authority (IANA)”. While the standard itself was ratified many years (like a decade) before, without a policy governing the allocation of those addresses it really wasn’t all that significant. But as of 2006 we were serious about moving toward IPv6. After all, the web was growing, mobile was exploding, and available IPv4 addresses were dwindling to nothing.

Read more at F5 

Using fetch() and reduce() to Grab and Format Data from an External API – A Practical Guide

Today we’re going to learn how to get and manipulate data from an external API. We’ll use a practical example from one of my current projects that you will hopefully be able to use as a template when starting something of your own. 

For this exercise, we will look at current job posting data for New York City agencies. New York City is great about publishing all sorts of datasets, but I chose this particular one because it doesn’t require dealing with API keys — the endpoint is a publicly accessible URL.

Here’s a quick roadmap of our plan. We’ll get the data from New York City’s servers by using JavaScript’s Fetch API, which is a good way to start working with promises. I’ll go over the very bare basics here, but I recommend Mariko Kosaka’s excellent illustrated blog The Promise of a Burger Party for a more thorough (and delicious) primer.

Read more at Dev.to

TLS/SSL Explained: TLS/SSL Terminology and Basics

In Part 1 of this series we asked, What is TLS/SSL? In this part of the series, we will describe some TLS/SSL terminology.

Before diving deeper into TLS, let’s first have a look at the very basics of SSL/TLS. Understanding the following will help you gain a better understanding of the topics discussed and analyzed later on.

Encryption

Encryption is the process in which a human-readable message (plaintext) is converted into an encrypted, non-human-readable format (ciphertext). The main purpose of encryption is to ensure that only an authorized receiver will be able to decrypt and read the original message. When unencrypted data is exchanged between two parties, using any medium, a third party can intercept and read the communication.
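You can watch plaintext become ciphertext with the openssl command-line tool that ships with most Linux systems. This sketch shows simple symmetric encryption with a shared passphrase, not the full TLS machinery discussed later in this series, and the file names and passphrase are, of course, just examples:

$ echo 'hello, world' > message.txt
$ openssl enc -aes-256-cbc -salt -in message.txt -out message.enc -pass pass:secret
$ openssl enc -d -aes-256-cbc -in message.enc -pass pass:secret
hello, world

The intermediate message.enc file is the non-human-readable ciphertext; only someone who knows the passphrase can turn it back into the original message.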

Read more at DZone