
Automotive Grade Linux Looks Forward to Daring Dab and Electric Eel in 2017

After working for seven years at Tier 1 automotive suppliers that were members of the GENIVI project, Walt Miner, the Community Manager for the Linux Foundation’s Automotive Grade Linux (AGL) project, understands the challenges of herding the car industry toward a common, open source computing standard. At the recent Embedded Linux Conference, Miner provided an AGL update and summarized AGL’s Yocto Project based Unified Code Base (UCB) for automotive infotainment, including the recent UCB 3.0 “Charming Chinook” release.

Recent membership wins for the project include Suzuki and Daimler AG (Mercedes-Benz). And, at the end of April, AGL announced six more new members, bringing the total to 96: ARCCORE, BayLibre, IoT.bzh, Nexius, SELTECH, and Voicebox.

In addition to Suzuki and Daimler AG, other automotive manufacturer members include Ford, Honda, Jaguar Land Rover, Mazda, Mitsubishi Motors, Nissan, Subaru, and Toyota. Joining AGL doesn’t necessarily mean these companies will release cars with UCB-compliant in-vehicle infotainment (IVI) systems. However, Miner says at least one UCB-enabled model is expected to hit the streets in 2018.

“Our goal is to build a single platform for the entire automotive industry that benefits Tier 1s, OEMs, and service providers so everyone has a strong base to start writing applications,” said Miner. “We want to reduce fragmentation both in open source and proprietary automotive solutions.”

Miner said that AGL has several advantages over the GENIVI Alliance spec, parts of which have been rolled into UCB along with a much larger chunk of Tizen’s automotive code. Miner previously worked for two Tier 1s, but despite being GENIVI members, “they never collaborated” with other Tier 1s, he said.

“By contrast, at AGL, we have Tier 1s collaborating in real time on the same software. We have had hackathons and integration sessions where we had 35 to 40 people from 20 to 25 companies working on the same code. In 2016, we had a total of 1,795 commits just on the master branch from 45 committers and 24 companies.”

AGL is a “code first” organization, said Miner. Instead of writing specs and hoping vendors stick to them, AGL has developed an actual Linux distribution that can bring Tier 1s and auto manufacturers “70 to 80 percent toward developing a product that ends up in a vehicle,” he added.

By comparison, “GENIVI provided function catalogs that were supposed to be common across the industry, but the catalogs were incomplete, so all the manufacturers went off and specified their own proprietary extensions,” said Miner. “We found we were constantly reimplementing these ‘standard’ function catalogs, and we could not reuse them going from manufacturer to manufacturer.”

Miner went on to describe the development cadence for the AGL project, which follows its Yocto Project base by about nine months. He also discussed new features in UCB 3.0 Charming Chinook, including application packaging and widget installation, as well as a switch to systemd for application control. There’s a new template for application framework service binder APIs, as well as an SDK for app developers. Reference apps are available for home screen, media player, settings, AM/FM, and HVAC.

Official reference platforms now include the Renesas R-Car 2 Porter board, Minnowboard Turbot, Intel Joule, TI Jacinto 6 Vayu board, and QEMU. There are also emerging community BSP “best effort” projects from third parties, including the Raspberry Pi 2/3, NXP i.MX6 SABRE board, and a DragonBoard.

Miner played a video of AGL director Dan Cauchy demonstrating UCB 3.0 at January’s CES show. The demo revealed new functionality such as displaying navigation turn-by-turn instructions on the instrument cluster for reduced distraction, as well as multimedia playing over the MOST ring using “the first open source MOST device driver in history,” according to Cauchy.

Finally, Miner described some of the activities in AGL’s six expert groups: application framework and security, connectivity, UI and graphics, CI and automated test (CIAT), navigation, and virtualization. He also surveyed some new features coming out in the Yocto 2.2 Daring Dab release in July. These include secure signaling and notifications, smart device link, and application framework improvements such as service binders for navigation, speech, browser, and CAN.

In December, AGL hopes to release Electric Eel, a release that will add back ends for AGL reference apps working in both Qt 5 and HTML5. Other planned improvements include APIs available as application framework service binders, IC and telematics profiles, more complete documentation, and an expanded binder API capability for RTOS interoperability.

Future UCB versions will move beyond the IVI screen and instrument cluster. “AGL is the only organization planning to address all the software in the vehicle, including HUD, telematics/connected car, ADAS, functional safety, and autonomous driving,” said Miner.

As AGL moves into telematics, there are complications due to the need to interface with legacy, often proprietary technologies. “The vehicle signal architecture we’re working on will abstract the CAN or MOST layers in a secure manner so applications don’t need to know anything about the native CAN,” said Miner. “Microchip has been working on native CAN drivers for AGL, but the messaging and vehicle topology are proprietary, so we’ve asked OEMs to provide typical and worst-case network topologies in terms of things like message rates. We can then build a simulator based on that topology.”

More on these future directions should be on tap at the Automotive Linux Summit held May 31 to June 2 in Tokyo.

You can watch the full video below:

https://www.youtube.com/watch?v=Ub8bNo9yM_4&list=PLbzoR-pLrL6pSlkQDW7RpnNLuxPq6WVUR

Connect with the Linux community at Open Source Summit North America on September 11-13. Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the all-access attendee registration price. Register now to save over $300!

Building Linux Firewalls With Good Old Iptables: Part 1

Of course, we still need firewalls on our computers, even though that is not a subject we hear much about in these here modern times. There are Linux and BSD firewalls, prefab firewalls on commercial hardware from small to large (most likely based on an open source firewall), and a multitude of GUI helpers. In this two-part series, we will learn how to run iptables from the command line, and then how to set up a firewall for an individual PC and for a LAN.

Pointy-Clicky Meh

I don’t think those commercial products with their own special interfaces, or those GUI helpers, really help all that much, because you still need knowledge beyond pointy-clicky. You need to know at least the basics of TCP/IP, and then iptables will make sense to you. I will show you how to configure your firewall by bypassing the fripperies and using plain old unadorned iptables.

Iptables is part of netfilter, and I am still, after all these years, fuzzy on exactly what netfilter and iptables are. The netfilter.org site says: “netfilter.org is home to the software of the packet filtering framework inside the Linux 2.4.x and later kernel series. Software commonly associated with netfilter.org is iptables.” It’s enough for me to know that iptables is native to the Linux kernel, so you always have it. Also, it’s strong and stable, so once you learn it, your knowledge will always be valid.

Iptables supports both IPv4 and IPv6. It inspects the headers of all IP packets passing through your system and routes them according to the rules you have defined. It may forward them to another computer, drop them, or alter them and send them on their way. It does not inspect payloads, only headers. Packets traverse tables and chains, and there are three built-in tables: filter, NAT, and mangle. Chains are lists of the rules you have defined, and the action a rule applies to matching packets is called its target. These are easier to understand in action, which we shall get to presently.

Iptables tracks state, which makes it more efficient and more secure. You can think of it as remembering which packets are already permitted on an existing connection, so it uses ephemeral ports rather than requiring great gobs of permanent holes in your firewall to allow for all the different IP protocols. Of course, it doesn’t really remember, but rather reads packet headers to determine which packets belong in a particular sequence.
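You can peek at the kernel's state table yourself. This is a sketch, assuming the conntrack-tools package is installed; both commands require root:

```shell
# List the kernel's connection-tracking table (requires root and conntrack-tools)
conntrack -L

# On systems without conntrack-tools, similar information is exposed here:
cat /proc/net/nf_conntrack
```

Each entry shows the protocol, state (such as ESTABLISHED), and the address/port pairs of a tracked connection.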

A brief digression: the current Linux kernel is well into the 4.x series, yet the netfilter documentation still references 2.x kernels. Note that ipchains and ipfwadm — the ancestors of iptables — have been obsolete for years, so we only need to talk about iptables.

Distro Defaults Bye

Your first task is to find out if your Linux distribution starts a firewall by default, how to turn it on and off, and whether it uses iptables or something else. Most likely it’s iptables. Conflicting rules are less fun than they sound, so copy and save any existing configurations you want to keep, disable it, and start over.
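A quick sketch of that cleanup, assuming an iptables-based distro (the backup file names are illustrative, and everything here requires root):

```shell
# Save a copy of the current rules before wiping them
iptables-save > /root/iptables-backup.rules
ip6tables-save > /root/ip6tables-backup.rules

# Flush all rules, delete user-defined chains, and reset policies to ACCEPT --
# this effectively turns the firewall off so you can start over
iptables -F
iptables -X
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
```

If you change your mind, restore the saved configuration with `iptables-restore < /root/iptables-backup.rules`.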

Command Line, Hear Us Roar

In part 1, we’ll run some basic rules, and learn a bit about how iptables works. When you run these rules from the command line they are not persistent and do not survive reboots, so you can safely test all manner of mad combinations without hurting anything.

Check your iptables version:

$ iptables --version
iptables v1.6.0

Take a few minutes to read man iptables. It is a helpful document, and you will be happy you studied it. It provides an excellent overview of the structure of iptables and its features, including Mandatory Access Control (MAC) networking rules, which are used by SELinux, what the built-in tables do, how routing operates, and the commands for doing stuff and finding stuff.

Let’s list all active rules. This example shows there are no active rules, and a blanket ACCEPT policy, so iptables, in effect, is turned off:

iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source    destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source    destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source    destination    

The filter table is the default, and the most used. This example blocks all incoming (INPUT) packets that originate from a particular network. You could leave out -t filter, but it’s a good practice to make everything explicit. These examples follow the syntax iptables [-t table] {-A|-C|-D} chain rule-specification:

# iptables -t filter -A INPUT -s 192.0.2.0/24 -j DROP

This example drops all packets from an IPv6 network:

# ip6tables -t filter -A INPUT -s 2001:db8::/32 -j DROP

These example networks are officially set aside for examples in documentation; see RFC5737 and RFC3849.

Now you can see your new rules:

# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
DROP       all  --  192.0.2.0/24         anywhere
[...]
       
# ip6tables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
DROP       all      2001:db8::/32        anywhere
[...]

Remove these rules with the -D switch:

# iptables -t filter -D INPUT -s 192.0.2.0/24 -j DROP
# ip6tables -t filter -D INPUT -s 2001:db8::/32 -j DROP
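If you are not sure whether a rule is still present, the -C switch from the syntax summary above checks for an exact match without changing anything; it exits 0 when the rule exists (a sketch, run as root):

```shell
# Check whether the rule is present; the exit status tells you
iptables -C INPUT -s 192.0.2.0/24 -j DROP \
  && echo "rule present" \
  || echo "rule absent"
```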

Define Policy

Trying to write individual rules for all contingencies is for people who have nothing else to do, so iptables supports policies for the built-in chains. These are the most commonly used policies:

# iptables -P INPUT DROP
# iptables -P FORWARD DROP
# iptables -P OUTPUT ACCEPT

Run iptables -L again to compare. This applies the principle of “deny all, allow only as needed.” A policy is the fallback for a built-in chain: it is applied only when no rule matches. With these policies, all incoming and forwarded packets are dropped, and all outgoing packets are allowed. But policies alone are not enough, and you still need a set of rules. You must always allow localhost:

# iptables -A INPUT -i lo -j ACCEPT

You probably want some two-way communication, so this allows return traffic from connections you initiated, such as visiting Web sites and checking email:

# iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

This demonstrates connection tracking, the wonderful feature that lets you write many fewer rules to cover a multitude of situations. Run the same commands with ip6tables to apply them to your IPv6 sessions.
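Spelled out, that same baseline for IPv6 looks like this (a sketch mirroring the IPv4 rules above; requires root):

```shell
# Deny-by-default policies, IPv6 edition
ip6tables -P INPUT DROP
ip6tables -P FORWARD DROP
ip6tables -P OUTPUT ACCEPT

# Always allow localhost
ip6tables -A INPUT -i lo -j ACCEPT

# Allow return traffic on connections you initiated
ip6tables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
```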

Kernel Modules

Iptables relies on a number of kernel modules, which are loaded automatically when you run these commands. You can see them with lsmod:

$ lsmod
Module                  Size  Used by
nf_conntrack_ipv6      20480  1
nf_defrag_ipv6         36864  1 nf_conntrack_ipv6
ip6table_filter        16384  1
ip6_tables             28672  1 ip6table_filter
nf_conntrack_ipv4      16384  1
nf_defrag_ipv4         16384  1 nf_conntrack_ipv4
xt_conntrack           16384  1
nf_conntrack          106496  2 xt_conntrack,nf_conntrack_ipv4
iptable_filter         16384  1
ip_tables              24576  1 iptable_filter
x_tables               36864  4 ip_tables,xt_tcpudp,xt_conntrack,iptable_filter

That’s it for today. Remember to check out man iptables, and come back next week to see two example iptables scripts for lone PCs and for your LAN.

Read Part 2 of Building Linux Firewalls with Iptables


Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Red Hat’s New Products Centered Around Cloud Computing, Containers

Red Hat has made a number of announcements at its user group conference, Red Hat Summit. They include OpenShift.io, which facilitates the creation of software-as-a-service applications; pre-built application runtimes to speed the creation of OpenShift-based workloads; an index to help enterprises build more reliable container-based computing environments; an update to the Red Hat Gluster storage virtualization platform allowing it to be used in an AWS computing environment; and, of course, a Red Hat/Amazon Web Services partnership.

Red Hat summarized the announcements as follows:

  • OpenShift.io. A free, end-to-end, SaaS development environment for cloud-native apps built with popular open source code, built for modern dev teams using the latest technology. Built from technologies including Eclipse Che, OpenShift.io includes collaboration tools for remote teams to analyze and assign work. Code is automatically containerized and easily deployed to OpenShift.

Read more at Virtualization Review

Now that HTTPS Is Almost Everywhere, What About IPv6?

Let’s Encrypt launched April 12, 2016 with the intent to support and encourage sites to enable HTTPS everywhere (sometimes referred to as SSL everywhere even though the web is steadily moving toward TLS as the preferred protocol). As of the end of February 2017, EFF estimates that half the web is now encrypted. Now certainly not all of that is attributable to EFF and Let’s Encrypt. After all, I have data from well before that date that indicates a majority of F5 customers enabled HTTPS on client-facing services, in the 70% range. So clearly folks were supporting HTTPS before Let’s Encrypt launched its efforts, but given the significant number of certificates* it has issued the effort is not without measurable success.

On Sept 11, 2006, ICANN “ratified a global policy for the allocation of IPv6 addresses by the Internet Assigned Numbers Authority (IANA)”. While the standard itself was ratified many years (like a decade) before, without a policy governing the allocation of those addresses it really wasn’t all that significant. But as of 2006 we were serious about moving toward IPv6. After all, the web was growing, mobile was exploding, and available IPv4 addresses were dwindling to nothing.

Read more at F5 

Using fetch() and reduce() to Grab and Format Data from an External API – A Practical Guide

Today we’re going to learn how to get and manipulate data from an external API. We’ll use a practical example from one of my current projects that you will hopefully be able to use as a template when starting something of your own. 

For this exercise, we will look at current job posting data for New York City agencies. New York City is great about publishing all sorts of datasets, but I chose this particular one because it doesn’t require dealing with API keys — the endpoint is a publicly accessible URL.

Here’s a quick roadmap of of our plan. We’ll get the data from New York City’s servers by using JavaScript’s Fetch API, which is a good way to start working with promises. I’ll go over the very bare basics here, but I recommend Mariko Kosaka’s excellent illustrated blog The Promise of a Burger Party for a more thorough (and delicious) primer. 

Read more at Dev.to

TLS/SSL Explained: TLS/SSL Terminology and Basics

In Part 1 of this series we asked, What is TLS/SSL? In this part, we will describe some basic TLS/SSL terminology.

Before diving deeper into TLS, let’s first have a look at the very basics of SSL/TLS. Understanding the following will help you gain a better understanding of the topics discussed and analyzed later on.

Encryption

Encryption is the process by which a human-readable message (plaintext) is converted into an encrypted, non-human-readable format (ciphertext). The main purpose of encryption is to ensure that only an authorized receiver is able to decrypt and read the original message. When unencrypted data is exchanged between two parties over any medium, a third party can intercept and read the communication.
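As a toy illustration of plaintext becoming ciphertext and back (this is not TLS itself; it assumes the openssl command-line tool is installed, and the fixed passphrase stands in for a real key exchange):

```shell
# Encrypt: plaintext in, base64-encoded ciphertext out
ciphertext=$(echo 'meet me at noon' | openssl enc -aes-256-cbc -pbkdf2 -pass pass:secret -base64)
echo "$ciphertext"   # gibberish to anyone without the key

# Decrypt: only a holder of the passphrase recovers the plaintext
echo "$ciphertext" | openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:secret -base64
```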

Read more at DZone

NEXmark: A Benchmarking Framework for Processing Data Streams

ApacheCon North America is only a few weeks away — happening May 16-18 in Miami. This year, it’s particularly exciting because ApacheCon will be a little different in how it’s set up to showcase the wide variety of Apache topics, technologies, and communities.

Apache: Big Data is part of the ApacheCon conference this year. Ismaël Mejía and Etienne Chauchot, of Talend, are giving a joint presentation called NEXmark, which is a unified framework to evaluate Big Data and processing systems with Apache Beam. In this interview, they are sharing some highlights on that talk and other thoughts on these topics, too.

LinuxCon: Who should attend your talk? Who will get the most out of it?

Etienne: Our talk is about NEXmark, which comes from a research paper that tried to evaluate the streaming systems for streaming semantics. This paper was adopted by Google into a suite of jobs, pipelines we’re calling them. It was contributed to the community, but it didn’t integrate well with all the Apache stuff, so we took the job and we improved on it and we’re going to present this story.

Ismaël:  And for the audience question, we will just define the concepts that are specific to Beam, so basic big data knowledge is required.

LinuxCon: Is it only focused on Apache Beam or is it on Big Data in general?

Etienne: In the Big Data world there are two big families: batch and streaming. We will treat both cases because Beam is a unified model for both. Then there are many Apache products involved also.

Apache Beam provides the abstraction to execute the pipelines, or jobs. But we also need different Apache products, or different runners as we call them, so we can run Beam code on Apache Flink, Apache Spark, or Apache Apex. But we also integrate with data stores from Apache, like Cassandra.

Ismaël:  The main goal of this benchmark suite is to reproduce use cases of the advanced semantics of Beam that cover the whole streaming space as well.

LinuxCon: So you are both involved in Apache Beam? How long have you been involved in that?

Etienne: Since December, myself.

Ismaël:   I’ve been involved since June of last year. I’m already a committer, that’s the good news, as of two weeks ago.

LinuxCon: What are the main highlights? You talk about the runner, is there anything specific or new technology or new logic that you are unveiling as part of your talk?

Etienne: The big thing is that there is a new unified solution to evaluate Big Data using both streaming and batch and that’s quite new. Attendees will also learn the concepts of Beam and the API.

Linux.Com: So what’s your overall aim?

Etienne: There is one aim: that people will know that they can take this and use it to evaluate their own infrastructure, too. For example, you might want to use a big data framework from Apache, like Spark, maybe version one or version two. You decide you want to evaluate the differences. So, you can take this suite and play it out. And then you will have some criteria extracted to decide. The second thing that could be of interest is to use the advanced semantics of Beam. Things like timers, and other new stuff. So that would be of interest.

LinuxCon: Is this the first time you’re presenting?

Etienne: I went to Apache: Big Data in Vancouver last year and Seville also. It was a really nice atmosphere. But this is the first time I’m going to present something, so it’s going to be cool.

Ismaël:   This will be the second time I have attended ApacheCon. I’ve already been to the one in Seville, Europe. I’ve noticed that it’s a family atmosphere. That’s why I feel very confident in this kind of environment, and it’s very interesting for me. I mean in addition to the very interesting technical talks. But this is my first time speaking at ApacheCon.

LinuxCon: When is your talk? What date and time is it?

Etienne: It will be on Wednesday, May 17 at 2:30 pm.

Learn first-hand from the largest collection of global Apache communities at ApacheCon 2017 May 16-18 in Miami, Florida. ApacheCon features 120+ sessions including five sub-conferences: Apache: IoT, Apache Traffic Server Control Summit, CloudStack Collaboration Conference, FlexJS Summit and TomcatCon. Secure your spot now! Linux.com readers get $30 off their pass to ApacheCon. Select “attendee” and enter code LINUXRD5. Register now >>  

How Amazon and Red Hat Plan to Bridge Data Centers

“There’s a lot of innovation on AWS. This makes OpenShift more attractive to more developers, but it’s also a storefront for Amazon features and products,” Red Hat CEO Jim Whitehurst told Fortune during an interview at the Red Hat Summit tech conference in Boston. Whitehurst said he started discussing this plan with AWS chief executive Andy Jassy in January.

Red Hat is not alone in trying to woo corporate users with better ties to AWS. Last fall, VMware (VMW, -0.22%) and Amazon (AMZN, -0.69%) said they were working on a way to deploy VMware workloads on AWS, for example.

Read more at Fortune

The Case for Containerizing Middleware

It’s one thing to accept the existence of middleware in a situation where applications are being moved from a “legacy,” client/server, n-tier scheme into a fully distributed systems environment. For a great many applications whose authors have long ago moved on to well-paying jobs, containerizing the middleware upon which they depend may be the only way for them to co-exist with modern applications in a hybrid data center.

It’s why it’s a big deal that Red Hat is extending its JBoss Fuse middleware service for OpenShift. It’s also why Cloud Foundry’s move last December to make its Open Service Broker API an open standard can be viewed as a necessary event for container platforms.

Read more at The New Stack

Scaling Agile and DevOps in the Enterprise

In a recent Continuous Discussions (#c9d9) video podcast, expert panelists discussed scaling Agile and DevOps in the enterprise.

Our expert panel included: Gary Gruver, co-author of “Leading the Transformation, A Practical Approach to Large-Scale Agile Development,” and “Starting and Scaling DevOps in the Enterprise”; Mirco Hering, a passionate Agile and DevOps change agent; Rob Hirschfeld, CEO at RackN; Steve Mayner, Agile coach, mentor and thought leader; Todd Miller, delivery director for Celerity’s Enterprise Technology Solutions; and, our very own Anders Wallgren and Sam Fell.

During the episode, the panelists discussed lessons learned with regards to leadership, teams and the pipeline and patterns that can be applied for scaling Agile and DevOps in the Enterprise.

The full post can be found on the Electric Cloud blog