
Hyundai Joins AGL and Other Automotive News from CES

This week’s Consumer Electronics Show (CES) in Las Vegas has been even more dominated by automotive news than last year, with scores of announcements of new in-vehicle development platforms, automotive 5G services, self-driving concept cars, automotive cockpit UIs, assisted driving systems, and a host of electric vehicles. We’ve also seen numerous systems that provide Google Assistant- or Alexa-driven in-vehicle interfaces, such as Anker’s Google Assistant-based Roav Bolt.

Here we take a brief look at some of the major development-focused CES automotive announcements to date. The mostly Linux-focused developments range from Hyundai joining the Automotive Grade Linux project to major self-driving and driver-assistance (ADAS) platforms from Baidu, Intel, and Nvidia.

Hyundai jumps on AGL bandwagon

Just prior to the launch of CES, the Linux Foundation’s Automotive Grade Linux (AGL) project announced that South Korean automotive giant Hyundai has joined the group as a Bronze member. The news follows last month’s addition of BearingPoint, BedRock Systems, Big Lake Software, Cognomotiv, and Dellfer to the AGL project. In October, AGL announced seven other new members, including its first Chinese car manufacturer — Sitech Electric Automotive. 

Hyundai’s membership does not commit it to using the group’s Unified Code Base (UCB) reference distribution for automotive in-vehicle infotainment (IVI), but it’s another example of the growing support for the open source, Linux-based IVI stack. Several major carmakers are members, including Honda, Mazda, Mitsubishi Electric, and Suzuki, but so far Toyota is the only AGL carmaker to ship UCB-based IVI systems, which it does in most of its major models, from the Camry to its Lexus luxury cars. In June, AGL announced that Mercedes-Benz Vans was using UCB for upcoming vans, and we can expect more AGL commitments in 2019.

At the Westgate Hotel Pavilion (booth 1614) in Las Vegas this week, AGL is showing off a 2019 Toyota RAV4 equipped with AGL systems, and AGL members are offering demonstrations of AGL-based connected car services, audio innovations, instrument clusters, and security solutions.

Baidu releases open source Apollo 3.5 self-driving software

AGL is not the only automotive project offering an open source solution. For the past year, Chinese search and cloud giant Baidu has been developing its Linux-driven Apollo stack for self-driving cars. At CES, it announced Apollo 3.5, with new support for “complex urban and suburban driving scenarios.” A hardware platform is available with an Intel Core-based Neousys industrial computer equipped with an Nvidia graphics card, among other components, including Baidu’s own sensor fusion unit.

Baidu also announced Apollo Enterprise, a platform built on top of Apollo and designed for autonomous fleet operations. In addition, it revealed OpenEdge, an open source, cloud-enabled edge computing platform with development boards based on NXP and Intel technologies. The Intel-based board is designed for in-car video analytics and incorporates Intel’s Mobileye technology. Details were sketchy, however.

Intel AV

At CES, Intel unveiled an Intel AV compute platform aimed at autonomous cars. It features a pair of Linux-driven Mobileye EyeQ5 sensor processing chips and a new Intel Atom 3xx4 CPU.

The Intel AV system provides 60 percent greater performance at the same 30W power consumption as Nvidia’s automotive-focused Jetson Xavier processor, Intel claims. The Mobileye EyeQ5 processors are each claimed to generate 24 trillion deep learning operations per second (TOPS) at 10W. Volkswagen and Nissan have announced plans to use the earlier EyeQ4 processor when it launches later this year. An EyeQ5 Linux SDK with support for OpenCL, deep learning deployment tools, and Adaptive AUTOSAR will be available later this year, and production will begin in 2020.

The Atom 3xx4 chip, meanwhile, borrows high-end multi-threading and virtualization technologies from Intel’s Xeon processors for running different tasks simultaneously on different systems around the car. 

Nvidia Drive Autopilot

Intel is playing catchup with Nvidia in the autonomous vehicle computer contest. In recent years, Nvidia has increasingly focused on the automotive business, launching one of the first independent self-driving car computers, the Drive PX Pegasus, based on its newly shipping, octa-core, Arm-based Jetson AGX Xavier module. At CES, it followed up with a Xavier-based Nvidia Drive Autopilot system.

Unlike the fully autonomous, “Level 5” Drive PX Pegasus, the Drive Autopilot is designed for Level 2 assisted ADAS systems. Due to ship in vehicles in 2020, the system features a claimed 30 TOPS AI performance and provides “complete surround camera sensor data from outside the vehicle and inside the cabin.”

Drive Autopilot integrates a new Drive IX software stack that can map and memorize typical routes to improve performance in the future. It also provides driverless highway merge, lane change, and lane split capabilities, as well as driver monitoring and AI copilot features. We saw no OS details, but presumably Drive Autopilot runs the Linux4Tegra (L4T) stack used on other Xavier-based systems.

7 Data Trends on our Radar

Whether you’re a business leader or a practitioner, here are key data trends to watch and explore in the months ahead.

Increasing focus on building data culture, organization, and training

In a recent O’Reilly survey, we found that the skills gap remains one of the key challenges holding back the adoption of machine learning. The demand for data skills (“the sexiest job of the 21st century”) hasn’t dissipated. LinkedIn recently found that demand for data scientists in the US is “off the charts,” and our survey indicated that the demand for data scientists and data engineers is strong not just in the US but globally.

With the average shelf life of a skill today at less than five years and the cost to replace an employee estimated at between six and nine months of the position’s salary, there is increasing pressure on tech leaders to retain and upskill rather than replace their employees in order to keep data projects (such as machine learning implementations) on track. We are also seeing more training programs aimed at executives and decision makers, who need to understand how these new ML technologies can impact their current operations and products.

Read more at O’Reilly

Simplifying and Harmonizing Open Source for More Efficient Compliance

Using open source code comes with a responsibility to comply with the terms of that code’s license, which can sometimes be challenging for users and organizations to manage. The goal of the Automated Compliance Tooling (ACT) project is to consolidate investment in, and increase the interoperability and usability of, open source compliance tooling, which helps organizations manage compliance obligations.

Four Parts of ACT:

  • FOSSology: An open source license compliance software system and toolkit allowing users to run license, copyright and export control scans from the command line
  • QMSTR: Also known as Quartermaster, this tool creates an integrated open source toolchain that implements industry best practices of license compliance management. QMSTR integrates into the build systems to learn about the software products, their sources, and dependencies.
  • SPDX Tools: Software Package Data Exchange (SPDX) is an open standard for communicating software bill of materials information, including components, licenses, copyrights, and security references; the SPDX Tools help create and work with SPDX documents.
  • Tern: Tern is an inspection tool to find the metadata of the packages installed in a container image. It provides a deeper understanding of a container’s bill of materials so better decisions can be made about container-based infrastructure, integration, and deployment strategies (see the example command after this list).
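
As a rough illustration of how one of these tools is used, here is a sketch of running Tern against a container image from the command line. The exact flags may differ between Tern releases, so treat this as an assumption and check the project documentation:

  # Generate a bill-of-materials report for a container image
  tern report -i debian:buster

  # Write the report in SPDX tag-value format to a file instead
  tern report -f spdxtagvalue -i debian:buster -o debian-buster.spdx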

“There are numerous open source compliance tooling projects, but the majority are unfunded and have limited scope to build out robust usability or advanced features,” commented Kate Stewart, Senior Director of Strategic Programs at The Linux Foundation. “We have also heard from many organizations that the tools that do exist do not meet their current needs.”

Read more at InfoTech Spotlight

What Is DevSecOps?

DevOps was born from merging the practices of development and operations, removing the silos, aligning the focus, and improving efficiency and performance of both the teams and the product. 

Security is a common silo in many organizations. Security’s core focus is protecting the organization, and sometimes this means creating barriers or policies that slow down the execution of new services or products. The aim is to ensure that everything is well understood and done safely, and that nothing introduces unnecessary risk to the organization.

DevSecOps looks at merging the security discipline within DevOps. By enhancing or building security into the developer and/or operational role, or including a security role within the product engineering team, security naturally finds itself in the product by design.

Getting started with DevSecOps involves shifting security requirements and execution to the earliest possible stage in the development process. It ultimately creates a shift in culture where security becomes everyone’s responsibility, not only the security team’s.

Read more at OpenSource.com

Blockchain: 11 ways to get smarter

Want to keep learning about blockchain? Experts share their favorite blockchain resources, from books to podcasts.

Blockchain – The New Technology of Trust, by Goldman Sachs

Recommended by Nikao Yang, COO at Lucidity

This web resource from financial giant Goldman Sachs offers a slick crash course in blockchain for beginners, including what it is, how it works, and more. Yang’s own company, part of a growing number of startups working on blockchain’s potentially significant applications in the digital advertising industry, also offers a Blockchain 101 primer.

Chain Letter, by MIT Technology Review

Recommended by Marta Piekarska

For regular updates on the blockchain universe, Piekarska of Hyperledger recommends this free biweekly newsletter from MIT Technology Review.

Piekarska from Hyperledger also recommends these two online courses from edX, which runs on the open source Open edX platform. Both courses cost $99 if you want to get a certificate at the end, but if you don’t need that certificate, you can study for free.

Blockchain: Understanding Its Uses and Implications

by The Linux Foundation

A good “101” class for getting up to speed. Course description: “Understand exactly what a blockchain is, its impact and potential for change around the world, and analyze use cases in technology, business, and enterprise products and institutions.”

Blockchain for Business – An Introduction to Hyperledger Technologies

by The Linux Foundation

A practical class for getting started with developing blockchain applications on the open source Hyperledger platform. Course description: “A primer to blockchain and distributed ledger technologies. Learn how to start building blockchain applications with Hyperledger frameworks.”

Read more at Enterprisers Project

Industry-Scale Collaboration at The Linux Foundation

Linux and open source have changed the computer industry (among many others) forever. Today, there are tens of millions of open source projects. A valid question is “Why?” How can it possibly make sense to hire developers who work on code that is given away for free to anyone who cares to take it? I know of many answers to this question, but for the communities that I work in, I’ve come to recognize the following as the common thread.

An Industry Pivot

Software has become the most important component in many industries, and it is needed in very large quantities. When an entire industry needs to make a technology “pivot,” it often does as much of that as possible in software. For example, the telecommunications industry must make such a pivot in order to support 5G, the next generation of mobile phone networks. Not only will bandwidth and throughput increase with 5G, but an entirely new set of services will be enabled, including autonomous cars and billions of Internet-connected sensors and other devices (aka IoT). To do that, telecom operators need to entirely redo their networks, distributing millions of compute and storage instances very, very close to those devices and users.

Read more at The Linux Foundation

Unit Testing in the Linux Kernel

Brendan Higgins recently proposed adding unit tests to the Linux kernel, supplementing other development infrastructure such as perf, autotest, and kselftest. The whole issue of testing is very dear to kernel developers’ hearts, because Linux sits at the core of the system and often has very strong stability and security requirements. Hosts of automated tests regularly churn through kernel source code, reporting any oddities to the mailing list.

Unit tests, Brendan said, specialize in testing standalone code snippets. It was not necessary to run a whole kernel, or even to compile the kernel source tree, in order to perform unit tests. The code to be tested could be completely extracted from the tree and tested independently. Among other benefits, this meant that dozens of unit tests could be performed in less than a second, he explained.

Giving credit where credit was due, Brendan identified JUnit, Python’s unittest.mock, and Googletest/Googlemock for C++ as the inspirations for this new KUnit testing idea.

Brendan also pointed out that since all code being unit-tested is standalone and has no dependencies, this meant the tests also were deterministic. Unlike on a running Linux system, where any number of pieces of the running system might be responsible for a given problem, unit tests would identify problem code with repeatable certainty.

Read more at Linux Journal

DNS (Domain Name Service): A Detailed, High-level Overview

How’s that for a confusing title? In a recent email discussion, a colleague compared the Decentralized Identifier framework to DNS, suggesting they were similar. I cautiously tended to agree but felt I had an overly simplistic understanding of DNS at a protocol level. That email discussion led me to learn more about the deeper details of how DNS actually works, and hence this article.

On the surface, I think most people understand DNS to be a service that you can pass a domain name to and have it resolved to an IP address (in the familiar nnn.ooo.ppp.qqq format).

domain name => nnn.ooo.ppp.qqq

Examples:

  1. If you click on Google DNS Query for microsoft.com, you’ll get a list of IP addresses associated with Microsoft’s corporate domain name microsoft.com.
  2. If you click on Google DNS Query for www.microsoft.com, you’ll get a list of IP addresses associated with Microsoft’s corporate web site www.microsoft.com.

NOTE: The Google DNS Query page returns the DNS results in JSON format. This isn’t particular or specific to DNS. It’s just how the Google DNS Query page chooses to format and display the query results.
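
The same kind of JSON result can be fetched from the command line. As a minimal sketch, assuming Google’s public DNS-over-HTTPS JSON endpoint at dns.google/resolve (which appears to be what the query page uses) and curl installed:

  # Ask Google's public resolver for the A records of microsoft.com;
  # the answer comes back as JSON, much like the web page shows.
  curl -s 'https://dns.google/resolve?name=microsoft.com&type=A'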

DNS is actually much more than a domain name to IP address mapping.  Read on…

Read more at Hyperonomy

Roles and Responsibilities of Cloud Native DevOps Engineers

Cloud Native DevOps is a relatively new collection of old concepts and ideas that coalesced out of a need to address inadequacies in the “old” way of building applications. To understand what Cloud Native DevOps engineers do on a daily basis, one needs to understand that the objective of the Cloud Native model is to build apps that take advantage of the adaptability and resiliency that are so easy to achieve using cloud tools. There are four main concepts that serve as the basis of cloud native computing: Microservices, Containers, CI/CD, and DevOps.

The best DevOps engineers will have the ability to use or learn a wide variety of open-source technologies and are comfortable with programming languages that are heavily used for scripting. They have some experience with IT systems and operations and data management and are able to integrate that knowledge into the CI/CD model of development. Crucially, DevOps engineers also need to have their sights set not just on writing code, but on the actual business outcomes from the product they develop. Big-picture thinking like that also requires strong soft skills to enable communication across teams and between the client and the technical team.

Read more at The New Stack

Migrating to Linux: Network and System Settings

Learn how to transition to Linux in this tutorial series from our archives.

In this series, we provide an overview of fundamentals to help you successfully make the transition to Linux from another operating system. If you missed the earlier articles in the series, you can find them here:

Part 1 – An Introduction

Part 2 – Disks, Files, and Filesystems

Part 3 – Graphical Environments

Part 4 – The Command Line

Part 5 – Using sudo

Part 6 – Installing Software

Linux gives you a lot of control over network and system settings. On your desktop, Linux lets you tweak just about anything on the system. Most of these settings are exposed in plain text files under the /etc directory. Here I describe some of the most common settings you’ll use on your desktop Linux system.

A lot of settings can be found in the Settings program, and the available options will vary by Linux distribution. Usually, you can change the background, tweak sound volume, connect to printers, set up displays, and more. While I won’t talk about all of the settings here, you can certainly explore what’s in there.

Connect to the Internet

Connecting to the Internet in Linux is often fairly straightforward. If you are wired through an Ethernet cable, Linux will usually get an IP address and connect automatically when the cable is plugged in or at startup if the cable is already connected.

If you are using wireless, in most distributions there is a menu, either in the indicator panel or in settings (depending on your distribution), where you can select the SSID for your wireless network. If the network is password protected, it will usually prompt you for the password. Afterward, it connects, and the process is fairly smooth.

You can adjust network settings in the graphical environment by going into settings. Sometimes this is called System Settings or just Settings. Often you can easily spot the settings program because its icon is a gear or a picture of tools (Figure 1).

Figure 1: Gnome Desktop Network Settings Indicator Icon.

Network Interface Names

Under Linux, network devices have names. Historically, these were given names like eth0 and wlan0, for Ethernet and wireless interfaces, respectively. Newer Linux systems use different names that appear more esoteric, like enp4s0 and wlp5s0. If the name starts with en, it’s a wired Ethernet interface. If it starts with wl, it’s a wireless interface. The rest of the letters and numbers reflect how the device is connected to hardware.
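
To see which names your own system uses, one quick way (a sketch, assuming the ip tool from the iproute2 package, which most modern distributions install by default) is:

  # List network interfaces; names such as enp4s0 (wired) or
  # wlp5s0 (wireless) appear at the start of each entry.
  ip link show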

Network Management from the Command Line

If you want more control over your network settings, or if you are managing network connections without a graphical desktop, you can also manage the network from the command line.

Note that the most common service used to manage networks in a graphical desktop is the Network Manager, and Network Manager will often override setting changes made on the command line. If you are using the Network Manager, it’s best to change your settings in its interface so it doesn’t undo the changes you make from the command line or someplace else.

Changing settings in the graphical environment is very likely to be interacting with Network Manager behind the scenes, and you can also change Network Manager settings from the command line using a tool called nmtui. The nmtui tool provides all the settings that you find in the graphical environment but presents them in a text-based, semi-graphical interface that works on the command line (Figure 2).

Figure 2: nmtui interface

On the command line, there is an older tool called ifconfig to manage networks and a newer one called ip. On some distributions, ifconfig is considered to be deprecated and is not even installed by default. On other distributions, ifconfig is still in use.

Here are some commands that will allow you to display and change network settings:
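
These are representative examples rather than an exhaustive list; the interface name enp4s0 below is only a placeholder, so substitute the name your own system reports.

  # Show all interfaces and their IP addresses (newer ip tool)
  ip addr show

  # Show the routing table, including the default gateway
  ip route show

  # Bring an interface down and back up again
  sudo ip link set enp4s0 down
  sudo ip link set enp4s0 up

  # Roughly equivalent commands using the older ifconfig tool, where installed
  ifconfig
  sudo ifconfig enp4s0 down
  sudo ifconfig enp4s0 up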

Process and System Information

In Windows, you can go into the Task Manager to see a list of all the programs and services that are running. You can also stop programs from running. And you can view system performance in some of the tabs displayed there.

You can do similar things in Linux both from the command line and from graphical tools. In Linux, there are a few graphical tools available, depending on your distribution. The most common ones are System Monitor and KSysGuard. In these tools, you can see system performance, see a list of processes, and even kill processes (Figure 3).

Figure 3: Screenshot of NetHogs.

In these tools, you can also view global network traffic on your system (Figure 4).

Figure 4: Screenshot of Gnome System Monitor.

Managing Process and System Usage

There are also quite a few tools you can use from the command line. The command ps can be used to list processes on your system. By default, it will list processes running in your current terminal session, but you can list other processes by giving it various command line options. You can get more help on ps with the commands info ps or man ps.
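
For example, here are a few common invocations (a sketch, assuming the procps version of ps shipped by most Linux distributions):

  # Processes in the current terminal session (the default)
  ps

  # Every process on the system, with owner, CPU, and memory columns
  ps aux

  # A custom listing sorted by CPU usage, highest first
  ps -eo pid,user,%cpu,%mem,comm --sort=-%cpu | head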

Most folks, though, want to get a list of processes because they would like to stop the one that is using up too much memory or CPU time. In this case, there are two commands that make this task much easier. These are top and htop (Figure 5).

Figure 5: Screenshot of top.

The top and htop tools work very similarly to each other. These commands update their list every second or two and re-sort it so that the task using the most CPU is at the top. You can also change the sorting to use other resources, such as memory usage.

In either of these programs (top and htop), you can type ‘?’ to get help, and ‘q’ to quit. With top, you can press ‘k’ to kill a process and then type in the unique PID number for the process to kill it.

With htop, you can highlight a task by pressing down arrow or up arrow to move the highlight bar, and then press F9 to kill the task followed by Enter to confirm.
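
If you already know the PID, the same thing can be done directly from the shell without opening top or htop; for example (1234 is a hypothetical PID and my-runaway-app a hypothetical process name):

  # Politely ask process 1234 to terminate (sends SIGTERM)
  kill 1234

  # Force-kill it if it ignores the polite request (sends SIGKILL)
  kill -9 1234

  # Or match the process by name instead of PID
  pkill my-runaway-app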

The information and tools provided in this series will help you get started with Linux. With a little time and patience, you’ll feel right at home.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.