
Intel Chip Flaw: Math Unit May Spill Crypto Secrets to Apps – Modern Linux, Windows, BSDs Immune

A security flaw within Intel Core and Xeon processors can potentially be exploited to swipe sensitive data from the chips’ math processing units. Malware or malicious logged-in users can attempt to leverage this design blunder to steal the inputs and results of computations performed in private by other software.

These numbers, held in FPU registers, could potentially be used to discern parts of cryptographic keys being used to secure data in the system. For example, Intel’s AES encryption and decryption instructions use FPU registers to hold keys.

In short, the security hole could be used to extract or guess at secret encryption keys within other programs, in certain circumstances, according to people familiar with the engineering mishap.

Modern versions of Linux – from kernel version 4.9, released in 2016, and later – and modern Windows, including Server 2016, as well as the latest spins of OpenBSD and DragonflyBSD are not affected by this flaw (CVE-2018-3665).
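If you want a rough first check on a Linux box, here is a minimal sketch based only on the 4.9 kernel cutoff reported above. Distributions backport fixes, so treat a "pre-4.9" result as a prompt to check your vendor's advisory, not a diagnosis:

```python
# Rough check against the kernel-version cutoff reported above (4.9+).
# Distros backport patches, so this is a hint, not a verdict.
import platform

release = platform.release()  # e.g. "4.15.0-23-generic" (Linux only)
major, minor = (int(x) for x in release.split(".")[:2])

if (major, minor) >= (4, 9):
    print(f"Kernel {release}: 4.9 or later -- reportedly not affected")
else:
    print(f"Kernel {release}: pre-4.9 -- check vendor advisories for CVE-2018-3665")
```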

Read more at The Register

Going Global with Kubernetes

Kubernetes is often touted as the Linux of the cloud world, and that comparison is fair when you consider its widespread adoption. But, with great power comes great responsibility and, as the home of Kubernetes, the Cloud Native Computing Foundation (CNCF) shoulders many responsibilities, including learning from the mistakes of other open source projects while not losing sight of the main goal. The rapid global growth of CNCF also means increased responsibility in terms of cultural diversity and creating a welcoming environment.

Rise of Kubernetes in China

CNCF has more than 216 members, making it the second largest project under the umbrella of The Linux Foundation. The project is enjoying massive adoption and growth in new markets, especially in China. For example, JD.com, one of the largest e-commerce companies in China, has moved to Kubernetes.

“If you are looking to innovate as a company, you are not going to always buy off-the-shelf technologies, you take Open Source technologies and customize them to your needs. China has over a billion people and they have to meet the needs of these people; they need to scale. Open Source technologies like Kubernetes enable them to customize and scale technologies to their needs,” said Chris Aniszczyk, CTO, CNCF.

This growth in Asia has inspired CNCF to bring KubeCon and CloudNativeCon to China. The organization will hold its first KubeCon + CloudNativeCon in Shanghai, November 14-15, 2018. China is already using open source cloud-native technologies, and through these and other efforts, CNCF wants to build a bridge to help Chinese developers increase their contributions to various projects. CNCF is also gearing up to help the community by offering translations of documentation, exams, certifications, and more.

In interviews and at events in China, language often becomes a barrier to collaboration and the free exchange of ideas and information. CNCF is aware of this and, according to Aniszczyk, is working on plans for live translation at events so that presenters can speak in their native language.

CNCF projects are growing not only in new regions but also in scope; people are finding new use cases every day. While enjoying this adoption, the community has also started to prepare itself for what lies ahead. They certainly can’t predict how some smart organization will use their technology in an area they never envisioned, but they can prepare the community to embrace new requirements.

We have started to hear about the CNCF 2020 vision, which goes beyond Kubernetes proper and looks at areas such as security and policy. The community has started adding new projects that deal with some of these topics, including SPIFFE, which helps users deal with service identity and security at scale for Kubernetes-related services, and Open Policy Agent (OPA), a policy management project.

“We are witnessing a wide expansion of areas that CNCF is investing in to bring cloud native technologies to users,” said Aniszczyk.

Bane or boon?

Adoption is great, but we have seen how many open source projects lose track of their core mission and become bloated in order to cater to every use case. The CNCF is not immune to such problems, but the community — at both the developer and organizational levels — is acutely aware of the risk and is working to protect itself.

“We have taken several approaches. First and foremost, unlike many other open source projects, CNCF doesn’t force integration. We don’t have one major release that bundles everything. We don’t have any gatekeeping processes that other foundations have,” said Aniszczyk.

What CNCF does do is allow its members and end users to come up with integrations themselves and build products that solve the problems of their users. If such an integration is useful, they contribute it back to CNCF. “We have a set of loosely coupled projects that are integrated by users; we don’t force any such integration,” said Aniszczyk.

According to Aniszczyk, CNCF acts almost like a release valve and experimentation center for new things. It creates an environment to test new projects. “They are like sandbox projects doing some interesting innovation, solving some serious problems. We will see if they work or not. If they do work then the community may decide to integrate them, but none of it is forced,” said Aniszczyk.

It’s magic

All of this makes CNCF a unique project in the open source ecosystem. Kubernetes has now been widely adopted across industries. Look at cloud providers, for example, and you see that Kubernetes has the blessing of the public cloud trinity: AWS, Azure, and Google Cloud. Three top Linux vendors — SUSE, Red Hat, and Canonical — have put their weight behind Kubernetes, as have many other companies and organizations.

“I‘m so proud of being a person that’s been involved in open source and seeing all these companies working together under one neutral umbrella,” Aniszczyk said.

Join us at Open Source Summit in Vancouver this August for 250+ sessions covering the latest technologies and best practices in Kubernetes, cloud, open source, and more.

The Schedule for Open Source Summit North America Is Now Live

Join us August 29-31, in Vancouver, BC, for 250+ sessions covering a wide array of topics including Linux Systems, Cloud Native Applications, Blockchain, AI, Networking, Cloud Infrastructure, Open Source Leadership, Program Office Management and more. Arrive early for new bonus content on August 28 including co-located events, tutorials, labs, workshops, and lightning talks.

VIEW THE FULL SCHEDULE »

Register to save $300 through June 17.

REGISTER NOW »

Read more at The Linux Foundation

Why Open Source Needs Marketing (Even Though Developers Hate It)

Open source is community and collaboration driven. Instead of one dedicated product development team, there are thousands of developers from all over the world who contribute to and develop the open-source project. For traditional marketers entering the open-source space, it takes a bit of a mindset shift. And if you are a company that is participating in an open-source project, you also have a role to play in helping to market it.

So, what have I learned about how to market open-source projects?

1. Recognize and respect the importance of the community.

As a marketer, you work for the community. They are the stars. It is your job to make them shine from behind the scenes. Without the community, the open-source project will die.

Read more at Forbes

Docker Enterprise Edition Offers Multicloud App Management

Docker has expanded its commercial container platform software, Docker Enterprise Edition (EE), to manage containerized applications across multiple cloud services.

The idea with this release is to better help enterprise customers manage their applications across the entire development and deployment lifecycle, said Jenny Fong, Docker director of product marketing. “While containers help make applications more portable, the management of the containers is not the same,” Fong said.

Docker EE provides a management layer for containers, addressing needs around security and governance, and the company is now extending this management into the cloud.

Read more at The New Stack

Pushing AI Performance Benchmarks to the Edge

As I discussed recently, the AI industry is developing benchmarking suites that will help practitioners determine the target environment in which their machine learning, deep learning or other statistical models might perform best. Increasingly, these frameworks are turning their focus to benchmarking AI workloads that run on edge devices, such as “internet of things” endpoints, smartphones and embedded systems.

There are as yet no widely adopted AI benchmarking suites. Of those under development, these stand the greatest chance of prevailing down the road:

  • Transaction Processing Performance Council’s AI Working Group: The TPC includes more than 20 top server and software makers. Late last year, the organization formed a working group to define AI hardware and software benchmarks that are agnostic to the underlying chipsets where the workloads are executed.
  • MLPerf: Early this month, Google Inc. and Baidu Inc. announced that they are teaming with chipmakers and academic research centers to create the AI benchmark MLPerf.

Read more at Silicon Angle

Corgi, the CLI Workflow Manager: Cute *And* Useful

Cuteness overload! Corgi, the CLI workflow manager, is here to make your life easier by providing a list of features for creating and managing reusable snippets.

Corgi is a command-line tool that helps with your repetitive command usage by organizing commands into reusable snippets. It was inspired by Pet and aims to advance Pet’s command-level usage to a workflow level.

Read more at Jaxenter

AI Is Coming to Edge Computing Devices

Very few non-server systems run software that could be called machine learning (ML) and artificial intelligence (AI). Yet, server-class “AI on the Edge” applications are coming to embedded devices, and Arm intends to fight with Intel and AMD over every last one of them.

Arm recently announced a new Cortex-A76 architecture that is claimed to boost the processing of AI and ML algorithms on edge computing devices by a factor of four. This does not include ML performance gains promised by the new Mali-G76 GPU. There’s also a Mali-V76 VPU designed for high-res video. The Cortex-A76 and the two Mali designs are intended to “complement” Arm’s Project Trillium Machine Learning processors (see below).

Improved performance

The Cortex-A76 differs from the Cortex-A73 and Cortex-A75 IP designs in that it’s designed as much for laptops as for smartphones and high-end embedded devices. Cortex-A76 provides “35 percent more performance year-over-year,” compared to Cortex-A75, claims Arm. The IP, which is expected to arrive in products a year from now, is also said to provide 40 percent improved efficiency.

Like the Cortex-A75, which is equivalent to the latest Kryo cores available on Qualcomm’s Snapdragon 845, the Cortex-A76 supports DynamIQ, Arm’s more flexible version of its big.LITTLE multi-core scheme. Unlike the Cortex-A75, which was announced with a Cortex-A55 companion chip, the Cortex-A76 arrives with no new DynamIQ companion.

Cortex-A76 enhancements are said to include decoupled branch prediction and instruction fetch, as well as Arm’s first 4-wide decode core, which boosts the maximum instructions-per-cycle capability. There’s also higher integer and vector execution throughput, including support for dual-issue native 16B (128-bit) vector and floating-point units. Finally, the new full-cache memory hierarchy is “co-optimized for latency and bandwidth,” says Arm.

Unlike the latest high-end Cortex-A releases, Cortex-A76 represents “a brand new microarchitecture,” says Arm. This is confirmed by AnandTech’s usual deep-dive analysis. Cortex-A73 and -A75 debuted elements of the new “Artemis” architecture, but the Cortex-A76 is built from scratch with Artemis.

The Cortex-A76 should arrive on 7nm-fabricated TSMC products running at 3GHz, says AnandTech. The 4x improvements in ML workloads are primarily due to new optimizations in the ASIMD pipelines “and how dot products are handled,” says the story.

Meanwhile, The Register noted that the Cortex-A76 is Arm’s first design that will exclusively run 64-bit kernel-level code. The cores will support 32-bit code, but only at non-privileged levels, says the story.

Mali-G76 GPU and Mali-V76 VPU

The new Mali-G76 GPU announced with Cortex-A76 targets gaming, VR, AR, and on-device ML. The Mali-G76 is said to provide 30 percent more efficiency and performance density and 1.5x improved performance for mobile gaming. The Bifrost architecture GPU also provides 2.7x ML performance improvements compared to the Mali-G72, which was announced last year with the Cortex-A75.

The Mali-V76 VPU supports UHD 8K viewing experiences. It’s aimed at 4×4 video walls, which are especially popular in China, and is designed to support the 8K video coverage that Japan has promised for the 2020 Olympics. 8K@60 streams require four times the bandwidth of 4K@60 streams. To achieve this, Arm added an extra AXI bus and doubled the line buffers throughout the video pipeline. The VPU also supports 8K@30 decode.
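The 4x bandwidth figure follows directly from the pixel counts, as a quick sanity check shows:

```python
# An 8K UHD frame (7680x4320) carries four times the pixels of a 4K UHD
# frame (3840x2160), so at the same frame rate and comparable encoding,
# 8K@60 needs roughly four times the bandwidth of 4K@60.
print((7680 * 4320) / (3840 * 2160))  # 4.0
```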

Project Trillium’s ML chip detailed

Arm previously revealed other details about the Machine Learning (ML) processor, also referred to as MLP. The ML chip will accelerate AI applications including machine translation and face recognition. 

The new processor architecture is part of the Project Trillium initiative for AI, and follows Arm’s second-gen Object Detection (OD) Processor for optimizing visual processing and people/object detection. The ML design will initially debut as a co-processor in mobile phones by late 2019.

Numerous block diagrams for the MLP were published by AnandTech, which was briefed on the design. While stating that any judgment about the performance of the still unfinished ML IP will require next year’s silicon release, the publication says that the ML chip appears to check off all the requirements of a neural network accelerator, including providing efficient convolutional computations and data movement while also enabling sufficient programmability.

Arm claims the chips will provide more than 3 TOPs per watt in 7nm designs, with an absolute throughput of 4.6 TOPs, deriving a target power of approximately 1.5W. For programmability, the MLP will initially target Android’s Neural Networks API and Arm’s NN SDK.
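That ~1.5W target is simply Arm's two headline numbers divided out:

```python
# 4.6 TOPs of absolute throughput at a little over 3 TOPs per watt
# implies a power budget of roughly 1.5 W.
print(4.6 / 3.0)  # ~1.53 (watts)
```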

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.

EKS vs. ECS: Orchestrating Containers on AWS

AWS announced Kubernetes-as-a-Service at re:Invent in November 2017: Elastic Container Service for Kubernetes (EKS). Now, EKS is generally available. I discussed ECS vs. Kubernetes before EKS was a thing. Therefore, I’d like to take a second attempt and compare EKS with ECS.

Before comparing the differences, let us start with what EKS and ECS have in common. Both solutions manage containers distributed among a fleet of virtual machines; a sketch of how these tasks look in practice follows the list. Managing containers includes:

  • Monitoring and replacing failed containers.
  • Deploying new versions of your containers.
  • Scaling the number of containers based on load.
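On the Kubernetes/EKS side, all three of those tasks hang off a single Deployment object. A hedged sketch using the official `kubernetes` Python client — the names `web` and `nginx:1.15` are placeholders, and a configured kubeconfig is assumed:

```python
# Sketch: one Deployment covers the three tasks above -- the orchestrator
# replaces failed pods, rolls out new image versions, and scales replicas.
from kubernetes import client, config

config.load_kube_config()  # assumes a working ~/.kube/config

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # scaling: raise or lower this under load
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[
                # bumping the image tag triggers a rolling deployment
                client.V1Container(name="web", image="nginx:1.15"),
            ]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```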

What are the differences between EKS and ECS?

Load Balancing

Usually, a load balancer is the entry point into your AWS infrastructure. Both EKS and ECS offer integrations with Elastic Load Balancing (ELB).

On the one hand, Kubernetes — and therefore EKS — offers an integration with the Classic Load Balancer. When creating a service, Kubernetes also creates or configures a Classic Load Balancer for you.
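For a concrete picture, here is a hedged sketch (again via the `kubernetes` Python client, with placeholder names) of the kind of Service that causes Kubernetes on AWS to provision a Classic Load Balancer:

```python
# Sketch: a Service of type LoadBalancer -- on EKS, Kubernetes' AWS
# integration provisions and configures a Classic ELB for it.
from kubernetes import client, config

config.load_kube_config()

service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",      # triggers the ELB integration
        selector={"app": "web"},  # pods that receive the traffic
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```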

Read more at DZone

GStreamer CI Support for Embedded Devices

Embedded devices are a popular deployment target for GStreamer yet they are not tested on the project’s Continuous Integration (CI) system. Here’s a look at work done to introduce a Raspberry Pi for automated on-board testing using Jenkins, LAVA, and more.

By Omar Akkila, Software Engineer at Collabora.

GStreamer is a popular open-source pipeline-based multimedia framework that has been in development since 2001. That’s 17 years of constant development, triaging, bug fixes, feature additions, packaging, and testing. Since adopting a Jenkins-based Continuous Integration (CI) setup in August 2013, GStreamer and its dependencies have been built multiple times a day, with each commit. Prior to that, the multimedia project used a build bot hosted by Collabora and Igalia. At the time of this writing, GStreamer is built for the Linux (Fedora and Debian), macOS, Windows, Android, and iOS platforms. Embedded devices are a very popular deployment target for GStreamer, but they are not targeted in the current CI setup. This has meant additional manpower, effort, and testing outside of the automated tests for every GStreamer release to validate it on embedded boards. To rectify this, a goal was devised to integrate embedded devices into the CI.

Now, this meant more than just emulating embedded targets and building GStreamer for them. The desire is to test on physical boards with as much automation as possible. This is where the Linaro Automated Validation Architecture (LAVA) steps into play. LAVA is a continuous integration automation system, similar to Jenkins, oriented towards testing on physical and virtual hardware. Tests can range anywhere from simple boot testing to system-level testing. The plan is for GStreamer CI to interface with LAVA to run the gst-validate test suite on devices.
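LAVA exposes an XML-RPC API, so a CI step can submit a job definition programmatically. A hedged sketch — the server URL, credentials, and job file below are placeholders, not GStreamer's actual setup:

```python
# Sketch: submit a LAVA job definition over XML-RPC so a CI step can
# schedule a test run on a physical board. URL and token are placeholders.
import xmlrpc.client

# LAVA accepts "https://<user>:<token>@<host>/RPC2" for authentication.
server = xmlrpc.client.ServerProxy("https://user:token@lava.example.com/RPC2")

with open("rpi2-gst-validate.yaml") as f:  # hypothetical job definition
    job_id = server.scheduler.submit_job(f.read())

print("Submitted LAVA job", job_id)
```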

Architecturally, LAVA operates through a master-worker relationship. The master is responsible for housing the web interface, database of devices, and scheduler. The worker is responsible for receiving messages from the master and dispatching all operations and procedures to the Devices Under Test (DUT). At Collabora, we host a LAVA instance with a master and maintain a lab of physical devices connected to a LAVA worker in our Cambridge office. For the preliminary iteration of embedded support, the aim is to introduce a Raspberry Pi to the GStreamer CI, with Collabora’s infrastructure used as a playground for testing and research. The Raspberry Pi is both popular and, due to its design, presents the complex use case of requiring special builds of GStreamer components. Conveniently, one of the devices integrated with our worker is a Raspberry Pi 2 Model B – hereafter referred to as ‘RPi’.

Continue reading on Collabora’s blog.