
Docker Enterprise Edition Offers Multicloud App Management

Docker has expanded its commercial container platform software, Docker Enterprise Edition (EE), to manage containerized applications across multiple cloud services.

The idea with this release is to better help enterprise customers manage their applications across the entire development and deployment lifecycle, said Jenny Fong, Docker director of product marketing. “While containers help make applications more portable, the management of the containers is not the same,” Fong said.

Docker EE provides a management layer for containers, addressing needs around security and governance, and the company is now extending this management into the cloud.

Read more at The New Stack

Pushing AI Performance Benchmarks to the Edge

As I discussed recently, the AI industry is developing benchmarking suites that will help practitioners determine the target environment in which their machine learning, deep learning or other statistical models might perform best. Increasingly, these frameworks are turning their focus to benchmarking AI workloads that run on edge devices, such as “internet of things” endpoints, smartphones and embedded systems.

There are as yet no widely adopted AI benchmarking suites. Of the ones under development, these are the ones that stand the greatest chance of prevailing down the road:

  • Transaction Processing Performance Council’s AI Working Group: The TPC includes more than 20 top server and software makers. Late last year, the organization formed a working group to define AI hardware and software benchmarks that are agnostic to the underlying chipsets where the workloads are executed.
  • MLPerf: Early this month, Google Inc. and Baidu Inc. announced that they are teaming with chipmakers and academic research centers to create the AI benchmark MLPerf.

Read more at Silicon Angle

Corgi, the CLI Workflow Manager: Cute *And* Useful

Cuteness overload! Corgi, the CLI workflow manager is here to make your life easier by providing a list of features for creating and managing reusable snippets.

Corgi is a command-line tool that helps with repetitive command usage by organizing commands into reusable snippets. It was inspired by Pet and aims to extend Pet’s command-level usage to the workflow level.

Read more at Jaxenter

AI Is Coming to Edge Computing Devices

Very few non-server systems run software that could be called machine learning (ML) and artificial intelligence (AI). Yet, server-class “AI on the Edge” applications are coming to embedded devices, and Arm intends to fight with Intel and AMD over every last one of them.

Arm recently announced a new Cortex-A76 architecture that is claimed to boost the processing of AI and ML algorithms on edge computing devices by a factor of four. This does not include ML performance gains promised by the new Mali-G76 GPU. There’s also a Mali-V76 VPU designed for high-res video. The Cortex-A76 and the two Mali designs are intended to “complement” Arm’s Project Trillium Machine Learning processors (see below).

Improved performance

The Cortex-A76 differs from the Cortex-A73 and Cortex-A75 IP designs in that it’s designed as much for laptops as for smartphones and high-end embedded devices. Cortex-A76 provides “35 percent more performance year-over-year,” compared to Cortex-A75, claims Arm. The IP, which is expected to arrive in products a year from now, is also said to provide 40 percent improved efficiency.

Like the Cortex-A75, which is equivalent to the latest Kryo cores available in Qualcomm’s Snapdragon 845, the Cortex-A76 supports DynamIQ, Arm’s more flexible version of its big.LITTLE multi-core scheme. Unlike the Cortex-A75, which was announced alongside a Cortex-A55 companion chip, the Cortex-A76 arrives with no new DynamIQ companion.

Cortex-A76 enhancements are said to include decoupled branch prediction and instruction fetch, as well as Arm’s first 4-wide decode core, which boosts the maximum instructions-per-cycle capability. There’s also higher integer and vector execution throughput, including support for dual-issue native 16B (128-bit) vector and floating-point units. Finally, the new full-cache memory hierarchy is “co-optimized for latency and bandwidth,” says Arm.

Unlike the latest high-end Cortex-A releases, Cortex-A76 represents “a brand new microarchitecture,” says Arm. This is confirmed by AnandTech’s usual deep-dive analysis. Cortex-A73 and -A75 debuted elements of the new “Artemis” architecture, but the Cortex-A76 is built from scratch with Artemis.

The Cortex-A76 should arrive on 7nm-fabricated TSMC products running at 3GHz, says AnandTech. The 4x improvements in ML workloads are primarily due to new optimizations in the ASIMD pipelines “and how dot products are handled,” says the story.
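The dot-product handling mentioned here refers to Armv8.2-A’s SIMD dot-product instructions, which multiply four 8-bit lanes and accumulate the result into a 32-bit lane, a pattern at the heart of quantized neural-network inference. As a rough scalar sketch of one lane’s semantics (illustrative only, not Arm’s implementation):

```python
def sdot_lane(acc, a4, b4):
    """One 32-bit lane of an SDOT-style operation: accumulate the
    dot product of four signed 8-bit values into a 32-bit accumulator."""
    return acc + sum(x * y for x, y in zip(a4, b4))

# A 128-bit vector holds four such lanes, all processed in one instruction,
# so 16 multiply-accumulates complete per SDOT issued.
print(sdot_lane(0, [1, 2, 3, 4], [5, 6, 7, 8]))  # 70
```

Doing these four multiplies and the accumulation in a single wide-vector operation, rather than in separate scalar steps, is where much of the claimed 4x ML speedup comes from.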

Meanwhile, The Register noted that Cortex-A76 is Arm’s first design that will exclusively run 64-bit kernel-level code. The cores will support 32-bit code, but only at non-privileged levels, says the story.

Mali-G76 GPU and Mali-V76 VPU

The new Mali-G76 GPU announced with Cortex-A76 targets gaming, VR, AR, and on-device ML. The Mali-G76 is said to provide 30 percent more efficiency and performance density and 1.5x improved performance for mobile gaming. The Bifrost architecture GPU also provides 2.7x ML performance improvements compared to the Mali-G72, which was announced last year with the Cortex-A75.

The Mali-V76 VPU supports UHD 8K viewing experiences. It’s aimed at 4×4 video walls, which are especially popular in China, and is designed to support the 8K video coverage that Japan has promised for the 2020 Olympics. 8K@60 streams require four times the bandwidth of 4K@60 streams; to achieve this, Arm added an extra AXI bus and doubled the line buffers throughout the video pipeline. The VPU also supports 8K@30 decode.
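The 4x bandwidth figure follows directly from the pixel counts: at the same frame rate and bit depth, 8K UHD carries exactly four times the pixels of 4K UHD. A quick check:

```python
# Pixels per frame for the two UHD resolutions
px_4k = 3840 * 2160   # 8,294,400 pixels
px_8k = 7680 * 4320   # 33,177,600 pixels

# At the same frame rate (e.g. 60 fps), the raw bandwidth
# ratio is simply the pixel ratio.
print(px_8k // px_4k)  # 4
```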

Project Trillium’s ML chip detailed

Arm previously revealed other details about the Machine Learning (ML) processor, also referred to as MLP. The ML chip will accelerate AI applications including machine translation and face recognition. 

The new processor architecture is part of the Project Trillium initiative for AI, and follows Arm’s second-gen Object Detection (OD) Processor for optimizing visual processing and people/object detection. The ML design will initially debut as a co-processor in mobile phones by late 2019.

Numerous block diagrams for the MLP were published by AnandTech, which was briefed on the design. While stating that any judgment about the performance of the still unfinished ML IP will require next year’s silicon release, the publication says that the ML chip appears to check off all the requirements of a neural network accelerator, including providing efficient convolutional computations and data movement while also enabling sufficient programmability.

Arm claims the chips will provide >3TOPs per Watt performance in 7nm designs with absolute throughputs of 4.6TOPs, deriving a target power of approximately 1.5W. For programmability, MLP will initially target Android’s Neural Networks API and Arm’s NN SDK.
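The quoted ~1.5W target is just the absolute throughput divided by the efficiency figure (using the claimed 3 TOPs/W as a floor):

```python
throughput_tops = 4.6   # absolute throughput claimed for 7nm designs, in TOPs
efficiency = 3.0        # ">3 TOPs per Watt" claim, taking 3.0 as the lower bound

power_watts = throughput_tops / efficiency
print(round(power_watts, 2))  # 1.53 -- consistent with the ~1.5 W target
```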

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.

EKS vs. ECS: Orchestrating Containers on AWS

AWS announced Kubernetes-as-a-Service at re:Invent in November 2017: Elastic Container Service for Kubernetes (EKS). Now, EKS is generally available. I discussed ECS vs. Kubernetes before EKS was a thing. Therefore, I’d like to take a second attempt and compare EKS with ECS.

Before comparing the differences, let us start with what EKS and ECS have in common. Both solutions manage containers distributed across a fleet of virtual machines. Managing containers includes:

  • Monitoring and replacing failed containers.
  • Deploying new versions of your containers.
  • Scaling the number of containers based on load.

What are the differences between EKS and ECS?

Load Balancing

Usually, a load balancer is the entry point into your AWS infrastructure. Both EKS and ECS offer integrations with Elastic Load Balancing (ELB).

On the one hand, Kubernetes — and therefore EKS — offers an integration with the Classic Load Balancer. When you create a service, Kubernetes also creates and configures a Classic Load Balancer for you.
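As a sketch of what that looks like in practice (the service and label names here are hypothetical), declaring a Service of `type: LoadBalancer` is all it takes for EKS to provision the ELB:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical service name
spec:
  type: LoadBalancer      # on EKS, provisions a Classic Load Balancer by default
  selector:
    app: my-app           # routes traffic to pods carrying this label
  ports:
    - port: 80            # port exposed on the load balancer
      targetPort: 8080    # port the containers listen on
```

Applying this with `kubectl apply -f service.yaml` leaves the ELB lifecycle entirely to Kubernetes; deleting the Service tears the load balancer down again.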

Read more at DZone

GStreamer CI Support for Embedded Devices

Embedded devices are a popular deployment target for GStreamer yet they are not tested on the project’s Continuous Integration (CI) system. Here’s a look at work done to introduce a Raspberry Pi for automated on-board testing using Jenkins, LAVA, and more.

By Omar Akkila, Software Engineer at Collabora.

GStreamer is a popular open-source, pipeline-based multimedia framework that has been in development since 2001. That’s 17 years of constant development, triaging, bug fixes, feature additions, packaging, and testing. Since adopting a Jenkins-based Continuous Integration (CI) setup in August 2013, GStreamer and its dependencies have been built multiple times a day with each commit. Prior to that, the multimedia project used a build bot hosted by Collabora and Igalia. At the time of this writing, GStreamer is built for Linux (Fedora & Debian), macOS, Windows, Android, and iOS. A very popular deployment target for GStreamer is embedded devices, but they are not targeted in the current CI setup. This meant additional manpower, effort, and testing outside of the automated tests for every release of GStreamer to validate on embedded boards. To rectify this, a goal was devised to integrate embedded devices into the CI.

Now, this meant more than just emulating embedded targets and building GStreamer for them. The desire is to test on physical boards with as much automation as possible. This is where the Linaro Automated Validation Architecture (LAVA) comes into play. LAVA is a continuous integration automation system, similar to Jenkins, oriented towards testing on physical and virtual hardware. Tests can range anywhere from simple boot testing to system-level testing. The plan is for the GStreamer CI to interface with LAVA to run the gst-validate test suite on devices.

Architecturally, LAVA operates through a master-worker relationship. The master houses the web interface, the database of devices, and the scheduler. The worker receives messages from the master and dispatches all operations and procedures to the Devices Under Test (DUT). At Collabora, we host a LAVA instance with a master and maintain a lab of physical devices connected to a LAVA worker in our Cambridge office. For the preliminary iteration of embedded support, the aim is to introduce a Raspberry Pi to the GStreamer CI, using Collabora’s infrastructure as a playground for testing and research. The Raspberry Pi is both popular and, because of its design, offers the complex use case of requiring special builds of GStreamer components. Conveniently, one of the devices integrated with our worker is a Raspberry Pi 2 Model B, hereafter referred to as ‘RPi’.
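A LAVA test run is driven by a YAML job definition submitted to the master, which schedules it onto a matching device. A heavily simplified, hypothetical sketch of the shape such a job takes — the device-type name, URLs, and test repository here are illustrative placeholders, not Collabora’s actual configuration:

```yaml
# Hypothetical LAVA job definition sketch; all values are illustrative.
device_type: bcm2836-rpi-2-b      # assumed device-type name for a Raspberry Pi 2
job_name: gst-validate smoke test
timeouts:
  job:
    minutes: 30
actions:
  - deploy:                       # stage kernel/rootfs where the DUT can boot them
      to: tftp
      kernel:
        url: https://example.com/artifacts/zImage
  - boot:
      method: u-boot
  - test:                         # run the test definitions on the booted device
      definitions:
        - repository: https://example.com/gst-validate-tests.git
          from: git
          path: gst-validate.yaml
          name: gst-validate
```

The deploy/boot/test action sequence is what lets the same job format cover everything from simple boot checks to full system-level test suites.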

Continue reading on Collabora’s blog.

Video: Linus Torvalds Explains How Linux Still Surprises and Motivates Him

Hear about Linux development directly from Linus Torvalds in this video from our archives.

Linus Torvalds took to the stage in China for the first time Monday at LinuxCon + ContainerCon + CloudOpen China 2017 in Beijing. In front of a crowd of nearly 2,000, Torvalds spoke with VMware Head of Open Source Dirk Hohndel in one of their famous “fireside chats” about what motivates and surprises him and how aspiring open source developers can get started. Here are some highlights of their talk.

What’s surprising about Linux development

“What I find interesting is code that I thought was stable continually gets improved. There are things we haven’t touched for many years, then someone comes along and improves them or makes bug reports in something I thought no one used. We have new hardware, new features that are developed, but after 25 years, we still have old, very basic things that people care about and still improve.”

What motivates him

“I really like what I’m doing. I like waking up and having a job that is technically interesting and challenging without being too stressful so I can do it for long stretches; something where I feel I am making a real difference and doing something meaningful not just for me.”

“I occasionally have taken breaks from my job. The 2-3 weeks I worked on Git to get that started for example. But every time I take a longer break, I get bored. When I go diving for a week, I look forward to getting back. I never had the feeling that I need to take a longer break.”

The future of Linux leadership

“Our processes have not only worked for 25 years, we still have a very strong maintainer group. We complain that we don’t have enough maintainers – which is true, we only have tens of top maintainers who do the daily work of merging stuff. That’s a strong team for an open source project. And as these maintainers get older and fatter, we have new people coming in. It takes years to go from a new developer to a top maintainer, so I don’t feel that we should necessarily worry about the process and Linux for the next 20 years.”

Will Linux be replaced

“Maybe some new aggressive project will come along and show they can do what we do better, but I don’t worry about that. There have been lots of very successful forks of Linux. What makes people not think of them as forks is that they are harmonious. If someone says they want to do this and change everything and make the kernel so much better, my feeling is do it, prove yourself. I may think it’s a bad idea, but you can prove me wrong.”

Thoughts on Git

“I’m very surprised about how widely Git has spread. I’m pleased obviously, and it validates my notion of doing distributed development. At the same time, looking at most source control versions, it tends to be a huge slog and difficult to introduce a new software control version. I expected it to be limited mostly to the kernel — as it’s tailored to what we do.”

“For the first 3 to 4 years, the complaint about Git was it was so different and hard to use. About 5 years ago something changed. Enough projects and developers had started using Git that it wasn’t different anymore; it was what people were used to. They started taking advantage of the development model and the feeling of security that using Git meant nothing would be corrupted or lost.”

“In certain circles, Git is more well known than Linux. Linux is often hidden – on an Android phone you’re running Linux, but you don’t think about it. With Git, you know you are using Git.”

Forking Linux

“When I sat down and wrote Git, a prime principle was that you should be able to fork and go off on your own and do something on your own. If you have forks that are friendly — the type that prove me wrong and do something interesting that improves the kernel — in that situation, someone can come back and say they actually improved the kernel and there are no bad feelings. I’ll take your improved code and merge it back. That’s why you should encourage forks. You also want to make it easy to take back the good ones.”

How to get started as an open source developer

“For me, I was always self-motivated and knew what I wanted to do. I was never told what I should look at doing. I’m not sure my example is the right thing for people to follow. There are a ton of open source projects and, if you are a beginning programmer, find something you’re interested in that you can follow for more than just a few weeks. Get to know the code so well that you get to the point where you are an expert on a code piece. It doesn’t need to be the whole project. No one is an expert on the whole kernel, but you can know an area well.  

“If you can be part of a community and set up patches, it’s not just about the coding, but about the social aspect of open source. You make connections and improve yourself as a programmer. You are basically showing off – I made these improvements, I’m capable of going far in my community or job. You’ll have to spend a certain amount of time to learn a project, but there’s a huge upside — not just from a career aspect, but having an amazing project in your life.”

Watch the complete video below:

https://www.youtube.com/watch?v=0rsx65_wjoE&list=PLbzoR-pLrL6rHryWIST4qnRZo-JVqUST3

Red Hat Reaches the Summit – A New Top Scientific Supercomputer

Red Hat just announced its role in bringing a top scientific supercomputer into service in the U.S. Named “Summit” and housed at the Department of Energy’s Oak Ridge National Laboratory, this system with its 4,608 IBM compute servers is running — you guessed it — Red Hat Enterprise Linux.

The Summit collaborators

With IBM providing its POWER9 processors, Nvidia contributing its Volta V100 GPUs, Mellanox bringing its InfiniBand into play, and Red Hat supplying Red Hat Enterprise Linux, the level of inter-vendor collaboration has reached something of an all-time high, and an amazing new supercomputer is now ready for business.

Read more at NetworkWorld

Devuan 2.0 Is a Debian Fork for Linux Users Who Want to Avoid systemd

Devuan is a fork of Debian that eschews the Red Hat-developed systemd init system in favor of alternatives such as sysvinit. Unlike the Mir vs. Wayland controversy, the use of systemd has impacted enterprise servers, which have highly customized init scripts that are challenging to reimplement for systemd-powered systems, or that otherwise break across upgrades.

Devuan, pronounced “dev one,” is available for 32- and 64-bit PCs, with specialized ARM images available for certain Chromebooks, as well as the MeeGo-era Nokia N9, N900, and N950 phones, and the Motorola Droid 4. It’s also available for single-board computers including the Raspberry Pi series, ODROID XU and XU4, BeagleBone Black, and Allwinner-powered boards with mainline U-Boot and Linux support, including variants of the Banana Pi and Orange Pi products. Current Debian Jessie and Stretch users can migrate directly to Devuan without needing to start from a fresh installation.

Read more at TechRepublic

Facebook Releases Sonar Debugging Tool to the Open Source Community

Sonar was developed for and by Facebook engineers to help them manage the social network, including the implementation of new features, bug hunting, and performance optimization.

Now, Sonar is being released to the open source community in the hopes of giving programmers a tool for the acceleration of app development and deployment. … Made up of a desktop client and mobile SDK, Sonar can be used by developers to inspect app layouts — whether or not the apps were built with standard Android/iOS views or Litho/ComponentKit components — as well as inspect both logs and network traffic.

Read more at ZDNet