
A Flurry of Open Source Graphics Milestones

Written by Daniel Stone, Graphics Lead at Collabora.

The past few months have been busy ones on the open-source graphics front, bringing with them Wayland 1.13, Weston 2.0, Mesa 17.0, and Linux 4.10. These releases have been quite interesting in and of themselves, but the biggest news must be that with Mesa 17.0, recent Intel platforms are fully conformant with the most recent Khronos rendering APIs: OpenGL 4.5, OpenGL ES 3.2, and Vulkan 1.0. This is an enormous step forward for open-source graphics: huge congratulations to everyone involved!

Mesa 17.0 also includes the Etnaviv driver, supporting the Vivante GPUs found in NXP/Freescale i.MX SoCs, amongst others. The Etnaviv driver brings with it a ‘renderonly’ framework for Mesa, explicitly providing support for systems with a separate display controller and 3D GPU. With Etnaviv, Mesa now has supported, fully open-source drivers for GPUs from six hardware vendors.

Extending buffer modifier support

Though we were proud to participate in some of the new feature enablement work that lifted Mesa to conformance (including arrays-of-arrays, enhanced UBO layouts, much shader cache work, etc.), much of our recent work has focused on behind-the-scenes performance improvements. Varad Gautam blogged about buffer modifiers and their importance, and we continue to work on buffer modifier support in both Wayland/Weston and Mesa. With Wayland and Weston now re-opened for development after their releases, we should see this support merged into the respective protocols soon.

We have also worked with Ben Widawsky at Intel and Kristian Høgsberg at Google to enable rendering and direct display of buffers with modifiers. Respectively, their patchsets extend the GBM API (used to enable GPU rendering for direct display to KMS) to accept a list of supported modifiers for rendering, and extend the KMS API to advertise a list of supported modifiers for each plane. With both allocation and advertisement solved, we are getting closer to a fully end-to-end-optimal pipeline.
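
As a rough sketch of what the allocation side could look like from a client's point of view – assuming the gbm_surface_create_with_modifiers() entry point this work adds to GBM, whose final name and shape may still change before merging – the device path, format, and hard-coded modifier list below are purely illustrative; a real compositor would take the modifier list from the per-plane KMS advertisement:

```c
/* Sketch: allocating a GBM surface while passing the display's supported
 * modifiers, so the driver can pick an optimal tiling/compression layout.
 * Assumes the gbm_surface_create_with_modifiers() entry point described
 * above; error handling is omitted for brevity. */
#include <fcntl.h>
#include <stdint.h>
#include <gbm.h>
#include <drm_fourcc.h>

struct gbm_surface *create_scanout_surface(void)
{
    int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);   /* illustrative device node */
    struct gbm_device *gbm = gbm_create_device(fd);

    /* In a real compositor this list would come from the KMS per-plane
     * modifier advertisement mentioned above, not be hard-coded. */
    const uint64_t modifiers[] = {
        DRM_FORMAT_MOD_LINEAR,
        I915_FORMAT_MOD_X_TILED,
    };

    return gbm_surface_create_with_modifiers(gbm, 1920, 1080,
                                             GBM_FORMAT_XRGB8888,
                                             modifiers, 2);
}
```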

An atomic Weston, and its little helpers

Weston is currently being used as a testbed/showcase for the new GBM and KMS APIs. The large patchset adding atomic modesetting support to Weston is under review and being merged. In addition to numerous bugfixes found in the DRM backend during development, the atomic series delivers on the basic premise of atomic modesetting: that clients will be able to have their content displayed directly on hardware overlay planes, with no specific hardware knowledge required to achieve this.

Implementing this correctly and completely exposed a number of design issues with Weston’s DRM backend, which long predates atomic. The legacy and atomic KMS APIs are substantially different, and the premise at the time was that Weston would use a hardware-specific component to generate plane configurations for it.

Instead, the KMS atomic modesetting API provides an incremental approach: userspace builds up display state piece by piece, asking the kernel to validate each proposed configuration. Using this, Weston attempts to place on-screen objects into display planes one by one, which requires repeatedly proposing, modifying, and tearing down internal state objects. The resulting patchset is the largest development in the DRM backend since 1.0, and one which should substantially improve its stability, quality, and extensibility.
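
To make that test-then-commit flow concrete, here is a hedged sketch using libdrm's atomic API. It is a simplification rather than Weston's actual code, and assumes the relevant property IDs have already been looked up with drmModeObjectGetProperties():

```c
/* Sketch: trying to place a buffer on a KMS plane using the atomic API's
 * TEST_ONLY flag, so the kernel validates the configuration before we
 * commit it. Property IDs (FB_ID, CRTC_ID, ...) must be looked up via
 * drmModeObjectGetProperties(); here they are passed in by the caller. */
#include <stdbool.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

bool try_plane_assignment(int drm_fd, uint32_t plane_id,
                          uint32_t prop_fb_id, uint32_t prop_crtc_id,
                          uint32_t fb_id, uint32_t crtc_id)
{
    drmModeAtomicReq *req = drmModeAtomicAlloc();
    bool ok = false;

    drmModeAtomicAddProperty(req, plane_id, prop_fb_id, fb_id);
    drmModeAtomicAddProperty(req, plane_id, prop_crtc_id, crtc_id);

    /* Ask the kernel whether this state would work, without applying it. */
    if (drmModeAtomicCommit(drm_fd, req, DRM_MODE_ATOMIC_TEST_ONLY, NULL) == 0)
        ok = true;

    drmModeAtomicFree(req);
    return ok;
}
```

If the TEST_ONLY commit is rejected, the compositor simply leaves that surface to be composited on the GPU and moves on, which is exactly the incremental, propose-and-validate loop described above.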

Bringing this up on new hardware has worked almost flawlessly, thanks to the kernel’s helper library design. In the old legacy-API world of KMS, each driver implemented extremely varied semantics, and trying to run generic userspace for all but the most basic tasks was an exercise in frustration. With atomic drivers, by contrast, there is very little to go wrong: of the nine drivers I have tried, only one did not work out of the box with Weston, and even there the fix was to remove driver-specific code and move more work into the generic helpers.

The varying capabilities and implementations across platforms have long been a huge source of frustration for us, as they make it more difficult for our customers to port their offerings between platforms in response to commercial/logistical, performance, or other issues. The atomic work to make drivers as consistent as possible narrows this gap, and reduces the NRE investment required to change hardware platforms.

A reusable Weston

Speaking of large developments in Weston, its 2.0 version number was the result of developments in libweston, the API enabling external window managers and desktop environments to reuse our solid and complete core code. The original premise behind Weston was that compositors should be so small that everyone could write their own.

Unfortunately, experience has not borne this out: in order to deliver predictable repaint timings, support for mechanisms like dmabuf and modifiers, atomic modesetting and full use of hardware overlay planes, and so on, quite a lot of core code is required. However, tying Weston to one particular window manager or desktop environment would limit our scope and our reach.

The solution chosen was libweston: exposing Weston’s scene graph, protocol support, and hardware support as a library for external users. Some environments such as Orbital are already making use of libweston, and we hope to see more in the future.

Towards this end, Weston 2.0 contains the work of Armin Krezović, a Google Summer of Code 2016 student who worked tirelessly on backend and output configuration. His work allows the environment to have more control over the configuration and placement of monitors and outputs, which we will absolutely need in full desktop environments. We’re immensely grateful to Armin for his work throughout the summer, Google for their annual support of the program, and to Pekka and Kat for mentoring Armin and dealing with the organisational side of GSoC, respectively.

Ever onwards

But we’re not done yet. Following on from the atomic Weston and dmabuf-modifier work, we plan to continue Gustavo Padovan’s work bringing Android fences to mainline Linux, and bring explicit fencing support into Wayland. The support for this is beginning to land properly in Mesa and the kernel, and we plan to make this available to Wayland, for direct clients as well as through the Vulkan WSI.

GDC also brought a new Vulkan release, support for which is being worked on in Mesa by Jason Ekstrand of Intel and Chad Versace of Google. Of particular note for window systems was the long-awaited external memory/image support, making it possible to write Vulkan-based compositors for the first time.

Collabora was also very pleased to announce our involvement in the OpenXR working group, as we explore the AR/VR space together with Khronos and our partners in the industry; watch this space.

Elie Tournier also joined our team, bravely moving to Cambridge during the darkest months of the year. You may recognise the name from his GSoC work developing a pure GLSL library to support double-precision floating-point (FP64) operations on GPUs which otherwise lack native support. Elie has been working with us to bring this to upstream Mesa, integrating it with Mesa’s low-level GLSL IR / NIR to provide transparent support, rather than requiring explicit app support. Welcome Elie!
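
As a rough, C-flavoured illustration of the kind of work such an emulation layer has to do (this is not the GLSL library itself, just the general soft-float idea), a 64-bit double can be carried as two 32-bit words, with its sign, exponent, and mantissa pulled apart using integer operations alone:

```c
/* Illustration only (not the actual GLSL code): the soft-float idea of
 * carrying a 64-bit double as two 32-bit words and manipulating its
 * sign/exponent/mantissa fields with integer operations, which is the
 * sort of thing an FP64 emulation layer must do on GPUs without native
 * double-precision support. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    double value = 3.5;
    uint64_t bits;
    memcpy(&bits, &value, sizeof bits);          /* reinterpret, no conversion */

    uint32_t hi = (uint32_t)(bits >> 32);        /* sign + exponent + mantissa top */
    uint32_t lo = (uint32_t)bits;                /* mantissa bottom */

    uint32_t sign     = hi >> 31;
    uint32_t exponent = (hi >> 20) & 0x7FF;      /* 11-bit biased exponent */
    uint64_t mantissa = (((uint64_t)(hi & 0xFFFFF)) << 32) | lo;

    printf("sign=%" PRIu32 " exponent=%" PRIu32 " mantissa=0x%" PRIx64 "\n",
           sign, exponent, mantissa);
    return 0;
}
```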

The X.Org Foundation is also participating in GSoC again this year, offering students the chance to work all throughout the graphics stack (X.Org itself, Mesa, Wayland, and DRM). We look forward to welcoming even more students – and new developers – into the fold.

We’re here to help

The world of open-source graphics can be confusing and, despite some recent stellar efforts, somewhat underdocumented. We pride ourselves on our knowledge of the landscape – including adjacent areas such as multimedia, core kernel development, and hardware enablement – and are always happy to discuss it with you. If you would like to discuss any work, or are even just seeking advice, please contact us: our friendly and knowledgeable staff are standing by to take your call.

As the Software Supply Chain Shifts, Enterprise Open Source Programs Ramp Up

Today’s software supply chain is fundamentally different than it was only a few years ago, and open source programs at large enterprises are helping to drive that trend. According to Sonatype’s 2016 State of the Software Supply Chain report, enterprises are both turning to existing open source projects to decrease the amount of code they have to write and increasingly creating their own open source tools.

Countless organizations have rolled out professional, in-house programs focused on advancing open source and encouraging its adoption. Some of the companies doing so may surprise you. Here are a few such companies that may not be top-of-mind when thinking about engagement with open source:

Walmart’s Open Source Mojo Spreads Out. Is Walmart a major player in open source? It absolutely is, and the company is expanding its open source engagement in 2017. The company’s Walmart Labs division, located in Silicon Valley, has released a slew of open source projects, including a notable one called Electrode, a product of Walmart’s migration to a React/Node.js platform. It gives developers templated code for building universal React apps, along with modules they can leverage to add functionality to Node apps. It’s also a key part of how Walmart’s site runs – and that site runs at scale.

Additionally, after more than two years of development and testing within Walmart, the company has announced that OneOps is available to the open source community. If you have any type of cloud deployment, take note. According to Walmart: “OneOps is a cloud management and application lifecycle management platform that developers can use to both develop and launch new products faster, and more easily maintain them throughout their entire lifecycle. OneOps enables developers to code their products in a hybrid, multi-cloud environment. This means they can test and switch between different cloud providers to take advantage of better pricing, technology and scalability – without being locked into one cloud provider.”

General Electric? Yes, General Electric. Odds are that General Electric isn’t the first company that you think of when it comes to moving the open source needle, but GE is actually a powerful player in open source. GE Software has an “Industrial Dojo” run in collaboration with the Cloud Foundry Foundation to strengthen its efforts to solve the world’s biggest industrial challenges. According to GE: “The Cloud Foundry Dojo program allows software developers to immerse themselves in open source projects to quickly learn the inner workings of the core technology and the unique agile development environment, as well as recommended methodologies for contributing code.” GE also works with the Cloud Foundry community to develop and contribute open source code to the Cloud Foundry Foundation that will route all industrial messaging protocols.

Telecoms are Opening Up. A number of telecom companies are rapidly increasing their engagement with the open source community. Ericsson, for example, regularly contributes projects and is a champion of several key open source initiatives. You can browse through the company’s open source hub here. The company is also one of the most active telecom-focused participants in the effort to advance open NFV and other open technologies that can eliminate historically proprietary components in telecom technology stacks. Ericsson works directly with The Linux Foundation on these efforts, and engineers and developers are encouraged to interface with the open source community.

Other organizations in the telecom space that are deeply involved with NFV and open source projects include AT&T, Bloomberg LP, China Mobile, Deutsche Telekom, NTT Group, SK Telecom, and Verizon.

In a previous post, we also looked at growing enterprise open source programs from Microsoft, Netflix, Facebook, and Google. Many other organizations have active internal open source programs, and we will provide additional coverage of the most notable examples.

Learn more in the Fundamentals of Professional Open Source Management training course from The Linux Foundation. Download a sample chapter now.

A Hacker’s Guide to Kubernetes Networking

This post is the first in a series. I’ll share how Kubernetes and the Container Network Interface work, along with some hacking tricks for learning their internals and manipulating them. Future posts will cover high-performance storage and inter-process communications (IPC) tricks we use with containers.

Container Networking Basics

Containers use Linux partitioning capabilities called Cgroups and Namespaces. Container processes are mapped to network, storage and other namespaces. Each namespace “sees” only a subset of OS resources to guarantee isolation between containers.
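
As a minimal sketch of that isolation (assuming root, or at least CAP_NET_ADMIN, and the iproute2 tools installed), the program below moves itself into a brand-new network namespace and then lists its interfaces; only an unconfigured loopback device shows up, because the host’s interfaces live in a different namespace:

```c
/* Sketch: a process that detaches itself into a fresh network namespace
 * and then lists its interfaces. Inside the new namespace only "lo" is
 * visible, and it is down. Requires CAP_NET_ADMIN (e.g. run as root). */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    if (unshare(CLONE_NEWNET) != 0) {            /* leave the host's netns */
        perror("unshare(CLONE_NEWNET)");
        return EXIT_FAILURE;
    }

    /* Show what this namespace can "see": just an unconfigured loopback. */
    execlp("ip", "ip", "link", "show", (char *)NULL);
    perror("execlp");
    return EXIT_FAILURE;
}
```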

Read more at The New Stack

NPM or Yarn? Node.js Devs Pick Their Package Manager

Mere months since it was open-sourced by Facebook, Yarn has NPM on the run. The upstart JavaScript package manager has gained a quick foothold in the Node.js community, particularly among users of the React JavaScript UI library.

Known for faster installation, Yarn gives developers an improved ability to manage code dependencies in their Node.js projects, proponents say. It features a deterministic install algorithm and a lockfile capability that lists exact version numbers of all project dependencies. 

Read more at InfoWorld

What’s a Linked List, Anyway? [Part 1]

Regardless of which language we start coding in, one of the first things that we encounter is data structures, which are the different ways that we can organize our data; variables, arrays, hashes, and objects are all types of data structures. But these are still just the tip of the iceberg when it comes to data structures; there are a lot more, some of which start to sound super complicated the more that you hear about them.

One of those complicated things for me has always been linked lists. I’ve known about linked lists for a few years now, but I can never quite keep them straight in my head. I only really think about them when I’m preparing for (or sometimes, in the middle of) a technical interview, and someone asks me about them. 
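
For reference, the core idea is small enough to fit in a few lines of C: each node holds a value plus a pointer to the next node, and adding to the front of the list is a constant-time pointer swap. A minimal sketch (not production code; it leaks its nodes on exit for brevity):

```c
/* A minimal singly linked list: each node stores a value plus a pointer to
 * the next node, and prepending is O(1) because it only rewires the head. */
#include <stdio.h>
#include <stdlib.h>

struct node {
    int value;
    struct node *next;
};

/* Insert a new node at the front of the list and return the new head. */
static struct node *prepend(struct node *head, int value)
{
    struct node *n = malloc(sizeof *n);
    n->value = value;
    n->next = head;
    return n;
}

int main(void)
{
    struct node *list = NULL;
    for (int i = 1; i <= 3; i++)
        list = prepend(list, i);                 /* list is now 3 -> 2 -> 1 */

    for (struct node *n = list; n != NULL; n = n->next)
        printf("%d ", n->value);
    printf("\n");
    return 0;
}
```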

Read more at Vaidehi Joshi

System Hardening with Ansible

The DevOps pipeline is constantly changing; therefore, relevant security controls must be applied contextually.

We want to be secure, but I think all of us would rather spend our time developing and deploying software. Keeping up with server updates and all of the other security tasks is like cleaning your home – you know it has to be done, but you really just want to enjoy your clean home. The good news is you can hire a “service” to keep your application security up-to-date, giving you more time to develop.

At the recent All Day DevOps conference, Akash Mahajan (@makash), a Founder/Director at Appsecco, discussed how to harden your system’s security. In addition to his role at Appsecco, Akash is also involved as a local leader with the Open Web Application Security Project (OWASP).

Read more at DZone

Keynote: Community Software Powers the Machine by Mark Atwood

HPE’s Mark Atwood describes some parallels between how open source software is developed and how the science fiction community grew.

 

6 Reasons Why Open Source Software Lowers Development Costs

In some organizations, faster development is the primary motivation for using Open Source Software (OSS). For others, cost savings or flexibility is the most important factor.

Last week, we detailed how OSS speeds development. Now let’s explore how open source software reduces development costs.

6 reasons OSS is lower cost

Using OSS can significantly reduce development costs in a number of proven ways. It can be much less expensive to acquire than commercially-licensed software or in-house developed software. These cost savings start with acquisition, but extend to deployment, support, and maintenance. Using open source software:

1. Saves 20-55% over commercial solutions, according to our Linux Foundation Consulting clients.

2. Avoids functionality overkill and bundling – Many proprietary products have an overload of capabilities that clients rarely use, need, or even want. Often, they’re bundled, so that they must be paid for anyway.

3. Avoids unwieldy closed system deployments – OSS eliminates the costly pricing games and traps that come with commercial sales and negotiations.

4. Helps prevent vendor lock-in – Even where commercial OSS vendors provide a channel to deliver and support Open Source, customers have the freedom to switch vendors or even drop commercial support entirely, without changing the application or code in use.

5. Avoids proprietary solutions consulting traps – OSS also helps with consulting, training and support costs because there is no exclusive access to the technology. You can often multi-source support, or even receive support from a vibrant community of developers who are actually working with the code on a daily basis.

6. Benefits from ongoing community support – Active communities often provide higher quality support than commercial support organizations, and what’s more, community support is free.

Whether your organization chooses OSS for its speed of development, lower costs, flexibility, or because it keeps you on the leading edge of technology, OSS provides a competitive advantage.

Next up in this series, we’ll discuss why open source software is more flexible. You can also download the entire series today in our Fundamentals of Professional Open Source Management sample chapter.


Read more:

What Is Open Source Software?

Using Open Source Software to Speed Development and Gain Business Advantage

Why Using Open Source Software Helps Companies Stay Flexible and Innovate

Community Software, Science Fiction, and The Machine

Not many presentations can start with a video co-promoting a new computer and the latest Star Trek movie, but Mark Atwood, Director of Open Source Engagement at HP Enterprise, started his LinuxCon Europe keynote with a video about The Machine and Star Trek Beyond.

The Machine uses a new kind of physics for computation and data storage, allowing it to be very fast, energy efficient, and agile. The Machine runs Linux, and Atwood says that “the best way to promote the use of any sort of new technology is to make it open source.”

There are quite a few parallels between how open source software is developed and the science fiction community. Atwood talked about how they even share some big milestone years: Linux is 25 years old; Star Trek is 50; and the genre of science fiction turned 90. 

The story starts in the early 20th century when high technology meant vacuum tubes and wireless radio and the field was full of passionate hobbyists building on each other’s ideas. A man in New York City named Hugo Gernsback helped facilitate this discussion via articles and letters from readers in his magazines. Atwood points out that they were essentially open sourcing the conversation through a moderated discussion forum with a one-month cycle time. 

In 1926, Gernsback started a new magazine, Amazing Stories, thus creating the genre of science fiction and the beginning of science fiction fandom. This magazine was run like his technology magazines: he would publish stories, and later issues would contain stories written in response to earlier ones, along with letters from readers discussing them – again, ideas built on ideas. The people in this community gathered together in 1939 for the world’s first science fiction convention. Also in 1939, on the opposite coast, two men who’d grown up reading those technology and science fiction magazines founded Hewlett Packard out of their garage in Palo Alto, which, as Atwood points out, helped create the very idea of Silicon Valley.

Building ideas on top of ideas is at the core of how open source software and science fiction came to be what they are today. Atwood says that “science fiction is a way to have a conversation about the kind of world that we can make; the kind of world that can be made of the technology that we have and that we’re building and the world we can make out of our various ideas for organizing people.”

To get the full experience of Atwood’s talk, you should watch the video!

Interested in speaking at Open Source Summit North America on September 11-13? Submit your proposal by May 6, 2017.

Not interested in speaking but want to attend? Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the all-access attendee registration price. Register now to save over $300!

With Azure Container Service, Microsoft Works to Make Container Management Boring

Earlier this week, Microsoft made the Kubernetes container orchestration service generally available on Azure Container Service, alongside the other predominant container orchestration engines Docker Swarm and Mesosphere’s Data Center Operating System (DC/OS). The move is one more step in building out the service, Kubernetes co-founder Brendan Burns told The New Stack.

Burns moved from Google to Microsoft seven months ago to run ACS with the vision of turning it into “a really managed service” that can deliver not just tools for working with containers, but work as a whole Containers-as-a-Service (CaaS) platform. … As the technology matures, the emphasis shifts from how you use containers to what you use them for, he pointed out. 

Read more at The New Stack