
Last Chance to Submit Your Talk for Open Source Summit and ELC Europe

Submit your proposal soon to speak at Open Source Summit and Embedded Linux Conference (ELC) taking place in Prague, Czech Republic, October 23-25, 2017. The deadline for proposals is Saturday, July 8, 2017. Don’t miss out on this opportunity to share your expertise and experience at these events.

At the Open Source Summit, you have the chance to learn, collaborate, and share information along with 2,000 peers and community members. We encourage the open collaboration and discussions that are necessary to keep Linux successful. And, if you’re interested in making a difference in Linux, open cloud, and open source, submit a speaking proposal soon!

For 12 years, ELC has had the largest collection of sessions dedicated exclusively to embedded Linux and embedded Linux developers. At ELC, you can collaborate with peers on all aspects of embedded Linux from hardware to user space development. Submit your proposal and join the conversation.

We invite you to share your creative ideas, enlightening case studies, best practices, or technical knowledge at these exciting events.

Submit a Speaking Proposal for OS Summit Europe

Submit a proposal for ELC Europe

Linux Foundation events are an excellent way to get to know the community and share your ideas and the work that you are doing. You don’t need to be a core kernel maintainer or a chief architect to submit a proposal. In fact, we strongly encourage first-time speakers to submit talks for all of our events.

Our events are working conferences intended for professional networking and collaboration, and we work closely with our attendees, sponsors, and speakers to help keep Linux Foundation events professional, welcoming, and friendly.

Visit the OS Summit CFP page or the ELC Europe CFP page for suggested topics, submission guidelines, and other useful information. Submissions must be received by 11:59pm PST on Saturday, July 8, 2017.

Practical Networking for Linux Admins: IPv4 and IPv6 LAN Addressing

We’re cruising now. We know important basics about TCP/IP and IPv6. Today we’re learning about private and link-local addressing. Yes, I know, I promised routing. That comes next.

Private Address Spaces

IPv4 and IPv6 both have private address spaces. These are not meant to leave your private network, and you may use them without requesting an assignment from an official authority, such as your Internet service provider or, if you’re a bigwig, a direct allocation from a regional Internet registry.

IPv4 Private Addresses

You’re probably familiar with IPv4 private addressing, as we’ve all been using it since forever. There are four address blocks reserved for private and link-local use:

  • 10.0.0.0/8 (10.0.0.0 to 10.255.255.255), 16,777,216 addresses
  • 172.16.0.0/12 (172.16.0.0 to 172.31.255.255), 1,048,576 addresses
  • 192.168.0.0/16 (192.168.0.0 to 192.168.255.255), 65,536 addresses
  • 169.254.0.0/16 (169.254.0.0 to 169.254.255.255), 65,536 addresses

Let’s talk about the last one first, 169.254.0.0/16, because I find it annoying. I never warmed up to it because it just gets in my way. That is the link-local auto-configuration block, handled on Linux by Zeroconf implementations such as Avahi. Microsoft Windows and some Linux distributions use it, so when you don’t assign an IP address to a network interface and it doesn’t receive one via DHCP, it gets a 169.254.0.0/16 address anyway. What’s the point? Supposedly easy ad hoc networking, and enabling communication with a host when other means of address assignment fail. Link-local addresses are accessible only within their own broadcast domains and are not routable. I’ve been disabling it since forever without missing it. If you find it useful, perhaps you could share a comment on how you use it.

The other three address spaces are routable, and nothing in the protocol stops them from leaving your LAN. That is why most firewall tutorials include rules to stop them from leaving your network, and most ISPs filter them as well.

The four decimal octets in IPv4 addresses are conversions from binary. This is a fun topic for another day; you might investigate it because IPv4 addressing makes more sense in binary. For everyday use, this is what you need to know:

Each octet is 8 bits, and the total is 32 bits.

10.0.0.0/8 means the subnet mask is 8 bits, 255.0.0.0. You cannot change the first octet, 10, which is the network ID. The remaining three octets are the host ID, 24 bits, and you can change them however you like. Each octet has possible values ranging from 0-255. 10.0.0.0 and 10.255.255.255 are reserved and you cannot use them for host addresses, so your usable addresses are 10.0.0.1 to 10.255.255.254.

172.16.0.0/12 has a 12-bit subnet mask, 255.240.0.0, which does not divide up neatly on an octet boundary. 172.16.0.0 and 172.31.255.255 are reserved and you cannot use them, so your usable addresses are 172.16.0.1 to 172.31.255.254.

192.168.0.0/16 has a 16-bit subnet mask, 255.255.0.0. Again, the first and last addresses are reserved, so your usable addresses are 192.168.0.1 to 192.168.255.254.

So, you ask, just what are the first and last addresses reserved for? The first address identifies your subnets, for example 192.168.1.0. The last address is the broadcast address. Suppose your subnet is 192.168.1.0/24, then 192.168.1.255 is the broadcast address. These broadcasts go to every host on the network segment, hence the term “broadcast domain”. This is how DHCP and routing tables are advertised.
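
If you don’t feel like doing the binary arithmetic by hand, you can sanity-check these ranges from the command line with Python’s standard ipaddress module. This is just a convenient illustration, not anything you need for day-to-day admin work; any Python 3 installation will do:

$ python3 -c "
import ipaddress
for prefix in ('10.0.0.0/8', '172.16.0.0/12', '192.168.0.0/16'):
    net = ipaddress.ip_network(prefix)
    # net[0] is the network address, net[-1] the broadcast address
    print(prefix, net.netmask, 'usable:', net[1], '-', net[-2])
"
10.0.0.0/8 255.0.0.0 usable: 10.0.0.1 - 10.255.255.254
172.16.0.0/12 255.240.0.0 usable: 172.16.0.1 - 172.31.255.254
192.168.0.0/16 255.255.0.0 usable: 192.168.0.1 - 192.168.255.254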

IPv6 Private Addresses

IPv6 private link-local addresses, for whatever reason, are not pebbles in my shoes the way IPv4 link-local addresses are. Maybe because they’re so alien they bounce off my brain. And I have no choice, as the IPv6 protocol requires these. You can view yours with either the ip or ifconfig command:

$ ifconfig wlan0
    wlan0 Link encap:Ethernet  HWaddr 9c:ef:d5:fe:8f:20  
          inet addr:192.168.0.135  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::b07:5c7e:2e69:9d41/64 Scope:Link
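
The ip command shows the same thing. The output looks something like this (trimmed; the interface index and flag details will differ on your system):

$ ip -6 addr show dev wlan0
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 fe80::b07:5c7e:2e69:9d41/64 scope link
       valid_lft forever preferred_lft forever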

These fall into the fe80::/10 range. You can ping your computer:

$ ping6 -I wlan0 fe80::b07:5c7e:2e69:9d41
PING fe80::b07:5c7e:2e69:9d41(fe80::b07:5c7e:2e69:9d41) 
from fe80::b07:5c7e:2e69:9d41 wlan0: 56 data bytes
64 bytes from fe80::b07:5c7e:2e69:9d41: 
icmp_seq=1 ttl=64 time=0.032 ms

With ping6 and link-local addresses, you must specify your interface name, even if it is the only one, because the same fe80::/10 range exists on every link. You can discover your LAN neighbors:

$ ping6 -c4 -I wlan0 ff02::1
PING ff02::1(ff02::1) from fe80::b07:5c7e:2e69:9d41 
wlan0: 56 data bytes
64 bytes from fe80::b07:5c7e:2e69:9d41: 
icmp_seq=1 ttl=64 time=0.078 ms
64 bytes from fe80::4066:50ff:fee7:3ac4: 
icmp_seq=1 ttl=64 time=20.7 ms (DUP!)
64 bytes from fe80::9eef:d5ff:fefe:17c: 
icmp_seq=1 ttl=64 time=27.7 ms (DUP!)

Cool, I have two neighbors. ff02::1 is a special link-local multicast address for discovering all link-local hosts. man ping tells us that DUP! means “ping will report duplicate and damaged packets. Duplicate packets should never occur, and seem to be caused by inappropriate link-level retransmissions.” In this context, it’s nothing to worry about, so I ping my neighbors:

$ ping6 -c4 -I wlan0 fe80::4066:50ff:fee7:3ac4
64 bytes from fe80::4066:50ff:fee7:3ac4: 
icmp_seq=1 ttl=64 time=4.72 ms

How is it that we can ping our LAN neighbors on their link-local addresses, when we couldn’t ping the 2001:0DB8::/32 addresses we created in last week’s installment? Because the routing is automatic. You won’t see IPv6 routes with a plain route command; use the ip command instead:

$ ip -6 route show
fe80::/64 dev wlan0  proto kernel  metric 256  pref medium

Pretty slick. Come back next week, and we will really do some routing.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Linux Rolls Out to Most Toyota and Lexus Vehicles in North America

At the recent Automotive Linux Summit, held May 31 to June 2 in Tokyo, The Linux Foundation’s Automotive Grade Linux (AGL) project made one of the biggest announcements in its short history: The first automobile with AGL’s open source, Linux-based Unified Code Base (UCB) infotainment stack will hit the streets in a few months.

In his ALS keynote presentation, AGL director Dan Cauchy showed obvious pride as he announced that the 2018 Toyota Camry will offer an in-vehicle infotainment (IVI) system based on AGL’s UCB when it debuts to U.S. customers in late summer. Following the debut, AGL will also roll out to most Toyota and Lexus vehicles in North America.

AGL’s first design win is particularly significant in that Toyota owns 14 percent of the U.S. automotive market. The Japanese automaker recently eclipsed GM as the world’s leading carmaker with an 11 percent global share.

The announcement came around the same time that Google tipped plans to expand its Android Auto project, which links mobile devices with IVI systems, into a comprehensive Android Automotive IVI platform. The Android Automotive stack will first appear on Audi and Volvo cars, with the first Volvo model expected in two years.

If all goes to plan, Toyota’s 2018 Camry rollout will occur just over three years after AGL released its initial automotive stack based on Tizen IVI. This was followed a year later by the first AGL Requirements Specification, and then the UCB 1.0, an overhauled version based on Yocto Project code instead of Tizen.

Rapid development

That may seem like a long road compared to many open source projects, but it’s remarkably rapid for the comparatively sluggish automotive market. During that same period, AGL has also racked up an impressive roster of members, including automotive manufacturers such as Ford, Honda, Jaguar Land Rover (JLR), Mazda, Mitsubishi, Nissan, Subaru, and Toyota. Other members include most of the major Tier 1 system integrators, as well as a growing list of software and services firms.

In his keynote, Cauchy seemed genuinely surprised at how quickly AGL has grown. “Back in 2015 at our first meeting, we had four core members — Honda, JLR, Nissan, and Toyota — and today we have 10 OEM automotive manufacturers,” said Cauchy.  “In 2015, we had 55 members, and we now have 98. We’re seeing a whole range of companies including middleware and services developers, voice recognition and navigation companies, and telecom companies that want to be part of the connected car. We have over 750 developers on the primary AGL mailing list.”

The pace is faster than what Cauchy experienced as a GENIVI Alliance board member and chairman of GENIVI Compliance, back when he was developing an IVI platform at MontaVista. GENIVI eventually signed up a similar lineup of companies for its Linux-based, mostly open source IVI standard, but the momentum started flagging when it became clear that the standard was not solid enough to permit significant reuse of code from vendor to vendor.

AGL’s UCB is the specification

Many of the same companies have since joined AGL, which integrates some GENIVI code. The key difference is that instead of defining a specification, AGL’s UCB is the specification. Everyone agrees to use the same Linux distribution and middleware, while leaving the top layers customizable so each manufacturer can differentiate.

“AGL exists because the automakers realize they’re in the software business,” Cauchy told the ALS attendees. “AGL is a code first organization — we believe that specifications lead to fragmentation. Today, you have Microsoft and QNX and multiple flavors of Linux, and there’s no software reuse.”

As AGL Community Manager Walt Miner explained in a February presentation at the Embedded Linux Conference, GENIVI never pushed the specification far enough to be useful. “With specifications, multiple vendors can claim compliance, but you end up with different platforms with slightly different code,” said Cauchy. “We’re about building a single platform for the whole industry so you can port your software once, and it’s going to work for everyone.”

The ability to reuse code leads to faster time to market, which will soon enable new IVI systems to roll out every year rather than every three years, said Cauchy.  As a result, consumers will be less tempted to navigate and play music from a cell phone placed on the dashboard with all the safety hazards that implies.

New model

“We want to break that old supply chain model with a new model where the platform survives and evolves,” said Cauchy. “This will bring the industry on par to what consumers are expecting, which is more like cell phones.”

Cauchy went on to discuss the evolution of UCB last year from Agile Albacore to Brilliant Blowfish and then Charming Chinook. “The industry can now rely on us to have a release every six months, so companies can make product and deployment plans.”

Cauchy also announced release candidate 1 of Daring Dab, which will be available in a final release on July 22. As Miner explained at ELC, Daring Dab will tap Yocto Project 2.2 code, as well as secure signaling and notifications, smart device link, and application framework improvements such as service binders for navigation, speech, browser, and CAN.  An Electric Eel release will follow in six months with features like back ends for AGL reference apps working in both Qt 5 and HTML5.

Electric Eel may also include the first implementation of a headless telematics profile. “We’re redefining our architecture in layers so we can properly support a headless profile that runs on a lower performance chip and doesn’t need a display or infotainment,” said Cauchy. “We want to build our requirements out for ISO 26262 functional safety compliance to see if we can use the Linux kernel.”

Beyond that, AGL will move into instrument cluster and head-up displays, followed by ADAS “and eventually autonomous driving,” said Cauchy. “We want to be in every processor and every function in the car. This is really taking off.”

You can watch the complete video below:

https://www.youtube.com/watch?v=I8awghFEGS4?list=PLbzoR-pLrL6pYNCtxNmF7rG0I2d6Sd9Lq

Learn more from Automotive Linux Summit, which connects the community driving the future of embedded devices in the automotive arena.  Watch the videos from the event. 

Why Do Open Source Projects Fork?

Open source software (OSS) projects start with the intention of creating technology that can be used for the greater good of the technical, or global, community. As a project grows and matures, it can reach a point where the goals of or perspectives on the project diverge. At times like this, project participants start thinking about a fork.

Forking an OSS project often begins as an altruistic endeavor, where members of a community seek out a different path to improve upon the project. But the irony of it is that forking is kind of like the OSS equivalent of the third rail in the subway: You really don’t want to touch it if you can help it.

Open source software developer David A. Wheeler likens forking to a parliamentary no-confidence vote or a labor strike. 

Read more at The New Stack

The Evolving Role of Product Management

This post is an excerpt from Chapter 1 of “Product Leadership.” Read the full book on Safari.

As Marty Cagan, founding partner of Silicon Valley Product Group and a 30-year veteran of product management, puts it, “The job of a product manager is to discover a product that is valuable, usable, and feasible.” Similarly, co-author Martin Eriksson, in his oft-quoted definition of product management, calls it the intersection between business, user experience, and technology (see Figure 1; only a product manager would define themselves in a Venn diagram!). A good product manager must be experienced in at least one, passionate about all three, and conversant with practitioners of all three. …

Perhaps most importantly, the product manager is the voice of the customer inside the business, and thus must be passionate about customers and the specific problems they’re trying to solve. This doesn’t mean the product manager should become a full-time researcher or a full-time designer, but they do need to make time for this important work. 

Read more at O’Reilly

Kubernetes as a Service Offers Orchestration Benefits for Containers

Kubernetes presumes deep private cloud knowledge. If your organization isn’t there yet, evaluate the supported Kubernetes distributions from third-party as-a-service vendors. Organizations can offload some of the heavy lifting to deploy containers with Kubernetes as a service from a hosting company.

Docker containers have a steep learning curve because of the intrinsic paradigm shift from hardware virtualization to OS-level abstraction. Developers and IT pros know how to interact with line-of-business applications through VMs, but Docker packages an application with all its dependencies into portable containers that share an OS and run on different host servers with different hardware platforms.

Read more at TechTarget

Cloud Foundry Makes Its Mark on the Enterprise

Cloud Foundry falls under the “platform-as-a-service” (PaaS) umbrella, which essentially makes it the PaaS counterpart to OpenStack’s “infrastructure-as-a-service.” The promise of Cloud Foundry is that it abstracts all of the grunt work of running the infrastructure and more high-level services like databases away and gives developers a single platform to write their applications for.

The premise here is that what sits underneath Cloud Foundry shouldn’t have to matter to the developer. That can be an on-premises OpenStack cloud or a public cloud like AWS, Google Cloud Platform, IBM Bluemix or Azure. This also means that companies get the ability to move their applications from one cloud to another (or use multiple clouds simultaneously) without having to adapt their code to every cloud’s peculiarities. As Cloud Foundry Foundation CTO Chip Childers told me, the project wants to make developers happy (and productive).

Read more at TechCrunch

Linux Foundation Announces New Project for Software-Defined Networks

The Linux Foundation is announcing a new open-source project designed to bring automated protection to software-defined networks. The Open Security Controller (OSC) Project is a new software-defined security orchestration solution with a focus on multi-cloud environments.

“Software-defined networks are becoming a standard for businesses, and open source networking projects are a key element in helping the transition, and pushing for a more automated network,” said Arpit Joshipura, general manager of Networking and Orchestration at The Linux Foundation. “Equally important to automation in the open source community is ensuring security. The Open Security Controller Project touches both of these areas. We are excited to have this project join The Linux Foundation, and look forward to the collaboration this project will engender regarding network security now and in the future.”

Read more at SD Times

Xen Related Work in the Linux Kernel: Current and Future Plans

The Linux kernel contains a lot of code to support Xen. This code isn’t just meant to optimize Linux to run as a virtualized guest. As a type 1 hypervisor, Xen relies a lot on the support of the operating system running as dom0. Although other operating systems can be used as dom0, Linux is the most popular dom0 choice — due to its widespread use and for historical reasons (Linux was chosen as dom0 in the first Xen implementation). Given this, a lot of the work of adding new functionality to Xen is done in the Linux kernel.

In this article, I’ll cover some highlights of Xen-related work that has been done in the past year and what’s expected in the near future, as well as a few best practices learned along the way. This post will be helpful for anyone who is interested in Xen Project technology and its impact on the Linux kernel.

History of Xen support in the Linux kernel

When the Xen Project was released in 2003, it was using a heavily modified Linux kernel as dom0. Over the years, a lot of effort has gone into merging those modifications into the official Linux kernel code base. And, in 2011, this goal was achieved.

However, because some distributions — like SUSE’s SLE — had included Xen support for quite some time, they had built up another pile of patches for optimizing the Linux kernel to run as dom0 and as a Xen guest. For the past three years, it has been my job to try to merge those patches into the upstream Linux kernel. With Linux kernel 4.4, we finally made it possible to use the upstream kernel, without any Xen-specific patches, as the base for SLE.

The large number of patches needed in the Linux kernel stems from the primary design goal of Xen. It was introduced at a time when x86 processors had no special virtualization features, and Xen tried to establish an interface making it possible to run completely isolated guests on x86 with bare-metal-like performance.

This was possible only by using paravirtualization. Instead of trying to emulate the privileged instructions of the x86 processor, Xen-enabled guests had to be modified to avoid those privileged instructions and use calls into the hypervisor when a privileged operation was unavoidable. This, of course, had a large impact on the low-level operating system, leading to the large number of patches. Basically, the Linux kernel had to support a new architecture.

Although they still have some advantages over fully virtualized guests for some workloads, paravirtualized guests are a little bit problematic from the kernel’s view:

  • The pvops framework they require limits the performance of the same kernel when running on bare metal.

  • Introducing new features touching this framework is more complicated than it should be.

With virtualization support in x86 processors available for many years now, there is an ongoing campaign to move away from paravirtualized domains to hardware virtualized ones. To get rid of paravirtualized guests completely, a new guest mode is needed: PVH. Basically, PVH mode is like a fully virtualized guest but without emulation of legacy features like BIOS. Many legacy features of fully virtualized guests are emulated via a qemu process running in dom0. Getting rid of using those legacy features will avoid the need for the qemu process. 

Full support of PVH will enable dom0 to run in this mode. dom0 can’t be run fully virtualized, as this would require legacy emulation delivered by the qemu process in dom0 for an ordinary guest. For dom0, this would raise a chicken and egg problem. More on PVH support and its problems will be discussed later.

Last Year with Xen and the Linux Kernel

So, what has happened in the Linux kernel regarding Xen in the last year? Apart from the ongoing correction of bugs, little tweaks, and adaptations to changed kernel interfaces, the main work has happened in the following areas:

  • PVH: After a first test implementation of PVH, the basic design was modified to use the fully virtualized interface as a starting point and to avoid the legacy features.

This has led to a clean model requiring only a very small boot prologue used to set some indicators for avoiding the legacy features later on. The old PVH implementation was removed from the kernel and the new one has been introduced. This enables the Linux kernel to run as a PVH guest on top of Xen. dom0 PVH support isn’t complete right now, but we are progressing.

  • Restructuring to be able to configure a kernel with Xen support but without paravirtualized guest support: This can be viewed as a first step to finally get rid of a major part of the pvops framework. Today, such a kernel would be capable of running as a PVH or fully virtualized guest (with some paravirtualized interfaces like paravirtualized devices), but not yet as dom0.

  • ARM support: There has been significant effort with Xen on ARM (both 32- and 64-bit platforms), for example, support for guests with a different page size than dom0.

  • New paravirtualized devices: New frontend/backend drivers have been introduced or are in the process of being introduced, such as PV-9pfs and a PV socket implementation.

  • Performance of guests and dom0: This has been my primary area of work over the past year. In the following, I’ll highlight two examples along with some background information.

Restructuring of the xenbus driver

As a type 1 hypervisor, Xen has a big advantage over a type 2 hypervisor: It is much smaller; thus, the probability of the complete system failing due to a software error is smaller. This, however, only holds as long as no other component, like today’s dom0, is a single point of failure.

Given this, I’m trying to add features to Xen that disaggregate it into redundant components by moving essential services into independent guests (e.g., driver domains containing the backends of paravirtualized devices).

One such service running in dom0 today is the Xenstore. Xenstore is designed to handle multiple outstanding requests. It is possible to run it in a “xenstore domain” independent from dom0, but this configuration wasn’t optimized for performance up to now.

The reason for this performance bottleneck was the xenbus driver, which is responsible for communication with Xenstore running in another domain (with Xenstore running as a dom0 daemon, this driver is used only by guest domains or by the dom0 kernel accessing Xenstore). The xenbus driver could only handle one Xenstore access at a time. This is a major bottleneck because, during domain creation, there are often multiple processes trying to access Xenstore at once. This was fixed by restructuring the xenbus driver to allow multiple requests to the Xenstore without blocking each other more than necessary.

Finding and repairing a performance regression of fully virtualized domains

This problem kept me busy for the past three weeks. In some tests, comparing performance between fully virtualized guests with a recent kernel and a rather old one (pre-pvops era) showed that several benchmarks performed very poorly on the new kernel. Fortunately, the tests were very easy to set up and the problem could be reproduced really easily; for example, a single munmap() call for an 8kB memory area was taking twice as long on the new kernel as on the old one.

So, as a kernel developer, the first thing I tried was bisecting. Knowing the old and the new kernel versions, I knew Git would help me find the commit that made the performance bad. The git bisect process is very easy: you tell Git the last known good version and the first known bad version, then it will interactively do a binary search until the offending commit has been found.

At each iteration step, you have to test and tell Git whether the result was good or bad. In the end, I had a rather disturbing result: The commit meant to enhance the performance was to blame. And at the time the patch was written (some years ago), it was shown that it really did increase performance.
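
For anyone who hasn’t used it, the workflow looks roughly like this (the version tags below are placeholders, not the ones from the actual investigation):

$ git bisect start
$ git bisect bad v4.12            # placeholder: first version known to show the regression
$ git bisect good v3.16           # placeholder: last version known to be fast
# build, boot, and benchmark the commit git has now checked out, then report the result:
$ git bisect good                 # or "git bisect bad"; repeat until the offending commit is named
$ git bisect reset                # return to the original branch when done

If the test can be scripted, git bisect run takes care of the whole loop automatically.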

The patch in question introduced some more paravirtualized features for fully virtualized domains. So, the next thing I tried was to disable all paravirtualized features (this is easily doable via a boot parameter of the guest). Performance was up again. Well, for the munmap() call, but not for the rest (e.g., I/O handling). The overall performance of a fully virtualized guest without any paravirtualization feature enabled is disgusting due to the full emulation of all I/O devices, including the platform chipset. So, the only thing I learned was that one of the enabled paravirtualization features was making munmap() slow.

I tried modifying the kernel to be able to disable various paravirtualized features one at a time, hoping to find the one to blame. I suspected PV time handling to be the culprit, but didn’t have any success. Neither PV timers, PV clocksource, nor PV spinlocks were to blame.

Next idea: using ftrace to get timestamps of all the kernel functions called on the munmap() call. Comparing the timestamps of the test once run with PV features and once without should show the part of the kernel to blame. The result was again rather odd; the time seemed to be lost very gradually over the complete trace.
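
One minimal way to capture such a trace (not necessarily the exact setup used here) is the function_graph tracer exposed through tracefs, run as root; munmap-test is a placeholder name for the reproducer:

# cd /sys/kernel/debug/tracing            # or /sys/kernel/tracing on newer kernels
# echo function_graph > current_tracer    # records entry/exit timestamps per kernel function
# echo 1 > tracing_on
# ./munmap-test                           # placeholder for the reproducer
# echo 0 > tracing_on
# cat trace > /tmp/munmap-pv-on.txt       # repeat with PV features disabled and compare timings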

With perf, I was finally able to find the problem: It showed a major increase in TLB misses with the PV features enabled. It turned out that enabling PV features requires mapping a Xen memory page into guest memory. The way this was done in the kernel required the hypervisor to split up a large page mapping into many small pages. Unfortunately, that large page contained the main kernel page tables, which are accessed, for example, whenever kernel code is executed.
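
Something like the following perf stat invocation, using the generic TLB events, is enough to make such a difference visible (again, munmap-test is just a stand-in for the reproducer):

$ perf stat -r 10 -e dTLB-load-misses,dTLB-store-misses,iTLB-load-misses ./munmap-test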

Moving the mapping of the Xen page into an area already mapped via small pages solved the problem.

What’s to Come

The main topics for the near future will be:

  • PVH dom0 support: Some features like PCI passthrough are still missing. Another very hot topic for PVH dom0 support will be performance. Some early tests using a FreeBSD kernel able to run as a PVH dom0 indicate that creating domains from a PVH kernel will be much slower than from a PV kernel. The reason is the huge number of hypercalls needed for domain creation. Calling the hypervisor from PVH is an order of magnitude slower than from PV (the difference between VMEXIT/VMENTER and INT/IRET execution times of the processor). I already have some ideas on how to address this problem, but they would require some hypervisor modifications.

Another performance problem is backend operation, which again suffers from hypercalls being much slower on PVH. Again, a possible solution could be a proper hypervisor modification.

  • There are several enhancements regarding PV-devices (sound, multi-touch devices, virtual displays) in the pipeline. Those will be needed for a project using Xen as base for automotive IT.

This topic will be discussed during the Xen Project Developer and Design Summit happening in Budapest, Hungary from July 11 to 13. Register for the conference today.

Cloud Foundry’s Abby Kearns Talks Inclusion, Interfaces

At the end of an action-packed Cloud Foundry Summit Silicon Valley 2017 earlier this month, Abby Kearns, the Cloud Foundry Foundation executive director, sat down with TNS founder Alex Williams to ponder a very challenging year ahead. Kubo, the platform’s new lifecycle manager that integrates Kubernetes, is now production-ready. And while you’d expect such a move to draw attention and participation from Google, Microsoft coming closer into the fold, as the Foundation’s newest gold-level member, changes the ball game somewhat.

Listen now to “Look For More Open Source Extensions With Cloud Foundry This Year,” on The New Stack Makers podcast.

Read more at The New Stack