
Apache Geode Spawns ‘All Sorts of In-Memory Things’

Apache Geode is kind of like the six blind men describing an elephant. It’s all in how you use it, Nitin Lamba, product manager at Ampool, told a meetup group earlier this year.

Geode is a distributed, in-memory compute and data-management platform that elastically scales to provide high throughput and low latency for big data applications. It pools memory, CPU, and network resources — with the option to also use local disk storage — across multiple processes to manage application objects and behavior.

Using dynamic replication and data partitioning techniques, it offers high availability, improved performance, scalability, and fault tolerance.
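For a concrete feel of the programming model, here is a minimal sketch (ours, not from the article) that reads and writes a Geode region over the platform's developer REST API from Python. The port, region name, and key are illustrative assumptions; adjust them to your own deployment, where the REST service must be enabled.

```python
# Minimal sketch: reading and writing a Geode region over its developer
# REST API. Assumes a Geode server with the REST service enabled on
# localhost:7070 and an existing region named "customers" (both are
# illustrative assumptions, not guaranteed defaults).
import requests

BASE = "http://localhost:7070/gemfire-api/v1"

# Upsert an entry keyed "1" into the "customers" region.
requests.put(f"{BASE}/customers/1",
             json={"name": "Ada", "tier": "gold"},
             timeout=5)

# Read the entry back; Geode returns the stored JSON document.
resp = requests.get(f"{BASE}/customers/1", timeout=5)
resp.raise_for_status()
print(resp.json())
```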

Read more at The New Stack

DevOps Trends, Predictions and 2017 Resolutions

We’re counting the days till the end of 2016. As 2017 comes into focus, we find ourselves reflecting on the advancements made in the world of DevOps during this past year, the challenges still to overcome, and some of the trends that will shape the software delivery industry in the year(s) to come.

To give a proper farewell to 2016, and welcome in the new year, we hosted a special episode of Continuous Discussions (#c9d9) earlier this week, featuring industry luminaries and experts looking back on the state of DevOps in 2016, as well as what emerging trends they see prevailing in 2017.

Our expert panel included: Robert Stroud, principal analyst at Forrester; Nicole Forsgren, CEO and chief scientist at DORA; Chris Riley, analyst at fixate.io; Alan Shimel, Editor-in-Chief at DevOps.com; Manuel Pais, author on InfoQ and Skelton Thatcher; and our very own Sam Fell and Anders Wallgren. Continue reading for their exclusive insights into what’s in store for DevOps in 2017, plus some of their own DevOps New Year’s resolutions.

Read the full article here. 

Endless Is Bringing its Cheap, User-Friendly Linux PCs to the US

The dream of a Linux computer for normal humans is all but dead. Sure, Google put Linux in billions of hands and homes with Android and Chrome OS, but neither OS is very much like the desktop Linux flavors well-meaning open-source developers have been crafting for decades.

A company called Endless has charted a third route: a stripped-down Linux operating system without many of the complications and difficulties (and features) of a typical Linux distro, but with more apps and offline capability than Chrome OS. The OS is available as a free download, but it also ships on the quirky Endless Mini and Endless One desktops that Endless sells.

Read more at The Verge

Converting Failure to Success Should Be Part of Your Core Process

My life is full of mistakes. They’re like pebbles that make a good road. — Beatrice Wood

You know all the catchphrases and inspirational quotations about failure: fail fast, succeed quicker; fail forward; embrace failure; fail fast, fail often, fail everywhere. As creators of the bleeding edge of technology, we know that if we’re not failing, we’re not trying hard enough, and we’re not learning. But merely failing a lot doesn’t lead to progress. Anyone can fail all the time; the trick is converting failure to success. Ilan Rabinovitch of Datadog tells us, in his LinuxCon North America presentation, how to intelligently learn from our failures, and how to progress from failure to success.

The key to converting failure to success is to collect and analyze useful metrics, and to conduct formal post-mortems (or call them reviews or retrospectives if you don’t care for “post-mortem”). This needs to be part of your core process, because “The monitoring systems that we engage with these days are distributed and complex, more so than ever… All the pieces interact in ways that are much more complex than they might have been 10 years ago when you had a very clear three-tier architecture or static website that you interacted with. There are lots more pieces that can break or interact in unintentional ways,” says Rabinovitch.

There are enough new mistakes to make; we don’t need to repeat the old ones. — Ilan Rabinovitch

Your reviews are definitely not about blame and punishment: “We need to go back and see why was I able to do that, why did I make that mistake, why did I think that was the right action to take. Put away the pitchforks; it should never be about the blame.” Rabinovitch reminds us that “Culture is this idea that we’re working together, we’re seeing the problem as the enemy, not each other… Sharing this idea that we’re going to take our learnings back and help each other be more successful in the future.”

So how do you approach this? We’re already drowning in data, and yet Rabinovitch advises us to “Collect as much [data] as you can. If you don’t, it’s going to be expensive to generate again later, going back and trying to recreate the events of a security incident or a technical outage or what you’ve said or didn’t say on a control call.” The next step is to categorize your metrics into three buckets: work metrics, resource metrics, and events.
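Concretely, that bucketing might look something like this rough sketch (ours; all metric names are invented):

```python
# Illustrative sketch of the three buckets; the metric names are invented.
# Work metrics say whether the system is doing its job; resource metrics
# say what it is consuming; events record discrete changes worth recalling
# in a post-mortem.
work_metrics = {
    "http.requests_per_second": 412,
    "http.error_rate": 0.021,
    "http.p99_latency_ms": 870,
}
resource_metrics = {
    "cpu.utilization": 0.93,
    "memory.free_bytes": 1.2e8,
    "disk.io_wait": 0.31,
}
events = [
    {"time": "2016-12-30T14:02Z", "what": "deployed api v2.4.1"},
    {"time": "2016-12-30T14:05Z", "what": "p99 latency alert fired"},
]

# During a review, walk the timeline of events alongside the metrics
# they coincide with, rather than assigning blame.
for event in events:
    print(event["time"], "-", event["what"])
```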

Then what do you do? Watch the complete presentation (below) for insights on what to look for, which tools and processes can help you make sense of what happened, and how to move forward.

LinuxCon videos

Embracing Failure and Learning from Our Mistakes with Effective Post Mortems by Ilan Rabinovitch

In this session, Ilan Rabinovitch discusses how Datadog runs internal postmortems from data collection to building timelines to the blameless review. You will learn about a framework you can apply right away to make postmortems more impactful in your own organizations.

How the Kubernetes Community Drives The Project’s Success

Kubernetes is a hugely popular open source project, one that is in the top percentile on GitHub and that has spawned more than 3,000 other projects. And although the distributed application cluster technology is incredibly powerful in its own right, that’s not the sole reason for its success.

“We think it’s not just the technology, we think that what makes it special is the community that builds the technology,” said Chen Goldberg, Director of Engineering, Container Engine and Kubernetes at Google, during her keynote at CloudNativeCon in Seattle last November.

Goldberg explained how that community works by pointing to three key areas for keeping Kubernetes moving forward: empowering internal special interest groups (SIGs), a commitment to transparency, and a culture of shared learning.

Kubernetes’ SIGs are intertwined; they don’t map to separate GitHub repositories. They meet frequently and communicate with one another as often as possible. Goldberg said that the SIGs exist to ensure the community is thinking about how to make the technology as broad and accessible as possible, and that every facet of the project makes Kubernetes useful to more people.

“Everything in the Kubernetes community is operating around SIGs,” she said. “They decide what features they want to work on. They discuss roadmap strategy. They triage issues towards the release. They make decisions. That’s the most important thing. When a community is so big, we have to grow leadership and distribute it.”

Hand in hand with that distributed approach is the commitment to transparency. Through the features repository on GitHub, SIGs stay aligned, bring new members up to speed, and generally conduct business out in the open. A project management working group reviews all features, highlights new breakthroughs, and keeps the SIGs working together.

“We want to make sure that you are informed of decisions if things are happening in the community,” Goldberg said.

There are frequent “burndown” sessions, post-mortems, and other community meetings to keep everyone on the same page and to make sure new features live up to the community’s high standards.

“We take it really seriously, the responsibility for your productions,” Goldberg said. “It means that when we release something, we want to make sure that we put the quality bar really high. We make a community decision when we are ready to release something. We will triage issues together … We want to make sure it works for you.”

The final vital element — a culture of shared learning — is really a nod to the fact that everyone is in uncharted territory with this new technology. There are many great ideas inside the Kubernetes community about what could work, but that’s a far cry from knowing what does work.

“We don’t know everything,” Goldberg said. “I would lie if I would say it’s easy to manage such a big community. We make mistakes. The important thing is the community, we’re engaged to learn together and to improve.”

To learn more, watch the complete presentation below:

Do you need training to prepare for the upcoming Kubernetes certification? Pre-enroll today to save 50% on Kubernetes Fundamentals (LFS258), a self-paced, online training course from The Linux Foundation. Learn More >>

Keynote: Backstage with Kubernetes by Chen Goldberg, Google

What makes Kubernetes special is the community that builds the technology, said Chen Goldberg, Director of Engineering, Container Engine and Kubernetes at Google, during her keynote at CloudNativeCon in Seattle last November.


Essentials of OpenStack Administration Part 4: Cloud Design, Software-Defined Networking and Storage

Start exploring Essentials of OpenStack Administration by downloading the free sample chapter today. DOWNLOAD NOW

There are a number of open source cloud solutions, such as Eucalyptus, OpenQRM, OpenNebula, and of course, OpenStack. These implementations typically share some design concepts and services, which we’ll cover in this article — part of our ongoing series from The Linux Foundation’s Essentials of OpenStack Administration course. Download the full sample chapter now.

Design Concepts

First, cloud platforms are expected to grow: platform providers must be able to add resources at any time, with little hassle and with no downtime.

Cloud platforms also have a special interest in providing open APIs (Application Programming Interfaces): these attract third-party developers, who in turn bring more users. Publicly available and well-documented APIs make this easier by orders of magnitude.

Open APIs also ensure a basic level of flexibility and transparency, among other things making it easier for companies to decide for or against a specific platform.

RESTful interfaces are accessible via the ubiquitous HTTP protocol, making them readily scalable, and it’s easy to write software that communicates using them. Plus, many cloud platforms and providers use REST, so programmers developing for one platform will find it relatively easy to develop for another.
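To make the REST pattern concrete, here is a short sketch (ours, not from the course) that authenticates against OpenStack’s Identity v3 service, Keystone, with nothing but HTTP and JSON. The endpoint and credentials below are placeholders:

```python
# Sketch of the RESTful pattern in practice: authenticating against
# OpenStack's Identity v3 (Keystone) API with plain HTTP + JSON.
# The endpoint and credentials are placeholders.
import requests

KEYSTONE = "http://controller:5000/v3"  # hypothetical endpoint

payload = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "demo",
                    "domain": {"id": "default"},
                    "password": "secret",
                }
            },
        }
    }
}

# Request a token; Keystone returns it in the X-Subject-Token header.
resp = requests.post(f"{KEYSTONE}/auth/tokens", json=payload, timeout=10)
resp.raise_for_status()
token = resp.headers["X-Subject-Token"]

# The same token then authenticates requests to any OpenStack service.
projects = requests.get(f"{KEYSTONE}/projects",
                        headers={"X-Auth-Token": token}, timeout=10)
print(projects.status_code)
```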

Software-Defined Networking

Historically, the networking infrastructure has been a relatively static component of data centers. Even simple things like IP address provisioning are typically manual, error-prone affairs. Modern DCs (data centers) rely on advanced functions like VLANs and trunking, but these still operate at the network level and require manual switch configuration.

We have established that cloud platforms must let end users configure networking themselves: requesting IP addresses, private networks, and gateway access. The cloud requires this to be flexible and open, hence the term software-defined networking, or SDN.

Software-defined networking is an area of OpenStack with a lot of attention and change. The goal of SDN is to manage the network entirely from within OpenStack. There are two general approaches to deploying it. One is to keep the existing switch architecture; the OpenStack software then uses proprietary code to make requests to the switch. The other is to replace the switch’s control plane with open software, making the communication open and transparent end to end, with no vendor lock-in to a particular switch manufacturer.
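As a sketch of what that user-driven networking looks like in practice, here is a tenant creating a private network, subnet, and router entirely through OpenStack’s Neutron API via the openstacksdk Python library. The cloud name and all resource names below are illustrative assumptions:

```python
# Hedged sketch: self-service networking through OpenStack's Neutron API
# via the openstacksdk library. Assumes a clouds.yaml entry named
# "mycloud"; every resource name below is illustrative.
import openstack

conn = openstack.connect(cloud="mycloud")

# Create a private tenant network and subnet -- no switch CLI involved.
net = conn.network.create_network(name="private-net")
subnet = conn.network.create_subnet(
    network_id=net.id,
    name="private-subnet",
    ip_version=4,
    cidr="10.0.0.0/24",
)

# A router supplies the gateway access mentioned above.
router = conn.network.create_router(name="edge-router")
conn.network.add_interface_to_router(router, subnet_id=subnet.id)
```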

A similar concept is network function virtualization (NFV). Where SDN virtualizes the network and separates the control and data planes, NFV virtualizes traditional appliances such as routers, firewalls, load balancers, and accelerators. These functions then run as virtual machines. Some customers, such as telephone companies, can deploy these services as virtual machines, removing the need for multiple proprietary hardware implementations.

Software-Defined Storage

In conventional setups, storage is typically designed around SANs (storage area networks) or SAN-like software constructs. Like conventional networking, these are often difficult and expensive to scale, and, as such, are unsuited to cloud environments.

Storage is a central part of clouds, and (you guessed it!) it must be provided to the user in a fully automated fashion. Once again, the best way to achieve this is to introduce an abstraction layer in the software, a layer that needs to be scalable and fully integrated with both the cloud platform itself and the underlying storage hardware.

Flexible storage is another essential for a cloud provider. Historically, the solution was a SAN, which uses proprietary hardware and tends to be expensive. Cloud providers are instead looking toward Ceph, which allows distributed access to commodity hardware across the network. Ceph uses standard network connections and allows parallel access by thousands of clients. With no single point of failure, it is becoming the default choice for back-end storage.
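To show the automation angle, here is a short sketch (again openstacksdk; the cloud and volume names are illustrative) that provisions block storage through OpenStack’s Cinder API. With a Ceph back end, the volume is carved out of the distributed pool without the user ever seeing it:

```python
# Sketch: fully automated block-storage provisioning via OpenStack's
# Cinder API using openstacksdk. With Ceph backing the deployment, the
# volume comes out of the distributed pool transparently. The cloud
# name and volume name are illustrative.
import openstack

conn = openstack.connect(cloud="mycloud")

# Ask for a 10 GiB volume; the abstraction layer chooses the back end.
volume = conn.block_storage.create_volume(name="data-vol", size=10)

# Block until the scheduler has placed it and it is usable.
volume = conn.block_storage.wait_for_status(volume, status="available")
print(volume.id, volume.status)
```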

In part 5 of this series, we’ll delve more into the OpenStack project: its open source community, release cycles, and use cases.

The Essentials of OpenStack Administration course teaches you everything you need to know to create and manage private and public clouds with OpenStack. Download a sample chapter today!

Read the other articles in the series:

Essentials of OpenStack Administration Part 1: Cloud Fundamentals

Essentials of OpenStack Administration Part 2: The Problem With Conventional Data Centers

Essentials of OpenStack Administration Part 3: Existing Cloud Solutions

Essentials of OpenStack Administration Part 5: OpenStack Releases and Use Cases

Automotive Grade Linux Moves to UCB 3.0

The Linux Foundation’s Automotive Grade Linux (AGL) project has released version 3.0 of its open source Unified Code Base (UCB) for automotive infotainment development. Unlike AGL’s UCB 2.0, which was released in July, UCB 3.0 is already being used to develop in-vehicle infotainment (IVI) products, some of which will ship in cars this year.

The AGL also announced Daimler AG as the group’s 10th carmaker member and its first German manufacturer. Daimler, which “will actively contribute to developing the Unified Code Base,” is known for divisions including Mercedes-Benz Cars, Daimler Trucks, Mercedes-Benz Vans, and Daimler Buses.

The addition of Daimler AG is significant considering the automotive manufacturer’s longtime partnership with Microsoft and its Windows Embedded Automotive platform. AGL membership does not necessarily mean Daimler is dropping Windows, however. In September, Microsoft and Daimler announced an effort called “In Car Office” to bring Office 365 to the car environment.

The AGL is not saying which companies will ship products first, but notes that UCB 3.0 “has several strong supporters and contributors including Toyota, Mazda, Aisin AW, Continental, Denso, Harman, Panasonic, Qualcomm Technologies, Renesas and many others.” More than 40 new companies have joined AGL in the past year, bringing the member total to more than 80. In addition to Toyota and Mazda, AGL automotive manufacturer members include Ford, Honda, Jaguar Land Rover, Mitsubishi Motors, Nissan, Subaru, and as of last month, Suzuki.

UCB is currently focused on in-vehicle infotainment, where the goal is to provide 70-80 percent “of the starting point for a production project,” according to the AGL. “This enables automakers and suppliers to focus their resources on customizing the other 20-30 percent to meet their unique product needs.”

“Sharing a single software platform across the industry decreases development time which enables automakers and suppliers to bring new products to market faster so they can stay ahead of new advances in mobile technology,” stated Dan Cauchy, Executive Director of Automotive Grade Linux.

Future versions will expand to more comprehensive digital cockpit and assisted driving technology. Several Linux-friendly, automotive-focused systems-on-chip that span all these applications have been announced in recent months, including the Intel Atom A3900, Renesas Electronics R-Car H3, and NXP i.MX8 Quad (see below).

The previous UCB 2.0 added a new rear seat display, video playback, and audio routing support, as well as a comprehensive application framework. UCB 3.0 refines these features while adding instrument cluster integration, rear-camera support, and an improved SDK, among other enhancements.

The AGL UCB 3.0 spans technologies for navigation, communications, safety, security, and connectivity, with features including:

  • New home screen and window manager 

  • Improved application framework and application launcher

  • New SDK for rapid application development

  • Reference applications including media player, tuner, navigation, Bluetooth, WiFi, HVAC control, audio mixer and vehicle controls

  • Integration with simultaneous display on instrument cluster

  • Smart Device Link for mobile phone integration

  • Rear view camera and rear seat entertainment on MOST ring

  • Wide range of hardware board support including Renesas, Qualcomm Technologies, Intel, Texas Instruments, NXP and Raspberry Pi

Testimonials were supplied by Toyota, Renesas, Denso, Panasonic, and Qualcomm. “We support the AGL UCB 3.0 and plan to integrate it into our vehicles in the future,” stated Ken-ichi Murata, Group Manager, Connected Strategy & Planning, Connected Company of Toyota Motor Corp. “By adopting open source software, we can focus more on developing new features and continuously creating better user experiences for our customers.”

Toyota may well be the first car company to ship with AGL UCB inside. The company has scheduled a press conference, available via livestream, for this Wednesday at 4PM. Like most major car companies, Toyota has numerous high-tech projects going on, such as self-driving car technologies, so the announcement won’t necessarily involve UCB. The presentation will “highlight the critical importance of User Experience (UX) in the development of highly automated vehicles and robots.”

The AGL is hosting a Demonstration Showcase of UCB 3.0 during the January 4-7 CES show this week in Las Vegas. The showcase will include an AGL Demo Suite held on January 5-6.

Linux-Ready Automotive SoCs Offer New Options

AGL is maturing at a time when automotive technology is increasingly driving the high-end SoC market. In announcing its new line of Atom E3900 “Apollo Lake” embedded SoCs, Intel tipped a similar Atom A3900 automotive variant that will ship in Q1 2017. The A3900 will enable “a complete software defined cockpit solution,” says Intel. Earlier this year, Intel acquired Yogitech, which makes safety tools for autonomous car chips, and its Wind River unit bought Arynga, which offers Linux-based over-the-air (OTA) updates for cars.

In the ARM world, Renesas recently released several third-generation R-Car starter kits that are optimized for both AGL and the rival GENIVI Alliance spec, which similarly focuses on open source Linux IVI development. The kits, one of which includes a newly announced R-Car H3 SoC, are designed for ADAS, infotainment, reconfigurable digital clusters, and integrated digital cockpits.

TI also plays a big role in automotive IVI with its Jacinto 6 SoCs. Nvidia, meanwhile, has pivoted the bulk of its Tegra development resources toward automotive, including its Drive PX 2 solution for self-driving cars.

Qualcomm has been slower to shift to automotive, but earlier this year, the company announced an automotive-focused Snapdragon 820a SoC, and then followed up with a wireless-studded Qualcomm Connected Car Reference Platform. Many believe that Qualcomm’s pending, $38 billion acquisition of NXP is largely intended to boost its automotive business. NXP will also help it with IoT devices, which are expected to interact with smart cars, for example via smart garages and fuel stations.  

NXP recently announced an automotive-focused i.MX8 Quad SoC with four Cortex-A53 cores, two Cortex-M4F cores, and two GPUs. Upcoming QuadPlus and QuadMax versions will add one or two Cortex-A72 cores, respectively.

Then there’s Tesla, which continues to use a custom Linux build in its automotive technology, but has yet to comply with GPL licensing. The company recently announced that “all Tesla vehicles produced in our factory — including Model 3 — will have the hardware needed for full self-driving capability.” The capability won’t be activated, however, until the company has completed “millions of miles of real-world driving” tests.

All these platforms support Linux, which is increasingly well positioned in automotive against its two main rivals: QNX and Microsoft Windows Embedded Automotive. It remains unclear to what extent Google will turn Android Auto into a full automotive spec like AGL or GENIVI. Last January at CES, Google announced an Open Automotive Alliance with Audi, GM, Honda, Hyundai, and Nvidia, to standardize Android IVI systems.

We can expect a lot more automotive computing news at this week’s CES show, which is increasingly focused on the topic. Nine automotive manufacturers, 11 tier-one auto suppliers, and more than 300 vehicle tech-related exhibitors will be in attendance at CES, says Business Insider. Already, Fiat Chrysler Automobiles and Google have announced that they will unveil the latest version of FCA’s Android-based infotainment system, now running Android 7.0 “Nougat.”

Why Machine Learning Is Hard to Apply to Networking

Machine learning is becoming a buzzword, arguably an overused one, among companies that deal with networking. Recent announcements have touted machine learning capabilities at Google, Hewlett Packard Enterprise (HPE), and Nokia, for instance.

But machine learning isn’t being applied to networking itself. Why is that? 

The intersection of machine learning and networking is where David Meyer, chief scientist at Brocade, has been working. After serving a term as the first chairman of the OpenDaylight Project’s Technical Steering Committee (TSC), Meyer shifted his work into the realm of artificial intelligence.

Read more at SDxCentral