
There’s a Server in Every Serverless Platform

Serverless computing, or Function as a Service (FaaS), is the latest buzzword from an industry that loves to coin new terms as market dynamics change and technologies evolve. But what exactly does it mean? What is serverless computing?

Before getting into the definition, let’s take a brief history lesson from Sirish Raghuram, CEO and co-founder of Platform9, to understand the evolution of serverless computing.

“In the 90s, we used to build applications and run them on hardware. Then came virtual machines that allowed users to run multiple applications on the same hardware. But you were still running a full-fledged OS for each application. The arrival of containers got rid of OS duplication in favor of process-level isolation, which made things lightweight and agile,” said Raghuram.

Serverless, specifically Function as a Service, takes this to the next level: users simply write functions, and the platform handles building, shipping, and running them. None of the complexity of the underlying machinery is exposed. There is no need to worry about spinning up containers with Kubernetes; everything is hidden behind the scenes.

“That’s what is driving a lot of interest in function as a service,” said Raghuram.

What exactly is serverless?

There is no single definition of the term, but to build some consensus around the idea, the Cloud Native Computing Foundation (CNCF) Serverless Working Group wrote a white paper to define serverless computing.

According to the white paper, “Serverless computing refers to the concept of building and running applications that do not require server management. It describes a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled, and billed in response to the exact demand needed at the moment.”

Ken Owens, a member of the Technical Oversight Committee at CNCF, said that the primary goal of serverless computing is to help users build and run their applications without having to worry about the cost and complexity of servers in terms of provisioning, management, and scaling.

“Serverless is a natural evolution of cloud-native computing. The CNCF is advancing serverless adoption through collaboration and community-driven initiatives that will enable interoperability,” said Chris Aniszczyk, COO, CNCF.

It’s not without servers

First things first, don’t get fooled by the term “serverless.” There are still servers in serverless computing. Remember what Raghuram said: all the machinery is hidden; it’s not gone.

The clear benefit here is that developers need not concern themselves with tasks that don’t add value to their deliverables. Instead of worrying about managing functions, they can dedicate their time to adding features and building apps that create business value. Time is money, and every minute saved on management goes toward innovation. Developers don’t have to worry about scaling for peaks and valleys; it’s automated. And because cloud providers charge only for the time functions actually run, developers cut costs by not paying for idle servers.

But… someone still has to do the work behind the scenes. There are still servers offering FaaS platforms.

In the case of public cloud offerings like Google Cloud Platform, AWS, and Microsoft Azure, these companies manage the servers and charge customers for running those functions. In the case of private cloud or datacenters, where developers don’t have to worry about provisioning or interacting with such servers, there are other teams who do.

The CNCF white paper identifies two groups of professionals involved in the serverless movement: developers and providers. We have already talked about developers. But there are also providers that offer serverless platforms; they handle all the work involved in keeping those servers running.

That’s why many companies, like SUSE, refrain from using the term “serverless” and prefer the term function as a service, because they offer products that run those “serverless” servers. But what kind of functions are these? Is it the ultimate future of app delivery?

Event-driven computing

Many see serverless computing as an umbrella that offers FaaS among many other potential services. According to CNCF, FaaS provides event-driven computing where functions are triggered by events or HTTP requests. “Developers run and manage application code with functions that are triggered by events or HTTP requests. Developers deploy small units of code to the FaaS, which are executed as needed as discrete actions, scaling without the need to manage servers or any other underlying infrastructure,” said the white paper.

Does that mean FaaS is the silver bullet that solves all problems for developing and deploying applications? Not really. At least not at the moment. FaaS does solve problems in several use cases and its scope is expanding. A good use case of FaaS could be the functions that an application needs to run when an event takes place.

Let’s take an example: a user takes a picture on a phone and uploads it to the cloud. Many things happen when the picture is uploaded: it’s scanned (its EXIF data is read), a thumbnail is created, the image content is analyzed using deep learning/machine learning, and the image’s information is stored in the database. That one event of uploading the picture triggers all of those functions, and the functions die once the event is handled. That’s what FaaS does: it runs code quickly to perform all those tasks and then disappears.
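The photo-upload scenario above can be sketched as a tiny event dispatcher: one event fans out to several short-lived functions. This is a hedged illustration of the FaaS model only; every name here (the event type, the handler functions) is hypothetical and is not any cloud provider's actual API.

```python
# A minimal sketch of event-driven function dispatch, loosely modeled on
# the photo-upload example. All names are hypothetical illustrations.

def read_exif(event):
    # A real platform would parse EXIF metadata from the uploaded image.
    return f"exif read for {event['file']}"

def make_thumbnail(event):
    return f"thumbnail created for {event['file']}"

def analyze_content(event):
    return f"content analyzed for {event['file']}"

def store_metadata(event):
    return f"metadata stored for {event['file']}"

# One event ("photo uploaded") triggers several functions.
TRIGGERS = {
    "photo.uploaded": [read_exif, make_thumbnail, analyze_content, store_metadata],
}

def dispatch(event):
    """Run every function registered for this event type and collect results.
    The functions live only for the duration of the event -- the FaaS model."""
    return [fn(event) for fn in TRIGGERS.get(event["type"], [])]

results = dispatch({"type": "photo.uploaded", "file": "beach.jpg"})
print(results)
```

In a real FaaS platform the dispatcher is the provider's job; the developer only writes and uploads the individual handler functions.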

That’s just one example. Another could be an IoT device where a motion sensor triggers an event that instructs a camera to start recording and sends the clip to a designated contact. Your thermostat may trigger the fan when a sensor detects a change in temperature. These are some of the many use cases where function as a service makes more sense than the traditional approach. It also means that not all applications can run as functions as a service, at least at the moment, though that will change as more organizations embrace serverless platforms.

According to CNCF, serverless computing should be considered if you have these kinds of workloads:

  • Asynchronous, concurrent, easy to parallelize into independent units of work

  • Infrequent or sporadic demand, with large, unpredictable variance in scaling requirements

  • Stateless, ephemeral, without a major need for instantaneous cold start time

  • Highly dynamic in terms of changing business requirements that drive a need for accelerated developer velocity

Why should you care?

Serverless is a very new technology and paradigm. Just as VMs and containers transformed app development and delivery models, FaaS may also bring dramatic changes. We are still in the early days of serverless computing; as the market evolves, consensus forms, and new technologies emerge, FaaS may grow beyond the workloads and use cases mentioned here.

What is becoming quite clear is that companies embarking on their cloud-native journey must have serverless computing as part of their strategy. The only way to stay ahead of competitors is to keep up with the latest technologies and trends.

It’s about time to put serverless into servers.

For more information, check out the CNCF Working Group’s serverless whitepaper here. And, you can learn more at KubeCon + CloudNativeCon Europe, coming up May 2-4 in Copenhagen, Denmark.

OpenTracing: Distributed Tracing’s Emerging Industry Standard

What was traditionally known as just Monitoring has clearly been going through a renaissance over the last few years. The industry as a whole is finally moving away from having Monitoring and Logging silos – something we’ve been doing and “preaching” for years – and the term Observability has emerged as the new moniker for everything that encompasses any form of infrastructure and application monitoring. Microservices have been around for over a decade under one name or another. Now that they are often deployed in separate containers, it became obvious that we need a way to trace transactions through various microservice layers, from the client all the way down to queues, storage, calls to external services, etc. This created new interest in Transaction Tracing which, although not new, has re-emerged as the third pillar of observability…

In a distributed system, a trace encapsulates a transaction’s state as it propagates through the system. During the journey of the transaction, it can create one or more spans. A span represents a single unit of work inside a transaction: for example, an RPC client/server call, sending a query to the database server, or publishing a message to the message bus. In terms of the OpenTracing data model, a trace can also be seen as a collection of spans structured as a directed acyclic graph (DAG).
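To make the trace/span relationship concrete, here is a toy model in which parent references between spans form the DAG described above. It sketches the OpenTracing data model in spirit only; the class and method names are hypothetical and this is not the real OpenTracing API.

```python
# A toy model of traces and spans: a trace is a collection of spans, and
# each span's parent reference is an edge in a directed acyclic graph.

class Span:
    def __init__(self, operation, parent=None):
        self.operation = operation
        self.parent = parent  # "child of" edge in the DAG; None for the root

class Trace:
    def __init__(self):
        self.spans = []

    def start_span(self, operation, parent=None):
        span = Span(operation, parent)
        self.spans.append(span)
        return span

    def path_to_root(self, span):
        """Walk parent references from a span back up to the root span."""
        path = []
        while span is not None:
            path.append(span.operation)
            span = span.parent
        return path

# One transaction: an RPC server call that queries a database
# and publishes a message to the message bus.
trace = Trace()
root = trace.start_span("rpc.server")
db = trace.start_span("db.query", parent=root)
bus = trace.start_span("bus.publish", parent=root)

print(trace.path_to_root(db))  # child span back to the root
```

A real tracer would also record timestamps, tags, and context propagated across process boundaries; the DAG of parent/child references is the structural core.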

Read more at Sematext

How to Upgrade to Ubuntu Linux 18.04

Ubuntu 18.04, aka the Bionic Beaver, Canonical‘s next long-term support version of its popular Linux distribution, will be out soon. That means it’s about time to consider how to upgrade to the latest and greatest Ubuntu Linux.

First, keep in mind that this Ubuntu will not look or feel like the last few versions. That’s because Ubuntu is moving back to GNOME for its default desktop from Unity. The difference isn’t that big, but if you’re already comfortable with what you’re running, you may want to wait a while before switching over.

Read more at ZDNet

Building A Custom Brigade Gateway in 5 Minutes

Brigade gateways trigger new events in the Brigade system. While the included GitHub and container registry hooks are useful, the Brigade system is designed to make it easy for you to build your own. In this post, I show the quickest way to create a Brigade gateway using Node.js. How quick? We should be able to have it running in about five minutes.

Prerequisites

You’ll need Brigade installed and configured, and you will also need Draft installed and configured. Make sure Draft is pointed to the same cluster where Brigade is installed.

If you are planning to build a more complex gateway, you might also want Node.js installed locally.

Getting Started

Draft provides a way of bootstrapping a new application with a starter pack. Starter packs can contain things like Helm charts and Dockerfiles. But they can also include code. 

Read more at TechnoSophos

An Introduction to the GNU Core Utilities

Two sets of utilities—the GNU Core Utilities and util-linux—comprise many of the Linux system administrator’s most basic and regularly used tools. Their basic functions allow sysadmins to perform many of the tasks required to administer a Linux computer, including management and manipulation of text files, directories, data streams, storage media, process controls, filesystems, and much more….

These tools are indispensable because, without them, it is impossible to accomplish any useful work on a Unix or Linux computer. Given their importance, let’s examine them…

You can learn about all the individual programs that comprise the GNU Utilities by entering the command info coreutils at a terminal command line. The following list of the core utilities is part of that info page. The utilities are grouped by function to make specific ones easier to find; in the terminal, highlight the group you want more information on and press the Enter key.
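For a quick taste of what these utilities do in practice, here is a short, illustrative pipeline using a handful of commonly used core utilities (printf, sort, uniq, and wc); consult the info page for the full catalog.

```shell
# A few GNU core utilities at work on a small text stream:
# printf generates lines, sort orders them, and uniq -c counts duplicates.
printf 'pear\napple\npear\nbanana\n' | sort | uniq -c

# sort -u deduplicates, and wc -l counts the distinct lines (here: 3).
printf 'pear\napple\npear\nbanana\n' | sort -u | wc -l
```

Pipelines like this are the everyday idiom of the core utilities: each tool does one small job and composes with the others through standard input and output.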

Read more at OpenSource.com

Manipulating Binary Data with Bash

Bash is known for admin utilities and text manipulation tools, but the venerable command shell included with most Linux systems also has some powerful commands for manipulating binary data.

One of the most versatile scripting environments available on Linux is the Bash shell. The core functionality of Bash includes many mechanisms for tasks such as string processing, mathematical computation, data I/O, and process management. When you couple Bash with the countless command-line utilities available for everything from image processing to virtual machine (VM) management, you have a very powerful scripting platform.

One thing that Bash is not generally known for is its ability to process data at the bit level; however, the Bash shell contains several powerful commands that allow you to manipulate and edit binary data. This article describes some of these binary commands and shows them at work in some practical situations.
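As a small illustration of the kind of technique the article covers, here is a hedged sketch using printf to emit raw bytes, od (from coreutils) to dump them as hex, and Bash arithmetic to do bitwise work; the specific bytes and variable names are just examples.

```shell
# Emit two raw bytes (0xAB 0xCD) and dump them as hex with od.
printf '\253\315' | od -An -tx1    # octal escapes for 0xAB 0xCD; prints: ab cd

# Shell arithmetic operates on bits directly:
byte=$(( 0xAB ))                   # 171 decimal
high_nibble=$(( byte >> 4 ))       # shift right 4 bits  -> 10 (0xA)
low_nibble=$(( byte & 0x0F ))      # mask the low 4 bits -> 11 (0xB)
echo "$high_nibble $low_nibble"    # prints: 10 11
```

Combined with tools like dd, head -c, and cmp, these building blocks are enough to inspect, slice, and patch binary files from a plain shell script.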

Read more at Linux Pro

Put Wind into your Deployments with Kubernetes and Helm

I’m a Software Engineer. Every day, I come into work and write code. That’s what I’m paid to do. As I write my code, I need to be confident that it’s of the highest quality. I can test it locally, but anyone who’s ever heard the words, “…but it works on my machine,” knows that’s not enough. There are huge differences between my local environment and my company’s production systems, both in terms of scale and integration with other components. Back in the day, production systems were complex, and setting them up required a deep knowledge of the underlying systems and infrastructure. To get a production-like environment to test my code, I would have to open a ticket with my IT department and wait for them to get to it and provision a new server (whether physical or virtual). This was a process that took a few days at best. That used to be OK when release cycles were several months apart. Today, it’s completely unacceptable.

Instant Environments Have Arrived

We all know this, it’s almost a cliché. Customers today will not wait months, weeks, or even days for urgent fixes and new features. They expect them almost instantly. Competition is fierce and if you snooze you lose. You must release fast or die! This is the reality of the software industry today. Everything is software and software needs to be continuously tested and updated.

To keep up with the growing velocity of release cycles and provide quality software at near-real-time speed with bug fixes, new features and security updates, developers need the tooling to support quick and accurate verification of their work. This need is met by virtualization and container technologies that put on-demand development environments at developers’ fingertips. Today, a developer can easily spin up a production-like Linux box on their own computer to run their application and test their work almost effortlessly.

The K8s Solution for O11n

Over the past few years, the evolution of orchestration (o11n) tools has made it incredibly easy to deploy containerized applications to remote production-like environments while seamlessly taking care of developer overhead such as security, networking, isolation, scaling and healing.

Kubernetes is one of the most popular tools and has quickly become the leading orchestration platform for containerized applications. As an open-source tool, it has one of the biggest developer communities in the world. With many companies using Kubernetes in production, it has proven mileage and continues to lead the container orchestration pack.

Much of Kubernetes’ popularity comes from the ease in which you can spin up a cluster, deploy your applications to it and scale it to your needs. It’s really DIY-friendly and you won’t need any system or IT engineers to support your development efforts.

Once your cluster is ready, anyone can deploy an application to it using a simple set of endpoints provided by the Kubernetes API.

In the following sections, I’ll show you how easy it can be to run and test your code on a production-like environment.

An Effective Daily Routine with Kubernetes

The illustration below suggests an effective flow that, as a developer, you could adopt as your daily routine. It assumes that you have a production-like Kubernetes cluster set up as your development or staging environment.

[Illustration: a suggested daily development workflow built around a production-like Kubernetes cluster]

Optimizing Deployment to Kubernetes with a Helm Repository

Several tools have evolved to help you integrate your development with Kubernetes, letting you easily deploy changes to your cluster. One of the most popular is Helm, the Kubernetes package manager. Helm gives you an easy way to manage the settings and configurations your applications need in Kubernetes. It also provides a way to specify all the pieces of your application as a single package and distribute it in an easy-to-use format.
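To give a flavor of what packaging "all the pieces of your application" looks like, here is a minimal, hypothetical Helm chart skeleton; the chart name, image, and values are illustrative only, not a real application.

```yaml
# Chart.yaml -- identifies the package (hypothetical example)
apiVersion: v1
name: my-web-app
version: 0.1.0
description: A minimal illustrative chart

# values.yaml -- the per-environment settings Helm manages for you
# (this part would live in a separate file alongside Chart.yaml)
# replicaCount: 2
# image:
#   repository: example.registry.local/my-web-app
#   tag: "0.1.0"
```

With a chart in place, deploying becomes a single command; with Helm 2, current as of this writing, that would be something like `helm install --name my-web-app ./my-web-app`.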

But things get really interesting when you use a repository manager that supports Helm. A Kubernetes Helm repository adds capabilities like security and access control over your Helm charts and a REST API to automate the use of Helm charts when deploying your application to Kubernetes. The more advanced repository managers even offer features such as high availability and massively scalable storage making them ready for use in enterprise-grade systems.

Other Players in the Field

Helm is not the only tool you can use to deploy an application to Kubernetes. There are alternatives, some of which even integrate with IDEs and CI/CD tools. To help you decide which tool best meets your needs, you can read this post comparing Draft, Gitkube, Helm, Ksonnet, Metaparticle, and Skaffold. There are many other tools that help set up and integrate with Kubernetes; you can see a flat list in this Kubernetes tools repository.

Your One Takeaway

Several container orchestration tools are available; however, the ease with which Kubernetes lets you spin up a cluster and deploy your applications to it has fueled its dominance in the market. The combination of Kubernetes and a tool like Helm puts production-like systems in the hands of every developer. With the ability to spin up a Kubernetes cluster on virtually any development machine, developers can easily implement a fully automated CI/CD pipeline and deliver bug fixes, security patches, and new features with confidence that they will run as expected when deployed to production. If there’s one takeaway you should get from this article, it’s that even if you’re already releasing fast, Kubernetes and Helm can make your development cycles shorter and more reliable, letting you release better-quality code faster.

Eldad Assis, DevOps Architect, JFrog

Eldad Assis has been working on infrastructure for years, and loving it! DevOps architect and advocate. Automation everywhere!

For similar topics on Kubernetes and Helm, consider attending KubeCon + CloudNativeCon EU, May 2-4, 2018 in Copenhagen, Denmark.

 

NOAA’s Mission Toward Open Data Sharing

The goal of the National Oceanic and Atmospheric Administration (NOAA) is to put all of its data — data about weather, climate, oceans and coasts, fisheries, and ecosystems — into the hands of the people who need it most. The trick is translating the hard data and making it useful to people who aren’t necessarily subject matter experts, said Edward Kearns, NOAA’s first-ever data officer, speaking at the recent Open Source Leadership Summit (OSLS).

NOAA’s mission is similar to NASA’s in that it is science based, but “our mission is operations; to get the quality information to the American people that they need to run their businesses, to protect their lives and property, to manage their water resources, to manage their ocean resources,” said Kearns, during his talk titled “Realizing the Full Potential of NOAA’s Open Data.”

Now the NOAA is looking to find a way to make the data available to an even wider group of people and make it more easily understood. Those are their two biggest challenges: how to disseminate data and how to help people understand it, Kearns said.

Read more at The Linux Foundation

Heptio Launches New Open Source Load-Balancing Project with Kubernetes in Mind

Heptio added a new load balancer to its stable of open-source projects Monday, targeting Kubernetes users who are managing multiple clusters of the container-orchestration tool alongside older infrastructure.

Gimbal, developed in conjunction with Heptio customer Actapio, was designed to route network traffic within Kubernetes environments set up alongside OpenStack, said Craig McLuckie, co-founder and CEO of Heptio. It can replace expensive hardware load-balancers — which manage the flow of incoming internet traffic across multiple servers — and allow companies with outdated but stable infrastructure to take advantage of the scale that Kubernetes can allow.

“We’re just at the start of figuring out what are the things (that) we can build on top of Kubernetes,” said McLuckie in an interview last week at Heptio’s offices in downtown Seattle. The startup, founded by McLuckie and fellow Kubernetes co-creator Joe Beda, has raised $33.5 million to build products and services designed to make Kubernetes more prevalent and easy to use.

Read more at GeekWire

Why Is the Kernel Community Replacing iptables with BPF?

Author Note: this is a post by long-time Linux kernel networking developer and creator of the Cilium project, Thomas Graf

The Linux kernel community recently announced bpfilter, which will replace the long-standing in-kernel implementation of iptables with high-performance network filtering powered by Linux BPF, all while guaranteeing a non-disruptive transition for Linux users.

From humble roots as the packet-filtering capability underlying popular tools like tcpdump and Wireshark, BPF has grown into a rich framework for extending the capabilities of Linux in a highly flexible manner without sacrificing key properties like performance and safety. This powerful combination has led forward-leaning users of Linux kernel technology like Google, Facebook, and Netflix to choose BPF for use cases ranging from network security and load balancing to performance monitoring and troubleshooting. Brendan Gregg of Netflix first called BPF “Superpowers for Linux.” This post will cover how these “superpowers” render long-standing kernel subsystems like iptables redundant while simultaneously enabling new in-kernel use cases that few would have previously imagined were possible…

Over the years, iptables has been a blessing and a curse: a blessing for its flexibility and quick fixes, and a curse when debugging a 5,000-rule iptables setup in an environment where multiple system components are fighting over who gets to install which iptables rules.

Read more at Cilium