
Pop OS 18.04 Bursts onto the Linux Scene

Meet the Linux distribution pushing hard to design an efficient and creative environment for users….

Where Linux excels is in the fields of computer science, engineering, and DevOps – this is where our customers live. It’s important for us to create the most productive computing environment for them, so they can be efficient, free, and creative. With the first Pop!_OS release, we addressed the most common pain points we heard from customers about the Linux desktop:

  • The time it takes to set up a productive environment.
  • Removing bloatware.
  • Up-to-date drivers and software.
  • A fast app center that works well.

All of these items were fixed in the first Pop!_OS release. It was also important that Pop!_OS provide a pleasant experience for non-System76 customers.

Read more at TechRadar

Linux Mint 19 “Tara” Won’t Collect or Send Any of Your Personal or System Data

Now that Canonical has released the Ubuntu 18.04 LTS (Bionic Beaver) operating system, on which Linux Mint 19 “Tara” will be based, it’s time for the Linux Mint team to finalize their releases. There’s still no fixed release date for Linux Mint 19 “Tara” or LMDE (Linux Mint Debian Edition) 3, but Clement Lefebvre said they will arrive soon.

Another interesting thing about Linux Mint 19 “Tara” is that it won’t collect or send any personal or system data. Clement Lefebvre confirmed today that the operating system will not include the “ubuntu-report” tool that Canonical implemented in Ubuntu 18.04 LTS (Bionic Beaver) to allow users to optionally send their data.

Read more at Softpedia

How Open Source Is Powering the Modern Mainframe

When I mention the word “mainframe” to someone, the natural response is colored by a view of an architecture of days gone by — perhaps even invoking a memory of the Epcot Spaceship Earth ride. This is the heritage of mainframe, but it is certainly not its present state.

From the days of the System/360 in the mid 1960s through to the modern mainframe of the z14, the systems have been designed along four guiding principles of security, availability, performance, and scalability. This is exactly why mainframes are entrenched in the industries where those principles are top level requirements — think banking, insurance, healthcare, transportation, government, and retail. You can’t go a single day without being impacted by a mainframe — whether that’s getting a paycheck, shopping in a store, going to the doctor, or taking a trip.

What often surprises people is how massive open source is on the mainframe. Ninety percent of mainframe customers leverage Linux on their mainframe, with broad support across all the top Linux distributions along with a growing number of community distributions. Key open source applications such as MongoDB, Hyperledger, Docker, and PostgreSQL thrive on the architecture and are actively used in production. And DevOps culture is strong on the mainframe, with tools such as Chef, Kubernetes, and OpenStack used for managing mainframe infrastructure alongside cloud and distributed systems.

Learn more

You can learn more about open source on the mainframe, from its history to its current and future states, in our upcoming presentation. Join us May 15 at 1:00pm ET for a session led by Open Mainframe Project members Steven Dickens of IBM, Len Santalucia of Vicom Infinity, and Mike Riggs of The Supreme Court of Virginia.

In the meantime, check out our podcast series “I Am A Mainframer” on both iTunes and Stitcher to learn more about the people who work with mainframes and what they see as the future of the mainframe.

This article originally appeared at The Linux Foundation.

Fluent Bit: Flexible Logging in Kubernetes

Logging is complex by nature, and containerized environments introduce new challenges that need to be addressed. In this article, we will describe the current status of the Fluentd ecosystem and how Fluent Bit (a Fluentd sub-project) is filling the gaps in cloud native environments. Finally, we will give an overview of the new Fluent Bit v0.13 release and its major improvements for Kubernetes users.

Fluentd Ecosystem

Fluentd is an open source log processor and aggregator hosted by the Cloud Native Computing Foundation. It was started in 2011 by Sadayuki Furuhashi (Treasure Data co-founder), who wanted to solve the common pains associated with logging in production environments, most of them related to unstructured messages, security, aggregation, and customizable inputs and outputs, among others. Fluentd has been open source since the beginning, and that decision was key to attracting the many contributors who continue expanding the project. Today there are more than 700 plugins available.

The Fluentd community created not only plugins but also language-specific connectors that developers can use to ship logs from their own custom applications to Fluentd over the network; the best examples are connectors for Python, Golang, NodeJS, and Java. From that moment on, it was no longer just Fluentd as a single project but a complete logging ecosystem, which continued expanding into the container space. The Fluentd Docker driver was the entry point, and then Red Hat contributed a Kubernetes Metadata Filter for Fluentd, which helped make it a first-class, reliable solution for Kubernetes clusters.

What does the Kubernetes Filter in Fluentd do? When applications run in Kubernetes, they are likely not aware of the context in which they are running, so the logging information they generate is missing pieces associated with the origin of each log entry: container ID, container name, Pod name, Pod ID, annotations, labels, and so on. The Kubernetes Filter enriches each application log in Kubernetes with the proper metadata.
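As a hedged illustration of what that enrichment looks like, an enriched record could resemble the following (field names follow the filter's typical output, but exact keys and values vary by version and cluster; everything shown here is illustrative):

```json
{
  "log": "GET /index.html HTTP/1.1 200\n",
  "stream": "stdout",
  "kubernetes": {
    "pod_name": "apache-7d4c9bf8d9-x2x9k",
    "namespace_name": "default",
    "labels": { "app": "apache" },
    "container_name": "apache",
    "host": "node-1"
  }
}
```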

Fluent Bit

While the Fluentd ecosystem continued to grow, Kubernetes users needed specific improvements around performance, monitoring, and flexibility. Fluentd is a strong and reliable solution for log processing and aggregation, but the team was always looking for ways to improve overall performance in the ecosystem: Fluent Bit was born as a lightweight log processor to fill the gaps in cloud native environments.

Fluent Bit is a CNCF sub-project under the umbrella of Fluentd; written in C and based on the design and experience of Fluentd, it has a pluggable architecture with built-in networking and security support. There are around 45 plugins available across inputs, filters, and outputs, a handful of which see the most common use.
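To illustrate the pluggable input/filter/output architecture, a minimal configuration wiring one input to one output might look like this (a sketch in Fluent Bit's classic INI-style format; the tag value is arbitrary):

```ini
[SERVICE]
    # Flush buffered records every second; log at "info" verbosity
    Flush     1
    Log_Level info

[INPUT]
    # Collect CPU usage metrics and tag the records
    Name cpu
    Tag  my_cpu

[OUTPUT]
    # Print every record that matches any tag to standard output
    Name  stdout
    Match *
```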

Fluent Bit was started almost 3 years ago, and in just the last year, more than 3 million deployments have happened in Kubernetes clusters. The community around Fluentd and Kubernetes has been key to its evolution and positive impact on the ecosystem.

Fluent Bit in Kubernetes

Log processors such as Fluentd or Fluent Bit have to do some extra work in Kubernetes environments to get logs enriched with the proper metadata; an important actor here is the Kubernetes API Server, which provides the relevant information:

[Diagram: the log processor querying the Kubernetes API Server to enrich Pod logs with metadata]

The model works pretty well, but the community has still raised requests for improvement in certain areas:

  • Logs generated by Pods are encapsulated in JSON, but the original log message is likely unstructured. Also, logs from an Apache Pod differ from those of a MySQL Pod; how should the different formats be handled?

  • There are cases where it would be ideal to exclude certain Pod logs, meaning: skip the logs from Pod ABC.

  • Gathering insights from the log processor itself: how can it be monitored with Prometheus?

The new and exciting Fluent Bit 0.13 aims to address the needs described above; let’s explore it a bit more.

Fluent Bit v0.13: What’s new?

In perfect timing, Fluent Bit 0.13 is being released at KubeCon+CloudNativeCon Europe 2018. Months of work between the developers and the community have produced one of the most exciting versions yet; here are the highlights.

Pods suggest a parser through a declarative annotation

From now on, Pods can suggest a pre-defined, known parser to the log processor, so a Pod running an Apache web server may suggest the following parser through a declarative annotation:

[Example: Pod definition carrying a parser annotation]

The new fluentbit.io/parser annotation suggests the pre-defined apache parser to the log processor (Fluent Bit), so the data will be interpreted as a properly structured message.
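Assuming Fluent Bit's documented fluentbit.io/parser annotation key, a minimal sketch of such a Pod definition might look like this (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apache-server
  annotations:
    # Suggest the pre-defined "apache" parser to Fluent Bit
    fluentbit.io/parser: apache
spec:
  containers:
    - name: apache
      image: httpd:2.4
```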

Pods suggest excluding their logs

If for some reason the logs from a specific Pod should not be processed by the log processor, this can be suggested through the declarative annotation fluentbit.io/exclude: “true”:

[Example: Pod definition carrying the exclude annotation]
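A sketch of a Pod whose logs would be skipped might look like this (the Pod name, image, and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: noisy-worker
  annotations:
    # Ask Fluent Bit to skip this Pod's logs entirely
    fluentbit.io/exclude: "true"
spec:
  containers:
    - name: worker
      image: busybox
      command: ["sh", "-c", "while true; do echo noise; sleep 1; done"]
```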

Monitor the Fluent Bit logging Pipeline

Fluent Bit 0.13 comes with built-in support for metrics, which are exported through an HTTP endpoint. By default, the metrics and overall information are served in JSON, but they can also be retrieved in Prometheus format. Examples:

1. General information: $ curl -s http://127.0.0.1:2020 | jq


2. Overall Metrics in JSON: $ curl -s http://127.0.0.1:2020/api/v1/metrics | jq


3. Metrics in Prometheus Format: $ curl -s http://127.0.0.1:2020/api/v1/metrics/prometheus


4. Connecting Prometheus to Fluent Bit Metrics end-point

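Item 4 above can be sketched as a minimal Prometheus scrape configuration (the job name and target address are assumptions; port 2020 is Fluent Bit's default HTTP port):

```yaml
scrape_configs:
  - job_name: 'fluent-bit'
    # The Prometheus-format endpoint shown in item 3 above
    metrics_path: '/api/v1/metrics/prometheus'
    static_configs:
      - targets: ['127.0.0.1:2020']
```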

New Enterprise output connectors

Output connectors are an important piece of the logging pipeline; as part of the 0.13 release, we are adding the following plugins:

  • Apache Kafka

  • Microsoft Azure

  • Splunk

Meet Fluentd and Fluent Bit maintainers at KubeCon+CloudNativeCon Europe 2018!

Masahiro Nakagawa and Eduardo Silva, maintainers of Fluentd and Fluent Bit, will be at KubeCon presenting the projects, engaging with the community, and discussing roadmaps. Join us for the conversation, and check out the schedule!

Also don’t forget to stop by the CNCF booth in the exhibitors hall!

Eduardo Silva

Eduardo Silva is a Software Engineer at Treasure Data, where he is part of the open source Fluentd engineering team. He is passionate about scalability, performance, and logging. One of his primary roles at Treasure Data is to maintain and develop Fluent Bit, a lightweight log processor for cloud native environments.

Learn more at KubeCon + CloudNativeCon EU, May 2-4, 2018 in Copenhagen, Denmark.

Where Is Serverless Computing Headed?

“Things are all moving very, very fast.”

That was one of the big takeaways from my conversation at the Open Source Leadership Summit with former RedMonk analyst Fintan Ryan last month. We were talking about the state of the serverless market. (Depending on who you ask, serverless is either the same thing as, or conceptually related to, Function-as-a-service.)

In Ryan’s view, there are two different aspects to serverless. The first, he says, is “essentially a programming model which allows you to deal with event-driven architectures and event-driven applications. What we actually mean is there’s an awful lot of small events that are generating things that give you an ability to deal with each thing individually in terms of very small functions.”

Read more at OpenSource.com

Speak at Open Networking Summit Europe – Submit by June 24

Open Networking Summit Europe (ONS EU) is the first combined technical and business networking conference for carriers, cloud providers, and enterprises in Europe. The call for proposals for ONS EU 2018 is now open, and we invite you to share your expertise.

Share your knowledge with over 700 architects, developers, and thought leaders paving the future of network integration, acceleration and deployment. Proposals are due Sunday, June 24, 2018.

Based on feedback we received at Open Networking Summit North America 2018, our restructured agenda will include project-based technical sessions as well.

Read more at The Linux Foundation

How to Get a Core Dump for a Segfault on Linux

This week at work I spent all week trying to debug a segfault. I’d never done this before, and some of the basic things involved (get a core dump! find the line number that segfaulted!) took me a long time to figure out. So here’s a blog post explaining how to do those things!

At the end of this blog post, you should know how to go from “oh no my program is segfaulting and I have no idea what is happening” to “well, I know what its stack / line number was when it segfaulted, at least!”

what’s a segfault?

A “segmentation fault” is when your program tries to access memory that it’s not allowed to access, or tries to access it in a way that’s not allowed. This can be caused by:

  • trying to dereference a null pointer (you’re not allowed to access the memory address 0)
  • trying to dereference some other pointer that isn’t in your memory
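The first two steps the post mentions (get a core dump, then inspect it) can be sketched in a shell session like this; "./myprog" and "core" are placeholders, and the core-file location varies by distribution:

```shell
# Allow core files of unlimited size in the current shell session
ulimit -c unlimited

# See where the kernel writes core files; on systemd distributions this
# often pipes cores to systemd-coredump instead of a plain file on disk
cat /proc/sys/kernel/core_pattern

# After the program crashes again, open the core file in gdb and print
# a backtrace to find the segfaulting line:
#   gdb ./myprog core
#   (gdb) bt
```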

Read more at Julia Evans

How to Share Files Between Linux and Windows

Many people today work on mixed networks, with both Linux and Windows systems playing important roles. Sharing files between the two can be critical at times and is surprisingly easy with the right tools. With fairly little effort, you can copy files from Windows to Linux or Linux to Windows. In this post, we’ll look at what is needed to configure your Linux and Windows system to allow you to easily move files from one OS to the other.

Copying files between Linux and Windows

The first step toward moving files between Windows and Linux is to download and install a tool such as PuTTY’s pscp. You can get PuTTY from putty.org and set it up on your Windows system easily. 
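Once pscp is installed, copying in either direction is a single command from the Windows command prompt. A hedged example (hostnames, usernames, and paths are all placeholders):

```
REM Copy a file from Windows to a Linux host:
pscp C:\Users\me\notes.txt me@linuxbox:/home/me/

REM And copy a file back from Linux to Windows:
pscp me@linuxbox:/home/me/report.log C:\Users\me\
```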

Read more at NetworkWorld

Crostini Linux Container Apps Getting Full Native Treatment on Chromebooks

Another day, another Crostini feature comes to light. So far, we have the Linux Terminal installer, Files app integration, and Material Design cues already rounding out the Linux app experience. As we continue to uncover clues by the day, it seems development of the Crostini project is full steam ahead, and today is no different. Each clue we uncover pushes the entire experience closer to something I believe will be delivered to developers and general users alike.

There are multiple commits around this same theme: handling app icons for Linux apps on your Chromebook. 

Read more at ChromeUnboxed

Data Protection for Containers: Part I, Backup

 How do you ensure data is properly backed up and recoverable according to current policies on a Kubernetes cluster?

Traditional agent-based backup software won’t work natively with a container orchestrator such as Kubernetes. A Persistent Volume (PV) may be attached to any host at any given time in the cluster, and having a backup agent hogging the mount path on the host will lead to unpredictable behavior and almost certainly failed backups. Applying the traditional backup paradigm to the containerized application paradigm simply will not work. Backup schemes need to be consumed as a data service provided by the cluster orchestrator and the underlying storage infrastructure. In this blog you’ll learn how HPE Nimble Storage provides these data protection services for Kubernetes, and how cluster administrators, traditional storage architects, and application owners who are part of a DevOps team may design and consume them.

Before we dig in, the very loaded word “Backup” needs to be defined with a touch of the 21st century. I’m using the define: operator on Google search to help me…

Read more at HPE