
Verizon Joins ONOS Open-Source SDN Project

Verizon is the latest major service provider to join the ONOS open-source network virtualization initiative, joining other carriers like AT&T, NTT Communications, China Unicom, and SK Telecom in the effort.

Verizon officials said Jan. 21 that they joined the ONOS (Open Network Operating System) project in hopes of accelerating the development of open-source software-defined networking (SDN) and network-functions virtualization (NFV) offerings that their company and other carriers can use. … The ONOS project, which joined the Linux Foundation last year, is developing a carrier-grade SDN operating system aimed at delivering high availability, scalability, and performance.

Read more at eWeek

How to Install Graylog2 and Elasticsearch on Ubuntu 15.10

In this tutorial, I will guide you through the installation of Graylog2, Elasticsearch, and MongoDB to build a scalable log server node with advanced log search capabilities. I will use Ubuntu 15.10 for this installation. Elasticsearch is a distributed search server based on Lucene that is available as open source software. Graylog2 is a centralized log management and log analysis framework based on Elasticsearch and MongoDB.

Read more at HowtoForge

Meet Deepin 15 – Video Overview and Screenshots

Deepin 15 was released and announced by the Deepin developers on December 24, 2015. It ships with the latest version of the Deepin Desktop Environment (version 3.0), is based on Debian Sid, and is powered by Linux kernel 4.2.

This release also brings new changes and improvements: it supports 30 languages, the Control Center and Dock components are now fully pluggable, the Upstart init system has been replaced by systemd, the GTK+ 3.18.6, Qt 5.5.1, and GCC 5.3.1 packages have been added, and the default shell is now Bash instead of Zsh.

More details: Meet Deepin 15 – Video Overview and Screenshots

Data Collection for Embedded Linux and IoT with Open Source Fluent Bit


Nowadays, embedded devices are cheap, and there are many options with really good specifications. Five years ago, for example, it was unimaginable to find a quad-core board for less than $30. But although the embedded hardware market continues to grow, several challenges remain on the software side.

In Internet of Things (IoT) environments, where devices interact with each other, connectivity is a requirement that serves a major purpose: data transfer. In some cases, this data contains instructions that aim to invoke remote functions in peer devices; in other cases, it carries information coming from sensors, services, and metrics. Because the data usually comes from different sources, it will likely arrive in different formats, so collecting this information requires special handling. A common approach is to implement a unified logging layer; remember that logging is no longer restricted to data in a file, but rather refers to a stream of data.

When this data is collected, it’s very useful to store it in a simple database or a third-party service. If we aim to perform real-time queries, it would be great to insert these records into an Elasticsearch instance or a similar solution. But because we are dealing with restricted environments, not all available tools are suitable for this task.

This article introduces a specialized tool for embedded Linux (and general IoT environments) that is built to solve the common problems associated with data collection, unification, and delivery: Fluent Bit.

Fluent Bit

Fluent Bit is an open source data collection tool originally developed for embedded Linux. It aims to solve all problems associated with data, from collection to distribution. It’s built in C and provides the following features:

  • Small core

  • Input/Output plugins

  • Event-driven (async I/O network operations)

  • Internal data serialization with MsgPack

  • Built-in metrics

  • SSL/TLS support

Fluent Bit can deal with data collection in different modes: it can listen over the network (locally or remotely), collect predefined metrics from the running system, or simply be used as a library by any program that needs to flush data to databases or third-party services.

For Fluent Bit, every source of data is handled through an input plugin, and targets for delivery are handled by output plugins. The following table describes the options available for collection and delivery:

Name      Type     Description
----      ----     -----------
cpu       input    Collect metrics of CPU usage, in global mode and per core.
mem       input    Calculate memory status: total vs. available.
kmsg      input    Read log messages directly from the kernel log buffer.
xbee      input    Receive messages from a connected XBee device.
serial    input    Get messages from the serial interface.
stdin     input    Read messages from the standard input.
mqtt      input    Listen for MQTT messages over TCP (behaves as an MQTT server).
es        output   Flush records into an Elasticsearch server.
fluentd   output   Flush records to a Fluentd data collector/aggregator instance.
td        output   Flush records to the Treasure Data cloud service (Big Data).
stdout    output   Flush records to the standard output (for debugging purposes).
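As a quick taste of how these plugins combine (the command-line syntax is covered in detail below), the following sketch, assuming a default installation with no extra configuration, would turn the device into a simple MQTT endpoint that prints every received message to the terminal:

$ fluent-bit -i mqtt -o stdout

Pairing any new input with the stdout output first is a handy way to confirm that data is flowing before targeting a real backend.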

Getting Started

For demonstration purposes, I will show a very basic, hands-on example of how to collect some CPU metrics from a Linux embedded device (e.g., Raspberry Pi or a generic Linux host) and insert these records into an Elasticsearch service. Finally, I will show some visualization using Kibana.

Install Fluent Bit

I will assume that you have a Raspberry Pi device running Raspbian; otherwise, a normal Linux host with Debian or Ubuntu is fine. Before you install the packages for your distribution, make sure to add the APT key to your system:

$ wget -qO - http://apt.fluentbit.io/fluentbit.key | sudo apt-key add -

Now, depending on your specific distribution, add the correct repository entry to your package lists.

Raspberry Pi/Raspbian 8 (Jessie):

$ sudo su -c "echo deb http://apt.fluentbit.io/raspbian jessie main >>  /etc/apt/sources.list"

Debian 8 (Jessie):

$ sudo su -c "echo deb http://apt.fluentbit.io/debian jessie main >> /etc/apt/sources.list"

Ubuntu 15.10 (Wily Werewolf):

$ sudo su -c "echo deb http://apt.fluentbit.io/ubuntu wily main >> /etc/apt/sources.list"

Finally, please update your local repository and install Fluent Bit:

$ sudo apt-get update
$ sudo apt-get install fluentbit
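With the package installed, a quick smoke test is worthwhile before introducing Elasticsearch. This minimal sketch (assuming the fluent-bit binary is now in your PATH) pairs the cpu input with the stdout output, so you can watch metrics printed to the terminal; stop it with Ctrl+C:

$ fluent-bit -i cpu -o stdout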

Install Elasticsearch and Kibana

The following steps give you some hints on how to install the Elasticsearch and Kibana components. Note that the steps mentioned here are just a reference, and I encourage you to double-check them against the official documentation for each project. Let’s start with Elasticsearch:
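For reference only, and assuming a Debian-based system, installing Elasticsearch 2.x looked roughly like the following sketch; the repository line, key URL, and package names are taken from the Elasticsearch 2.x documentation of the era, so verify them before use:

$ sudo apt-get install openjdk-7-jre-headless   # assumption: Elasticsearch needs a Java runtime
$ wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
$ sudo su -c "echo deb http://packages.elastic.co/elasticsearch/2.x/debian stable main >> /etc/apt/sources.list"
$ sudo apt-get update
$ sudo apt-get install elasticsearch
$ sudo service elasticsearch start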

Once installed, make sure the following curl test command returns a response similar to this:

$ curl -X GET http://localhost:9200/
{
 "name" : "Shiver Man",
 "cluster_name" : "elasticsearch",
 "version" : {
   "number" : "2.1.1",
   "build_hash" : "40e2c53a6b6c2972b3d13846e450e66f4375bd71",
   "build_timestamp" : "2015-12-15T13:05:55Z",
   "build_snapshot" : false,
   "lucene_version" : "5.3.1"
 },
 "tagline" : "You Know, for Search"
}

At this point, Elasticsearch is up and running, so we can proceed to install Kibana (a data visualization tool):
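Again, just as a reference sketch: Kibana 4.x was distributed as a standalone tarball, so an installation could look like the following. The exact version and download URL are assumptions based on the Kibana 4.3 releases that matched Elasticsearch 2.1; verify them on the official download page:

$ wget https://download.elastic.co/kibana/kibana/kibana-4.3.1-linux-x64.tar.gz
$ tar xzf kibana-4.3.1-linux-x64.tar.gz
$ cd kibana-4.3.1-linux-x64
$ ./bin/kibana   # Kibana listens on TCP port 5601 by default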

If the Kibana service is running, you can access the dashboard at http://127.0.0.1:5601, where you should be greeted by the Kibana settings home screen.

In the following section, I will show an example of how to collect some CPU metrics with Fluent Bit and start inserting records into Elasticsearch.

Fluent Bit/CPU Metrics

Now that all our components are in place, we can start getting metrics from the board or host where Fluent Bit has been installed. As mentioned previously, Fluent Bit needs to know which input and output plugins it should use. These can be specified from the command line, for example:

$ fluent-bit -i INPUT -o OUTPUT

For this use case, we will gather CPU usage metrics using the cpu input plugin and flush the data out to an Elasticsearch instance through the es output plugin. In addition to the plugin name, es requires some extra parameters: hostname (or IP address), TCP port, index, and type:

$ fluent-bit -i cpu -o es://HOSTNAME:TCP_PORT/INDEX/TYPE

Ideally, you will be running Fluent Bit from your Raspberry Pi. Make sure to use the right hostname or IP address to reach the Elasticsearch server in your network; otherwise, you can use the loopback address 127.0.0.1 if all components are on the same machine.

Assuming that our Elasticsearch server is located at the address 192.168.1.15, we will start inserting CPU metrics with the following command:

$ fluent-bit -i cpu -o es://192.168.1.15:9200/fluentbit/cpu -V

The -V argument prints verbose messages. Leave that terminal running as the tool will collect metrics every second and flush them to the Elasticsearch server every five seconds.

Visualization with Kibana

While Fluent Bit inserts records into Elasticsearch, we will prepare Kibana to visualize the information. Execute the following commands on the host where Elasticsearch and Kibana are running:

$ wget http://fluentbit.io/kibana/fluentbit.mapping.json
$ wget http://fluentbit.io/kibana/fluentbit.cpu.json

Now, to create the default mapping for Fluent Bit and its CPU data, do:

$ curl -XPUT http://localhost:9200/fluentbit -d @fluentbit.mapping.json
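If the mapping is accepted, Elasticsearch should reply with a short JSON acknowledgment, typically something like:

{"acknowledged":true}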

Again, open your Kibana dashboard at http://127.0.0.1:5601, go to Settings in the top menu, and configure an index pattern. Note that “Index contains time-based events” must be checked, the index name is fluentbit, and the time-field name is auto-filled with date. Then, click the Create button.

Visualization Object

The next and final step is to load the predefined visualization object. Go to Settings, click the Objects option, then click the Import button and choose the fluentbit.cpu.json file you downloaded earlier.

Now navigate to the Dashboard top menu, click on the + button, and choose the new Fluent Bit – CPU visualization object. Once added, it will start showing the CPU metrics.

The resulting graph displays how much CPU time the system has spent in kernel and user space, respectively.

More about Fluent Bit

In this article, I have demonstrated just a small fraction of the capabilities of Fluent Bit. If you are writing custom C programs for embedded Linux, you can use it as an agnostic logging library, and it will take care of data packaging and routing. If you care about security, TLS can be enabled on all networking plugins without effort.

Feel free to check the source code in our github.com/fluent/fluent-bit repository, where you will find more resources for packaging, examples, unit test cases, and recipes for the Yocto Project. You can also learn more in the official documentation.

Thousands of users collect billions of records with Fluentd on a daily basis, and now Treasure Data is taking this experience to the world of embedded Linux and IoT with Fluent Bit.

Community Announcement

Fluent Bit is part of the Fluentd project ecosystem, and we will be participating at the Scale14x event. Join us at our session on Saturday, Jan. 23 or just come to say hi at our Fluent booth. If you are an active user of these tools, ping us on @fluentbit, as we will have free stickers and t-shirts! 

Eduardo Silva is a principal open source developer at Treasure Data Inc. He currently leads the efforts to make the logging ecosystem more friendly between embedded and cloud services. He also directs the Monkey Project organization, which is behind the open source projects Monkey HTTP Server and Duda I/O.

Linux Foundation Certified Engineer: Francisco Tsao

The Linux Foundation offers many resources for developers, users, and administrators of Linux systems, including its Linux Certification Program. This program is designed to give you a way to differentiate yourself in a competitive job market.

How well does the certification prepare you for the real world? To illustrate that, the Linux Foundation will be featuring some of those who have recently passed the certification examinations. These testimonials should serve to help you decide if either the Linux Foundation Certified System Administrator (LFCS) or the Linux Foundation Certified Engineer (LFCE) certification is right for you. In this feature, we talk with Francisco Tsao, who recently achieved LFCE certification.

How did you become interested in Linux and open source?

In 1998, I got bored with the MS-DOS/Windows world. I was studying Civil Engineering, but the Faculty of Computer Science was in the neighborhood of my school, and I had some friends there. I began hearing about GNU/Linux from them. I bought a new computer and spent a weekend installing Debian 2.0, and after a week I had a graphical interface running on the box. The same year, I joined GPUL, the Coruña Linux Users Group, where I learned a lot about tech and Free Software philosophy. Richard Stallman’s “The Right To Read” definitely changed my life. I’m very proud of my LUG (one that is still very much alive). In fact, this year we hosted the Akademy!

What Linux Foundation course did you achieve certification in? Why did you select that particular course?

I achieved the Linux Foundation Certified Engineer (LFCE) certification, but I didn’t follow a Linux Foundation course. As I usually do, I prepared myself by looking at documentation on the Internet, making my own notes, and running a lot of tests with virtual machines. I aimed for the LFCE because it was more challenging to me than the LFCS; plus, it was a perfect pretext to spend time diving into some technical areas in which I was not strong.

What other hobbies or projects are you involved in? Do you participate in any open source projects at this time?

My wife and friends usually say I’m sick, because when I arrive at home I leave my work laptop and then… I take up my personal laptop. Computers are my main hobby; I like testing and learning new technologies. But I never forget about the lessons learned from my alma mater, so I like reading about structural engineering and urban planning.

As far as open source/free software projects, I have been involved in the organization of hackmeetings with GPUL; the greatest one that I took part in was GUADEC 2012. I also maintain some package translations for the Free Translation Project, as part of the Galician Team. And, these days I’m getting ready to become a Fedora maintainer in the near future.

Do you plan to take future Linux Foundation courses? If so, which ones?

Maybe I’ll take the OpenStack Administration Fundamentals. 2016 will certainly be the year of OpenStack.

In what ways do you think the certification will help you as a systems administrator in today’s market?

There is a lot of demand for Linux sysadmins in the market today. I think the LFCE helps distinguish sysadmin professionals from newbies. Because the LFCE exam is a fully practical exam (and not a multiple-choice one), having an LFCE certification in your CV guarantees to HR people that you are qualified.

What Linux distribution do you prefer and why?

I have been a Debian (sid) fanboy for nearly 15 years. I loved the quality of the packages and the Debian Free Software Guidelines. But, when I began to work in the “corporate world,” I needed a change. So, now I use Fedora on my desktop systems, because it offers fresh software along with great stability. For my servers, I install CentOS and OpenBSD. However, these days I’m very excited about the release of openSUSE Leap. I think it will be a good choice for servers and cloud instances (I’m running some experiments in that direction on SUSE Studio).

Are you currently working as a Linux systems administrator? If so, what role does Linux play?

Yes, currently I work in OS support for a major insurance company in Spain. I’m part of the team that administers the company’s mission-critical systems. I was hired three years ago because they needed GNU/Linux specialists to run their large Linux server farm.

Where do you see the Linux job market growing the most in the coming years?

I think the coming years will bring the “real” adoption of (public/private) cloud technologies. And methodologies like Continuous Delivery and microservices architectures will improve the internal processes of companies, so container infrastructure will become more and more important. Maybe it will be the end of pure sysadmins, as we will need more knowledge of the development and delivery processes.

What advice would you give those considering certification for their preparation?

Forget recipes; it’s not about memorization. Understand what you are doing by reading books and documentation that give you a deep background on the tasks you’ll perform in the exam and in real life. Imagine real problems and try to solve them. Practice a lot, as the exam time is tight. And remember, you will only have the system documentation at the exam, so train using only the documentation that ships with the operating system, without searching the Internet. Happy hacking!

Read more profiles:

Linux Foundation Certified System Administrator: Gabriel Canepa

Linux Foundation Certified Engineer: Michael Zamot

Linux Foundation Certified System Administrator: Ariel Jolo

Linux Foundation Certified System Administrator: Nam Pho

Linux Foundation Certified System Administrator: Steve Sharpe

Linux Foundation Certified Engineer: Diego Xirinachs

Kodi 16 “Jarvis” to Be a Massive Update, First RC Is Out

The first Release Candidate of Kodi 16.0 “Jarvis” has just been released by its developers, and it signals that we’re getting closer to the stable version of this amazing application.

We might find it difficult to believe that an application as complex and feature-rich as Kodi can still receive big improvements, but its developers have shown us that it can be done. From what has been revealed so far, Kodi 16.0 “Jarvis” promises to be a huge release and an important step for the project.

Alibaba Teams With Nvidia in $1 Billion Bet on Cloud Computing

Alibaba Group Holding Ltd. will work with Nvidia Corp. on cloud computing and artificial intelligence, and plans to enlist about 1,000 developers to work on its big-data platform during the next three years.

The arm of China’s biggest e-commerce operator, known as AliCloud, will boost investment in data analysis and machine learning, it said in a statement Wednesday. AliCloud is staking $1 billion on the belief that demand for processing and storage from governments and companies will boost growth during the next decade as it tries to compete with Amazon.com Inc. in computing services.

Read more at Bloomberg

How To Improve Tech Skills While Contributing to Open Source Projects

Although some people think open source projects only need programmers—and experienced ones, at that—open source project needs go beyond the ability to write code. Projects also require testing, technical support, documentation, marketing, and more. Contributing is also a great way to improve technical skills and connect with people who share similar interests. One barrier to participating in open source projects is not knowing how to join and get started. In this article, I’ll explain how to start contributing to an open source project.

Read more at OpenSource.com

Pkware Aims to Take Pain out of Crypto (And Give IT the Golden Key)

One of the reasons that most people don’t use public key encryption to protect their e-mails is that the process is simply too arduous for everyday communications.

Open source projects like GNU Privacy Guard and GPGTools have made it easier for individuals to use PGP encryption, but managing the keys used in OpenPGP and other public-key encryption formats still requires effort. … Pkware’s just-announced Smartcrypt covers everything from mainframes to mobile devices. Smartcrypt lets organizations decide what kind of encryption and authentication they want to use, and it integrates into many common applications.

Read more at Ars Technica

Docker Acquires Unikernel to Improve Container App Deployments

Solomon Hykes, Docker founder, explains how Unikernel Systems and the concept of purpose-built microkernels will improve the future of computing.

Docker Inc. today announced the acquisition of Unikernel Systems in a deal that aligns the emerging world of unikernel purpose-built system development with Docker containers. Financial terms of the deal are not being publicly disclosed. “Just like we’ve seen a spectrum of application payloads deployed across virtual machines, Linux containers and soon Windows containers, we expect the spectrum to expand to unikernels,” Solomon Hykes, founder of Docker, told eWEEK.

Read more at eWeek