
Call for Proposals Now Open for Xen Project Developer and Design Summit 2017

We’re excited to announce that registration and the call for proposals are open for Xen Project Developer and Design Summit 2017, which will be held in Budapest, Hungary from July 11-13, 2017. The Xen Project Developer and Design Summit combines the formats of Xen Project Developer Summits with Xen Project Hackathons, and brings together the Xen Project’s community of developers and power users.

Submit a Talk

Do you have an interesting use case around Xen Project technology or best practices around the community? We’re looking for a wide variety of topics, including security, embedded environments, network function virtualization (NFV), and more. You can find all the suggested topics for presentations and panels here (make sure you select the Topics tab).

Several formats are being accepted for speaking proposals, including:

  • Presentations and Panels
  • Interactive design and problem solving sessions. These sessions can be submitted as part of the CFP, but we will reserve a number of design sessions to be allocated during the event. Proposers of design sessions are expected to host and moderate design sessions following the format we have used at Xen Project Hackathons. If you have not participated in these in the past, check out past event reports from 2016, 2015, and 2013.

Never talked at a conference before? Don’t worry! We encourage new speakers to submit for our events and have plenty of resources to help you prepare for your presentation.

Here are some dates to remember for submissions and in general:

  • CFP Close: April 14, 2017
  • CFP Notifications: May 5, 2017
  • Schedule Announced: May 16, 2017
  • Event: July 11-13, 2017

Registration

Come join us for this event, and if you register by May 19, you’ll get an early bird discount :) Travel stipends are available for students or individuals who are not associated with a company. If you have any questions, please send a note to community.manager@xenproject.org.

This article originally appeared on Xen Project Blog

Solving Monitoring in the Cloud With Prometheus

Hundreds of companies are now using the open source Prometheus monitoring solution in production, across industries ranging from telecommunications and cloud providers to video streaming and databases.

In advance of CloudNativeCon + KubeCon Europe 2017 to be held March 29-30 in Berlin, we talked to Brian Brazil, the founder of Robust Perception and one of the core developers of the Prometheus project, who will be giving a keynote on Prometheus at CloudNativeCon. Make sure to catch the full Prometheus track at the conference.

Linux.com: What makes monitoring more challenging in a Cloud Native environment?

Brian Brazil: Traditional monitoring tools come from a time when environments were static and machines and services were individually managed. By contrast, a Cloud Native environment is highly automated and dynamic, which requires a more sophisticated approach.

With a traditional setup there were a relatively small number of services, each with their own machine. Monitoring was on machine metrics such as CPU usage and free memory, which were the best way available to alert on user-facing issues. In a Cloud Native world, where many different services not only share machines, but the way in which they’re sharing them is in constant flux, such an approach is not scalable.

For example, with a mixed workload of user-facing and batch jobs, high CPU usage merely indicates that you’re getting good value for money out of your resources. It doesn’t necessarily indicate anything about end-user experience. Thus, metrics like latency, failure ratios, and processing times from services spread across machines must be aggregated up and then used for graphs and alerts.

In the same way that the move was made from manual management of machines and services to tools like Chef and now Kubernetes, we must make a similar transition in the monitoring space.

Linux.com: What are the advantages of Prometheus?

Brian Brazil: Prometheus was created with a dynamic cloud environment in mind. It has integrations with systems such as Kubernetes and EC2 that keep it up to date with what type of containers are running where, which is essential with the rate of change in a modern environment.

Prometheus client libraries allow you to instrument your applications for the metrics and KPIs that matter in your system. For third-party applications such as Cassandra, HAProxy, or MySQL, there are a variety of exporters to expose their useful metrics.

The data Prometheus collects is enriched by labels. Labels are arbitrary key-value pairs that can be used to distinguish the development cluster from the production environment, or to break a metric out by HTTP endpoint.
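To make this concrete, here is a minimal instrumentation sketch using the official Python client library, prometheus_client. The metric names, label names, and port below are made up for illustration and are not taken from the interview.

```python
# Minimal Prometheus instrumentation sketch (illustrative names only).
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Counter broken out by labels: which endpoint was hit and with what status.
REQUESTS = Counter(
    "http_requests_total",
    "Total HTTP requests served",
    ["endpoint", "status"],
)

# Histogram of request latency, used later for percentile queries in PromQL.
LATENCY = Histogram(
    "http_request_duration_seconds",
    "HTTP request latency in seconds",
    ["endpoint"],
)

def handle_request(endpoint):
    """Pretend to handle a request and record metrics about it."""
    with LATENCY.labels(endpoint=endpoint).time():
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS.labels(endpoint=endpoint, status="200").inc()

if __name__ == "__main__":
    # Expose metrics on http://localhost:8000/metrics for Prometheus to scrape.
    start_http_server(8000)
    while True:
        handle_request("/api/orders")
```

Each distinct combination of label values becomes its own time series, which is what makes the label-based queries described below possible.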

The PromQL query language allows for aggregation based on these labels, calculation of 95th percentile latencies per container, service or datacenter, forecasting, and any other math you’d care to do. What’s more: if you can graph it, you can alert on it. This gives you the power to have alerts on what really matters to you and your users, and helps eliminate those late night alerts for non-issues.
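As a hedged sketch of such a query in practice, the snippet below sends a PromQL instant query to Prometheus’s HTTP API and prints a per-service 95th percentile latency. The server address and metric name are assumptions carried over from the instrumentation sketch above, not something described by Brazil.

```python
# Sketch: run a PromQL instant query against the Prometheus HTTP API.
import requests

PROMETHEUS_URL = "http://localhost:9090/api/v1/query"  # assumed local server

# 95th percentile request latency over the last 5 minutes, per service label.
QUERY = (
    "histogram_quantile(0.95, "
    "sum(rate(http_request_duration_seconds_bucket[5m])) by (le, service))"
)

response = requests.get(PROMETHEUS_URL, params={"query": QUERY})
response.raise_for_status()

for result in response.json()["data"]["result"]:
    labels = result["metric"]            # e.g. {"service": "checkout"}
    _timestamp, value = result["value"]  # instant query: [timestamp, value]
    print(labels.get("service", "<none>"), value)
```

The same expression can be used in a dashboard panel or an alerting rule, which is the point of “if you can graph it, you can alert on it.”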

Linux.com: Are there things that catch new users off guard?

Brian Brazil: One common misunderstanding is the type of monitoring system that Prometheus is, and where it fits as part of your overall monitoring strategy.

Prometheus is metrics based, meaning it is designed to efficiently deal with numbers — numbers such as how many HTTP requests you’ve served and their latency. What Prometheus is not is an event logging system, and it is thus not suitable for tracking the details of each individual HTTP request made. By having both a metrics solution and an event logging solution (such as the ELK stack), you’ll cover a good range in terms of breadth and depth. Neither is sufficient on its own, due to the different engineering tradeoffs each must make.

Linux.com: What has the response to Prometheus been?

Brian Brazil: From its humble beginnings in 2012, when Prometheus had just two developers working on it part time, to today in 2017, hundreds of developers have contributed to the Prometheus project itself. In addition, a rich ecosystem has grown up around it, with over 150 third-party integrations — and that’s just the ones we know of.

There are hundreds of companies using Prometheus in production across all industries, from telecommunications to cloud providers, video streaming to databases, and startups to Fortune 500s. Since announcing 1.0 last year, the growth in users and the ecosystem has only accelerated.

Linux.com: Are there any talks in particular to watch out for at CloudNativeCon + KubeCon Europe?

Brian Brazil: For those who are used to more static environments, or just trying to reduce pager noise, Alerting in Cloud Native Environments by Fabian Reinartz of CoreOS is essential. If you’re already running Prometheus in a rapidly growing system, SoundCloud’s Björn Rabenstein, who wrote the current storage system, will cover what you need to know in Configuring Prometheus for High Performance.

For those on the development side, there’s a workshop on Prometheus Instrumentation that’ll take you from instrumenting your code all the way through visualising the results. My own talk on Counting in Prometheus is a deep dive into the deceptively simple sounding question of counting how many requests there were in the past hour, and how it really works in various monitoring systems.

Not everything is cloud native; Prometheus: The Unsung Heroes is a user story of how Prometheus can monitor infrastructure such as load balancers via SNMP. Finally, in Integrating Long-Term Storage with Prometheus, Julius Volz looks at the plans for our most sought-after pieces of future functionality.

All talks will be recorded, so if you aren’t lucky enough to attend in person, you can watch the talks later online.

CloudNativeCon + KubeCon Europe is almost sold out! Register now to secure your seat.

There’s More to Life Than Code: How to Keep Your Engineers Engaged

Building great new things requires hiring great engineers, but growing already great things requires keeping great engineers engaged. The key to that is making sure engineers feel rewarded and respected and have a sense of purpose, according to Camille Fournier, speaking at the Open Source Leadership Summit in February.

In her talk, Fournier highlighted her experiences as CTO of Rent the Runway, which she also has chronicled in an upcoming book, The Manager’s Path: A Guide for Tech Leaders Navigating Growth & Change.

Those three keys — reward, respect, and purpose — require constant curation and nurturing, she said; otherwise, even the strongest engineering teams will fall apart.

“Ultimately, it’s not a series of steps, it’s not a step ladder,” Fournier said. “Once you get people to ownership, it’s not like they’re just completely perfectly engaged and you’re done. Unfortunately, that’s not the way it works. You can undermine people by neglecting any one of these, no matter how engaged they are.”

The first need, reward, starts with an economic incentive to bring the talented engineers into your company or project in the first place. That can be pay – everybody has living costs – but it can also be the prestige in being involved in the project, or a belief in the mission of the company, Fournier said.

“If your project is cool, more people are going to work for it,” Fournier said. “We all know this. People go to open source projects largely because their employers are paying them to do it, so there’s an economic incentive. They’re going to open source projects because they’re using them at work. They’re going to open source projects because they think that having it on their resume… like Kubernetes mentioned earlier, having it on their resume means that they will be able to get more jobs.”

Once they’re initially committed, Fournier said, the ability to see their hard work put to use quickly is a major reward. Engineers like to see what they’ve built actually used, so a fast deployment cycle can really keep people engaged.

“Being able to move fast and being able to get things done is a reward,” she said. “Every day you get to solve a little piece of a puzzle, you feel good. This is part of why we all went into tech in the first place.”

The key to moving an engineering team from merely contributing to being committed to the cause is respect, Fournier said. The first building block for respect is safety; people want an environment where they can ask questions, make mistakes, and be vulnerable and honest. Fournier pointed to a Google study that found psychological safety to be the first key to impactful teams.

“I think this all comes down to really a feeling of relatedness, a feeling of kinship, friendliness, community, feeling like you’re part of a group that has your back,” she said. “You’re part of a tribe. This is ultimately what gives you that psychological safety element.”

Fournier said this was something she had to work on in her growth as a manager: asking people questions about their lives instead of just trying to solve work problems.

“Just simple things, we’re not becoming BFFs at work,” she said. “Just treating people like they’re more than a cog in the machine.”

Cross-Functional Teams

She found that her engineers actually were most productive when they not only felt like they were part of an engineering team, but when they felt like they were a part of the entire company. When Rent The Runway created cross-functional teams — with people from all departments working together to solve single problems — her engineers were at their happiest and most productive.

“There is more to life than code,” Fournier said. “We see this in our open source projects. Big successful open source projects need more than just software developers. They need people who are capable of answering questions on mailing lists, of getting up on stage, like I’m doing right now, and teaching people about how to use the project.”

When those cross-functional teams were solving problems, that’s when a sense of ownership permeated Rent The Runway, Fournier said. That’s the third key: a sense of purpose, where the engineers not only understand why they’re building what they’re building, but where that project fits in the direction of the company, and that the little decisions they’re making every day while building something are helping steer the company in that direction.

“When we created those cross-functional teams at Rent The Runway, they were successful not just because we helped people see the larger context of the business, but also because we gave them high level business goals and told them to figure out the steps that they wanted to take to achieve those,” Fournier said. “What features should we build? What products should we build to achieve those goals? That was incredibly, incredibly engaging. Giving away ownership, figuring out how to engage people, not just by saying, ‘Here’s what you’re doing, go do it,’ but saying, ‘Hey, here’s the goal, figure out how we think we should go do it.’ That is the true engagement that comes from a strong sense of purpose and a strong sense of ownership.”

Fournier said that each of those three desires — reward, respect and purpose — feed off each other, and require constant reinforcement from managers.

“We are always in great times of change in the tech industry,” she said. “Keep learning. Keep asking yourself questions and keep questioning yourself, ‘How do I keep my teams engaged?’ This is the secret to building great, motivated, and engaged engineering teams.”

Watch the complete presentation below:

https://www.youtube.com/watch?v=7R-Y2DwWOr0&list=PLbzoR-pLrL6rm2vBxfJAsySspk2FLj4fM

Learn how successful companies gain a business advantage with open source software in our online, self-paced Fundamentals of Professional Open Source Management course. Download a free sample chapter now!

An Exploration of Citrix Delivery Networks

While many of us may be more familiar with the virtualization and remote access products from Citrix, Danny Phillips talked about the company’s products in the networking space during his keynote presentation at LinuxCon Europe.

In particular, he focused mostly on NetScaler, which Citrix refers to as an Application Delivery Controller (ADC): essentially a load balancer with a few extra features. NetScaler provides availability, increased performance, offloaded processing, and security, mainly for web services and web applications. Phillips says, “we believe that networking is done better in software,” which he sees as one of the things that has allowed them to create a flexible platform.

Phillips talks about the “power of any,” which is the idea that you should be able to mix and match or easily switch between form factors, hypervisors, cloud platforms, orchestration systems, and architectures. Citrix helps make this easier by providing a single API, a single code base, a single feature set, and a single management infrastructure to go across all form factors, hypervisors, cloud platforms, orchestration systems, and architectures.

As with most components of the technology stack, the rapid uptake of containers is impacting how we use load balancing technologies. Phillips says, “Do you really want to come all the way out of your container infrastructure to hit a load balancer to go all the way back in? What we really need is something small that lives inside that architecture, inside those container hosts. … This is NetScaler CPX.” However, you could easily end up with thousands of tiny micro-load balancers scattered throughout your environment, which he refers to as the problem of “lots of little.” Keeping everything in sync and working together then requires some additional management, which Citrix provides with NetScaler MAS.

Phillips sums it up by pointing out that “NetScaler is a lot more than a load balancer, we have it in a number of different formats, so it can be consumed pretty much anywhere. Also we think we’re well positioned to help you in this new container environment.”

Watch the full video of the keynote below to learn more about Citrix Delivery Networks.

Interested in speaking at Open Source Summit North America on September 11-13? Submit your proposal by May 6, 2017. Submit now>>

Not interested in speaking but want to attend? Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the all-access attendee registration price. Register now to save over $300!

Amid Shortages in Apache Spark Skillsets, Training Options Proliferate

The open source Big Data scene is red hot, but organizations are now dealing with shortages in people with relevant deployment and management expertise. There are simply not enough skilled workers to go around, especially when it comes to one of the hottest technologies of all: Apache Spark.

According to Dice, the most in-demand technology skills are in Big Data, with Spark at the top of the list. Although the need for these skills has increased in the past few years, employers are still challenged to find qualified candidates. The Taneja Group recently reported similar findings in a Cloudera-sponsored global survey of nearly 7,000 technical and managerial-level professionals working in Big Data. The survey found that nearly half of the respondents see the Big Data skills gap as the most significant challenge to deploying Spark, and one-third named complexity in learning Spark as a barrier to adoption.

According to the Taneja Group report: “Barriers to adoption [of Spark] and challenges remain, and are largely attributed to the Big Data skills gap and the ability to consume relevant training in a variety of formats (online, in-person, conference or tradeshow).”

However, the good news is that Spark training options are spreading out, and some of the best options are free or available at low cost. MapR, which focuses on Hadoop as well as Spark, offers numerous Spark training options, and Cloudera also has an expanded Spark training curriculum. For more information about Cloudera’s courses on Spark and to register for a class, you can visit university.cloudera.com. Meanwhile, you can get a preview of MapR’s Spark Essentials course here.

How is a typical course structured? In the first part of MapR’s Spark Essentials course, students use Spark’s interactive shell to load and inspect data. The course describes the various modes for launching a Spark application, and students go on to build and launch a standalone Spark application. MapR notes that the concepts are taught using scenarios that form the basis of hands-on labs.
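As a rough illustration of the kind of exercise such a course builds on (not MapR’s actual course material), here is a small PySpark sketch that loads a text file, inspects it, and runs a simple word count; the file path and application name are placeholders.

```python
# Illustrative PySpark sketch: load, inspect, and aggregate a text file.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sketch").getOrCreate()

# Load and inspect data, much as you would interactively in the Spark shell.
lines = spark.read.text("data/sample.txt")  # placeholder path
lines.show(5, truncate=False)

# A classic first aggregation: count word occurrences across the file.
words = lines.rdd.flatMap(lambda row: row.value.split())
counts = words.map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b)
print(counts.take(10))

spark.stop()
```

The same lines can be typed into the interactive pyspark shell one at a time, or saved to a file and launched as a standalone application with spark-submit.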

Cloudera University offers both instructor-led courses and on-demand training options. The courses are focused not just on Spark but on other tools in the Spark ecosystem, including Apache Impala, Apache Kudu, Apache Kafka, and Apache Hive. There is high demand for people with skills spanning across these data-centric, Apache-stewarded projects.

“Cloudera University has established itself as a valuable resource for preparing data professionals across every industry. We’ve seen throughout the years that organizations which invest in training up front drive deeper results from their big data initiatives and move more quickly from proof of concept into full production environments,” said Mark Morrissey, senior director, Education Programs at Cloudera. “The skills gap continues to be the biggest hurdle in our industry.”

There are other Spark training options that come along with technology bundles based on Spark. For example, Databricks, which is the company founded by the same team that created Apache Spark, has announced its Databricks Community Edition (DCE), a free version of a just-in-time data platform built on top of Spark. It comes with access to free, online courses that can arm you with top-notch Spark skills. With the Databricks Community Edition, users have access to 6GB clusters as well as a cluster manager and a notebook environment to prototype simple applications.

Databricks also offers a diversified set of Spark training options, including an option where an organization can have Databricks’ trainers teach workers in their own workplace environments. Databricks’ classes are structured to minimize time requirements, too. For example, it offers an Apache Spark Programming course that can be completed in three days.

Demand for people with Spark skills will only increase, and that will be partially driven by the huge investments that powerful companies are making. Leaders at IBM have called Spark “the most important new open source project in a decade,” and the company is investing hundreds of millions of dollars in Spark-related initiatives. The bottom line is that a little Spark education can go a long way.

Learn more about Spark at Apache: Big Data, which gathers developers, operators, and users working in Big Data for education, collaboration, and more. Check out the conference schedule and register now!

10 (Mostly) Easy Linux Distros for Newbies

A fresh look at some of the more popular Linux distros (plus one non-Linux OS), and an impression of their ease of use.

Linux has a bad rap as a daily driver – the programs aren’t written to run on Linux, it’s tricky to install stuff, and so on. But it might surprise people who think along those lines to learn that plenty of the distributions out there are actually quite simple to use. Here’s our latest appreciation of the desktop Linux landscape.

Read more at InfoWorld

Teaching Children to Code

Chris Ward looks at tools that help teach children one of the most essential skills of the modern age: how to code.

A lot of projects aimed at children focus on visual learning, such as teaching concepts with draggable, interlinking blocks for creating visual applications like games and animations.

Scratch

Scratch from MIT was one of the earliest contenders, using simple verbs and characters to describe programming concepts. For example, ‘repeat’ represents a loop, while ‘move’ and ‘play’ describe actions characters can take.

Read more at DZone

Three Challenges for the Web, According to its Inventor

Today marks 28 years since I submitted my original proposal for the world wide web. I imagined the web as an open platform that would allow everyone, everywhere to share information, access opportunities and collaborate across geographic and cultural boundaries. In many ways, the web has lived up to this vision, though it has been a recurring battle to keep it open. But over the past 12 months, I’ve become increasingly worried about three new trends, which I believe we must tackle in order for the web to fulfill its true potential as a tool which serves all of humanity.

1)   We’ve lost control of our personal data

The current business model for many websites offers free content in exchange for personal data. Many of us agree to this – albeit often by accepting long and confusing terms and conditions documents – but fundamentally we do not mind some information being collected in exchange for free services. 

Read more at WorldWideWeb Foundation

Danish Shipping Company Uses Blockchain in IBM Partnership

IBM and Danish shipping giant Maersk are using blockchain technology to digitise transactions in the global shipping industry. It is a huge market, with about 90% of the world’s trade carried by sea.

The blockchain product that IBM and Maersk are developing could help to track and manage the paper trail of millions of shipping containers end to end. It will increase transparency and improve secure data sharing between trading partners.

The companies’ blockchain effort is based on Hyperledger Fabric, part of the Linux Foundation’s open source Hyperledger Project, and is scheduled to be ready for production in late 2017.

Read more at ComputerWeekly

Dockerizing LEMP Stack with Docker-Compose on Ubuntu

Docker-Compose is a command line tool for defining and managing multi-container Docker applications. Compose is a Python script and can be installed easily with the pip command (pip is the command for installing Python software from the Python package repository). With Compose, we can run multiple Docker containers with a single command. It allows you to create a container as a service, which is great for your development, testing, and staging environments.

In this tutorial, I will guide you step-by-step through using docker-compose to create a LEMP stack environment (LEMP = Linux – Nginx – MySQL – PHP). We will run each component in its own Docker container: an Nginx container, a PHP container, a PHPMyAdmin container, and a MySQL/MariaDB container.
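The full tutorial at HowtoForge walks through its own compose file; purely as a sketch of the shape such a file takes, the fragment below defines the four services described above. Image tags, paths, and the password are placeholders, not the tutorial’s actual configuration.

```yaml
# docker-compose.yml sketch for a LEMP stack (placeholder values throughout).
version: '2'

services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"                 # serve the site on the host's port 80
    volumes:
      - ./web:/var/www/html
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php

  php:
    image: php:7.1-fpm
    volumes:
      - ./web:/var/www/html

  db:
    image: mariadb:latest
    environment:
      MYSQL_ROOT_PASSWORD: secret   # placeholder password
      MYSQL_DATABASE: lemp_db

  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    ports:
      - "8080:80"
    environment:
      PMA_HOST: db
    depends_on:
      - db
```

With a file like this saved as docker-compose.yml, a single docker-compose up -d command brings up all four containers.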

Read more at HowtoForge