
The US Again Has World’s Most Powerful Supercomputer

PLENTY OF PEOPLE around the world got new gadgets Friday, but one in Eastern Tennessee stands out. Summit, a new supercomputer unveiled at Oak Ridge National Lab, is, unofficially for now, the most powerful calculating machine on the planet. It was designed in part to scale up the artificial intelligence techniques that power some of the recent tricks in your smartphone.

America hasn’t possessed the world’s most powerful supercomputer since June 2013, when a Chinese machine first claimed the title. Summit is expected to end that run when the official ranking of supercomputers, from Top500, is updated later this month.

Summit, built by IBM, occupies floor space equivalent to two tennis courts, and slurps 4,000 gallons of water a minute around a circulatory system to cool its 37,000 processors. Oak Ridge says its new baby can deliver a peak performance of 200 quadrillion calculations per second (that’s 200 followed by 15 zeros), or 200 petaflops, by the standard measure used to rate supercomputers. That’s about a million times faster than a typical laptop, and nearly twice the peak performance of China’s top-ranking Sunway TaihuLight.

Read more at Wired

Comparing Files and Directories with the diff and comm Linux Commands

There are a number of ways to compare files and directories on Linux systems. The diff, colordiff, and wdiff commands are just a sampling of commands that you’re likely to run into. Another is comm. The command (think “common”) compares two sorted files in side-by-side columns, showing which lines are unique to each file and which lines they have in common.

Where diff gives you a display showing the lines that are different and the location of the differences, comm offers some different options with a focus on common content. Let’s look at the default output and then some other features.

Here’s some diff output — displaying the lines that are different in the two files and using < and > signs to indicate which file each line came from.
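To make the contrast concrete, here is a small, hedged sketch; the file names and contents are invented purely for illustration, with the expected output shown in comments.

    # Create two tiny example files to compare.
    printf "apple\nbanana\ncherry\n" > file1
    printf "apple\nblueberry\ncherry\n" > file2

    diff file1 file2
    # 2c2
    # < banana        <- the differing line as it appears in file1
    # ---
    # > blueberry     <- the differing line as it appears in file2

    # comm expects sorted input and prints three columns:
    # lines only in file1, lines only in file2, lines common to both.
    comm file1 file2
    #                 apple
    # banana
    #         blueberry
    #                 cherry

    comm -12 file1 file2    # suppress columns 1 and 2: show only the common lines
    # apple
    # cherry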

Read more at Network World

DevSecOps Gains Enterprise Traction

Enterprise adoption of DevSecOps has surged in the past year, according to a study conducted at this year’s RSA Conference.

DevSecOps is a great portmanteau word, but is it a concept in wide use? According to a survey of attendees at this year’s RSA Conference, it’s not yet universal, but many more organizations are now embracing at least some DevSecOps principles than was the case even a year ago.

In all, 63% of the participants said they have a formal or informal DevSecOps team in place. According to Andy Feit, VP, go-to-market, at Aqua Security, the “informal” part is important.

Read more at Dark Reading

Leap Motion Open Sources the Project North Star AR Headset’s Schematics

Leap Motion has long been a proponent of immersive technology. When VR hardware began to emerge in the consumer market, Leap Motion quickly adapted its technology for VR input. Now it has turned its sights to the budding AR market, but instead of offering to license its tracking technology to hardware makers, the company created a full reference headset to help accelerate AR HMD design. 

Today, Leap Motion released the project details for anyone to see and tinker with. You can download the project files from Leap Motion’s website, or you can find them on GitHub. The package includes detailed schematics for the mechanical bits and assembly instructions.

Read more at Tom’s Hardware

Mesos and Kubernetes: It’s Not a Competition

The roots of Mesos can be traced back to 2009, when Ben Hindman was a PhD student at the University of California, Berkeley working on parallel programming. They were doing massive parallel computations on 128-core chips, trying to solve multiple problems such as making software and libraries run more efficiently on those chips. He started talking with fellow students to see if they could borrow ideas from parallel processing and multiple threads and apply them to cluster management.

“Initially, our focus was on Big Data,” said Hindman. Back then, Big Data was really hot and Hadoop was one of the hottest technologies.  “We recognized that the way people were running things like Hadoop on clusters was similar to the way that people were running multiple threaded applications and parallel applications,” said Hindman.

However, it was not very efficient, so they started thinking how it could be done better through cluster management and resource management. “We looked at many different technologies at that time,” Hindman recalled.

Hindman and his colleagues, however, decided to adopt a novel approach. “We decided to create a lower level of abstraction for resource management, and run other services on top of that to do scheduling and other things,” said Hindman. “That’s essentially the essence of Mesos — to separate out the resource management part from the scheduling part.”

It worked, and Mesos has been going strong ever since.

The project goes to Apache

The project was founded in 2009. In 2010 the team decided to donate the project to the Apache Software Foundation (ASF). It was incubated at Apache and in 2013, it became a Top-Level Project (TLP).

There were many reasons why the Mesos community chose Apache Software Foundation, such as the permissiveness of Apache licensing, and the fact that they already had a vibrant community of other such projects.  

It was also about influence. A lot of people working on Mesos were also involved with Apache, and many people were working on projects like Hadoop. At the same time, many folks from the Mesos community were working on other Big Data projects like Spark. This cross-pollination led all three projects — Hadoop, Mesos, and Spark — to become ASF projects.

It was also about commerce. Many companies were interested in Mesos, and the developers wanted it to be maintained by a neutral body instead of being a privately owned project.

Who is using Mesos?

A better question would be, who isn’t? Everyone from Apple to Netflix is using Mesos. However, Mesos had its share of challenges that any technology faces in its early days. “Initially, I had to convince people that there was this new technology called ‘containers’ that could be interesting as there is no need to use virtual machines,” said Hindman.

The industry has changed a great deal since then, and now every conversation around infrastructure starts with ‘containers’ — thanks to the work done by Docker. Today, no convincing is needed, but even in the early days of Mesos, companies like Apple, Netflix, and PayPal saw the potential. They knew they could take advantage of containerization technologies in lieu of virtual machines. “These companies understood the value of containers before it became a phenomenon,” said Hindman.

These companies saw that they could have a bunch of containers, instead of virtual machines. All they needed was something to manage and run these containers, and they embraced Mesos. Some of the early users of Mesos included Apple, Netflix, PayPal, Yelp, OpenTable, and Groupon.

“Most of these organizations are using Mesos for just running arbitrary services,” said Hindman, “But there are many that are using it for doing interesting things with data processing, streaming data, analytics workloads and applications.”

One of the reasons these companies adopted Mesos was the clear separation between the resource management and scheduling layers. Mesos offers the flexibility that companies need when dealing with containerization.

“One of the things we tried to do with Mesos was to create a layering so that people could take advantage of our layer, but also build whatever they wanted to on top,” said Hindman. “I think that’s worked really well for the big organizations like Netflix and Apple.”

However, not every company is a tech company; not every company has or should have this expertise. To help those organizations, Hindman co-founded Mesosphere to offer services and solutions around Mesos. “We ultimately decided to build DC/OS for those organizations which didn’t have the technical expertise or didn’t want to spend their time building something like that on top.”

Mesos vs. Kubernetes?

People often think in terms of x versus y, but it’s not always a question of one technology versus another. Most technologies overlap in some areas, and they can also be complementary. “I don’t tend to see all these things as competition. I think some of them actually can work in complementary ways with one another,” said Hindman.

“In fact the name Mesos stands for ‘middle’; it’s kind of a middle OS,” said Hindman, “We have the notion of a container scheduler that can be run on top of something like Mesos. When Kubernetes first came out, we actually embraced it in the Mesos ecosystem and saw it as another way of running containers in DC/OS on top of Mesos.”

Mesos also resurrected a project called Marathon (a container orchestrator for Mesos and DC/OS), which they have made a first-class citizen in the Mesos ecosystem. However, Marathon does not really compare with Kubernetes. “Kubernetes does a lot more than what Marathon does, so you can’t swap them with each other,” said Hindman, “At the same time, we have done many things in Mesos that are not in Kubernetes. So, these technologies are complementary to each other.”
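To make the Marathon side of that comparison concrete, a Marathon application is described as a small JSON document submitted to Marathon’s REST API. The sketch below is only illustrative; the endpoint address and every field value are assumptions, not taken from the article.

    # Launch two instances of a trivial service via a (hypothetical) local
    # Marathon endpoint; Marathon then schedules the instances onto Mesos agents.
    curl -X POST http://localhost:8080/v2/apps \
      -H "Content-Type: application/json" \
      -d '{
            "id": "/hello-service",
            "cmd": "python3 -m http.server 8080",
            "cpus": 0.1,
            "mem": 64,
            "instances": 2
          }'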

Instead of viewing such technologies as adversarial, they should be seen as beneficial to the industry. It’s not duplication of technologies; it’s diversity. According to Hindman, “it could be confusing for the end user in the open source space because it’s hard to know which technologies are suitable for what kind of workload, but that’s the nature of the beast called Open Source.”

That just means there are more choices, and everybody wins.

Designing New Cloud Architectures: Exploring CI/CD – from Data Centre to Cloud

Today, most companies are using continuous integration and delivery (CI/CD) in one form or another – and this is significant for several reasons:

  • It increases the quality of the code base and the testing of that code base
  • It greatly increases team collaboration
  • It reduces the time in which new features reach the production environment
  • It reduces the number of bugs that in turn reach the production environment

As the DevOps movement becomes more popular, CI/CD does as well, since it is a major component. Not doing CI/CD means not doing DevOps.

From data centre to cloud

Having covered these terms and concepts, it is clear why CI/CD is so important. Since architectures and abstraction levels change when migrating a product from a data centre into the cloud, it becomes necessary to evaluate what is needed in the new ecosystem, for two reasons:

  • To take advantage of what the cloud has to offer, in terms of the new paradigm and the plethora of options
  • To avoid making the mistake of treating the cloud as a data centre and building everything from scratch

Necessary considerations

The CI/CD implementation to use in the cloud must fulfil the majority of the following:

  • Provided as a service: The cloud is XaaS-centric, and avoiding building things from scratch is a must. If something must be built from scratch and it is neither an in-house component nor a value-added product feature, I would suggest a review of the architecture in addition to a logical business justification

Read more at CloudTech

How Not to Kill your DevOps Team

A thriving DevOps culture should mean a thriving IT team, one that plays a critical role in achieving the company’s goals. But leave certain needs and warning signs unchecked and your DevOps initiative might just grind your team into the ground.

There are things that IT leaders can do to foster a healthy, sustainable DevOps culture. There are also some things you should not do, and they are just as important as the “do’s.” As in: By not doing these things, you will not “kill” your DevOps team.

With that in mind, we sought out some expert advice on the “do’s” and “don’ts” that DevOps success tends to hinge upon. Ignore them at your team’s – and consequently your own – peril.

Do: Remove friction anywhere it exists

One of the goals of the early days of DevOps, one that continues today, was to remove the traditional silos that long existed in IT shops. The name itself reflects this: Development and operations are no longer wholly separate entities, but can now function as a closely aligned team.

Read more at EnterprisersProject

Kubernetes Deep Dive and Use Cases

The popularity of Kubernetes has steadily increased, with four major releases in 2017. K8s was also the most discussed project on GitHub during 2017, and was the project with the second most reviews.

Deploying Kubernetes

Kubernetes offers a new way to deploy applications using containers. It creates an abstraction layer which can be manipulated with declarative rather than imperative programming. This way, it is much simpler to deploy and upgrade services over time. Consider the deployment of a replication controller, which controls the creation of pods — the smallest unit K8s manages. The definition is almost self-explanatory: the image gcr.io/google_containers/elasticsearch:v5.5.1-1 indicates that a Docker Elasticsearch image will be deployed, with two replicas and persistent storage for its data.
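A minimal sketch of what such a manifest might look like is shown below. Only the image name and the idea of two replicas with persistent storage come from the description above; the resource names, labels, and volume claim are assumptions for illustration.

    # Submit a replication controller definition to the cluster.
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: elasticsearch
    spec:
      replicas: 2                 # run two pod replicas
      selector:
        app: elasticsearch
      template:
        metadata:
          labels:
            app: elasticsearch
        spec:
          containers:
          - name: elasticsearch
            image: gcr.io/google_containers/elasticsearch:v5.5.1-1
            volumeMounts:
            - name: es-data
              mountPath: /data    # persistent data lives here
          volumes:
          - name: es-data
            persistentVolumeClaim:
              claimName: es-data  # assumes a matching PVC already exists
    EOF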

There are many ways to deploy a tool. A Deployment, for example, is an upgrade from a replication controller that has mechanisms to perform rolling updates — updating a tool while keeping it available. Moreover, it is possible to configure load balancers, subnets, and even secrets through declarations.
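Rolling updates can then be driven entirely from the command line. This is only a hedged sketch: it assumes a Deployment named elasticsearch already exists, and the new image tag is hypothetical.

    # Point the Deployment at a new image; Kubernetes replaces pods gradually.
    kubectl set image deployment/elasticsearch \
      elasticsearch=gcr.io/google_containers/elasticsearch:v5.5.2-1
    kubectl rollout status deployment/elasticsearch   # watch the rolling update
    kubectl rollout undo deployment/elasticsearch     # roll back if it misbehaves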

Computing resources can occasionally remain idle; the main goal is to avoid excess capacity, which helps contain cloud environment costs. A good way to reduce idle time is to use namespaces as a form of virtual cluster inside your cluster.
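As a rough sketch of that idea, namespaces can be paired with resource quotas so that one team’s idle or runaway workloads do not consume capacity meant for another; the names and limits below are purely illustrative.

    kubectl create namespace team-a                   # a "virtual cluster" for one team
    kubectl create quota team-a-quota --namespace=team-a \
      --hard=cpu=4,memory=8Gi,pods=20                 # cap what the namespace may use
    kubectl get pods --namespace=team-a               # inspect only that namespace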

Read more at The New Stack

Interning at an Open Source Consultancy

In January 2018, Omar Akkila joined Collabora, an open source software consultancy, as a Software Engineer Intern with the Multimedia team. The four-month internship was a highly rewarding experience, allowing him to gain valuable insight into how Linux runs “under the hood”.

By Omar Akkila, Software Engineer at Collabora.

In January 2018, I joined Collabora, an open source software consultancy, as a Software Engineer Intern with the Multimedia team. Having reached the end of that internship, I would like to take the time to share my experience.

A big draw to selecting Collabora as my employer was the opportunity to work on open source software. I had previously spent the summer of 2017 working on my first contributions to open source projects such as Rust and Firefox. Initially, it was an excuse for me to write and learn more Rust, but with time I grew to really enjoy the process. I certainly do have to commend Mozilla for their exceptional work in introducing newcomers to their projects. As someone who did not have prior professional working experience, getting to work, contribute, and follow real-world software development processes thrilled me.

My first impression of Collabora was of a very open and transparent company dedicated to advancing FOSS. I have never learned so much about a company from a simple interview process. Given the line of work, the majority of employees work remotely. I had thought that this would take time to get used to, but I can fortunately say that this was never an issue in the slightest, thanks to the great mentorship and support I was provided. The company still has two offices – one in Montreal, Canada and another in Cambridge, UK – and is more than happy to provide relocation packages. Working out of the Montreal office, I usually spend my day with 5-10 colleagues from different engineering domains and departments. Upon arriving, I was given a work laptop and spent the first few days setting up my development environment, getting to know my colleagues and my mentor, and familiarizing myself with my assigned project.

The project for my internship was introducing a Raspberry Pi to GStreamer’s CI setup for running tests and generalizing the process for adding new embedded devices in the future. A thorough technical writeup will follow very soon. What I gained from this project was proper experience working with tools and systems such as Docker, Jenkins, and LAVA. In addition, I attained valuable insight into how Linux runs “under the hood” and had the opportunity to build the Linux kernel myself (for the first time!), tuned to my requirements. My understanding of concepts related to cross-building, sysroots, the Linux filesystem, the boot process, containers, linkers, and dependency management was strengthened as a whole.

I am happy to be able to report that I have accepted a full-time role at Collabora and I look forward to continuously expanding my skill set while progressing further into the world of FOSS!

Get Essential Git, Linux, and Open Source Skills with New Training Course

Git, the version control system originally created by Linus Torvalds, has become the standard for collaborative software development and is used by tens of millions of open source projects. To help you master this tool as well as gain essential knowledge of Linux and open source software development practices, The Linux Foundation is offering an Introduction to Open Source Development, Git, and Linux (LFD201), a new training course focused on Linux and Git.

“Open source software development practices lead to better code and faster development, which is why open source has become the dominant model for how the world’s technology infrastructure is built and operates,” said Linux Foundation General Manager, Training & Certification Clyde Seepersad. Thus, it is imperative to understand the fundamental systems and tools involved.

Course Objectives

In this course, you will:

  • Gain a strong foundation of skills for working in open source development communities

  • Learn to work comfortably and productively in a Linux environment

  • Master important Linux methods and tools

You will also learn how to use Git to (a brief example follows this list):

  • Create new repositories or clone existing ones

  • Commit new changes, review revision histories, and view differences from older versions

  • Work with different branches, merge repositories, and work with a distributed development team
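As a taste of what those basic operations look like in practice, here is a short, hedged sketch; the repository URL, branch, and file names are hypothetical.

    git clone https://example.com/project.git        # clone an existing repository
    cd project
    git checkout -b feature-x                        # create and switch to a new branch
    echo "A small improvement" >> README.md
    git add README.md
    git commit -m "Describe the change"              # commit new changes
    git log --oneline                                # review the revision history
    git diff HEAD~1 HEAD                             # view differences from an older version
    git checkout master && git merge feature-x       # merge the branch back
    git push origin master                           # share the work with the distributed team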

This course is aimed at experienced computer users and developers who have little or no experience in a Linux environment, as well as those with some Linux experience who want to gain a good working knowledge of Git. 

It provides an introduction to open source software, including an overview of methodology, licensing, and governance. It also provides details of working with Linux systems and examines an array of basic topics, including installation, desktop environments, important commands and utilities, file systems, and compiling software. The final section provides a practical introduction to Git, the source control system that allows efficient and verified software development to occur among widely distributed contributors.

Available Anywhere

The online course is accessible from anywhere in the world; it requires only a physical or virtual Linux environment — running any Linux distribution. It contains 43 hands-on lab exercises, more than 20 videos demonstrating important tasks, and quizzes to check your understanding of the material.

Take your open source journey to the next level with the essential skills offered in Introduction to Open Source Development, Git, and Linux. The course is available now for $299. Register now.