
Leap Motion Open Sources the Project North Star AR Headset’s Schematics

Leap Motion has long been a proponent of immersive technology. When VR hardware began to emerge in the consumer market, Leap Motion quickly adapted its technology for VR input. Now it has turned its sights to the budding AR market, but instead of offering to license its tracking technology to hardware makers, the company created a full reference headset to help accelerate AR HMD design. 

Today, Leap Motion released the project details for anyone to see and to tinker with. You can download the project files from Leap Motion’s website, or you can find them on GitHub. The package includes detailed schematics for the mechanical bits as well as assembly instructions.

Read more at Tom’s Hardware

Mesos and Kubernetes: It’s Not a Competition

The roots of Mesos can be traced back to 2009, when Ben Hindman was a PhD student at the University of California, Berkeley, working on parallel programming. He and his colleagues were doing massively parallel computations on 128-core chips, trying to solve multiple problems, such as making software and libraries run more efficiently on those chips. He started talking with fellow students to see if they could borrow ideas from parallel processing and multiple threads and apply them to cluster management.

“Initially, our focus was on Big Data,” said Hindman. Back then, Big Data was really hot and Hadoop was one of the hottest technologies.  “We recognized that the way people were running things like Hadoop on clusters was similar to the way that people were running multiple threaded applications and parallel applications,” said Hindman.

However, it was not very efficient, so they started thinking about how it could be done better through cluster management and resource management. “We looked at many different technologies at that time,” Hindman recalled.

Hindman and his colleagues, however, decided to adopt a novel approach. “We decided to create a lower level of abstraction for resource management, and run other services on top of that to do scheduling and other things,” said Hindman. “That’s essentially the essence of Mesos — to separate out the resource management part from the scheduling part.”

It worked, and Mesos has been going strong ever since.
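
To make that layering concrete, here is a minimal, purely conceptual Python sketch (not the real Mesos API) of a lower resource-management layer handing resource offers to independent framework schedulers, which keep the scheduling decisions to themselves:

```python
# Conceptual sketch of two-level scheduling, in the spirit of Mesos but NOT the
# actual Mesos API: a lower layer owns cluster resources and makes offers,
# while framework schedulers on top decide what to run with each offer.

class ResourceManager:
    """Lower layer: tracks free CPUs per node and offers them to frameworks."""
    def __init__(self, nodes):
        self.free = dict(nodes)          # e.g. {"node1": 8} CPUs free
        self.frameworks = []

    def register(self, framework):
        self.frameworks.append(framework)

    def offer_cycle(self):
        # Offer each node's free resources; the first framework to accept wins.
        for node, cpus in list(self.free.items()):
            for fw in self.frameworks:
                used = fw.on_offer(node, cpus)   # upper layer makes the decision
                if used:
                    self.free[node] -= used
                    break

class FrameworkScheduler:
    """Upper layer: a framework's own scheduler (think Hadoop, Spark, Marathon)."""
    def __init__(self, name, tasks, cpus_per_task=2):
        self.name, self.tasks, self.cpus_per_task = name, tasks, cpus_per_task

    def on_offer(self, node, free_cpus):
        if self.tasks and free_cpus >= self.cpus_per_task:
            task = self.tasks.pop(0)
            print(f"{self.name}: launching {task} on {node}")
            return self.cpus_per_task    # resources consumed from the offer
        return 0                         # decline the offer

rm = ResourceManager({"node1": 8, "node2": 4})
rm.register(FrameworkScheduler("analytics", ["job-a", "job-b"]))
rm.register(FrameworkScheduler("web", ["svc-1"], cpus_per_task=1))
rm.offer_cycle()
```

In real Mesos, frameworks such as Hadoop, Spark, or Marathon play the role of these toy schedulers: they receive resource offers, decide what to launch, and leave the bookkeeping of cluster resources to Mesos itself.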

The project goes to Apache

The project was founded in 2009, and in 2010 the team decided to donate it to the Apache Software Foundation (ASF). It was incubated at Apache, and in 2013 it became a Top-Level Project (TLP).

There were many reasons why the Mesos community chose the Apache Software Foundation, such as the permissiveness of Apache licensing and the fact that the foundation already had a vibrant community of similar projects.

It was also about influence. Many of the people working on Mesos were also involved with Apache, often on projects like Hadoop, and at the same time many folks from the Mesos community were working on other Big Data projects like Spark. This cross-pollination led all three projects — Hadoop, Mesos, and Spark — to become ASF projects.

It was also about commerce. Many companies were interested in Mesos, and the developers wanted it to be maintained by a neutral body instead of being a privately owned project.

Who is using Mesos?

A better question would be, who isn’t? Everyone from Apple to Netflix is using Mesos. However, Mesos faced the challenges that any technology faces in its early days. “Initially, I had to convince people that there was this new technology called ‘containers’ that could be interesting as there is no need to use virtual machines,” said Hindman.

The industry has changed a great deal since then, and now every conversation around infrastructure starts with ‘containers’ — thanks to the work done by Docker. Today, no convincing is needed, but even in the early days of Mesos, companies like Apple, Netflix, and PayPal saw the potential. They knew they could take advantage of containerization technologies in lieu of virtual machines. “These companies understood the value of containers before it became a phenomenon,” said Hindman.

These companies saw that they could have a bunch of containers, instead of virtual machines. All they needed was something to manage and run these containers, and they embraced Mesos. Some of the early users of Mesos included Apple, Netflix, PayPal, Yelp, OpenTable, and Groupon.

“Most of these organizations are using Mesos for just running arbitrary services,” said Hindman. “But there are many that are using it for doing interesting things with data processing, streaming data, analytics workloads and applications.”

One of the reasons these companies adopted Mesos was the clear separation between the resource management and scheduling layers. Mesos offers the flexibility that companies need when dealing with containerization.

“One of the things we tried to do with Mesos was to create a layering so that people could take advantage of our layer, but also build whatever they wanted to on top,” said Hindman. “I think that’s worked really well for the big organizations like Netflix and Apple.”

However, not every company is a tech company; not every company has or should have this expertise. To help those organizations, Hindman co-founded Mesosphere to offer services and solutions around Mesos. “We ultimately decided to build DC/OS for those organizations which didn’t have the technical expertise or didn’t want to spend their time building something like that on top.”

Mesos vs. Kubernetes?

People often think in terms of x versus y, but it’s not always a question of one technology versus another. Most technologies overlap in some areas, and they can also be complementary. “I don’t tend to see all these things as competition. I think some of them actually can work in complementary ways with one another,” said Hindman.

“In fact, the name Mesos stands for ‘middle’; it’s kind of a middle OS,” said Hindman. “We have the notion of a container scheduler that can be run on top of something like Mesos. When Kubernetes first came out, we actually embraced it in the Mesos ecosystem and saw it as another way of running containers in DC/OS on top of Mesos.”

Mesos also resurrected a project called Marathon (a container orchestrator for Mesos and DC/OS), which has been made a first-class citizen in the Mesos ecosystem. However, Marathon does not really compare with Kubernetes. “Kubernetes does a lot more than what Marathon does, so you can’t swap them with each other,” said Hindman. “At the same time, we have done many things in Mesos that are not in Kubernetes. So, these technologies are complementary to each other.”

Instead of viewing such technologies as adversarial, we should see them as beneficial to the industry. It’s not duplication of technologies; it’s diversity. According to Hindman, “it could be confusing for the end user in the open source space because it’s hard to know which technologies are suitable for what kind of workload, but that’s the nature of the beast called Open Source.”

That just means there are more choices, and everybody wins.

Designing New Cloud Architectures: Exploring CI/CD – from Data Centre to Cloud

Today, most companies are using continuous integration and delivery (CI/CD) in one form or another, and this matters for several reasons:

  • It increases the quality of the code base and the testing of that code base
  • It greatly increases team collaboration
  • It reduces the time in which new features reach the production environment
  • It reduces the number of bugs that reach the production environment

As the DevOps movement becomes more popular, CI/CD does as well, since it is a major component. Not doing CI/CD means not doing DevOps.

From data centre to cloud

With those terms and concepts in place, it is clear why CI/CD is so important. Since architectures and abstraction levels change when migrating a product from the data centre to the cloud, it becomes necessary to evaluate what is needed in the new ecosystem, for two reasons:

  • To take advantage of what the cloud has to offer, in terms of the new paradigm and the plethora of options
  • To avoid making the mistake of treating the cloud as a data centre and building everything from scratch

Necessary considerations

The CI/CD implementation used in the cloud must fulfil the majority of the following requirements:

  • Provided as a service: The cloud is XaaS-centric, and avoiding building things from scratch is a must. If something is being built from scratch and it is neither an in-house component nor a value-added product feature, I would suggest a review of the architecture in addition to a sound business justification.

Read more at CloudTech

How Not to Kill your DevOps Team

A thriving DevOps culture should mean a thriving IT team, one that plays a critical role in achieving the company’s goals. But leave certain needs and warning signs unchecked and your DevOps initiative might just grind your team into the ground.

There are things that IT leaders can do to foster a healthy, sustainable DevOps culture. There are also some things you should not do, and they’re just as important as the “do’s.” As in: By not doing these things, you will not “kill” your DevOps team.

With that in mind, we sought out some expert advice on the “do’s” and “don’ts” that DevOps success tends to hinge upon. Ignore them at your team’s – and consequently your own – peril.

Do: Remove friction anywhere it exists

One of the goals of the early days of DevOps, one that continues today, was to remove the traditional silos that long existed in IT shops. The name itself reflects this: Development and operations are no longer wholly separate entities, but can now function as a closely aligned team.

Read more at EnterprisersProject

Kubernetes Deep Dive and Use Cases

The popularity of Kubernetes has steadily increased, with more than four major releases in 2017. K8s was also the most discussed project on GitHub during 2017, and was the project with the second most reviews.

Deploying Kubernetes

Kubernetes offers a new way to deploy applications using containers. It creates an abstraction layer which can be manipulated with declarative rather than imperative programming. This way, it is much simpler to deploy and upgrade services over time. The example manifest shown in the original article deploys a replication controller, which controls the creation of pods — the smallest K8s unit available. The definition gcr.io/google_containers/elasticsearch:v5.5.1-1 indicates that a Docker Elasticsearch image will be deployed. This image will have two replicas and uses persistent storage for persistent data.

There are many ways to deploy a tool. A Deployment, for example, is an evolution of the replication controller that adds mechanisms to perform rolling updates — updating a tool while keeping it available. Moreover, it is possible to configure load balancers, subnets, and even secrets through declarations.
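
As a rough illustration (an assumption on my part, since the article works from a YAML manifest), the same Elasticsearch example can be expressed as a Deployment with the official Kubernetes Python client; the declarative object below mirrors the image tag and the two replicas mentioned above:

```python
# Sketch only: creates a Deployment roughly equivalent to the Elasticsearch
# example above. Assumes the official `kubernetes` Python client and a valid
# kubeconfig; the image tag is taken from the article's manifest.
from kubernetes import client, config

config.load_kube_config()                        # or load_incluster_config()
apps = client.AppsV1Api()

labels = {"app": "elasticsearch"}
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="elasticsearch", labels=labels),
    spec=client.V1DeploymentSpec(
        replicas=2,                              # two replicas, as in the article
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="elasticsearch",
                    image="gcr.io/google_containers/elasticsearch:v5.5.1-1",
                ),
            ]),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```

Because the object is declarative, a rolling update amounts to changing the spec (for example, the image tag) and re-applying it, e.g. with patch_namespaced_deployment.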

Computing resources can occasionally sit idle; the main goal is to avoid over-provisioning, for example to contain cloud environment costs. A good way to reduce idle capacity is to use namespaces as a form of virtual cluster inside your cluster.
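
For instance, here is a short sketch (same Python client; the namespace and quota names are hypothetical) that carves out a namespace and caps what it may consume with a ResourceQuota:

```python
# Sketch: create a "virtual cluster" with a namespace and cap its resources so
# idle or runaway workloads cannot eat the whole cluster. Names are illustrative.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

core.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name="staging"))
)

core.create_namespaced_resource_quota(
    namespace="staging",
    body=client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="staging-quota"),
        spec=client.V1ResourceQuotaSpec(
            hard={"requests.cpu": "4", "requests.memory": "8Gi", "pods": "20"}
        ),
    ),
)
```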

Read more at The New Stack

Interning at an Open Source Consultancy

At the start of 2018 in January, Omar Akkila joined Collabora, an open source software consultancy, as a Software Engineer Intern with the Multimedia team. The four-month internship was a highly rewarding experience, allowing him to gain valuable insight into how Linux runs “under the hood”.

By Omar Akkila, Software Engineer at Collabora.

At the start of 2018 in January, I joined Collabora, an open source software consultancy, as a Software Engineer Intern with the Multimedia team. Reaching the end of that internship, I would like to take the time to share my experience.

A big draw to selecting Collabora as my employer was the opportunity to work on open source software. I had previously spent the summer of 2017 working on my first contributions to open source projects such as Rust and Firefox. Initially, it was an excuse for me to write and learn more Rust, but with time I grew to really enjoy the process. I certainly do have to commend Mozilla for their exceptional work in introducing newcomers to their projects. As someone who did not have prior professional working experience, getting to work, contribute, and follow real-world software development processes thrilled me.

The first impression I received from Collabora was that of a very open and transparent company dedicated to advancing FOSS. I have never learned so much about a company from a simple interview process. Given the line of work, the majority of employees work remotely. I had thought that this would take time to get used to, but I can fortunately say that it was never an issue in the slightest, thanks to the great mentorship and support I was provided. The company still has two offices – one in Montreal, Canada and another in Cambridge, UK – and is more than happy to provide relocation packages. Working out of the Montreal office, I usually spend my day with 5-10 colleagues from different engineering domains and departments. On arrival, I was given a work laptop and spent the first few days setting up my development environment, getting to know my colleagues and my mentor, and familiarizing myself with my assigned project.

The project for my internship was to introduce a Raspberry Pi to GStreamer’s CI setup for running tests and to generalize the process for adding new embedded devices in the future. A thorough technical writeup will follow very soon. What I gained from this project was proper experience working with tools and systems such as Docker, Jenkins, and LAVA. In addition, I attained valuable insight into how Linux runs “under the hood” and had the opportunity of building (for the first time!) the Linux kernel myself, tuned to my requirements. My understanding of concepts related to cross-building, sysroots, the Linux filesystem, the boot process, containers, linkers, and dependency management was really strengthened as a whole.

I am happy to be able to report that I have accepted a full-time role at Collabora and I look forward to continuously expanding my skill set while progressing further into the world of FOSS!

Get Essential Git, Linux, and Open Source Skills with New Training Course

Git, the version control system originally created by Linus Torvalds, has become the standard for collaborative software development and is used by tens of millions of open source projects. To help you master this tool as well as gain essential knowledge of Linux and open source software development practices, The Linux Foundation is offering an Introduction to Open Source Development, Git, and Linux (LFD201), a new training course focused on Linux and Git.

“Open source software development practices lead to better code and faster development, which is why open source has become the dominant model for how the world’s technology infrastructure is built and operates,” said Clyde Seepersad, General Manager, Training & Certification, The Linux Foundation. Thus, it is imperative to understand the fundamental systems and tools involved.

Course Objectives

In this course, you will:

  • Gain a strong foundation of skills for working in open source development communities

  • Learn to work comfortably and productively in a Linux environment

  • Master important Linux methods and tools

You will also learn how to use Git to:

  • Create new repositories or clone existing ones

  • Commit new changes, review revision histories, and view differences from older versions

  • Work with different branches, merge repositories, and work with a distributed development team (see the sketch below)
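
As a rough illustration of those workflows, here is a Python sketch that drives the plain git command line; the repository URL, file name, and branch names are hypothetical:

```python
# Sketch of the Git workflows listed above, driven through the stock git CLI.
# The repository URL, file name, and branch names are hypothetical.
import subprocess

def git(*args, cwd="demo-repo"):
    """Run a git command in the working copy and return its output."""
    result = subprocess.run(["git", *args], cwd=cwd, check=True,
                            capture_output=True, text=True)
    return result.stdout

# Clone an existing repository (use `git init` instead to create a new one).
subprocess.run(["git", "clone", "https://example.com/demo.git", "demo-repo"],
               check=True)

# Commit a new change.
git("add", "README.md")
git("commit", "-m", "Document the build steps")

# Review revision history and view differences from an older version.
print(git("log", "--oneline", "-5"))
print(git("diff", "HEAD~1", "HEAD"))

# Work with branches and merge the result back.
git("checkout", "-b", "feature/docs")
# ...edit files, then add and commit on the branch...
git("checkout", "master")
git("merge", "feature/docs")
```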

This course is aimed at experienced computer users and developers who have little or no experience in a Linux environment, as well as those with some Linux experience who want to gain a good working knowledge of Git. 

It provides an introduction to open source software, including an overview of methodology, licensing, and governance. It also provides details of working with Linux systems and examines an array of basic topics, including installation, desktop environments, important commands and utilities, file systems, and compiling software. The final section provides a practical introduction to Git, the source control system that allows efficient and verified software development to occur among widely distributed contributors.

Available Anywhere

The online course is accessible from anywhere in the world; it requires only a physical or virtual Linux environment — running any Linux distribution. It contains 43 hands-on lab exercises, more than 20 videos demonstrating important tasks, and quizzes to check your understanding of the material.

Take your open source journey to the next level with the essential skills offered in Introduction to Open Source Development, Git, and Linux (LFD201). The course is available now for $299. Register now.

Vote for Your Favorite Linux SBC and Be Entered to Win a Free Board

Vote for your favorite open-spec, Linux- or Android-ready single board computers priced under $200.

It’s time again for LinuxGizmos’ annual reader survey of single board computers. They’ve identified 116 SBCs that fit their requirements — up from 98 boards in the June 2017 survey. Make your picks from the new list of under $200, hacker-friendly SBCs that run Linux or Android, and you could win one of 15 prizes.

 Take the survey!

15 hacker SBC prizes

In the brief survey, you can select up to three boards and answer a few questions about buying criteria and intended applications. By completing the survey, you will earn a chance to be among 15 randomly selected winners who will receive free boards donated by Aaeon UP, Qualcomm and Gumstix.

The prizes this time around include five Qualcomm DragonBoard 410c development boards and five Chatterbox Raspberry Pi Expansion boards from Gumstix (Rasp Pi not included). There are also five different Aaeon UP board models including an UP, an UP Squared, and an UP Core, as well as the new UP Core Plus and AI Core module with a Myriad 2 VPU. See more details at LinuxGizmos.

More Raspberry Pi?

Last year’s results saw an overwhelming taste for Pi, with the Raspberry Pi 3 in the top spot, the Raspberry Pi Zero W in second, and the Cortex-A53 based Raspberry Pi 2 in third. Vote now for your favorites and stay tuned for the results.

Intel and AMD Reveal New Processor Designs

With this week’s Computex show in Taipei and other recent events, processors are front and center in the tech news cycle. Intel made several announcements ranging from new Core processors to a cutting-edge technology for extending battery life. AMD, meanwhile, unveiled a second-gen, 32-core Threadripper CPU for high-end gaming and revealed some new Ryzen chips, including some embedded-friendly models.

Here’s a quick tour of major announcements from Intel and AMD, focusing on those processors of greatest interest to embedded Linux developers.

Intel’s latest 8th Gen CPUs

In April, Intel announced that mass production of its 10nm fabricated Cannon Lake generation of Core processors would be delayed until 2019, which led to more grumbling about Moore’s Law finally running its course. Yet, there were plenty of consolation prizes in Intel’s Computex showcase. Intel revealed two power-efficient, 14nm 8th Gen Core product families, as well as its first 5GHz designs.

The Whiskey Lake U-series and Amber Lake Y-series Core chips will arrive in more than 70 different laptop and 2-in-1 models starting this fall. The chips will bring “double digit performance gains” compared to 7th Gen Kaby Lake Core CPUs, said Intel. The new product families are more power efficient than the Coffee Lake chips that are now starting to arrive in products.

Both Whiskey Lake and Amber Lake will provide Intel’s higher-performance gigabit WiFi (Intel 9560 AC), which is also appearing on the new Gemini Lake Pentium Silver and Celeron SoCs, the follow-ups to the Apollo Lake generation. Gigabit WiFi is essentially Intel’s spin on 802.11ac with 2×2 MU-MIMO and 160MHz channels.

Intel’s Whiskey Lake is a continuation of the 7th and 8th Gen Skylake U-series processors, which have been popular in embedded equipment. Intel had few details, but Whiskey Lake will presumably offer the same, relatively low 15W TDPs. It’s also likely that, like the Coffee Lake U-series chips, it will be available in quad-core models as well as the dual-core models offered by the Kaby Lake and Skylake U-series chips.

The Amber Lake Y-series chips will primarily target 2-in-1s. Like the dual-core Kaby Lake Y-Series chips, Amber Lake will offer 4.5W TDPs, reports PC World.

To celebrate Intel’s upcoming 50th anniversary, as well as the 40th anniversary of the first 8086 processor, Intel will launch a limited edition, 8th Gen Core i7-8086K CPU with a clock rate of 4GHz. The limited edition, 64-bit offering will be its first chip with 5GHz, single-core turbo boost speed, and the first 6-core, 12-thread processor with integrated graphics. Intel will be giving away 8,086 of the overclockable Core i7-8086K chips starting on June 7.

Intel also revealed plans to launch a new high-end Core X series with high core and thread counts by the end of the year. AnandTech predicts that this will use the Xeon-like Cascade Lake architecture. Later this year, it will announce new Core S-series models, which AnandTech projects will be octa-core Coffee Lake chips.

Intel also said that the first of its speedy Optane SSDs — an M.2 form-factor product called the 905P — is finally available. Due later this year is an Intel XMM 800 series modem that supports Sprint’s 5G cellular technology. Intel says 5G-enabled PCs will arrive in 2019.

Intel promises all day laptop battery life

In other news, Intel says it will soon launch an Intel Low Power Display Technology that will provide all-day battery life on laptops. Co-developers Sharp and Innolux are using the technology for a late-2018 launch of a 1W display panel that can cut LCD power consumption in half.

AMD keeps on ripping

At Computex, AMD unveiled a second generation Threadripper CPU with 32 cores and 64 threads. The high-end gaming processor will launch in the third quarter to go head to head with Intel’s unnamed 28-core monster. According to Engadget, the new Threadripper adopts the same 12nm Zen+ architecture used by its Ryzen chips.

AMD also said it was sampling a 7nm Vega Instinct GPU designed for graphics cards with 32GB of expensive HBM2 memory rather than GDDR5X or GDDR6. The Vega Instinct will offer 35 percent greater performance and twice the power efficiency of the current 14nm Vega GPUs. New rendering capabilities will help it compete with Nvidia’s CUDA enabled GPUs in ray tracing, says WCCFTech.

Some new Ryzen 2000-series processors with the lowest power draw of the mainstream Ryzen chips recently showed up on an ASRock CPU chart. As detailed on AnandTech, the 2.8GHz, octa-core, 16-thread Ryzen 7 2700E and the 3.4GHz/3.9GHz, hexa-core, 12-thread Ryzen 5 2600E each have 45W TDPs. This is higher than the 12-54W TDPs of the Ryzen Embedded V1000 SoCs, but lower than the 65W-and-up mainstream Ryzen chips. The new Ryzen-E models are aimed at SFF (small form factor) and fanless systems.

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more. 

10 Open Source Security Tools You Should Know

Open source tools can be the basis for solid security and intense learning. Here are 10 you should know about for your IT security toolkit.

In many ways, security starts with understanding the situation. For a couple of generations of IT security professionals, understanding their networks’ vulnerabilities starts with Nessus from Tenable. According to sectools.org, Nessus is the most popular vulnerability scanner and third most popular security program currently in use.

Nessus comes in both free and commercial versions. The current version, Nessus 7.1.0, is a commercial program, though it is free for personal home use. Version 2, which was current as of 2005, is still open source and free.

Read more at DarkReading