
Open Source Operating Systems for IoT

Over the past decade, the majority of new open source OS projects have shifted from the mobile market to the Internet of Things. In this fifth article in our IoT series, we look at the many new open source operating systems that target IoT. Our previous posts have examined open source IoT frameworks, Linux and open source development hardware for IoT, and consumer smart home devices. But it all starts with the OS.

In addition to exploring new IoT-focused embedded Linux-based distributions, I’ve included a few older lightweight distributions like OpenWrt that have seen renewed uptake in the segment. While the Linux distros are aimed primarily at gateways and hubs, there has been equivalent growth in non-Linux, open source OSes for IoT that can run on microcontroller units (MCUs), and are typically aimed at IoT edge devices.

Keep in mind that almost all OSes these days claim some IoT connection, so the list is somewhat arbitrary. The contenders here fulfill most of the following requirements: low memory footprint, high power efficiency, a modular and configurable communication stack, and strong support for specific wireless and sensor technologies. Some projects emphasize IoT security, and many of the non-Linux OSes focus on real-time determinism, which is sometimes a requirement in industrial IoT.

I have generally steered clear of Linux distros that are categorized as “lightweight” but are still largely aimed at desktop use or portable USB stick implementations, rather than headless devices. Still, lightweight Linux distros such as LXLE or Linux Lite could be good choices for IoT.

The choices were more difficult with non-Linux open source platforms. After all, most lightweight RTOSes can be used for IoT. I focused on the major platforms, or those that seemed to offer the most promise for IoT. Other potential candidates can be found at this Open Source RTOS site.

Not included here is Windows 10 for IoT Core, which is free to makers and supports AllJoyn and IoTivity, but is not fully open source. There are also a number of commercial RTOSes that are major players in IoT, such as Micrium’s µC/OS.

Nine Linux-based open source IoT distros

Brillo — In the year since Google released Brillo, the lightweight Android-based distro has seen growing adoption on hacker boards such as the Intel Edison and DragonBoard 410c, and even on some computer-on-modules. The future of Brillo is tied to Google’s Weave communications protocol, which Brillo requires. Weave brings discovery, provisioning, and authentication functions to Brillo, which can run on as little as 32MB RAM and 128MB flash.

Huawei LiteOS — Huawei’s LiteOS, not to be confused with the open source Unix variant of the same name, is said to be based on Linux, but it must be a very lean implementation indeed. Announced over a year ago, LiteOS is claimed to be deployable in a kernel as small as 10KB. LiteOS targets everything from MCU-based devices to Android-compatible application processors. The customizable OS is touted for its zero configuration, auto-discovery, auto-networking, fast boot, and real-time operation, and it offers extensive wireless support, including LTE and mesh networking. LiteOS is available with Huawei’s Agile IoT Solution, and it drives the company’s Narrowband IoT (NB-IoT) Solution.

OpenWrt/LEDE/Linino/DD-Wrt — The venerable, networking-focused OpenWrt embedded Linux distro has seen a resurgence due to the IoT craze. The lightweight OpenWrt is frequently found on routers and MIPS-based WiFi boards. Earlier spin-offs such as DD-Wrt and the Arduino-focused Linino have recently been followed by an outright fork. The Linux Embedded Development Environment (LEDE) project promises more transparent governance and predictable release cycles.

Ostro Linux — This Yocto Project-based distro broke into the limelight in August when Intel chose it for its Intel Joule module, where it runs on the latest quad-core Atom T5700 SoC. Ostro Linux is compliant with IoTivity, supports numerous wireless technologies, and offers a sensor framework. It has a major focus on IoT security, providing OS-, device-, application-, and data-level protections, including cryptography and mandatory access control (MAC). The distribution is available in headless and media (XT) versions.

Raspbian — There are some other distributions for the Raspberry Pi that are more specifically aimed at IoT, but the quickly maturing Raspbian is still the best. Because it’s the most popular distro for DIY projects on one of the most widely used IoT platforms, developers can call upon numerous projects and tutorials for help. Now that Raspbian supports Node-RED, the visual design tool for Node.js, we see less reason to opt for the RPi-specific, IoT-focused Thingbox.

Snappy Ubuntu Core — Also called Ubuntu Core with Snaps, this embedded version of Ubuntu Core draws upon a Snap package mechanism that Canonical is spinning off as a universal Linux package format, enabling a single binary package to work on “any Linux desktop, server, cloud or device.” Snaps enable Snappy Ubuntu Core to offer transactional rollbacks, secure updates, cloud support, and an app store platform. Snappy requires only a 600MHz CPU and 128MB RAM, but also needs 4GB of flash. It runs on the Pi and other hacker boards, and has appeared on devices including Erle-Copter drones, Dell Edge Gateways, Nextcloud Box, and LimeSDR.

Tizen — Primarily backed by Samsung, this Linux Foundation-hosted embedded Linux stack has barely registered in the mobile market. However, it has been widely used in Samsung TVs and smartwatches, including the new Gear S3, and has sporadically appeared in Samsung cameras and consumer appliances. Tizen can even run on the Raspberry Pi. Samsung has begun to integrate Tizen with its SmartThings smart home system, enabling SmartThings control from Samsung TVs. We can also expect more integration with Samsung’s Artik modules and Artik Cloud. Artik ships with Fedora, but Tizen 3.0 has recently been ported to it, along with Ubuntu Core.

uClinux — The venerable, stripped-down uClinux is the only form of Linux that can run on MCUs, and even then only on specific Cortex-M3, -M4, and -M7 models. uClinux requires MCUs with built-in memory controllers that can use an external DRAM chip to meet its RAM requirements. Now merged into the mainline Linux kernel, uClinux benefits from the extensive wireless support found in Linux. However, newer MCU-oriented OSes such as Mbed are quickly closing the gap on wireless, and they are easier to configure. EmCraft is one of the biggest boosters of uClinux on MCUs, offering a variety of Cortex-M-based modules with uClinux BSPs.

Yocto Project — The Linux Foundation’s Yocto Project is not a Linux distro, but an open source collaborative project to provide developers with templates, tools, and methods to create custom embedded stacks. Because you can customize stacks with minimal overhead, it’s frequently used for IoT. Yocto Project forms the basis for most commercial embedded Linux distros, and is part of projects such as Ostro Linux and Qt for Device Creation. Qt is prepping a Qt Lite technology for Qt 5.8 that will optimize Device Creation for smaller IoT targets.

Non-Linux Open Source IoT OSes

Apache Mynewt — The open source, wireless-savvy Apache Mynewt for 32-bit MCUs was developed by Runtime and is hosted by the Apache Software Foundation. The modular Apache Mynewt is touted for its wireless support, precise configurability of concurrent connections, debugging features, and granular power controls. In May, Runtime and Arduino Srl announced that Apache Mynewt would be available for Arduino Srl’s Primo and STAR Otto SBCs. The OS also supports Arduino LLC boards like the Arduino Zero. (Recently, Arduino Srl and Arduino LLC settled their legal differences, announcing plans to reunite under an Arduino Holding company and Arduino Foundation.)

ARM Mbed — ARM’s IoT-oriented OS targets tiny, battery-powered IoT endpoints running on Cortex-M MCUs with as little as 8KB of RAM, and has appeared on the BBC Micro:bit SBC. Although originally semi-proprietary, single threaded only, and lacking deterministic features, it’s now open sourced under Apache 2.0, and provides multithreading and RTOS support. Unlike many lightweight RTOSes, Mbed was designed with wireless communications in mind, and it recently added Thread support. The OS supports cloud services that can securely extract data via an Mbed Device Connector. Earlier this year, the project launched a Wearable Reference Design.

Contiki — With its 10KB RAM and 30KB flash requirements, the open source Contiki can’t get as tiny as TinyOS or RIOT OS, nor does it offer real-time determinism like RIOT and some others. However, the widely used Contiki provides extensive wireless networking support, with an IPv6 stack contributed by Cisco. The OS supplies a comprehensive list of development tools, including dynamic module loading and the Cooja Network Simulator for debugging wireless networks. Contiki is also touted for efficient memory allocation.

FreeRTOS — FreeRTOS comes close to rivaling Linux among embedded development platforms, and it’s particularly popular for developing IoT end devices. FreeRTOS lacks Linux features such as device drivers, user accounts, and advanced networking and memory management. However, it has a far smaller footprint than Linux, not to mention mainstream RTOSes like VxWorks, and it is available under an open source GPL license. FreeRTOS can run in under half a kilobyte of RAM and 5-10KB of ROM, although when combined with a TCP/IP stack, it’s more typically around 24KB of RAM and 60KB of flash.

Fuchsia — Google’s latest open source OS was partially revealed in August, leaving more questions than answers. The fact that Fuchsia has no relation to Linux, but is instead based on the LK kernel designed to compete with MCU-oriented OSes such as FreeRTOS, led many to speculate that it’s an IoT OS. Yet Fuchsia also supports mobile and laptop computers, so Google may have much broader ambitions for this early-stage project.

NuttX — The non-restrictive, BSD-licensed NuttX is known primarily for being the most common RTOS for open source drones running on the APM/ArduPilot and PX4 UAV platforms, which are collectively part of the Dronecode platform. NuttX is widely used in other resource-constrained embedded systems, as well. Although it supports x86 and Cortex-A5 and -A8 platforms, this POSIX- and ANSI-based OS is primarily aimed at Cortex-M MCUs. NuttX is fully pre-emptible, with fixed priority, round-robin, and sporadic scheduling. The OS is billed as “a tiny Linux work-alike with a much reduced feature set.”

RIOT OS — The 8-year-old RIOT OS is known for its efficient power usage and widespread wireless support. RIOT’s hardware requirements of 1.5KB RAM and 5KB of flash are almost as low as those of TinyOS. Yet it also offers features like multi-threading, dynamic memory management, hardware abstraction, partial POSIX compliance, and C++ support, which are more typical of Linux than of lightweight RTOSes. Other features include a low interrupt latency of roughly 40 clock cycles and priority-based scheduling. You can develop under Linux or OS X using a native port and then deploy to embedded devices.

TinyOS — This mature, open source, BSD-licensed OS is about as tiny as you can get, supporting low power consumption on MCU targets “with a few kB of RAM and a few tens of kB of code space.” Written in a C dialect called nesC, the event-driven TinyOS is used by researchers exploring low-power wireless networking, including multi-hop networks. By the project’s own admission, “computationally-intensive applications can be difficult to write.” The project is working on Cortex-M3 support, but for now it’s still designed for lower-end MCUs and radio chips.

Zephyr — The Linux Foundation’s lightweight, security-enabled Zephyr RTOS runs on as little as 2-8KB of RAM. Zephyr works on x86, ARM, and ARC systems, but focuses primarily on MCU-based devices with Bluetooth/BLE and 802.15.4 radios like 6LoWPAN. Zephyr is based on Wind River’s Rocket OS, which is based on Viper, a stripped-down version of VxWorks. Initial targets include the Arduino Due and Intel’s Arduino 101, among others. Zephyr recently appeared on SeeedStudio’s 96Boards IoT Edition BLE Carbon SBC, which is supported by a new Linaro LITE group.

Read the previous articles in the series:

Who Needs the Internet of Things?

21 Open Source Projects for IoT

Linux and Open Source Hardware for IoT

Smart Linux Home Hubs Mix IoT with AI

Learn more about embedded Linux through The Linux Foundation’s Embedded Linux Development with Yocto Project course.

 

Enterprise Open Source Programs Flourish — In Tech and Elsewhere

If you cycled the clock back about 15 years and surveyed the prevailing beliefs about open source technology at the time, you would find nowhere near the volume of welcome for it that we see today. As a classic example, The Register reported all the way back in 2001 that then-Microsoft CEO Steve Ballmer made the following famous statement in a Chicago Sun-Times interview: “Linux is a cancer that attaches itself in an intellectual property sense to everything it touches.”

Fast-forward to today, though, and not only has Microsoft been actively contributing to and advancing Linux, but countless organizations have rolled out professional, in-house programs focused on advancing open source and encouraging its adoption. Some of the companies doing so may surprise you. Here is a brief overview of these programs at companies that are all household names.

Netflix Tests, Netflix Contributes. Lots of people tune in to watch Netflix on a regular basis, but how often do they visit the company’s Open Source Software Center? Netflix has contributed a slew of very useful tools and applications to the open source community, ranging from machine learning and orchestration applications to utilities that run on its platform. Engineers at the company announce when new tools are open sourced, and many of them have been tested and hardened at scale by Netflix for years.

Don’t Count Out Telecoms. In the telecom arena, Ericsson regularly contributes projects to the open source community and is a champion of several key open source initiatives. You can browse through the company’s open source hub here. The company is also one of the most active telecom-focused participants in the effort to advance open NFV and other open technologies that can eliminate historically proprietary components in telecom technology stacks. Ericsson works directly with The Linux Foundation on these efforts, and engineers and developers are encouraged to interface with the open source community.

Microsoft Radically Changes Its Tune. Once viewed as an enemy of open source, Microsoft has completely reversed course, partly because embracing open source paves the way to a brighter future for key Microsoft platforms such as Azure. CEO Satya Nadella has said that nearly a third of the Azure cloud platform is Linux-based.

The company has also announced support for the container-friendly CoreOS Linux distribution, and Microsoft’s Azure cloud supports CentOS, Oracle Linux, SUSE, Ubuntu, and other flavors of Linux. Microsoft has a growing partnership with Red Hat, and developers at the company are encouraged to participate actively in open initiatives, a number of which you can peruse here.

Walmart Speaks Open Source? As a matter of fact, Walmart does. The company’s Walmart Labs division, located in San Bruno, right down the road from Silicon Valley, has released a slew of open source projects, and now a significant new one is arriving. Electrode is a product of Walmart’s migration to a React/Node.js platform. It gives developers templated code for building universal React apps, along with modules they can leverage to add functionality to Node apps. It’s also a key part of how Walmart’s site runs, and you can believe that site runs at scale.

Walmart’s site has 80 million monthly visitors, handles up to 10,000 requests per second, and includes 15 million items, adding more than one million new items each month — nothing to sneeze at. Read about some of the company’s open source contributions and programs here.

Mozilla’s Open Mojo. You can always count on Mozilla for an interesting spin on open source. Last year, Mozilla launched the Mozilla Open Source Support Program (MOSS) – an award program specifically focused on supporting open source and free software. As The VAR Guy notes: “The Mozilla Foundation has long injected money into the open source ecosystem through partnerships with other projects and grants. But it formalized that mission last year by launching MOSS, which originally focused on supporting open source projects that directly complement or help form the basis for Mozilla’s own products.”

Now, the company has announced that in the third quarter of this year, MOSS awarded over $300,000 to four projects which it either already supported, or which were in line with the Mozilla mission. The MOSS project is ongoing, and if you have a project that you think might qualify, you can take it to Mozilla.

Facebook and Google Play Leapfrog. Facebook and Google have such active internal open source programs, with developers and engineers regularly contributing their inventions to the community, that they frequently leapfrog each other. For example, just as Facebook announced that it open sourced its machine learning system designed for artificial intelligence computing at large scale, Google announced that it open sourced new AI tools. The Google Open Source Programs site is worth checking in on regularly, as is Facebook Open Source.  In 2016 alone, Facebook has already open sourced more than 50 projects, many proven and tested internally.

Many more organizations have active internal open source programs, and we will follow up with additional coverage of the most notable examples.

Get started with open source through The Linux Foundation’s Introduction to Linux, Open Source Development, and GIT course.

TNS Guide to Serverless Technologies: The Best Frameworks, Platforms and Tools

This post is the second of a two-part series that collects many of the technologies and services in the emerging serverless ecosystem. The first installment covered providers of Functions-as-a-Service (FaaS) and Backend-as-a-Service (BaaS or mBaas for mobile providers).

Even if you don’t need the servers themselves, serverless technologies could still require plenty of supporting software. Frameworks are needed to codify best practices, so that everyone is not out to reinvent the wheel, especially when it comes to interfacing with various languages such as Go, JavaScript and Python. And platforms are needed to help people avoid spending too much time on configuring the underlying infrastructure, perhaps by handing the work off to a service provider.

Read more at The New Stack

 

5 Common Failures Of Package Installation

For DevOps, installation is one of the major tasks. People may think package installation is pretty straightforward and easy now: Just run commands like apt-get, yum, brew, etc. Or simply leave it to containers.

Is it really that easy? Here is a list of headaches and hidden costs. 

Admit it. We all have unexpected installation failures.

Okay, we have wrapped up multiple scripts, which will install and configure all required services and components. And the test looks good. Services are running correctly. The GUI opens nicely. It feels just great. Maybe we’re even a bit proud of our achievements. Shouldn’t we be?

Then more and more people start to use our code to do deployments. That’s when the real fun starts. Oh yes, and surprises and embarrassments, too. Package installations fail with endless issues. The process mysteriously stalls somewhere, with few clues. Or the installation itself seems to be fine, but the system just doesn’t behave the same as in our testing environments.

At first, people won’t complain. They understand it happens. But with more and more issues, the mood changes. And you feel the pressure! Your boss and colleagues have their concerns, too. The task seems quite straightforward. Why is it taking so long? And how much longer will you need to stabilize the installation? Sound familiar?

So what are the moving parts and obstacles, really, in terms of system installation? We want to deliver the installation feature quickly, and it has to be reliable and stable.

Problem 1: Tools are in rapid development

Linux is powerful because it believes in the philosophy of simplicity. Each tool is there for one simple purpose. Then we combine different tools into bigger ones, for bigger missions. That’s so-called integration. Yeah, the integration!

If we only integrate stable and well-known tools, we’re in luck, and things will probably go smoothly; otherwise, the situation is much different.

  • Tools in rapid development mean issues, limitations, and workarounds.

Even worse, the error messages can be confusing. See the example below of an error in Chef development. How can we easily tell, at first glance, that it’s a local issue and not a bug?


Installing yum-epel (0.6.0) from https://supermarket.getchef.com ([opscode] https://supermarket.chef.io/api/v1)
Installing yum (3.5.3) from https://supermarket.getchef.com ([opscode] https://supermarket.chef.io/api/v1)
/var/lib/gems/1.9.1/gems/json-1.8.2/lib/json/common.rb:155:in `encode': "\xC2" on US-ASCII (Encoding::InvalidByteSequenceError)
	from /var/lib/gems/1.9.1/gems/json-1.8.2/lib/json/common.rb:155:in `initialize'
	from /var/lib/gems/1.9.1/gems/json-1.8.2/lib/json/common.rb:155:in `new'
	from /var/lib/gems/1.9.1/gems/json-1.8.2/lib/json/common.rb:155:in `parse'
	from /var/lib/gems/1.9.1/gems/ridley-4.1.2/lib/ridley/chef/cookbook/metadata.rb:473:in `from_json'
	from /var/lib/gems/1.9.1/gems/ridley-4.1.2/lib/ridley/chef/cookbook/metadata.rb:29:in `from_json'
	from /var/lib/gems/1.9.1/gems/ridley-4.1.2/lib/ridley/chef/cookbook.rb:36:in `from_path'
	from /var/lib/gems/1.9.1/gems/berkshelf-3.2.3/lib/berkshelf/cached_cookbook.rb:15:in `from_store_path'
	from /var/lib/gems/1.9.1/gems/berkshelf-3.2.3/lib/berkshelf/cookbook_store.rb:86:in `cookbook'
	from /var/lib/gems/1.9.1/gems/berkshelf-3.2.3/lib/berkshelf/cookbook_store.rb:67:in `import'
	from /var/lib/gems/1.9.1/gems/berkshelf-3.2.3/lib/berkshelf/cookbook_store.rb:30:in `import'
	from /var/lib/gems/1.9.1/gems/berkshelf-3.2.3/lib/berkshelf/installer.rb:106:in `block in install'
	from /var/lib/gems/1.9.1/gems/berkshelf-3.2.3/lib/berkshelf/downloader.rb:38:in `block in download'
	from /var/lib/gems/1.9.1/gems/berkshelf-3.2.3/lib/berkshelf/downloader.rb:35:in `each'
	from /var/lib/gems/1.9.1/gems/berkshelf-3.2.3/lib/berkshelf/downloader.rb:35:in `download'
	from /var/lib/gems/1.9.1/gems/berkshelf-3.2.3/lib/berkshelf/installer.rb:105:in `install'
	from /var/lib/gems/1.9.1/gems/celluloid-0.16.0/lib/celluloid/calls.rb:26:in `public_send'
  • Version incompatibility issues frequently happen in system integration. Usually, using the latest released version of every tool will work, but not always. Sometimes our development team may have its own preferences, which makes things a bit complicated.

We see issues like the following constantly. Yes, I know: I need to upgrade Ruby, Python, or whatever. It just takes time, which means unplanned work, again.

sudo gem install rack -v '2.0.1'
ERROR:  Error installing rack:
        rack requires Ruby version >= 2.2.2.

Tip: Record the exact version of every component, including the OS. After a successful deployment, I usually dump versions automatically via the trick listed in another post: Compare Difference Of Two Envs.
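A minimal sketch of such a version dump might look like this (the specific tools probed here are assumptions; extend the list for your own stack):

```shell
#!/bin/sh
# Record OS and tool versions after a successful deployment.
# The tools probed below are illustrative assumptions; add whatever you use.
OUT=versions.txt
{
  echo "== OS =="
  uname -sr
  echo "== Tools =="
  for tool in ruby python node gem pip; do
    command -v "$tool" >/dev/null 2>&1 && \
      echo "$tool: $("$tool" --version 2>&1 | head -1)"
  done
} > "$OUT"
echo "Recorded versions in $OUT"
```

Diffing two such files from different environments quickly reveals version drift.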

Problem 2: Every network request is a potential point of failure

Frequently, the installation will run commands like apt-get/yum or curl/wget. It will launch outgoing requests.

Well, watch out for any network request, my friends.

  • The external server may return a 5XX error, time out, or respond more slowly than before.
  • Files may be removed from the server, resulting in HTTP 404 errors.
  • A corporate firewall may block the requests because of security or data-leak concerns.

Each outgoing network request is a failure point. Consequently, our deployment fails or suffers.

Tip: Replicate as many servers as possible under our control — for example, a local HTTP server, an apt repo server, etc.
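For example, a vetted set of pre-downloaded packages can be served from a local HTTP server so installs never leave the machine. A sketch (the pkgs directory and port 8123 are illustrative assumptions; --directory needs Python 3.7+):

```shell
# Serve vetted, pre-downloaded packages locally instead of fetching them
# from the Internet. "pkgs" and port 8123 are illustrative assumptions.
mkdir -p pkgs
python3 -m http.server 8123 --bind 127.0.0.1 --directory pkgs &
SRV_PID=$!
sleep 1
# A deployment step would now fetch from http://127.0.0.1:8123/ instead of
# an external mirror; here we just fetch the index page as a smoke test.
python3 -c "import urllib.request as u; open('index.html','wb').write(u.urlopen('http://127.0.0.1:8123/').read())"
kill "$SRV_PID"
```

A real setup would point apt or yum at this mirror via a sources list entry.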

People might try to pre-cache all Internet downloads by building customized OS images or Docker images. This is worthwhile for installations with no network access, but it comes at a cost: things become more complicated, and it takes a significant amount of effort.

Tip: Record all outgoing network requests during deployment. Yes, the issue is still there, but this gives us valuable input: what to improve and what to check. Tracking requests can be done easily: Monitor Outbound Traffic In Deployment.
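One low-tech way to record outbound requests is to snapshot established connections while the deployment runs (a sketch; ./deploy.sh is a hypothetical stand-in for your real deployment command, and ss ships with iproute2):

```shell
# Snapshot outbound connections every 2 seconds while the deployment runs.
# "./deploy.sh" is a hypothetical placeholder for your deployment command.
( while sleep 2; do ss -tn state established 2>/dev/null; done > outbound.log ) &
MON_PID=$!
./deploy.sh || true          # run the deployment (failures ignored in this sketch)
kill "$MON_PID" 2>/dev/null
sort -u outbound.log         # the unique remote endpoints contacted
```

On a real host you might use tcpdump instead for a full packet capture, at the cost of needing root.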

Problem 3: Always installing the latest version will guarantee issues

People quite often install packages like this:

apt-get -y update && 
apt-get -y install ruby

But what version will we get? Today we get ruby 1.9.5, but months later it could be ruby 2.0.0 or 2.2.2. You do see the potential risks, don’t you?

Tip: Only install packages with a fixed version.

Name     Before                         After
Ubuntu   apt-get install docker-engine  apt-get install docker-engine=1.12.1-0~trusty
CentOS   yum install kernel-debuginfo   yum install kernel-debuginfo-2.6.18-238.19.1.el5
Ruby     gem install rubocop            gem install rubocop -v "0.44.1"
Python   pip install flake8             pip install flake8==2.0
Node.js  npm install express            npm install express@3.0.0

Problem 4: Avoid installation from third-party repos

Let’s say we want to install haproxy 1.6. However, the official Ubuntu repo only provides haproxy 1.4 or 1.5. So we do this:

sudo apt-get install software-properties-common
sudo add-apt-repository ppa:vbernat/haproxy-1.6

sudo apt-get update
sudo apt-get dist-upgrade

sudo apt-get install haproxy

It works like a charm, but does this really put an end to this problem? Mostly. However, it still fails from time to time.

  • The availability of a third-party repo is usually lower than that of the official repo.

---- Begin output of apt-key adv --keyserver keyserver.ubuntu.com --recv 1C61B9CD ----
STDOUT: Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --homedir /tmp/tmp.VTYpQ40FG8 --no-auto-check-trustdb --trust-model always --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyring /etc/apt/trusted.gpg.d/brightbox-ruby-ng.gpg --keyring /etc/apt/trusted.gpg.d/oreste-notelli-ppa.gpg --keyring /etc/apt/trusted.gpg.d/webupd8team-java.gpg --keyserver keyserver.ubuntu.com --recv 1C61B9CD
gpgkeys: key 1C61B9CD can't be retrieved
STDERR: gpg: requesting key 1C61B9CD from hkp server keyserver.ubuntu.com
gpg: no valid OpenPGP data found.
gpg: Total number processed: 0
---- End output of apt-key adv --keyserver keyserver.ubuntu.com --recv 1C61B9CD ----
  • A third-party repo is also more likely to change. Today you get 1.6.5 and are happy with that. But suddenly, days later, it starts installing 1.6.6 or 1.6.7. Surprise!

Tip: Avoid third-party repos as much as possible. If there’s no way to avoid one, track and examine the installed version closely.
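A quick post-install check can catch such drift. A sketch (the package name haproxy and pinned version 1.6.5 are assumptions; dpkg-query is the Debian/Ubuntu way to ask):

```shell
# Verify that the version actually installed from a third-party repo matches
# the version we pinned. "haproxy" and "1.6.5" are illustrative assumptions.
EXPECTED="1.6.5"
ACTUAL=$(dpkg-query -W -f='${Version}' haproxy 2>/dev/null) || ACTUAL="not-installed"
if [ "$ACTUAL" != "$EXPECTED" ]; then
  echo "WARNING: haproxy version drifted: expected $EXPECTED, got $ACTUAL"
fi
```

Run this as a post-deployment step so a silently upgraded repo shows up in your logs instead of in production.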

Problem 5: Installing from source code could be painful

If we can install directly from source code, it’s much more reliable. But the problem is …

  • It’s usually harder. Try building Linux from scratch and you will feel the disaster and mess: too many weird errors, missing packages, conflicting versions, etc. You may feel like you’re flying a plane without a manual.
  • Compiling from source takes much longer. For example, compiling Node.js can take ~30 minutes, while apt-get takes only seconds.
  • Service management facilities may be missing. We want to manage services via "service XXX status/stop/start" and configure them to start automatically. With a source code installation, those facilities might be missing.
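When those service facilities are missing, you can supply them yourself. A minimal systemd unit for a source-built service might look like this (the service name "myapp" and its paths are hypothetical; on a real host the file belongs in /etc/systemd/system/):

```shell
# Generate a minimal systemd unit for a service built from source.
# "myapp" and its paths are hypothetical; on a real host, install the file to
# /etc/systemd/system/ and run "systemctl enable --now myapp".
cat > myapp.service <<'EOF'
[Unit]
Description=myapp (built from source)
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
```

With the unit in place, the usual status/stop/start and autostart behavior comes back for free.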

Do containers cure the pain?

Nowadays, more and more people are starting to use containers to avoid installation failures. Yes, this largely reduces failures for end users. But it doesn’t solve the problem completely, especially for DevOps. We’re the ones who provide the Docker image, right?

To build images from a Dockerfile, we still face the five common failures listed above. Containers shift the failure risks from the actual deployment to the image build process.

Further reading: 5 Tips For Building Docker Image.

Bring it all together

Improvement suggestions for package installation:

  • List all versions and hidden dependencies
  • Monitor all external outgoing traffic
  • Only install packages with a fixed version
  • Try your best to avoid third-party repos

Containers can help reduce installation failures. But DevOps folks still need to deal with all of the above failure modes in the image build process.


Original Article: http://dennyzhang.com/installation_failure

More Reading: How To Check Linux Process Deeply With Common Sense


Learn more about DevOps through this new course from The Linux Foundation and EdX: Introduction to DevOps: Transforming and Improving Operations.

“DevOps Is a Management Problem”

Improving your own organization’s performance – from where it is now to performance levels equal to the industry leaders – seems like a very long and difficult road. What is missing in most organizations? We talked to Damon Edwards, co-founder and managing partner of DTO Solutions and DevOpsCon speaker, about the challenges that accompany DevOps and what a repeatable system that empowers teams to find and fix their own problems looks like.

JAXenter: DevOps has the potential to transform not only the IT department but also the whole company. Why is it that DevOps means more than just bringing Devs and Ops together?

Damon Edwards: DevOps, Agile, Lean, Kaizen… they all are built on the same fundamental ideas. All are about teaching an organization how to find and fix its own problems in order to improve time to market and quality while decreasing costs. 

Read more at JAXenter

SUSE Preps Linux for ARM Servers

The move toward ARM-based servers took another step forward this week as SUSE announced plans for server and storage versions of Linux supporting 64-bit ARM SoCs. SUSE Linux Enterprise Server and SUSE Enterprise Storage will be available before the end of the year.

Intel currently dominates the server sector, one of its most profitable markets, with its x86 Xeon processors. The SUSE press release said its partners are using the code to develop “large-scale storage, high performance computing and networking systems.” It did not mention any specific OEMs or other ARM SoC vendors.

Read more at EE Times

Walmart’s Take on Open-Source OpenStack Technology

All of Walmart’s e-commerce runs on OpenStack, and the company also is transitioning its retail back-office workloads onto OpenStack.

Walmart, the world’s largest retailer and one of the largest employers, aims to give back to the OpenStack community. In a session at the OpenStack Summit here, Andrew Mitry, lead architect for Walmart’s OpenStack effort, and Megan Rossetti, part of the OpenStack Operations team at Walmart, detailed how the open-source model is working for the retail giant. Mitry explained that Walmart started on its OpenStack journey four years ago with a proof of concept (PoC) deployment. That original PoC made use of leftover and idle hardware as an initial test case for Walmart. The PoC was successful, and OpenStack has since helped transform the way Walmart works.

“Today, 100 percent of our e-commerce runs on OpenStack,” Mitry said. “We are now focused on transitioning our retail back-office workloads onto OpenStack, as well.” Mitry noted that now it’s not just web applications that Walmart is moving to OpenStack, but a lot of data intensive applications, as well.

Read more at eWeek

Top 10 Reasons Why Node.js Is Next Big Thing in Web Application Development

Node.js is an open source, cross-platform JavaScript runtime environment for developing a wide variety of tools and applications. There are many excellent reasons to use Node.js, regardless of experience level. Here are some reasons why Node.js will be the next big thing in Web application development.

Speed – Node.js uses the V8 engine, developed by Google for Chrome, as its JavaScript runtime. V8 compiles JavaScript into native machine code, which is why Node.js helps developers in terms of speed.

Real-time Web applications – Node.js is very helpful for designing real-time Web applications because it excels at handling many concurrent connections, making it a good fit for multi-user applications like games and chat.

JavaScript – Most developers know JavaScript, and Node.js allows them to use JavaScript on the client, on the server, and in the database.

Data streaming – Data comes in the form of streams, and HTTP request and response events can be leveraged to full advantage to develop many great features.

Increases productivity – Node.js has increased the efficiency of the Web application development process at various organisations because it breaks down the silos between front-end and back-end developers.

Effective tooling with NPM – The Node.js package manager (NPM) is robust, consistent, and superfast, enabling developers to get dependency management right.

Code sharing – Node.js code is the same whether written for the browser or for the server, so one can easily move code from browser to server and vice versa.

Used for hosting – Node.js is also helpful when it comes to hosting. Many PaaS (Platform as a Service) providers now deploy Node.js.

High performance – Three big names in the software industry, PayPal, Walmart, and Groupon, have revealed their experiences after implementing Node.js, and those experiences were quite positive. LinkedIn has also seen great results following its Node.js implementation.

Easy modification – Node.js allows developers to work on small modules that can be combined together. Working with small modules enables developers to modify a module whenever a change is required in the future.

For more information visit https://goo.gl/MqXemZ

 

How to Install Rocket.Chat Server with Nginx on Ubuntu 16.04

In this tutorial, I will show you how to build your own chat server using Rocket.Chat. I will use the latest Ubuntu LTS 16.04 server for the installation and Nginx as a reverse proxy for the Rocket.Chat application. Rocket.Chat is a free and open source online chat solution for team communication that allows you to build your own Slack-like online chat.

Read complete article

How to Check Bad Sectors or Bad Blocks on Hard Disk in Linux

Let us start by defining a bad sector/block: it’s a section on a disk drive or flash memory that can no longer be read from or written to, as a result of fixed physical damage on the disk surface or failed flash memory transistors.

As bad sectors continue to accumulate, they can degrade your disk drive or flash memory capacity, or even lead to hardware failure.
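The usual starting point for such a scan is badblocks from e2fsprogs. A sketch (on real hardware you would point it at the device, e.g. /dev/sdb, with root privileges; here a small image file stands in so the commands can run anywhere):

```shell
# Scan for bad blocks with badblocks (e2fsprogs). On real hardware you would
# run: sudo badblocks -v /dev/sdb > badsectors.txt
# A small image file stands in for the device here, so no root is needed.
DEVICE="${DEVICE:-disk.img}"
[ -e "$DEVICE" ] || dd if=/dev/zero of="$DEVICE" bs=1M count=4 2>/dev/null
badblocks -v "$DEVICE" > badsectors.txt 2>/dev/null || true
# On an ext2/3/4 filesystem, the resulting list can then be handed to e2fsck
# so the filesystem stops using those blocks:
#   sudo e2fsck -l badsectors.txt /dev/sdb
wc -l badsectors.txt
```

An empty badsectors.txt means no unreadable blocks were found.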


Read complete article