
Beginning Grep for Linux SysAdmins

GNU grep is an amazing power tool for finding words, numbers, spaces, punctuation, and random text strings inside of files, and this introduction will get you up and running quickly.

We’ll stick to GNU grep and the Bash shell, because both are the defaults on most Linux distros. You can verify that you have GNU grep, and not some other grep:

$ grep -V
grep (GNU grep) 2.21
Copyright (C) 2014 Free Software Foundation, Inc.

It’s unlikely that you will bump into a non-GNU grep on Linux unless you put it there. There are some differences between GNU grep and Unix grep, and these are often discussed in documentation and forums as though we spend our days traipsing through multiple Linuxes and Unixes with gay abandon. Which sounds like fun, but if you use only Linux then you don’t need to worry about any differences.

Basic grep

We humans tend to think in terms of the numbers, words, names, and typos we want to find, but grep doesn’t know about these things; it looks for patterns of text strings to match. That is why you see the phrase “pattern matching” when you’re studying grep and other GNU text-processing tools.

I suggest making a plain text file to use for practicing the following examples because it limits the scope, and you can quickly make changes.

Most of us know how to use grep in simple ways, like finding all occurrences of a word in a file. First type your search term, and then the file to search:

$ grep word filename

By default, grep performs a case-sensitive search. You can perform a recursive case-insensitive search in a directory and its subdirectories:

$ grep -ir word dirname

This is an easy and useful way to find things, but it has a disadvantage: grep doesn’t look for words, it looks for text strings, so when you search for “word” grep thinks that “wordplay” and “sword” are matches. When you want an exact word match use -w:

$ grep -w word filename

Use ^ and $ to find matches at the beginnings and ends of lines:

$ grep ^word filename
$ grep word$ filename

Use -v to invert your match and find the lines that do not contain your search string:

$ grep -v word filename

You can search a list of space-delimited files, which is useful when you have just a few files to search. grep prefixes each match with its filename, so you know which files your matches are in:

$ grep word filename1 filename2 filename3
filename1:Most of us know how to use <code>grep</code> in simple ways
filename2:<pre><code>$ grep word filename</code></pre>
filename3:This is an easy and useful way to find things

You can also see the line numbers with -n, which is fab for large files:

$ grep -n word filename1 filename2 filename3

Sometimes you want to see the surrounding lines, for example when you’re searching log or configuration files. The -Cn option prints n lines of context before and after each match, which in this example is 4:

$ grep -nC4 word filename

Use -Bn to print your desired number of lines before your match, and -An after.
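For example, these print two lines before and four lines after each match, respectively (filename is the same placeholder used throughout):

$ grep -B2 word filename
$ grep -A4 word filename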

So how do you search for phrases when grep sees the word after a space as a filename? Search for phrases by enclosing them in single quotes:

$ grep 'two words' filename

What about double quotes? These behave differently than single quotes in Bash. Single quotes perform a literal search, so use these for plain text searches. Use double quotes when you want shell expansion on variables. Try it with this simple example: first create a new Bash variable using a text string that is in your test file, verify it, and then use grep to find it:

$ VAR1=strings
$ echo $VAR1
strings
$ grep "$VAR1" filename
strings

Wildcards

Now let’s play with wildcards. The . matches any single character except newlines. I could use this to match all occurrences of “Linuxes” and “Unixes” in this article:

$ grep -w Linux.. grep_cheat_sheet.html
$ grep -w Unix.. grep_cheat_sheet.html

Or do it in one command:

$ grep -wE '(Linux..|Unix..)' grep_cheat_sheet.html

That is an OR search that matches either one. What about an AND search to find lines that contain both? It looks a little clunky—but this is how it’s done, piping the results of the first grep search to the second one:

$ grep -w Linux.. grep_cheat_sheet.html |grep -w Unix..

I use this one for finding HTML tag pairs:

$ grep -i '<h3>.*</h3>' filename

Or find all header tags:

$ grep -i '<h.>.*</h.>' filename

You need both the dot and the asterisk to behave as a wildcard that matches anything: . means “match a single character,” and * means “match the preceding element 0 or more times.”
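A quick way to see the difference (what matches will depend on what your practice file contains): the first command finds any four-character word beginning with “wor”, such as “word” or “work”, while the second finds a whole <p>...</p> pair with anything, or nothing, between the tags:

$ grep -w 'wor.' filename
$ grep '<p>.*</p>' filename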

Bracket Expressions

Bracket expressions find all kinds of complicated matches. A bracket expression matches any single character from the set inside the brackets. For example, you can find specific upper- and lower-case matches in a word:

$ grep -w '[lL]inux' filename 

This example finds all lines containing pairs of parentheses that enclose any mix of letters and spaces. A-Z and a-z define character ranges: A to Z inclusive uppercase, and a to z inclusive lowercase. To include a space in the set, simply press the spacebar inside the brackets, and the trailing * lets the expression match any number of these characters:

$ grep '([A-Za-z ]*)' filename 

Character classes are nice shortcuts for complicated expressions. This example finds all of your punctuation, and uses the -o option to display only the punctuation and not the surrounding text:

$ grep -o "[[:punct:]]" filename
<
>
,
.
<
/
>

That example isn’t all that practical, but it looks kind of cool. A more common type of search is using character classes to find lines that start or end with numbers, letters, or spaces. This example finds lines that start with numbers:

$ grep "^[[:digit:]]" filename

Trailing spaces goof up some scripts, so find them with the space character class:

$ grep "[[:space:]]$" filename

Basic Building Blocks

These are the basic building blocks of grep searches. When you understand how these work, you’ll find that the advanced incantations are understandable. GNU grep is ancient and full of functionality, so study the GNU grep manual or man grep to dig deeper.

 

Learn more about system management in the Essentials of System Administration training course from The Linux Foundation.

Navigating OpenStack: Community, Release Cycles and Events

Hopefully last week we piqued your interest in a career path in OpenStack. Adoption is growing and so is the number of OpenStack jobs. Like any other open source project, if you’re going to use it—professionally or personally—it’s important to understand its community and design/release patterns.

The OpenStack community

OpenStack has an international community of more than 60,700 individual members. Not all of these members contribute code, but they are all involved in developing, evangelizing or supporting the project.

Individuals have contributed over 20 million lines of code to OpenStack since its inception in 2010. OpenStack’s next release, Newton, will arrive the first week of October and has more than 2,500 individual contributors. You need not be an expert in infrastructure development to contribute—OpenStack project teams include groups like the internationalization team (which helps translate the dashboard and docs into dozens of languages) and the docs team—work that’s equally important to OpenStack’s success.

You can find a full list of projects here, ranging from core services like compute and storage to emerging solutions like container networking.

The release cycle

OpenStack releases run on a six-month schedule. Releases are named in alphabetical order after a landmark near the location of that release cycle’s Summit, the big event where the community gathers to plan the next release. The first release was named Austin; the current release is Mitaka and the upcoming release is Newton.

Release planning will soon undergo a change in response to community feedback. In the A-N releases, developers and contributors met with users to gather requirements and worked on solutions in their teams at the OpenStack Summit (an event we’ll talk about momentarily—you don’t want to miss it!). This started to become too large a task for the allotted time.

Starting next year, the community will try something new: what used to be the Design Summit will be split into two events. More strategic planning conversations and requirements gathering will continue to happen at the Summit in an event to be called the “Forum.” More detailed implementation discussions will happen among contributors at the smaller Project Teams Gathering (PTG), which will occur in between the Summits.

If you’re a developer or administrator working professionally on OpenStack, you might find yourself attending the Forum to give your input on the upcoming release, or becoming a contributor on a project team and attending the PTG!

Learn more about what’s new in the Newton release with the Newton Design video series.

Summits, Hackathons and everything in between

With such a large and active community, there’s no shortage of ways to meet up with other Stackers. The biggest, mark-your-calendar-don’t-miss-it event is the OpenStack Summit. The Summit is a bi-annual gathering of community members, IT leaders, developers and ecosystem supporters. Each year one Summit is held in North America and one Summit rotates between APAC and EMEA. In April 2016, the Austin, Texas Summit brought more than 7,800 Stackers. October 25-28, 2016, the community heads to Barcelona, Spain. Summits are a week of hands-on workshops, intensive trainings, stories and advice from real OpenStack users, and discussions submitted and voted on by the community.

In between Summits, community members host OpenStack Days—one or two day gatherings that draw everyone from active contributors to business leaders interested in learning more about OpenStack. Topics are determined by community organizers, and often reflect the challenges pertinent to that community as well as the local industries’ specialties.

The newest OpenStack events for cloud app developers are OpenStack App Hackathons, another community-led event. Ever wondered what you could build if you had 48 hours, unlimited cloud resources and a bunch of pizza? Taiwan hosted the first Hackathon, where the winning team created a tool that helped rehabilitate damaged neuromuscular connections, followed by Guadalajara, where the winning team created an app that gave users storage of and access to their healthcare records, a major problem in the team’s local community.  

And of course, there’s no shortage of local user groups and Meetups to explore around the world.

Getting Started

In the subsequent pieces in this series, we’ll discuss the tools and resources available for sharpening your OpenStack skills and developing the necessary experience to work on OpenStack professionally. But if you’re ready to start exploring, the community has multiple low risk options to start getting involved.

If you’re interested in development, DevStack is a full OpenStack deployment that you can run on your laptop. If you’re interested in building apps on OpenStack or playing with a public cloud-like environment, you can use TryStack, a free testing sandbox. There is also a plethora of OpenStack Powered clouds in the OpenStack Marketplace.

As you’re exploring OpenStack, keep ask.openstack.org handy—it’s the OpenStack-only version of Stackoverflow.

Common Concerns

You’ve seen the job market, you’ve gotten the community layout, and surely you have more questions. In our third installment, we’ll address the experience it takes to get hired to work on OpenStack, and share the resources you can use to help get you there. If you have a question you want answered, Tweet us at @OpenStack.

 

Want to learn the basics of OpenStack? Take the new, free online course from The Linux Foundation and EdX. Register Now!

The OpenStack Summit is the most important gathering of IT leaders, telco operators, cloud administrators, app developers and OpenStack contributors building the future of cloud computing. Hear business cases and operational experience directly from users, learn about new products in the ecosystem and build your skills at OpenStack Summit, Oct. 25-28, 2016, in Barcelona, Spain. Register Now!

Cloud Foundry Releases Free Online Courses

As an open source Platform as a Service (PaaS) solution, Cloud Foundry makes it extremely easy to focus on delivering services and apps without having to worry about the platform. However, it’s not always so easy for developers and administrators new to Cloud Foundry to quickly get up to speed on the technology.

Pivotal has created a wealth of courses to help developers and others who are interested in learning Cloud Foundry. Until now, these courses were available internally or were offered through Cloud Foundry Foundation (CFF) events. Pivotal donated the courses to the Cloud Foundry Foundation in August, and CFF is now releasing the community training and education material under the Apache 2.0 license.

“This content is the same as we have used for the three courses offered at Cloud Foundry summits across the world for the last year or so,” said Tim Harris, Director of Certification Programs, Cloud Foundry Foundation in an interview.

These courses were developed by Pivotal, and now anyone can take them online or use the materials for their own training programs. And because they are available under the Apache open source license, people can not only use them but also contribute to them. The materials are available on GitHub.

“As Cloud Foundry becomes the de facto standard for deploying multi clouds, the need for skilled engineers becomes increasingly critical,” said Sam Ramji, CEO, Cloud Foundry Foundation. “We deeply appreciate Pivotal’s effort in developing the material and generosity in open sourcing it to benefit the community.”

Three available courses

The material spans beginner to intermediate levels across three courses: “Zero to Hero” (beginner), “Microservices” and “Operating Cloud Foundry”. The courses consist of slides, extensive lab exercises, and some sample applications. Each course is meant to be one day long.

Zero to Hero
As the name implies, the course is targeted at people who do have experience with web-based applications, but have little or no Cloud Foundry experience. Zero to Hero covers deploying and managing applications on Cloud Foundry. Meant for beginners, it gives an “overview of Cloud Foundry and how it works, including specifics relating to services, buildpacks, and architecture,” according to the project page.

Microservices on Cloud Foundry: Going Cloud Native 
This course offers hands-on experience with designing applications for Cloud Foundry. It targets application developers who are interested in deploying microservice-based systems into the cloud. The course gives an overview of CF and its tools. It talks about “how to architect polyglot applications for deployment and scaling in the cloud.”

Operating a Platform: BOSH and Everything Else

This course targets those who have experience managing Linux-based systems but have little or no experience with Cloud Foundry BOSH. The course helps in understanding “the basics of how to deploy and manage the Cloud Foundry platform as well as the stateful data services that power cloud-native applications. It includes an operational overview of Cloud Foundry and data services, and how these can be deployed with the cluster orchestration tool, ‘BOSH’.”

Get Hands-on

There is an abundance of online articles about Cloud Foundry, so what value does this educational material provide? Harris explained, “These courses are heavy with hands-on lab exercises, and hence will provide the Cloud Foundry community with a much more detailed experience than can be obtained by simply reading articles.”

These are typical web 1.0 courses: you consume them on your own instead of interacting with instructors. When we inquired about follow-ups or further questions a new learner might have, Harris said that CFF will continue to offer instructor-led courses at Cloud Foundry Summit events, where you can interact with instructors. There are two upcoming events where these courses will be offered: Sept. 26 at the Cloud Foundry Summit in Frankfurt, Germany, and again at the North American Cloud Foundry Summit in May 2017.

There is clearly a need and demand for such courses that are developed and designed by the projects themselves, and CF is addressing that need.

 

Cloud Migration Is Making Performance Monitoring Crucial

Application performance monitoring (APM) and network performance monitoring (NPM) are becoming increasingly important as businesses adopt cloud-based services and virtualized infrastructure.

In the recent SDxCentral report, “Network Performance Management Takes On Applications,” more than half of surveyed respondents are actively looking at APM and NPM systems, and more than one-third are in the testing and deployment phases of adoption. Another 16 to 20 percent are piloting these systems, and roughly 15 percent have already deployed them in their network.

Read more at SDx Central

Is An Editable Blockchain the Future of Finance?

The consultancy firm Accenture is patenting a system that would allow an administrator to make changes to information stored in a blockchain. In an interview with the Financial Times (paywall), Accenture’s global head of financial services, Richard Lumb, said that the development was about “adapting the blockchain to the corporate world” in order to “make it pragmatic and useful for the financial services sector.”

Accenture aims to create a so-called permissioned blockchain—an invitation-only implementation of the technology, and the one currently favored by banks. 

Read more at MIT Technology Review

Why China Is the Next Proving Ground for Open Source Software

Western entrepreneurs still haven’t figured out China. For most, the problem is getting China to pay for software. The harder problem, however, is building software that can handle China’s tremendous scale.

There are scattered examples of success, though. One is Alluxio (formerly Tachyon), which I detailed recently in its efforts to help China’s leading online travel site, Qunar, boost HDFS performance by 15X. Alluxio CEO and founder, Haoyuan Li, recently returned from China, and I caught up with him to better understand the big data infrastructure market there, as China looks to spend $370 million to double its data center capacity in order to serve 710 million internet users.

Read more at TechRepublic

The Power of Protocol Analyzers

In the complicated world of networking, problems happen. But determining the exact cause of a novel issue in the heat of the moment gets dicey. In these cases, even otherwise competent engineers may be forced to rely on trial and error once Google-fu gives out.

Luckily, there’s a secret weapon waiting for willing engineers to deploy—the protocol analyzer. This tool allows you to definitively determine the source of nearly any error, provided you educate yourself on the underlying protocol. The only catch for now? Many engineers avoid it entirely due to (totally unwarranted) dread.

Read more at Ars Technica

Create an Open Source AWS S3 server

Amazon S3 (Simple Storage Service) is a very powerful online file storage web service provided by Amazon Web Services. Think of it as a remote drive where you can store files in directories, retrieve and delete them. Companies such as DropBox, Netflix, Pinterest, Slideshare, Tumblr and many more are relying on it.

While the service is great, it is not open source, so you have to trust Amazon with your data, and even though they provide free-tier access for a year, you must enter credit card information to create an account. Because S3 is a must-know for any software engineer, I want my students to gain experience with it and use it in their web applications, yet I don’t want them to pay for it. Some Holberton School students are also working during commutes, meaning either slow Internet connections and expensive bandwidth or no Internet connection at all.

That’s why I started looking into open source solutions that would emulate the S3 API and that could run on any machine. As usual, the open source world did not disappoint me and provided several solutions; here are my favorites:

The first one that I ran into is Fake S3. Written in Ruby and available as a gem, it takes only a few seconds to install, and the library is very well maintained. It is a great tool to get started with, but it does not implement all S3 commands and is not suited for production use.

The second option is HPE Helion Eucalyptus which offers a wide spectrum of AWS services emulation (CloudFormation, Cloudwatch, ELB…) including support for S3. This is a very complete solution (only running on CentOS), oriented toward enterprises, and unfortunately too heavyweight for individuals or small businesses.

The last, and my preferred, option is Scality S3 server. It is available as a Docker image, which makes it super easy to start and distribute. The software is suited for individuals, who can get started in seconds without any complicated installation, but also for enterprises, as it is production-ready and scalable. The best of both worlds.

Getting started with Scality S3 server

To illustrate how easy it is to emulate AWS S3 with Scality S3 server, let’s do it live!

Requirements: Docker and Ruby installed on your machine.

Launch the Scality S3 server Docker container:

$ docker run -d --name s3server -p 8000:8000 scality/s3server
Unable to find image 'scality/s3server:latest' locally
latest: Pulling from scality/s3server
357ea8c3d80b: Pull complete
52befadefd24: Pull complete
3c0732d5313c: Pull complete
ceb711c7e301: Pull complete
868b1d0e2aad: Pull complete
3a438db159a5: Pull complete
38d1470647f9: Pull complete
4d005fb96ed5: Pull complete
a385ffd009d5: Pull complete
Digest: sha256:4fe4e10cdb88da8d3c57e2f674114423ce4fbc57755dc4490d72bc23fe27409e
Status: Downloaded newer image for scality/s3server:latest
7c61434e5223d614a0739aaa61edf21763354592ba3cc5267946e9995902dc18
$

Check that the Docker container is properly running:

$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                    NAMES
ed54e677b1b3        scality/s3server    "npm start"         5 days ago          Up 5 days           0.0.0.0:8000->8000/tcp   s3server
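If the container isn’t listed, or you just want to see what the server is doing, you can inspect its logs (s3server is the container name we passed to docker run):

$ docker logs s3server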

Install the Ruby gem AWS SDK v2 (documentation here):

$ gem install aws-sdk
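You can confirm that the gem installed correctly, and see which version you got, with:

$ gem list aws-sdk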

Now let’s create a file that we will upload to our bucket (we will use it later):

$ touch myfavoritefile

Using your favorite text editor, create a file containing your Ruby script; let’s name it `s3_script.rb`:

#!/usr/bin/ruby
require 'aws-sdk'
s3 = Aws::S3::Client.new(
 :access_key_id => 'accessKey1',
 :secret_access_key => 'verySecretKey1',
 :region => 'us-west-2',
 :endpoint => 'http://127.0.0.1:8000/',
 :force_path_style => true
)
s3.create_bucket({bucket: "mybucket"})
File.open('myfavoritefile', 'rb') do |file|
 s3.put_object(bucket: 'mybucket', key: 'myfavoritefile', body: file)
end
resp = s3.list_objects_v2(bucket: 'mybucket')
puts resp.contents.map(&:key)

Run the script:

$ ruby s3_script.rb
myfavoritefile

Congratulations, you created your first S3 bucket and uploaded a file to it!

 

Let’s explain the code

Here we indicate that this script should be executed using Ruby and that we are including the AWS SDK library:

#!/usr/bin/ruby
require 'aws-sdk'

We initiate a connection to our S3 server running in our Docker container. Note that `accessKey1` and `verySecretKey1` are the default access key and secret access key defined by Scality S3 server.
s3 = Aws::S3::Client.new(
 :access_key_id => 'accessKey1',
 :secret_access_key => 'verySecretKey1',
 :region => 'us-west-2',
 :endpoint => 'http://127.0.0.1:8000/',
 :force_path_style => true
)

Let’s create an S3 bucket named `mybucket`:

s3.create_bucket({bucket: "mybucket"})

Here, we are uploading the previously created file `myfavoritefile` to our bucket `mybucket`:

File.open('myfavoritefile', 'rb') do |file|
 s3.put_object(bucket: 'mybucket', key: 'myfavoritefile', body: file)
end

Finally, this is collecting the content of `mybucket` and displaying it to standard output:

resp = s3.list_objects_v2(bucket: 'mybucket')
puts resp.contents.map(&:key)

Want to interact more with S3-compatible AWS apps? Join the hackathon co-organized by Scality and Seagate, October 21st to 23rd at Holberton School in San Francisco. No admission fee, and food and drinks are provided. The goal will be to write an S3 server with a Seagate Kinetic backend, including using an erasure coding library and writing a data placement algorithm.

 

21 Open Source Projects for IoT

The Internet of Things market is fragmented, amorphous, and continually changing, and its very nature requires more than the usual attention to interoperability. It’s not surprising then, that open source has done quite well here — customers are hesitant to bet their IoT future on a proprietary platform that may fade or become difficult to customize and interconnect.

In this second entry in a four-part series about open source IoT, I have compiled a guide to major open source software projects, focusing on open source tech for home and industrial automation. I am omitting more vertical projects related to IoT, such as Automotive Grade Linux and Dronecode, and I’m also skipping open source, IoT-oriented OS distributions, such as Brillo, Contiki, Mbed, OpenWrt, Ostro, Riot, Ubuntu Snappy Core, UCLinux, and Zephyr. Next week, I’ll cover hardware projects — from smart home hubs to IoT-focused hacker boards — and in the final part of the series, I’ll look at distros and the future of IoT.

The list of 21 projects below includes two major Linux Foundation hosted projects — AllSeen (AllJoyn) and the OCF (IoTivity) — and many more end-to-end frameworks that link IoT sensor endpoints with gateways and cloud services. I have also included a smattering of smaller projects that address particular segments of the IoT ecosystem. We could list more, but it’s increasingly difficult to determine the difference between IoT software and just plain software. From the embedded world to the cloud, more and more projects have an IoT story to tell.

All 21 projects claim to be open source, although it’s beyond the scope of this article to ensure they fully live up to those claims. They all run Linux on at least one component in the ecosystem, and most support it throughout, from desktop development to cloud/server, gateway, and sensor endpoint components. The vast majority have components that can run on Linux hacker boards like the Raspberry Pi and BeagleBone, and many support Arduino.

There is still plenty of proprietary technology in IoT, especially among the top-down, enterprise platforms. Yet, even some of these offer partially open access. Verizon’s ThingSpace, for example, which targets 4G smart city applications, has a free development API that supports hacker boards, even if the core platform itself is proprietary. Somewhat similarly, Amazon’s AWS IoT suite has a partially open device SDK and open source starter kits.

Other major proprietary platforms include Apple’s HomeKit and Microsoft Azure IoT Suite. Then there’s the 230-member Thread Group, which oversees the peer-to-peer Thread networking protocol based on 6LoWPAN. Launched by Nest, which is owned by Alphabet, the umbrella organization over Google, the Thread Group does not offer a comprehensive open source framework like AllSeen and the OCF. However, it’s associated with Brillo, as well as the Weave IoT communication protocol. In May, Nest launched an open source version of Thread called OpenThread (see farther below).

Here are 21 open source software projects for the Internet of Things:

AllSeen Alliance (AllJoyn) — The AllJoyn interoperability framework overseen by the AllSeen Alliance (ASA) is probably the most widely adopted open source IoT platform around. 

Bug Labs dweet and freeboard — Bug Labs started out making modular, Linux-based Bug hardware gizmos, but it long ago morphed into a hardware-agnostic IoT platform for the enterprise. Bug Labs offers a “dweet” messaging and alerts platform and a “freeboard” IoT design app. Dweet helps publish and describe data using a HAPI web API and JSON. Freeboard is a drag-and-drop tool for designing IoT dashboards and visualizations.

DeviceHive — DataArt’s AllJoyn-based device management platform runs on cloud services such as Azure, AWS, Apache Mesos, and OpenStack. DeviceHive focuses on Big Data analytics using tools like ElasticSearch, Apache Spark, Cassandra, and Kafka. There’s also a gateway component that runs on any device that runs Ubuntu Snappy Core. The modular gateway software interacts with DeviceHive cloud software and IoT protocols, and is deployed as a Snappy Core service.

DSA — Distributed Services Architecture facilitates decentralized device inter-communication, logic, and applications. The DSA project is building a library of Distributed Service Links (DSLinks), which allow protocol translation and data integration with third party sources. DSA offers a scalable network topology consisting of multiple DSLinks running on IoT edge devices connected to a tiered hierarchy of brokers.

Eclipse IoT (Kura) — The Eclipse Foundation’s IoT efforts are built around its Java/OSGi-based Kura API container and aggregation platform for M2M applications running on service gateways. Kura, which is based on Eurotech’s Everywhere Cloud IoT framework, is often integrated with Apache Camel, a Java-based rules-based routing and mediation engine. Eclipse IoT sub-projects include the Paho messaging protocol framework, the Mosquitto MQTT stack for lightweight servers, and the Eclipse SmartHome framework. There’s also a Java-based implementation of Constrained Application Protocol (CoAP) called Californium, among others.

Kaa — The CyberVision-backed Kaa project offers a scalable, end-to-end IoT framework designed for large cloud-connected IoT networks. The platform includes a REST-enabled server function for services, analytics, and data management, typically deployed as a cluster of nodes coordinated by Apache Zookeeper. Kaa’s endpoint SDKs, which support Java, C++ and C development, handle client-server communications, authentication, encryption, persistence, and data marshalling. The SDKs contain server-specific, GUI-enabled schemas translated into IoT object bindings. The schemas govern semantics and abstract the functions of a diverse group of devices.

Macchina.io — Macchina.io provides a “web-enabled, modular and extensible” JavaScript and C++ runtime environment for developing IoT gateway applications running on Linux hacker boards. Macchina.io supports a wide variety of sensors and connection technologies including Tinkerforge bricklets, XBee ZB sensors, GPS/GNSS receivers, serial and GPIO connected devices, and accelerometers.

GE Predix — GE’s PaaS (Platform as a Service) software for industrial IoT is based on Cloud Foundry. It adds asset management, device security, and real-time, predictive analytics, and supports heterogeneous data acquisition, storage, and access. GE Predix, which GE developed for its own operations, has become one of the most successful of the enterprise IoT platforms, with about $6 billion in revenues. GE recently partnered with HPE, which will integrate Predix within its own services.

Home Assistant — This up and coming grassroots project offers a Python-oriented approach to home automation. See our recent profile on Home Assistant.

Mainspring — M2MLabs’ Java-based framework is aimed at M2M communications in applications such as remote monitoring, fleet management, and smart grids. Like many IoT frameworks, Mainspring relies heavily on a REST web-service, and offers device configuration and modeling tools.

Node-RED — This visual wiring tool for Node.js developers features a browser-based flow editor for designing flows among IoT nodes. The nodes can then be quickly deployed as runtimes, and stored and shared using JSON. Endpoints can run on Linux hacker boards, and cloud support includes Docker, IBM Bluemix, AWS, and Azure.

Open Connectivity Foundation (IoTivity) — This amalgamation of the Intel and Samsung backed Open Interconnect Consortium (OIC) organization and the UPnP Forum is working hard to become the leading open source standards group for IoT. The OCF’s open source IoTivity project depends on RESTful, JSON, and CoAP.

openHAB — This open source smart home framework can run on any device capable of running a JVM. The modular stack abstracts all IoT technologies and components into “items,” and offers rules, scripts, and support for persistence — the ability to store device states over time. OpenHAB offers a variety of web-based UIs, and is supported by major Linux hacker boards.

OpenIoT — The mostly Java-based OpenIoT middleware aims to facilitate open, large-scale IoT applications using a utility cloud computing delivery model. The platform includes sensor and sensor network middleware, as well as ontologies, semantic models, and annotations for representing IoT objects.

OpenRemote — Designed for home and building automation, OpenRemote is notable for its wide-ranging support for smart devices and networking specs such as 1-Wire, EnOcean, xPL, Insteon, and X10. Rules, scripts, and events are all supported, and there are cloud-based design tools for UI, installation, and configuration, and remote updates and diagnostics.

OpenThread — Nest’s recent open source spin-off of the 6LoWPAN-based Thread wireless networking standard for IoT is also backed by ARM, Microchip’s Atmel, Dialog, Qualcomm, and TI. OpenThread implements all Thread networking layers and implements Thread’s End Device, Router, Leader, and Border Router roles.

Physical Web/Eddystone — Google’s Physical Web enables Bluetooth Low Energy (BLE) beacons to transmit URLs to your smartphone. It’s optimized for Google’s Eddystone BLE beacon, which provides an open alternative to Apple’s iBeacon. The idea is that pedestrians can interact with any supporting BLE-enabled device such as parking meters, signage, or retail products.

PlatformIO — The Python-based PlatformIO comprises an IDE, a project generator, and a web-based library manager, and is designed for accessing data from microcontroller-based Arduino and ARM Mbed-based endpoints. It offers preconfigured settings for more than 200 boards and integrates with Eclipse, Qt Creator, and other IDEs.

The Thing System — This Node.js based smart home “steward” software claims to support true automation rather than simple notifications. Its self-learning AI software can handle many collaborative M2M actions without requiring human intervention. The lack of a cloud component provides greater security, privacy, and control.

ThingSpeak — The five-year-old ThingSpeak project focuses on sensor logging, location tracking, triggers and alerts, and analysis. ThingSpeak users can tap a version of MATLAB for IoT analysis and visualizations without buying a license from Mathworks.

Zetta — Zetta is a server-oriented, IoT platform built around Node.js, REST, WebSockets, and a flow-based “reactive programming” development philosophy linked with Siren hypermedia APIs. Devices are abstracted as REST APIs and connected with cloud services that include visualization tools and support for machine analytics tools like Splunk. The platform connects end points such as Linux and Arduino hacker boards with cloud platforms such as Heroku in order to create geo-distributed networks.

Read the previous article in this series, Who Needs the Internet of Things? 

Read the next article in this series, Linux and Open Source Hardware for IoT

Interested in learning how to adapt Linux to an embedded system? Check out The Linux Foundation’s Embedded Linux Development course.

Get Started Writing Web Apps with Node.js: Using the All Powerful NPM

In a previous article, I provided a quick introduction to Node.js and explained what it is, what it does, and how you can get started using it. I mentioned a few modules that can help you get things done, and they all — pug, ejs, and express — are external modules that will need to be installed using Node.js’s npm tool.

It’s definitely worth your time to learn more about npm. It is mighty powerful and it doesn’t just install modules; it also helps you set up a whole environment for your application. But, let’s not jump the gun. First, let’s just use npm for what it’s used for most: installing modules.

In this article, I’ll show how to create a project around what’s called a “single-page web application.” In a traditional web application, each time you click on a link or fill in and submit a form, you are directed to another page that shows the result of your actions.

Not with a single-page application. In a single-page application, you only see one page but its content changes as you interact with it. In this example, you will have the main page present a form with a textbox and submit button (Figure 1). When you fill in the box and press Submit, you will be redirected back to the same page (or so it will seem) and the word you typed into the text box will show up below the form.

Figure 1: This is the single-page web app you’re going to build.

To get started, create a directory and call it, say, form. Then cd into it and create an empty text file called server.js. This is where the code for your web application will go.
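In commands, that looks like this:

$ mkdir form
$ cd form
$ touch server.js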

Next, run npm init. npm will ask you for some information, such as the name of the project, the version, a description, the entry point (i.e., what file you are going to make Node execute), the Git repository where you’re going to store the project, and so on. When you’re done, npm generates a package.json file that looks something like this:

{
 "name": "form",
 "version": "1.0.0",
 "description": "Example of single-page web app.",
 "main": "server.js",
 "scripts": {
   "test": "echo \"Error: no test specified\" && exit 1",
   "start": "node server.js"
 },
 "repository": {
   "type": "git",
   "url": "git+ssh://git@github.com/yrgithubaccount/nodejsform.git"
 },
 "keywords": [
   "example",
   "form",
   "get",
   "post",
   "node.js"
 ],
 "author": "Your Name",
 "license": "GPL-3.0",
 "bugs": {
   "url": "https://github.com/yrgithubaccount/nodejsform/issues"
 },
 "homepage": "https://github.com/yrgithubaccount/nodejsform#readme"
}

Let’s install express and also body-parser. Run:

npm install express --save

inside your project’s directory, and you will see something like Figure 2.

Figure 2: Installing express with npm.
That is a tree view of all of express’s dependencies. If you list your directory now, you will also see a new subdirectory called node_modules. That’s where all the code for installed modules lives. Notice how convenient this is. You could’ve installed express globally with:

npm install express -g

But by installing locally by default, only in your project’s directory, npm makes sure you don’t clutter up your main module directory.
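You can see where everything went by listing the new directory; express, and depending on your npm version its dependencies too, will be in there:

$ ls node_modules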

Next, install body-parser with:

npm install body-parser --save

This module lets you easily parse the data you get from your form.

The --save modifier in the install commands above tells npm to record the modules your project depends on in the package.json file. Check it out:

{
 "name": "form",
 "version": "1.0.0",
 "description": "Example of single-page web app.",
 "main": "server.js",
 "scripts": {
   "test": "echo \"Error: no test specified\" && exit 1",
   "start": "node server.js"
 },
 "repository": {
   "type": "git",
   "url": "git+ssh://git@github.com/yrgithubaccount/nodejsform.git"
 },
 "keywords": [
   "example",
   "form",
   "get",
   "post",
   "node.js"
 ],
 "author": "Your Name",
 "license": "GPL-3.0",
 "bugs": {
   "url": "https://github.com/yrgithubaccount/nodejsform/issues"
 },
 "homepage": "https://github.com/yrgithubaccount/nodejsform#readme",
 "dependencies": {
   "body-parser": "^1.15.2",
   "express": "^4.14.0"
 }
}

With that, if you want to share your project or upload it to your Git repository, you don’t need to upload all the modules, too. As long as they have the package.json file, other users can clone your project. They can just run npm install, without any other arguments, and npm will download and install all the dependencies automatically. How useful is that?
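For instance, a collaborator could get a working copy going with nothing more than the following (the repository URL here is just the placeholder from the package.json above):

$ git clone git@github.com:yrgithubaccount/nodejsform.git form
$ cd form
$ npm install
$ npm start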

Let’s write the code for server.js:

var express = require('express'),
   app = express(),
   bodyParser = require('body-parser'),
   fs= require('fs');

app.use(bodyParser.urlencoded({ extended: true }));

fs.readFile('./head.html', function (err, head) {
    if (err) {
        console.error("Couldn't read head!");
        return;
    }
    fs.readFile('./form.html', function (err, form) {
        if (err) {
            console.error("Couldn't read form!");
            return;
        }
        fs.readFile('./foot.html', function (err, foot) {
            if (err) {
                console.error("Couldn't read foot!");
                return;
            }

           app.get('/', function (req, res) {
               res.send(head + form + foot);
               res.end();

               console.log("Got a GET");
           });

           app.post('/', function (req, res) {
                res.send(head + form + '<p>' + req.body.name + '</p>\n' + foot);
               res.end();

               console.log("Got a POST");
           });

           var server = app.listen(8081, function () {
           console.log("Example app listening at http://127.0.0.1:8081")
           });
           
       });
   });
});

Let’s see what this does step by step:

  • var express = require('express'), app = express() drags in express and then initiates an express application.

  • bodyParser = require('body-parser'), drags in body-parser and creates a body-parser object.

  • fs= require('fs');, as in our first example, brings in Node.js’s built-in file-managing module.

  • app.use(bodyParser.urlencoded({ extended: true })); adds body-parsing superpowers for urlencoded data to our express app.

  • The block of chained fs.readFile('./some_HTML_file', function (err, some_variable) {… instructions is similar to the one you saw previously. First, you try to open head.html; if that succeeds, you dump the content into the head variable, and then try and open form.html and dump its contents into the form variable. Finally, open the foot.html file and dump its contents into the foot variable.

  • If the server manages to open all three HTML files, you start listening for GET calls to your server. If you receive one (app.get('/', function (req, res) {), you concatenate the contents of the files together and send the result out to the client (res.send(head + form + foot); res.end();). The server prints a message to the console for debugging purposes (console.log("Got a GET");).

  • The moment you hit the Submit button, you’re sending a POST request to the server. The form is linked up to the same page it is sent from (see form.html below). This is what app.post('/', function (req, res) { picks up. This is probably the most interesting part of this exercise: the fact that you can have the express app respond with a different virtual page at the same address, its content depending on the type of request the server receives.

  • res.send(head + form + '<p>' + req.body.name + '</p>\n' + foot); grabs the data coming from the name text field in the form (again, see form.html below) and inserts it into the HTML text the server sends back to the web browser.

  • var server = app.listen(8081, function () { starts the server on port 8081.

For reference, the three HTML files look like this:

head.html

<html>
   <body>

form.html

<form action="http://127.0.0.1:8081/" method="POST">
   Name: <input type="text" name="name">  <br>
   <input type="submit" value="Submit">
</form>

foot.html

   </body>
</html>

Templating like this is of course very primitive, but you get the idea.
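If you want to poke at the app without a browser, you can exercise both routes with curl once the server is running; the second command sends the same urlencoded data the form would (Alice is just an example value):

$ curl http://127.0.0.1:8081/
$ curl -d "name=Alice" http://127.0.0.1:8081/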

You could expand this into an app that stores the names in a database and then sets up a system for users with different privileges — and there are modules for that. Or, you could change the app to upload images or documents and have your server show them in a gallery dynamically as you upload them — there are also modules for that. Or any of the million other things Node.js and its army of modules lets you do in a few lines of code.

Conclusion

So, saying Node.js is merely a JavaScript interpreter is unfair. Node.js, just by itself, may seem underwhelming at first glance. But, when you delve into it further, the possibilities seem just about endless.

What with all the modules, and the amazingly active community, the scope of projects that Node.js makes possible is staggering. It is little wonder that it powers complex web applications such as those used by Netflix, PayPal, LinkedIn, and many others.

Want to learn more about developing applications? Check out the “Developing Applications with Linux” course from The Linux Foundation.