
Docker Commands for Development to Deployment

The objective of this article is to explain the end-to-end flow from container development to deployment in the target environment, and to list the Docker commands needed for each step.

1. Introduction

The overall process consists of developing a container image with your code, dependent software, and configurations; running and testing the container in a development environment; publishing the container image to Docker Hub; and finally, deploying the image and running the container in the target environment. This article assumes that you have installed Docker Engine in both the development and target environments. Please refer to section 6.3 for installation instructions.

2. Develop Container Image

To build the container image, we have to create a Dockerfile containing all the necessary build instructions. Please refer here for guidance on writing the Dockerfile.
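The build-test-publish-deploy flow described above can be sketched as follows. The image name, tag, and Dockerfile contents here are illustrative, not from the article:

```shell
# Hypothetical example: a trivial Dockerfile, then the command sequence
# from build through deployment. "myuser/myapp" is a made-up image name.
cat > Dockerfile <<'EOF'
FROM alpine:3.6
COPY run.sh /run.sh
CMD ["/bin/sh", "/run.sh"]
EOF

docker build -t myuser/myapp:1.0 .     # 1. build the image from the Dockerfile
docker run --rm myuser/myapp:1.0       # 2. run and test in the dev environment
docker login                           # 3. authenticate to Docker Hub
docker push myuser/myapp:1.0           #    publish the image

# On the target environment:
docker pull myuser/myapp:1.0           # 4. fetch the published image
docker run -d --name myapp myuser/myapp:1.0   # 5. run the container
```

These commands assume a running Docker daemon and a Docker Hub account, so they are a recipe to adapt rather than something to paste verbatim.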

Read more at DZone

Securing Kubernetes Cluster Networking

Network Policies are a new Kubernetes feature for configuring how groups of pods are allowed to communicate with each other and with other network endpoints. In other words, they create firewalls between pods running on a Kubernetes cluster. This guide is meant to explain the unwritten parts of Kubernetes Network Policies.

This feature became stable in the Kubernetes 1.7 release. In this guide, I will explain how Network Policies work in theory and in practice. You can jump directly to the kubernetes-networkpolicy-tutorial repository for examples of Network Policies, or read the documentation.

What can you do with Network Policies

By default, Kubernetes does not restrict traffic between pods running inside the cluster. This means any pod can connect to any other pod as there are no firewalls controlling the intra-cluster traffic.
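As a minimal sketch of how such a firewall is declared (the policy name and pod labels below are hypothetical), the following policy allows pods labeled `app=api` to reach pods labeled `app=db`; once any policy selects the `db` pods, all other ingress to them is denied:

```shell
# Hypothetical NetworkPolicy: restrict ingress to pods labeled app=db so
# that only pods labeled app=api may connect to them. Requires a cluster
# whose network plugin enforces Network Policies.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api
EOF
```

Note that declaring a policy is not enough by itself: the cluster's network plugin must support enforcement, which is one of the "unwritten parts" the guide refers to.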

Read more at Ahmet Alp Balkan

Git Helpers – Simplify Your Git Workflow

Being a Git lover, I got tired of typing the same commands over and over again. This motivated me to build some aliases for the Git workflow I use every day. Most of the aliases were inherited from Nicholas C. Zakas’s gist. I took that a step further by creating more aliases, which I use every time I contribute to open source projects like ESLint.

Commands

The actual code for all the commands is here.

Work on an issue

When you plan to work on an issue in an open source project, you will want to create a new branch and then start working on it.
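A minimal sketch of such aliases can be set up with `git config`; the alias names below are illustrative, not the author's actual set:

```shell
# Illustrative Git aliases in the spirit of the article; names are our own.
git config --global alias.st status        # short form of `git status`
git config --global alias.co checkout      # quick branch switching
# `git issue 42` creates and switches to a topic branch for issue #42:
git config --global alias.issue '!f() { git checkout -b "issue-$1"; }; f'
```

The `!` prefix lets an alias run an arbitrary shell function, which is what makes parameterized helpers like `git issue 42` possible.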

Read more at Dev.to

The Internet of Underwater Things: Open Source JANUS Standard for Undersea Communications

Open standards exist for all manner of wireless and terrestrial communications, but so far none has emerged for underwater communications. Below the waves, submarines, autonomous underwater vehicles (AUVs), and undersea sensor stations use a hodgepodge of incompatible proprietary technologies including acoustic, radio, and optical modems.

Manned submarines and many automated subs can surface to communicate over the air, where the bandwidth is much higher, and some submersible AUVs and research stations can be tethered to floating wireless buoys. Yet, there are times when neither option is feasible, and with the huge expansion in AUVs, there’s a growing need for a universal undersea communication standard for persistent mobile communications.

Internet of Underwater Things

Enter NATO, which has a keen interest in reliably communicating beneath the waves, both for military and emergency response purposes. The multinational defense organization recently announced that it has adopted JANUS, a new digital communications standard for underwater acoustic modems. Published as an official NATO standard called STANAG 4748 (PDF), JANUS will be implemented on all NATO vessels.

As detailed by IEEE Spectrum, the open source, GPL-licensed JANUS standard uses acoustic modems. Acoustic technology has a much longer range than higher-bandwidth optical systems, which top out at 100 meters, and RF radios, which can’t do much better.

JANUS, which is named after the Roman god of gateways, has been tested at 900Hz to 60kHz frequencies at distances of up to 28 kilometers. However, it’s optimized for sending data underwater at up to 10 km.

JANUS assigns the 11.5 kHz band for initial discovery and defines a procedure for handshaking, synchronization, and 80 bps data transmission using 56-bit packets. Once synchronized, the systems can then switch to a different frequency or protocol shared by both parties, depending on bandwidth, distance, or security requirements.
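A quick back-of-the-envelope check on those figures shows just how slow the initial-contact link is:

```shell
# From the figures above: one 56-bit JANUS packet sent at 80 bps occupies
# 56 / 80 = 0.7 seconds of channel time.
awk 'BEGIN { printf "%.1f seconds per 56-bit packet\n", 56 / 80 }'
```

That slowness is by design: the discovery band only has to be robust enough to negotiate a faster channel, as described above.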

JANUS includes a standard modulation encoding scheme called Frequency-Hopped Binary Frequency Shift Keying (FH-BFSK) that describes how acoustic waveforms are encoded. It also provides redundancy checking for reducing errors caused by interference from varying water temperatures, currents, and Doppler effects caused by motion. JANUS can also communicate with wireless-enabled buoys to extend the network.

The technology was developed by a team led by NATO principal scientist João Alves at NATO’s Centre for Maritime Research and Experimentation (CMRE) in La Spezia, Italy and is sponsored by NATO’s Allied Command Transformation group.

IEEE Spectrum quotes Chiara Petrioli, a researcher and JANUS project collaborator at La Sapienza, the University of Rome, as saying that JANUS could evolve into an “Internet of Underwater Things” — a network of sensor systems, AUVs, and buoys that could connect the underwater world.

The NATO announcement cites potential JANUS applications including harbor protection, maritime surveillance, mine detection, archaeology, and surveys for offshore wind farms and pipelines. Other examples include search and rescue operations and real-time underwater sensor networks, as well as enabling AUVs to more quickly report oil leaks. JANUS modems are also sufficiently small and affordable to be worn by scuba divers.

BeagleBone drives first JANUS acoustic modem

Linux is found in many AUVs, such as the SeaBED and Bluefin-21, so we wondered if it might be involved in JANUS, as well. As it turned out, one of the first JANUS acoustic modems was built around Linux code running on a BeagleBone Black. The hacker board, which came in fifth place out of 98 boards in our recent open-spec SBC reader survey, is the controller behind the first commercial, JANUS-compatible acoustic modem: the SeaModem.

One of the early JANUS tests conducted by Petrioli and other La Sapienza researchers is described in this research paper (PDF). The researchers used the SeaModem, developed by AppliCon, a spinoff of the University of Calabria, connected to a BeagleBone running Linux-based JANUS code.

The SeaModem consists of a power board with an amplifier and ceramic transducer, among other circuitry, as well as a DSP board with a digital signal processor, real-time clock, and audio codec. The device communicates via a UART interface with the BeagleBone, which is here augmented with a second audio codec module. GPIO connections between the boards enable improved signal-to-noise ratio.

The Linux stack included JANUS-compatible networking software called Sunset, developed by La Sapienza and a spinoff called WSENSE. Sunset, which also runs on ARM modules such as Gumstix boards, can seamlessly switch between the JANUS acoustic link and a higher-bandwidth proprietary modulation scheme.

The system was tested in San Diego aboard a highly modified Sea Robotics USV-2600 autonomous catamaran called the Gemellina USV. The sensor-laden catamaran navigates using an Ubuntu-driven computer.

The La Sapienza team added a module mounted under the Gemellina’s hull comprising the SeaModem, BeagleBone, audio codec, and other gear packed inside a waterproof aluminum case. The experiment tested JANUS communications and handoffs to proprietary links between the Gemellina and three stationary underwater JANUS nodes. La Sapienza concluded that JANUS could be used “as a reliable robust channel to exchange short control messages,” and can capably switch to a faster communication protocol.

One problem with acoustic communications is the potential risk to marine animals. Yet, NATO claims that biologists were consulted in JANUS’ development to ensure limited interference.

Even if JANUS is safe for marine life, there is no guarantee that underwater communications vendors will sign on. Still, the large scope of potential NATO contracts is already ramping up commercial development. The technology is relatively affordable, and in most cases, it’s not intended to replace other technologies, but to augment them. Vendors can therefore add JANUS as an optional link for emergency rescue and collaborative projects.

In other words, it’s likely JANUS will emerge as the forerunner of an underwater Internet. It’s also likely that Linux will play a large role in making it happen.

Connect with the embedded community at Embedded Linux Conference Europe in Prague, October 23-25. Register now!

Future Proof Your SysAdmin Career: Looking to the Cloud

Sysadmins will always need core competencies such as networking and security, but increasingly, they can differentiate themselves by mastering new platforms and tools. Previously in this series, we’ve provided an overview of essential skills, evolving network skills, and security. In this article, we’ll look at how experience with open cloud computing platforms such as OpenStack can make a difference in your sysadmin career.

The cloud advantage

Experience with emerging cloud infrastructure tools and open source technologies can make a substantial compensation difference for sysadmins. According to a salary study from Puppet, “Sysadmins aren’t making as much as their peers. The most common salary range for sysadmins in the United States is $75,000-$100,000, while the four other most common practitioner titles (systems developer/engineer, DevOps engineer, software developer/engineer, and architect) are most likely to earn $100,000-$125,000.”

If you search recruitment sites for sysadmin positions that demand cloud skills, opportunities abound. There are many positions that require strong cloud monitoring skills, and jobs that demand facility with both open source and popular public cloud platforms.


Certification also makes a difference. The value of cloud-centric certification is being driven by shortages of skilled cloud professionals. CEB, a company focused on best practices in technology, recently provided Forbes with the results of a database dive into cloud computing hiring trends. It found shortages of expertise around many cloud computing platforms, and it also called out a strong job market for skilled professionals. In fact, $124,300 was the median advertised salary for cloud computing professionals in 2016, according to the database.

Some sysadmins are blogging about their experiences in adding OpenStack skills to their arsenals. For example, Michalis Giannos, writing for Stackmasters, said, “As an old-school system administrator, what impressed me about OpenStack is that it extends resource management over to storage and network — that is, going beyond the CPU and memory management options that you get with the typical virtual machine offerings. Having a unified view of your computing resources utilization, and having the ability to manage it from a single place is a very powerful feature. And it’s especially mind blowing, even to an old hat like me, raised up on the CLI, that you can access all that power from an easy to use web-based UI.”

Giannos also said, “The ease of creating images and customized flavors of your virtual machines allows you to deploy a new server in minutes without having to repeat trivial configurations all over again. Heck, you can literally create an HTTP Load Balancer AND the back-end service farm for it in just a few minutes.”

Indeed, OpenStack, CloudStack, Nextcloud, and some other open cloud platforms automate and streamline many tasks that old school sysadmins may be most familiar with. With all of this in mind, providing cloud platform training aimed directly at sysadmins is on the radar at technology vendors focused on the cloud and at independent training organizations.

The Linux difference

Fluency with Linux can make a big difference for sysadmins, which should be no surprise. Several salary studies have shown that Linux-savvy sysadmins are better compensated than others. With that in mind, note that Linux is the bedrock for the majority of cloud deployments, according to The OpenStack Foundation.

This leads to a multi-faceted career path that many sysadmins can take to differentiate themselves from the pack. In fact, Tom’s IT Pro has called this path “the triple threat career path to IT success.” Specifically, it involves obtaining certification as a Linux-savvy sysadmin, as a project manager, and as a cloud administrator.

Training options for Linux-focused sysadmins are expanding accordingly. For professional certification, CompTIA Linux+ is an option, as are certifications from Linux Professional Institute. The Linux Foundation’s Linux Foundation Certified System Administrator (LFCS) training and certification is another good choice.

Earning the title of Red Hat Certified System Administrator in Red Hat OpenStack demonstrates that you have the skills needed to create, configure, and manage private clouds using Red Hat OpenStack Platform. Red Hat’s training for this certification covers configuring and managing images, adding compute nodes, and managing storage using Swift and Cinder.

Mirantis and other vendors also offer certified OpenStack administrator curriculum. The Linux Foundation offers an OpenStack Administration Fundamentals course, which serves as preparation for certification. The course is available bundled with the COA exam, enabling students to learn the skills they need to work as an OpenStack-skilled administrator and get the certification to prove it. A unique feature of the course is that it provides each participant with a live OpenStack lab environment that can be rebooted at any time. Customers also have access to the course and the lab environment for a full 12 months after purchase. Like the exam, the course is available anytime, anywhere. It is online and self-paced — definitely worth looking into.

The OpenStack Foundation works directly with The Linux Foundation to make the Certified OpenStack Administrator (COA) exam available, and getting certified is a rock solid credential for many sysadmins. The Guide to the Open Cloud 2016 from The Linux Foundation also includes a comprehensive look at other cloud platforms and tools that many sysadmins would be wise to pick up skills for, including tools that orbit the open cloud ecosystem.

Clearly, sysadmins interested in adding meaningful skills and credentials to their arsenals should not ignore the cloud. Training and certification opportunities are proliferating, and widespread skills shortages are well documented. Next time, we’ll take a closer look at configuration and automation.

Learn more about essential sysadmin skills: Download the Future Proof Your SysAdmin Career ebook now.


Read more:

Future Proof Your SysAdmin Career: An Introduction to Essential Skills 

Future Proof Your SysAdmin Career: New Networking Essentials

Future Proof Your SysAdmin Career: Locking Down Security

Future Proof Your SysAdmin Career: Looking to the Cloud

Future Proof Your SysAdmin Career: Configuration and Automation

Future Proof Your SysAdmin Career: Embracing DevOps

Future Proof Your SysAdmin Career: Getting Certified

Future Proof Your SysAdmin Career: Communication and Collaboration

Future Proof Your SysAdmin Career: Advancing with Open Source

The HTTP Series (Part 4): Authentication Mechanisms

In the previous post, we talked about the different ways websites can identify the visiting user.

But identification by itself is just a claim. When you identify yourself, you are claiming to be someone, but there is no proof of that.

Authentication, on the other hand, means showing proof that you are who you claim to be, like presenting your personal ID or typing in your password.

More often than not, websites need some sort of proof to serve you sensitive resources.

HTTP has its own authentication mechanisms that allow servers to issue challenges and get the proof they need. You will learn what they are and how they work. We’ll also cover the pros and cons of each one and find out whether they are really good enough to be used on their own (spoiler: they are not).
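As a taste of the simplest such mechanism, HTTP Basic authentication answers the server's challenge by sending base64("username:password") in a header. The credentials below are made up:

```shell
# Sketch of the HTTP Basic scheme: the client base64-encodes
# "username:password" and sends it in the Authorization header.
# (This also hints at the scheme's weakness: base64 is encoding,
# not encryption, so Basic auth needs TLS underneath it.)
user="alice"; pass="secret"
token=$(printf '%s:%s' "$user" "$pass" | base64)
echo "Authorization: Basic $token"

# curl builds the same header for you:
#   curl -u alice:secret https://example.com/protected
```

The decodability of that token is exactly the kind of pro/con trade-off the series goes on to examine.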

This is what we have learned so far:

Read more at DZone

Testing in Production: Yes, You Can (And Should)

I wrote a piece recently about why we are all distributed systems engineers now. To my surprise, lots of people objected to the observation that you have to test large distributed systems in production. 

It seems testing in production has gotten a bad rap—despite the fact that we all do it, all the time.

Maybe we associate it with cowboy engineering. We hear “testing in production” and assume this means no unit tests, functional tests, or continuous integration.

It’s good to try and catch things before production — we should do that too! But these things aren’t mutually exclusive. Here are some things to consider about testing in production.

Read more at OpenSource.com

For more about testing, see the DevOps Fundamentals series and training course by John Willis.

Highly Distributed Platform Deployment

This post will explore less technical and more “big picture” concepts. When we (Red Hat Professional Services) come on-site to help deploy OpenShift (an enterprise-ready Kubernetes distribution), we work with the customer to determine what kind of journey they will experience while onboarding and the challenges they may encounter in the future.

For this purpose, I created a small mind map to help visualize most of the dependencies that exist when building a highly distributed platform. There is much more to this, and the visualization is being updated even as you read this article, but it’s a good start.

All dependencies are split into 10 categories:

  • Strategy
  • Storage
  • Operations
  • BCR & DR
  • AppDev
  • Security
  • Automation
  • Networks
  • Provisioning
  • External dependencies

Each of these contains multiple sub-areas to consider. So let’s go through them one by one.

Read more at OpenShift

Time, Security Cited as Hurdles to Adoption of Containers

Containers remain a nascent cloud platform choice for many enterprises. But a lack of ecosystem maturity and familiarity is seen as an early-stage hurdle to adoption.

Dustin Kirkland, VP of product on Canonical’s Ubuntu products and strategy team, said one of the biggest issues facing container adoption today is simply time. Containers are still relatively new in the eyes of enterprise customers, who have only recently come to understand the benefits of virtual machines (VMs).

Read more at SDxCentral

A History of Microprocessor Debug, 1980–2016

Since the dawn of electronics design, where there have been designs, there have been bugs. But where there have been bugs, there inevitably has been debug, engaged in an epic wrestling match with faults, bugs, and errors to determine which would prevail — and how thoroughly.

In many ways, the evolution of debug technology is as fascinating as any aspect of design; but it rarely receives the spotlight. Debug has evolved from simple stimulus-response-observe approaches to sophisticated tools, equipment, and methodologies conceived to address increasingly complex designs. Now, in 2017, we sit at the dawn of a new and exciting era with the introduction of debug over functional I/O.

This is the culmination of decades of hard work and invention from around the globe. I’ve been involved in debug since 1984, so to truly appreciate the paradigm shift we’re now experiencing in debug, it’s useful to take a look back at the innovation that has taken place over the years.

1970s-1980s

System design was very different in this period compared to the way things are today. A typical system would consist of a CPU, (EP)ROM, RAM, and some peripherals (PIC, UART, DMA, TIMERs, IO…), each implemented in its own IC.

Read more at Embedded