
‘Sed’ Command In Linux: Useful Applications Explained

Have you ever needed to replace some text in a file really quickly? Then you have to open up your text editor, find the line, and type out your replacement. What if you have to do that many times? What if the text isn’t exactly the same each time, so you have to run multiple searches and replace each occurrence separately? It gets tedious very quickly, but there’s a better way of doing it with a tool called sed.

We’ve written about POSIX and gone over some of the interfaces and utilities a system must provide in order to be POSIX compliant. The command-line tool sed is one of those utilities, providing a feature-rich way to filter, find, substitute, and rearrange data in text files. It is an extremely powerful tool that is very easy to get started with, but very difficult to learn through and through due to its seemingly endless number of features.
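For instance, the search-and-replace scenario above collapses into a single command with sed’s s (substitute) function. A quick sketch (the file name and words here are made up for illustration):

```shell
# Create a small sample file to work on
printf 'the cat sat\na cat and a cat\n' > /tmp/pets.txt

# s = substitute, g = replace every occurrence on each line (not just the first);
# the result goes to stdout and the file itself is left untouched
sed 's/cat/dog/g' /tmp/pets.txt

# With GNU sed, -i edits the file in place (BSD/macOS sed expects -i '')
sed -i 's/cat/dog/g' /tmp/pets.txt
```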

Read more at FOSSbytes

Cloud Management: The Good, The Bad, and The Ugly – Part 1

The journey to becoming a truly software-driven, digital-native organization requires enterprises to develop cultural practices and technology capabilities that support three main goals:

  1. Corporate IT needs to become aligned with and responsive to the lines of business and to the software delivery functions that are in charge of software and digital innovation.
  2. IT needs to lead the charge on fostering and enabling a culture of constant business innovation.
  3. The inevitable transformation in IT needs to be accompanied by a reduction in IT spend. Cloud modernization must not become a massive business transformation project that is bound to fail and that places undue pressure on the company, both in terms of cost and in terms of processes and tooling.

Cloud Management is a key aspect that organizations are looking at in order to simplify operations, increase IT efficiency and reduce data center costs.

Given the strains that digital disruption puts on IT Ops, we often see that large and complex enterprises that have invested in Cloud Management Platform (CMP) capabilities struggle to identify the highest priority areas to target across lines of business or in shared services, and can’t really realize the promise of CMPs to optimize their IT processes across various company initiatives. The CMP implementation often becomes another ‘Moby Dick’ endless chase, sucking time and resources and causing frustration throughout the organization, with often not a lot to show for it.

In this article, I want to share our point of view and some insights into the fundamentals of Cloud Management capabilities that large enterprises need to put in place in order to support digital transformation in their organization, for both legacy infrastructure as well as new, modern applications and technologies.

Read more at Platform9

5 Essential Tools for Linux Development

Linux has become a mainstay for many sectors of work, play, and personal life. We depend upon it. With Linux, technology is expanding and evolving faster than anyone could have imagined. That means Linux development is also happening at an exponential rate. Because of this, more and more developers will be hopping on board the open source and Linux dev train in the immediate, near, and far-off future. For that, people will need tools. Fortunately, there are a ton of dev tools available for Linux; so many, in fact, that it can be a bit intimidating to figure out precisely what you need (especially if you’re coming from another platform).

To make that easier, I thought I’d help narrow down the selection a bit for you. But instead of saying you should use Tool X and Tool Y, I’m going to narrow it down to five categories and then offer up an example for each. Just remember, for most categories, there are several available options. And, with that said, let’s get started.

Containers

Let’s face it, in this day and age you need to be working with containers. Not only are they incredibly easy to deploy, they make for great development environments. If you regularly develop for a specific platform, why not do so by creating a container image that includes all of the tools you need to make the process quick and easy? With that image available, you can then develop and roll out numerous instances of whatever software or service you need.

Using containers for development couldn’t be easier than it is with Docker. The advantages of using containers (and Docker) are:

  • Consistent development environment.

  • You can trust it will “just work” upon deployment.

  • Makes it easy to build across platforms.

  • Docker images available for all types of development environments and languages.

  • Deploying single containers or container clusters is simple.

Thanks to Docker Hub, you’ll find images for nearly any platform, development environment, server, service… just about anything you need. Using images from Docker Hub means you can skip over the creation of the development environment and go straight to work on developing your app, server, API, or service.

Docker is easily installed on almost every Linux platform. For example, to install Docker on Ubuntu, you only have to open a terminal window and issue the command:

sudo apt-get install docker.io

With Docker installed, you’re ready to start pulling down specific images, developing, and deploying (Figure 1).

Figure 1: Docker images ready to deploy.
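A first session after installing might look like the sketch below; alpine is just an example image, and depending on your setup you may need to prefix the commands with sudo or add your user to the docker group:

```shell
# Pull a small example image from Docker Hub
docker pull alpine:latest

# Start a throwaway container with the current directory mounted at /src;
# --rm removes the container when the shell exits
docker run --rm -it -v "$PWD":/src -w /src alpine:latest sh
```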

Version control system

If you’re working on a large project or with a team on a project, you’re going to need a version control system. Why? Because you need to keep track of your code, where your code is, and have an easy means of making commits and merging code from others. Without such a tool, your projects would be nearly impossible to manage. For Linux users, you cannot beat the ease of use and widespread deployment of Git and GitHub. If you’re new to their worlds, Git is the version control system that you install on your local machine and GitHub is the remote repository you use to upload (and then manage) your projects. Git can be installed on most Linux distributions. For example, on a Debian-based system, the install is as simple as:

sudo apt-get install git

Once installed, you are ready to start your journey with version control (Figure 2).

Figure 2: Git is installed and available for many important tasks.
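A minimal first session with Git might look like this (the project directory, file, and identity are all made up for illustration):

```shell
# Start a fresh repository in a scratch directory
rm -rf /tmp/myproject && mkdir /tmp/myproject && cd /tmp/myproject
git init

# Tell Git who you are (for this repository only)
git config user.email "dev@example.com"
git config user.name "Dev Example"

# Create a file, stage it, and record it in history
echo "print('hello')" > hello.py
git add hello.py
git commit -m "Initial commit"

# Show the history, one line per commit
git log --oneline
```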

GitHub requires you to create an account. You can use it for free for non-commercial projects, or you can pay for commercial project hosting (for more information, check out GitHub’s pricing page).

Text editor

Let’s face it, developing on Linux would be a bit of a challenge without a text editor. Of course, what counts as a text editor varies, depending upon who you ask. One person might say vim, emacs, or nano, whereas another might go full-on GUI with their editor. But since we’re talking development, we need a tool that can meet the needs of the modern-day developer. And before I mention a couple of text editors, I will say this: Yes, I know that vim is a serious workhorse for serious developers and, if you know it well, vim will meet and exceed all of your needs. However, getting up to speed enough that it won’t be in your way can be a bit of a hurdle for some developers (especially those new to Linux). Considering my goal is always to help win over new users (and not just preach to an already devout choir), I’m taking the GUI route here.

As far as text editors are concerned, you cannot go wrong with the likes of Bluefish. Bluefish can be found in most standard repositories and features project support, multi-threaded support for remote files, search and replace, recursive opening of files, a snippets sidebar, integration with make, lint, weblint, and xmllint, unlimited undo/redo, an in-line spell checker, auto-recovery, full-screen editing, syntax highlighting (Figure 3), support for numerous languages, and much more.

Figure 3: Bluefish running on Ubuntu Linux 18.04.

IDE

An Integrated Development Environment (IDE) is a piece of software that includes a comprehensive set of tools, enabling a one-stop-shop environment for developing. IDEs not only enable you to code your software, but to document and build it as well. There are a number of IDEs for Linux, but one in particular is not only included in the standard repositories, it is also very user-friendly and powerful. That tool is Geany. Geany features syntax highlighting, code folding, symbol name auto-completion, construct completion/snippets, auto-closing of XML and HTML tags, call tips, many supported filetypes, symbol lists, code navigation, a build system to compile and execute your code, simple project management, and a built-in plugin system.

Geany can be easily installed on your system. For example, on a Debian-based distribution, issue the command:

sudo apt-get install geany

Once installed, you’re ready to start using this very powerful tool that includes a user-friendly interface (Figure 4) that has next to no learning curve.

Figure 4: Geany is ready to serve as your IDE.

diff tool

There will be times when you have to compare two files to find where they differ. This could be two copies of what was once the same file (only one of which compiles). When that happens, you don’t want to do the comparison manually. Instead, you want to employ the power of a tool like Meld. Meld is a visual diff and merge tool targeted at developers. With Meld you can make short work of discovering the differences between two files. Although you can use a command-line diff tool, when efficiency is the name of the game, you can’t beat Meld.

Meld allows you to open a comparison between two files, highlighting the differences between them. Meld also allows you to merge changes from either the right or the left, as the files are opened side by side (Figure 5).

Figure 5: Comparing two files with a simple difference.

Meld can be installed from most standard repositories. On a Debian-based system, the installation command is:

sudo apt-get install meld
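To see the kind of comparison Meld visualizes, here is the same job done with the command-line diff tool (file names and contents are invented); running meld /tmp/old.txt /tmp/new.txt would open the two files side by side in the GUI instead:

```shell
# Two versions of the same file, with one changed line
printf 'alpha\nbeta\ngamma\n' > /tmp/old.txt
printf 'alpha\nBETA\ngamma\n' > /tmp/new.txt

# A unified diff; diff exits non-zero when the files differ,
# so '|| true' keeps a script from stopping here
diff -u /tmp/old.txt /tmp/new.txt || true
```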

Working with efficiency

These five tools not only enable you to get your work done, they help to make it quite a bit more efficient. Although there are a ton of developer tools available for Linux, you’re going to want to make sure you have one for each of the above categories (maybe even starting with the suggestions I’ve made).

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Bringing Intelligence to the Edge with Cloud IoT

There are also many benefits to be gained from intelligent, real-time decision-making at the point where these devices connect to the network—what’s known as the “edge.” Manufacturing companies can detect anomalies in high-velocity assembly lines in real time. Retailers can receive alerts as soon as a shelved item is out of stock. Automotive companies can increase safety through intelligent technologies like collision avoidance, traffic routing, and eyes-off-the-road detection systems.

But real-time decision-making in IoT systems is still challenging due to cost, form factor limitations, latency, power consumption, and other considerations. We want to change that.

Bringing machine learning to the edge
Today, we’re announcing two new products aimed at helping customers develop and deploy intelligent connected devices at scale: Edge TPU, a new hardware chip, and Cloud IoT Edge, a software stack that extends Google Cloud’s powerful AI capability to gateways and connected devices. This lets you build and train ML models in the cloud, then run those models on the Cloud IoT Edge device through the power of the Edge TPU hardware accelerator.

Read more at Google Cloud

ctop – Top-like Interface for Monitoring Docker Containers

ctop is a free open source, simple and cross-platform top-like command-line tool for monitoring container metrics in real-time. It allows you to get an overview of metrics concerning CPU, memory, network, I/O for multiple containers and also supports inspection of a specific container.

At the time of writing this article, it ships with built-in support for Docker (default container connector) and runC; connectors for other container and cluster platforms will be added in future releases.

How to Install ctop in Linux Systems

Installing the latest release of ctop is as easy as running a few commands that download the binary for your Linux distribution, install it under /usr/local/bin/ctop, and make it executable.
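A sketch of what such an install typically looks like; the URL and version number below are illustrative only, so check the project’s GitHub releases page for the current binary:

```shell
# Download a release binary (version shown is an example; pick the latest
# from the project's releases page) and place it on the PATH
sudo wget "https://github.com/bcicen/ctop/releases/download/v0.7.7/ctop-0.7.7-linux-amd64" \
  -O /usr/local/bin/ctop

# Make it executable, then launch the top-like container view
sudo chmod +x /usr/local/bin/ctop
ctop
```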

Read more at Tecmint

How to Lead a Disaster Recovery Exercise For Your On-Call Team

A disaster recovery exercise is a fire drill for your on-call team. The exercise is the most useful when it is as realistic as possible. A well-designed exercise will involve engineers searching through your production codebase trying to find the tools to operate on a production-like environment.

Our disaster recovery exercises follow four basic principles:

  • All on-call engineers are gathered in one room
  • Sterilized environment (like prod, but not prod)
  • Clear objective
  • Timeboxed recovery

At SigOpt, we run on AWS, so our first exercise was to spin up an API from scratch in our backup region. Our sterilized environment was us-east-1, with no access to AMIs, instances, or databases in our production region. Our objective was to hit dr-api.sigopt.com and service an API request. Our timebox was 4 hours, which we chose from an engineering OKR.

Disaster Recovery Exercise as an Infrastructure Diagnostic

We ran our original disaster recovery exercise to diagnose holes in our ability to recover our infrastructure. True to our goal, the exercise produced a few months of projects to work on.

Read more at Medium

 

Opera is Available in a Snap on Linux

Opera is far from the most popular web browser, but it has its loyal fans. Now, if those fans also happen to be Linux desktop users, Canonical, Ubuntu Linux‘s parent company, and Opera SA have made it easier than ever to install it on almost any Linux distribution.

They’ve done this by packaging Opera as a snap in the Snap Store. The Opera snap is supported on Debian, Elementary, Fedora, Linux Mint, Manjaro, OpenSUSE, Ubuntu, and other Linux distributions.
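On any distribution with snapd set up, installing it should be a one-liner (this assumes the snap is published under the name opera):

```shell
# Install Opera from the Snap Store (requires snapd)
sudo snap install opera

# Confirm the installed version
snap list opera
```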

Read more at ZDNet

EdgeX Adds Security and Reduces Footprint with “California” Release

EdgeX Foundry’s “California” release of its EdgeX IoT middleware adds security features and is rewritten in Go for faster boot and a smaller footprint, enabling it to run on the Raspberry Pi and other small footprint computers.

The Linux Foundation’s EdgeX Foundry project announced the second major release of its EdgeX IoT middleware for edge computing. The “California” release adds security features, including a reverse proxy and secure credentials storage. It’s also rewritten in Go to offer a smaller footprint. This makes it possible to run EdgeX on relatively constrained edge devices such as the Raspberry Pi 3.

EdgeX Foundry was announced in late July 2017, with a goal of developing a standardized, open source interoperability framework for Internet of Things edge computing. EdgeX Foundry is creating and certifying an ecosystem of interoperable, plug-and-play components to create an open source EdgeX stack that will mediate between multiple sensor network messaging protocols as well as cloud and analytics platforms.

The framework facilitates interoperability code that spans edge analytics, security, system management, and services. It also eases the integration of pre-certified software for IoT gateways and smart edge devices.

Security and flexibility

“Our goal is to decouple connectivity standards and device interfaces from applications,”  said Jason A. Shepherd, Dell Technologies IoT CTO and Chair of the EdgeX Foundry Governing Board, in an email interview with Linux.com. “EdgeX will enable flexibility and scalability through platform independence, loosely-coupled microservices, and the ability to bring together services written in different languages through common APIs. These cloud-native tenets are absolutely required at the edge to scale in an inherently fragmented, multi-edge and multi-cloud world.”

EdgeX is based on Dell’s seminal FUSE IoT middleware framework, with inputs from a similar AllJoyn-compliant project called IoTX. Dell is one of three Platinum members, alongside Analog Devices and Samsung. EdgeX Foundry now has 61 members overall, including AMD, Canonical, Cloud Foundry, Linaro, Mocana, NetFoundry, Opto 22, RFMicron, and VMware.

The California release follows the initial Barcelona release, which arrived last October. Barcelona provided reference Device Services supporting BACNet, Modbus, Bluetooth Low Energy (BLE), MQTT, SNMP, and Fischertechnik, as well as connectors to Azure IoT Suite and Google IoT Core.

The major new features in EdgeX California aim to improve security. A new reverse proxy based on Kong helps protect REST API communications and secrets storage. The reverse proxy requires any external client of an EdgeX microservice to first authenticate itself before calling an EdgeX API.

The new secure storage facility for secrets is based on HashiCorp’s open source Vault. It lets you securely store sensitive data such as username/password credentials, certificates, and tokens within EdgeX for performing tasks such as encryption, making HTTPS calls to the enterprise, or securely connecting EdgeX to a cloud provider.

“Our Barcelona release had no security features because we wanted all the security layers to be defined by a community of industry experts such as RSA, Analog Devices, Thales, ForgeRock, and Mocana, rather than only from Dell,” said Shepherd. “The Reverse Proxy and Secrets Store is the foundation from which everything else is built.”

Shift to Go

The other major change in California is that the code was rewritten from the original Java in the Go programming language. The process delayed the release by several months, but as a result, California has a significantly reduced footprint, startup time, memory usage, and CPU usage. It fits into 42MB (68MB with a container) and can now boot in less than a second per service, compared to about 35 seconds (see chart below).

Additional new features in the California release include:

  • Export services additions for “northbound” connectivity to the XMPP messaging standard, the ThingsBoard IoT platform for device management, data collection, processing, and visualization, and Samsung’s Brightics IoT interoperability platform
  • Improved documentation, now available in Github
  • Full support for Arm 64
  • Blackbox tests for all micro services within build and continuous integration processes
  • Improved continuous integration to streamline developer contributions

According to Dell’s Shepherd, the switch to Go was not only about reducing footprint, but to avoid the need for vendors to pay a Java license fee for commercial deployments. In addition, Go has expanded EdgeX’s developer base.

“Go’s concurrency model is superior to most programming languages, has the support of Google, is used by Docker, Kubernetes and many other large software development efforts, and is growing broadly in IoT circles,” said Shepherd. “We doubled our community in the months after the January Go-Lang Preview. There is a learning curve associated with getting a typical object (Java, C++) developer to move to Go (a functional versus object language), but overall the move has been good for fostering more enthusiasm about the platform as well as improving it.”

Shepherd noted that Go is only a baseline reference language. Developers can use the same APIs with other languages, and the project will support C in addition to Go for the Device Service SDKs. Because C can reduce the footprint even further than Go, it may be the better choice for applications built on a low-end “thin edge” gateway with a lot of Device Services, such as many different sensor protocols, said Shepherd. However, EdgeX Foundry chose Go because it is more platform independent in terms of hardware and OS.

Next up: Delhi and beyond

The upcoming Delhi release due in October will include components such as manageability services, Device Service SDKs, improved unit and performance testing, and a basic EdgeX UI for demos. It will also add more security features including improved security service bootstrapping of Kong and Vault.

According to Shepherd, other security enhancements planned for Delhi include “tying Kong and potentially other security services to an access control system providing access control lists for granting access to various services.” Future versions of EdgeX will also establish a Chain of Trust API for systems that don’t have something like TPM. “We want to build out an API that allows EdgeX to establish a root of trust with the platform it rides on,” said Shepherd.

Other plans call for automating security testing, including “building an entire security testing apparatus and look at pen-testing type of needs,” said Shepherd. The project will also enhance the Vault-based secure storage system. “Today, EdgeX microservices get their configuration and secrets from the Consul configuration/registry service, but the secrets, such as passwords for database connections, are not secure. We want application secrets to come from Vault.  Vault and Consul are provided by HashiCorp and we think there is a good way to use the two together.”

Looking forward to future releases, EdgeX plans to reduce the footprint even more to run in 128MB or lower. There are also roadmap items for “more integration to edge analytics, rules engines, and CEPs,” said Shepherd. “We are currently working with NodeRed as an example.”

When asked about the potential for integrating with other cloud-driven IoT platforms such as AWS Greengrass or Google’s new Cloud IoT Edge platform, Shepherd had this to say:

“Our ability to work with some of the proprietary cloud stacks depends on their openness and architecture, but we are certainly exploring the opportunities. The whole point is that a developer or end user can use their choice of edge analytics and backend services without having to reinvent the foundational elements for data ingestion, security and manageability.”

Separately, Shepherd noted: “Our completely open APIs — managed by the vendor-neutral Technical Steering Committee (TSC) to ensure stability and transparency — decouple developers’ choice of standards and application/cloud services to prevent them from being locked in via one particular provider’s proprietary APIs when the data meter starts spinning.”

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.

Michelle Noorali: Helping Users and Developers Consume Open Source

Open source events create the best interaction points between developers and users, and one person you’re likely to meet at these events is Michelle Noorali, one of the most visible and recognizable faces in one of the biggest open source communities: Kubernetes.

Most modern software development, which is by default open source, is done by people spread across the globe, many of whom have never met in person. That’s why events like Open Source Summit are extremely important in creating opportunities for interaction for the people who are managing, developing, and using these open source projects.

Noorali, Senior Software Engineer at Microsoft, says she loves meeting people at events and learning about how they are using cloud-native tools and what they need. “I am trying to see if those tools that I work on can also meet other people’s needs,” she said.

This direct interaction gives Noorali a unique perspective for understanding the pain points. For example, “It’s really hard to pick from all of the cloud native technologies and figure out how they work together because at the end of the day, you are trying to deploy and run applications in the cloud or on bare metal,” she said. “The second point is how do I expose my developers, my teams to this stuff and get them to actually use cloud native tools, without having to learn about everything from scratch.”

Read more at The Linux Foundation

New Open Source Effort: Legal Code to Make Reporting Security Bugs Safer

The Disclose.io framework seeks to standardize “safe harbor” language for security researchers.

Not a week goes by without another major business or Internet service announcing a data breach. And while many companies have begun to adopt bug bounty programs to encourage the reporting of vulnerabilities by outside security researchers, they’ve done so largely inconsistently. That’s the reason for Disclose.io, a collaborative and open source effort to create an open source standard for bug bounty and vulnerability-disclosure programs that protects well-intentioned hackers.

…Companies that manage bug bounties for large organizations, including HackerOne and Bugcrowd, have made their own efforts to get customers to standardize security terms. But these efforts haven’t been translating into a wider adoption of those best practices—which is why Disclose.io was formed. The project has its roots in two separate-but-similar efforts being rolled into Disclose.io. The first is #LegalBugBounties, which is an effort started by Amit Elazari, a doctoral candidate at the University of California at Berkeley School of Law and a grantee of the university’s Center for Long-Term Cybersecurity. The second is the Open Source Vulnerability Disclosure Framework, an effort launched in April by Bugcrowd and the law firm CipherLaw.

Read more at Ars Technica