
SAP: One of Open Source’s Best Kept Secrets

SAP has been working with open source for decades and has now established an open source program office (OSPO) to further formalize the coordination of its open source activities and expand its engagement with the open source communities. “SAP was one of the first industry players to formally define processes for open source consumption and contribution,” says Peter Giese, director of the Open Source Program Office.

Even so, many people do not yet consider SAP to be a company that embraces open source engagement and contributions.

“In the past, we may not have been active enough in sharing our open source activities,” says Giese.

Now, SAP is shining a spotlight on its work in open source. Transparency is an essential part of the new open source mandate, beginning with an explanation of what the company has been up to and where it is headed with open source.

How SAP came to adopt open source

“In 1998, SAP started to port the R/3 system, our market-leading ERP system, to Linux,” says Giese. “That was an important milestone for establishing Linux in the enterprise software market.”

Porting a system to Linux was just a first step, and a successful one. The action spurred an internal discussion and exploration of how and where to adopt Linux going forward.

Read more at The Linux Foundation

Container Storage Interface (CSI) for Kubernetes GA

The Kubernetes implementation of the Container Storage Interface (CSI) has been promoted to GA in the Kubernetes v1.13 release. Support for CSI was introduced as alpha in the Kubernetes v1.9 release and promoted to beta in v1.10.

The GA milestone indicates that Kubernetes users may depend on the feature and its API without fear of backward-incompatible changes in future releases causing regressions. GA features are protected by the Kubernetes deprecation policy.

Why CSI?

Although Kubernetes provided a powerful volume plugin system before CSI, adding support for new volume plugins was challenging: volume plugins were "in-tree," meaning their code was part of the core Kubernetes code and shipped with the core Kubernetes binaries. Vendors wanting to add support for their storage system to Kubernetes (or even fix a bug in an existing volume plugin) were forced to align with the Kubernetes release process. In addition, third-party storage code caused reliability and security issues in core Kubernetes binaries, and the code was often difficult (and in some cases impossible) for Kubernetes maintainers to test and maintain.

CSI was developed as a standard for exposing arbitrary block and file storage systems to containerized workloads on Container Orchestration Systems (COs) like Kubernetes. With the adoption of the Container Storage Interface, the Kubernetes volume layer becomes truly extensible. Using CSI, third-party storage providers can write and deploy plugins exposing new storage systems in Kubernetes without ever having to touch the core Kubernetes code. This gives Kubernetes users more options for storage and makes the system more secure and reliable.
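To make that extensibility concrete, here is a minimal sketch of how a cluster administrator exposes a CSI driver to users through a StorageClass. The provisioner name csi.example.com is a placeholder for a real driver's registered name:

```yaml
# A StorageClass that delegates volume provisioning to a CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-example-sc
provisioner: csi.example.com   # registered name of the CSI driver (placeholder)
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```

PersistentVolumeClaims that reference this class are then served by the third-party driver, with no changes to core Kubernetes code.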

Read more at Kubernetes Blog

Using more to View Text Files at the Linux Command Line

There are a number of utilities that enable you to view text files when you’re at the command line. One of them is more.

more is similar to another tool I wrote about called less. The main difference is that more only allows you to move forward in a file.

While that may seem limiting, it has some useful features that are good to know about. Let’s take a quick look at what more can do and how to use it.

The basics

Let’s say you have a text file and want to read it at the command line. Just open the terminal, pop into the directory that contains the file, and type this command:

more <filename>
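As a quick, self-contained sketch (sample.txt is just an example file created for the demo):

```shell
# Create a small sample file to page through:
printf 'line 1\nline 2\nline 3\n' > sample.txt

# View it one screenful at a time; press Space for the next screen, q to quit:
more sample.txt

# more also reads from a pipe, which is handy for long command output:
ls -l /etc | more
```

When its output is not a terminal (as in a script), more simply prints the whole file, much like cat.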

Read more at OpenSource.com

How Companies Are Building Sustainable AI and ML Initiatives

In 2017, we published “How Companies Are Putting AI to Work Through Deep Learning,” a report based on a survey we ran aiming to help leaders better understand how organizations are applying AI through deep learning. We found companies were planning to use deep learning over the next 12-18 months. In 2018, we decided to run a follow-up survey to determine whether companies’ machine learning (ML) and AI initiatives are sustainable—the results of which are in our recently published report, “Evolving Data Infrastructure.”

The current generation of AI and ML methods and technologies rely on large amounts of data—specifically, labeled training data. In order to have a longstanding AI and ML practice, companies need to have data infrastructure in place to collect, transform, store, and manage data. On one hand, we wanted to see whether companies were building out key components. On the other hand, we wanted to measure the sophistication of their use of these components. In other words, could we see a roadmap for transitioning from legacy cases (perhaps some business intelligence) toward data science practices, and from there into the tooling required for more substantial AI adoption?

Here are some notable findings from the survey:

  • Companies are serious about machine learning and AI. Fifty-eight percent of respondents indicated that they were either building or evaluating data science platform solutions. Data science (or machine learning) platforms are essential for companies that are keen on growing their data science teams and machine learning capabilities.

Read more at O’Reilly

More About Angle Brackets in Bash

In the previous article, we introduced the subject of angle brackets (< >) and demonstrated some of their uses. Here, we’ll look at the topic from a few more angles. Let’s dive right in.

You can use <(...) to trick a tool into believing the output of a command is data from a file.

Let’s say you are not sure your backup is complete, and you want to check that a certain directory contains all the files copied over from the original. You can try this:

diff <(ls /original/dir/) <(ls /backup/dir/)

diff is a tool that typically compares two text files line by line, looking for differences. Here it takes the output of the two ls commands, treats each as if it came from a file, and compares them as such.

Note that there is no space between the < and the (...).

Running that on the original and backup of a directory where I save pretty pictures, I get:

diff <(ls /My/Pictures/) <(ls /My/backup/Pictures/)
5d4
< Dv7bIIeUUAAD1Fc.jpg:large.jpg

The < in the output is telling me that there is a file (Dv7bIIeUUAAD1Fc.jpg:large.jpg) on the left side of the comparison (in /My/Pictures) that is not on the right side of the comparison (in /My/backup/Pictures), which means copying over has failed for some reason. If diff didn't cough up any output, it would mean that the lists of files were the same.

So, you may be wondering: if you can take the output of a command, make it look like the contents of a file, and feed it to an instruction that is expecting a file, then in the sorting-by-favorite-actor example from above you could have done away with the intermediate file and just piped the output from the loop into sort.

In short, yep! The line:

sort -r <(while read -r name surname films;do echo $films $name $surname ; done < CBactors)

does the trick nicely.
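As a self-contained sketch (the CBactors file and its name/surname/films column layout are assumptions carried over from the previous article's example):

```shell
# Build a sample CBactors file: first name, surname, number of films.
cat > CBactors <<'EOF'
John Cleese 82
Eric Idle 46
Michael Palin 59
EOF

# Reorder each line so the film count comes first, then sort in reverse,
# all without an intermediate file:
sort -r <(while read -r name surname films; do
  echo "$films $name $surname"
done < CBactors)
```

With this sample data, the busiest actor (82 John Cleese) comes out on top.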

Here string! Good string!

There is one more case for redirecting data using angle brackets (or arrows, or whatever you want to call them).

You may be familiar with the practice of passing variables to commands using echo and a pipe (|). Say you want to convert a variable containing a string to uppercase characters because… I don’t know… YOU LIKE SHOUTING A LOT. You could do this:

myvar="Hello World"
echo $myvar | tr '[:lower:]' '[:upper:]'
HELLO WORLD

The tr command translates (substitutes) characters. In the example above, you are telling tr to change all the lowercase characters that come along in the string to uppercase characters.

It is important to know that you are not passing on the variable itself, only its contents, that is, the string "Hello World". This is called a here string, as in "it is here, in this context, that we know what string we are dealing with". But there is a shorter, clearer, and all-around better way of delivering here strings to commands. Using

tr '[:lower:]' '[:upper:]' <<< $myvar

does the same thing with no need to use echo or a pipe. It also uses angle brackets, which is the whole obsessive point of this article.
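Here strings work with any command that reads standard input, not just tr; a couple of quick sketches:

```shell
myvar="Hello World"

# Uppercase the contents of the variable, no echo or pipe required:
tr '[:lower:]' '[:upper:]' <<< "$myvar"    # HELLO WORLD

# Count the words in the string:
wc -w <<< "$myvar"

# Split the string into two variables with read:
read -r first second <<< "$myvar"
echo "$first"                              # Hello
```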

Conclusion

Once again, Bash gives you lots of options with very little. I mean, who would've thunk that you could do so much with two simple characters like < and >?

The thing is, we aren't done. There are plenty more characters that bring meaning to chains of Bash instructions. Without some background, they can make shell commands look like gibberish. Hopefully, post by post, we can help you decipher them. Until next time!

9 Trends to Watch in Systems Engineering and Operations

If your job or business relies on systems engineering and operations, be sure to keep an eye on the following trends in the months ahead.

AIOps

Artificial intelligence for IT operations (AIOps) will allow for improved software delivery pipelines in 2019. This practice incorporates machine learning in order to make sense of data and keep engineers informed about both patterns and problems so they can address them swiftly. Rather than replace current approaches, however, the goal of AIOps is to enhance these processes by consolidating, automating, and updating them. A related innovation, Robotic Process Automation (RPA), presents options for task automation and is expected to see rapid and substantial growth as well.

Knative vs. AWS Lambda vs. Microsoft Azure Functions vs. Google Cloud

The serverless craze is in full swing, and shows no signs of stopping—since December 2017 alone, the technology has grown 22%, and Gartner reports that by 2020, more than 20% of global enterprises will be deploying serverless. This is a huge projected increase from the mere 5% that are currently utilizing it. The advantages of serverless are numerous…

Read more at O’Reilly

Kali Linux Is the Complete Toolbox for Penetration Testing

Every IT infrastructure offers points of attack that hackers can use to steal and manipulate data. Only one thing can prevent these vulnerabilities from being exploited by unwelcome guests: You need to preempt the hackers and identify and close the gaps. Kali Linux can help.

To maintain the security of a network, you need to check it continuously for vulnerabilities and other weak points through penetration testing. You have a clear advantage over attackers because you know the critical infrastructure components, the network topology, points of attack, the services and servers executed, and so on. Exploitation tests should look for vulnerabilities in a secure, real environment, so you can shut down any vulnerabilities found – and you need to do this over and over again.

The variety of IT components dedicated to security does not make selecting a suitable tool any easier, because all possible attack vectors need to be subjected to continuous testing. Kali Linux [1] meets these requirements – and does much more.

Kali Linux at a Glance

The Debian-based Kali Linux distribution is at the heart of most penetration testing systems. …

Kali Linux is particularly resource-friendly and can be run in a virtual machine, so any notebook can become a full-fledged penetration test system with very little effort. Most administrators are familiar with classics like Wireshark and Nmap, so I will focus on the less common applications.

Security Scanners

Penetration testing begins with an overview of the infrastructure and then searches for specific weak points. To do this, you first use a security scanner. Depending on their nature and type, these tools are capable of checking entire networks or individual systems or applications for known weak points.

Read more at ADMIN

Finding Equilibrium in Post-Kubernetes Open-Source Computing

As Kubernetes is leveraged as the foundation for an increasing number of critical enterprise technologies and enables the new industry standard of hybrid cloud, open-source participants are reckoning with both the challenge and opportunity of working within a new collaborative digital economy.

“The scale is coming from real adoption and businesses that are moving their applications into the cloud,” said Liz Rice (pictured), technology evangelist at Aqua Security Software Ltd. and program co-chair at KubeCon + CloudNativeCon. “The end users who want to be part of the community actually want to contribute to the community.”

Rice spoke with John Furrier (@furrier) and Stu Miniman (@stu), co-hosts of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, during the KubeCon + CloudNativeCon event in Seattle, Washington. (* Disclosure below.)

This week, theCUBE spotlights Liz Rice in our Women in Tech feature.

Read more at SiliconAngle

How To Delete a Local and Remote Git Branch

Branches are part of the everyday development process and one of the most powerful features in Git. Once a branch is merged, it serves no purpose except for historical research. It is common and recommended practice to delete the branch after a successful merge.

This guide covers how to delete local and remote Git branches.

Delete a Local Git Branch

To delete a local Git branch use the git branch command with the -d (--delete) flag:

git branch -d branch_name
Deleted branch branch_name (was 17d9aa0).
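The remote side works the same way through git push; as a sketch, with branch_name and the remote name origin standing in for your own values:

```shell
# Delete the branch on the remote named "origin":
git push origin --delete branch_name

# Equivalent older syntax: push an empty ref to the remote branch:
git push origin :branch_name
```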

Read more at Linuxize

Project EVE Promotes Cloud-Native Approach to Edge Computing

The LF Edge umbrella organization for open source edge computing that was announced by The Linux Foundation last week includes two new projects: Samsung Home Edge and Project EVE. We don’t know much about Samsung’s project for home automation, but we found out more about Project EVE, which is based on Zededa’s edge virtualization technology. Last week, we spoke with Zededa co-founder Roman Shaposhnik about Project EVE, which provides a cloud-native based virtualization engine for developing and deploying containers for industrial edge computers (see below).

LF Edge aims to establish "an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system." It is built around The Linux Foundation's telecom-oriented Akraino Edge Stack, as well as its EdgeX Foundry, an industrial IoT middleware project.

Like the mostly proprietary cloud-to-edge platforms emerging from Google (Google Cloud IoT Edge), Amazon (AWS IoT), Microsoft (Azure Sphere), and most recently Baidu (Open Edge), among others, the LF Edge envisions a world where software running on IoT gateway and edge devices evolves top down from the cloud rather than from the ground up with traditional embedded platforms.

The Linux Foundation also supports numerous "ground up" embedded projects such as the Yocto Project and IoTivity, but with LF Edge it has taken a substantial step toward the cloud-centric paradigm. The touted benefits of a cloud-native approach for embedded include easier software development, especially when multiple apps are needed, and improved security via virtualized, regularly updated container apps. Cloud-native edge computing should also enable more effective deployment of cloud-based analytics on the edge while reducing expensive, high-latency cloud communications.

None of the four major cloud operators listed above are currently members of LF Edge, which poses a challenge for the organization. However, there’s already a deep roster of companies onboard, including Arm, AT&T, Dell EMC, Ericsson, HPE, Huawei, IBM, Intel, Nokia Solutions, Qualcomm, Radisys, Red Hat, Samsung, Seagate, and WindRiver (see the LF Edge announcement for the full list.)

With developers coming at the edge computing problem from both the top-down and bottom-up perspectives, often with limited knowledge of the opposite realm, the first step is agreeing on terminology. Back in June, the Linux Foundation launched an Open Glossary of Edge Computing project to address this issue. Now part of LF Edge, the Open Glossary effort “seeks to provide a concise collection of terms related to the field of edge computing.”

There’s no mention of Linux in the announcements for the LF Edge projects, all of which propose open source, OS-agnostic approaches to edge computing. Yet, there’s no question that Linux will be the driving force here.

Project EVE aims to be the Android of edge computing

Project EVE is developing an “open, agnostic and standardized architecture unifying the approach to developing and orchestrating cloud-native applications across the enterprise edge,” says the Linux Foundation. Built around an open source EVE (Edge Virtualization Engine) version of the proprietary Edge Virtualization X (EVx) engine from Santa Clara startup Zededa, Project EVE aims to reinvent embedded using Docker containers and other open source cloud-native software such as Kubernetes. Cloud-native edge computing’s “simple, standardized orchestration” will enable developers to “extend cloud applications to edge devices safely without the need for specialized engineering tied to specific hardware platforms,” says the project.

Earlier this year, Zededa joined the EdgeX Foundry project, and its technology similarly targets the industrial realm. However, Project EVE primarily concerns the higher application level rather than middleware. The project’s cloud-native approach to edge software also connects it to another LF project: the Cloud Native Computing Foundation.

In addition to its lightweight virtualization engine, Project EVE also provides a zero-trust security framework. In conversation with Linux.com, Zededa co-founder Roman Shaposhnik proposed to consign the word “embedded” to the lower levels of simple, MCU-based IoT devices that can’t run Linux. “To learn embedded you have to go back in time, which is no longer cutting it,” said Shaposhnik. “We have millions of cloud-native software developers who can drive edge computing. If you are familiar with cloud-native, you should have no problem in developing edge-native applications.”

If Shaposhnik is critical of traditional, ground-up embedded development, with all its complexity and lack of security, he is also dismissive of the proprietary cloud-to-edge solutions. “It’s clear that building silo’d end-to-end integration cloud applications is not really flying,” he says, noting the dangers of vendor lock-in and lack of interoperability and privacy.

To achieve the goals of edge computing, what’s needed is a standardized, open source approach to edge virtualization that can work with any cloud, says Shaposhnik. Project EVE can accomplish this, he says, by being the edge computing equivalent of Android.

“The edge market today is where mobile was in the early 2000s,” said Shaposhnik, referring to an era when early mobile OSes such as Palm, BlackBerry, and Windows Mobile created proprietary silos. The iPhone changed the paradigm with apps and other advanced features, but it was the far more open Android that really kicked the mobile world into overdrive.

“Project EVE is doing with edge what Android has done with mobile,” said Shaposhnik. The project’s standardized edge virtualization technology is the equivalent of Android package management and Dalvik VM for Java combined, he added. “As a mobile developer you don’t think about what driver is being used. In the same way our technology protects the developer from hardware complexity.”

Project EVE is based on Zededa’s EVx edge virtualization engine, which currently runs on edge hardware from partners including Advantech, Lanner, SuperMicro, and Scalys. Zededa’s customers are mostly large industrial or energy companies that need timely analytics, which increasingly requires multiple applications.

“We have customers who want to optimize their wind turbines and need predictive maintenance and vibration analytics,” said Shaposhnik. “There are a half dozen machine learning and AI companies that could help, but the only way they can deliver their product is by giving them a new box, which adds to cost and complexity.”

A typical edge computer may need only a handful of different apps rather than the hundreds found on a typical smartphone. Yet, without an application management solution such as virtualized containers, there’s no easy way to host them. Other open source cloud-to-edge solutions that use embedded container technology to provide apps include the Balena IoT fleet management solution from Balena (formerly Resin.io) and Canonical’s container-like Ubuntu Core distribution.

Right now, the focus is on getting the open source version of EVx out the door. Project EVE plans to release a 1.0 version of EVE in the second quarter, along with an SDK for developing EVE edge containers. An app store platform will follow later in the year. More information may be found in this Zededa blog post.

Learn more about LF Edge