
Q&A: Unleashing the Beast—Bringing Linux to IBM Z

IBM has detailed the oral history of bringing Linux to the mainframe:

Two of the original team members from the IBM Böblingen Lab in Germany, Ingo Adlung and Boas Betzler, played crucial roles in bringing Linux to the IBM Z. Adlung is now Distinguished Engineer, Chief Architect & CTO, IBM Z, and LinuxONE Virtualization and Linux. Betzler is IBM Distinguished Engineer and Master Inventor. Here, they recall that work of 20 years ago.  

Read more at the IBM Newsroom

 

Online Bootcamps Provide Clear Onramp to Cloud Engineering Careers

Since the launch of the Cloud Engineer Bootcamp and the Advanced Cloud Engineer Bootcamp, thousands of individuals have begun their journey to becoming qualified, certified cloud engineers. These programs give newcomers and experienced IT professionals, respectively, the opportunity to gain the skills needed to launch a cloud career. With a recent D2IQ study finding “only 23% of organizations believe they have the talent required to successfully complete their cloud native journey”, now is the time to make a move into this rapidly growing space.

New, Free Training Course Explores How to Deploy a Microservice-Based Architecture Using the TARS Project

The Linux Foundation and The TARS Foundation have released a new, free training course, Building Microservice Platforms with TARS, on the edX platform. The course explains how to efficiently develop microservices in different programming languages and quickly deploy the corresponding services into applications. It also delves into the powerful functionality of TARS – a high-performance, open source RPC framework developed by Tencent as a full-fledged enterprise solution for microservice maintenance, development, and operation – and the components that make up the TARS ecosystem.

IBM Contributes A2O Processor Core and PowerAI as Open Source at OpenPOWER Summit

Today at OpenPOWER Summit 2020, the OpenPOWER Foundation announced two key technologies contributed by IBM to the open source community.

  • A2O POWER processor core, an out-of-order follow-up to the A2I core, and associated FPGA environment
  • Open Cognitive Environment (Open-CE), based on IBM’s PowerAI to enable improved consumability of AI and deep learning frameworks

The contributions follow the open sourcing of the POWER ISA and associated reference designs in August 2019 and the A2I POWER processor core in June 2020. They represent IBM’s continued commitment to fostering innovation around the POWER architecture from the OpenPOWER ecosystem.

Read more at the OpenPOWER Foundation blog

What’s new in the Linux kernel (HP enterprise.nxt)

Steven J. Vaughan-Nichols writes at HP enterprise.nxt:

Linux runs pretty much everything: all of the world’s 500 fastest supercomputers; most of the public cloud, even on Microsoft Azure; and 74 percent of smartphones. Indeed, thanks to Android, Linux is the most popular end-user operating system, nudging out Windows by 4 percentage points (39% vs. 35%).

So, where does Linux go next? After covering Linux for almost all 29 years of its history and knowing pretty much anyone who’s anyone in Linux development circles, up to and including Linus Torvalds, I think I have a clue.

Read more at HP enterprise.nxt

Where’s the Yelp for open-source tools? (Functionize)

Steven J. Vaughan-Nichols writes at Functionize:

We’d like an easy way to judge open-source programs. It can be done. But easily? That’s another matter. When it comes to open source, you can’t rely on star power.

The “wisdom of the crowd” has inspired all sorts of online services wherein people share their opinions and guide others in making choices. The Internet community has created many ways to do this, such as Amazon reviews, Glassdoor (where you can rate employers), and TripAdvisor and Yelp (for hotels, restaurants, and other service providers). You can rate or recommend commercial software, too, such as on mobile app stores or through sites like Product Hunt. But if you want advice to help you choose open-source applications, the results are disappointing.

It isn’t for lack of trying. Plenty of people have created systems to collect, judge, and evaluate open-source projects, including information about a project’s popularity, reliability, and activity. But each of those review sites – and its methodology – has flaws.

Read more at Functionize

Linux Certifications: 4 Things You Need to Know About Obtaining Them

The rise of open cloud platforms is fostering a rise in demand for Linux specialists equipped with the right expertise. In this new environment, obtaining a Linux certification can boost your career by proving your skills in increasingly critical areas.

With the vast majority of Amazon servers running Linux, and many servers running open-source software, Linux is, in the eyes of many, the de facto OS of the cloud. No wonder sysadmins and systems engineers with Linux skills can earn a healthy salary premium.

Source: Dice Insights

New, Free Training Course Teaches Fundamentals of Serverless on Kubernetes

The Linux Foundation and Cloud Native Computing Foundation have released a new, free training course, Introduction to Serverless on Kubernetes, on the edX platform. The course explains how to build serverless functions that can run on any cloud, without being restricted by limits on the execution duration, languages available, or the size of your code. It is designed to provide an overview of how a serverless approach works in tandem with a Kubernetes cluster.

TODO Group: Why Open Source matters to your enterprise

Overview

There are many business reasons to use open source software. Many of today’s most significant business breakthroughs, including big data, machine learning, cloud computing, the Internet of Things, and streaming analytics, sprang from open source software innovations. Open source software often comes into an organization as the backbone of essential devices, programs, platforms, and tools in areas such as robotics, sensors, the Internet of Things (IoT), automotive telematics, autonomous driving, edge computing, and big data computing. Open source code runs on many smartphones, laptops, servers, databases, and cloud infrastructures and services. Developers build most applications by leveraging frameworks like Node.js or by pulling in libraries that have been tested and proven in many production use cases. To use almost any of these things is to use open source software in one form or another, and often in combination.

By using open source software, companies also avoid building everything from the ground up, saving time, money, and effort while getting more innovation from the investment. Open source software is generally more secure than its commercial proprietary counterparts, too. That is due in large part to the collaborative nature of open source software projects. A common phrase used by open source developers and advocates is that “given enough eyeballs, all bugs are shallow.” That holds so long as there are “enough eyeballs,” which, given open source software’s adoption rate, may be challenging to have across all projects. Drawbacks do exist, as no software is perfect, not even open source software. However, for most organizations, the good far outweighs the bad. The codebase’s open nature also means it is easier to report and fix problems than with alternative models.

While open source software offers many reliable and provable business advantages, sometimes those advantages remain obscure to those who have not looked deeply into the topic, including many high-level decision-makers. This paper, published by the European Chapter of the TODO Group, aims to provide a balanced and quick overview of the business pros and cons of using open source software.

Why Open Source Matters to Your Enterprise is available to download from The Linux Foundation.


Developing an email alert system using a surveillance camera with Node-RED and TensorFlow.js

Overview

In a previous article, we introduced a procedure for developing an image recognition flow using Node-RED and TensorFlow.js. Now, let’s apply what we learned there and develop an email alert system that uses a surveillance camera together with image recognition. We will create a flow that automatically sends an email alert when a suspicious person is captured within the surveillance camera frame.

Objective: Develop the flow

In this flow, the surveillance camera image is periodically acquired from the web server and displayed under the “Original image” node in the lower left. After that, the image is recognized using the TensorFlow.js node. The recognition result and the annotated image are displayed under the debug tab and the “image with annotation” node, respectively.

If a person is detected by image recognition, an alert email with the image file attached will be sent using the SendGrid node. Since it is difficult to set up a real surveillance camera, we will use a sample image from a surveillance camera in Kanagawa Prefecture, Japan, that is used to check the amount of water in the river.

We will explain the procedure for creating this flow in the following sections. For the Node-RED environment, use your local PC, a Raspberry Pi, or a cloud-based deployment.

Install the required nodes

Click the hamburger menu at the top right of the Node-RED flow editor, go to “Manage palette” -> “Palette” tab -> “Install” tab, and install the add-on nodes used in this flow: the TensorFlow.js (cocossd) node, the image preview node, and the sendgrid node.

Create a flow for acquiring image data

First, create a flow that acquires the image binary data from the web server. Place an inject node (its label changes to “timestamp” when placed in the workspace), an http request node, and an image preview node, and connect them with wires in that order.

Then double-click the http request node to change the node property settings.

Adjust http request node property settings

 

Paste the URL of the surveillance camera image into the URL field on the property settings screen of the http request node. (In Google Chrome, right-click the image and select “Copy image address” from the menu to copy the image URL to the clipboard.) Also, select “a binary buffer” as the output format.

Execute the flow to acquire image data

Click the Deploy button at the top right of the flow editor, then click the button on the left side of the inject node. A message is sent from the inject node to the http request node through the wire, and the image is acquired from the web server that provides the surveillance camera image. After the image data is received, a message containing the data in binary format is sent to the image preview node, and the image is displayed under it.

An image of the river taken by the surveillance camera is displayed under the image preview node.
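
For readers who want to see what this part of the flow does outside of Node-RED, here is a minimal plain Node.js sketch of the inject and http request steps: it periodically fetches the camera image as a binary Buffer. The URL is a placeholder for illustration only; substitute the image address of your own camera.

  // Minimal sketch (plain Node.js, no Node-RED) of the inject + http request
  // portion of the flow: fetch the camera image as a binary Buffer once a minute.
  // The URL below is a placeholder; substitute the image address of your camera.
  const https = require('https');

  function fetchImage(url) {
    return new Promise((resolve, reject) => {
      https.get(url, (res) => {
        const chunks = [];
        res.on('data', (chunk) => chunks.push(chunk));
        res.on('end', () => resolve(Buffer.concat(chunks)));
      }).on('error', reject);
    });
  }

  setInterval(async () => {
    const imageBuffer = await fetchImage('https://example.com/camera.jpg');
    console.log(`Fetched ${imageBuffer.length} bytes of image data`);
  }, 60 * 1000);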

Create a flow for image recognition of the acquired image data

Next, create a flow that analyzes what is in the acquired image. Place a cocossd node, a debug node (its label changes to msg.payload when placed), and a second image preview node.

Then, connect the output terminal on the right side of the http request node to the input terminal on the left side of the cocossd node.

Next, wire the output terminal on the right side of the cocossd node to the debug node, and wire the same output terminal to the input terminal on the left side of the second image preview node.

Through these wires, the binary data of the surveillance camera image is sent to the cocossd node. After image recognition is performed using TensorFlow.js, the detected object name is displayed in the debug node, and the image with the recognition result is displayed in the image preview node.

The cocossd node is designed to store the object name in the variable msg.payload, and the binary data of the annotated image in the variable msg.annotatedInput.
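
For context, the cocossd node is built on the COCO-SSD object detection model from TensorFlow.js. The sketch below is not the node’s actual source code, but a rough equivalent of the detection step in plain Node.js, assuming the @tensorflow/tfjs-node and @tensorflow-models/coco-ssd packages:

  // Rough sketch of the detection step the cocossd node performs,
  // using the TensorFlow.js COCO-SSD model directly in Node.js.
  const tf = require('@tensorflow/tfjs-node');
  const cocoSsd = require('@tensorflow-models/coco-ssd');

  async function detectObjects(imageBuffer) {
    const model = await cocoSsd.load();                // load the pre-trained COCO-SSD model
    const input = tf.node.decodeImage(imageBuffer, 3); // decode JPEG/PNG bytes into a tensor
    const predictions = await model.detect(input);     // e.g. [{ class: 'person', score: 0.97, bbox: [...] }]
    input.dispose();                                   // free the tensor memory
    return predictions;
  }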

To make this flow work as intended, you need to double-click the image preview node used to display the image and change the node property settings.

Adjust image preview node property settings

By default, the image preview node displays the image data stored in the variable msg.payload. Here, change this default variable to msg.annotatedInput.

Adjust inject node property settings

Since we want the flow to run automatically every minute, the inject node’s properties need to be changed. In the Repeat pull-down menu, select “interval” and set “1 minute” as the time interval. Also, since we want the periodic run to start immediately after pressing the Deploy button, select the checkbox to the left of “inject once after 0.1 seconds”.

Run the flow for image recognition

The flow runs immediately after you press the Deploy button. When a person (in this case, the author) appears in the surveillance camera image, the recognition result “person” is displayed in the debug tab on the right. Also, below the image preview node, you will see the image annotated with an orange square.

Create a flow that sends an email when a person is caught on the surveillance camera

Finally, create a flow to send the annotated image by email when the object name in the image recognition result is “person”. Downstream of the cocossd node, place a switch node (which performs the condition check), a change node (which assigns values), and a sendgrid node (which sends the email), and connect each node with a wire.

Then, change the property settings for each node, as detailed in the sections below.

Adjust the switch node property settings

Set the rule to execute the subsequent flow only if msg.payload contains the string “person”.

To set that rule, enter “person” in the comparison string for the condition “==” (on the right side of the “az” UX element in the property settings dialog for the switch node).

Adjust the change node property settings

To attach the annotated image to the email, assign the image data stored in the variable msg.annotatedInput to the variable msg.payload. First, open the “az” pull-down menu to the right of the “Target value” field and select “msg.”. Then enter “annotatedInput” in the text area on the right.

If you forget to select “msg.” in that pull-down menu, the flow will not work as expected, so double-check that it is set to “msg.”.
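
As a reference point, the logic of the switch and change nodes described above can also be expressed as a single Node-RED function node. The following is a hypothetical sketch of that combined logic, not a node used in this article’s flow:

  // Hypothetical function node equivalent to the switch + change steps:
  // pass the message on only when the recognition result is "person",
  // and move the annotated image into msg.payload for the sendgrid node.
  if (msg.payload === 'person') {
      msg.payload = msg.annotatedInput;
      return msg;
  }
  return null; // stop the flow here; no email is sent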

Adjust the sendgrid node property settings

Set the API key obtained from the SendGrid management screen, then enter the sender email address and the recipient email address.
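
What the sendgrid node does is roughly what you would do by calling SendGrid’s official Node.js library directly. The sketch below is illustrative only; it assumes the @sendgrid/mail package, a SENDGRID_API_KEY environment variable, and placeholder sender and recipient addresses:

  // Rough equivalent of the sendgrid node, using SendGrid's @sendgrid/mail library.
  // The addresses are placeholders; the API key is read from an environment variable.
  const sgMail = require('@sendgrid/mail');
  sgMail.setApiKey(process.env.SENDGRID_API_KEY);

  async function sendAlert(annotatedImageBuffer) {
    await sgMail.send({
      to: 'recipient@example.com',
      from: 'sender@example.com',
      subject: 'Alert: person detected by surveillance camera',
      text: 'A person was detected. The annotated image is attached.',
      attachments: [{
        content: annotatedImageBuffer.toString('base64'), // attachments must be base64 encoded
        filename: 'annotated.png',
        type: 'image/png',
        disposition: 'attachment'
      }]
    });
  }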

Finally, to make it easier to see what each node is doing, open each node’s properties and give it an appropriate name.

Validate the operation of the flow to send an email when the surveillance camera captures a person in frame

When a person is captured in the surveillance camera image, the image recognition result is displayed in the debug tab, just as in the previous confirmation flow, and an orange frame is drawn on the image shown under the “Image with annotation” image preview node. You can see that the person is recognized correctly.

After that, if the condition check, the value substitution, and the email transmission work as designed, you will receive on your smartphone an email with the annotated image file attached.

Conclusion

By using the flow created in this article, you can also build a simple security system for your own garden using a camera connected to a Raspberry Pi. At a larger scale, image recognition can also be run on image data acquired using network cameras that support protocols such as ONVIF.

About the author: Kazuhito Yokoi is an Engineer at Hitachi’s OSS Solution Center, located in Yokohama, Japan.