
What Is Deep Learning AI? A Simple Guide With 8 Practical Examples

The amount of data we generate every day is staggering—currently estimated at 2.6 quintillion bytes—and it’s the resource that makes deep learning possible. Since deep-learning algorithms require a ton of data to learn from, this increase in data creation is one reason that deep learning capabilities have grown in recent years. In addition to more data creation, deep learning algorithms benefit from the stronger computing power that’s available today as well as the proliferation of Artificial Intelligence (AI) as a Service. AI as a Service has given smaller organizations access to artificial intelligence technology and specifically the AI algorithms required for deep learning without a large initial investment.

Deep learning allows machines to solve complex problems even when using a data set that is diverse, unstructured, and interconnected. The more deep learning algorithms learn, the better they perform.
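To make that learning loop concrete, here is a minimal sketch (our illustration, not from the Forbes article) of a small deep network in PyTorch fitting synthetic data; production systems differ mainly in scale, with far more layers, parameters, and data:

```python
# A minimal sketch of the deep learning loop, not from the article:
# a small network improves by repeatedly adjusting its weights against
# data. Requires PyTorch (pip install torch).
import torch
import torch.nn as nn

# Synthetic data: 1,000 samples of a noisy nonlinear function.
x = torch.linspace(-3, 3, 1000).unsqueeze(1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)

# "Deep" means stacked layers of linear transforms and nonlinearities.
model = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# Each pass over the data nudges the weights to reduce the error.
for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")
```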

8 practical examples of deep learning

Now that we’re in a time when machines can learn to solve complex problems without human intervention, what exactly are the problems they are tackling? Here are just a few of the tasks that deep learning supports today; the list will continue to grow as the algorithms learn from ever more data.

Read more at Forbes

How Open Source Projects Are Pushing the Shift to Edge Computing

Gnanavelkandan Kathirvel of AT&T is sure of one thing: it will take a large group of open-source projects working together to push computing closer to the edge.

He’s behind the telecom’s efforts at Akraino Edge Stack, a Linux Foundation project that aims to create an open source software stack for the edge. The AT&T contribution is designed for carrier-scale edge computing applications running in virtual machines and containers to support reliability and performance requirements.

To accomplish this, Akraino will count on collaboration from other open source projects including ONAP, OpenStack, Airship, Kubernetes, Docker, Ceph, ONF, EdgeX Foundry and more. Ensuring that there are no gaps in functionality will require strong collaboration between Akraino and upstream open source communities.

Read more at SuperUser

AT&T Details Open White Box Specs for Linux-Based 5G Routers

This week AT&T will release detailed specs to the Open Compute Project for building white box cell site gateway routers for 5G. Over the next few years, more than 60,000 white box routers built by a variety of manufacturers will be deployed as 5G routers in AT&T’s network.

In its Oct. 1 announcement, AT&T said it will load the routers with its Debian Linux-based Vyatta Network Operating System (NOS) stack. Vyatta NOS forms the basis for AT&T’s open source dNOS platform, which in turn is the basis for a new Linux Foundation open source NOS project called DANOS, which similarly stands for Disaggregated Network Operating System (see below).

AT&T’s white box blueprint “decouples hardware from software” so any organization can build its own compliant systems running other software. This will provide the cellular gateway industry with flexibility as well as the security of building on an interoperable, long-lifecycle platform. The white box spec appears to be OS-agnostic. However, routers typically run Linux-based NOS stacks, and that does not appear to be changing with 5G.

The release of specs to the Open Compute Project — an organization that helps standardize open white box designs — departs from the traditional practice of contracting a few vendors to build proprietary solutions for cellular routers. AT&T’s next-gen router blueprint will enable any hardware manufacturer willing to build to spec to compete for the orders. By attracting more manufacturers, AT&T aims to reduce costs, spur innovation, and more quickly meet the “surging data demands” for 5G.

“We now carry more than 222 petabytes of data on an average business day,” stated Chris Rice, SVP, Network Cloud and Infrastructure at AT&T. “The old hardware model simply can’t keep up, and we need to get faster and more efficient.”

The reference design blueprint is said to be flexible enough to enable manufacturers to offer custom platforms for different use cases. In addition to offering faster mobile services, AT&T’s 5G services will enable new applications in “autonomous cars, drones, augmented reality and virtual reality systems, smart factories, and more,” says AT&T.

5G technology will not only provide a major boost in bandwidth for mobile customers, it should also enable wireless services to better compete with the cable providers’ wired broadband Internet services for the home. This week, AT&T rival Verizon opened pre-orders for consumer customers to sign up for 5G home internet service targeted for a launch in 2019.

At publication time, neither AT&T nor the Open Compute Project had yet published the white box specs, but AT&T offered a few details:

  • Supports a wide range of client-side speeds including “100M/1G needed for legacy Baseband Unit systems and next generation 5G Baseband Unit systems operating at 10G/25G and backhaul speeds up to 100G”

  • Supports industrial temperature ranges (-40 to 65°C)

  • Integrates the Broadcom Qumran-AX switching chip with deep buffers to support advanced features and QoS

  • Integrates a baseboard management controller (BMC) for platform health status monitoring and recovery

  • Includes a “powerful CPU for network operating software”

  • Provides timing circuitry that supports a variety of I/O

Vyatta NOS to dNOS to DANOS

Vyatta launched the Debian-based, OpenVPN-compliant Vyatta Community Edition over a decade ago. The distribution, which later added features like Quagga support and a standardized management console, was available in both subscription-based and open source Vyatta Core versions.

When Brocade acquired Vyatta in 2012, it discontinued the open source version. However, independent developers forked Vyatta Core to create an open source VyOS platform. Last year, Brocade sold its proprietary Vyatta assets to AT&T, which developed it as Vyatta NOS.

AT&T will initially load the proprietary, “production-hardened” Vyatta NOS on the white box routers it purchases. However, the goal appears to be to eventually replace this with AT&T’s dNOS stack under the emerging DANOS framework.

Robert Bays, assistant VP of Vyatta Development at AT&T Labs, stated: “Consistent with our previous announcements to create the DANOS open source project, hosted by the Linux Foundation, we are now sorting out which components of the open cell site gateway router NOS we will be contributing to open source.”

dNOS/DANOS aims to be the world’s first open source, carrier-grade operating system for wide area networks. The software is designed to interoperate with the widely endorsed ONAP (Open Network Automation Platform), a Linux Foundation project for standardizing open source cloud networking software. In AT&T’s dNOS announcement in January, which preceded the DANOS project launch in March, the company stated: “Just as the ONAP platform has become the open network operating system for the network cloud, the dNOS project aims to be the open operating system for white box.”

The DANOS project is also aligned with Linux Foundation projects like FRRouting, OpenSwitch, and the AT&T-derived Akraino Edge Stack. The Akraino project aims to standardize open source edge computing software for base stations and will also support telecom, enterprise networking, and IoT edge platforms.

Different Akraino blueprints will target technologies and standards such as DANOS, Ceph, Kata Containers, Kubernetes, StarlingX, OpenStack, Acumos AI, and EdgeX Foundry. In a few years, we will likely see DANOS-based white box gateway routers running Akraino software to enable 5G applications ranging from autonomous car communications to augmented reality.

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.

Open Source Communities Unite Around Cloud-Native Network Functions

Cloud Native Computing Foundation (CNCF), chiefly responsible for Kubernetes, and the recently established Linux Foundation Networking (LF Networking) group are collaborating on a new class of software tools called Cloud-native Network Functions (CNFs).

CNFs are the next generation of Virtual Network Functions (VNFs), designed specifically for private, public, and hybrid cloud environments and packaged inside application containers orchestrated by Kubernetes.

VNFs are primarily used by telecommunications providers; CNFs are aimed at telecommunications providers that have shifted to cloud architectures, and will be especially useful in the deployment of 5G networks.

Read more at Datacenter Dynamics

Top Five Reasons to Attend Hyperledger Global Forum

In just over two months, the global Hyperledger community will gather in Basel, Switzerland, for the inaugural Hyperledger Global Forum.

With business and technical tracks filled with a diverse range of speakers – from Kiva to the Royal Bank of Canada and from Oracle to the Sovrin Foundation – there’s plenty of educational and engaging content for anyone looking to deepen their knowledge of enterprise blockchain. However, there are many other great reasons to make sure Hyperledger Global Forum is on your calendar for December 12-15, 2018. Here are five things that make this a must-attend event:

Applications

The fast-growing Hyperledger community is putting blockchain to work with PoCs and production deployments around the world. Hyperledger Global Forum is your chance to see live demos and roadmaps showing how the biggest names in financial services, healthcare, supply chain and more are integrating Hyperledger technologies for commercial, production deployments.

Read more at Coinspeaker

Keeping the Software in Docker Containers Up to Date

Docker has radically changed the way admins roll out their software in many companies. However, regular updates are still needed. How does this work most effectively with containers?

From an admin’s point of view, Docker containers have much to recommend them: They can be operated with reduced permissions and thus provide a barrier between a service and the underlying operating system. They are easy to handle and easy to replace. They save work when it comes to maintaining the system on which they run: Software vendors provide finished containers that are ready to run immediately. If you want to roll out new software, you download the corresponding container and start it – all done. Long installation and configuration steps with RPM, dpkg, Ansible, and other tools are no longer necessary. All in all, containers offer noticeable added value from the sys admin’s perspective.

…No matter what the reasons for updating applications running in Docker containers, as with normal systems, you need a working strategy for the update. Because running software in Docker differs fundamentally from operations without a container layer, the way updates are handled also differs.

Several options are available with containers, which I present in this article, but first, you must understand the structure of Docker containers, so you can make the right decisions when updating.
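Because container images are immutable, “updating” a containerized service generally means pulling a newer image and replacing the running container rather than patching it in place. As a rough illustration of that replace-don’t-patch pattern (our sketch, not from the article; the image name, container name, and port mapping are hypothetical placeholders), using the Docker SDK for Python:

```python
# A rough sketch of the replace-don't-patch update pattern using the
# Docker SDK for Python (pip install docker). IMAGE, NAME, and the
# port mapping are hypothetical placeholders.
import docker

client = docker.from_env()

IMAGE = "nginx:latest"   # hypothetical image to keep current
NAME = "web"             # hypothetical container name

# 1. Pull the latest version of the image from the registry.
client.images.pull(IMAGE)

# 2. Stop and remove the running container; image layers are immutable,
#    so the old container is discarded rather than patched in place.
try:
    old = client.containers.get(NAME)
    old.stop()
    old.remove()
except docker.errors.NotFound:
    pass  # nothing running yet

# 3. Recreate the container from the freshly pulled image. Persistent
#    state belongs in volumes so it survives the replacement.
client.containers.run(IMAGE, name=NAME, detach=True,
                      ports={"80/tcp": 8080})
```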

Read more at ADMIN Magazine

Shareware: Yesterday, Today, and Tomorrow

Shareware software had a simple premise: You could try the application and if you liked it, you paid for it.

In those halcyon days, the PC software market was still gaining traction. Most programs were expensive—a single application often retailed for $495, in 1980s dollars. Often, they were complex and difficult to use. Then, Jim “Button” Knopf, creator of PC-File, a simple flat database, and Andrew Fluegelman, inventor of the program PC-Talk, a modem interface program, came up with the same idea: share their programs with other users for a voluntary, nominal payment. Knopf and Fluegelman supported each other’s efforts, and a new software marketing and sales model was born.

Joel Diamond, vice president of the Association of Software Professionals (originally called Association of Shareware Professionals), believes shareware should be recognized as the first version of e-commerce. You can also point to shareware as the ancestor of mobile app development, collaborative development (with philosophies of trust that led to open source), and community support.

Read more at HPE

Greg Kroah-Hartman Explains How the Kernel Community Is Securing Linux

As Linux adoption expands, it’s increasingly important for the kernel community to improve the security of the world’s most widely used technology. Security is vital not only for enterprise customers but also for consumers, as 80 percent of mobile devices are powered by Linux. In this article, Linux kernel maintainer Greg Kroah-Hartman provides a glimpse into how the kernel community deals with vulnerabilities.

There will be bugs

As Linus Torvalds once said, most security holes are bugs, and bugs are part of the software development process. As long as the software is being written, there will be bugs.

“A bug is a bug. We don’t know if a bug is a security bug or not. There is a famous bug that I fixed and then three years later Red Hat realized it was a security hole,” said Kroah-Hartman.

There is not much the kernel community can do to eliminate bugs, but it can do more testing to find them. The kernel community now has its own security team that’s made up of kernel developers who know the core of the kernel.

“When we get a report, we involve the domain owner to fix the issue. In some cases it’s the same people, so we made them part of the security team to speed things up,” Kroah-Hartman said. But he also stressed that all parts of the kernel have to be aware of these security issues, because the kernel is a trusted environment and they have to protect it.

“Once we fix things, we can put them in our stack analysis rules so that they are never reintroduced,” he said.

Besides fixing bugs, the community also continues to add hardening to the kernel. “We have realized that we need to have mitigations. We need hardening,” said Kroah-Hartman.

Huge efforts have been made by Kees Cook and others to take the hardening features that have traditionally lived outside of the kernel and merge or adapt them for the kernel. With every kernel release, Cook provides a summary of all the new hardening features. But hardening the kernel is not enough; vendors have to enable the new features and take advantage of them. That’s not happening.

Kroah-Hartman releases a stable kernel every week, and companies pick one to support for a longer period so that device manufacturers can take advantage of it. However, Kroah-Hartman has observed that, aside from the Google Pixel, most Android phones don’t include the additional hardening features, meaning all those phones are vulnerable. “People need to enable this stuff,” he said.

“I went out and bought all the top of the line phones based on kernel 4.4 to see which one actually updated. I found only one company that updated their kernel,” he said.  “I’m working through the whole supply chain trying to solve that problem because it’s a tough problem. There are many different groups involved — the SoC manufacturers, the carriers, and so on. The point is that they have to push the kernel that we create out to people.”

The good news is that unlike with consumer electronics, the big vendors like Red Hat and SUSE keep the kernel updated even in the enterprise environment. Modern systems with containers, pods, and virtualization make this even easier. It’s effortless to update and reboot with no downtime. It is, in fact, easier to keep things secure than it used to be.

Meltdown and Spectre

No security discussion is complete without the mention of Meltdown and Spectre. The kernel community is still working on fixes as new flaws are discovered. However, Intel has changed its approach in light of these events.

“They are reworking how they approach security bugs and how they work with the community because they know they did it wrong,” Kroah-Hartman said. “The kernel has fixes for almost all of the big Spectre issues, but there is going to be a long tail of minor things.”

The good news is that these Intel vulnerabilities proved that things are getting better for the kernel community. “We are doing more testing. With the latest round of security patches, we worked on our own for four months before releasing them to the world because we were embargoed. But once they hit the real world, it made us realize how much we rely on the infrastructure we have built over the years to do this kind of testing, which ensures that we don’t have bugs before they hit other people,” he said. “So things are certainly getting better.”

The increasing focus on security is also creating more job opportunities for talented people. Since security is an area that gets attention, it’s a good place to start for those who want to build a career in kernel space.

“If there are people who want a job to do this type of work, we have plenty of companies who would love to hire them. I know some people who have started off fixing bugs and then got hired,” Kroah-Hartman said.



How to Install OpenLDAP on Ubuntu 18.04

LDAP is the Lightweight Directory Access Protocol, which allows for the querying and modification of an X.500-based directory service. LDAP is used over an IP network to manage and access a distributed directory service. The primary purpose of LDAP is to provide a set of records in a hierarchical structure. If you’re curious as to how LDAP fits in with Active Directory, think of it this way: Active Directory is a directory service database, and LDAP is one of the protocols used to communicate with it. LDAP can be used for user validation, as well as the adding, updating, and removing of objects within a directory.

I want to show you how to install OpenLDAP on the latest iteration of Ubuntu, and then how to populate an LDAP database with a first entry. All you will need for this is a running instance of Ubuntu 18.04 and a user account with sudo privileges.

And with that said, let’s install.
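The installation steps themselves are in the full article, but once the server is running you can sanity-check it from a script as well as from the command line. Here is a minimal sketch using the third-party ldap3 Python library, assuming a local server and a hypothetical dc=example,dc=com base DN with the admin credentials you chose during setup:

```python
# A minimal sketch of querying the new OpenLDAP server with the
# third-party ldap3 library (pip install ldap3). The server address,
# admin DN, password, and base DN are hypothetical placeholders that
# must match what you chose during installation.
from ldap3 import Server, Connection, ALL

server = Server("ldap://localhost", get_info=ALL)
conn = Connection(server,
                  user="cn=admin,dc=example,dc=com",  # hypothetical admin DN
                  password="secret",                  # hypothetical password
                  auto_bind=True)

# Search the hierarchical directory tree for person entries.
conn.search("dc=example,dc=com", "(objectClass=person)",
            attributes=["cn", "mail"])
for entry in conn.entries:
    print(entry)

conn.unbind()
```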

Read more at Tech Republic

A New Method of Containment: IBM Nabla Containers

By James Bottomley

In the previous post about Containers and Cloud Security, I noted that most of the tenants of a Cloud Service Provider (CSP) could safely not worry about the Horizontal Attack Profile (HAP) and leave the CSP to manage the risk. However, there is a small category of jobs (mostly in the financial and allied industries) where the damage done by a Horizontal Breach of the container cannot be adequately compensated by contractual remedies. For these cases, a team at IBM Research has been looking at ways of reducing the HAP with a view to making containers more secure than hypervisors. For the impatient, the full open source release of the Nabla Containers technology is here and here, but for the more patient, let me explain what we did and why. We’ll have a follow-on post about the measurement methodology for the HAP and how we proved better containment than even hypervisor solutions.

The essence of the quest is a sandbox that emulates the interface between the runtime and the kernel (usually dubbed the syscall interface) with as little code as possible and a very narrow interface into the kernel itself.
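Nabla’s actual mechanism is described in the full post, but the general idea of narrowing the interface into the kernel can be illustrated with seccomp, which lets a process restrict which syscalls it may make. The sketch below (a loose illustration of the concept, not Nabla’s implementation) uses the libseccomp Python bindings to block a single syscall; a real sandbox inverts the policy, denying everything except a short allowlist:

```python
# A loose illustration of narrowing the syscall interface, not Nabla's
# implementation. Requires Linux and the libseccomp Python bindings
# (e.g., the python3-seccomp package).
import os
import seccomp

# Default: allow every syscall, then forbid directory creation to show
# the mechanism. A real sandbox denies by default and allows only a
# handful of syscalls.
f = seccomp.SyscallFilter(defaction=seccomp.ALLOW)
f.add_rule(seccomp.ERRNO(1), "mkdir")    # fail with EPERM (errno 1)
f.add_rule(seccomp.ERRNO(1), "mkdirat")  # cover the modern variant
f.load()

try:
    os.mkdir("/tmp/seccomp-blocked")
except OSError as e:
    print("mkdir blocked by seccomp:", e)
```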

The Basics: Looking for Better Containment

The HAP worry with standard containers is that a malicious application can breach the containment wall and attack an innocent application.

Read more at Hansen Partnership