
IBM Unveils Blockchain as a Service Based on Open Source Hyperledger Fabric Technology

IBM unveiled its “Blockchain as a Service” today, which is based on version 1.0 of the open source Hyperledger Fabric technology from The Linux Foundation.

IBM Blockchain is a public cloud service that customers can use to build secure blockchain networks. The company introduced the idea last year, but this is the first ready-for-primetime implementation built using that technology.

Blockchain is a concept that entered the public consciousness around 2008 as a way to track bitcoin digital-currency transactions.

Read more at TechCrunch

ftrace: Trace your Kernel Functions!

Hello! Today we’re going to talk about a debugging tool we haven’t talked about much before on this blog: ftrace. What could be more exciting than a new debugging tool?!

Better yet, ftrace isn’t new! It’s been around since Linux kernel 2.6, or about 2008. Here’s the earliest documentation I found with some quick Googling. So you might be able to use it even if you’re debugging an older system!

I’ve known that ftrace exists for about 2.5 years now, but hadn’t gotten around to really learning it yet. I’m supposed to run a workshop tomorrow where I talk about ftrace, so today is the day we talk about it!
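For a quick taste before the full post, here is a minimal sketch of driving ftrace by hand through the tracefs filesystem. It assumes root privileges and a kernel with the function tracer compiled in; the mount point varies by kernel version, so the script checks both common locations:

```shell
# Quick ftrace taste: record kernel function calls for one second.
# Must be run as root. tracefs lives at /sys/kernel/tracing on newer
# kernels and /sys/kernel/debug/tracing on older ones.
TRACEDIR=/sys/kernel/tracing
[ -d "$TRACEDIR" ] || TRACEDIR=/sys/kernel/debug/tracing

echo function > "$TRACEDIR/current_tracer"  # select the function tracer
echo 1 > "$TRACEDIR/tracing_on"             # start recording
sleep 1
echo 0 > "$TRACEDIR/tracing_on"             # stop recording
head -n 20 "$TRACEDIR/trace"                # peek at what was captured
echo nop > "$TRACEDIR/current_tracer"       # reset the tracer when done
```

Each line of the `trace` output shows which process, on which CPU, called which kernel function — a surprisingly direct window into what the kernel is doing.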

Read more at Julia Evans 

MIT-Stanford Project Uses LLVM to Break Big Data Bottlenecks

Written in Rust, Weld can provide orders-of-magnitude speedups to Spark and TensorFlow.

The more cores you can use, the better — especially with big data. But the easier a big data framework is to work with, the harder it is for the resulting pipelines, such as TensorFlow plus Apache Spark, to run in parallel as a single unit.

Researchers from MIT CSAIL, the home of envelope-pushing big data acceleration projects like Milk and Tapir, have paired with the Stanford InfoLab to create a possible solution. Written in the Rust language, Weld generates code for an entire data analysis workflow that runs efficiently in parallel using the LLVM compiler framework.

Read more at InfoWorld

Docker to Donate its Container Runtime, containerd, to the Cloud Native Computing Foundation

Docker plans to donate its containerd container runtime to the Cloud Native Computing Foundation, a nonprofit organization dedicated to organizing a set of open source container-based cloud-native technologies.

In December, Docker released as open source the code for containerd, which provides a runtime environment for Docker containers. By open sourcing this component of the Docker stack, the company wanted to assure users, partners, and other actors in the container ecosystem that the core container component would remain stable, and that the community would have a say in its advancement.

Read more at The New Stack

Best Practices for Value Stream Mapping and DevOps

In a recent Continuous Discussions (#c9d9) video podcast, expert panelists discussed Value Stream Mapping and DevOps.

Our expert panel included: Andi Mann, Chief Technology Advocate at Splunk; Marc Priolo, Configuration Manager at Urban Science; Mark Dalton, CEO at AutoDeploy; and our very own Anders Wallgren and Sam Fell.

During the episode, the panelists discussed what Value Stream Mapping is and how it relates to DevOps, best practices for Value Stream Mapping, how it can help scale your DevOps adoption, and more. Continue reading for their best practices and insights.

The Week in Open Source News: Web Titans Influence Data Center Networking, How Blockchain Kickstarts Business & More

This week in open source news, SDxCentral calls The Linux Foundation crucial to the networking evolution, the cloud should be central in kickstarting your business, and more! Read on for more Linux and OSS headlines.

1) “With the importance of open source and SDN, virtual switches, and open software stacks, the Linux Foundation has become highly relevant to the next-gen data center networking evolution.”

Web Titans Have Big Influence on Data Center Networking Efforts – SDxCentral

2) The cloud can help developers achieve great success while keeping costs down. The Register delves into how startups, PaaS, and blockchain factor in.

How the Cloud Can Kickstart Your Business – The Register

3) Karl-Heinz Schneider claims that there are no good reasons to migrate back to Windows, after a back-and-forth city debate.

Munich IT Chief Slams City’s Decision to Dump Linux For Windows – The Inquirer

4) A dangerous flaw in the kernel allowed attackers to elevate their access rights and crash systems.

Another Years-Old Flaw Fixed in the Linux Kernel – BleepingComputer

5) “Dramatic changes in the use of open source require modifications to organizations’ application security strategies.”

Security in the Age of Open Source – DarkReading

Bruce Schneier on New Security Threats from the Internet of Things

Security expert Bruce Schneier says we’re creating an Internet that senses, thinks, and acts, which is the classic definition of a robot. “I contend that we’re building a world-sized robot without even realizing it,” he said recently at the Open Source Leadership Summit (OSLS).

In his talk, Schneier explained this idea of a world-sized robot, created out of the Internet, that has no single consciousness, no single goal, and no single creator. You can think of it, he says, as an Internet that affects the world in a direct physical manner. This means Internet security becomes everything security.

And, as the Internet physically affects our world, the threats become greater. “It’s the same computers, it could be the same operating systems, the same apps, the same vulnerability, but there’s a fundamental difference between when your spreadsheet crashes, and you lose your data, and when your car crashes and you lose your life,” Schneier said.

Here, Schneier discusses some of these new threats and how to manage them.

Linux.com: In your talk, you say “the combination of mobile, cloud computing, the Internet of Things, persistent computing, and autonomy are resulting in something different.” What are some of the new threats resulting from this different reality?

Bruce Schneier: The new threats are the same as the old threats, just ratcheted up. Ubiquitous surveillance becomes even more pervasive as more systems can do it. Malicious actions become even more serious when they can be performed autonomously by computer systems.

Security technologist Bruce Schneier (Image credit: Lynne Henry)

Our data continues to move even further out of our control, as more processing and storage migrates to the cloud. And our dependence on these systems continues to increase, as we use them for more critical applications and never turn them off. My primary worry, though, is the emergent properties that will arise from these fundamental changes in how we use computers — things we can’t predict or prepare for.

Linux.com: What are some of the new security and privacy risks specifically associated with IoT?

Schneier: The Internet of Things is fundamentally changing how computers get incorporated into our lives. Through the sensors, we’re giving the Internet eyes and ears. Through the actuators, we’re giving the Internet hands and feet. Through the processing — mostly in the cloud — we’re giving the Internet a brain. Together, we’re creating an Internet that senses, thinks, and acts. This is the classic definition of a robot, and I contend that we’re building a world-sized robot without even realizing it.

We have lots of experience with the old security and privacy threats. The new ones revolve around an Internet that can affect the world in a direct physical manner, and can do so autonomously. This is not something we’ve experienced before.

Linux.com: What past lessons are most relevant in managing these new threats?

Schneier: As computers permeate everything, what we know about computer and network security will become relevant to everything. This includes the risks of poorly written software, the inherent dangers that arise from extensible computer systems, the problems of complexity, and the vulnerabilities that arise from interconnections. But most importantly, computer systems fail differently than traditional machines. The auto industry knows all about how traditional cars fail, and has all sorts of metrics to predict rates of failure. Cars with computers can have a completely different failure mode: one where they all work fine, until one day none of them work at all.

Linux.com: What will be most effective in mitigating these threats in the future?

Schneier: There are two parts to any solution: a technical part and a policy part. Many companies are working on technologies to mitigate these threats: secure IoT building blocks, security systems that assume the presence of malicious IoT devices on a network, ways to limit catastrophic effects of vulnerabilities.

I have 20 IoT-security best-practices documents from various organizations. But the primary barriers here are economic; these low-cost devices just don’t have the dedicated security teams and patching/upgrade paths that our phones and computers do. This is why we also need regulation to force IoT companies to take security seriously from the beginning. I know regulation is a dirty word in our industry, but when people start dying, governments will take action. I see it as a choice not between government regulation and no government regulation, but between smart government regulation and stupid government regulation.

Linux.com: What can individuals do to make a difference?

Schneier: At this point, there isn’t much. We can choose to opt out: not buy the Internet-connected thermostat or refrigerator. But this is increasingly hard. Smartphones are essential to being a fully functioning person in the 21st century. New cars come with Internet connections. Everyone is using the cloud. We can try to demand security from the products and services we buy and use, but unless we’re part of a mass movement, we’ll just be ignored. We need to make this a political issue, and demand a policy solution. Without that, corporations will act in their own self-interest to the detriment of us all.

To hear more from Schneier, you can watch the complete keynote below.

https://www.youtube.com/watch?v=8tDU0zcptCY&list=PLbzoR-pLrL6rm2vBxfJAsySspk2FLj4fM

Bruce Schneier is the author of 13 books, as well as the Crypto-Gram newsletter and the Schneier on Security blog. He is also a fellow at the Berkman Klein Center for Internet & Society at Harvard, a Lecturer in Public Policy at the Harvard Kennedy School, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Chief Technology Officer at IBM Resilient.

NFV vs. VNF: What’s the Difference?

NFV versus VNF: SDN engineer Darien Hirotsu explains the differences between network functions virtualization and virtual network functions. 

Networking professionals sometimes use the terms virtual network functions, or VNF, and network functions virtualization, or NFV, interchangeably, which can be a source of confusion. However, if we refer to the NFV specifications set forth by the European Telecommunications Standards Institute, or ETSI, it becomes clear the two acronyms have related but distinct meanings.

Read more at TechTarget

Monitoring Google Compute Engine Metrics

This post is part 1 in a 3-part series about monitoring Google Compute Engine (GCE). Part 2 covers the nuts and bolts of collecting GCE metrics, and part 3 describes how you can get started collecting metrics from GCE with Datadog. This article describes in detail the resource and performance metrics that can be obtained from GCE.

What is Google Compute Engine?

Google Compute Engine (GCE) is an infrastructure-as-a-service platform that is a core part of the Google Cloud Platform. The fully managed service enables users around the world to spin up virtual machines on demand. It can be compared to services like Amazon’s Elastic Compute Cloud (EC2) or Azure Virtual Machines.

Read more at DataDog

ETSI is Bullish on the Results of Its First NFV Interoperability Tests

The European Telecommunications Standards Institute (ETSI) recently held a Plugtests event in Madrid, Spain, where 35 commercial and open source implementations were tested for interoperability, and it saw promising results, according to its report.

For features like network service on-boarding, instantiation, and termination, 98 percent of the interoperability tests succeeded. The standards body also saw positive results for more complex tests like scaling and network service updates.

Read more at SDx Central