
Docker to Donate its Container Runtime, containerd, to the Cloud Native Computing Foundation

Docker plans to donate its containerd container runtime to the Cloud Native Computing Foundation, a nonprofit organization dedicated to organizing a set of open source container-based cloud-native technologies.

In December, Docker released as open source the code for containerd, which provides a runtime environment for Docker containers. By open sourcing this component of the Docker stack, the company wanted to assure users, partners, and other actors in the container ecosystem that the core container component would remain stable, and that the community would have a say in its advancement.

Read more at The New Stack

Best Practices for Value Stream Mapping and DevOps

In a recent Continuous Discussions (#c9d9) video podcast, expert panelists discussed Value Stream Mapping and DevOps.

Our expert panel included: Andi Mann, Chief Technology Advocate at Splunk; Marc Priolo, Configuration Manager, Urban Science; Mark Dalton, CEO at AutoDeploy; and, our very own Anders Wallgren and Sam Fell.

During the episode, the panelists discussed what Value Stream Mapping is and how it relates to DevOps, best practices for Value Stream Mapping, how it can help scale your DevOps adoption, and more. Continue reading for their best practices and insights.

The Week in Open Source News: Web Titans Influence Data Center Networking, How Blockchain Kickstarts Business & More

This week in open source news, SDxCentral calls The Linux Foundation crucial to the networking evolution, The Register explains how the cloud can kickstart your business, and more! Read on for more Linux and OSS headlines.

1) “With the importance of open source and SDN, virtual switches, and open software stacks, the Linux Foundation has become highly relevant to the next-gen data center networking evolution.”

Web Titans Have Big Influence on Data Center Networking Efforts – SDxCentral

2) The cloud can help developers achieve great success while keeping costs down. The Register delves into how startups, PaaS, and blockchain factor in.

How the Cloud Can Kickstart Your Business – The Register

3) Karl-Heinz Schneider claims that there are no good reasons to migrate back to Windows, following a back-and-forth debate in the city.

Munich IT Chief Slams City’s Decision to Dump Linux For Windows – The Inquirer

4) A dangerous flaw in the kernel allowed attackers to elevate their access rights and crash systems.

Another Years-Old Flaw Fixed in the Linux Kernel – BleepingComputer

5) “Dramatic changes in the use of open source require modifications to organizations’ application security strategies.”

Security in the Age of Open Source – DarkReading

Bruce Schneier on New Security Threats from the Internet of Things

Security expert Bruce Schneier says we’re creating an Internet that senses, thinks, and acts, which is the classic definition of a robot. “I contend that we’re building a world-sized robot without even realizing it,” he said recently at the Open Source Leadership Summit (OSLS).

In his talk, Schneier explained this idea of a world-sized robot, created out of the Internet, that has no single consciousness, no single goal, and no single creator. You can think of it, he says, as an Internet that affects the world in a direct physical manner. This means Internet security becomes everything security.

And, as the Internet physically affects our world, the threats become greater. “It’s the same computers, it could be the same operating systems, the same apps, the same vulnerability, but there’s a fundamental difference between when your spreadsheet crashes, and you lose your data, and when your car crashes and you lose your life,” Schneier said.

Here, Schneier discusses some of these new threats and how to manage them.

Linux.com: In your talk, you say “the combination of mobile, cloud computing, the Internet of Things, persistent computing, and autonomy are resulting in something different.” What are some of the new threats resulting from this different reality?

Bruce Schneier: The new threats are the same as the old threats, just ratcheted up. Ubiquitous surveillance becomes even more pervasive as more systems can do it. Malicious actions become even more serious when they can be performed autonomously by computer systems.

Security technologist Bruce Schneier (Image credit: Lynne Henry)

Our data continues to move even further out of our control, as more processing and storage migrates to the cloud. And our dependence on these systems continues to increase, as we use them for more critical applications and never turn them off. My primary worry, though, is the emergent properties that will arise from these fundamental changes in how we use computers — things we can’t predict or prepare for.

Linux.com: What are some of the new security and privacy risks specifically associated with IoT?

Schneier: The Internet of Things is fundamentally changing how computers get incorporated into our lives. Through the sensors, we’re giving the Internet eyes and ears. Through the actuators, we’re giving the Internet hands and feet. Through the processing — mostly in the cloud — we’re giving the Internet a brain. Together, we’re creating an Internet that senses, thinks, and acts. This is the classic definition of a robot, and I contend that we’re building a world-sized robot without even realizing it.

We have lots of experience with the old security and privacy threats. The new ones revolve around an Internet that can affect the world in a direct physical manner, and can do so autonomously. This is not something we’ve experienced before.

Linux.com: What past lessons are most relevant in managing these new threats?

Schneier: As computers permeate everything, what we know about computer and network security will become relevant to everything. This includes the risks of poorly written software, the inherent dangers that arise from extensible computer systems, the problems of complexity, and the vulnerabilities that arise from interconnections. But most importantly, computer systems fail differently than traditional machines. The auto industry knows all about how traditional cars fail, and has all sorts of metrics to predict rates of failure. Cars with computers can have a completely different failure mode: one where they all work fine, until one day none of them work at all.

Linux.com: What will be most effective in mitigating these threats in the future?

Schneier: There are two parts to any solution: a technical part and a policy part. Many companies are working on technologies to mitigate these threats: secure IoT building blocks, security systems that assume the presence of malicious IoT devices on a network, ways to limit catastrophic effects of vulnerabilities.

I have 20 IoT-security best-practices documents from various organizations. But the primary barriers here are economic; these low-cost devices just don’t have the dedicated security teams and patching/upgrade paths that our phones and computers do. This is why we also need regulation to force IoT companies to take security seriously from the beginning. I know regulation is a dirty word in our industry, but when people start dying, governments will take action. I see it as a choice not between government regulation and no government regulation, but between smart government regulation and stupid government regulation.

Linux.com: What can individuals do to make a difference?

Schneier: At this point, there isn’t much. We can choose to opt out: not buy the Internet-connected thermostat or refrigerator. But this is increasingly hard. Smartphones are essential to being a fully functioning person in the 21st century. New cars come with Internet connections. Everyone is using the cloud. We can try to demand security from the products and services we buy and use, but unless we’re part of a mass movement, we’ll just be ignored. We need to make this a political issue, and demand a policy solution. Without that, corporations will act in their own self-interest to the detriment of us all.

To hear more from Schneier, you can watch the complete keynote below.

https://www.youtube.com/watch?v=8tDU0zcptCY&list=PLbzoR-pLrL6rm2vBxfJAsySspk2FLj4fM

Bruce Schneier is the author of 13 books, as well as the Crypto-Gram newsletter and the Schneier on Security blog. He is also a fellow at the Berkman Klein Center for Internet & Society at Harvard, a Lecturer in Public Policy at the Harvard Kennedy School, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Chief Technology Officer at IBM Resilient.

NFV vs. VNF: What’s the Difference?

NFV versus VNF: SDN engineer Darien Hirotsu explains the differences between network functions virtualization and virtual network functions. 

Networking professionals sometimes use the terms virtual network functions, or VNF, and network functions virtualization, or NFV, interchangeably, which can be a source of confusion. However, if we refer to the NFV specifications the European Telecommunications Standards Institute, or ETSI, sets forth, it becomes clear the two acronyms have related but distinct meanings.

Read more at TechTarget

Monitoring Google Compute Engine Metrics

This post is Part 1 in a 3-part series about monitoring Google Compute Engine (GCE). Part 2 covers the nuts and bolts of collecting GCE metrics, and Part 3 describes how you can get started collecting metrics from GCE with Datadog. This article describes in detail the resource and performance metrics that can be obtained from GCE.

What is Google Compute Engine?

Google Compute Engine (GCE) is an infrastructure-as-a-service platform that is a core part of the Google Cloud Platform. The fully managed service enables users around the world to spin up virtual machines on demand. It can be compared to services like Amazon’s Elastic Compute Cloud (EC2), or Azure Virtual Machines.

Read more at DataDog

ETSI is Bullish on the Results of Its First NFV Interoperability Tests

The European Telecommunications Standards Institute (ETSI) recently held an NFV Plugtests event in Madrid, Spain, where 35 commercial and open source implementations were tested for interoperability, and the results released in its report are promising.

For features like network service on-boarding, instantiation, and termination, 98 percent of the interoperability tests succeeded. The standards body also saw positive results for more complex tests like scaling and network service updates.

Read more at SDx Central

Algorithm Time Complexity and Big O Notation

 

In an age where computing power surrounds us, it’s easy to become wrapped up in the idea that information is processed and delivered like magic; so fast that we sometimes forget that millions of calculations per second are being done between the time we request the information and the time it is delivered.

While machine computation time is easy to take for granted, on a more granular scale all work still requires time. And when the workload begins to mount, it becomes important to understand how the complexity of an algorithm influences the time it takes to complete a task.

An algorithm is a step-by-step list of instructions used to perform a task. A good software engineer will consider time complexity when planning their program. From the start, an engineer should consider the scenario their program may encounter that would require the most time to complete; this is known as the worst-case time complexity of an algorithm. Starting from there and working backwards allows the engineer to form a plan that gets the most work done in the shortest amount of time.

Big O notation is the most common metric for calculating time complexity. It describes how the execution time of a task grows with its input, in terms of the number of steps required to complete it.

Big O notation is written in the form of O(n) where O stands for “order of magnitude” and n represents what we’re comparing the complexity of a task against. A task can be handled using one of many algorithms, each of varying complexity and scalability over time.

There are myriad types of complexity, the more complicated of which are beyond the scope of this post. The following, however, are a few of the more basic types and the O notation that represent them. For the sake of clarity, let’s assume the examples below are executed in a vacuum, eliminating background processes (e.g. checking for email).

Constant Complexity: O(1)

A constant task’s run time won’t change no matter what the input value is. Consider a function that prints a value in an array.

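A minimal Python sketch of such a function (the name and values here are illustrative):

    def print_item(items, index):
        # Accessing and printing a single element is one step,
        # no matter how many elements `items` holds.
        print(items[index])

    print_item([3, 7, 42, 9], 2)   # prints 42, in a single step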

No matter which element’s value you’re asking the function to print, only one step is required. So we can say the function runs in O(1) time; its run-time does not increase. Its order of magnitude is always 1.

Linear Complexity: O(n)

A linear task’s run time will vary depending on its input value. If you ask a function to print all the items in a 10-element array, it will require fewer steps to complete than it would for a 10,000-element array. This is said to run in O(n) time; its run time increases at an order of magnitude proportional to n.

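A minimal Python sketch of a linear function (names are illustrative):

    def print_all(items):
        # One print step per element: a 10-element list takes 10 steps,
        # a 10,000-element list takes 10,000 steps.
        for item in items:
            print(item)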

Quadratic Complexity: O(N²)

A quadratic task requires a number of steps equal to the square of its input value. Let’s look at a function that takes an array and N as its input values, where N is the number of values in the array. If I use nested loops, both of which use N as their limit condition, and I ask the function to print the array’s contents, the function will perform N rounds, each round printing N lines, for a total of N² print steps.

Let’s look at that practically. Assume the index length N of an array is 10. If the function prints the contents of its array in a nested loop, it will perform 10 rounds, each round printing 10 lines, for a total of 100 print steps. This is said to run in O(N²) time; its total run time increases at an order of magnitude proportional to N².

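A minimal Python sketch of the nested-loop function described above (the name is illustrative):

    def print_contents_squared(items, n):
        # n is the number of values in `items`: the outer loop runs
        # N rounds and the inner loop prints N lines per round,
        # for N * N print steps in total.
        for _ in range(n):
            for i in range(n):
                print(items[i])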

Exponential: O(2^N)

O(2^N) is just one example of exponential growth (among O(3^N), O(4^N), etc.). Exponential time complexity means that each time the input N grows by one, the number of steps the function performs multiplies by a constant factor. For instance, a function whose run time doubles with each additional input element is said to have a complexity of O(2^N). A function whose run time triples with each additional element is said to have a complexity of O(3^N), and so on.
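
One way to picture exponential growth is a recursive function that branches into two calls at every level; a minimal Python sketch (the function name is illustrative):

    def count_calls(n):
        # Each call spawns two more calls, so the total number of
        # calls roughly doubles every time n increases by one.
        if n == 0:
            return 1
        return 1 + count_calls(n - 1) + count_calls(n - 1)

    count_calls(4)   # 31 calls in total; count_calls(5) makes 63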

Logarithmic Complexity: O(log n)

This is the type of algorithm that makes computation blazingly fast. Instead of the number of steps growing in proportion to N, it grows only in proportion to log N: doubling the size of the input adds just one more step.

Let’s say we want to search a database for a particular number, for example searching a set of 20 numbers for the number 100. In this case, searching through 20 numbers is a non-issue. But imagine we’re dealing with data sets that store millions of users’ profile information. Searching through each index value from beginning to end would be ridiculously inefficient, especially if it had to be done multiple times.

A logarithmic algorithm that performs a binary search looks through only half of an increasingly smaller data set per step.

Assume we have a set of numbers sorted in ascending order. The algorithm checks the value in the middle of the data set; if that isn’t the number we’re looking for, it discards the half that can’t contain it and then checks the middle of the remaining numbers.

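A minimal Python sketch of such a binary search (names and values are illustrative):

    def binary_search(sorted_items, target):
        # Each pass discards the half of the remaining range that
        # cannot contain the target.
        low, high = 0, len(sorted_items) - 1
        while low <= high:
            mid = (low + high) // 2
            if sorted_items[mid] == target:
                return mid
            elif sorted_items[mid] < target:
                low = mid + 1      # target can only be in the upper half
            else:
                high = mid - 1     # target can only be in the lower half
        return -1                  # not present

    binary_search([2, 5, 8, 13, 21, 34, 55, 89, 100, 144], 100)   # returns index 8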

Because each round of searching covers a smaller data set than the previous one, each subsequent round takes less time. This makes log n algorithms very scalable.

While I’ve only touched on the basics of time complexity and Big O notation, this should get you off to a good start. Besides the benefit of being more adept at scaling programs efficiently, understanding the concept of time complexity is a HUGE benefit; I’ve been told by a few people that it comes up in interviews a lot.

Principles for C Programming

In the words of Doug Gwyn, “Unix was not designed to stop you from doing stupid things, because that would also stop you from doing clever things”. C is a very powerful tool, but it is to be used with care and discipline. Learning this discipline is well worth the effort, because C is one of the best programming languages ever made. A disciplined C programmer will…

Prefer maintainability. Do not be clever where cleverness is not required. Instead, seek out the simplest and most understandable solution that meets the requirements. Most concerns, including performance, are secondary to maintainability. You should have a performance budget for your code, and you should be comfortable spending it.

Read more at Drew DeVault’s blog

Paving with Good Intentions: The Attempt to Rescue the Network Time Protocol

After the Heartbleed bug revealed in April 2014 how understaffed and under-funded the OpenSSL project was, the Network Time Foundation was discovered to be one of several projects in a similar condition. Unfortunately, thanks to a project fork, the efforts to lend NTP support have only divided the development community and created two projects scrambling for funds where originally there was only one.

NTP was originally written by David L. Mills in 1985. Today, the project has been managed for years by Harlan Stenn, whose hours developing the protocol have regularly exceeded funding, and were volunteered largely at the expense of his own consulting business. Currently, the project has four main contributors, one of whom is on sabbatical. The project is part of the Network Time Foundation, but other contributors are working on related projects, all of which are just as understaffed.

Read more at The New Stack