
Kubernetes Design and Development Explained

Kubernetes is quickly becoming the de facto way to deploy workloads on distributed systems. In this post, I will help you develop a deeper understanding of Kubernetes by revealing some of the principles underpinning its design.

Declarative Over Imperative

As soon as you learn to deploy your first workload (a pod) on Kubernetes, you encounter the first principle of its design: the Kubernetes API is declarative rather than imperative.

In an imperative API, you directly issue the commands that the server will carry out, e.g. “run container,” “stop container,” and so on. In a declarative API, you declare what you want the system to do, and the system will constantly drive towards that state.

Think of it like manually driving vs setting an autopilot system.

So in Kubernetes, you create an API object (using the CLI or REST API) to represent what you want the system to do. And all the components in the system work to drive towards that state, until the object is deleted.

For example, when you want to schedule a containerized workload, instead of issuing a “run container” command you create an API object, a pod, that describes your desired state:

simple-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: internal.mycorp.com:5000/mycontainer:1.7.9
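The “constantly drive towards the declared state” behavior can be sketched as a simple reconcile loop. This is an illustrative model only, with hypothetical names; it is not actual Kubernetes controller code:

```python
# Minimal sketch of a declarative reconcile loop (illustrative;
# names are hypothetical, not Kubernetes internals).

def reconcile(desired, actual):
    """Compare desired state against actual state and return the
    actions needed to converge, without the caller issuing any
    imperative commands directly."""
    actions = []
    # Anything declared but missing or out of date gets created/updated.
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    # Anything running but no longer declared gets deleted.
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

desired = {"nginx": {"image": "internal.mycorp.com:5000/mycontainer:1.7.9"}}
actual = {}  # nothing running yet
print(reconcile(desired, actual))
# → [('create', 'nginx', {'image': 'internal.mycorp.com:5000/mycontainer:1.7.9'})]
```

A real controller would run this loop continuously, so the system self-heals: if a pod dies, the next pass of the loop sees the gap between declared and actual state and recreates it.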

Read more at The New Stack

James Bottomley on Linux, Containers, and the Leading Edge

It’s no secret that Linux is basically the operating system of containers, and containers are the future of the cloud, says James Bottomley, Distinguished Engineer at IBM Research and Linux kernel developer. Bottomley, who can often be seen at open source events in his signature bow tie, is focused these days on security systems like the Trusted Platform Module and the fundamentals of container technology.

With Open Source Summit happening this month in conjunction with Linux Security Summit — and Open Source Summit Europe coming up fast — we talked with Bottomley about these and other topics. …

The Linux Foundation: Who should attend Open Source Summit and why?

Bottomley: I think it’s no secret that Linux is basically the OS of containers and containers are the future of the cloud, so anyone who is interested in keeping up to date with what’s going on in the cloud because this would be the only place they can keep up with the leading edge of Linux.

Read more at The Linux Foundation

How Blockchain and the Auto Industry Will Fit Together

“At this point, most of the specific potential uses for blockchain in various industries are quite speculative and a number of years out,” says Gordon Haff, technology evangelist at Red Hat. “What we can do, though, is think about the type of uses that play to blockchain strengths.”

It is well worth exploring sectors where, as Haff says, blockchain’s strong suits might be a good fit. The automotive industry quickly stands out. Some of its fundamental characteristics and concerns – think of the massive global supply chain, the complex web of licensing, taxation, and other regulations, and the important safety and trust issues – make it a fascinating candidate for blockchain-enabled innovation…

However, one catalyst for change is that the auto industry is deeply connected with some other sectors where blockchain technology shows promise. Marta Piekarska, director of ecosystem at Hyperledger, points out several major ones: supply chain, insurance, and payments. And that’s not necessarily a comprehensive list. The vehicles we drive and ride in cross many more avenues than we may realize.

“The automotive industry might be unique in the way that it combines many other platforms: entertainment, manufacturing, tracking of CO2 emissions, payments, and many others,” she explains.

Read more at Enterprisers 

What is CI/CD?

Continuous integration (CI) and continuous delivery (CD) are extremely common terms used when talking about producing software. But what do they really mean? In this article, I’ll explain the meaning and significance behind these and related terms, such as continuous testing and continuous deployment.

Quick summary

An assembly line in a factory produces consumer goods from raw materials in a fast, automated, reproducible manner. Similarly, a software delivery pipeline produces releases from source code in a fast, automated, and reproducible manner. The overall design for how this is done is called “continuous delivery.” The process that kicks off the assembly line is referred to as “continuous integration.” The process that ensures quality is called “continuous testing” and the process that makes the end product available to users is called “continuous deployment.” And the overall efficiency experts that make everything run smoothly and simply for everyone are known as “DevOps” practitioners.
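The assembly-line analogy above can be made concrete as a toy pipeline. This is an illustrative sketch, not a real CI/CD configuration; in practice these stages would be defined in a tool such as Jenkins or GitLab CI:

```python
# Toy model of a software delivery pipeline: integration kicks off
# the line, testing gates quality, deployment releases the result.

def build(source):
    """Continuous integration: turn source into a build artifact."""
    return f"artifact({source})"

def run_tests(artifact):
    """Continuous testing: gate the artifact on quality checks."""
    if not artifact.startswith("artifact("):
        raise ValueError("broken build, stopping the line")
    return artifact

def deploy(artifact):
    """Continuous deployment: make the end product available."""
    return f"released:{artifact}"

def pipeline(source):
    # Continuous delivery is the overall design: each stage feeds
    # the next, fast, automated, and reproducible.
    return deploy(run_tests(build(source)))

print(pipeline("main@abc123"))  # → released:artifact(main@abc123)
```

The point of the model is that every release follows the same automated path, so a release is always ready to run, which is exactly what “continuous” means below.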

What does “continuous” mean?

Continuous is used to describe many different processes that follow the practices I describe here. It doesn’t mean “always running.” It does mean “always ready to run.” In the context of creating software, it also includes several core concepts/best practices. 

Read more at OpenSource.com

10 Reasons to Attend ONS Europe in September | Registration Deadline Approaching – Register & Save $605

Here’s a sneak peek at why you need to be at Open Networking Summit Europe in Amsterdam next month! But hurry – spots are going quickly. Secure your spot and register by September 1 to save $605.

Open Networking Summit, the premier open networking event in North America now in its 7th year, comes to Europe for the first time next month. This event is like no other, with content presented by your peers in the networking community, sessions carefully selected by networking specialists on the program committee, and plenty of networking and collaboration opportunities. This is an event you won’t want to miss.

Highlights include:

  1. Learn About the Future & Lessons Learned in Open Networking: Hear innovative ideas on how the landscape of networking and networking-enabled markets will be disrupted and reshaped in the next 3-5 years across AI, ML, and deep learning applied to networking, SD-WAN, IIoT, data insights, business intelligence, blockchain & telecom, and more. Get an in-depth scoop on the lessons learned from today’s global deployments.
  2. 100+ Sessions Covering Telecom, Enterprise, and Cloud Networking: With a blend of deep technical/developer sessions and business/architecture sessions, there is a plethora of learning opportunities for everyone. Plan your schedule now and choose from sessions, labs, tutorials, and lightning talks presented by Airbnb, Deutsche Telekom AG, Thomson Reuters, Huawei, General Motors, Türk Telekom, China Mobile, and many more.

Read more at The Linux Foundation

A Git Origin Story

A look at Linux kernel developers’ various revision control solutions through the years, Linus Torvalds’ decision to use BitKeeper and the controversy that followed, and how Git came to be created.

Originally, Linus Torvalds used no revision control at all. Kernel contributors would post their patches to the Usenet group, and later to the mailing list, and Linus would apply them to his own source tree. Eventually, Linus would put out a new release of the whole tree, with no division between any of the patches. The only way to examine the history of his process was as a giant diff between two full releases. 

This was not because there were no open-source revision control systems available. CVS had been around since the 1980s, and it was still the most popular system around. At its core, it would allow contributors to submit patches to a central repository and examine the history of patches going into that repository….

One of Linus’ primary concerns, in fact, was speed. This was something he had never fully articulated before, or at least not in a way that existing projects could grasp. With thousands of kernel developers across the world submitting patches full-tilt, he needed something that could operate at speeds never before imagined. 

Read more at Linux Journal

Why Locking Down the Kernel Won’t Stall Linux Improvements

The Linux Kernel Hardening Project is making significant strides in reducing vulnerabilities and increasing the effort required to exploit vulnerabilities that remain. Much of what has been implemented is obviously valuable, but sometimes the benefit is more subtle. In some cases, changes with clear merit face opposition because of performance issues. In other instances, the amount of code change required can be prohibitive. Sometimes the cost of additional security development overwhelms the value expected from it.

The Linux Kernel Hardening Project is not about adding new access controls or scouring the system for backdoors. It’s about making the kernel harder to abuse and less likely for any abuse to result in actual harm. The former is important because the kernel is the ultimate protector of system resources. The latter is important because with 5,000 developers working on 25 million lines of code, there are going to be mistakes in both how code is written and in judgment about how vulnerable a mechanism might be. Also, the raw amount of ingenuity being applied to the process of getting the kernel to do things it oughtn’t continues to grow in lockstep with the financial possibilities of doing so.

Read more at The New Stack

Top Linux Developers’ Recommended Programming Books

Without question, Linux was created by brilliant programmers who applied solid computer science knowledge. Let the Linux programmers whose names you know share the books that got them started and the technology references they recommend for today’s developers. How many of them have you read?

Linux is, arguably, the operating system of the 21st century. While Linus Torvalds made a lot of good business and community decisions in building the open source community, the primary reason networking professionals and developers adopted Linux is the quality of its code and its usefulness. While Torvalds is a programming genius, he has been assisted by many other brilliant developers.

I asked Torvalds and other top Linux developers which books helped them on their road to programming excellence. This is what they told me.

By shining C

Linux was developed in the 1990s, as were other fundamental open source applications. As a result, the tools and languages the developers used reflected the times, which meant a lot of C programming language. 

Read more at HPE

Diversity Empowerment Summit Highlights Importance of Allies

Diversity and inclusion are hot topics as projects compete to attract more talent to power development efforts now as well as build their ranks to carry the projects into the future. The Diversity Empowerment Summit co-located with Open Source Summit coming up in Vancouver August 29-31, will offer key insights to help your project succeed in these endeavors.

Although adoption of diversity and inclusion policies is generally seen as simply the right thing to do, finding good paths to building and implementing such policies within existing community cultures continues to be challenging. The Diversity Empowerment Summit, however, provides hard insights, new ideas, and proven examples to help open source professionals navigate this journey.

Nithya Ruff, Senior Director, Open Source Practice at Comcast, and member of the Board of Directors for The Linux Foundation, says “the mission of open source communities to attract and retain diverse contributors with unique talent and perspectives has gathered momentum, but we cannot tackle these issues without the support of allies and advocates.” Ruff will be moderating a panel discussion at the conference examining the role of allies in diversity and inclusion and exploring solid strategies for success.

Read more at The Linux Foundation

A Quick Reminder on HTTPS Everywhere

HTTPS Everywhere! So the plugin says, and now browsers are warning users that sites not implementing https:// are security risks. Using HTTPS everywhere is good advice. And this really means “everywhere”: the home page, everything. Not just the login page, or the page where you accept donations. Everything.

Implementing HTTPS everywhere has some downsides, as Eric Meyer points out. It breaks caching, which makes the web much slower for people limited to satellite connections (and that’s much of the third world); it’s a problem for people who, for various reasons, have to use older browsers… The real problem isn’t HTTPS’s downsides; it’s that I see and hear more and more complaints from people who run simple non-commercial sites asking why this affects them. Do you need cryptographic security if your site is a simple read-only, text-only site with nothing controversial? Unfortunately, you do. Here’s why. Since the ISPs’ theft of the web (it’s not limited to the loss of Network Neutrality, and not just an issue in the U.S.), the ISPs themselves can legally execute man-in-the-middle attacks…

Read more at O’Reilly