
What Is A Distributed System?

“Hello world!”

The simplest application to write and operate is one that runs in one thread on a single processor. If that’s so easy, why on earth do we ever build anything else? Usually because we also want operational and developmental performance. A single server is a physical and organisational limitation on what you can achieve with an application.

Machine Performance (AKA Things)

There are three main operational performance issues introduced by running on a single machine.

  • Scale. You might want more CPU, memory or storage than is available on one server no matter how big, or it might be more efficient (cost/server utilization) to use machines with different properties for different functions. For example: CPUs vs GPUs.
  • Resilience. Any software or piece of physical hardware will crash (even mainframes die eventually). A single server is a single point of failure.
  • Location. “Propinquity” means useful proximity. Unless the only user of your application will be sitting at a keyboard plugged into your server, eventually your single-machine application will need to talk to something else.

Read more at Container Solutions

Linux And Windows Machines Being Attacked By “Zealot” Campaign To Mine Cryptocurrency

As the cryptocurrency craze reaches new heights, cybercriminals are looking for new methods to steal digital coins. In the past, we have seen methods like cryptojacking and spearphishing attacks. In a related development, security researchers have found a new malware campaign to mine cryptocurrency.

Named the Zealot Campaign, this malware targets Linux and Windows machines on an internal network. The most notable characteristic of Zealot is its use of the NSA's EternalBlue and EternalSynergy exploits.

Read more at FOSSBytes

PowerfulSeal: A Testing Tool for Kubernetes Clusters

Bloomberg has adopted Kubernetes, the open source system for deploying and managing containerized applications which has gained a great deal of industry momentum, in its infrastructure. As a result, systems are becoming more distributed than ever before, running on machines scattered around the globe and across the cloud. This means there are more moving parts, any of which could fail for a long list of reasons.

Systems engineers want to feel confident that the complex systems they’ve built will withstand problems and keep running. To do that, they run batteries of elaborate tests designed to simulate all sorts of problems. But it’s impossible to imagine every potential problem, let alone plan for all of them.

Read more at Tech at Bloomberg

What Are Containers and Why Should You Care?

What are containers? Do you need them? Why? In this article, we aim to answer some of these basic questions.

But, to answer these questions, we need more questions.  When you start considering how containers might fit into your world, you need to ask: Where do you develop your application? Where do you test it and where is it deployed?

You likely develop your application on your work laptop, which has all the libraries, packages, tools, and frameworks needed to run that application. It's tested on a platform that resembles the production machine, and then it's deployed in production. The problem is that not all three environments are the same; they don't have the same tools, frameworks, and libraries. And the app that works on your dev machine may not work in the production environment.

Containers solve that problem. As Docker explains, “a container image is a lightweight, standalone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings.”

What this means is that once an application is packaged as a container, the underlying environment doesn’t really matter. It can run anywhere, even on a multi-cloud environment. That’s one of the many reasons containers became so popular among developers, operations teams, and even CIOs.
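As a rough illustration of what "everything needed to run it" means in practice (the base image, file names, and commands here are hypothetical, not taken from the article), a container image is typically described in a short build file such as a Dockerfile:

```dockerfile
# Hypothetical example: packaging a small Python app as a container image.
# The base image supplies the runtime, system tools, and system libraries.
FROM python:3.9-slim

# Install the app's library dependencies inside the image,
# so they no longer need to exist on the host machine.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Add the application code and define how the container starts.
COPY app.py .
CMD ["python", "app.py"]
```

Once built, the same image can be run unchanged on a developer laptop, a test server, or a production host, which is exactly the dev/test/prod mismatch described above that containers are meant to eliminate.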

Containers for developers

Now developers or operators don’t have to concern themselves with what platforms they are using to run applications. Devs don’t have to tell ops that “it worked on my system” anymore.

Another big advantage of containers is isolation and security. Because containers isolate the app from the platform, the app remains safe and keeps everything around it safe. At the same time, different teams can run different applications on the same infrastructure at the same time — something that’s not possible with traditional apps.

Isn’t that what virtual machines (VMs) offer? Yes and no. VMs do offer isolation, but they have massive overhead. In a white paper, Canonical compared containers with VMs and wrote, “Containers offer a new form of virtualization, providing almost equivalent levels of resource isolation as a traditional hypervisor. However, containers are lower overhead both in terms of lower memory footprint and higher efficiency. This means higher density can be achieved — simply put, you can get more for the same hardware.” Additionally, VMs take longer to provision and boot; containers can be spun up in seconds.

Containers for ecosystem

A massive ecosystem of vendors and solutions now enable companies to deploy containers at scale, whether it’s orchestration, monitoring, logging, or lifecycle management.

To ensure that containers run everywhere, the container ecosystem came together to form the Open Container Initiative (OCI), a Linux Foundation project to create specifications around two core components of containers — container runtime and container image format. These two specs ensure that there won’t be any fragmentation in the container space.

For a long time, containers were specific to the Linux kernel, but Microsoft has been working closely with Docker to bring support for containers to Microsoft’s platform. Today you can run containers on Linux, Windows, Azure, AWS, Google Compute Engine, Rackspace, and mainframes. Even VMware is adopting containers with vSphere Integrated Containers (VIC), which lets IT pros run containers and traditional workloads on their platforms.

Containers for CIOs

Containers are very popular among developers for all the reasons mentioned above, and they offer great advantages for CIOs, too. The biggest advantage of moving to containerized workloads is changing the way companies operate.

Traditional applications have a lifecycle of about a decade. New versions are released after years of work, and because they are platform dependent, sometimes they don't see production for years. Due to this lifecycle, developers try to cram in as many features as they can, which can make the application monolithic, big, and buggy.

This process affects the innovative culture within companies. When people don't see their ideas translated into products for months or years, they become demotivated.

Containers solve that problem, because you can break the app into smaller microservices. You can develop, test, and deploy in a matter of weeks or days. New features can be added as new containers, and they can go into production as soon as they are out of testing. Companies can move faster and stay ahead of their competitors. This approach breeds innovation, as ideas can be translated into containers and deployed quickly.

Conclusion

Containers solve many problems that traditional workloads face. However, they are not the answer to every problem facing IT professionals. They are one of many solutions. In the next article, we’ll cover some of the basic terminology of containers, and then we will explain how to get started with containers.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

As Kubernetes Surged in Popularity in 2017, It Created a Vibrant Ecosystem

For a technology that the average person has probably never heard of, Kubernetes surged in popularity in 2017 with a particular group of IT pros who are working with container technology. Kubernetes is the orchestration engine that underlies how operations staff deploy and manage containers at scale. (For the low-down on containers, check out this article.)

In plain English, that means that as the number of containers grows, you need a tool to help launch and track them all. And because the idea of containers — and the so-called “microservices” model it enables — is to break down a complex monolithic app into much smaller and more manageable pieces, the number of containers tends to increase over time. Kubernetes has become the de facto standard tool for that job.

Kubernetes is an open source project, originally developed at Google, that is now managed by the Cloud Native Computing Foundation (CNCF).

Read more at TechCrunch

How to Market an Open Source Project

The widely experienced and indefatigable Deirdré Straughan presented a talk at Open Source Summit NA on how to market an open source project. Deirdré currently works with open source at Amazon Web Services (AWS), although she was not representing the company at the time of her talk. Her experience also includes stints at Ericsson, Joyent, and Oracle, where she worked with cloud and open source over several years.

Through it all, Deirdré said, the main mission in her career has been to “help technologies grow and thrive through a variety of marketing and community activities.” This article provides highlights of Deirdré’s talk, in which she explained common marketing approaches and why they’re important for open source projects.

Read more at The Linux Foundation

Ops Checklist for Monitoring Kubernetes at Scale

By design, the Kubernetes open source container orchestration engine is not self-monitoring, and a bare installation will typically only have a subset of the monitoring tooling that you will need. In a previous post, we covered the five tools for monitoring Kubernetes in production, at scale, as per recommendations from Kenzan.

However, the toolset your organization chooses to monitor Kubernetes is only half of the equation. You must also know what to monitor, what processes to put in place to assimilate the results of monitoring, and how to take appropriate corrective measures in response. This last item is often overlooked by DevOps teams.

All of the Kubernetes components — container, pod, node and cluster — must be covered in the monitoring operation. Let’s go through monitoring requirements for each one.
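As one hedged illustration of covering those layers (this assumes a Prometheus-based monitoring setup, which the article does not specify; the job names are hypothetical), Kubernetes service discovery can be used to scrape the node and pod/container levels automatically:

```yaml
# Hypothetical Prometheus scrape configuration covering two of the
# layers named above; discovery keeps the target list current as
# nodes and pods come and go.
scrape_configs:
  - job_name: 'kubernetes-nodes'
    # Node-level metrics (kubelet/cAdvisor expose CPU, memory, disk).
    kubernetes_sd_configs:
      - role: node
  - job_name: 'kubernetes-pods'
    # Pod/container-level metrics for every pod in the cluster.
    kubernetes_sd_configs:
      - role: pod
```

Cluster-level health (control-plane components, API server latency) would need its own jobs on top of this sketch; the point is that each layer is an explicit, separately configured monitoring target rather than something a bare installation gives you for free.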

Read more at The New Stack

Xen Project Member Spotlight: Bitdefender

The Xen Project comprises a diverse set of member companies and contributors committed to the growth and success of the Xen Project Hypervisor. The Xen Project Hypervisor is a staple technology for server and cloud vendors, and is gaining traction in the embedded, security, and automotive spaces. This blog series highlights the companies contributing to the changes and growth of the Xen Project, and how Xen Project technology bolsters their business.

When did you join the Xen Project and why/how is your organization involved?
Bitdefender has been collaborating with The Linux Foundation for the past three years, and has been active within the Xen Project community, especially around Virtual Machine Introspection, for about the same time. We officially joined the Xen Project toward the end of 2017. We are focused on security, which is core to the philosophy of the Xen Project.

Read more at Xen Project

OpenStack SDN – OpenDaylight With BGP VPN

In this post I’ll demonstrate how to build a simple OpenStack lab with OpenDaylight-managed virtual networking and integrate it with a Cisco IOS-XE data centre gateway using EVPN.

For the last five years, OpenStack has been the training ground for a lot of emerging DC SDN solutions. The OpenStack integration use case was one of the most compelling and easiest to implement, thanks to the limited and suboptimal implementation of the native networking stack. Today, in 2017, features like L2 population, local ARP responder, L2 gateway integration, distributed routing, and service function chaining have all become available in vanilla OpenStack and no longer require a proprietary SDN controller. Admittedly, some of these features are still not (and may never be) implemented in the most optimal way (e.g. DVR). This is where new open source SDN controllers, the likes of OVN and Dragonflow, step in to provide scalable, elegant, and efficient implementations of these advanced networking features. However, one major feature still remains outside the scope of a lot of these new open source SDN projects, and that is data centre gateway (DC-GW) integration. Let me start by explaining why you would need this feature in the first place.

Read more at Network-Oriented Programming

Kali Linux 2017.3 Hands-On: The Best Alternative to Raspbian for Your Raspberry Pi

The latest release of this excellent security, forensic, and penetration testing Linux distribution is everything I have come to expect from the software and more, with both PC (32 and 64 bit) and Raspberry Pi images.

The new release, 2017.3, is primarily a roll-up, incorporating all patches and updates issued since the last release into a clean set of installation images. Remember, though, Kali is a rolling-release distribution, so if you already have it installed you don’t need to reinstall from these new images; just make sure that you have the latest updates installed.

If you do want or need to make a fresh installation, the distribution images for the PC version (32 and 64 bit) can be obtained from the Kali downloads page. There are a number of different versions there, and people sometimes get confused by them, so here is a quick summary:

Read more at ZDNet