
Tech Giants Rally Today in Support of Net Neutrality

Technology giants like Amazon, Spotify, Reddit, Facebook, Google, Twitter, and many others are rallying today in a so-called “day of action” in support of net neutrality, five days ahead of the first deadline for comments on the US Federal Communications Commission’s planned rollback of the rules.

In a move that’s equal parts infuriating and exasperating, Ajit Pai, the FCC’s new chairman appointed by President Trump, wants to scrap the open internet protections installed in 2015 under the Obama administration. Those consumer protections mean providers such as AT&T, Charter, Comcast, and Verizon are prevented from blocking or slowing down access to the web.

Read more at The Verge

FD.io: Breaking the Terabit Barrier!

At launch, FD.io’s VPP technology could route/switch at half a terabit per second at multi-million FIB entry scales. Close examination of the bottlenecks revealed that performance was limited by the ability of the PCI bus to deliver packets from the NIC to the CPU. VPP had headroom to do more, but PCI bus bandwidth imposed the ceiling.

Today we are delighted to announce that this limitation has moved further out. The increased PCI bandwidth in the Intel® Xeon® Processor Scalable family has doubled the amount of traffic the PCI bus can deliver to the CPU, and VPP has risen to the occasion without requiring new software optimizations. This proves what we have long suspected: VPP can route/switch in software, at multi-million FIB entry scale, as much traffic as the PCI bus can throw at it.

Read more at FDio

The Changing Face of the Hybrid Cloud

Depending upon the event you use to start the clock, cloud computing is only a little more than 10 years old. Some terms and concepts around cloud computing that we take for granted today are newer still. The National Institute of Standards and Technology (NIST) document that defined now-familiar cloud terminology—such as Infrastructure-as-a-Service (IaaS)—was only published in 2011, although it had circulated widely in draft form for some time before that.

Among the definitions in that document was one for hybrid cloud. Looking at how that term has shifted during the intervening years is instructive: cloud-based infrastructures have moved beyond a relatively simplistic taxonomy, and the shift highlights how priorities familiar to adopters of open source software, such as flexibility, portability, and choice, have made their way to the hybrid cloud.

Read more at OpenSource.com

Dangerous Logic – De Morgan & Programming

Programmers are master logicians – well, they sometimes are. Most of the time they are as useless at it as the average Joe. The difference is that the average Joe can avoid logic, and hence the mistakes. How good are you at logical expressions, and why exactly is Augustus De Morgan your best friend, logically speaking?

It is commonly held that programming is a logical subject.

Programmers are great at working out the logic of it all and expressing it clearly and succinctly, but logic is tough to get right.

IFs and Intervals

A logical expression is just something that works out to be true or false.

Generally you first meet logical expressions as part of learning about if statements. Most languages have a construct something like…
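The excerpt trails off before showing the construct, so as a hedged illustration (not the article’s own code), here is what an `if` statement guarded by a logical expression looks like in Python, together with De Morgan’s laws, which let you rewrite a negated compound condition:

```python
# A logical expression is anything that evaluates to True or False.
age = 25
height = 180

# Typical first encounter: an if statement guarded by a logical expression.
if age >= 18 and height > 160:
    print("eligible")

# De Morgan's laws: push a negation inside a compound condition.
#   not (A and B)  ==  (not A) or  (not B)
#   not (A or  B)  ==  (not A) and (not B)
# Checking the equivalence over every truth-value combination:
for a in (True, False):
    for b in (True, False):
        assert (not (a and b)) == ((not a) or (not b))
        assert (not (a or b)) == ((not a) and (not b))
```

The practical payoff is in refactoring: a condition like `not (x > 0 and y > 0)` can be rewritten as `x <= 0 or y <= 0`, which is where the article’s “dangerous logic” usually goes wrong.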

Read more at I Programmer

How to Get Started with Kubernetes

Kubernetes, the product of work done internally at Google to solve the problem of managing containers at scale, provides a single framework for managing how containers are run across a whole cluster. The services it provides are generally lumped together under the catch-all term “orchestration,” but that covers a lot of territory: scheduling containers, service discovery between containers, load balancing across systems, rolling updates/rollbacks, high availability, and more.

In this guide we’ll walk through the basics of setting up Kubernetes and populating it with container-based applications. This isn’t intended to be an introduction to Kubernetes’s concepts, but rather a way to show how those concepts come together in simple examples of running Kubernetes.

Read more at InfoWorld

OpenStack: Driving the Future of the Open Cloud

As cloud computing continues to evolve, it’s clear that the OpenStack platform is guaranteeing a strong open source foundation for the cloud ecosystem. At the recent OpenStack Days conference in Melbourne, OpenStack Foundation Executive Director Jonathan Bryce noted that although the early stages of cloud technology emphasized public platforms such as AWS, Azure and Google, the latest stage is much more focused on private clouds.

According to the OpenStack Foundation User Survey, organizations everywhere have moved beyond just kicking the tires and evaluating OpenStack to deploying the platform. In fact, the survey found that OpenStack deployments have grown 44 percent year-over-year. More than 50 percent of Fortune 100 companies are running the platform, and OpenStack is a global phenomenon. According to survey findings, five million cores of compute power, distributed across 80 countries, are powered by OpenStack.

The typical size of an OpenStack cloud increased over the past year as well. Thirty-seven percent of clouds have 1,000 or more cores, compared to 29 percent a year ago, and 3 percent of clouds have more than 100,000 cores. You can see the survey findings, which are based on responses from 2,561 users, in this video overview.

The fact that OpenStack is built on open source is not lost on organizations deploying it. The OpenStack Foundation User Survey shows that avoiding vendor lock-in and accelerating the ability to innovate are the top reasons cited for OpenStack deployment. According to the survey, the highest number of OpenStack deployments fall within the Information Technology industry (56 percent), followed by telecommunications, academic/research, finance, retail/e-commerce, manufacturing/industrial, and government/defense.

The survey also found that most OpenStack deployments consist of on-premises private clouds (70 percent), with public cloud deployments at 12 percent. Interestingly, containers remain the top emerging technology of interest to OpenStack users. And 65 percent of organizations running OpenStack services inside containers use the Docker runtime, while nearly 50 percent of those using containers to orchestrate apps on OpenStack use Kubernetes.

Organizations are building infrastructure around OpenStack, too. Survey results show that the median user runs 61–80 percent of their overall cloud infrastructure on OpenStack, while the typical large user (deployment with 1,000+ cores) reports running 81–100 percent of their total infrastructure on OpenStack.

OpenStack skills are demonstrably in high demand in the job market, and if you are seeking training and certification, opportunities abound. The OpenStack Foundation offers a Certified OpenStack Administrator (COA) exam. Developed in partnership with The Linux Foundation, the exam is performance-based and available anytime, anywhere. It allows professionals to demonstrate their OpenStack skills and helps employers gain confidence that new hires are ready to work.

The Linux Foundation also offers an OpenStack Administration Fundamentals course, which serves as preparation for the certification, as well as comprehensive Linux training and other classes. You can explore options here. Red Hat and Mirantis offer very popular OpenStack training options as well.

For a comprehensive look at trends in the open cloud, The Linux Foundation’s Guide to the Open Cloud report is a good place to start. The report covers not only OpenStack but also well-known projects like Docker and Xen Project, and up-and-comers such as Apache Mesos, CoreOS, and Kubernetes.

Now updated for OpenStack Newton! Our Essentials of OpenStack Administration course teaches you everything you need to know to create and manage private and public clouds with OpenStack. Download a sample chapter today!

Fabric 1.0: Hyperledger Releases First Production-Ready Blockchain Software

Open-source software isn’t so much built as grown. And today, the open-source blockchain consortium Hyperledger has announced that its first production-ready solution for building applications, Fabric, has finished that process.

But even before the formal release of Fabric 1.0 today, hundreds of proofs-of-concept had been built. The platform for building shared, distributed ledgers across a number of industries has drawn contributions from 159 different engineers in 28 organizations, so no single company owns it; the project is hosted by the Linux Foundation.

For those going forward with that work, the group’s executive director Brian Behlendorf indicated that production-grade functionality is just a download and a few tweaks away. Behlendorf told CoinDesk:

“It’s not as easy as drop in and upgrade. But the intent is that anyplace where there were changes, that those changes will be justified.”

Read more at CoinDesk

How Linux Containers Have Evolved

In the past few years, containers have become a hot topic not just among developers but also among enterprises. This growing interest has created an increased need for security improvements and hardening, and for preparation around scalability and interoperability. All of this has required a lot of engineering, and here’s the story of how much of that engineering has happened at an enterprise level at Red Hat.

When I first met up with representatives from Docker Inc. (Docker.io) in the fall of 2013, we were looking at how to make Red Hat Enterprise Linux (RHEL) use Docker containers. (Part of the Docker project has since been rebranded as Moby.) We had several problems getting this technology into RHEL. The first big hurdle was getting a supported Copy On Write (COW) file system to handle container image layering. Red Hat ended up contributing a few COW implementations, including Device Mapper, btrfs, and the first version of OverlayFS. For RHEL, we defaulted to Device Mapper, although we are getting a lot closer on OverlayFS support.

The next major hurdle was the tooling to launch the container. At that time, upstream docker was using LXC tools for launching containers, and we did not want to support the LXC tool set in RHEL. Prior to working with upstream docker, I had been working with the libvirt team on a tool called virt-sandbox, which used libvirt-lxc for launching containers.

Read more at OpenSource.com

Cloud Infrastructure Spending to Reach $40B in 2017, IDC Says

IT infrastructure spending on products — servers, enterprise storage, and Ethernet switches — for cloud deployments will increase 12.4 percent year over year in 2017, to $40.1 billion, according to an IDC report.

Public cloud data centers will account for the bulk of this infrastructure spending, or 60.7 percent. These will also grow at the fastest rate year over year: 13.8 percent.

Off-premises private cloud environments will represent 14.9 percent of overall spending and will grow 11.9 percent year over year. On-premises private clouds will account for 62.2 percent of spending on private cloud IT infrastructure and will grow 9.6 percent year over year.

Read more at SDxCentral

Unikernels Are Secure. Here Is Why.

Various arguments have been put forth for why unikernels are the better choice security-wise, as well as some contradictory opinions on why they are a disaster. I believe that, from a security perspective, unikernels can offer a level of security that is unprecedented in mainstream computing.

A smaller codebase

Classic operating systems are nothing if not generic. They support everything and the kitchen sink. Since they ship in compiled form, and since users cannot be expected to compile functionality as it is needed, everything has to come prebuilt and activated. Case in point: your Windows laptop might come with various services activated (Bluetooth, file sharing, name resolution, and similar services). You might not use them, but they are there. Go to some random security conference and these services will likely be the attack vector used to break into your laptop — even though you’ve never used them.

Unikernels use sophisticated build systems that analyze the code you’re using and link in only the code that is actually used. The unused code doesn’t make it into the resulting image and doesn’t pose a security risk. Typically, unikernel images are in the 500 KB–32 MB range. Our own load balancer appliances weigh in at around 2 MB.

Read more at Unikernel