
Platform9 & ZeroStack Make OpenStack a Little More VMware-Friendly

Platform9 and ZeroStack are adding VMware high availability to their prefab cloud offerings, part of the ongoing effort to make OpenStack better accepted by enterprises.

OpenStack is a platform, an archipelago of open source projects that help you run a cloud. But some assembly is required. Both Platform9 and ZeroStack are operating on the theory that OpenStack will succeed better if it’s turned into more of a shrink-wrapped product. Adding a feature like high availability could help OpenStack stay on par with public clouds such as Amazon Web Services (AWS).

Read more at SDx Central

Putting Ops Back in DevOps

What Agile means to your typical operations staff member is, “More junk coming faster that I will get blamed for when it breaks.” There always is tension between development and operations when something goes south. Developers are sure the code worked on their machine; therefore, if it does not work in some other environment, operations must have changed something that made it break. Operations sees the same code perform differently on the same machine with the same config, which means if something broke, the most recent change must have caused it … i.e. the code did it. The finger-pointing squabbles are epic (no pun intended). So how do we get Ops folks interested in DevOps without promising them only a quantum order of magnitude more problems—and delivered faster?

Ops has an extended role in understanding what lives underneath the abstraction layer on top. Over time, only Ops will understand these particulars. They will become the only in-house experts on which cloud provider to use and which sets of physical hardware perform best, and under what conditions.

Read more at DevOps.com

5 Cool Unikernels Projects

Unikernels are poised to become the next big thing in microservices after Docker containers. Here’s a look at some of the cool things you can do with unikernels. First, though, here’s a quick primer on what unikernels are, for the uninitiated. Unikernels are similar to containers in that they let you run an app inside a portable, software-defined environment. But they go a step further than containers by packaging all of the libraries required to run the app directly into the unikernel.

The result is an app that can boot and run on its own. It does not require a host of any kind. That makes unikernels leaner and meaner than containers, which require a container engine such as Docker and a host operating system such as Linux to run.

Read more at Container Journal

Secure the Internet: Core Infrastructure Initiative’s Aim

VIDEO: Nicko van Someren, CTO of the Linux Foundation, discusses how the CII is moving forward to make open-source software more secure.

In the aftermath of the Heartbleed vulnerability’s emergence in 2014, the Linux Foundation created the Core Infrastructure Initiative (CII) to help prevent that type of issue from recurring. Two years later, the Linux Foundation has tasked its newly minted CTO, Nicko van Someren, to help lead the effort and push it forward.

CII has multiple efforts under way already to help improve open-source security. Those efforts include directly funding developers to work on security, a badging program that promotes security practices, and an audit of code to help identify vulnerable code bases that might need help. In a video interview with eWEEK at the LinuxCon conference, Van Someren detailed why he joined the Linux Foundation and what he hopes to achieve.

Read more at eWeek

Understanding Different Classifications of Shell Commands and Their Usage in Linux

When it comes to gaining absolute control over your Linux system, nothing comes close to the command line interface (CLI). To become a Linux power user, you must understand the different types of shell commands and the appropriate ways of using them from the terminal.

In Linux, there are several types of commands, and for a new Linux user, knowing what the different commands mean enables efficient and precise usage. Therefore, in this article, we shall walk through the various classifications of shell commands in Linux.
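As a quick illustration of those classifications (an example of ours, not from the article), the shell’s own `type` command reports how a given name will be resolved: as a builtin, a keyword, an alias, a shell function, or an external file.

```shell
# 'type' reports how the shell will resolve a name before running it:
type cd     # cd is a shell builtin
type ls     # ls may be an alias, a function, or an external file such as /bin/ls
type type   # type is a shell builtin
```

Running these in Bash shows at a glance which category each command belongs to.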


Read complete article

Linux Took Over the Web. Now, It’s Taking Over the World

On August 25, 1991, a Finnish computer science student named Linus Torvalds announced a new project. “I’m doing a (free) operating system,” he wrote on an Internet messaging system, insisting this would just be a hobby.

But it became something bigger. Much bigger. Today, that open source operating system—Linux—is one of the most important pieces of computer software in the world. Chances are, you use it every day. Linux runs every Android phone and tablet on Earth. And even if you’re on an iPhone or a Mac or a Windows machine, Linux is working behind the scenes, across the Internet, serving up most of the webpages you view and powering most of the apps you use. Facebook, Google, Pinterest, Wikipedia—it’s all running on Linux.

Read more at WIRED

Why Linux is Poised to Lead the Tech Boom in Africa

Certain emerging markets are advancing so quickly that they aren’t just speeding through the technology phases of developed countries. They’re skipping stages entirely — a phenomenon economists call “leapfrogging.”

The most visible signs of leapfrogging are in consumer technologies, including the rapid adoption of the internet, mobile phones and social media. By 2020, Sub-Saharan Africa is expected to be the world’s second-largest mobile Internet market, surpassing Europe and ranking only behind Asia-Pacific, according to Frost & Sullivan.

These advances in consumer technologies are creating a corresponding need for advances in IT infrastructure. This week, to help meet that need, IBM announced a new LinuxONE Community Cloud for Africa. Developers will have access to the cloud at no charge for 120 days to create and test their applications on IBM LinuxONE, the industry’s most powerful Linux system.

Read more at IBM’s blog.

Keep It Small: A Closer Look at Docker Image Sizing

A recent blog post, 10 things to avoid in docker containers, describes ten scenarios you should avoid when dealing with docker containers. However, recommendation #3, “Don’t create large images,” and the sentence “Don’t install unnecessary packages or run ‘updates’ (yum update) that download files to a new image layer” have generated quite a few questions. Some of you are wondering how a simple “yum update” can create a large image. To clarify the point, this post explains how docker images work and offers some solutions for keeping an image small yet up to date.

To better illustrate the problem, let’s start with a fresh Fedora 23 (or RHEL) image. (Use `docker pull fedora:23`). 
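The core of the issue, sketched in a hypothetical Dockerfile of our own (not taken from the Red Hat post): each RUN instruction creates a new image layer, and files deleted in a later layer still occupy space in the earlier one, so cleanup only shrinks the image if it happens in the same RUN as the update.

```dockerfile
# Larger image: the downloaded packages land in one layer and the cleanup
# in another, so the image still carries the full size of the first layer.
FROM fedora:23
RUN yum -y update
RUN yum clean all
```

```dockerfile
# Smaller image: update and cleanup chained in a single RUN produce one
# layer that never contains the package cache at all.
FROM fedora:23
RUN yum -y update && yum clean all
```

You can verify the difference yourself with `docker history <image>`, which lists the size of every layer.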

Read more at Red Hat Developers Blog.

Want to Work for a Cloud Company? Here’s the Cream of the Crop

What do Asana, Greenhouse Software, WalkMe, Chef Software, and Sprout Social have in common? They’ve been deemed the very best privately held “cloud” companies to work for, according to new rankings compiled by Glassdoor and venture capital firm Battery Ventures.

For “The 50 Highest Rated Private Cloud Computing Companies,” Glassdoor and Battery worked with Mattermark to come up with a list of non-public companies that offer cloud-based services, and then culled them, making sure that each entry had at least 30 Glassdoor reviews, Neeraj Agrawal, Battery Ventures general partner, told Fortune.

Read more at Fortune.

Let’s Encrypt: Every Server on the Internet Should Have a Certificate

The web is not secure. As of August 2016, only 45.5 percent of Firefox page loads are HTTPS, according to Josh Aas, co-founder and executive director of Internet Security Research Group. This number should be 100 percent, he said in his talk called “Let’s Encrypt: A Free, Automated, and Open Certificate Authority” at LinuxCon North America.

Why is HTTPS so important? Because without security, users are not in control of their data and unencrypted traffic can be modified. The web is wonderfully complex and, Aas said, it’s a fool’s errand to try to protect this thing or that. Instead, we need to protect everything. That’s why, in the summer of 2012, Aas and his friend and co-worker Eric Rescorla decided to address the problem and began working on what would become the Let’s Encrypt project.

The web is not secure because security is seen as too difficult, said Aas. But, security only involves two main requirements: encryption and authentication. You can’t really have one without the other. The encryption part is relatively easy; the authentication part, however, is hard and requires certification. As the two developers explored various options to address this, they realized that any viable solution meant they needed a new Certificate Authority (CA). And, they wanted this CA to be free, automated, open, and global.

These features break down some of the existing obstacles to authentication. For example, making authentication free makes it easy to obtain, automation brings ease of use, reliability, and scalability, and the global factor means anyone can get a certificate.

In explaining the history of the project, Aas said they spent the first couple of years just building the foundation of the project, getting sponsors, and so forth. Their initial backers were Akamai, Mozilla, Cisco, the EFF, and their CA partner was IDenTrust. In April of 2015, however, Let’s Encrypt became a Linux Foundation project, and The Linux Foundation’s organizational development support has allowed the project to focus on their technical operations, Aas said.

Built-in Is Best

Let’s Encrypt works through the ACME protocol, which is “DHCP for certificates,” Aas said. Boulder, the software that implements ACME, runs on the Let’s Encrypt infrastructure: 42 rack units of hardware split between two highly secure sites. Linux is the primary operating system, and there’s a lot of physical and logical redundancy built in.

They issue three types of certificates and have made the process of getting a certificate as simple as possible.

“We want every server on the Internet to have a certificate,” said Aas.

The issuance process involves a series of challenges between the ACME client and ACME server. If you complete all the challenges, you get a cert. The challenges, which are aimed at proving you have control over the domain, include putting a file on your web server, provisioning a virtual host at your domain’s IP address, or provisioning a DNS record for your domain. Additionally, there are three types of clients to use: simple, full-featured, and built-in — the last of which is preferred.
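The file-on-your-web-server option is the HTTP-01 challenge from the ACME specification (RFC 8555). A rough sketch of what the client does, with a made-up token and a placeholder for the account-key thumbprint:

```shell
# The ACME server issues a token; the client proves control of the domain
# by serving the token plus its account-key thumbprint at a well-known URL.
TOKEN="evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ"   # issued by the CA (made up here)
THUMBPRINT="<account-key-thumbprint>"      # derived from the client's account key
mkdir -p webroot/.well-known/acme-challenge
printf '%s.%s' "$TOKEN" "$THUMBPRINT" > "webroot/.well-known/acme-challenge/$TOKEN"
# The CA then fetches http://<your-domain>/.well-known/acme-challenge/$TOKEN
# over port 80 and, if the contents match, issues the certificate.
```

A built-in client automates exactly this dance (or the equivalent TLS or DNS variants) without the operator ever touching a file.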

“Built-in is the best client experience,” Aas said. “It all just happens for you.”

Currently, Let’s Encrypt certificates have a 90-day lifetime. Shorter lifetimes are important for security, Aas said, because they encourage automation and limit damage in the case of compromise. This is still not ideal, he noted. Revocation is not an option, so if the certificate gets stolen, you’re stuck until it expires. For some people, 90 days is still too long, and shorter lifetimes are something they’re considering. Again, Aas said, “If it’s all automated, it doesn’t matter… It just happens.”
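With an automated client, keeping up with the 90-day lifetime reduces to a scheduled job. As one possible setup (certbot is assumed here as the client; `certbot renew` only replaces certificates that are close to expiry, so running it daily is safe), a crontab entry could look like:

```shell
# Check for renewals every day at 03:00; the client is a no-op unless a
# certificate is within its renewal window.
0 3 * * * certbot renew --quiet
```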

Additionally, Aas noted that Let’s Encrypt’s policy is not to revoke certificates based on suspicion. “Do you really want CAs to be the content police of the web?” Let’s Encrypt doesn’t want to be in that position; it becomes censorship, he said.

Let’s Encrypt now has 5.3 million active certs, which equates to 8.5 million active domains. And, Aas said, 92 percent of Let’s Encrypt certificates are issued to domains that didn’t have certificates before.

He concluded by saying that we have a chance within 2016 to create a web that is more encrypted than not. You can take the next step by adopting encryption via TLS by default.