
Putting Ops Back in DevOps

What Agile means to your typical operations staff member is, “More junk coming faster that I will get blamed for when it breaks.” There is always tension between development and operations when something goes south. Developers are sure the code worked on their machine; therefore, if it does not work in some other environment, operations must have changed something that made it break. Operations sees the same code perform differently on the same machine with the same config, which means that if something broke, the most recent change must have caused it … i.e., the code did it. The finger-pointing squabbles are epic (no pun intended). So how do we get Ops folks interested in DevOps without promising them only an order of magnitude more problems, delivered faster?

Ops has an extended role in understanding what lives underneath the abstraction layer. Over time, only Ops will understand these particulars. They will become the only in-house experts on which cloud provider to use and which sets of physical hardware perform best, and under what conditions.

Read more at DevOps.com

5 Cool Unikernels Projects

Unikernels are poised to become the next big thing in microservices after Docker containers. Here’s a look at some of the cool things you can do with unikernels. First, though, here’s a quick primer on what unikernels are, for the uninitiated. Unikernels are similar to containers in that they let you run an app inside a portable, software-defined environment. But they go a step further than containers by packaging all of the libraries required to run the app directly into the unikernel.

The result is an app that can boot and run on its own. It does not require a host of any kind. That makes unikernels leaner and meaner than containers, which require a container engine such as Docker and a host operating system such as Linux to run.

Read more at Container Journal

Secure the Internet: Core Infrastructure Initiative’s Aim

VIDEO: Nicko van Someren, CTO of the Linux Foundation, discusses how the CII is moving forward to make open-source software more secure.

In the aftermath of the Heartbleed vulnerability’s emergence in 2014, the Linux Foundation created the Core Infrastructure Initiative (CII) to help prevent that type of issue from recurring. Two years later, the Linux Foundation has tasked its newly minted CTO, Nicko van Someren, to help lead the effort and push it forward.

CII has multiple efforts under way already to help improve open-source security. Those efforts include directly funding developers to work on security, a badging program that promotes security practices, and an audit of code to help identify vulnerable code bases that might need help. In a video interview with eWEEK at the LinuxCon conference, van Someren detailed why he joined the Linux Foundation and what he hopes to achieve.

Read more at eWeek

Understanding Different Classifications of Shell Commands and Their Usage in Linux

When it comes to gaining absolute control over your Linux system, nothing comes close to the command-line interface (CLI). In order to become a Linux power user, one must understand the different types of shell commands and the appropriate ways of using them from the terminal.

In Linux, there are several types of commands, and for a new Linux user, knowing the meaning of the different commands enables efficient and precise usage. Therefore, in this article, we shall walk through the various classifications of shell commands in Linux.
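A quick way to see which class a given command falls into is the shell’s own type builtin; here is a minimal sketch in bash (the output shown in the comments is typical and varies by distribution and shell configuration):

```
# "type" reports whether a name is a builtin, an alias, a function, or an external binary.
type cd          # typically: cd is a shell builtin
type ls          # often: ls is aliased to `ls --color=auto'
type -a echo     # lists every matching form, e.g. the builtin and /usr/bin/echo

# "command -V" and "which" give similar answers for external commands.
command -V grep
which grep
```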


Read complete article

Linux Took Over the Web. Now, It’s Taking Over the World

On August 25, 1991, a Finnish computer science student named Linus Torvalds announced a new project. “I’m doing a (free) operating system,” he wrote on an Internet messaging system, insisting this would just be a hobby.

But it became something bigger. Much bigger. Today, that open source operating system—Linux—is one of the most important pieces of computer software in the world. Chances are, you use it every day. Linux runs every Android phone and tablet on Earth. And even if you’re on an iPhone or a Mac or a Windows machine, Linux is working behind the scenes, across the Internet, serving up most of the webpages you view and powering most of the apps you use. Facebook, Google, Pinterest, Wikipedia—it’s all running on Linux.

Read more at WIRED

Why Linux is Poised to Lead the Tech Boom in Africa

Certain emerging markets are advancing so quickly that they aren’t just speeding through the technology phases of developed countries. They’re skipping stages entirely — a phenomenon economists call “leapfrogging.”

The most visible signs of leapfrogging are in consumer technologies, including the rapid adoption of the internet, mobile phones and social media. By 2020, Sub-Saharan Africa is expected to be the world’s second-largest mobile Internet market, surpassing Europe and ranking only behind Asia-Pacific, according to Frost & Sullivan.

These advances in consumer technologies are creating a corresponding need for advances in IT infrastructure. This week, to help meet that need, IBM announced a new LinuxONE Community Cloud for Africa. Developers will have access at no charge for 120 days to use the cloud to create and test their applications on IBM LinuxONE, which IBM bills as the industry’s most powerful Linux system.

Read more at IBM’s blog.

Keep It Small: A Closer Look at Docker Image Sizing

A recent blog post, 10 things to avoid in docker containers, describes ten scenarios you should avoid when dealing with Docker containers. However, recommendation #3, “Don’t create large images,” and the sentence “Don’t install unnecessary packages or run ‘updates’ (yum update) that download files to a new image layer” have generated quite a few questions. Some of you are wondering how a simple “yum update” can create a large image. To clarify the point, this post explains how Docker images work and offers some solutions for keeping a Docker image small while still keeping it up to date.

To better illustrate the problem, let’s start with a fresh Fedora 23 (or RHEL) image. (Use `docker pull fedora:23`). 
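As a rough illustration (not the blog post’s own walkthrough), here is a sketch using only the Docker CLI to make the extra layer visible; the image and container names are placeholders, and on Fedora 23 the yum command may actually be handled by dnf:

```
# Start from the fresh base image.
docker pull fedora:23

# Run the update in a throwaway container and commit the result as a new image;
# every package the update downloaded or replaced now lives in an extra layer
# stacked on top of the unchanged base layer.
docker run --name f23-update fedora:23 yum -y update
docker commit f23-update fedora:23-updated

# "docker history" lists each layer with its size, so the cost of the update layer
# is visible. Cleaning the yum cache in a later layer would not shrink the image;
# the cleanup only helps when it happens in the same layer as the update itself.
docker history fedora:23-updated
```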

Read more at Red Hat Developers Blog.

Want to Work for a Cloud Company? Here’s the Cream of the Crop

What do Asana, Greenhouse Software, WalkMe, Chef Software, and Sprout Social have in common? They’ve been deemed the very best privately held “cloud” companies to work for, according to new rankings compiled by Glassdoor and venture capital firm Battery Ventures.

For “The 50 Highest Rated Private Cloud Computing Companies,” Glassdoor and Battery worked with Mattermark to come up with a list of non-public companies that offer cloud-based services, and then narrowed it down, making sure that each entry had at least 30 Glassdoor reviews, Neeraj Agrawal, a Battery Ventures general partner, told Fortune.

Read more at Fortune.

Let’s Encrypt: Every Server on the Internet Should Have a Certificate

The web is not secure. As of August 2016, only 45.5 percent of Firefox page loads are HTTPS, according to Josh Aas, co-founder and executive director of Internet Security Research Group. This number should be 100 percent, he said in his talk called “Let’s Encrypt: A Free, Automated, and Open Certificate Authority” at LinuxCon North America.

Why is HTTPS so important? Because without security, users are not in control of their data, and unencrypted traffic can be modified. The web is wonderfully complex and, Aas said, it’s a fool’s errand to try to protect this thing or that. Instead, we need to protect everything. That’s why, in the summer of 2012, Aas and his friend and co-worker Eric Rescorla decided to address the problem and began working on what would become the Let’s Encrypt project.

The web is not secure because security is seen as too difficult, said Aas. But, security only involves two main requirements: encryption and authentication. You can’t really have one without the other. The encryption part is relatively easy; the authentication part, however, is hard and requires certification. As the two developers explored various options to address this, they realized that any viable solution meant they needed a new Certificate Authority (CA). And, they wanted this CA to be free, automated, open, and global.

These features break down some of the existing obstacles to authentication. For example, making certificates free makes them easy to obtain; automation brings ease of use, reliability, and scalability; and being global means anyone can get a certificate.

In explaining the history of the project, Aas said they spent the first couple of years just building the foundation of the project, getting sponsors, and so forth. Their initial backers were Akamai, Mozilla, Cisco, the EFF, and their CA partner was IDenTrust. In April of 2015, however, Let’s Encrypt became a Linux Foundation project, and The Linux Foundation’s organizational development support has allowed the project to focus on their technical operations, Aas said.

Built-in Is Best

Let’s Encrypt works through the ACME protocol, which is “DHCP for certificates,” Aas said. The Boulder software implements ACME and runs on the Let’s Encrypt infrastructure, which consists of 42 rack units of hardware across two highly secure sites. Linux is the primary operating system, and there’s a lot of physical and logical redundancy built in.

They issue three types of certificates and have made the process of getting a certificate as simple as possible.

“We want every server on the Internet to have a certificate,” said Aas.

The issuance process involves a series of challenges between the ACME client and ACME server. If you complete all the challenges, you get a cert. The challenges, which are aimed at proving you have control over the domain, include putting a file on your web server, provisioning a virtual host at your domain’s IP address, or provisioning a DNS record for your domain. Additionally, there are three types of clients to use: simple, full-featured, and built-in — the last of which is preferred.
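As a concrete illustration of the first challenge type, here is roughly what it looks like with the EFF’s certbot, one of the full-featured ACME clients (the webroot path and domain names below are placeholders):

```
# HTTP-01 ("put a file on your web server") challenge via certbot's webroot plugin.
# certbot writes a token under <webroot>/.well-known/acme-challenge/ and the
# Let's Encrypt servers fetch it over plain HTTP to prove you control the domain.
certbot certonly --webroot -w /var/www/example -d example.com -d www.example.com

# On success, the certificate and private key are placed under
# /etc/letsencrypt/live/example.com/ for the web server to point at.
ls /etc/letsencrypt/live/example.com/
```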

“Built-in is the best client experience,” Aas said. “It all just happens for you.”

Currently, Let’s Encrypt certificates have a 90-day lifetime. Shorter lifetimes are important for security, Aas said, because they encourage automation and limit damage in the case of compromise. This is still not ideal, he noted. Revocation is not an option, so if the certificate gets stolen, you’re stuck until it expires. For some people, 90 days is still too long, and shorter lifetimes are something they’re considering. Again, Aas said, “If it’s all automated, it doesn’t matter… It just happens.”
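In practice, that automation can be as simple as a scheduled renewal job; here is a sketch assuming the certbot client (the schedule and reload hook are illustrative):

```
# Crontab entry: try renewal daily. "certbot renew" only renews certificates that
# are close to expiry, and the post-hook reloads the web server to pick up new certs.
0 3 * * * certbot renew --quiet --post-hook "systemctl reload nginx"
```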

Additionally, Aas noted that Let’s Encrypt’s policy is not to revoke certificates based on suspicion. “Do you really want CAs to be the content police of the web?” Let’s Encrypt doesn’t want to be in that position; it becomes censorship, he said.

Let’s Encrypt now has 5.3 million active certs, which equates to 8.5 million active domains. And, Aas said, 92 percent of Let’s Encrypt certificates are issued to domains that didn’t have certificates before.

He concluded by saying that we have a chance within 2016 to create a web that is more encrypted than not. You can take the next step by adopting encryption via TLS by default.

Best Practices for Using Open Source in Your DevOps Toolchain

Last week, we hosted another episode of our Continuous Discussion (#c9d9) video podcast, featuring expert panelists discussing the benefits of using open source tools and how DevOps can mitigate risks in quality and security when incorporating open source into your application code, environments and tool chain.

Our expert panel included: Chris Stump, a full-stack Ruby on Rails developer in Chicago who’s big on Docker, Linux, and DevOps and currently works at Airspace Technologies; Eduardo Piairo, database administrator at Celfinet; Moritz Lenz, a software engineer, architect, and contributor to the Perl 6 language who works at noris network, where he set up their Continuous Delivery pipeline; and our very own Anders Wallgren and Sam Fell.

During the episode, panelists told viewers where they use open source in their code and processes, and discussed the quality, security and legal implications associated with using open source tools, and how DevOps can help.

Open Source – Free as in Beer or Free as in Puppy?

Stump says open source can be both free as in beer and free as in puppy: “I think it’s both. If you are new to open source, it’s definitely going to feel like it’s free as in puppy, because you are in unfamiliar territory and for any one task that you want to do it’s going to feel like there are a million different projects. Once you have your tooling, you know what everyone is using and what is well supported, then it becomes pretty easy and it becomes more like free as in beer.”

Piairo explains that open source isn’t completely free: “In open source you always have a cost; it depends on the size of your team, the complexity of your task and the frequency of change, and every change has a cost. The good thing about open source is you can contribute to the change and take it in your direction.”

Open source allows for a free flow of ideas, explains Lenz: “Some big companies like Google and Facebook open source their own stuff, and they get additional ideas for what to do with their tools and how to improve them. They also get patches. But I think the ideas are the main thing, so open source also allows us a free flow of ideas which you can then use in commercial products.”

Fell expands more on the flow of ideas in open source: “The idea of competing with the potential innovation from a cloud of people is a very difficult thing to do. You will find lots of outside/in ideas and lots of enthusiasm for those ideas.”

Culture is a big part of successful open source, says Wallgren: “As with companies, there are communities that have good open source culture and communities that have bad open source culture – because it’s people. There are open source communities that are open, that are welcoming, that are aware of their own shortcomings and strengths. Then there’s other open source communities that are like ‘Yeah, sorry we don’t really want your contribution,’ and it’s difficult to get things going.”

Where Do You Use Open Source?

Some companies work with dozens of different open source tools, explains Fell: “When we talk to customers or prospects, most of them have about 60 tools in their pipeline – 60 different combinations of things just to move something out of Source Code Repository Land and into Production Land, to help with the various configurations or monitoring that needs to be done.”

There isn’t anywhere Stump can’t use open source in his pipeline: “As a Ruby on Rails developer, which is an open source stack, I pretty much use open source through and through, from the front end using Angular or React JavaScript frameworks all the way down to the back end, to a Postgres database, with all the Ruby Gems that lie in between them and make our projects run. For servers we definitely use some Debian variants, usually Ubuntu Server; for containerization we stick with Docker.”

Wallgren advises to question what it takes to work in an open source tool: “The thing you have to be concerned about is, what is the cost of ownership for this thing? Is it something that has to grow with me; is it something where if it’s broken it’s a really big problem; or, do I have alternatives? There’s things you have to worry about, but for the most part I use open source just about anywhere. It’s just another tool.”

Lenz uses open source tools for essentially every part of the pipeline: “There are areas where we use it because it’s just the best fit, but wherever open source excels we use it and that’s basically 90% of everything that we do. We use Puppet, we use Ansible, we use all the different test frameworks for Perl and Python and for automating the browser, Selenium, all this good stuff, both inside the product as libraries, and as backing services, for authentication, and then in the pipeline, in the build tool chain, test, deployment, everything… statistics, monitoring, you name it.”

Piairo gives his advice on picking the right open source tools: “As a startup we started with a lot of open source tools, and with the evolution and complexity we started to migrate to commercial tools. We say ‘Try it before you buy it.’ We started to try different combinations: Jenkins, TFS, TFS Build, TFS Release. For testing we use tSQLt, a framework for testing databases.”

Quality Concerns?

There are three main areas to look at when assessing quality in open source tools, per Stump: “You have to know how to find the quality in open source tooling, and usually it boils down to: how active is the community, how widely used is that software, and is there a strong leadership team behind that particular project (and do they have good quality control practices)? Once you identify a tool that has all those attributes for the thing you are trying to accomplish, I think quality is just on par, if not better than, a lot of proprietary solutions.”

Open source quality requires individual responsibility, according to Fell: “When you are using open source components as part of your product what exposure do you have from a quality perspective? Not that they are any more or less quality than what you would have if you did it yourself, but it doesn’t abdicate you from taking responsibility for it when it’s there. If there is a quality problem, if it’s open source you can go in and try to fix it, but if they don’t accept your changes then you are stuck.”

Take extra quality precautions if open source is baked into your product, says Piairo: “If the tools support your pipeline, you can better manage the exposure to errors, but if the tool is included in your product then you have to assure that quality is there. Your client will talk to you if some problem happens, not the maker of the tool.”

Even though you have the freedom to change the source code in open source, it’s not an easy task to do, says Wallgren: “Even if you have the source code you still may be kind of screwed because you may not be able to build it, you may not understand it, you may not be able to document it – you now have to go solve a problem that would be nice to have somebody else solve for you.”

Security Concerns?

Quality concerns trump security concerns in open source, per Piairo: “My main concern is about quality; as for security, it’s a closed environment, so it’s more controlled. We try not to deliver open source to the client, we use it to support our activity.”

Having the right toolchain is important in ensuring open source security, says Lenz: “Last year there was a study by HP Security showing that half of the breaches they investigated were vulnerabilities that had been known for two years or longer. Whether the patch comes out this week or maybe in two weeks is not as relevant; often it is a question of, do I have the toolchain to notice that I have to build a new product and ideally automatically upgrade, build, integrate, test and then release the product.”

Having code easily visible to any and all means flaws can actually be fixed more quickly in open source, says Stump: “I would say that with open source it’s like a double-edged sword, because the code is open, so there are more eyeballs on the code to find security flaws, but there are also more eyeballs on the code that can exploit those flaws. But most of the time I think it leads to them getting discovered and patched quickly.”

Transparency is key to addressing security concerns in open source, says Fell: “Transparency is very important. Apple had a security case where the FBI tried to hack into the phone. Up until now Apple had never released the source code for the kernel of the iPhone, and just this last time around, when they did their SDK for the developers for iOS 10, they apparently released the source code un-obfuscated so that people could really start to dig in. You ask yourself, will people find exploits? Probably. Will there be transparency around those exploits? Probably.”

More testing needs to be done to ensure the security of open source code, advises Wallgren: “Just because somebody finds a problem doesn’t mean it’s automatically going to be updated and patched in all the applications that use the open source platform, because updating is a big deal. The lack of unit testing in open source code is pretty deplorable. It’s stunning how easy some of these things get by and how long bugs can sit there, and the mean time to discovery for bugs is pretty long. Everyone assumes someone else is reading the code, finding the bugs, and fixing them, and is that really true?”

Legal Concerns?

Be prepared to share what open source is in your product or code, says Fell: “When you are trying to acquire a company or a solution, a lot of the time they’ll say, ‘Tell me what open source components that aren’t your code are within your code.’ It’s not necessarily a black mark against you, but it’s something you need to be aware of.”

Even Electric Cloud clients ask to know what open source or third-party tools we use in our products, explains Wallgren: “This is something a decent portion of our customer community cares about: what third-party tools, not just open source tools, are we using, what are the relevant licenses, are we allowed to ship it? All of those things are concerns of anybody who even buys our software.”

Stump recommends spending more time learning the different open source licenses to help you pick the right tool for your needs: “Open source, I like to think, maybe demands a little more respect rather than just clicking through on the 48 pages of the user license agreement that we are all used to seeing, simply because it’s people’s free time and people aren’t getting paid for it generally and there are a lot of different open source licenses out there. If you work with and deal with open source you need to be aware of the difference between the MIT license, BSD license, GPL, that can affect your tooling choices.”

Lenz reminds us that there are legal implications for all types of software: “Proprietary software can also come at a high legal risk, for example there is software that is licensed by the CPU, by the core count, so when you are in a virtualization environment, can you safely use that software? If yes, for which cores do you pay? Do you pay for the cores that are assigned for the virtual machine? To the whole cluster? There are actually very valid legal reasons not to use some types of proprietary source software.”

Watch the full episode here