
OpenStack Jobs Are Growing and There’s Plenty of Seats at the Table

OpenStack’s adoption by business users has created an opportunity for devs, architects, sysadmins and engineers to pay the rent by working on free software, and there are plenty of open seats at the table.

OpenStack has seen rapid growth since its beginnings in 2010, when 75 developers gathered to contribute to the project; by 2016, the community counted more than 59,110 members and 20 million lines of code. OpenStack’s maturity has been praised by analysts like Forrester, who say that “OpenStack meets the needs of production workloads and is ready to enable CIOs in tackling the strategic requirements of their business.”

Part of OpenStack’s success is its adoption by business users, whether they’re offering services that run atop OpenStack, using OpenStack to power key internal operations, or a blend of both. While OpenStack’s license doesn’t require contributions back to the code, the vast majority of companies understand the importance of participating in OpenStack’s development, and they staff positions across the stack to do just that. Even the companies that aren’t able to contribute code spend time participating in community events and IRC chats.

If you’re not already working on OpenStack, finding your first OpenStack job can feel daunting. In this four-part series, we’ll start by taking a step back to discuss why you might want to work on OpenStack, then debunk some common myths about OpenStack and its ecosystem, talk about navigating the OpenStack community, and share resources to get you started as a professional Stacker.

Why you might want to work on OpenStack

It’s growing

The OpenStack ecosystem has seen steady growth that’s only anticipated to climb. When we say “ecosystem,” we’re referring to the vendors, enterprises, service providers and training partners whose products directly or indirectly touch OpenStack. Whether it’s these organizations or OpenStack end users, they all need OpenStack talent.

A 451 Research report “expects total OpenStack-related revenue to exceed $2.4 billion by 2017,” nearly triple its 2014 total. [1] This growth isn’t limited to a particular geographic region, which makes for an internationally vibrant community as well as a globe of opportunities.

It’s powering amazing things

You don’t have to look hard to find an OpenStack user; Walmart, Cisco, the MIT Computer Science and Artificial Intelligence Lab, GMO Internet, NTT, Time Warner Cable, NeCTAR and China Mobile represent just a small slice of OpenStack users. Retail, finance, healthcare, scientific research and media segments are all leveraging OpenStack to solve their organization-specific challenges.

For Betfair, the world’s largest Internet betting exchange, OpenStack was the solution to support its 2.7 billion daily API calls and 120 million daily transactions. At CERN, OpenStack makes it possible to deliver data from the Large Hadron Collider to more than 11,000 users at around 150 sites worldwide, while securely changing permission access for an average of 200 individuals each month. KakaoTalk, a South Korean VoIP app, turned to OpenStack to keep the region connected through a setup that involves more than 5,000 VMs.

A day in the life means working on free software

“OpenStack has done an amazing job of proving that companies can stick whole teams of hackers on a free software project, without it being counter to their core business principles,” says Jeremy Stanley, an infrastructure engineer with the OpenStack Foundation and a member of both OpenStack’s Infra and Vulnerability Management Teams. For an OpenStack professional, a day in the life includes not only using this software to solve organization-specific problems, but also sharing best practices and new ideas with the community as you encounter them.

The demand for OpenStack professionals is increasing just as quickly as the ecosystem is growing. According to Indeed, the number of OpenStack job listings doubled in 2015. And since OpenStack is not a proprietary solution, skills learned and experience developed are transferable anywhere within the ecosystem, making OpenStack a “highly transferable specialty,” a rarity among career fields.

Now you have questions

How do I become an OpenStack contributor? When do the releases come out? How do I find out about community events? If there’s a question you’re dying to have answered, tweet us at @OpenStack, and we’ll do our best to include it in next week’s installment!

Want to learn the basics of OpenStack? Take the new, free online course from The Linux Foundation and edX. Register Now!

The OpenStack Summit is the most important gathering of IT leaders, telco operators, cloud administrators, app developers and OpenStack contributors building the future of cloud computing. 

Hear business cases and operational experience directly from users, learn about new products in the ecosystem and build your skills at OpenStack Summit, Oct. 25-28, 2016, in Barcelona, Spain. Register Now!

[Bit]coin Flipping: It’s Up To the Developers How Soon Blockchain Goes Mainstream

The discussion about blockchain’s adoption is gaining momentum, but where are we now? How far are we from seeing blockchain in all industries, and how can we help speed up the process? We talked to Brian Behlendorf, Executive Director of the Hyperledger Project, about all this and more.

It’s been four months since Brian Behlendorf became the Executive Director of the Hyperledger Project. We talked to him about his latest blog post in which he claims that Hyperledger is an “umbrella” for software developer communities building open source blockchain and related technologies.

Read more at Jaxenter

Highly Available & Distributed Containers by Kendrick Coleman, EMC {code}

https://www.youtube.com/watch?v=tZ5dYxpVjcQ&list=PLbzoR-pLrL6qBYLdrGWFHbsolIdJIjLnN

Learn how to scale a typical 3-tier app using Swarm, serve a persistent database with Docker volume drivers, and tie them all together on a single private network with libnetwork.

Docker + Golang = <3

This is a short collection of tips and tricks showing how Docker can be useful when working with Go code. For instance, I’ll show you how to compile Go code with different versions of the Go toolchain, how to cross-compile to a different platform (and test the result!), and how to produce really small container images.

The following article assumes that you have Docker installed on your system. It doesn’t have to be a recent version (we’re not going to use any fancy features here).
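To make those tips concrete, here is a minimal sketch of the build-in-a-container pattern the post describes. The file name, image tag, and output names are illustrative assumptions of this sketch, not details from the post:

```go
// main.go: a trivial program for exercising containerized Go builds.
//
// Typical commands for the tricks described above (the golang:1.7 image
// tag is an assumption; any golang:<version> tag works the same way):
//
//   # Compile with a specific toolchain version, no local Go install needed:
//   docker run --rm -v "$PWD":/src -w /src golang:1.7 go build -o hello
//
//   # Cross-compile for another platform by overriding GOOS/GOARCH:
//   docker run --rm -v "$PWD":/src -w /src \
//     -e GOOS=linux -e GOARCH=arm64 golang:1.7 go build -o hello-arm64
//
// A statically linked binary built this way can be copied into a minimal
// base image (even FROM scratch) to produce a very small container.
package main

import "fmt"

func main() {
	fmt.Println("Hello from a containerized Go build!")
}
```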

Read more at Docker blog

14 DevOps Vendors Link Up to Simplify Enterprise Adoption of ‘Best of Breed’ Tools

Just as DevOps seeks to bring together the traditionally separate development and operations functions, so 14 vendors of technology and services designed to assist this process have come together to form a new initiative called DevOps Express.

The idea behind DevOps Express is to streamline the way enterprises transform their software development and delivery environments to embrace DevOps. Its members include software firms, as well as providers of consulting, training and professional services. The initiative was founded by continuous delivery firm CloudBees and software supply chain automation vendor Sonatype. The remaining members are Atlassian, BlazeMeter, CA Technologies, Chef, DevOps Institute, GitHub, Infostretch, JFrog, Puppet, Sauce Labs, SOASTA and SonarSource.

Read more at Computing

Telemetry: What It Is, What It Isn’t, and Why It’s Important in Distributed Systems

In this episode of The New Stack Makers, we learn about the nuances behind software-defined infrastructure, how new approaches to telemetry are changing the way users interact with their data and the ways that distributed analytics can be put into practice in the enterprise. The New Stack founder Alex Williams spoke with Intel Software-Defined Infrastructure (SDI) Distributed Analytics Engineer Brian Womack during the 2016 Intel Developer Forum (IDF) in San Francisco to get his thoughts on these topics and more.

In his role at Intel, Womack explained, the concept of data as we recognize it today has shifted. Rather than working with traditional analytics, many of today’s platforms and services are taking a distributed approach to data and their infrastructures. “We introduced a term here at IDF called a software-defined resource. There’s four types: processor, memory, fabric and storage. People who manage data centers have collected telemetry in the past to try to observe what software-defined resources are doing so that you can do something about it,” Womack said.

Read more at The New Stack

Why Enterprises Are Embracing Microservices and Node.js

This contributed piece is from a speaker at Node.js Interactive Europe, an event offering an in-depth look at the future of Node.js from the developers who are driving the code forward, taking place in Amsterdam from September 15 to September 18.

Most software projects start with solving one problem. Then comes another, and the project keeps growing until the engineering team can no longer cope with it.

This is how monoliths are built. Every new feature gets added to the existing application making it more and more complex. Scaling becomes hard and resource-wasting since everything has to be scaled together. Deployment turns into a nightmare thanks to the million lines of code waiting to be pushed into production every time. Meanwhile, management will encounter grave challenges with coordinating large, siloed teams interfering with each other.

Read more at The New Stack

Mirantis Acquires TCP Cloud to Advance Kubernetes Ambitions

Moving to accelerate the rate at which the OpenStack cloud platform can be hosted on the Kubernetes container orchestration platform, Mirantis today announced it has acquired TCP Cloud.

Based in Prague, TCP Cloud provides managed services around deployments of OpenStack, OpenContrail and Kubernetes technologies. Mirantis CEO Alex Freedland says the addition of technology developed by TCP Cloud will reduce the amount of time it would have taken Mirantis to move OpenStack to Kubernetes by six to nine months. As a result, he says, Mirantis expects to show the first fruits of a joint development effort involving CoreOS, Google and Intel in the first quarter of 2017.

Read more at Container Journal

DevOps and the Art of Secure Application Deployment

Secure application deployment principles must extend from the infrastructure layer all the way through the application and include how the application is actually deployed, according to Tim Mackey, Senior Technical Evangelist at Black Duck Software. In his upcoming talk, “Secure Application Development in the Age of Continuous Delivery” at LinuxCon + ContainerCon Europe, Mackey will discuss how DevOps principles are key to reducing the scope of compromise and examine why it’s important to focus efforts on what attackers view as vulnerable.


Linux.com: You say that the prevalence of microservices makes it imperative to focus on vulnerabilities. Are microservices inherently more vulnerable or less? Can you explain?

Tim Mackey: With every new development pattern, we need to ensure operations and security teams are deeply involved in deployment plans so their vulnerability response plans keep pace. Microservices development doesn’t change that requirement, even with its focus on creating tasks that perform a single operation. When developing a microservice, we’re already thinking about the minimum code required to perform the task. We’re also thinking about ways to reduce the attack surface. This makes vulnerability planning a logical component of the design process, and by extension something that should be communicated throughout the component lifecycle.

If we make an assumption that our services are deployed using continuous delivery, we’re also accepting more frequent deployments for our services. This gives us an opportunity to resolve security issues as they arise, potentially without outage windows or downtime. In such an environment, we really want active monitoring for vulnerability disclosures not only for what we’ve deployed, but also what’s in our library and currently under development.

One other point to note: If the lifespan of a given microservice is very short, we’ve raised the bar for attackers. While that’s a really good thing, we don’t want to become complacent about vulnerability planning. After all, a short service lifespan can also mask attempts at malicious activity and addressing that should be part of a microservice-centric vulnerability response plan.

Linux.com: Can you give us some examples of how vulnerabilities get into production deployments?

Tim: We see from numerous sources that open source development models are the de facto norm in 2016. The freedom developers have to incorporate ideas from other projects, either directly or via a fork, has increased the pace of innovation. It is precisely this freedom which provides an avenue for upstream security issues to impact downstream projects.

For practical purposes, we can assume that most code – open source or otherwise – has some critical bug with exploit potential. It’s not uncommon to find that such bugs have been present in code for significant periods of time and may have been subject to multiple reviews and even tests from a variety of tools. Eventually, a security researcher identifies the significance of the bug as a security issue, and a vulnerability report is disclosed.

Once disclosed, the big question for users becomes “is this issue present in our environment?” If we were talking about packaged commercial products, it would be up to the vendor to provide both a fix and guidance for mitigation. Open source projects also provide guidance and fixes, but only for direct usage of their components. With the source for a given product often coming from upstream efforts, tracking the provenance of the source and associated security issues is a critical requirement for any vulnerability response plan.

Linux.com: How can those be mitigated? What are some tools to determine the vulnerabilities?

Tim: Mitigation starts with understanding the scope of the problem, and ends with implementation of some form of “fix.” Unfortunately, the available information on a vulnerability is often written for developers and the people needing to perform the mitigation are on the operations side.

If we consider the glibc vulnerability from February, CVE-2015-7547, the bug was first reported in July 2015, and over the course of nine months, the development team determined the nature of the bug, then how to fix it, and subsequently disclosed it as a vulnerability. This is the normal process for most vulnerabilities disclosed against projects under active development. In the case of CVE-2015-7547, the disclosure occurred first on the project list and two days later in the National Vulnerability Database (NVD) maintained by NIST. The contents of the NVD are freely available and form the basis of many vulnerability scanning solutions, including the Black Duck Hub.
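To make “freely available” concrete, the sketch below pulls a single CVE record from the NVD. The REST endpoint is the NVD’s current public JSON API, which is an assumption of this example rather than a detail from the interview (in 2016 the same data shipped as downloadable feeds); a real scanner would parse the record and match affected versions against its own inventory.

```go
// nvdlookup.go: a hedged sketch of fetching one CVE record from the NVD.
// The endpoint below is the NVD's public JSON API, an assumption of this
// example, not something Mackey describes.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	cve := "CVE-2015-7547" // the glibc vulnerability discussed above
	url := "https://services.nvd.nist.gov/rest/json/cves/2.0?cveId=" + cve

	resp, err := http.Get(url)
	if err != nil {
		fmt.Fprintln(os.Stderr, "request failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read failed:", err)
		os.Exit(1)
	}
	fmt.Println(string(body)) // raw JSON record for the CVE
}
```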

What differentiates basic vulnerability scanning solutions from the leaders are two key attributes:

  • Independent security research activities. These activities are primarily focused on identifying activity within projects that signals an impending disclosure. In the case of CVE-2015-7547, such research would have identified the impending disclosure from the development list activity.

  • Breadth of the underlying knowledge base against which potentially vulnerable code is validated. As I mentioned earlier, vulnerable code is often incorporated from multiple sources and disclosed against specific product versions. Being able to clearly identify the vulnerable aspects of a project based on commits allows for easier identification of latent vulnerabilities in forked code.

Linux.com: What level of certainty can be achieved regarding the vulnerability status of a container?

Tim: Like any security process, container vulnerability status is best determined using a variety of tools, each with a clear focus, and each gating the delivery of a container image into a production registry. This includes static and dynamic analysis tools, but a comprehensive vulnerability plan also requires active monitoring of dependent upstream and forked components for their vulnerability status. No single tool will ever guarantee that a container is free of known vulnerabilities, let alone free of vulnerabilities entirely. In other words, even if you follow every available best practice and create a container image with no known issues, that doesn’t mean that a day later vulnerabilities won’t be disclosed in a dependent component.

Linux.com: It sounds like DevOps principles come into play in achieving greater security. Can you explain further?

Tim: DevOps principles are absolutely a key component to reducing the scope of compromise from any vulnerability. The process starts with a clear understanding of what upstream components are included in any container image available for deployment. This builds a level of trust for a container image and a requirement that only trusted images can be deployed. From there, a set of deployment requirements can be created which govern the expected usage for the container. This includes simple things like network configuration, but also extends to container runtime security elements like SELinux profiles and required kernel capabilities.
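As a rough illustration of turning such deployment requirements into code rather than a runbook, the sketch below launches a container with all kernel capabilities dropped, one added back, and privilege escalation disabled. The image name and the specific capability are hypothetical; the docker run flags themselves are standard:

```go
// rundeploy.go: a sketch of deployment requirements expressed as code.
// The image name and capability choices are hypothetical examples.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "run", "--rm",
		"--cap-drop=ALL",                   // start from zero kernel capabilities
		"--cap-add=NET_BIND_SERVICE",       // add back only what this service needs
		"--security-opt=no-new-privileges", // block privilege escalation in the container
		"--read-only",                      // immutable root filesystem
		"example/webapp:1.0")               // hypothetical trusted image
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```

When the host enforces SELinux, a profile can be pinned the same way through docker’s --security-opt label options.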

Once these items are in place, a vulnerability response plan can be created, one which understands what the scope of compromise might be for a given container. The vulnerability response plan needs to start with an implicit assumption that a sufficiently motivated bad actor will eventually be attracted to the container and gain control of it to some degree. It’s then up to the operations team to determine what could happen next, and given the implicit assumption of control of a container behind a perimeter defense, those defenses may not be sufficient to limit the scope of compromise. Monitoring of deployed containers absolutely must include continuous validation of the state of trust for container images, and the vulnerability response plan must include procedures should that trust be questioned.

You won’t want to miss the stellar lineup of keynotes, 185+ sessions and plenty of extracurricular events for networking at LinuxCon + ContainerCon Europe in Berlin. Secure your spot before it’s too late! Register now.

Hitchhiker’s Guide to IoT Standards and Protocols

When it comes to IoT protocols, there’s no one clear winner. This article outlines communications protocols from MQTT and Wi-Fi to less well-known offerings, focusing on a framework for thinking about the problem of standards, protocols, and radios.

The framework depends, of course, on whether your deployment is internal, such as in a factory, or external, such as a consumer product. In this conversation, we’ll focus on products that are launching externally to a wider audience of customers, and for that, we have a lot to consider.

Let’s look at the state of the IoT right now: bottom line, there’s not a standard so prolific or significant that you’re making a mistake by not using it. What we want to do, then, is pick the thing that solves our problem as closely as possible at an acceptable cost to implement and scale, and not worry too much about predicting the future popularity of that standard.

Read more at DZone