This is a short collection of tips and tricks showing how Docker can be useful when working with Go code. For instance, I’ll show you how to compile Go code with different versions of the Go toolchain, how to cross-compile to a different platform (and test the result!), or how to produce really small container images.
The following article assumes that you have Docker installed on your system. It doesn’t have to be a recent version (we’re not going to use any fancy features here).
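To give a flavor of what's ahead, here's a minimal sketch using the official golang image (the 1.7 tag and the /usr/src/myapp mount path are just examples; use whatever version and layout fit your project):

```
# Build with a specific version of the Go toolchain, without installing it locally.
docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp golang:1.7 go build -v

# Cross-compile for another platform by overriding GOOS and GOARCH;
# the go tool picks these up from the environment.
docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp \
  -e GOOS=linux -e GOARCH=arm golang:1.7 go build -v
```

The "really small images" part usually follows from the same idea: compile a static binary this way, then copy just that binary into a minimal base image such as scratch.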
Just as DevOps seeks to bring together the traditionally separate development and operations functions, so 14 vendors of technology and services designed to assist this process have come together to form a new initiative called DevOps Express.
The idea behind DevOps Express is to streamline the way enterprises transform their software development and delivery environments to embrace DevOps. Its members include software firms, as well as providers of consulting, training and professional services. The initiative was founded by continuous delivery firm CloudBees and software supply chain automation vendor Sonatype. The remaining members are Atlassian, BlazeMeter, CA Technologies, Chef, DevOps Institute, GitHub, Infostretch, JFrog, Puppet, Sauce Labs, SOASTA and SonarSource.
In this episode of The New Stack Makers, we learn about the nuances behind software-defined infrastructure, how new approaches to telemetry are changing the way users interact with their data and the ways that distributed analytics can be put into practice in the enterprise. The New Stack founder Alex Williams spoke with Intel Software-Defined Infrastructure (SDI) Distributed Analytics Engineer Brian Womack during the 2016 Intel Developer Forum (IDF) in San Francisco to get his thoughts on these topics and more.
In his role at Intel, Womack explained that the concept of data as we recognize it today has shifted. Rather than working with traditional analytics, many of today's platforms and services are taking a distributed approach to data and their infrastructures. "We introduced a term here at IDF called a software-defined resource. There's four types: processor, memory, fabric and storage. People who manage data centers have collected telemetry in the past to try to observe what software-defined resources are doing so that you can do something about it," Womack said.
This contributed piece is from a speaker at Node.js Interactive Europe, an event offering an in-depth look at the future of Node.js from the developers who are driving the code forward, taking place in Amsterdam from September 15 to September 18.
Most software projects start with solving one problem. Then comes another one, and the project keeps growing without the engineering team being able to cope with it.
This is how monoliths are built. Every new feature gets added to the existing application, making it more and more complex. Scaling becomes hard and wasteful, since everything has to be scaled together. Deployment turns into a nightmare, thanks to the millions of lines of code waiting to be pushed into production every time. Meanwhile, management encounters grave challenges coordinating large, siloed teams that interfere with each other.
Moving to accelerate the rate at which the OpenStack cloud platform can be hosted on the Kubernetes container orchestration platform, Mirantis today announced it has acquired TCP Cloud.
Based in Prague, TCP Cloud provides managed services around deployments of OpenStack, OpenContrail and Kubernetes technologies. Mirantis CEO Alex Freedland says the addition of technology developed by TCP Cloud will reduce the amount of time it would have taken Mirantis to move OpenStack to Kubernetes by six to nine months. As a result, he says, Mirantis expects to show the first fruits of a joint development effort involving CoreOS, Google and Intel in the first quarter of 2017.
Secure application deployment principles must extend from the infrastructure layer all the way through the application and include how the application is actually deployed, according to Tim Mackey, Senior Technical Evangelist at Black Duck Software. In his upcoming talk, "Secure Application Development in the Age of Continuous Delivery" at LinuxCon + ContainerCon Europe, Mackey will discuss how DevOps principles are key to reducing the scope of compromise and examine why it's important to focus efforts on what attackers view as vulnerable.
Tim Mackey, Senior Technical Evangelist, Black Duck Software
Linux.com: You say that the prevalence of microservices makes it imperative to focus on vulnerabilities. Are microservices inherently more vulnerable or less? Can you explain?
Tim Mackey: With every new development pattern, we need to ensure operations and security teams are deeply involved in deployment plans so their vulnerability response plans keep pace. Microservices development doesn't change that requirement, even with a focus on creating tasks which perform a single operation. When developing a microservice we're already thinking about the minimum code required to perform the task. We're also thinking about ways to reduce the attack surface. This makes vulnerability planning a logical component of the design process, and by extension something which should be communicated throughout the component lifecycle.
If we make an assumption that our services are deployed using continuous delivery, we’re also accepting more frequent deployments for our services. This gives us an opportunity to resolve security issues as they arise, potentially without outage windows or downtime. In such an environment, we really want active monitoring for vulnerability disclosures not only for what we’ve deployed, but also what’s in our library and currently under development.
One other point to note: If the lifespan of a given microservice is very short, we’ve raised the bar for attackers. While that’s a really good thing, we don’t want to become complacent about vulnerability planning. After all, a short service lifespan can also mask attempts at malicious activity and addressing that should be part of a microservice-centric vulnerability response plan.
Linux.com: Can you give us some examples of how vulnerabilities get into production deployments?
Tim: We see from numerous sources that open source development models are the de facto standard in 2016. The freedom developers have to incorporate ideas from other projects, either directly or via a fork, has increased the pace of innovation. It is precisely this freedom which provides an avenue for upstream security issues to impact downstream projects.
For practical purposes, we can assume that most code – open source or otherwise – has some critical bug with exploit potential. It’s not uncommon to find that such bugs have been present in code for significant periods of time and may have been subject to multiple reviews and even tests from a variety of tools. We then see a security researcher identify the significance of the bug as a security issue and a vulnerability report is disclosed.
Once disclosed, the big question for users becomes “is this issue present in our environment?” If we were talking about packaged commercial products, it would be up to the vendor to provide both a fix and guidance for mitigation. Open source projects also provide guidance and fixes, but only for direct usage of their components. With the source for a given product often coming from upstream efforts, tracking the provenance of the source and associated security issues is a critical requirement for any vulnerability response plan.
Linux.com: How can those be mitigated? What are some tools to determine the vulnerabilities?
Tim: Mitigation starts with understanding the scope of the problem, and ends with implementation of some form of “fix.” Unfortunately, the available information on a vulnerability is often written for developers and the people needing to perform the mitigation are on the operations side.
If we consider the glibc vulnerability from February, CVE-2015-7547, the bug was first reported in July 2015, and over the course of nine months, the development team determined the nature of the bug, then how to fix it, and subsequently disclosed it as a vulnerability. This is the normal process for most vulnerabilities disclosed against projects under active development. In the case of CVE-2015-7547, the disclosure occurred first on the project list and two days later in the National Vulnerability Database (NVD) maintained by NIST. The contents of the NVD are freely available and form the basis of many vulnerability scanning solutions, including the Black Duck Hub.
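As a rough illustration of that "is this issue present in our environment?" question (a sketch, not something from Mackey's talk), the crudest first check is simply to ask a container image which glibc it ships; my-app-image is a placeholder, and the commands assume the image actually contains ldd or dpkg:

```
# Print the glibc version bundled in the image (works on most glibc-based images).
docker run --rm my-app-image ldd --version

# On Debian/Ubuntu-based images, the package database gives the same answer.
docker run --rm my-app-image dpkg -s libc6
```

A version string alone doesn't settle the question, since distributions routinely backport fixes, which is exactly where scanning solutions come in.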
What differentiates basic vulnerability scanning solutions from the leaders are two key attributes:
Independent security research activities. These activities are primarily focused on identifying activity within projects that signals an impending disclosure. In the case of CVE-2015-7547, such research would have identified the impending disclosure from the development list activity.
Breadth of the underlying knowledge base against which potentially vulnerable code is validated. As I mentioned earlier, vulnerable code is often incorporated from multiple sources and disclosed against specific product versions. Being able to clearly identify the vulnerable aspects of a project based on commits allows for easier identification of latent vulnerabilities in forked code.
Linux.com: What level of certainty can be achieved regarding the vulnerability status of a container?
Tim: Like any security process, container vulnerability status is best determined using a variety of tools, each with a clear focus, and each gating the delivery of a container image into a production registry. This includes static and dynamic analysis tools, but a comprehensive vulnerability plan also requires active monitoring of dependent upstream and forked components for their vulnerability status. No single tool will ever guarantee that a container is free of known vulnerabilities, let alone free of vulnerabilities altogether. In other words, even if you follow every available best practice and create a container image with no known issues, that doesn't mean that a day later vulnerabilities won't be disclosed in a dependent component.
Linux.com: It sounds like DevOps principles come into play in achieving greater security. Can you explain further?
Tim: DevOps principles are absolutely a key component to reducing the scope of compromise from any vulnerability. The process starts with a clear understanding of what upstream components are included in any container image available for deployment. This builds a level of trust for a container image and a requirement that only trusted images can be deployed. From there, a set of deployment requirements can be created which govern the expected usage for the container. This includes simple things like network configuration, but also extends to container runtime security elements like SELinux profiles and required kernel capabilities.
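To make that concrete, here is a minimal sketch of what such deployment requirements might look like expressed as plain Docker flags; the image name, network, and SELinux type below are placeholders rather than recommendations:

```
# Run a trusted image with a tightly scoped runtime profile.
# --read-only blocks writes to the container filesystem,
# --cap-drop/--cap-add leave only the kernel capabilities the app needs,
# and the SELinux label applies a custom policy type (syntax varies by Docker version).
docker run -d \
  --read-only \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --security-opt label=type:myapp_svc_t \
  --net=frontend \
  registry.example.com/myapp:1.4.2
```

Orchestrators expose the same knobs through their own configuration, but the point is identical: the runtime contract is written down, so a response plan knows exactly what a compromised container could and could not do.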
Once these items are in place, a vulnerability response plan can be created, one which understands what the scope of compromise might be for a given container. The vulnerability response plan needs to start with an implicit assumption that a sufficiently motivated bad actor will eventually be attracted to the container and gain some degree of control over it. It's then up to the operations team to determine what could happen next; given that implicit assumption of control over a container sitting behind a perimeter defense, those defenses may not be sufficient to limit the scope of compromise. Monitoring of deployed containers absolutely must include continuous validation of the state of trust for container images, and the vulnerability response plan must include procedures to follow should that trust be questioned.
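One concrete building block for that trust requirement (again a sketch, not Black Duck-specific tooling) is Docker's content trust feature, which refuses to pull or run image tags that aren't signed, assuming a registry with signing (Notary) behind it:

```
# With content trust enabled, pulls of unsigned tags fail instead of silently succeeding.
export DOCKER_CONTENT_TRUST=1
docker pull registry.example.com/myapp:1.4.2   # placeholder image name
```

That check only establishes that the image is the one someone signed; monitoring whether its contents remain free of newly disclosed vulnerabilities is the separate, ongoing job described above.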
When it comes to IoT protocols, there's no one clear winner. This article outlines communications protocols from MQTT and Wi-Fi to less well-known offerings, and it focuses on a framework for thinking about the problem of standards, protocols, and radios.
That framework, of course, depends on whether your deployment is going to be internal, such as in a factory, or external, such as a consumer product. In this conversation, we'll focus on products launching externally to a wider audience of customers, and for that, we have a lot to consider.
Let's look at the state of the IoT right now: bottom line, there's no standard so prolific or significant that you're making a mistake by not using it. What we want to do, then, is pick the option that solves our problem as closely as possible at an acceptable cost to implement and scale, and not worry too much about fortune-telling the future popularity of that standard.
Across the history of data analytics, marquee-level applications have always given rise to useful front ends and connectors that extend what the original applications were capable of. For example, the dominance of the spreadsheet gave rise to macros, plugins, and extensions. Likewise, the rise of SQL database applications ushered in database front ends, plugins, and connectors. Now, Big Data titan Hadoop is inspiring its own ecosystem of powerful extensions and front ends.
To illustrate the difference these extenders and connectors can make, here are some examples of how Hadoop can be taken in new directions with these tools.
Reaching out to BI. In 2015, as Hadoop's star continued to rise in the Big Data arena, startup company AtScale came out of stealth mode, showing off its tools for making data stored in Hadoop's file system accessible within popular Business Intelligence (BI) applications. The result of these bridges between BI tools and Hadoop is a more holistic collection of Hadoop-driven insights, which AtScale bills as "digestible for the masses."
According to the company: “AtScale software requires no data movement, no custom driver and no separate cluster in order to perform. When customers deploy AtScale, their business users can analyze the entirety of their Hadoop data, at lightning speed and from the BI tools they are already familiar with.” In other words, familiar BI tools become the dashboard through which users can leverage Hadoop — and that can reduce Hadoop’s learning curve.
Hadoop and Everyday Productivity Applications. Many common productivity applications are now gaining bridges and connectors to Hadoop, too. Here again, the familiarity that users have with these common applications can reduce the Hadoop learning curve. Microsoft, for example, is making it easier to work with Hadoop directly from the Excel spreadsheet, and the company has a simple guide to bridging Excel and Hadoop. Meanwhile, Hortonworks, a leader in the Big Data arena, has a straightforward tutorial on how you can use Excel as a front end for culling insights with Hadoop.
Under the Hood with Talend. Talend's Open Studio for Big Data, which is released under an Apache license, provides a friendly front end for easily working with Hadoop to mine large data sets. You can download it and try it for free here. It lets you use graphical tools to map Big Data sources and targets, then automatically generates code that runs natively on your cluster.
Apache's Hadoop Enhancements. Many of the most notable free enhancement tools for Hadoop come directly from the Apache Software Foundation, which is, of course, the steward of Hadoop. Here are a few of the free tools that have recently graduated to Top-Level Status at the foundation, ensuring that they benefit from strong development and support:
Twill. Twill is an abstraction over Apache Hadoop YARN that reduces the complexity of developing distributed Hadoop applications, allowing developers to focus more on their application logic. Twill provides the features common distributed applications need for development, deployment, and management, and is targeted at easing Hadoop cluster operation and administration.
Kylin. Kylin, originally created at eBay and now a Top-Level Apache project, also extends what you can do with Hadoop. Kylin is an open source Distributed Analytics Engine designed to provide an SQL interface and multi-dimensional analysis (OLAP) on Apache Hadoop, supporting extremely large datasets.
As an OLAP-on-Hadoop solution, Apache Kylin aims to fill the gap between Big Data exploration and human use, “enabling interactive analysis on massive datasets with sub-second latency for analysts, end users, developers, and data enthusiasts,” according to developers. “Apache Kylin brings back business intelligence (BI) to Apache Hadoop to unleash the value of Big Data,” they added.
Lens. Apache also recently announced that Apache Lens, an open source Big Data and analytics tool, has become a Top-Level Project. It, too, enhances what you can do with Hadoop. According to its developers:
“Apache Lens is a Unified Analytics platform. It provides an optimal execution environment for analytical queries in the unified view. Apache Lens aims to cut the Data Analytics silos by providing a single view of data across multiple tiered data stores. By providing an online analytical processing (OLAP) model on top of data, Lens seamlessly integrates Apache Hadoop with traditional data warehouses to appear as one. It also provides query history and statistics for queries running in the system along with query life cycle management.”
Apache's collection of Hadoop extenders and connectors is rapidly growing. To stay current, you can check in on all of the Hadoop-focused Apache projects here.
There’s a general consensus among people working on telco virtualization that open source groups are replacing traditional standards groups.
“In open source, code is the coin of the realm; express yourself with something that is useful,” said Tom Anschutz, distinguished member of AT&T’s technical staff, speaking yesterday at Light Reading’s 2016 NFV & Carrier SDN event.
Anschutz said while standards groups often follow a waterfall process, open source groups take a more DevOps approach. … But the DevOps mentality can blow the mind of a seasoned network engineer.
We discussed the role of middle managers in leading DevOps adoption in the enterprise and in scaling DevOps throughout your organization; how Conway's Law (and the "Reverse Conway's Law") comes into play in DevOps and in IT and developer productivity; some of the DevOps challenges our speakers and the conference programming committee identify, along with patterns for addressing them; the importance of community and learning from peers; and more.