Today, we’re announcing that Docker for AWS is graduating to public beta, just in time for AWS re:Invent. Docker for AWS is a great way for ops teams to set up and maintain secure and scalable Docker deployments on AWS. With Docker for AWS, IT ops teams can:
• Deploy a standard Docker platform to ensure teams can seamlessly move apps from developer laptops to Dockerized staging and production environments, without risk of incompatibilities or lock-in.
• Integrate deeply with the underlying infrastructure to ensure Docker takes advantage of the host environment’s native capabilities and exposes a familiar interface to administrators.
• Deploy the platform to all the places where you want to run Dockerized apps, simply and efficiently…
Abhishek Chauhan, VP and CTO at Citrix, discusses the importance of changing the way we develop network services to support microservices at LinuxCon NA.
“TCP: Treason uncloaked!” Abhishek Chauhan, VP and CTO at Citrix, launched his LinuxCon North America keynote with a trip down memory lane, when this was an actual Linux kernel log message. What is the significance of this silly message? Chauhan says that when this message was changed to something more benign, back around 2008, he knew it was a sign that Linux was becoming a serious contender. In 2016 Linux turned 25, so he was right.
All the buzz these days is on microservices, those ephemeral flashes-in-the-pan that appear and disappear on demand, thousands or even millions of times per minute. Chauhan discusses the importance of changing the way we develop network services to support microservices. He says, “The first thing that you need…the obvious thing to me and you is that in order to be a networking component for microservices, the network has to be in software itself. Micro-application services require micro-network services to support them.”
“If the application went from monolithic to micro, the network will have to go follow the same route. Now you expect the network to also be small, independent, composable, and desegregated, just like the microservices themselves.”
Chauhan then introduces the concept of “lots of little.” Instead of having a few large boxes, you’ll have lots of little boxes. Instead of striving for stability and uptime, the network becomes as stateless and expendable as the microservices themselves, and the management system takes on the burden of managing state. “The separation of stateless and software-defined at the top, and centralized and intelligent at the bottom is fundamental to the change of adopting microservices for networking.”
The business imperative is “change or die.” Chauhan sees micro-networking as a bridge from the past to the future, building software-defined, data-driven network functions to manage and monitor operations.
Watch Chauhan’s keynote (below) to learn more about “lots of little” and how busting your network into a zillion little smithereens is the key to moving your business forward.
Explore Software Defined Networking Fundamentals today by downloading the free sample.
Join us in this three-part weekly blog series to get a sneak peek at The Linux Foundation’s Software Defined Networking Fundamentals (LFS265) self-paced, online course.
Part 2 of this series discussed the architecture of a traditional data switch and the inherent limitations created when striving for the highest performance of wire-speed switching. This inflexibility, and the resultant vendor lock-in, spawned Software Defined Networking (SDN).
According to Wikipedia,
“Software Defined Networking (SDN) is an approach to computer networking that allows network administrators to manage network services through abstraction of higher-level functionality. This is done by decoupling the system that makes decisions about where traffic is sent (the control plane) from the underlying systems that forward traffic to the selected destination (the data plane).”
As we saw in the previous article, the TCAM (Ternary Content Addressable Memory) in a traditional networking switch is the cornerstone of the Data Plane’s operation.
The foundations of SDN lie in the following question: What if we could access or program these remotely? We will discuss the consequences and effects next.
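To make that question concrete, TCAM-style ternary matching can be sketched in a few lines of Python. This is a hypothetical toy model, not a real TCAM or OpenFlow implementation; the addresses, priorities, and actions are invented for illustration:

```python
# Toy model of TCAM-style ternary matching: each rule is a (priority,
# value, mask, action) tuple, and a key matches a rule when the masked
# bits of the key agree with the masked bits of the rule's value.
rules = [
    (10, 0x0A000001, 0xFFFFFFFF, "port2"),   # exact match on 10.0.0.1
    (5,  0x0A000000, 0xFFFFFF00, "port1"),   # prefix match on 10.0.0.0/24
    (0,  0x00000000, 0x00000000, "drop"),    # wildcard: matches everything
]

def lookup(key):
    """Return the action of the highest-priority matching rule."""
    for _, value, mask, action in sorted(rules, reverse=True):
        if (key & mask) == (value & mask):
            return action
```

In a traditional switch, rules like these live in hardware and are managed by control logic on the same box; the SDN question is simply what becomes possible once a remote controller is allowed to read and write this table.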
Changes to Networking Components
Software Defined Networking introduces new components to traditional networking infrastructure, as well as adapts the functionality of the existing components to the software defined architecture. The most important changes are:
• The physical or virtual switch (generalized as a network device) does not have to implement all the protocols or features. Instead, it provides access to the data plane through a standardized API (e.g., OpenFlow). The software defined switch needs less internal logic, thereby reducing complexity and cost.
• The concept of the SDN controller is new. It represents the control plane logic to the connected switches, calculates the forwarding state and populates the tables. It will be discussed below.
• The applications that reside on top of the Services API provided by the controller are also new. This API abstracts the differences in switch hardware away from the network applications (e.g., firewall, QoS, NaaS). Thus, application programmers can program “the network” without needing to know the details and unique management interfaces of each and every switch in the network.
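The relationship between these three components can be sketched as a minimal Python model. The class and method names here are hypothetical; a real deployment would use a protocol such as OpenFlow between controller and switches rather than direct method calls:

```python
class Switch:
    """An SDN-capable switch: minimal internal logic, just a table API."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []

    def install_flow(self, match, action):
        # Stands in for a standardized data-plane API such as OpenFlow.
        self.flow_table.append((match, action))

class Controller:
    """Control-plane logic: calculates and distributes forwarding state."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def program_network(self, match, action):
        # Applications program "the network", not individual devices.
        for sw in self.switches:
            sw.install_flow(match, action)

ctl = Controller()
for name in ("edge1", "edge2", "core1"):
    ctl.register(Switch(name))

# One call from an application populates every switch's table.
ctl.program_network({"dst": "10.0.0.0/24"}, "forward:port1")
```

The point of the sketch is the indirection: the application never touches a vendor-specific interface, only the controller’s service API.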
The SDN Switch
The fundamental difference between a traditional switch and an SDN switch is the separation of the switch (data plane) and the controller (control plane). Once separated, the control function no longer needs to be co-located with the switching function. Additionally, a single remote controller can manage multiple switches throughout the network. Because the controller manages switches across the network, it can glean information about the state of the entire network, not just the state of individual switches or nodes. This additional information can be used in a number of ways, such as dynamically modifying flows. For example, if part of the network is highly congested, the controller can re-program the forwarding tables to route sensitive traffic around the congested links and switches.
As illustrated in Figure 1, there is a split between the switch and the controller:
• The switch hardware will have a minimal abstraction layer, with some control and management functionality. This is needed for cold-start operation and configuration.
• The decisions are now made in the control plane, which is part of the controller. The controller offers a central management plane as well.
Figure 1: An SDN-capable switch.
The SDN Controller and APIs
A key part of all of this is the Application Programming Interface (API). APIs enable people and different layers of the network to communicate with underlying components using a common and simplified approach. Applications do not need to know the details of the underlying hardware; they only need to know how to talk to a switch using this common language, the API. You do not need to know the vendor-specific details as long as the vendor has implemented the API. Thus, higher-layer functions can “program the network” rather than individual hardware devices. This network-wide programmability is a fundamental characteristic of SDN.
SDN introduces a new component, the SDN Controller (Figure 2). The SDN Controller is connected to the switches and represents the control plane logic. It is responsible for calculating the forwarding rules and populating the switches’ tables.
When the controller is separated from the individual switch, a single controller is able to manage multiple SDN-capable devices.
This capability has some important consequences:
• A controller can aggregate more data regarding the state of the network.
• More knowledge about the state of the network leads to improved decisions.
• The flows can be dynamically adapted by the controller.
Figure 2: Multiple SDN switches connected to one controller.
Network Transformation
SDN will transform the network away from specialized hardware, with protocols and applications implemented for each switch/router hardware/software combination (Figure 3). Instead, the functionality is implemented at a higher level, using the controller’s APIs, independent of the underlying hardware. Rather than programming individual devices, we can now program the network. This is referred to as the separation of the data plane from the control plane.
Figure 3: Transformation of the network infrastructure.
The Internet Research Task Force (IRTF) RFC 7426 was published in January 2015 and introduced a number of fundamental SDN concepts in the form of “abstraction layers”. The first is the Network Services Abstraction Layer (NSAL). This layer “sits” between the higher-level applications and network services and the SDN controllers; it is the API for these controllers. The RFC also introduced other abstraction layers, including the Control Abstraction Layer (CAL), the Management Abstraction Layer (MAL), and the Device and resource Abstraction Layer (DAL). Each of these layers, or APIs, provides the higher layers with a common way to communicate their requirements to the layer or layers below them.
The terms “Northbound” and “Southbound” emanated from the way networks were illustrated. The hardware was typically shown at the bottom of the whiteboard or PowerPoint slide, and the applications, services, and management systems were shown at the top. In the middle is the network control logic. Thus, above (northbound from) the controller are the services and applications, and below (southbound) are the actual network elements. Traditionally these interfaces were tightly coupled to the proprietary hardware-centric network elements, resulting in the inflexibility previously discussed. The APIs between the control and applications (northbound) and between the control and hardware (southbound) create a common “language” to communicate, thereby eliminating the need for programmers to learn vendor-specific management interfaces. Today, any function or interface shown above a specific function is northbound and those below are southbound. Most functions have both northbound and southbound interfaces, and data traffic flows East and West.
The northbound SDN layer architecture (Figure 4) includes:
• The Network Services Abstraction Layer (NSAL), which represents the application API of the controller.
• Network Services are provided by the controller (the control and management planes) as Service Interfaces.
Figure 4: RFC 7426 – SDN Layer Architecture (Northbound).
The southbound SDN architecture (Figure 5) includes:
• The Control Abstraction Layer (CAL).
• The Management Abstraction Layer (MAL).
• The Device and resource Abstraction Layer (DAL).
Figure 5: RFC 7426 – SDN Layer Architecture (Southbound).
This three-part series provides the history of how we got to where we are today and illustrates the fundamental difference with SDN: the separation of the control plane from the data plane. Once separated, network operators have new degrees of freedom in the design and management of large scale IP networks.
The incorporation of application programming interfaces (APIs) into SDN greatly simplifies the operation of these networks. Network engineers and other professionals can both glean network-wide state and make network-wide programming changes without needing to understand the minute details of the features and functionality above them (northbound) or below them (southbound). Lastly, IRTF RFC 7426 (Figure 6) was introduced, showing the creation of a number of “abstraction layers” that standardize the locations of the APIs. Each of these abstraction layers hides the complexity of the logic on either side, further simplifying the deployment and operation of large-scale IP networks.
Figure 6: RFC 7426 – SDN layer architecture.
After this course, you should understand the historical drivers that spawned SDN. The course also introduced the concept of “planes” and illustrated the primary technical difference between traditional hardware-centric switches and new software-centric switches: the separation of the control plane from the data plane. Once separated, network operators benefit from greater flexibility and visibility; the control plane in SDN has visibility into the entire network, not just each individual network device. Finally, RFC 7426 was discussed, highlighting a number of abstraction layers created to simplify the underlying complexities and to accelerate innovation in software defined networks.
The “Software Defined Networking Fundamentals” training course from The Linux Foundation is designed to provide system and network administrators and engineers with the skills necessary to maintain an SDN deployment in a virtual networking environment. Download the sample chapter today!
There is growing anxiety within tech companies about the lack of skilled professionals to keep up with demand. There’s also a realization that one of the largest untapped resources is women. A keynote at the recent Embedded Linux Conference Europe in Berlin described a potential solution to the challenge called Greenlight for Girls, a non-profit organization with a mission to provide girls around the world with the opportunity to love STEM.
The problem is that many girls who have a natural talent for STEM (science, technology, engineering, and math) are often steered elsewhere by teachers, parents, peers, and stereotypes reinforced by the media. A recent National Science Foundation report claimed that more than twice as many U.S. men as women attend graduate school in computer science, and more than four times as many men are enrolled in engineering. While gender discrimination continues to be a problem in hiring, a greater challenge is that relatively few girls get hooked on STEM at an early age and then stick with it.
At ELCE, Greenlight for Girls project founder Melissa Rancourt and International Project Manager Jelena Lucin explained how their organization sponsors hands-on STEM workshops and events for girls around the world, often led by role-model volunteers from industry.
Rancourt has been a computer engineer for 20 years. “I love everything about it, from programming to all the science and math behind it, so I’m constantly amazed that not everybody gets how fabulous this is,” Rancourt told the ELCE audience. “For 20 years I’ve enjoyed the opportunity to get women and girls interested in STEM, but really it was just pissing me off because the numbers aren’t changing. In some fields, the number of women was even going down.”
Several years ago, Rancourt decided to address the problem on a larger scale. She sent off an email blast to gauge interest in launching an organization that promotes STEM education among girls, and was overwhelmed with the favorable response. Within the first year, Greenlight for Girls launched with an international board of directors, more than 500 volunteers, and 1,000 participating girls. Today, the group has grown to 2,500 volunteers serving 13,000 students. So far, there have been 90 events held on six continents.
“We go into the classrooms and into the community, and provide scholarships and libraries,” explained Rancourt, who mentioned projects including robotics, routers, rocket science, telepresence, Arduino hacking, and the physics of playing rugby. “We have role models who come together and show how STEM is linked to everything,” she said. The group also provides girls with free meals, which can make a big difference in many communities.
Greenlight for Girls decided it was important to start working with girls at an early age. “It’s never too early to incite a passion for science,” said Rancourt. “Some school systems put kids in boxes pretty quickly, and it’s easy to veer off the STEM path. Once you’re off the path, it’s so difficult to get back on.”
Global Reach
The group’s global reach has led to a flexible approach based on local needs. “In our first group in the Congo, they asked to change the format to cover things the kids could do in their community to make a difference, so we changed the workshops to orient to their needs,” said Rancourt. Even in the U.S. where girls have easier access to technology, there’s a need for programs like Greenlight for Girls, she added. “In Silicon Valley we have had parents tell us that they don’t have anything like this in the area.”
Decisions about where to launch new groups often come from the program’s sponsors, which contribute volunteers and equipment. “We ask our global partners like Cisco, AIG, Swift, and Procter & Gamble where they need to build a workforce, and then we see if we can set up a program there,” said Rancourt. “We have new city launches every two weeks.”
The group has now been around long enough that Rancourt and Lucin are beginning to see results. “We have so many wonderful stories about kids who stay with it,” said Rancourt. “One 14-year-old created an app, and we went with her to Apple for the launch.”
Yet, the impact of Greenlight for Girls goes beyond the girls who are inspired to find a career in STEM. “The passion we encourage in them will let them ignore the ‘nos’ and keep on going,” said Rancourt. “One child can change a lot in a community.”
Greenlight for Girls is an international organization dedicated to inspiring girls of all ages and backgrounds by demonstrating just how fun and interactive STEM can be. The Greenlight for Girls team tells the story of how an initiative can grow from one email to a global organization in a few short years by breaking barriers, creatively and courageously.
This post is part 1 in a 4-part series about Docker monitoring. Part 2 explores metrics that are available from Docker, part 3 covers the nuts and bolts of collecting those Docker metrics, and part 4 describes how the largest TV and radio outlet in the U.S. monitors Docker. This article dives into some of the new challenges Docker creates for infrastructure monitoring.
You have probably heard of Docker, a young container technology with a ton of momentum. But if you haven’t, you can think of containers as easily configured, lightweight VMs that start up fast, often in under one second. Containers are ideal for microservice architectures and for environments that scale rapidly or release often.
OpenStack is becoming the de facto standard for infrastructure orchestration for NFV deployment by leading Communications Service Providers (CSPs). CSPs are trading off the challenges of OpenStack implementations (e.g. immature technology and evolving standards) for the benefits of open source and open architectures (i.e. reduced vendor lock-in). Lack of standards for NFV management and orchestration (MANO) remains a leading impediment.
NFV and OpenStack
OpenStack is a set of open source software tools for building and managing cloud computing platforms. It enables service providers to provision and orchestrate pools of data center resources across compute and storage. With regard to NFV, CSPs deploy different types of data centers than the typical large enterprise or hyper-scale cloud providers. Their compute capabilities are distributed in tiers, including centralized cores, aggregation points, and local points of presence (PoPs). OpenStack implementations for CSPs must be highly reliable and able to distribute workloads across hundreds of geographically distributed data centers (of varying sizes).
Data center sprawl is now understood to be expensive and may not deliver performance increases for all types of applications, so new technologies are coming to the rescue. A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing – hence “field-programmable”. While the use of GPUs and HPC accelerators is generally understood today, there are a number of misconceptions about FPGAs that need to be addressed.
The first is that FPGAs are only good for embedded devices. However, this is not the case. FPGAs can be used to sift through the massive amounts of data that are created in any given time frame for Internet of Things (IoT) environments as well as a wide range of Big Data applications. FPGAs can be programmed to do many different tasks and are becoming more mainstream.
I’ve used the term Feature Factory at a couple of conference talks over the past two years. I started using the term when a software developer friend complained that he was “just sitting in the factory, cranking out features, and sending them down the line.”
How do you know if you’re working in a feature factory?
No measurement. Teams do not measure the impact of their work. Or, if measurement happens, it is done in isolation by the product management team and selectively shared. You have no idea if your work worked…