
SDN Trends: The Business Benefits and Emerging SD-WAN Technology

The 2018 Open Networking Summit is rapidly approaching. In anticipation of this event, we spoke to Shunmin Zhu, Head of Alibaba Cloud Network Services, to get more insights on two of the hot topics that will be discussed at the event: the future of Software Defined Networking (SDN) and the emerging SD-WAN technology.

“SDN is a network design approach beyond just a technology protocol. The core idea is decoupling the forwarding plane from the control plane and management plane. In this way, network switches and routers only focus on packet forwarding,” said Zhu.

“The forwarding policies and rules are centrally managed by a controller. From a cloud service provider’s perspective, SDN enables customers to manage their private networks in a more intelligent manner through API.”

Shunmin Zhu, Head of Alibaba Cloud Network Services

This new approach to networks, which were previously thought of as nearly unfathomable black boxes, brings welcome transparency and flexibility. That transparency naturally leads to more innovation, such as SD-WAN and Hybrid-WAN.

Zhu shared more information on both of those cutting-edge developments later in this interview. Here is what he had to say about how all these things come together to shape the future of networking.

Linux.com:  Please tell us a little more about SDN for the benefit of readers who may not be familiar with it.

Shunmin Zhu: Today, cloud services make it very convenient for a user to buy a virtual machine, set up the VM, change the configurations at any time, and choose the most suitable billing method. SDN offers the flexibility of using network products the same way as using a VM. This degree of flexibility was not seen in networks before the advent of SDN.

Previously, it was difficult for users to divide a cloud network into several private subnets. In the SDN era, however, with VPC (Virtual Private Cloud), users are able to customize their cloud networks by choosing the private subnets and dividing them further. In short, SDN puts the power of cloud network self-management into the hands of users.

Linux.com: What were the drivers behind the development of SDN? What are the drivers spurring its adoption now?

Zhu: Traditional networks designed before SDN found it hard to support the rapid development of business applications. The past few decades witnessed fast growth in the computing industry, but comparatively little innovation in the networking sector. With emerging trends such as cloud computing and virtualization, organizations need their networks to become as flexible as cloud computing and storage resources in order to respond to IT and business requirements. Meanwhile, the hardware, operating system, and network applications of a traditional network are tightly coupled and not accessible to an outsider. The three components are usually controlled by the same OEM, so any innovation or update is heavily dependent on the device OEMs.

The shortcomings of the traditional network are apparent from a user’s perspective. First and foremost is the speed of delivery. Network capacity extension usually takes several months, and even a simple network configuration could take several days, which is hard for customers to accept today.

From the perspective of an Internet Service Provider (ISP), the traditional network could hardly satisfy the needs of its customers. Additionally, heterogeneous network devices from multiple vendors complicate network management. There is little that ISPs can do to improve the situation, as the network functions are controlled by the device OEMs. Users’ and carriers’ urgent need for SDN has made the technology popular. To a large extent, SDN overcomes the heterogeneity of physical network devices and opens up network functions via APIs. Business applications can call those APIs to turn on network services on demand, which is revolutionary in the network industry.

Linux.com: What are the business benefits overall?

Zhu: The benefits of SDN are twofold. On the one hand, it helps to reduce cost, increase productivity, and reuse network resources. SDN makes the use of networking products and services very easy and flexible, and it gives users the option to pay by usage or by duration. The cost reduction and productivity boost empower users to invest more time and money into core business and application innovations. SDN also increases the reuse of overall network resources in an organization.

On the other hand, SDN brings new innovations and business opportunities to the networking industry. SDN technology is fundamentally reshaping networking toward a more open and prosperous ecosystem. Traditionally, only a few network device manufacturers and ISPs were the major players in the networking industry. With the arrival of SDN, more participants are encouraged to create new networking applications and services, generating tons of new business opportunities.

Linux.com: Why is SDN gaining in popularity now?

Zhu: SDN is gaining momentum because it brings revolutionary changes and tremendous business value to the networking industry. The rise of cloud computing is another factor that accelerates the adoption of SDN. The cloud computing network offers the perfect usage scenario for SDN to quickly land as a real-world application. The vast scale, large scope, and various needs of the cloud network pose a big challenge to the traditional network. SDN technology works very well with cloud computing in terms of elasticity. SDN virtualizes the underlay physical network to provide richer and more customized services to the vast number of cloud computing users.

Linux.com: What are future trends in SDN and the emerging SD-WAN technology?

Zhu: First of all, I think SDN will be adopted in more networking usage scenarios. Most future networks will be designed according to SDN principles. In addition to cloud computing data centers, WAN, carrier networks, campus networks, and even wireless networks will increasingly embrace SDN.

Secondly, network infrastructure based on SDN will further combine the power of hardware and software. By definition, SDN is software-defined networking, so the technology leans toward the software side. On the flip side, SDN cannot do without the physical network devices upon which it builds the virtual network, and a pure software-based solution also finds it difficult to improve performance. In my vision, SDN technology will evolve toward a tighter combination with hardware.

The more powerful next-generation network will be built upon mutually reinforcing software and hardware. Some cloud service providers have already started to use SmartNICs as a core component in their SDN solutions for a performance boost.

The next trend is the rapid development of SDN-based network applications. SDN helps build an open industry environment. It’s a good time for technology companies to start businesses around innovative network applications such as network monitoring, network analytics, cyber security and NFV (Network Function Virtualization).

SD-WAN is the application of SDN technology in the wide area network (WAN) space. Generally speaking, a WAN is a communications network that connects multiple remote local area networks (LANs) that may be tens to thousands of miles apart. For example, a corporate WAN may connect the networks of its headquarters, branch offices, and cloud service providers. Traditional WAN solutions, such as MPLS, can be expensive and require a long provisioning period, while wireless networks fall short in bandwidth capacity and stability. The invention of SD-WAN fixes these problems to a large extent.

For instance, a company can build its corporate WAN by connecting branch offices to the headquarters via both a virtual dedicated line and the Internet, also known as a Hybrid-WAN solution. The Internet link brings convenience to network connections between the branches and the headquarters, while the virtual dedicated line guarantees the quality of the network service. The Hybrid-WAN solution balances cost, efficiency, and quality in creating a corporate WAN. Other benefits of SD-WAN include SLAs, QoS, and application-aware routing rules, where key applications are tagged and prioritized in network communication for better performance. With these benefits, SD-WAN is gaining increasing attention and popularity.

Linux.com: What kind of user experience do you think users expect from SDN products and services?

Zhu: There are three things that matter most to the SDN user experience. First is simplicity. Networking technologies and products sometimes strike users as overcomplicated and hard to manage. SDN network products should be radically simplified, so that even a user with limited knowledge of networking can use and configure them.

Second is intelligence. SDN network products should be smart enough to identify incidents and fix the issues by themselves. This will minimize the impact on the customer’s business and reduce management costs.

Third is transparency. The network is the underlying infrastructure for all applications, and a lack of transparency sometimes makes users feel that their network is a black box. A successful SDN product should give more transparency to network administrators and other network users.

This article was sponsored by Alibaba and written by Linux.com.

Sign up to get the latest updates on ONS NA 2018!

A Primer on Nvidia-Docker — Where Containers Meet GPUs

Traditional programs cannot access GPUs directly; they need a special parallel programming interface to move computations to the GPU. Nvidia, the most popular graphics card manufacturer, created Compute Unified Device Architecture (CUDA) as a parallel computing platform and programming model for general computing on GPUs. With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.

In GPU-enabled applications, the sequential part of the workload continues to run on the CPU — which is optimized for single-threaded performance — while the parallelized compute intensive part of the application is offloaded to run on thousands of GPU cores in parallel. To integrate CUDA, developers program in popular languages such as C, C++, Fortran, Python and MATLAB by expressing parallelism through extensions in the form of a few basic keywords.

Read more at The New Stack

What Is Open Source Programming?

At the simplest level, open source programming is merely writing code that other people can freely use and modify. But you’ve heard the old chestnut about playing Go, right? “So simple it only takes a minute to learn the rules, but so complex it requires a lifetime to master.” Writing open source code is a pretty similar experience. It’s easy to chuck a few lines of code up on GitHub, Bitbucket, SourceForge, or your own blog or site. But doing it right requires some personal investment, effort, and forethought.

Let’s be clear up front about something: Just being on GitHub in a public repo does not make your code open source. Copyright in nearly all countries attaches automatically when a work is fixed in a medium, without need for any action by the author. For any code that has not been licensed by the author, it is only the author who can exercise the rights associated with copyright ownership. Unlicensed code—no matter how publicly accessible—is a ticking time bomb for anyone who is unwise enough to use it.

Read more at OpenSource.com

Multiversion Testing With Tox

In the Python world, tox is a powerful testing tool that allows a project to test against many combinations of versioned environments. The django-coverage-plugin package uses tox to test against a matrix of Python versions (2.7, 3.4, 3.5, and 3.6) and Django versions (1.8, 1.9, 1.10, 1.11, 1.11tip, 2.0, and 2.0tip), resulting in 25 valid combinations to test.

Preparing Your System Environments

tox needs to be installed in, and run from, a virtual environment. As of February 2018, I would recommend a Python 2.7 environment so that you can use the detox package (see below) to parallelize your build’s workload. tox is usually installed into your base development environment and is usually included in your project’s requirements.txt file:

tox >= 1.8
detox
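
The version matrix described above can be expressed compactly in tox’s factor notation. Here is a minimal tox.ini sketch (the factor names, version pins, and test command are illustrative assumptions, not django-coverage-plugin’s actual configuration):

```ini
[tox]
# Each environment name combines a Python factor with a Django factor,
# e.g. py27-django18 or py36-django20.
envlist =
    py{27,34,35,36}-django{18,19,110,111}
    py{34,35,36}-django20

[testenv]
# Factor-conditional dependencies: each generated environment gets
# the matching Django release installed automatically.
deps =
    django18: Django>=1.8,<1.9
    django19: Django>=1.9,<1.10
    django110: Django>=1.10,<1.11
    django111: Django>=1.11,<2.0
    django20: Django>=2.0,<2.1
commands = python -m pytest
```

Running `tox` then builds each environment and runs the test command in all of them; `tox -e py36-django20` runs just one cell of the matrix.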

Read more at CloudCity

Migrating to Linux: Using Sudo

This article is the fifth in our series about migrating to Linux. If you missed earlier ones, you can catch up here:

Part 1 – An Introduction

Part 2 – Disks, Files, and Filesystems

Part 3 – Graphical Environments

Part 4 – The Command Line

You may have been wondering about Linux for a while. Perhaps it’s used in your workplace and you’d be more efficient at your job if you used it on a daily basis. Or, perhaps you’d like to install Linux on some computer equipment you have at home. Whatever the reason, this series of articles is here to make the transition easier.

Linux, like many other operating systems, supports multiple users. It even supports multiple users being logged in simultaneously.

User accounts are typically assigned a home directory where files can be stored. Usually this home directory is in:

/home/<login name>

This way, each user has their own separate location for their documents and other files.
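
You can see this layout on most systems with a few commands (a quick sketch; the exact directory names will vary by machine):

```shell
# List the per-user directories under /home, if any exist.
ls /home

# Show the current login name and that user's home directory.
whoami
echo "$HOME"
```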

Admin Tasks

In a traditional Linux installation, regular user accounts don’t have permission to perform administrative tasks on the system. Rather than assigning rights to each user, a typical Linux installation requires a user to log in as the admin to perform certain tasks.

The administrator account on Linux is called root.

Sudo Explained

Historically, to perform admin tasks, one would have to log in as root, perform the task, and then log back out. This process was a bit tedious, so many folks logged in as root and worked all day long as the admin. This practice could lead to disastrous results, such as accidentally deleting all the files in the system. The root user, of course, can do anything, so there are no protections to prevent someone from accidentally performing far-reaching actions.

The sudo facility was created to let you stay logged in as your regular user account and occasionally perform admin tasks as root, without having to log in as root, do the task, and log back out. Specifically, sudo allows you to run a command as a different user. If you don’t specify a user, it assumes you mean root.

Sudo can have complex settings that grant users permission to run some commands under sudo but not others. Typically, a desktop installation will be set up so that the first account created has full permissions in sudo, letting you, as the primary user, fully administer your Linux installation.
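
Those per-user permissions live in the /etc/sudoers file, which should be edited with the visudo command. A minimal sketch (user1 and the command paths are made-up examples):

```
# Give user1 full sudo rights (typical for the first desktop account):
user1    ALL=(ALL:ALL) ALL

# Or allow user1 to run only specific commands, and only as root:
user1    ALL=(root) /bin/systemctl, /usr/bin/apt-get
```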

Using Sudo

Some Linux installations set up sudo so that you still need to know the password for the root account to perform admin tasks. Others set up sudo so that you type in your own password. There are different philosophies here.

When you try to perform an admin task in the graphical environment, it will usually open a dialog box asking for a password. Enter either your own password (e.g., on Ubuntu) or the root account’s password (e.g., on Red Hat).

When you try to perform an admin task in the command line, it will usually just give you a “permission denied” error. Then you would re-run the command with sudo in front. For example:

systemctl start vsftpd
Failed to start vsftpd.service: Access denied

sudo systemctl start vsftpd
[sudo] password for user1:

When to Use Sudo

Running commands as root (under sudo or otherwise) is not always the best solution to get around permission errors. While running as root will remove the “permission denied” errors, it’s sometimes best to look for the root cause rather than just addressing the symptom. Sometimes files have the wrong owner and permissions.

Use sudo when you are trying to perform a task or run a program and the program requires root privileges to perform the operation. Don’t use sudo if the file just happens to be owned by another user (including root). In this second case, it’s better to set the permission on the file correctly.
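
For example, when a file’s permission bits are the problem, fixing them directly beats reaching for sudo on every access. A sketch, with a made-up file name:

```shell
# A scratch file standing in for one with unhelpful permissions
# (notes.txt is a made-up example).
touch notes.txt
chmod u-w notes.txt     # simulate the problem: owner cannot write
ls -l notes.txt

# Fix the permission bits directly rather than running your program
# under sudo every time.
chmod u+rw notes.txt

# If the *owner* (not the permission bits) is wrong, a one-time
# chown is the right fix; that one does need root:
#   sudo chown user1:user1 notes.txt
```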

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Open Source LimeSDR Mini Takes Off in Satellites

The topic of 5G mobile networks dominated the recent Mobile World Congress in Barcelona, despite the expectation that widespread usage may be years away. While 5G’s mind-boggling bandwidth captivates our attention, another interesting angle is found in the potential integration with software defined radio (SDR), as seen in OpenAirInterface’s proposed Cloud-RAN (C-RAN) software-defined radio access network.

As the leading purveyor of open source SDR solutions, UK-based Lime Microsystems is well positioned to play a key role in the development of 5G SDR. SDR enables the generation and augmentation of just about any wireless protocol without swapping hardware, thereby affordably enabling complex networks across a range of standards and frequencies.

In late February, Lime announced a collaboration with the European Space Agency (ESA) to make 200 of its Ubuntu Core-driven LimeSDR Mini boards available for developing applications running on ESA’s communications satellites, as part of ESA’s Advanced Research in Telecommunications Systems (ARTES) program. The Ubuntu Core-based, Snap-packaged satcom apps will include prototypes of SDR-enabled 5G satellite networks.

Other applications will include IoT networks controlled by fleets of small, low-cost CubeSat satellites. CubeSats, as well as smaller NanoSats, have been frequently used for open source experimentation. The applications will be shared in an upcoming SDR App Store for Satcom to be developed by Lime and Canonical.

LimeSDR Mini Starts Shipping

Lime Microsystems recently passed a major milestone when its ongoing Crowd Supply campaign for the LimeSDR Mini passed the $500,000 mark. On Mar. 4, the company reported it had shipped the first 300 boards to backers, with plans to soon ship 900 more.

At MWC, Lime demonstrated the LimeSDR Mini and related technologies working with Quortus’ cellular core and Amarisoft’s LTE stack. There was also a demonstration with Vodafone regarding the carrier’s plans to use Lime’s related LimeNET computers to help develop Vodafone’s Open RAN initiative.

Back in May 2016, Lime expanded beyond its business of building field programmable RF (FPRF) transceivers for wireless broadband systems when it successfully launched the $299, open spec LimeSDR board. The $139 LimeSDR Mini that was unveiled last September has a lower-end Intel/Altera FPGA — a MAX 10 instead of a Cyclone IV — but uses the same Lime LMS7002M RF transceiver chip. At 69×31.4mm, it’s only a third the size of the LimeSDR.

The LimeSDR boards can send and receive using UMTS, LTE, GSM, WiFi, Bluetooth, Zigbee, LoRa, RFID, Digital Broadcasting, Sigfox, NB-IoT, LTE-M, Weightless, and any other wireless technology that can be programmed with SDR. The boards drive low-cost, multi-lingual cellular base stations and wireless IoT gateways, and are used for various academic, industrial, hobbyist, and scientific SDR applications, such as radio astronomy.

Raspberry Pi integration

Unlike the vast majority of open source Linux hacker boards, the LimeSDR boards don’t run Linux locally. Instead, their FPGAs manage DSP and interfacing tasks, while a USB 3.0-connected host system running Ubuntu Core provides the UI and high-level supervisory functions. Yet, the LimeSDR Mini can be driven by a Raspberry Pi or other low-cost hacker board that supports Ubuntu Core instead of requiring an x86-based desktop.

In late January, the LimeSDR Mini campaign added a Raspberry Pi-compatible Grove Starter Kit option with a GrovePi+ board, 15 Grove sensor and actuator modules, and dual antennas for the 433/868/915MHz bands. Lime is supporting the kit with its LimeSDR-optimized ScratchRadio extension.

Around the same time, Lime announced an open source prototype hack that combines a LimeSDR Mini board, a Raspberry Pi Zero, and a PiCam. Lime calls the DVB (digital video broadcasting) based prototype “one of the world’s smallest DVB transmitters.”

Compared to the LimeSDR, the LimeSDR Mini has a reduced frequency range, RF bandwidth, and sample rate. The board operates at 10 MHz to 3.5 GHz, compared to 100 kHz to 3.8 GHz for the original. Both models, however, can achieve up to 10 GHz frequencies with the help of an LMS8001 Companion board that was added as a LimeSDR Mini stretch goal project in October.

With Ubuntu Core’s Snap application packages and support for app marketplaces, LimeSDR apps can easily be downloaded, installed, developed, and shared. The drivers that run on the Ubuntu host system are developed with an open source Lime Suite library.

Lime was one of the earliest supporters of the lightweight, transactional Ubuntu Core, in part because it’s designed to ease OTA updates — a chief benefit of SDR. Ubuntu Core continues to steadily expand on hacker boards such as the Orange Pi, as well as on smart home hubs and IoT gateways like Rigado’s recently updated Vesta IoT gateways. The use of Ubuntu Core has helped to quickly expand the open LimeSDR development community.

LimeNET expands on the high end

In May 2017, Lime Microsystems launched three open source embedded LimeNET computers that don’t require a separate tethered computer. The LimeNET Mini, LimeNET Enterprise, and LimeNET Base Station, which range in price from $2,600 to over $17,000, run Ubuntu Core on various 14nm fabricated Intel Core processors. They offer a variety of ports, antennas, WiFi, Bluetooth, and other features that turn the underlying LimeSDR boards into wireless base stations.

The top-of-the-line LimeNET Base Station features dual RF transceiver chips, as well as a LimeNET QPCIe variant of the LimeSDR board with a faster PCIe interface instead of USB. It also adds an amplifier with dual MIMO units that greatly expands the range beyond the 15-meter limit of the other LimeNET systems. If you don’t want the separately available LimeNET Amplifier Chassis, you can buy the LimeNET QPCIe board as part of a cheaper LimeNET Core system.

Lime’s boards and systems aren’t the only low-cost SDR solutions running on Linux. Last year, for example, Avnet launched a Linux- and Xilinx Zynq-7020 based PicoZed SDR computer-on-module. Earlier products include the Epiq Solutions Matchstiq Z1, a handheld system that runs Linux on an iVeia Atlas-I-Z7e module equipped with a Zynq Z-7020.

Sign up for ELC/OpenIoT Summit updates to get the latest information:

4 Themes From the Open Source Leadership Summit (OSLS)

This week we attended The Linux Foundation’s Open Source Leadership Summit (OSLS) in Sonoma. Over the past three decades, infrastructure open source software (OSS) has evolved from Linux and the Apache web server to touching almost every component of the infrastructure stack. We see OSS’s widespread reach, from MySQL and PostgreSQL for databases and OpenContrail and OpenDaylight for networking, to OpenStack and Kubernetes for cloud operating systems. Its increasing influence up and down the stack is best exemplified by the explosion of solutions included on the Cloud Native Landscape that Redpoint co-published with Amplify and the CNCF.

During the conference we heard four main themes: 1) OSS security, 2) serverless adoption, 3) public cloud vendors’ open source involvement, and 4) Kubernetes’ success.

Read more at Medium

Dell EMC: The Next Big Shift in Open Networking Is Here

This article was sponsored by Dell EMC and written by Linux.com.

Ahead of the much anticipated 2018 Open Networking Summit, we spoke to Jeff Baher, director, Dell EMC Networking and Service Provider Solutions, about what lies ahead for open networking in the data center and beyond.

Jeff Baher, Director of Marketing for Networking at Dell EMC

“For all that time that the client server world was gaining steam in decoupling hardware and software, networking was always in its own almost mainframe-like world, where the hardware and software were inextricably tied,” Baher explained. “Fast forward to today and there exists a critical need to usher networking into the modern world, like its server brethren, where independent decisions are made around hardware and software functions and services modules are assembled and invoked.”

Indeed, the decoupling is well on its way, as is the expected rise of independent open network software vendors, such as Cumulus, Big Switch, IP Infusion, and Pluribus, as well as Dell EMC’s OS10 Open Edition, which are shaping a rapidly evolving ecosystem. Baher describes the progress in the industry thus far as Open Networking ‘1.0’, successfully proving out the model of decoupling networking hardware and software. And with this, the industry is forging ahead, taking open networking to the next level.

Here are the insights Baher shared with us about where open networking is headed.

Linux.com: You refer to an industry shift around open networking, tell us about the shift that Dell EMC is talking about at ONS this year.

Jeff Baher:  Well, to date we and our partners have been working hard to prove out the viability of the basic premise of open networking, disaggregating or decoupling networking hardware and software to drive an increase in customer choice and capability. This first phase, or as we say Open Networking 1.0, is four years in the making, and I would say it has been a resounding success as evidenced by some of the pioneering Tier 1 service provider deployments we’ve enabled. There is a clear-cut market fit here as we’ve witnessed both significant innovation and investment. And the industry is not standing still as it moves quickly to its 2.0 version. In this next version, the focus is shifting from decoupling the basic elements of hardware and software, to a focus on disaggregating the software stack itself.

Disaggregating the software stack involves exposing both the silicon and system software for adaptation and abstraction. This level of disaggregation also assumes a decoupling of the network application (i.e., routing or switching) from the platform operating system (the software that makes lights blink and fans spin). In this manner, with all the software functional elements exposed and disaggregated, independent software decisions can be made and development communities can form around flexible software composition, assembly, and delivery models.

Linux.com: Why do people want this level of disaggregation?

Baher: Ultimately, it’s about more control, choice, and velocity. With traditional networking systems, there’s typically a lot of code that isn’t necessarily always used. By moving to this new model predicated on disaggregated software elements, users can scale back that unused code and run a highly optimized network operating system (NOS) and applications, allowing them to get peak performance with increased security. And this can all be done independent of the underlying silicon, allowing users to make independent decisions around silicon technology and software adaptation.

All of this, of course, is geared for a fairly savvy network department, most likely with a large-scale operation to contend with. The vast majority of IT shops won’t want to “crack the hood” of the network stack and disaggregate pieces. Instead, they will look for pre-packaged offerings derived from these larger “early adopter” experiences. For the larger early adopters, however, there can be a virtually immediate payback from customizing the networking stack, making any operational or technical hurdles well worth it. These early adopters typically already live in a disaggregated world and hence will feel comfortable mixing and matching hardware, OS layers, and protocols to optimize their network infrastructure. A Tier 1 service provider deployment analysis by ACG Research estimates the gains of a disaggregated approach at 47% lower TCO and three times the service agility for new services, at less than a third of the cost to enable them.

And it is worth noting the prominent role that open source technologies play in disaggregating the networking software stack. In fact, many would contend that open source technologies are foundational and critical to how this happens. This adds in a community aspect to innovation, arguably accelerating its pace along the way. Which brings us back full circle to why people want this level of disaggregation – to have more control over how networking software is architected and written, and how networks operate.

Linux.com: How does the disaggregation of the networking stack help fuel innovation in other areas, for example edge computing and IoT?

Baher: Edge computing is interesting as it really is the confluence of compute and networking. For some, it may look like a distributed data center, a few large hyperscale data centers with spokes out to the edge for IoT, 5G and other services. Each edge element is different in capability, form factor, software footprint and operating models. And when viewed through a compute lens, it will be assumed to be inherently a disaggregated, distributed element (with compute, networking and storage capabilities). In other words, hardware elements that are open, standards-based and without any software dependencies. And software for the IoT, 5G and enterprise edge that is also open and disaggregated such that it can be right-sized and optimized for that specific edge task. So if anything, I would say a disaggregated “composite” networking stack is a critical first step for enabling the next-generation edge.

We’re seeing this with mobile operators as they look to NFV solutions for 5G and IoT edge. We’re also seeing this at the enterprise edge, in particular with universal CPE (uCPE) solutions. Unlike previous generations, where the enterprise edge meant a proprietary piece of hardware and monolithic software, it is now rapidly transforming into a compute-oriented open model where networking functions are selected as needed. All of this is made possible by disaggregating the networking functions and applications from the underlying operating system. Not so big a deal from a server-minded vantage point; monumental if you come from “networking land.” Exciting times once again in the world of open networking!


Creating an Open Source Program for Your Company

The recent growth of open source has been phenomenal; the latest GitHub Octoverse survey reports the GitHub community reached 24 million developers working across 67 million repositories. Adoption of open source has also grown rapidly with studies showing that 65% of companies are using and contributing to open source. However, many decision makers in those organizations using and contributing to open source do not fully understand how it works. The collaborative development model utilized in open source is different from the closed, proprietary models many individuals are used to, requiring a change in thinking.

An ideal starting place is creating a formal open source program office, which is a best practice pioneered by Google and Facebook and can support a company’s open source strategy. Such an office helps explain to employees how open source works and its benefits, while providing supporting functions such as training, auditing, defining policies, developer relations and legal guidance. Although the office should be customized to a specific organization’s needs, there are still some standard steps everyone will go through.

Read more at Information Week

A Guide To Securing Docker and Kubernetes Containers With a Firewall

Before deploying any container-based application, it’s crucial to protect its security by ensuring that a Docker, Kubernetes, or other container firewall is in place. There are two ways to implement your container firewall: manually or through a commercial solution, although manual firewall deployment is not recommended for Kubernetes-based container deployments. With either strategy, creating a set of network firewall rules to safeguard your deployment is critical, so that the containers cannot be used for unwanted access into your sensitive systems and data.
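
As a rough illustration of what such a rule set can look like at the host level, here is a fragment in iptables-restore format (the 172.17.0.0/16 subnet is Docker’s common default bridge network, and the published port is a made-up example; Docker and Kubernetes also manage chains of their own, which are omitted here):

```
*filter
# Let replies to established container connections through.
-A FORWARD -d 172.17.0.0/16 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Permit only HTTP to the published service port...
-A FORWARD -d 172.17.0.0/16 -p tcp --dport 80 -j ACCEPT
# ...and drop everything else destined for the containers.
-A FORWARD -d 172.17.0.0/16 -j DROP
COMMIT
```

Commercial container firewalls add to this kind of static filtering with application-aware policies that follow containers as they are rescheduled.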

The accelerated discovery of new vulnerabilities and exploits reinforces the necessity of proper container security. The creativity of the hackers behind the Apache Struts, Linux Stack Clash, and Dirty COW exploits – all made infamous by major data breaches and ransomware attacks – proves that businesses never know what is coming next. Furthermore, these attacks feature a sophistication that requires more than just vulnerability scanning and patching to address the threats.

Read more at SDxCentral