
4 Themes From the Open Source Leadership Summit (OSLS)

This week we attended The Linux Foundation’s Open Source Leadership Summit (OSLS) in Sonoma. Over the past three decades, infrastructure open source software (OSS) has evolved from Linux and the Apache web server to touch almost every component of the infrastructure stack. OSS’s widespread reach extends from MySQL and PostgreSQL for databases and OpenContrail and OpenDaylight for networking to OpenStack and Kubernetes for cloud operating systems. Its increasing influence up and down the stack is best exemplified by the explosion of solutions included on the Cloud Native Landscape that Redpoint co-published with Amplify and the CNCF.

During the conference we heard four main themes: 1) OSS security, 2) serverless adoption, 3) public cloud vendors’ open source involvement, and 4) Kubernetes’ success.

Read more at Medium

Dell EMC: The Next Big Shift in Open Networking Is Here

This article was sponsored by Dell EMC and written by Linux.com.

Ahead of the much anticipated 2018 Open Networking Summit, we spoke to Jeff Baher, director, Dell EMC Networking and Service Provider Solutions, about what lies ahead for open networking in the data center and beyond.

Jeff Baher, Director of Marketing for Networking at Dell EMC

“For all that time that the client server world was gaining steam in decoupling hardware and software, networking was always in its own almost mainframe-like world, where the hardware and software were inextricably tied,” Baher explained. “Fast forward to today and there exists a critical need to usher networking into the modern world, like its server brethren, where independent decisions are made around hardware and software functions and services modules are assembled and invoked.”

Indeed, the decoupling is well under way, as is the expected rise of independent open network software vendors such as Cumulus, Big Switch, IP Infusion, and Pluribus, which, together with Dell EMC’s OS10 Open Edition, are shaping a rapidly evolving ecosystem. Baher describes the industry’s progress thus far as Open Networking ‘1.0’, which has successfully proven out the model of decoupling networking hardware and software. With that foundation, the industry is forging ahead to take open networking to the next level.

Here are the insights Baher shared with us about where open networking is headed.

Linux.com: You refer to an industry shift around open networking, tell us about the shift that Dell EMC is talking about at ONS this year.

Jeff Baher:  Well, to date we and our partners have been working hard to prove out the viability of the basic premise of open networking, disaggregating or decoupling networking hardware and software to drive an increase in customer choice and capability. This first phase, or as we say Open Networking 1.0, is four years in the making, and I would say it has been a resounding success as evidenced by some of the pioneering Tier 1 service provider deployments we’ve enabled. There is a clear-cut market fit here as we’ve witnessed both significant innovation and investment. And the industry is not standing still as it moves quickly to its 2.0 version. In this next version, the focus is shifting from decoupling the basic elements of hardware and software, to a focus on disaggregating the software stack itself.

Disaggregating the software stack involves exposing both the silicon and the system software for adaptation and abstraction. This level of disaggregation also assumes a decoupling of the network application (i.e., routing or switching) from the platform operating system (the software that makes lights blink and fans spin). In this manner, with all the software functional elements exposed and disaggregated, independent software decisions can be made and development communities can form around flexible models for composing, assembling, and delivering software.

Linux.com: Why do people want this level of disaggregation?

Baher: Ultimately, it’s about more control, choice, and velocity. With traditional networking systems, there’s typically a lot of code that isn’t always used. By moving to this new model predicated on disaggregated software elements, users can scale back that unused code and run a highly optimized network operating system (NOS) and applications, giving them peak performance with increased security. And this can all be done independent of the underlying silicon, allowing users to make independent decisions about silicon technology and software adaptation.

All of this, of course, is geared toward a fairly savvy network department, most likely one with a large-scale operation to contend with. The vast majority of IT shops won’t want to “crack the hood” of the network stack and disaggregate its pieces; instead, they will look for pre-packaged offerings derived from these larger “early adopter” experiences. For the larger early adopters, however, customizing the networking stack can deliver a virtually immediate payback, making any operational or technical hurdles well worth it. These early adopters typically already live in a disaggregated world and hence will feel comfortable mixing and matching hardware, OS layers, and protocols to optimize their network infrastructure. A Tier 1 service provider deployment analysis by ACG Research estimates the gains realized with a disaggregated approach at 47% lower TCO and three times the service agility for new services, at less than a third of the cost to enable them.

It is also worth noting the prominent role that open source technologies play in disaggregating the networking software stack. In fact, many would contend that open source technologies are foundational and critical to how this happens. This adds a community aspect to innovation, arguably accelerating its pace along the way. That brings us back full circle to why people want this level of disaggregation: to have more control over how networking software is architected and written, and how networks operate.

Linux.com: How does the disaggregation of the networking stack help fuel innovation in other areas, for example edge computing and IoT?

Baher: Edge computing is interesting as it really is the confluence of compute and networking. For some, it may look like a distributed data center, a few large hyperscale data centers with spokes out to the edge for IoT, 5G and other services. Each edge element is different in capability, form factor, software footprint and operating models. And when viewed through a compute lens, it will be assumed to be inherently a disaggregated, distributed element (with compute, networking and storage capabilities). In other words, hardware elements that are open, standards-based and without any software dependencies. And software for the IoT, 5G and enterprise edge that is also open and disaggregated such that it can be right-sized and optimized for that specific edge task. So if anything, I would say a disaggregated “composite” networking stack is a critical first step for enabling the next-generation edge.

We’re seeing this with mobile operators as they look to NFV solutions for the 5G and IoT edge. We’re also seeing it at the enterprise edge, in particular with universal CPE (uCPE) solutions. Unlike previous generations, where the enterprise edge meant a proprietary piece of hardware and monolithic software, it is now rapidly transforming into a compute-oriented open model where networking functions are selected as needed. All of this is made possible by disaggregating the networking functions and applications from the underlying operating system. That’s not such a big deal from a server-minded vantage point, but it’s monumental if you come from “networking land.” Exciting times once again in the world of open networking!

Sign up to get the latest updates on ONS NA 2018!

Creating an Open Source Program for Your Company

The recent growth of open source has been phenomenal; the latest GitHub Octoverse survey reports that the GitHub community has reached 24 million developers working across 67 million repositories. Adoption has also grown rapidly, with studies showing that 65% of companies are using and contributing to open source. However, many decision makers in those organizations do not fully understand how open source works. The collaborative development model used in open source differs from the closed, proprietary models many individuals are used to, and it requires a change in thinking.

An ideal starting place is creating a formal open source program office, which is a best practice pioneered by Google and Facebook and can support a company’s open source strategy. Such an office helps explain to employees how open source works and its benefits, while providing supporting functions such as training, auditing, defining policies, developer relations and legal guidance. Although the office should be customized to a specific organization’s needs, there are still some standard steps everyone will go through.

Read more at Information Week

A Guide To Securing Docker and Kubernetes Containers With a Firewall

Before deploying any container-based applications, it’s crucial to secure them by ensuring a Docker, Kubernetes, or other container firewall is in place. There are two ways to implement your container firewall: manually or through a commercial solution. Manual firewall deployment, however, is not recommended for Kubernetes-based container deployments. With either strategy, it is critical to create a set of network firewall rules that safeguard your deployment so that the containers are defended from unwanted access to your sensitive systems and data.
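For Kubernetes-based deployments, the usual baseline for such rules is a NetworkPolicy object (separate from the commercial container firewalls the article refers to). The following is a minimal sketch with a hypothetical namespace, labels, and port, expressed in Python and applied with the official kubernetes client; it admits traffic to an application’s pods only from pods carrying an approved label:

```python
# Minimal sketch: a NetworkPolicy that only admits traffic to the "payments"
# pods from pods labeled role=frontend on TCP port 8080. The namespace,
# labels, and port are hypothetical placeholders.
# Assumes the official `kubernetes` Python client (`pip install kubernetes`)
# and a working kubeconfig.
from kubernetes import client, config

ingress_policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "restrict-app-ingress", "namespace": "demo"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "payments"}},  # pods being protected
        "policyTypes": ["Ingress"],
        "ingress": [{
            "from": [{"podSelector": {"matchLabels": {"role": "frontend"}}}],
            "ports": [{"protocol": "TCP", "port": 8080}],
        }],
    },
}

def main() -> None:
    config.load_kube_config()  # or load_incluster_config() when run inside a pod
    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace="demo", body=ingress_policy)

if __name__ == "__main__":
    main()
```

A rule like this is only a static allow list; it is the kind of manual baseline the article contrasts with commercial container firewall solutions.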

The accelerated discovery of new vulnerabilities and exploits reinforces the necessity of proper container security. The creativity of the hackers behind the Apache Struts, Linux Stack Clash, and Dirty COW exploits – all made infamous by major data breaches and ransomware attacks – proves that businesses never know what is coming next. Furthermore, these attacks feature a level of sophistication that requires more than just vulnerability scanning and patching to address the threats.

Read more at SDxCentral

CNCF Webinar to Present New Data on Container Adoption and Kubernetes Users in China

Last year, the Cloud Native Computing Foundation (CNCF) conducted its first Mandarin-language survey of the Kubernetes community. While the organization published the early results of the English-language survey in a December blog post, the Mandarin survey results will be released on March 20 in a webinar with Huawei and The New Stack.

Many of China’s largest cloud providers and telecom companies — including Alibaba Cloud, Baidu, Ghostcloud, Huawei and ZTE — have joined the CNCF. And the first KubeCon + CloudNativeCon China will be held in Beijing later this year.

The Mandarin survey results, when they are released, will help illuminate container adoption trends and cloud-native ecosystem development…

Read more at The New Stack

Microservices, Service Mesh, and CI/CD Pipelines: Making It All Work Together

Brian Redmond, Azure Architect on the Global Black Belt team at Microsoft, showed how to build CI/CD pipelines into Kubernetes-based applications in a talk at KubeCon + CloudNativeCon.

Applications deployed via a CI/CD pipeline get patches and new components added to them all the time, usually through what is called a “blue/green” update process: while a “blue” (stable and tested) version of the application serves users, a “green” version (initially identical to the blue one, but with updates applied to it) remains idle and under test. When the green version is considered sufficiently tested, it is made available to users and the blue version becomes idle. If the green version fails, the blue version can be redeployed and the green version taken offline to correct its faults.
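On Kubernetes, one common way to implement that flip is to repoint a Service’s label selector from the blue Deployment to the green one. The sketch below assumes hypothetical Deployments labeled version=blue and version=green behind a Service named myapp, and uses the official kubernetes Python client; it is an illustration of the pattern, not the exact mechanism demonstrated in the talk:

```python
# Minimal blue/green cutover sketch. Assumes two Deployments ("myapp-blue",
# "myapp-green") already exist and a Service named "myapp" selects pods by a
# "version" label. All names are hypothetical placeholders.
from kubernetes import client, config

def switch_traffic(to_version: str, namespace: str = "default") -> None:
    """Point the 'myapp' Service at the blue or the green Deployment."""
    config.load_kube_config()
    core = client.CoreV1Api()
    patch = {"spec": {"selector": {"app": "myapp", "version": to_version}}}
    core.patch_namespaced_service("myapp", namespace, patch)

# Cut over to the tested green version.
switch_traffic("green")
# Calling switch_traffic("blue") again rolls traffic back if green misbehaves.
```

Rolling back is the same call with “blue”, which is what makes the pattern attractive for fast recovery.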

Deployments

During a CI/CD-based deployment, it is common to carry out canary testing rather than exposing an all-blue or all-green instance of the application to users. This means that while most users, say 90 percent, use the blue, stable version of the application, a smaller share, the remaining 10 percent, uses the version being tested. This lets developers see how the “test” version behaves under real-world conditions. In other words, two different versions of the application are often running at the same time.
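The split itself is normally enforced by the routing layer rather than by application code, but as a purely illustrative sketch (the 10 percent share and the version names are arbitrary), the decision boils down to weighted selection:

```python
# Toy illustration of the 90/10 canary split described above: ~90% of requests
# go to the stable ("blue") version and ~10% to the canary. Real deployments
# delegate this decision to the service mesh or load balancer.
import random

def pick_version(canary_share: float = 0.10) -> str:
    """Return which application version should serve the next request."""
    return "canary" if random.random() < canary_share else "stable"

# Rough check of the split over 10,000 simulated requests.
counts = {"stable": 0, "canary": 0}
for _ in range(10_000):
    counts[pick_version()] += 1
print(counts)  # roughly {'stable': 9000, 'canary': 1000}
```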

To further complicate matters, most applications are not single and monolithic, but a series of microservices that must communicate effectively with each other. This means you need:

  • Advanced routing: a mechanism that routes traffic to specific versions of specific services using specific routing rules.
  • Observability: the ability to gather metrics, see what is happening, and tell you what happens when traffic hits the canary test.
  • Chaos testing: a testing model that shows what happens when things go wrong.

A pipeline in such a scenario would look like this: an update to the code is taken as a pull request and deployed as a canary build. You would modify the routing to push some traffic over to that release and then score the release. If the release scores above what you have established as an acceptable level, you would automatically push it to production. If it scores below that level, or would require some sort of human interaction to find out what issues it has, you would decommission it completely and the update would be rejected.
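The talk does not prescribe a particular scoring formula, so the sketch below is only an illustration of that gate: hypothetical metrics are collapsed into a single score and compared against an acceptance threshold chosen for the example.

```python
# Illustrative "score the release" gate. The metric names, weighting, and
# threshold are hypothetical; in practice the numbers would come from the
# observability layer (e.g. Prometheus) and the actions would call your
# deployment tooling.

PROMOTION_THRESHOLD = 0.9  # hypothetical acceptance score

def score_release(success_rate: float, p99_latency_ms: float) -> float:
    """Collapse canary metrics into a single 0..1 score (illustrative weighting)."""
    latency_score = max(0.0, 1.0 - p99_latency_ms / 1000.0)
    return 0.7 * success_rate + 0.3 * latency_score

def gate(success_rate: float, p99_latency_ms: float) -> str:
    score = score_release(success_rate, p99_latency_ms)
    if score >= PROMOTION_THRESHOLD:
        return "promote"      # shift full traffic to the canary version
    return "decommission"     # reject the update and remove the canary

# Example: a canary serving 99.2% of requests successfully at 180 ms p99.
print(gate(0.992, 180.0))  # -> "promote"
```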

This is where Istio comes in. Istio is an open platform to connect, manage, and secure microservices. It helps with service discovery and routing, provides a sidecar proxy (Envoy) that controls where traffic is going, and takes care of health checking and security, among many other features.

When you deploy using Istio, each service has an Envoy proxy as a sidecar. All the traffic from each service going anywhere outside the pod is routed through the proxy. The sidecar will also handle telemetry, delays, etc. At the control plane layer, the Istio components allow you to manage how the proxies behave.
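As an example of the routing piece, a 90/10 canary split in Istio is expressed as a weighted VirtualService rule. The sketch below writes that rule as a Python dict and applies it with the official kubernetes client’s CustomObjectsApi; the service name, subsets, and namespace are hypothetical, and the DestinationRule that defines the subsets is omitted:

```python
# Minimal sketch of an Istio weighted routing rule for a canary: 90% of
# traffic to the "stable" subset, 10% to the "canary" subset. Names and
# namespace are placeholders; assumes Istio is installed in the cluster and
# the official `kubernetes` Python client is available.
from kubernetes import client, config

virtual_service = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "VirtualService",
    "metadata": {"name": "myapp", "namespace": "default"},
    "spec": {
        "hosts": ["myapp"],
        "http": [{
            "route": [
                {"destination": {"host": "myapp", "subset": "stable"}, "weight": 90},
                {"destination": {"host": "myapp", "subset": "canary"}, "weight": 10},
            ]
        }],
    },
}

config.load_kube_config()
client.CustomObjectsApi().create_namespaced_custom_object(
    group="networking.istio.io", version="v1alpha3", namespace="default",
    plural="virtualservices", body=virtual_service)
```

A pipeline can then promote the canary by gradually shifting the weights toward the new version.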

Brigade

For the CI/CD tool, Redmond uses Brigade. Brigade implements event-driven scripting for Kubernetes and allows you to encapsulate various steps of the CI/CD workflow as functions within the container. The functions can then run in serial or parallel, and be triggered by various events and web-hooks. By default, Brigade includes hooks to the GitHub and Docker registries. The pipeline itself is described using JavaScript.

Redmond also includes Kashti in his toolbox. Developed by the same team that developed Brigade, Kashti is a web dashboard that provides easy viewing and construction of Brigade pipelines.

During the demo, Redmond deployed a web app with several APIs, modified a branched version, and deployed it as a canary test version. He showed how Istio’s observability features allowed him to follow every step of the pipeline and, using Prometheus coupled with Grafana, track the performance peaks and valleys of each deployed version.

Watch the entire presentation below:

Learn more about Kubernetes at KubeCon + CloudNativeCon Europe, coming up May 2-4 in Copenhagen, Denmark.

HardwareCon: The Slope of Enlightenment

Having spent the last 13 years helping over 900 hardware startups (they used to be called inventors) take innovative, physical products to market, I’ve seen the industry go from one dominated by large established companies to a burgeoning hardware revolution that has already brought thousands of startup products to market with new tools and technologies, and it is just getting started.

Despite the growth of the market, many promising hardware startups shut their doors in 2017, and the IoT hype cycle is wearing off. The industry has been doing some soul searching and ultimately discovered it needed to pivot. I believe we’ve entered a “slope of enlightenment”: we’ve discovered that the current model is broken and are realizing that it’s almost impossible for startups to learn all the aspects of a hardware startup business while simultaneously developing and launching an innovative technology in the available 2-3 year window.

With only 3 percent of startups seeing any meaningful exit to date, I believe the path forward for successful hardware innovations will come from startups that create robust business models with value-driven solutions that solve real problems, and that connect with the right partners: true experts who uniquely understand hardware.

Allan Alcorn to Speak at HardwareCon

Four years ago, we saw the chance to provide that educational support and bring the community together to facilitate these critical partnerships. I’m excited to say that HardwareCon continues to grow, with a big step up this year as it moves to the San Jose Convention Center on April 19th and 20th. We’re excited to announce our keynote speaker, one of the godfathers of the Silicon Valley hardware scene: Allan Alcorn, founding engineer at Atari, inventor of Pong (the world’s first popular video game), and Steve Jobs’ last boss.

This year’s conference agenda will feature two full days of keynotes, panels, and workshops focused on the most important topics around building a successful hardware company and will feature key insights from the hardware investment community.  

We’ve just announced a new IoT Summit, presented by Parks Associates, titled Transformation of Consumer Products: Connectivity & IoT. The Summit will address the impact of IoT on the development, design, and monetization of consumer products. It is designed to help executive-level hardware innovators better understand market trends and how to position for growth. The consumer device segment of the hardware market remains one of the most exciting areas for innovation.

To support this new vision for success, we’re bringing together the world’s thought leaders, entrepreneurs, investors, innovators, and key decision makers across the hardware ecosystem to “Get Deals Done,” our theme this year. We chose this theme because it’s evident that if the next generation of startups wants to succeed, they will need access to, and knowledge of, the right platforms, software and hardware, investors who share their vision, and a complete ecosystem of external service providers who can fill the remaining roles. Hardware became more global in 2017, with many different models and directions tried and developed. Come be a part of the conversation, meet your future partners, and get deals done.

I’d like to personally invite you to join me at HardwareCon 2018 and right now is the best time to save on tickets as we’re offering our Insider Rate at 25% off the full ticket price.  I look forward to connecting with you there and shaping the future of the next generation of hardware innovation.

Greg Fisher is Founder at HardwareCon.

Keeping Governance Simple and Uncomplicated

I am a firm believer that the way in which we collaborate should be as much of a collaborative product as the output of a community project. Just like an open source project, we should review, iterate, and review the performance of our iterations. We should constantly assess how we can optimize our governance to be as simple and thin as possible. We should build an environment where someone can file a metaphorical or literal pull request with pragmatic ways to optimize how the project is governed. This assures the project is pulling the best insight from members to ensure it is as efficient and as lightweight as possible.

To do this, honestly observe how the governance performs. Is it accomplishing the goals it is designed for? 

Read more at Jono Bacon

What’s New in LLVM

The LLVM compiler framework has gone from being a technological curiosity to a vital piece of the modern software landscape. It is the engine behind the Clang compiler, as well as the compilers for the Rust and Swift languages, and provides a powerful toolkit for creating new languages.

It is also a fairly fast-moving project, with major point revisions announced every six months or so. Version 6.0, released earlier this month, continues LLVM’s ongoing mission to deepen and broaden support for a variety of compilation targets. The update also adds many timely fixes to guard against recently discovered processor-level system attacks.

To ensure that applications built with LLVM can do their part to guard against such attacks, LLVM now offers support for “retpolines”…

Read more at InfoWorld

Becoming a 10x Developer

So when I first heard the concept of the 10x engineer, I was confused. How could someone be so talented that it overshadows the power of teamwork? In my experience, individual excellence is necessary, but not sufficient, for greatness. Focusing purely on individual achievement misses the larger picture that teams are required to build great software. So I decided to change the definition of a 10x engineer to this: 

A 10x engineer isn’t someone who is 10x better than those around them, but someone who makes those around them 10x better.

Over the years I’ve combined my personal experience with research about building and growing effective teams and turned that into a list of 10 ways to be a better teammate, regardless of position or experience level. While many things on this list are general pieces of advice for how to be a good teammate, there is an emphasis on how to be a good teammate to people from diverse backgrounds.

Read more at Kate Heddleston