
Kubernetes Helps Comcast Re-Engineer Cable TV

Comcast cable is undergoing a major technical shift. The company is moving away from the always-on transmission of every single channel to every single customer, with the signal converted on either end by a piece of proprietary hardware, which is how cable has worked for decades. The new system is an IP-based, on-demand streaming model in which a channel's signal is sent only when a user requests it, explained Erik St. Martin, a systems architect at Comcast, at CloudNativeCon in November.

The change will save an enormous amount of bandwidth for Comcast, improving signal quality and allowing transmission in several different formats, with signals tailored to specific devices.

Simple, right? Turns out, it’s not so simple, especially when you consider the shift must happen while keeping 99.999 percent uptime so customers don’t freak out.

Out of the Box

St. Martin is part of the team building Comcast’s new widely distributed and intensely fault tolerant broadcast system. The list of requirements is daunting, he said, and was difficult to face at the start. Then, he found Kubernetes, and out of the box many of the tricky technical obstacles were addressed.

“Can you imagine trying to design a system like this from scratch? … it’d be a massive effort and it’d have a ton of edge cases,” St. Martin said. “It probably comes as no surprise, standing here talking to you at a Kubernetes conference, that Kubernetes has actually solved most of these problems for us.”

The Kubernetes platform for managing application clusters is well suited to help Comcast move to an IP-based streaming system, said St. Martin; its system of labels and annotations, thanks to their flexibility and simplicity, maps cleanly onto the different channel streams. Teams tasked with managing streams needn’t worry about hardware; hardware teams needn’t worry about bandwidth.
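As a sketch of the idea, Kubernetes-style labels let a streaming team select workloads by attributes such as channel and format, never by hostname or IP address. The label names and pod data below are hypothetical, invented purely to illustrate the decoupling the talk describes; they are not Comcast's actual schema.

```python
def match_labels(resource_labels, selector):
    """Return True if every key/value in the selector is present on the resource."""
    return all(resource_labels.get(k) == v for k, v in selector.items())

# Hypothetical transcoder pods, as an orchestrator might track them.
pods = [
    {"name": "transcoder-1", "labels": {"channel": "espn", "format": "hls"}},
    {"name": "transcoder-2", "labels": {"channel": "espn", "format": "dash"}},
    {"name": "transcoder-3", "labels": {"channel": "cnn", "format": "hls"}},
]

# "Give me every HLS stream for ESPN" -- no IP addresses, no spreadsheets.
selected = [p["name"] for p in pods
            if match_labels(p["labels"], {"channel": "espn", "format": "hls"})]
print(selected)  # -> ['transcoder-1']
```

Because selection is by label rather than by device identity, the video engineering team never needs to know which physical box carries a stream, which is exactly the shift St. Martin describes.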

“This is a huge shift from the way things currently work,” he said. “Today, the video engineering team needs to know about every single one of those [signal transmission and translation] devices. They maintain spreadsheets of these things and log into them by IP address … sometimes it even comes down to physically moving hardware or cables.”

There are still many facets of Comcast’s Kubernetes implementation left to build, with plenty of tricky engineering problems to keep everyone up at night. But the Kubernetes platform — and the community — has already made a significant dent in what seemed like an impossible task.

“All in all, these are tiny issues in comparison to the complexity and edge cases of the system we would’ve had to create from scratch,” St. Martin said. “With each release of Kubernetes, there seems to be less work for our own components to do. There’s no doubt that Kubernetes has changed the way we deploy and manage applications.

“Kubernetes can be just as impactful as a framework for building your own applications,” he said. “You can save yourself complexity and development time by leveraging functionality in tools that already exist. We can also create clean abstractions between teams by writing our own resource types and controllers. It’s a beautifully abstracted system. Each component has a distinct role, making it effortless to replace components or customize them to fit our use cases, even use cases that, on the surface, may not seem particularly suited for Kubernetes.”

Watch the complete video below:

Do you need training to prepare for the upcoming Kubernetes certification? Pre-enroll today to save 50% on Kubernetes Fundamentals (LFS258), a self-paced, online training course from The Linux Foundation. Learn More >>

Keynote: Kubernetes: As Seen On TV by Erik St. Martin, Systems Architect, Comcast

The Kubernetes platform for managing application clusters is well suited to help Comcast update to an IP-based streaming system, said Erik St. Martin at CloudNativeCon.

Effective Application Security Testing in DevOps Pipelines

Before considering what it means to have application security testing integrated into the DevOps Continuous Integration/Continuous Delivery (CI/CD) pipeline, it is worth asking why it is valuable to integrate application security testing into these pipelines in the first place.  A fundamental tenet of DevOps and the reason for having CI/CD pipelines for software builds is to allow teams to have up-to-the-minute feedback on the status of their development efforts so that they know if a build is ready to push to production. This involves testing quality, performance and other characteristics of the system. And it should include security as well.

By integrating security into the CI/CD pipeline, security vulnerabilities are found quickly and reported to developers in the tools they’re already using. 
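A security gate of this kind can be sketched as a pipeline step that fails the build when a scan reports findings at or above a severity threshold. The finding format, threshold, and placeholder IDs below are illustrative assumptions, not tied to any particular scanner or to Denim Group's tooling.

```python
# Hypothetical severity ordering for a CI/CD security gate.
SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(findings, fail_at="high"):
    """Return (passed, blocking), where blocking lists the findings that break the build."""
    threshold = SEVERITY_ORDER[fail_at]
    blocking = [f for f in findings if SEVERITY_ORDER[f["severity"]] >= threshold]
    return (len(blocking) == 0, blocking)

# Illustrative scan output; the IDs are placeholders, not real CVEs.
findings = [
    {"id": "CVE-XXXX-0001", "severity": "medium"},
    {"id": "CVE-XXXX-0002", "severity": "critical"},
]

passed, blocking = gate(findings)
print("build passed:", passed)                      # -> build passed: False
print("blocking:", [f["id"] for f in blocking])     # -> blocking: ['CVE-XXXX-0002']
```

Wiring a check like this into the pipeline is what turns security findings into the same fast feedback loop developers already get for failing unit tests.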

Read more at Denim Group

IHS Markit: 70% of Carriers Will Deploy CORD in the Central Office

Seventy percent of respondents to an IHS Markit survey plan to deploy CORD in their central offices — 30 percent by the end of 2017 and an additional 40 percent in 2018 or later. The findings come from IHS Markit’s 2016 Routing, NFV & Packet-Optical Strategies Service Provider Survey.

The Central Office Re-Architected as a Data Center (CORD) combines network functions virtualization (NFV) and software-defined networking (SDN) to bring data center economics and cloud agility to the telco central office. CORD garnered so much attention in 2016 that its originator — On.Lab‘s Open Network Operating System (ONOS) — established CORD as a separate open source entity. And non-telcos have joined the open source group, including Google and Comcast.

Read more at SDxCentral

SUSE Formalizes Container Strategy with a New Linux Distro, MicroOS

The company has been working on a platform called SUSE Container as a Service Platform. SUSE CaaSP puts together SUSE Linux Enterprise MicroOS, a variant of SUSE Linux Enterprise Server optimized for running Linux containers (also in development), and container orchestration software based on Kubernetes.

In an interview, SUSE’s new CTO, Dr. Thomas Di Giacomo, told us that many customers are running legacy systems but want to migrate to modern technologies over time. Today, if you were starting from scratch, you would start with containers. “We want to make sure that companies that have legacy infrastructure and legacy applications can move to modern technologies, where container as a service is offered through that OS itself,” said “Dr. T” (as he is known in SUSE circles). That’s what CaaSP with MicroOS is being designed to do.

Read more at The New Stack

How Stack Overflow Plans to Survive the Next DNS Attack

Let’s talk about DNS. After all, what could go wrong? It’s just cache invalidation and naming things.

tl;dr

This blog post is about how Stack Overflow and the rest of the Stack Exchange network approaches DNS:

  • By benchmarking different DNS providers and explaining how we chose between them
  • By implementing multiple DNS providers
  • By deliberately breaking DNS to measure its impact
  • By validating our assumptions and testing implementations of the DNS standard

The good stuff in this post is in the middle, so feel free to scroll down to “The Dyn Attack” if you want to get straight into the meat and potatoes of this blog post.
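The benchmarking step in the first bullet can be sketched as a small timing harness. Since the excerpt doesn't specify Stack Overflow's actual tooling, this is purely an illustrative sketch: the provider-specific resolver is injected as a callable (in practice, a wrapper around a lookup pointed at one provider's servers), and the harness reports the median lookup time.

```python
import statistics
import time

def benchmark_resolver(resolve, names, rounds=5):
    """Time a resolver callable over a list of names; return median seconds per lookup.

    `resolve` is any callable taking a hostname, so the harness itself
    stays provider-agnostic and can also be driven by a stub in tests.
    """
    samples = []
    for _ in range(rounds):
        for name in names:
            start = time.perf_counter()
            resolve(name)
            samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Usage sketch with a stand-in resolver (a real one would hit the network):
fake = lambda name: "192.0.2.1"   # TEST-NET address, purely illustrative
median = benchmark_resolver(fake, ["example.com", "example.org"])
print(f"median lookup: {median * 1e6:.1f} microseconds")
```

Running the same harness against each candidate provider, from several network vantage points, is the kind of apples-to-apples comparison the post's first bullet refers to.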

Read more at StackExchange

Ubuntu-Based Ultimate Edition 5.0 Gamers Distribution Is Out for Linux Gaming

It’s been almost three months since we last heard something from TheeMahn, the developer of the Ultimate Edition (formerly Ubuntu Ultimate Edition) operating system, a fork of Ubuntu and Linux Mint, but we’ve been tipped by one of our readers about the availability of Ultimate Edition 5.0 Gamers.

The goal of the Ultimate Edition project is to offer users a complete, out-of-the-box Ubuntu-based computer operating system for desktops, which is easy to install or upgrade with the click of a button. It usually ships with 3D effects, support for the latest Wi-Fi and Bluetooth devices, and a huge collection of open-source applications.

There are several editions of Ultimate Edition that are maintained even to this day, and while Ultimate Edition 5.0 shipped last year in September, based on Ubuntu 16.04 LTS (Xenial Xerus), it’s time for the Ultimate Edition Gamers to get a new release. As such, we’d like to tell you all about Ultimate Edition 5.0 Gamers.

Read more at Softpedia

Linus Torvalds, Guy Hoffman, and Imad Sousou to Speak at Embedded Linux Conference Next Month

Linux creator Linus Torvalds will speak at Embedded Linux Conference and OpenIoT Summit again this year, along with renowned robotics expert Guy Hoffman and Intel VP Imad Sousou, The Linux Foundation announced today. These headliners will join session speakers from embedded and IoT industry leaders, including AppDynamics, Free Electrons, IBM, Intel, Micosa, Midokura, The PTR Group, and many others. View the full schedule now.

The co-located conferences, to be held Feb. 21-23 in Portland, Oregon, bring together embedded and application developers, product vendors, kernel and systems developers, as well as systems architects and firmware developers to learn, share, and advance the technical work required for embedded Linux and the Internet of Things (IoT).

Now in its 12th year, Embedded Linux Conference is the premier vendor-neutral technical conference for companies and developers using Linux in embedded products, while OpenIoT Summit is the first and only IoT event focused on the development of IoT solutions.

Keynote speakers at ELC and OpenIoT Summit 2017 include Guy Hoffman, Cornell professor of mechanical engineering and IDC Media Innovation Lab co-director; Imad Sousou, vice president of the software and services group at Intel Corporation; and Linus Torvalds. Additional keynote speakers will be announced in the coming weeks.

Last year was the first time in the history of ELC that Torvalds, a Linux Foundation fellow, spoke at the event. He was joined on stage by Dirk Hohndel, chief open source officer at VMware, who will conduct a similar on-stage interview again this year. The conversation ranged from IoT, to smart devices, security concerns, and more. You can see a video and summary of the conversation here.

Embedded Linux Conference session highlights include:

  • Making an Amazon Echo Compatible Linux System, Mike Anderson, The PTR Group

  • Transforming New Product Development with Open Hardware, Stephano Cetola, Intel

  • Linux You Can Drive My Car, Walt Miner, The Linux Foundation

  • Embedded Linux Size Reduction Techniques, Michael Opdenacker, Free Electrons

OpenIoT Summit session highlights include:

  • Voice-controlled home automation from scratch using IBM Watson, Docker, IFTTT, and serverless, Kalonji Bankole, IBM

  • Are Device Response Times a Neglected Risk of IoT?, Balwinder Kaur, AppDynamics

  • Enabling the management of constrained devices using the OIC framework, James Pace, Micosa

  • Journey to an Intelligent Industrial IOT Network, Susan Wu, Midokura

Check out the full schedule and register today to save $300. Early bird pricing ends on January 15. One registration provides access to all 130+ sessions and activities at both events. Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the registration price. Register Now!

10 Lessons from 10 Years of Amazon

Amazon launched its Simple Storage Service (S3) about 10 years ago, followed shortly by Elastic Compute Cloud (EC2). In the past 10 years, Amazon has learned a few things about running these services. In his keynote at LinuxCon Europe, Chris Schlaeger, Director of Kernel and Operating Systems at the Amazon Development Center in Germany, shared 10 lessons from Amazon.
 
1. Build evolvable systems

The cloud is all about scale and being able to get compute power only when you need it and getting rid of it when you don’t need it anymore. Schlaeger says that “the lesson that we learned isn’t to design for a certain scale, you always get it wrong. What you want to do instead is design your system so you can evolve it … over time without the customers or users knowing it.”

2. Expect the unexpected

Hardware has a finite lifespan, so things will fail, but you can design your systems to check for failure, deal with it, isolate failures, and then react to them. “Control the blast radius and raise failure as a natural occurrence of your software and hardware, all the time,” Schlaeger suggests.
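One common way to "control the blast radius" is a circuit breaker: after enough consecutive failures, callers stop invoking the failing dependency and fall back, instead of letting its errors cascade. This is a minimal sketch of the pattern, not Amazon's implementation; the threshold and the flaky dependency are invented for illustration.

```python
class CircuitBreaker:
    """Minimal circuit-breaker sketch: isolate a failing dependency."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.max_failures

    def call(self, fn, fallback):
        if self.open:
            return fallback()          # isolate: don't even try the dependency
        try:
            result = fn()
            self.failures = 0          # a healthy call resets the counter
            return result
        except Exception:
            self.failures += 1
            return fallback()

breaker = CircuitBreaker(max_failures=2)

def flaky():
    raise RuntimeError("dependency down")

for _ in range(4):
    print(breaker.call(flaky, fallback=lambda: "cached response"))
# All four calls return "cached response"; after the second failure,
# the breaker opens and `flaky` is no longer invoked at all.
```

The failure is contained to one dependency and surfaced as a degraded response, which is exactly the "failure as a natural occurrence" posture Schlaeger describes.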

3. Primitives, not frameworks

Amazon doesn’t know what every customer wants to do, and they don’t want to try to tell customers how to do their work. However, they do want to evolve quickly to follow the needs of their customers, and this agility is something that is much easier to accomplish with primitives rather than frameworks.

4. Automation is key

Schlaeger points out that “if you want to scale up, you need to have some form of automation in place.” If someone can log into your servers and make changes on the fly, then you can’t track what changes have been made over time.
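The alternative to ad-hoc logins is declarative automation: a desired state is recorded in a spec and diffed against what is actually observed, so every change is driven by (and traceable to) the spec. The reconciliation sketch below is a generic illustration of that idea, with made-up package names, not a description of Amazon's tooling.

```python
def reconcile(desired, observed):
    """Return the actions needed to move `observed` to match `desired`."""
    actions = []
    for key, value in desired.items():
        if observed.get(key) != value:
            actions.append(("set", key, value))
    for key in observed:
        if key not in desired:
            actions.append(("remove", key))
    return actions

# Hypothetical server state: package name -> installed version.
desired = {"nginx": "1.21", "app": "2.4"}
observed = {"nginx": "1.19", "legacy-agent": "0.9"}
print(reconcile(desired, observed))
# -> [('set', 'nginx', '1.21'), ('set', 'app', '2.4'), ('remove', 'legacy-agent')]
```

Because the action list is computed rather than typed into a live shell, the same run is repeatable across thousands of machines, which is what makes scaling up possible.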

5. APIs are forever

APIs can be tricky because if you want to keep your customers happy, you can’t keep changing your APIs. “You need to be very, very cautious and conscious about the APIs you have and make sure you don’t change them,” Schlaeger says.
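One standard way to honor this rule is to freeze the published entry point and land new capability additively behind it. The function names and the storage-class idea below are hypothetical, chosen only to illustrate the pattern of never changing a shipped signature.

```python
def _store_v2(key, value, storage_class):
    # Newer internals know about storage classes; callers of the old
    # API never see this parameter.
    return {"key": key, "value": value, "class": storage_class}

def put_object(key, value):
    """Original public API: its signature and behavior never change."""
    return _store_v2(key, value, storage_class="standard")

def put_object_with_class(key, value, storage_class="standard"):
    """New capability arrives as a new, additive call, not a changed one."""
    return _store_v2(key, value, storage_class)

print(put_object("photo.jpg", b"...")["class"])  # -> standard
```

Old clients keep working untouched while new clients opt in to the new behavior, which is what makes an API "forever" sustainable.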

6. Know your resource usage

When Amazon first launched S3, they charged for storage space and data transfer, so people quickly learned that storing and retrieving tiny thumbnail images for items on eBay was quite cheap. However, the large numbers of API calls generated a big enough load on Amazon’s servers that they had to start including call rates in the pricing model. Understanding all of your costs and building them into your prices is important.
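The arithmetic behind that lesson is easy to sketch: under per-GB pricing alone, millions of tiny thumbnail requests are nearly free to the customer but not free to serve. All rates and workload numbers below are made up for illustration; they are not Amazon's actual prices.

```python
GB = 1024 ** 3

def monthly_cost(bytes_stored, requests, per_gb=0.03, per_million_requests=0.40):
    """Split a hypothetical monthly bill into storage and request components."""
    storage = (bytes_stored / GB) * per_gb
    calls = (requests / 1_000_000) * per_million_requests
    return storage, calls

# 10,000 thumbnails of 5 KB each, fetched 50 million times a month:
storage, calls = monthly_cost(10_000 * 5 * 1024, 50_000_000)
print(f"storage: ${storage:.4f}/mo, requests: ${calls:.2f}/mo")
# Storage rounds to fractions of a cent; the request bill dominates.
```

With storage-only pricing, the provider would eat the entire request-serving cost for this workload, which is why knowing your resource usage has to feed back into the pricing model.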

7. Build security in from the ground up

It is important to get security involved in the design of a system, not just the implementation. You should also do regular check-ins as the service evolves over time to make sure it stays secure.

8. Encryption is a first class citizen

Schlaeger points out that “the best way you can prove to your customers that the data is safe from access from other parties … is to have them encrypted.” Within AWS, customers can encrypt all of their data and only the customer has access to the keys used to encrypt and decrypt the data. 

9. Importance of the network

This is probably the hardest part to get right, because the network is a shared resource for everybody across all use cases. Various customers have unique and often contradictory requirements for using the network.

10. No gatekeepers

“The more open you are with your platform, … the more success you will have,” Schlaeger says. Amazon doesn’t try to limit what their customers can do beyond what they need to protect the instances or services of other customers.

For more details about each of these 10 lessons, watch the full video below.

Interested in speaking at Open Source Summit North America on September 11 – 13? Submit your proposal by May 6, 2017. Submit now>>

Not interested in speaking but want to attend? Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the all-access attendee registration price. Register now to save over $300!

 

Darktrace Automates Network Security Through Machine Learning

 

Darktrace co-founder Poppy Gustafsson recently predicted, at TechCrunch Disrupt London, that malicious actors will increasingly use artificial intelligence to create more sophisticated spearphishing attacks.

Criminals are just as capable of using artificial intelligence as those trying to thwart them, according to security vendor ESET‘s 2017 trends report, with “next-gen” security marketers throwing around the buzzwords “machine learning,” “behavioral analysis” and more. That’s making it more difficult for potential customers to sift through all the hype.

The ESET report also predicts the rise of “jackware,” or Internet-of-Things ransomware, such as locking a car’s software until a ransom is paid.

Read more at The New Stack