
Google’s Networking Lead Talks SDN Challenges for the Next Decade

“The question of whether Software Defined Networking is a good idea or not is closed. Software Defined Networking is how we do networking,” said Amin Vahdat, Fellow and Technical Lead for Networking at Google, during his Open Networking Summit (ONS) keynote. Google has gone headfirst into the cloud with Google Cloud Platform, which Vahdat says has expanded its network in new and exciting ways. Over the past decade, the company built one of the largest networks in the world to support Google services such as web search, Gmail, and YouTube. But with the move to the cloud, Google is now hosting services for others, which is pushing it toward new functionality and capabilities.

Google’s cloud architecture is built around disaggregation, with storage and compute spread across the entire data center. It doesn’t matter which server holds a particular piece of data, because the data is replicated across the entire data center. The networking challenge with this approach is that the bandwidth and latency requirements for accessing anything anywhere increase substantially, which raises the requirements for networking within the data center, Vahdat points out.

Software Defined Networking (SDN) has been evolving at Google. Vahdat says that in 2013, they presented B4, a wide area network interconnect for their data centers. In 2014, it was Andromeda, their network virtualization stack that forms the basis of Google Cloud. In 2015, Google had Jupiter for data center networking. Now, they have Espresso, which is SDN for the public Internet with per-metro global views and real-time optimization across many routers and many servers.

“What we need to be doing, and what we will be doing moving forward, is moving to Cloud 3.0. And here the emphasis is on compute, not on servers,” Vahdat says. This allows people to focus on their business, instead of worrying about where their data is placed, load balancing among the different components, or configuration management of operating systems on virtual machines. With networking playing a critical role in Cloud 3.0, there are a few key elements to think about: storage disaggregation, seamless telemetry, transparent live migration, service level objectives, and more.

Vahdat suggests that “the history of modern network architecture is going to begin at ONS. In other words, this community has been responsible for defining what networking looks like in the modern age, and it’s really different from what it has been.”

Watch the video to see more about how networking and the cloud are evolving at Google.

https://www.youtube.com/watch?v=1xBZ5DGZZmQ&list=PLbzoR-pLrL6p01ZHHvEeSozpGeVFkFBQZ

Interested in open source SDN? The “Software Defined Networking Fundamentals” training course from The Linux Foundation provides system and network administrators and engineers with the skills to maintain an SDN deployment in a virtual networking environment. Download the sample chapter today!

Check back with Open Networking Summit for upcoming news on ONS 2018. 

See more presentations from ONS 2017:

AT&T on the Next Generation of Network Software and Hardware

Martin Casado, General Partner, Andreessen Horowitz, talks about how he’s seen SDN change over the past 10 years. 

Junior Python Developer Wins Hackathon, Lands Internship After Attending ApacheCon 2016 with Diversity Scholarship

The Linux Foundation’s diversity scholarship program provides support to those from traditionally underrepresented or marginalized groups in the technology and open source communities who may not otherwise have the opportunity to attend Linux Foundation events for financial reasons.

We firmly believe the power of collaboration is heightened when many different perspectives are included, so these efforts benefit the community, not just those who participate.

Linux Foundation scholarship recipient Khushbu Parakh
In 2016, The Linux Foundation awarded more than $75,000 in complimentary registration passes for diversity scholarship recipients to attend 12 Linux Foundation events in a variety of industries — from automotive and cloud computing to embedded, IoT, and networking.  

Khushbu Parakh was one of last year’s scholarship recipients. She is a junior developer who favors Python and says she’s truly fascinated with it. Parakh is also a Google Summer of Code mentor with the Anita Borg Organization.

“Besides that, I’m a geek. I like new ideas and pushing the envelope of the possible uses for computing,” Parakh said. “However, technology isn’t just about building a nifty new widget. I like being on the cutting edge of what is going to be the next hurdle.”

Linux.com asked her for her thoughts on the program and The Linux Foundation event she chose to attend. She named a long list of benefits she reaped afterwards, including landing an internship at Avi Networks, a company she met at the conference; winning a hackathon; and receiving further training and research support. She also said she’s sharing what she learned through her work mentoring young girls.

Here’s what else she said.

Linux.com: Why did you apply for a scholarship?

Khushbu Parakh: I applied because I wanted the opportunity to network with developers who had the same interests but a variety of experiences in computer science and related fields. From this community, I can learn more about their areas of study as well as the problems they encounter in their research and within their professional development. In addition to expanding my technical knowledge through networking and workshops, The Linux Foundation offers the opportunity to meet women in computing through lunches during the conferences. Further, with graduation soon approaching, I was looking for a variety of options open to my current skill set. I also wanted to find a career path that fits my personal goals of creating an environment that fosters and encourages young females to pursue all science, with an emphasis on computing.

Linux.com: Which event did you attend and why?

Parakh: I attended ApacheCon: Big Data Seville 2016. I was looking forward to doing my research on macro connections, so the conference gave me good exposure and a chance to meet mentors. I ended up spending more time using tools from Kaggle, a Big Data repository.

I have also applied to MesosCon Europe 2017, where I not only want to learn more but also contribute to open source. I want to have a hands-on session on the technology and get to know the ways it can help me.

Linux.com: How did you benefit from attending? What did you gain from attending the event?

Parakh: Four great things happened to me after attending the conference:

  1. I was shortlisted by my research professor at the University of Zurich to pursue research in Big Data (macro connections).

  2. I became part of a chapter of Women in Big Data and was awarded training in LFS252 OpenStack Administration Fundamentals.

  3. My team and I won a hackathon, where we contributed new features to the DC/OS Mesos dashboard.

  4. I got an internship at Avi Networks, a company I met at the conference.

Linux.com: Will you be sharing what you learned at these events? If so, how?

Parakh: I participate in Science Immersions, local meetups, and sessions in a program designed to expose students from economically disadvantaged backgrounds to different types of STEM. There, I am a role model to young girls, talking to them about my research and other aspects of being a female in STEM. My technical expertise is currently in cloud computing. I am working on load balancing using OpenStack and Google Cloud Platform (GCP), monitoring performance using functional automation testing.

In particular, the young girls learn about different perspectives of STEM in an entertaining and educational manner that instills excitement and a love for STEM. Learning these skills at the conference helped me become a mentor in Google Summer of Code, which encourages students all over the world to contribute to FOSS. I felt immensely happy to see that I can bring change, at least in the lives of some people who want to do something to better the world.

LinuxCon + ContainerCon + CloudOpen China | June 19 – 20, 2017

Apply for a diversity scholarship >>

MesosCon Asia | June 20 – 22, 2017

Apply for a diversity scholarship >>

Open Source Summit North America 2017

Apply for a diversity scholarship >>

MesosCon North America | September 13 – 15, 2017

Apply for a diversity scholarship >>

Node.js Interactive North America

Apply for a diversity scholarship >>

Embedded Linux Conference | October 23 – 25, 2017

Apply for a diversity scholarship >>

Open Source Summit Europe | October 23 – 25, 2017

Apply for a diversity scholarship >>

APIStrat | October 31 – November 2, 2017

Apply for a diversity scholarship >>

Faster Machine Learning Is Coming to the Linux Kernel

It’s been a long time in the works, but a memory management feature intended to give machine learning or other GPU-powered applications a major performance boost is close to making it into one of the next revisions of the kernel.

Heterogeneous memory management (HMM) allows a device’s driver to mirror the address space of a process under its own memory management. As Red Hat developer Jérôme Glisse explains, this makes it easier for hardware devices like GPUs to directly access the memory of a process without the extra overhead of copying anything. It also doesn’t violate the memory protection features afforded by modern OSes.

Read more at InfoWorld

TensorFlow in Kubernetes in 88 MB

TensorFlow is a beast. It deals with machine learning algorithms, it is built with Bazel, it uses gRPC, etc., but let’s be honest: you are dying to play with machine learning. Come on, you know you want to! Especially in combination with Docker and Kubernetes. At Bitnami, we love apps, so we wanted to!

TensorFlow + Kubernetes + Docker + Machine learning = Awesomeness.

You just need to add bi-modal in there and you will hit buzzword bingo.

Jokes aside, TensorFlow is an open source library for machine learning. You can train data models and use those models to predict or infer information. I did not dig into TensorFlow itself, but I hear it is using some advanced neural network techniques. NNs have been around for a while, but due to computational complexity, they were not extremely useful past a couple of layers and a couple dozen neurons (20 years ago at least 🙂 ). Once you have trained a network with some data, you can use that network (aka model) to predict results for data not used in the training data set. As a side note, I am pretty sure we will soon see a marketplace of TF models.
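To make that train-then-predict loop concrete, here is a minimal sketch in Python using the TensorFlow 1.x API of the era (tf.placeholder, tf.Session). The toy linear-regression data and all variable names are illustrative assumptions, not taken from the article:

import tensorflow as tf

# Toy training data for y = 2x + 1 (illustrative values, not from the article)
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

# Model parameters the optimizer will learn
w = tf.Variable(0.0)
b = tf.Variable(0.0)

# Placeholders for feeding training data into the graph
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)

# Linear model with a mean-squared-error loss
pred = w * x + b
loss = tf.reduce_mean(tf.square(pred - y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):  # train the model
        sess.run(train_step, feed_dict={x: xs, y: ys})
    # Use the trained model to predict a value outside the training set
    print(sess.run(pred, feed_dict={x: 10.0}))  # approximately 21.0

A small script like this is roughly the kind of workload the article then packages into a compact TensorFlow container image and runs on Kubernetes.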

Read more at DZone

Stephen Wolfram: A New Kind of Data Science

When it comes to scientific computing, few names are better known than Stephen Wolfram. He was the creator of Mathematica, a program that researchers have been using for decades to aid in their computations. Later, Wolfram expanded Mathematica into a full multi-paradigm programming language called Wolfram Language. The company also packaged many of the Mathematica formulas, and a lot of outside data, into a cloud-based service and API. So at this year’s SXSW Interactive, we spoke with Wolfram about how to use this new cloud service to add computational intelligence to your own programs.

Read more at The New Stack

DevOps: The Key to IT Infrastructure Agility

These days, digital grabs a lot of headlines that trumpet how it’s radically changing customer behaviors. This typically means that IT departments have to deliver new features faster even in the face of more demanding requirements for availability (24/7) and security.

DevOps promises to do exactly that, by fostering a high degree of collaboration across the full IT value chain, from business through development and operations to IT infrastructure. But there’s a problem.

While many software-development and operations teams have made steps toward DevOps methods, most enterprise IT-infrastructure organizations still work much as they did in the first decade of this century: They use a “plan-build-run” operating model organized by siloed infrastructure components, such as network, storage, and computing.

Read more at McKinsey

Important Open Source Ruling Confirms Enforceability of Dual-Licensing and Breach of GPL for Failing to Distribute Source Code

A recent federal district court decision denied a motion to dismiss a complaint brought by Artifex Software Inc. (“Artifex”) for breach of contract and copyright infringement claims against Defendant Hancom, Inc. based on breach of an open source software license. The software, referred to as Ghostscript, was dual-licensed under the GPL and a commercial license.

This case highlights the need to understand and comply with the terms of open source licenses. … It also highlights the validity of certain dual-licensing open source models and the need to understand which of the license options applies to your usage. If your company does not have an open source policy or has questions on these issues, it should seek advice.

Read more at National Law Review

Enlightenment Foundation Libraries – Case Studies of Optimizing for Wearable Devices – Cedric Bail

Cedric Bail, a long-time contributor to the Enlightenment project who works on EFL integration with Tizen at Samsung Open Source Group, discussed some of the lessons learned in optimizing wearable apps for low battery, memory, and CPU usage.

Keynote: Redefining the Tech that Powers Travel – Rashesh Jethi, Amadeus

https://www.youtube.com/watch?v=jV0kAt64yy0&list=PLbzoR-pLrL6p01ZHHvEeSozpGeVFkFBQZ

Rashesh Jethi, SVP of Engineering at Amadeus, describes the company’s journey to build its platform as a service layer.


Linux You Can Drive My Car – Walt Miner, Linux Foundation

https://www.youtube.com/watch?v=Ub8bNo9yM_4&list=PLbzoR-pLrL6pSlkQDW7RpnNLuxPq6WVUR

At the recent Embedded Linux Conference, Walt Miner provided an AGL update and summarized AGL’s Yocto Project-based Unified Code Base (UCB) for automotive infotainment.