
How China Mobile Is Using Linux and Open Source

China Mobile is one of the biggest telecom companies in the world, with more than 800 million users in China — all of whom are served with open source technologies. During the 2016 Mobile World Congress, China Mobile declared that the operational support system running their massive network would be based on open source software. China Mobile is not alone; many major networking vendors are moving to open source technologies. For example, AT&T is building their future network on top of OpenStack, and they have invested in software-defined technology so significantly that they now call themselves a software company.

I sat down with Zhang Zhihong, Deputy General Manager of Cloud Computing Products at China Mobile’s Suzhou R&D Center, to discuss how the company is embracing open source and Linux, and how they are giving back.

China Mobile is not the only player embracing Linux in this industry. Zhihong said that companies like Alibaba and Baidu also have internal groups whose sole job is to build optimized Linux distributions for their own consumption. Not only does Linux cut costs heavily (you don’t have to pay millions of dollars for subscriptions or licenses covering thousands of machines), but the company can also fine-tune it to get the most out of their network and infrastructure.

“We thought, when they can do it, why can’t we? We built an internal team at China Mobile and created our own Linux distribution,” said Zhihong.

China Mobile buys around 4,000-5,000 servers every year, and most of these servers run Linux. Previously, they used commercial versions of Linux — mainly SUSE Linux Enterprise Server and RHEL — but in 2015 their team created a custom version of Linux that gave them more control over their infrastructure while also cutting costs.

The new operating system is based on CentOS, and in 2016, China Mobile deployed more than 10,000 physical servers running this customized version of Linux in a production environment.

Cost and Control

When asked about the advantages of using their own custom Linux, Zhihong pointed out two deciding factors: cost and control. Cost has been the most important factor, Zhihong said. The purchasing department makes all decisions and controls all deals for the company, and it considers a commercial operating system too expensive, as the cost can run into hundreds of millions of dollars.

The second reason was better control over their infrastructure. “With a custom Linux distribution, we can push our limits as we have a lot of low-level software. We use KVM for virtualization and Ceph for storage, with a lot of fine-tuning and optimization at the kernel level. If there are bugs, or if we need a new feature, it can take a lot of time to talk to the vendor and get those changes into the OS. By using our own distribution, we gain this capability.”

However, running their own distribution doesn’t mean that they don’t contribute to the Linux community. Zhihong said that contributing back is a core part of using Linux or any other open source software. He said more than 100 contributors from the company contribute to the kernel; whenever there is a bug, they fix it and submit the patch.

Zhihong gave an example of working with upstream: when they hit a KVM bug in their public cloud production environment, virtual machines with several CPUs would crash whenever more than two disks were attached. Their kernel team traced the problem, which turned out to be a buffer overflow, fixed the bug, and submitted the patch upstream.

He also added that Linux is very strong in China, with many local Linux groups, and he said many employees from the company are part of those local Linux communities. “Linux is very much welcomed in China; there are a lot of Linux programmers,” Zhihong said.

OpenStack Superuser

In addition to Linux, China Mobile is a heavy user of other open source technologies. “We use a lot of open source technologies: OpenStack, Hadoop, Zookeeper, Tomcat, Ceph, and so many that I can’t list them all,” said Zhihong.

China Mobile is more than just a mobile carrier; they offer many more services. They have many IT applications, so they have been running their own private cloud for many years. Their private cloud is spread across three pools in three regions of China. There are thousands of servers running in these pools, but the cloud is proprietary and not open source.

By 2015, OpenStack had stabilized and matured enough to be considered seriously by the likes of China Mobile. So, China Mobile began building a new OpenStack private cloud spanning two pools, with each pool running more than 3,000 servers. Once the project is complete, they will connect it to the existing proprietary cloud and little by little replace it with OpenStack.

Their commitment to open source and OpenStack led them to win the OpenStack Superuser Award last year. OpenStack is mostly seen as a private cloud answer to AWS and Microsoft Azure, but China Mobile uses OpenStack in both its private and public clouds. Their public cloud has more than 3,000 servers. It’s similar to AWS in that it provides virtual machines, object-based storage, and other such services to customers. It has more than 20,000 registered users and around 2,000 enterprise users.

China is a huge market for companies like China Mobile with more than a billion potential customers, and it’s also the manufacturing hub of the world. As more and more big companies embrace Linux and open source, China may evolve from a consumer of Linux to one of its leading contributors.

The program for Open Networking Summit is now available!

Look forward to over 75 sessions led by networking visionaries including Martin Casado, General Partner, Andreessen Horowitz; Amin Vahdat, Google Fellow and technical lead for networking, Google; Justin Dustzadeh, VP, Head of Global Infrastructure Network Services; Dr. Hossein Eslambolchi, Technical advisor to Facebook, Chairman & CEO, 2020 Venture Partners; and many more. Register now >>

3 Reasons FLOSS Developers Should Attend Devoxx US, Eclipse Converge and Eclipse IoT Day

We are just a month away from Devoxx US and Eclipse Converge, and I’m really excited about what is coming up. Like I often say, there are only so many conferences that one can attend, so it is always hard to figure out which ones are really worthwhile. Although I am certainly a bit biased, since I am involved in its organization, here are three reasons why I think you should plan on being in San Jose the week of March 20.

#1 | Three co-located conferences to make the most of your week focusing on Cloud, IoT, Blockchain, Linux, and more!

If you’re a developer, chances are you have attended one or many Devoxx conferences in the past. It’s been the largest developer conference in Europe for many years, with events in the UK, Belgium, Morocco, France, and Poland! I’ve attended a few myself and always come away amazed at how much I learned, and how many great people I had the opportunity to meet.

The first edition of Devoxx US will take place on March 21-23, and I am very excited about this year’s program. With rock star speakers from Docker, Red Hat, and Google, I am personally really looking forward to learning more about building scalable software, which is a topic dear to my heart given my involvement with IoT. The “Cloud, Containers & Infrastructure” track will also feature lots of cool talks covering Linux technology.

Building on the success of EclipseCon in the past, the Eclipse Converge conference will be a one-day event co-located with Devoxx US. One area that has kept the Eclipse community very busy recently is the integration of the Language Server Protocol (LSP) in all things IDE (from the classical Eclipse desktop IDE to Eclipse Che’s cloud-based IDE). The LSP talks from Red Hat, Pivotal and TypeFox will be a great opportunity to learn more about the technology, and how it will help make our development environments more flexible.

Last but not least is the Eclipse IoT Day, which takes place on Monday, March 20; see #2 below for the details!

All in all, that’s three conferences happening in the same week! Not to mention that you can also plan on bringing your kids to Devoxx4Kids! This should provide you with plenty of opportunities to catch up on what’s hot and learn about the future of software development from world-class experts.

#2 | A strong focus on IoT with the Eclipse IoT Day

The Eclipse IoT community has grown significantly over the past five years. Eclipse IoT Days are hosted all around the world and are a great place to learn more about both open source projects and how people are using them to create IoT solutions.

On March 20, the Eclipse IoT Day San Jose will have a very impressive line-up of speakers from Red Hat, Intel, Bosch SI, Deutsche Telekom, and more, who will be sharing their experiences building and using Eclipse IoT technology. There are a bunch of projects in the Eclipse ecosystem that I believe are pretty unique in the marketplace. I, for one, am very much looking forward to getting an update on Eclipse hawkBit, a full-blown solution for managing software rollouts to IoT devices, which is used in production by Bosch today. I’m also very interested in hearing about Red Hat’s take on why building an open source IoT cloud platform matters, and how they are scaling their IoT infrastructure using Kubernetes and OpenShift. Large organizations are adopting Eclipse IoT open source technology and deploying solutions today, and that is exciting!

#3 | A community event

Besides the technical talks, there will be many opportunities to meet with the community at-large and to network: Devoxx Hackergarten, Eclipse IoT Working Group meeting, or Birds of a Feather sessions just to name a few. I don’t know about you, but I typically spend as much time (if not more!) talking to people in the hallways as I do attending talks, and that’s what any good conference should be about, right? We expect many attendees from diverse backgrounds and with a lot to share, so brace yourself for a very busy week!

You can register now to attend any or all of these events: Devoxx US, Eclipse Converge, and the Eclipse IoT Day.

I look forward to seeing many of you in San Jose at the end of March!

5 Videos to Get You Pumped to Speak at MesosCon 2017

Last year, experts from Uber, Twitter, PayPal, HubSpot, and many other companies shared how they use Apache Mesos at MesosCon events in North America and Europe. Their talks helped inspire developers to get involved in the project, try out an installation, stay informed on project updates, and generally get pumped to use and participate in Apache Mesos.

The MesosCon program committee is now seeking proposals from speakers with fresh ideas, enlightening case studies, best practices, or deep technical knowledge to share with the Apache Mesos community again this year. MesosCon is an annual conference held in three locations around the globe and organized by the Apache Mesos community in partnership with The Linux Foundation.

March 25 is the deadline for speakers to submit proposals for MesosCon Asia. MesosCon North America’s deadline is May 20 and MesosCon Europe’s is July 8. Here, we’ve rounded up the top 5 videos from the 2016 MesosCon North America event for some inspiration. Submit your speaking proposal now!

1. How Verizon Labs Built a 600 Node Bare Metal Mesos Cluster in Two Weeks

Craig Neth, Distinguished Member of the Technical Staff at Verizon Labs, describes building a 600-node Mesos cluster from bare metal in two weeks. His team didn’t really get it all done in two weeks, but it’s a fascinating peek at some ingenious methods for accelerating the installation and provisioning of the bare hardware, and some advanced ideas on hardware and rack architectures.

https://www.youtube.com/watch?v=6P8htQnXCfM&list=PLGeM09tlguZQVL7ZsfNMffX9h1rGNVqnC

2.  4 Unique Ways Uber, Twitter, PayPal, and HubSpot Use Apache Mesos

Dr. Abhishek Verma, first author of the Google Borg paper, describes how Uber used the Apache Cassandra database and Apache Mesos to build a fluid, efficient cluster of geographically diverse datacenters. The goals of this project were five-nines reliability, low cost, and reduced hardware requirements. Mesos allows such flexible resource management that you can co-locate services on the same machine.

https://www.youtube.com/watch?v=U2jFLx8NNro

3. Apache Mesos for Beginners: 3 Videos to Help You Get Started

“How do I get my hands on this? I don’t have a datacenter or a team of engineers. What if I want to become a contributor? How do I make this all go in my own little test lab?”

The talks highlighted in this article will help you answer these questions. Aaron Williams, Joris Van Remoorter, and Michael Park of Mesosphere, and Frank Scholten of Container Solutions share how to run Mesos on a laptop, how to become a contributor, and the basic architecture of a Mesos-based datacenter.

https://www.youtube.com/watch?v=J14_H4T0JB0&list=PLGeM09tlguZQVL7ZsfNMffX9h1rGNVqnC

4.  Apache Spark Creator Matei Zaharia Describes Structured Streaming in Spark 2.0

Apache Spark has been an integral part of Mesos from its inception. Spark is one of the most widely used big data processing systems for clusters. Matei Zaharia, the CTO of Databricks and creator of Spark, talked about Spark’s advanced data analysis power and new features in its upcoming 2.0 release in his MesosCon 2016 keynote.

https://www.youtube.com/watch?v=L029ZNBG7bk&list=PLGeM09tlguZQVL7ZsfNMffX9h1rGNVqnC

5. Open Source Is Key to the Modern Data Center, Says EMC’s Joshua Bernstein

DevOps is key to agility, agility is key to innovation and success, and open source powers DevOps. Joshua Bernstein, Vice President of Technology at EMC, describes the value that this brings to an organization: “We automate everything. We drive out corner cases. We strive for commodity hardware…The biggest thing is that we value this ability to interoperate. This goes along with microservices and the way that we build microservice applications now. We also value tremendously the ability to leverage a collaborative community.”

https://www.youtube.com/watch?v=-Xts6Mkujkg

Submit a proposal to speak at MesosCon Asia » The deadline is March 25.

Submit a proposal to speak at MesosCon North America » The deadline is May 20.

Submit a proposal to speak at MesosCon Europe » The deadline is July 8.

Not interested in speaking but want to attend? Linux.com readers receive 5% off the “attendee” registration with code LINUXRD5.

Register for MesosCon Asia » Save $125 through April 30.

Register for MesosCon North America » Save $200 through July 2.

Register for MesosCon Europe » Save $200 through August 27.

Apache, Apache Mesos, and Mesos are either registered trademarks or trademarks of the Apache Software Foundation (ASF) in the United States and/or other countries. MesosCon is run in partnership with the ASF.

Distributed Logging for Containers

The era of microservices calls for a new approach to logging with built-in infrastructure for both aggregation and storage. Multiple applications running in isolated containers require a specialized approach to make sure all data is collected, stored and usable later. Eduardo Silva, a software engineer at Treasure Data, gave a crash course in distributed logging during his keynote at CloudNativeCon last November, showing the pros and cons of different infrastructure models and highlighting the open source project Fluentd.

“When we are talking about this kind of architecture, we need to stop thinking about just a file, just a file system,” Silva said. “It’s about how I’m going to deal with the different aggregation patterns, how I’m going to distribute my logs.”

Silva said there are three main parts of a distributed logging infrastructure: collector nodes, aggregator nodes, and a destination — a database, a file system, or another service. Collectors retrieve the raw logs from the application and parse their content. Aggregators pull in that log data from multiple sources and convert the logs — which could be in a number of different formats — into streams. Destinations access the data streams and store the information somewhere permanent.
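To make those three roles concrete, here is a minimal, purely illustrative Python sketch of such a pipeline. It is not Fluentd code; the function names, the record format, and the file destination are hypothetical stand-ins for the collector, aggregator, and destination nodes Silva describes.

```python
import json
import time

def collect(raw_line):
    """Collector: parse a raw application log line into a structured record."""
    level, _, message = raw_line.partition(" ")
    return {"time": time.time(), "level": level, "message": message}

def aggregate(records):
    """Aggregator: merge records from many collectors into a single JSON stream."""
    return [json.dumps(r) for r in sorted(records, key=lambda r: r["time"])]

def store(stream, path):
    """Destination: persist the stream somewhere permanent (here, a local file)."""
    with open(path, "a") as destination:
        for line in stream:
            destination.write(line + "\n")

if __name__ == "__main__":
    raw_logs = ["INFO user logged in", "ERROR payment failed"]
    store(aggregate([collect(line) for line in raw_logs]), "aggregated.log")
```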

Depending on variables like CPU resources, network traffic, and whether or not the system needs high availability and/or redundancy, there are different ways to configure a distributed logging system, Silva said. The main question is where to put the aggregator — either alongside the collector nodes, if high network traffic is an issue, or closer to the destination, if network failure is likely or data loss is unacceptable.

“To [best] implement all these integration patterns,” Silva said, “you need the right tool for this kind of solution. So this is where Fluentd joins in. Fluentd is an open source data and log collector, which was designed to achieve all these kinds of aggregation patterns and adapt to your own needs. It was made with high performance in mind. It has built-in reliability, structured logs, and a pluggable architecture.”

Silva said Docker includes a native Fluentd logging driver, and both Kubernetes and OpenShift use Fluentd as the main logging aggregator. Its architecture includes built-in parsers and filters to handle and convert multiple data types, and buffers that hold more intense logging streams in memory to protect against database or network failures. Fluentd has been an active project since 2011.
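As a rough sketch of that buffering idea (an assumption for illustration, not Fluentd’s actual implementation), an in-memory buffer that keeps records until the destination is reachable could look like this:

```python
import time
from collections import deque

class MemoryBuffer:
    """Hold log records in memory and flush them once the destination is reachable."""

    def __init__(self, flush_to_destination, max_retries=5):
        # flush_to_destination is a hypothetical callable, e.g. a database write
        self.queue = deque()
        self.flush_to_destination = flush_to_destination
        self.max_retries = max_retries

    def append(self, record):
        self.queue.append(record)

    def flush(self):
        """Deliver buffered records; keep them on failure so nothing is lost."""
        for attempt in range(self.max_retries):
            try:
                while self.queue:
                    self.flush_to_destination(self.queue[0])
                    self.queue.popleft()
                return True
            except ConnectionError:
                time.sleep(2 ** attempt)  # back off while the destination recovers
        return False
```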

Silva announced on stage that Fluentd has joined the Cloud Native Computing Foundation as a partner, so the open source project is poised to become an even bigger part of the open source foundation’s work.

“We have thousands of companies using Fluentd,” Silva said. “We have thousands of individual users, and as you saw we have more than 600 plugins around and most of them are made by individuals.”

Watch the complete presentation below:

Want to learn more about Kubernetes? Get unlimited access to the new Kubernetes Fundamentals training course for one year for $199. Sign up now!

Logging for Containers by Eduardo Silva, Treasure Data


Basic Rules to Streamline Open Source Compliance For Software Development

The following is adapted from The Linux Foundation’s e-book, Open Source Compliance in the Enterprise, by Ibrahim Haddad, PhD.

Companies will almost certainly face challenges establishing their open source compliance program. In this series of articles, based on the e-book, we discuss some of the most common challenges and offer recommendations on how to overcome them.

The first challenge is to balance the compliance program and its supporting infrastructure with existing internal processes while meeting deadlines to ship products and launch services. Various approaches can help ease or solve these challenges and assist in creating a streamlined program that is not seen as a burden on development activities.

Companies should build open source management on two important foundational elements: a simple and clear compliance policy and a lightweight compliance process.

Mandate Basic Rules

It’s important to first have executive-level commitment to the open source management program to ensure success and continuity. In addition, policies and processes have to be light and efficient so that development teams do not regard them as overly burdensome to the development process. Establish some simple rules that everyone must follow:

  • Require developers to fill out a request form for any open source software they plan to incorporate into a product or software stack.

  • Require third-party software suppliers to disclose information about open source software included in their deliverables. Your software suppliers may not have great open source compliance practices, and it is recommended that you update your contractual agreement to include language related to open source disclosures.

  • Mandate architecture reviews and code inspections by the Open Source Review Board (OSRB) to understand how software components are interrelated and to discover license obligations that can propagate from open source to proprietary software. You will need proper tooling to accommodate a large-scale operation.

  • Scan all incoming software received from third party software providers and ensure that their open source disclosures are correct and complete.
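As a rough illustration of that last rule, a first-pass scan of an incoming third-party deliverable might look like the sketch below. The license keywords and directory path are assumptions for illustration only; a real compliance program would rely on dedicated license-scanning tools rather than simple string matching.

```python
import pathlib

# Hypothetical keyword list; real scans use dedicated license-scanning tools.
LICENSE_KEYWORDS = ["GNU General Public License", "Apache License", "MIT License", "BSD"]

def scan_deliverable(root):
    """Walk a third-party deliverable and report files that mention known licenses."""
    findings = {}
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        hits = [keyword for keyword in LICENSE_KEYWORDS if keyword in text]
        if hits:
            findings[str(path)] = hits
    return findings

if __name__ == "__main__":
    # "third_party/supplier_drop" is a hypothetical directory for incoming deliverables
    for file_path, licenses in scan_deliverable("third_party/supplier_drop").items():
        print(f"{file_path}: {', '.join(licenses)}")
```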

Integrate Rules Into the Existing Development Process

Once the basic rules have been established, the most successful way to create compliance is to incorporate the compliance process, policies, checkpoints, and activities into existing software development processes.

The priority for all organizations is to ship products and services on time while building and expanding their internal open source compliance infrastructure. Therefore, you should expect to build your compliance infrastructure as you go, keeping in mind scalability for future activities and products. The key is thoughtful and realistic planning.

Plan a complete compliance infrastructure to meet your long-term goals, and then implement the pieces stepwise, as needed for short-term execution.

For instance, if you are just starting to develop a product or deliver a service that includes open source and you do not yet have any compliance infrastructure in place, the most immediate concern should be establishing a compliance team, processes and policy, tools and automation, and training your employees. Having kicked off these activities (in that order) and possessing a good grip on the build system (from a compliance perspective), you can move on to other program elements.

The next challenge to establishing an open source compliance program is clearly communicating your organization’s efforts to meet its open source license obligations with others inside and outside the company. In the next article, we’ll cover some practical ways to approach communication.

Get the open source compliance training you need. Take the free “Compliance Basics for Developers” course from The Linux Foundation. Sign up now!

Read the other articles in this series:

The 7 Elements of an Open Source Management Program: Strategy and Process

The 7 Elements of an Open Source Management Program: Teams and Tools

How and Why to do Open Source Compliance Training at Your Company

Evolution of Business Logic from Monoliths through Microservices, to Functions

The whole point of running application software is to deliver business value of some sort. That business value is delivered by creating business logic and operating it so it can provide a service to some users. The time between creating business logic and providing service to users with that logic is the time to value. The cost of providing that value is the cost of creation plus the cost of delivery. …

As technology has progressed over the last decade, we’ve seen an evolution from monolithic applications to microservices, and we are now seeing the rise of serverless, event-driven functions, led by AWS Lambda. What factors have driven this evolution? Low-latency messaging enabled the move from monoliths to microservices, and low-latency provisioning enabled the move to Lambda.
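To ground the term, a serverless, event-driven function is little more than a handler the platform invokes per event. The sketch below shows the general shape of an AWS Lambda-style handler in Python; the event fields and the business logic are purely hypothetical.

```python
import json

def handler(event, context):
    """Entry point the platform invokes for each incoming event.

    The platform provisions the runtime on demand, so there is no server or
    container to manage; the business logic shrinks to this single function.
    """
    order = json.loads(event.get("body", "{}"))  # hypothetical event shape
    total = sum(item["price"] * item["qty"] for item in order.get("items", []))
    return {
        "statusCode": 200,
        "body": json.dumps({"order_id": order.get("id"), "total": total}),
    }
```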

Read more at Adrian Cockcroft’s blog

Blockchain: A New Hope, or Just Hype?

Cryptocurrencies such as bitcoin may have captured the public’s fancy — and also engendered a healthy dose of skepticism — but it is their underlying technology that is proving to be of practical benefit to organizations: the blockchain. Many industries are exploring its benefits and testing its limitations, with financial services leading the way as firms eye potential windfalls in the blockchain’s ability to improve efficiency in such things as the trading and settlement of securities. The real estate industry also sees potential in the blockchain to make homes — even portions of homes — and other illiquid assets trade and transfer more easily. The blockchain is seen as disrupting global supply chains as well, by boosting transaction speed across borders and improving transparency.

Read more at World Economic Forum

5 Open Source Security Tools Too Good to Ignore

If you haven’t been looking to open source to help address your security needs, it’s a shame—you’re missing out on a growing number of freely available tools for protecting your networks, hosts, and data. The best part is, many of these tools come from active projects backed by well-known sources you can trust, such as leading security companies and major cloud operators. And many have been tested in the biggest and most challenging environments you can imagine. 

Open source has always been a rich source of tools for security professionals—Metasploit, the open source penetration testing framework, is perhaps the best-known—but information security is not restricted to the realm of researchers, investigators, and analysts, and neither are the five open source security tools we survey below. 

Read more at InfoWorld

Why Upstream Contributions Matter when Developing Open Source NFV Solutions

When software is developed using open source methods, an upstream repository of the code is accessible to all members of the project. Members contribute to the code, test it, write documentation, and can create a solution from that code to use or distribute under license. If an organization follows the main stream, or branch, of the upstream code, their solution will receive all the changes and updates created in the upstream repository. Those changes simply “flow down” to the member’s solution. However, if a member organization forks the code — if they create a solution that strays from the main stream — their solution no longer receives updates, fixes, and changes from the upstream repository. This organization is now solely responsible for maintaining their solution without the benefit of the upstream community, much like baby salmon that take a tributary and then have to fend for themselves rather than remain in the main stream and receive the benefit and guidance of the other salmon making their way to the ocean.

Network functions virtualization (NFV) is the revolution sweeping telcos as they modernize their infrastructure. After years of lock-in to proprietary vendors who were solely responsible for the solution…

Read more at Vertical Industries Blog