
Fireside Chat: GKH Talks Licensing, Email, and Aging Maintainers

No one aside from Linus Torvalds has more influence or name recognition in the Linux kernel project than Greg Kroah-Hartman. More commonly known as GKH, the ex-SUSE kernel developer and USB driver maintainer is now a Linux Foundation Fellow and the full-time maintainer of the -stable Linux branch and staging subsystem, among other roles. In a recent Fireside Chat with Kroah-Hartman at Embedded Linux Conference Europe, Tim Bird, Chair of the Architecture Group of the Linux Foundation’s CE Working Group, described him as the hardest working person he knows.

Not only does Kroah-Hartman review an endless series of kernel patchsets and explore new directions for Linux — he attends almost every Linux-related conference in the world, said Bird. This year, GKH will only reach about 100,000 miles of travel, down from last year’s 140K. This slacker schedule may in part be due to recently moving with his family from the Pacific Northwest to Paris, France.

But why Paris, asked Bird. “The food and wine are good,” said GKH. “My daughter thinks I’m having a midlife crisis. I claimed she’s on the coattails of it – she worked at LinuxCon last week.” For the record, GKH said his main goal in Paris was to collaborate with researchers at Pierre and Marie Curie University on applied research in OS and system design.

The Keynote Fireside Chat at ELC Europe, held Oct. 11-13 in Berlin, focused primarily on two issues: whether older kernel maintainers should hand their jobs over to younger developers, and how to best bring open source scofflaws into compliance (see below). Meanwhile, here are a few other edited quick takes from GKH about issues ranging from patch review technologies to the role of Linux on microcontrollers.

On whether Linux has a role on microcontrollers…

GKH: A student of mine got Linux running on a Cortex-M3 with 4MB, which is great for Linux, but 2MB is pushing it. At LinuxCon in Toronto, some of us were drunk and found ways that we think we can get the kernel into 512KB — but it won’t do anything. Stripping Linux down for these chips would be awesome — I’d love to do that. But there are already so many good OSes for this. Zephyr is now a good alternative to NuttX.

On whether email still makes sense as the basis for patch review, vs. say, Gerrit…

GKH: There’s nothing else that’s better, faster, or more widely used around the world than email. It’s free, and it works great for people who can’t use GUIs, who have intermittent Internet access, or who don’t speak English as their first language. You can also use tools on top of email such as Patchwork, which can tie into continuous integration and testing, and that’s what people use Gerrit for.

On whether there are too many aging kernel developers and maintainers…

GKH: Yes, we are getting old, but it beats the alternative. Age is a dual factor. David Miller has maintained the network stack for 21 years, and I’ve been maintaining USB since the 1990s. That’s knowledge, depth, and information. When USB 3 came out, Microsoft put a whole new team on it. We had one really good developer – Sarah Sharp – implement USB 3 for us in half the time. So knowledge is good.

But we also work on getting in new developers. We work with Outreachy and Summer of Code and lots of universities to bring in interns, some of whom are younger than Linux. We have tons of work – if people want to do it. We have subsystems that nobody maintains. The parallel port subsystem hadn’t had a maintainer for over 12 years because no one wanted to do it. A new developer came in and converted the parallel ports to the driver model, and he did great, and he got a job. So youthful ignorance and blind ambition are great. I was there.

On the dos and don’ts of open source licensing compliance…

GKH: Amazon is an example of a company that perfectly complies with the license, but all they do is throw this random tarball up on some website. It’s the old SourceForge, ‘Let’s bury it somewhere’ crap, and that’s not good. It costs them money, and it’s a pain for us. So we just ignore them. They’re not receiving any of the benefits of being part of the community.

The biggest problem we have is the dumping of these huge patchsets. Look at Qualcomm’s 2.5 million lines of code crap in a git repository – okay, so it’s getting better, now it’s only 1.5 million lines. That’s crazy — it’s impossible to mine. They say, ‘Ooh, our new chips are based on kernel 3.18.’ Good job, guys. You’ll reach 4.4 just in time for me to obsolete that kernel. So all these embedded devices are running these crazy patchsets. It’s Linux ‘like’. There are entire SoCs and graphics drivers that nobody’s ever seen or touched.

On whether the Linux community is tough enough in enforcing open source compliance…

GKH: There’s been a lot of discussion lately about GPL enforcement. People have claimed that if we don’t enforce the GPL, it’s the same as BSD. That’s flat out false. Yes, people violate our license. That always happens. But it’s gotten a lot better. Back in the 1990s, people were shipping closed source Ethernet, SCSI, and controller drivers. It was crazy.

Intel used to be one of the biggest GPL violators, and now they’re our biggest supporter. And that happened due to us working with them. If you go into a company with lawyers, walls are going to come down, and you’re going to alienate everybody. It’s better if your developers contact the developers inside the company and say ‘What can we do to help you get your code merged into the kernel?’

Look at Microsoft, which is now an active contributor to Linux. That happened because Microsoft’s customers wanted open source, and because I knocked on their door nicely and asked if we could help with their kernel code. Their initial code dump was 12,000 lines of crap. We added it to the kernel staging directory, and after a year when we finally merged it out of staging, it was only 7,000 lines, and supported four new device types. We showed them that if they worked with us, we could make their code smaller, make their stuff run better, and make their customers happier.

We’re in it for the long haul. We don’t just want an instant code dump — we want them to become part of our community. You may want to get one device working, but what you really want is to get them to join the community. The only way we’re going to survive is if we bring in more people. Make them realize that working with the community saves them time and money. One day, they will become so reliant on Linux that they have to invest. They will turn to their partners and ask why they aren’t doing the same. This has been proven again and again.

Watch the complete fireside chat with Greg Kroah-Hartman below:

https://www.youtube.com/watch?v=s2I_7uCto5Q&list=PLbzoR-pLrL6pRFP6SOywVJWdEHlmQE51q

Watch all 125+ sessions from ELC + OpenIoT Summit covering the latest on embedded Linux development and open source IoT. Watch now >>

Embedded Linux Conference + OpenIoT Summit North America will be held February 21-23, 2017, in Portland, Oregon. Check out over 130 sessions on the Linux kernel, embedded development & systems, and the latest on the open Internet of Things.

Linux.com readers can register now with the discount code LINUXRD5 for 5% off the attendee registration price. Register now >>

Nov. 7 Webinar on Taking the Complexity Out of Hadoop and Big Data

The Linux Foundation’s Hadoop project, ODPi, and Enterprise Strategy Group (ESG) are teaming up on November 7 for a can’t-miss webinar for Chief Data Officers and their Big Data teams.

As a bonus, all registrants will receive a free copy of Nik’s latest Big Data report.
Join ESG analyst Nik Rouda and ODPi Director John Mertic for “Taking the Complexity out of Hadoop and Big Data” to learn:

  1. How ODPi pulls complexity out of Hadoop, freeing enterprises and their vendors to innovate in the application space

  2. How CDOs and app vendors can port apps easily across cloud, on-prem, and Hadoop distros. Nik reveals ESG’s latest research on where enterprises are deploying net-new Hadoop installs across on-premises, public, private, and hybrid cloud

  3. What big data industry leaders are focusing on in the coming months

Removing Complexity

As ESG’s Nik Rouda observes, “Hadoop is not one thing, but rather a collection of critical and complementary components. At its core are MapReduce for distributed analytics jobs processing, YARN to manage cluster resources, and the HDFS file system. Beyond those elements, Hadoop has proven to be marvelously adaptable to different data management tasks. Unfortunately, too much variety in the core makes it harder for stakeholders (and in particular, their developers) to expand their Hadoop-enhancing capabilities.”
The ODPi Compliant certification program ensures greater simplicity and predictability for everyone downstream of Hadoop Core – SIs, app vendors, and end users.

Application Portability

ESG reveals their latest findings on how enterprises are deploying Hadoop, and you may be surprised at the percent moving to the cloud. Find out who’s deploying on premise (dedicated and shared), who’s using pre-configured on-prem infrastructure, what percent are moving to private, public and hybrid cloud.

Where Industry Leaders are Headed

ESG interviewed leaders like Capgemini, VMware, and more as part of this ODPi research – let their thinking light your way as you develop your Hadoop and Big Data strategy.

Reserve your spot for this informative webinar. 


Managing Production Systems with Kubernetes in Chinese Enterprises

Kubernetes has rapidly evolved from running production workloads at Google to deployment in an increasing number of global enterprises. Interestingly, US and Chinese enterprises have different expectations when it comes to requirements, platforms, and tools. In his upcoming talk at KubeCon, Xin Zhang, CEO of Caicloud, will describe his company’s experiences using Kubernetes to manage production systems in large-scale Chinese enterprises. We spoke with him to learn more.

Linux.com: Is there anything holding back Kubernetes adoption and/or successful Kubernetes deployments in China?

Xin Zhang: We have encountered several pain points with Kubernetes adoption in Chinese enterprise deployments. Some examples are listed below:

  • The most obvious one, which people may immediately stumble onto, is the inaccessibility of certain Docker images hosted outside the Chinese network. Some traditional industries even require no outbound network accessibility (no traffic going out of the enterprise intranet), so being able to deploy Kubernetes without outside network access is a must.
  • Currently, most mutating operations in Kubernetes require using the command line and writing YAML or JSON files, whereas a considerable number of Chinese enterprise users are more familiar and comfortable with UI operations.
  • Many of the networking and storage plugins of Kubernetes are based on US cloud providers such as AWS, GCE, or Azure, which may not always be available or satisfactory (performance-wise) to Chinese enterprise users.
  • The complexity of Kubernetes (both its concept and its operations manual) may seem a burden to certain users.
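To illustrate the command-line pain point above: even a minimal deployment goes through a YAML manifest fed to `kubectl`. A sketch of what that workflow looks like (the names and image are illustrative, and the API version reflects Kubernetes releases current at the time):

```yaml
# deployment.yaml, applied with: kubectl apply -f deployment.yaml
apiVersion: extensions/v1beta1   # beta Deployment API group circa Kubernetes 1.4
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2                    # desired pod count
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.11        # must be pullable from inside the cluster's network
        ports:
        - containerPort: 80
```

A dashboard-style UI essentially fills in these same fields through forms, which is why UI-oriented operation is such a common request.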

Linux.com: Are there certain required features of a production system that are unique to Chinese enterprises?

Xin: When working with our customers, we did observe a set of commonly requested features that are missing from, or not yet mature in, the official upstream releases. While these patterns are summarized from our Chinese customers, they may have broader applicability elsewhere. We sketch some of them below:

  • A better logging mechanism is required. The default logging module requires applications to dump their logs to stdout or stderr, where system components like fluentd will do the right thing with them. However, Chinese enterprise applications are usually old-school, writing logs to local files, and some applications use separate files for fine-grained logging classification. Sometimes enterprises even want to send logs into their existing, separate log store and processing pipeline instead of using the EFK plugins.

  • Monitoring: There are several customized monitoring requests complementing the upstream solution:

    • Some customers consider running the somewhat heavyweight monitoring components in the same cluster as their applications a potential risk, and we did observe cases where monitoring components eat up system resources and affect user applications. Hence, being able to run monitoring components separately from the application cluster represents a common request.
    • While Kubernetes monitors applications running in it, a follow-up question is who monitors Kubernetes itself (its system components) and makes sure even the master is highly available.
    • Chinese enterprises tend to have existing monitoring infrastructure and tools (Zabbix is extremely popular!), and they’d like to have a unified monitoring panel that includes both Kubernetes container-level monitoring and existing metrics.
  • Network separation: While the default Kubernetes networking model allows any point-to-point network access within a cluster, complex enterprise usage scenarios require network policies, isolation, access control, or QoS among pods or services. Some enterprises even require Kubernetes to manage or cope with underlying SDN devices such as Huawei SDN controller.
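The network-separation request in the last bullet maps onto Kubernetes NetworkPolicy objects, which restrict pod-to-pod traffic once a policy-aware network plugin (Calico, for example) is in place. A minimal sketch, with illustrative label names and the beta API group in use around Kubernetes 1.4/1.5:

```yaml
# Allow only pods labeled role=web to reach pods labeled role=db;
# all other ingress to the db pods is dropped once a policy applies.
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: db-allow-web-only
spec:
  podSelector:          # the pods this policy protects
    matchLabels:
      role: db
  ingress:
  - from:
    - podSelector:      # the only permitted sources
        matchLabels:
          role: web
```

Note that QoS guarantees and integration with underlying SDN controllers are outside the scope of this object, which is why those remain open requests.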

Linux.com: What are the most common pitfalls you’ve seen when running Kubernetes in the wild?

Xin: We did encounter a handful of pitfalls during production usage in large-scale enterprise workloads. Some of them are summarized below:

  • Resource quotas and limits: While resource quotas and limits are intended to perform resource isolation and allocation, a good percentage of Chinese enterprise users have little idea of what values are appropriate to set. As a result, users may set an inappropriate min or max resource range for applications, which results in either task OOMs or very low resource utilization.
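For readers unfamiliar with these knobs, the per-container requests and limits mentioned above look like this in a pod spec (the values are illustrative, not recommendations). A request is what the scheduler reserves when placing the pod; a limit is a hard runtime ceiling. Set the memory limit too low and the container gets OOM-killed; set requests too high and nodes sit underutilized:

```yaml
# Fragment of a pod spec: per-container resource requests and limits
containers:
- name: app
  image: example/app:1.0        # illustrative image name
  resources:
    requests:                   # reserved by the scheduler when placing the pod
      cpu: 250m                 # 0.25 of a CPU core
      memory: 256Mi
    limits:                     # hard ceiling enforced at runtime
      cpu: 500m
      memory: 512Mi             # exceeding this gets the container OOM-killed
```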

  • Monitoring instability: We found that, in our setting, the default Heapster + InfluxDB monitoring solution is not very stable in large-scale deployments, which can cause missed alerts or instability of the whole system.

  • Running out of disk: As there is little limitation on disk usage in certain scenarios, an application that writes excessive logs may exhaust the local disk quota and cause other tasks to fail.

  • Updating the cluster: We provide commercial distributions of Kubernetes to customers and update our version every three months, roughly aligned with the upstream release schedule. But updating a live Kubernetes cluster is still cumbersome.

Linux.com: What well-known Chinese enterprises currently run Kubernetes in production today? What are they using it for? 

Xin: Our Kubernetes users include leaders in a variety of industries. Some example customers are:

  • Jinjiang Travel International is one of the top five online travel agency (OTA) and hotel companies; it sells hotels, travel packages, and car rentals. They use Kubernetes containers to speed up their software release velocity from hours to just minutes, and they leverage Kubernetes to increase the scalability and availability of their online workloads.
  • China Mobile is one of the largest carriers in China. They use containers to replace VMs to run various applications on their platform in a lightweight fashion, and they leverage Kubernetes to increase resource utilization.
  • State Power Grid is the state-owned power supply company in China. They use containers and Kubernetes to provide failure resilience and fast recovery.

Linux.com: How can Kubernetes be used more effectively in global environments?

Xin: To us, some imminent needs that will enable wider Kubernetes adoption globally are the following:

  • Ease of deployability in more diverse IaaS settings, in the parts of the world where GCE, AWS, etc. are not the best choices.

  • More performance tuning and optimization: Production systems have stringent performance requirements, hence continuing to push the boundary of Kubernetes performance is of great value.

  • Better documentation and education: We have received customer complaints that the official documentation is still hard to follow and contains too many cross-references. We hope more effort can be devoted to better documentation and to more educational events around the globe (such as training, certification, and technical meetups/conferences).

Registration for this event is sold out, but you can still watch the keynotes via livestream and catch the session recordings on CNCF’s YouTube channel. Sign up for the livestream now.

Microsoft Open Sources Its Next-Gen Cloud Hardware Design

Microsoft today open sourced its next-gen hyperscale cloud hardware design and contributed it to the Open Compute Project (OCP). Microsoft joined the OCP, which also includes Facebook, Google, Intel, IBM, Rackspace, and many other cloud vendors, back in 2014. Over the last two years, it has already contributed a number of server, networking, and data center designs.

With this new contribution, Project Olympus, it’s taking a slightly different approach to open source hardware, however. Instead of contributing designs that are already finalized, which is the traditional approach to open sourcing this kind of work, the Project Olympus designs aren’t production-ready yet. The idea here is to ensure that the community can actually collaborate in the design process.

Read more at Tech Crunch

Node.js Is Helping Developers Get the Most Out of JavaScript

Node.js, the JavaScript runtime of choice for high-performance, low-latency apps, continues to gain popularity among developers on the strength of JavaScript.

When a small startup decided to launch its technological foundation on top of Microsoft’s .NET platform, it needed a .NET expert to provide a master view. Being lean and distributed, the company chose .NET guru Carl Franklin to serve remotely as CTO to oversee things.

However, at the DEVintersection conference in Las Vegas last week, Franklin, now executive vice president of App vNext and co-host and founder of .NET Rocks!, said he held the CTO position for all of two days before someone whispered in the CEO’s ear and convinced him that hot, new Node.js—not shriveled old .NET—was the way to go.

“Node.js is rapidly replacing Java and .NET due to the agility of the Node.js software development life cycle,” said Dan Shaw, CTO and co-founder of NodeSource, a provider of support services for Node.js shops. “Building a Java app typically takes six to 24 months from start to finish. In contrast, Node.js applications take two to six months.”

Read more at eWeek

Let’s Automate Let’s Encrypt

HTTPS is a small island of security in this insecure world, and in this day and age, there is absolutely no reason not to have it on every Web site you host. Up until last year, there was just a single last excuse: purchasing certificates was kind of pricey. That probably was not a big deal for enterprises; however, if you routinely host a dozen Web sites, each with multiple subdomains, and have to pay for each certificate out of your own dear pocket—well, that quickly could become a burden.

Now you have no more excuses. Enter Let’s Encrypt — a free Certificate Authority that officially left Beta status in April 2016. 

Read more at Linux Journal

‘Thanks for Using Containers!’ … Said No CEO Ever

“We think we’re going to get magical powers when we use other people’s servers,” said Casey West, Principal Technologist for Pivotal’s Cloud Foundry platform, during his OSCON Europe talk, in which he provided a humorous and insightful look at how the CEO sees, doesn’t see, or honestly doesn’t care about the vast majority of the work that IT professionals do in the cloud.

IT pros work across pretty much every industry these days. But the expectations are largely the same across all of them, no matter if the projects they work on are “greenfield projects” designed to break into new areas of business, or “brownfield projects,” which is a nice way of saying you are updating legacy systems.

With greenfield systems, “all you have to do is create something from thin air and compete with billion-dollar companies. No big deal.” The requirements are basically twofold: All you have to do is…

  • Deliver faster than everyone else.
  • Never make a mistake.

With brownfield systems, “all you have to do is modernize an existing application that makes all our revenue in order to compete with companies theoretically valued at a billion dollars.” No big deal. Oh, and…

  • Deliver faster than everyone else.
  • Never make a mistake.

Read more at The New Stack

How DNS Works: A Primer

The Domain Name System is critical to fundamental IP networking. Learn DNS basics in this primer.

DNS has been in the news a great deal as of late. First, there was the controversy over the United States government essentially handing over control of the Internet’s root domain naming system. Then DNS made headlines when cybercriminals performed three separate distributed denial of service (DDoS) attacks on a major DNS service provider by leveraging a botnet army of millions of compromised IoT devices. Yet with all the hoopla surrounding DNS, it surprises me how many IT pros don’t fully understand DNS and how it actually works.

DNS stands for Domain Name System. Its purpose is to resolve and translate human-readable website names to IPv4 or IPv6 addresses. 
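The resolution step is easy to observe from code. Here is a minimal Python sketch using the standard library’s resolver (the hostname is just an example; results depend on your system’s resolver configuration):

```python
import socket

def resolve(hostname):
    """Return the distinct IPv4/IPv6 addresses a name resolves to."""
    # getaddrinfo consults the system resolver (hosts file, then DNS)
    infos = socket.getaddrinfo(hostname, None)
    # each entry is (family, type, proto, canonname, sockaddr);
    # the address string is the first element of sockaddr
    return sorted({info[4][0] for info in infos})

print(resolve("localhost"))
```

Pointed at a public name instead, the function returns whatever A (IPv4) and AAAA (IPv6) records the configured DNS servers hand back.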

Read more at Network Computing

Hyperledger Eyes Mobile Blockchain Apps With ‘Iroha’ Project

A blockchain project developed by several Japanese firms, including startup Soramitsu and IT giant Hitachi, has been accepted into the Hyperledger blockchain initiative.

Developed by Hyperledger member and blockchain startup Soramitsu, Iroha was first unveiled during a meeting of the project’s Technical Steering Committee last month. Iroha is being pitched as a supplement to other Hyperledger infrastructure projects, such as IBM’s Fabric (on which it is based) and Intel’s Sawtooth Lake.

Read more at CoinDesk

Deployment Automation: The Linchpin of DevOps Success

Deployment Automation is the linchpin of DevOps transformation. I cannot put it more simply:
To accelerate your DevOps adoption, and get the biggest bang for your buck: FOCUS ON DEPLOYMENTS.

The previous State of the DevOps reports have shown a pretty straightforward equation:

Deployment frequency is THE indicator for success
and deployment pain is a predictor of failure.

More Deploys

We all remember the impressive hockey-stick graph from the 2015 report, comparing the number of deploys/day/developer between high-performing IT organizations (in Orange) and low-performing ones.

That difference became even more staggering in the 2016 research.

The 2016 State of the DevOps report showed that high-performing IT organizations deploy 200 times more frequently than low performers, with 2,555 times faster lead times.

Less Pain

The 2015 report found that deployment automation (along with CI, testing and version control practices) predicted lower levels of deployment pain, higher IT performance, and lower change failure rates.

The reports show that those high-performing IT organizations also have higher employee loyalty and engagement.

On the other end of the spectrum: deployment pain is correlated with employee churn.

This makes deployment automation one of the clearest examples of the convergence of the two axes of DevOps: the technology/process axis and the culture/people one.

If your talent is running for the hills, and your developers are tired of banging their heads against the wall trying to get stuff to work — you should look at how you deploy.

ARA

Not only the State of the DevOps reports but all analyst research confirms how critical deployments are. For example, in the recent Gartner Magic Quadrant for Application Release Automation, Gartner notes that ARA – of which deployment automation and release coordination are key tenets – is the most important technology for an organization’s adoption of DevOps.

Bottom Line

Electric Cloud focuses primarily on Application Release Automation and on the Ops-side of large-scale deployment automation because we understand the bottom line is pretty simple:
To accelerate your DevOps transformation, and keep employee satisfaction high – focus on deployments.

Read the full article here.