The Hidden Benefit of Giving Back to Open Source Software

Companies that contribute to open source software and use it in their own IT systems and applications can gain a competitive advantage—even though they may be helping their competitors in the short run.

Open source software is software whose code can be adopted, adapted and modified by anyone. As part of the open source ethos, it is expected that people or companies who use open source code will “give back” to the community in the form of code improvements and enhancements.

And that presents an interesting dilemma for firms that rely heavily on open source. Should they allow employees on company time to make updates and edits to the software for community use that could be used by competitors? New research by Assistant Professor Frank Nagle, a member of the Strategy Unit at Harvard Business School, shows that paying employees to contribute to such software boosts the company’s productivity from using the software by as much as 100 percent, when compared with free-riding competitors.

Read more at Harvard Business School – Working Knowledge

5 Tips to Improve Productivity with zsh

The Z shell, known as zsh, is a shell for Linux/Unix-like operating systems. It has similarities to other shells in the sh (Bourne shell) family, such as bash and ksh, but it provides many advanced features and powerful command-line editing options, such as enhanced Tab completion.

It would be impossible to cover all the options of zsh here; there are literally hundreds of pages documenting its many features. In this article, I’ll present five tips to make you more productive using the command line with zsh.

1. Themes and plugins

Through the years, the open source community has developed countless themes and plugins for zsh. A theme is a predefined prompt configuration, while a plugin is a set of useful aliases and functions that make it easier to use a specific command or programming language.
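As a concrete illustration, a theme and the completion system can be enabled in a few lines of `~/.zshrc`. The sketch below uses zsh's built-in prompt theme system plus a popular third-party plugin; the plugin's install path varies by distro, so the path shown is an assumption:

```shell
# ~/.zshrc -- minimal theme + plugin setup (plugin path is illustrative)
autoload -Uz promptinit && promptinit   # load zsh's built-in prompt theme system
prompt adam1                            # pick one of the bundled prompt themes
autoload -Uz compinit && compinit       # enable enhanced Tab completion
# Source a third-party plugin; adjust the path to where your distro installs it.
source /usr/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh
```

Frameworks such as Oh My Zsh wrap the same idea: a theme is just a prompt definition, and a plugin is a file of aliases and functions sourced at startup.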

Read more at OpenSource.com

Find Out How to Leverage AI, Blockchain, Kubernetes & Cloud Native Technologies at Open FinTech Forum, NYC, Oct. 10 & 11

Join Open FinTech Forum: AI, Blockchain & Kubernetes on Wall Street next month to learn:

  • How to build internal open source programs
  • How to leverage cutting-edge open source technologies to drive efficiencies and flexibility

Blockchain Track:

Hear about the latest distributed ledger deployments, use cases, trends, and predictions of blockchain adoption. Session highlights include:

  • Panel Discussion: Distributed Ledger Technology Deployments & Use Cases in Financial Services – Jesse Chenard, MonetaGo; Umar Farooq, JP Morgan; Julio Faura, Santander Bank; Hanna Zubko, IntellectEU; Robert Hackett, Fortune Magazine
  • Enterprise Blockchain Adoption – Trends and Predictions – Saurabh Gupta, HfS Research
  • Blockchain Based Compliance Management System – Ashish Jadhav, Reliance Jio Infocomm Limited

Read more at The Linux Foundation

Open Source Summit: Innovation, Allies, and Open Development

August was an exciting month for Linux and open source, with the release of Linux kernel 4.18, a new ebook offering practical advice for enterprise open source, and the formation of the Academy Software Foundation. And, to cap it off, we ended the month with a successful Open Source Summit event highlighting open source innovation at every level and featuring keynote presentations from Linus Torvalds, Van Jones, Jim Zemlin, Jennifer Cloer, and many others.

In his welcoming address in Vancouver, The Linux Foundation’s Executive Director, Jim Zemlin, explained that The Foundation’s job is to create engines of innovation and enable the gears of those engines to spin faster.

This acceleration can be seen in the remarkable growth of the Cloud Native Computing Foundation (CNCF) and in the Google Cloud announcement transferring ownership and management of the Kubernetes project’s cloud resources to the CNCF, along with a $9 million grant over three years to cover infrastructure costs.

Such investment underscores a strong belief in the power of open source technologies to speed innovation and solve problems, which was echoed by Zemlin, who encouraged the audience to go solve big problems, one person, one project, one industry at a time.

Empathy

In another conference keynote, Van Jones, President and founder of Dream Corps, best-selling author, and CNN contributor, spoke with Jamie Smith, Chief Marketing Officer at The Linux Foundation about the power of tech and related social responsibilities.  

“There was a time when the future was written in law,” Jones said. “Now the future is written in Silicon Valley in code.” Jones went on to say that those working in technology today possess a new set of superpowers and they need to understand how to use those powers for good.

A big deficit that Jones sees, not just in technology but in politics and elsewhere, is an empathy gap. He noted, however, that listening and mentoring can help bridge this gap. “Each person has an opportunity to mentor one person… Don’t underestimate the one person in your life who gave you a shot; you can be that person,” he said.

Allies and advocates

Jennifer Cloer, founder and lead consultant at reTHINKit PR and co-founder of Wicked Flicks, also explored the power of mentors and supporters in her talk highlighting the “Chasing Grace” video project. Cloer offered a preview of the project in a short episode featuring Nithya Ruff, Senior Director, Open Source Practice at Comcast, and member of the Board of Directors for The Linux Foundation. In the video preview, Ruff described the important role that her father played in supporting her career.

Ruff also moderated a panel discussion at Open Source Summit examining issues of diversity and inclusion and exploring solid strategies for success. Ruff acknowledged that the efforts of open source communities to attract and retain diverse contributors with unique talent and perspectives have gathered momentum, but she said, “We cannot tackle these issues without the support of allies and advocates.”

Open development

On the last day of the conference, Linux creator Linus Torvalds sat down with Dirk Hohndel, VMware VP and chief open source officer, for their now-familiar fireside chat session. In the discussion, they touched on topics including hardware, quantum computing, kernel maintainership, and more.

In speaking of recent hardware vulnerabilities, Torvalds said, “These hardware issues were kept under wraps. Because it was secret and we were not allowed to talk about it, we were not allowed to use our usual open development model. That makes it way more painful than it should be.”

“When you’re doing a complex project, the only way to deal with complexity is to have the code out there,” Torvalds said. “There are so many layers. No one knows how all this works,” he continued, describing it as an “explosion of complexity.”

Nonetheless, Torvalds said he doesn’t worry so much about issues of technology within the kernel. “What I’m really worried about is the flow of patches. If you have the right workflow, the code will sort itself out.”

When asked whether he still understands the Linux kernel, Torvalds replied, “No. … Nobody knows the whole kernel. Having looked at patches for many, many years, I know the big picture, and I can tell by looking if it’s right or wrong.”

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.

Understanding the Difference Between CI and CD

There is a lot of information out there regarding Continuous Integration (CI) and Continuous Delivery (CD). Multiple blog posts attempt to explain in technical terms what these methodologies do and how they can help your organization. Unfortunately, both methodologies are often associated with specific tools or even vendors. A very common conversation in the company cafeteria may be:

  1. Do you use Continuous Integration in your team?
  2. Yes, of course, we use X tool

Let me tell you a little secret. Continuous Integration and Delivery are both development approaches. They are not linked to a specific tool or vendor. Even though there are tools and solutions out there that DO help you with both (like Codefresh), in reality, a company could practice CI/CD using just bash scripts and Perl one-liners (not really practical, but certainly possible).
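To make that point concrete, here is a deliberately minimal "CI pipeline" as a plain shell script — something a post-push hook or cron job could run on every commit. The build and test steps are stand-ins for real ones; the point is that CI is a practice, not a product:

```shell
#!/bin/sh
# Toy CI pipeline: run on every push, fail fast if any step fails.
set -e

build() {
  echo "building..."        # stand-in for a real compile/package step
}

run_tests() {
  echo "running tests..."   # stand-in for a real test suite
}

build
run_tests
echo "pipeline passed"
```

Anything that integrates and verifies every change this way is doing CI, regardless of which vendor's logo is on the dashboard.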

Therefore, rather than falling into the common trap of explaining CI/CD using tools and technical terms, we will explain CI/CD using what matters most: people!

Read more at The New Stack

MySQL High Availability at GitHub

GitHub uses MySQL as its main datastore for all things non-git, and its availability is critical to GitHub’s operation. The site itself, GitHub’s API, authentication and more, all require database access. We run multiple MySQL clusters serving our different services and tasks. Our clusters use classic master-replicas setup, where a single node in a cluster (the master) is able to accept writes. The rest of the cluster nodes (the replicas) asynchronously replay changes from the master and serve our read traffic.

The availability of master nodes is particularly critical. With no master, a cluster cannot accept writes: any writes that need to be persisted cannot be persisted. Any incoming changes such as commits, issues, user creation, reviews, new repositories, etc., would fail.

To support writes, we clearly need an available writer node: the master of a cluster. Just as important, we need to be able to identify, or discover, that node.

On a failure, say a master box crash scenario, we must ensure the existence of a new master, as well as be able to quickly advertise its identity. The time it takes to detect a failure, run the failover and advertise the new master’s identity makes up the total outage time.

This post illustrates GitHub’s MySQL high availability and master service discovery solution, which allows us to reliably run a cross-data-center operation, be tolerant of data center isolation, and achieve short outage times on a failure.
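The three moving parts — detect the failure, fail over, advertise the new master — can be sketched as a toy script. Everything here is simulated (no real MySQL involved); the host names and the health check are made up for illustration:

```shell
#!/bin/sh
# Toy failover loop: detect a dead master, promote a replica, advertise it.
set -e

MASTER="db-master-1"
REPLICAS="db-replica-1 db-replica-2"

is_alive() {
  # Stand-in for a real health check (e.g., a TCP probe to port 3306).
  [ "$1" != "db-master-1" ]   # simulate: the current master has crashed
}

if ! is_alive "$MASTER"; then
  for candidate in $REPLICAS; do
    if is_alive "$candidate"; then
      MASTER="$candidate"     # "promote" the healthiest replica
      break
    fi
  done
  # Advertise the new identity (in production: DNS, a VIP, or service discovery).
  echo "new master: $MASTER"
fi
```

The total outage time is the sum of how long each of those three stages takes, which is why real systems work hard to shrink detection and advertisement, not just the failover itself.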

Read more at GitHub

Helm, The Package Manager for Kubernetes

A few weeks ago, the CNCF family was extended with a new project – Helm, the package manager for Kubernetes.

Kubernetes was developed as a solution to manage and orchestrate containerized workloads, but managing bare containers is not always enough. Ultimately, Kubernetes is used to run applications, and a solution that simplifies deploying and running applications on Kubernetes was in high demand. Helm became that solution.

Originally developed by Deis, Helm quickly became a de facto open source standard for running and managing applications with Kubernetes.

If you imagine Kubernetes as an operating system (OS), Helm is its apt or yum. Any operating system is a great foundation, but the real value is in the applications. Package managers like apt and yum simplify operations: instead of building an application from source, you can install it with a single command.
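The apt/yum analogy maps directly onto Helm's CLI. Against a live cluster with Helm's server side installed (Tiller, in the Helm 2 of this era), installing an application looks like this — `stable/mysql` was a real chart in the stable repository at the time, but any chart works the same way:

```shell
# Add a chart repository and refresh its index (like configuring an apt source).
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update

# Search for a packaged application ("chart") and install it.
helm search mysql
helm install stable/mysql --name my-database

# Inspect and remove the release, apt/yum-style.
helm ls
helm delete my-database
```

These commands need a running Kubernetes cluster, so treat them as a usage sketch rather than something to run verbatim here.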

Read more at CNCF

Linux Weather Forecast

Welcome to the Linux Weather Forecast

This page is an attempt to track ongoing developments in the Linux development community that have a good chance of appearing in a mainline kernel and/or major distributions sometime in the near future. Your “chief meteorologist” is Jonathan Corbet, Executive Editor at LWN.net. If you have suggestions on improving the forecast (and particularly if you have a project or patchset that you think should be tracked), please add your comments below.

Forecast Summaries

Current conditions: the 4.18 kernel was released on August 12. Some of the more important changes in this release are:

  • The power domains subsystem has seen a number of enhancements that will lead to improved whole-system power management.
  • It is now possible for unprivileged processes to mount filesystems when running in a user namespace. This will make it possible to mount filesystems within containers without the need for additional privileges.
  • Zero-copy TCP data reception is now supported.
  • The AF_XDP subsystem will eventually lead to highly accelerated networking in a number of settings. This work is part of a larger effort to win back users of user-space networking stacks by providing better facilities in the kernel.
  • Bpfilter is a new packet-filtering mechanism based on the BPF virtual machine. The 4.18 version of bpfilter won’t do much, but it is expected to be the base on which the next generation of kernel firewalling systems is built.
  • Restartable sequences, a new API for performing highly optimized lockless operations in user space, have finally made it into the mainline kernel.

It’s also worth noting that the new AIO-based polling mechanism, originally merged for 4.18, was later reverted pending further work.
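Of the changes above, the user-namespace mount work is the easiest to demonstrate from the command line. On a 4.18+ kernel, an unprivileged user can create a user-plus-mount namespace and mount a filesystem inside it (older kernels, or hardened container runtimes that block `unshare`, will refuse):

```shell
# As an ordinary user: map yourself to root inside a new user+mount namespace,
# then mount a tmpfs there -- no real root privileges required on 4.18+.
unshare --map-root-user --mount sh -c '
  mount -t tmpfs none /mnt
  grep /mnt /proc/mounts
'
```

Because this depends on kernel version and sandbox policy, it is a demonstration to try on your own machine rather than a guaranteed one-liner.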

The 4.18 development cycle saw the merging of 13,283 unique changesets from nearly 1,700 developers, 226 of whom were first-time contributors. See this article for more information on the changes merged for 4.18.

Short-term forecast: The 4.19 kernel can be expected in mid-October. Some of the key features merged for this release are:

  • The AIO-based polling mechanism that didn’t quite make 4.18 will be there in 4.19.
  • Load tracking in the scheduler has been augmented with a better understanding of the resources used by realtime and deadline processes; this information will support better power-management decisions.
  • Intel’s cache pseudo-locking feature is now supported.
  • An extensive set of mitigations for the L1TF vulnerability has been provided.
  • The block I/O latency controller allows an administrator to provide block (disk) I/O response-time guarantees to specific processes.
  • Time-based packet transmission is now supported in the networking subsystem.
  • The CAKE queueing discipline offers significantly better networking performance, especially in home or small-business settings.
  • The minimum version of GCC needed to build the kernel is now 4.6.
  • As usual, thousands of bug fixes, clean-ups, and small improvements have been merged as well.
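Of the networking items, CAKE is the one a home user can try directly. With a 4.19 kernel and a recent iproute2, it is enabled per interface with `tc`; the interface name and bandwidth below are placeholders for your own uplink:

```shell
# Replace eth0 and the rate with your WAN interface and actual uplink speed.
tc qdisc replace dev eth0 root cake bandwidth 20mbit
tc -s qdisc show dev eth0    # verify cake is installed and view its statistics
```

This needs root and a CAKE-enabled kernel, so it is a usage sketch rather than something to paste blindly.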

The 4.19 kernel is in the stabilization phase now, with only bug fixes accepted. There is a small chance that this kernel will be called “5.0” when it is released, though your weatherman would predict that 5.0 won’t happen for one more release cycle.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Thoughts on the New State of DevOps Report

This week, the team at DORA released the latest State of DevOps report. There is a lot of conversation on social media about what makes an engineering team ‘successful’ and how to increase feature velocity at tech companies. The DORA report does not just rely on anecdotes; it surveys the field using rigorous scientific methods and statistical constructs to back or debunk common practices with real data. Because of the strong scientific backbone of its findings, I have been following the report for the last few years. If you have not, the book Accelerate is a great way to learn the findings of the previous four years.

Database management as a part of team performance

One of the new things in this year’s report is that it expands the behaviors examined in high-performing teams beyond pure ‘development’ practices, looking in particular at how these organizations handle database management.

We discovered that good communication and comprehensive configuration management that includes the database matter. Teams that do well at continuous delivery store database changes as scripts in version control and manage these changes in the same way as production application changes.
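That practice is easy to picture as a directory of ordered migration scripts living in the same repository as the application. The sketch below fakes the database step with `cat`; in a real setup each file would be piped to `mysql` or `psql`, and the file names are illustrative:

```shell
#!/bin/sh
# Database changes as versioned scripts, applied in order, exactly once.
set -e
mkdir -p migrations
printf 'CREATE TABLE users (id INT);\n' > migrations/001_create_users.sql
printf 'ALTER TABLE users ADD name TEXT;\n' > migrations/002_add_name.sql

: > applied.log                       # tracks which migrations have run
for f in migrations/*.sql; do
  if ! grep -qx "$f" applied.log; then
    echo "applying $f"
    cat "$f"                          # real life: mysql mydb < "$f"
    echo "$f" >> applied.log
  fi
done
```

Because the scripts live in version control, a database change gets the same review and deployment pipeline as any application change, which is exactly the behavior the report highlights.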

Read more at dbsmasher

How to See What’s Going On With Your Linux System Right Now

Is that service still running? What application is using that TCP port? These questions and more can be answered easily by sysadmins who know simple Linux commands.

If you’re a system administrator responsible for Linux servers, it’s essential that you master the Bash shell, especially the commands that show what software is running. That’s necessary for troubleshooting, obviously, but it is always a good idea to know what’s happening on a server for other reasons—not the least of which is security awareness.

Previously, I summarized 16 Linux server monitoring commands you really need to know, what to do when a Linux server keels over, and Linux commands you should never use. Here are some more commands to further your system administration skills—and help you identify the current status of your Linux server. I consider these commands to be fundamentals, whether your servers run on bare iron, a virtual machine, or a container with a lifespan of only a few hours.

To put these in some kind of context, let’s follow sysadmin Sammie as she goes through a typical day.
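Sammie's toolbox for those opening questions is small. The commands below answer "is that service still running?" and "what application is using that TCP port?"; availability varies by distro (`lsof` is often a separate package, and `systemctl` assumes systemd):

```shell
systemctl status sshd      # is that service still running? (systemd systems)
ss -tlnp                   # all listening TCP sockets, with owning process
lsof -i :80                # which process is using TCP port 80
ps aux --sort=-%cpu | head # the processes eating the most CPU right now
```

Output is entirely environment-dependent, so run these on the server you are actually curious about.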

Read more at HPE