
Faster Tied Together: Bundling Your App with webpack

In the first post of our series, we outlined three components of a modern front-end stack. In the second post, we untangled the challenge of package management with Yarn. In this post, we’ll take a look at the next component in our stack: webpack™, a way of building and bundling assets for web apps.

Webpack is a robust and extensible tool that brings speed, parity between environments, and organized code to your application. It does its best work graphing a modular codebase, tying many graphed dependencies together into a few output files. For anything webpack doesn’t do readily, it can be taught to do with plugins. It can graph JavaScript modules naturally, and it can transform just about anything into a JavaScript module with a special kind of plugin called a loader.

It’s also growing rapidly. In May of 2017, the npm registry reported nearly 6.7 million downloads per month, up from 323,000 two years previously. That’s a 2000% increase. Just last year, the project established a core team, launched a much-improved documentation site, and joined the JS Foundation. So if you haven’t already, it’s probably time to consider whether you should be using webpack in your project.

Of course, every engineering choice comes with tradeoffs. To make the best investment decision, you need to know what each webpack feature gets you and what each one asks of you. With that in mind, we’ll do some accounting of the costs and benefits of four powerful features: “lean building”, code-splitting, tree-shaking, and hot module replacement. Along the way we’ll introduce the basic concepts vital for understanding how webpack works. When we’re done, we’ll take a look at the bottom line to help you decide if webpack should be your web app bundler of choice.

Build Lean

First let’s look at “lean building”. This is not a core feature of webpack so much as a powerful side benefit. Webpack reduces the overhead and friction associated with orchestrating the build of applications for local development and production. As such, it covers enough territory by itself to toss out the bulky tools we once depended on. Let’s start with its main function.

File Bundling

At its most basic level, webpack reads source files and then rewrites their contents into new, fewer, tightly-packed files. As a bundler, it knows how to read JavaScript that follows a module pattern. For example, some common patterns—like CommonJS and ES2015 (ES6)—divide code into modules by file, with each file declaring exports to name what it provides.

If you organize your JavaScript this way, you usually have one or a few “main” files that start the app (i.e. the “bootstrap”), a bit like the first tile in a line of dominoes. Webpack captures the file path from each import or require in this “main” file, reads each of those files, captures the file paths in those, and—like a digital version of Six Degrees of Kevin Bacon—learns how all the chunks connect to one another. Then it bundles all the chunks into a small set of files that’s easier to load than the myriad original parts.

It doesn’t have to be just JavaScript. You can extend your webpack configuration to read and connect any type of file you might require, including images, CSS, and more. It can even handle connections across types—for example, scripts can require CSS, and CSS in turn can require an image. For the price of a configuration for each type, you can pack together any of the files you need to deliver.
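For instance, with the right loaders configured (loaders are covered in more detail later), a script can require a stylesheet directly. A minimal sketch, using a hypothetical app.js; style-loader and css-loader are real packages, and images referenced from the CSS would additionally need a loader such as file-loader:

    // app.js: requiring CSS from JavaScript
    require('./styles.css');

    // webpack.config.js: a rule that teaches webpack to read .css files
    // it finds in the dependency graph
    module.exports = {
      entry: './app.js',
      module: {
        rules: [{ test: /\.css$/, use: ['style-loader', 'css-loader'] }]
      }
    };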

Command Automation

This egalitarian ability comes at a good time. If you’re installing webpack and related tools, that means you’re using a package manager like npm or Yarn, both of which natively support arbitrary package scripts.

In the past, many packages could be driven only through their Node module interfaces. Grunt™ and Gulp fostered ecosystems of plugins that shimmed these modules for the command line. But the build configurations that large projects assembled with them lumbered and cracked under their own weight.

Today, the npm registry offers a wide variety of packages usable from a CLI, even for “little” build tasks like recursive file removal or copying. With them, tasks can often be expressed in breezy one-liners. You get all of this for just the overhead of depending on npm, which you’ve already bought into.
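For example, a package.json might wire such tools into scripts (a sketch; rimraf and cpx are real packages for recursive file removal and copying, while the script names and paths here are illustrative):

    {
      "scripts": {
        "clean": "rimraf dist",
        "assets": "cpx \"src/assets/**\" dist/assets",
        "build": "npm run clean && webpack"
      }
    }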

Accounting

Let’s start tracking the costs and benefits of webpack.

The costs:

  • Configure webpack to handle multiple types

  • Include cross-type imports in modules

The benefits:

  • One toolset to cover the primary build concern

  • Maintainable automation CLI

Split Code

Next, let’s consider webpack’s code-splitting feature. Tobias Koppers (sokra on GitHub) originally wrote webpack to ensure that apps only load the code you need when you need it. Your app’s first page probably doesn’t need to load everything at once. If you split the sum total of your code up into parts—what shows up first and then what shows up later—the critical components of your app load more quickly.

Greater speed via webpack comes with a few costs. You’ll need to tell webpack where to split your code, and also where to put the bundled code. Let’s take a closer look.

Entry Point

Before webpack can split your code, it needs to know your application’s entry point. In your webpack configuration, you don’t have to describe where to find every file—just the one where it all starts:
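    // webpack.config.js: a minimal sketch with a single entry point
    module.exports = {
      entry: './where-it-all-starts.js'
    };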

From there, webpack will dig down recursively (following every branch of the tree) to find all of the dependencies related to the entry point. You can have as many entry points as you want. If you add a second entry point, then this is the point at which webpack splits the code:
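    // webpack.config.js: a sketch with two named entry points,
    // which is where webpack splits the code
    module.exports = {
      entry: {
        main: './where-it-all-starts.js',
        other: './a-library.js'
      }
    };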

For simplicity’s sake we’ll assume that where-it-all-starts.js is our primary code that needs to load first, and a-library.js is an additional library that can load later. The two do not overlap. Refer to the guide on code-splitting libraries for greater detail on how to configure webpack if they do. (Also, stay tuned for the final post in our series, which will present a case study showing our modern stack in action, including webpack.)

Output

Now only one step remains: the output. webpack needs to know where to put the new, bundled version of the code for all the entry points:
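    // webpack.config.js: a sketch adding an output section to the two entry points
    module.exports = {
      entry: {
        main: './where-it-all-starts.js',
        other: './a-library.js'
      },
      output: {
        filename: '[name].js',
        path: '/absolute/path/to/dist'
      }
    };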

Here the [name] substitution is a placeholder for each of the property names for the entry configuration. When you execute webpack with this configuration, it will create (or overwrite) two new files in /absolute/path/to/dist: main.js and other.js. Again, for the sake of simplicity, this overview doesn’t mention how you load the right script on the right page. That’s a separate concern (but easy to implement). All you need to code split is multiple entry points and an output configuration.

Accounting

The costs:

  • Modular JavaScript

  • Knowing and creating the correct webpack configuration

  • Multiple script requests over HTTP

The benefits:

  • Faster startup speeds

  • Modular JavaScript

You probably noticed we listed “Modular JavaScript” as a cost and a benefit. Converting to modular code, if you aren’t already using it, could cost you some time, but it gives you the power to build discrete units of functionality without worrying when they might be loaded.

Shake Trees

Now let’s grab the tree and give it a shake to see what comes out. Webpack can find code that will never execute for the life of the application, otherwise known as “dead code”, and remove it for you. In other words, it shakes out the dead and loose parts of your dependency tree. You’re most likely to benefit from this if your application uses part, but not all, of a third-party library. Without shaking, you force your users to spend time loading code that they’ll never use. With shaking, it’s like that code was never there.

Static Modules

To reap these benefits, you’ll have to use statically-structured modules, a new feature available via the import/export syntax in ES2015. While webpack can support many common module types, it cannot tree shake all of them. You have to provide it with a way to map the functions that you do and do not use, and it cannot do this reliably or efficiently if it’s possible for the modules to change dynamically.
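For example, ES2015 modules fix their imports and exports at compile time, so webpack can tell exactly which exports go unused. A sketch:

    // math.js: a statically-structured ES2015 module with two exports
    export function square(x) { return x * x; }
    export function cube(x) { return x * x * x; }

    // main.js: only square is imported, so cube is dead code that can be shaken out
    import { square } from './math';
    console.log(square(4));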

Unfortunately, browsers only thinly support ES2015 modules natively, as shown in the graph below. As of June 2017, out of 15 common browsers, only Safari (on macOS and iOS) supports them by default.

[Figure: “Can I Use” graph of ES2015 module support]

If you want both static analysis and a wide audience today, you’ll need a transpilation step that takes your modern source and turns it into a form widely supported by browsers. In this scenario, the language you write in (for example, ES.Next or TypeScript) must be treated a little differently than typical JavaScript. We’ll need to explain to webpack how to handle that with a concept called loaders.

Loaders

Loaders for webpack often come in the form of Node packages that you add on to your installation as a development dependency. Which one you need depends on your source language. For example, if you’re using TypeScript, you need the ts-loader:
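    # install TypeScript and its webpack loader as development dependencies
    npm install --save-dev typescript ts-loader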

Next, configure webpack to include the loader in a list of rules. For each rule, you need to provide a filename pattern to look for and the name of the loader. If we build on the configuration from the previous section and turn all of our imaginary JavaScript entry points into TypeScript, it might look like this:
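    // webpack.config.js: a sketch in which the entry files become .ts
    // and a module section adds the ts-loader rule
    module.exports = {
      entry: {
        main: './where-it-all-starts.ts',
        other: './a-library.ts'
      },
      output: {
        filename: '[name].js',
        path: '/absolute/path/to/dist'
      },
      module: {
        rules: [
          { test: /\.ts$/, use: 'ts-loader' }
        ]
      }
    };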

The added rule tells webpack to use the ts-loader on any file ending with .ts that’s part of an entry point dependency graph.

Production Option

We’re almost done. By default, webpack does not remove dead code from the bundle. When you run webpack with this configuration, it does not assume you want the output optimized for production (that is, minified and tree-shaken). To get all of those, run webpack with the production option—you just need to add a -p flag to the command:
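    # the production shortcut: minifies the output, dropping the shaken-out code
    webpack -p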

That’s it. If you want more control, you can have it with additional configuration details. But for a minimum viable tree shake, this’ll do.

Accounting

The costs:

  • Use static modules

  • Transpile such modules into widely-supported code with loaders

  • Add a mode for running webpack in production

The benefits:

  • No dead code, which again means speed

  • Less costly third-party libraries in terms of page weight

Replace Modules at Runtime

Like the previous features, hot module replacement (HMR) buys you speed. Unlike the previous features, it speeds up development, not production. Webpack can watch the files related to your entry points to see when they change during development. Each time they do, it can replace just the things you changed while the application keeps running. What’s more, it does this quickly enough that it feels immediate. Sound good? Let’s go over what it takes to set up.

Development Server

Hot module replacement requires webpack’s development server. Up until now, we have only considered webpack in terms of reading files and writing them out again. To get webpack involved in swapping modules at runtime, it’s going to have to be involved in the serving process. While developing your application, you can create the HTTP static asset server quickly by installing webpack-dev-server and running it.
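    # a sketch: install the dev server, then run the locally-installed binary
    npm install --save-dev webpack-dev-server
    ./node_modules/.bin/webpack-dev-server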

To include HMR with the server, add the --hot flag:
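    ./node_modules/.bin/webpack-dev-server --hot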

At this point, you have a development server that can update your application as you change your files. While the app is running, if you change anything in the where-it-all-starts.ts file from earlier, you will see output from the dev server showing that webpack recompiled.

App State

While instant recompiles on the fly are nice, you won’t see how different HMR feels until your development process has to contend with state. Let’s take form validations as an example. Imagine that your app has a signup form that includes a field for email address. If form validation detects a badly-formatted email, it stops the form from being submitted and displays an error.

Now let’s say you want to change the messaging in that error. If you use a typical HTTP server for development, then you will have to retrace those steps every time you make a change to your source: refresh the page, go to the form, fill out the email field with a bad address, and check the message. Depending on your application, any scenario with a lot of state could easily take more steps than that.

So how does this look with hot module replacement? With HMR, webpack can load the module that provides the error message without forcing you to recreate all the circumstances from scratch. However, to do its best, your application must be able to reload what you’ve changed without affecting state. This means you need to write your application in a way that separates state from the view, or that offers a way to restore state. How exactly that happens depends on your app. You might find an existing loader that can determine the means of replacement. Or, if you have to make your own way, HMR offers an API to help.
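As a sketch of the hand-rolled approach, a module can accept updates for a dependency and re-render from the state it kept (the view module, render function, and state shape here are hypothetical):

    // main.js: accepting hot updates for ./view while preserving app state
    import { render } from './view';

    let state = { email: '', error: null };  // hypothetical state, kept across updates
    render(state);

    if (module.hot) {
      // when ./view changes, pull in the updated module and re-render with the old state
      module.hot.accept('./view', () => {
        const { render: nextRender } = require('./view');
        nextRender(state);
      });
    }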

Accounting

The costs:

  • Serve in development through webpack-dev-server

  • Loaders that support HMR

  • A way for each asset module to separate or restore runtime state

The benefits:

  • Quick runtime feedback from source changes in development

Bottom Line

It’s time to make a decision. Let’s look at the bottom line. We’ve checked out four features webpack offers and the minimum effort it takes to realize them. In sum, these features offer an economy of opportunities—each one is optional, but they build on one another.

The broad community of webpack plugins and tools offers even more opportunities. Just keep in mind that each addition has a cost and steepens the learning curve. Maximizing the abilities of webpack involves considerable configuration, so expect to put in a decent time investment before you’re comfortable with it (especially if modular codebases are new to you and your team). Don’t let that daunt you. Take on the costs one at a time and you’ll soon find yourself enjoying all the benefits. We encourage you to read our case study article (part 5 of this series) and its accompanying repositories to see a full webpack build in action. Also, be sure to crack open the concepts and guides documentation.

At Kenzan, we’ve found that webpack gives us all the basics we expect from task runners like Grunt and Gulp, but it brings so much more to both the user and developer experience with leaner code, faster page loads, and the power to swap modules on the fly while developing. It’s our default choice for building applications. But the best choice is always one that suits the situation. Understanding your options in terms of the benefits and trade-offs gives you a basis for comparison to any alternatives.

Stay tuned for the next post in this series, where we’ll look at the last core component of our modern front-end stack: TypeScript.

Kenzan is a software engineering and full service consulting firm that provides customized, end-to-end solutions that drive change through digital transformation. Combining leadership with technical expertise, Kenzan works with partners and clients to craft solutions that leverage cutting-edge technology, from ideation to development and delivery. Specializing in application and platform development, architecture consulting, and digital transformation, Kenzan empowers companies to put technology first.

Grunt and webpack are trademarks of the JS Foundation.

SPDX Could Help Organizations Better Manage Their Thickets of Open Source Licenses

As open source becomes more pervasive, companies are consuming products that have open source components. Today it is hard to find any piece of software that doesn’t contain open source code, which makes it complicated for companies to keep tabs on what they are consuming and stay compliant with open source licenses.

To help simplify matters, the Linux Foundation has created a new project called Software Package Data Exchange (SPDX). The Foundation hosts the project and owns the copyright on the specification and trademark assets. It’s an open community of volunteers, with participants from a broad spectrum of companies, academia, and other foundations.

Read more at The New Stack

Web Development Trends 2017

Web development is progressing at incredible speed these days, and trends that were hot in 2016 may already seem archaic today. Users have more control and power, and companies are shifting their services according to user needs, which may be unpredictable. In this article, we will cover the biggest and most promising trends in web development.

Artificial intelligence

AI is shaking up the modern IT world, and companies are competing against each other to hire and retain the industry’s best professionals. Driven by companies like Facebook and Google, artificial intelligence is being applied in more and more apps these days, allowing devices to think and act more like humans. A basic AI example is face recognition, which is widely used in Facebook photo tagging.

Read more at HackerNoon

All Your Streaming Data Are Belong to Kafka

Apache Kafka is on a roll. Last year it registered a 260 percent jump in developer popularity, as Redmonk’s Fintan Ryan highlights, a number that has only ballooned since then as IoT and other enterprise demands for real-time, streaming data become common. Hatched at LinkedIn, Kafka’s founding engineering team spun out to form Confluent, which has been a primary developer of the Apache project ever since.

But not the only one. Indeed, given the rising importance of Kafka, more companies than ever are committing code, including Eventador, started by Kenny Gorman and Erik Beebe, both co-founders of ObjectRocket (acquired by Rackspace). Whereas ObjectRocket provides the MongoDB database as a service, Eventador offers a fully managed Kafka service, further lowering the barriers to streaming data.

Read more at InfoWorld

How Does the Kubernetes Scheduler Work?

Hello! We talked about Kubernetes’ overall architecture a while back.

This week I learned a few more things about how the Kubernetes scheduler works so I wanted to share! This kind of gets into the weeds of how the scheduler works exactly.

It’s also an illustration of how to go from “how is this system even designed I don’t know anything about it?” to “okay I think I understand the basic design decisions here and why they were made” without actually.. asking anyone (because I don’t know any kubernetes contributors really, certainly not well enough to be like PLEASE EXPLAIN THE SCHEDULER TO ME THANKS).

This is a little stream of consciousness but hopefully it will be useful to someone anyway. The best most useful link I found while researching this was this Writing Controllers document from the amazing amazing amazing kubernetes developer documentation folder.

Read more at Julia Evans

Internet History Timeline: ARPANET to the World Wide Web

Credit for the initial concept that developed into the World Wide Web is typically given to Leonard Kleinrock. In 1961, he wrote about ARPANET, the predecessor of the Internet, in a paper entitled “Information Flow in Large Communication Nets.” Kleinrock, along with other innovators such as J.C.R. Licklider, the first director of the Information Processing Technology Office (IPTO), provided the backbone for the ubiquitous stream of emails, media, Facebook postings and tweets that are now shared online every day. Here, then, is a brief history of the Internet:

The precursor to the Internet was jumpstarted in the early days of computing history, in 1969 with the U.S. Defense Department’s Advanced Research Projects Agency Network (ARPANET). ARPA-funded researchers developed many of the protocols used for Internet communication today. This timeline offers a brief history of the Internet’s evolution:

1965: Two computers at MIT Lincoln Lab communicate with one another using packet-switching technology.

Read more at LiveScience

Future Proof Your SysAdmin Career: An Introduction to Essential Skills

As the technology industry evolves, today’s system administrators need command of an ever-expanding array of technical skills. However, many experts agree that skills like effective communication and collaboration are just as important. With that in mind, in this series we are highlighting essential skills for sysadmins to stay competitive in the job market. Over the next several weeks, we will delve into important technical requirements as well as non-technical skills that hiring managers see as crucial.

Linux.com has published several lists highlighting important skills for sysadmins. These lists correctly balance generalized skills like problem solving and collaboration with technical skills such as experience with security tools and network administration.

Today, sysadmins also need command of configuration management tools such as Puppet, cloud computing platforms such as OpenStack, and, in some cases, emerging data center administration platforms such as Mesosphere’s Data Center Operating System. Facility with open source tools is also a key differentiator for many sysadmins.

As Dice data scientist Yuri Bykov has noted, “Like many other tech positions, the role of the system administrator has evolved significantly over time due, in large part, to the shift from on-premise data centers to more cloud-based infrastructure and open source technologies. While some of the core responsibilities of a system administrator have not changed, the expectations and needs from employers have.”

Promising outlook

Additionally, “as businesses have begun relying more upon open source solutions to support their business needs, the sysadmin role has evolved, with employers looking for individuals with cloud computing and networking experience and a strong working knowledge of configuration management tools. … The future job outlook for system administrators looks promising, with current BLS research indicating employment for these professionals is expected to grow 8 percent from 2014 to 2024,” Bykov said.

Experience with emerging cloud infrastructure tools and open source technologies can also make a substantial compensation difference for sysadmins. According to a salary study from Puppet, “Sysadmins aren’t making as much as their peers. The most common salary range for sysadmins in the United States is $75,000-$100,000, while the four other most common practitioner titles (systems developer/engineer, DevOps engineer, software developer/engineer, and architect) are most likely to earn $100,000-$125,000.”

Sysadmins who have experience with OpenStack and Linux can also fare better in the hiring and salary pool. Fifty-one percent of surveyed hiring managers said that knowledge of cloud platforms has a big impact on open source hiring decisions, according to the 2016 Linux Foundation/Dice Open Source Jobs Report. There is also healthy hiring demand for sysadmins, with 48 percent of respondents in the same study reporting that they are actively looking for sysadmins.

The fact that fluency with Linux can make a big difference for sysadmins should come as no surprise. After all, Linux is the foundation for many servers and cloud deployments, as well as mobile devices. Several salary studies have shown that Linux-savvy sysadmins are better compensated than others.

More to come

In this series, we will look at the essential skills sysadmins need to stay relevant and competitive in the job market, well into the future, which include:

  • Networking essentials

  • Cloud infrastructure

  • Security and authentication

  • Configuration and automation

  • DevOps

  • Professional certification

  • Communication and collaboration

  • Open source participation

As we explore these topics, we’ll keep three guiding principles in mind:

  • Successful sysadmins are actively moving up the technology stack with their skillsets and embracing open source as rapidly as organizations are doing so.

  • Training for sysadmins is more readily available than ever — ranging from instructor-led courses to online, on-demand courses that allow the student to set the pace.

  • Sysadmins have an increasingly crucial role in keeping organizations performing at their best.

Learn more about essential sysadmin skills: Download the Future Proof Your SysAdmin Career ebook now.

 

Read more:

Future Proof Your SysAdmin Career: An Introduction to Essential Skills 

Future Proof Your SysAdmin Career: New Networking Essentials

Future Proof Your SysAdmin Career: Locking Down Security

Future Proof Your SysAdmin Career: Looking to the Cloud

Future Proof Your SysAdmin Career: Configuration and Automation

Future Proof Your SysAdmin Career: Embracing DevOps

Future Proof Your SysAdmin Career: Getting Certified

Future Proof Your SysAdmin Career: Communication and Collaboration

Future Proof Your SysAdmin Career: Advancing with Open Source

This Week in Open Source: Microsoft’s Open Source Lovefest, Adobe Flash Looks for Open Source Lifeboat & More

This week in Linux and open source, Microsoft’s new CNCF membership represents the company’s ongoing love for open source, Adobe Flash is the subject of an enthusiast rescue mission, and much more.

1) Microsoft continues its Linux lovefest with new CNCF membership.

Microsoft Further Pledges Linux Loyalty by Joining Cloud Native Computing Foundation – Beta News

2) While Adobe is “mercy killing” Flash, enthusiasts are hoping for an open source lifeboat.

Adobe Flash Fans Want a Chance to Fix Its One Million Bugs Under an Open Source License – Gizmodo

3) A project intended to develop open source technology and standards for “computational contracting” in the legal world, using blockchain technology, is getting ready for liftoff.

Accord Project’s Consortium Launching First Legal ‘Smart Contracts’ With Hyperledger – Forbes

4) Version 60 of Google Chrome has been released for Linux and features security fixes, developer-related changes, and more.

Google Chrome 60 Released for Linux, Mac, and Windows – Bleeping Computer

5) SambaCry doesn’t just favor Linux…

Creators Of SambaCry Linux Malware Also Have A Windows Backdoor Program – Forbes

Aiming to Be a Zero: The Ultimate Open Source Philosophy

Guy Martin, Director of the Open@ADSK initiative at Autodesk, had two dreams growing up: to be either an astronaut or a firefighter. Martin has realized his second dream through his work as a volunteer firefighter with Cal Fire, but his love for space is what led to “Aiming to Be an Open Source Zero,” the talk he will be delivering at Open Source Summit NA.

Martin has more than two decades of experience in the software industry, helping companies understand, contribute to, and better leverage open source software. He has held senior open source roles with Samsung Research, Red Hat, and Sun Microsystems, among others, and is a frequent speaker at conferences.

During his stint at Samsung, on a long flight to South Korea, Martin read An Astronaut’s Guide to Life on Earth by Chris Hadfield to pass the time. In the book, Hadfield talks about his philosophy for getting along and working with others. Simply put, in aiming to be a zero, Hadfield built credibility with others and was eventually able to show them that he was a +1. He recounts stories of fellow astronauts who never flew in space because they kept trying to show that they were +1s, but in reality their attitudes made them -1s.

“This made me realize that large companies who are getting into open source for the first time often think that they can ‘buy’ influence, or that their reputation in the industry means that open source projects/communities should listen to them. Now, we know that’s not the case, but until I read Hadfield’s book, I never knew how to effectively explain that to people,” said Martin.

Here, Martin explains more about this philosophy and how it applies to open source.

Linux.com: Can you explain the title of your talk? What does “being a zero” mean?

Martin: Aiming to be a zero means that you aren’t coming into a new situation (or open source community) intent on proving your value at the expense of understanding the dynamics of the people involved. Trying to be a +1 without sufficient understanding of what was done before you arrived can make you appear arrogant and out of touch, or worse, can make you an active detractor (-1) to that community.

Aiming to be a zero gives you the right balance between trying to do too much and doing too little. Once you have proven your value to the community, your ability to showcase +1 talents becomes easier.

Linux.com: What was the inspiration behind this philosophy?

Martin: I can’t take credit for that; Col. Chris Hadfield (the first Canadian astronaut to command an International Space Station mission) speaks about it in his amazing book An Astronaut’s Guide to Life on Earth. I read this book on an international flight, and it literally changed my perspective on working with communities and helping individuals and companies understand how to get the most out of (and contribute to) open source projects.

Linux.com: You have two passions: firefighting and space. How does aiming to be a zero fit in the firefighting scenario?

Martin: Despite the fact that fire departments are paramilitary organizations in nature, with clear chains of command and hierarchical organization, the bedrock of firefighting is community/family. We support each other in incredibly difficult times and celebrate in joyous times.

To do that, and to build up the trust needed to rely on each other in all situations, you have to start out as a zero: offer to do the dirty work, learn from others, and most importantly listen and understand the dynamics of the team. The fire ground, just like space, can be an unforgiving place. Thankfully, people are unlikely to die in open source communities, but the lessons learned from space travel and firefighting translate well when you are considering how to bring a diverse group of people together to solve big challenges.

Linux.com: What problems do you see in the open source world where you think being zero is the right approach?

Martin: Despite the prevalence of open source in all aspects of our lives, and in devices of all sizes and shapes, there are still companies and individuals who see open source projects and communities as something strictly to consume from, without necessarily giving back to.

Now, they aren’t obligated in most cases to give back, but, inevitably, someone finds a bug, or needs a feature, and all too often, the approach is to come in with requirements or assert their +1 status (usually related to their company’s size or market value) and expect the community to just kowtow to their demands. I’ve seen it throughout my career, and while I always understood that wasn’t a good approach, it wasn’t until I read Hadfield’s book that I truly understood how to talk about this and relate it to people and companies in a way that was likely to get results.

Linux.com: Can you give an example of how aiming for +1 damages companies and the community?

Martin: I won’t give specific company names (for obvious reasons :)), but I can say that I’ve witnessed engineers from large multinational companies being asked by their superiors to “just get this feature into the open source project” or to “land x number of patches in this community so that we can get influence.”

Although there is nothing wrong with landing patches to help gain strategic influence in a project, if the goal is to push in a ton of mediocre patches in hopes that the company’s name will sway the community to go in a particular direction, then that is a clear example of attempting to be a +1 before you’ve gained the trust of the community by being a zero and contributing in a way that benefits both the company and the community.

Check out the full schedule for Open Source Summit here and save $150 on registration through July 30. Linux.com readers save an additional $47 with discount code LINUXRD5. Register now!

The 4 Quadrants of Open Source Entrepreneurship

The Key to a Flourishing Career in the 21st Century

[Diagram of the four quadrants of open source entrepreneurship: Automation, Collaboration, Community, and Governance]

Some time ago, I noticed something missing in our discussions about open source software development. A few somethings, in fact. Nobody was talking about product management as it pertains to open source development. Admittedly, this was spurred by a question from a product management team member who was confronted for the first time by the reality of working with an engineering team that runs an open source project. Her question was simply, “So… what should we be doing?” It was born of a fear that product management had no role in this new regime, rendering her unnecessary. I had to think for a moment because I, experienced open source project hand that I was, wasn’t quite sure. For quite some time, my standard response had been for product management and other “corporate types” to stay the hell away from my open source project. But that didn’t feel right. In fact, it felt downright anachronistic and counterproductive.

Over the next few weeks, I thought about that question and gradually realized that there was no defined role for product management in most open source projects. As I looked further, I found that there was startlingly little in the way of best practices for creating products from open source software. Red Hat had made a company by creating efficient processes designed to do just that, but most industry observers assumed (wrongly) that they were the only ones who could do it. Lots of companies, even large proprietary ones, had started to use open source software in their products and services, but there was very little in the way of sharing that came from them. Even so, many of them did a poor job of participating in the upstream communities that created the software they used. Shouldn’t these companies get the full benefit of open source participation? I also came across a few startups who wanted to participate in open source communities but were struggling with how to find the best approach for open source participation while creating great products that would fund their business. Most of them felt that these were separate processes with different aims, but I thought they were really part of the same thing. As I continued down this fact-finding path, I felt strongly that there needed to be more resources to help businesses get the most out of their open source forays.

This was the seed for creating the Open Source Entrepreneur Network, my personal passion for the past year. Yes, there has been a smattering of articles about business models and some words of advice for startups seeking funding, but there’s been no comprehensive resource for businesses that want to prioritize and optimize for open source participation. There’s also a false sense of security that comes from adopting modern tooling. While I’m glad that devops practitioners argue forcefully for better automation and better internal collaboration, that focus misses the larger point about external collaboration with upstream communities and how to optimize your engineering for it. Articles about licensing compliance are much needed, but they are only one small part of the larger picture of building a business.

As I’ve spoken with many folks over the last few months, I would break down open source business, or entrepreneurship, into 4 basic components, which I’ll describe below. If you look at the diagram above, you already know their names: Automation, Collaboration, Community and Governance. You’ll find much that overlaps with methodologies and practices from InnerSource, devops, and community management, but I think that an open source entrepreneur needs to at least understand all of them to create a successful open source business. And I don’t mean only for startups – this applies equally well to those who lead teams in large companies. Either way, the principles are the same.

Automation

This part focuses on tooling and is probably the best covered in the literature of the four components. Even so, startlingly few enterprises have gone far in adopting it wholesale, for a variety of reasons, ranging from team members’ fears of becoming redundant, to middle management fears of same, to a perceived large one-time cost of changing out tools and procedures.

Collaboration

If you’re a devops or innersource practitioner, this will be your gospel. This is all about breaking down silos and laying the groundwork for teams to work together. I’m always astounded by how little teams work together in company settings, even small ones. So much would change if companies would simply adopt community management principles.

Community

One might think that this is the same as the above, but I’m thinking more in terms of external collaboration. To be sure, there are many differences between them, but companies that are bad at one of them tend to be awful at the other. The corollary is also true: companies good at one tend to be good at the other as well. There’s also the matter of how to structure engineering and product management teams to reduce technical debt and learn how to optimize for more upstream development.

Governance

This is all about licensing, supply chain management, regulatory compliance, and how to get your legal team to think like an open source entrepreneur. It’s not easy. In many companies, a lack of understanding among business affairs, legal, and software asset management teams creates significant obstacles to open source collaboration.

So there you have it – open source entrepreneurship in a nutshell. A successful product owner, engineering manager, CIO, CTO, startup founder or investor will need to understand all of the above and demonstrate mastery in at least 1 or 2 areas. This is the subject matter for both my Linux Foundation Webinar on August 1 and the Open Source Entrepreneur Network Symposium, co-located with the Open Source Summit on September 14. The webinar will be an hour-long introduction to the concept. The symposium will feature talks from myself on open source product management that reduces technical debt, Stephen Walli on creating a business through better engineering process management, Shane Coghlan from the OpenChain project on building a compliance and software asset management model, and VM Brasseur on FOSS as an emerging market that companies need to master.