
Doing Good Data Science

There has been a lot of healthy discussion about data ethics lately. We want to be clear: that discussion is good, and necessary. But it’s also not the biggest problem we face. We already have good standards for data ethics. The ACM’s code of ethics, which dates back to 1993, is clear, concise, and surprisingly forward-thinking; 25 years later, it’s a great start for anyone thinking about ethics. The American Statistical Association has a good set of ethical guidelines for working with data. So, we’re not working in a vacuum.

And, while there are always exceptions, we believe that most people want to be fair. Data scientists and software developers don’t want to harm the people using their products. There are exceptions, of course; we call them criminals and con artists. Defining “fairness” is difficult, and perhaps impossible, given the many crosscutting layers of “fairness” that we might be concerned with. But we don’t have to solve that problem in advance, and it’s not going to be solved in a simple statement of ethical principles, anyway.

The problem we face is different: how do we put ethical principles into practice?

Read more at O’Reilly

Machine Learning: A Micro Primer with a Lawyer’s Perspective

What Is Machine Learning?

I am partial towards this definition by Nvidia:

“Machine Learning at its most basic is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world.”

The first step to understanding machine learning is understanding what kinds of problems it intends to solve, based on the foregoing definition. It is principally concerned with mapping data to mathematical models — allowing us to make inferences (predictions) about measurable phenomena in the world. From the machine learning model’s predictions, we can then make rational, informed decisions with increased empirical certainty.

Take, for example, the adaptive brightness on your phone screen. Modern phones have light sensors that allow the phone to constantly detect the intensity of ambient light and then adjust the brightness of the screen to make it more pleasant for viewing. But, depending on an individual’s taste, they might not like the gradient preselected by the software and may have to constantly fiddle with the brightness by hand. In the end, they turn off adaptive brightness altogether!

What if, instead, the phone employed machine learning software that registered the intensity of ambient light, along with the brightness the user selected by hand, as one example of their preference? Over time, the phone could build a model of an individual’s preferences and then predict how to set the screen brightness across a full continuum of ambient light conditions. (This is a real feature in the next version of Android.)
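To make this concrete, here is a minimal sketch in Python (with NumPy) of how such a preference model could be fit; the lux readings, brightness values, and the logarithmic scaling are all invented for illustration:

import numpy as np

# Each manual adjustment is one training example: ambient light
# in lux (the input) and the brightness the user chose (the output).
ambient_lux = np.array([5, 40, 120, 400, 1000, 5000])
chosen_brightness = np.array([0.10, 0.25, 0.40, 0.60, 0.80, 1.00])

# Fit brightness as a linear function of log-ambient-light
# (perceived brightness tends to scale logarithmically).
slope, intercept = np.polyfit(np.log10(ambient_lux), chosen_brightness, deg=1)

# Predict a brightness setting for a new lighting condition.
predicted = slope * np.log10(250) + intercept
print(f"Suggested brightness at 250 lux: {predicted:.2f}")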

As you can imagine, machine learning could be deployed for all kinds of data relationships susceptible to modeling, which would allow programmers and inventors to increasingly automate decision-making. To be sure, a perfect understanding of the methodology of machine learning requires a fairly deep appreciation for statistical sampling methods, data modeling, linear algebra, and other specialized disciplines.

I will instead try to offer a brief overview of the important terms and concepts in machine learning, and offer an operative (but fictional) example from my own time as a discovery lawyer. In closing, I will discuss a few of the common problems associated with machine learning.

Nuts & Bolts: Key Terms

Feature. A feature represents a variable that you change in order to observe and measure the outcome. Features are the inputs to a machine learning scheme; in other words, a feature is like the independent variable in a science experiment, or the x variable on a line graph.

If you were designing a machine learning model to classify emails as “important” (e.g., as Gmail does with labels), the model could take more than one feature as input: whether the sender is in your address book (yes or no); whether the email contains a particular phrase, like “urgent”; whether the receiver has previously marked emails from the sender as important; among other measurable features relating to “importance.”

Features need to be measurable, in that they can be mapped to numeric values for the data model.

Label. The label refers to the variable that is observed or measured in response to the various features in a model. For example, in a model meant to predict college applicant acceptance or rejection based on two features — SAT score and GPA — the label would indicate yes (1) or no (0) for each example fed into the model. A label is like the dependent variable in a science experiment, or the Y variable on a line graph.

Example. An example is one data entry (like a row on an Excel spreadsheet) that includes all features and their associated values. An example can be labeled (it includes the label, i.e., the Y variable, with its value) or unlabeled (the value of the Y variable is unknown).
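To tie these terms together, here is a small illustrative sketch in Python, using the email-importance example above; the feature names and values are invented:

# One labeled example for the email-importance model. Each feature
# has been mapped to a numeric value, as required.
labeled_example = {
    "sender_in_address_book": 1,        # yes/no encoded as 1/0
    "contains_word_urgent": 0,          # yes/no encoded as 1/0
    "prior_marked_important_count": 3,  # a simple count
    "label_important": 1,               # the label (Y): 1 = important
}

# An unlabeled example has the same features, but the label is
# unknown -- predicting it is the trained model's job.
unlabeled_example = {
    "sender_in_address_book": 0,
    "contains_word_urgent": 1,
    "prior_marked_important_count": 0,
}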

Training. Broadly, training is the act of feeding a mathematical model with labeled examples, so that the model can infer and make predictions for unlabeled examples.

Loss. Loss is the difference between the model’s prediction and the label on a single example. Statistical models aim to reduce loss as much as possible. For example, if you fit a line through a cloud of data points to show linear growth on the Y-axis as X varies, the model wants a line that passes through the points such that the sum of every loss is minimized. Humans can do this intuitively, but computers can be automated to try different slopes until they arrive at the best mathematical answer.
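As a sketch of that “try different slopes” idea, the toy Python below brute-forces the slope of a line through invented data points, scoring each candidate by the sum of squared losses (squaring is one common way to aggregate loss):

# Invented data points that lie roughly along a line through the origin.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

def total_loss(slope):
    # Sum of squared differences between prediction and label.
    return sum((slope * x - y) ** 2 for x, y in zip(xs, ys))

# Try many candidate slopes and keep the one with the smallest loss.
candidates = [s / 100 for s in range(0, 501)]  # 0.00, 0.01, ... 5.00
best = min(candidates, key=total_loss)
print(f"Best slope: {best:.2f}, total loss: {total_loss(best):.3f}")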

Generalization/Overfitting. Overfitting is an outcome in which a model does not accurately predict testing data (i.e., doesn’t “generalize” well) because the model fits the training data too precisely. These problems can arise from poor sampling.

Types of Models

Regression. A regression model is a model that tries to predict a value along some continuum. For example, a model might try to predict the number of people that will move to California; or the probability that a person will get a dog; or the resale price of a used bicycle on Craigslist.

Classification. A classification model is a model that predicts discrete outcomes — in a sense, it sorts inputs into various buckets. For example, a model might look at an image and determine if it is a donut, or not a donut.

Model Design: Linear, or…?

Conceptually, the simplest models are those in which the label can be predicted with a line (i.e., linear models). As you can imagine, some distributions cannot naturally be mapped along a straight line, and you therefore need other mathematical tools to fit a model to the data. One simple machine learning tool for nonlinear classification problems is the feature cross, which is merely a new feature formed by multiplying two or more existing features together.
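To see why a cross helps, consider this small Python sketch: the four invented points below follow an XOR-like pattern that no single straight line can separate, but the crossed feature x1 * x2 makes the classes separable with a simple threshold:

# Class 1 when x1 and x2 share a sign; class 0 otherwise.
points = [(1, 1), (-1, -1), (1, -1), (-1, 1)]
labels = [1, 1, 0, 0]

for (x1, x2), y in zip(points, labels):
    x3 = x1 * x2                     # the feature cross
    prediction = 1 if x3 > 0 else 0  # now a simple linear rule works
    print(f"({x1:2d}, {x2:2d})  cross={x3:2d}  predicted={prediction}  actual={y}")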

On the more complex side, models can rely on “neural networks,” so called because they mirror the architecture of neurons in human cognition, to model complex linear and non-linear relationships. These networks consist of stacked layers of nodes, each representing a weighted sum (with some bias) of various input features, potentially with non-linear layers added in; after the series of transformations is complete, the network yields an output.
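For intuition only, here is a toy forward pass through one hidden layer in plain Python; the weights, biases, and inputs are invented, and a real network would learn them from training data:

def relu(v):
    # A common non-linear activation: pass positives, zero out negatives.
    return max(0.0, v)

def layer(inputs, weights, biases):
    # Each node: weighted sum of the inputs, plus a bias, then the activation.
    return [relu(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

features = [0.5, -1.2]  # two input features
hidden = layer(features, [[0.8, -0.4], [0.3, 0.9]], [0.1, 0.0])
output = sum(w * h for w, h in zip([1.5, -0.7], hidden)) + 0.2
print("network output:", output)

Now, on to a (simpler) real-life example.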

Real Life Example: An Attorney’s Perspective

Just over 20 years ago, the majority of business records (documents) were kept on paper, and it was the responsibility of junior attorneys to sift through literal reams of paper (most of it irrelevant) to find the proverbial “smoking gun” evidence in all manner of cases. In 2018, nearly all business records are electronic. The stage of litigation in which documents are produced, exchanged, and examined is called “discovery.”

As most business records are now electronic, the process of discovery is now aided and facilitated by computers. This is not a fringe issue for businesses. Every industry is subject to one or more federal or state strictures imposing document retention requirements — prohibitions on the destruction of business records — in order to check and enforce compliance with whatever regulatory schema applies (tax, environmental, public health, occupational safety, etc.).

Document retention also becomes extremely important in litigation — when a dispute or lawsuit arises between two parties — because there are evidentiary and procedural rules that aim to preserve all documents (information) that are relevant and responsive to the issues in the litigation. Critically, a party that fails to preserve business records in accordance with the court’s rules is subject to stiff penalties, including fines, adverse instructions to a jury, or the forfeiture of some or all claims.

A Common but High-Stakes Regulatory Problem Solved by Machine Learning

So, imagine the government is investigating the merger between two broadband companies, and the government suspects that the two competitors engaged in illegal coordination to raise the price of broadband service and simultaneously lower quality. Before they approve the merger, the government wants to be certain that the two parties did not engage in unfair and deceptive business practices.

So, the government commands the two parties to produce for inspection, electronically, every business record (emails, internal documents, transcripts of meetings…) that includes communications directly between the two parties, discusses the merger, and/or relates to the pricing of broadband services.

As you might imagine, the total corpus of ALL documents controlled by the two companies borders on the hundreds of millions. That is a lot of paper to sift through for the junior attorneys, and given the government’s very specific criteria, it will take a long time for a human to read each document, determine its purpose, and decide whether its contents merit inclusion in the discovery set. The companies’ lawyers are also concerned that if they over-include non-responsive documents (i.e., just dump hundreds of millions of documents on the government investigator), they will be deemed not to have complied with the order, and lose out on the merger.

Aha! But our documents can be stored and searched electronically, so maybe the lawyers can just design a bunch of keyword searches to pick out every document that contains the term “pricing,” among other features, before having to review the documents for relevancy. This is a huge improvement, but it is still slow, as the lawyers have to anticipate a lot of keyword searches and still need to read the documents themselves. Enter the machine learning software.

With modern tools, lawyers can load the entire body of documents into one database. First, they will code a representative sample (a number of “examples”) for whether the document should or should not be included in the production of records to the government. These labeled examples form the training material to be fed to the model. After the model has been trained, you can provide unlabeled examples (i.e., emails that haven’t been coded for relevance yet), and the model will predict the probability that a person would have coded the document as relevant, based on all the historical examples it has been fed.

On that basis, you might be able to confidently produce or exclude a million documents while having human-coded only 0.1% of them as a training set. In practice, this might mean automatically producing all documents above some probability threshold and having a human manually review anything the model is unsure about. This can result in huge savings in time and human capital. Anecdotally, I recall someone expressing doubt to me that the FBI could have reviewed all of Hillary Clinton’s emails in a scant week or two. With machine learning and even a small team of people, the task is actually relatively trivial.
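Here is a compressed sketch of that workflow in Python, assuming the scikit-learn library is available; the documents, labels, and thresholds are invented for illustration (a real system would train on thousands of coded examples):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# A handful of human-coded training examples (1 = responsive).
train_docs = [
    "pricing proposal for broadband service tiers",
    "lunch order for the holiday party",
    "merger timeline discussed with the counterparty",
    "facilities memo about parking passes",
]
train_labels = [1, 0, 1, 0]

# Turn raw text into numeric features, then train the classifier.
vectorizer = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectorizer.fit_transform(train_docs), train_labels)

# Score uncoded documents and route each one by confidence.
new_docs = [
    "draft term sheet on broadband pricing",
    "reminder: submit your timesheets",
]
probs = model.predict_proba(vectorizer.transform(new_docs))[:, 1]
for doc, p in zip(new_docs, probs):
    if p > 0.9:
        decision = "produce automatically"
    elif p < 0.1:
        decision = "withhold automatically"
    else:
        decision = "route to human review"
    print(f"{p:.2f}  {decision}: {doc}")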

Upsides, Downsides

Bias. It is important to underscore that machine learning techniques are not infallible. Biased selection of training examples can result in bad predictions. Consider, for example, our discovery scenario above. Let’s say one of our errant document reviewers rushed through his batch of 1,000 documents and just haphazardly picked yes or no, nearly at random. We can no longer expect the prediction model to accurately identify future examples as fitting the government’s express criteria, or not.

The discussion of “bias” in machine learning can also relate to invidious human discrimination. In one famous example, a Twitter chatbot began to make racist and xenophobic tweets, a socially unacceptable outcome even though the statistical model itself cannot be said to have evil intent. Although this short primer is not an appropriate venue for the topic, policymakers should remain wary of drawing conclusions from models whose inputs are not scrutinized and understood.

Job Displacement. In the case of our junior attorneys sequestered in the basement sifting through physical paper, machine learning has enabled them to shift to more intellectually redeeming tasks (like brewing coffee and filling out time sheets). But, on the flip side, you no longer need so many junior lawyers, since their previous scope of work can now largely be automated. And, in fact, the entire industry of contract document review attorneys is seeing incredible consolidation and shrinkage. Looking toward the future, our leaders will have to contemplate how to either protect or redistribute human labor in light of such disruption.

Links/Resources/Sources

Framing: Key ML Terminology | Machine Learning Crash Course | Google Developers — developers.google.com

What is Machine Learning? – An Informed Definition — www.techemergence.com

The Risk of Machine-Learning Bias (and How to Prevent It) — sloanreview.mit.edu

How to See What’s Going on With Your Linux System Right Now

Is that service still running? What application is using that TCP port? These questions and more can be answered easily by sysadmins who know simple Linux commands.

If you’re a system administrator responsible for Linux servers, it’s essential that you master the Bash shell, especially the commands that show what software is running. That’s necessary for troubleshooting, obviously, but it is always a good idea to know what’s happening on a server for other reasons—not the least of which is security awareness.

Previously, I summarized 16 Linux server monitoring commands you really need to know, what to do when a Linux server keels over, and Linux commands you should never use. Here are some more commands to further your system administration skills—and help you identify the current status of your Linux server. I consider these commands to be fundamentals, whether your servers run on bare iron, a virtual machine, or a container with a lifespan of only a few hours.

Read more at HPE

What Is Distributed Ledger Technology?


While the concept of distributed ledgers has been around for a long while, it wasn’t until recent years that it became a reality. The most popular implementation of distributed ledgers is blockchain, the technology that supports cryptocurrencies such as bitcoin and ethereum.

Blockchain uses computing power and cryptography to maintain its distributed ledger. Every node that participates in the network stores a copy of the blockchain. Blockchain records are grouped into blocks and are sequentially linked together through cryptographic hashes. Hashes are mathematical representations tied to the data structure of each block. The slightest change in any transaction changes the hash of its block and of all blocks that come after it, making the chain invalid. This mechanism further protects the blockchain against tampering and makes it easier for the nodes to validate new transactions.
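Here is a minimal sketch of that hash-linking idea in Python; the block structure and transactions are invented, and real blockchains add much more (timestamps, Merkle trees, proof of work):

import hashlib

def block_hash(prev_hash, transactions):
    # A block's hash covers its own contents AND the previous block's
    # hash -- that inclusion is what chains the blocks together.
    data = prev_hash + "|".join(transactions)
    return hashlib.sha256(data.encode()).hexdigest()

# Build a tiny three-block chain.
h0 = block_hash("genesis", ["alice->bob:5"])
h1 = block_hash(h0, ["bob->carol:2"])
h2 = block_hash(h1, ["carol->dave:1"])

# Tamper with the first block: its hash changes, so the hash stored
# in every later block no longer matches, and nodes reject the chain.
h0_bad = block_hash("genesis", ["alice->bob:500"])
print(h0 == h0_bad)  # False -- the tampering is immediately visible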

Blockchains have a consensus mechanism that enables the participants in the network to agree on transactions that can be added to the ledger. Every few minutes, a network of computers called “miners” runs mathematical operations to create new blocks from recently submitted transactions. Once a new block is confirmed, all the nodes append it to their copy of the blockchain (learn more at Blockchain explained and How does bitcoin mining work?).

Read more at Amity

Optimized Clear Linux Kernel Now Available for Fedora 28 and Fedora Rawhide

A recent devel list discussion for the popular Linux distro Fedora mentioned Clear Linux optimizations, which may be relevant to Fedora developers in the future. It was mentioned that Intel’s Clear Linux shows noticeable performance gains over Xubuntu, based on graphs by Phoronix.

A Fedora user then decided to release a Clear Linux optimized Fedora kernel for Fedora 28 and Fedora Rawhide, with the following notes:

Intels clear linux kernel packaged for fedora. The aim of this kernel is to mimic similar performance to intels clear linux os on Intel based machines running fedora. Kernel only supports accelerated performance on Intel Cpu’s, similar performance on Amd based machines is not guarenteed.

Read more at Appuals

Learn more about Clear Linux in Jack Wallen’s review.

Linux: The New Frontier of Enterprise in the Cloud

Everyone from developer teams, to operations, to managers of virtual machines needs to know Linux, said Red Hat’s Brian Gracely.

TechRepublic’s Dan Patterson spoke with Brian Gracely, director of cloud strategy at Red Hat OpenShift, about Linux and the cloud.

Gracely: …the really big trend lately with Linux has been Linux containers and technologies like [Google] Kubernetes. So, we’re seeing enterprises want to build new applications. We’re seeing the infrastructure be more software defined. Linux ends up becoming the foundation for a lot of the things going on in enterprise IT these days.

Patterson: And how can employers find the employees that have the right skill sets, and what are the best skill sets for deploying Red Hat in the cloud?

Gracely: Yeah. So I think there’s a couple of core skill sets. One, obviously, we’re seeing more and more Linux system admin, system administrators, who are developing those skills, because those skills are applicable in their own data centers. They’re applicable in the public cloud. That’s a great foundation.

Read more at TechRepublic

Greg Kroah-Hartman on Linux, Security, and Making Connections at Open Source Summit

People might not think about the Linux kernel all that much when talking about containers, serverless, and other hot technologies, but none of them would be possible without Linux as a solid base to build on, says Greg Kroah-Hartman.  He should know. Kroah-Hartman maintains the stable branch of the Linux kernel along with several subsystems.  He is also co-author of the Linux Kernel Development Report, a Fellow at The Linux Foundation, and he serves on the program committee for Open Source Summit.

In this article, we talk with Kroah-Hartman about his long involvement with Linux, the importance of community interaction, and the upcoming Open Source Summit.

Greg Kroah-Hartman (right) talks about Linux and Open Source Summit.

The Linux Foundation: New technologies (cloud, containers, machine learning, serverless) are popping up on a weekly basis. What’s the importance of Linux in the changing landscape?

Greg K-H: There’s the old joke, “What’s a cloud made of? Linux servers.” That is truer than most people realize. All of those things you mention rely on Linux as a base technology to build on top of.  So while people might not think about “Linux the kernel” all that much when talking about containers, serverless and the other “buzzwords of the day,” none of them would be possible without Linux being there to ensure that there is a rock-solid base for everyone to build on top of.  

The goal of an operating system is to provide a computing platform to userspace that looks the same no matter what hardware it runs on top of.  Because of this, people can build these other applications and not care if they are running it locally on a Raspberry Pi or in a cloud on a shared giant PowerPC cluster as everywhere the application API is the same.

So, Linux is essential for all of these new technologies to work properly and scale and move to different places as needed.  Without it, getting any of those things working would be a much more difficult task.

LF: You have been involved with Linux for a very long time. Has your role changed within the community? You seem to focus a lot on security these days.

Greg K-H: I originally started out as a driver writer, then helped write the security layer in the kernel many many years ago.  From there I started to maintain the USB subsystem and then co-created the driver model. From there I ended up taking over more driver subsystems and when the idea for the stable kernel releases happened back in 2005, I was one of the developers who volunteered for that.

So for the past 13 years, I’ve been doing pretty much the same thing, not much has changed since then except the increased number of stable trees I maintain at the same time to try to keep devices in the wild more secure.

I’ve been part of the kernel security team, I think since it was started back in the early 2000s, but that role is more of a “find who to point the bug at” type of thing. The kernel security team is there to help take security problem reports and route them to the correct developer who maintains or knows that part of the kernel best. The team has grown over the years as we have added the people that ended up getting called on the most, to reduce the latency between reporting a bug and getting it fixed.

LF: We agree that Linux is being created by people all over the map, but once in a while it’s great to meet people in person. So, what role does Open Source Summit play in bringing these people together?

Greg K-H: Because open source projects are all developed by people who work for different companies and who live in different places, it’s important to get together whenever possible to actually meet the people behind the email. Development is an interaction that depends on trust: if I accept patches from you, then I am now responsible for those changes as well. If you disappear, I am on the hook for them, so either I need to ensure they are correct or, even better, I can know that you will be around to fix the code if there is a problem. By meeting people directly, you can establish a face behind the email to help smooth over any potential disagreements that can easily happen due to the lack of “tone” in online communication.

It’s also great to meet developers of other projects to hear of ways they are abusing your project to get it to bend to their will, or to learn of problems they are having that you did not know about. Or just to learn about new things being developed in totally different development groups. The huge range of talks at a conference like this makes it easy to pick up on what is happening across many different developer communities.

LF: You obviously meet a lot of people during the event. Have you ever come across an incident where someone ended up becoming a contributor or maintainer because of the exposure such an event provided?

Greg K-H: At one of the OSS conferences last year, I met a college student who was attending the conference for the first time. They mentioned that they were looking for any project ideas that someone with their skill level could help out with. At a talk later that day, a new idea for how to unify a specific subsystem of the kernel came up, along with how it was going to “just take a bunch of grunt work” to accomplish. Later that night, at the evening event, I saw the student again, mentioned the project to them, and pointed them at the developer who had asked for the help. They went off to talk in the corner about the specifics of what would need to be done.

A few weeks later, a lot of patches started coming from the student and, after a few rounds of review, were accepted by the maintainer. More patches followed, and eventually the majority of the work was done. It was great to see; the kernel really benefited from their contribution.

This year, I ran into the student again at another OSS conference and asked them what they were doing now. Turns out they had gotten a job offer and were working for a Linux kernel company, doing development on new products during their summer break. Without that first interaction, meeting directly the developers who worked on the subsystem that needed the help, getting a job like that would have been much more difficult.

So, while I’m not saying that everyone who attends one of these types of conferences will instantly get a job, you will interact with developers who know what needs to be done in different areas of their open source projects.  And from there it is almost an easy jump to getting solid employment with one of the hundreds of companies that rely on these projects for their business.

LF: Are you also giving any talks at Open Source Summit?

Greg K-H:  I’m giving a talk about the Spectre and Meltdown problems that have happened this year.  It is a very high-level overview, going into the basics of what they are, and describing when the many different variants were announced and fixed in Linux.  This is a new security type of problem that is going to be with us for a very long time and I give some good tips on how to stay on top of the problem and ensure that your machines are safe.


This article originally appeared at The Linux Foundation.

Linux File Server Guide

Linux file servers play an essential role. The ability to share files is a basic expectation with any modern operating system in the workplace. When using one of the popular Linux distributions, you have a few different file sharing options to choose from. Some of them are simple but not that secure. Others are highly secure, yet require some know-how to set up initially.

Once set up on a dedicated machine, these file sharing technologies turn it into a dedicated file server. This article will address these technologies and provide some guidance on choosing one option over another.

Samba Linux File Server

Samba is essentially a collection of tools to access networked SMB (Server Message Block) shares. The single biggest advantage to Samba as a file sharing technology is that it’s compatible with all popular operating systems, especially Windows. Set up correctly, Samba works flawlessly between Windows and Linux servers and clients.

An important thing to note about Samba is that it uses the SMB protocol to make file sharing possible. SMB is a protocol native to Windows, whereas Samba merely provides SMB support to Linux. So, when considering a file sharing technology for your needs, keep this in mind.

Read more at Datamation

Viewing Linux Logs from the Command Line

Learn how to easily check Linux logs in this article from our archives.

At some point in your career as a Linux administrator, you are going to have to view log files. After all, they are there for one very important reason…to help you troubleshoot an issue. In fact, every seasoned administrator will immediately tell you that the first thing to be done, when a problem arises, is to view the logs.

And there are plenty of logs to be found: logs for the system, logs for the kernel, for package managers, for Xorg, for the boot process, for Apache, for MySQL… For nearly anything you can think of, there is a log file.

Most log files can be found in one convenient location: /var/log. These are all system and service logs, those which you will lean on heavily when there is an issue with your operating system or one of the major services. For desktop app-specific issues, log files will be written to different locations (e.g., Thunderbird writes crash reports to ‘~/.thunderbird/Crash Reports’). Where a desktop application will write logs will depend upon the developer and if the app allows for custom log configuration.

We are going to focus on system logs, as that is where the heart of Linux troubleshooting lies. And the key issue here is, how do you view those log files?

Fortunately there are numerous ways in which you can view your system logs, all quite simply executed from the command line.

/var/log

This is such a crucial folder on your Linux systems. Open up a terminal window and issue the command cd /var/log. Now issue the command ls and you will see the logs housed within this directory (Figure 1).

Figure 1: A listing of log files found in /var/log/.

Now, let’s take a peek into one of those logs.

Viewing logs with less

One of the most important logs contained within /var/log is syslog. This particular log file logs everything except auth-related messages. Say you want to view the contents of that particular log file. To do that, you could quickly issue the command less /var/log/syslog. This command will open the syslog log file to the top. You can then use the arrow keys to scroll down one line at a time, the spacebar to scroll down one page at a time, or the mouse wheel to easily scroll through the file.

The one problem with this method is that syslog can grow fairly large; and, considering what you’re looking for will most likely be at or near the bottom, you might not want to spend the time scrolling a line or page at a time to reach the end. With syslog open in the less command, you can also hit the [Shift]+[g] combination to immediately go to the end of the log file. The end will be denoted by (END). You can then scroll up with the arrow keys or the scroll wheel to find exactly what you want.

This, of course, isn’t terribly efficient.

Viewing logs with dmesg

The dmesg command prints the kernel ring buffer. By default, the command will display all messages from the kernel ring buffer. From the terminal window, issue the command dmesg and the entire kernel ring buffer will print out (Figure 2).

Figure 2: A USB external drive displaying an issue that may need to be explored.

Fortunately, there is a built-in control mechanism that allows you to print out only certain facilities (such as daemon).

Say you want to view log entries for the user facility. To do this, issue the command dmesg --facility=user. If anything has been logged to that facility, it will print out.

Unlike the less command, issuing dmesg will display the full contents of the log and send you to the end of the file. You can always use your scroll wheel to browse through the buffer of your terminal window (if applicable), but it’s easier to pipe the output of dmesg to the less command, like so:

dmesg | less

The above command will print out the contents of dmesg and allow you to scroll through the output just as you did viewing a standard log with the less command.

Viewing logs with tail

The tail command is probably one of the single most handy tools you have at your disposal for the viewing of log files. What tail does is output the last part of files. So, if you issue the command tail /var/log/syslog, it will print out only the last few lines of the syslog file.

But wait, the fun doesn’t end there. The tail command has a very important trick up its sleeve, by way of the -f option. When you issue the command tail -f /var/log/syslog, tail will continue watching the log file and print out the next line written to the file. This means you can follow what is written to syslog, as it happens, within your terminal window (Figure 3).
Figure 3: Following /var/log/syslog using the tail command.

Using tail in this manner is invaluable for troubleshooting issues.

To escape the tail command (when following a file), hit the [Ctrl]+[c] combination.

You can also instruct tail to show only a specific number of lines. Say you only want to view the last five lines written to syslog; for that, you could issue the command:

tail -f -n 5 /var/log/syslog

The above command would follow input to syslog and print out only the most recent five lines. As soon as a new line is written to syslog, the oldest would be removed from the top. This is a great way to make the process of following a log file even easier. I strongly recommend not using this to view fewer than four or five lines, as you’ll wind up getting input cut off and won’t get the full details of the entry.

There are other tools

You’ll find plenty of other commands (and even a few decent GUI tools) to enable the viewing of log files. Look to more, grep, head, cat, multitail, and System Log Viewer to aid you in your quest to troubleshoot systems via log files.

Advance your career with Linux system administration skills. Check out the Essentials of System Administration course from The Linux Foundation.

Eliminating the Product Owner Role

“The Product Owner role no longer exists,” I recently announced to an entire department in a large company. A few POs looked a bit shocked and concerned. What would they do instead?

Before I get into who or what would replace the PO role, let me offer a bit of background on this group. Three coaches, including myself, had assessed this group prior to beginning work with them. Our findings were typical:

  • Too much technical debt was slowing development to a crawl
  • There was insufficient clarity on what needed to be built
  • The developers spent little time with their Product Owner
  • The team was scattered around a building, not co-located
  • etc.

When you perform numerous assessments of teams or departments in many industries, you tend to see patterns. The above issues are common. We worked out solutions to these problems eons ago. The challenge is whether people want to embrace change and actually solve their problems. This group apparently was hungry enough to want change….

Chartering is a vital skill I learned from a software industry legend named III. It helps teams and organizations figure out what outcome they’d like to achieve, how they would know they achieved it, and who is necessary to help achieve it.

Read more from Joshua Kerievsky on Medium