
The 7 Elements of an Open Source Management Program: Teams and Tools

The following is adapted from Open Source Compliance in the Enterprise by Ibrahim Haddad, PhD.

A successful open source management program has seven essential elements that provide a structure around all aspects of open source software. In the previous article, we gave an overview of the strategy and process behind open source management. This time we’ll discuss two more essential elements: staffing on the open source compliance team and the tools they use to automate and audit open source code.

Compliance Teams

The open source compliance team is a cross-disciplinary group tasked with ensuring open source compliance. Two teams are actually involved in achieving compliance: the core team and the extended team.

  • The core team, often called the Open Source Review Board (OSRB), consists of representatives from engineering and product teams, one or more legal counsel, and the Compliance Officer.

  • The extended team consists of individuals across multiple departments who contribute to compliance efforts on an ongoing basis: Documentation, Supply Chain, Corporate Development, IT, Localization, and the Open Source Executive Committee (OSEC). Unlike the core team, however, members of the extended team work on compliance only part time, driven by tasks they receive from the OSRB.

Various individuals and teams within an organization help ensure open source compliance.

Tools

Open source compliance teams use several tools to automate and facilitate the auditing of source code and the discovery of open source code and its licenses. Such tools include:

• A compliance project management tool to manage the compliance project and track tasks and resources.

• A software inventory tool to track every software component, its version, the products that use it, and other related information.

• A source code and license identification tool to help identify the origin and license of the source code included in the build system.

• A linkage analysis tool to identify how any given C/C++ software component interacts with the other software components used in the product. This tool lets you discover linkages between source code packages that do not conform to company policy; the goal is to determine whether any open source obligations extend to proprietary or third-party software components. If a linkage issue is found, a bug ticket is assigned to Engineering with a description of the issue and a proposal for resolving it. (A minimal sketch of this kind of policy check follows the list.)

• A source code peer review tool to review the changes introduced to the original source code before it is disclosed as part of meeting license obligations.

• A bill of materials (BOM) difference tool to identify the changes introduced to the BOM of any given product between two builds. This tool is very helpful in guiding incremental compliance efforts.
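None of these tools is named in the text, so purely as an illustration, here is a minimal sketch in Go of the rule a linkage analysis tool might enforce. The component records, license identifiers, and the single policy rule are all hypothetical; a real tool would work from build-system metadata and a full license-compatibility matrix.

```go
package main

import "fmt"

// Component is a hypothetical record a build system might emit for each
// linked dependency; the field names are illustrative, not from any real tool.
type Component struct {
	Name     string
	License  string // SPDX-style identifier, e.g. "GPL-2.0-only"
	LinkType string // "static" or "dynamic"
}

// violatesPolicy encodes one example rule: statically linking a copyleft
// component into a proprietary binary may extend open source obligations
// to that binary, which is exactly what the audit is meant to catch.
func violatesPolicy(binaryLicense string, dep Component) bool {
	copyleft := dep.License == "GPL-2.0-only" || dep.License == "GPL-3.0-only"
	return binaryLicense == "Proprietary" && copyleft && dep.LinkType == "static"
}

func main() {
	deps := []Component{
		{Name: "libfoo", License: "MIT", LinkType: "static"},
		{Name: "libbar", License: "GPL-2.0-only", LinkType: "static"},
	}
	for _, d := range deps {
		if violatesPolicy("Proprietary", d) {
			// In the workflow described above, this finding would become
			// a bug ticket assigned to Engineering.
			fmt.Printf("policy violation: %s (%s, %s linkage)\n",
				d.Name, d.License, d.LinkType)
		}
	}
}
```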

Next time we’ll cover another key element of any open source management program: education. Employees must possess a good understanding of policies governing the use of open source software. Open source compliance training — formal or informal — raises awareness of open source policies and strategies and builds a common understanding within the organization.


Read the previous article in this series:

The 7 Elements of an Open Source Management Program: Strategy and Process

Read the next articles in this series:

How and Why to do Open Source Compliance Training at Your Company

Basic Rules to Streamline Open Source Compliance For Software Development

Keynote: OpenTracing and Containers: Depth, Breadth, and the Future of Tracing – Ben Sigelman

Ben Sigelman shows how OpenTracing can deliver zero-touch, black-box instrumentation of distributed applications via orchestration systems like Kubernetes, and why that could change the way we all reason about distributed computation.


Linux Security Fundamentals Part 3: Risk Assessment / Trade-offs and Business Considerations

Start exploring Linux Security Fundamentals by downloading the free sample chapter today.

Earlier in this series, you learned the types of hackers who might try to compromise your Linux system, where attacks might originate, and the kinds of attacks to expect. The next step is to assess the security risks to your own system and the costs of both securing, and not securing, your assets in order to begin formulating a security plan.

Focusing on likely threats to the highest value assets is a reasonable place to start your risk assessment. A common method for determining likelihood is to create a use case from the point of view of a malicious actor attempting to cause harm to the system.

Next, calculating the value of the assets will help determine how much security should be implemented to protect them. It may not always be cost-effective to protect everything, and many types of attacks can be mitigated with minimal security. Protecting all assets, all of the time, is rarely feasible.

And finally, knowing the potential impact to business operations is also essential in determining the level of security required for any particular asset. If the business is severely impacted due to a compromise, then more resources should be dedicated to maintaining the security of the assets. Another business consideration is the impact of adding additional security to the environment, possibly creating a performance challenge.

Let’s look at each of these areas in turn and some important factors to consider and questions to ask as you’re evaluating the trade-offs.

Likelihood

Evaluating the feasibility of a potential attack is important. Is the threat real or theoretical? You can begin to assess the risk by asking:

• Method: Are the skills, knowledge, tools, etc. available?

• Opportunity: Is there time and access?

• Motive: Is it an intentional act or accidental damage?

Recently, researchers demonstrated that fingerprint scanners on smartphones can be fooled into thinking an authorized user has scanned their fingerprint. The researchers claimed that the attack was rather easy to accomplish. In reality, that particular attack required a number of specific conditions to occur in the proper order to succeed, making it rather unlikely.

Even if the methods are well-known, if the tools are difficult to acquire, only the most resource-wealthy will be able to perpetrate the attack. Access and opportunity are also areas that can be designed into a system, such that attacks can only be accomplished during certain windows. By limiting the opportunity to certain situations, time-based or access-based, security costs can be reduced outside of those situations.

Asset Value

A thorough inventory of business assets will be the basis for the valuation needed when determining what and how much security is required.

Most environments handle this process via an Asset Management System. The roles of each asset will also determine the importance of the asset in the business operations. Components that are not expensive and yet carry large responsibility for operations should be considered highly valuable. Estimating the impact of a service outage, damage to the infrastructure, or compromise will also be necessary in determining the value of the assets.

To determine asset value, you should:

• Identify network/system/service assets

• Determine asset roles and relationships

• Evaluate the impact of asset damage/failure/loss
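The text stops short of a formula, but a common way to combine these factors is a qualitative risk matrix: score each asset's attack likelihood and business impact on a small scale and rank assets by the product of the two. The 1-5 scale and the asset names below are invented for illustration; this is a sketch of the idea, not part of the course material.

```go
package main

import (
	"fmt"
	"sort"
)

// Asset pairs an inventory entry with two judgment calls, each scored
// 1 (low) to 5 (high). The scale is an arbitrary choice for this sketch.
type Asset struct {
	Name       string
	Likelihood int // how feasible is an attack (method, opportunity, motive)?
	Impact     int // how badly would damage/failure/loss hurt the business?
}

// riskScore is the usual qualitative shortcut: likelihood times impact.
func riskScore(a Asset) int { return a.Likelihood * a.Impact }

func main() {
	inventory := []Asset{
		{Name: "public web server", Likelihood: 4, Impact: 3},
		{Name: "customer database", Likelihood: 2, Impact: 5},
		{Name: "build machine", Likelihood: 2, Impact: 2},
	}
	// Rank assets so that security spending starts at the top of the list.
	sort.Slice(inventory, func(i, j int) bool {
		return riskScore(inventory[i]) > riskScore(inventory[j])
	})
	for _, a := range inventory {
		fmt.Printf("%-20s risk=%d\n", a.Name, riskScore(a))
	}
}
```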

In part four we’ll consider the difficulty of estimating the cost of a cyber attack and give you some questions to ask when weighing the cost of protecting your business assets, with the business impact of a potential security compromise.

Stay one step ahead of malicious hackers with The Linux Foundation’s Linux Security Fundamentals course. Download a sample chapter today!

Read the other articles in the series:

Linux Security Threats: The 7 Classes of Attackers

Linux Security Threats: Attack Sources and Types of Attacks

Linux Security Fundamentals: Estimating the Cost of a Cyber Attack

Linux Security Fundamentals Part 5: Introduction to tcpdump and wireshark

Linux Security Fundamentals Part 6: Introduction to nmap

Understand Your Distributed Apps with the OpenTracing Standard

Microservices and services-oriented architecture are here to stay, but this kind of distributed system destroys the traditional type of process monitoring. Nonetheless, companies still need to understand just what’s happening inside the flow of an application. Ben Sigelman, co-founder of LightStep, said in his keynote at CloudNativeCon that adopting a new standard for distributed applications called OpenTracing can tell those stories without building complex instrumentation or fundamentally changing the code of your application.

“If you previously told a story about what happened in your process in that way with this squiggly line going through a single process, that story is gone,” Sigelman said. “In our conversations with numerous companies that have adopted this sort of technology, what they’ve been telling us is that, as they decouple their systems and their transactions I/O, behold they are no longer on any single process, they are literally unable to answer the most basic questions about what’s happening.

“The solution historically for this has been distributed tracing,” Sigelman said. “It still is a solution and it’s wonderful. So the question is, why isn’t it ubiquitous? … That is what OpenTracing is here for. The reason is the instrumentation has been too difficult. It’s required you to instrument not just across processes but across library boundaries in a way that often couples you to poorly engineered libraries that were written in an afternoon or a weekend by someone.”

OpenTracing is a vendor-neutral API standard, not something that one deploys, Sigelman said. Instead, it’s something you program against, something you build into your microservices architecture. The OpenTracing API sits between application-level code (application logic, control-flow packages, or existing instrumentation) and tracing infrastructure such as LightStep, Zipkin, or Jaeger.

Sigelman showed how OpenTracing works through a demo involving a fake donuts-as-a-service website he created (DonutSalon.com, imbued with the glorious motto “Move Fast and Bake Things”), showing how to track where bottlenecks occurred when the audience faithfully ordered free donuts all at once.

“This can be really powerful if you think about a real system, in that any time you have a latency issue, it’s probably due to some kind of throughput concurrency bottleneck,” Sigelman said. “Being able to actually root-cause where these requests came from in the distributed system is actually fairly profound and something that is not possible with logging at the local level.”

In a Kubernetes system, OpenTracing tracks both the breadth of transactions (called “spans”) and the depth (the communication between clients and services, called “references”). Just tracking one or the other is essentially traditional logging, Sigelman said, but capturing both leads to a much better picture of the traffic in a distributed application.
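To make “something you program against” concrete, here is a minimal sketch using the opentracing-go package: one parent span for a request and one child span declaring a ChildOf reference, the breadth-and-depth pairing described above. The span names and tag are invented for this example, and with no tracer registered the global tracer is a no-op; a real service would install a concrete tracer (Jaeger, Zipkin, LightStep) first.

```go
package main

import (
	"fmt"

	opentracing "github.com/opentracing/opentracing-go"
)

func main() {
	// GlobalTracer() returns a no-op tracer until a real one is registered.
	tracer := opentracing.GlobalTracer()

	// One span per unit of work: the "breadth" of the transaction.
	parent := tracer.StartSpan("order-donut")
	defer parent.Finish()

	// A ChildOf reference records which call caused which: the "depth".
	child := tracer.StartSpan("charge-card",
		opentracing.ChildOf(parent.Context()))
	child.SetTag("amount", 0) // the donuts in the demo were free
	child.Finish()

	fmt.Println("spans recorded by whatever tracer is registered")
}
```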

“I think it’s possible to get good quality tracing and avoid the pain and suffering of adding a lot of instrumentation or even really changing an application in meaningful ways if we can add the existing API standardization of OpenTracing to a little bit of magic between applications, client proxies, and then the network that connects containers to each other,” he said.

Watch the complete presentation below:

Want to learn more about Kubernetes? Get unlimited access to the new Kubernetes Fundamentals training course for one year for $199. Sign up now!

What Do You Mean by “Event-Driven”?

Towards the end of last year I attended a workshop with my colleagues in ThoughtWorks to discuss the nature of “event-driven” applications. Over the last few years we’ve been building lots of systems that make a lot of use of events, and they’ve been often praised, and often damned. Our North American office organized a summit, and ThoughtWorks senior developers from all over the world showed up to share ideas.

The biggest outcome of the summit was recognizing that when people talk about “events”, they actually mean some quite different things. So we spent a lot of time trying to tease out what some useful patterns might be. This note is a brief summary of the main ones we identified.

Event Notification

This happens when a system sends event messages to notify other systems of a change in its domain. 
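Fowler’s article goes on to contrast this with other event patterns; as a sketch of just this first one (all type and function names here are invented for illustration), an event notification can be as small as a struct fanned out to registered listeners, with the emitting system staying ignorant of what receivers do with it:

```go
package main

import "fmt"

// AddressChanged is a hypothetical domain event: it announces that something
// happened, carrying just enough data for receivers to identify the change.
type AddressChanged struct {
	CustomerID string
}

// Notifier fans an event out to subscribers; the sender does not know or
// care what the receiving systems do with the notification.
type Notifier struct {
	subscribers []func(AddressChanged)
}

func (n *Notifier) Subscribe(fn func(AddressChanged)) {
	n.subscribers = append(n.subscribers, fn)
}

func (n *Notifier) Publish(e AddressChanged) {
	for _, fn := range n.subscribers {
		fn(e)
	}
}

func main() {
	var n Notifier
	// For example, an insurance-quoting system might re-rate a customer
	// whenever that customer's address changes.
	n.Subscribe(func(e AddressChanged) {
		fmt.Println("requoting customer", e.CustomerID)
	})
	n.Publish(AddressChanged{CustomerID: "cust-42"})
}
```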

Read more at Martin Fowler

Meet OpenStack Big Tent Projects Storlets and Tricircle

OpenStack’s big tent has expanded to include two projects: Storlets and Tricircle.

We’ll give you a quick overview of each and hear from the project team leads (PTLs) about how you can get involved, ahead of the Project Teams Gathering (PTG).

Storlets

What

Storlets enables a user-friendly, cost-effective, scalable, and secure way to execute storage-centric user-defined functions near the data within OpenStack Swift.

Read more at Superuser

Blockchain: The Invisible Technology That’s Changing the World

Blockchain isn’t a household buzzword, like the cloud or the Internet of Things. It’s not an in-your-face innovation you can see and touch as easily as a smartphone or a package from Amazon. But when it comes to our digital lives—every digital transaction; exchange of value, goods and services; or private data—blockchain is the answer to a question we’ve been asking since the dawn of the internet age: How can we collectively trust what happens online?


Every year we run more of our lives—more core functions of our governments, economies, and societies—on the internet. We do our banking online. We shop online. We log into apps and services that make up our digital selves and send information back and forth. Think of blockchain as a historical fabric underneath, recording everything that happens exactly as it occurs.

Read more at PCMag


Optimizing Linux for Slow Computers

I’ve been researching Linux on the Desktop a lot these days, as you may see from my posts about Fedora 25 and Arch Linux (recommended!).

There are some things you should keep in mind when migrating to Linux systems. Even if you have a top-of-the-line Intel Core i7 Kaby Lake, 32GB of RAM, and 2TB M.2 SSDs, you may still benefit from the optimizations I will talk about.

One of the best resources is Arch Linux’s Wiki page on “Improving Performance”. You don’t need everything there, but it’s a comprehensive resource that any enthusiast should read.

Read more at AkitaOnRails

Keymetrics Is a Node.js Monitoring Tool for Your Server Infrastructure

French startup Keymetrics just raised $2 million from Alven Capital and Runa Capital to build the best monitoring tool for your Node.js infrastructure. The startup’s founder and CEO Alexandre Strzelewicz also created the popular open source Node.js process manager PM2.

How do you turn a popular open source project into a successful startup? This question has so many different answers that sometimes it’s hard to find the right one on the first try, and Keymetrics is no exception.

A few years ago, when Strzelewicz developed PM2 while living in Shanghai, he was just trying to create a better process manager for Node.js because existing solutions were lacking. 

Read more at TechCrunch

rtop – An Interactive Tool to Monitor Remote Linux Server Over SSH

rtop is a straightforward and interactive remote system monitoring tool based on SSH that collects and shows important system performance values such as CPU, disk, memory, and network metrics.

It is written in Go and does not require any extra programs on the server you want to monitor, apart from an SSH server and working credentials.

rtop basically works by launching an SSH session and executing certain commands on the remote server.
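That pattern is easy to reproduce. Below is a minimal sketch (not rtop’s actual code) of one probe using the golang.org/x/crypto/ssh package: open a session, run a command, read the output. The host, user, and password are placeholders, and a real tool should verify the host key and prefer key-based auth.

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder credentials for this sketch; prefer key-based auth.
	config := &ssh.ClientConfig{
		User: "monitor",
		Auth: []ssh.AuthMethod{ssh.Password("secret")},
		// A real tool must verify the host key instead of ignoring it.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "example.com:22", config)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Each probe is one command run in its own session: the same pattern
	// the article describes for collecting CPU, memory, and disk metrics.
	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.Output("cat /proc/loadavg")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("load average: %s", out)
}
```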
Read more at Tecmint