
How to Maintain Open Source Compliance After Code Changes

The previous article in this series covered how to establish a baseline for open source software compliance by finding exactly which open source software is already in use and under which licenses it is available. But how do you make sure that future revisions of the same product (or other products built using the initial baseline) stay compliant once the baseline is established?

This is the concept of incremental compliance: you need to ensure compliance of whatever source code changes took place between the initial compliant baseline and the current version.

Maintaining open source license compliance throughout code changes is a continuous effort that depends on discipline and a commitment to build compliance activities into existing engineering and business processes. And it’s a process that involves maintaining both the open source code and the open source culture of an organization.

Below are some recommendations, based on The Linux Foundation’s e-book Open Source Compliance in the Enterprise, for some of the best ways to maintain compliance as your organization’s code and company evolves.

Maintaining Code Compliance

First, companies can maintain open source code compliance through processes and improvements aimed at the development process:

  • Adherence to the company’s compliance policy and process, in addition to any provided guidelines

  • Continuous audits of all source code integrated in the code base, regardless of its origins

  • Continuous improvement of the tools used to ensure compliance, automating as much of the process as possible so the compliance program executes efficiently
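As a hypothetical illustration of the audit and automation points above, a minimal continuous audit could scan every file in the code base for SPDX license identifiers and flag files that lack one. This is only a sketch (the `.c` file glob and the `audit_tree` helper are invented for illustration), not a substitute for a dedicated license-scanning tool:

```python
import re
from pathlib import Path

# SPDX short-form identifiers, e.g. "SPDX-License-Identifier: MIT"
SPDX_RE = re.compile(r"SPDX-License-Identifier:\s*(\S+)")

def audit_tree(root):
    """Return (licenses_found, files_missing_identifier) for a source tree."""
    licenses, missing = {}, []
    for path in Path(root).rglob("*.c"):  # extend to other extensions as needed
        text = path.read_text(errors="ignore")
        match = SPDX_RE.search(text)
        if match:
            licenses.setdefault(match.group(1), []).append(str(path))
        else:
            missing.append(str(path))
    return licenses, missing
```

Run in a CI job, a check like this makes the audit continuous: any commit that introduces unidentified code fails the build rather than slipping into the baseline.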

Maintaining a Culture of Compliance

In addition to the code, companies need to take steps to maintain compliance activities as the organization itself grows and ships more products and services using open source software. They must institutionalize compliance within their development culture to ensure its sustainability. Below are a few ways that companies can maintain the culture of compliance, as well as code compliance.

Sponsorship

Executive-level commitment is essential to ensure the sustainability of compliance activities. There must be a company executive who acts as an ongoing compliance champion and who ensures corporate support for open source management functions.

Consistency

Achieving consistency across the company is key in large companies with multiple business units and subsidiaries. A consistent interdepartmental approach helps with recordkeeping, and also facilitates sharing code across groups.

Measurement and analysis

Measure and analyze the impact and effectiveness of compliance activities, processes, and procedures with the goal of studying performance and improving the compliance program. Metrics will help you communicate the productivity advantages that accrue from each program element when promoting the compliance program.

Refining compliance processes

The scope and nature of an organization’s use of open source is dynamic — dependent on products, technologies, mergers, acquisitions, offshore development activities, and many other factors. Therefore, it is necessary to continuously review compliance policies and processes and introduce improvements.

Furthermore, open source license interpretations and legal risks continue to evolve. In such a dynamic environment, a compliance program must evolve as well.

A compliance program is of no value unless it is enforced. An effective compliance program should include mechanisms for ongoing monitoring of adherence to the program and for enforcing policies, procedures, and guidelines throughout the organization. One way to enforce the compliance program is to integrate it within the software development process and ensure that some measurable portion of employee performance evaluation depends on their commitment to and execution of compliance program activities.

Staffing

Ensure that staff is allocated to the compliance function, and that adequate compliance training is provided to every employee in the organization. In larger organizations, the compliance officer and related roles may grow to be FTEs (full-time equivalents); in smaller organizations, the responsibility of open source management is more likely to be a shared and/or part-time activity.


Read the first article in this series:

An Introduction to Open Source Compliance in the Enterprise

6 Operational Challenges to Using Open Source Software

In today’s rapidly evolving markets, the companies that consistently innovate most quickly and at the least cost will win. And, as you’ve seen in our ongoing series, using Open Source Software (OSS) enables rapid, low-cost innovation. But it can also introduce operational challenges and legal risks.

We’re at a point now where OSS has become such a mainstream phenomenon that not using open source almost certainly places your organization at a disadvantage. So you must learn how to navigate the challenges and risks in order to remain competitive.

“Open source is ubiquitous, it’s unavoidable… having a policy against open source is impractical and places you at a competitive disadvantage.” — Gartner.

In this post, we’ll explore how open source became the de facto way to build software. Then we’ll cover the challenges this new method of software development has introduced for organizations. You can download the entire series now in our Fundamentals of Professional Open Source Management sample chapter.

The open source development revolution

From innovative beginnings in academic research and the GNU tools project started roughly four decades ago, OSS has grown into a major phenomenon that has reshaped multiple industries. Today, there are more than 1.5 million unique open source projects offering terabytes of working code to software developers. The availability of these resources and the trends toward modular software and software reuse have radically changed the way most companies develop software.

Not so long ago, we developed most of our software products in-house. We might have used a few third-party components for connectivity to other systems or some specialized processing, but these were acquired through a carefully controlled procurement process.

Today, we develop more complicated software faster by using open source components freely available on the Internet. Most of our activity has shifted from specifying and implementing our own custom software to integrating already available working pieces. We only code the parts that are truly unique to our application.

But now, instead of a few carefully controlled code acquisitions, we are repeatedly downloading code from the Internet to evaluate, prototype, and integrate. Although this approach speeds up development, it has created some significant new challenges.

6 OSS operational challenges

While using OSS brings many advantages, it can introduce risks and additional operational complexity to software development lifecycles.

● Organizations must deal with many new software sources, including commercial and noncommercial suppliers – some use OSS acquired from hundreds of different sources.

● The cornucopia of available open source components drives a higher volume of third-party software acquisition decisions. Where are these decisions being made? Many developers are not qualified to consider all of the necessary aspects, including software license analysis, but a heavyweight process like the old procurement approach is too expensive and time-consuming to apply to the new volume of acquisitions.

● Integration of a large number of third-party components can create complexity. One area of complexity is software version consistency across multiple interdependent stacks of code.

● Open source projects run the gamut from amateur exercises to professionally developed and tested releases. Your organization must ensure that appropriate levels of quality are chosen for each application.

● How will your organization obtain technical support and updates for all of these different open source components? Healthy open source communities provide excellent support and maintenance, but the self-service model of open source requires consistent participation on the part of your developers.

● Commercial relationships can reinforce your requirements with suppliers by adding a financial incentive, but influencing open source project direction is dependent upon multiple aspects of participation.
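The version-consistency challenge in the list above can be sketched in a few lines: given the pinned dependency versions of several interdependent components, find the dependencies that different components pin to different versions. The data shapes and the `find_version_conflicts` name are assumptions made purely for illustration:

```python
def find_version_conflicts(stacks):
    """Given {component: {dependency: pinned_version}}, return
    {dependency: {version: [components]}} for every dependency that
    is pinned to more than one version across the stacks."""
    seen = {}
    for component, deps in stacks.items():
        for dep, version in deps.items():
            seen.setdefault(dep, {}).setdefault(version, []).append(component)
    return {dep: vers for dep, vers in seen.items() if len(vers) > 1}

# Hypothetical example: two components disagree on the openssl version.
stacks = {
    "web":   {"openssl": "1.1.1", "zlib": "1.2.11"},
    "agent": {"openssl": "3.0.2", "zlib": "1.2.11"},
}
conflicts = find_version_conflicts(stacks)  # {"openssl": {...}}
```

Real dependency resolvers must also handle version ranges and transitive dependencies, but even this flat check surfaces the interdependent-stack problem the bullet describes.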

In our final article, we’ll discuss the legal issues and risks that come when companies incorporate OSS into their own software projects. And we’ll introduce the spectrum of open source license types with which organizations should become familiar.

Open source software management

Read more:

What Is Open Source Software?

Using Open Source Software to Speed Development and Gain Business Advantage

6 Reasons Why Open Source Software Lowers Development Costs

Why Using Open Source Software Helps Companies Stay Flexible and Innovate

What Is the Point of Learning C?

Take a look at the TIOBE Programming Community Index — an indicator of the popularity of programming languages — and you’ll see that Google’s Go and, to a lesser extent, Dart and Perl are trending up. The venerable C, however, is a language whose popularity is plummeting, according to the index.

In a world where there is huge demand for mobile and web applications coded in higher-level languages that are easy to learn and debug and difficult to make mistakes in — at least compared to C — one might assume there’s no reason to bother with a low-level language that’s going out of fashion. Does this mean that C isn’t worth learning? The answer isn’t quite that simple.

Read more at CIO

Serverless: Redefining DevOps

TL;DR: There is a dramatic shift underway in how many in-house and commercial DevOps tools are being created and used.

The growth in interest in serverless computing continues at pace, and for many organisations serverless technologies such as AWS Lambda, Azure Functions, or Google Cloud Functions are becoming an essential part of their development and operations toolkit.

While some commentators see “functions as a service” as a logical end game in the evolution of microservices, we disagree; rather, functions will be part of an overall continuum of development approaches. However, when understood and used properly, functions are already a very disruptive and efficient model for certain classes of applications, and the scope for their usage is rapidly expanding.

Read more at RedMonk

IT Automation Best Practices for Network Engineers and Architects

Software-defined networking (SDN) for L2 and L3 (layer two and three) networking and network function virtualization (NFV) for L4-L7 network services have remained elusive for many IT departments due to the lack of maturation of the technology or the specialized skills needed to implement them. But network automation doesn’t have to be an all-or-nothing proposition. Software-defined approaches for application and networking services, combined with scripting and orchestration tools such as Ansible, are enabling practical approaches to network automation that don’t require boiling the ocean. In this article, I’ll examine some best practices for network automation in L4-L7 services that can drive immediate improvements in your network.

The motivations to automate may seem obvious but are worth reviewing to understand what you want to achieve and how you will measure your success.

Read more at The New Stack

The Physical Computing Capabilities of the Raspberry Pi

While the Raspberry Pi is an excellent and affordable mini Linux computer with a stylish and functional desktop user interface, it has plenty of scope beyond that of a regular PC. Here’s an overview of the physical computing capabilities of the Pi.

GPIO pins

Since 2014, with the release of the Model B+, the Raspberry Pi’s form factor has stayed the same, including the uniform 40-pin GPIO (General Purpose Input/Output) pin header.

Read more at OpenSource.com

IT Workers Say Their Companies Are Migrating to Virtual Infrastructures, Study Says

An overwhelming 86 percent of respondents to a Chef survey of IT practitioners have completed or are in the process of migrating from a physical infrastructure to a virtual one.

Chef interviewed more than 1,500 global IT workers who use Chef servers and found that many emerging and legacy technologies are being rebuilt around the needs of developers. In addition, companies are piloting and adopting new technologies like cloud, containers, and microservices in search of speed.

Cloud

The study found that 81 percent of respondents said they are in the process of or have completed a project to run some applications in cloud-based architectures. Those surveyed said it typically takes an average of six months to complete this.

Read more at SDxCentral

Digital Privacy at the U.S. Border: A New How-To Guide from EFF

Increasingly frequent and invasive searches at the U.S. border have raised questions for those of us who want to protect the private data on our computers, phones, and other digital devices. A new guide released today by the Electronic Frontier Foundation (EFF) gives travelers the facts they need in order to prepare for border crossings while protecting their digital information.

“Digital Privacy at the U.S. Border” helps everyone do a risk assessment, evaluating personal factors like immigration status, travel history, and the sensitivity of the data you are carrying. … Assessing your risk factors helps you choose a path to proactively protect yourself, which might mean leaving some devices at home, moving some information off of your devices and into the cloud, and using encryption. EFF’s guide also explains why some protections, like fingerprint locking of a phone, are less secure than other methods.

Read more at the EFF

Call for Proposals Now Open for Xen Project Developer and Design Summit 2017

We’re excited to announce that registration and the call for proposals is open for Xen Project Developer and Design Summit 2017, which will be held in Budapest, Hungary from July 11-13, 2017. The Xen Project Developer and Design Summit combines the formats of Xen Project Developer Summits with Xen Project Hackathons, and brings together the Xen Project’s community of developers and power users.

Submit a Talk

Do you have an interesting use case around Xen Project technology or best practices around the community? There’s a wide variety of topics we are looking for, including security, embedded environments, network function virtualization (NFV), and more. You can find all the suggested topics for presentations and panels here (make sure you select the Topics tab).

Several formats are being accepted for speaking proposals, including:

  • Presentations and Panels
  • Interactive design and problem-solving sessions. These sessions can be submitted as part of the CFP, but we will reserve a number of design sessions to be allocated during the event. Proposers of design sessions are expected to host and moderate design sessions following the format we have used at Xen Project Hackathons. If you have not participated in these in the past, check out past event reports from 2016, 2015, and 2013.

Never talked at a conference before? Don’t worry! We encourage new speakers to submit for our events and have plenty of resources to help you prepare for your presentation.

Here are some dates to remember for submissions and in general:

  • CFP Close: April 14, 2017
  • CFP Notifications: May 5, 2017
  • Schedule Announced: May 16, 2017
  • Event: July 11-13, 2017

Registration

Come join us for this event, and if you register by May 19, you’ll get an early bird discount :) Travel stipends are available for students or individuals who are not associated with a company. If you have any questions, please send a note to community.manager@xenproject.org.

This article originally appeared on Xen Project Blog

Solving Monitoring in the Cloud With Prometheus

Hundreds of companies are now using the open source Prometheus monitoring solution in production, across industries ranging from telecommunications and cloud providers to video streaming and databases.

In advance of CloudNativeCon + KubeCon Europe 2017 to be held March 29-30 in Berlin, we talked to Brian Brazil, the founder of Robust Perception and one of the core developers of the Prometheus project, who will be giving a keynote on Prometheus at CloudNativeCon. Make sure to catch the full Prometheus track at the conference.

Linux.com: What makes monitoring more challenging in a Cloud Native environment?

Brian Brazil: Traditional monitoring tools come from a time when environments were static and machines and services were individually managed. By contrast, a Cloud Native environment is highly automated and dynamic, which requires a more sophisticated approach.

With a traditional setup there were a relatively small number of services, each with their own machine. Monitoring was on machine metrics such as CPU usage and free memory, which were the best way available to alert on user-facing issues. In a Cloud Native world, where many different services not only share machines, but the way in which they’re sharing them is in constant flux, such an approach is not scalable.

For example, with a mixed workload of user-facing and batch jobs, high CPU usage merely indicates that you’re getting good value for money out of your resources. It doesn’t necessarily indicate anything about end-user experience. Thus, metrics like latency, failure ratios, and processing times from services spread across machines must be aggregated and then used for graphs and alerts.

In the same way that the move was made from manual management of machines and services to tools like Chef and now Kubernetes, we must make a similar transition in the monitoring space.

Linux.com: What are the advantages of Prometheus?

Brian Brazil: Prometheus was created with a dynamic cloud environment in mind. It has integrations with systems such as Kubernetes and EC2 that keep it up to date with what type of containers are running where, which is essential with the rate of change in a modern environment.

Prometheus client libraries allow you to instrument your applications for the metrics and KPIs that matter in your system. For third-party applications such as Cassandra, HAProxy, or MySQL, there’s a variety of exporters to expose their useful metrics.

The data Prometheus collects is enriched by labels. Labels are arbitrary key-value pairs that can be used to distinguish the development cluster from the production environment, or to break a metric out by HTTP endpoint.

The PromQL query language allows for aggregation based on these labels, calculation of 95th percentile latencies per container, service or datacenter, forecasting, and any other math you’d care to do. What’s more: if you can graph it, you can alert on it. This gives you the power to have alerts on what really matters to you and your users, and helps eliminate those late night alerts for non-issues.
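To make the label-and-aggregation model concrete, here is a rough Python analogy (not PromQL itself, and with names and sample data invented for illustration): group latency samples by a label’s value and compute the 95th percentile per group, the kind of calculation a single PromQL expression performs across an entire fleet:

```python
from collections import defaultdict
from statistics import quantiles

def p95_by_label(samples, label):
    """Group (labels, latency) samples by one label key and return the
    95th-percentile latency per group."""
    groups = defaultdict(list)
    for labels, latency in samples:
        groups[labels[label]].append(latency)
    # quantiles(..., n=100) yields the 1st..99th percentile cut points;
    # index 94 is the 95th percentile.
    return {key: quantiles(vals, n=100)[94] for key, vals in groups.items()}

# Invented data: latency samples tagged with a "service" label.
samples = [({"service": "api"}, float(ms)) for ms in range(1, 101)]
latencies = p95_by_label(samples, "service")
```

The point of the sketch is the data model: because every sample carries labels, the same aggregation works whether the group key is a service, a container, or a datacenter.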

Linux.com: Are there things that catch new users off guard?

Brian Brazil: One common misunderstanding is the type of monitoring system that Prometheus is, and where it fits as part of your overall monitoring strategy.

Prometheus is metrics based, meaning it is designed to efficiently deal with numbers — numbers such as how many HTTP requests you’ve served and their latency. What Prometheus is not is an event logging system, so it is not suitable for tracking the details of each individual HTTP request. By having both a metrics solution and an event logging solution (such as the ELK stack), you’ll cover a good range in terms of breadth and depth. Neither is sufficient on its own, due to the different engineering tradeoffs each must make.

Linux.com: What has the response to Prometheus been?

Brian Brazil: From its humble beginnings in 2012, when Prometheus had just two developers working on it part time, to today in 2017, hundreds of developers have contributed to the Prometheus project itself. In addition, a rich ecosystem has emerged, with over 150 third-party integrations — and that’s just the ones we know of.

There are hundreds of companies using Prometheus in production across all industries from telecommunications to cloud providers, video streaming to databases and startups to Fortune 500s. Since announcing 1.0 last year, the growth in users and the ecosystem has only accelerated.

Linux.com: Are there any talks in particular to watch out for at CloudNativeCon + KubeCon Europe?

Brian Brazil: For those who are used to more static environments, or who are just trying to reduce pager noise, Alerting in Cloud Native Environments by Fabian Reinartz of CoreOS is essential. If you’re already running Prometheus in a rapidly growing system, SoundCloud’s Björn Rabenstein, who wrote the current storage system, will cover what you’ll need to know in Configuring Prometheus for High Performance.

For those on the development side, there’s a workshop on Prometheus Instrumentation that’ll take you from instrumenting your code all the way through visualising the results. My own talk on Counting in Prometheus is a deep dive into the deceptively simple sounding question of counting how many requests there were in the past hour, and how it really works in various monitoring systems.

Not everything is cloud native; Prometheus: The Unsung Heroes is a user story of how Prometheus can monitor infrastructure such as load balancers via SNMP. Finally, in Integrating Long-Term Storage with Prometheus, Julius Volz looks at the plans for one of our most sought-after pieces of future functionality.

All talks will be recorded, so if you aren’t lucky enough to attend in person, you can watch the talks later online.

CloudNativeCon + KubeCon Europe is almost sold out! Register now to secure your seat.