
Hardening the Kernel to Protect Against Attackers

The task of securing Linux systems is so mind-bogglingly complex and involves so many layers of technology that it can easily overwhelm developers. However, there are some fairly straightforward protections you can use at the very core: the kernel. These hardening techniques help developers guard against the bugs that haven’t yet been detected.

“Hardening is about making bugs more difficult to exploit,” explained Mark Rutland, a kernel developer at ARM Ltd, at the recent Embedded Linux Conference Europe 2016 in Berlin. There will always be dangerous bugs that manage to evade the notice of kernel developers, he added. “We do not yet know which particular bugs exist in the next kernel, and we probably won’t for five years,” he said, referring to Kees Cook’s recent analysis of kernel bug lifetimes.

“We see recurring classes of bugs involving things like dereferencing of null pointers or accessing memory controlled by user space, so we can assume that some of the bugs we don’t know about will fall into these buckets.”

Rutland, who earlier this year warned ELC North America attendees about the hidden dangers of unruly caches, noted that bugs are an unavoidable offshoot of programming. Most are relatively benign, but many cause problems, and some can open dangerous vulnerabilities.

“In the kernel 4.8 merge window we fixed over 500 bugs, many of which were in 4.7 or earlier,” said Rutland, noting that while varied techniques are used today to avoid bugs making it into the kernel, some will inevitably slip through and require later fix-ups. This is “slightly terrifying” given the long lifetime of bugs, which might not be discovered until affected devices are end-of-life.

Some of these bugs have significant security implications. Fortunately, kernel developers have in recent years begun to create hardening features that protect against many of the most common bugs. Rutland implored the audience to make use of these hardening features, noting that many are simple to enable, and their protections are “effectively free,” yet don’t see widespread use.

Rutland went on to discuss several of the main classes of hardening protections that impose the least overhead. Here are some edited quotes about each:

Strict kernel memory permissions – “Historically, the kernel has mapped all memory as readable, writable, and executable…which leads to…being able to modify kernel code or const data, or executing data, all of which…are very useful primitives if you’re an attacker. We can get the MMU to enforce these permissions by…mapping that code as read only or mapping constant data as read only and non-executable. If it’s done in the MMU, it’s effectively free, as the hardware is handling it for us.” (For details, study up on CONFIG_DEBUG_RODATA and CONFIG_DEBUG_SET_MODULE_RONX.)

Stack smashing protection – “Stack smashing attacks work on the principle that stacks contain a return address and other data, as well as local variables. On most architectures, the stack grows downwards, and the buffers grow upwards. If you copy some data to a buffer on the stack, and the data is too large to fit in the buffer, you end up overwriting subsequent data, which happens to include the return address. So if an attacker knows what your stack frame layout will look like, they can control where you will return to, and…branch to any code of their choosing…to launch more advanced attacks. Stack smashing protection guards against this by having the compiler insert a secret value known as a canary between the data and the flow control information.” (For details, see CONFIG_CC_STACKPROTECTOR_REGULAR and CONFIG_CC_STACKPROTECTOR_STRONG.)

User/kernel memory segregation – “Typically the kernel shares an address space with user space in hardware. A pointer can encode an address to either space…using the same load and store instructions. If you accidentally dereference an address…controlled by user space, the hardware won’t notice and will happily give you the value, so if an attacker can convince you to dereference the address…it can be used as the basis for a number of attacks. If an attacker puts a buffer of code in a user space address and then uses a stack smashing exploit to branch to that, they can do whatever they want. The MMU can help by letting us change the page table dynamically, which we use to switch processes. Enabling access temporarily and then disabling access…will catch most of these unintentional user memory accesses or branches.”

Stricter permissions – “Some hardware can…automatically prevent arbitrary code execution from a user space buffer. On ARM we have a feature called privilege execute never (PXN), which…says I never want this page to be executed with kernel privileges. x86 has a similar thing called SMEP. An attacker can still branch to another piece of kernel code, so it doesn’t prevent arbitrary execution, but it limits one case. More recently, MMUs have become able to do this with data accesses as well.”
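A quick way to see which of these protections a given kernel was built with is to grep its build configuration. A minimal sketch using the 4.x-era option names (Debian-family distros keep the config in /boot; others expose it as /proc/config.gz):

$ grep -E 'DEBUG_RODATA|DEBUG_SET_MODULE_RONX|CC_STACKPROTECTOR' /boot/config-$(uname -r)
# protections that were compiled in show up as =y, for example:
# CONFIG_DEBUG_RODATA=y
# CONFIG_CC_STACKPROTECTOR_STRONG=y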

Rutland advised that it will be “years before we have a reasonable number of protections.” He also noted that the protections are not 100 percent effective, and that “we still have to find and fix bugs.”

In Rutland’s view, Linux systems would be more secure if more of these hardening features were turned on by default, as began happening in kernel 4.9, where the aforementioned DEBUG_RODATA is now mandatory. “Resistance is slowly going away for some of these protections,” he said. “Lots of the complaints about the features not looking like kernel code and doing things wrong are being solved quite quickly. People’s opinions about mainline are changing – there’s agreement that yes, we need to do something here.”

Watch the full video of Rutland’s presentation, “Thwarting Unknown Bugs: Hardening Features in the Mainline Linux Kernel” below:

Embedded Linux Conference + OpenIoT Summit North America will be held on February 21 – 23, 2017 in Portland, Oregon. Check out over 130 sessions on the Linux kernel, embedded development & systems, and the latest on the open Internet of Things.

Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the attendee registration price. Register now>>

 

Building an Email Server on Ubuntu Linux, Part 3

Welcome back, me hearty Linux sysadmins! In part 1 and part 2 of this series, we learned how to put Postfix and Dovecot together to make a nice IMAP and POP3 mail server. Now we will learn to make virtual users so that we can manage all of our users in Dovecot.

Sorry, No SSL. Yet.

I know I promised to show you how to set up a proper SSL-protected server. Unfortunately, I underestimated how large that topic is. So, I will realio trulio write a comprehensive how-to by next month.

For today, in this final part of this series, we’ll go into detail on how to set up virtual users and mailboxes in Dovecot and Postfix. It’s a bit weird to wrap your mind around, so the following examples are as simple as I can make them. We’ll use plain flat files and plain-text authentication. You have the option of using database back ends and nice strong forms of encrypted authentication; see the links at the end for more information on these.

Virtual Users

You want virtual users on your email server and not Linux system users. Using Linux system users does not scale, and it exposes their logins, and your Linux server, to unnecessary risk. Setting up virtual users requires editing configuration files in both Postfix and Dovecot. We’ll start with Postfix, and with a clean, simplified /etc/postfix/main.cf. Move your original main.cf out of the way and create a new clean one with these contents:


compatibility_level=2
smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu/GNU)
biff = no
append_dot_mydomain = no

myhostname = localhost
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = $myhostname
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.0.0/24
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all

virtual_mailbox_domains = /etc/postfix/vhosts.txt
virtual_mailbox_base = /home/vmail
virtual_mailbox_maps = hash:/etc/postfix/vmaps.txt
virtual_minimum_uid = 1000
virtual_uid_maps = static:5000
virtual_gid_maps = static:5000
virtual_transport = lmtp:unix:private/dovecot-lmtp

You may copy this exactly, except for the 192.168.0.0/24 parameter for mynetworks, as this should reflect your own local subnet.
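The main.cf above also references two lookup files that don’t exist yet. As a minimal sketch (using the studio domain and the same users that appear later in this article), vhosts.txt simply lists your virtual domains, one per line, and vmaps.txt maps each virtual address to a maildir path relative to virtual_mailbox_base. Since delivery is handed off to Dovecot over LMTP, Postfix mainly consults vmaps.txt to decide which recipients exist; the trailing slash marks the path as maildir format. Run postmap after any edit so Postfix can read the map as a hash table:

$ cat /etc/postfix/vhosts.txt
studio
$ cat /etc/postfix/vmaps.txt
alrac@studio  studio/alrac@studio/
layla@studio  studio/layla@studio/
fred@studio   studio/fred@studio/
molly@studio  studio/molly@studio/
benny@studio  studio/benny@studio/
$ sudo postmap /etc/postfix/vmaps.txt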

Next, create the user and group vmail, which will own your virtual mailboxes. The virtual mailboxes are stored in vmail's home directory.


$ sudo groupadd -g 5000 vmail
$ sudo useradd -m -u 5000 -g 5000 -s /bin/bash vmail

Then reload the Postfix configurations:


$ sudo postfix reload
[sudo] password for carla: 
postfix/postfix-script: refreshing the Postfix mail system

Dovecot Virtual Users

We’ll use Dovecot’s LMTP protocol to connect it to Postfix. You probably need to install it:


$ sudo apt-get install dovecot-lmtpd

The last line in our example main.cf references lmtp. Copy this example /etc/dovecot/dovecot.conf, replacing your existing file. Again, we are using just this single file, rather than calling the files in /etc/dovecot/conf.d.


protocols = imap pop3 lmtp
log_path = /var/log/dovecot.log
info_log_path = /var/log/dovecot-info.log
ssl = no
disable_plaintext_auth = no
mail_location = maildir:~/.Mail
pop3_uidl_format = %g
auth_verbose = yes
auth_mechanisms = plain

passdb {
  driver = passwd-file
  args = /etc/dovecot/passwd
}

userdb {
  driver = static
  args = uid=vmail gid=vmail home=/home/vmail/studio/%u
}

service lmtp {
  user = vmail
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    group = postfix
    mode = 0600
    user = postfix
  }
}

protocol lmtp {
  postmaster_address = postmaster@studio
}

At last, you can create the file that holds your users and passwords, /etc/dovecot/passwd. For simple plain-text authentication we need only our users’ full email addresses and passwords:


alrac@studio:{PLAIN}password
layla@studio:{PLAIN}password
fred@studio:{PLAIN}password
molly@studio:{PLAIN}password
benny@studio:{PLAIN}password
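
Plain-text entries are fine for our LAN test server, but you don’t have to keep readable passwords on disk. doveadm can hash a password for you; a small sketch (SHA512-CRYPT is just one of several schemes Dovecot supports):

$ doveadm pw -s SHA512-CRYPT
Enter new password:
Retype new password:
{SHA512-CRYPT}$6$...

Paste the whole {SHA512-CRYPT}... string in place of {PLAIN}password for that user.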

The Dovecot virtual users are independent of the Postfix virtual users, so you will manage your users in Dovecot. Save all of your changes and restart Postfix and Dovecot:


$ sudo service postfix restart
$ sudo service dovecot restart

Now let’s use good old telnet to see if Dovecot is set up correctly.


$ telnet studio 110
Trying 127.0.1.1...
Connected to studio.
Escape character is '^]'.
+OK Dovecot ready.
user molly@studio
+OK
pass password
+OK Logged in.
quit
+OK Logging out.
Connection closed by foreign host.
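
That exercised POP3. You can give IMAP on port 143 the same treatment; a quick sanity check typed by hand (same test user):

$ telnet studio 143
a1 LOGIN molly@studio password
a2 LIST "" "*"
a3 LOGOUT

Each tagged command should come back with an OK response, and the LIST should include at least the INBOX.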

So far so good! Now let’s send some test messages to our users with the mail command. Make sure to use the user’s whole email address and not just the username.


$ mail benny@studio
Subject: hello and welcome!
Please enjoy your new mail account!
.

The period on the last line sends your message. Let’s see if it landed in the correct mailbox.


$ sudo ls -al /home/vmail/studio/benny@studio/.Mail/new
total 16
drwx------ 2 vmail vmail 4096 Dec 14 12:39 .
drwx------ 5 vmail vmail 4096 Dec 14 12:39 ..
-rw------- 1 vmail vmail  525 Dec 14 12:39 1481747995.M696591P5790.studio,S=525,W=540

And there it is. It is a plain text file that we can read:

$ less 1481747995.M696591P5790.studio,S=525,W=540
Return-Path: <carla@localhost>
Delivered-To: benny@studio
Received: from localhost
        by studio (Dovecot) with LMTP id V01ZKRuuUVieFgAABiesew
        for <benny@studio>; Wed, 14 Dec 2016 12:39:55 -0800
Received: by localhost (Postfix, from userid 1000)
        id 9FD9CA1F58; Wed, 14 Dec 2016 12:39:55 -0800 (PST)
Date: Wed, 14 Dec 2016 12:39:55 -0800
To: benny@studio
Subject: hello and welcome!
User-Agent: s-nail v14.8.6
Message-Id: <20161214203955.9FD9CA1F58@localhost>
From: carla@localhost (carla)

Please enjoy your new mail account!

You could also use telnet for testing, as in the previous segments of this series, and set up accounts in your favorite mail client, such as Thunderbird, Claws-Mail, or KMail.

Troubleshooting

When things don’t work, check your logfiles (see the configuration examples), and run journalctl -xe. This should give you all the information you need to spot typos, uninstalled packages, and nice search terms for Google.
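For example, these two commands usually surface the relevant errors (the Dovecot log paths come from the dovecot.conf above; the mail log location varies by distro):

$ sudo tail -f /var/log/dovecot.log /var/log/dovecot-info.log /var/log/mail.log
$ journalctl -xe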

What Next?

Assuming your LAN name services are correctly configured, you now have a nice usable LAN mail server. Obviously, sending messages in plain text is not optimal, and an absolute no-no for Internet mail. See Dovecot SSL configuration and Postfix TLS Support. VirtualUserFlatFilesPostfix covers TLS and database back ends. And watch for my upcoming SSL how-to. Really.

Advance your career in system administration! Check out the Essentials of System Administration course from The Linux Foundation.

Docker Open Sources Critical Infrastructure Component

Docker announced today that it was open sourcing containerd (pronounced Container D), making a key infrastructure piece of its container platform available for anyone to work on.

Containerd, which acts as the core container runtime engine, is a component within Docker that provides “users with an open, stable and extensible base for building non-Docker products and container solutions,” according to the company. Leading cloud providers have signed on to work on it, including Alibaba, AWS, Google, IBM, and Microsoft.

Read more at TechCrunch

Data Wrangling at Slack

For a company like Slack that strives to be as data-driven as possible, understanding how our users use our product is essential.

The Data Engineering team at Slack works to provide an ecosystem to help people in the company quickly and easily answer questions about usage, so they can make better, data-informed decisions: “Based on a team’s activity within its first week, what is the probability that it will upgrade to a paid team?” or “What is the performance impact of the newest release of the desktop app?”

The Dream

We knew when we started building this system that we would need flexibility in choosing the tools to process and analyze our data. Sometimes the questions being asked involve a small amount of data and we want a fast, interactive way to explore the results. Other times we are running large aggregations across longer time series and we need a system that can handle the sheer quantity of data and help distribute the computation across a cluster. Each of our tools would be optimized for a specific use case, and they all needed to work together as an integrated system.

Read more at Slack Engineering

9 Lessons From 25 Years of Linux Kernel Development

Because the Linux kernel community celebrated a quarter-century of development in 2016, many people have asked us the secret to the project’s longevity and success. I usually laugh and joke that we really have no idea how we got here. The project has faced many disagreements and challenges along the way. But seriously, the reason we’ve made it this far has a lot to do with the community’s capacity for introspection and change.

About 16 years ago, most of the kernel developers had never met each other in person—we’d only ever interacted over email—and so Ted Ts’o came up with the idea of a Kernel Summit. Now every year kernel developers make a point to gather in person to work out technical issues and, crucially, to review what we did right and what we did wrong over the past year.

Read more at OpenSource.com

Experts, True Believers and Test-Driven Development: How Expert Advice Becomes a Religion

If you’ve encountered test-driven development (TDD), you may have encountered programmers who follow it with almost religious fervor. They will tell you that you must always write unit tests before you write code, no exceptions. If you don’t, your code will be condemned to everlasting brokenness, tortured by evil edge cases for all eternity.

This is an example of a common problem in programming: good advice by experts that gets turned into a counter-productive religion. Test-driven development is useful and worth doing… some of the time, but not always. And the experts who came up with it in the first place will be the first to tell you that.

Read more at Code Without Rules

Containers Are The Future But The Future Isn’t Finished

Containers are a big deal, and they’re only going to get bigger. That’s my view after attending the latest KubeCon (and CloudNativeCon) in Seattle last week.

A year ago, I was confused about what containers mean for IT, because the name ‘container’ had me thinking it was about the little box that code was stored in: the container image. I’m here to tell you that the container image format itself (Docker, rkt, whatever you like) is not the point.

The most important thing about containers is the process of using them, not the things themselves. The process is heavily automated. No more installing software by sitting in front of a console and clicking ‘Next’ every five minutes. Unix people everywhere rejoice that Windows folk have discovered scripting is a good thing.

Read more at Forbes

Cloud Foundry Launches Open Service Broker API Project

The Cloud Foundry Foundation is spearheading an effort to create APIs for connecting applications to cloud-platform services. This involves getting collaborators to work on a piece of not-so-special software that each of them would otherwise have to develop.

The aptly named Open Service Broker API project launched Tuesday with members including Fujitsu, Google, IBM, Pivotal, Red Hat, and SAP.

Read more at SDxCentral

Kubernetes 1.5: Supporting Production Workloads

We’re announcing the release of Kubernetes 1.5. This release follows close on the heels of KubeCon/CloudNativeCon, where users gathered to share how they’re running their applications on Kubernetes. Many of you expressed interest in running stateful applications in containers with the eventual goal of running all applications on Kubernetes. If you have been waiting to try running a distributed database on Kubernetes, or for ways to guarantee application disruption SLOs for stateful and stateless apps, this release has solutions for you.

StatefulSet and PodDisruptionBudget are moving to beta. Together these features provide an easier way to deploy and scale stateful applications, and make it possible to perform cluster operations like node upgrade without violating application disruption SLOs.

You will also find usability improvements throughout the release, starting with the kubectl command line interface you use so often. For those who have found it hard to set up a multi-cluster federation, a new command line tool called ‘kubefed’ is here to help. And a much requested multi-zone Highly Available (HA) master setup script has been added to kube-up. 

Did you know the Kubernetes community is working to support Windows containers? If you have .NET developers, take a look at the work on Windows containers in this release. This work is in early stage alpha and we would love your feedback.

Lastly, for those interested in the internals of Kubernetes, 1.5 introduces the Container Runtime Interface (CRI), which provides an internal API abstracting the container runtime from the kubelet. This decoupling of the runtime gives users choice in selecting a runtime that best suits their needs. This release also introduces containerized node conformance tests that verify that the node software meets the minimum requirements to join a Kubernetes cluster.

What’s New

StatefulSet beta (formerly known as PetSet) allows workloads that require persistent identity or per-instance storage to be created, scaled, deleted, and repaired on Kubernetes. You can use StatefulSets to ease the deployment of any stateful service, and tutorial examples are available in the repository. In order to ensure that there are never two pods with the same identity, the Kubernetes node controller no longer force deletes pods on unresponsive nodes. Instead, it waits until the old pod is confirmed dead in one of several ways: automatically when the kubelet reports back and confirms the old pod is terminated; automatically when a cluster-admin deletes the node; or when a database admin confirms it is safe to proceed by force deleting the old pod. Users are now warned if they try to force delete pods via the CLI. For users who will be migrating from PetSets to StatefulSets, please follow the upgrade guide.
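A side effect of that guarantee is that force deletion from the CLI now takes explicit flags and prints a warning. A minimal sketch (the pod name is hypothetical):

# pod name below is hypothetical; only force delete once the old pod is confirmed dead
$ kubectl delete pod web-0 --grace-period=0 --force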

PodDisruptionBudget beta is an API object that specifies the minimum number or minimum percentage of replicas of a collection of pods that must be up at any time. With PodDisruptionBudget, an application deployer can ensure that cluster operations that voluntarily evict pods will never take down so many simultaneously as to cause data loss, an outage, or an unacceptable service degradation. In Kubernetes 1.5 the “kubectl drain” command supports PodDisruptionBudget, allowing safe draining of nodes for maintenance activities, and it will soon also be used by node upgrade and cluster autoscaler (when removing nodes). This can be useful for a quorum based application to ensure the number of replicas running is never below the number needed for quorum, or for a web front end to ensure the number of replicas serving load never falls below a certain percentage.
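In practice, node maintenance can then look roughly like the following sketch (the node name is hypothetical); drain evicts pods one at a time and will pause rather than violate a PodDisruptionBudget:

$ kubectl drain node-1 --ignore-daemonsets
# ...perform the maintenance, then make the node schedulable again:
$ kubectl uncordon node-1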

Kubefed alpha is a new command line tool to help you manage federated clusters, making it easy to deploy new federation control planes and add or remove clusters from existing federations. Also new in cluster federation is the addition of ConfigMaps, DaemonSets, and Deployments (all alpha) to the federation API, allowing you to create, update, and delete these objects across multiple clusters from a single endpoint.
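As a rough sketch of the workflow (the cluster context names are hypothetical, and these alpha flags may change), standing up a control plane and joining a second cluster looks something like:

# context names below are hypothetical
$ kubefed init myfed --host-cluster-context=cluster-1 --dns-zone-name="example.com."
$ kubefed join cluster-2 --host-cluster-context=cluster-1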

HA Masters alpha provides the ability to create and delete clusters with highly available (replicated) masters on GCE using the kube-up/kube-down scripts. It allows setup of zone-distributed HA masters, with at least one etcd replica per zone, at least one API server per zone, and master-elected components like the scheduler and controller-manager distributed across zones.

Windows server containers alpha provides initial support for Windows Server 2016 nodes and scheduling Windows Server Containers. 

Container Runtime Interface (CRI) alpha introduces the v1 CRI API to allow pluggable container runtimes; an experimental docker-CRI integration is ready for testing and feedback.

Node conformance test beta is a containerized test framework that provides system verification and functionality testing for nodes. The test validates whether the node meets the minimum requirements for Kubernetes; a node that passes the tests is qualified to join a Kubernetes cluster. The node conformance test is available at gcr.io/google_containers/node-test:0.2 for users to verify node setup.
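Running the test is a single privileged Docker invocation on the node itself; a sketch with illustrative mount paths (results land in the mounted result directory):

# mount paths below are illustrative
$ sudo docker run -it --rm --privileged --net=host \
    -v /:/rootfs -v /var/result:/var/result \
    gcr.io/google_containers/node-test:0.2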

These are just some of the highlights in our last release for the year. For a complete list, please visit the release notes.

Availability

Kubernetes 1.5 is available for download here on GitHub and via get.k8s.io. To get started with Kubernetes, try one of the new interactive tutorials. Don’t forget to take 1.5 for a spin before the holidays! 

User Adoption

It’s been a year and a half since GA, and the rate of Kubernetes user adoption continues to surpass estimates. Organizations running production workloads on Kubernetes include the world’s largest companies, young startups, and everything in between. Since Kubernetes is open and runs anywhere, we’ve seen adoption on a diverse set of platforms: Pokémon Go (Google Cloud), Ticketmaster (AWS), SAP (OpenStack), Box (bare metal), and hybrid environments that mix and match the above. Here are a few user highlights:

  • Yahoo! JAPAN — built an automated tool chain making it easy to go from code push to deployment, all while running OpenStack on Kubernetes. 
  • Walmart — will use Kubernetes with OneOps to manage its incredible distribution centers, helping its team with speed of delivery, systems uptime and asset utilization.  
  • Monzo — a European startup building a mobile first bank, is using Kubernetes to power its core platform that can handle extreme performance and consistency requirements.

Kubernetes Ecosystem

The Kubernetes ecosystem is growing rapidly, including Microsoft’s support for Kubernetes in Azure Container Service, VMware’s integration of Kubernetes in its Photon Platform, and Canonical’s commercial support for Kubernetes. This is in addition to the thirty plus Technology & Service Partners that already provide commercial services for Kubernetes users. 

The CNCF recently announced the Kubernetes Managed Service Provider (KMSP) program, a pre-qualified tier of service providers with experience helping enterprises successfully adopt Kubernetes. Furthering the knowledge and awareness of Kubernetes, The Linux Foundation, in partnership with CNCF, will develop and operate the Kubernetes training and certification program — the first course designed is Kubernetes Fundamentals.

Community Velocity

In the past three months we’ve seen more than a hundred new contributors join the project with some 5,000 commits pushed, reaching new milestones by bringing the total for the core project to 1,000+ contributors and 40,000+ commits. This incredible momentum is only possible by having an open design, being open to new ideas, and empowering an open community to be welcoming to new and senior contributors alike. A big thanks goes out to the release team for 1.5 — Saad Ali of Google, Davanum Srinivas of Mirantis, and Caleb Miles of CoreOS for their work bringing the 1.5 release to light.

Offline, the community can be found at one of the many Kubernetes related meetups around the world. The strength and scale of the community was visible in the crowded halls of CloudNativeCon/KubeCon Seattle (the recorded user talks are here). The next CloudNativeCon + KubeCon is in Berlin, March 29-30, 2017; be sure to get your ticket and submit your talk before the CFP deadline of December 16th.

Ready to start contributing? Share your voice at our weekly community meeting.

Thank you for your contributions and support!

— Aparna Sinha, Senior Product Manager, Google

This article originally appeared on the Kubernetes Blog

Open Source Compliance in the Enterprise: Benefits and Risks

The following is adapted from Open Source Compliance in the Enterprise by Ibrahim Haddad, PhD.

There are several benefits to creating programs and processes that help companies and other organizations achieve open source compliance. On the flip side, there are many risks that companies face when they fail to comply with open source licenses.

In part 3 of this series on Open Source Compliance in the Enterprise, we’ll cover the benefits of complying and the risks of non-compliance, as well as give an overview of common ways that companies fail to comply.

The Benefits of Open Source Compliance

Companies that maintain a steady-state compliance program often gain a technical advantage, since compliant software portfolios are easier to service, test, upgrade, and maintain. In addition, compliance activities can also help identify crucial pieces of open source that are in use across multiple products and parts of an organization, and/or are highly strategic and beneficial to that organization.

Compliance can also surface the costs and risks associated with using open source components, as each component goes through multiple rounds of review.

A healthy compliance program can deliver major benefits when working with external communities as well. In the event of a compliance challenge, such a program can demonstrate an ongoing pattern of acting in good faith.

Finally, there are less common ways in which companies benefit from strong open source compliance practices. For example, a well-founded compliance program can help a company be prepared for possible acquisition, sale, or new product or service release, where open source compliance assurance is a mandatory practice before the completion of such transactions. Furthermore, there is the added advantage of verifiable compliance in dealing with OEMs and downstream vendors.

Common Compliance Failures

Throughout software development, errors and limitations in processes can lead to open source compliance failures. Examples include:

  • Failure to provide a proper attribution notice, a license notice, or a copyright notice

  • Making inappropriate or misleading statements in the product documentation or advertisement material

  • Failure to provide the source code and build scripts

  • Failure to provide users with a written notice of the open source software included in the product, and of how to download the source code

Accidental admixture of proprietary and open source intellectual property (IP) can also arise during the software development process, leading to license compliance issues. We’ll cover these in detail in the next article.

The Risks of Non-Compliance

License compliance problems are typically less damaging than intellectual property problems. This is because IP failures may result in companies being forced to release proprietary source code under an open source license, thus losing control of their (presumably) high-value intellectual property and diminishing their capability to differentiate in the marketplace.

Other risks of license compliance and IP failures include:

  • An injunction preventing a company from shipping the product until the compliance issue has been resolved

  • Support or customer service headaches caused by version mismatches (people calling or emailing the support hotline to ask about source code releases)

  • A requirement to distribute, under an open source license, the proprietary source code that corresponds to the binaries in question (depending on the specific case)

  • A significant re-engineering effort to eliminate the compliance issues

  • Embarrassment with customers, distributors, third-party proprietary software suppliers, and the open source community

In the past few years, we have witnessed several cases of non-compliance that made their way to the public eye. Increasingly, the legal disposition towards non-compliance has lessons to teach open source professionals — lessons that we will explore in future articles.

Read the other articles in this series:

An Introduction to Open Source Compliance in the Enterprise

Open Compliance in the Enterprise: Why Have an Open Source Compliance Program?

Open Source Compliance in the Enterprise: Benefits and Risks

3 Common Open Source IP Compliance Failures and How to Avoid Them

4 Common Open Source License Compliance Failures and How to Avoid Them

Top Lessons For Open Source Pros From License Compliance Failures

Download the free e-book, Open Source Compliance in the Enterprise, for a complete guide to creating compliance processes and policies for your organization.