
Cloud Foundry Launches Open Service Broker API Project

The Cloud Foundry Foundation is spearheading an effort to create APIs for connecting applications to cloud-platform services. This involves getting collaborators to work on a piece of not-so-special software that each of them would otherwise have to develop.

The aptly named Open Service Broker API project launched Tuesday with members including Fujitsu, Google, IBM, Pivotal, Red Hat, and SAP.

Read more at SDxCentral

Kubernetes 1.5: Supporting Production Workloads

We’re announcing the release of Kubernetes 1.5. This release follows close on the heels of KubeCon/CloudNativeCon, where users gathered to share how they’re running their applications on Kubernetes. Many of you expressed interest in running stateful applications in containers with the eventual goal of running all applications on Kubernetes. If you have been waiting to try running a distributed database on Kubernetes, or for ways to guarantee application disruption SLOs for stateful and stateless apps, this release has solutions for you.

StatefulSet and PodDisruptionBudget are moving to beta. Together these features provide an easier way to deploy and scale stateful applications, and make it possible to perform cluster operations like node upgrade without violating application disruption SLOs.

You will also find usability improvements throughout the release, starting with the kubectl command line interface you use so often. For those who have found it hard to set up a multi-cluster federation, a new command line tool called ‘kubefed’ is here to help. And a much requested multi-zone Highly Available (HA) master setup script has been added to kube-up. 

Did you know the Kubernetes community is working to support Windows containers? If you have .NET developers, take a look at the work on Windows containers in this release. This work is in early stage alpha and we would love your feedback.

Lastly, for those interested in the internals of Kubernetes, 1.5 introduces Container Runtime Interface or CRI, which provides an internal API abstracting the container runtime from kubelet. This decoupling of the runtime gives users choice in selecting a runtime that best suits their needs. This release also introduces containerized node conformance tests that verify that the node software meets the minimum requirements to join a Kubernetes cluster.

What’s New

StatefulSet beta (formerly known as PetSet) allows workloads that require persistent identity or per-instance storage to be created, scaled, deleted, and repaired on Kubernetes. You can use StatefulSets to ease the deployment of any stateful service, and tutorial examples are available in the repository. In order to ensure that there are never two pods with the same identity, the Kubernetes node controller no longer force deletes pods on unresponsive nodes. Instead, it waits until the old pod is confirmed dead in one of several ways: automatically, when the kubelet reports back and confirms the old pod is terminated; automatically, when a cluster-admin deletes the node; or when a database admin confirms it is safe to proceed by force deleting the old pod. Users are now warned if they try to force delete pods via the CLI. For users migrating from PetSets to StatefulSets, please follow the upgrade guide.
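As a hedged illustration of the feature described above (the names, image, and storage size below are invented for the example and are not from the release notes), a minimal apps/v1beta1 StatefulSet with per-instance storage might look like this:

```yaml
# Sketch of a StatefulSet (beta in 1.5) with stable identity and
# per-instance storage; all names and sizes are illustrative.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db            # headless Service providing stable network identity
  replicas: 3
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: example/db:latest   # hypothetical image
        volumeMounts:
        - name: data
          mountPath: /var/lib/db
  volumeClaimTemplates:      # one PersistentVolumeClaim per pod (db-0, db-1, ...)
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

Each replica keeps its ordinal identity (db-0, db-1, db-2) and its own claim across rescheduling, which is what makes the node controller’s more careful deletion behavior necessary.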

PodDisruptionBudget beta is an API object that specifies the minimum number or minimum percentage of replicas of a collection of pods that must be up at any time. With PodDisruptionBudget, an application deployer can ensure that cluster operations that voluntarily evict pods will never take down so many simultaneously as to cause data loss, an outage, or an unacceptable service degradation. In Kubernetes 1.5 the “kubectl drain” command supports PodDisruptionBudget, allowing safe draining of nodes for maintenance activities, and it will soon also be used by node upgrade and cluster autoscaler (when removing nodes). This can be useful for a quorum based application to ensure the number of replicas running is never below the number needed for quorum, or for a web front end to ensure the number of replicas serving load never falls below a certain percentage.
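For concreteness, a minimal policy/v1beta1 PodDisruptionBudget sketch follows; the label and replica count are illustrative assumptions, not values from the release:

```yaml
# Sketch: voluntary evictions (e.g. "kubectl drain") must always
# leave at least 2 of the selected pods running.
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: db-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: db        # hypothetical label on the pods being protected
```

With this object in place, "kubectl drain" will evict the matching pods only as long as doing so keeps at least two replicas up, which is exactly the quorum-preserving behavior described above.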

Kubefed alpha is a new command line tool to help you manage federated clusters, making it easy to deploy new federation control planes and add or remove clusters from existing federations. Also new in cluster federation is the addition of ConfigMaps, DaemonSets, and Deployments (all alpha) to the federation API, allowing you to create, update, and delete these objects across multiple clusters from a single endpoint.

HA Masters alpha provides the ability to create and delete clusters with highly available (replicated) masters on GCE using the kube-up/kube-down scripts. It allows setup of zone-distributed HA masters, with at least one etcd replica per zone, at least one API server per zone, and master-elected components like the scheduler and controller-manager distributed across zones.

Windows server containers alpha provides initial support for Windows Server 2016 nodes and scheduling Windows Server Containers. 

Container Runtime Interface (CRI) alpha introduces the v1 CRI API to allow pluggable container runtimes; an experimental docker-CRI integration is ready for testing and feedback.

Node conformance test beta is a containerized test framework that provides a system verification and functionality test for nodes. The test validates whether the node meets the minimum requirements for Kubernetes; a node that passes the tests is qualified to join a Kubernetes cluster. The node conformance test is available at gcr.io/google_containers/node-test:0.2 for users to verify node setup.

These are just some of the highlights in our last release for the year. For a complete list, please visit the release notes.

Availability

Kubernetes 1.5 is available for download here on GitHub and via get.k8s.io. To get started with Kubernetes, try one of the new interactive tutorials. Don’t forget to take 1.5 for a spin before the holidays! 

User Adoption

It’s been a year-and-a-half since GA, and the rate of Kubernetes user adoption continues to surpass estimates. Organizations running production workloads on Kubernetes include the world’s largest companies, young startups, and everything in between. Since Kubernetes is open and runs anywhere, we’ve seen adoption on a diverse set of platforms: Pokémon Go (Google Cloud), Ticketmaster (AWS), SAP (OpenStack), Box (bare-metal), and hybrid environments that mix-and-match the above. Here are a few user highlights:

  • Yahoo! JAPAN — built an automated tool chain making it easy to go from code push to deployment, all while running OpenStack on Kubernetes. 
  • Walmart — will use Kubernetes with OneOps to manage its incredible distribution centers, helping its team with speed of delivery, systems uptime and asset utilization.  
  • Monzo — a European startup building a mobile first bank, is using Kubernetes to power its core platform that can handle extreme performance and consistency requirements.

Kubernetes Ecosystem

The Kubernetes ecosystem is growing rapidly, including Microsoft’s support for Kubernetes in Azure Container Service, VMware’s integration of Kubernetes in its Photon Platform, and Canonical’s commercial support for Kubernetes. This is in addition to the thirty plus Technology & Service Partners that already provide commercial services for Kubernetes users. 

The CNCF recently announced the Kubernetes Managed Service Provider (KMSP) program, a pre-qualified tier of service providers with experience helping enterprises successfully adopt Kubernetes. Furthering the knowledge and awareness of Kubernetes, The Linux Foundation, in partnership with CNCF, will develop and operate the Kubernetes training and certification program — the first course designed is Kubernetes Fundamentals.

Community Velocity

In the past three months we’ve seen more than a hundred new contributors join the project with some 5,000 commits pushed, reaching new milestones by bringing the total for the core project to 1,000+ contributors and 40,000+ commits. This incredible momentum is only possible by having an open design, being open to new ideas, and empowering an open community to be welcoming to new and senior contributors alike. A big thanks goes out to the release team for 1.5 — Saad Ali of Google, Davanum Srinivas of Mirantis, and Caleb Miles of CoreOS for their work bringing the 1.5 release to light.

Offline, the community can be found at one of the many Kubernetes related meetups around the world. The strength and scale of the community was visible in the crowded halls of CloudNativeCon/KubeCon Seattle (the recorded user talks are here). The next CloudNativeCon + KubeCon is in Berlin, March 29-30, 2017; be sure to get your ticket and submit your talk before the CFP deadline of Dec 16th.

Ready to start contributing? Share your voice at our weekly community meeting.

Thank you for your contributions and support!

— Aparna Sinha, Senior Product Manager, Google

This article originally appeared on the Kubernetes Blog

Open Source Compliance in the Enterprise: Benefits and Risks

The following is adapted from Open Source Compliance in the Enterprise by Ibrahim Haddad, PhD.

There are several benefits to creating programs and processes that help companies and other organizations achieve open source compliance. On the flip side, there are many risks that companies face when they fail to comply with open source licenses.

In part 3 of this series on Open Source Compliance in the Enterprise, we’ll cover the benefits of complying and the risks of non-compliance, as well as give an overview of common ways that companies fail to comply.

The Benefits of Open Source Compliance

Companies that maintain a steady-state compliance program often gain a technical advantage, since compliant software portfolios are easier to service, test, upgrade, and maintain. In addition, compliance activities can also help identify crucial pieces of open source that are in use across multiple products and parts of an organization, and/or are highly strategic and beneficial to that organization.

Conversely, compliance can make visible the costs and risks associated with using open source components, as each component goes through multiple rounds of review.

A healthy compliance program can deliver major benefits when working with external communities as well. In the event of a compliance challenge, such a program can demonstrate an ongoing pattern of acting in good faith.

Finally, there are less common ways in which companies benefit from strong open source compliance practices. For example, a well-founded compliance program can help a company be prepared for possible acquisition, sale, or new product or service release, where open source compliance assurance is a mandatory practice before the completion of such transactions. Furthermore, there is the added advantage of verifiable compliance in dealing with OEMs and downstream vendors.

Common Compliance Failures

Throughout software development, errors and limitations in processes can lead to open source compliance failures. Examples include:

  • Failure to provide a proper attribution notice, a license notice, or a copyright notice

  • Making inappropriate or misleading statements in the product documentation or advertisement material

  • Failure to provide the source code and build scripts

  • Failure to provide users with a written notice about the open source software included in the product and how to download its source code

Accidental admixture of proprietary and open source intellectual property (IP) can also arise during the software development process, leading to license compliance issues. We’ll cover these in detail in the next article.

The Risks of Non-Compliance

License compliance problems are typically less damaging than intellectual property problems. This is because IP failures may result in companies being forced to release proprietary source code under an open source license, thus losing control of their (presumably) high-value intellectual property and diminishing their capability to differentiate in the marketplace.

Other risks of license compliance and IP failures include:

  • An injunction preventing a company from shipping the product until the compliance issue has been resolved

  • Support or customer service headaches caused by version mismatches (people calling or emailing the support hotline to inquire about source code releases)

  • A requirement to distribute proprietary source code that corresponds to the binaries in question under an open source license (depending on the specific case)

  • A significant re-engineering effort to eliminate the compliance issues

  • Embarrassment with customers, distributors, third-party proprietary software suppliers, and the open source community

In the past few years, we have witnessed several cases of non-compliance that made their way to the public eye. Increasingly, the legal disposition towards non-compliance has lessons to teach open source professionals — lessons that we will explore in future articles.

Read the other articles in this series:

An Introduction to Open Source Compliance in the Enterprise

Open Compliance in the Enterprise: Why Have an Open Source Compliance Program?

Open Source Compliance in the Enterprise: Benefits and Risks

3 Common Open Source IP Compliance Failures and How to Avoid Them

4 Common Open Source License Compliance Failures and How to Avoid Them

Top Lessons For Open Source Pros From License Compliance Failures

Download the free e-book, Open Source Compliance in the Enterprise, for a complete guide to creating compliance processes and policies for your organization.

Remote Logging With Syslog, Part 3: Logfile Rules

In the first article in this series, I introduced the rsyslog tool for logging, and in the second article I provided a detailed look at the main config file. Here, I’ll cover some logfile rules and sample configurations.

I’m a Lumberjack

Now for the juicy stuff as we get our hands a bit dirtier with some logfile rules. Listing 1 shows us the rules included by default with rsyslog on my Debbie-and-Ian machine:

auth,authpriv.*                 /var/log/auth.log
*.*;auth,authpriv.none          -/var/log/syslog
#cron.*                         /var/log/cron.log
daemon.*                        -/var/log/daemon.log
kern.*                          -/var/log/kern.log
lpr.*                           -/var/log/lpr.log
mail.*                          -/var/log/mail.log
user.*                          -/var/log/user.log

Listing 1: The standard rules included on the Debian Jessie operating system.

Since I covered the syntax previously, I hope there are no nasty surprises in Listing 1. If you wanted to add lots of content to one log file in particular (the following example is from a Red Hat box), then you would separate entries like so:

*.info;mail.none;authpriv.none;cron.none                /var/log/messages

As you can see, we’re throwing a fair amount at the “messages” log file in the example above. Each entry (let’s use “mail.none” as our example) follows a “facility.priority” format.

So, in the Red Hat example above for the “mail” facility, the config “mail.none” speaks volumes, whereas to capture “all” mail logs, the config would be “mail.*” as seen in Listing 1. The “none” may merrily be replaced with any of the 0-7 error codes shown in the very first listing in the first article, such as “info”.
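As a sanity check on that “facility.priority” arithmetic, Python’s standard library computes the same numeric priority value (facility * 8 + severity) that selectors like “mail.info” match on. This is only a sketch; the localhost address and port 514 are placeholders, and nothing is actually sent:

```python
# Sketch: the numeric syslog PRI value behind a "facility.priority"
# selector, computed with the Python standard library only.
from logging.handlers import SysLogHandler

# UDP handler; no syslog server needs to be listening for this to work
handler = SysLogHandler(address=("localhost", 514))

# PRI = facility * 8 + severity
print(handler.encodePriority("mail", "info"))     # mail=2, info=6 -> 22
print(handler.encodePriority("auth", "warning"))  # auth=4, warning=4 -> 36

handler.close()
```

The same numbers are what appear in the angle-bracketed prefix (for example <22>) of a raw syslog packet on the wire.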

The docs talk about both the “facility” and the “priority” being case-insensitive and how they can also receive decimal numbers for arguments. Take note from the manual, however, that’s generally a bad idea: “but don’t do that, you have been warned.”

And, news just in (not really): the documentation is explicit about the “priority” settings “error,” “warn,” and “panic” no longer being used as they are deprecated. Note that this is not visible in other docs that I have read so it likely applies to newer versions.

A final point would be on the way that rsyslog deals with its error levels (a reminder of what we saw previously, and a heads-up that some of those are now deprecated in newer versions). The manual is typically very helpful in its ordering of “priority” levels and discusses them as displayed in Listing 2.

emerg (panic may also be used)
alert
crit
error (err may also be used)
warn (warning may also be used)
notice
info
debug

Listing 2: rsyslog error levels in order of priority; the deprecated names (“panic,” “error,” and “warn”) were struck through in the original (version v8-stable as of writing).

Onwards we cheerily go. From a “facility” perspective, you can use the options as displayed in Listing 3.

auth
authpriv
cron
daemon
kern
lpr
mail
mark
news
security (equivalent to “auth”)
syslog
user
uucp
local0 ... local7

Listing 3: Available options for the “facility” setting, abbreviated and missing “local1” to “local6”.

With your newfound knowledge, I’m sure that it goes without saying that if you see any asterisks mentioned then it simply means that “all” of the available “facility” options or all of the “priority” options are included.

Note the configurable “local” settings from zero to seven missing from the abbreviated content in Listing 3. This brings us nicely onto our next section, namely how to configure a remote rsyslog server.

Ladies And Gentlemen

I hope you’ll agree that the above configs are all relatively easy to follow. What about setting your logs live so that they are recorded onto a remote rsyslog server? If you’re sitting comfortably, here’s how to do just that.

First, let’s think about a few things. Consider how busy your logs are. If you’re simply pushing a few errors (because of an attack or a full drive) over to your remote syslog server, then your network isn’t going to be under much pressure. Imagine a very busy web server, however, and you’re going to want to analyze the hits that it receives, using something like the Kibana logging analysis tool via Elasticsearch, for example. That busy server might be pushing any number of uncompressed gigabytes of data across your LAN, and it’s important to bear in mind that these hits will occur 24/7 on a popular website.

In such a scenario, it is clearly key that your logs are all received without fail to ensure the integrity of your log analysis. The challenge is that the logs grow continually, unremittingly, and are generated every second of every day as visitors move around your website.

There’s also a pretty serious gotcha in relation to the rotation of logs (there may well be a way of circumventing it that I am yet to discover on the version, v5.8.10, of rsyslog I was using). When you’re creating compendious logs, the sizes can grow so large that you feel like you might begin to encroach on your nearest landfill site. As a result, at some point your disks start to creak at the seams (no matter how big they are) and you have to slice up your logs and preferably compress them, too.

One of the most popular tools for rotating logs is the truly excellent logrotate, of which I’m a big fan. The clever logrotate is well-built, feature-filled, and most importantly highly reliable (logs are valuable commodities after all; especially for forensic analysis following an attack or after an expensive web infrastructure investment to ensure that the bang-for-buck ratio is satisfactory).

The gotcha, which I referred to a moment ago, surfaces in a fairly simple guise. When a log is rotated, the usually reliable rsyslog stops logging at the remote server end — even though the local logs continue to grow. It looks like some people have had problems on other distributions.
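For readers who just need local rotation to play nicely with rsyslog, the stock Debian approach is to have logrotate signal rsyslog after rotation so the daemon reopens its file handles. A hedged sketch follows; the path and retention values are invented for illustration, and this was not the workaround for the remote-logging gotcha described above:

```
/var/log/www/*.log {
        daily
        rotate 14
        compress
        delaycompress
        missingok
        notifempty
        postrotate
                invoke-rc.d rsyslog rotate > /dev/null
        endscript
}
```

The postrotate hook is the important part: without it, rsyslog can keep writing to the rotated (renamed) file and the fresh logfile stays empty.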

When faced with such a pickle, from what I could see at least, there simply weren’t config options to provide a workaround (even having tried different “Polling” configs and $InputFilePersistStateInterval workarounds; these might make more sense shortly). However, and I hold my hands up, it’s quite possible that I may have missed something. In my defense, I was stuck with an older version that couldn’t be upgraded (it’s a long story) and possibly that made a difference. Before we see the solution I chose, let’s look at how to create the remote logging config.

Remember the directory which we looked at in addition to the config file? I’m referring to the /etc/rsyslog.d directory. Well, that’s where we insert our remote rsyslog server config. We dutifully create a file called something like www-syslog.chrisbinnie.tld.conf, referring to our logging server’s hostname, appending a .conf on the end, and adding a www- prefix for the service in question being logged. I’m using the hostname as an example in case your sanity is truly questionable and you want to push different application logs off to various servers. This naming convention should serve you well, if so.
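As a taster ahead of the full walkthrough, the core of such a file can be a single forwarding rule. A minimal sketch follows; the hostname matches the example filename above, while port 514 and the choice of TCP are assumptions:

```
# Forward everything to the remote collector.
# A single @ means UDP transport; a double @@ means TCP.
*.*    @@www-syslog.chrisbinnie.tld:514
```

Swap the “*.*” selector for something narrower (say, “daemon.*”) if you only want a subset of your logs leaving the box.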

Next time, we’ll look at the entirety of the /etc/rsyslog.d/www-syslog.chrisbinnie.tld.conf file and discuss some important networking considerations.

Read the other articles in this series:

Remote Logging With Syslog, Part 1: The Basics

Remote Logging With Syslog, Part 2: Main Config File

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

Advance your career in system administration! Check out the Essentials of System Administration course from The Linux Foundation.

Explain Yourself! Documentation for Better Code

Documentation is one of those areas that never feels quite finished. There are almost always areas that could be updated or improved in some way. In his talk at LinuxCon Europe, Chris Ward provided a crash course on ways to make documentation for your projects better, starting with thinking about how to answer the three W’s: 

  • Who are you writing for?
  • What are they trying to achieve?
  • Why are you writing this?

Ward points out that with documentation, you should “assume nothing.” Keep in mind that not everybody has the same programming implementation experience and history as you, so don’t assume that everyone understands the same techniques and methods that you know. What about a particular technique or dependency that you think everyone must have installed? There’s no harm in mentioning it anyway, just in case. It takes an extra few seconds, and everyone has a much nicer experience.

You should also have a solid elevator pitch, a quick, simple explanation of what your project does. Ward feels that “most ideas, no matter how complicated, can be reduced to a simple pitch that everyone can understand.” It’s fine if you lose some of the subtleties and detail, since you’re just explaining enough to allow a person to make up their mind whether they’re interested or not. They can dig into the rest of the documentation for details if they’re interested in learning more.

While API docs are great for describing how to interact with various components of the project, Ward says that API docs are not always enough. They don’t necessarily describe how someone can assemble them into something that makes sense as a component of another project. This is where a getting started tutorial on top of your API descriptions can help explain how to assemble these pieces together.

Consider Your Audience

Ward also thinks that it’s important to consider how people are getting to your documentation. Quite often people are not getting there from within the documentation itself, but from search engines that might drop them into the middle where you can’t guarantee that they’ve seen some previous steps that should have been completed in a certain order. There are techniques, like using navigation and links back to important concepts, to help with this. 

You can also do a few things that make your documentation a bit more interesting. Interactivity can help readers understand a concept, and with most documentation being read online this is actually fairly easy to accomplish, because we have access to a wealth of rich media. A bit of storytelling can also be interesting. When we’re writing technical documentation, we’re not writing fiction, but there is no harm in trying to tell a story through examples or other narrative techniques. 

Keep in mind that many people will use your documentation: marketing, search engines, managers, and more. So, Ward closes with this remark: “documentation isn’t just for developers. It’s actually read by a lot of other people, too.”

If you want to learn more about documentation, including more tips for managing, testing, and displaying your documentation, watch the full video of Ward’s talk from LinuxCon Europe.

LinuxCon Europe videos

Explain Yourself! Documentation for Better Code by Chris Ward, Crate.IO


The Classes of Container Monitoring

When discussing container monitoring, we need to talk about the word “monitoring.” There is a wide array of practices considered to be monitoring among users, developers, and sysadmins in different industries. Monitoring — in an operational, container and cloud-based context — has four main use cases:
  • Knowing when something is wrong.
  • Having the information to debug a problem.
  • Trending and reporting.
  • Plumbing.

Let’s look at each of these use cases and how each obstacle is best approached.

Read more at The New Stack

IBM Helps Developers Speed Up the Creation of Blockchain Networks

According to a recent report by Research and Markets, the blockchain technology market is skyrocketing: it estimates that the market will grow from $210.2 million in 2016 to $2,312.5 million by 2021, at a Compound Annual Growth Rate (CAGR) of 61.5 percent. Although the author acknowledges that “factors such as lack of awareness about the blockchain technology and uncertain regulatory status are the major restraints in the overall growth of the market,” the Hyperledger Project is working hard to take blockchain to the next level and help it go mainstream.

However, for this to happen, the growing blockchain ecosystem needs to hit a major milestone: convince developers that blockchain is worth their attention. As Brian Behlendorf, Executive Director of the Hyperledger Project told JAXenter a few months ago, “it’s up to the developers how soon blockchain goes mainstream.”

Read more at JAXenter

Popular CentOS Linux Server Gets a Major Refresh

CentOS doesn’t get many headlines. But it’s still the server Linux of choice for many hosting companies, datacenters, and businesses with in-house Linux experts. That’s because CentOS, which is controlled by Red Hat, is a Red Hat Enterprise Linux (RHEL) clone. As such, it reaps the benefits of RHEL’s business Linux development efforts without RHEL’s costs. So, now that CentOS 7 1611, which is based on RHEL 7.3, has arrived, I expect to see many happy companies moving to it.

If you’re considering jumping to CentOS, keep in mind that while its code-base is very close to RHEL, you don’t get Red Hat’s support. As the project web page explains, “CentOS Project does not provide any verification, certification, or software assurance with respect to security for CentOS Linux. … If certified/verified software that has guaranteed assurance is what you are looking for, then you likely do not want to use CentOS Linux.” In short, CentOS is for Linux professionals, not for companies that need high-level technical support.

Read more at ZDNet 

How to Build a Ceph Distributed Storage Cluster on CentOS 7

Ceph is a widely used open source storage platform. It provides high performance, reliability, and scalability. The Ceph free distributed storage system provides an interface for object, block, and file-level storage. Ceph is built to provide a distributed storage system without a single point of failure. In this tutorial, I will guide you through installing and building a Ceph cluster on CentOS 7.

Read the complete article at HowToForge.