
Remote Logging With Syslog, Part 3: Logfile Rules

In the first article in this series, I introduced the rsyslog tool for logging, and in the second article I provided a detailed look at the main config file. Here, I’ll cover some logfile rules and sample configurations.

I’m a Lumberjack

Now for the juicy stuff as we get our hands a bit dirtier with some logfile rules. Listing 1 shows us the rules included by default with rsyslog on my Debbie-and-Ian machine:

auth,authpriv.*                 /var/log/auth.log
*.*;auth,authpriv.none          -/var/log/syslog
#cron.*                         /var/log/cron.log
daemon.*                        -/var/log/daemon.log
kern.*                          -/var/log/kern.log
lpr.*                           -/var/log/lpr.log
mail.*                          -/var/log/mail.log
user.*                          -/var/log/user.log

Listing 1: The standard rules included on the Debian Jessie operating system.

Since I covered the syntax previously, I hope there are no nasty surprises in Listing 1. If you wanted to add lots of content to one log file in particular (the following example is from a Red Hat box), then you would separate the entries like so:

*.info;mail.none;authpriv.none;cron.none                /var/log/messages

As you can see, we’re throwing a fair amount at the “messages” log file in the example above. Each entry (let’s use “mail.none” as our example) follows a “facility.priority” format.

So, in the Red Hat example above for the “mail” facility, the config “mail.none” speaks volumes, whereas to capture “all” mail logs the config would be “mail.*”, as seen in Listing 1. The “none” may merrily be replaced with any of the 0-7 error codes shown in the very first listing in the first article, such as “info”.
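To make the “facility.priority” pairing concrete, here’s a quick sketch of the three variations side by side (the mail-info.log path is purely illustrative):

# ignore the mail facility entirely
mail.none
# capture mail messages at every priority
mail.*
# capture mail messages at priority "info" and above
mail.info                     -/var/log/mail-info.log

Bear in mind that a priority selector traditionally matches the named priority and everything more severe, which is why “mail.info” catches everything apart from debug-level chatter.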

The docs note that both the “facility” and the “priority” are case-insensitive and that they can also accept decimal numbers as arguments. The manual warns, however, that the latter is generally a bad idea: “but don’t do that, you have been warned.”

And, news just in (not really): the documentation is explicit that the “priority” keywords “error,” “warn,” and “panic” are deprecated and should no longer be used. This is not mentioned in other docs that I have read, so it likely applies only to newer versions.

A final point concerns the way that rsyslog orders its error levels (a reminder of what we saw previously, bearing in mind that some keywords are now deprecated in newer versions). The manual is typically very helpful about the order of “priority” and lists the levels as displayed in Listing 2.

emerg (panic may also be used)
alert
crit
error (err may also be used)
warn (warning may also be used)
notice
info
debug

Listing 2: rsyslog “priority” levels in order of severity (version v8-stable as of writing). The original strikethrough formatting hasn’t survived here; per the docs, “panic,” “error,” and “warn” are the deprecated keywords.

Onwards we cheerily go. From a “facility” perspective, you can use the options as displayed in Listing 3.

auth
authpriv
cron
daemon
kern
lpr
mail
mark
news
security (equivalent to “auth”)
syslog
user
uucp
local0 ... local7

Listing 3: Available options for the “facility” setting, abbreviated with “local1” to “local6” elided.

With your newfound knowledge, it should go without saying that an asterisk simply means “all”: all of the available “facility” options or all of the “priority” options.
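For instance, a hypothetical catch-all rule sits happily alongside a single-facility rule from Listing 1:

# every facility at every priority (the catchall.log path is made up)
*.*                           -/var/log/catchall.log
# one facility, all priorities (straight from Listing 1)
kern.*                        -/var/log/kern.log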

Note the configurable “local” settings, numbered zero to seven, which appear abbreviated at the foot of Listing 3. This brings us nicely onto our next section, namely how to configure a remote rsyslog server.

Ladies And Gentlemen

I hope you’ll agree that the above configs are all relatively easy to follow. What about setting your logs live so that they are recorded onto a remote rsyslog server? If you’re sitting comfortably, here’s how to do just that.

First, let’s think about a few things. Consider how busy your logs are. If you’re simply pushing a few errors (because of an attack or a full drive) over to your remote syslog server, then your network isn’t going to be under much pressure. Imagine a very busy web server, however, and you’re going to want to analyze the hits that it receives, using something like the Kibana logging analysis tool via Elasticsearch, for example. That busy server might be pushing any number of uncompressed gigabytes of data across your LAN, and it’s important to bear in mind that these hits will occur 24/7 on a popular website.

In such a scenario, it is clearly key that your logs are all received without fail to ensure the integrity of your log analysis. The challenge is that the logs grow continually, unremittingly, and are generated every second of every day as visitors move around your website.

There’s also a pretty serious gotcha in relation to the rotation of logs (there may well be a way of circumventing it that I am yet to discover; I was using rsyslog version v5.8.10). When you’re creating compendious logs, the sizes can grow so large that you feel like you might begin to encroach on your nearest landfill site. As a result, at some point your disks start to creak at the seams (no matter how big they are) and you have to slice up your logs and, preferably, compress them too.

One of the most popular tools for rotating logs is the truly excellent logrotate, of which I’m a big fan. The clever logrotate is well-built, feature-filled and, most importantly, highly reliable. Logs are valuable commodities, after all, especially for forensic analysis following an attack, or for checking that the bang-for-buck ratio of an expensive web infrastructure investment is satisfactory.
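To give you a flavor, a logrotate stanza for a busy web log might look something like the sketch below; the file path and retention values are illustrative assumptions rather than recommendations:

# rotate the (hypothetical) web access log daily, keeping two weeks of history
/var/log/www/access.log {
    daily
    rotate 14
    compress
    # leave the most recent rotation uncompressed for easy reading
    delaycompress
    missingok
    notifempty
    postrotate
        # nudge rsyslogd into reopening its file handles after rotation
        kill -HUP `cat /var/run/rsyslogd.pid 2>/dev/null` 2>/dev/null || true
    endscript
}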

The gotcha, which I referred to a moment ago, surfaces in a fairly simple guise. When a log is rotated, the usually reliable rsyslog stops logging at the remote server end, even though the local logs continue to grow. It looks as though users on other distributions have hit the same problem.

When faced with such a pickle, from what I could see at least, there simply weren’t config options that provided a workaround (even after trying different “Polling” configs and $InputFilePersistStateInterval tweaks; these will make more sense in a moment). However, and I hold my hands up here, it’s quite possible that I missed something. In my defense, I was stuck with an older version that couldn’t be upgraded (it’s a long story), and possibly that made a difference. Before we see the solution I chose, let’s look at how to create the remote logging config.
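For context, those directives belong to rsyslog’s imfile module, which tails arbitrary files and feeds them into the syslog stream. A minimal sketch in the legacy syntax of that era might look like the following; the Apache path, tag, and local3 facility are all illustrative assumptions:

# load the file-monitoring module
$ModLoad imfile
# the (hypothetical) file to follow and the tag prefixed to each line
$InputFileName /var/log/apache2/access.log
$InputFileTag apache-access:
# where imfile records how far through the file it has read
$InputFileStateFile stat-apache-access
$InputFileFacility local3
# persist the read position every 1,000 lines
$InputFilePersistStateInterval 1000
# start watching the file defined above
$InputRunFileMonitor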

Remember the directory we looked at in addition to the main config file? I’m referring to the /etc/rsyslog.d directory. Well, that’s where we insert our remote rsyslog server config. We dutifully create a file called something like www-rsyslog.chrisbinnie.tld.conf: our logging server’s hostname, prefixed with www- for the service being logged and with .conf appended to the end. I’m using the hostname in the filename in case your sanity is truly questionable and you want to push different application logs off to various servers. This naming convention should serve you well, if so.
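As a taster of what goes inside (we’ll walk through the full file next time), a single rule is enough to ship everything across; 514 is the standard syslog port, and a single @ means UDP, whereas @@ would use TCP:

# forward every facility at every priority to our logging server
*.*    @www-rsyslog.chrisbinnie.tld:514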

Next time, we’ll look at the entirety of the /etc/rsyslog.d/www-rsyslog.chrisbinnie.tld.conf file and discuss some important networking considerations.

Read the other articles in this series:

Remote Logging With Syslog, Part 1: The Basics

Remote Logging With Syslog, Part 2: Main Config File

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

Advance your career in system administration! Check out the Essentials of System Administration course from The Linux Foundation.

Explain Yourself! Documentation for Better Code

Documentation is one of those areas that never feels quite finished. There are almost always areas that could be updated or improved in some way. In his talk at LinuxCon Europe, Chris Ward provided a crash course on ways to make documentation for your projects better, starting with thinking about how to answer the three W’s: 

  • Who are you writing for?
  • What are they trying to achieve?
  • Why are you writing this?

Ward points out that with documentation, you should “assume nothing.” Keep in mind that not everybody has the same programming implementation experience and history as you, so don’t assume that everyone understands the same techniques and methods that you know. What about a particular technique or dependency that you think everyone must have installed? There’s no harm in mentioning it anyway, just in case. It takes an extra few seconds, and everyone has a much nicer experience.

You should also have a solid elevator pitch, a quick, simple explanation of what your project does. Ward feels that “most ideas, no matter how complicated, can be reduced to a simple pitch that everyone can understand.” It’s fine if you lose some of the subtleties and detail, since you’re just explaining enough to allow people to make up their minds about whether they’re interested or not. They can dig into the rest of the documentation if they’re interested in learning more.

While API docs are great for describing how to interact with the various components of a project, Ward says they are not always enough. They don’t necessarily describe how someone can assemble those components into something that makes sense as part of another project. This is where a getting-started tutorial on top of your API descriptions can help explain how the pieces fit together.

Consider Your Audience

Ward also thinks that it’s important to consider how people are getting to your documentation. Quite often people are not getting there from within the documentation itself, but from search engines that might drop them into the middle where you can’t guarantee that they’ve seen some previous steps that should have been completed in a certain order. There are techniques, like using navigation and links back to important concepts, to help with this. 

You can also do a few things that make your documentation a bit more interesting. Interactivity can help readers understand a concept, and with most documentation being read online this is actually fairly easy to accomplish, because we have access to a wealth of rich media. A bit of storytelling can also be interesting. When we’re writing technical documentation, we’re not writing fiction, but there is no harm in trying to tell a story through examples or other narrative techniques. 

Keep in mind that many people will use your documentation: marketing, search engines, managers, and more. So, Ward closes with this remark, “documentation isn’t just for developers. It’s actually read by a lot of other people, too.” 

If you want to learn more about documentation, including more tips for managing, testing, and displaying your documentation, watch the full video of Ward’s talk from LinuxCon Europe.


The Classes of Container Monitoring

When discussing container monitoring, we need to talk about the word “monitoring.” There is a wide array of practices considered to be monitoring among users, developers, and sysadmins in different industries. Monitoring, in an operational, container, and cloud-based context, has four main use cases:
  • Knowing when something is wrong.
  • Having the information to debug a problem.
  • Trending and reporting.
  • Plumbing.

Let’s look at each of these use cases and how each is best approached.

Read more at The New Stack

IBM Helps Developers Speed Up the Creation of Blockchain Networks

According to a recent report by Research and Markets, the blockchain technology market is skyrocketing: it estimates that the market will grow from $210.2 million in 2016 to $2,312.5 million by 2021, at a Compound Annual Growth Rate (CAGR) of 61.5 percent. Although the author acknowledges that “factors such as lack of awareness about the blockchain technology and uncertain regulatory status are the major restraints in the overall growth of the market,” the Hyperledger Project is working hard to take blockchain to the next level and help it go mainstream.

However, for this to happen, the growing blockchain ecosystem needs to hit a major milestone: convince developers that blockchain is worth their attention. As Brian Behlendorf, Executive Director of the Hyperledger Project told JAXenter a few months ago, “it’s up to the developers how soon blockchain goes mainstream.”

Read more at JAXenter

Popular CentOS Linux Server Gets a Major Refresh

CentOS doesn’t get many headlines. But it’s still the server Linux of choice for many hosting companies, datacenters, and businesses with in-house Linux experts. That’s because CentOS, which is controlled by Red Hat, is a Red Hat Enterprise Linux (RHEL) clone. As such, it reaps the benefits of RHEL’s business Linux development efforts without RHEL’s costs. So, now that CentOS 7 1611, which is based on RHEL 7.3, has arrived, I expect to see many happy companies moving to it.

If you’re considering jumping to CentOS, keep in mind that while its code-base is very close to RHEL, you don’t get Red Hat’s support. As the project web page explains, “CentOS Project does not provide any verification, certification, or software assurance with respect to security for CentOS Linux. … If certified/verified software that has guaranteed assurance is what you are looking for, then you likely do not want to use CentOS Linux.” In short, CentOS is for Linux professionals, not for companies that need high-level technical support.

Read more at ZDNet 

How to Build a Ceph Distributed Storage Cluster on CentOS 7

Ceph is a widely used open source storage platform. It provides high performance, reliability, and scalability. The Ceph free distributed storage system provides an interface for object, block, and file-level storage. Ceph is built to provide a distributed storage system without a single point of failure. In this tutorial, I will guide you through installing and building a Ceph cluster on CentOS 7.

Read the complete article at HowToForge.

Simplify Service Dependencies with Nodes

RPC services can be messy to implement. It can be even messier if you go with microservices. Rather than writing a monolithic piece of software that’s simple to deploy but hard to maintain, you write many small services each dedicated to a specific functionality, with a clean scope of features. They can be running in different environments, written at different times, and managed by different teams. They can be as far as a remote service across the continent, or as close as a logical component living in your own server process providing its work through an interface, synchronous or asynchronous. All said, you want to put together the work of many smaller components to process your request.

This was exactly the problem that Blender, one of the major components of the Twitter Search backend, was facing. As one of the most complicated services in Twitter, it makes more than 30 calls to different services with complex interdependencies for a typical search request, eventually reaching hundreds of machines in the data center darkness. 

Read more at the Twitter Blog

Essential Utilities: Reclaiming Disk Space

The utilities featured in this article help to simplify the process of reclaiming disk space by scanning your drive and producing interactive maps that show each file as a coloured rectangle proportional to its size. This software can also distinguish between large collections of data that are still in use and those that have not been accessed for a long time. The latter may be image files or large archives that were downloaded, unpacked, used once, and never cleaned up.

If you find yourself running low on space, these effective graphical tools will save both time and effort in reclaiming disk space.

Read the full article


Blythe Masters Talks ‘Tipping Point’ for Business Blockchain Adoption

Blythe Masters may be helping to lead an industry-wide shift in the development of blockchain tech, but that doesn’t mean her startup isn’t experiencing its own changes as well.

No longer content with building tools that have the potential to make industry more transparent and streamlined, Masters is one of a number of blockchain innovators in pursuit of talent and customers as part of an effort to help make the tech’s theoretical business applications real.

Epitomizing the shift, Masters’ heavily funded startup, Digital Asset Holdings, last week published a white paper with a subtle, but important difference: non-experts can understand it. Marking a transition in the company’s focus, the simply titled paper – “The Digital Asset Platform” – isn’t written for developers that build tools, but for executives with the power to change the direction of a financial institution.

Read more at CoinDesk