
Remote Logging With Syslog, Part 2: Main Config File

In the previous article, we looked at some of the basics of rsyslog — a superfast Syslog tool with some powerful features for log processing. Here, I’ll be taking a detailed look at the main config file. Let’s dive right in.

Something to note, in case it causes you issues in the future, is that the entries in our main config file (/etc/rsyslog.conf) are read from the top down, so the order in which they appear really does make a difference.

Run this command inside the /etc directory:

# ls rsys*

rsyslog.conf  rsyslog.d/

The main config file is called rsyslog.conf, whereas rsyslog.d/ is the directory where you save your other configuration files. Looking inside the rsyslog.conf file, an $IncludeConfig statement picks up any files with the .conf extension that reside in that directory, as follows:

# Include all config files in /etc/rsyslog.d/

$IncludeConfig /etc/rsyslog.d/*.conf

These user-defined configs might include remote logging to an rsyslog server.
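For instance, a minimal user-defined drop-in that forwards everything to a central rsyslog server might look like the following sketch (the filename and hostname here are placeholders; a single "@" forwards over UDP, while a double "@@" uses TCP):

```
# /etc/rsyslog.d/remote-forward.conf
# Forward all facilities at all priorities to a central rsyslog server.
# One "@" means UDP; two ("@@") means TCP.
*.*    @@central-syslog.example.com:514
```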

You can include an individual file, too, or indeed a whole directory (no matter the file extensions), as follows:

$IncludeConfig /etc/rsyslog.d/chris-binnie-config.conf

$IncludeConfig /etc/rsyslog.d/

What does our main config file look like inside, though? Listing 1 shows that the file includes lots of useful comments in addition to the heart of our rsyslog config. Bear in mind that this file defines local logging in most cases; if you're turning your local server into a recipient Syslog server, too, then this is also where you add the config to set that live. Note that after each config change you will need to restart the daemon, as we will see shortly.

#  /etc/rsyslog.conf    Configuration file for rsyslog.

#

#                       For more information see

#                       /usr/share/doc/rsyslog-doc/html/rsyslog_conf.html


#################

#### MODULES ####

#################


$ModLoad imuxsock # provides support for local system logging

$ModLoad imklog   # provides kernel logging support

#$ModLoad immark  # provides --MARK-- message capability


# provides UDP syslog reception

#$ModLoad imudp

#$UDPServerRun 514


# provides TCP syslog reception

#$ModLoad imtcp

#$InputTCPServerRun 514


###########################

#### GLOBAL DIRECTIVES ####

###########################


#

# Use traditional timestamp format.

# To enable high precision timestamps, comment out the following line.

#

$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat


#

# Set the default permissions for all log files.

#

$FileOwner root

$FileGroup adm

$FileCreateMode 0640

$DirCreateMode 0755

$Umask 0022


#

# Where to place spool and state files

#

$WorkDirectory /var/spool/rsyslog


#

# Include all config files in /etc/rsyslog.d/

#

$IncludeConfig /etc/rsyslog.d/*.conf


###############

#### RULES ####

###############


#

# First some standard log files. Log by facility.

#

auth,authpriv.*                 /var/log/auth.log

*.*;auth,authpriv.none          -/var/log/syslog

#cron.*                         /var/log/cron.log

daemon.*                        -/var/log/daemon.log

kern.*                          -/var/log/kern.log

lpr.*                           -/var/log/lpr.log

mail.*                          -/var/log/mail.log

user.*                          -/var/log/user.log


#

# Logging for the mail system. Split it up so that

# it is easy to write scripts to parse these files.

#

mail.info                       -/var/log/mail.info

mail.warn                       -/var/log/mail.warn

mail.err                        /var/log/mail.err


#

# Logging for INN news system.

#

news.crit                       /var/log/news/news.crit

news.err                        /var/log/news/news.err

news.notice                     -/var/log/news/news.notice


#

# Some "catch-all" log files.

#

*.=debug;\

       auth,authpriv.none;\

       news.none;mail.none     -/var/log/debug

*.=info;*.=notice;*.=warn;\

       auth,authpriv.none;\

       cron,daemon.none;\

       mail,news.none          -/var/log/messages


#

# Emergencies are sent to everybody logged in.

#

*.emerg                         :omusrmsg:*


#

# I like to have messages displayed on the console, but only on a virtual

# console I usually leave idle.

#

#daemon,mail.*;\

#       news.=crit;news.=err;news.=notice;\

#       *.=debug;*.=info;\

#       *.=notice;*.=warn       /dev/tty8


# The named pipe /dev/xconsole is for the `xconsole' utility.  To use it,

# you must invoke `xconsole' with the `-file' option:

#

#    $ xconsole -file /dev/xconsole [...]

#

# NOTE: adjust the list below, or you'll go crazy if you have a reasonably

#      busy site..

#

daemon.*;mail.*;\

       news.err;\

       *.=debug;*.=info;\

       *.=notice;*.=warn       |/dev/xconsole

Listing 1: The default Debian “rsyslog.conf” is shown; other flavors vary, some more than others.

A great deal of careful consideration has gone into this excellent software. It’s worth noting that there may be subtle config differences between versions, so look up the differences online to compare notes between the older and newer syntax.
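Incidentally, when it comes to restarting the daemon after a config change, it’s worth sanity-checking your syntax first. Something along these lines should work on most distributions (the exact service command varies with your init system, and the version string shown is illustrative):

```
# rsyslogd -N1
rsyslogd: version 8.4.2, config validation run (level 1), master config /etc/rsyslog.conf
rsyslogd: End of config validation run. Bye.
# service rsyslog restart
```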

We’ll start by separating our view of rsyslog’s config into three parts: any imported modules first of all, our global directives second, and finally our rules.

Logging MARK entries

I’ll begin with a look at using modules. There are several different types of modules, but here’s an overview to get you started. Simply think of Input Modules as a way of collecting information from different sources. Output Modules are essentially how the logs are written, whether to files or to a network socket. Another module type, called a Parser Module, can be used to parse the received message’s content.

As we can see from the top of the config file in Listing 1, we can load up our default modules like so:

$ModLoad imuxsock # provides support for local system logging

$ModLoad imklog     # provides kernel logging support

The comments are hopefully self-explanatory. The first module allows logs to be written to our local disks via the local system log socket and, if I’m reading the docs correctly, the second module reads kernel messages, taking over from “dmesg” once a system boot has completed and kernel logging has been handed to the Syslog daemon.
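Incidentally, because the imuxsock module is what receives messages submitted via the local log socket, you can quickly prove that local logging is alive with the logger tool; the hostname and output line below are purely illustrative:

```
$ logger -p user.info "Hello from logger"
$ tail -n 1 /var/log/user.log
Dec  6 10:15:01 myhost chris: Hello from logger
```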

The following commented-out line is for the “immark” module, which can be very useful in some circumstances:

#$ModLoad immark  # provides --MARK-- message capability

For example, I’ve used it frequently when I’m filling the /var/log/messages file up with several entries a second whilst testing something. In addition to using the functionality in scripts, I like to define a Bash alias in the file ~/.bashrc so that I can type it super quickly during my testing:

alias mes='/usr/bin/logger xxxxxxxxx'

If you add that alias then you can simply type “mes” at the command prompt, as your user, to add a separator in the “messages” file. If you haven’t altered your .bashrc file in the past, then after changing it you need to do this to refresh it.

$ cd ~

$ . .bashrc

I’m not sure, but I suspect that the --MARK-- separators, alluded to in the comment after the module’s config entry, were first introduced to add a line to a log file to show you that Syslog was still running if there had been no log entries for a little while.

You could add the markers to your logs every 20 minutes, for example, if your logs are quiet, using this entry:

$MarkMessagePeriod      1200

I imagine, too, that it might be useful functionality if you have rotated your logs in the middle of the night and then need to see that Syslog was still paying attention to the task at hand shortly after that point in time.
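Putting the pieces together, then, switching the markers on is simply a case of uncommenting the module line and adding your chosen interval in seconds:

```
$ModLoad immark          # provides --MARK-- message capability
$MarkMessagePeriod 1200  # write a marker every 20 minutes of quiet
```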

We can see the other modules are commented out. I’ll briefly mention modules later, but let’s continue on through our config file in the meantime.

The Global Directives section in Listing 1 is not too alien, I hope. Take, for example, the top entry:

# Use traditional timestamp format

$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat

Directives start with a dollar sign, like a variable, and then have an associated property. From that entry, you can see that we’re still wearing 1970s flared trousers and opting to go traditional with the format of our logging timestamps.
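To see what you’re choosing between, here is roughly the same event rendered in each format (the hostname and message are illustrative); as the file’s own comment notes, commenting the directive out switches rsyslog to its default high-precision, RFC 3339-style timestamps:

```
# Traditional timestamp format:
Dec  6 10:15:01 myhost anacron[1234]: Job `cron.daily' terminated

# High-precision (default) format:
2016-12-06T10:15:01.123456+00:00 myhost anacron[1234]: Job `cron.daily' terminated
```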

The “permissions” entries there probably aren’t too tricky to translate either:

$FileOwner root

$FileGroup adm

$FileCreateMode 0640

$DirCreateMode 0755

$Umask 0022

When rsyslog runs, we can alter who owns what and which file-creation mask is used. The working directory and “$IncludeConfig” entries are hopefully easy enough to follow, so let’s keep moving forward. Next time, we’ll get our hands a bit dirtier with some logfile rules and then finish up with some networking considerations.

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

Advance your career in system administration! Check out the Essentials of System Administration course from The Linux Foundation.

Paradigm Shift in CI at PayPal with Docker and Mesos by Manivannan Selvaraj, PayPal

This talk will be about how PayPal adopted the open source tools Docker, Mesos, Jenkins and Aurora to build a scalable, highly available CI solution for PayPal, which resulted in a paradigm shift compared to the conventional VM-based model.

What’s New with Xen Project Hypervisor 4.8?

I’m pleased to announce the release of the Xen Project Hypervisor 4.8. As always, we focused on improving code quality, security hardening as well as enabling new features. One area of interest and particular focus is new feature support for ARM servers. Over the last few months, we’ve seen a surge of patches from various ARM vendors that have collaborated on a wide range of updates from new drivers to architecture to security.

We are also pleased to announce that Julien Grall will be the next release manager for Xen Project Hypervisor 4.9. Julien has been an active developer for the past few years, making significant code contributions to advance Xen on ARM. He is a software virtualization engineer at ARM and co-maintainer of Xen on ARM with Stefano Stabellini.

Read more at Xen Project Blog

IBM Bluemix Wants to Take the Drudgery out of DevOps

IBM’s Bluemix Continuous Delivery offers reusable workflows for devops, with familiar services like GitHub and Slack as part of the plan.

With Bluemix, IBM set out to create a cloud environment rich with tools that developers could then harness to their benefit. Next step for IBM: Make it easy to string together and use those tools in common workflows, without reinventing the wheel with each new project.

That’s the idea behind IBM Bluemix Continuous Delivery, which provides devops teams with end-to-end, preconfigured toolchains for many common tasks, as well as the ability to create new toolchains for future development needs. 

Read more at InfoWorld

High School’s Help Desk Teaches Open Source IT Skills

The following is an adapted excerpt from chapter six of The Open Schoolhouse: Building a Technology Program to Transform Learning and Empower Students, a new book written by Charlie Reisinger, Technology Director for Penn Manor School District in Lancaster County, Pennsylvania. In the book, Reisinger recounts more than 16 years of Linux and open source education success stories.

Penn Manor schools saved over a million dollars by trading proprietary software for open source counterparts with its student laptop program. The budget is only part of the story. As Linux moved out of the server room and onto thousands of student laptops, a new learning community emerged.

By August 2013, Penn Manor High School’s official Student Help Desk program was online. It was an independent study course. There were no course prerequisites—everyone with the curiosity and desire to learn was welcome. There was no formal curriculum. Students would learn alongside the district technology team, and together we would figure out what we needed when we needed it. There were no exams; this was a results-only learning environment, not an academic exercise.

Five seniors represented the core help desk. Andrew Lobos, Ben Thomas, and Nick Joniec were there, as well as their mutual friend, Collin Enders. The four friends formed the nucleus of the inaugural help desk team and served as mentors to incoming students new to technology support. Benjamin Moore, a student with little IT background beyond the motivation to learn more about computers, was the fifth apprentice. Ben Moore’s first love was theater production, but he decided on a whim that the Student Help Desk would be interesting. He thought computers were cool and wanted to learn to code.

Between the five students’ schedules, I had help desk coverage from the start of school until the ending bell. Apprentices reported to the help desk room just like they would to any other course on their schedule. All similarity to a traditional math or science class ended once they entered the room. The help desk was a serious operation, and our first deadline was looming. In less than two weeks, a pilot group of 90 high school students would receive laptops running Linux and open source software exclusively. We needed the apprentices to help us prepare for the pilot program, and for the full 1700 student one-to-one laptop program launch in January 2014.

The help desk classroom, Room 358, was crowded with a wagonload of sinuous network cables, power adapters, carry cases, mice, USB drives, and towers of boxes filled with demo laptops waiting patiently for the chance to greet their new student owners.

To better supervise the students’ activities, Penn Manor Technician, Alex Lagunas, relocated his desk from the high school technology office to the Student Help Desk room. With no physical separation between the student and the staff spaces, the apprentices couldn’t evade oversight. But Alex wasn’t there to bark orders to minions. His role was that of a team leader and co-worker. He directed day-to-day support activities and mentored the young team on everything from repairs to programming tricks. Together, as teacher and apprentice, the entire affair resembled an 18th-century French atelier—except with less painting, and more programming.

It would soon become difficult to discern the line between staff technician and student apprentice. Support roles overlapped and visitors received equal assistance from the apprentices and IT staff. As this community evolved, the student apprentices became even more passionate and energetic. They loved the work and felt a deep commitment to the mission and purpose of the laptop project. As the weeks progressed, any lingering fears that students couldn’t make this happen evaporated.

The student team was tight-knit, and remarkably good at self-organizing. Each student apprentice found an individual role. Collin and Nick were quick to tackle logistics and organizational tasks. Andrew and Ben Thomas preferred writing code. And the core quartet took it upon themselves to welcome and help Ben Moore.

Project-based learning? Check. Everything the student apprentices created was part of an authentic technology project. Challenge-based learning? Absolutely. We had four months to do something Penn Manor High School had never done. How about 20 percent time? Certainly. Innovation was encouraged 100 percent of the time. Hour of code? Plural. Our apprentices were about to log hundreds of hours of programming time.

We had created a paradise for student hackers.

During the first year of the high school one-to-one Linux laptop program, the student apprentices created three important software programs. The first was the Fast Linux Deployment Toolkit (FLDT), a software imaging system Andrew created after he and fellow apprentices grew frustrated by limitations with FOG. The second project was a student laptop and inventory tracking and ticket system. The third, a URL-sharing program called PaperPlane, was born from a staff idea that turned into a student challenge.

Other projects were less practical and much more playful. Collin’s favorite funny memory about the help desk was a mischievous prank—“trolling” Ben. “I worked with Andrew to secretly install a program on his laptop. Once every hour, a Cron job triggered the machine to speak out loud the phrase ‘I’m watching you!’ He had no idea what was going on. That was fun to watch.”

Thinking about Ben Thomas’ laptop inexplicably blurting “I’m watching you!” in the middle of a quiet class still makes me break from the role of serious school official and laugh out loud like a schoolboy. The whimsical caper invokes the genuine spirit of hacking and reminds me that schools shouldn’t be glum factories of curriculum and testing. When you let students go, when you trust them, you change their world.

The Open Schoolhouse is available on Amazon.com.

A Lone Tester at a DevOps Conference

I recently had the chance to go to Velocity Conf in Amsterdam, which one might describe as a DevOps conference. I love going to conferences of all types; restricting oneself to discipline-specific events is counterintuitive to me, as no discipline involved in building and supporting something is truly isolated. Even if some organisations try to keep it that way, reality barges its way in. Gotta speak to each other some day.


So, I was in an awesome city, anticipating an enlightening few days. Velocity is big. I sometimes forget how big business some conferences are; most testing events I attend are usually in the hundreds of attendees. With big conferences come the trappings of big business. For my part, I swapped product and testability ideas with Datadog, PagerDuty and others for swag. My going rate for consultancy appears to be t-shirts, stickers, and hats.

Read more at Testing is Believing

There’s a New DDoS Army, and It Could Soon Rival Record-Setting Mirai

For almost three months, Internet-of-things botnets built by software called Mirai have been a driving force behind a new breed of attacks so powerful they threaten the Internet as we know it. Now, a new botnet is emerging that could soon magnify or even rival that threat.

The as-yet unnamed botnet was first detected on November 23, the day before the US Thanksgiving holiday. For exactly 8.5 hours, it delivered a non-stop stream of junk traffic to undisclosed targets, according to this post published Friday by content delivery network CloudFlare. Every day for the next six days at roughly the same time, the same network pumped out an almost identical barrage, aimed at a small number of targets mostly on the US West Coast. More recently, the attacks have run for 24 hours at a time.

Read more at Ars Technica

Signs You’re Doing DevOps Right

Your organization has been practicing DevOps for some time. These seven practices will help you determine if you’ve been doing so in the right way.

We have been talking a lot about DevOps and the cultural shift that it focuses on. Let’s assume that you are practicing DevOps in your organization. These seven signs should give you an idea about whether what you are doing is right.

1. You Deploy Frequently and Automatically With Rapid Release Cycles
The software development process has come a long way from the traditional SDLC model and has been evolving rapidly. Every software-powered organization in the world is aiming to deliver software and features faster to its audience, and considering the competition, this is a must. Hence, deploying frequently with rapid release cycles is one point to note here to be Agile.

2. You Have Tools and Platforms for CI and CD
DevOps is not any set of tools; it’s a cultural shift that embraces agile methodologies. However, to practice DevOps, you need a certain set of tools: Continuous Integration tools, deployment tools, testing tools, version controlling tools, etc.

3. You Leverage Containers and Have Microservices Architecture
Microservices make things faster, since a large monolithic piece of software is fragmented into several pieces, making it less complex and avoiding knock-on failures if any one microservice goes down. Containerization, as with Docker containers, is where microservices are packaged with their own environments and supporting factors.

4. You Have Operations, Sys Admins, and Developers Working Together
The objective of DevOps is to remove the confusion and collision between Dev and Ops. DevOps should make sure that the Operations and Developer lines of activity flow smoothly, without any friction.

5. You Have a Continuous Feedback Loop System
Since your developers are committing code frequently and rapidly, you need to have a feedback system in place to know what went wrong. Issues should be communicated instantly through notifications, with tools like VictorOps, PagerDuty, etc.
This feedback system will help address issues as they occur and mitigate them as soon as possible.

6. You Have Constant Communication Between Teams
Constant communication is one of the best qualities of an amazing team. Clear and constant communication brings visibility and will let you know who is doing what and what’s going on between the teams in an organization. Slack is one tool that’s taking this very seriously by enabling teams to collaborate and constantly communicate with each other. 

7. You Have a Perfect Metrics Table to See if the Results Are Visible
It’s not just about setting up a culture and making people follow it. You need to have proper metrics in place to see if you are making progress in the right direction. Have a proper set of goals, and metrics attached to each goal, to measure the results. If things seem to diverge, it’s time to make changes again. Know what you are doing and have supporting results to prove it.

To practice DevOps, you also need some supporting tools, and there are many wonderful ones to help you, including:

> Docker
> Git (GitHub)
> AWS
> JIRA
> Ansible
> Slack
> Shippable
> New Relic
> Splunk
> Chef, and many more

Conclusion:
Practicing DevOps is the need of the hour to be competitive in the software industry. It will surely boost productivity and improve your learning curve, and it removes repetitive and mundane tasks in an organization, getting your product deliverables to market faster, with a healthy feedback loop that can help you understand your mistakes and correct them as early as possible.

SQL Server on Linux Signals Microsoft’s Changing Development Landscape

As 2016 comes to a close, Microsoft is keeping SQL Server users busy with fresh announcements and new releases. SQL Server on Linux, now in public preview, brings together Microsoft and Linux in a way that would have been unimaginable until recently. SearchSQLServer talked with SQL Server expert Joey D’Antoni, principal consultant at Denny Cherry & Associates Consulting, about what these big announcements say about Microsoft and what to expect from SQL Server going forward.

Microsoft plans to add Enterprise Edition features to Standard Edition in SQL Server 2016 Service Pack 1. What do you think motivated this decision?
Joey D’Antoni: Mainly … the software vendors — I think they wanted to drive adoption on some of the features that make SQL Server different from Postgres or MySQL, and where you’re not just using cable. So, I think by encouraging software vendors to take advantage of these features, they better hook in with them.

Read more at TechTarget

The Linux Foundation Seeks Technical and Business Speakers for Open Networking Summit 2017

Help shape the future of open networking! The Linux Foundation is now seeking business and technical leaders to speak at Open Networking Summit 2017.

On April 3-6 in Santa Clara, CA, ONS will gather more than 2,000 executives, developers and network architects to discuss innovations in networking and orchestration. It is the only event that brings together the business and technical leaders across carriers and cloud service providers, vendors, start-ups and investors, and open source and open standards projects in software-defined networking (SDN) and network functions virtualization (NFV).

Submit a talk to speak in one of our five new tracks for 2017 and share your vision and expertise. The deadline for submissions is Jan. 21, 2017.

The theme this year is “Open Networking: Harmonize, Harness and Consume.” Tracks and suggested topics include:

General Interest Track

  • State of the Union on Open Source Projects (Technical updates and latest roadmaps)

  • Programmable Open Hardware including Silicon & White Boxes + Open Forwarding Innovations/Interfaces

  • Security in a Software Defined World

Enterprise DevOps/Technical Track

  • Software Defined Data Center Learnings including networking interactions with Software Defined Storage

  • Cloud Networking, End to End Solution Stacks – Hypervisor Based

  • Container Networking

Enterprise Business/Architecture Track

  • ROI on Use Cases

  • Automation – network and beyond

  • Analytics

  • NFV for Enterprise (vPE)

Carriers DevOps/Technical Track

  • NFV use Cases – VNFs

  • Scale & Performance of VNFs

  • Next Gen Orchestration OSS/BSS & FCAPS models

Carriers Business/Architecture Track

  • SDN/NFV learnings

  • ROI on Use Cases

  • Architecture Learnings from Cloud

See the full list of potential topics on the ONS website.

Not interested in speaking but want to attend? Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the attendee registration price. Register by February 19 to save over $850.