
OpenSSL After Heartbleed by Rich Salz & Tim Hudson, OpenSSL

https://www.youtube.com/watch?v=Ds1yTZcKE10&list=PLbzoR-pLrL6ovByiWK-8ALCkZoCQAK-i_

In this video from LinuxCon Europe, Rich Salz and Tim Hudson from the OpenSSL team take a deep dive into what happened with Heartbleed and the steps the team is taking to improve the project.

 

Towards Enterprise Storage Interoperability

As you may have noticed, yesterday the Linux Foundation announced that Dell EMC is joining the OpenSDS effort and contributing code in the process. This follows a long list of events in which we have demonstrated increasing levels of participation in open source communities and ecosystems. When we open sourced ViPR Controller and created the CoprHD community, we were responding to our customers. They’re the ones who feel the pain every day when devices don’t work together. They’re the ones who tell me in person about their difficulties with storage interoperability. I’m not the only one hearing this: my fellow colleagues who have also joined the OpenSDS community have experienced the same. In fact, it is so important to us that we deliver on our promises that we are inviting our customers to participate in this community. We hope to share their names with you very soon.

It used to be that getting storage vendors to collaborate or even be seen in the same place was something akin to a scene from The Godfather movies. We have often joked about getting “the 5 families” together to combine forces on something, usually with disappointing results. But the fact is that those of us in the storage industry see the same trends as everyone else. We know that our customers are moving forward in an ever-changing world, from virtualization and containers to new automation and orchestration frameworks based on Kubernetes, Mesos, Ansible and a host of other technologies that didn’t even exist 5 years ago. In this new world, our customers want multiple layers of technologies to be able to work together. They demand better, and they’re right.

With Dell EMC’s contribution of the CoprHD SouthBound SDK (SB SDK) we’re staking a claim for better interoperability. The SB SDK will help customers, developers, and everyday users take some control over their storage interoperability, with an assist from the OpenSDS community. Right now, you can create block storage drivers pretty easily, with the ability to create filesystem and object storage drivers coming next year. The reference implementation you see in the GitHub code repository is designed to work with CoprHD and ViPR Controller, but over time we hope to see other implementations in widespread use across the industry.

Join our webcast today to learn more – it will be recorded for future viewing for those who cannot make it today.

Thanks!

John Mark Walker, Product Manager, Dell EMC

 

Remote Logging With Syslog, Part 4: Log Rotation

As you’ll recall, we’ve been taking a deep dive into the rsyslog tool for logging — we presented an overview in the first article, took a look at the main config file in the second article, and examined some logfile rules and sample configurations in the third. In this final article in the series, I’ll look at the /etc/rsyslog.d/www-rsyslog.chrisbinnie.tld.conf file and discuss some important networking considerations.

Have a look at Listing 1, which shows the entirety of our /etc/rsyslog.d/www-rsyslog.chrisbinnie.tld.conf file.

$ModLoad imfile

$InputFileName /var/log/apache2/access_log

$InputFileTag apache.access:

$InputFileStateFile /tmp/apache_state_file

$InputFileSeverity info

$InputFileFacility local3

$InputRunFileMonitor

local3.* @@10.10.1.3:514

Listing 1: Contents of our remote application’s config using the trusty “local3” within the /etc/rsyslog.d/www-rsyslog.chrisbinnie.tld.conf file.
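Incidentally, if you are running a recent rsyslog release, the same file monitor can be expressed in the newer RainerScript style. The sketch below is intended as an equivalent of Listing 1 and assumes a modern rsyslog version; note that newer releases manage their own “state” files under rsyslog’s working directory, so no explicit state file line appears here:

```
module(load="imfile")

input(type="imfile"
      File="/var/log/apache2/access_log"
      Tag="apache.access:"
      Severity="info"
      Facility="local3")

local3.* @@10.10.1.3:514
```

The legacy $-style directives shown in Listing 1 still work, so either form should get you going.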

The key lines to focus on in this listing, starting from the top, are as follows.

We need to load up a module using $ModLoad. Step forward the outstanding “imfile” module, which has the magical ability to convert any normal text content into an rsyslog message. The manual says it will gratefully consume any printable characters that have a line feed (LF) at the end of each line to break up the otherwise monotonous content. Pretty clever, I’m sure you’ll agree.

The next important line is obvious. The line starting $InputFileName tells rsyslog which log file you’re interested in sending off to your remote logging server. The following line helps classify the log type with a “Tag” (which, if you have multiple servers of the same application type sending logs to one remote server, you might alter slightly per server: apache-www1: and so on). Ignore the $InputFileStateFile line for now and skim through the remaining lines.

We are collecting an “info” level of logging detail and pushing that out to the user-configurable “local3” facility and onto the IP address “10.10.1.3”. The two @ signs that you can see stand for TCP. A single @ sign would signify transfers via the UDP networking protocol.
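To make the distinction concrete, here are the two forwarding styles side by side (the destination address is our example server from Listing 1):

```
# UDP (fire and forget, single @ sign):
local3.* @10.10.1.3:514

# TCP (reliable delivery, double @@ sign):
local3.* @@10.10.1.3:514
```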

Can Anybody Hear Me?

What about our remote rsyslog server’s config?

Have a quick look at the two lines below, which will sit in your remote rsyslog server’s /etc/rsyslog.conf file. For the sake of clarity, this is the recipient rsyslog server, the one that’ll receive all the logs from multiple places. Unless you’ve lost control of your senses, these two lines are easy to follow:

$ModLoad imtcp

$InputTCPServerRun 514

Incidentally, if an IP address is omitted (it can be explicitly stated using something like $TCPServerAddress 10.10.10.10), then rsyslog will attempt to listen on all local IP addresses on the port in question.
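For instance, to restrict the listener to a single address rather than all of them, the server-side stanza might look like this (the address is, of course, just an example):

```
$ModLoad imtcp
$TCPServerAddress 10.10.10.10
$InputTCPServerRun 514
```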

You might be pleasantly surprised at how easy it is to finish off the remote rsyslog server config. We use something called “templates” in rsyslog. They are powerful, extensible, and worth reading about in more detail.

At the foot of our /etc/rsyslog.conf file, we simply add these lines:

$template ChrisBinnie, "/var/log/%HOSTNAME%/%PROGRAMNAME%.log"

*.*   ?ChrisBinnie

I’ve just used an arbitrary template reference in this case, my name, for ease of distinction. You will need to restart the service after making these changes.

We can see that we’re going to drop our logs into a directory under /var/log named after the sending host, with the application name forming the filename; the appended “.log” completes it. You can also see which facilities and priorities are being written to this log file: in this case, “all” and “all”, thanks to the two asterisks.

To TCP Or Not To TCP

You might need to spend a second thinking back over some of the remote logging considerations I mentioned, such as network congestion, if you’re pushing lots of logging data across your LAN. I mention this because (as we saw above) the clever rsyslog can use both TCP and UDP for pushing your logs around. TCP is the option best suited to most scenarios, thanks to its ability to error-check against network failures. It also doesn’t require an additional plugin to be loaded up, because it’s built into rsyslog; the reverse is true for the UDP protocol.

There are two minor connection points to note here. First, avoid using hostnames via DNS. Use an IP address for greater reliability (CNAMEs are somewhere in the middle, if you change machines around every now and again). Second, as with all things that might need debugging at some point, you should try to use explicit port numbers on both server and client ends so that there’s no ambiguity introduced. Incidentally, without a port explicitly specified, both protocols default to 514.
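As a sketch, pinning an explicit non-default port on both ends might look like the following, with 10514 simply being an arbitrary choice:

```
# Client side: forward local3 over TCP to an explicit port
local3.* @@10.10.1.3:10514

# Server side: listen on the matching explicit port
$ModLoad imtcp
$InputTCPServerRun 10514
```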

Little Dutch Boy

Note that if your systems are properly firewalled, you might need to punch a hole in the dike. Looking at the example below, clearly you need to alter the port number after --dport to your port number of choice. You can then save the setting to make it persistent with something like /sbin/service iptables save or whatever your distribution prefers.

If you need to allow access using the default TCP option and you’re using iptables, you can use this command:

# iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 514 -j ACCEPT

And, for good, old UDP, use:

# iptables -A INPUT -p udp -m state --state NEW -m udp --dport 514 -j ACCEPT

Stale Bread

Back to the dreaded gotcha. So that we’re clear: On the client or sender machine, we make a little fix to our log rotation bug/problem. Not the remote rsyslog server.

At the risk of repeating myself, there may be better ways of fixing the log rotation problem (with a version upgrade, for example); however, read on to see what fixed it for me. Remember that the issue is that after “logrotate” had run, the remote logging stopped. The solution was a simple script set to run via a cron job after “logrotate” had run in the middle of the night.

There is some method to the madness of my script. By running it, we effectively insist that after a log rotation has taken place the faithful rsyslog pays attention to its “state” file. Let’s peek into an example now:

<Obj:1:strm:1:

+iCurrFNum:2:1:1:

+pszFName:1:25:/var/log/apache2/access_log:

+iMaxFiles:2:1:0:

+bDeleteOnClose:2:1:0:

+sType:2:1:2:

+tOperationsMode:2:1:1:

+tOpenMode:2:3:384:

+iCurrOffs:2:7:1142811:

>End


Listing 2: Our rsyslog “state” file example.

I won’t profess to understand everything that’s going on in Listing 2, but I would hazard a guess that rsyslog is counting the lines it has processed, among other things, to keep itself operating correctly. Let’s promptly move on to Listing 3. This is the cron job and script solution that fixed the issue for me.


#!/bin/bash


#

# Deletes stale rsyslog “state” files, appends a timestamp to the new filename in /tmp & restarts rsyslog.

#

# After rsyslog restart the remote logging node should catch up any missed logs fairly quickly.

#


# Declare var

timestamp=$(date +%s)


# Delete all “state” files in /tmp

/bin/rm -f /tmp/apache_state_file*


# Edit rsyslog file which sends data remotely in order to show newly named “state” file

/bin/sed -i.bak 's/apache_state_file-[0-9]*/apache_state_file-'"$timestamp"'/g' /etc/rsyslog.d/www-rsyslog.chrisbinnie.tld.conf


# Apply changes to “state” file in use

/sbin/service rsyslog restart

Listing 3: Our quick-fix script to get log rotations to speak to rsyslog satisfactorily after “logrotate” has finished doing its business.

If you read the comments at the top of the script then, all going well, the script’s raison d’être should make sense. This script is run sometime around 5am, after the excellent “logrotate” has finished its sometimes lengthy business. There’s no point in running the script during or before “logrotate” has finished its run (when I tried that during testing, my remote logging still failed).
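The cron entry itself is a one-liner; the script path here is hypothetical, and 05:00 simply needs to land comfortably after your nightly logrotate run:

```
# m  h  dom mon dow  command
0  5  *   *   *    /usr/local/sbin/fix-rsyslog-state.sh
```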

The small script works like this (you probably need to run it twice initially to clean up /tmp filenames, which should be harmless if you do it manually). It deletes the old “state” file upon which rsyslog relies, works out the current time, and appends that time to the end of the “state” filename.

As a result, the /tmp directory ends up having one or two files that look like this: apache_state_file-1321009871. The script then backs up the existing remote logging config file and changes the “state” file that it references. Finally, a super-quick service restart means that the remote logging data starts up again and the other end (the remote rsyslog server) catches up with any missed logs in a whizz-pop-bang if there’s lots of data.
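To see the sed substitution in isolation, here is a harmless dry run against a scratch file rather than the real config (the filename and timestamp are purely illustrative):

```shell
# Create a scratch file mimicking the state file line in our config
tmpconf=$(mktemp)
printf '%s\n' '$InputFileStateFile /tmp/apache_state_file-1111111111' > "$tmpconf"

# Apply the same substitution the script uses, with a fixed timestamp
timestamp=1321009871
sed -i.bak 's/apache_state_file-[0-9]*/apache_state_file-'"$timestamp"'/g' "$tmpconf"

# The line now references the freshly timestamped state file
cat "$tmpconf"
# → $InputFileStateFile /tmp/apache_state_file-1321009871
```

The -i.bak flag leaves the original behind with a .bak suffix, which is the same safety net the full script relies on.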

My experience is that if you tail the recipient log after running this script (or just performing a restart), you’ll see the catchup taking place super-speedily. In case you’re wondering, I found that sometimes a service restart didn’t pick up the pieces properly but altering the “state” file it referenced was successful without fail. Your mileage might vary of course.

As mentioned, I’m a little suspicious that an old-version issue was what I needed to fix with this script; in the current version, you can see there is some additional information in the “state” file. I hope my solution gives you some pointers and helps you out if you encounter a similar scenario, however.

I Can’t Hear You

If your already exceptionally noisy log file /var/log/messages begins to fill up with your application’s logs, too, then here’s another little life-saver. The workaround is simple: append ;local3.none to the relevant destination log file line in /etc/rsyslog.conf and then restart the service:

*.*;auth,authpriv.none;local3.none  -/var/log/messages

No doubt you get the idea: this disables “local3” logging locally.

End Of Watch

I should probably have put more emphasis on how configurable and extensible rsyslog is at the start. It would be remiss not to point you at the extensive, well-maintained documentation, which is all linked to from the application’s main website. There are modules and templates galore to explore in high levels of detail. And, as well as user examples on the main site, there’s an excellent wiki with some trickier configuration examples. If you’re eager for even more, you can check out this list of modules.

Now that I’ve presented the solution to a working remote rsyslog server, even with the challenges that log rotation throws into the mix, I hope you’ll think more about your logging infrastructure. I had originally used a Syslog server, back in the day, to pick up salient events from Cisco routers. So universal are the logging formats supported by rsyslog that you can connect it to all sorts of devices, such as load balancers and other proprietary equipment. You are far from limited to picking up logs from other Unix-type devices; instead, you can collect logging event information from all over the place.

I hope you enjoy putting your new logging knowledge to creative and productive use.

Read the other articles in this series:

Remote Logging With Syslog, Part 1: The Basics

Remote Logging With Syslog, Part 2: Main Config File

Remote Logging With Syslog, Part 3: Logfile Rules

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

Advance your career in system administration! Check out the Essentials of System Administration course from The Linux Foundation.

11 Benefits to Running Your Containers on OpenStack

Enterprises today must keep up with increasing internal and external customer demand, or die trying.

For IT, this means deploying and updating applications faster, and more often than ever before to meet and ideally exceed this demand. At the same time, IT must focus its people power on strategic endeavors, rather than rote tasks.

Linux container technology is helping some organizations make this transition. As part of a team’s DevOps practices, open source containers offer great flexibility and agility alongside cloud deployment and consumption. Containerization creates the opportunity for a true hybrid cloud computing approach, by which we can manage any application running anywhere in a consistent and efficient way. And in the enterprise data center, OpenStack has become popular as a robust cloud infrastructure framework. How do Linux containers and OpenStack work together?

Read more at OpenStack

Researchers Propose Using Software-Defined Networking to Unify Cloud and Edge

A team of researchers have proposed a method to use cloud and fog, or edge, computing structures to complement one another – rather than viewing edge computing as a replacement for the cloud. Using Software-Defined Networking (SDN) to manage the interaction between cloud and edge resources, a network can remain dynamic, agile and efficient while providing a better experience for the end user.

Increased use of mobile devices has created stresses on cloud networks, which will only increase as mobile device use increases worldwide. Creating a system where cloud and edge computing resources are unified is a potential response to the challenges of overtaxed resources and unexpected latency, which can cause a degraded quality of experience for the end user.

Read more at The Stack

12 Days of Two-Factor Authentication: This Xmas, Give Yourself the Gift of Opsec

Enabling two-factor authentication—or 2FA for short—is among the easiest, most powerful steps you can take to protect your online accounts. Often, it’s as simple as a few clicks in your settings. However, different platforms sometimes call 2FA different things, making it hard to find: Facebook calls it “login approvals,” Twitter “login verification,” Bank of America “SafePass,” and Google and others “2-step verification.”

That’s why, this holiday season, EFF’s 12 Days of 2FA is here to help you navigate the world of two-factor authentication. In a series of 12 posts, we’ll show you how to enable 2FA on a range of online platforms and services.

Read more at Electronic Frontier Foundation

Dell EMC joins The Linux Foundation’s OpenSDS Project

Dell EMC is joining the OpenSDS Project, a Linux Foundation Collaborative project. To mark its commitment to the project, Dell EMC is contributing CoprHD SouthBound SDK (SB SDK) to the OpenSDS project. The SB SDK allows developers to build drivers and other tools with the assurance that they will be compatible with a wide variety of enterprise-class storage products.

The formation of the OpenSDS community is an industry response to address software-defined storage integration challenges with the goal of driving enterprise adoption of open standards. It’s supported by storage users and vendors, including Huawei, Fujitsu, HDS, Vodafone and Oregon State University.

Read more at CIO.com

D-Bus Tutorial

D-Bus is a mechanism for interprocess communication for Linux systems. D-Bus has a layered architecture. At the lowest level is the D-Bus specification, which specifies the D-Bus wire protocol for communication between two processes. The libdbus library is the low-level C API library based on the D-Bus specification. Normally, processes communicate via one of the two message bus daemons, the system bus and the session bus.

 

Interprocess communication using D-Bus

Read more at https://www.softprayog.in/programming/d-bus-tutorial

 

SDN Vendor PLUMgrid is No More; Some Assets Acquired by VMware

A VMware spokesperson told Enterprise Networking Planet that on Friday, December 16, VMware acquired certain IP assets from the company and that a number of the PLUMgrid employees have now joined VMware.

PLUMgrid founder Pere Monclus wrote in a blog post that the company “will be starting a new journey as we continue revolutionizing and transforming the networking industry to build and expand on software-defined infrastructure for private and public clouds.”

Read more at Enterprise Networking Planet

Essentials of OpenStack Administration Part 3: Existing Cloud Solutions

Start exploring Essentials of OpenStack Administration by downloading the free sample chapter today. DOWNLOAD NOW

Infrastructure providers aim to deliver excellent customer service and provide a flexible and cost-efficient infrastructure, as we learned in part one of this series.

Cloud Computing, then, is driven by a very simple motivation from the infrastructure providers’ perspective: “Do as much work as possible only once and automate it afterwards.”

In cloud environments, the provider will simply provide infrastructure that allows customers to do most of the work on their own through a simple interface. After the initial setup, the provider’s main task is to ensure that the whole setup has enough resources. If the provider runs out of resources, they will simply add more capacity. Thus another advantage of automation is that it can facilitate flexibility.

In this article, we’ll contrast what we learned in part two about conventional, un-automated infrastructure offerings with what happens in the cloud.

The Fundamental Components of Clouds

From afar, clouds are automated virtualization and storage environments. But if you look closer, you’ll start seeing a lot more details. So let’s break the cloud down into its fundamental components.

First and foremost, a cloud must be easy to use. Starting and stopping virtual machines (VMs) and commissioning online storage is easy for professionals, but not for the Average Joe! Users must be able to start VMs by pointing and clicking. So any cloud software must provide a way for users to do just that, but without the learning curve.

Installing a fresh operating system on a newly created virtual machine is a tedious process, once again, hard to achieve for non-professionals. Thus, clouds need pre-made images, so that users do not have to install operating systems on their own.

Conventional data centers are heterogeneous environments which grow to meet the organic needs of an organization. While components may have some automation tools available, there is not a consistent framework to deploy resources. Various teams such as storage, networking, backup, and security, each bring their own infrastructure, which must be integrated by hand. A cloud deployment must integrate and automate all of these components.

Customer organizations typically have their own organizational hierarchy. A cloud environment must provide an authorization scheme that is flexible enough to match that hierarchy. For instance, there may be managers who are allowed to start and stop VMs or to add administrator accounts, while interns might only be allowed to browse them.

When a user starts a new VM, presumably from the aforementioned easy-to-use interface, it must be set up automatically. When the user terminates it, the VM itself must be deleted, also automatically.

A bonus of the work to implement this particular kind of automation is that with a little more effort, usually involving the implementation of a component that knows which VMs are running on which servers, the cloud can provide automatic load-balancing.

Online storage is an important part of the cloud. As such, it must be fully automated and easy to use (like Dropbox or Google Drive).

There are a number of cloud solutions, such as Eucalyptus, OpenQRM, OpenNebula, and of course, OpenStack. Open source implementations typically share some design concepts, which we will discuss in part 4.

The building blocks of cloud solutions have been around since the mid-1960s: mainframes provided virtualized resources but tended to be proprietary, expensive, and difficult to manage. Since then, there have been midrange and PC architecture solutions, which also tend to be expensive and proprietary. These interim solutions also may not provide all of the capabilities now available through OpenStack.

The Essentials of OpenStack Administration course teaches you everything you need to know to create and manage private and public clouds with OpenStack. Download a sample chapter today!

Read the other articles in the series:

Essentials of OpenStack Administration Part 1: Cloud Fundamentals

Essentials of OpenStack Administration Part 2: The Problem With Conventional Data Centers

Essentials of OpenStack Administration Part 4: Cloud Design, Software-Defined Networking and Storage

Essentials of OpenStack Administration Part 5: OpenStack Releases and Use Cases