
How to Use IPv6 on Apache?

Nowadays IPv6 is increasingly common on web servers, and it’s worth enabling it so your sites are reachable from IPv6 networks. Here is a really quick guide to getting your Apache web servers ready for IPv6.

I have installed a fresh CentOS and a fresh Apache on my test server, without any control panel. If you are using a control panel or another operating system, the procedure should be much the same; however, if you have any problems during your configuration, you can ask me in the comments.

Let’s start with the Apache configuration file. Open “/etc/httpd/conf/httpd.conf” with your text editor on the server. I am using nano….
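As a rough sketch of what the relevant directives can look like (the addresses below are documentation placeholders, not values from the original article), note that IPv6 addresses in Apache’s Listen directive must be wrapped in square brackets:

```apache
# Listen on port 80 on all addresses, IPv4 and IPv6 (the usual dual-stack default)
Listen 80

# Or bind explicitly to one IPv4 and one IPv6 address
# (192.0.2.10 and 2001:db8::1 are placeholder/documentation addresses)
Listen 192.0.2.10:80
Listen [2001:db8::1]:80
```

After editing, check the syntax with apachectl configtest before restarting the service.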

Read more at Huge Server

3 Highly Effective Strategies for Managing Test Data

Think back to the first automated test you wrote. If you’re like most testing professionals, you probably used an existing user and password and then wrote verification points using data already in the system. Then you ran the test. If it passed, it was because the data in the system was the same as when you wrote the test. And if it didn’t pass, it was probably because the data had changed.

Most new automated testers experience this. But they quickly learn that they can’t rely on specific data residing in the system when the test script executes. Test data must be set up in the system so that tests run credibly, and with accurate reporting. 

Read more at TechBeacon

Tuning OpenStack Hardware for the Enterprise

As a cloud management framework, OpenStack has thus far been limited to the province of telecommunications carriers and providers of Web-scale services that have plenty of engineering talent to throw at managing one of the most ambitious open source projects there is. In contrast, adoption of OpenStack in enterprise IT environments has been much more limited.

But that may change as more advanced networking technologies that are optimized for processor-intensive virtualization come to market. Some of the technologies we have covered here include single root input/output virtualization (SR-IOV) and the Data Plane Development Kit (DPDK). Another is the use of field-programmable gate arrays (FPGAs) in network interface cards to make them smarter about offloading virtualized workloads.

Read more at SDx Central

Merry Linux to You!

Get ready to start caroling around the office with these Linux-centric lyrics to popular Christmas carols.

Running Merrily on Open Source

To the tune of: Chestnuts Roasting on an Open Fire

Running merrily on open source
With users happy as can be
We’re using Linux and getting lots done
And happy everything is free…

Read more at ComputerWorld

3 Useful GUI and Terminal Based Linux Disk Scanning Tools

There are mainly two reasons for scanning a computer hard disk: one is to examine it for filesystem inconsistencies or errors that can result from persistent system crashes, improper closure of critical system software, and, more significantly, destructive programs (such as malware, viruses, etc.).

The other is to analyze its physical condition, where we can check a hard disk for bad sectors resulting from physical damage to the disk surface or failed memory transistors.
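As a minimal terminal-side sketch of both kinds of scan (the device name below is hypothetical; always point these tools at an unmounted partition):

```shell
# Hypothetical target device; change this to your own (unmounted!) partition.
DEV=/dev/sdb1

if [ -b "$DEV" ]; then
    # Filesystem consistency check in report-only mode (-n changes nothing)
    fsck -n "$DEV"
    # Read-only scan for bad sectors, with progress (-s) and verbose (-v) output
    badblocks -sv "$DEV"
else
    echo "Block device $DEV not found; adjust DEV before running"
fi
```

Never run a read-write fsck against a mounted filesystem; the dry-run flags above keep both commands non-destructive.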

Read more at Tecmint

Container Security: Your Questions Answered

To help you better understand containers, container security, and the role they can play in your enterprise, The Linux Foundation recently produced a free webinar hosted by John Kinsella, Founder and CTO of Layered Insight. Kinsella covered several topics, including container orchestration, the security advantages and disadvantages of containers and microservices, and some common security concerns, such as image and host security, vulnerability management, and container isolation.

In case you missed the webinar, you can still watch it online. In this article, Kinsella answers some of the follow-up questions we received.

John Kinsella, Founder and CTO of Layered Insight
Question 1: If security is so important, why are some organizations moving to containers before having a security story in place?

Kinsella: Some groups are used to adopting technology earlier. In some cases, the application is low-risk and security isn’t a concern. Other organizations have strong information security practices and are comfortable evaluating the new tech, determining risks, and establishing controls on how to mitigate those risks.

In plain talk, they know their applications well enough that they understand what is sensitive. They studied the container environment to learn what risks an attacker might be able to leverage, and then they avoided those risks either through configuration, writing custom tools, or finding vendors to help them with the problem. Basically, they had that “security story” already.

Question 2: Are containers (whether Docker, LXC, or rkt) really ready for production today? If you had the choice, would you run all production now on containers or wait 12-18 months?

Kinsella: I personally know of companies who have been running Docker in production for over two years! Other container formats that have been around longer have also been used in production for many years. I think the container technology itself is stable. If I were adopting containers today, my concern would be around security, storage, and orchestration of containers. There’s a big difference between running Docker containers on a laptop versus running a containerized application in production. So, it comes down to an organization’s appetite for risk and early adoption. I’m sure there are companies out there still not using virtual machines…

We’re running containers in production, but not every company (definitely not every startup!) has people with 20 years of information security experience.

Question 3: We currently have five applications running across two Amazon availability zones, purely in EC2 instances. How should we go about moving those to containers?

Kinsella: The first step would be to consider if the applications should be “containerized.” Usually people consider the top benefits of containers to be quick deployment of new features into production, easy portability of applications between data centers/providers, and quick scalability of an application or microservice. If one or more of those seems beneficial to your application, then next would be to consider security. If the application processes highly sensitive information or your organization has a very low appetite for risk, it might be best to wait a while longer while early adopters forge ahead and learn the best ways to use the technology. What I’d suggest for the next 6 months is to have your developers work with containers in development and staging so they can start to get a feel for the technology while the organization builds out policies and procedures for using containers safely in production.

Early adopter? Then let’s get going! There are two views on how to adopt containers, depending on how swashbuckling you are: Some folks say start with the easiest components to move to containers and learn as you migrate components over. The alternative is to figure out what would be most difficult to move, plan out that migration in detail, and then take the learnings from that work to make all the other migrations easier. The latter is probably the best way but requires a larger investment of effort up front.

Question 4: What do you mean by anomaly detection for containers?

Kinsella: “Anomaly detection” is a phrase we throw around in the information security industry to refer to technology that has an expectation of what an application (or server) should be doing, and then responds somehow (alerting or taking action) when it determines something is amiss. When this is done at a network or OS level, there’s so many things happening simultaneously that it can be difficult to accurately determine what is legitimate versus malicious, resulting in what are called “false positives.”

One “best practice” for container computing is to run a single process within the container. From a security point of view, this is neat because the signal-to-noise ratio is much better, from an anomaly detection point of view. What type of anomalies are being monitored for? It could be network or file related, or maybe even what actions or OS calls the process is attempting to execute. We can focus specifically on what each container should be doing and keep it within much more narrow boundary for what we consider anomalous for its behavior.

Question 5: How could one go and set up containers in a home lab? Any tips? Would like to have a simpler answer for some of my colleagues. I’m fairly new to it myself so I can’t give a simple answer.

Kinsella: Step one: Make sure your lab machines are running a patched, modern OS (released within the last 12 months).

Step two: Head over to http://training.docker.com/self-paced-training and follow their self-paced training. You’ll be running containers within the hour! I’m sure lxd, rkt, etc. have some form of training, but so far Docker has done the best job of making this technology easy for new users to adopt.
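To go along with Kinsella’s advice, a first home-lab smoke test can be as small as running Docker’s stock hello-world image (this assumes the docker CLI is already installed and on your PATH, and skips gracefully if it isn’t):

```shell
# Smoke-test a fresh Docker install with the stock hello-world image
# (assumes the docker CLI is installed; prints a hint otherwise)
if command -v docker >/dev/null 2>&1; then
    docker run --rm hello-world
else
    echo "docker not installed yet; follow the self-paced training first"
fi
```

If the hello-world container prints its greeting, the daemon, image pulls, and container execution are all working, and you can move on to more interesting images.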

Question 6: You mentioned using Alpine Linux. How does musl compare with glibc?

Kinsella: musl is pretty cool! I’ve glanced over the source — it’s so much cleaner than glibc! As a modern rewrite, it probably doesn’t have 100 percent compatibility with glibc, which has support for many CPU architectures and operating systems. I haven’t run into any troubles with it yet, personally, but my use is still minimal. Definitely looking to change that!

Question 7: Are you familiar with OpenVZ? If so, what would you think could be the biggest concern while running an environment with multiple nodes with hundreds of containers?

Kinsella: Definitely — OpenVZ has been around for quite a while. Historically, the question was “Which is more secure — Xen/KVM or OpenVZ?” and the answer was always Xen/KVM, as they provide each guest VM with hardware-virtualized resources. That said, there have been very few security vulnerabilities discovered in OpenVZ over its lifetime.

Compared to other forms of containers, I’d put OpenVZ at a similar level of risk. As it’s older, its codebase should be more mature, with fewer bugs. On the other hand, since Docker is so popular, more people will be trying to compromise it, so the chance of finding a vulnerability is higher. A little bit of security-through-obscurity, there. In general, though, I’d go through a similar process of understanding the technology and what is exposed and susceptible to compromise. For both, the most common vector will probably be compromising an app in a container, then trying to burrow through the “walls” of the container. What that means is you’re really trying to defend against local kernel-level exploits: keep up to date and be aware of new vulnerability announcements for the software that you use.

John Kinsella is the Founder and CTO of Layered Insight, a container security startup based in San Francisco, California. His nearly 20-year background includes security and network consulting, software development, and datacenter operations. John is on the board of directors for the Silicon Valley chapter of the Cloud Security Alliance, and has long been active in open source projects, including recently as a contributor and a member of the PMC and security team for Apache CloudStack.

Check out all the upcoming webinars from The Linux Foundation.

OpenSSL after Heartbleed

Despite being a library that most people outside of the technology industry have never heard of, the Heartbleed bug in OpenSSL caught the attention of the mainstream press when it was uncovered in April 2014 because so many websites were vulnerable to theft of sensitive server and user data. At LinuxCon Europe, Rich Salz and Tim Hudson from the OpenSSL team did a deep dive into what happened with Heartbleed and the steps the OpenSSL team are taking to improve the project.

The bug itself was a simple one: the code didn’t check a buffer length, Hudson said. The bug had gone unnoticed in OpenSSL for three years by the team member who checked in the code, the other team members, external security reviewers, and users, even though the commit was public and could be viewed by anyone. Hudson pointed out that “one thing that was really important is all of the existing tools that you run for static code analysis, none of them reported Heartbleed.”

Salz talked about how overworked and overcommitted the lead OpenSSL developers were, which was one of the contributing factors to this issue: at the time of Heartbleed, there were basically two developers, barely making enough money to live. OpenSSL was an open source project that got barely $2,000 a year to keep going, so the developers had to do consulting work to make money, which made it difficult for them to find the time to address bugs and patches coming in from other people.

Hudson described Heartbleed as “a wake up to the industry and those commercial companies that were effectively getting a free ride on OpenSSL,” which led companies and organizations to realize that they needed to do something about it, instead of relying on just a couple of people who are too poorly funded to maintain such a critical piece of infrastructure. 

As a result, The Linux Foundation set up the Core Infrastructure Initiative (CII) and effectively got a group of a dozen or so commercial companies together to be able to offer funding for not only OpenSSL, but other critical projects that are underresourced. One of the goals was to get more infrastructure, more support, and more ability to address the issues so that better processes can be followed.

As of December 2014, six months after Heartbleed, there were 15 project team members: two people fully funded by the Core Infrastructure Initiative to work on OpenSSL as their day job, and two more funded to work full-time on the donations that came in from concerned users, Hudson said.

Today, they have policies for security fixes and a release schedule with alpha and beta releases for people to test, which has worked reasonably well according to Salz. They have a code of conduct, and mailing list traffic has increased and become more useful. Salz says that “there are other members of the community now contributing answers to questions; members of the team are responding more quickly and rapidly; and we seem to be more engaged in having a more virtuous cycle of feedback.” 

Downloading releases, submitting or fixing bugs, and answering questions on the mailing list are great ways to get involved in the project now.

Hudson described a couple of lessons learned. You can’t rely on any one individual, no matter how good they are, to not make mistakes. Also, people really need to take time to understand the code in detail when doing code reviews, and everything going into the project needs to be scrutinized.

For more lessons learned and other details about the OpenSSL project both before and after Heartbleed, watch the video below.

https://www.youtube.com/watch?v=Ds1yTZcKE10?list=PLbzoR-pLrL6ovByiWK-8ALCkZoCQAK-i_

Interested in speaking at Open Source Summit North America (formerly LinuxCon) on September 11 – 13? Submit your proposal by May 6, 2017. Submit now>>

Not interested in speaking but want to attend? Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the all-access attendee registration price. Register now to save over $300!

 


 

Towards Enterprise Storage Interoperability

As you may have noticed, yesterday the Linux Foundation announced that Dell EMC is joining the OpenSDS effort and contributing code in the process. This follows a long list of events in which we have demonstrated increasing levels of participation in open source communities and ecosystems. When we open sourced ViPR Controller and created the CoprHD community, we were responding to our customers. They’re the ones who feel the pain every day when devices don’t work together. They’re the ones who tell me in person about their difficulties with storage interoperability. I’m not the only one hearing this; my fellow colleagues who have also joined the OpenSDS community have experienced the same. In fact, it is so important to us that we deliver on our promises that we are inviting our customers to participate in this community. We hope to share with you which ones very soon.

It used to be that getting storage vendors to collaborate, or even be seen in the same place, was something akin to a scene from The Godfather movies. Often we have joked about getting “the 5 families” together to combine forces on something, often with disappointing results. But the fact is that those of us in the storage industry see the same trends as everyone else. We know that our customers are moving forward in an ever-changing world, from virtualization and containers to new automation and orchestration frameworks based on Kubernetes, Mesos, Ansible, and a host of other technologies that didn’t even exist 5 years ago. In this new world, our customers want multiple layers of technologies to be able to work together. They demand better, and they’re right.

With Dell EMC’s contribution of the CoprHD SouthBound SDK (SB SDK), we’re staking a claim for better interoperability. The SB SDK will help customers, developers, and everyday users take some control over their storage interoperability, with an assist from the OpenSDS community. Right now, you can create block storage drivers pretty easily, with the ability to create filesystem and object storage drivers coming later next year. The reference implementation you see in the GitHub code repository is designed to work with CoprHD and ViPR Controller, but over time we hope to see other implementations in widespread use across the industry.

Join our webcast today to learn more – it will be recorded for future viewing for those who cannot make it today.

Thanks!

John Mark Walker, Product Manager, Dell EMC

 

Remote Logging With Syslog, Part 4: Log Rotation

As you’ll recall, we’ve been taking a deep dive into the rsyslog tool for logging: we presented an overview in the first article, took a look at the main config file in the second, and examined some logfile rules and sample configurations in the third. In this final article in the series, I’ll look at the /etc/rsyslog.d/www-rsyslog.chrisbinnie.tld.conf file and discuss some important networking considerations.

Have a look at Listing 1, which shows the entirety of our /etc/rsyslog.d/www-rsyslog.chrisbinnie.tld.conf file.

$ModLoad imfile

$InputFileName /var/log/apache2/access_log

$InputFileTag apache.access:

$InputFileStateFile /tmp/apache_state_file

$InputFileSeverity info

$InputFileFacility local3

$InputRunFileMonitor

local3.* @@10.10.1.3:514

Listing 1: Contents of our remote application’s config using the trusty “local3” within the /etc/rsyslog.d/www-rsyslog.chrisbinnie.tld.conf file.

The key lines to focus on in this listing, starting from the top, are as follows.

We need to load up a module using $ModLoad. Step forward the outstanding “imfile” module, which has the magical ability to convert any normal text content into a rsyslog message. The manual says it will gratefully consume any printable characters that have a line feed (LF) at the end of each line to break up the otherwise monotonous content. Pretty clever, I’m sure you’ll agree.

The next important line is obvious. The line starting $InputFileName tells rsyslog which log file you’re interested in sending off to your remote logging server. The following line helps classify the log type with a “Tag” (which, if you have multiple servers of the same application type sending logs to one remote server, you might alter slightly per server, e.g., apache-www1: and so on). Ignore the $InputFileStateFile log file for now and skim through the remaining lines.

We are collecting an “info” level of logging detail and pushing it out via the user-configurable “local3” facility to the IP address “10.10.1.3”. The two @ signs signify TCP; a single @ sign would signify transfers via the UDP networking protocol.
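A quick way to exercise the client-side config is to hand a test message to the “local3” facility yourself with the logger utility, reusing the tag set above (the exact destination filename depends on the template configured on the remote server):

```shell
# Hypothetical test message; the tag matches the $InputFileTag set above
MSG="rsyslog remote-logging test message"

# Send it at local3.info; it should arrive on the remote rsyslog server
logger -p local3.info -t apache.access "$MSG" 2>/dev/null \
    || echo "logger could not reach a syslog socket"
```

If everything is wired up, tailing the corresponding log on the remote server should show the message almost immediately.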

Can Anybody Hear Me?

What about our remote rsyslog server’s config?

Have a quick look at the two lines below, which will sit in your remote rsyslog server’s /etc/rsyslog.conf file. For the sake of clarity, this is the recipient rsyslog server, the one that’ll receive all the logs from multiple places. Unless you’ve lost control of your senses, these two lines are easy to follow:

$ModLoad imtcp

$InputTCPServerRun 514

Incidentally, if an IP address is omitted (one can be explicitly stated using something like $TCPServerAddress 10.10.10.10), then rsyslog will attempt to listen on all IP addresses on the port in question.
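After restarting the server-side rsyslog, you can sanity-check that something is actually listening on TCP port 514 (ss comes from the iproute2 package; substitute netstat -ltn if you prefer):

```shell
# Port the remote rsyslog server should be listening on
PORT=514

# List listening TCP sockets and look for our port
ss -ltn | grep ":${PORT} " || echo "nothing listening on TCP ${PORT} yet"
```

Seeing a LISTEN entry on the port confirms the imtcp module loaded and bound successfully.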

You might be pleasantly surprised at how easy it is to finish off the remote rsyslog server config. We use something called “templates” in rsyslog. They are powerful, extensible, and worth reading about in more detail.

At the foot of our /etc/rsyslog.conf file, we simply add these lines:

$template ChrisBinnie, "/var/log/%HOSTNAME%/%PROGRAMNAME%.log"

*.*   ?ChrisBinnie

I’ve just used an arbitrary template reference in this case, my name, for ease of distinction. You will need to restart the service after making these changes.

We can see that we’re going to drop our logs into a directory off of /var/log, which has a hostname and then the application name. Appended to the end, the “.log” makes sense of the resulting filename. You can see which facilities and priorities are being added to this log file, in this case, “all” and “all” — thanks to the asterisks.

To TCP Or Not To TCP

You might need to spend a second thinking back over some of the remote logging considerations I mentioned, such as network congestion, if you’re pushing lots of logging data across your LAN. I mention this because (as we saw above) the clever rsyslog can use both TCP and UDP for pushing your logs around. TCP is the option best suited to most scenarios, thanks to its ability to error-check against network failures. It also doesn’t require an additional plugin to be loaded up, because it’s built into rsyslog; the reverse is true for the UDP protocol.

There are two minor connection points to note here. First, avoid using hostnames via DNS; use an IP address for greater reliability (CNAMEs are somewhere in the middle, if you change machines around every now and again). Second, as with all things that might need to be debugged at some point, you should try to use explicit port numbers on both the server and client ends so that no ambiguity is introduced. Incidentally, without a port explicitly specified, both protocols default to 514.

Little Dutch Boy

Note that if you’re using properly configured systems, you might need to punch a hole in the dike. Looking at the example below, clearly you need to alter the port number after –dport to your port number of choice. You can then save the setting to make it persistent with something like /sbin/service iptables save or whatever your distribution prefers.

If you need to allow access using the default TCP option and you’re using iptables, you can use this command:

# iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 514 -j ACCEPT

And, for good, old UDP, use:

# iptables -A INPUT -p udp -m state --state NEW -m udp --dport 514 -j ACCEPT

Stale Bread

Back to the dreaded gotcha. So that we’re clear: On the client or sender machine, we make a little fix to our log rotation bug/problem. Not the remote rsyslog server.

At the risk of repeating myself, there may be better ways of fixing the log rotation problem (with a version upgrade, for example); however, read on to see what fixed it for me. Remember that the issue is that after “logrotate” had run, the remote logging stopped. The solution was a simple script set to run via a cron job after “logrotate” had run in the middle of the night.

There is some method to the madness of my script. By running it, we effectively insist that after a log rotation has taken place the faithful rsyslog pays attention to its “state” file. Let’s peek into an example now:

<Obj:1:strm:1:

+iCurrFNum:2:1:1:

+pszFName:1:25:/var/log/apache2/access_log:

+iMaxFiles:2:1:0:

+bDeleteOnClose:2:1:0:

+sType:2:1:2:

+tOperationsMode:2:1:1:

+tOpenMode:2:3:384:

+iCurrOffs:2:7:1142811:

>End

.

Listing 2: Our rsyslog “state” file example.

I won’t profess to understand everything that’s going on in Listing 2, but I would hazard a guess that rsyslog is counting the lines it has processed, among other things, to keep itself operating correctly. Let’s promptly move on to Listing 3, the cron job and script solution that fixed the issue for me.


#!/bin/bash


#

# Deletes stale rsyslog “state” files, appends a timestamp to the new filename in /tmp & restarts rsyslog.

#

# After rsyslog restart the remote logging node should catch up any missed logs fairly quickly.

#


# Declare var

timestamp=$(date +%s)


# Delete all “state” files in /tmp

/bin/rm -f /tmp/apache_state_file*


# Edit rsyslog file which sends data remotely in order to show newly named “state” file

/bin/sed -i.bak 's/apache_state_file-[0-9]*/apache_state_file-'"$timestamp"'/g' /etc/rsyslog.d/www-rsyslog.chrisbinnie.tld.conf


# Apply changes to “state” file in use

/sbin/service rsyslog restart

Listing 3: Our quick-fix script to get log rotations to speak to rsyslog satisfactorily after “logrotate” has finished doing its business.

If you read the comments at the top of the script then, all going well, the script’s raison d’être should make sense. This script runs sometime around 5am, after the excellent “logrotate” has finished its sometimes lengthy business. There’s no point in running the script during or before “logrotate”’s run (during testing, my remote logging still failed).
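The cron entry itself might look something like the fragment below (the script path, filename, and 5am timing are illustrative; adjust both to land comfortably after your own logrotate run):

```
# /etc/cron.d/rsyslog-state-fix (hypothetical): run the fix-up script at 05:00
0 5 * * * root /usr/local/sbin/rsyslog-state-fix.sh
```

Note that files in /etc/cron.d take the extra user field (root, here) between the schedule and the command, unlike a per-user crontab.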

The small script works like this (you probably need to run it twice initially to clean up /tmp filenames, which should be harmless if you do it manually). It deletes the old “state” file upon which rsyslog relies, works out the current time, and appends that time to the end of the “state” filename.

As a result, the /tmp directory ends up holding one or two files that look like this: apache_state_file-1321009871. The script then backs up the existing remote logging config file and changes the “state” file that it references. Finally, a super-quick service restart means that the remote logging data starts flowing again and the other end (the remote rsyslog server) catches up with any missed logs in a whizz-pop-bang if there’s lots of data.

My experience is that if you tail the recipient log after running this script (or just performing a restart), you’ll see the catchup taking place super-speedily. In case you’re wondering, I found that sometimes a service restart didn’t pick up the pieces properly but altering the “state” file it referenced was successful without fail. Your mileage might vary of course.

As mentioned, I’m a little suspicious of an old version issue that I needed to fix with this script. In the current version, you can see there is some additional information about the current version’s “state” file. I hope my solution gives you some pointers and helps you out if you encounter a similar scenario, however.

I Can’t Hear You

If your already exceptionally noisy log file /var/log/messages begins to fill up with your application’s logs too, then here’s another little life-saver. The workaround is simple: apply the ;local3.none addition to the relevant destination log file line in /etc/rsyslog.conf, as shown here, and then restart the service:

*.*;auth,authpriv.none;local3.none  -/var/log/messages

No doubt you get the idea: this disables “local3” logging locally.

End Of Watch

I probably should have put more emphasis on how configurable and extensible rsyslog is at the start. It would be remiss not to point you at the extensive, well-maintained documentation, which is all linked from the application’s main website. There are modules and templates galore to explore in high levels of detail. And, as well as user examples on the main site, there’s an excellent wiki with some trickier configuration examples. If you’re eager for even more, you can check out this list of modules.

Now that I’ve presented the solution to a working remote rsyslog server, even with the challenges that log rotation throws into the mix, I hope you’ll think more about your logging infrastructure. I had originally used a Syslog server, back in the day, to pick up salient events from Cisco routers. So universal are the logging formats supported by rsyslog that you can also connect them to all sorts of devices, such as load balancers and other proprietary devices. You are far from being limited to picking up logs from other Unix-type devices and instead are blessed with the ability to collect logging event information from all over the place.

I hope you enjoy putting your new logging knowledge to creative and productive use.

Read the other articles in this series:

Remote Logging With Syslog, Part 1: The Basics

Remote Logging With Syslog, Part 2: Main Config File

Remote Logging With Syslog, Part 3: Logfile Rules

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

Advance your career in system administration! Check out the Essentials of System Administration course from The Linux Foundation.