
OpenSSL For Apache and Dovecot

At long last, my wonderful readers, here is your promised OpenSSL how-to for Apache, and next week you get SSL for Dovecot. In this two-part series, we’ll learn how to create our own OpenSSL certificates and how to configure Apache and Dovecot to use them.

The examples here build on these tutorials:

Creating Your Own Certificate

Debian/Ubuntu/Mint store private keys and symlinks to certificates in /etc/ssl. The certificates bundled with your system are kept in /usr/share/ca-certificates. Certificates that you install or create go in /usr/local/share/ca-certificates/.

This example for Debian/etc. creates a private key and public certificate, converts the certificate to the correct format, and symlinks it to the correct directory:


$ sudo openssl req -x509 -days 365 -nodes -newkey rsa:2048 \
   -keyout /etc/ssl/private/test-com.key \
   -out /usr/local/share/ca-certificates/test-com.crt
Generating a 2048 bit RSA private key
.......+++
......................................+++
writing new private key to '/etc/ssl/private/test-com.key'
-----
You are about to be asked to enter information that will 
be incorporated into your certificate request.
What you are about to enter is what is called a Distinguished 
Name or a DN. There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:WA
Locality Name (eg, city) []:Seattle
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Alrac Writing Sweatshop
Organizational Unit Name (eg, section) []:home dungeon
Common Name (e.g. server FQDN or YOUR name) []:www.test.com
Email Address []:admin@test.com

$ sudo update-ca-certificates
Updating certificates in /etc/ssl/certs...
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...

Adding debian:test-com.pem
done.
done.

CentOS/Fedora use a different file structure and don’t use update-ca-certificates, so use this command:


$ sudo openssl req -x509 -days 365 -nodes -newkey rsa:2048 \
   -keyout /etc/httpd/ssl/test-com.key \
   -out /etc/httpd/ssl/test-com.crt

The most important item is the Common Name, which must exactly match your fully qualified domain name. Everything else is arbitrary. -nodes creates a password-less private key, which Apache needs so it can read the key without prompting for a passphrase at every restart. -days sets the expiration date. Renewing expired certificates is a hassle, but shorter lifetimes arguably add some security. See Pros and cons of 90-day certificate lifetimes for a good discussion.
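If you would rather not answer the interactive prompts, you can supply the whole Distinguished Name on the command line with -subj. This sketch reuses the example values and Debian paths from above:

```shell
# Same certificate as before, but non-interactive: -subj supplies the
# Distinguished Name so openssl asks no questions.
sudo openssl req -x509 -days 365 -nodes -newkey rsa:2048 \
    -subj "/C=US/ST=WA/L=Seattle/O=Alrac Writing Sweatshop/OU=home dungeon/CN=www.test.com/emailAddress=admin@test.com" \
    -keyout /etc/ssl/private/test-com.key \
    -out /usr/local/share/ca-certificates/test-com.crt
```

This is handy for scripting renewals; remember the CN must still match your fully qualified domain name.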

Configure Apache

Now configure Apache to use your new certificate. If you followed Apache on Ubuntu Linux For Beginners: Part 2, all you do is modify the SSLCertificateFile and SSLCertificateKeyFile lines in your virtual host configuration to point to your new private key and public certificate. The test.com example from the tutorial now looks like this:


SSLCertificateFile /etc/ssl/certs/test-com.pem
SSLCertificateKeyFile /etc/ssl/private/test-com.key
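For context, here is a minimal sketch of where those two directives sit in an SSL virtual host. The ServerName and paths are this article's examples; the rest of the vhost configuration from the earlier tutorial is omitted:

```apache
<VirtualHost *:443>
    ServerName www.test.com
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/test-com.pem
    SSLCertificateKeyFile /etc/ssl/private/test-com.key
</VirtualHost>
```

Reload Apache after editing, e.g. with sudo systemctl reload apache2 on Debian/Ubuntu.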

CentOS users, see Setting up an SSL secured Webserver with CentOS in the CentOS wiki. The process is similar, and the wiki tells how to deal with SELinux.

Testing Apache SSL

The easy way is to point your web browser at https://yoursite.com and see if it works. The first time you do this, your over-protective browser will warn you that the site is unsafe because it uses a self-signed certificate. Ignore the hysterics and click through the nag screens to create a permanent exception. If you followed the example virtual host configuration in Apache on Ubuntu Linux For Beginners: Part 2, all traffic to your site will be forced over HTTPS, even if visitors try plain HTTP.

The cool nerdy way to test is by using OpenSSL. Yes, it has a nifty command for testing these things. Try this:


$ openssl s_client -connect www.test.com:443
CONNECTED(00000003)
depth=0 C = US, ST = WA, L = Seattle, O = Alrac Writing Sweatshop, 
OU = home dungeon, CN = www.test.com, emailAddress = admin@test.com
verify return:1
---
Certificate chain
 0 s:/C=US/ST=WA/L=Seattle/O=Alrac Writing Sweatshop/OU=home 
     dungeon/CN=www.test.com/emailAddress=admin@test.com
   i:/C=US/ST=WA/L=Seattle/O=Alrac Writing Sweatshop/OU=home 
     dungeon/CN=www.test.com/emailAddress=admin@test.com
---
Server certificate
-----BEGIN CERTIFICATE-----
[...]

This spits out a giant torrent of information. There is a lot of nerdy fun to be had with openssl s_client; for now it is enough that we know if our web server is using the correct SSL certificate.
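To cut that torrent down to the essentials, you can pipe the server's certificate straight into openssl x509 and print just the subject and validity window. This assumes the www.test.com example host from above:

```shell
# Fetch the certificate the server presents and show only who it is
# for and when it expires; echo closes the connection immediately.
echo | openssl s_client -connect www.test.com:443 2>/dev/null \
    | openssl x509 -noout -subject -dates
```

If the subject's CN and the notAfter date look right, your web server is serving the certificate you just created.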

Creating a Certificate Signing Request

Should you decide to use a third-party certificate authority (CA), you will have to create a certificate signing request (CSR). You will send this to your new CA, and they will sign it and send it back to you. They may have their own requirements for creating your CSR; this is a typical example of how to create a new private key and CSR:


$ openssl req -newkey rsa:2048 -nodes \
   -keyout yourdomain.key -out yourdomain.csr

You can also create a CSR from an existing key:


$ openssl req -key yourdomain.key \
   -new -out yourdomain.csr
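Before sending the CSR off, it is worth double-checking that the Common Name and key size match what your CA expects. Using the yourdomain.csr file created above:

```shell
# Dump the CSR in human-readable form to verify the Subject line
# (the CN must be your FQDN) and the key size.
openssl req -noout -text -in yourdomain.csr
```

Some CAs also ask you to paste the raw CSR into a web form; the file itself is plain Base64-encoded text, so you can simply cat it.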

That is all for today. Come back next week to learn how to properly set up Dovecot to use OpenSSL.

Additional Tutorials

Quieting Scary Web Browser SSL Alerts
How to Set Up Secure Remote Networking with OpenVPN on Linux, Part 1
How to Set Up Secure Remote Networking with OpenVPN on Linux, Part 2

Advance your career in system administration! Check out the Essentials of System Administration course from The Linux Foundation.

Understanding Open vSwitch, An OpenStack SDN Component

Open vSwitch is an open-source project that allows hypervisors to virtualize the networking layer. This caters for the large number of virtual machines running on one or more physical nodes. The virtual machines connect to virtual ports on virtual bridges (inside the virtualized network layer).

This is very similar to a physical server connecting to physical ports on a Layer 2 networking switch. These virtual bridges then allow the virtual machines to communicate with each other on the same physical node. These bridges also connect these virtual machines to the physical network for communication outside the hypervisor node.

In OpenStack, both the Neutron node and the compute node (Nova) are running Open vSwitch to provide virtualized network services.

Read more at OpenStack SuperUser

8 Docker Security Rules to Live By

Odds are, software (or virtual) containers are in use right now somewhere within your organization, probably by isolated developers or development teams to rapidly create new applications. They might even be running in production. Unfortunately, many security teams don’t yet understand the security implications of containers or know if they are running in their companies.

In a nutshell, Linux container technologies such as Docker and CoreOS Rkt virtualize applications instead of entire servers. Containers are superlightweight compared with virtual machines, with no need for replicating the guest operating system. 

Read more at InfoWorld

Zigbee Writes a Universal Language for IoT

The nonprofit Zigbee Alliance today unveiled dotdot, a universal language for the Internet of Things (IoT).

The group says dotdot takes the IoT language at Zigbee’s application layer and enables it to work across different networking technologies.

This is important because currently, most IoT devices don’t speak the same language, even if they use the same wireless technology. The result is an Internet of Things that is often a patchwork of translations done in the cloud. And platform and app developers must maintain a growing set of unique interfaces for each vendor’s products.

Read more at SDx Central

Apache Geode Spawns ‘All Sorts of In-Memory Things’

Apache Geode is kind of like the six blind men describing an elephant. It’s all in how you use it, Nitin Lamba, product manager at Ampool, told a meetup group earlier this year.

Geode is a distributed, in-memory compute and data-management platform that elastically scales to provide high throughput and low latency for big data applications. It pools memory, CPU, and network resources — with the option to also use local disk storage — across multiple processes to manage application objects and behavior.

Using dynamic replication and data partitioning techniques it offers high availability, improved performance, scalability, and fault tolerance.

Read more at The New Stack

DevOps Trends, Predictions and 2017 Resolutions

We’re counting the days till the end of 2016. As 2017 comes into focus, we find ourselves reflecting on the advancements made in the world of DevOps during this past year, the challenges still to overcome, and some of the trends that will shape the software delivery industry in the year(s) to come.

To give a proper farewell to 2016, and welcome in the new year, we hosted a special episode of Continuous Discussions (#c9d9) earlier this week, featuring industry luminaries and experts looking back on the state of DevOps in 2016, as well as what emerging trends they see prevailing in 2017.

Our expert panel included: Robert Stroud, principal analyst at Forrester; Nicole Forsgren, CEO and chief scientist at DORA; Chris Riley, analyst at fixate.io; Alan Shimel, Editor-in-Chief at DevOps.com; Manuel Pais, author on InfoQ and Skelton Thatcher; and our very own Sam Fell and Anders Wallgren. Continue reading for their exclusive insights into what’s in store for DevOps in 2017, plus some of their own DevOps New Year’s resolutions.

Read the full article here. 

Endless Is Bringing its Cheap, User-Friendly Linux PCs to the US

The dream of a Linux computer for normal humans is relatively dead. Sure, Google put Linux in billions of hands and homes with Android and Chrome OS, but neither OS is very much like the desktop Linux flavors well-meaning open-source developers have been crafting for decades.

A company called Endless has marked a third route, a stripped-down Linux operating system without many of the complications and difficulties (and features) of a typical Linux distro, but more apps and offline capabilities than Chrome OS. The OS is available for free download, but it also ships on the quirky Endless Mini and Endless One desktops Endless sells.

Read more at The Verge

Converting Failure to Success Should Be Part of Your Core Process

My life is full of mistakes. They’re like pebbles that make a good road. — Beatrice Wood

You know all the catchphrases and inspirational quotations about failure: fail fast, succeed quicker; fail forward; embrace failure; fail fast, fail often, fail everywhere. As creators of the bleeding edge of technology, we know that if we’re not failing, we’re not trying hard enough, and we’re not learning. But merely failing a lot doesn’t lead to progress. Anyone can fail all the time; the trick is converting failure to success. Ilan Rabinovitch of Datadog tells us, in his LinuxCon North America presentation, how to intelligently learn from our failures, and how to progress from failure to success.

The key to converting failure to success is to collect and analyze useful metrics, and to conduct formal post-mortems (or call them reviews or retrospectives if you don’t care for “post-mortem”). This needs to be part of your core process, because “The monitoring systems that we engage with these days are distributed and complex, more so than ever… All the pieces interact in ways that are much more complex than they might have been 10 years ago when you had a very clear three-tier architecture or static website that you interacted with. There are lots more pieces that can break or interact in unintentional ways” says Rabinovitch.

There are enough new mistakes to make; we don’t need to repeat the old ones. — Ilan Rabinovitch

Your reviews are definitely not about blame and punishments, but rather “We need to go back and see why was I able to do that, why did I make that mistake, why did I think that was the right actions to take. Put away the pitchforks, it should never be about the blame.” Rabinovitch reminds us that “Culture is this idea that we’re working together, we’re seeing the problem as the enemy, not each other… Sharing this idea that we’re going to take our learnings back and help each other be more successful in the future”.

So how to approach this? We’re already drowning in data, and yet Rabinovitch advises us to “Collect as much [data] as you can. If you don’t, it’s going to be expensive to generate again later, going back and trying to recreate the events of a security incident or a technical outage or what you’ve said or didn’t say on a control call.” Then, the next step is to categorize your metrics into three buckets: work metrics, resources, and events. Then what do you do?

Watch the complete presentation (below) to learn excellent insights on what to look for, what kind of tools and processes can help you make sense of what happened, and how to move forward.

LinuxCon videos

Embracing Failure and Learning from Our Mistakes with Effective Post Mortems by Ilan Rabinovitch

In this session, Ilan Rabinovitch discusses how Datadog runs internal postmortems from data collection to building timelines to the blameless review. You will learn about a framework you can apply right away to make postmortems more impactful in your own organizations.

How the Kubernetes Community Drives The Project’s Success

Kubernetes is a hugely popular open source project, one that is in the top percentile on GitHub and that has spawned more than 3,000 other projects. And although the distributed application cluster technology is incredibly powerful in its own right, that’s not the sole reason for its success.

“We think it’s not just the technology, we think that what makes it special is the community that builds the technology,” said Chen Goldberg, Director of Engineering, Container Engine and Kubernetes at Google, during her keynote at CloudNativeCon in Seattle last November.

Goldberg explained how that community works by pointing to three key areas for keeping Kubernetes moving forward: empowering internal special interest groups (SIGs), a commitment to transparency, and a culture of shared learning.

Kubernetes’ SIGs are intertwined; they don’t map to different GitHub repositories. They meet frequently and communicate among each other as often as possible. Goldberg said that the SIGs exist to ensure the community is thinking about how to make the technology as broad and accessible as possible, that every facet of the project is making Kubernetes useful to more people.

“Everything in the Kubernetes community is operating around SIGs,” she said. “They decide what features they want to work on. They discuss roadmap strategy. They triage issues towards the release. They make decisions. That’s the most important thing. When a community is so big, we have to grow leadership and distribute it.”

Hand in hand with that distributed approach is the commitment to transparency. Through the use of the features repository on GitHub, SIGs ensure their alignment, get new members caught up to speed, and generally just conduct business out in the open. There is a project management working group that reviews all features, highlights new breakthroughs, and keeps the SIGs working together.

“We want to make sure that you are informed of decisions if things are happening in the community,” Goldberg said.

There are frequent “burndown” sessions, post-mortems, and other community meetings to keep everyone on the same page and to make sure new features live up to the community’s high standards.

“We take it really seriously, the responsibility for your productions,” Goldberg said. “It means that when we release something, we want to make sure that we put the quality bar really high. We make a community decision when we are ready to release something. We will triage issues together … We want to make sure it works for you.”

The final vital element — a culture of shared learning — is really a nod to the fact that everyone is in uncharted territory with this new technology. There are many great ideas inside the Kubernetes community about what could work, but that’s a far stretch from knowing what does work.

“We don’t know everything,” Goldberg said. “I would lie if I would say it’s easy to manage such a big community. We make mistakes. The important thing is the community, we’re engaged to learn together and to improve.”

To learn more, watch the complete presentation below:

Do you need training to prepare for the upcoming Kubernetes certification? Pre-enroll today to save 50% on Kubernetes Fundamentals (LFS258), a self-paced, online training course from The Linux Foundation. Learn More >>