
Apache on Ubuntu Linux For Beginners: Part 2

You must set up your Apache web server to use SSL, so that your site URL is https:// and not http://. Sure, there are exceptions, such as test servers and lone LAN servers that only you and your cat use.

But any Internet-accessible web server absolutely needs SSL; there is no downside to encrypting your server traffic, and it’s pretty easy to set up. For LAN servers it may not be as essential; think about who uses it, and how easy it is to sniff LAN traffic.

We’ll learn the easy way to enable SSL on Apache, and the slightly harder and more authoritative way. Please refer to part 1 of this series, Apache on Ubuntu Linux For Beginners, as this builds on the examples shown there.

The Easy Way

Apache installs with a default encryption certificate and key pair: /etc/ssl/certs/ssl-cert-snakeoil.pem and /etc/ssl/private/ssl-cert-snakeoil.key. The following virtual host example modifies our example from part 1.


<VirtualHost *:443>
    ServerAdmin carla@localhost
    DocumentRoot /var/www/test.com
    SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
    SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    # Without SSLEngine on, Apache serves plain HTTP on port 443
    SSLEngine on
    ServerName test.com
    ServerAlias www.test.com
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Then enable the Apache SSL module and restart the server:

$ sudo a2enmod ssl
$ sudo service apache2 restart
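
If your virtual host lives in its own file under /etc/apache2/sites-available/, remember that it must be enabled, too. A quick sketch, assuming the test.com.conf filename from part 1:

$ sudo a2ensite test.com.conf
$ sudo service apache2 reload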

Point your web browser to https://test.com. The first time you do this you’ll get browser paranoia, and warnings about how this site is dangerous and will do terrible things to you. Click through all the steps to make a permanent exception for the site. When, at last, you are allowed to actually visit the site, you will see something like Figure 1.

Figure 1: test.com

Hurrah! Success! It should also work for https://www.test.com, and you’ll have to create an exception for that, too. Just for fun, click on the little padlock in your browser to read about how your SSL is no good because you’re using a self-signed certificate. Your self-signed certificate is fine, and we’ll discuss this more presently.
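
If you are curious about what is in the snakeoil certificate, openssl will read it for you; and if you ever need a fresh copy, Ubuntu’s ssl-cert package can regenerate the pair:

$ openssl x509 -in /etc/ssl/certs/ssl-cert-snakeoil.pem -noout -subject -issuer -dates
$ sudo make-ssl-cert generate-default-snakeoil --force-overwrite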

Troubleshooting

a2enmod is short for “Apache 2 enable module”. Apache always performs a configuration test at start and restart. If it finds any errors it helpfully tells you how to see what they are:


Job for apache2.service failed because the control process 
exited with error code. See "systemctl status apache2.service" 
and "journalctl -xe" for details.

So what are you waiting for? Run the two commands to see what’s wrong. This snippet tells me that I forgot to enable the SSL module:


Syntax error on line 4 of /etc/apache2/sites-enabled/test.com.conf:
Invalid command 'SSLCertificateFile', perhaps misspelled or defined 
by a module
Action 'configtest' failed.
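
You can also run the configuration test yourself at any time, without restarting anything, and catch this class of error before it takes the server down:

$ sudo apachectl configtest
Syntax OK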

Another way to test SSL is with openssl s_client, a fabulous tool for testing SSL on servers. It spits out a lot of output, and prints the public encryption certificate. Look for these items at the beginning and the end to indicate a correct setup:


$ openssl s_client -connect test.com:443
CONNECTED(00000003)
depth=0 CN = xubuntu
verify return:1
---
Certificate chain
 0 s:/CN=xubuntu
   i:/CN=xubuntu
[...]
    Start Time: 1476393579
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)

This is what you’ll see when SSL is not enabled:


$ openssl s_client -connect test.com:443
connect: Connection refused
connect:errno=111

Another way to check is with netstat. When SSL is correctly configured and you have a virtual host up, Apache will be listening on port 443:


$ sudo netstat -untap
[...]
tcp   0   0 0.0.0.0:443
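
curl gives you one more quick check from the command line; -k tells it to accept your self-signed certificate, and -I fetches only the response headers:

$ curl -kI https://test.com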

Apache’s apachectl -S is a great tool for examining your server configuration and finding any errors. It lists your document root, HTTP user and group, configuration file locations, and active virtual hosts.
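
On Ubuntu the same tool is also installed as apache2ctl, so either name works:

$ sudo apache2ctl -S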

Forward Port 80 Connections

When you get your nice SSL and HTTPS setup working, you should automatically redirect HTTP traffic to your HTTPS address. Otherwise, site visitors who try HTTP will see an error message, then go away and never visit you again. The best way to do this is by editing your virtual host configuration. For our test.com, add this to the existing virtual host file:


<VirtualHost *:80>
   ServerName test.com
   ServerAlias www.test.com
   DocumentRoot /var/www/test.com
   Redirect / https://test.com/
</VirtualHost>

Restart Apache, and try both https://test.com and http://test.com. The http:// address should now redirect to https://. Refresh your browser to make sure. The Redirect directive defaults to a 302 temporary redirect. Always use this until you have thoroughly tested your configuration; then you can change it to Redirect permanent.
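
Once you are satisfied, the change is a single word in the port 80 virtual host. Note the trailing slash on the target, which keeps a request for /page mapping cleanly to https://test.com/page:

   Redirect permanent / https://test.com/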

Using Third-Party SSL Certificates

Managing your own SSL certificate authority and public key infrastructure (PKI) is a royal pain. If you know how to do it, and how to roll out your certificate authorities to your users so they don’t have to battle frightened web browsers, then you are an über guru and I bow to you.

An easier way is to use a trusted third-party certificate authority. These work without freaking out your web browsers because they are already accepted and bundled on your system. Your vendor will have instructions on setting up. See Quieting Scary Web Browser SSL Alerts to learn some ways to tame your SSL madness.

.htaccess

I know, I said I was going to show you how to tame the beastly .htaccess. And I will. Just not today. Soon, I promise you! Until then, this article might be helpful to you: How to Use htaccess to Run Multiple Drupal 7 Websites on Your Cheapo Hosting Account. Sure, it’s about Drupal, but it’s also a good detailed introduction to .htaccess.

Advance your career in Linux System Administration! Check out the Essentials of System Administration course from The Linux Foundation.

ONF to Merge With On.Lab

The Open Networking Foundation (ONF) is merging with On.Lab, creating one entity that will curate standards such as OpenFlow while developing software projects such as ONOS and the Central Office Re-architected as a Datacenter (CORD).

The groups have begun operating as one organization led by On.Lab Executive Director Guru Parulkar. But the merger won’t legally be completed until next year; August 2017 is the target timeframe. (A nonprofit merger turns out to have all the complications of a corporate merger, Parulkar says.)

Read more at SDx Central

Google Open Sources the Code that Powers Its Domain Registry

Google today released Nomulus, the Java-based registry platform that powers Google’s own .google and .foo top level domains (TLDs).

Google says it started working on the technology behind Nomulus after the company applied to operate a number of generic TLDs itself back in 2012. Until then, domain names were mostly restricted to .com, .net, and various country-level TLDs like .de and .uk. Once the Internet Corporation for Assigned Names and Numbers (ICANN) decided to open things up to so-called generic TLDs like .app, .blog, and .guru, Google jumped into the fray and applied for .google and a number of other TLDs.

Read more at TechCrunch

A Doctor Learns How to Code Through Open Source

Judy Gichoya is a medical doctor from Kenya who became a software developer after joining the open source medical records project, OpenMRS. The open source project creates medical informatics software that helps health professionals collect and present data to improve patient care in developing countries.

After seeing how effective the open medical records system was at increasing efficiency and lowering costs for clinics in impoverished areas of Africa, she began hacking on the software herself to help improve it. Then she set up her own implementation in the slums outside Nairobi, and has done the same for dozens of clinics since.

This is a classic story of open source contributors, who join in order to scratch an itch. But Gichoya was a doctor, not a programmer. How did she make the leap?

Meeting open source

The radiology resident at the Indiana University School of Medicine began her career learning technology at IMIS, an information services management program. But, eventually, Gichoya went to the Moi University Schools of Medicine (MUSOM) and Public Health (MUSPH) in western Kenya. There, she continued her association with the IT world through a part-time job assembling computers for a local company, but she didn’t have much time, or use, for developing her programming skills.

Dr. Judy Gichoya

After her clinical work began, Gichoya found that instead of taking care of patients, she was spending most of her time completing paperwork. She was running around the hospital from one department to another looking for lab work and other documents. There was no centralized or electronic system for such data. In her own words, “it was very frustrating.”

Because of her computer background, she knew very well that there was a better way of organizing all of this information, something that would vastly improve the way healthcare was being delivered. Gichoya set out to find a solution.

She started talking to people about how they conducted care for their patients, and that’s when she discovered the AMPATH Medical Record System (AMRS), a program to support HIV prevention and treatment in Africa. She got involved with the program and started learning how to code to help with OpenMRS, an open source project started by two doctors at the Indiana University School of Medicine to help scale the AMRS software to serve more clinics in developing countries.

Three years later, in 2009, when she graduated from medical school, she still had no idea what open source actually was, despite being involved with OpenMRS for AMRS. It was only in 2010, when she did her own implementation of OpenMRS, that she came to understand the value of open source.

After graduation, Gichoya went to work in a clinic in Kibera, near Nairobi, Kenya, one of the largest slums in Africa. She wanted to improve healthcare in the slum, so she decided to use OpenMRS. She bought some old computers, installed OpenMRS, and trained a young man who had never used a computer before to run the system. Since then, she has done many other such implementations across different regions of Africa.

From doctor to software developer?

It wasn’t an easy transformation for Gichoya to get into software development. She had learned Pascal and Visual Basic at IMIS, but by the time she got involved with OpenMRS, no one was using those languages anymore. It was all about Java.

Back in 2006, when Gichoya started working on OpenMRS, Internet connectivity was very poor and very expensive, and it was hard for her to learn Java on her own due to the lack of mentors and teachers. But she learned bit by bit, and she has since picked up many other languages and technologies, including Python, AngularJS, and HTML5.

“Things are better nowadays, as there are disciplines like Informatics for doctors that help those who want to study IT and be a doctor,” she said.

As important as software is, Gichoya also feels that it’s just a tool for accomplishing what you want to do.

“Maybe you are just trying to discover a new drug. You want to focus your mind on the new drug and let the tools help you manage data,” she said. “Knowledge of programming helps a lot. But, you shouldn’t be wasting time trying to fix your programming language or trying to fix your operating system. Then you lose focus from your real goal.”

That’s true not only for medical professionals but everyone else.

OpenMRS and the Future

Gichoya still has one more year of her radiology residency at Indiana University. She is currently working on one of her most ambitious projects: a radiology information system.

Her earlier projects were more about electronic medical record systems, which house patients’ records. Now she wants to make a direct impact on how healthcare is delivered to patients — to make an impact on the real world.

Currently, if someone comes in with a cough, they are sent to a big facility with CT scanning capabilities. They get examined there and are given a date for CT scanning. They make a second trip for the actual scan and are then called back to collect the report. That’s three trips. There are only a few such facilities, and most patients live far away; each trip could be 3-4 hours one way. Making three such trips discourages people from going in at an early stage.

Gichoya wants a system where the patient has to go to the big facility only once, for the actual scan. The reports would be exchanged between the local medical center and the big facility over email so that treatment can start at an earlier stage. This approach will have a direct impact on how people receive healthcare. It will also encourage people to see doctors earlier and start treatment sooner.

Join the movement

One of the biggest challenges Gichoya faces in this ambitious project is talent. Previously, she had two developers helping her code: a developer from Austria, who did most of the coding, and a Google intern from Cameroon.

So Gichoya is thinking innovatively. She said that in many countries students have to do projects at school. What if they could help with actual projects like hers? “Nothing big and serious, just some non-essential things that we need and these students get credit for that. It’s a win-win situation: those students get real projects that help real people and we get the much-needed resources,” she said.

But that’s more or less getting help with the basics. She also has a plan to get real experts involved with her project. Gichoya said that nowadays a lot of big companies have started offering sabbaticals to their employees.

“People have started to take time off work, sabbaticals. Innovation is driven by creative minds, and companies encourage these people to take time off, to remain sharp, to avoid burnout. What if these people come to Africa and work on different projects? They get to take a break from their work and get to see amazing places in Africa. It’s very rewarding, personally: you help a good cause and you also get a new experience. It’s a big win-win,” she said.

Open Source: The Kenyan way

Now that she has learned how to be a developer as well as a doctor, Gichoya must learn another fundamental skill of open source: community building. Open source is as much about people as it is about software: different people from different cultures and regions come together to weave a fabric of technology that improves everyone’s quality of life.

Fortunately, Gichoya said, this is a skill that comes naturally to her as a native of Kenya.

“The most interesting thing about Kenyans is coming together. After the colonials left, my parents relocated to the area we live in. They started digging wells and created a community,” she said. “There was no school; the community came together to create schools and build things. And that’s what open source does, too. People come together to help each other and build things.”

Open Source, Third-Party Software Flaws Still Dog Developers

The new 2016 State of Software Security Report from Veracode shows the hazards of buggy libraries and applications.

Application developers are getting burnt by security vulnerabilities in the very open source and third-party frameworks and software components that make up their finished applications.

That’s one of the major findings in Veracode’s annual State of Software Security 2016 report, published today and based on data from the application security firm’s code-level analysis of billions of lines of code over the past 18 months.

Read more at Dark Reading

Blockchain Technology Can Help Save Refugees by Giving Them a Verified Identity

What if you had no proof of who you are? What would you do when the bank manager asked for ID when you tried to open an account or when the hospital asked for your documentation?

You wouldn’t be able to function, at least not easily. Billions face this problem internationally, but now blockchain technology is helping those with no paper proof of existence get the same services as those with “official” identification.

Blockchain technology, made famous by cryptocurrencies like Bitcoin, is a coding method that allows for secure record keeping in online community ledgers. Network members share and confirm information across computers with no central authority. No one user controls the information or messes with it independently; members must jointly confirm information before it’s added to the jointly-held data repository. 

Read more at Quartz

With OpenStack Users, Dev and Test are King

What’s OpenStack’s killer app? Users say it’s dev and test. According to the latest OpenStack User Survey, most deployments of the open source cloud infrastructure project are on-premises private clouds for dev-and-test work that serve teams of fewer than 100 users.

The survey tallied responses from users running some 260 deployments of OpenStack worldwide, with results available through a portal that allows the data to be tabulated in various ways.

Read more at InfoWorld

Current State of Kernel Audit and Linux Namespaces, Looking Ahead to Containers

Richard Guy Briggs, a kernel security engineer and Senior Software Engineer at Red Hat, spoke about the current state of Kernel Audit and Linux Namespaces at Linux Security Summit. He also shared problems plaguing containers and what might be done to address them soon.

Node.js v6 Transitions to LTS

The Node.js project has three major updates this month:

  • Node.js v7 will become a current release line.
  • Node.js v6, code-named “Boron,” transitions to LTS.
  • Node.js v0.10 will reach “End of Life” at the end of the month. There will be no further releases of this line, including security or stability patches.

Node.js v6 transitioned to the LTS line today, so let’s talk about what this means, where other versions stand, and what to expect from Node.js v7.

Node.js Project’s LTS Strategy

In a nutshell, the Long Term Support (LTS) strategy is focused on providing stability and security to organizations with complex environments that find it cumbersome to continually upgrade Node.js. These release lines are even-numbered and are supported for 30 months; more information on the LTS strategy can be found here.

 

Another good source for the history and strategy of the Node.js release lines is Rod Vagg’s blog post, “Farewell to Node.js v5, Preparing for Node.js v7.” Rod is the Node.js project’s Technical Steering Committee director and a Node.js Foundation board member.

Node.js follows semantic versioning (semver). Essentially, semver is how the project signals how changes will affect the software and whether upgrading will “break” it, so developers can decide whether and when to move to a new version. A simple set of rules dictates how version numbers are assigned and incremented, depending on which of the following categories a change falls into (a schematic example follows the list):

  • Patch Release: A bug fix or a small performance improvement. It doesn’t add new features or change the way the software works. Patches are an easy upgrade.
  • Minor Release: Any change that introduces new features but does not change the way the software works. Since a new feature is being released, it is generally best to wait to upgrade until the release has been tested and patched.
  • Major Release: A big, breaking change that alters how the software works and functions. With Node.js, it can be anything from changing an error message to upgrading V8.
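
To make the numbering concrete, here is a schematic sketch of how each release type moves a hypothetical version number (the numbers are illustrative, not a real changelog):

6.9.0  -> 6.9.1     patch release: bug fix only, an easy upgrade
6.9.1  -> 6.10.0    minor release: new feature, no breaking changes
6.10.0 -> 7.0.0     major release: breaking change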

If you want more information on how releases work, watch Myles Borins’ presentation at JSConf Uruguay: https://www.youtube.com/watch?v=5un1I2qkojg. Myles is a member of the Node.js Project and Node.js Core Technical Committee.

Node.js v6 Moves from “Current” to “LTS”

Node.js v6 will be the LTS release line until April 2018, meaning new features (semver-minor) may only land with consent of the Node.js project’s Core Technical Committee and the LTS Working Group. These features will land on an infrequent basis.

Changes in an LTS-covered major version are limited to:

  1. Bug fixes;
  2. Security updates;
  3. Non-semver-major npm updates;
  4. Relevant documentation updates;
  5. Certain performance improvements where the risk of breaking existing applications is minimal;
  6. Changes that introduce a large amount of code churn where the risk of breaking existing applications is low and where the change in question may significantly ease the ability to backport future changes due to the reduction in diff noise.

After April 2018, Node.js v6 will transition into “maintenance” mode for 12 additional months. Maintenance mode means that only critical bugs, critical security fixes, and documentation updates will be permitted.

Node.js v6 is important to enterprises and users that need stability. If you have a large production environment and need to keep Node.js humming, then you want to be on an LTS release line. If you fall within this category, we suggest that you update to Node.js v6, especially if you are on v0.10 or v0.12. More information on this, as well as what to do if you are on Node.js v4, appears below.
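
Not sure which release line you are on? Node and npm will both report their versions; the output shown here is only an example:

$ node -v
v4.6.0
$ npm -v
2.15.9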

Features, Focus and More Features

Node.js v6 became a current release line in April 2016. Its main focus is on performance improvements, increased reliability and better security. A few notable features and updates include:

Security Enhancements

  • A new Buffer API for increased safety and security.
  • Experimental support for v8_inspector, a new debugging protocol. If you have an environment that cannot handle updates or testing, do not try this new feature, as it is not fully supported and could have bugs.

Increased Reliability

  • A warning is now printed to standard error when a native Promise rejection occurs but no handler is attached to receive it. This is particularly important for distributed teams building applications: before this capability, they would have to chase down the problem, the equivalent of finding a needle in a haystack. Now they can easily pinpoint where the problem is and solve it (a quick demo follows).
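
A minimal way to see the new warning, assuming a recent Node.js v6 release is installed:

$ node -e 'Promise.reject(new Error("boom"))'

The rejection warning appears on stderr instead of failing silently.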

Performance Improvements

Node.js v6 Equipped with npm v3

  • npm v3 resolves dependencies differently than npm v2: it mitigates the deep trees and redundancy that nesting causes in npm v2 (more on this can be found in npm’s blog post on the subject). The flattened dependency tree will be particularly important to Windows users, who face file path length limitations.
  • In addition, npm’s shrinkwrap functionality has changed. The updates provide a more consistent way to stay in sync with package.json when you use the save flag or adjust dependencies. Users who deploy projects using shrinkwrap consistently (most enterprises do) should watch for changes in behavior; a minimal sketch of the workflow follows.
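
A minimal sketch of the shrinkwrap workflow under npm v3; the package name here is hypothetical:

$ npm install --save some-package   # records the dependency in package.json
$ npm shrinkwrap                    # locks the installed tree in npm-shrinkwrap.json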

Updating Node.js v4 to Node.js v6

If you are on Node.js v4, you have 18 months to transition to Node.js v6. We suggest starting now. Node.js v4 will stop being maintained in April 2018.

At the current rate of downloads, Node.js v6 will overtake the current LTS line, v4, in downloads by the end of the year. This is a good thing, as v6 will be supported, first as LTS and then in maintenance mode, for the next 30 months.

 
Data pulled from the Node.js metrics section: https://nodejs.org/metrics/

Time To Transition Off v0.12 & v0.10

On v0.12, v0.10, or v5? Please upgrade! We understand you may have time constraints, but Node.js v0.10 will not be maintained after this month (October). This means no further official releases, including fixes for critical security bugs. End of life for Node.js v0.12 will be December 2016.

You might be wondering what our main reasons are for doing this. After December 31, we won’t be able to get OpenSSL updates for those versions, which means we won’t be able to provide any security updates.

Additionally, the Node.js Core team has been single-handedly maintaining the version of V8 included in Node.js v0.10 since the Chromium team retired it four years ago. This represents a risk for users, as the team will no longer maintain it.

If you have a robust test environment set up, then an upgrade to Node.js v6 is what we would suggest. If you don’t feel comfortable making that big a version leap, then Node.js v4 is also a good upgrade; however, it won’t be supported as long as Node.js v6.

Node.js v4 and Node.js v6 are more stable than Node.js v0.10 and v0.12 and have more modern versions of V8, OpenSSL, and other critical dependencies. Bottom line: it’s time to update.
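
There are several ways to upgrade. If you use the nvm version manager, for example, moving to the latest v6 release is a two-command sketch (assuming nvm is already installed):

$ nvm install 6          # install the latest Node.js v6.x release
$ nvm alias default 6    # make v6 the default for new shells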

What’s holding you back from upgrading? Let us know in the comments section below. If you have questions along the way, please ask them in this forum: https://github.com/nodejs/help

Okay, So What’s the Deal with Node.js v7?

Node.js v7 was released into beta at the end of September and is due to be released the week of October 25. Node.js v7 is a checkpoint release for the Node.js project and will focus on stability, incremental improvement over Node.js v6, and updating to the latest versions of V8, libuv, and ICU.

Node.js v7 will ship with JavaScript engine V8 5.4, which focuses on performance improvements linked to memory. Included are new JavaScript language features such as the exponentiation operator, new Object property iterators, and experimental support for async functions. Note that async function support will remain unsupported until V8 5.5 ships. These features are still experimental, so you can play around with them, but they likely contain bugs and should not be used in production.
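
You can try one of the new language features straight from the command line; this assumes a Node.js v7 build is installed:

$ node -e 'console.log(2 ** 10)'
1024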

Given that it is an odd-numbered release, it will only be supported for eight months, with its end of life slated for June 2017. It has some really awesome features, but it might not be right for you. If you can easily upgrade your deployment and can tolerate a bit of instability, then this is a good upgrade for you.

Want more technical information about breaking changes in Node.js v7? See the full list here: https://github.com/nodejs/node/pull/9099

Beyond v7, we’ll be focusing our efforts on language compatibility, adopting modern web standards, internal growth toward VM neutrality and API development, and support for growing Node.js use cases. To learn more, check out James Snell’s recent keynote from Node.js Interactive Amsterdam, “Node.js Core State of the Union,” on where Node.js core has been over the past year and where we’re going. James is a member of the Node.js Technical Steering Committee. Additional technical details about Node.js v6 and other release lines can be found here.

This article originally appeared on the Node.js Foundation blog.

Watch Videos from LinuxCon + ContainerCon Europe

Thank you for your interest in the recorded sessions from LinuxCon + ContainerCon Europe 2016! More than 25 sessions were recorded at the event, spanning keynotes, developer talks, and wildcard sessions.
