
What are Containers? Learn the Basics in Online Course from The Linux Foundation

In this series, we’ll provide a preview of the new Containers Fundamentals (LFS253) course from The Linux Foundation. The course is designed for those who are new to container technologies, and it covers container building blocks, container runtimes, container storage and networking, Dockerfiles, Docker APIs, and more. In this installment, we start from the basics. You can also sign up to access all the free sample chapter videos now.

What Are Containers?

In today’s world, developers, quality assurance engineers, and everyone involved in the application lifecycle are listening to customer feedback and striving to implement the requested features as soon as possible.

Containers are an application-centric way to deliver high-performing, scalable applications on the infrastructure of your choice by bundling the application code, the application runtime, and the libraries.

Additionally, using containers with microservices makes a lot of sense, because you can do rapid development and deployment with confidence. With containers, you can also record a deployment by building an immutable infrastructure. If something goes wrong with the new changes, you can simply return to the previously known working state.

This self-paced course — taught by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, former Red Hat engineer, Docker Captain, and author of the Docker Cookbook — is provided almost entirely in video format. This video from chapter 1 gives an overview of containers.

Want to learn more? Access all the free sample chapter videos now!

4 Best Practices for Web Browser Security on Your Linux Workstation

There is no question that the web browser will be the piece of software with the largest and the most exposed attack surface on your Linux workstation. It is a tool written specifically to download and execute untrusted, frequently hostile code.

It attempts to shield you from this danger by employing multiple mechanisms such as sandboxes and code sanitization, but they have all been defeated on multiple occasions. You should learn to approach browsing websites as the most insecure activity you’ll engage in on any given day.

There are several ways you can reduce the impact of a compromised browser, but the truly effective ways will require significant changes in the way you operate your workstation.

1: Graphical environment

The venerable X protocol was conceived and implemented for a wholly different era of personal computing and lacks important security features that should be considered essential on a networked workstation. To give a few examples:

• Any X application has access to full screen contents

• Any X application can register to receive all keystrokes, regardless of which window they are typed into

A sufficiently severe browser vulnerability means attackers get automatic access to what is effectively a built-in keylogger and screen recorder and can watch and capture everything you type into your root terminal sessions.
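You can demonstrate this lack of isolation yourself with standard tools (this sketch assumes ImageMagick and the xinput utility are installed; the output path and device id are examples):

```shell
# Any unprivileged X11 client can capture the entire screen:
import -window root /tmp/everything.png   # ImageMagick screenshot of the root window

# Any unprivileged X11 client can also watch keystrokes going to other windows:
xinput list                  # note the id of your keyboard device
xinput test <keyboard-id>    # streams every key press/release, regardless of focus
```

No special privileges, permission prompts, or configuration are required for either command, which is exactly the problem.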

You should strongly consider switching to a more modern platform like Wayland, even if this means using many of your existing applications through an X11 protocol wrapper. With Fedora starting to default to Wayland for all applications, we can hope that most software will soon stop requiring the legacy X11 layer.

2: Use two different browsers

This is the easiest step to take, but it offers only minor security benefits. Not all browser compromises give an attacker full, unfettered access to your system; sometimes they are limited to reading local browser storage, stealing active sessions from other tabs, capturing input entered into the browser, and so on. Using two different browsers, one for work and high-security sites and another for everything else, helps prevent minor compromises from giving attackers access to the whole cookie jar. The main inconvenience is the amount of memory consumed by two separate browser processes.

Here’s what we on The Linux Foundation sysadmin team recommend:

Firefox for work and high security sites

Use Firefox to access work-related sites, where extra care should be taken to ensure that data like cookies, sessions, login information, and keystrokes do not fall into attackers’ hands. You should NOT use this browser to access any other sites except a select few. Install the following essential Firefox add-ons:

NoScript

• NoScript prevents active content from loading, except from user-whitelisted domains. It is a great hassle to use with your default browser (though it offers really good security benefits), so we recommend enabling it only on the browser you use to access work-related sites.

Privacy Badger  

• EFF’s Privacy Badger will prevent most external trackers and ad platforms from being loaded, which will help avoid compromises on these tracking sites from affecting your browser (trackers and ad sites are very commonly targeted by attackers, as they allow rapid infection of thousands of systems worldwide).

HTTPS Everywhere

• This EFF-developed add-on will ensure that most of your sites are accessed over a secure connection, even if the link you click uses http:// (which helps avoid a number of attacks, such as SSL strip).

Certificate Patrol is also a nice-to-have tool that will alert you if the site you’re accessing has recently changed its TLS certificate, especially if the old certificate was not nearing its expiration date or if the site is now using a different certificate authority. It helps alert you if someone is trying to man-in-the-middle your connection, but it generates a lot of benign false positives.

You should leave Firefox as your default browser for opening links, as NoScript will prevent most active content from loading or executing.

Chrome/Chromium for everything else

Chromium developers are ahead of Firefox in adding a lot of nice security features (at least on Linux), such as seccomp sandboxes and kernel user namespaces, which act as an added layer of isolation between the sites you visit and the rest of your system.

Chromium is the upstream open-source project, and Chrome is Google’s proprietary binary build based on it (insert the usual paranoid caution about not using it for anything you don’t want Google to know about).

It is recommended that you also install the Privacy Badger and HTTPS Everywhere extensions in Chrome, and that you give it a distinct theme from Firefox to indicate that this is your “untrusted sites” browser.

3: Use Firejail

Firejail is a project that uses Linux namespaces and seccomp-bpf to create a sandbox around Linux applications. It is an excellent way to help build additional protection between the browser and the rest of your system. You can use Firejail to create separate isolated instances of Firefox to use for different purposes — for work, for personal but trusted sites (such as banking), and one more for casual browsing (social media, etc).

Firejail is most effective on Wayland, unless you use X11 isolation mechanisms (the --x11 flag). To start using Firejail with Firefox, refer to the documentation provided by the project:

Firefox Sandboxing Guide
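As a sketch of what such separate instances can look like (the directory names below are made up for illustration), you can give each jailed browser its own private home directory so that cookies, cache, and history never mix:

```shell
# One throwaway private home per purpose
mkdir -p ~/sandbox/work ~/sandbox/casual

# Work browsing: everything the browser writes stays under ~/sandbox/work
firejail --private=~/sandbox/work firefox --no-remote

# Casual browsing in a separate jail; --x11=xephyr adds X11 isolation by
# running the browser inside a nested Xephyr display (Xephyr must be installed)
firejail --x11=xephyr --private=~/sandbox/casual firefox --no-remote
```

The --no-remote flag keeps Firefox from attaching to an already-running instance outside the jail, which would defeat the isolation.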

4: Fully separate your work and play environments via virtualization

This step is a bit paranoid, but as I’ve said (many times) before, security is just like driving on the highway — anyone going slower than you is an idiot, while anyone driving faster than you is a crazy person.  

See the QubesOS project, which strives to provide a “reasonably secure” workstation environment by compartmentalizing your applications into separate, fully isolated VMs. You may also investigate SubgraphOS, which aims to achieve similar goals using container technology (currently in alpha).

Over the next few weeks in this ongoing Linux workstation security series, we’ll cover more best practices. Next time, join us to learn how to combat credential phishing with FIDO U2F and how to generate secure passwords, with password manager recommendations.

Workstation Security

Read more:

Part 6: How to Safely and Securely Back Up Your Linux Workstation

Part 1: 3 Security Features to Consider When Choosing a Linux Workstation

Redefining the Tech that Powers Travel

We all know that the technology industry has been going through a period of incredible change. Rashesh Jethi, Head of Research & Development at Amadeus, began his keynote at the Open Networking Summit (ONS) with a story about how, when his grandfather went to university in India, the 760-mile journey took three days and involved a camel, a ship, and a train. Contrast this with Jethi’s 2,700-mile journey to ONS, which took six hours and let him check into his flight from his watch. The rapid evolution of technology continues to redefine the travel industry and how we approach travel.

Five or six years ago, Jethi said, Amadeus had about 5,000 microservices, 1,500 databases, and a peak of about 80,000 transactions per second. Before continuous integration and continuous delivery, they still made about 600 application software changes every month, which equates to about 20 to 25 changes every single day. Clearly, that was not going to scale with the amount of change that was coming. Over a couple of years, they completely virtualized their infrastructure as a service using VMware Integrated OpenStack on the compute side and NSX on the networking side, with about 90 percent of their servers running Linux. This change has drastically improved their time to market, cutting the time to deploy a new server from three weeks down to 20 minutes.

After solving some of the technical challenges, they had another problem, which Jethi attributes to you and me and all of us on our phones and tablets, always connected thanks to ubiquitous networks. We are always out there checking whether we can get a good deal on our next planned vacation, and that kept increasing the transaction load and volumes they had to deal with, particularly in the frontend. With all of these networked devices, they have grown from 80,000 to a million transactions per second. Jethi said it was clear that just virtualizing their infrastructure was not going to be enough. They had to move to a model where they could deploy the application as a whole, with all its dependencies, to instances that could be managed as clusters.

Jethi describes this as the second phase of their journey: moving to and building their platform-as-a-service layer, called Amadeus Cloud Services. To do this, they have been working with Red Hat and OpenShift, using Docker to containerize their applications and Kubernetes for deployment, scaling, and management of those containers. This has allowed them to scale up and down elastically, with self-healing: if one particular cluster flames out, it gets instantiated somewhere else and life goes on. “The more our teams are able to worry less about scaling of the infrastructure, … the more we are able to actually focus on specific problems that our industry and our customer is facing,” says Jethi.

Watch the video to learn more about how Amadeus is redefining the technology that powers travel.

https://www.youtube.com/watch?v=jV0kAt64yy0&list=PLbzoR-pLrL6p01ZHHvEeSozpGeVFkFBQZ

Interested in open source SDN? The “Software Defined Networking Fundamentals” training course from The Linux Foundation provides system and network administrators and engineers with the skills to maintain an SDN deployment in a virtual networking environment. Download the sample chapter today!

Check back with Open Networking Summit for upcoming news on ONS 2018. 

 

See more presentations from ONS 2017:

Google’s Networking Lead on Challenges for the Next Decade

How to Password Protect a Vim File in Linux

Vim is a popular, feature-rich, and highly extensible text editor for Linux, and one of its special features is support for encrypting text files with a password using various crypto methods.

In this article, we will explain one of Vim’s simple usage tricks: password-protecting a file in Linux. We will show you how to secure a file at the time of its creation as well as after opening it for modification.
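As a quick sketch (the filename and key below are examples), the basic workflow looks like this; interactively you would run `vim -x file` or use the `:X` command, but Vim can also be driven non-interactively in Ex mode:

```shell
# Create a plaintext file
printf 'my secret notes\n' > secret.txt

# Encrypt it with Vim's strongest built-in method (blowfish2, Vim 7.4.399+):
# 'cryptmethod' picks the cipher, 'key' sets the password, then write and quit
vim -es -c 'setlocal cryptmethod=blowfish2' -c 'set key=S3cretPass' -c 'wq' secret.txt

# The file on disk is no longer plaintext; it begins with a VimCrypt~ header
head -c 9 secret.txt
```

Opening the file again with `vim secret.txt` prompts for the key; entering the wrong key shows only garbage rather than an error.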


Read more at Tecmint

Making Chips Smarter

It is no secret that artificial intelligence (AI) and machine learning have advanced radically over the last decade, yet somewhere between better algorithms and faster processors lies the increasingly important task of engineering systems for maximum performance—and producing better results.

The problem for now, says Nidhi Chappell, director of machine learning in the Datacenter Group at Intel, is that “AI experts spend far too much time preprocessing code and data, iterating on models and parameters, waiting for training to converge, and experimenting with deployment models. Each step along the way is either too labor- and/or compute-intensive.”

Read more at ACM

OpenStack Summit Emphasizes Emerging Deployment Models

The OpenStack Summit kicked off here today with multiple announcements and an emphasis on the evolution of the cloud deployment model. 

Jonathan Bryce, executive director of the OpenStack Foundation, said during his keynote that there has been a 44 percent year-over-year increase in the volume of OpenStack deployments, with OpenStack now running on more than 5 million compute cores around the world.

Although OpenStack has had success, the path has not been a straight line upward since NASA and Rackspace first started the project in June 2010.

“We’re now at a major inflection point in the cloud,” Bryce said.

Read more at eWeek

NIST to Security Admins: You’ve Made Passwords too Hard

Despite the fact that cybercriminals stole more than 3 billion user credentials in 2016, users don’t seem to be getting savvier about their password usage. The good news is that how we think about password security is changing as other authentication methods become more popular.

Password security remains a Hydra-esque challenge for enterprises. Require users to change their passwords frequently, and they wind up selecting easy-to-remember passwords. Force users to use numbers and special characters to select a strong password and they come back with passwords like Pa$$w0rd.

Read more at InfoWorld

Self Contained Systems (SCS): Microservices Done Right

Everybody seems to be building microservices these days. There are many different ways to split a system into microservices, and there appears to be little agreement about what microservices actually are – except for the fact that they can be deployed independently. Self-contained Systems are one approach that has been used by a large number of projects.

What are Self-contained Systems?

The principles behind Self-contained Systems (SCSs) are defined at the SCS website. Self-contained Systems have some specific characteristics:

  • Each SCS is an autonomous web application. Therefore, it includes the web UI as well as the logic and the persistence. So a user story will typically be implemented by changing just one SCS, even if it requires changes to UI, logic, and persistence. To achieve this, each SCS has to have its own data storage, so it can modify its database schema independently of the others.

 

Read more at InfoQ

What is Docker’s Moby Project?

During DockerCon 2017, a few major announcements were made, including the Moby Project.

What is the Moby Project? It’s a framework to assemble specialized container systems without reinventing the wheel.

The Moby Project is to Docker what Fedora is to Red Hat Enterprise Linux. – Solomon Hykes, Docker CTO/Founder

As Docker becomes the container-project equivalent of the Fedora project, how Docker is built is changing.

Red Hat did a good job in the early, confusing days of RHEL in that it delineated project from product by splitting Fedora from RHEL. Docker sees this approach as a way to better engage the community. The boundaries between community and product were fuzzy before; people couldn’t necessarily tell when they were contributing to the project versus the product. The separation of code between the moby/moby repository and the docker/docker repository clarifies this distinction.

Read more at NetworkWorld

DevConf Comes to India May 11-12, 2017

DevConf is a developer-focused conference organized by Red Hat. Originally started at Red Hat’s Brno site as DevConf.cz, it has evolved into an important event for the open source project communities in which Red Hat participates and contributes.

This year Red Hat, HasGeek, and The Linux Foundation have come together to bring DevConf.in as part of HasGeek’s Rootconf 2017 event. DevConf.in is an outreach to application developers, architects, and systems engineers who like to share knowledge and exchange notes on large, distributed application platforms. As a significant number of (micro)services are deployed on hosted platforms, topics such as resiliency, recovery, administration, operational security processes, and design patterns become important.

The DevConf.in editorial team has been mindful of these themes, and the talk roster reflects the expectations for this first edition of the event. We have talks from developers (Baiju M, Suraj Deshmukh, Ratnadeep Debnath, and Raghavendra Talur) spanning the application development lifecycle for designing and deploying containerized applications on a large fabric while making use of a deployment pipeline. Ligaya Turmelle will walk the audience through best practices for deploying and administering MySQL. Aravinda MK will talk about the challenges of monitoring a distributed filesystem with traditional tools and how an events API helps solve them. Recently, Snapdeal made a conscious choice to move from a public to a private, self-hosted cloud, and Ruchi Singh will share the lessons and anti-patterns from that move. Mehul Ved focuses on a move to dynamic cloud infrastructure and will talk about choices, decisions, and implementation details.

In her talk on infrastructure for open source projects, Amye Scavarda will talk about the “Church of the Shaven Yak,” where there is so much to do but sometimes little progress is made: you pick a piece of the yak to shave every day and continue to make good progress, but while you are shaving one piece, the hair is growing back in another area. There’s a buzz-phrase going around that the “Internet is moving to the edge”; there are a number of “things” on the Internet. Jim Perrin talks about how CentOS can be a development platform for IoT business and highlights the security bits that are often overlooked in the quick, large-scale deployment gold rush.

The topics being brought together at DevConf.in have their own flourishing communities, each focused on a specialized approach. DevConf.in intends to bring these practitioners together with customers and decision makers to talk about design patterns and architecture, and to discuss deployment models and efficiencies. We expect such a forum will kick off a healthy model of soup-to-nuts conversations that provide strong directional guidance to developers and businesses eager to derive benefits from large, repeatable deployments of distributed services across multiple geographic regions.

For more details, visit https://rootconf.in/2017