
Red Hat OpenStack Platform 11 Released

Red Hat’s love affair with OpenStack continues with enhanced support for upgrades with composable roles, new networking capabilities, and improved integration with Red Hat CloudForms for cloud management. This latest OpenStack distribution delivers a reliable cloud platform built on the proven backbone of Red Hat Enterprise Linux (RHEL).

OpenStack enables its users to assemble their own private cloud service mix. This makes it easier for enterprises to customize OpenStack deployments to fit specific needs, but it also makes upgrades challenging. To address this, Ocata, and thus OpenStack Platform 11, now make in-place upgrades much easier.

Read more at ZDNet

Enterprise Open Source Programs: From Concept to Reality

How pervasive is open source in today’s businesses? According to the 2016 Future of Open Source Survey from Black Duck and North Bridge, a mere three percent of respondents say they don’t use any open source tools or platforms.

Leveraging open source has also become a key avenue for fostering new ideas and technologies. Gartner’s Hype Cycle for Open Source Software (2016) notes that organizations are using open source today not just for cost savings, but increasingly for innovation. With this in mind, major companies and industries are quickly building out their open source programs, and the open source community is responding.

Open Source as a Business Priority

The Linux Foundation, the OpenStack Foundation, Cloud Foundry, InnerSource Commons, and the Free Software Foundation are just some of the organizations that businesses can turn to when advancing their open source strategies. Meanwhile, countless enterprises have rolled out professional, in-house programs focused on advancing open source and encouraging its adoption.

In a previous article, I covered some of these businesses that may not immediately come to mind. For example, Walmart’s Walmart Labs division has released a slew of open source projects, including a notable one called Electrode, which is a product of Walmart’s migration to a React/Node.js platform. General Electric also might not be the first company that you think of when it comes to moving the open source needle, but GE is a powerful player in open source. GE Software has an “Industrial Dojo” – run in collaboration with the Cloud Foundry Foundation – to strengthen its efforts to solve the world’s biggest industrial challenges.

Over the past couple of years, we have also seen increased focus on open source within vertical industries not historically known for embracing it. As this post notes, LMAX Exchange, Bloomberg, and CME Group are just three of the companies in the financial industry that are innovating with open source tools and components and moving past merely consuming open source software to becoming contributors. For example, you can find projects that Bloomberg has contributed to the open source community on GitHub. Capital One and Goldman Sachs are advancing their open source implementations and programs as well.

The telecom industry, previously known for its proprietary ways, is also embracing open source in a big way. Ericsson, for example, regularly contributes projects and is a champion of several key open source initiatives. You can browse through the company’s open source hub here. Ericsson is also one of the most active telecom-focused participants in the effort to advance open NFV and other open technologies that can eliminate historically proprietary components in telecom technology stacks. Ericsson works directly with The Linux Foundation on these efforts, and engineers and developers are encouraged to interface with the open source community.

Other telecom players who are deeply involved with NFV and open source projects include AT&T, Bloomberg LP, China Mobile, Deutsche Telekom, NTT Group, SK Telecom, and Verizon.

Red Hat maintains a dedicated blog on telecoms transforming their infrastructure and technology stacks with open source tools and components. Likewise, you can find Red Hat’s ongoing coverage of how open source is transforming the oil and gas industry here.

Resources for Leveraging Open Source

Organizations of any size can take advantage of resources, ranging from training to technology, to help them build out their own open source programs and initiatives. On the training front, The Linux Foundation offers courses on everything from Essentials of OpenStack Administration to Software Defined Networking. The Foundation also offers a free ebook, Open Source Compliance in the Enterprise, which helps organizations understand issues related to the licensing, development, and reuse of open source software and offers practical guidelines on how best to use open source code in products and services.

Organizations can also partner with businesses already leveraging open source, so that they can share resources. The Linux Foundation’s TODO Group is a group of member companies that collaborate on practices and policies for running successful open source projects and programs. According to TODO:

“Open source is part of the fabric of each of our companies. Between us, our open source programs enable us to use, contribute to, and maintain, thousands of projects – both large and small. These programs face many challenges, such as ensuring high-quality and frequent releases, engaging with developer communities, and contributing back to other projects effectively. The members of this group are committed to working together in order to overcome these challenges. We will be sharing experiences, developing best practices, and working on common tooling. But we can’t do this alone. If you are a company using or sharing open source, we welcome you to join us and help make this happen.”

InnerSource Commons, founded by PayPal, also specializes in helping companies pursue open source methods and practices as well as exchange ideas. You can find the group’s GitHub repositories here, and a free O’Reilly ebook called Getting Started with InnerSource offers more details. The ebook reviews the principles that make open source development successful and includes case study material on how InnerSource has worked at PayPal.

Are you interested in how organizations are bootstrapping their own open source programs internally? You can learn more in the Fundamentals of Professional Open Source Management training course from The Linux Foundation. Download a sample chapter now.

MapD Open Sources GPU-Powered Database

Since starting work on MapD more than five years ago while taking a database course at MIT, I had always dreamed of making the project open source. It is thus with great pleasure that I announce that today our company is open sourcing the MapD Core database and associated visualization libraries, effective immediately.

The code is available on GitHub under an Apache 2.0 license. It has everything you need to build a fully functional installation of the MapD Core database, enabling sub-second querying across many billions of records on a multi-GPU server. All of our core tech, including our tiered caching system and our LLVM query compilation engine, is contained in today’s open source release.

In conjunction with the open source announcement, we are also excited to announce the founding of the GPU Open Analytics Initiative (GOAI) with Continuum Analytics and H2O.ai. Together we are unveiling our first project, the GPU Data Frame (GDF).

Read more at MapD

Network Microsegmentation Possible with NFV and SDN Combined

Network microsegmentation is foundational to zero-trust architectures. In a microsegmented model, the network knows which systems are allowed to talk to which other systems, in which ways and under what circumstances. Network microsegmentation allows sanctioned traffic to pass, lets each network node see only what it needs to talk to or listen to, and hides the rest. …

In an SDN environment, some of this processing can be done using data plane devices as distributed policy enforcement points. Network functions virtualization (NFV) offers further help to the service provider implementing zero-trust models by making it easier to put security processing in virtual network function (VNF) packages and download it as needed to compute nodes immediately preceding or following (proximate to) the traffic being processed.

Read more at TechTarget

Much Ado About Communication

One of the first challenges an open source project faces is how to communicate among contributors. There are a plethora of options: forums, chat channels, issues, mailing lists, pull requests, and more. How do we choose which is the right medium to use and how do we do it right?

Sadly and all too often, projects shy away from making a disciplined decision and instead opt for “all of the above.” This results in a fragmented community: Some people sit in Slack/Mattermost/IRC, some use the forum, some use mailing lists, some live in issues, and few read all of them.

This is a common issue I see in organizations I’m working with to build their internal and external communities. Which of these channels do we choose and for which purposes? Also, when is it OK to say no to one of them?

Read more at OpenSource.com

SUSE Unveils OpenStack Cloud Monitoring & Supports TrilioVault

Today at OpenStack Summit 2017 in Boston, MA, SUSE, aside from celebrating its 25th anniversary, announced SUSE OpenStack Cloud Monitoring, a new open source software solution that makes it simple to monitor and manage the health and performance of enterprise OpenStack cloud environments and workloads. In other SUSE-related news, Trilio Data announced that its TrilioVault is Ready Certified for SUSE OpenStack Cloud.

SUSE OpenStack Cloud Monitoring is based on the OpenStack Monasca project. Its main goal is to make it easy for operators and users to monitor and analyze the health and performance of complex private clouds, delivering reliability, performance, and high service levels for OpenStack clouds. Through automation and preconfiguration, SUSE OpenStack Cloud Monitoring is also aimed at reducing costs.

Read more at StorageReview

3 Ways to Run a Remote Desktop on Raspberry Pi

In this post, we will tell you about 3 ways to run Remote Desktop on your Raspberry Pi.

The first one is by using TeamViewer. Using TeamViewer is as easy as pie. You just install TeamViewer on the Raspberry Pi, find the provided login and password, and enter them on your PC. That’s it! No need for a static IP address from your provider, and no tricks with setting up port forwarding on your router.

The second way to run Remote Desktop on the RPi is by using VNC. VNC is a graphical desktop protocol that allows you to access the full Raspberry Pi desktop from another PC. So, you can see the start menu and run programs from desktop shortcuts. VNC is simple if your PC and Raspberry Pi are on the same local network. But if you want to connect from the office to your home RPi, you’ll have to do some pretty tricky configuration to set up port forwarding on your home router.

The third way of running Remote Desktop is via ssh + X11 forwarding. It is pretty simple and requires little configuration, but it is limited to showing the windows of individual programs. However, if you are on the same local network as your RPi and only need to access it from time to time, it is a good option.

Using TeamViewer for Remote Desktop on Raspberry Pi

Raspberry Pi setup

There is no version of TeamViewer available for ARM-based devices such as the Raspberry Pi. Fortunately, there is a way to run TeamViewer on the Raspberry Pi using ExaGear Desktop, which allows running x86 apps on the Raspberry Pi.

1. Obtain your ExaGear Desktop. Unpack the downloaded archive and install ExaGear by running the install-exagear.sh script in the directory containing the deb packages and your license key:

$ tar -xvzpf exagear-desktop-rpi2.tar.gz
$ sudo ./install-exagear.sh

2. Enter the guest x86 system using the following command:

$ exagear
Starting the shell in the guest image /opt/exagear/images/debian-8-wine2g
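
You can verify that you are now inside the x86 guest rather than the ARM host by checking the reported architecture (a quick sanity check; inside the guest it should report an x86 identifier such as i686):

$ arch
i686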

3. Download and install TeamViewer:

# fetch and install the x86 TeamViewer package
$ sudo apt-get update
$ sudo apt-get install wget
$ wget http://download.teamviewer.com/download/teamviewer_i386.deb
$ sudo dpkg -i teamviewer_i386.deb
# pull in any dependencies that dpkg could not resolve
$ sudo apt-get install -f
# download and apply the 2G memory-split Wine build that TeamViewer needs under ExaGear
$ wget http://s3.amazonaws.com/wine1.6-2g-2g/wine1.6-2g-2g.tar.gz
$ tar -xzvf wine1.6-2g-2g.tar.gz
$ sudo ./teamviewer-fix-2g.sh

4. Now you can run TeamViewer from the Raspberry Pi start menu.

5. Set up a static password for remote connections in the TeamViewer GUI, and note the personal ID it displays.

Remember the personal ID and password for remote access to the RPi using TeamViewer.

Windows PC setup

1. Download and install TeamViewer for Windows from www.teamviewer.com.

2. Run TeamViewer from the start menu, enter your personal ID in the “Partner ID” field, and press the “Connect to partner” button.

Enter your personal password in the pop-up window and log on.

That’s it! You are now connected to your Raspberry Pi.

Using VNC for Remote Desktop on Raspberry Pi

Raspberry Pi setup

1. Install a VNC server on the Raspberry Pi:

$ sudo apt-get install tightvncserver

2. Start the VNC server:

$ vncserver

On the first run, you’ll be asked to enter a password that will be used to access the RPi remotely.
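
By default, tightvncserver picks its own resolution. As an optional sketch using its standard flags (not something the basic setup requires), you can set the remote desktop’s geometry and color depth explicitly when starting the server:

$ vncserver :1 -geometry 1920x1080 -depth 24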

3. Check and note your Raspberry Pi’s IP address:

$ sudo ifconfig

and find a line like

inet addr: 192.168.0.109

The exact address depends on your network; on most home networks it will start with 192.168, with the last numbers varying. This is your Raspberry Pi’s IP address.
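
On recent Raspbian images there is also a shorter way to get just the address (a minimal alternative, assuming the standard hostname utility is available):

$ hostname -I
192.168.0.109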

That’s it for RPi setup.

Windows PC setup

1. You will need to download and install a VNC client program. For example, you can use TightVNC (tightvnc.com).

2. Run the downloaded file to install the TightVNC client, following the installation instructions. When prompted, choose the “Custom” setup type.

Now the VNC client is installed.

3. Run the TightVNC client from the start menu. In the Remote Host field, enter the IP address of your Raspberry Pi, a colon, and the display number 1 (in my case, 192.168.0.109:1), and press Connect.

That’s it! You are now connected to your Raspberry Pi.

Unfortunately, this method works only when your PC and Raspberry Pi are on the same local network. It’s possible to set up a VNC connection when the PC and RPi are on different networks, but it requires tricky configuration of port forwarding on your router.
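
One alternative worth sketching (not part of the original setup): if you can already reach the Pi over SSH, you can tunnel VNC through the SSH connection instead of opening VNC ports on your router. TightVNC display :1 listens on TCP port 5901, so:

$ ssh -L 5901:localhost:5901 pi@<your-pi-address>

While the tunnel is open, point the TightVNC client at localhost:5901.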

Using ssh + X11 forwarding for Remote Desktop on Raspberry Pi

This case doesn’t require any additional package installation on your Raspberry Pi.

On the Windows PC, do the following:

1. Install the Xming X server for Windows.

2. Run the Xming server.

3. Run PuTTY, enter your RPi’s IP address, select X11 in the options menu, and check the box labeled “Enable X11 forwarding”.

4. Log in to the Raspberry Pi and launch a program’s GUI.
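
On a Linux or macOS PC, the same trick works without Xming or PuTTY. As a minimal sketch, assuming your Pi is at 192.168.0.109 and has the leafpad editor installed (any GUI program works), the -X flag enables X11 forwarding and the program’s window opens on your PC:

$ ssh -X pi@192.168.0.109
$ leafpad &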

In case you need ExaGear Desktop, the software used in this post, get it here.

The original article is here.

What are Containers? Learn the Basics in Online Course from The Linux Foundation

In this series, we’ll provide a preview of the new Containers Fundamentals (LFS253) course from The Linux Foundation. The course is designed for those who are new to container technologies, and it covers container building blocks, container runtimes, container storage and networking, Dockerfiles, Docker APIs, and more. In this installment, we start from the basics. You can also sign up to access all the free sample chapter videos now.

What Are Containers?

In today’s world, developers, quality assurance engineers, and everyone involved in the application lifecycle are listening to customer feedback and striving to implement the requested features as soon as possible.

Containers are an application-centric way to deliver high-performing, scalable applications on the infrastructure of your choice by bundling the application code, the application runtime, and the libraries.

Additionally, using containers with microservices makes a lot of sense, because you can do rapid development and deployment with confidence. With containers, you can also make each deployment a recorded, repeatable step by building an immutable infrastructure. If something goes wrong with the new changes, you can simply return to the previously known working state.
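
As a generic illustration of that rollback idea (a sketch, not course material; the image name myapp and its tags are hypothetical), with Docker you simply replace the container running the bad release with the previous known-good image:

$ docker stop web && docker rm web
$ docker run -d --name web -p 8080:80 myapp:1.0.3   # previous known-good version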

This self-paced course — taught by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, former Red Hat engineer, Docker Captain, and author of the Docker Cookbook — is provided almost entirely in video format. This video from chapter 1 gives an overview of containers.

Want to learn more? Access all the free sample chapter videos now!

4 Best Practices for Web Browser Security on Your Linux Workstation

There is no question that the web browser will be the piece of software with the largest and the most exposed attack surface on your Linux workstation. It is a tool written specifically to download and execute untrusted, frequently hostile code.

It attempts to shield you from this danger by employing multiple mechanisms such as sandboxes and code sanitization, but they have all been previously defeated on multiple occasions. You should learn to approach browsing websites as the most insecure activity you’ll engage in on any given day.

There are several ways you can reduce the impact of a compromised browser, but the truly effective ways will require significant changes in the way you operate your workstation.

1: Graphical environment

The venerable X protocol was conceived and implemented for a wholly different era of personal computing and lacks important security features that should be considered essential on a networked workstation. To give a few examples:

• Any X application has access to full screen contents

• Any X application can register to receive all keystrokes, regardless of which window they are typed into

A sufficiently severe browser vulnerability means attackers get automatic access to what is effectively a built-in keylogger and screen recorder and can watch and capture everything you type into your root terminal sessions.
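
You can see this weakness for yourself with stock X11 tools. As a quick demonstration (run as an ordinary, unprivileged user; the keyboard’s device id will differ on your system):

$ xinput list                  # find the id of your keyboard device
$ xinput test <keyboard-id>    # prints every key event, no matter which window has focus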

You should strongly consider switching to a more modern platform like Wayland, even if this means using many of your existing applications through an X11 protocol wrapper. With Fedora starting to default to Wayland for all applications, we can hope that most software will soon stop requiring the legacy X11 layer.
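
You can check which display server your current session is using (a simple check; the variable is set on systemd-based distributions):

$ echo $XDG_SESSION_TYPE
wayland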

2: Use two different browsers

This is the easiest to do, but it only offers minor security benefits. Not all browser compromises give an attacker full, unfettered access to your system; sometimes they are limited to allowing one to read local browser storage, steal active sessions from other tabs, capture input entered into the browser, etc. Using two different browsers, one for work/high-security sites and another for everything else, will help prevent minor compromises from giving attackers access to the whole cookie jar. The main inconvenience will be the amount of memory consumed by two different browser processes.

Here’s what we on The Linux Foundation sysadmin team recommend:

Firefox for work and high security sites

Use Firefox to access work-related sites, where extra care should be taken to ensure that data like cookies, sessions, login information, keystrokes, etc., most definitely does not fall into attackers’ hands. You should NOT use this browser to access any other sites except a select few. You should install the following essential Firefox add-ons:

NoScript

• NoScript prevents active content from loading, except from user-whitelisted domains. It is a great hassle to use with your default browser (though it offers really good security benefits), so we recommend only enabling it on the browser you use to access work-related sites.

Privacy Badger  

• EFF’s Privacy Badger will prevent most external trackers and ad platforms from being loaded, which will help avoid compromises on these tracking sites from affecting your browser (trackers and ad sites are very commonly targeted by attackers, as they allow rapid infection of thousands of systems worldwide).

HTTPS Everywhere

• This EFF-developed Add-on will ensure that most of your sites are accessed over a secure connection, even if a link you click is using http:// (great to avoid a number of attacks, such as SSL-strip).

Certificate Patrol is also a nice-to-have tool that will alert you if the site you’re accessing has recently changed its TLS certificate, especially if it wasn’t nearing its expiration date or if it is now using a different certification authority. It helps alert you if someone is trying to man-in-the-middle your connection, but it generates a lot of benign false positives.

You should leave Firefox as your default browser for opening links, as NoScript will prevent most active content from loading or executing.

Chrome/Chromium for everything else

Chromium developers are ahead of Firefox in adding a lot of nice security features (at least on Linux), such as seccomp sandboxes, kernel user namespaces, etc, which act as an added layer of isolation between the sites you visit and the rest of your system.

Chromium is the upstream open-source project, and Chrome is Google’s proprietary binary build based on it (insert the usual paranoid caution about not using it for anything you don’t want Google to know about).

It is recommended that you install Privacy Badger and HTTPS Everywhere extensions in Chrome as well and give it a distinct theme from Firefox to indicate that this is your “untrusted sites” browser.

3: Use Firejail

Firejail is a project that uses Linux namespaces and seccomp-bpf to create a sandbox around Linux applications. It is an excellent way to help build additional protection between the browser and the rest of your system. You can use Firejail to create separate isolated instances of Firefox to use for different purposes — for work, for personal but trusted sites (such as banking), and one more for casual browsing (social media, etc).
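
A minimal sketch of that setup, assuming you have already created separate Firefox profiles named work and banking via firefox -ProfileManager (the profile names here are hypothetical):

$ firejail --name=work firefox -P work --no-remote &
$ firejail --name=banking firefox -P banking --no-remote &

The --no-remote flag keeps each instance from attaching to an already running Firefox process, so each profile truly runs in its own sandbox.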

Firejail is most effective on Wayland, unless you use X11 isolation mechanisms (the --x11 flag). To start using Firejail with Firefox, please refer to the documentation provided by the project:

Firefox Sandboxing Guide

4: Fully separate your work and play environments via virtualization

This step is a bit paranoid, but as I’ve said (many times) before, security is just like driving on the highway — anyone going slower than you is an idiot, while anyone driving faster than you is a crazy person.  

See the QubesOS project, which strives to provide a “reasonably secure” workstation environment by compartmentalizing your applications into separate, fully isolated VMs. You may also investigate SubgraphOS, which achieves similar goals using container technology (currently in alpha).

Over the next few weeks in this ongoing Linux workstation security series, we’ll cover more best practices. Next time, join us to learn how to combat credential phishing with FIDO U2F and how to generate secure passwords, with password manager recommendations.

Workstation Security

Read more:

Part 6: How to Safely and Securely Back Up Your Linux Workstation

Part 1: 3 Security Features to Consider When Choosing a Linux Workstation

Redefining the Tech that Powers Travel

We all know that the technology industry has been going through a period of incredible change. Rashesh Jethi, Head of Research & Development at Amadeus, began his keynote at the Open Networking Summit (ONS) with a story about how, when his grandfather went to university in India, the 760-mile journey took three days and involved a camel, a ship, and a train. Contrast this with Jethi’s 2,700-mile journey to ONS, which took six hours and during which he checked into his flight from his watch. The rapid evolution of technology is continuing to redefine the travel industry and how we approach travel.

Five or six years ago, Jethi said, Amadeus had about 5,000 microservices, 1,500 databases, and a peak of about 80,000 transactions per second. Before continuous integration and continuous delivery, they still made about 600 application software changes every month, which equates to about 20 to 25 changes every single day. Clearly, that was not going to scale with the amount of change that was coming. Over a couple of years, they completely virtualized their infrastructure as a service using VMware Integrated OpenStack on the compute side and NSX on the networking side, with about 90 percent of their servers running Linux. This technology change has drastically improved their time to market, cutting the time to deploy a new server from 3 weeks down to 20 minutes.

After solving some of the technical challenges, they had another problem, which Jethi attributes to you and me, and all of us on our phones and tablets that are always connected thanks to ubiquitous networks. We are always out there checking to see if we can get a good deal on our next planned vacation, and that kept increasing the transaction load and volumes they had to deal with, particularly in the frontend. With all of these networked devices, they have grown from 80,000 to a million transactions per second. Jethi said that it was clear that just virtualizing their infrastructure was not going to be enough; they had to move to a model where they could deploy the application as a whole, with all of its dependencies, to instances that could be managed as clusters.

Jethi describes this as the second phase of their journey: building a platform-as-a-service layer called Amadeus Cloud Services. To do this, they have been working with Red Hat and OpenShift, using Docker to containerize their applications and Kubernetes for deployment, scaling, and management of those containers. This has allowed them to scale up and down with elastic scaling and self-healing, where if one particular cluster flames out, it gets instantiated somewhere else and life goes on. “The more our teams are able to worry less about scaling of the infrastructure, … the more we are able to actually focus on specific problems that our industry and our customer is facing,” says Jethi.
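
As a generic sketch of what that elastic scaling looks like in Kubernetes (not Amadeus’s actual configuration; the deployment name booking-frontend is hypothetical), replica counts and autoscaling rules express it directly:

$ kubectl scale deployment booking-frontend --replicas=10
$ kubectl autoscale deployment booking-frontend --min=5 --max=50 --cpu-percent=70

Kubernetes then keeps the desired number of replicas running, rescheduling containers onto healthy nodes when one fails.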

Watch the video to learn more about how Amadeus is redefining the technology that powers travel.

https://www.youtube.com/watch?v=jV0kAt64yy0?list=PLbzoR-pLrL6p01ZHHvEeSozpGeVFkFBQZ

Interested in open source SDN? The “Software Defined Networking Fundamentals” training course from The Linux Foundation provides system and network administrators and engineers with the skills to maintain an SDN deployment in a virtual networking environment. Download the sample chapter today!

Check back with Open Networking Summit for upcoming news on ONS 2018. 

 

See more presentations from ONS 2017:

Google’s Networking Lead on Challenges for the Next Decade