
SUSE Unveils OpenStack Cloud Monitoring & Supports TrilioVault

Today at the OpenStack Summit 2017 in Boston, MA, SUSE, besides celebrating its 25th anniversary, announced SUSE OpenStack Cloud Monitoring, a new open source software solution that makes it simple to monitor and manage the health and performance of enterprise OpenStack cloud environments and workloads. In other SUSE-related news, Trilio Data announced that its TrilioVault is Ready Certified for SUSE OpenStack Cloud.

SUSE OpenStack Cloud Monitoring is based on the OpenStack Monasca project. Its main goal is to make it easy for operators and users to monitor and analyze the health and performance of complex private clouds, delivering reliability, performance, and high service levels for OpenStack clouds. Through automation and preconfiguration, SUSE OpenStack Cloud Monitoring also aims to reduce costs.

Read more at StorageReview

3 Ways to Run a Remote Desktop on Raspberry Pi


In this post, we will tell you about 3 ways to run Remote Desktop on your Raspberry Pi.

The first one is by using TeamViewer. Using TeamViewer is as easy as pie. You just install TeamViewer on the Raspberry Pi, find the provided login and password, and enter them on your PC. That’s it! No need for a static IP address from your provider, and no tricks with setting up port forwarding on your router.

The second way to run Remote Desktop on the RPi is by using VNC. VNC is a graphical desktop sharing protocol that allows you to access the full Raspberry Pi desktop from another PC. So, you can see the start menu and run programs from desktop shortcuts. VNC is simple if your PC and Raspberry Pi are located on the same local network. But if you want to connect from the office to your home RPi, you’ll have to do some pretty tricky configuration to set up port forwarding on your home router.

The third way of running Remote Desktop is via ssh + X11 forwarding. It is pretty simple and requires little configuration, but it is limited to showing the windows of a single program. However, if you are on the same local network as your RPi and only need to access it from time to time, it is a good option.

Using TeamViewer for Remote Desktop on Raspberry Pi

Raspberry Pi setup

There is no version of TeamViewer available for ARM-based devices such as the Raspberry Pi. Fortunately, there is a way to run TeamViewer on the Raspberry Pi using ExaGear Desktop, which allows running x86 apps on the Raspberry Pi.

1. Obtain your ExaGear Desktop. Unpack the downloaded archive and install ExaGear by running the install-exagear.sh script in the directory containing the deb packages and the license key:

$ tar -xvzpf exagear-desktop-rpi2.tar.gz
$ sudo ./install-exagear.sh

2. Enter the guest x86 system using the following command:

$ exagear
Starting the shell in the guest image /opt/exagear/images/debian-8-wine2g

3. Download and install TeamViewer:

$ sudo apt-get update
$ sudo apt-get install wget
$ wget http://download.teamviewer.com/download/teamviewer_i386.deb
$ sudo dpkg -i teamviewer_i386.deb
$ sudo apt-get install -f
$ wget http://s3.amazonaws.com/wine1.6-2g-2g/wine1.6-2g-2g.tar.gz
$ tar -xzvf wine1.6-2g-2g.tar.gz
$ sudo ./teamviewer-fix-2g.sh

4. Now you can run TeamViewer from the Raspberry Pi start menu.

5. Set up a static password for unattended remote access in the TeamViewer GUI.

Remember the personal ID and the password; you will need them for remote access to the RPi using TeamViewer.

Windows PC setup

1. Download and install TeamViewer for Windows from www.teamviewer.com.

2. Run TeamViewer from the start menu, enter the Raspberry Pi’s personal ID in the “Partner ID” field, and press the “Connect to partner” button. Enter the password in the pop-up window and log on.

That’s it! You are connected to your Raspberry Pi.

Using VNC for Remote Desktop on Raspberry Pi

Raspberry Pi setup

1. Install a VNC server on the Raspberry Pi:

$ sudo apt-get install tightvncserver

2. Start VNC server:

$ vncserver

On the first run, you’ll be asked to enter a password, which will be used to access the RPi remotely.
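Optionally, you can also pass a screen resolution and color depth when starting the server; the values below are only examples:

$ vncserver :1 -geometry 1280x720 -depth 24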

3. Check and note your Raspberry Pi’s IP address

$ sudo ifconfig

and find a line like

inet addr: 192.168.0.109

The last two numbers will vary depending on your network, but on a typical home network the address starts with 192.168. This is your Raspberry Pi’s IP address.
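Alternatively, if you only want the address itself without the rest of the ifconfig output, the following also works on Raspbian:

$ hostname -I
192.168.0.109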

That’s it for RPi setup.

Windows PC setup

1. You will need to download and install a VNC client program. For example, you can use TightVNC (tightvnc.com).

2. Run the downloaded file to install the TightVNC client and follow the installation wizard. When asked for a setup type, choose “Custom”.

Now the VNC client is installed.

3. Run TightVNC Viewer from the start menu. In the Remote Host field, enter the Raspberry Pi’s IP address, a colon, and the display number 1 (in my case it was 192.168.0.109:1), then press Connect.

That’s it! You are connected to your Raspberry Pi.

Unfortunately, this method works out of the box only when your PC and Raspberry Pi are located on the same local network. It’s possible to set up a VNC connection if the PC and RPi are on different networks, but it requires tricky configuration of port forwarding on your router.
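One common workaround, sketched here under the assumption that only SSH (port 22) is forwarded to your home RPi and that your-public-address stands in for your router’s external address, is to tunnel VNC over SSH:

$ ssh -L 5901:localhost:5901 pi@your-public-address

Display :1 listens on TCP port 5901, which is why that port is tunneled; afterwards, point the TightVNC client at localhost:1.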

Using ssh + X11 forwarding for Remote Desktop on Raspberry Pi

This case doesn’t require any additional package installation on your Raspberry Pi.

On the Windows PC, do the following:

1. Install Xming X Server for Windows.

2. Run the Xming server.

3. Run PuTTY, enter your RPi’s IP address, select X11 in the options menu, and check the box labeled “Enable X11 forwarding”.

4. Log in to the Raspberry Pi and run a program’s GUI; its window will open on your Windows desktop via Xming.
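If you are connecting from a Linux PC instead of Windows, no separate X server is needed. A minimal sketch, assuming the default pi user and the IP address from the VNC section (lxterminal is just an example GUI program):

$ ssh -X pi@192.168.0.109     # log in with X11 forwarding enabled
$ lxterminal &                # run any GUI program; its window opens on your PC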

In case you need the ExaGear Desktop software used in this post, get it here.

The original article is here.

What are Containers? Learn the Basics in Online Course from The Linux Foundation

In this series, we’ll provide a preview of the new Containers Fundamentals (LFS253) course from The Linux Foundation. The course is designed for those who are new to container technologies, and it covers container building blocks, container runtimes, container storage and networking, Dockerfiles, Docker APIs, and more. In this installment, we start from the basics. You can also sign up to access all the free sample chapter videos now.

What Are Containers?

In today’s world, developers, quality assurance engineers, and everyone involved in the application lifecycle are listening to customer feedback and striving to implement the requested features as soon as possible.

Containers are an application-centric way to deliver high-performing, scalable applications on the infrastructure of your choice by bundling the application code, the application runtime, and the libraries.

Additionally, using containers with microservices makes a lot of sense, because you can do rapid development and deployment with confidence. With containers, you can also record a deployment by building an immutable infrastructure. If something goes wrong with the new changes, you can simply return to the previously known working state.
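As a simple illustration of that rollback idea (the image name and tags here are hypothetical), deploying a new version of a containerized service and returning to the previous, known-good one can be as simple as:

$ docker run -d --name web myapp:2.0
$ docker rm -f web                    # the new version misbehaves
$ docker run -d --name web myapp:1.0  # return to the previously known working state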

This self-paced course — taught by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, former Red Hat engineer, Docker Captain, and author of the Docker Cookbook — is provided almost entirely in video format. This video from chapter 1 gives an overview of containers.

Want to learn more? Access all the free sample chapter videos now!

4 Best Practices for Web Browser Security on Your Linux Workstation

There is no question that the web browser will be the piece of software with the largest and the most exposed attack surface on your Linux workstation. It is a tool written specifically to download and execute untrusted, frequently hostile code.

It attempts to shield you from this danger by employing multiple mechanisms such as sandboxes and code sanitization, but they have all been defeated on multiple occasions. System administrators should learn to approach browsing websites as the most insecure activity they’ll engage in on any given day.

There are several ways you can reduce the impact of a compromised browser, but the truly effective ways will require significant changes in the way you operate your workstation.

1: Graphical environment

The venerable X protocol was conceived and implemented for a wholly different era of personal computing and lacks important security features that should be considered essential on a networked workstation. To give a few examples:

• Any X application has access to full screen contents

• Any X application can register to receive all keystrokes, regardless of which window they are typed into

A sufficiently severe browser vulnerability means attackers get automatic access to what is effectively a built-in keylogger and screen recorder and can watch and capture everything you type into your root terminal sessions.

You should strongly consider switching to a more modern platform like Wayland, even if this means using many of your existing applications through an X11 protocol wrapper. With Fedora starting to default to Wayland for all applications, we can hope that most software will soon stop requiring the legacy X11 layer.
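A quick way to check which display server your current session is using (most modern sessions set this variable):

$ echo $XDG_SESSION_TYPE      # prints "x11" or "wayland"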

2: Use two different browsers

This is the easiest to do, but it offers only minor security benefits. Not all browser compromises give an attacker full, unfettered access to your system — sometimes they are limited to allowing one to read local browser storage, steal active sessions from other tabs, capture input entered into the browser, etc. Using two different browsers, one for work and high-security sites and another for everything else, will help prevent minor compromises from giving attackers access to the whole cookie jar. The main inconvenience will be the amount of memory consumed by two different browser processes.

Here’s what we on The Linux Foundation sysadmin team recommend:

Firefox for work and high security sites

Use Firefox to access work-related sites, where extra care should be taken to ensure that data like cookies, sessions, login information, keystrokes, etc., most definitely does not fall into attackers’ hands. You should NOT use this browser to access any sites except a select few. You should install the following essential Firefox add-ons:

NoScript

• NoScript prevents active content from loading, except from user-whitelisted domains. It is a great hassle to use with your default browser (though it offers really good security benefits), so we recommend enabling it only in the browser you use to access work-related sites.

Privacy Badger  

• EFF’s Privacy Badger will prevent most external trackers and ad platforms from being loaded, which will help avoid compromises on these tracking sites from affecting your browser (trackers and ad sites are very commonly targeted by attackers, as they allow rapid infection of thousands of systems worldwide).

HTTPS Everywhere

• This EFF-developed Add-on will ensure that most of your sites are accessed over a secure connection, even if a link you click is using http:// (great to avoid a number of attacks, such as SSL-strip).

Certificate Patrol is also a nice-to-have tool that will alert you if the site you’re accessing has recently changed its TLS certificate — especially if the certificate wasn’t nearing its expiration date or if the site is now using a different certification authority. It helps alert you when someone is trying to man-in-the-middle your connection, but it generates a lot of benign false positives.

You should leave Firefox as your default browser for opening links, as NoScript will prevent most active content from loading or executing.

Chrome/Chromium for everything else

Chromium developers are ahead of Firefox in adding a lot of nice security features (at least on Linux), such as seccomp sandboxes, kernel user namespaces, etc, which act as an added layer of isolation between the sites you visit and the rest of your system.

Chromium is the upstream open-source project, and Chrome is Google’s proprietary binary build based on it (insert the usual paranoid caution about not using it for anything you don’t want Google to know about).

It is recommended that you install Privacy Badger and HTTPS Everywhere extensions in Chrome as well and give it a distinct theme from Firefox to indicate that this is your “untrusted sites” browser.

3: Use Firejail

Firejail is a project that uses Linux namespaces and seccomp-bpf to create a sandbox around Linux applications. It is an excellent way to help build additional protection between the browser and the rest of your system. You can use Firejail to create separate isolated instances of Firefox to use for different purposes — for work, for personal but trusted sites (such as banking), and one more for casual browsing (social media, etc).

Firejail is most effective on Wayland; under X11, you should also use its X11-isolation mechanisms (the --x11 flag). To start using Firejail with Firefox, please refer to the documentation provided by the project:

Firefox Sandboxing Guide
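For illustration, a minimal sketch of launching two isolated Firefox instances under Firejail; the profile names here are hypothetical and must already exist (you can create them with firefox -P):

$ firejail --name=work firefox -P work --no-remote &
$ firejail --name=casual firefox -P casual --no-remote &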

4: Fully separate your work and play environments via virtualization

This step is a bit paranoid, but as I’ve said (many times) before, security is just like driving on the highway — anyone going slower than you is an idiot, while anyone driving faster than you is a crazy person.  

See the QubesOS project, which strives to provide a “reasonably secure” workstation environment by compartmentalizing your applications into separate, fully isolated VMs. You may also investigate SubgraphOS, which achieves similar goals using container technology (currently in Alpha).

Over the next few weeks in this ongoing Linux workstation security series, we’ll cover more best practices. Next time, join us to learn how to combat credential phishing with Fido U2F and how to generate secure passwords, with password manager recommendations.

Workstation Security

Read more:

Part 6: How to Safely and Securely Back Up Your Linux Workstation

Part 1: 3 Security Features to Consider When Choosing a Linux Workstation

Redefining the Tech that Powers Travel

We all know that the technology industry has been going through a period of incredible change. Rashesh Jethi, Head of Research & Development at Amadeus, began his keynote at the Open Networking Summit (ONS) with a story about how, when his grandfather went to university in India, the 760-mile journey took three days and involved a camel, a ship, and a train. Contrast this with Jethi’s 2,700-mile journey to ONS, which took six hours and during which he checked into his flight from his watch. The rapid evolution of technology is continuing to redefine the travel industry and how we approach travel.

Jethi said that five or six years ago, Amadeus had about 5,000 microservices, 1,500 databases, and a peak of about 80,000 transactions per second. In the time before continuous integration and continuous delivery, they still made about 600 application software changes every month, which equates to about 20 to 25 changes every single day. Clearly, that was not going to scale with the amount of change that was coming. Over a couple of years, they completely virtualized their infrastructure as a service, using VMware Integrated OpenStack on the compute side and NSX on the networking side, with about 90 percent of their servers running Linux. This technology change has drastically improved their time to market, from 3 weeks down to 20 minutes to deploy a new server.

After solving some of the technical challenges, they had another problem, which Jethi attributes to you and me, and to all of us on our phones and tablets that are always connected thanks to ubiquitous networks. We are always out there checking whether we can get a good deal on our next planned vacation, and that kept increasing the transaction load and volumes they had to deal with, particularly on the front end. With all of these networked devices, they have grown from 80,000 to a million transactions per second. Jethi said it was clear that just virtualizing their infrastructure was not going to be enough. They had to move to a model where they could deploy the application as a whole, with all its dependencies, to instances that could be managed as clusters.

Jethi describes this as the second phase in their journey to move and build their platform as a service layer called Amadeus Cloud Services. To do this, they have been working with Red Hat and OpenShift using Docker to containerize their applications and Kubernetes for deployment, scaling, and management of those containers. This has allowed them to scale up and down with elastic scaling and self-healing where if one particular cluster flames out, it gets instantiated somewhere else and life goes on. “The more our teams are able to worry less about scaling of the infrastructure, … the more we are able to actually focus on specific problems that our industry and our customer is facing,” says Jethi.
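For readers unfamiliar with the mechanics, elastic scaling of this kind looks roughly like the following in a generic Kubernetes cluster (a hedged sketch with a hypothetical deployment name, not Amadeus’s actual configuration):

$ kubectl autoscale deployment frontend --min=3 --max=50 --cpu-percent=70
$ kubectl get hpa     # watch the horizontal pod autoscaler react to load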

Watch the video to learn more about how Amadeus is redefining the technology that powers travel.

https://www.youtube.com/watch?v=jV0kAt64yy0?list=PLbzoR-pLrL6p01ZHHvEeSozpGeVFkFBQZ

Interested in open source SDN? The “Software Defined Networking Fundamentals” training course from The Linux Foundation provides system and network administrators and engineers with the skills to maintain an SDN deployment in a virtual networking environment. Download the sample chapter today!

Check back with Open Networking Summit for upcoming news on ONS 2018. 

 

See more presentations from ONS 2017:

Google’s Networking Lead on Challenges for the Next Decade

How to Password Protect a Vim File in Linux

Vim is a popular, feature-rich and highly-extensible text editor for Linux, and one of its special features is support for encrypting text files using various crypto methods with a password.

In this article, we will explain one of Vim’s simple usage tricks: password protecting a file using Vim in Linux. We will show you how to secure a file at the time of its creation as well as after opening it for modification.
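For a quick preview of the mechanism the article walks through, Vim’s built-in encryption can be driven like this (blowfish2 is the strongest of the available crypto methods in recent Vim releases):

$ vim -x secret.txt       # Vim prompts for an encryption key when the file is created

and, from inside an already-open file:

:setlocal cryptmethod=blowfish2
:X                        # set or change the encryption key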


Read more at Tecmint

Making Chips Smarter

It is no secret that artificial intelligence (AI) and machine learning have advanced radically over the last decade, yet somewhere between better algorithms and faster processors lies the increasingly important task of engineering systems for maximum performance—and producing better results.

The problem for now, says Nidhi Chappell, director of machine learning in the Datacenter Group at Intel, is that “AI experts spend far too much time preprocessing code and data, iterating on models and parameters, waiting for training to converge, and experimenting with deployment models. Each step along the way is either too labor- and/or compute-intensive.”

Read more at ACM

OpenStack Summit Emphasizes Emerging Deployment Models

The OpenStack Summit kicked off here today with multiple announcements and an emphasis on the evolution of the cloud deployment model. 

Jonathan Bryce, executive director of the OpenStack Foundation, said during his keynote that there has been a 44 percent year-over-year increase in the volume of OpenStack deployments, with OpenStack now running on more than 5 million compute cores around the world.

Although OpenStack has had success, the path has not been a straight line upward since NASA and Rackspace first started the project in June 2010.

“We’re now at a major inflection point in the cloud,” Bryce said.

Read more at eWeek

NIST to Security Admins: You’ve Made Passwords too Hard

Despite the fact that cybercriminals stole more than 3 billion user credentials in 2016, users don’t seem to be getting savvier about their password usage. The good news is that how we think about password security is changing as other authentication methods become more popular.

Password security remains a Hydra-esque challenge for enterprises. Require users to change their passwords frequently, and they wind up selecting easy-to-remember passwords. Force users to use numbers and special characters to select a strong password and they come back with passwords like Pa$$w0rd.

Read more at InfoWorld

Self Contained Systems (SCS): Microservices Done Right

Everybody seems to be building microservices these days. There are many different ways to split a system into microservices, and there appears to be little agreement about what microservices actually are – except for the fact that they can be deployed independently. Self-contained Systems are one approach that has been used by a large number of projects.

What are Self-contained Systems?

The principles behind Self-contained Systems (SCSs) are defined at the SCS website. Self-contained Systems have some specific characteristics:

  • Each SCS is an autonomous web application. Therefore, it includes the web UI as well as the logic and the persistence. So a user story will typically be implemented by changing just one SCS, even if it requires changes to the UI, logic, and persistence. To achieve this, each SCS has to have its own data storage, so it can modify its database schema independently of the others.

 

Read more at InfoQ