CNCF’s CloudEvents Spec Could Facilitate Interoperability across Serverless Platforms

The Cloud Native Computing Foundation (CNCF) wants to foster greater interoperability between serverless platforms through its release of the CloudEvents specification. The project is at version 0.1, and its backers hope it will be approved as a CNCF sandbox project in June.

The CloudEvents specification (formerly called OpenEvents) provides a path that would allow any two components to transfer an event, regardless of whether they are functions, apps, containers, or services, said Doug Davis, a senior technical staff member at IBM and a member of the CNCF serverless working group.

“Much in the same way HTTP — in its most basic form — helped interoperability between any two components by standardizing how to represent well-defined metadata about the message being transferred, CloudEvents is doing the same thing,” said Davis. “Defining the common metadata will aid in the transferring of an event from any producer to any consumer.”
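As a rough illustration of that common metadata, a CloudEvents event serialized as JSON might look like the sketch below (the attribute names follow the 0.1 draft spec; the event type and payload are invented for this example):

{
  "cloudEventsVersion": "0.1",
  "eventType": "com.example.object.created",
  "source": "/myservice/objects",
  "eventID": "A234-1234-1234",
  "eventTime": "2018-05-01T17:31:00Z",
  "contentType": "application/json",
  "data": { "objectName": "report.pdf" }
}

Any producer that emits this envelope and any consumer that understands it can exchange events without agreeing on anything else in advance.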

Read more at The New Stack

Build a Real VPN with OpenVPN

Learn how to set up your own VPN in this tutorial from our archives.

A real, genuine, honest-to-gosh virtual private network (VPN) is an encrypted network-to-network virtual tunnel that connects trusted endpoints. It is not an HTTPS web portal that trusts all clients. Let us build a proper strong VPN with OpenVPN.

The definition of VPN has been stretched beyond recognition with the proliferation of HTTPS VPNs, which trust all clients. These work for shopping sites, which permit only limited client access. Many are sold to businesses as “Easy client-less configuration!” to provide remote employee access. But I do not trust them as extensions of my networks. A VPN connects two networks, such as branch offices, or a remote worker to an office server. A real VPN requires that both the server and clients authenticate to each other.

Setting up a VPN where both servers and clients authenticate to each other is a bit of work, and that is why “Easy client-less configuration!” sells. But it’s really not that hard to set up a proper strong OpenVPN server. You need two hosts on different networks for a nice OpenVPN test lab, such as a couple of virtual machines, or a wireless and a wired machine. All hosts need OpenVPN and Easy-RSA installed.

Set up PKI

First, we’ll create a proper public key infrastructure (PKI) on the server. Your OpenVPN server is the machine that external users will connect to. As with all Linux servers, “server” refers to function; a computer can be both a server and a client. A PKI offers several advantages: you have a Certificate Authority (CA), which simplifies key distribution and management, and you can revoke client certificates at the server. When you don’t use a CA, the server needs a copy of every client certificate. A CA doesn’t need all those client certificates; it only needs to know whether the client certificates have been signed by the CA. (OpenVPN also supports static keys, which are fine for one or two users; see How to Set Up Secure Remote Networking with OpenVPN on Linux, Part 1.)

Remember, private keys must always be protected and never shared, while public keys are meant to be shared. In OpenVPN, the public key is called a certificate and has a .crt extension, and the private key is called a key, with a .key extension.

In the olden days, OpenVPN came with nice helper scripts to set this up: the Easy-RSA scripts. These are now maintained as a separate project, so if your Linux distribution doesn’t package them you can get them fresh from GitHub. Browse the Releases page to get ready-to-use tarballs. You might want to download them from GitHub anyway, to get the current 3.0.1 release. This release dates back to October 2015, but a lot of Linux distributions are stuck on the old 2.x releases. Let’s go ahead and use the new release.
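For example, fetching and unpacking the 3.0.1 release might look like this (the exact URL and tarball name are assumptions; check the Releases page for the current file, and note that the directory is renamed to match the paths used below):

$ cd /etc/openvpn
$ sudo wget https://github.com/OpenVPN/easy-rsa/releases/download/3.0.1/EasyRSA-3.0.1.tgz
$ sudo tar -xzf EasyRSA-3.0.1.tgz
$ sudo mv EasyRSA-3.0.1 easyrsa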

With the tarball unpacked into your /etc/openvpn directory, change to your Easy-RSA directory, then run this command to initialize your new PKI:

$ sudo ./easyrsa init-pki

init-pki complete; you may now create a CA or requests.
Your newly created PKI dir is: /etc/openvpn/easyrsa/pki

Now go ahead and create your new CA:

$ sudo ./easyrsa build-ca
Generating a 2048 bit RSA private key
........................................................+++
................+++
writing new private key to '/etc/openvpn/easyrsa/pki/private/ca.key.tJXulR8Ery'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Common Name (eg: your user, host, or server name) [Easy-RSA CA]:server.net

CA creation complete and you may now import and sign cert requests.
Your new CA certificate file for publishing is at:
/etc/openvpn/easyrsa/pki/ca.crt

You will copy your new ca.crt into /etc/openvpn on all client machines. The next steps take place on your client machine: creating a PKI environment, the client’s private key, and a signing request. Replace “AliceRemote” with whatever name you want to identify the client:

$ sudo ./easyrsa init-pki
$ sudo ./easyrsa gen-req AliceRemote
[...]
Keypair and certificate request completed. Your files are:
req: /etc/openvpn/easyrsa/pki/reqs/AliceRemote.req
key: /etc/openvpn/easyrsa/pki/private/AliceRemote.key

Copy the .req file to your server, import it, and then sign it:

$ sudo ./easyrsa import-req /media/carla/4gbstik/AliceRemote.req AliceRemote
$ sudo ./easyrsa sign-req client AliceRemote
[..]
Certificate created at: /etc/openvpn/easyrsa/pki/issued/AliceRemote.crt

Copy the signed certificate to the client machine. Now both server and client have all the necessary certificates and key pairs.
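For example, with scp (the user and host names are placeholders for your own), run this on the server:

$ scp /etc/openvpn/easyrsa/pki/issued/AliceRemote.crt alice@client:~

Then, on the client, move the certificate into place so the client configuration below can find it:

$ sudo mv ~/AliceRemote.crt /etc/openvpn/easyrsa/pki/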

If you plan to use TLS, you need to generate Diffie-Hellman parameters on the server. You probably will, so go ahead and do it:

$ sudo ./easyrsa gen-dh

Server Configuration

Look in your openvpn/examples/ directory for configuration file examples. This is a complete example server configuration, and it goes in /etc/openvpn/server.conf. Edit the commented options for your own setup:

port 1194
proto udp
dev tun
keepalive 10 120
status openvpn-status.log
verb 3
persist-tun
persist-key
ifconfig-pool-persist /etc/openvpn/ipp.txt

# Your server keys. The server needs its own certificate and key
# signed by the CA (generate them as you did for the client, e.g.
# "gen-req server nopass" then "sign-req server server"); the
# names below assume a certificate named "server".
ca /etc/openvpn/easyrsa/pki/ca.crt
cert /etc/openvpn/easyrsa/pki/issued/server.crt
key /etc/openvpn/easyrsa/pki/private/server.key
dh /etc/openvpn/easyrsa/pki/dh.pem

# Set server mode, and define a virtual pool of IP
# addresses for clients to use. Use any subnet
# that does not collide with your existing subnets.
server 192.168.10.0 255.255.255.0

# Set up route(s) to subnet(s) behind
# OpenVPN server
push "route 192.168.11.0 255.255.255.0"
push "route 192.168.12.0 255.255.255.0"

Client Configuration

Use this on your client. This example is /etc/openvpn/client.conf:

client
dev tun
proto udp
resolv-retry infinite
nobind
persist-key
persist-tun

# The hostname/IP address and port of the server
remote servername 1194

# Your certificates and keys
cert /etc/openvpn/easyrsa/pki/AliceRemote.crt
ca /etc/openvpn/easyrsa/pki/ca.crt
key /etc/openvpn/easyrsa/pki/private/AliceRemote.key

Connecting to the Server

Start OpenVPN on the server from the command line by referencing the configuration file, for example openvpn /etc/openvpn/server.conf. Start it on the client in the same way, for example openvpn /etc/openvpn/client.conf. You may name your configuration files anything you want, and you may create multiple files for multiple server and client configurations. Once your OpenVPN tunnel is established it’s just like having a shielded Ethernet cable to carry your session safely over untrusted networks, and you can log into your usual programs just as though you were sitting next to the server.
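A minimal sketch of both ends (OpenVPN normally needs root to create the tun device); run the first command on the server and the second on the client:

$ sudo openvpn /etc/openvpn/server.conf
$ sudo openvpn /etc/openvpn/client.conf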

This should get you up and running. There are many configuration and command-line options for OpenVPN; see the OpenVPN Documentation. Easy-RSA has a lot of good howtos on GitHub, and more are bundled in the tarball.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

How the Four Components of a Distributed Tracing System Work Together

Ten years ago, essentially the only people thinking hard about distributed tracing were academics and a handful of large internet companies. Today, it’s turned into table stakes for any organization adopting microservices. The rationale is well-established: microservices fail in surprising and often spectacular ways, and distributed tracing is the best way to describe and diagnose those failures.

That said, if you set out to integrate distributed tracing into your own application, you’ll quickly realize that the term “Distributed Tracing” means different things to different people. Furthermore, the tracing ecosystem is crowded with partially-overlapping projects with similar charters. This article describes the four (potentially) independent components in distributed tracing, and how they fit together.

Distributed tracing: A mental model

Most mental models for tracing descend from Google’s Dapper paper. OpenTracing uses similar nouns and verbs, so we will borrow the terms from that project…

Read more at OpenSource.com

Introduction to Security and TLS

IoT (Internet of Things) is all about connecting to the internet, and even more about security. Without security and encrypted communication, anyone can potentially see what I send or receive. And this is especially bad if passwords or user names are sent unencrypted. So, encryption and secure communication are key. The solution is to use a connection based on the TLS (Transport Layer Security) protocol, which I want to use for my MQTT communication (see MQTT with lwip and NXP FRDM-K64F Board).

This article walks through the basic principles of secure communication using TLS, with MQTT in mind. TLS is the successor of SSL (Secure Sockets Layer), and the two are often named together (TLS/SSL). TLS (as the name indicates) provides encryption at the transport layer: the application layer does not have to implement the encryption itself. Instead, it configures the transport layer to use the encryption protocol.
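To make that concrete for MQTT: once the broker terminates TLS, a client often only needs a CA certificate and the TLS port. A sketch using the mosquitto client tools (the broker host, CA file, and topic are placeholders):

$ mosquitto_sub -h broker.example.com -p 8883 --cafile ca.crt -t sensors/temperature

The subscription logic is unchanged; only the transport-level options differ from a plaintext connection on port 1883.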

Read more at DZone

How and Why to Secure Your Linux System with VPN and Firejail

We have previously discussed VPNs and Firejail here on Linux.com, but here’s a quick refresher to help you remember why you would want to use these tools:

  • VPNs help protect your Internet traffic from prying eyes — such as those of your ISP, the wi-fi provider you happen to be using, or any malicious attackers who may be in control of various pieces of routing equipment between you and the resource you are trying to access. VPNs may also enable you to gain access to online content that is for some reason unavailable via your current online provider.

  • Firejail is a tool that helps set up additional sandboxing around your desktop applications to help further reduce the impact of accessing potentially malicious content online. It is most commonly used in conjunction with Firefox.

I am very fond of combining both VPN and Firejail on my travel laptop (where I cannot use QubesOS), but I have recently discovered that I was leaving myself exposed to online tracking via so-called “WebRTC leaks.” My VPN provider offers a convenient testing page to see how well protected my connection is, and they very helpfully alerted me to this problem.

WebRTC is an open-source protocol that allows establishing peer-to-peer Real-Time Communication (RTC) between two browsers. It is normally used for native audio and video conferencing that does not require any additional plug-ins or extensions, and works across different browsers and different platforms. If you’ve ever used Google Hangouts, you’ve relied on WebRTC.

What makes WebRTC leak your real IP address? As part of establishing the communication channel, both parties exchange their networking information in order to find the network route that offers the least amount of latency. So, WebRTC will tell the remote party all of your local IP addresses, in hopes that it will help establish a better communication channel. Obviously, if exposing your local IP address is specifically something you do not want, then this is a problem.

One way to plug this leak is to turn off WebRTC entirely. However, if you do this, then you will no longer be able to use online conferencing — and depending on your needs, this may not be what you want. Thankfully, if you’re already using Firejail, then you can benefit from its support for network namespaces in order to hide your local networking information from WebRTC.
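If you do decide to turn WebRTC off entirely in Firefox, it is a single preference; a sketch, where <your-profile> is a placeholder for your Firefox profile directory:

$ echo 'user_pref("media.peerconnection.enabled", false);' >> ~/.mozilla/firefox/<your-profile>/user.js

The same media.peerconnection.enabled setting can be toggled interactively via about:config.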

Setting up network namespaces manually is a bit of a chore, since in addition to the virtual interface, you will need to set up things like IP forwarding and DNS resolving (this script may help get you going, if you are interested). However, if you are using Fedora, then you should already have something you can use for this purpose: a virbr0 virtual bridge that is automatically available after the default workstation install.
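For reference, the kind of steps such a script automates looks roughly like this (a sketch; the interface names and addresses are invented, and IP forwarding, NAT, and DNS are still left to do):

$ sudo ip netns add vpnjail
$ sudo ip link add veth0 type veth peer name veth1
$ sudo ip link set veth1 netns vpnjail
$ sudo ip addr add 10.200.1.1/24 dev veth0
$ sudo ip link set veth0 up
$ sudo ip netns exec vpnjail ip addr add 10.200.1.2/24 dev veth1
$ sudo ip netns exec vpnjail ip link set veth1 up
$ sudo ip netns exec vpnjail ip route add default via 10.200.1.1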

Here’s what happens when you start Firefox inside a firejail and tell it to use virbr0 for its networking:

$ firejail --net=virbr0 firefox -no-remote

Interface    MAC            IP               Mask             Status
lo                          127.0.0.1        255.0.0.0        UP
eth0         x:x:x:x:x:x    192.168.124.38   255.255.255.0    UP

Default gateway 192.168.124.1

Firejail automatically obtains a private IP address inside the virtual networking range and sets up all the necessary routing information to be able to get online. And indeed, if I now look on my VPN provider’s verification page, they give me a clean bill of health.

I ended up writing a small wrapper that helps me bring up Firefox in various profiles — one I use for work, one I use for personal browsing, and one I bring up when I want a temporary junk profile for testing (a kind of “incognito mode” on steroids).
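A hypothetical version of that wrapper might be as small as this (the profile names are whatever you have previously created with firefox -P):

#!/bin/sh
# firefox-jail: start Firefox in a firejail with its own network namespace
PROFILE="${1:-personal}"
exec firejail --net=virbr0 firefox -no-remote -P "$PROFILE"

Invoked as firefox-jail work, firefox-jail personal, and so on.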

You can test if you are vulnerable to WebRTC leaks yourself on the browserleaks site. If there is anything showing up in the “Local IP Address” field, then you are potentially leaking your IP information online to people who can use it against you. Hopefully, you are now well-protected against this leak — but also against others that may use similar mechanisms of passing your local networking information to a remote adversary.

Kubernetes and Microservices: A Developers’ Movement to Make the Web Faster, Stable, and More Open

As web development has evolved, there has been a tendency to develop “monolithic” applications — that is, software that contains most or all parts of the code for a given company or service. Over time, those code bases have grown to massive sizes and become hugely complex, which has led to a wide array of problems.

Developing and maintaining such applications can take an enormous number of developers. Even for companies that have made the necessary investments and hired those developers, making any changes or updates can be cumbersome and take weeks. For others, the resources needed to build the technology can seem like an insurmountable challenge.

“Software has gotten a lot more complex,” said Ben Sigelman, cofounder and CEO of LightStep, a San Francisco-based startup that makes performance management tools for microservices. “It’s gotten a lot more powerful, but it crossed a threshold where the complexity of the code to deliver those features requires hundreds and hundreds of developers….”

Read more at VentureBeat

Tutorial: Git for Absolutely Everyone

Imagine you have a brand new project. Naturally, you plan to store all related files in a single new directory. As work progresses, these files will change. A lot. Things will get disorganized, even messy, and at some point even completely fubar. At that point, you would want to go back in time to the most recent not-messy, still-working version of your project — if only that were possible!

Well, thanks to git, it is. Version control starts when you install git on your computer and put that new project directory under its watch; git then keeps track of all the changes you make to any and all files in that directory. As things progress and you make additions and changes, git takes a “snapshot” of the current version each time you ask it to. And that, friends, is version control: make a small change, take a snapshot, make another small change, take a snapshot… and save all of these snapshots in chronological order. You can then use git to step back and forth as necessary through each version of your project directory.
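In git, those snapshots are called commits. A minimal sketch of the cycle (the file name and messages are invented, and <commit-hash> stands for an ID shown by git log):

$ git init myproject          # create the project directory and its repository
$ cd myproject
$ echo "first draft" > notes.txt
$ git add notes.txt           # tell git which changes the snapshot should include
$ git commit -m "snapshot 1"  # take the snapshot
$ git log --oneline           # list all snapshots, newest first
$ git checkout <commit-hash>  # step back to an earlier snapshot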

So when you screw up, git is like having a magic ability to go back in time to the last good version before you gaffed. Thus, version control. Git is not the only version control system out there, but it is probably the most widely used.

Read more at The New Stack

Docker for Desktop is Certified Kubernetes

“You are now Certified Kubernetes.” With this comment, Docker for Windows and Docker for Mac passed the Kubernetes conformance tests. Kubernetes has been available in Docker for Mac and Docker for Windows since January, having first been announced at DockerCon EU last year. But why is this important to the many of you who are using Docker for Windows and Docker for Mac?

Kubernetes is designed to be a platform that others can build upon. As with any similar project, the risk is that different distributions vary enough that applications aren’t really portable. The Kubernetes project has always been aware of that risk – and this led directly to forming the Conformance Working Group. The group owns a test suite that anyone distributing Kubernetes can run, and submit the results to attain official certification. This test suite checks that Kubernetes behaves like, well, Kubernetes; that the various APIs are exposed correctly and that applications built using the core APIs will run successfully. In fact, our enterprise container platform, Docker Enterprise Edition, achieved certification using the same test suite. You can find more about the test suite at https://github.com/cncf/k8s-conformance.
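For the curious, anyone can run the same suite against their own cluster; it is commonly driven with the Sonobuoy tool, roughly like this (a sketch; see the k8s-conformance repository for the authoritative submission instructions):

$ sonobuoy run --mode=certified-conformance   # launch the conformance tests
$ sonobuoy status                             # poll until the run completes
$ sonobuoy retrieve                           # download the results tarball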

Read more at Docker

Did You Know Linux Is in Your TV?

From humble beginnings, Linux has been adopted for everything from low-power electronics to supercomputers running in space. It is able to do this because of its versatility and the openness of the Linux community to entertain new use-cases. The multiplier effect of community software development allows companies and individuals in different industries to work together on the same software and do the things that are important to them.

Let’s look deeper into four interesting places you’ll find Linux.

In your TV

If you have a SmartTV, BluRay player, or set-top box from your internet provider, chances are you are streaming your home entertainment over Linux. Linux has become a leading embedded OS for SmartTVs.

Read more at OpenSource.com

In-Vehicle Computers Run Linux on Apollo Lake

Lanner’s Linux-friendly V3 Series of Apollo Lake-based in-vehicle computers includes V3G and V3S models with a -40 to 70°C operating range and MIL-STD-810G ruggedization. The V3S adds a third mini-PCIe slot and 4x PoE-ready GbE ports for IP cameras.



Lanner has launched the first two models in a rugged new V3 Series of “vehicle gateway controllers.” The V3G is designed for smart bus implementations, including fleet management and passenger information displays, while the similar, but more feature-rich, V3S is intended for video surveillance, recording, and analytics.

Both the V3G and V3S are equipped with the quad-core, 1.6GHz Atom x7-E3950 SoC from Intel’s Apollo Lake generation. They run Red Hat Enterprise Linux (RHEL) 5 and Fedora 14, with Linux kernel 2.6.18 or later, as well as Windows 10.

Read more at LinuxGizmos