
How to Integrate Git into Your Linux Desktop

Ask a developer to name their most important tools and very often the reply will include Git. There’s a good reason for that: Git is one of the most widely used distributed version control systems. Git can be set up as a local repository, used on a LAN, or hosted with a remote service such as GitHub or GitLab. With Git you can do things like add access control to your code, display the contents of a repository via the web, and manage multiple repositories.

Most users (especially of the Linux variety) work with Git through the command line—with good reason: the Git command-line tools are second nature to Linux users, and most developers working on the Linux platform are already acclimated to the terminal, so it’s efficient. However, not every user wants to spend all of their Git time working within the terminal. Fortunately, for those users, there are plenty of GUI tools that can help you get your Git on. Not all of these tools are created equal, so which one you use will depend upon your needs.

I want to highlight three such tools—centered on file manager/desktop integration. How you use Git (and how you need it integrated into your desktop) will determine what tool is best for you.

First, we’ll talk about tools that integrate into your file manager. For this, I’m going to focus on the GNOME and KDE desktops (as these two offer the best integration environments). Next, we’ll discuss a very powerful tool that does a great job of integrating into the Linux desktop and connecting with your remote Git account.

With that said, let’s take a look at two very handy tools that integrate into file managers.

RabbitVCS-git

If you use GNOME, chances are you use Nautilus. If you use Nautilus and work with Git, you’re going to want to install one of the best Linux desktop integration tools for Git—RabbitVCS-git. RabbitVCS-git is an SCM client that integrates with the Nautilus file manager to manage local Git or SVN repositories.

To install RabbitVCS-git on Ubuntu (or an Ubuntu derivative), issue the following commands from the terminal window:

sudo add-apt-repository ppa:rabbitvcs/ppa
sudo apt-get update
sudo apt-get install rabbitvcs-nautilus 

Once installed, log out of GNOME and log back in. Open up Nautilus and navigate to any project folder, and right-click a blank spot to reveal the RabbitVCS-git contextual menu (Figure 1).

Figure 1: The RabbitVCS-git contextual menu within Nautilus.

At this point, you can begin to work with a very well done bit of Git integration with Nautilus.

Git in KDE

The KDE file manager, Dolphin, offers Git integration by way of a plugin. Before you attempt to use the plugin, you might have to install it. Open up a terminal window and issue the command:

sudo apt-get install dolphin-plugins

Once that command completes, open up Dolphin and then click the Control button > Configure Dolphin. Click on the Services tab, scroll down until you find Git, and click to enable (Figure 2).

Figure 2: Adding the Git plugin to Dolphin.

With Git checked, click OK and then, when prompted, click to restart Dolphin. One thing to note: installing the Dolphin plugins package does not install git itself. If you haven’t already installed the git package, you’ll need to do so before Dolphin can actually work with Git. You will also need to create a new repository from the command line and make a first commit. Once you’ve taken care of that, you will see the Git-related right-click context menu entries in Dolphin (Figure 3).
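The prerequisite setup described above can be sketched as follows; the repository path and commit details are hypothetical examples:

```shell
# git is a separate package; install it first if needed:
#   sudo apt-get install -y git

# Create a repository and make a first commit so Dolphin's Git
# context menu entries appear for the folder:
mkdir -p "$HOME/projects/demo" && cd "$HOME/projects/demo"
git init
echo "# demo" > README.md
git add README.md
git -c user.name="Demo User" -c user.email="demo@example.com" \
    commit -m "Initial commit"
```

After this, browsing to the folder in Dolphin should show the Git entries in the right-click menu.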

Figure 3: The Dolphin context menu showing our new Git entries.

From that context menu, you can check out, show local changes, commit, create tags, push, and pull.

SparkleShare

Now we’re going to venture into the realm of something a bit more powerful than simple file manager integration. The tool I want to demonstrate is SparkleShare, a self-hosted service that offers file syncing/sharing, version control, and client-side encryption, and (most relevant here) can connect and sync with your GitHub account.

SparkleShare is available from within the standard repositories, so to install (I’ll be demonstrating this on Linux Mint, using the Cinnamon desktop), the following steps will do the trick:

  1. Open a terminal window.

  2. Update apt with the command sudo apt-get update.

  3. Type your sudo password and hit Enter.

  4. Once the update completes, issue the command sudo apt-get install -y sparkleshare.

  5. Allow the installation to finish.

Once the installation is done, go to your desktop menu and search for the SparkleShare entry. Upon first run, you will be prompted for a name and email address (Figure 4).

Figure 4: Adding a name and email to SparkleShare.

Click Continue and then either view the tutorial or click to skip. You will be given a unique Client ID (an ssh-rsa key). Copy and save that key. Click the Finish button. Before you continue on with the SparkleShare GUI setup tool, you need to configure your GitHub account with your SparkleShare ssh-rsa pub key. To do this, follow these steps:

  1. Open a terminal window.

  2. Issue the command cd ~/.config/sparkleshare.

  3. Find the name of your .pub key with the command ls (it will end in .pub).

  4. Open the pub key with a text editor and copy the contents.

  5. Open your GitHub account in your desktop browser.

  6. Go to Settings | SSH and GPG keys.

  7. Click New SSH Key.

  8. Title the key SparkleShare.

  9. Copy the contents of your SparkleShare pub key into the Key text area.

  10. Click Add SSH Key.
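Steps 1 through 4 above can be collapsed into a couple of commands. The exact key filename under ~/.config/sparkleshare varies per machine, so this sketch globs for it rather than hard-coding a name:

```shell
# Locate and print the SparkleShare public key so it can be
# pasted into GitHub's "Key" text area.
key=$(ls "$HOME"/.config/sparkleshare/*.pub 2>/dev/null | head -n 1)
if [ -n "$key" ]; then
  cat "$key"
else
  echo "No SparkleShare public key found; run SparkleShare once first."
fi
```

On a desktop with xclip installed, `cat "$key" | xclip -selection clipboard` copies the key straight to the clipboard.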

With the key in place, you can finish up the SparkleShare GitHub connection. You will see a new icon in your system tray; click that icon and then select SparkleShare > Add Hosted Project. Select GitHub from the list and then fill out the Remote Path section in the form /gitusername/repository (Figure 5).

Figure 5: Connecting SparkleShare to your GitHub account.

SparkleShare will automatically sync the repository to your desktop, where you can start to work locally on your project, knowing the files will seamlessly sync back to your GitHub account.

Git integration made easy

And there you have it: Git integration into your Linux desktop made easy. If you’re a developer who works on the Linux desktop and uses Git, you’ll want to try one of these three tools. Yes, there are full-blown Git GUIs for Linux (such as Giggle, Git GUI, Git-Cola, SmartGit, and many more), but if you’re looking for easy file manager or desktop integration, look no further than these options.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Microsoft Wants to Make Blockchain Networks Enterprise-Ready With Its New Coco Framework

Interest in blockchains is at an all-time high, but there are still plenty of technical issues to solve, especially for enterprises that want to adopt this technology for smart contracts and other use cases. For them, issues like throughput, latency, governance and confidentiality are still major stumbling blocks for using blockchains. With its new Coco Framework, Microsoft wants to solve these issues and make blockchains more suitable for the enterprise.

In an interview earlier this week, Microsoft’s CTO for Azure (and occasional novelist) Mark Russinovich told me the company is seeing a lot of interest in blockchain technology among its users. They like the general idea of a distributed ledger, but a system that can only handle a handful of transactions a second doesn’t work for them — what they want is a technology that can handle a thousand or more transactions per second.

The Coco Framework solves these fundamental issues with blockchains by introducing a trusted execution environment (TEE). 

Read more at TechCrunch

How Captive Portals Interfere With Wireless Security and Privacy

If you have ever wanted to use the wifi at a coffee shop or library, you have probably had to click through a screen to do it. This screen might have shown you the network’s Terms of Service and prompted you to click an “I agree” button. Depending on where you were, it might have asked you for information about yourself, like your email, social media accounts, room number (in a hotel), account number (in a library), or other identifying information. Sometimes you even have to watch a short video or ad before wifi access is granted.

These kinds of screens are called captive portals, and they interfere with wireless security without providing many user benefits.

Read more at EFF

How Kubernetes Certificate Authorities Work

Today, let’s talk about Kubernetes private/public keys & certificate authorities!

This blog post is about how to take your own requirements about how certificate authorities + private keys should be organized and set up your Kubernetes cluster the way you need to.

The various Kubernetes components have a TON of different places where you can put in a certificate/certificate authority. When we were setting up a cluster I felt like there were like 10 billion different command line arguments for certificates and keys and certificate authorities and I didn’t understand how they all fit together.

Read more at Julia Evans

Open Source Leaders: Solomon Hykes and the Docker Revolution

Hykes likes to try new ideas and new things. “If they work I keep it. If it doesn’t work I try something else,” said Hykes. At Docker, they call themselves builders. “We like to build new things and if they’re useful we build more of them and if they’re not useful we try something else. It’s almost like a way of life.”

He believes that the open-source community traditionally has been a home for that kind of approach. It’s always been attractive for builders to work in an environment where there’s a direct feedback loop.

“It’s you, your code and either it works or it doesn’t. Either people adopt it or not. There’s less layers of indirection and ambiguity. I think that’s attractive to a lot of open-source contributors. We’re trying to follow that philosophy,” said Hykes.

Read more at The New Stack

Top 4 Reasons I Use dwm for my Linux Window Manager

I like minimalistic views. If I could run everything in a terminal I would. It’s free from shiny stuff that hogs my resources and distracts my feeble mind. I also grow tired of resizing and moving windows, never getting them to align perfectly.

On my quest for minimalism, I grew fond of Xfce and used it as my main desktop environment for years on my Linux computers. Then, one day I came across a video of Bryan Lunduke talking about the awesome window manager he used called Awesome. It neatly arranges all of your windows for you, and so, sounded like just what I wanted. I tried it out but didn’t get the hang of the configuration needed to tweak it into my liking. So, I moved on and discovered xmonad, but I had a similar result. It worked fine but I couldn’t get around the Haskell part to really turn it into my perfect desktop.

Read more at OpenSource.com

Linux Foundation Open Source Summit Talk: “Developing an Open Source Strategy — The First 90 Days”

“Open source culture” is something many of us who work with the open source community, either directly or indirectly, believe we have a handle on. There’s a general presumption that it should center on sharing ideas and contributing to a broader ideal of building better software. But how should a business go through a transformative journey to an open source culture in 90 days?

At the forthcoming Open Source Summit in Los Angeles, Nithya A. Ruff, who directs the open source practice for CATV provider and content producer Comcast, will speak on the subject of building an open source culture within organizations in a 90-day time span. We spoke with Ruff about her upcoming talk.

When you join a company to build the OSS strategy, what are the factors that you consider first?

There really are three factors to look at. First, what business is the company in, and how can open source be an innovation driver in that business? Your open source strategy needs to align with the company business and software strategy, and not be separate.

Read more at The New Stack

How to Calculate Network Addresses with ipcalc

The math behind IP addresses is convoluted. Our nice IPv4 addresses start out as 32-bit binary numbers, which are then converted to base 10 numbers in four 8-bit fields. Decimal numbers are easier to manage than long binary strings; still, calculating address ranges, netmasks, and subnets is a bit difficult and error-prone, except for the brainiacs who can do binary conversions in their heads. For the rest of us, meet ipcalc and ipv6calc.

ipcalc is for IPv4 networks, and ipv6calc is for IPv6 networks. Today, we’ll play with IPv4, and next week IPv6.

You must understand Classless Inter-Domain Routing (CIDR) as this is fundamental to IP addressing; if you don’t, please look it up. (See Practical Networking for Linux Admins: Real IPv6 for a concise description.)

ipcalc

Run ipcalc with your IP address to see everything you need to know:

$ ipcalc 192.168.0.135
Address:   192.168.0.135        11000000.10101000.00000000. 10000111
Netmask:   255.255.255.0 = 24   11111111.11111111.11111111. 00000000
Wildcard:  0.0.0.255            00000000.00000000.00000000. 11111111
=>
Network:   192.168.0.0/24       11000000.10101000.00000000. 00000000
HostMin:   192.168.0.1          11000000.10101000.00000000. 00000001
HostMax:   192.168.0.254        11000000.10101000.00000000. 11111110
Broadcast: 192.168.0.255        11000000.10101000.00000000. 11111111
Hosts/Net: 254                   Class C, Private Internet

What I especially like about ipcalc is seeing the binary form of IP addresses, which clearly shows what netmasks do. Netmasks don’t make much sense in dotted decimal notation, but in binary they are as clear as can be. A netmask masks off the network portion of an address, and in this example it’s plain that the 24-bit netmask covers the first three octets. Those first three octets make up the network ID, and the final octet is the host address.

ipcalc tells us the network range (192.168.0.0/24), the first and last available host addresses (192.168.0.1 and 192.168.0.254), and that it’s a Class C private network. Each 8-bit field contains values ranging from 0-255. The very first value in the host field, 0, is always reserved as the network address, and the last value, 255, is always reserved for the broadcast address, so you cannot use these as host addresses. ipcalc reports only available addresses, so in this example, that is 254 instead of 256.
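The usable-host count ipcalc reports follows directly from the prefix length: a /N network holds 2^(32-N) addresses, and subtracting the network and broadcast addresses leaves 2^(32-N) - 2 usable hosts. A quick shell check:

```shell
# Usable hosts in a /24: 2^(32-24) - 2 = 254
prefix=24
echo $(( (1 << (32 - prefix)) - 2 ))   # prints 254
```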

Classful Networking

IPv4 networks have five classes: A, B, and C, which we use all the time; D, the multicast class; and E, which is experimental and reserved for future use. With the advent of IPv6, it’s likely Class E will never be used. A picture is worth a thousand words, and the following table, adapted from the Classful network article on Wikipedia, shows the relationship between decimal and binary.

In the following table:

  • n indicates a bit used for the network ID.
  • H indicates a bit used for the host ID.
  • X indicates a bit without a specified purpose.
Class A
  0.  0.  0.  0 = 00000000.00000000.00000000.00000000
127.255.255.255 = 01111111.11111111.11111111.11111111
                  0nnnnnnn.HHHHHHHH.HHHHHHHH.HHHHHHHH

Class B
128.  0.  0.  0 = 10000000.00000000.00000000.00000000
191.255.255.255 = 10111111.11111111.11111111.11111111
                  10nnnnnn.nnnnnnnn.HHHHHHHH.HHHHHHHH

Class C
192.  0.  0.  0 = 11000000.00000000.00000000.00000000
223.255.255.255 = 11011111.11111111.11111111.11111111
                  110nnnnn.nnnnnnnn.nnnnnnnn.HHHHHHHH

Class D
224.  0.  0.  0 = 11100000.00000000.00000000.00000000
239.255.255.255 = 11101111.11111111.11111111.11111111
                  1110XXXX.XXXXXXXX.XXXXXXXX.XXXXXXXX

Class E
240.  0.  0.  0 = 11110000.00000000.00000000.00000000
255.255.255.255 = 11111111.11111111.11111111.11111111
                  1111XXXX.XXXXXXXX.XXXXXXXX.XXXXXXXX

In the first three classes, note the leading bits, which are the most significant bits. Class A has a leading bit of 0, which leaves the remaining 7 bits of the first octet for the network ID and gives a first-octet range of 0-127. Class B leads with 10, which converts to 128-191, and Class C leads with 110, 192-223.
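The leading-bit boundaries above are easy to mirror in a small shell function that classifies an address by its first octet (the function name is just an illustration):

```shell
# Classify an IPv4 address by the value of its first octet,
# following the leading-bit boundaries in the table above.
ip_class() {
  local octet=$1
  if   [ "$octet" -le 127 ]; then echo A   # leading bit  0
  elif [ "$octet" -le 191 ]; then echo B   # leading bits 10
  elif [ "$octet" -le 223 ]; then echo C   # leading bits 110
  elif [ "$octet" -le 239 ]; then echo D   # leading bits 1110
  else                            echo E   # leading bits 1111
  fi
}

ip_class 192   # prints C
```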

You can calculate the total size of each class from the number of leading bits:

$ ipcalc 0.0.0.0/1 
[...]
Hosts/Net: 2147483646  Class A, In Part Private Internet

$ ipcalc 128.0.0.0/2
[...]
Hosts/Net: 1073741822  Class B, In Part APIPA

$ ipcalc 192.0.0.0/3
[...]
Hosts/Net: 536870910   Class C, In Part Private Internet

APIPA is Automatic Private IP Addressing, the range from 169.254.0.1 through 169.254.255.254. This is used primarily in Windows networks; DHCP clients automatically assign themselves one of these addresses when there is no DHCP server.

Private Networks

Most of us are familiar with the private IPv4 network ranges, because we can use these freely on our LANs without requesting globally unique address allocations from a service provider.

Class A  10.0.0.0/8
Class B  172.16.0.0/12
Class C  192.168.0.0/16

Plug any of these into ipcalc and see what it tells you.
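As a preview of what ipcalc will report, the usable host counts for those three ranges can be computed straight from the prefix lengths:

```shell
# Usable hosts = 2^(32-prefix) - 2 for each private range:
for prefix in 8 12 16; do
  echo "/$prefix: $(( (1 << (32 - prefix)) - 2 )) usable hosts"
done
```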

Subnetting

Thanks to CIDR, we can finely slice and dice A, B, or C networks into multiple subnets, and ipcalc makes subnetting easy. Suppose you want to carve two subnets out of a Class B network; just tell ipcalc the network segment you want to divide and how many hosts you need in each subnet. In this example, we want 15 hosts in each of two subnets:

$ ipcalc 172.16.1.0/24 -s 15 15
Address:   172.16.1.0           10101100.00010000.00000001. 00000000
Netmask:   255.255.255.0 = 24   11111111.11111111.11111111. 00000000
Wildcard:  0.0.0.255            00000000.00000000.00000000. 11111111
=>
Network:   172.16.1.0/24        10101100.00010000.00000001. 00000000
HostMin:   172.16.1.1           10101100.00010000.00000001. 00000001
HostMax:   172.16.1.254         10101100.00010000.00000001. 11111110
Broadcast: 172.16.1.255         10101100.00010000.00000001. 11111111
Hosts/Net: 254                   Class B, Private Internet

1. Requested size: 15 hosts
Netmask:   255.255.255.224 = 27 11111111.11111111.11111111.111 00000
Network:   172.16.1.0/27        10101100.00010000.00000001.000 00000
HostMin:   172.16.1.1           10101100.00010000.00000001.000 00001
HostMax:   172.16.1.30          10101100.00010000.00000001.000 11110
Broadcast: 172.16.1.31          10101100.00010000.00000001.000 11111
Hosts/Net: 30                    Class B, Private Internet

2. Requested size: 15 hosts
Netmask:   255.255.255.224 = 27 11111111.11111111.11111111.111 00000
Network:   172.16.1.32/27       10101100.00010000.00000001.001 00000
HostMin:   172.16.1.33          10101100.00010000.00000001.001 00001
HostMax:   172.16.1.62          10101100.00010000.00000001.001 11110
Broadcast: 172.16.1.63          10101100.00010000.00000001.001 11111
Hosts/Net: 30                    Class B, Private Internet

Needed size:  64 addresses.
Used network: 172.16.1.0/26
Unused:
172.16.1.64/26
172.16.1.128/25

I think that is pretty darned fabulous. Why does the example use 172.16.1.0/24? Trial and error: I ran ipcalc with different CIDR ranges until I found one close to the size I needed. 172.16.1.0/24 gives 254 host addresses, leaving room to grow.
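Instead of trial and error, a short loop can find the longest prefix (the smallest subnet) that still fits a required host count; the 200-host target here is just an example:

```shell
# Walk from /30 toward /8 and stop at the first prefix whose
# usable-host count (2^(32-prefix) - 2) meets the requirement.
hosts_needed=200
for prefix in $(seq 30 -1 8); do
  if [ $(( (1 << (32 - prefix)) - 2 )) -ge "$hosts_needed" ]; then
    echo "A /$prefix subnet fits $hosts_needed hosts"
    break
  fi
done   # prints: A /24 subnet fits 200 hosts
```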

16,777,214 Loopback Addresses

Did you know that the loopback address range is an entire Class A /8 network?

$ ipcalc 127.0.0.0/8
Address:   127.0.0.0            01111111. 00000000.00000000.00000000
Netmask:   255.0.0.0 = 8        11111111. 00000000.00000000.00000000
Wildcard:  0.255.255.255        00000000. 11111111.11111111.11111111
=>
Network:   127.0.0.0/8          01111111. 00000000.00000000.00000000
HostMin:   127.0.0.1            01111111. 00000000.00000000.00000001
HostMax:   127.255.255.254      01111111. 11111111.11111111.11111110
Broadcast: 127.255.255.255      01111111. 11111111.11111111.11111111
Hosts/Net: 16777214              Class A, Loopback

Try it for yourself — you can ping any address in this range. ping localhost returns 127.0.0.1 because most Linux distributions configure this in /etc/hosts.

Sorry, but that’s all the fun we can have today. Come back next week to learn how to tame those massive unwieldy IPv6 addresses with ipv6calc.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

3 Open Source Projects That Make Kubernetes Easier

From cluster state management to snapshots and DR, companion tools from Heptio, Kubed, and Kubicorn aim to fill the gaps in Kubernetes.

Clearly, Kubernetes is an elegant solution to an important problem. Kubernetes allows us to run containerized applications at scale without drowning in the details of balancing loads, networking containers, ensuring high availability for apps, or managing updates or rollbacks. So much complexity is hidden safely away. 

But using Kubernetes is not without its challenges. Getting up and running with Kubernetes takes some work, and many of the management and maintenance tasks around Kubernetes are downright thorny. 

Read more at InfoWorld

How to Build Clusters for Scientific Applications-as-a-Service

How can we make a workload easier to run on the cloud? In a previous article, we presented the lay of the land for HPC workload management in an OpenStack environment. A substantial part of the work done to date focuses on automating the creation of a software-defined workload management environment: SLURM-as-a-Service.

SLURM is only one narrow (but widely used) use case in a broad ecosystem of multi-node scientific application clusters, so let’s not over-specialize. This raises the question: what is needed to make a generally useful, flexible system for Cluster-as-a-Service?

What do users really want?

A user of the system will not care how elegantly the infrastructure is orchestrated:

  • Users will want support for the science tools they need, and when new tools are needed, the users will want support for those too.
  • Users will want to get started with minimal effort. The learning curve they must climb to deploy tools needs to be shallow.
  • Users will want easy access to the datasets upon which their research is based.

Read more at OpenStack SuperUser