
Protecting Code Integrity with PGP — Part 2: Generating Your Master Key

In this article series, we’re taking an in-depth look at using PGP and providing practical guidelines for developers working on free software projects. In the previous article, we provided an introduction to basic tools and concepts. In this installment, we show how to generate and protect your master PGP key.

Checklist

  1. Generate a 4096-bit RSA master key (ESSENTIAL)

  2. Back up the master key using paperkey (ESSENTIAL)

  3. Add all relevant identities (ESSENTIAL)

Considerations

Understanding the “Master” (Certify) key

In this and the next section, we’ll talk about the “master key” and “subkeys.” It is important to understand the following:

  1. There are no technical differences between the “master key” and “subkeys.”

  2. At creation time, we assign functional limitations to each key by giving it specific capabilities.

  3. A PGP key can have four capabilities.

    • [S] key can be used for signing

    • [E] key can be used for encryption

    • [A] key can be used for authentication

    • [C] key can be used for certifying other keys

  4. A single key may have multiple capabilities.

The key carrying the [C] (certify) capability is considered the “master” key because it is the only key that can be used to indicate relationship with other keys. Only the [C] key can be used to:

  • Add or revoke other keys (subkeys) with S/E/A capabilities

  • Add, change or revoke identities (uids) associated with the key

  • Add or change the expiration date on itself or any subkey

  • Sign other people’s keys for web of trust purposes

In the Free Software world, the [C] key is your digital identity. Once you create that key, you should take extra care to protect it and prevent it from falling into malicious hands.

Before you create the master key

Before you create your master key you need to pick your primary identity and your master passphrase.

Primary identity

Identities are strings using the same format as the “From” field in emails:

Alice Engineer <alice.engineer@example.org>

You can create new identities, revoke old ones, and change which identity is your “primary” one at any time. Since the primary identity is shown in all GnuPG operations, you should pick a name and address that are both professional and the most likely ones to be used for PGP-protected communication, such as your work address or the address you use for signing off on project commits.

Passphrase

The passphrase is used exclusively for encrypting the private key with a symmetric algorithm while it is stored on disk. If the contents of your .gnupg directory ever get leaked, a good passphrase is the last line of defense preventing the thief from impersonating you online, which is why it is important to choose it carefully.

A good guideline for a strong passphrase is 3-4 words from a rich or mixed dictionary that are not quotes from popular sources (songs, books, slogans). You’ll be using this passphrase fairly frequently, so it should be both easy to type and easy to remember.
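If you want a quick starting point, you can pull random words from a system dictionary and then adjust the result until it is easy to remember. This is only a sketch; it assumes a wordlist at /usr/share/dict/words, which exists on many but not all distros:

$ shuf -n4 /usr/share/dict/words | tr '\n' ' '

Treat the output as raw material: tweak the words or their order until the phrase sticks in your memory, and never reuse it anywhere else.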

Algorithm and key strength

Even though GnuPG has had support for Elliptic Curve crypto for a while now, we’ll be sticking to RSA keys, at least for a little while longer. While it is possible to start using ED25519 keys right now, it is likely that you will come across tools and hardware devices that will not be able to handle them correctly.

You may also wonder why the master key is 4096-bit, if later in the guide we state that 2048-bit keys should be good enough for the lifetime of RSA public key cryptography. The reasons are mostly social and not technical: master keys happen to be the most visible ones on the keychain, and some of the developers you interact with will inevitably judge you negatively if your master key has fewer bits than theirs.

Generate the master key

To generate your new master key, issue the following command, putting in the right values instead of “Alice Engineer:”

$ gpg --quick-generate-key 'Alice Engineer <alice@example.org>' rsa4096 cert

A dialog will pop up asking you to enter the passphrase. Then, you may need to move your mouse around or press some keys to generate enough entropy until the command completes.

Review the output of the command; it will look something like this:

pub   rsa4096 2017-12-06 [C] [expires: 2019-12-06]
     111122223333444455556666AAAABBBBCCCCDDDD
uid                      Alice Engineer <alice@example.org>

Note the long string on the second line — that is the full fingerprint of your newly generated key. Key IDs can be represented in three different forms:

  • Fingerprint, a full 40-character key identifier

  • Long, the last 16 characters of the fingerprint (AAAABBBBCCCCDDDD)

  • Short, last 8 characters of the fingerprint (CCCCDDDD)

You should avoid using 8-character “short key IDs” as they are not sufficiently unique.

At this point, I suggest you open a text editor, copy the fingerprint of your new key and paste it there. You’ll need to use it for the next few steps, so having it close by will be handy.
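If you prefer to stay in the terminal, another option is to keep the fingerprint in a shell variable and reuse it in the commands that follow. A minimal sketch, assuming the identity you used above and GnuPG’s colon-delimited listing format (the fingerprint lives in field 10 of the fpr record):

$ fpr=$(gpg --list-secret-keys --with-colons 'alice@example.org' | awk -F: '/^fpr:/ {print $10; exit}')
$ echo $fpr
111122223333444455556666AAAABBBBCCCCDDDD

You can then pass $fpr wherever the steps below say [fpr].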

Back up your master key

For disaster recovery purposes — and especially if you intend to use the Web of Trust and collect key signatures from other project developers — you should create a hardcopy backup of your private key. This is supposed to be the “last resort” measure in case all other backup mechanisms have failed.

The best way to create a printable hardcopy of your private key is using the paperkey software written for this very purpose. Paperkey is available on all Linux distros, as well as installable via brew install paperkey on Macs.

Run the following command, replacing [fpr] with the full fingerprint of your key:

$ gpg --export-secret-key [fpr] | paperkey -o /tmp/key-backup.txt

The output will be in a format that is easy to OCR or input by hand, should you ever need to recover it. Print out that file, then take a pen and write the key passphrase on the margin of the paper. This is a required step because the key printout is still encrypted with the passphrase, and if you ever change the passphrase on your key, you will not remember what it used to be when you first created it — guaranteed.
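Should you ever need to perform that recovery, the rough procedure is to feed the OCR’d (or retyped) secrets file back to paperkey together with a copy of your public key, then import the result. A sketch with placeholder filenames:

$ gpg --export [fpr] > alice-public.gpg
$ paperkey --pubring alice-public.gpg --secrets key-backup.txt --output alice-secret.gpg
$ gpg --import alice-secret.gpg

The reconstructed private key is still protected by the passphrase you wrote in the margin.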

Put the resulting printout and the hand-written passphrase into an envelope and store in a secure and well-protected place, preferably away from your home, such as your bank vault.

Note on printers: Long gone are days when printers were dumb devices connected to your computer’s parallel port. These days they have full operating systems, hard drives, and cloud integration. Since the key content we send to the printer will be encrypted with the passphrase, this is a fairly safe operation, but use your best paranoid judgement.

Add relevant identities

If you have multiple relevant email addresses (personal, work, open-source project, etc.), you should add them to your master key. You don’t need to do this for any addresses that you don’t expect to use with PGP (e.g., probably not your school alumni address).

The command is (put the full key fingerprint instead of [fpr]):

$ gpg --quick-add-uid [fpr] 'Alice Engineer <allie@example.net>'

You can review the UIDs you’ve already added using:

$ gpg --list-key [fpr] | grep ^uid

Pick the primary UID

GnuPG makes the latest UID you add your primary UID, so if that is different from what you want, you should set it back:

$ gpg --quick-set-primary-uid [fpr] 'Alice Engineer <alice@example.org>'

Next time, we’ll look at generating PGP subkeys, which are the keys you’ll actually be using for day-to-day work. 

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

AT&T Puts Smart City IoT ‘Edge’ Computing On Direct Dial

Technology platforms in the post-millennial era are heavily characterized by their use of automation and optimization techniques. As we increasingly analyze our software in order to quantify and qualify what applications and data workloads work well in situation A, we can start to automate an element of other software deployments with managed optimized controls in situation B. We can then sub-classify our automation and optimization efforts by vertical industry use case, by specific application type and by the type of device that the software lives on be it desktop, mobile or Internet of Things (IoT).

This broad brush summary statement goes some way to explaining current efforts emanating from AT&T. The brand we used to know as a phone company, but which would now rather we think of it as a ‘business data network’ company, has been working with the Linux community to refine an approach to devices installed out at the ‘edge’ of the IoT. Specifically, a new open source project from The Linux Foundation called Akraino will now be built to create a ‘software stack’ capable of running cloud services optimized for edge IoT devices and applications.

Read more at Forbes

A Look Into the Kubernetes Master Components

This blog post looks at the most important control plane components of a single Kubernetes master node — etcd, the API server, the scheduler and the controller manager — and explains how they work together. Although other components, such as DNS and the dashboard, come into play in a production environment, the focus here is on these specific four.

etcd

etcd is a distributed key-value store written in golang that provides a reliable way to store data across a cluster of machines. Kubernetes uses it as its brain, but only the kube-apiserver can communicate with it to save desired states. To get an idea of how etcd works, download the latest binary version for your preferred operating system and just execute etcd. If you have a golang development environment ready on your system (on a Mac just do: brew install go), you can also clone the etcd GitHub repo and start a cluster with goreman as follows:
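The full walkthrough is in the linked post, but the gist looks roughly like this (steps and paths are approximate; goreman reads the Procfile shipped in the etcd repository and launches a local multi-member cluster):

$ go get github.com/mattn/goreman
$ git clone https://github.com/coreos/etcd.git && cd etcd
$ ./build
$ goreman start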

Read more at VMware

10 Breakthrough Technologies for 2018

Dueling neural networks. Artificial embryos. AI in the cloud. Welcome to our annual list of the ten technology advances we think will shape the way we work and live now and for years to come.

Every year since 2001 we’ve picked what we call the 10 Breakthrough Technologies. People often ask, what exactly do you mean by “breakthrough”? It’s a reasonable question—some of our picks haven’t yet reached widespread use, while others may be on the cusp of becoming commercially available. What we’re really looking for is a technology, or perhaps even a collection of technologies, that will have a profound effect on our lives.

For this year, a new technique in artificial intelligence called GANs is giving machines imagination; artificial embryos, despite some thorny ethical constraints, are redefining how life can be created and are opening a research window into the early moments of a human life; and a pilot plant in the heart of Texas’s petrochemical industry is attempting to create completely clean power from natural gas—probably a major energy source for the foreseeable future. These and the rest of our list will be worth keeping an eye on.

Read more at MIT Technology Review

Choosing a Tool to Track and Mitigate Open Source Security Vulnerabilities

To successfully deal with open source security, you need your developers (and DevOps teams) to operate the solution. Given the fast pace of modern development, boosted in part by the use of open source itself, an outnumbered security team will never be able to keep you secure. Therefore, the SCA solution you choose must be designed for developers to be successful with.

Unfortunately, all too often, security tools (including SCA solutions) simply don’t understand the developer as a user. Integrating into an IDE or creating a Jenkins plug-in does not automatically make a tool developer-friendly, nor does adding the term “DevSecOps” into your documentation. To be successful with developers, tools need to be on par with the developer experience (DX) other dev tools offer, adapt to the user flows of the tools they connect to, and have the product communicate in a dev-friendly manner.

Read more at O’Reilly

Choosing Project Names: 4 Key Considerations

Names set expectations. Your project’s name should showcase its functionality in the ecosystem and explain to users what your story is. In the crowded open source software world, it’s important not to get entangled with other projects out there. Taking a little extra time now, before sending out that big announcement, will pay off later.

Here are four factors to keep in mind when choosing a name for your project.

What does your project’s code do?

Start with your project: What does it do? You know the code intimately—but can you explain what it does to a new developer?

Read more at OpenSource.com

IBM Index: A Community Event for Open Source Developers

The first-ever INDEX community event, happening now in San Francisco, is an open developer conference featuring sessions on topics including artificial intelligence, machine learning, analytics, cloud native, containers, APIs, languages, and more.

The event will also feature a keynote presentation from The Linux Foundation’s executive director, Jim Zemlin, who will discuss building sustainable open source projects to advance the next generation of modern computing.

Angel Diaz, VP of Developer Advocacy and OSS at IBM

In this article, we talk with Angel Diaz, VP of Developer Advocacy and OSS at IBM, who explains more about what to look forward to at this event.

It looks like there’s a heavy Open Source flavor in the Index conference lineup. Tell us more.

Angel Diaz: Absolutely. There are 26 different developer sessions related to Open Source at Index — covering everything from Building Cloud Native Applications Best Practices, to Open Main Frame, and everything in between.

Open Source is the reality of the enterprise stack today. Open Source has brought compute, storage and network together in OpenStack. It’s brought unity around 12-factor applications in Cloud Foundry. It’s brought the world together around microservices via the Cloud Native Computing Foundation, Docker and Kubernetes. And it’s brought the industry together in serverless around projects like Open Whisk. When you look at data, we’ve been democratizing to the masses around Apache Spark and Data Science — and in AI, projects like SystemML and TensorFlow are bridging the gap between the information and the insights you can get with that data. With transactions — Open Source is of course behind the hyperledger and re-establishing what a transaction is through Blockchain. If it’s hot and if it matters in the enterprise stack today, chances are it’s Open Source.

Linux obviously was a huge part of IBM’s heritage with Open Source. Tell us a little bit about the modern outlook of the company as Open Source has grown up so much, and the role that IBM sees itself playing in the community?

Diaz: In this second renaissance of Open Source that we’re in today, IBM and our partners have clear centers of gravity around cloud, data, AI, and transactions. It’s very important as an industry to create these centers of gravity. When you’re creating an open platform for cloud, data and AI, it’s important that you’re bringing together communities — where code bases, use cases and developers are all equal. That’s how you create a great platform as vendor too. That’s how you create an environment where everyone can benefit. Open Source innovation around cloud, data and AI is really the outlook for how IBM built our cloud. We’re trying to make the IBM Cloud the best platform for any Open Source developer.

And how you consume Open Source is as important as the code you write. IBM has been doing Open Source since the 1990s. It’s a huge part of our strategy. From the days of Linux, Eclipse, Apache, to where we are now — we have the IBM Open Source Way. It’s how we culturally think about and leverage Open Source. It describes to the world how we at IBM do Open Source at scale. We don’t just use Open Source, we contribute as much as we use. And in the Open Source Way, we talk about how we have operationalized Open Source at scale across IBM. These methods are best practices that any enterprise that wants to learn how to do Open Source should take to heart.

For example, what is the Open Source etiquette? Don’t be a talker, be a doer. Don’t flaunt your title, don’t be a drive by committer, start small and earn trust. Most importantly, be authentic. Anybody that wants to build an Open Source program or be a citizen of Open Source should take a look at this. How you behave in Open Source is just as important as the code that you build.

There are a lot of great events out there. Let’s hear why enterprise developers who care about Open Source should be at the Index event next month.

Diaz: It’s a great opportunity for developers who are embedded in the second renaissance of Open Source to meet their peers in these centers of gravity around cloud, data and AI. It’s a vendor neutral event, where we will be bringing together developers across the globe to build these Open Source technologies. Everything from Kubernetes to OpenAPI, Tensorflow, Spark — every Open Source community. It’s a great opportunity for you to go, participate, very inexpensive, developer-to-developer, no marketing. Index will be a conference to learn about these technologies and their place in Open Source.

Don’t miss the opportunity to join the conversation February 20-22 at INDEX.

How to Get Started Using WSL in Windows 10

In the previous article, we talked about the Windows Subsystem for Linux (WSL) and its target audience. In this article, we will walk through the process of getting started with WSL on your Windows 10 machine.

Prepare your system for WSL

You must be running the latest version of Windows 10 with the Fall Creators Update installed. Then, check which version of Windows 10 is installed on your system by searching on “About” in the search box of the Start menu. You should be running version 1709 or later to use WSL.

Here is a screenshot from my system.


If an older version is installed, you need to download and install the Windows 10 Fall Creator Update (FCU) from this page. Once FCU is installed, go to Update Settings (just search for “updates” in the search box of the Start menu) and install any available updates.

Go to Turn Windows Features On or Off (you know the drill by now), scroll to the bottom, and tick the box for Windows Subsystem for Linux, as shown in the following figure. Click OK. Windows will download and install the needed packages.


Upon the completion of the installation, the system will offer to restart. Go ahead and reboot your machine. WSL won’t launch without a system reboot, as shown below:


Once your system starts, go back to the Turn features on or off setting to confirm that the box next to Windows Subsystem for Linux is selected.

Install Linux in Windows

There are many ways to install Linux on Windows, but we will choose the easiest way. Open the Windows Store and search for Linux. You will see the following option:


Click on Get the apps, and Windows Store will provide you with three options: Ubuntu, openSUSE Leap 42, and SUSE Linux Enterprise Server. You can install all three distributions side by side and run them simultaneously. To be able to use SLE, you need a subscription.

In this case, I am installing openSUSE Leap 42 and Ubuntu. Select your desired distro and click on the Get button to install it. Once installed, you can launch openSUSE in Windows. It can be pinned to the Start menu for quick access.


Using Linux in Windows

When you launch the distro, it will open the Bash shell and install the distro. Once installed, you can go ahead and start using it. Simple. Just bear in mind that openSUSE creates no regular user and runs as the root user, whereas Ubuntu will ask you to create a user. On Ubuntu, you can perform administrative tasks with sudo.

You can easily create a user on openSUSE:

# useradd [username]

# passwd [username]

Create a new password for the user and you are all set. For example:

# useradd swapnil

# passwd swapnil

You can switch from root to this user by running the su command:

su swapnil

You do need a non-root user to perform many tasks, such as using rsync to move files on your local machine.

The first thing you need to do is update the distro. For openSUSE:

zypper up

For Ubuntu:

sudo apt-get update

sudo apt-get dist-upgrade


You now have a native Linux Bash shell on Windows. Want to ssh into your server from Windows 10? There’s no need to install PuTTY or Cygwin. Just open Bash and then ssh into your server. Easy peasy.

Want to rsync files to your server? Go ahead and use rsync. It really transforms Windows into a usable machine for those Windows users who want to use native Linux command-line tools on their machines without having to deal with VMs.
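For example, a session from the WSL Bash prompt might look like this (hostname and paths are placeholders):

ssh alice@server.example.org

rsync -avz ~/projects/site/ alice@server.example.org:/var/www/site/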

Where is Fedora?

You may be wondering about Fedora. Unfortunately, Fedora is not yet available through the store. Matthew Miller, the Fedora Project Leader, said on Twitter, “We’re working on resolving some non-technical issues. I’m afraid I don’t have any more than that right now.”

We don’t know yet what these non-technical issues are. When some users asked why the WSL team could not publish Fedora themselves — after all it’s an open source project — Rich Turner, a project manager at Microsoft responded, “We have a policy of not publishing others’ IP into the store. We believe that the community would MUCH prefer to see a distro published by the distro owner vs. seeing it published by Microsoft or anyone else that isn’t the authoritative source.”

So, Microsoft can’t just go ahead and publish Debian or Arch Linux on Windows Store. The onus is on the official communities to bring their distros to Windows 10 users.

What’s next

In the next article, we will talk about using Windows 10 as a Linux machine and performing most of the tasks that you would perform on your Linux system using the command-line tools.

Things To Know About Three Upcoming Cloud Technologies

In recent times, three noteworthy trends have emerged in cloud computing. We have seen the rise of microservices, and the public cloud has given rise to new open source cloud computing projects. These projects take advantage of public cloud elasticity and help to a great extent in designing applications.

Knowing The Market

Previously, cloud computing largely meant migrating applications to Azure, Google, and Amazon Web Services: applications that ran on hardware in private data centers could be virtualized and installed in the cloud. Now that the cloud market has matured, more applications are written for, and deployed directly to, the cloud.

1.    Cloud Native Applications

If you search for what cloud native applications mean, you will see there is no textbook definition. In simple words, it means the applications are designed in such a way that they can scale to thousands of nodes and run in modern distributed systems environments. Many organizations, be they small or large, are moving to the cloud because of the innumerable benefits associated with it. Let’s consider the design pattern of these applications.

Before the emergence of the cloud, virtualization played a pivotal role: operating systems became portable because they ran inside virtual machines. In this way, depending on compatibility with hypervisors such as KVM, VMware, or the Xen Project, a machine could move from one server to another. More recently, the abstraction has moved up to the application level: applications are container-based and run in portable units that can move from server to server with ease, regardless of the hypervisor, thanks to container technologies such as CoreOS and Docker.

2.    Containers  

Containers are the most recent addition to cloud technologies, most notably CoreOS and Docker. They are an evolution of earlier innovations, including Linux control groups (cgroups) and LXC, that make applications portable. Containers allow applications to move from a development environment to production without reconfiguration.

Applications are now installed from registries, and delivered through continuous delivery systems, into containers that are configured using tools such as Puppet, Chef, or Ansible.

Ultimately, to scale out applications, schedulers such as Docker Swarm, Mesos, Kubernetes, and Diego coordinate the containers across nodes and machines.
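As a rough illustration of that pipeline, the commands below build an image, push it to a registry, and let a scheduler run and scale it (the image name, registry, and cluster are placeholders, and Docker and kubectl are assumed to be set up already):

docker build -t registry.example.com/myapp:1.0 .

docker push registry.example.com/myapp:1.0

kubectl create deployment myapp --image=registry.example.com/myapp:1.0

kubectl scale deployment myapp --replicas=3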

3.    Unikernels

Unikernels are also an upcoming technology with similarities to containers. A unikernel is a pared-down OS combined with a single application into a unikernel application that runs inside a virtual machine. Unikernels are sometimes called library operating systems, because they include libraries that allow applications to use hardware and network protocols, combined with a set of policies for network-layer isolation and access control. In the ’90s, such systems were known as Nemesis and Exokernel; present-day unikernels include OSv and MirageOS. Unikernel applications can be installed across various environments. Unikernels can create highly specialized and isolated services, and they have increasingly been used for application development in microservices architectures.

GitHub Predicts Hottest 2018 Open Source Trends

According to GitHub’s announcement of its findings, the company looked at three different types of activity. It identified the top 100 projects that had at least 2,000 contributors in 2016 and experienced the largest increase in contributors in 2017, the top 100 projects that received the largest increase in visits to the project’s repo in 2017, and the top 100 projects that received the most new stars in 2017. Combining these lists, the company grouped projects into broad communities, looking at which communities were most represented at the top of the lists.

The hottest project and community results in 2017, then, would logically foretell growth areas and trends for the coming year. This is what emerged:

Read more at The New Stack