
Why UX Practitioners Should Learn About SRE

Understanding reliability is just as complex a problem as understanding user needs, and we still need to consider the user — even more important than poor reliability is the perception of poor reliability. That's why it's essential that balanced teams start involving UX researchers in the reliability research of their product, as ultimately this is a tool for product design.

That said, understanding reliability requires a new vocabulary and a comfort level with automation and statistical analysis that may not be familiar to some. These are new skills for most researchers, but they are teachable. Most researchers will find it most natural to start with the concepts of Operator Experience Design (OX), but the research toolkit is much richer than that.

Read more at Medium

How Kubernetes Became the Solution for Migrating Legacy Applications

You don’t have to tear down your monolith to modernize it. You can evolve it into a beautiful microservice using cloud-native technologies.

Kubernetes and containers didn't just change how applications are managed at scale; they also made it possible to take massive, monolithic applications and slice them into more manageable microservices. Each service can be scaled up and down as needed. Microservices also allow for faster deployments and faster iteration, in keeping with modern continuous integration practices. Kubernetes-based orchestration improves efficiency and resource utilization by dynamically managing and scheduling these microservices. It also adds an extraordinary level of resiliency: you don't have to worry about container failure, and you can continue to operate as demand goes up and down.
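
As a rough illustration of that elasticity (the deployment name my-service is hypothetical, not something from the article), scaling a service on Kubernetes is a one-line operation:

$ kubectl scale deployment my-service --replicas=5
$ kubectl autoscale deployment my-service --min=2 --max=10 --cpu-percent=80

The first command pins a fixed replica count; the second lets Kubernetes grow and shrink the count automatically as CPU load rises and falls.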

Read more at OpenSource.com

The Next Generation of TinyFPGAs

Field-programmable gate arrays (FPGAs) have come of age. Once viewed as exotic and scary, they are now served by a number of boards targeting the maker market, among them a new range of open source TinyFPGA boards.

The latest TinyFPGA board is the TinyFPGA BX board, an updated version of their B2 board, and it’s arriving soon on Crowd Supply.

The new BX board is based around the same ICE40LP8K FPGA as the original B2; however, with stock of the original B-series board running low, the new piece of hardware is a big improvement.

Read more at Hackster

Protecting Code Integrity with PGP — Part 2: Generating Your Master Key

In this article series, we're taking an in-depth look at using PGP and providing practical guidelines for developers working on free software projects. In the previous article, we provided an introduction to basic tools and concepts. In this installment, we show how to generate and protect your master PGP key.

Checklist

  1. Generate a 4096-bit RSA master key (ESSENTIAL)

  2. Back up the master key using paperkey (ESSENTIAL)

  3. Add all relevant identities (ESSENTIAL)

Considerations

Understanding the “Master” (Certify) key

In this and the next section, we'll talk about the “master key” and “subkeys.” It is important to understand the following:

  1. There are no technical differences between the “master key” and “subkeys.”

  2. At creation time, we assign functional limitations to each key by giving it specific capabilities.

  3. A PGP key can have four capabilities.

    • [S] key can be used for signing

    • [E] key can be used for encryption

    • [A] key can be used for authentication

    • [C] key can be used for certifying other keys

  4. A single key may have multiple capabilities.

The key carrying the [C] (certify) capability is considered the “master” key because it is the only key that can be used to indicate its relationship with other keys. Only the [C] key can be used to:

  • Add or revoke other keys (subkeys) with S/E/A capabilities

  • Add, change or revoke identities (uids) associated with the key

  • Add or change the expiration date on itself or any subkey

  • Sign other people’s keys for web of trust purposes

In the Free Software world, the [C] key is your digital identity. Once you create that key, you should take extra care to protect it and prevent it from falling into malicious hands.
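
If you want to check which capabilities the keys already on your keyring carry, GnuPG prints them in square brackets when listing keys; a quick way to look (assuming GnuPG 2.x, where output details vary slightly by version) is:

$ gpg --list-secret-keys --keyid-format long

The master key line will show [C], possibly combined with other letters, while subkeys show [S], [E], or [A].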

Before you create the master key

Before you create your master key you need to pick your primary identity and your master passphrase.

Primary identity

Identities are strings using the same format as the “From” field in emails:

Alice Engineer <alice.engineer@example.org>

You can create new identities, revoke old ones, and change which identity is your “primary” one at any time. Since the primary identity is shown in all GnuPG operations, you should pick a name and address that are both professional and the most likely ones to be used for PGP-protected communication, such as your work address or the address you use for signing off on project commits.

Passphrase

The passphrase is used exclusively for encrypting the private key with a symmetric algorithm while it is stored on disk. If the contents of your .gnupg directory ever get leaked, a good passphrase is the last line of defense keeping the thief from impersonating you online, which is why it is important to set one up carefully.

A good guideline for a strong passphrase is 3-4 words from a rich or mixed dictionary that are not quotes from popular sources (songs, books, slogans). You’ll be using this passphrase fairly frequently, so it should be both easy to type and easy to remember.
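
If you would rather have the computer pick the words, here is a small, hedged sketch (the dictionary file path varies by distribution, and you should adjust the result until it is memorable for you):

$ shuf -n4 /usr/share/dict/words | tr '\n' ' '; echo

This prints four random dictionary words on one line, which you can then tweak into a passphrase you can actually type and remember.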

Algorithm and key strength

Even though GnuPG has had support for Elliptic Curve crypto for a while now, we’ll be sticking to RSA keys, at least for a little while longer. While it is possible to start using ED25519 keys right now, it is likely that you will come across tools and hardware devices that will not be able to handle them correctly.

You may also wonder why the master key is 4096-bit, if later in the guide we state that 2048-bit keys should be good enough for the lifetime of RSA public key cryptography. The reasons are mostly social and not technical: master keys happen to be the most visible ones on the keychain, and some of the developers you interact with will inevitably judge you negatively if your master key has fewer bits than theirs.

Generate the master key

To generate your new master key, issue the following command, putting in the right values instead of “Alice Engineer:”

$ gpg --quick-generate-key 'Alice Engineer <alice@example.org>' rsa4096 cert

A dialog will pop up asking you to enter the passphrase. Then, you may need to move your mouse around or press some keys to generate enough entropy until the command completes.

Review the output of the command; it will look something like this:

pub   rsa4096 2017-12-06 [C] [expires: 2019-12-06]
      111122223333444455556666AAAABBBBCCCCDDDD
uid                      Alice Engineer <alice@example.org>

Note the long string on the second line — that is the full fingerprint of your newly generated key. Key IDs can be represented in three different forms:

  • Fingerprint, a full 40-character key identifier

  • Long, the last 16 characters of the fingerprint (AAAABBBBCCCCDDDD)

  • Short, the last 8 characters of the fingerprint (CCCCDDDD)

You should avoid using 8-character “short key IDs” as they are not sufficiently unique.
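
If you want GnuPG to always display the long form in its listings, you can set this in your configuration; an optional convenience, not something this guide requires:

$ echo 'keyid-format long' >> ~/.gnupg/gpg.conf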

At this point, I suggest you open a text editor, copy the fingerprint of your new key and paste it there. You’ll need to use it for the next few steps, so having it close by will be handy.
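
If you prefer the shell to a text editor, here is a minimal sketch (assuming a POSIX shell and the example address used above) that stashes the fingerprint in a variable instead:

$ fpr=$(gpg --list-keys --with-colons 'alice@example.org' | awk -F: '$1 == "fpr" {print $10; exit}')
$ echo $fpr

You can then substitute $fpr wherever the steps below say [fpr].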

Back up your master key

For disaster recovery purposes — and especially if you intend to use the Web of Trust and collect key signatures from other project developers — you should create a hardcopy backup of your private key. This is supposed to be the “last resort” measure in case all other backup mechanisms have failed.

The best way to create a printable hardcopy of your private key is using the paperkey software written for this very purpose. Paperkey is available on all Linux distros, as well as installable via brew install paperkey on Macs.

Run the following command, replacing [fpr] with the full fingerprint of your key:

$ gpg --export-secret-key [fpr] | paperkey -o /tmp/key-backup.txt

The output will be in a format that is easy to OCR or input by hand, should you ever need to recover it. Print out that file, then take a pen and write the key passphrase on the margin of the paper. This is a required step because the key printout is still encrypted with the passphrase, and if you ever change the passphrase on your key, you will not remember what it used to be when you first created it — guaranteed.
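
Recovery later works in the other direction. Here is a hedged sketch of the restore path (the flag names follow the paperkey documentation; the file names are illustrative, and you also need a copy of your public key plus the secret material typed or scanned back in from the printout):

$ gpg --export [fpr] > alice-public.gpg
$ paperkey --pubring alice-public.gpg --secrets key-backup.txt --output alice-secret.gpg
$ gpg --import alice-secret.gpg

Only the passphrase you wrote in the margin makes the restored key usable again.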

Put the resulting printout and the hand-written passphrase into an envelope and store it in a secure and well-protected place, preferably away from your home, such as your bank vault.

Note on printers: Long gone are the days when printers were dumb devices connected to your computer’s parallel port. These days they have full operating systems, hard drives, and cloud integration. Since the key content we send to the printer will be encrypted with the passphrase, this is a fairly safe operation, but use your best paranoid judgement.

Add relevant identities

If you have multiple relevant email addresses (personal, work, open-source project, etc.), you should add them to your master key. You don’t need to do this for any addresses that you don’t expect to use with PGP (e.g., probably not your school alumni address).

The command is (put the full key fingerprint instead of [fpr]):

$ gpg --quick-add-uid [fpr] 'Alice Engineer <allie@example.net>'

You can review the UIDs you’ve already added using:

$ gpg --list-key [fpr] | grep ^uid

Pick the primary UID

GnuPG will set the latest UID you add as your primary UID, so if that is different from what you want, you should fix it back:

$ gpg --quick-set-primary-uid [fpr] 'Alice Engineer <alice@example.org>'

Next time, we’ll look at generating PGP subkeys, which are the keys you’ll actually be using for day-to-day work. 

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

AT&T Puts Smart City IoT ‘Edge’ Computing On Direct Dial

Technology platforms in the post-millennial era are heavily characterized by their use of automation and optimization techniques. As we increasingly analyze our software in order to quantify and qualify what applications and data workloads work well in situation A, we can start to automate an element of other software deployments with managed, optimized controls in situation B. We can then sub-classify our automation and optimization efforts by vertical industry use case, by specific application type, and by the type of device the software lives on, be it desktop, mobile, or the Internet of Things (IoT).

This broad-brush summary statement goes some way toward explaining current efforts emanating from AT&T. The brand we used to know as a phone company, but which would now rather we think of it as a ‘business data network’ company, has been working with the Linux community to refine an approach to devices installed out at the ‘edge’ of the IoT. Specifically, a new open source project from The Linux Foundation called Akraino will be built to create a ‘software stack’ capable of running cloud services optimized for edge IoT devices and applications.

Read more at Forbes

A Look Into the Kubernetes Master Components

This blog post looks at the most important control plane components of a single Kubernetes master node — etcd, the API server, the scheduler and the controller manager — and explains how they work together. Although other components, such as DNS and the dashboard, come into play in a production environment, the focus here is on these specific four.

etcd

etcd is a distributed key-value store written in Go that provides a reliable way to store data across a cluster of machines. Kubernetes uses it as its brain, but only the kube-apiserver can communicate with it to save desired state. To get an idea of how etcd works, download the latest binary release for your preferred operating system and just execute etcd. If you have a Go development environment ready on your system (on a Mac, just do: brew install go), you can also clone the etcd GitHub repo and start a cluster with goreman, as follows:
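
What follows is a hedged sketch of that flow, not the article's own listing: the repository location, build step, and goreman installation method reflect recent etcd releases, so check the project README for your version.

$ go install github.com/mattn/goreman@latest
$ git clone https://github.com/etcd-io/etcd.git
$ cd etcd && make build
$ goreman -f Procfile start

The Procfile in the repository root starts a small local multi-member cluster, and you can then point etcdctl at one of the advertised client URLs to read and write keys.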

Read more at VMware

10 Breakthrough Technologies for 2018

Dueling neural networks. Artificial embryos. AI in the cloud. Welcome to our annual list of the ten technology advances we think will shape the way we work and live now and for years to come.

Every year since 2001 we’ve picked what we call the 10 Breakthrough Technologies. People often ask, what exactly do you mean by “breakthrough”? It’s a reasonable question—some of our picks haven’t yet reached widespread use, while others may be on the cusp of becoming commercially available. What we’re really looking for is a technology, or perhaps even a collection of technologies, that will have a profound effect on our lives.

For this year, a new technique in artificial intelligence called GANs is giving machines imagination; artificial embryos, despite some thorny ethical constraints, are redefining how life can be created and are opening a research window into the early moments of a human life; and a pilot plant in the heart of Texas’s petrochemical industry is attempting to create completely clean power from natural gas—probably a major energy source for the foreseeable future. These and the rest of our list will be worth keeping an eye on.

Read more at MIT Technology Review

Choosing a Tool to Track and Mitigate Open Source Security Vulnerabilities

To successfully deal with open source security, you need your developers (and DevOps teams) to operate the solution. Given the fast pace of modern development, boosted in part by the use of open source itself, an outnumbered security team will never be able to keep you secure. Therefore, the software composition analysis (SCA) solution you choose must be designed so that developers can be successful with it.

Unfortunately, all too often, security tools (including SCA solutions) simply don’t understand the developer as a user. Integrating into an IDE or creating a Jenkins plug-in does not automatically make a tool developer-friendly, nor does adding the term “DevSecOps” into your documentation. To be successful with developers, tools need to be on par with the developer experience (DX) other dev tools offer, adapt to the user flows of the tools they connect to, and have the product communicate in a dev-friendly manner.

Read more at O’Reilly

Choosing Project Names: 4 Key Considerations

Names set expectations. Your project’s name should showcase its functionality in the ecosystem and explain to users what your story is. In the crowded open source software world, it’s important not to get entangled with other projects out there. Taking a little extra time now, before sending out that big announcement, will pay off later.

Here are four factors to keep in mind when choosing a name for your project.

What does your project’s code do?

Start with your project: What does it do? You know the code intimately—but can you explain what it does to a new developer?

Read more at OpenSource.com

IBM Index: A Community Event for Open Source Developers

The first-ever INDEX community event, happening now in San Francisco, is an open developer conference featuring sessions on topics including artificial intelligence, machine learning, analytics, cloud native, containers, APIs, languages, and more.

The event will also feature a keynote presentation from The Linux Foundation’s executive director, Jim Zemlin, who will discuss building sustainable open source projects to advance the next generation of modern computing.

Angel Diaz, VP of Developer Advocacy and OSS at IBM

In this article, we talk with Angel Diaz, VP of Developer Advocacy and OSS at IBM, who explains more about what to look forward to at this event.

It looks like there’s a heavy Open Source flavor in the Index conference lineup. Tell us more.

Angel Diaz: Absolutely. There are 26 different developer sessions related to Open Source at Index — covering everything from Building Cloud Native Applications Best Practices to Open Mainframe, and everything in between.

Open Source is the reality of the enterprise stack today. Open Source has brought compute, storage and network together in OpenStack. It’s brought unity around 12-factor applications in Cloud Foundry. It’s brought the world together around microservices via the Cloud Native Computing Foundation, Docker and Kubernetes. And it’s brought the industry together in serverless around projects like OpenWhisk. When you look at data, we’ve been democratizing it for the masses around Apache Spark and data science — and in AI, projects like SystemML and TensorFlow are bridging the gap between the information and the insights you can get with that data. With transactions, Open Source is of course behind Hyperledger and re-establishing what a transaction is through blockchain. If it’s hot and if it matters in the enterprise stack today, chances are it’s Open Source.

Linux obviously was a huge part of IBM’s heritage with Open Source. Tell us a little bit about the modern outlook of the company as Open Source has grown up so much, and the role that IBM sees itself playing in the community?

Diaz: In this second renaissance of Open Source that we’re in today, IBM and our partners have clear centers of gravity around cloud, data, AI, and transactions. It’s very important as an industry to create these centers of gravity. When you’re creating an open platform for cloud, data and AI, it’s important that you’re bringing together communities — where code bases, use cases and developers are all equal. That’s how you create a great platform as vendor too. That’s how you create an environment where everyone can benefit. Open Source innovation around cloud, data and AI is really the outlook for how IBM built our cloud. We’re trying to make the IBM Cloud the best platform for any Open Source developer.

And how you consume Open Source is as important as the code you write. IBM has been doing Open Source since the 1990s. It’s a huge part of our strategy. From the days of Linux, Eclipse, Apache, to where we are now — we have the IBM Open Source Way. It’s how we culturally think about and leverage Open Source. It describes to the world how we at IBM do Open Source at scale. We don’t just use Open Source, we contribute as much as we use. And in the Open Source Way, we talk about how we have operationalized Open Source at scale across IBM. These methods are best practices that any enterprise that wants to learn how to do Open Source should take to heart.

For example, what is the Open Source etiquette? Don’t be a talker, be a doer. Don’t flaunt your title, don’t be a drive-by committer, start small and earn trust. Most importantly, be authentic. Anybody who wants to build an Open Source program or be a citizen of Open Source should take a look at this. How you behave in Open Source is just as important as the code that you build.

There are a lot of great events out there. Why should enterprise developers who care about Open Source be at the Index event next month?

Diaz: It’s a great opportunity for developers who are embedded in the second renaissance of Open Source to meet their peers in these centers of gravity around cloud, data and AI. It’s a vendor-neutral event, where we will be bringing together developers across the globe to build these Open Source technologies. Everything from Kubernetes to OpenAPI, TensorFlow, Spark — every Open Source community. It’s a great opportunity for you to go and participate: very inexpensive, developer-to-developer, no marketing. Index will be a conference to learn about these technologies and their place in Open Source.

Don’t miss the opportunity to join the conversation February 20-22 at INDEX.