
Open Source Maintainers Want to Reduce Application Security Risk

According to Snyk’s “State of Open Source Security Report 2019,” which surveyed over 500 open source users and maintainers, 30 percent of developers who maintain open source (OS) projects are highly confident in their security knowledge, up from 17 percent the year before. In addition, the percentage of OS maintainers who run security audits on their projects has risen twenty percentage points to 74 percent compared with last year’s survey. Yet only 42 percent of maintainers audit their code at least once a quarter. This is a problem because development velocity goals are so much higher than they were just a few years ago.

The New Stack and Linux Foundation’s survey of open source leaders found that the average development team was releasing code into production at more than two-thirds of companies. Other studies are less optimistic and indicate that only about a quarter of companies have reached that level of speed.

Read more at The New Stack


Linux Security: Cmd Provides Visibility, Control Over User Activity

There’s a new Linux security tool you should be aware of — Cmd (pronounced “see em dee”) dramatically modifies the kind of control that can be exercised over Linux users. It reaches way beyond the traditional configuration of user privileges and takes an active role in monitoring and controlling the commands that users are able to run on Linux systems.

Provided by a company of the same name, Cmd focuses on cloud usage. Given the increasing number of applications being migrated into cloud environments that rely on Linux, gaps in the available tools make it difficult to adequately enforce required security. However, Cmd can also be used to manage and protect on-premises systems.

Read more at Network World

Introduction to YAML: Creating a Kubernetes Deployment

There’s an easier and more useful way to use Kubernetes to spin up resources outside of the command line: creating configuration files using YAML. In this article, we’ll look at how YAML works and use it to define first a Kubernetes Pod, and then a Kubernetes Deployment.

YAML Basics

It’s difficult to escape YAML if you’re working in certain software fields — particularly Kubernetes, SDN, and OpenStack. YAML, which stands for Yet Another Markup Language or YAML Ain’t Markup Language (depending on who you ask), is a human-readable, text-based format for specifying configuration-type information. For example, in this article we’ll pick apart the YAML definitions for creating first a Pod, and then a Deployment.

Using YAML for K8s definitions gives you a number of advantages, including:

  • Convenience: You’ll no longer have to add all of your parameters to the command line
  • Maintenance: YAML files can be added to source control, so you can track changes
  • Flexibility: You’ll be able to create much more complex structures using YAML than you can on the command line

YAML is a superset of JSON, which means that any valid JSON file is also a valid YAML file. So on the one hand, if you know JSON and you’re only ever going to write your own YAML (as opposed to reading other people’s), you’re all set. On the other hand, that’s not very likely, unfortunately. Even if you’re only trying to find examples on the web, they’re most likely in (non-JSON) YAML, so we might as well get used to it. Still, there may be situations where the JSON format is more convenient, so it’s good to know that it’s available to you.
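
To make those advantages concrete, here is a minimal sketch of that workflow, assuming kubectl is installed and pointed at a cluster; the Pod name and the nginx image are placeholders rather than the article’s own example:

$ cat <<'EOF' > pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example
spec:
  containers:
  - name: example-container
    image: nginx:stable
    ports:
    - containerPort: 80
EOF
$ kubectl apply -f pod.yaml      # create the Pod from the file instead of command-line parameters
$ kubectl get pod example-pod    # confirm the resource exists; pod.yaml can now live in source control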

Read more at CNCF

Protecting Code Integrity with PGP — Part 1: Basic Concepts and Tools

Learn PGP basics and best practices in this series of tutorials from our archives. 

In this article series, we take an in-depth look at using PGP to ensure the integrity of software. These articles will provide practical guidelines aimed at developers working on free software projects and will cover the following topics:

  1. PGP basics and best practices

  2. How to use PGP with Git

  3. How to protect your developer accounts

We use the term “Free” as in “Freedom,” but the guidelines set out in this series can also be used for any other kind of software that relies on contributions from a distributed team of developers. If you write code that goes into public source repositories, you can benefit from getting acquainted with and following this guide.

Structure

Each section is split into two areas:

  • The checklist that can be adapted to your project’s needs

  • Free-form list of considerations that explain what dictated these decisions, together with configuration instructions

Checklist priority levels

The items in each checklist include the priority level, which we hope will help guide your decision:

  • (ESSENTIAL) items should definitely be high on the consideration list. If not implemented, they will introduce high risks to the code that gets committed to the open-source project.

  • (NICE) to have items will improve the overall security, but will affect how you interact with your work environment, and probably require learning new habits or unlearning old ones.

Remember, these are only guidelines. If you feel these priority levels do not reflect your project’s commitment to security, you should adjust them as you see fit.

Basic PGP concepts and tools

Checklist

  1. Understand the role of PGP in Free Software Development (ESSENTIAL)

  2. Understand the basics of Public Key Cryptography (ESSENTIAL)

  3. Understand PGP Encryption vs. Signatures (ESSENTIAL)

  4. Understand PGP key identities (ESSENTIAL)

  5. Understand PGP key validity (ESSENTIAL)

  6. Install GnuPG utilities (version 2.x) (ESSENTIAL)

Considerations

The Free Software community has long relied on PGP for assuring the authenticity and integrity of software products it produced. You may not be aware of it, but whether you are a Linux, Mac or Windows user, you have previously relied on PGP to ensure the integrity of your computing environment:

  • Linux distributions rely on PGP to ensure that binary or source packages have not been altered between when they have been produced and when they are installed by the end-user.

  • Free Software projects usually provide detached PGP signatures to accompany released software archives, so that downstream projects can verify the integrity of downloaded releases before integrating them into their own distributed downloads.

  • Free Software projects routinely rely on PGP signatures within the code itself in order to track provenance and verify integrity of code committed by project developers.

This is very similar to developer certificates/code signing mechanisms used by programmers working on proprietary platforms. In fact, the core concepts behind these two technologies are very much the same — they differ mostly in the technical aspects of the implementation and the way they delegate trust. PGP does not rely on centralized Certification Authorities, but instead lets each user assign their own trust to each certificate.

Our goal is to get your project on board using PGP for code provenance and integrity tracking, following best practices and observing basic security precautions.

Extremely Basic Overview of PGP operations

You do not need to know the exact details of how PGP works — understanding the core concepts is enough to be able to use it successfully for our purposes. PGP relies on Public Key Cryptography to convert plain text into encrypted text. This process requires two distinct keys:

  • A public key that is known to everyone

  • A private key that is only known to the owner

Encryption

For encryption, PGP uses the recipient’s public key to create a message that can only be decrypted using the recipient’s private key:

  1. The sender generates a random encryption key (“session key”)

  2. The sender encrypts the contents using that session key (using a symmetric cipher)

  3. The sender encrypts the session key using the recipient’s public PGP key

  4. The sender sends both the encrypted contents and the encrypted session key to the recipient

To decrypt:

  1. The recipient decrypts the session key using their private PGP key

  2. The recipient uses the session key to decrypt the contents of the message
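
As a hypothetical illustration of those steps with gpg (document.txt is a placeholder file name, and the recipient address is the example identity used later in this article), the session-key handling happens behind the scenes:

$ gpg --encrypt --recipient alice.engineer@example.com document.txt
# produces document.txt.gpg; gpg generates the session key, encrypts the
# contents with it, and encrypts the session key to Alice's public key

$ gpg --decrypt document.txt.gpg > document.txt
# run by Alice; gpg uses her private key to recover the session key and
# then decrypts the contents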

Signatures

For creating signatures, the private/public PGP keys are used the opposite way:

  1. The signer generates the checksum hash of the contents

  2. The signer uses their own private PGP key to encrypt that checksum

  3. The signer provides the encrypted checksum alongside the contents

To verify the signature:

  1. The verifier generates their own checksum hash of the contents

  2. The verifier uses the signer’s public PGP key to decrypt the provided checksum

  3. If the checksums match, the integrity of the contents is verified
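
As a hedged sketch of the same operations with gpg (release.tar.gz is a placeholder file name):

$ gpg --armor --detach-sign release.tar.gz
# produces release.tar.gz.asc, a detached signature holding the encrypted checksum

$ gpg --verify release.tar.gz.asc release.tar.gz
# recomputes the checksum and compares it with the one recovered using the
# signer's public key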

Combined usage

Frequently, encrypted messages are also signed with the sender’s own PGP key. This should be the default whenever using encrypted messaging, as encryption without authentication is not very meaningful (unless you are a whistleblower or a secret agent and need plausible deniability).

Understanding Key Identities

Each PGP key must have one or multiple Identities associated with it. Usually, an “Identity” is the person’s full name and email address in the following format:

Alice Engineer <alice.engineer@example.com>

Sometimes it will also contain a comment in brackets, to tell the end-user more about that particular key:

Bob Designer (obsolete 1024-bit key) <bob.designer@example.com>

Since people can be associated with multiple professional and personal entities, they can have multiple identities on the same key:

Alice Engineer <alice.engineer@example.com>
Alice Engineer <aengineer@personalmail.example.org>
Alice Engineer <webmaster@girlswhocode.example.net>

When multiple identities are used, one of them would be marked as the “primary identity” to make searching easier.
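
For illustration, here is a hedged sketch (assuming GnuPG 2.1 or later) of inspecting the identities on a key and adding a second one; the addresses are the example ones above:

$ gpg --list-keys alice.engineer@example.com
# prints the key along with all of its uid (identity) lines

$ gpg --quick-add-uid alice.engineer@example.com 'Alice Engineer <aengineer@personalmail.example.org>'
# adds a second identity to the same key

$ gpg --edit-key alice.engineer@example.com
# interactive: select an identity with "uid 2", mark it with "primary", then "save"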

Understanding Key Validity

To be able to use someone else’s public key for encryption or verification, you need to be sure that it actually belongs to the right person (Alice) and not to an impostor (Eve). In PGP, this certainty is called “key validity:”

  • Validity: full — means we are pretty sure this key belongs to Alice

  • Validity: marginal — means we are somewhat sure this key belongs to Alice

  • Validity: unknown — means there is no assurance at all that this key belongs to Alice

Web of Trust (WOT) vs. Trust on First Use (TOFU)

PGP incorporates a trust delegation mechanism known as the “Web of Trust.” At its core, this is an attempt to replace the need for centralized Certification Authorities of the HTTPS/TLS world. Instead of various software makers dictating who should be your trusted certifying entity, PGP leaves this responsibility to each user.

Unfortunately, very few people understand how the Web of Trust works, and even fewer bother to keep it going. It remains an important aspect of the OpenPGP specification, but recent versions of GnuPG (2.2 and above) have implemented an alternative mechanism called “Trust on First Use” (TOFU).

You can think of TOFU as “the SSH-like approach to trust.” With SSH, the first time you connect to a remote system, its key fingerprint is recorded and remembered. If the key changes in the future, the SSH client will alert you and refuse to connect, forcing you to make a decision on whether you choose to trust the changed key or not.

Similarly, the first time you import someone’s PGP key, it is assumed to be trusted. If at any point in the future GnuPG comes across another key with the same identity, both the previously imported key and the new key will be marked as invalid and you will need to manually figure out which one to keep.

In this guide, we will be using the TOFU trust model.
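
As a hedged sketch (assuming GnuPG 2.2 or later), the trust model can be selected for a single invocation or made the default in your configuration; the file names below are placeholders:

$ gpg --trust-model tofu+pgp --verify release.tar.gz.asc release.tar.gz
# uses TOFU combined with any Web of Trust information for this one command

$ echo 'trust-model tofu+pgp' >> ~/.gnupg/gpg.conf
# makes the combined TOFU/Web of Trust model the default for all commands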

Installing OpenPGP software

First, it is important to understand the distinction between PGP, OpenPGP, GnuPG and gpg:

  • PGP (“Pretty Good Privacy”) is the name of the original commercial software

  • OpenPGP is the IETF standard compatible with the original PGP tool

  • GnuPG (“Gnu Privacy Guard”) is free software that implements the OpenPGP standard

  • The command-line tool for GnuPG is called “gpg”

Today, the term “PGP” is almost universally used to mean “the OpenPGP standard,” not the original commercial software, and therefore “PGP” and “OpenPGP” are interchangeable. The terms “GnuPG” and “gpg” should only be used when referring to the tools, not to the output they produce or OpenPGP features they implement. For example:

  • PGP (not GnuPG or GPG) key

  • PGP (not GnuPG or GPG) signature

  • PGP (not GnuPG or GPG) keyserver

Understanding this should protect you from an inevitable pedantic “actually” from other PGP users you come across.

Installing GnuPG

If you are using Linux, you should already have GnuPG installed. On a Mac, you should install GPG-Suite or you can use brew install gnupg2. On a Windows PC, you should install GPG4Win, and you will probably need to adjust some of the commands in the guide to work for you, unless you have a unix-like environment set up. For all other platforms, you’ll need to do your own research to find the correct places to download and install GnuPG.

GnuPG 1 vs. 2

Both GnuPG v.1 and GnuPG v.2 implement the same standard, but they provide incompatible libraries and command-line tools, so many distributions ship both the legacy version 1 and the latest version 2. You need to make sure you are always using GnuPG v.2.

First, run:

$ gpg --version | head -n1

If you see gpg (GnuPG) 1.4.x, then you are using GnuPG v.1. Try the gpg2 command:

$ gpg2 --version | head -n1

If you see gpg (GnuPG) 2.x.x, then you are good to go. This guide will assume you have version 2.2 of GnuPG (or later). If you are using version 2.0 of GnuPG, some of the commands in this guide will not work, and you should consider installing the latest 2.2 version of GnuPG.

Making sure you always use GnuPG v.2

If you have both gpg and gpg2 commands, you should make sure you are always using GnuPG v2, not the legacy version. You can make sure of this by setting the alias:

$ alias gpg=gpg2

You can put that in your .bashrc to make sure it’s always loaded whenever you use the gpg command.
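
For example, a minimal sketch of persisting the alias, assuming bash is your login shell:

$ echo "alias gpg=gpg2" >> ~/.bashrc
$ source ~/.bashrc
$ gpg --version | head -n1
# should now report gpg (GnuPG) 2.x.x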

In part 2 of this series, we will explain the basic steps for generating and protecting your master PGP key. 

Read more:

Part 1: Basic Concepts and Tools

Part 2: Generating Your Master Key

Part 3: Generating PGP Subkeys

Part 4: Moving Your Master Key to Offline Storage

Part 5: Moving Subkeys to a Hardware Device

Part 6: Using PGP with Git

Part 7: Protecting Online Accounts

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Linux 5.0-rc8

This may be totally unnecessary, but we actually had more patches come in this last week than we had for rc7, which just didn’t make me feel the warm and fuzzies. And while none of the patches looked all that scary, some of them were to pretty core files, so it wasn’t all just random rare drivers (although those kinds also existed).

So I agonized about it a bit, and then decided to just say “no hurry” and make an rc8. And after I had tagged the rc, I noticed a patch in my inbox that I had missed that was a regression from one of the very patches this last week, so that made me feel like rc8 was the right decision.

Read more at the Linux Kernel Mailing List

Core Technologies and Tools for AI, Big Data, and Cloud Computing

In this post, I’ll describe some of the core technologies and tools companies are beginning to evaluate and build. Many companies are just beginning to address the interplay between their suite of AI, big data, and cloud technologies. I’ll also highlight some interesting use cases and applications of data, analytics, and machine learning. The resource examples I’ll cite will be drawn from the upcoming Strata Data conference in San Francisco, where leading companies and speakers will share their learnings on the topics covered in this post.

AI and machine learning in the enterprise

When asked what holds back the adoption of machine learning and AI, survey respondents for our upcoming report, “Evolving Data Infrastructure,” cited “company culture” and “difficulties in identifying appropriate business use cases” among the leading reasons. Attendees of the Strata Business Summit will have the opportunity to explore these issues through training sessions, tutorials, briefings, and real-world case studies from practitioners and companies. Recent improvements in tools and technologies have meant that techniques like deep learning are now being used to solve common problems, including forecasting, text mining and language understanding, and personalization. We’ve assembled sessions from leading companies, many of which will share case studies of applications of machine learning methods, including multiple presentations involving deep learning:

Read more at O’Reilly

Happy Little Accidents – Debugging JavaScript

Last year I gave a talk at the HelsinkiJS and Turku ❤️ Frontend meetups titled Happy Little Accidents – The Art of Debugging (slides).

This week I was spending a lot of time debugging weird timezone issues, and the talk popped back into my memory. So I wanted to write a more detailed, JavaScript-focused post about the different options.

Print to console

All of the examples below are ones you can copy-paste into your developer console and start playing around with.

console.log

One of the most underrated but definitely powerful tools is console.log and its friends. It’s also usually the first and easiest step in inspecting what might be the issue. …

Debugger

JavaScript’s debugger keyword is a magical creature. It pauses execution right at that spot and gives you full access to the local and global scope. Let’s take a look at a hypothetical example with a React component that gets some props passed down to it.

Read more at Dev.to

Job Hunt Etiquette: New Best Practices for 2019

Although a lot has changed about the job interview process over the years, basic interview etiquette rules still apply. Be polite. Don’t lie about your experience. Send a thank you note. Follow up with hiring managers to stay top of mind. Avoid wearing a Darth Vader costume to your interview. (The last one should go without saying, but based on CareerBuilder’s annual survey on interview mistakes, at least one person could have used this tip before their interview.)

Classic advice like this holds true today, but in the digital era, there are nuances that job seekers should keep in mind. For instance, candidates no longer have to snail mail their thank you letters – email is instantaneous. But how soon is too soon – the next day? When they are leaving the building? Is texting OK?

We tapped experts to answer these and other pressing questions about job hunting rules and follow-up etiquette for 2019. Here’s their advice and updated best practices for job seekers.

Read more at Enterprisers Project

5 Linux GUI Cloud Backup Tools

We have reached a point in time where almost every computer user depends upon the cloud … even if only as a storage solution. What makes the cloud really important to users is when it’s employed as a backup. Why is that such a game changer? By backing up to the cloud, you have access to those files from any computer you have associated with your cloud account. And because Linux powers the cloud, many services offer Linux tools.

Let’s take a look at five such tools. I will focus on GUI tools, because they offer a much lower barrier to entry than many of the CLI tools. I’ll also be focusing on various consumer-grade cloud services (e.g., Google Drive, Dropbox, Wasabi, and pCloud). And I will be demonstrating on the Elementary OS platform, but all of the tools listed will function on most Linux desktop distributions.

Note: Of the following backup solutions, only Duplicati is licensed as open source. With that said, let’s see what’s available.

Insync

I must confess, Insync has been my cloud backup of choice for a very long time. Since Google refuses to release a Linux desktop client for Google Drive (and I depend upon Google Drive daily), I had to turn to a third-party solution. Said solution is Insync. This particular take on syncing the desktop to Drive has not only been seamless, but faultless since I began using the tool.

The cost of Insync is a one-time $29.99 fee (per Google account). Trust me when I say this tool is worth the price of entry. With Insync you not only get an easy-to-use GUI for managing your Google Drive backup and sync, you get a tool (Figure 1) that gives you complete control over what is backed up and how it is backed up. Not only that, but you can also install Nautilus integration (which also allows you to easily add folders outside of the configured Drive sync destination).

Figure 1: The Insync app window on Elementary OS.

You can download Insync for Ubuntu (or its derivatives), Linux Mint, Debian, and Fedora from the Insync download page. Once you’ve installed Insync (and associated it with your account), you can then install Nautilus integration with these steps (demonstrating on Elementary OS):

  1. Open a terminal window and issue the command sudo nano /etc/apt/sources.list.d/insync.list.

  2. Paste the following into the new file: deb http://apt.insynchq.com/ubuntu precise non-free contrib.

  3. Save and close the file.

  4. Update apt with the command sudo apt-get update.

  5. Install the necessary package with the command sudo apt-get install insync-nautilus.

Allow the installation to complete. Once finished, restart Nautilus with the command nautilus -q (or log out and back into the desktop). You should now see an Insync entry in the Nautilus right-click context menu (Figure 2).

Figure 2: Insync/Nautilus integration in action.

Dropbox

Although Dropbox drew the ire of many in the Linux community (by dropping support for all filesystems but unencrypted ext4), it still supports a great deal of Linux desktop deployments. In other words, if your distribution still uses the ext4 file system (and you do not opt to encrypt your full drive), you’re good to go.

The good news is the Dropbox Linux desktop client is quite good. The tool offers a system tray icon that allows you to easily interact with your cloud syncing. Dropbox also includes CLI tools and a Nautilus integration (by way of an additional addon found here).

The Linux Dropbox desktop sync tool works exactly as you’d expect. From the Dropbox system tray drop-down (Figure 3) you can open the Dropbox folder, launch the Dropbox website, view recently changed files, get more space, pause syncing, open the preferences window, find help, and quit Dropbox.

Figure 3: The Dropbox system tray drop-down on Elementary OS.

The Dropbox/Nautilus integration is an important component, as it makes quickly adding to your cloud backup seamless and fast. From the Nautilus file manager, locate and right-click the folder to be added, and select Dropbox > Move to Dropbox (Figure 4).

Figure 4: Dropbox/Nautilus integration.

The only caveat to the Dropbox/Nautilus integration is that the only option is to move a folder to Dropbox. For some users, that might be a deal breaker. The developers of this package would be wise to instead have the action create a link (instead of actually moving the folder).

Outside of that one issue, the Dropbox cloud sync/backup solution for Linux is a great route to go.

pCloud

pCloud might well be one of the finest cloud backup solutions you’ve never heard of. This take on cloud storage/backup includes features like:

  • Encryption (subscription service required for this feature);

  • Mobile apps for Android and iOS;

  • Linux, Mac, and Windows desktop clients;

  • Easy file/folder sharing;

  • Built-in audio/video players;

  • No file size limitation;

  • Sync any folder from the desktop;

  • Panel integration for most desktops; and

  • Automatic file manager integration.

pCloud offers both Linux desktop and CLI tools that function quite well. There is a free plan (with 10GB of storage), a Premium Plan (with 500GB of storage for a one-time fee of $175.00), and a Premium Plus Plan (with 2TB of storage for a one-time fee of $350.00). Both non-free plans can also be paid on a yearly basis (instead of the one-time fee).

The pCloud desktop client is quite user-friendly. Once installed, you have access to your account information (Figure 5), the ability to create sync pairs, create shares, enable crypto (which requires an added subscription), and general settings.

Figure 5: The pCloud desktop client is incredibly easy to use.

The one caveat to pCloud is there’s no file manager integration for Linux. That’s overcome by the Sync folder in the pCloud client.

CloudBerry

CloudBerry is aimed primarily at Managed Service Providers. The business side of CloudBerry does have an associated cost (one that is probably well out of the price range of the average user looking for a simple cloud backup solution). However, for home usage, CloudBerry is free.

What makes CloudBerry different from the other tools is that it’s not a backup/storage solution in and of itself. Instead, CloudBerry serves as a link between your desktop and the likes of:

  • AWS

  • Microsoft Azure

  • Google Cloud

  • BackBlaze

  • OpenStack

  • Wasabi

  • Local storage

  • External drives

  • Network Attached Storage

  • Network Shares

  • And more

In other words, you use CloudBerry as the interface between the files/folders you want to share and the destination to which you want to send them. This also means you must have an account with one of the many supported solutions.

Once you’ve installed CloudBerry, you create a new Backup plan for the target storage solution. For that configuration, you’ll need such information as:

  • Access Key

  • Secret Key

  • Bucket

What you’ll need for the configuration will depend on the account you’re connecting to (Figure 6).

Figure 6: Setting up a CloudBerry backup for Wasabi.

The one caveat to CloudBerry is that it does not integrate with any file manager, nor does it include a system tray icon for interaction with the service.

Duplicati

Duplicati is another option that allows you to sync your local directories with either locally attached drives, network attached storage, or a number of cloud services. The options supported include:

  • Local folders

  • Attached drives

  • FTP/SFTP

  • OpenStack

  • WebDAV

  • Amazon Cloud Drive

  • Amazon S3

  • Azure Blob

  • Box.com

  • Dropbox

  • Google Cloud Storage

  • Google Drive

  • Microsoft OneDrive

  • And many more

Once you install Duplicati (download the installer for Debian, Ubuntu, Fedora, or RedHat from the Duplicati downloads page), click on the entry in your desktop menu, which will open a web page to the tool (Figure 7), where you can configure the app settings, create a new backup, restore from a backup, and more.

Figure 7: Duplicati web page.

To create a backup, click Add backup and walk through the easy-to-use wizard (Figure 8). The backup service you choose will dictate what you need for a successful configuration.

Figure 8: Creating a new Duplicati backup for Google Drive.

For example, in order to create a backup to Google Drive, you’ll need an AuthID. For that, click the AuthID link in the Destination section of the setup, where you’ll be directed to select the Google Account to associate with the backup. Once you’ve allowed Duplicati access to the account, the AuthID will fill in and you’re ready to continue. Click Test connection and you’ll be asked to okay the creation of a new folder (if necessary). Click Next to complete the setup of the backup.

More Where That Came From

These five cloud backup tools aren’t the end of this particular rainbow. There are plenty more options where these came from (including CLI-only tools). But any of these backup clients will do a great job of serving your Linux desktop-to-cloud backup needs.

7 Key Considerations for Kubernetes in Production

In this post, we share seven fundamental capabilities large enterprises need to instrument around their Kubernetes investments in order to be able to effectively implement it and utilize it to drive their business.

Typically, when developers begin to experiment with Kubernetes, they end up deploying it on a set of servers. This is only a proof-of-concept (POC) deployment, and what we see is that such a basic deployment is not something you can take into production for long-standing applications, since it is missing the critical components that ensure smooth operation of mission-critical Kubernetes-based apps. While deploying a local Kubernetes environment can be a simple procedure completed within days, an enterprise-grade deployment is quite another challenge.

A complete Kubernetes infrastructure needs proper DNS, load balancing, Ingress, and Kubernetes role-based access control (RBAC), alongside a slew of additional components that make the deployment process quite daunting for IT. Once Kubernetes is deployed comes the addition of monitoring and all the associated operations playbooks to fix problems as they occur — such as running out of capacity, ensuring HA, managing backups, and more. Finally, the cycle repeats whenever the community releases a new version of Kubernetes, and your production clusters need to be upgraded without risking any application downtime.

Read more at The New Stack