
FPGAs and the New Era of Cloud-Based ‘Hardware Microservices’

What is that FPGA-powered future going to look like and how are developers going to use it?

FPGAs aren’t a new technology by any means. Traditionally, they have been reserved for specialized applications where the need for custom processing hardware that can be updated as very demanding algorithms evolve outweighs the complexity of programming the hardware.

…FPGAs can offer massive parallelism targeted only for a specific algorithm, and at much lower power compared to a GPU. And unlike an application-specific integrated circuit (ASIC), they can be reprogrammed when you want to change that algorithm (that’s the field-programmable part).

Read more at The New Stack

Crypto Tokens: A Breakthrough in Open Network Design

Today, tech companies like Facebook, Google, Amazon, and Apple are stronger than ever, whether measured by market cap, share of top mobile apps, or pretty much any other common measure. These companies also control massive proprietary developer platforms. The dominant operating systems — iOS and Android — charge 30% payment fees and exert heavy influence over app distribution. The dominant social networks tightly restrict access, hindering the ability of third-party developers to scale. Startups and independent developers are increasingly competing from a disadvantaged position.

Tokens are a breakthrough in open network design that enable: 1) the creation of open, decentralized networks that combine the best architectural properties of open and proprietary networks, and 2) new ways to incentivize open network participants, including users, developers, investors, and service providers. By enabling the development of new open networks, tokens could help reverse the centralization of the internet, thereby keeping it accessible, vibrant and fair, and resulting in greater innovation.

Read more at Medium

An Introduction to Timekeeping in Linux VMs

Keeping time in Linux is not simple, and virtualization adds additional challenges and opportunities. In this article, I’ll review KVM, Xen, and Hyper-V timekeeping techniques and the corresponding parts of the Linux kernel.

Timekeeping is the process or activity of recording how long something takes. We need “instruments” to measure time. The Linux kernel has several abstractions to represent such devices:

  • Clocksource is a device that can give a timestamp whenever you need it. In other words, Clocksource is any ticking counter that allows you to get its value.
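On a running Linux system you can see which ticking counter the kernel has chosen as its clocksource through sysfs. A minimal sketch (the sysfs paths below follow the standard Linux layout):

```shell
# Show which hardware counter the kernel currently uses as its clocksource
cat /sys/devices/system/clocksource/clocksource0/current_clocksource

# List all clocksources the kernel detected on this machine
# (e.g., tsc, hpet, acpi_pm on bare metal; kvm-clock, xen, or
# hyperv_clocksource_tsc_page inside a VM)
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
```

Inside a KVM, Xen, or Hyper-V guest, the hypervisor-specific paravirtualized clocksource typically appears in this list and is selected by default.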

Read more at OpenSource.com

OpenDaylight Carbon SDN Platform Expands Performance and Scalability

Carbon is the sixth major release from OpenDaylight and follows the Boron release that debuted in September 2016. OpenDaylight started in April 2013 as a Linux Foundation Collaborative Project, with the goal of building an open SDN platform.

Phil Robb, Interim Executive Director for OpenDaylight, told EnterpriseNetworkingPlanet that from the beginning, OpenDaylight was designed to be a general-purpose development platform for building programmable networks. He noted that early releases of OpenDaylight put the core platform in place.

Among the use cases that OpenDaylight Carbon helps to support is the Internet of Things (IoT). Robb explained that Carbon adds the standards-based IoTDM plugin infrastructure to allow easy implementation of new device plugins.

Read more at EnterpriseNetworkingPlanet

The 30 Highest Velocity Open Source Projects

Open Source projects exhibit natural increasing returns to scale. That’s because most developers are interested in using and participating in the largest projects, and the projects with the most developers are more likely to quickly fix bugs, add features, and work reliably across the largest number of platforms. So, tracking the projects with the highest developer velocity can help illuminate promising areas in which to get involved and which platforms are likely to succeed over the next several years. (If the embedded version below isn’t clear enough, you can view the chart directly on Google Sheets.)

Read more at CNCF

Securing Private Keys on a Linux Sysadmin Workstation

In this last article of our ongoing Linux workstation security series for sysadmins, we’ll lay out our recommendations for how to secure your private keys. If you’re interested in more security tips and a list of resources for more reading (to go further down the rabbit hole of Linux security), I recommend that you download our free security guide for sysadmins.

Personal encryption keys, including SSH and PGP private keys, are going to be the most prized items on your Linux workstation. Attackers will be most interested in obtaining them, as that would allow them to further attack your infrastructure or impersonate you to other admins. As a Linux sysadmin, you should take extra steps to ensure that your private keys are well protected against theft:

  • Strong passphrases are used to protect private keys (Essential)

  • The PGP master key is stored on removable storage (Nice-to-have)

  • Auth, Sign, and Encrypt subkeys are stored on a smartcard device (Nice-to-have)

  • SSH is configured to use the PGP Auth key as the SSH private key (Nice-to-have)
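For the first item on the checklist, make sure every private key on disk is protected by a strong passphrase. A minimal sketch with `ssh-keygen` (the file path and passphrase below are illustrative examples only; always choose your own passphrase interactively in practice):

```shell
# Start clean so ssh-keygen does not prompt about overwriting
rm -f /tmp/demo_id_ed25519 /tmp/demo_id_ed25519.pub

# Create an Ed25519 SSH key protected by a passphrase.
# -a 100 raises the KDF rounds, slowing offline passphrase cracking;
# -N supplies the passphrase non-interactively (example value only).
ssh-keygen -t ed25519 -a 100 -N 'correct horse battery staple' \
    -f /tmp/demo_id_ed25519 -C 'sysadmin workstation'

# To add or change the passphrase on an existing key, use:
# ssh-keygen -p -f ~/.ssh/id_ed25519
```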

Private key security best practices

The best way to prevent private key theft is to use a smartcard to store your encryption private keys and never copy them onto the workstation. Several manufacturers offer OpenPGP-capable devices:

  • Kernel Concepts, where you can purchase both OpenPGP-compatible smartcards and USB readers, should you need one.

  • Yubikey, which offers OpenPGP smartcard functionality in addition to many other features (U2F, PIV, HOTP, etc.).

  • NitroKey, which is based on open-source software and hardware.

It is also important to make sure that the master PGP key is not stored on the main workstation, and that only subkeys are used. The master key will only be needed when signing someone else’s keys or creating new subkeys, operations which do not happen very frequently. You may follow Debian’s subkeys guide to learn how to move your master key to removable storage and how to create subkeys.

You should then configure your GnuPG agent to act as the SSH agent and use the smartcard-based PGP Auth key as your SSH private key. We publish a detailed guide on how to do that using either a smartcard reader or a Yubikey NEO.
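The gist of that setup is a small configuration change: tell gpg-agent to speak the SSH agent protocol and point your shell’s `SSH_AUTH_SOCK` at its socket. A minimal sketch (file locations assume a standard GnuPG 2.x installation; see the full guide for distribution-specific details):

```shell
# ~/.gnupg/gpg-agent.conf: have gpg-agent also serve the SSH agent protocol
cat >> ~/.gnupg/gpg-agent.conf <<'EOF'
enable-ssh-support
EOF

# In your shell profile: point SSH at gpg-agent's socket instead of ssh-agent's
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"

# Let the agent know which terminal to use for PIN prompts
gpg-connect-agent updatestartuptty /bye >/dev/null
```

With this in place, `ssh` will offer the Auth key held on your smartcard, prompting for the card’s PIN rather than reading a private key file from disk.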

If you are not willing to go that far, at least make sure you have a strong passphrase on both your PGP private key and your SSH private key, which will make it harder for attackers to steal and use them.

Workstation Security

Read more:

Part 8:  Best Practices for 2-Factor Authentication and Password Creation on Linux

Part 1: 3 Security Features to Consider When Choosing a Linux Workstation

Why You Must Patch the New Linux sudo Security Hole

If you want your Linux server to be really secure, you defend it with SELinux. Many sysadmins don’t bother because SELinux can be difficult to set up. But, if you really want to nail down your server, you use SELinux. This makes the newly discovered security hole in the sudo command, which only hits SELinux-protected systems, all the more annoying.

Sudo enables users to run commands as root or another user, while simultaneously providing an audit trail of those commands. It’s essential for day-in, day-out Linux work. Qualys, a well-regarded security company, discovered that this essential command can be abused to give the user full root capabilities, but only on systems with SELinux enabled.

Or, as they’d say on The Outer Limits, “We will control the horizontal. We will control the vertical.” This is not what you want to see on your Linux server.

Read more at ZDNet

Core Kubernetes: Jazz Improv over Orchestration

This is the first in a series of blog posts that details some of the inner workings of Kubernetes. If you are simply an operator or user of Kubernetes, you don’t necessarily need to understand these details. But if you prefer depth-first learning and really want to understand how things work, this is for you.

This article assumes a working knowledge of Kubernetes. I’m not going to define what Kubernetes is or the core components (e.g. Pod, Node, Kubelet).

Read more at Heptio

The Difference Between Data Science and Data Analytics

Data science and data analytics: people working in the tech field or other related industries probably hear these terms all the time, often used interchangeably. However, although they may sound similar, the two are quite different and have differing implications for business. Knowing how to use the terms correctly can have a large impact on how a business is run, especially as the amount of available data grows and becomes a greater part of our everyday lives.

Data Science

Much like science is a broad term that includes a number of specialties and emphases, data science is a broad term for a variety of models and methods used to get information. Under the umbrella of data science fall the scientific method, math, statistics, and other tools that are used to analyze and manipulate data. If it’s a tool or process applied to data to analyze it or extract some sort of information from it, it likely falls under data science.

Read more at insideHPC

Enough with the Microservices

Don’t even consider microservices unless you have a system that’s too complex to manage as a monolith. The majority of software systems should be built as a single monolithic application. Do pay attention to good modularity within that monolith, but don’t try to separate it into separate services.

– Martin Fowler

If you can’t build a well-structured monolith, what makes you think microservices is the answer?

– Simon Brown

Intro

Much has been written on the pros and cons of microservices, but unfortunately I still see them being pursued in cargo-cult fashion in the growth-stage startup world. At the risk of rewriting Martin Fowler’s Microservice Premium article, I thought it would be good to write up some thoughts so that I can send them to clients when the topic arises, and hopefully help people avoid some of the mistakes I’ve seen. Choosing a path toward a given architecture or technology on the basis of so-called best-practices articles found online is a costly mistake, and if I can help a single company avoid it, then writing this will have been worth it.

Read more at Adam Drake’s blog