
Cloud-Native, Seven Years On…

The high-level concept of cloud-native is simple: systems that give users a better experience by virtue of operating in the cloud in a genuinely cloud-centric way. In other words, the cloud may make an existing database easier to start up, but if the database doesn’t support elasticity then it can’t take advantage of the scaling capabilities of the cloud.

The motivation for defining cloud-native was driven by two distinct aspects. First, we wanted to capture the thinking and architecture that went into creating properly “cloudy” systems. Second, we wanted to highlight that not every system that has been rebranded “cloud” was (or is) actually taking proper advantage of the cloud.

Fast forward to today and we have a new definition of cloud-native from the CNCF. The new definition is much simpler, offering three main characteristics:

  • Containerized
  • Dynamically orchestrated
  • Microservices oriented

Read more at The New Stack

New Network Security Standards Will Protect Internet’s Routing

Electronic messages traveling across the internet are under constant threat from data thieves, but new security standards created with the technical guidance of the National Institute of Standards and Technology (NIST) will reduce the risk of messages being intercepted or stolen. These standards address a security weakness that has been a part of the internet since its earliest days.

The set of standards, known as Secure Inter-Domain Routing (SIDR), has been published by the Internet Engineering Task Force (IETF) and represents the first comprehensive effort to defend the internet’s routing system from attack. The effort has been led by a collaboration between NIST and the Department of Homeland Security (DHS) Science and Technology Directorate, working closely with the internet industry. The new specifications provide the first standardized approach for global defense against sophisticated attacks on the internet’s routing system.

Read more at NIST

Starting Out In Development – Subversion

This is an entry in a series about Starting Out In Development. The goal of this series is to provide brief introductions to critical tools, concepts, and skills you’ll need as a developer.

By now you should be familiar with what version control is. If you’re unsure, check out my article introducing it. Now that you know what version control is in general, it’s time to get familiar with some of its specific implementations. In this article, we’ll discuss Subversion, its take on version control, and how to use it.

Subversion (often abbreviated as SVN) is a software implementation of version control. It was created by CollabNet and is now a major Apache project. It’s been around since the year 2000 and is fairly actively developed and updated. There are also many tools that can make using SVN a bit easier and more convenient. Among the most popular of those tools is TortoiseSVN. I’ll be using that later in the examples that follow.
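
As a rough preview of the command-line workflow (the repository URL, directory name, and commit message below are placeholders, not from the article), a typical Subversion session looks something like this:

  # Check out a working copy from a repository (URL is a placeholder)
  svn checkout https://svn.example.com/myproject/trunk myproject
  cd myproject

  # Review local changes, then record them on the server
  svn status
  svn diff
  svn commit -m "Describe the change"

  # Pull in changes other people have committed
  svn update

TortoiseSVN wraps these same operations in a graphical interface, which is what the later examples use.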

Read more at Dev.to

Understanding Shared Libraries in Linux

In programming, a library is an assortment of pre-compiled pieces of code that can be reused in a program. Libraries simplify life for programmers, in that they provide reusable functions, routines, classes, data structures, and so on (written by another programmer), which they can use in their programs.

For instance, if you are building an application that needs to perform math operations, you don’t have to write new math functions from scratch; you can simply use existing functions from the libraries available for that programming language.

Examples of libraries in Linux include libc (the standard C library) or glibc (GNU version of the standard C library), libcurl (multiprotocol file transfer library), libcrypt (library used for encryption, hashing, and encoding in C) and many more.
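
As a small illustration (the program and file names here are made up, not from the article), you can list the shared libraries an existing binary is linked against with ldd, and link your own C program against the math library by passing -lm to gcc:

  # Show which shared libraries a binary depends on
  ldd /bin/ls

  # Compile a program that calls math functions such as sqrt()
  # and link it against libm, the math library
  gcc myprog.c -o myprog -lm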

Read more at Tecmint

Migrating to Linux: An Introduction

Computer systems running Linux are everywhere. Linux runs our Internet services, from Google search to Facebook, and more. Linux also runs in a lot of devices, including our smartphones, televisions, and even cars. Of course, Linux can also run on your desktop system. If you are new to Linux, or you would just like to try something different on your desktop computer, this series of guides will briefly cover the basics and help you in migrating to Linux from another system.

Switching to a different operating system can be a challenge because every operating system provides a different way of doing things. What is second nature on one system can take a frustrating amount of time on another, as we have to look up how to do things online or in books.

Vive la différence

As you get started with Linux, one thing you’ll likely notice is that Linux is packaged differently. In other operating systems, many things are bundled together and are just a part of the package. In Linux, however, each component is called out separately. For example, under Windows, the graphical interface is just a part of Windows. With Linux, you can choose from multiple graphical environments, like GNOME, KDE Plasma, Cinnamon, and MATE, to name a few.

At a high level, a Linux installation includes the following things:

  1. The kernel

  2. System programs and files residing on disk

  3. A graphical environment

  4. A package manager

  5. Applications

The Kernel

The core of the operating system is called the kernel. The kernel is the engine under the hood. It allows multiple applications to run simultaneously, and it coordinates their access to common services and devices so everything runs smoothly.
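
If you want to see which kernel your system is running, uname will tell you (the version string in the comment is just an example):

  uname -r      # prints the running kernel version, e.g. 4.14.3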

System programs and files

System programs reside on disk in a standard hierarchy of files and directories. These system programs and files include services (called daemons) that run in the background, utilities for various operations, configuration files, and log files.

Instead of running inside the kernel, these system programs are applications that perform tasks for basic system operation, such as setting the date and time or connecting to the network so you can get on the Internet.

Included here is the init program, the very first application that runs. This program is responsible for starting all the background services (like a web server), bringing up networking, and starting the graphical environment. The init program will launch other system programs as needed.
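
If you are curious which init program your system uses, you can ask for the name of process ID 1, the very first process (the command is standard, but the answer varies by distribution; systemd is a common one):

  ps -p 1 -o comm=      # prints the name of process 1, e.g. "systemd"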

Other system programs provide facilities for simple tasks like adding users and groups, changing your password, and configuring disks.

Graphical Environment

The graphical environment is really just more system programs and files. The graphical environment provides the usual windows with menus, a mouse pointer, dialog boxes, status indicators, and more.

Note that you aren’t stuck with the graphical environment that was originally installed. You can change it out for others, if you like. Each graphical environment will have different features. Some look more like Apple OS X, some look more like Windows, and others are unique and don’t try to mimic other graphical interfaces.

Package Manager

The package manager used to be hard to grasp for people coming from a different system, but nowadays there is a similar system most people are already familiar with: the app store. The packaging system is really an app store for Linux. Instead of installing this application from that website and the other application from a different site, you can use the package manager to select which applications you want. The package manager then installs the applications from a central repository of pre-built open source applications.
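
For example, on an Ubuntu or Debian-based system (assumed here; Fedora, Arch, and others use different package manager commands), installing an application from the repositories looks like this:

  sudo apt update           # refresh the list of available packages
  apt search gimp           # find a package by name or description
  sudo apt install gimp     # install it along with any dependencies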

Applications

Linux comes with many pre-installed applications. And you can get more from the package manager. Many of the applications are quite good, while others need work. Sometimes the same application will have different versions that run on Windows, Mac OS, or Linux.

For example, you can use the Firefox browser and Thunderbird (for email). You can use LibreOffice as an alternative to Microsoft Office and run games through Valve’s Steam program. You can even run some native Windows applications on Linux using WINE.

Installing Linux

Your first step is typically to install a Linux distribution. You may have heard of Red Hat, Ubuntu, Fedora, Arch Linux, and SUSE, to name a few. These are different distributions of Linux.

Without a Linux distribution, you would have to install each component separately. Many components are developed and provided by different groups of people, so to install each component separately would be a long, tedious task. Luckily, the people who build distros do this work for you. They grab all the components, build them, make sure they work together, and then package them up under a single installation.

Various distributions may make different choices and use different components, but it’s still Linux. Applications written to work in one distribution frequently run on other distributions just fine.

If you are a Linux beginner and want to try out Linux, I recommend installing Ubuntu. There are other distros you can look into as well: Linux Mint, Fedora, Debian, Zorin OS, elementary OS, and many more. In future articles, we will cover additional facets of a Linux system and provide more information on how to get started using Linux.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Deploy Atomically with Travis & npm

I think I am a software developer because I am lazy.

The second or third time I have to perform the same exact task, I find myself saying, “Ugh, can’t I tell the computer how to do it?” 

So imagine my reaction when our team’s deployment process started looking like this:

  1. git pull
  2. npm run build to create the minified packages
  3. git commit -am "Create Distribution" && git push
  4. Navigate to GitHub
  5. Create a new release

I was thinking steps 1–3 are easy enough to put in a shell script and steps 4–5 are probably scriptable, but is that all? What else needs to be done?

  • The version in package.json was never getting updated, and it would be nice to have that in sync with the GitHub release (one way to fold this into a script is sketched after this list).
  • Can this script be run after the CI build without having to task a human to manually run it?
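
A minimal sketch of the scriptable part, using the commands from the list above (the npm version call is an assumption about one way to keep package.json in step with the release, not part of the original process):

  #!/bin/sh
  set -e                                 # stop at the first command that fails

  git pull                               # 1. get the latest changes
  npm run build                          # 2. create the minified packages
  git commit -am "Create Distribution"   # 3. commit the build output
  npm version patch                      # bump package.json, commit, and tag it
  git push --follow-tags                 # push the commits and the new tag

That still leaves steps 4 and 5, and the question of running it from the CI build instead of by hand.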

Read more at Dev.to

Tenets of SRE

While the nuances of workflows, priorities, and day-to-day operations vary from SRE team to SRE team, all share a set of basic responsibilities for the service(s) they support, and adhere to the same core tenets. In general, an SRE team is responsible for the availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning of their service(s). We have codified rules of engagement and principles for how SRE teams interact with their environment—not only the production environment, but also the product development teams, the testing teams, the users, and so on. Those rules and work practices help us to maintain our focus on engineering work, as opposed to operations work.

Ensuring a Durable Focus on Engineering

As already discussed, Google caps operational work for SREs at 50% of their time. Their remaining time should be spent using their coding skills on project work. In practice, this is accomplished by monitoring the amount of operational work being done by SREs, and redirecting excess operational work to the product development teams: reassigning bugs and tickets to development managers, [re]integrating developers into on-call pager rotations, and so on.

The following section discusses each of the core tenets of Google SRE.

Read more at O’Reilly

Beat the Biggest Threat to the Open Organization: Bias

Bias is the single greatest threat to the open organization. This is no exaggeration. In traditional organizations, responsibilities for evaluating ideas, strategies, contributions—even people—typically fall on (presumably) trained managers. In open organizations, that responsibility rests with contributors of all sorts.

“In organizations that are fit for the future,” writes Jim Whitehurst in The Open Organization, “Leaders will be chosen by the led. Contribution will matter more than credentials […] Compensation will be set by peers, not bosses.” According to Whitehurst, an open organization is a meritocracy: “Those people who have earned their peers’ respect over time drive decisions.” But the way humans allocate their respect is itself prone to bias. And imagine what can happen when biased decision-making results in the wrong leaders being chosen, certain contributions being over- or undervalued, or compensation being allocated on something other than merit.

The following checklist covers several documented phenomena that, sometimes unconsciously, skew decision-making practices.

Read more at OpenSource.com

Scary Linux Commands for Halloween

With Halloween so fast approaching, it’s time for a little focus on the spookier side of Linux. What commands might bring up images of ghosts, witches and zombies? Which might encourage the spirit of trick or treat?

crypt

Well, we’ve always got crypt. Despite its name, crypt is not an underground vault or a burial pit for trashed files, but a command that encrypts file content. These days “crypt” is generally implemented as a script that emulates the older crypt command by calling a binary called mcrypt to do its work. Using the mcrypt command directly is an even better option.
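
If mcrypt is installed, basic usage looks roughly like this (the file name is a placeholder, and exact options vary between versions):

  mcrypt secrets.txt          # prompts for a passphrase and writes secrets.txt.nc
  mcrypt -d secrets.txt.nc    # prompts again and decrypts back to secrets.txt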

Read more at Network World

Apache Software Foundation Is Bringing Open Source ML to the Masses with PredictionIO

The Apache Software Foundation is opening up the field of machine learning with its new open source project, PredictionIO. But how are they making it easier for newcomers to learn this devilishly complicated bit of coding? The clever use of templates, of course.

The Apache Software Foundation has announced a brand-new machine learning project, PredictionIO. Built on top of a state-of-the-art open source stack, this machine learning server is designed for developers and data scientists to create predictive engines for any machine learning task.

PredictionIO is designed to democratize machine learning. How? By providing a full stack that lets developers create deployable applications “without having to cobble together underlying technologies”. Making it easier to use should widen the appeal and keep the machine learning bottleneck from getting any worse.

Read more at Jaxenter