
A Hacker’s Guide to Kubernetes Networking

This post is the first in a series. I’ll share how Kubernetes and the Container Network Interface (CNI) work, along with some hacking tricks for learning their internals and manipulating them. Future posts will cover high-performance storage and inter-process communication (IPC) tricks we use with containers.

Container Networking Basics

Containers rely on Linux kernel partitioning features called cgroups and namespaces. Container processes are mapped to network, storage, and other namespaces. Each namespace “sees” only a subset of OS resources, which guarantees isolation between containers.
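
The namespace mechanism is easy to poke at by hand. As a minimal sketch (the namespace name “demo” is arbitrary, and root privileges are assumed), the iproute2 tools can create a network namespace and show that it sees only its own devices:

# create a throwaway network namespace and list what it can see
sudo ip netns add demo
sudo ip netns exec demo ip link show   # only an isolated loopback device appears
sudo ip netns delete demo

A process launched inside “demo” would see none of the host’s real network interfaces.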

Read more at The New Stack

NPM or Yarn? Node.js Devs Pick Their Package Manager

Mere months after it was open-sourced by Facebook, Yarn has NPM on the run. The upstart JavaScript package manager has gained a quick foothold in the Node.js community, particularly among users of the React JavaScript UI library.

Known for faster installation, Yarn gives developers an improved ability to manage code dependencies in their Node.js projects, proponents say. It features a deterministic install algorithm and a lockfile capability that lists exact version numbers of all project dependencies. 
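
As a rough illustration of that workflow (the package name left-pad is just an example):

# add a dependency; Yarn pins the exact resolved version in yarn.lock
yarn add left-pad
# on another machine, reproduce exactly the locked dependency tree
yarn install

Committing yarn.lock to version control is what makes installs reproducible across machines.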

Read more at InfoWorld

What’s a Linked List, Anyway? [Part 1]

Regardless of which language we start coding in, one of the first things that we encounter is data structures, which are the different ways that we can organize our data; variables, arrays, hashes, and objects are all types of data structures. But these are still just the tip of the iceberg when it comes to data structures; there are a lot more, some of which start to sound super complicated the more that you hear about them.

One of those complicated things for me has always been linked lists. I’ve known about linked lists for a few years now, but I can never quite keep them straight in my head. I only really think about them when I’m preparing for (or sometimes, in the middle of) a technical interview, and someone asks me about them. 

Read more at Vaidehi Joshi

System Hardening with Ansible

The DevOps pipeline is constantly changing; therefore, relevant security controls must be applied contextually.

We want to be secure, but I think all of us would rather spend our time developing and deploying software. Keeping up with server updates and all of the other security tasks is like cleaning your home – you know it has to be done, but you really just want to enjoy your clean home. The good news is you can hire a “service” to keep your application security up-to-date, giving you more time to develop.

At the recent All Day DevOps conference, Akash Mahajan (@makash), a Founder/Director at Appsecco, discussed how to harden your system’s security. In addition to his role at Appsecco, Akash is also involved as a local leader with the Open Web Application Security Project (OWASP).

Read more at DZone

Keynote: Community Software Powers the Machine by Mark Atwood

HPE’s Mark Atwood describes some parallels between how open source software is developed and the science fiction community. 


6 Reasons Why Open Source Software Lowers Development Costs

In some organizations, faster development is the primary motivation for using Open Source Software (OSS). For others, cost savings or flexibility is the most important factor.

Last week, we detailed how OSS speeds development. Now let’s explore how open source software reduces development costs.

6 reasons OSS is lower cost

Using OSS can significantly reduce development costs in a number of proven ways. It can be much less expensive to acquire than commercially-licensed software or in-house developed software. These cost savings start with acquisition, but extend to deployment, support, and maintenance. Using open source software:

1. Saves 20-55% over commercial solutions, according to our Linux Foundation Consulting clients.

2. Avoids functionality overkill and bundling — Many proprietary products have an overload of capabilities that clients rarely use, need, or even want. Often, they’re bundled, so they must be paid for anyway.

3. Avoids unwieldy closed-system deployments — OSS eliminates the costly pricing games and traps that come with commercial sales and negotiations.

4. Helps prevent vendor lock-in — Even where commercial OSS vendors provide a channel to deliver and support open source, customers have the freedom to switch vendors or even drop commercial support entirely, without changing the application or code in use.

5. Avoids proprietary consulting traps — OSS also helps with consulting, training, and support costs because there is no exclusive access to the technology. You can often multi-source support, or even receive support from a vibrant community of developers who are actually working with the code on a daily basis.

6. Benefits from ongoing community support — Active communities often provide higher quality support than commercial support organizations, and what’s more, community support is free.

Whether your organization chooses OSS for its speed of development, lower costs, flexibility, or because it keeps you on the leading edge of technology, OSS provides a competitive advantage.

Next up in this series, we’ll discuss why open source software is more flexible. You can also download the entire series today in our Fundamentals of Professional Open Source Management sample chapter.


Read more:

What Is Open Source Software?

Using Open Source Software to Speed Development and Gain Business Advantage

Why Using Open Source Software Helps Companies Stay Flexible and Innovate

Community Software, Science Fiction, and The Machine

Not many presentations can start with a video co-promoting a new computer and the latest Star Trek movie, but Mark Atwood, Director of Open Source Engagement at Hewlett Packard Enterprise, started his LinuxCon Europe keynote with a video about The Machine and Star Trek Beyond.

The Machine uses a new kind of physics for computation and data storage allowing it to be very fast, energy efficient, and agile. The Machine runs Linux, and Atwood says that “the best way to promote the use of any sort of new technology is to make it open source.”

There are quite a few parallels between how open source software is developed and the science fiction community. Atwood talked about how they even share some big milestone years: Linux is 25 years old; Star Trek is 50; and the genre of science fiction turned 90. 

The story starts in the early 20th century when high technology meant vacuum tubes and wireless radio and the field was full of passionate hobbyists building on each other’s ideas. A man in New York City named Hugo Gernsback helped facilitate this discussion via articles and letters from readers in his magazines. Atwood points out that they were essentially open sourcing the conversation through a moderated discussion forum with a one-month cycle time. 

In 1926, Gernsback started a new magazine, Amazing Stories, thus creating the genre of science fiction and the beginning of science fiction fandom. This magazine was run like his technology magazines: he would publish stories, and later issues would contain stories written in response to earlier ones, along with letters from readers discussing them; again, these were ideas built on ideas. The people in this community gathered together in 1939 for the world’s first science fiction convention. Also in 1939, on the opposite coast, two men who’d grown up reading those technology and science fiction magazines founded Hewlett-Packard out of their garage in Palo Alto, which, as Atwood points out, helped create the very idea of Silicon Valley.

Building ideas on top of ideas is at the core of how open source software and science fiction came to be what they are today. Atwood says that “science fiction is a way to have a conversation about the kind of world that we can make; the kind of world that can be made of the technology that we have and that we’re building and the world we can make out of our various ideas for organizing people.”

To get the full experience of Atwood’s talk, you should watch the video!

Interested in speaking at Open Source Summit North America on September 11 – 13? Submit your proposal by May 6, 2017. Submit now >>

Not interested in speaking but want to attend? Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the all-access attendee registration price. Register now to save over $300!

With Azure Container Service, Microsoft Works to Make Container Management Boring

Earlier this week, Microsoft made the Kubernetes container orchestration service generally available on Azure Container Service, alongside the other predominant container orchestration engines Docker Swarm and Mesosphere’s Data Center Operating System (DC/OS). The move is one more step in building out the service, Kubernetes co-founder Brendan Burns told The New Stack.

Burns moved from Google to Microsoft seven months ago to run ACS with the vision of turning it into “a really managed service” that can deliver not just tools for working with containers, but work as a whole Containers-as-a-Service (CaaS) platform. … As the technology matures, the emphasis shifts from how you use containers to what you use them for, he pointed out. 
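
For readers who want to try it, creating a Kubernetes cluster on ACS looks roughly like this with the Azure CLI (the resource group and cluster names below are placeholders, and the exact flags may evolve with the service):

az group create --name myResourceGroup --location westus
az acs create --orchestrator-type kubernetes --resource-group myResourceGroup --name myK8sCluster --generate-ssh-keys
az acs kubernetes get-credentials --resource-group myResourceGroup --name myK8sCluster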

Read more at The New Stack

How to Securely Transfer Files Between Servers with scp

If you run a live or home server, moving files between local machines or between two remote machines is a basic requirement. There are many ways to achieve that. In this article, we talk about scp (the secure copy command), which encrypts the transferred files and passwords so no one can snoop. With scp you don’t have to start an FTP session or log into the system.

The scp tool relies on SSH (Secure Shell) to transfer files, so all you need is the username and password for the source and target systems. Another advantage is that, in addition to transferring data between a local and a remote machine, scp can move files between two remote servers directly from your local machine; in that case, you need usernames and passwords for both servers. Unlike rsync, you don’t have to log into either server to transfer data from one machine to another.
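
Because scp rides on SSH, SSH-level details carry over. For example, if the server listens on a non-standard port (2222 below is just an example), pass it with the -P flag:

scp -P 2222 file.txt username@server_ip:/path_to_remote_directory/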

This tutorial is aimed at new Linux users, so I will keep things as simple as possible. Let’s get started.

Copy a single file from the local machine to a remote machine:

The scp command needs a source and destination to copy files from one location to another location. This is the pattern that we use:

scp localmachine/path_to_the_file username@server_ip:/path_to_remote_directory

In the following example I am copying a local file from my macOS system to my Linux server (macOS, being a UNIX-based operating system, has native support for all UNIX/Linux tools).

scp /Volumes/MacDrive/Distros/fedora.iso swapnil@10.0.0.75:/media/prim_5/media_server/

Here, ‘swapnil’ is the user on the server and 10.0.0.75 is the server IP. It will ask you to provide the password for that user, and then copy the file securely.
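
If you’d rather not type the password on every transfer, you can optionally install your SSH public key on the server first (this assumes you already have a key pair, e.g. one generated with ssh-keygen):

ssh-copy-id swapnil@10.0.0.75

After that, scp authenticates with your key instead of prompting for a password.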

I can do the same from my local Linux machine:

scp /home/swapnil/Downloads/fedora.iso swapnil@10.0.0.75:/media/prim_5/media_server/

If you are running Windows 10, then you can use Ubuntu bash on Windows to copy files from the Windows system to a Linux server:

scp /mnt/c/Users/swapnil/Downloads/fedora.iso swapnil@10.0.0.75:/media/prim_5/media_server/

Copy a local directory to a remote server:

If you want to copy the entire local directory to the server, then you can add the -r flag to the command:

scp -r localmachine/path_to_the_directory username@server_ip:/path_to_remote_directory/

Make sure that the source directory doesn’t have a forward slash at the end of the path; the destination path, however, *must* end with a forward slash.
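
For example (with hypothetical paths), the following copies the directory itself, so it lands on the server as /backups/projects:

scp -r /home/swapnil/projects swapnil@10.0.0.75:/backups/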

Copy all files in a local directory to a remote directory:

What if you want to copy only the files inside a local directory to a remote directory? It’s simple: just add a forward slash and * at the end of the source directory and give the path of the destination directory. Don’t forget to add the -r flag to the command:

scp -r localmachine/path_to_the_directory/* username@server_ip:/path_to_remote_directory/

Copying files from a remote server to a local machine:

If you want to copy a single file, a directory, or all files from the server to the local machine, follow the same examples above and simply swap the source and destination.

Copy a single file:

scp username@server_ip:/path_to_the_remote_file local_machine/path_to_the_file

Copy a remote directory to a local machine:

scp -r username@server_ip:/path_to_remote_directory local-machine/path_to_the_directory/

Make sure that the source directory doesn’t have a forward slash at the end of the path; the destination path, however, *must* end with a forward slash.

Copy all files in a remote directory to a local directory:

scp -r username@server_ip:/path_to_remote_directory/* local-machine/path_to_the_directory/ 

Copy files from one directory to another on the same server, securely from your local machine:

Usually, I would ssh into the machine and then use the rsync command to perform the job, but with scp, I can do it easily without having to log into the remote server.

Copy a single file:

scp username@server_ip:/path_to_the_remote_file username@server_ip:/path_to_destination_directory/

Copy a directory from one location on a remote server to a different location on the same server:

scp -r username@server_ip:/path_to_the_remote_directory username@server_ip:/path_to_destination_directory/

Copy all files in a remote directory to another directory on the same server:

scp -r username@server_ip:/path_to_source_directory/* username@server_ip:/path_to_the_destination_directory/

Copy files from one remote server to another remote server, from a local machine:

Currently, to copy files from one remote server to another, I have to ssh into one of them and run the rsync command from there. With scp, I can move files between two remote servers directly from my local machine, without logging into either one.

Copy a single file:

scp username@server1_ip:/path_to_the_remote_file username@server2_ip:/path_to_destination_directory/
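
If the two servers can’t authenticate to each other directly, OpenSSH’s scp also offers a -3 flag that relays the transfer through your local machine; it is slower, but only your local credentials for each server are needed:

scp -3 username@server1_ip:/path_to_the_remote_file username@server2_ip:/path_to_destination_directory/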

Copy a directory from one remote server to a different location on another remote server:

scp -r username@server1_ip:/path_to_the_remote_directory username@server2_ip:/path_to_destination_directory/

Copy all files in a directory on one remote server to a directory on another remote server:

scp -r username@server1_ip:/path_to_source_directory/* username@server2_ip:/path_to_the_destination_directory/

Conclusion

As you can see, once you understand how things work, it’s quite easy to move your files around. That’s what Linux is all about: invest your time in understanding some basics, and then it’s a breeze!

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

A Brief History of Blockchain

We’re now in the midst of another quiet revolution: blockchain, a distributed database that maintains a continuously growing list of ordered records, called “blocks.” Consider what’s happened in just the past 10 years:

  • The first major blockchain innovation was bitcoin, a digital currency experiment. The market cap of bitcoin now hovers between $10 billion and $20 billion, and the currency is used by millions of people for payments, including a large and growing remittances market.
  • The second innovation was called blockchain, which was essentially the realization that the underlying technology that operated bitcoin could be separated from the currency and used for all kinds of other interorganizational cooperation. Almost every major financial institution in the world is doing blockchain research at the moment, and 15% of banks are expected to be using blockchain in 2017.
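
To make “a continuously growing list of ordered records” concrete, here is a toy sketch (the data is purely illustrative) of the core idea behind a blockchain: each block embeds the hash of the block before it, so altering any past record invalidates every hash that follows:

# a toy two-block hash chain, runnable on a typical Linux system
b0=$(printf 'genesis' | sha256sum | awk '{print $1}')
b1=$(printf 'alice pays bob 5; prev=%s' "$b0" | sha256sum | awk '{print $1}')
echo "$b0"; echo "$b1"   # changing the genesis data would change b1 too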

Read more at HBR