
Introducing the Non-Code Contributor’s Guide

It was May 2018 in Copenhagen, and the Kubernetes community was enjoying the contributor summit at KubeCon/CloudNativeCon, complete with the first run of the New Contributor Workshop. It was a time of tremendous collaboration between contributors, with topics ranging from signing the CLA to deep technical conversations. Along with the vast exchange of information and ideas, however, came continued scrutiny of the topics at hand to ensure that the community was being as inclusive and accommodating as possible. Over that spring week, the pieces under the microscope included not only the many themes being covered and how they were being presented, but also the overarching characteristics of the people contributing and the skill sets involved. From the discussions and analysis that followed grew the idea that the community was not benefiting as much as it could from the many people who wanted to contribute, but whose strengths were in areas other than writing code.

This all led to an effort called the Non-Code Contributor’s Guide.

Now, it’s important to note that Kubernetes is rare, if not unique, in the open source world, in that it was defined very early on as both a project and a community. While the project itself is focused on the codebase, it is the community of people driving it forward that makes the project successful. The community works together with an explicit set of community values, guiding the day-to-day behavior of contributors whether on GitHub, Slack, Discourse, or sitting together over tea or coffee.

By having a community that values people first, and explicitly values a diversity of people, the Kubernetes project is building a product to serve people with diverse needs. The different backgrounds of the contributors bring different approaches to the problem solving, with different methods of collaboration, and all those different viewpoints ultimately create a better project.

The Non-Code Contributor’s Guide aims to make it easy for anyone to contribute to the Kubernetes project in a way that makes sense for them. This can be in many forms, technical and non-technical, based on the person’s knowledge of the project and their available time. Most individuals are not developers, and most of the world’s developers are not paid to fully work on open source projects. Based on this we have started an ever-growing list of possible ways to contribute to the Kubernetes project in a Non-Code way!

Get Involved

There are many ways that you can contribute to the Kubernetes community without writing a single line of code.

The guide to getting started with Kubernetes project contribution is documented on GitHub, and as the Non-Code Contributor’s Guide is part of that Kubernetes Contributor Guide, it can be found here. As stated earlier, this list is not exhaustive and will continue to be a work in progress.

To date, the typical Non-Code contributions fall into the following categories:

  • Roles that are based on skill sets other than “software developer”
  • Non-Code contributions in primarily code-based roles
  • “Post-Code” roles that are not code-based, but require knowledge of either the code base or the management of the code base

If you, dear reader, have any additional ideas for a Non-Code way to contribute, whether or not it fits in an existing category, the team would always appreciate your help in expanding the list.

If a contribution of the Non-Code nature appeals to you, please read the Non-Code Contributions document, and then check the Contributor Role Board to see if there are any open positions where your expertise could be best used! If there are no listed open positions that match your skill set, drop on by the #sig-contribex channel on Slack, and we’ll point you in the right direction.

We hope to see you contributing to the Kubernetes community soon!

This article originally appeared on the Kubernetes Blog.

Learn Node.js, Unit 3: A tour of Node.js

Node is often described as “JavaScript on the server”, but that doesn’t quite do it justice. In fact, any description of Node.js I can offer will be unfairly reductionist, so let me start with the one provided by the Node team:

“Node.js is a JavaScript runtime built on Chrome’s V8 JavaScript engine.” (Source)

That’s a fine description, but it kinda needs a picture, doesn’t it? If you look on the Node.js website, you’ll notice there are no high-level diagrams of the Node.js architecture. Yet, if you search for “Node.js architecture diagram” there are approximately 178 billion different diagrams that attempt to paint an overall picture of Node (I’ll refer to Node.js as Node from now on). After looking at a few of them, I just didn’t see one that fit with the way I’ve structured the material in this course, so I came up with this:

Figure 1. The Node.js architecture stack
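
For a concrete feel for what “JavaScript on the server” means, here is a throwaway one-liner of my own (a sketch, not part of the course material) that uses nothing but the node binary and its built-in http module:

# terminal 1: start a tiny HTTP server on port 3000
node -e "require('http').createServer((req, res) => res.end('Hello from Node\n')).listen(3000)"
# terminal 2: fetch the response
curl http://localhost:3000

The same V8 engine that runs JavaScript in Chrome is executing that callback here, just outside the browser.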

Read more at IBM Developers

What Is Machine Learning? We Drew You Another Flowchart

The vast majority of the AI advancements and applications you hear about refer to a category of algorithms known as machine learning. (For more background on AI, check out our first flowchart here.)

Machine-learning algorithms use statistics to find patterns in massive amounts of data. And data, here, encompasses a lot of things: numbers, words, images, clicks, what have you. If it can be digitally stored, it can be fed into a machine-learning algorithm.

Machine learning is the process that powers many of the services we use today—recommendation systems like those on Netflix, YouTube, and Spotify; search engines like Google and Baidu; social-media feeds like Facebook and Twitter; voice assistants like Siri and Alexa. The list goes on.

Read more at MIT Technology Review

Practical Networking for Linux Admins: TCP/IP

Get to know networking basics with this tutorial from our archives.

Linux grew up with a networking stack as part of its core, and networking is one of its strongest features. Let’s take a practical look at some of the TCP/IP fundamentals we use every day.

It’s IP Address

I have a peeve. OK, more than one. But for this article just one, and that is using “IP” as a shortcut for “IP address”. They are not the same. IP = Internet Protocol. You’re not managing Internet Protocols, you’re managing Internet Protocol addresses. If you’re creating, managing, and deleting Internet Protocols, then you are an uber guru doing something entirely different.

Yes, OSI Model is Relevant

TCP is short for Transmission Control Protocol. TCP/IP is shorthand for describing the Internet Protocol Suite, which contains multiple networking protocols. You’re familiar with the Open Systems Interconnection (OSI) model, which categorizes networking into seven layers:

  • 7. Application layer
  • 6. Presentation layer
  • 5. Session layer
  • 4. Transport layer
  • 3. Network layer
  • 2. Data link layer
  • 1. Physical layer

The application layer includes the network protocols you use every day: SSH, TLS/SSL, HTTP, IMAP, SMTP, DNS, DHCP, streaming media protocols, and tons more.

TCP operates in the transport layer, along with its friend UDP, the User Datagram Protocol. TCP is more complex; it performs error-checking, and it tries very hard to deliver your packets. There is a lot of back-and-forth communication with TCP as it transmits and verifies transmission, and when packets get lost it resends them. UDP is simpler and has less overhead. It sends out datagrams once, and UDP neither knows nor cares if they reach their destination.

TCP is for ensuring that data is transferred completely and in order. If a file transfers with even one byte missing, it’s no good. UDP is good for lightweight, stateless transfers such as NTP and DNS queries, and it is efficient for streaming media. If your music or video has a blip or two, it doesn’t render the whole stream unusable.
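
If you want to see the difference for yourself, netcat makes for a quick, rough demonstration. This is only a sketch, and flag syntax varies between netcat implementations; the commands below assume the OpenBSD netcat shipped with most distributions:

$ nc -l 9000              # terminal 1: TCP listener on port 9000
$ nc 127.0.0.1 9000       # terminal 2: connect; the lines you type arrive reliably and in order
$ nc -u -l 9000           # terminal 1: now a UDP listener on the same port
$ nc -u 127.0.0.1 9000    # terminal 2: datagrams go out once, with no connection setup and no retransmission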

The physical layer refers to your networking hardware: Ethernet and wi-fi interfaces, cabling, switches, whatever gadgets it takes to move your bits and the electricity to operate them.

Ports and Sockets

Linux admins and users have to know about ports and sockets. A network socket is the combination of an IP address and port number. Remember back in the early days of Ubuntu, when the default installation did not include a firewall? No ports were open in the default installation, so there were no entry points for an attacker. “Opening a port” means starting a service, such as an HTTP, IMAP, or SSH server. Then the service opens a listening port to wait for incoming connections. “Opening a port” isn’t quite accurate because it’s really referring to a socket. You can see these with the netstat command. This example displays only listening sockets and the names of their services:

$ sudo netstat -plnt 
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address     Foreign Address State  PID/Program name
tcp        0      0 127.0.0.1:3306    0.0.0.0:*       LISTEN 1583/mysqld     
tcp        0      0 127.0.0.1:5901    0.0.0.0:*       LISTEN 13951/qemu-system-x  
tcp        0      0 192.168.122.1:53  0.0.0.0:*       LISTEN 2101/dnsmasq
tcp        0      0 192.168.122.1:80  0.0.0.0:*       LISTEN 2001/apache2
tcp        0      0 192.168.122.1:443 0.0.0.0:*       LISTEN 2013/apache2
tcp        0      0 0.0.0.0:22        0.0.0.0:*       LISTEN 1200/sshd            
tcp6       0      0 :::80             :::*            LISTEN 2057/apache2    
tcp6       0      0 :::22             :::*            LISTEN 1200/sshd            
tcp6       0      0 :::443            :::*            LISTEN 2057/apache2

This shows that MariaDB (whose executable is mysqld) is listening only on localhost at port 3306, so it does not accept outside connections. Dnsmasq is listening on 192.168.122.1 at port 53, so it is accepting external requests. SSH is wide open for connections on any network interface. As you can see, you have control over exactly what network interfaces, ports, and addresses your services accept connections on.

Apache is listening on two IPv4 and two IPv6 ports, 80 and 443. Port 80 is the standard unencrypted HTTP port, and 443 is for encrypted TLS/SSL sessions. The foreign IPv6 address of :::* is the same as 0.0.0.0:* for IPv4. Those are wildcards accepting all requests from all ports and IP addresses. If there are certain addresses or address ranges you do not want to accept connections from, you can block them with firewall rules.
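
As a quick sketch of what that looks like (using the 203.0.113.0/24 documentation range as a stand-in for whatever range you actually want to block), either ufw or plain iptables will do the job:

$ sudo ufw deny from 203.0.113.0/24 to any port 443 proto tcp
$ sudo iptables -A INPUT -s 203.0.113.0/24 -p tcp --dport 443 -j DROP

The first form is the friendlier front end; the second is the kind of raw rule it generates underneath. Pick one tool and stick with it so your rules don’t fight each other.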

A network socket is a TCP/IP endpoint, and a TCP/IP connection needs two endpoints. A socket represents a single endpoint, and as our netstat example shows, a single service can manage multiple endpoints at one time. A single IP address or network interface can manage multiple connections.

The example also shows the difference between a service and a process. apache2 is the service name, and here it appears as multiple processes listening on four different sockets. sshd is one service with one process listening on two different sockets.
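
On newer distributions, netstat may not be installed by default; the ss tool from the iproute2 package reports essentially the same information, with a slightly different column layout:

$ sudo ss -plnt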

Unix Sockets

Networking is so deeply embedded in Linux that its Unix domain sockets (a form of inter-process communication, or IPC) behave like TCP/IP networking. Unix domain sockets are endpoints between processes in your Linux operating system, and they operate only inside the Linux kernel. You can see these with netstat:

$ netstat -lx     
Active UNIX domain sockets (only servers)
Proto RefCnt Flags       Type       State         I-Node   Path
unix  2      [ ACC ]     STREAM     LISTENING     988      /var/run/dbus/system_bus_socket
unix  2      [ ACC ]     STREAM     LISTENING     29730    /run/user/1000/systemd/private
unix  2      [ ACC ]     SEQPACKET  LISTENING     357      /run/udev/control
unix  2      [ ACC ]     STREAM     LISTENING     27233    /run/user/1000/keyring/control

It’s rather fascinating how they operate. The SOCK_STREAM socket type behaves like TCP, with reliable, ordered delivery, and SOCK_DGRAM is the datagram counterpart, analogous to UDP but with low overhead. You’ve heard how everything in Unix is a file? Instead of networking protocols and IP addresses and ports, Unix domain sockets use special files, which you can see in the above example. They have inodes, metadata, and permissions just like the regular files we use every day.
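
You can poke at one of those special files directly. The exact path and output will differ on your system, but the leading “s” in the permissions string is what marks the file as a socket:

$ file /var/run/dbus/system_bus_socket
/var/run/dbus/system_bus_socket: socket
$ ls -l /var/run/dbus/system_bus_socket
srwxrwxrwx 1 root root 0 Nov  5 09:14 /var/run/dbus/system_bus_socket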

If you want to dig more deeply there are a lot of excellent books. Or, you might start with man tcp and man 2 socket. Next week, we’ll look at network configurations, and whatever happened to IPv6?

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Open Source 2018: It Was the Best of Times, It Was the Worst of Times

Recently, IBM announced that it would be acquiring Red Hat for $34 billion, a more-than-60-percent premium over Red Hat’s market cap and a nearly 12x multiple on revenues. In many ways, this was a clear sign that 2018 was the year commercial open source arrived, if there was ever any question about it.

Indeed, the Red Hat transaction is just the latest in a long line of multi-billion-dollar outcomes this year. To date, more than $50 billion has been exchanged in open source IPOs and mergers and acquisitions (M&A), and every one of the M&A deals qualifies as a “mega deal,” valued at over $5 billion.

  • IBM acquired Red Hat – $34 billion

  • Hortonworks merged with Cloudera – $5.2 billion

  • Elasticsearch IPO – $4+ billion

  • Pivotal IPO – $3.9 billion

  • MuleSoft acquired by Salesforce – $6.5 billion

If you’re a current open source software (OSS) shareholder, it may feel like the best of times. However, if you’re an OSS user or an emerging open source project or company, you might be feeling more ambivalent.

On the positive side, the fact that there have been such good financial outcomes should come as encouragement to the many still-private and outstanding open-source businesses (e.g., Confluent, Docker, HashiCorp, InfluxDB). And, we can certainly hope that this round of exits will encourage more investors to bet on OSS, enabling OSS to continue to be a prime driver of innovation.

However, not all of the news is rosy.

First, since many of these exits were in the form of M&A, we’ve actually lost some prime examples of independent OSS companies. For many years, there was a concern that Red Hat was the only example of a public open source company. Earlier this year, it seemed likely that the total would grow to 7 (Red Hat, Hortonworks, Cloudera, Elasticsearch, Pivotal, Mulesoft, and MongoDB). Assuming the announced M&As close as expected, the number of public open source companies is back down to four, and the combined market cap of public open source companies is much less than it was at the start of the year.

We Need to Go Deeper

I think it’s critical that we view these open source outcomes in the context of another unavoidable story — the growth in cloud computing.

Many of the open source companies involved share an overlooked common denominator: they’ve made most of their money through on-premises businesses. This probably comes as a surprise, since we regularly hear about cloud-related milestones, like the claims that more than 80% of server workloads are in the cloud, that open source drives ⅔ or more of cloud revenues, and that the cloud computing market is expected to reach $300 billion by 2021.

By contrast, the total revenues of all of the open source companies listed above were less than $7 billion. And almost all of the open source companies listed above have taken well over $200 million in investment each to build out the direct sales and support needed to sell to the large, on-premises enterprise market.

Open Source Driving Revenue, But for Whom?

The most common way that open source is used in the cloud is as a loss-leader to sell infrastructure. The largest cloud companies all offer free or near-free open source services that drive consumption of compute, networking, and storage.

To be clear, this is perfectly legal, and many of the cloud companies have contributed generously in both code and time to open source. However, the fact that it is difficult for OSS companies to monetize their own products with a hosted offering means that they are shut off from one of the most important and sustainable paths to scaling. Perhaps most importantly, independent OSS companies are largely closed off from the fastest-growing segment of the computing market. Only a handful of companies worldwide have the scale and capital to operate traditional public clouds (indeed, Amazon, Google, Microsoft, and Alibaba are among the largest companies on the planet), and those companies already control a disproportionate share of traffic, data, capital, and talent. So how can we ensure that investment, monetization, and innovation continue to flow in open source? And how can open source companies sustainably grow?

For some OSS companies, the answer is M&A. For others, the cloud monetization/competition question has led them to adopt controversial and more restrictive licensing policies, such as Redis Labs’ adoption of the Commons Clause and MongoDB’s Server Side Public License.

But there may be a different answer to cloud monetization. Namely, create a different kind of cloud, one based on decentralized infrastructure.

Rather than spending billions to build out data centers, decentralized infrastructure approaches (like Storj, SONM, and others) provide incentives for people around the world to contribute spare computing, storage, or network capacity. For example, by fairly and transparently allowing storage node operators to share in the revenue generated (i.e., by compensating supply), Storj was able to rapidly grow to a network of 150,000 nodes in 180 countries with over 150 PB of capacity, equivalent to several large data centers. Similarly, rather than spending hundreds of millions on traditional sales and marketing, we believe there is a way to fairly and transparently compensate those who bring demand to the network. We have programmatically designed our network so that open source companies whose projects send users our way are compensated in proportion to the storage and network usage they generate. We are actively working to encourage other decentralized networks to do the same, and we believe this is the future of open cloud computing.

This isn’t charity. Decentralized networks have strong economic incentives to compensate OSS as the primary driver of cloud demand. But, more importantly, we think that this can help drive a virtuous circle of investment, growth, monetization, and innovation. Done correctly, this will ensure that the best of times lie ahead!

Ben Golub is the former CEO of Docker and interim CEO at Storj Labs.

Watch the Open Source Summit keynote presentation from Ben Golub and Shawn Wilkinson to learn more about open source and the decentralized web.

Beyond Passwords: 2FA, U2F and Google Advanced Protection

Last week I wrote a couple of different pieces on passwords, firstly about why we’re going to be stuck with them for a long time yet and then secondly, about how we all bear some responsibility for making good password choices. A few people took some of the points I made in those posts as being contentious, although on reflection I suspect it was more a case of lamenting that we shouldn’t be in a position where we’re still dependent on passwords and people needing to understand good password management practices in order for them to work properly.

This week, I wanted to focus on going beyond passwords and talk about 2FA. Per the title, not just any old 2FA but U2F and in particular, Google’s Advanced Protection Program. This post will be partly about 2FA in general, but also specifically about Google’s program because of the masses of people dependent on them for Gmail. Your email address is the skeleton key to your life (not just “online” life) so protecting that is absolutely paramount.

Let’s start with defining some terms because they tend to be used a little interchangeably. Before I do that, a caveat: every single time I see discussion on what these terms mean, it descends into arguments about the true meaning and mechanics of each. Let’s not get bogged down in that and instead focus on the practical implications of each.

Read more at Troy Hunt

How to Install a Device Driver on Linux

…most default Linux drivers are open source and integrated into the system, which makes installing any drivers that are not included quite complicated, even though most hardware devices can be automatically detected. 

To learn more about how Linux drivers work, I recommend reading An Introduction to Device Drivers in the book Linux Device Drivers.

Two approaches to finding drivers

1. User interfaces

If you are new to Linux and coming from the Windows or MacOS world, you’ll be glad to know that Linux offers ways to see whether a driver is available through wizard-like programs. Ubuntu offers the Additional Drivers option. Other Linux distributions provide helper programs, like Package Manager for GNOME, that you can use to check for available drivers.

2. Command line

What if you can’t find a driver through your nice user interface application? Or you only have access through the shell with no graphic interface whatsoever? Maybe you’ve even decided to expand your skills by using a console. You have two options:
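
The full article walks through those options; as a loose sketch of where the command-line route usually starts (the module name below is just an example), a few commands will tell you what hardware you have and which kernel drivers are handling it:

$ lspci -k          # list PCI devices and the kernel driver bound to each one
$ lsusb             # the same idea for USB devices
$ lsmod             # show which kernel modules are currently loaded
$ modinfo i915      # details on a single module ("i915" is only an example)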

Read more at OpenSource.com

New Raspberry Pi A+ Board Shrinks RPi 3B+ Features to HAT Dimensions

A HAT-sized, $25, Raspberry Pi 3 Model A+ will soon arrive with the same 1.4GHz quad-A53 SoC, dual-band WiFi, and 40-pin GPIO of the RPi 3B+, but with only 512MB RAM, one USB, and no LAN.



As promised, Raspberry Pi Trading has revived its four-year-old, mini-size Raspberry Pi Model A+ SBC with a new Raspberry Pi 3 Model A+ model. Measuring the same 65 x 56mm as the earlier $20 RPi A+, the SBC will go on sale in early December for $25.

The SBC runs the same Linux distributions on the same 1.4GHz, quad-core Cortex-A53 Broadcom BCM2837B0 SoC as the 86 x 56mm, $35 Raspberry Pi 3 Model B+. It’s pricier than the more stripped-down, $5-and-up Raspberry Pi Zero family of products, which trim down to 65 x 30mm and use a single-core, 1GHz ARM11-based Broadcom BCM2835.

Read more at LinuxGizmos

Envoy and gRPC-Web: A Fresh New Alternative to REST

gRPC-Web is a JavaScript client library that enables web applications to interact with backend gRPC services using Envoy instead of a custom HTTP server as an intermediary. Last week, the gRPC team announced the GA release of gRPC-Web on the CNCF blog after nearly two years of active development.

Personally, I’d been intrigued by gRPC-Web since I first read about it in a blog post on the Improbable engineering blog. I’ve always loved gRPC’s performance, scalability, and IDL-driven approach to service interaction and have longed for a way to eliminate REST from the service path if possible. I’m excited that gRPC-Web is ready for prime time because I think it opens up some extremely promising horizons in web development.

The beauty of gRPC-Web, in my view, is that it enables you to create full end-to-end gRPC service architectures, from the web client all the way down. 

Read more at Medium — Envoy blog

5 Easy Tips for Linux Web Browser Security

If you use your Linux desktop and never open a web browser, you are a special kind of user. For most of us, however, a web browser has become one of the most-used digital tools on the planet. We work, we play, we get news, we interact, we bank… the number of things we do via a web browser far exceeds what we do in local applications. Because of that, we need to be cognizant of how we work with web browsers, and do so with a nod to security. Why? Because there will always be nefarious sites and people attempting to steal information. Considering the sensitive nature of the information we send through our web browsers, it should be obvious why security is of utmost importance.

So, what is a user to do? In this article, I’ll offer a few basic tips, for users of all sorts, to help decrease the chances that your data will end up in the hands of the wrong people. I will be demonstrating on the Firefox web browser, but many of these tips cross the application threshold and can be applied to any flavor of web browser.

1. Choose Your Browser Wisely

Although most of these tips apply to most browsers, it is imperative that you select your web browser wisely. One of the more important aspects of browser security is the frequency of updates. New issues are discovered quite frequently, and you need a web browser that is as up to date as possible. Here is how the major browsers ranked for updates released in 2017:

  1. Chrome released 8 updates (with Chromium following up with numerous security patches throughout the year).

  2. Firefox released 7 updates.

  3. Edge released 2 updates.

  4. Safari released 1 update (although Apple does release 5-6 security patches yearly).

But even if your browser of choice releases an update every month, if you (as a user) don’t upgrade, that update does you no good. This can be problematic with certain Linux distributions. Although many of the more popular flavors of Linux do a good job of keeping web browsers up to date, others do not. So, it’s crucial that you manually keep on top of browser updates. This might mean your distribution of choice doesn’t include the latest version of your web browser of choice in its standard repository. If that’s the case, you can always manually download the latest version of the browser from the developer’s download page and install from there.
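
As a rough sketch of that manual route for Firefox (the download.mozilla.org link below is Mozilla’s usual “latest release” endpoint; check the Firefox download page for the current link before relying on it):

wget -O firefox-latest.tar "https://download.mozilla.org/?product=firefox-latest&os=linux64&lang=en-US"
sudo tar -xf firefox-latest.tar -C /opt
/opt/firefox/firefox &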

If you like to live on the edge, you can always use a beta or daily build version of your browser. Do note that using a daily build or beta version comes with the possibility of unstable software. Say, however, you’re okay with using a daily build of Firefox on an Ubuntu-based distribution. To do that, add the necessary repository with the command:

sudo apt-add-repository ppa:ubuntu-mozilla-daily/ppa

Update apt and install the daily Firefox with the commands:

sudo apt-get update

sudo apt-get install firefox

What’s most important here is to never allow your browser to get far out of date. You want to have the most updated version possible on your desktop. Period. If you fail this one thing, you could be using a browser that is vulnerable to numerous issues.

2. Use A Private Window

Now that you have your browser updated, how do you best make use of it? If you happen to be of the really concerned type, you should consider always using a private window. Why? Private browser windows don’t retain your data: No passwords, no cookies, no cache, no history… nothing. The one caveat to browsing through a private window is that (as you probably expect), every time you go back to a web site, or use a service, you’ll have to re-type any credentials to log in. If you’re serious about browser security, never saving credentials should be your default behavior.
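
If Firefox is your browser, you can even make a private window your entry point straight from the shell; a small, Firefox-specific example:

firefox --private-window https://example.com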

This leads me to a reminder that everyone needs: Make your passwords strong! In fact, at this point in the game, everyone should be using a password manager to store very strong passwords. My password manager of choice is Universal Password Manager.

3. Protect Your Passwords

For some, having to retype those passwords every single time might be too much. So what do you do if you want to protect those passwords, while not having to type them constantly? If you use Firefox, there’s a built-in tool called Master Password. With this enabled, none of your browser’s saved passwords are accessible until you correctly type the master password. To set this up, do the following:

  1. Open Firefox.

  2. Click the menu button.

  3. Click Preferences.

  4. In the Preferences window, click Privacy & Security.

  5. In the resulting window, click the checkbox for Use a master password (Figure 1).

  6. When prompted, type and verify your new master password (Figure 2).

  7. Close and reopen Firefox.

Figure 1: The Master Password option in Firefox Preferences.

Figure 2: Setting the Master Password in Firefox.

4. Know your Extensions

There are plenty of privacy-focused extensions available for most browsers. What extensions you use will depend upon what you want to focus on. For myself, I choose the following extensions for Firefox:

  • Firefox Multi-Account Containers – Allows you to configure certain sites to open in a containerized tab.

  • Facebook Container – Always opens Facebook in a containerized tab (Firefox Multi-Account Containers is required for this).

  • Avast Online Security – Identifies and blocks known phishing sites and displays a website’s security rating (curated by the Avast community of over 400 million users).

  • Mining Blocker – Blocks CPU crypto miners before they are loaded.

  • PassFF – Integrates with pass (a UNIX password manager) to store credentials safely.

  • Privacy Badger – Automatically learns to block trackers.

  • uBlock Origin – Blocks trackers based on known lists.

Of course, you’ll find plenty more security-focused extensions for other browsers as well.

Not every web browser offers extensions. Some, such as Midori, offer a limited number of built-in plugins that can be enabled or disabled (Figure 3). However, you won’t find third-party plugins available for the majority of these lightweight browsers.

Figure 3: The Midori Browser plugins window.

5. Virtualize

For those who are concerned about exposing locally stored data to prying eyes, one option is to use a browser only on a virtual machine. To do this, install the likes of VirtualBox, install a Linux guest, and then run whatever browser you like in the virtual environment. If you then apply the above tips, your browsing experience will be considerably safer.
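
As a minimal sketch on an Ubuntu-based system (package names and versions vary by distribution, and VirtualBox also offers its own packages on its website):

sudo apt-get update
sudo apt-get install virtualbox

From there, create a Linux guest in the VirtualBox Manager, install your browser of choice inside it, and do your browsing from the guest.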

The Truth of the Matter

The truth is, if the machine you are working from is on a network, you’re never going to be 100% safe. However, if you use your web browser intelligently, you’ll get more bang for your security buck and be less prone to having data stolen. The silver lining with Linux is that the chances of getting malicious software installed on your machine are far lower than on other platforms. Just remember to always use the latest release of your browser, keep your operating system updated, and use caution with the sites you visit.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.