
The Ars Guide to Building a Linux Router from Scratch

After finally reaching the tipping point with off-the-shelf solutions that can't keep pace with ever-increasing broadband speeds, we recently took the plunge. Building a homebrew router turned out to be a better proposition than we could've ever imagined. In nearly every speed metric we analyzed, our little DIY kit outpaced off-the-shelf routers, whether they were of the $90 or $250 variety.

Naturally, many readers asked the obvious follow-up—"How exactly can we put that together?" Today it's time to finally pull back the curtain and offer that walkthrough. By taking a closer look at the actual build itself (hardware and software), the testing processes we used, and why we used them, hopefully any Ars reader of average technical ability will be able to put together their own DIY speed machine. And the good news? Everything is as open source as it gets—the equipment, the processes, and the setup. If you want the DIY router we used, you can absolutely have it. This guide will lead you through it, step by step.

What is a router, anyway?

At its most basic, a router is just a device that accepts packets on one interface and forwards them on to another interface that gets those packets closer to their eventual destination. 
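That forwarding job is something a stock Linux kernel can do out of the box. As a rough sketch of the idea (the interface names `eth0` for WAN and `eth1` for LAN are assumptions; the article's actual build differs in the details), a minimal NAT router boils down to a handful of commands run as root:

```shell
# Enable packet forwarding between interfaces (assumed: eth0 = WAN, eth1 = LAN)
sysctl -w net.ipv4.ip_forward=1

# Masquerade LAN traffic as it leaves the WAN interface
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Forward LAN -> WAN, and allow established/related replies back in
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```

A real build layers DHCP, DNS, and a firewall policy on top of this, but the core of "a device that forwards packets between interfaces" is just that small.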

Read more at Ars Technica

Finding What You’re Looking for on Linux

It isn’t hard to find what you’re looking for on a Linux system — a file or a command — but there are a lot of ways to go looking.

7 commands to find Linux files

find

The most obvious is undoubtedly the find command, and find has become easier to use than it was years ago. It used to require a starting location for your search, but these days, you can also use find with just a file name or regular expression if you're willing to confine your search to the current directory.

$ find e*
empty
examples.desktop

In this way, it works much like the ls command and isn’t doing much of a search.
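Given an explicit starting point and a few tests, find becomes a real search tool. A quick sketch (the demo directory and file names here are invented for illustration):

```shell
# Set up a few files to search through (illustrative names)
mkdir -p demo/sub
touch demo/a.txt demo/sub/b.log

find demo -name '*.txt'            # match by name pattern, recursively
find demo -type d                  # list directories only
find demo -name '*.log' -mmin -60  # .log files modified in the last hour
```

Tests can be combined freely, so queries like "empty files owned by me, deeper than two levels" are one-liners.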

Read more at NetworkWorld

From USENET to Facebook: The Second Time as Farce

Facebook repeats the pattern of USENET, this time as farce. As a no-holds-barred Wild West sort of social network, USENET was filled with everything we rightly complain about today. It was easy to troll and be abusive; all too many participants did it for fun. Most groups were eventually flooded by spam, long before spam became a problem for email. Much of that spam distributed pornography or pirated software (“warez”). You could certainly find newsgroups in which to express your inner neo-Nazi or white supremacist self. Fake news? We had that; we had malicious answers to technical questions that would get new users to trash their systems. And yes, there were bots; that technology isn’t as new as we’d like to think.

But there was a big divide on USENET between moderated and unmoderated newsgroups. Posts to moderated newsgroups had to be approved by a human moderator before they were pushed to the rest of the network. Moderated groups were much less prone to abuse. They weren’t immune, certainly, but moderated groups remained virtual places where discussion was mostly civilized, and where you could get questions answered. Unmoderated newsgroups were always spam-filled and frequently abusive, and the alt.* newsgroups, which could be created by anyone, for any reason, matched anything we have now for bad behavior.

So, the first thing we should learn from USENET is the importance of moderation. Fully human moderation at Facebook scale is impossible. With seven billion pieces of content shared per day, even a million moderators would each have to scan seven thousand posts: roughly 4 seconds per post over an eight-hour day. But we don't need to rely on human moderation. After USENET's decline, research showed that it was possible to classify users as newbies, helpers, leaders, trolls, or flamers, purely by their communication patterns—with only minimal help from the content.

Read more at O’Reilly

FOSSology Turns 10 – A Decade of Highlights

FOSSology turns ten this year. Far from winding down, the open source license compliance project is still going strong. Interest among its thriving community has not waned in the least, and regular contributions and cross-project contributors are steering it toward productive and meaningful iterations.

An example is the recent 3.2 release, offering significant improvements over previous versions, such as the import of SPDX files and word processor document output summarizing analysis information. Even so, the overall project goal remains the same: to make it easier to understand and comply with the licenses used in open source software.

There are thousands of licenses used in open source software these days, some differing by only a few words and others pertaining to entirely different use cases. Together, they present a bewildering quagmire of requirements that must be adhered to as set out in the appropriate license(s); misunderstanding or ignoring those terms can revert rights to a reserved status and bring distribution to a complete halt.

Read more at The Linux Foundation

Xen Project Contributor Spotlight: Stefano Stabellini

The Xen Project comprises a diverse set of member companies and contributors that are committed to the growth and success of the Xen Project Hypervisor. The Xen Project Hypervisor is a staple technology for server and cloud vendors, and is gaining traction in the embedded, security, and automotive spaces. This blog series highlights the companies contributing to the changes and growth being made to the Xen Project, and how the Xen Project technology bolsters their business.

Name: Stefano Stabellini
Title: Virtualization Architect
Company: Aporeto

When did you start contributing to the Xen Project?  

I started contributing to Xen Project in 2008. At that time, I was working for Citrix in the XenServer product team. I have been contributing every year since then; that makes it 10 years now!

What advice would you give someone considering contributing to the Xen Project?

Learning the intricate details of the Xen Project hypervisor can be daunting at first, but it is fun, and the community is great. 

Read more at Xen Project

Capital One: Open Source in a Regulated Environment

Most people know Capital One as one of the largest credit card companies in the U.S. Some also know that we're one of the nation's largest banks — number 8 in the U.S. by assets. But Capital One is also a technology-focused digital bank that is proud to be disrupting the financial services industry through our commitment to cutting-edge technologies and innovative digital products. Like all U.S. banks, Capital One operates in a highly regulated environment that prioritizes the protection of our consumers and their financial data. This sets us apart from many companies that don't operate under the same level of oversight and responsibility.

Our goal to reimagine banking is attracting amazing engineers who want to be part of the movement to reinvent the financial technology industry. During interviews, they are often surprised to find we want them to use open source projects and contribute back to the open source community. They are even more blown away that we sponsor open source projects built by our engineers.

People expect that kind of behavior at a start-up, not a top bank. There is nothing traditional about Capital One and our approach to technology.

Read more at The Linux Foundation

Speak at Open Source Summit NA: Submit Your Proposal by April 29

Submit a proposal to speak at Open Source Summit North America taking place August 29-31, in Vancouver, B.C., and share your knowledge and expertise with 2,000+ open source technologists and community members. Proposals are being accepted through 11:59pm PDT, Sunday, April 29.

This year’s tracks/content will cover the following areas:

  • Cloud Native Apps/Serverless/Microservices
  • Infrastructure & Automation (Cloud/Cloud Native/DevOps)
  • Linux Systems
  • Artificial Intelligence & Data Analytics, and more

Read more at The Linux Foundation

Tips for Troubleshooting DNS

Need relief for your DNS headaches? First, it helps to understand how the domain name system works under the hood.

DNS is the Internet’s phonebook. Whenever you type in or click a human-readable web link (such as hpe.com), your web browser calls on a domain name system (DNS) resolver to resolve its corresponding Internet Protocol (IP) address.

DNS is essential. Without it, there is no Internet. Period. End of statement.

DNS is not just for browsers, though. If it runs on the Internet—Slack, email, you name it—DNS works behind the scenes to make sure all the application requests hook up with the appropriate Internet resources. 
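A few commands make that resolution process visible, which is the natural first step in any DNS troubleshooting. A sketch of first-line checks (example.com stands in for whatever name you're debugging; dig comes from the dnsutils/bind-utils package and may not be installed everywhere):

```shell
# Resolve through the system's normal lookup path (hosts file, then DNS)
getent hosts localhost

# See which resolver the system is configured to use
cat /etc/resolv.conf

# If dig is available, query the configured resolver directly,
# then compare against a known public resolver to spot local problems
if command -v dig >/dev/null; then
    dig +short example.com
    dig +short example.com @1.1.1.1
fi
```

When the two dig answers disagree, the problem is likely in your local resolver or its cache rather than in the domain's authoritative records.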

Read more at HPE Insights

Linus Torvalds Says Linux Kernel v5.0 ‘Should Be Meaningless’

Following the release of Linux kernel 4.16, Linus Torvalds has said that the next kernel will be version 5.0. Or maybe it won’t, because version numbers are meaningless.

The announcement — of sorts — came in Torvalds’ message over the weekend about the first release candidate for version 4.17. He warns that it is not “shaping up to be a particularly big release” and questions whether it even matters what version number is slapped on the final release.

He says that “v5.0 will happen some day. And it should be meaningless. You have been warned.” 

Read more at BetaNews

5 Raspberry Pi Operating Systems That Aren’t Linux

Looking for a way to get the most out of your Raspberry Pi? Running a project that just needs something more? Odd as it may seem, Linux might be the problem, so why not consider a non-Linux operating system? Several have been released, or adapted, for use on the Raspberry Pi.

1. Plan 9

First released by Bell Labs in 1992 and later made open source, Plan 9 has a small footprint and is targeted at developers. Its lightweight presence makes it ideal for the Raspberry Pi.

A descendant of UNIX, Plan 9 is easy to install on the Pi, much like any other compatible operating system. Simply download the disk image, and write it to the microSD card.
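Writing the image is a one-liner with dd. A sketch of the flashing step (the image file name here is hypothetical, and card.img stands in for your card's real device node, e.g. /dev/sdX — double-check that device name before running this against real hardware, since dd will overwrite it):

```shell
# Stand-in for the downloaded Plan 9 image (hypothetical name; in practice
# you would download and unzip the real image first)
IMG=plan9-rpi.img
dd if=/dev/zero of="$IMG" bs=1M count=4 2>/dev/null

# Write the image to the card; replace card.img with /dev/sdX for a real microSD card
CARD=card.img
dd if="$IMG" of="$CARD" bs=4M conv=fsync 2>/dev/null

# Verify the write byte-for-byte
cmp "$IMG" "$CARD" && echo "write verified"
```

The conv=fsync flag makes dd flush to the device before exiting, which matters for removable media.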

Read more at MakeUseOf