
Building Linux Firewalls With Good Old Iptables: Part 2

When last we met we reviewed some iptables fundamentals. Now you’ll have two example firewalls to study, one for a single PC and one for a LAN. They are commented all to heck to explain what they’re doing.

This is for IPv4 only, so I’ll write up some example firewalls for IPv6 in a future installment.

I leave as your homework how to configure these to start at boot. There is enough variation in how startup services are managed in the various Linux distributions that it makes me tired to think about it, so it’s up to you to figure it out for your distro.
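For instance, on a systemd-based distro, one minimal approach is to wrap the script in a oneshot service unit. The script path and unit name below are placeholders, not something your distro ships, so treat this as a sketch rather than a recipe:

#!/bin/bash
# Hypothetical example: install a systemd unit that loads the firewall
# rules at boot. Assumes you saved one of the scripts below as
# /usr/local/sbin/firewall.sh and made it executable.
sudo tee /etc/systemd/system/iptables-firewall.service >/dev/null <<'EOF'
[Unit]
Description=Load iptables firewall rules
Wants=network-pre.target
Before=network-pre.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/firewall.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

# Reload systemd and enable the unit so it runs at every boot
sudo systemctl daemon-reload
sudo systemctl enable --now iptables-firewall.service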

Lone PC Firewall

Use the lone PC firewall on a laptop, desktop, or server. It filters incoming and outgoing packets only for the host it runs on.

#!/bin/bash

# iptables single-host firewall script

# Define your command variables
ipt="/sbin/iptables"

# Define multiple network interfaces
wifi="wlx9cefd5fe8f20"
eth0="enp0s25"

# Flush all rules and delete all chains
# because it is best to startup cleanly
$ipt -F
$ipt -X 
$ipt -t nat -F
$ipt -t nat -X
$ipt -t mangle -F 
$ipt -t mangle -X 

# Zero out all counters, again for 
# a clean start
$ipt -Z
$ipt -t nat -Z
$ipt -t mangle -Z

# Default policies: deny all incoming
# Unrestricted outgoing

$ipt -P INPUT DROP
$ipt -P FORWARD DROP
$ipt -P OUTPUT ACCEPT
$ipt -t nat -P OUTPUT ACCEPT 
$ipt -t nat -P PREROUTING ACCEPT 
$ipt -t nat -P POSTROUTING ACCEPT 
$ipt -t mangle -P PREROUTING ACCEPT 
$ipt -t mangle -P POSTROUTING ACCEPT

# Required for the loopback interface
$ipt -A INPUT -i lo -j ACCEPT

# Reject connection attempts not initiated from the host
$ipt -A INPUT -p tcp --syn -j DROP

# Allow return connections initiated from the host
$ipt -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# If the above rule does not work because you
# have an ancient iptables version (e.g. on a 
# hosting service), use this older variation
# instead (uncomment it and remove the conntrack
# rule above)
# $ipt -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Accept important ICMP packets. It is not a good
# idea to completely disable ping; networking
# depends on ping
$ipt -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
$ipt -A INPUT -p icmp --icmp-type time-exceeded -j ACCEPT
$ipt -A INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT

# The previous lines define a simple firewall
# that does not restrict outgoing traffic, and
# allows incoming traffic only for established sessions

# The following rules are optional to allow external access
# to services. Adjust port numbers as needed for your setup

# Use this rule when you accept incoming connections
# to services, such as SSH and HTTP
# This ensures that only SYN-flagged packets are
# allowed in
# Then delete '$ipt -A INPUT -p tcp --syn -j DROP'
$ipt -A INPUT -p tcp ! --syn -m state --state NEW -j DROP

# Allow logging in via SSH
$ipt -A INPUT -p tcp --dport 22 -j ACCEPT

# Restrict incoming SSH to a specific network interface
$ipt -A INPUT -i $eth0 -p tcp --dport 22 -j ACCEPT

# Restrict incoming SSH to the local network
$ipt -A INPUT -i $eth0 -p tcp -s 192.0.2.0/24 --dport 22 -j ACCEPT

# Allow external access to your HTTP server
# This allows access to three different ports, e.g. for
# testing. 
$ipt -A INPUT -p tcp -m multiport --dports 80,443,8080 -j ACCEPT

# Allow external access to your unencrypted mail services: SMTP (25),
# POP3 (110), and IMAP (143)
$ipt -A INPUT -p tcp -m multiport --dports 25,110,143 -j ACCEPT

# A local name server should be restricted to the local network
$ipt -A INPUT -p udp -m udp -s 192.0.2.0/24 --dport 53 -j ACCEPT
$ipt -A INPUT -p tcp -m tcp -s 192.0.2.0/24 --dport 53 -j ACCEPT

You see how it’s done; adapt these examples to open ports to your database server, rsync, and any other services you want available externally. One more useful restriction you can add is to limit the source port range. Client connections to your services should come from unprivileged source ports above 1024, so you can allow in only packets from the high-numbered ports with --sport 1024:65535, like this:

$ipt -A INPUT -i $eth0 -p tcp --dport 22 --sport 1024:65535 -j ACCEPT

LAN Internet Connection-Sharing Firewall

It is sad that IPv4 still dominates US networking, because it’s a big fat pain. We need NAT, network address translation, to move traffic between external publicly routable IP addresses and internal private (RFC 1918) addresses. This is an example of a simple Internet connection sharing firewall. It runs on a device sitting between the big bad Internet and your LAN, and it has two network interfaces: one connecting to the Internet and one connecting to your LAN switch.

#!/bin/bash

# iptables Internet-connection sharing 
# firewall script

# Define your command variables
ipt="/sbin/iptables"

# Define multiple network interfaces
wan="enp0s24"
lan="enp0s25"

# Flush all rules and delete all chains
# because it is best to startup cleanly
$ipt -F
$ipt -X 
$ipt -t nat -F
$ipt -t nat -X
$ipt -t mangle -F 
$ipt -t mangle -X 

# Zero out all counters, again for 
# a clean start
$ipt -Z
$ipt -t nat -Z
$ipt -t mangle -Z

# Default policies: deny all incoming
# Unrestricted outgoing

$ipt -P INPUT DROP
$ipt -P FORWARD DROP
$ipt -P OUTPUT ACCEPT
$ipt -t nat -P OUTPUT ACCEPT 
$ipt -t nat -P PREROUTING ACCEPT 
$ipt -t nat -P POSTROUTING ACCEPT 
$ipt -t mangle -P PREROUTING ACCEPT 
$ipt -t mangle -P POSTROUTING ACCEPT

# Required for the loopback interface
$ipt -A INPUT -i lo -j ACCEPT

# Set packet forwarding in the kernel
sysctl net.ipv4.ip_forward=1
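# Note: setting ip_forward with sysctl at runtime does not persist
# across reboots by itself; to make it permanent, also add
# net.ipv4.ip_forward=1 to /etc/sysctl.conf or a file under /etc/sysctl.d/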

# Enable IP masquerading on the WAN interface, which is necessary for NAT
$ipt -t nat -A POSTROUTING -o $wan -j MASQUERADE

# Enable unrestricted outgoing traffic, incoming
# is restricted to locally-initiated sessions only
$ipt -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
$ipt -A FORWARD -i $wan -o $lan -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
$ipt -A FORWARD -i $lan -o $wan -j ACCEPT

# Accept important ICMP messages
$ipt -A INPUT -p icmp --icmp-type echo-request  -j ACCEPT
$ipt -A INPUT -p icmp --icmp-type time-exceeded -j ACCEPT
$ipt -A INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT

Stopping the Firewall

Take the first sections of the example firewall scripts, the parts that flush the rules, delete the chains, and zero the counters, put them in a separate script, and set all of the default policies to ACCEPT. Then use this script to turn your iptables firewall “off”.
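Assembled from those same commands, a minimal “stop” script might look like this:

#!/bin/bash

# iptables "stop" script: flush all rules, delete all chains,
# zero the counters, and set permissive default policies

ipt="/sbin/iptables"

# Flush all rules and delete all chains
$ipt -F
$ipt -X
$ipt -t nat -F
$ipt -t nat -X
$ipt -t mangle -F
$ipt -t mangle -X

# Zero out all counters
$ipt -Z
$ipt -t nat -Z
$ipt -t mangle -Z

# Set all default policies to ACCEPT
$ipt -P INPUT ACCEPT
$ipt -P FORWARD ACCEPT
$ipt -P OUTPUT ACCEPT
$ipt -t nat -P OUTPUT ACCEPT
$ipt -t nat -P PREROUTING ACCEPT
$ipt -t nat -P POSTROUTING ACCEPT
$ipt -t mangle -P PREROUTING ACCEPT
$ipt -t mangle -P POSTROUTING ACCEPT

Running it leaves empty chains with ACCEPT policies everywhere, which is to say no filtering at all.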

The documentation on netfilter.org is ancient, so I recommend using man iptables. These examples are basic and should provide a good template for customizing your own firewalls. Allowing access to services on your LAN through your Internet firewall is a good subject for another day, because thanks to NAT it is a huge pain and needs a lot of rules and explaining.


Optimizing Apps for Wearables With Enlightenment Foundation Libraries

Developers looking to add GUIs to their embedded devices have a variety of open source and commercial options, with Qt generally leading the list. If you’re operating in severely constrained environments, however, especially for battery powered devices like wearables, the open source Enlightenment Foundation Libraries (EFL) should be given close consideration.

At the recent Embedded Linux Conference, Cedric Bail, a long-time contributor to the Enlightenment project who works on EFL integration with Tizen at Samsung Open Source Group, discussed some of the lessons learned in optimizing wearable apps for low battery, memory, and CPU usage. Bail summarized EFL and revealed an ongoing project to improve EFL’s scene graph. However, most of the lessons are relevant to anyone optimizing for wearables on any platform (see the ELC video below).

EFL has been under development for 10 years and was released in 2011. It was designed as a toolkit for the 20-year-old Enlightenment, the first window manager for GNOME. Today, however, it has evolved to feature its own rendering library and scene graph and can work with a variety of windowing environments. The EFL project is in the process of adding optimized Wayland support.

Samsung was an early supporter of EFL and is still its biggest champion. The CE giant now uses EFL in its Tizen-based Samsung Galaxy Gear smartwatches, as well as its smart TVs, and several other Tizen-based devices.

Like Enlightenment, EFL was designed from the start for embedded GUIs. The toolkit is licensed with a mix of LGPL 2.1 and BSD, and written in C, with bindings to other languages. EFL is primarily designed for Linux, although it’s lean enough to run on RTOS- and MCU-driven devices such as the Coyote navigation device.

“EFL is optimized for reducing CPU, GPU, memory, and battery usage,” said Bail. “It takes up as little as 8MB, including all widgets. Some have pushed it lower than that, but it’s less functional. Arch Linux can run with EFL on 48MB RAM at 300MHz, with a 1024×768 screen in full 32-bit display, and without a GPU. EFL does better than Android on battery consumption. That’s why Samsung is using Tizen on its smartwatches.”

Despite its minimalist nature, EFL supports accessibility (ATSPI) and international language requirements. It’s also “fully themable,” and can “change the scale of the UI based on the screen and input size,” said Bail. “We also account for the DPI and reading distance of the screen for better readability.”

Bail disagrees with the notion that improvements in power/performance will soon make minimalist graphics toolkits obsolete. “Moore’s Law is slowing down, and it doesn’t apply to battery life and memory bandwidth,” said Bail.

Optimizing the UI is one of many ways to reduce your embedded footprint. In recent months, we’ve looked at ELC presentations about optimizations ranging from shrinking the Linux kernel and file system to streamlining WiFi network stacks to trimming energy consumption on an oceanographic monitoring device.

The UI is typically the most resource-intensive part of an application. “Most applications don’t do much,” said Bail. “They fetch stuff from the network, a database, and then change the UI, which does all the CPU and GPU intensive tasks. So optimizing the UI saves a lot of energy.”

The biggest energy savings can be found in the UI design itself. “If your designer gives you a 20-layer UI with all these bitmaps and transparency, there is very little our toolkit can do to reduce energy,” said Bail.

The most power-efficient designs stick to basic rectangles, lines, and vertical gradients, and avoid using the entire screen. “If you have full-screen animations where things slide from left to right or up and down, you can’t do partial updates, so it uses more energy,” said Bail. If you’re targeting AMOLED screens, consider using a black background, as the Enlightenment project does on its own website. “Reddit running in black consumes 41 percent less energy on an Android AMOLED phone,” Bail added.

One other trick used on the Gear smartwatches is an integrated frame buffer within the display. “The system can completely suspend the SoC, while the display refreshes itself, and save a lot of battery life, which is fine if you’re just displaying a watch face.”

Memory optimization is another area where you can substantially reduce consumption. “All rendering operations are constrained by memory bandwidth,” said Bail. “And if you want to run a web browser, memory declines quickly.”

Memory consumption is primarily driven by screen size, which is not usually an issue for a smartwatch. There are other ways to optimize for memory usage, however, such as avoiding true multitasking and using CPU cache.

“Accessing main memory uses more energy than accessing the CPU cache,” said Bail. “You can optimize by improving cache locality and doing linear instead of random access. You can look for cache locality with Cachegrind and visualize it with Kcachegrind, and you can hunt for leaks and overuse with tools like massif and massif-visualizer.”
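For reference, typical invocations of those tools look something like this; ./myapp is a placeholder for your own binary, and the wildcards assume a single profiling run in the current directory:

# Profile cache behavior with Cachegrind, then browse the annotated
# results in KCachegrind (Valgrind writes a cachegrind.out.<pid> file)
valgrind --tool=cachegrind ./myapp
kcachegrind cachegrind.out.*

# Profile heap usage over time with massif, then visualize it
valgrind --tool=massif ./myapp
massif-visualizer massif.out.*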

Power reduction in wearables can produce all-day battery life, as well as reduce dissipated heat for greater comfort, said Bail. Lower battery consumption also provides “more freedom for designers, who may want to go with a thinner device using a smaller battery.”

Wearables developers should also optimize for speed, which means “doing things more efficiently and avoiding unnecessary complications,” said Bail. This also applies to the GPU, where you can skip redrawing the screen when nothing has changed, or do partial updates that reuse information from a previous frame. You should also trigger animations “only at a speed the hardware is capable of,” he added.

The EFL project uses the Raspberry Pi as a target for speed optimization, and runs tests with Kcachegrind. For EFL, Bail recommends that Pi users adopt the more up-to-date Arch Linux over Raspbian.

Network optimization is also important. “When you send data, you are more likely to lose packets, which means you have to retransmit it, which takes energy,” said Bail. “You want to send as little data as possible over the network and download only what is needed.”

Optimizing downloads is a bit tricky because “this is usually a prefetch, and you don’t know if the user will need all of the download, so you may end up over-downloading something,” said Bail. “You should group your downloads together, and then switch to full network idle for as long as possible.” Otherwise, energy is consumed by the wireless stack switching between energy states.

When optimizing specifically for battery use, developers rely on the Linux kernel. The kernel chooses the clock, voltage, and number of active cores “by trying to figure out what you are going to do even though it has no idea what you are doing,” said Bail. “For years, the kernel failed at this,” he added.

The problem comes from the kernel scheduler basing its decisions solely on past process activity, whereas real workloads can be much more dynamic. Among other problems, the scheduler “forgets everything as soon as it migrates to another CPU core.” There are also complications such as the CPU frequency driver and CPU idle driver both looking at system load, but without coordinating their activities.

In the past, developers tried to overcome this problem with an energy-aware userspace daemon equipped with a hard-coded list of applications and their corresponding behavior. ARM and Linaro are working on a more dynamically aware solution called SCHED_DEADLINE as part of their Energy Aware Scheduling (EAS) project. The technology, which was covered in a separate ELC 2017 presentation by ARM’s Juri Lelli, promises to link CPU frequency and idle to the scheduler, while also retaining more information about past loads.

“SCHED_DEADLINE will let interactive tasks become properly scheduled by the system,” said Bail. “The userspace will break things apart in a thread that is dedicated to specific things.” With the new infrastructure, interactive tasks will be able to change behavior very quickly, even during the 16ms rendering of a frame.

With the increasing use of multi-core processors, the original EFL scene graph switched a few years ago to a dual-thread version. Now the project is developing a new version that will work in harmony with SCHED_DEADLINE.

The scene graph is “a bookkeeping of primitive graphical objects, of everything you draw on the screen,” said Bail. “It has a general view of the application from inside the toolkit so you can do global optimization.”

With the current dual-thread scene graph, “the kernel may have a hard time figuring out what is going on in the main loop,” explained Bail. “Our new version will help this by grouping computation — such as CPU-intensive tasks like generating spanlines for shapes and decompressing images — into a specific thread, and grouping all the memory-bound computations in their own thread.”

The new scene graph is not without its drawbacks, however. “The main price we pay is increased memory usage because we need more threads,” said Bail. “Every thread has its own stack, which causes increasing complexity and new bugs. We are trying to patch all these risky things inside the toolkit.”

You can watch the complete video below:


CPU Utilization Is Wrong

The metric we all use for CPU utilization is deeply misleading, and getting worse every year. What is CPU utilization? How busy your processors are? No, that’s not what it measures. Yes, I’m talking about the “%CPU” metric used everywhere, by everyone. In every performance monitoring product. In top(1).

What you may think 90% CPU utilization means: the processor is busy doing work.

What it might really mean: the processor is mostly stalled, waiting on memory, and only a fraction of that “busy” time is spent actually executing instructions.
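One rough way to check this on Linux, assuming perf is available, is to look at instructions per cycle (IPC) instead of %CPU alone:

# Count cycles and instructions system-wide for 10 seconds; a low
# "insn per cycle" figure suggests the "utilized" CPUs are mostly
# stalled (for example, waiting on memory) rather than doing work
perf stat -a -- sleep 10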

Read more at Brendan Gregg’s blog

This Mega-Sensor Makes the Whole Room Smart

Gierad Laput wants to make homes smarter without forcing you to buy a bunch of sensor-laden, Internet-connected appliances. Instead, he’s come up with a way to combine a slew of sensors into a device about the size of a Saltine that plugs into a wall outlet and can monitor many things in the room, from a tea kettle to a paper towel dispenser.

Laput, a graduate student studying computer-human interaction at Carnegie Mellon University, built the gadget as part of a project he calls Synthetic Sensors. He says it could be used to do things like figure out how many paper towels you’ve got left, detect when someone enters or leaves a building, or keep an eye on an elderly family member (by tracking the person’s typical routine via appliances, for example). It’s being shown off this week in Denver at the CHI computer-human interaction conference.

Read more at Technology Review

Keeping the Node.js Core Small

Features are wonderful. When Node.js adds a new API, we can instantly do more with it. Wouldn’t a larger standard library be more useful for developers? Who could possibly object to Node.js getting better? And who, even more strangely, would actually remove APIs, making Node.js objectively worse?  Turns out, lots of people…

A recent proposal to get HTTP proxy environment variable support into Node.js, https://github.com/nodejs/node/issues/8381, got a generally hesitant response from Node.js maintainers, and generally enthusiastic response from Node.js users.

What’s going on here? Why the disconnect? Is this an ideological stance?

No, for those who support a small core, it’s a matter of pragmatism. What is happening is that the instantaneous delight of new APIs in Node.js fades with time, even as the problems of maintaining those APIs grow. The longer you work with Node.js core, supporting, bug fixing, and documenting the existing APIs, the more you wish that Node.js had been more conservative in accepting new APIs in the past, and that those APIs had been implemented as npm modules that could be independently maintained and versioned.

Read more at Medium

Mechanical Keyboards for Programmers and Gamers

Input Club’s mechanical keyboards aren’t just about producing exceptional products. They’re also proof that open source can solve any problem.

Open source software already powers most of the world, partially because it is free and mostly because it is so accessible. Under an open source system, the flaws and imperfections in every product can be observed, tracked, and fixed, much like the Japanese philosophy of “continuous improvement” known as kaizen, which is applied to every aspect of a process. By following these principles, we believe that the open hardware movement is poised to fundamentally change the global product economy.

At Input Club, we design and produce mechanical keyboards using this same philosophy and workflow, similar to how a person might develop a website or application. The design files for our keyboard frames and circuit boards are available via GitHub.

Read more at OpenSource.com

Google’s Fuzz Bot Exposes over 1,000 Open-Source Bugs

Google’s OSS-Fuzz bug-hunting robot has been hard at work, and in recent months, over 1,000 bugs have been exposed.

According to Chrome Security engineers Oliver Chang and Abhishek Arya, software engineer Kostya Serebryany, and Google Security program manager Josh Armour, the OSS-Fuzz bot has been scouring the web over the past five months in the pursuit of security vulnerabilities which can be exploited.

The OSS-Fuzz bot uses a technique called fuzzing to find bugs.

Read more at ZDNet

The Beauty of Links on Unix Servers

Symbolic and hard links provide a way to avoid duplicating data on Unix/Linux systems, but the uses and restrictions vary depending on which kind of link you choose to use. Let’s look at how links can be most useful, how you can find and identify them, and what you need to consider when setting them up.

Hard vs soft links

Don’t let the names fool you. It’s not an issue of malleability, but a very big difference in how each type of link is implemented in the file system. A soft or “symbolic” link is simply a file that points to another file. If you look at a symbolic link using the ls command, you can easily tell that it’s a symbolic link.
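For example, here is a quick sketch (the target path is arbitrary):

# Create a symbolic link and list it; the listing starts with "l"
# for the file type and shows the target after "->"
ln -s /var/log/syslog mylog
ls -l mylog    # lrwxrwxrwx ... mylog -> /var/log/syslog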

Read more at ComputerWorld

The Next Challenge for Open Source: Federated Rich Collaboration

When the file sync and share movement was started by Dropbox over a decade ago, and later joined by Google Drive, it became popular very fast. Having your data available, synced or via the web interface, with no chance of forgetting to bring that important document or needing a USB stick, was a huge step forward. But more than having your own data at hand, it enabled sharing and collaboration. No longer emailing documents, no longer being unsure if you got feedback on the latest version of your draft, or fixing errors that were already fixed before. Usage grew, not only among home users but also among business users, who often used the public cloud without the IT department’s approval.

Problems would creep up quickly, too. Some high-profile data leaks showed what a big target the public clouds were. Having your data co-mingled with that of thousands of other home and business users means little control over it and exacerbates risks. The strong European privacy protection rules increased the cost of breaches and thus created awareness in Europe, while businesses in Asian countries, especially in the tech sector, disliked the risks with regard to trade secrets. Although the United States has stronger intellectual property protections and less emphasis on privacy, control over data is becoming a concern there as well.

Open source, self-hosted solutions providing file sync and share began to be used by home, business and government users as a way to achieve this higher degree of privacy, security and control. I was at the center of these developments, having started the most popular open source file sync and share project, a vision I continue to push forward together with the early core contributors and the wider community at Nextcloud.

Open Source and Self Hosting
Hosting their own, open source solution gives businesses the typical benefits of open source:
* Customer driven development
* Long-term viability

Customer driven development
Open Source brings in contributions from a wide range of sources, advancing the interests of customers while accelerating innovation. The transparent development and its strict peer review process also ensure security and accountability, which are crucial for a component on which companies rely to protect their proprietary knowledge, critical customer data and more. The stewardship of the Nextcloud business, collaborating with a variety of partners and independent contributors, gives customers the peace of mind that they have access to a solid, enterprise-ready product.

Long-term viability
Where choosing proprietary solutions means betting on a single horse, open source allows customers to benefit from the race regardless of the outcome. Nextcloud features a large and quickly growing, healthy ecosystem with well over 300 contributors in the last 9 months and many dozens of third-party apps providing additional functionality. One can find hundreds of videos and blogs on the web talking about how to implement and optimize Nextcloud installations on various infrastructure setups and there are well over 6K people on our forums asking and answering questions. Besides us and our partners, many independent consultants and over 40 hosting providers all offer support and maintenance of Nextcloud systems. There is a healthy range of choices!

Forward looking
Having your data in a secure place, in alignment with IT policy, is thus possible and thousands of businesses already use our technology to stay in control over their data. Now the question becomes: What comes next?

The Internet and the world wide web were originally designed as distributed and federated networks. The centralized networks have lately enabled users to work together, to collaborate and share more easily. The disconnected, private networks you’d create with self-hosted technologies seem unable to match that. This is where Nextcloud’s Federated Cloud Sharing technology comes in. Developed by Bjoern Schliessle and me some years ago, it enables users on one cloud server to transparently share data with users on another. To share a file to a user on another server, one can simply type in the ‘Federated Cloud ID’, a unique ID similar to an email address. The recipient will be notified and the two servers (if configured to do so) will even exchange address books to, in the future, auto-complete user names for their respective users. In our latest release, we improved integration to the point where users are even notified of any changes and access done by users on the other server, completing the seamless integration experience.

Next level of collaboration
This last feature is what efficient collaboration requires: context! People don’t only want files from other people popping up on their computer — or to have them changed in the background by other users.

Why do I have access to this file or folder? Who shared it with me and what are the recent changes? Maybe you want a way to directly chat with the person who changed the file? Maybe leave a comment or maybe directly call the person? And if you are discussing possible changes on a document, why not edit it together collaboratively? Maybe you’d like integration with your calendar to arrange a time to work on the document? Or maybe integration into your email to access the latest version you got by email. Maybe having a video call while working on that presentation deck together? Having a shared to-do list with someone who isn’t even working in the same organization as you?

Our latest release, Nextcloud 12, introduces a wide range of collaboration features and capabilities, functioning in a federated, decentralized way. Users can call each other through a secure, peer-to-peer audio/video conferencing technology; they can comment, edit documents in real time, and get push notifications when anything of note happens.

At the same time, their respective IT teams continue to be able to ensure company policies around security and privacy are fully enforced.

The open source community is in a unique position to take the lead in this space because it is in our DNA. Open Source IS built in a collaborative way, using the internet, chat, version control, video calling, document sharing and so on. Basically all big open source communities are distributed over different continents, while working together in a very efficient way, creating great results. The open source movement is the child of the Internet, using it as a collaboration tool. My own open source company, Nextcloud GmbH, has almost all its employees work from home or co-working places.

So we can and do build privacy aware and secure software for rich collaboration. Alternatives to the proprietary competitors. And successfully so!

If you want to join me, get involved at Nextcloud.

SNAS.io, Formerly OpenBMP Project, Joins The Linux Foundation’s Open Source Networking Umbrella

By Arpit Joshipura, General Manager, Networking and Orchestration, The Linux Foundation

We are excited to announce that SNAS.io, a project that provides network routing topologies for software-defined applications, is joining The Linux Foundation’s Networking and Orchestration umbrella. SNAS.io tackles the challenging problem of tracking and analyzing network routing topology data in real time for those who use BGP as a control protocol: internet service providers, large enterprises, and enterprise data center networks using EVPN.

The topology data collected stems from both layer 3 and layer 2 of the network, and includes IP information, quality of service requests, and physical and device specifics. Collecting and analyzing this data in real time allows DevOps, NetOps, and network application developers who design and run networks to work with topology data in big volumes efficiently and to better automate the management of their infrastructure.

Contributors to the project include Cisco, Internet Initiative of Japan (IIJ), Liberty Global, pmacct, RouteViews, and the University of California, San Diego.

Originally called OpenBMP, the project focused on providing a BGP monitoring protocol collector. Since it launched two years ago, it has expanded to include other software components to make real-time streaming of millions of routing objects a viable solution. The name change helps reflect the project’s growing scope.

The SNAS.io collector not only streams topology data, it also parses it, separating the networking protocol headers and then organizing the data based on these headers. Parsed data is then sent to the high-performance message bus Kafka in a well-documented and customizable topic structure.

SNAS.io comes with an application that stores the data in a MySQL database. Other users of SNAS.io can access the data either at the message bus layer using Kafka APIs or through the project’s RESTful database API service.
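As a hypothetical sketch of the first option, one could tail a topic with Kafka’s stock console consumer; the broker address and topic name here are placeholders, not the project’s documented topic structure:

# Hypothetical example: stream parsed routing data from the SNAS.io
# Kafka message bus; set BROKER and TOPIC to match your deployment
BROKER="localhost:9092"
TOPIC="example.topic"
kafka-console-consumer.sh --bootstrap-server "$BROKER" --topic "$TOPIC" --from-beginning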

The SNAS.io Project is complementary to several Linux Foundation projects, including PNDA and FD.io, and is a part of the next phase of networking growth: the automation of networking infrastructure made possible through open source collaboration.

Industry Support for the SNAS.io Project and Its Use Cases

Cisco

“SNAS.io addresses the network operational problem of real-time analytics of the routing topology and load on the network. Any NetDev or operator working to understand the dynamics of the topology in any IP network can benefit from SNAS.io’s capability to access real-time routing topology and streaming analytics,” said David Ward, SVP, CTO of Engineering and Chief Architect, Cisco. “There is a lot of potential linking SNAS.io and other Linux Foundation projects such as PNDA, FD.io, Cloud Foundry, OPNFV, ODL and ONAP that we are integrating to evolve open networking. We look forward to working with The Linux Foundation and the NetDev community to deploy and extend SNAS.io.”

Internet Initiative Japan (IIJ)

“If successful, the SNAS.io Project will provide a great tool for both operators and researchers,” said Randy Bush, Research Fellow, Internet Initiative Japan. “It is starting with usable visualization tools, which should accelerate adoption and make more of the Internet’s hidden data accessible.”

Liberty Global

“The SNAS.io Project’s technology provides our huge organization with an accurate network topology,” said Nikos Skalis, Network Automation Engineer, Liberty Global. “Together with its BGP forensics and analytics, it suited well to our toolchain.”

pmacct

“The BGP protocol is one of the very few protocols running on the Internet that has a standardized, clean and separate monitoring plane, BMP,” said Paolo Lucente, Founder and Author of the pmacct project. “The SNAS.io Project is key in providing the community a much needed full-stack solution for collecting, storing, distributing and visualizing BMP data, and more.”

RouteViews

“The SNAS.io Project greatly enhances the set of tools that are available for monitoring Internet routing,” said John Kemp, Network Engineer, RouteViews. “SNAS.io supports the use of the IETF BGP Monitoring Protocol on Internet routers. Using these tools, Internet Service Providers and university researchers can monitor routing updates in near real-time. This is a monitoring capability that is long overdue, and should see wide adoption throughout these communities.”

University of California, San Diego

“The Border Gateway Protocol (BGP) is the backbone of the Internet. A protocol for efficient and flexible monitoring of BGP sessions has been long awaited and was finally standardized by the IETF last year as the BGP Monitoring Protocol (BMP). The SNAS.io Project makes it possible to leverage this new capability, already implemented in routers from many vendors, by providing efficient and easy ways to collect BGP messages, monitor topology changes, track convergence times, etc.,” said Alberto Dainotti, Research Scientist, Center for Applied Internet Data Analysis, University of California, San Diego. “SNAS.io will not only have a large impact in network management and engineering, but by multiplying opportunities to observe BGP phenomena and collecting empirical data, it has already demonstrated its utility to science and education.”

You can learn more about the project and how you can get involved at https://www.SNAS.io.