Facebook says its Terragraph system could revolutionize service provider economics, insisting the cost point it is targeting for the wireless technology is “significantly” less than that of rival connectivity solutions.
Announced last month, Terragraph uses unlicensed spectrum in the 60GHz range to provide high-speed connectivity in densely populated communities.
The social networking giant says it plans to make Terragraph available to service providers through its recently launched Telecom Infra Project (TIP), which is developing open source network technologies in partnership with various telecom operators and vendors.
Facebook has been testing Terragraph at its campus in Menlo Park and has also revealed plans to run trials in the downtown part of nearby San Jose.
Blockchain technology offers many different benefits to enterprise developers — but there’s no cross-industry open standard for how to develop it.
That makes it difficult for vendors and CIO customers to place their bets and begin building it into their technology architecture. Hyperledger, a Linux Foundation project to produce a standard open source blockchain, wants to solve that problem, and it just got an executive director, Brian Behlendorf, to help it on its way. He co-founded the Apache Software Foundation, previously served on the boards of the Mozilla Foundation and the Electronic Frontier Foundation, and was a managing director at tech VC firm Mithril Capital Management.
The blockchain community is still young and fragmented. Many organizations are trying to build things on top of the original bitcoin blockchain and extend its functionality, capitalizing on the fact that the bitcoin network is secured by its massive base of users.
AT&T has some grand plans for its growing SDN-enabled network, which will see the provider launch a new service in multiple countries simultaneously.
Ralph de la Vega, vice chairman of AT&T and CEO of AT&T Business Solutions and AT&T International, told investors during the 44th Annual JP Morgan Global Technology, Media and Telecom Conference that while he could not name the service yet, it’s something that the company could not have achieved on traditional hardware architectures.
“I am not going to give you the actual name of the service, but I will tell you later this year, we’re going to launch a service on that software defined network that is going to hit 63 countries at the same time, on the same day,” de la Vega said. “You show me a way to do that with a physical network and I would say ‘the chances of making that happen are very small.'”
It’s likely the new SDN-enabled service is related to its business services line, something that AT&T has been touting for its Ethernet and managed security offerings.
In this keynote, Luciano Resende, Architect at the Spark Technology Center at IBM, will showcase open source analytics platforms. Luciano will also discuss how they are being leveraged by different organizations to upend their competition, as well as to enable new use cases.
OpenDaylight (ODL) is an open source SDN platform designed to serve a broad set of use cases and end user types, mainly service providers, large enterprises, and academic institutions. Uniquely, ODL provides a plethora of network services across all domains: data center, WAN, NREN, metro, and access.
With ODL, controller functionality is literally in the hands of application designers, as opposed to being hard-wired (and thus restricted) by controller designers. This unique flexibility is due to an evolved model-driven service abstraction layer (MD-SAL) that allows for the easy addition of new functionality in the form of southbound plugins, network services, and applications.
In March of this year, ODL published a Performance Report based on the newly released Beryllium, focusing on real-world, end-to-end application performance. This report generated approximately 2,000 downloads, providing many prospective (and even existing) users with key data points for a comprehensive understanding of how OpenDaylight (and potentially other SDN technologies) can be leveraged in the world’s largest networks.
Why the focus on real-world, end-to-end application metrics? ODL has well over 100 deployments, detailed in the user stories of many global service providers, including Orange, China Mobile, AT&T, T-Mobile, Comcast, KT Corporation, Telefonica, TeliaSonera, China Telecom, Deutsche Telekom, and Globe Telecom. As these key end users and the broad ecosystem of developers continue to use ODL to software-control their networks, they need to know what to expect not only in terms of ODL functionality but also the application performance characteristics of that functionality in a live network deployment.
Given all the possibilities in testing a platform as broad as ODL, savvy readers requested additional context around some of our results. For instance, developers and end users wondered about the differences that might be expected in the latest (SR1) release of Beryllium, as well as other key factors that might affect performance. Some were curious about the differing benefits of testing in single-instance versus clustered configurations (both of which are supported in production ODL deployments), and our reasons for using multiple methods for accessing controller functionality (i.e., Java versus REST).
Accordingly, we just updated the report to give a more comprehensive picture of ODL’s performance when programming network devices using the industry’s most complete set of southbound protocols, including OpenFlow, NETCONF, BGP, PCEP, and OVSDB. As before, we also provided reference numbers for other controllers (ONOS and Floodlight) for the southbound protocols they support (principally OpenFlow).
ODL works closely with OPNFV in support of an open Controller Performance Testing project (CPerf), which will provide easily referenceable, application-relevant benchmarks for the entire networking industry to make tomorrow’s networks more agile, available, secure, and higher performing. As such, we strongly encourage (and have already invited) developers and users from all open SDN controllers to participate in CPerf.
To discuss the results and other topics of interest in the open source controller world, we sat down with OpenDaylight developers and Performance Report team members Luis Gomez, Marcus Williams, and Daniel Farrell, who offer their insights into the report and its impact on the SDN ecosystem.
Marcus Williams is a Network Software Engineer on Intel’s SDN Controller Team.
Please give us some background on who you are, where you work, and which open source networking projects you work on.
Luis Gomez, Principal Software Test Engineer at Brocade. I am a committer in the Integration/Test project and a member of the OpenDaylight Technical Steering Committee (TSC) and the Board of Directors.
Marcus Williams. I am a Network Software Engineer on Intel’s SDN Controller Team. I work on OVSDB and Integration/Test projects in OpenDaylight and I am a committer in the OPNFV Controller Performance Testing (CPerf) project.
Daniel Farrell, Software Engineer on Red Hat’s SDN Team. I’m the Project Technical Lead of OPNFV CPerf (SDN Controller Performance Testing) and OpenDaylight Integration/Packaging (delivery pipelines, integration into OPNFV). I’m also a committer to OpenDaylight Integration/Test and on OpenDaylight’s TSC.
What were the key findings from the Performance Report?
Luis: One key finding was that OpenDaylight performed similarly to other well-known open source controllers (e.g., ONOS, Floodlight) under the same test conditions. Another was the effect of batching and parallelism on system throughput: batching multiple flow add/modify/delete operations into a single REST request on the northbound API increased the flow programming rate by nearly an order of magnitude (8x). Batching benefits also extend to southbound protocols; for example, the L2/L3 FIB programming rate using NETCONF batch operations was nearly an order of magnitude (8x) faster than using single OpenFlow operations. On the other hand, adding more devices on the southbound side did not behave as expected in some tests (such as OpenFlow), where the performance figure did not change much with the number of switches. This is because we used Mininet/OVS OpenFlow agents on fast machines with plenty of memory and CPU, as opposed to hardware switches with much less powerful CPUs; a few of these OVS agents are normally enough to stress the controller.
Luis Gomez is a Principal Software Test Engineer at Brocade.
Daniel: We entertained some interesting discussions around the use of REST as opposed to a native Java API to program the controller, which led us to add context around this testing decision in the second version of the report. Virtually all end users employ REST for its ease of deployment and maintenance. Given the more direct connection of a Java API, it naturally yields numbers that are higher, in OpenDaylight or any other controller, by multiple orders of magnitude (literally hundreds to thousands of times faster to add flows internally in the controller). While such metrics may be useful to developers enhancing the controller, they don’t represent end-to-end system performance. Therefore, understanding the performance profile of using real or simulated devices attached to the controller, or of using a REST interface, informs end users as to which use cases are most suitable for the controller. In our southbound tests, we do use a Java API, but the performance is measured at the device rather than internally in the controller.
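As a rough sketch of what that northbound batching looks like in practice, the following compares programming flows one REST call at a time with pushing a whole table of flows in a single request. The RESTCONF paths, credentials, and flow bodies shown are simplified placeholders; the exact URLs and JSON structure depend on the ODL release and the OpenFlow plugin model, so treat this as an outline rather than a drop-in recipe.

# One flow per request: one HTTP round trip for every flow added
curl -u admin:admin -X PUT -H "Content-Type: application/json" \
  -d '{"flow": [{"id": "1", "table_id": 0, "priority": 2}]}' \
  http://localhost:8181/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/0/flow/1

# Batched: many flows travel to the table in a single request, which is
# where the roughly 8x northbound speedup discussed above comes from
curl -u admin:admin -X PUT -H "Content-Type: application/json" \
  -d '{"flow-node-inventory:table": [{"id": 0, "flow": [{"id": "1", "table_id": 0, "priority": 2}, {"id": "2", "table_id": 0, "priority": 2}]}]}' \
  http://localhost:8181/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/0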
What prompted you to create these tests?
Luis: As SDN has gained momentum and increased use among telcos, enterprises, and others, we were often asked how OpenDaylight would perform in different scenarios, so we wanted to create tests that show OpenDaylight’s performance in common use cases (e.g., device programming, control channel failures, etc.). It is important to note that every test is fully described and reproducible, so people can validate our numbers for themselves in their own environment.
Marcus: A broad set of people across the community created these tests to show the usability of OpenDaylight. Future adoption of SDN depends largely upon having a usable solution. We created this set of tests to help tell the story of OpenDaylight usability, by underscoring its ability to perform and scale in many common use cases. Since we wanted the results to be user-facing, we did the work of nicely presenting them in a white paper instead of our usual developer-oriented wikis.
Were there any major surprises?
Luis: We learned a lot about our own controller by doing this exercise. For example, we did not get comparable programming performance numbers until we disabled datastore persistence (i.e., writing flow configuration to hard disk) or installed faster solid-state drives (SSDs) on which to persist the database. We also noticed that none of the other controllers we evaluated persisted the configuration by default, so we disabled this feature in OpenDaylight in order to run a commensurate test.
Marcus: We found out quickly that it is challenging to synchronize procedures and environment setups across teams and continents. We saw widely differing numbers depending on disks (owing to the datastore persistence issue mentioned by Luis) and environmental configuration. For example, using the command line tool tuned-adm, we could configure our systems to use a throughput-performance profile. This profile turns off power savings in favor of performance and resulted in around 15% improvement in performance in OpenDaylight OpenFlow tests.
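For reference, switching profiles is a one-line operation with tuned-adm, assuming the tuned package is installed on the test hosts:

# Show the currently active profile
tuned-adm active
# Switch to the throughput-oriented profile used in the tests above
tuned-adm profile throughput-performance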
Daniel Farrell is a Software Engineer on Red Hat’s SDN Team.
Daniel: I was surprised by how batching Northbound API requests (which Luis mentioned earlier) improved performance across the board (OpenFlow 8x, NETCONF 8x, BGP 10x). Since ODL at that time was the only SDN controller to support REST API batching (ONOS subsequently added similar functionality), we were pleasantly surprised at the dramatic impact on performance. I was also surprised how consistently and quickly the new design of ODL’s OpenFlow plugin collects operational flow information relative to the prior ODL design or other controllers.
What do end users look for in network performance tests? How do you decide which tests to run?
Luis: There are many, many tests one can run; we have focused on tests that we see as more relevant for the user, and that represent real network deployment scenarios being contemplated at this stage of SDN’s maturity. It is very important to look at end-to-end scenarios where the controller is just a piece of the overall solution. For this report, we tested single plugins and single controller instances, but future versions will include multi-plugin and cluster scenarios. Iterating toward these fuller implementations has a number of advantages. For example, a single controller instance has fewer variables, and it’s easier to isolate root causes for performance differences in (for instance) southbound protocols/plugins such as OpenFlow and NETCONF. Also, starting with a single instance establishes a baseline for comparison with future testing of clustered and/or federated configurations.
Marcus: I completely agree with Luis. The next phase of testing will be more solutions focused. I think end users look for tests that provide relevant metrics for their use-case or solutions needs. Clustered scenarios and the interaction of multiple plugins, as well as external software interaction and integration will be key to gathering the user-focused metrics needed to move the industry to adopt SDN solutions.
Daniel: OpenDaylight is a platform with support for a broad set of protocols and use cases, and our large and diverse community has an equally diverse set of performance metrics it cares about. Part of our S3P (Security, Scalability, Stability and Performance) focus in Beryllium was to create many new tests for these metrics, including tracking changes over time in CI. So, as Luis and Marcus said, there are many tests to select from. We focused on a set of end-to-end tests we thought were representative of the types of ODL deployments our User Advisory Group has identified. For the OpenFlow use case, the northbound REST API flow programming, statistics collection, and southbound flow programming tests were interesting because they could also be executed on other well-known, primarily OpenFlow controllers like ONOS or Floodlight. Other OpenDaylight protocols, such as NETCONF, OVSDB, BGP, and PCEP, were also tested; the results show that OpenDaylight has the performance required for many other interesting use cases.
Do you plan on refreshing the report again, and if so when?
Luis: My belief and desire is to produce a performance report after every release. In addition, we will run regular performance tests against real and emulated devices through the OPNFV CPerf project. If there is customer demand, we may also run reports against larger networks; in the meantime, just such a report is already available from one of our members and CPerf collaborators, Intracom Telecom. That report compares Lithium and Beryllium on topologies of up to 6,400 switches, with successful installations of up to 1 million flows.
Marcus: Our next release is Boron in the fall, and we are working hard to provide an enhanced version of this report soon after that release. In the meantime, we are working through the OPNFV CPerf project to create objective, SDN-controller-independent, industry-standard performance, scale, and stability testing on real hardware using realistic, large, automated deployments.
Daniel: The report has been extremely well-received, so it looks like we’ll continue to refresh it. OpenDaylight’s experts in creating new performance tests are collaborating with OPNFV’s standards experts, who have been thinking about which SDN controller metrics are important to test for years. Eventually, we’d like to create something like a continuously updating performance report from OPNFV’s Continuous Integration (CI).
I recently read with interest that the powerful mail transfer agent (MTA) that is Postfix has introduced a relatively new addition to its load mitigation and anti-spam arsenal. As of version 2.8, Postfix now incorporates Postscreen.
In previous versions of the mail server, only one connection could be processed by each SMTP system process before moving on to the next one in the queue. That’s far from ideal when you’re processing tens if not hundreds of thousands of emails every day. For superior efficiency, a single Postscreen process can handle multiple connections simultaneously and act as a filter, deciding which email is valid and which might come from spambots, before passing the job on to an SMTP process.
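Turning Postscreen on is typically just a matter of handing the public SMTP port to the postscreen daemon in master.cf and letting the real smtpd processes sit behind it. The entries below follow the layout given in the Postfix Postscreen documentation; adjust them to suit your own build:

/etc/postfix/master.cf:
smtp      inet  n       -       n       -       1       postscreen
smtpd     pass  -       -       n       -       -       smtpd
dnsblog   unix  -       -       n       -       0       dnsblog
tlsproxy  unix  -       -       n       -       0       tlsproxy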
The theory goes that, with less impact from spambots, there are more resources available to process valid email more efficiently. This in turn means you have more capacity available — or bang for your buck — when it comes to your hardware. What piqued my interest about Postscreen was the fact that Postfix already makes sterling efforts to mitigate server load conditions impeccably.
This well-designed MTA automagically deploys its own “stress-adaptive behavior” when it’s feeling the pressure. This action takes the form of a quick daemon restart (without dropping existing network connections) and a lowering of its tolerance for how, and from whom, other machines are allowed to connect and deliver email to it. This super-clever and automatically configured change in settings acts simply as a temporary measure until Postfix feels that its cortisol has decreased to a healthier level.
Although I was amazed when I first read about this type of reaction to heavy load caused by a deluge of email — whether from an attack or from a busy mail server that had been suffering a network outage coming back online again — I was equally impressed by Postscreen’s functionality. In this article, I’ll take a look at how it can help you now.
Ham Not Spam
As demoralizing as it may be, on our beloved Internet today, most email is spam (unsolicited) not ham (solicited). And, apparently, most of the spam generated today comes from malware installed on desktops and laptops that have been unwittingly compromised by some download or unsuspecting click in the past.
Sadly, those in the know estimate that this scenario will continue for the foreseeable future; thus, the onus to provide solutions ultimately falls on the Internet’s email infrastructure. According to the Postfix documentation, it became obvious that without some form of inbuilt mechanism to cope with the continual waterfall of spam, mail servers would spend significantly more time refusing to process email than actually accepting it.
As you delve further into how Postscreen works, you realize how difficult it is to circumvent the pressures of inbound spam. Postfix’s programmers realized that the most efficient way of deciding whether an email should be classified as spam or ham was to base each decision on a single measurement. In other words, if any one of a number of such tests fails, the email is binned.
To achieve such efficiency, the key premise of Postscreen is one of temporary whitelisting. Thankfully, zombie machines (i.e., compromised machines infected via numerous methods) are in a race against time to cause as much damage as possible before they are blacklisted by ISPs and spam lists. This means they tend to rush through email delivery and hurry MTAs along, making them suspicious with less-than-polite and badly ordered SMTP commands.
Postfix works on two main criteria to identify zombie machines: first, whether the connecting IP address is found to be blacklisted and, second, whether it hurries through the SMTP process with malformed commands. The Postfix docs make the point that inspecting the content of an email isn’t a good way of making a decision from a single measurement; there are simply too many factors, and it requires relatively high levels of processing power.
Some of the single-measurement tests that Postfix employs inevitably delay the processing of email by a few seconds; however, the main objective is clearly to keep these delays to a minimum. As I just mentioned, the inspection of hurried SMTP commands (e.g., details relating to the “helo”, the sender, and the recipient) is used in these well-considered tests. Before reaching that stage, however, whitelists (and blacklists) are queried, too. I’ll take a closer look at these in a moment.
Defense Overview
Here’s a quick summary of which parts of an email transaction Postscreen deals with in order to help manage unsolicited email. The Postfix docs refer to a “multi-layer defense,” all designed with efficiency in mind.
To keep around 90 percent of spam at arm’s length, at the outside layer, the clever Postscreen makes light work of repelling zombie machines and spambots. These “inexpensive” defenses make a massive difference in the effort that a mail server needs to undertake.
The second layer is one concerned with SMTP-level checks — thanks to configured policies or “milter” (a portmanteau of mail filter) applications, such as DKIM (Domain Keys Identified Mail) implementations that check the identity of a sender, among other things.
The third layer is more concerned about the content of emails and their headers. The docs say that through several options, Postfix “can block unacceptable attachments such as executable programs, and worms or viruses with easy-to-recognize signatures.”
Finally, you can pipe email through Postfix, and beyond, into well-known content filtering applications. One example might be the popular SpamAssassin, which employs clever techniques, such as Bayesian probability, to determine if an email is ham or spam.
I like the simplicity and considered approach of this particular statement from the docs: “The general strategy is to use the less expensive defenses first, and to use the more expensive defenses only for the spam that remains.”
In this loosely formed layered model, the vigilant Postscreen is deployed within the first two layers.
Your Machines
Clearly, you need the ability to configure settings so that your own valid machines can always connect without delays and without the risk of being spurned by your trusty mail server. You can adjust the postscreen_access_list option to control which machines are immediately whitelisted. Usually, this particular option simply defaults to permit_mynetworks, which might seem familiar because it essentially just points at the “mynetworks” option. Anybody who has ever looked inside the main config file, /etc/postfix/main.cf, will probably have spotted that option or indeed added their own IP addresses to it.
To see how whitelisting your own machines works, here’s a reminder of the greeting that SMTP uses; it usually looks something like this:
220 mail.chrisbinnie.tld ESMTP server ready Tue, 11 Nov 2121 11:11:11 +0100
By adding an IP address or address range to “mynetworks” or adding a “permit” entry to postscreen_access_list (which I’ll look at in a moment), the sending machine won’t be grilled by Postfix either before or after the 220 “greeting” tests. Safe passage is therefore assured.
If you wanted to add either blacklisted or whitelisted IP addresses and ranges to your Postfix build, then you would add them to the /etc/postfix/postscreen_access.cidr file and then point your main.cf file at that location. Obviously, you can adjust that filename if you need to. The .cidr file extension means a file format within which the flexible MTA will look for classless inter-domain routing (CIDR) formatted address ranges. The Postfix docs offer us the following example for our .cidr file:
# Rules are evaluated in the order as specified.
# Blacklist 192.168.* except 192.168.0.1.
192.168.0.1         permit
192.168.0.0/16      reject
Listing 1: How the “postscreen_access.cidr” file looks using the CIDR notation for networks.
I hope you’ll agree that this example speaks volumes (and follows the clarity of much of the well-written Postfix docs). And, you can tailor it to suit your needs.
Next we’ll make a small change to our /etc/postfix/main.cf file. Note the comma separation because we want to also include the contents of “mynetworks” by using the permit_mynetworks option.
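Following the postscreen_access_list syntax from the Postfix documentation, and reusing the .cidr path from above (rename it if you prefer), the addition would look something like this:

postscreen_access_list = permit_mynetworks,
        cidr:/etc/postfix/postscreen_access.cidr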
A quick refresh of our config, cleverly performed without resetting all of the current connections, is now needed to set any changes live. We can achieve just that by running this simple command:
# postfix reload
There’s more to this approach than meets the eye; we don’t just get whitelist and blacklist content options. This MTA also offers configurable actions for how to react to a blacklisted machine. There are no whitelist actions, however, because we always want such a host, one that has been cleared to proceed, to move forward onto an SMTP process of its own. In your log files, you will see an entry for each result, including a hostname and a port number, referenced as either WHITELIST or BLACKLIST.
The settings you can configure for a blacklisted machine are shown in Table 1.

ignore: This is the default action, and here failures are of little consequence. Because other tests will still be applied, you could use this setting for testing, generating lots of log entries without actually blocking e-mails and upsetting your users.

enforce: Postfix will continue with its additional testing if you use this option. Here we respond with the all-too-common SMTP error 550. One MTA’s example error 550 response might be “550 Requested action not taken: mailbox unavailable”, which is often seen for spam. You should be aware that there are many 550 error statements made by MTAs, such as “relay not permitted”. Note that if senders generate enough 550 errors, they may end up on a blacklist (or two), which could affect their ability to send email globally. For future reference, Postfix will dutifully log the “helo”, the sender, and the intended recipient.

drop: Here we get much more medieval and simply kill the connection as soon as it arrives. Postfix will respond with a 521 SMTP error code. This draconian error code states something along the lines of “mail.chrisbinnie.tld does not accept mail (see RFC 1846)”.

Table 1: Available actions for machines that are blacklisted; whitelisted machines go unhindered.
The options shown in Table 1 apply both to permanently blacklisted machines and to those picked up during the tests that Postfix performs before the greeting takes place. Each test is repeated whenever the sending machine returns at a later date. Additionally, if third-party real-time blacklists (RBLs), such as the powerful Spamhaus, are used and return a negative verdict, then these actions also apply. As you can see, it’s possible both to log and to test any changes you make to your configuration while at the same time limiting the resources used.
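As an illustration, the relevant main.cf settings might be along these lines; the parameter names come from the Postfix 2.8-era documentation, while the DNSBL and the choice of actions shown here are simply an example policy rather than a recommendation:

postscreen_blacklist_action = enforce
postscreen_greet_action     = enforce
postscreen_dnsbl_sites      = zen.spamhaus.org
postscreen_dnsbl_action     = enforce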
Stay Tuned
In this article, I looked at the tests that Postfix uses to identify zombie machines and described how Postscreen helps manage unsolicited email using a “multi-layer defense.” Next time, I’ll look at some of the “deep protocol” tests that Postfix performs after the initial greeting.
Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.
The Wi-Fi Alliance recently announced a new IEEE specification, 802.11ah, developed explicitly for the Internet of Things (IoT). Dubbed HaLow (pronounced HAY-Low), it’s aimed at connecting everything in the IoT environment, from smart homes to smart cities to smart cars and any other device that can be connected to a Wi-Fi access point.
Here’s what you need to know about HaLow.
1. What are the potential advantages of HaLow?
First, HaLow operates in the 900-MHz band. This lower part of the spectrum can penetrate walls and other physical barriers, which means better range than the current 2.4GHz and 5GHz Wi-Fi bands.
Second, as a low-power technology, HaLow is intended to extend the Wi-Fi suite of standards into the resource-constrained world of battery-powered products, such as sensors and wearables.
Cray has always been associated with speed and power, and its latest computing beast, the Cray Urika-GX system, has been designed specifically for big data workloads.
What’s more, it runs on OpenStack, the open source cloud platform, and supports open source big data processing tools like Hadoop and Spark.
Cray recognizes that the computing world has evolved since Seymour Cray launched the company back in the early 1970s. While the computers it creates remain technology performance powerhouses, the company is competing in an entirely different landscape, one that includes cloud computing, where companies can get as many computing resources as they need and pay by the sip (or the gulp, in the case of Cray-style processing).
Yesterday we released GitLab 8.8, super-powering GitLab’s built-in continuous integration. With it, you can build a pipeline in GitLab, visualizing your builds, tests, deploys, and any other stage of your software’s life cycle. Today (and already in GitLab 8.8), we’re releasing the next step: GitLab Container Registry.
GitLab Container Registry is a secure and private registry for Docker images. Built on open source software, GitLab Container Registry isn’t just a standalone registry; it’s completely integrated with GitLab.
GitLab is all about having a single, integrated experience, and our registry is no exception. You can now easily use your images with GitLab CI, create images specific to tags or branches, and much more. Our container registry is actually the first Docker registry that is fully integrated with Git repository management, and it comes out of the box with GitLab 8.8.
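As a quick illustration, pushing an image to the registry follows the standard Docker workflow; the registry hostname and project path below are placeholders for your own GitLab instance and project:

docker login registry.gitlab.example.com
docker build -t registry.gitlab.example.com/group/project .
docker push registry.gitlab.example.com/group/project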
IBM loves Apache Spark. It’s training its engineers on it, it’s contributing to the project, and it’s building many of its big data products on top of the open source platform so IBM’s enterprise customers can use its powerful tools.
Luciano Resende, an architect at IBM’s Spark Technology Center, told the crowd at Apache Big Data in Vancouver that Spark’s all-in-one ability for handling structured, unstructured, and streaming data in one memory-efficient platform has led IBM to use the open source project where it can.
“We at IBM … have noted the power of Spark, and the other big data technologies that are coming in [from the Apache Software Foundation],” Resende said.
IBM is particularly invested in Spark’s machine-learning capabilities and is contributing back to the project with its work on SystemML, which helps create iterative machine-learning algorithms. IBM offers Spark-as-a-service in the cloud, and it’s building Spark into the next iteration of the Watson analytics platform. Basically, anywhere it can, IBM is harnessing the efficient power of Apache Spark.
“We have our ETL platform, and we moved that to be on top of Spark,” Resende said. “By doing that it enabled us to go from 40 million lines of code to 4 million lines of code.”
Resende said Spark plays major roles in IBM’s Watson Health product, where doctors can query data lakes of internal and external data to better predict patient outcomes, and in helping a major telecom client create a 360-degree customer view to improve customer experience.
But, perhaps the most impressive use of Spark for IBM is how it helps run one of the tech titan’s recent acquisitions: The Weather Company.
The Weather Company provides data for The Weather Channel, as well as dozens of apps, and is used by Google, Apple, and several other companies. Resende said the database receives 30 billion API requests a day (more than 60 times the number of daily tweets) and serves a mobile user base of more than 120 million active users.
As a result, The Weather Company processes 360 petabytes of data every day, which has to be analyzed both in batch processes and through streaming.
“For this, they’ve chosen Apache Spark,” Resende said, “and for storage they use a lot of Apache Cassandra. This allows them to process all the data they have — 360 PB of data — as traffic daily, and it allows them to have a platform that can scale linearly and in a cost-efficient way.”