
Network Virtualization Merging LANs & WANs

For as long as anyone in the networking world can remember, management of local area networks (LANs) and wide area networks (WANs) has been distinctly different. LANs were primarily the responsibility of local IT departments, while WANs were made up of MPLS and Internet connections controlled by carriers. Network virtualization (NV) is starting to blur the lines between the LAN and the WAN.

After all, virtual connections traverse both the LAN and the WAN. Less clear, however, is how this merging of LAN and WAN network services is actually going to occur. In some quarters, relying on one vendor to unify the LAN and WAN will have a certain amount of appeal. But there’s also already a small cadre of vendors dominating SD-WAN deployments. Thanks to the rise of cloud computing, WANs are now obviously a more important strategic investment than ever. In fact, IDC now forecasts that the SD-WAN market will grow from less than $225 million last year to more than $6 billion by 2020.
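For a sense of scale, that forecast implies more than 26x growth. A quick back-of-the-envelope check of the implied annual growth rate (assuming a four-year window, which is an inference from the article's "last year" to 2020 framing):

```python
# Back-of-the-envelope check of IDC's SD-WAN forecast:
# growth from ~$225M to ~$6B over an assumed four years.
start_m = 225       # market size "last year", in millions of dollars
end_m = 6000        # forecast market size by 2020, in millions of dollars
years = 4           # assumed length of the forecast window

growth_factor = end_m / start_m
cagr = growth_factor ** (1 / years) - 1   # compound annual growth rate

print(f"growth factor: {growth_factor:.1f}x")   # ~26.7x
print(f"implied CAGR:  {cagr:.0%}")             # ~127% per year
```

In other words, the forecast assumes the market more than doubles every year, which underlines just how strategic IDC expects SD-WAN spending to become.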

Read more at SDx Central

Sparkling Water: Bridging Open Source Machine Learning and Apache Spark

Although many people have experience with the fields of machine learning and artificial intelligence through applications in their pockets, such as Apple’s Siri and Microsoft’s Cortana, the scope of this technology extends well beyond the smartphone. H2O.ai, formerly known as 0xdata, has carved out a unique niche in the machine learning and artificial intelligence arena because its primary tools are free and open source, and because it is connecting its tools to other widely used data analytics tools. As a case in point, H2O.ai has now announced the availability of version 2.0 of its open Sparkling Water tool. Sparkling Water, H2O.ai’s API for Apache Spark, allows Spark users to leverage H2O’s powerful machine learning capabilities.

You can download Sparkling Water 2.0 for free now. New features include the ability to: interface with Apache Spark, Scala and MLlib via H2O.ai’s Flow UI; build ensembles using algorithms from both H2O and MLlib; and give Spark users the power of H2O’s visual intelligence capabilities.

Sparkling Water includes a toolchain for building machine learning pipelines on Apache Spark.

In essence, Sparkling Water is an API that allows Spark users to leverage H2O’s open source machine learning platform instead of — or alongside — the algorithms that are included in Spark’s existing MLlib machine-learning library.

H2O.ai has published a number of use cases for how Sparkling Water and its other open tools are used in fields ranging from genomics to insurance.

Analysts are beginning to realize that open source machine learning tools can be used in conjunction with tools like Spark, giving them flexibility as they focus on big data. “Enterprises are looking to take advantage of a variety of machine learning algorithms to address an increasingly complex set of use cases when determining how to best serve their customers,” said Matt Aslett, Research Director, Data Platforms and Analytics at 451 Research. “Sparkling Water is likely to be attractive to H2O and Spark users alike, enabling them to mix and match algorithms as required.”

Moreover, in an interview, H2O.ai’s Vinod Iyengar, who oversees product strategy at the company, noted that running H2O.ai’s powerful, open tools on affordable clusters is now within reach of anyone. “In the last five years the cost of storage has come down dramatically, as has the cost of memory,” he said. “Additionally, anyone can leverage an advanced computing cluster on, say, Amazon Web Services, for a few hundred dollars. All of this means that organizations or individuals can take a whole lot of data and produce powerful predictions and insights from the large data sets without facing huge costs.”

Tipping Point

What does this mean in simple terms? It means that we are at a tipping point where anyone can wield the same kind of machine learning and artificial intelligence muscle that is used for everything from drug discovery to deep data analytics.

Iyengar also sees the open source roots of Sparkling Water as powerful. “Code is truly getting commoditized and the only defensible asset is community,” he said. “The relationships we have with our customers are also deepened due to the open source nature of our products. Because H2O and Sparkling Water are open source, our customers are also our community. They take part in H2O not just as consumers, but as developers as well.”

Notably, H2O.ai is also working on a data science hub called Steam, which will eliminate all the DevOps work required to build and deploy machine learning and artificial intelligence models. With Steam, developers and data scientists will be encouraged to compare models across teams and take them into production without the need for heavy engineering work on the backend. We will follow up on Steam in a post to come soon.

To learn more about the promise of machine learning and artificial intelligence, watch a video featuring David Meyer, Chairman of the Board at OpenDaylight, a Collaborative Project at The Linux Foundation. And, to learn more about H2O.ai’s machine learning work, see this previous post.


How Open Source Is Shaping the Future of Wireless

Most developers aren’t impressed by the ease of use of wireless protocols: they were originally invented by large corporations and heavily patented, which blocked individual developers from innovating. You had to have very deep pockets to bring any alternative to market. Fortunately, this is about to change.

Thanks to inexpensive open source software-defined radios (SDRs), innovators will now be able to design their own wireless protocols. 

Read more at Wireless

FCC Forces TP-Link to Support Open Source Firmware on Routers

Networking hardware vendor TP-Link today admitted violating US radio frequency rules by selling routers that could operate at power levels higher than their approved limits. In a settlement with the Federal Communications Commission, TP-Link agreed to pay a $200,000 fine, to comply with the rules going forward, and to let customers install open source firmware on its routers.

The open source requirement is a unique one, as it isn’t directly related to TP-Link’s violation. Moreover, FCC rules don’t require router makers to allow loading of third-party, open source firmware. In fact, recent changes to FCC rules made it more difficult for router makers to allow open source software. The TP-Link settlement was announced in the midst of a controversy spurred by those new FCC rules.

Read more at Ars Technica

The Rise of the Linux Botnet

A new report from Kaspersky Lab on botnet-assisted DDoS attacks shows steady growth in their numbers in the second quarter of this year.

SYN DDoS, TCP DDoS and HTTP DDoS remained the most common attack scenarios, but the proportion of attacks using the SYN DDoS method increased 1.4 times compared to the previous quarter and accounted for 76 percent. This is due to the fact that the share of attacks from Linux botnets almost doubled (to 70 percent) — and Linux bots are the most effective tool for SYN-DDoS. This is the first time that Kaspersky DDoS Intelligence has registered such an imbalance between the activities of Linux- and Windows-based DDoS bots.
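Those two figures are consistent with each other; a quick sanity check (assuming “increased 1.4 times” means the SYN DDoS share grew by a factor of 1.4 quarter over quarter):

```python
# If SYN DDoS now accounts for 76% of attacks after its share grew
# by a factor of 1.4, the previous quarter's share follows directly.
current_share = 0.76
growth_factor = 1.4

previous_share = current_share / growth_factor
print(f"implied previous-quarter SYN DDoS share: {previous_share:.0%}")  # ~54%
```

That implied jump from roughly half of all attacks to three quarters tracks the near-doubling of Linux botnet activity the report describes.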

Read more at BetaNews

Distribution Release: Linux Mint 18 “Xfce”

Clement Lefebvre has announced the availability of the Linux Mint 18 “Xfce” edition. Linux Mint 18 is a long-term support release which will receive security updates through to the year 2021. The Xfce edition is a lightweight alternative to Linux Mint’s Cinnamon and MATE editions. The new release offers users access to Mint’s X-Apps, forks of GNOME applications which are designed to look and work the same across multiple desktop environments. The new version of Mint also features improvements to the update manager.

Read more at DistroWatch

Mesosphere Declares ‘Container 2.0,’ the Stateful Era

Touting the rise of what it’s calling Container 2.0, Mesosphere says it’s time for containers to support real-time, stateful decision making.

To that end, Mesosphere today announced partnerships with three software vendors that have a hand in real-time applications: Confluent, DataStax, and Lightbend. The three companies’ applications are now supported on DC/OS, Mesosphere’s data center operating system.

As containers mature, they’re being used for more complex tasks — distributed applications, in particular. Originally, a container was meant to hold one application and its dependencies, such as libraries. Now, developers are interested in developing distributed applications that will run on multiple containers spread across multiple machines, says Tobi Knaup, Mesosphere’s CTO.

Read more at SDx Central

Container Format Dispute on Twitter Shows Disparities Between Docker and the Community

Should the Docker container image format be completely standardized? Or should Docker not be held back from evolving the format ahead of the open specification? This was the topic of a heated Twitter tussle last week between Google evangelist Kelsey Hightower and the creator of Docker itself, Solomon Hykes.

Hightower wants to see the Docker image format completely standardized, so companies, including Docker, can build additional functionality atop the specification. Hykes, however, balked at full standardization, asserting that the format is still too new and evolving too quickly.

The debate centers on how much of its container technology Docker should donate to the Open Container Initiative (OCI), an initiative to build vendor-neutral container image and runtime specifications. The image is the package, or the container itself, which users can fill with their apps. The runtime is the engine that runs the container, providing all the support of the underlying OS.
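To make the image/runtime distinction concrete, here is a minimal sketch of the kind of image manifest the OCI image specification defines. The field names and media types follow the OCI spec, but the digests and sizes below are placeholders:

```python
import json

# A minimal OCI-style image manifest. It describes the *package* side
# of a container (config blob + filesystem layers); a separate runtime
# specification governs how an engine actually executes the container.
manifest = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "config": {
        "mediaType": "application/vnd.oci.image.config.v1+json",
        "digest": "sha256:" + "0" * 64,   # placeholder digest
        "size": 1024,                     # placeholder size in bytes
    },
    "layers": [
        {
            "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
            "digest": "sha256:" + "1" * 64,   # placeholder digest
            "size": 32768,                    # placeholder size in bytes
        }
    ],
}

print(json.dumps(manifest, indent=2))
```

Standardizing this manifest-and-layers format is exactly what the Hightower camp wants settled, so that any runtime, not just Docker's, can consume the same images.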

Read more at The New Stack


Securing Embedded Linux by Michael E. Anderson

https://www.youtube.com/watch?v=4w4mtiy35ks

In this session from the Embedded Linux Conference, Mike Anderson discusses several techniques for improving the robustness of our platforms and hardening them against the myriad of bad actors lurking on the Internet.

Top Reasons The Open Source Community Attends Events

It should go without saying that there is no substitute for face-to-face collaboration. And what is open source if not the ultimate example of collaboration? Open source events provide a wide range of opportunities for the community to connect, and the end result is good for the community and good for business.

Over the years, and across more than a hundred events, we’ve learned quite a bit about just what it is that makes events so important to the community. Here are some of those reasons:

1. To advance technology. The world has come to understand that open source collaboration moves technology forward. A lot of work can be accomplished over mailing lists and conference calls, but it still slows the process. Time and time again, we hear from all types of technologists, from kernel maintainers to architects, that there is absolutely no substitute for the face time they get at events.

2. To learn how the community works. Not everyone in tech starts in open source, and the open source community is unique. Attending events gives developers, sysadmins, operators, users, executives and other open source players a firsthand look at how the community operates. There is no better way to immerse yourself.

3. To get motivated. Programmers are often portrayed as people who work very independently, coding at their computers into the wee hours of the night. While the long coding hours part is probably true, programmers aren’t the lone wolves they are sometimes portrayed as. Everyone wants to feel like they are a part of something bigger, part of a community. This is what drives open source. Attendees frequently tell us that the ability to meet in person with like-minded folks to discuss the projects and technologies they are working on is a huge motivation.

4. To connect directly with the maintainers, committers and key members of projects. One of the biggest benefits of our events that we often hear about is the ability to connect directly with these folks to ask questions and gain knowledge. For example, if a developer wants to start submitting patches to the kernel but wants some information on best practices to be successful at this, what better way to find out than to speak directly to one of the kernel maintainers? There is huge benefit to the growth of the community by being able to engage in person with these people.

5. To cross-pollinate. Some of our events gather together the developers who are building technologies, with the operators that are implementing them, the users that are benefiting from them, and the business leaders making the decisions. It is incredibly important for these groups to be able to connect and events provide that opportunity. For a developer to be able to explain value directly to a business leader? For a user to be able to ask questions or propose a new feature direct to a developer? Only the open source community truly allows this level of collaboration and events are the best place to offer it.

6. To learn about the latest and greatest. Technology moves fast. Every time you turn around there are new open source projects, new technologies and new advancements. Events provide an unprecedented ability to learn a ton of new information in a short amount of time, with the added benefit of being able to ask the speakers questions in real time and to engage with others to discuss the material and brainstorm right away.

7. To have fun. The open source community works hard and sometimes events can be a bit of an information overload, so attendees appreciate the ability to ‘take 5’ while onsite and have a little fun. 5k fun runs, games, evening events with good beer and company; these elements are appreciated by attendees and contribute to a productive experience.

The list of reasons could go on and on. The fact is, events provide different benefits for different attendees. The overarching point, though, remains the same: Events help further collaboration and the advancement of open source technology. If you’re thinking of attending an event, or contemplating sending some of your team to an event, and weighing all the pros and cons of whether you should go, the answer is: go. The knowledge gained, the relationships made, the questions answered – there is no substitute and everyone benefits.