
SDN: 7 Educational Opportunities

Find out about labs, certification programs, and training courses that offer ways to learn about software-defined networking.

Networking professionals hear all the time that they need to learn new skills to keep up with a rapidly changing industry. On-the-job training would be a practical option, but if your company hasn’t plunged into software-defined networking – and plenty haven’t — how do you expand your knowledge when you’re mired in CLI?

As it turns out, the options for learning new approaches to networking are growing as SDN adoption gradually expands beyond hyper-scale Internet companies and service providers. This spring, the Linux Foundation rolled out a software-defined networking training course to address what the foundation described as a skills gap for networking pros. In launching the SDN training, the foundation said many network engineers lack experience with software virtualization.

Read more at Network Computing

Microsoft Brings Blockchain to Azure Testing Environment

Microsoft is now making its Blockchain-as-a-Service (BaaS) offering available to all users of its Azure testing environment.

Designed to allow developers to quickly create environments in Azure, DevTest Labs seeks to help companies control costs associated with development work. Other features include reusable templates, so developers don’t have to design virtual machine environments from scratch, and artifacts, which tell apps what actions to take once deployed.

Read more at CoinDesk


Kaspersky Lab Launches Bug Bounty Program With HackerOne

The security firm has allocated $50,000 to pay security researchers for responsibly disclosing flaws in its security products. Kaspersky Lab is no stranger to the world of vulnerability research, but the company is now opening up and enabling third-party security researchers to disclose vulnerabilities in Kaspersky’s own software.

The new effort is being conducted as a bug bounty program on the HackerOne platform. Kaspersky Lab is initially providing a total of $50,000 in bug bounties and is starting off with its Kaspersky Internet Security and Kaspersky Endpoint Security products as targets for researchers.

HackerOne also hosts public bug bounty programs for Cylance and Glasswire and helped the U.S. Department of Defense with the Hack the Pentagon program earlier this year.

Read more at eWeek

Creating a Culture of Observability

For a change of pace, I’d like to share a video of me talking, rather than writing something for you!

The gist is pretty simple: When I joined Stripe, I began a process of adding observability to the company culture. You may be wondering what observability even is! Well, in control theory, observability is a measure of how well the internal states of a system can be inferred from knowledge of its external outputs.
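As a concrete, much-simplified illustration of exposing internal state as external output, here is a minimal sketch of emitting a StatsD-style counter metric over UDP. The metric name, host, and port are hypothetical placeholders, and this is not Stripe's actual tooling, just the general pattern:

```python
import socket

def statsd_counter(name: str, value: int = 1) -> bytes:
    """Format a StatsD-style counter datagram, e.g. b'checkout.completed:1|c'."""
    return f"{name}:{value}|c".encode()

# Fire-and-forget UDP send; nothing needs to be listening on the other end,
# so instrumentation stays cheap even when no collector is running.
payload = statsd_counter("checkout.completed")  # hypothetical metric name
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload, ("127.0.0.1", 8125))  # hypothetical local StatsD agent
sock.close()
print(payload.decode())
```

The point is the cultural one: every interesting internal event becomes a cheap external output that a monitoring system can observe.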

Monitorama PDX 2016 – Cory Watson – Creating A Culture of Observability at Stripe from Monitorama on Vimeo.

Read more at One Mo’Gin

Docker: Installation and Basic Usage on Ubuntu 16.04

Docker is an open-source project that provides an open platform for developers and sysadmins to build, package, and run applications anywhere as a lightweight container. This tutorial shows the installation of Docker on Ubuntu 16.04 and the first steps to get started with Docker container management.
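In outline, the first steps look something like the following install recipe. These are the stock Ubuntu 16.04 commands (the distribution package is `docker.io`; installing Docker's own upstream packages instead requires adding Docker's apt repository), so check the full tutorial for the exact variant it uses:

```shell
# Install Docker from the Ubuntu 16.04 repositories.
sudo apt-get update
sudo apt-get install -y docker.io

# Verify the daemon is running and the CLI can reach it.
sudo systemctl status docker --no-pager
sudo docker run hello-world    # pulls and runs a tiny test image

# Basic container management.
sudo docker ps -a              # all containers, including exited ones
sudo docker images             # locally stored images
```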

Read full article

How to Fix a Bug in Open Source Software

We’re all on the same team, and all working towards the same goal of making our open source software better. Your small contributions make a big impact.

How open source software is supported is just as important as how well it works. Given the choice between building awesome new features or carefully reading and responding to 10 bug reports, which would you choose? Which is more important? When you think of open source maintainers what do you see? I see issues. I see dozens of open bug reports that haven’t been responded to in days. I see a pile of feature requests waiting to be worked on.

Read more at OpenSource.com

IBM’s Wager on Open Source Is Still Paying Off

When IBM got involved with the Linux open source project in 1998, they were betting that giving their code and time to the community would be a worthwhile investment. Now, 18 years later, IBM is more involved than ever, with more than 62,000 employees trained and expected to contribute to open source projects, according to Todd Moore, Vice President of Open Technology at IBM, speaking at ApacheCon in May.

“It became apparent that open source could be the de facto standards we needed to be the engine to go out and drive things,” Moore said in his keynote at ApacheCon. “[The contributions] were bets; we didn’t know how this was going to come out, and we didn’t know if open source would grow, we knew there would be roadblocks and things we’d have to overcome along the way, but it had promise. We thought this would be the way of the future.”

Moore reiterated IBM’s commitment to open source, highlighting projects born at IBM’s developerWorks Open (dWOpen), such as SystemML, Toree, and Quarks, and now in the Apache Incubator.

Machine Learning Focus

IBM is especially focused on machine learning technology — hence, its work on SystemML — and it’s looking to build “a big-picture platform” that will be able to process the mountains of data sure to come from sources like streaming data and the Internet of Things. Moore cited a Cisco study that estimated that by 2018 there will be 400 zettabytes of IoT data created — a staggering figure.

“To us, machine learning is incredibly important,” Moore said. “We are swimming in data. There is just more data out there than we possibly know what to do with. As a result of that we need to use machines to start doing the analytics, to learn from themselves, to figure out the new set of things that we weren’t seeing in the data.”

IBM is also contributing to projects that have progressed past the Apache Incubator, projects like Mesos, Spark, Kafka, and CouchDB. “The projects here I think are going to be the keys to that in the future,” he said.

According to Moore, the strength of the Apache Foundation will always be the individual committers; it’s a formula that has worked from the very beginning. But, big companies like IBM continuing to make big commitments to the open source community will be crucial to maintaining or even increasing the pace of innovation.

“The Apache way is that everyone is a committer, and making their own commits into the process, but the companies are behind it,” Moore said. “We’ve got real good deep support here, and that’s important.”

Watch the complete presentation below:

https://www.youtube.com/watch?v=MG2iZBLz9g8?list=PLGeM09tlguZTvqV5g7KwFhxDlWi4njK6n


Network Virtualization Merging LANs & WANs

For as long as anyone in the networking world can remember, management of local area networks (LANs) and wide area networks (WANs) has been distinctly different. LANs were primarily the responsibility of local IT departments, while WANs have been made up of MPLS and Internet connections controlled by carriers. Network virtualization (NV) is starting to blur the lines between the LAN and the WAN.

After all, virtual connections traverse both the LAN and WAN. Less clear, however, is how this merging of LAN and WAN network services is actually going to occur. In some quarters, relying on one vendor to unify the LAN and WAN will have a certain amount of appeal. But a small cadre of vendors already dominates SD-WAN deployments. Thanks to the rise of cloud computing, WANs are now obviously a more important strategic investment than ever. In fact, IDC now forecasts that the SD-WAN market will grow from less than $225 million last year to more than $6 billion by 2020.

Read more at SDx Central

Sparkling Water: Bridging Open Source Machine Learning and Apache Spark

Although many people have experience with the fields of machine learning and artificial intelligence through applications in their pockets, such as Apple’s Siri and Microsoft’s Cortana, the scope of this technology extends well beyond the smartphone. H2O.ai, formerly known as 0xdata, has carved out a unique niche in the machine learning and artificial intelligence arena because its primary tools are free and open source, and because it is connecting its tools to other widely used data analytics tools. As a case in point, H2O.ai has now announced the availability of version 2.0 of its open Sparkling Water tool. Sparkling Water, H2O.ai’s API for Apache Spark, allows Spark users to leverage very powerful machine learning intelligence.

You can download Sparkling Water 2.0 for free now. New features include the ability to: interface with Apache Spark, Scala and MLlib via H2O.ai’s Flow UI; build ensembles using algorithms from both H2O and MLlib; and give Spark users the power of H2O’s visual intelligence capabilities.

Sparkling Water includes a toolchain for building machine learning pipelines on Apache Spark.

In essence, Sparkling Water is an API that allows Spark users to leverage H2O’s open source machine learning platform instead of — or alongside — the algorithms that are included in Spark’s existing MLlib machine-learning library.
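In rough outline, the workflow with the PySparkling bindings looks something like this sketch. The API names follow Sparkling Water's documented Python interface but can vary by version, and the data path, column name, and model choice are placeholders; running it requires a Spark installation plus the Sparkling Water Python package:

```python
from pyspark.sql import SparkSession
from pysparkling import H2OContext
from h2o.estimators import H2OGradientBoostingEstimator

spark = SparkSession.builder.appName("sparkling-water-demo").getOrCreate()
hc = H2OContext.getOrCreate(spark)  # launches H2O inside the Spark cluster

# Move a Spark DataFrame into H2O's in-memory frame format...
df = spark.read.csv("data.csv", header=True, inferSchema=True)  # placeholder path
frame = hc.asH2OFrame(df)

# ...train with an H2O algorithm instead of (or alongside) Spark MLlib...
model = H2OGradientBoostingEstimator()
model.train(y="label", training_frame=frame)  # "label" is a placeholder column

# ...and convert predictions back for the rest of the Spark pipeline.
predictions = hc.asSparkFrame(model.predict(frame))
```

The design point is the bridge itself: data stays on the cluster while crossing between Spark's DataFrame world and H2O's algorithm library in either direction.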

H2O.ai has published a number of use cases for how Sparkling Water and its other open tools are used in fields ranging from genomics to insurance.

Analysts are beginning to realize that open source machine learning tools can be used in conjunction with tools like Spark, giving them flexibility as they focus on big data. “Enterprises are looking to take advantage of a variety of machine learning algorithms to address an increasingly complex set of use cases when determining how to best serve their customers,” said Matt Aslett, Research Director, Data Platforms and Analytics at 451 Research. “Sparkling Water is likely to be attractive to H2O and Spark users alike, enabling them to mix and match algorithms as required.”

Moreover, in an interview, H2O.ai’s Vinod Iyengar, who oversees product strategy at the company, noted that running H2O.ai’s powerful, open tools on affordable clusters is now within reach of anyone. “In the last five years the cost of storage has come down dramatically, as has the cost of memory,” he said. “Additionally, anyone can leverage an advanced computing cluster on, say, Amazon Web Services, for a few hundred dollars. All of this means that organizations or individuals can take a whole lot of data and produce powerful predictions and insights from the large data sets without facing huge costs.”

Tipping Point

What does this mean in simple terms? It means that we are at a tipping point where anyone can wield the same kind of machine learning and artificial intelligence muscle that is used for everything from drug discovery to deep data analytics.

Iyengar also sees the open source roots of Sparkling Water as powerful. “Code is truly getting commoditized and the only defensible asset is community,” he said. “The relationships we have with our customers are also deepened due to the open source nature of our products. Because H2O and Sparkling Water are open source, our customers are also our community. They take part in H2O not just as consumers, but as developers as well.”

Notably, H2O.ai is also working on a data science hub called Steam, which will eliminate all the DevOps work required to build and deploy machine learning and artificial intelligence models. With Steam, developers and data scientists will be encouraged to compare models across teams and take them into production without the need for heavy engineering work on the backend. We will follow up on Steam in a post to come soon.

To learn more about the promise of machine learning and artificial intelligence, watch a video featuring David Meyer, Chairman of the Board at OpenDaylight, a Collaborative Project at The Linux Foundation. And, to learn more about H2O.ai’s machine learning work, see this previous post.