
TLS 1.3 Is Approved: Here’s How It Could Make the Entire Internet Safer

The IETF has finally given the okay to the TLS 1.3 protocol, which will speed up secure connections and make snooping harder for attackers.

  • TLS 1.3 has been approved for use, which will make all secure internet connections faster and safer.
  • The security and speed improvements brought by TLS 1.3 are due to the elimination of unnecessary handshake steps and the forced use of newer encryption methods.

Transport Layer Security (TLS) version 1.3 has been approved by the Internet Engineering Task Force (IETF), making it the new industry standard for secure connections.
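
Whether a connection actually uses TLS 1.3 is decided in the handshake, and modern TLS libraries let you require it. As a quick way to check whether a given server already speaks the new protocol, here is a minimal sketch (an illustration, not from the article), assuming Python 3.7+ built against OpenSSL 1.1.1 or later; the hostname is a placeholder:

```python
# Minimal sketch: probe whether a server negotiates TLS 1.3.
# Assumes Python 3.7+ linked against OpenSSL 1.1.1+, the first
# releases with TLS 1.3 support; "example.com" is a placeholder.
import socket
import ssl

def negotiated_tls_version(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    # Refuse anything older than TLS 1.3, so the handshake fails
    # loudly if the server has not been upgraded yet.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3"

if __name__ == "__main__":
    print(negotiated_tls_version("example.com"))
```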

Read more at Tech Republic

Identity Management from the Cloud

Offerings for identity management as a service (IDaaS) are entering the market, promising simplicity. However, many lack functionality, adaptability, and in-depth integration with existing systems. We look at how IT managers should factor IDaaS into their strategy.

Identity and access management (IAM) is a core IT discipline located between IT infrastructure, information security, and governance (Figure 1). For example, IAM tools help manage users and their access rights across systems and (cloud) services, provide easy access to applications (preferably with a single sign-on experience), handle strong authentication, and protect shared user accounts….

In the market for IDaaS, or cloud IAM, a rapidly growing number of offerings focus on different sets of features. Moreover, these products are not easy to compare. The most important types of cloud IAM services are described here.

Cloud single sign-on (SSO) solutions are probably the best-known services. Their most important feature for users is an SSO to various cloud services. One of the most important value propositions is their predefined integration with hundreds, or even thousands, of different cloud services. Access is typically through a kind of portal that contains the icons of the various connected cloud services.

Read more at ADMIN Magazine

The Evolution of Systems Requires an Evolution of Systems Engineers

The systems we worked on when many of us first started out were the first generations of client-server applications. They were fundamentally different from the prior generation: terminals connecting to centralized apps running on mainframe or midrange systems. Engineers learned to care about the logic of their application client as well as the server powering it. Connectivity, the transmission of data, security, latency and performance, and the synchronization of state between the client and the server became issues that now had to be considered to manage those systems.

This increase in sophistication spawned commensurate changes in the complexity of the methodologies and skills required to manage those systems. New types of systems meant new skills: understanding new tools, frameworks, and programming languages.

Since the first generation of client-server systems, we’ve seen significant evolution. … Each iteration of this evolution has required changes in the technology itself and in the systems and skills we need to build and manage it. In almost every case, those changes have introduced more complexity. The skills and knowledge we once needed to manage our client-server systems are vastly different from those required for modern distributed systems, with their demands for resilience, low latency, and high availability. So, what do we need to know now that we didn’t before?

Read more at O’Reilly

Node.js Is Now Available as a Snap on Ubuntu, Other GNU/Linux Distributions

Node.js, the widely used open-source and cross-platform JavaScript runtime environment for executing server-side JavaScript code, is now officially available as a Snap package for the Linux platform.

Now that Linux is the most popular development platform among developers surveyed by Stack Overflow, running the latest versions of your favorite programming languages, frameworks, and development environments has become increasingly important, and Canonical’s Snappy technologies are the answer.

NodeSource, a company that provides enterprise-grade Node.js support and maintains popular Node.js binary distributions, announced today that it has created a Snap package to let Linux developers more easily install the popular JavaScript runtime environment on their operating systems. Snap is a containerized, universal binary package format developed by Canonical for Ubuntu Linux.

Read more at Softpedia

Linux Foundation Launches LF Deep Learning Foundation to Accelerate AI Growth

As this week’s Open Networking Summit gets underway, The Linux Foundation has debuted the LF Deep Learning Foundation, an umbrella organization focused on driving open source innovation in artificial intelligence, machine learning, and deep learning.

The goal of the LF Deep Learning Foundation is to make these new technologies available to developers and data scientists.

Founding members of LF Deep Learning include Amdocs, AT&T, B.Yond, Baidu, Huawei, Nokia, Tech Mahindra, Tencent, Univa, and ZTE. Through the LF Deep Learning Foundation, members are working to create a neutral space where makers and sustainers of tools and infrastructure can interact, harmonize their efforts, and accelerate the broad adoption of deep learning technologies.

In tandem with the launch of LF Deep Learning, The Linux Foundation also debuted the Acumos AI Project, a platform that will drive the development, discovery, and sharing of AI models and AI workflows. AT&T and Tech Mahindra contributed the initial code for the Acumos AI Project.

Read more at Fierce Telecom

The Evolution of Open Networking to Automated, Intelligent Networks

The 2018 Open Networking Summit is happening this week in Los Angeles. Just prior to opening day, we talked with John Zannos, Chief Revenue Officer at Inocybe, to get his view on the state of open networking and changes in the foreseeable future. Zannos is on the governing board of the Linux Foundation Networking effort and formerly served on the OpenStack and OPEN-O boards.

Inocybe has been involved with OpenDaylight since the beginning. The company is one of the top five contributors, and its engineering team is involved in helping solve some of the toughest questions associated with SDN and OpenDaylight. For example, company engineers lead the community effort focused on solving the problems associated with clustering, security, and service function chaining.

John Zannos, Chief Revenue Officer at Inocybe

Previously, Zannos ran Canonical’s cloud platform business and helped drive the NFV and SDN strategy within the company. “I have seen the evolution of disaggregation and automation of open source in compute, and we are seeing those same elements migrate to the network,” he said. “And, that’s what I thought we should talk about — how SDN and open networking are combining to deliver the promise of automated and intelligent networks.” Here are some insights Zannos shared with us.

Linux.com: What is the state of open networking now?

John Zannos: Open networking is here now. Over the last 10 years, there has been open source in the compute space: Linux, virtual machines, OpenStack, Kubernetes. We learned a lot over those 10 years, and we are bringing that experience and those hard-learned lessons to open source in the network.

In the networking space, we have seen NFV as a way to bring virtualization to networking. And we are at a point now where there is leadership from large service providers like AT&T, China Mobile, and Deutsche Telekom, and smaller ones like Cablevision in Argentina, to name a few. Different members of the vendor community, like Nokia and smaller players like Inocybe, are navigating how to incorporate open source into the network in a way that helps accelerate end user adoption with service providers and enterprises, with the goal of achieving the end state of an intelligent and automated network.

At Inocybe, we are accomplishing this through our Open Networking Platform. The Open Networking Platform simplifies the consumption and management of open networking software such as OpenDaylight and OpenSwitch. It helps companies consume just the open source components they need for specific business use cases (e.g., traffic engineering). We create a purpose-built open source software stack that is production-ready for the specific use case. It helps organizations automate the build, management, and upgrade process, ultimately putting them on a path to an automated and intelligent network.

At Open Networking Summit, we’ll be demonstrating how our Open Networking Platform can deploy a fully integrated OpenSwitch-based NOS and OpenDaylight-based SDN Controller on a variety of hardware platforms, eliminating the complexity from the controller down the stack, while preserving the ability to disaggregate the solution (Dell’s booth, number 43).

Linux.com: What evolutionary steps have been taken, and what is still ahead, for open networking?

Zannos: The first step of this journey was disaggregation of network appliances, separating network hardware and software. The next step was to incorporate automation. An example of that is the use of SDN controllers, such as OpenDaylight, an open source project which automates the deployment and management of network devices.
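
To make that concrete: an SDN controller such as OpenDaylight exposes its view of the network over a RESTCONF API that scripts and orchestration tools can drive. The sketch below (an illustration for this article, not Inocybe product code) reads the operational topology from a controller, assuming one is running on localhost:8181 with the default admin/admin credentials:

```python
# Hedged illustration: list the topologies and nodes an OpenDaylight
# controller has learned, via its RESTCONF API. Assumes a controller
# on localhost:8181 with the default admin/admin credentials.
import base64
import json
import urllib.request

ODL = "http://localhost:8181"
URL = ODL + "/restconf/operational/network-topology:network-topology"

token = base64.b64encode(b"admin:admin").decode()
req = urllib.request.Request(URL, headers={
    "Authorization": "Basic " + token,
    "Accept": "application/json",
})

with urllib.request.urlopen(req) as resp:
    topo = json.load(resp)

# Walk the standard network-topology model and print what was found.
for t in topo.get("network-topology", {}).get("topology", []):
    print(t.get("topology-id"))
    for node in t.get("node", []):
        print("  ", node.get("node-id"))
```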

The next two steps are a combination of data analytics and machine learning/AI. We are moving from collecting data to determine what is happening in the network and what will happen next, to machine learning/AI that will consume that information to determine what action to take. With these two steps, we move from analysis to action to autonomous networking. We see open analytics projects like PNDA, which is part of the Linux Foundation Networking effort, moving us in this direction. In the machine learning and AI space, AT&T and Tech Mahindra with the Linux Foundation have announced Acumos, which will enable developers to easily build, share, and deploy AI applications.

Ultimately, we are using collaborative innovation to help service providers and enterprises adopt automated and intelligent networks more quickly. What’s interesting is that open source creates a framework for companies that compete to collaborate and share information in a way that accelerates the move to an intelligent, automated network. We are now at a point where we are starting to see those benefits.

Think of software-defined networking (SDN) as allowing for automation and flexibility, and open networking as allowing for collaborative innovation and transparency. When you combine SDN and open source networking you begin to drive the acceleration of adoption.

Linux.com: You said the open networking community could learn from open source adoption in the compute space. What are those lessons to be learned?

Zannos: There are two things to be learned from the compute experience. We don’t want to create too many competing priorities in open networking, and we want to be careful not to stifle innovation. It is a tricky balance to manage.

There was a moment in OpenStack when we had too many competing projects, and that ultimately diluted the impact of engineering resources in the community. We want to ensure that the developer and engineering resources that companies big and small bring to the open source communities can stay focused on advancing the code base in a way that helps drive end user adoption. Competing priorities and projects can create confusion in the marketplace, and that slows down adoption. Companies weren’t sure if all these projects were going to survive. I believe we have learned from that experience. We are trying to be more thoughtful about helping projects form with a focus on accelerating time to adoption by end users, where they can actually reap the benefits. That’s exactly what we are trying to do with OpenDaylight: letting it continue to evolve, but also letting it stabilize so customers can actually use it in production.

The second thing is to be sensitive to the fact that you don’t want to stifle competition. You do want to allow for innovation that comes from different and competing ideas. But I think we have an opportunity to learn and improve from our experience to date.

I am optimistic that our experience as an industry and a community is building a strong foundation for open source adoption in the network. It is exciting to be part of what Inocybe and The Linux Foundation are doing in networking, because it’s an opportunity to collaborate and prioritize the efforts that will help drive adoption.

This article was sponsored by Inocybe and written by Linux.com.


How to Create an Open Source Stack Using EFK

Managing an infrastructure of servers is a non-trivial task. When one cluster is misbehaving, logging in to multiple servers, checking each log, and using multiple filters until you find the culprit is not an efficient use of resources.

The first step toward improving the way you handle your infrastructure or applications is to implement a centralized logging system. This will enable you to gather logs from any application or system into a centralized location and filter, aggregate, compare, and analyze them. Wherever there are servers or applications, there should be a unified logging layer.

Thankfully, we have an open source stack to simplify this. With the combination of Elasticsearch, Fluentd, and Kibana (EFK), we can create a powerful stack to collect, store, and visualize data in a centralized location.
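
As a sketch of what the unified logging layer looks like from an application’s point of view: events go to Fluentd, which buffers and routes them into Elasticsearch, where Kibana queries them. The snippet below assumes Fluentd is running with an in_http source on port 9880 and a match rule that forwards the tag on to Elasticsearch; both are configuration choices, not defaults:

```python
# Minimal sketch: ship a structured log event to Fluentd over its
# HTTP input plugin. Assumes an `in_http` source listening on port
# 9880 and a match rule routing this tag on to Elasticsearch.
import json
import urllib.request

def ship_event(tag: str, record: dict,
               host: str = "localhost", port: int = 9880) -> int:
    req = urllib.request.Request(
        f"http://{host}:{port}/{tag}",
        data=json.dumps(record).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # 200 means Fluentd accepted the event

if __name__ == "__main__":
    ship_event("app.web", {"level": "error",
                           "msg": "upstream timeout",
                           "server": "web-03"})
```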

Read more at OpenSource.com

Linus Torvalds: Linux 4.16 Kernel Launches on Sunday. Possibly. Maybe.

After a series of release candidates, Linus Torvalds could well be ready to unleash version 4.16 of the Linux kernel onto the world at the weekend. That is unless he changes his mind about the RC build: “rc7 is much too big for my taste,” he says in his weekly update to the kernel mailing list.

Torvalds says that while he’s not planning for there to be an eighth release candidate, the current size is causing him to think about the best course of action. For those who have not been following the story, he also details what’s new in Linux 4.16.

Read more at Betanews

Achieving Cloud-Native HPC Capabilities in a Mixed Workload Environment

As the lines blur between traditional workloads and new applications like deep learning, containers are becoming a hot topic in enterprise HPC as well. Not surprisingly, like their internet colleagues deploying cloud-scale services, HPC architects see value in cloud-native approaches. HPC developers have been building distributed applications since before clusters were cool, open source is in their DNA, and they also appreciate the elegance of parallelism, resilience, and horizontal scaling. While the term CI/CD didn’t originate in HPC, some HPC admins face the same challenge as their DevOps colleagues: needing to deploy new functionality quickly and reliably.

Barriers to Adoption

So why don’t we see a mass migration to the cloud and widespread availability of containerized HPC applications expressed as Kubernetes YAML templates? As is often the case, the answer is complicated.

There are several issues, but we explore two in more detail below; a sketch of what submitting such a job through the Kubernetes API might look like follows the list.

  • Significant investments in certified and trusted application workflows
  • Technical considerations related to workload management
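
For illustration, here is a hedged sketch of expressing an HPC-style batch run through the Kubernetes API, using the official Python client; the image name, namespace, and resource figures are placeholders, not a recipe from the article:

```python
# Hypothetical sketch: submit an HPC-style batch job through the
# Kubernetes API instead of a site scheduler. Assumes the `kubernetes`
# Python client is installed and ~/.kube/config points at a cluster;
# the image, namespace, and resource numbers are placeholders.
from kubernetes import client, config

def submit_batch_job() -> None:
    config.load_kube_config()
    container = client.V1Container(
        name="solver",
        image="registry.example.com/hpc/solver:latest",  # placeholder
        command=["mpirun", "-np", "4", "/opt/solver/run"],
        resources=client.V1ResourceRequirements(
            requests={"cpu": "4", "memory": "8Gi"},
            limits={"cpu": "4", "memory": "8Gi"},
        ),
    )
    template = client.V1PodTemplateSpec(
        spec=client.V1PodSpec(restart_policy="Never",
                              containers=[container])
    )
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name="solver-run-001"),
        spec=client.V1JobSpec(template=template, backoff_limit=0),
    )
    client.BatchV1Api().create_namespaced_job(namespace="hpc", body=job)

if __name__ == "__main__":
    submit_batch_job()
```

Notably, the workload-management concerns in the second bullet, such as gang scheduling and fair-share queuing, are exactly what this simple Job object does not capture.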

Read more at The New Stack

FOSSA: Open-Sourcing Open Source License Management

No one ever became a programmer so they could manage open-source licenses. But that’s what many developers must do these days. Black Duck Software, the open-source software logistics and legal solutions provider, and North Bridge found in 2015 that 66 percent of companies create open-source software. That’s great, but all that code comes with a wide variety of licenses, each with its own set of requirements. What’s a developer or company to do?

There have long been corporate programs, such as those from Black Duck Software, White Source Software, and Sonatype, which provide code scanning and open-source licensing management. This isn’t a small job. According to Sonatype, the average application contains 106 open-source components.

Read more at ZDNet