
Cisco Unfurls its Tetration Data Center Analytics Platform

Cisco today took the wraps off a new platform for data centers – Tetration Analytics.

According to its announcement today, existing data center analytics tools are disjointed. Cisco created Tetration as an entirely new analytics platform to monitor every action in the data center.

Tetration is based on a 39-rack-unit appliance installed on premises at the data center, and it uses sensors deployed throughout the environment to feed the analytics platform.

“It’s a complete system, monitoring every single packet and monitoring changes happening across those packets,” said Yogesh Kaushik, a Cisco senior director of product management, in a pre-briefing with SDxCentral.

Read more at SDx Central

Syslog, A Tale Of Specifications

The advantages of a unikernel work cycle are manifold. You get performance benefits from not having a memory management unit or a kernel/user boundary, and the attack surface is greatly reduced because all system dependencies are compiled in with your application logic. Don’t use a file system in your application? Leave it out. The philosophy here is to keep it simple and use only what you need. Unikernels are also the secret sauce behind how Docker Beta runs natively on Windows and Mac OS X.
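As a rough illustration of that keep-it-simple philosophy, a MirageOS unikernel declares up front exactly which devices it depends on, and anything left out of that declaration (a file system, a network stack) simply never gets built into the image. Below is a minimal configuration sketch in the spirit of Mirage’s console hello-world example; exact device names and functions vary between MirageOS releases, so treat it as illustrative rather than copy-paste ready.

    (* config.ml: processed by the `mirage` tool to assemble the unikernel. *)
    open Mirage

    (* The entry point needs a console and nothing else: no file system and
       no network stack are requested, so none are compiled in. *)
    let main = foreign "Unikernel.Main" (console @-> job)

    let () = register "hello" [ main $ default_console ]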

I’m specifically focusing on hacking on the Mirage implementation of Syslog, which was started by Jochen Bartl (verbosemode, lobo on IRC), who is an all-around awesome guy to work with. This is Jochen’s first big OCaml project (mine too), and he has already proven how capable and passionate he is by leading the charge.

Read more at Gina Codes

Network Security: The Unknown Unknowns

Using the Assimilation Project to Perform Service Discovery and Inventory of Systems

I recently thought of the apocryphal story about the solid reliability of IBM AS/400 systems. I’ve heard several variations, but in the most common version, an IBM service engineer shows up at a customer site one day to service an AS/400. The hapless employees have no idea what the service engineer is talking about. Eventually the system is found in a closet, or even sealed in a walled-off space, where it had been reliably running the business for years, completely forgotten and untouched. From a reliability perspective, this is a great story. From a security perspective, it is a nightmare. It embodies Donald Rumsfeld’s infamous “unknown unknowns” statement regarding the lack of evidence linking the government of Iraq with the supply of weapons of mass destruction to terrorist groups.

Alan Robertson, an open source developer and high availability expert, likes to ask people how long it would take them to figure out which of their services are not being monitored. Typical answers range from three days to three months.

Read more at Security Week

Scientific Audio Processing, Part I – How to read and write Audio files with Octave 4.0.0 on Ubuntu

Octave, the Linux equivalent of MATLAB, has a number of functions and commands that allow the acquisition, recording, playback, and digital processing of audio signals for entertainment, research, medical, or other scientific applications. In this tutorial, we will use Octave 4.0.0 on Ubuntu and will work through reading audio files, then writing and playing signals to emulate sounds used in a wide range of activities.
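The tutorial’s examples are written in Octave; purely as a companion sketch in OCaml (the language used elsewhere on this page), the snippet below shows the same basic workflow of synthesizing a signal and writing it to a 16-bit PCM WAV file that Octave, or any player, can then read. File names and parameters are arbitrary examples, and the hand-rolled header is only there to make the WAV layout visible.

    (* Generate one second of a 440 Hz sine wave and save it as a mono,
       16-bit PCM WAV file. *)
    let sample_rate = 44100
    let duration_s = 1.0
    let freq_hz = 440.0

    let samples =
      let n = int_of_float (duration_s *. float_of_int sample_rate) in
      Array.init n (fun i ->
          let t = float_of_int i /. float_of_int sample_rate in
          int_of_float (0.5 *. 32767.0 *. sin (2.0 *. Float.pi *. freq_hz *. t)))

    (* Write a minimal RIFF/WAVE header followed by the raw samples. *)
    let write_wav path samples =
      let n = Array.length samples in
      let data_bytes = 2 * n in                          (* 16-bit mono *)
      let buf = Bytes.create (44 + data_bytes) in
      let set_str off s = Bytes.blit_string s 0 buf off (String.length s) in
      let set_u16 off v = Bytes.set_uint16_le buf off v in
      let set_u32 off v = Bytes.set_int32_le buf off (Int32.of_int v) in
      set_str 0 "RIFF";  set_u32 4 (36 + data_bytes);  set_str 8 "WAVE";
      set_str 12 "fmt "; set_u32 16 16;                set_u16 20 1;   (* PCM *)
      set_u16 22 1;      set_u32 24 sample_rate;       set_u32 28 (sample_rate * 2);
      set_u16 32 2;      set_u16 34 16;                (* block align, bits/sample *)
      set_str 36 "data"; set_u32 40 data_bytes;
      Array.iteri (fun i s -> Bytes.set_int16_le buf (44 + 2 * i) s) samples;
      let oc = open_out_bin path in
      output_bytes oc buf;
      close_out oc

    let () = write_wav "sine440.wav" samples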

Best Linux Command-Line Tools For Network Engineers

Trends like open networking and adoption of the Linux operating system by network equipment vendors require network administrators and engineers to have a basic knowledge of Linux-based command-line utilities.

When I worked full-time as a network engineer, my Linux skills helped me with the tasks of design, implementation, and support of enterprise networks. I was able to efficiently collect information needed to do network design, verify routing and availability during configuration changes, and grab troubleshooting data necessary to quickly fix outages that were impacting users and business operations. Here is a list of some of the command-line utilities I recommend to network engineers.

Read more at Network Computing

6 Amazing Linux Distributions For Kids

Linux and open source are the future, and there is no doubt about that. To see this become a reality, a strong foundation has to be laid, starting from the lowest…


True Network Hardware Virtualization

If your network is fine, read no further.

TROUBLE IN PARADISE

But it’s likely you’re still reading. Because things are not exactly fine. Because you are probably like 99.999% of us who are experiencing crazy changes in our networks. Traffic in metro networks is exploding, characterized by many nodes, with varying traffic flows and a wide mix of services and bit rates.

To cope with all this traffic growth and these changing usage patterns, WAN and metro networks require more flexibility: network resources that can be dynamically and easily partitioned into logical zones, and the ability to create new services out of pools of network resources.

Virtualization has already achieved this in the data center, where compute resources have long been virtualized using virtual machines (VMs) and the NICs providing network connectivity to those VMs have been virtualized as well. But in metro and WAN networks, network resources are still rigidly and physically assigned.

You, the devoted architects and tireless operators of these high-capacity networks, are confronted with complex networking structures that don’t lend themselves to any form of dynamic change. A further dilemma is that you need to both manage existing connections and build new platforms that excel at delivering on-demand services and subscriber-level networking.

VAULTING TO THE FRONT

The IXPs and ISPs who architect networks with dynamic, programmatic control will achieve the service velocity needed to win.

Enter true network hardware virtualization, which creates virtual forwarding contexts (VFCs) at WAN scale. For ISPs, Internet exchanges (IXs), and large campus networks, WAN-scale multi-context virtualization offers dynamic creation of logical forwarding contexts within a physical switch, making programmable network resource allocation possible.

VIRTUALIZATION IN NETWORK HARDWARE FOR WAN AND METRO NETWORKS

Corsa’s SDN data planes allow hardware resources to be exposed as independent logical SDN Virtual Forwarding Contexts (VFCs) running at 10G and 100G physical network speeds. Under SDN application control, VFCs are created in the logical overlay. Three context types are fully optimized for production network applications: L2 Bridge, L3 IP Routing, and L2 Circuit Switching. A Generic OpenFlow Switch context type is provided for advanced networking applications where the user wants to use OpenFlow to define any forwarding logic.

Each packet entering the hardware is processed with full awareness of which VFC it belongs to. Each VFC is assigned its own dedicated hardware resources, which are independent of other VFCs and cannot be scavenged by them. Each VFC can also be controlled by its own, separate SDN application.

The physical ports of the underlay are abstracted from the logical interfaces of the overlay. The logical interfaces defined for each VFC correspond to a physical port or an encapsulated tunnel in the underlay, such as a VLAN, MPLS pseudowire, GRE tunnel, or VXLAN tunnel. The logical interfaces of any VFC can be shaped to their own required bandwidth.
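To make those ideas concrete, here is a small, purely hypothetical model, sketched in OCaml (the language used elsewhere on this page) and not Corsa’s actual API: each context gets a type, a dedicated forwarding-table budget, its own controller, and logical interfaces that map onto a physical port or an encapsulated tunnel in the underlay, with per-interface shaping. All names and values are illustrative.

    (* Hypothetical model only; names and fields are illustrative, not Corsa's API. *)
    type context_type =
      | L2_bridge
      | L3_ip_routing
      | L2_circuit_switching
      | Generic_openflow                 (* forwarding logic defined via OpenFlow *)

    type underlay_attachment =
      | Physical_port of int             (* front-panel port *)
      | Vlan of int * int                (* port, VLAN id *)
      | Mpls_pseudowire of int           (* pseudowire label *)
      | Gre_tunnel of string             (* remote endpoint *)
      | Vxlan_tunnel of string * int     (* remote endpoint, VNI *)

    type logical_interface = {
      attachment : underlay_attachment;  (* physical port or tunnel in the underlay *)
      shaped_mbps : int;                 (* per-interface bandwidth shaping *)
    }

    type vfc = {
      name : string;
      ctx : context_type;
      table_entries : int;               (* dedicated, not shared with other VFCs *)
      controller : string;               (* each VFC can have its own SDN controller *)
      interfaces : logical_interface list;
    }

    (* Carve two independent contexts out of one physical switch. *)
    let contexts = [
      { name = "tenant-a-router"; ctx = L3_ip_routing; table_entries = 100_000;
        controller = "controller-a.example:6653";
        interfaces = [ { attachment = Vlan (1, 100); shaped_mbps = 1_000 };
                       { attachment = Vxlan_tunnel ("192.0.2.2", 5000); shaped_mbps = 10_000 } ] };
      { name = "ix-peering-bridge"; ctx = L2_bridge; table_entries = 50_000;
        controller = "controller-b.example:6653";
        interfaces = [ { attachment = Physical_port 3; shaped_mbps = 100_000 } ] };
    ]

    let () =
      List.iter (fun v ->
          Printf.printf "%s: %d logical interfaces, %d table entries\n"
            v.name (List.length v.interfaces) v.table_entries)
        contexts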

USE SDN VIRTUALIZATION TO AUGMENT YOUR NETWORK WHERE IT’S NEEDED

This level of hardware virtualization, coupled with advanced traffic engineering and management, allows traditional Layer 2 and Layer 3 services to be built with innovative new SDN-enabled capabilities. For example, an SDN-enabled Layer 2 VPLS service can provide features such as bandwidth on demand or application-controlled forwarding rules, while using existing network infrastructure in the underlay (physical) network to provide connectivity for the new service. To further differentiate, service providers may even allow customers to bring their own SDN controllers to control their services, while retaining full control over the underlay network.

STOP READING AND START DOING!

With Corsa true network hardware virtualization, virtual switching and routing can be achieved at scale to enable programmable, on-demand services for operators and their customers.

WEBINAR

Join us for the live webinar at 10 a.m. PDT on June 22 to see how networks can be built using true network hardware virtualization and to learn the specific use cases it benefits. The webinar will outline the attributes needed in open, programmable SDN switching and routing platforms, especially at scale, and in the process dispel the notion of ‘the controller’, discussing how open SDN applications can be used to control the virtualized instances.

Register Now!

 

A Shared History & Mission with The Linux Foundation: Todd Moore, IBM

IBM is no stranger to open source software. In fact, the global corporation has been involved with The Linux Foundation since the beginning. Founded over a century ago, IBM has made a perennial commitment to innovation and emerging technology. That’s why they chose to participate in Linux Foundation Collaborative Projects.


It’s impressive that IBM was founded more than a century ago and has decades of research, technologies, and products behind it. But even more impressive is that the company continues to evolve and embrace emerging technologies. It has done so, in part, due to its continued involvement with Linux and open source through The Linux Foundation.

“IBM has a long history with The Linux Foundation,” says Todd Moore, VP of Open Technology at IBM. “We’ve been one of the bedrock members of The Linux Foundation since its inception.” And, more generally, says Moore, “We have a long history of doing open source projects throughout many communities.”

Today IBM participates in many Linux Foundation projects, including the Open Mainframe Project, whose goal is to bring government, academic, and corporate members together “to boost adoption of Linux on mainframes.”

IBM was one of the founding Platinum members of the Open Mainframe Project, along with ADP, CA Technologies, and SUSE. IBM’s participation included making “the largest single contribution of mainframe code from IBM to the open source community,” Moore says.

“We choose to work with The Linux Foundation and participate in projects like the Open Mainframe Project because of the people, the communities who come together, and the great things that get done,” says Moore.

3 Reasons IBM Participates in Linux Foundation Projects

Moore cites three main reasons IBM participates in Linux Foundation projects:

  • Tailored structure: “There’s quite a bit of customization that can happen within a Linux Foundation project. Many communities impose structure in how they want to operate. When we work with the Linux Foundation to create a community, the community can be very much tailored to just that set of individuals.”

  • Open governance: “Working with the Linux Foundation brings credibility to the actual open governance structure that we like to see in communities. This partnership brings the credibility that this is a project that will be truly governed out in the open.”

  • Encouraging collaboration and participation: “We set up organizations and work effectively to create an atmosphere where people will come and collaborate, and they’ll be ‘sticky’ and they’ll want to go and work on those projects.”

Other Linux Foundation projects that IBM is involved in include Node.js, ODPi, the Cloud Native Computing Foundation, and The Hyperledger Project.

“If we were just to take a project and open source it ourselves and expect people to come to that project, it’s a very difficult path,” says Moore. “When you do it in partnership with someone like The Linux Foundation, that path very much gets smoothed. We have great contacts, great recruitment into these projects, and the staff that we can really go and help and deliver on that.”


Read more stories about Linux Foundation Collaborative Projects:

PLUMgrid: Open Source Collaboration Speeds IO and Networking Development

Telecom Companies Collaborate Through OPNFV to Address Unique Business Challenges

 

ON.Lab Releases Latest ONOS SDN Platform

The Open Network Lab’s Open Network Operating System (ONOS) project unveiled the seventh release of its software-defined networking operating system, dubbed “Goldeneye.”

ONOS said the Goldeneye release includes advances such as improved adaptive flow monitoring and selective DPI from ETRI, claimed to provide lower-overhead flow monitoring; YANG tool chain support from Huawei; integration of the northbound intent subsystem with the flow objective subsystem; a six-times improvement in core performance to support consistent distributed operations; and southbound improvements to Cisco IOS, NETCONF, and the YANG tool chain.

Read more at RCR Wireless