
Cloud Computing Continues to Influence HPC

Traditionally, HPC applications have been run on special-purpose hardware managed by staff with specialized skills. Additionally, most HPC software stacks are rigid, distinct from other more widely adopted environments, and demand a special skillset from the researchers who want to run the applications, who often need to become programmers themselves. The adoption of cloud technologies increases the productivity of your research organization by making its activities more efficient and portable. Cloud platforms such as OpenStack provide a way to collapse multiple silos into a single private cloud while making those resources more accessible through self-service portals and APIs. Using OpenStack, multiple workloads can be distributed among the resources in a granular fashion that increases overall utilization and reduces cost.

Read more at insideHPC

Serverless Security Implications—From Infra to OWASP

By its very nature, Serverless (FaaS) addresses some of today’s biggest security concerns. By eliminating infrastructure management, it pushes those security concerns to the platform provider. Unfortunately, attackers won’t simply give up, and will instead adapt to this new world. More specifically, FaaS will shift attackers’ focus from the servers to the application-level concerns OWASP highlights — and defenders should adapt their priorities accordingly.

This post touches on which security concerns Serverless helps with, and which ones it doesn’t. Each of these bullets is probably worthy of a full post of its own (which I may write later on!), but in this post I’ll keep remediation and risk management details light, in favor of covering the bigger picture.

Read more at Snyk

4 Ways to Take Control of your Wi-Fi Connections on Linux

Easy connection to the Internet over Wi-Fi is no longer a privilege denied to Linux users. With a recent distribution on a fairly recent laptop, connecting your Linux laptop to an available Wi-Fi network is often as easy as it is with your phone.

But just getting something to work is only the first step. With a little extra effort, you can optimize your Wi-Fi connections on Linux for the best speed and improved privacy. 

Read more at PCWorld

OSEN Podcast: Tim Mackey, Black Duck

I spoke with Tim Mackey, Technology Evangelist from Black Duck. Tim spent a few years at Citrix working on XenServer and CloudStack, where he, like me and many others, started thinking about how to get code from project to product. Tim and I talked about open source risk management, the current state of IT and open source, Xen vs. KVM flashbacks, and more.

Read more at OSEN

Catch Up With The Linux Foundation at OpenStack Summit in Boston

The Linux Foundation will be at OpenStack Summit in Boston — one of the largest open cloud infrastructure events in the world — with many conference sessions, intensive training courses, giveaways, and a chance to win a free OpenStack training course or a Raspberry Pi 3 Starter Kit.

Stop by The Linux Foundation training booth for fun giveaways, including webcam covers and stickers, as well as two free ebooks: Open Source in the Enterprise and SysAdmin’s Essential Guide to Linux Workstation Security.

You can also enter the raffle for a chance to win either a free LFS252 OpenStack Administration Fundamentals course OR a Raspberry Pi 3 Starter Kit. The winners will be announced Thursday, May 11 at 10:45 a.m. Eastern time at The Linux Foundation Training booth (#C19).

The Linux Foundation is also looking forward to an array of conference events — including intensive OpenStack training, many project-focused presentations, and the Women of OpenStack Lunch.

Event Highlights

Be sure to stop by these conference booths to chat and learn more: OPNFV (Booth C15), FD.io (C16), OpenDaylight (C17), Cloud Foundry (C18), and Cloud Native Computing Foundation (C20).

Linux Kernel 4.11 ‘Fearless Coyote’ Released

Linus Torvalds has returned to an animal-themed nickname for Kernel 4.11. After 4.10 was named “Happy Anniversary” for a brief time in its development cycle, 4.11 is Fearless Coyote, a name carried over from version 4.10-rc6.

And, after spending an extra week on rc8, Torvalds remarked that the last leg of the development of 4.11 “contained smaller fixes […] but nothing that made me go ‘hmm…’” — which is the way he likes the last week to go.

That doesn’t mean the rest of the cycle was uneventful; quite the contrary.

For example, along with a bunch of improvements to video drivers, this kernel comes with drivers that implement the last missing link for fully functional DisplayPort MST support on Intel video cards: audio. DisplayPort Multi-Stream Transport (DP MST for short) is a technology that allows you to daisy-chain several monitors together. You only need one cable running from the video output port on your computer to a monitor, then another cable running from the first monitor to a second monitor, then another to the next, and so on. This means that, even with only one video port, in theory you could have any number of monitors hooked up to your machine. The new drivers now allow you to pipe audio to monitors decked out with loudspeakers. Decidedly cool.

Continuing with Intel-based hardware, work has started on drivers to support graphic acceleration on the new Gemini Lake SoCs coming out later next year. This means that, by the time machines start shipping with Gemini Lake, hopefully, there will be mature drivers for Linux.

Speaking of partially supported platforms, the Lego Mindstorms EV3 gets some love in 4.11. Currently working are pin muxing, pinconf, the GPIOs, the MicroSD card reader, the UART on input port 1, the buttons, the LEDs, poweroff/reset, the flash memory, the EEPROM, the USB host port, and the USB peripheral port. Still being worked on are the speaker, the analog-to-digital converter (ADC) chip, the display, Bluetooth, the input and output ports, and battery indication.

Other things to look forward to in 4.11

  • The ever-so-useful perf tool, used to analyze the performance of your machine, gains support for ftrace’s function and function_graph tracers. This will allow you to trace practically every function in the kernel, which is great for debugging or just for learning by watching the kernel work live.

  • As usual, a fair number of new ARM platforms are now supported by the kernel — the HiSilicon Kirin960/Hi3660 and the HiKey960 development board, for one. The Banana Pi M64, powered by an Allwinner A64, is another, as is the NXP LS1012a SoC, along with three developer boards using this hardware.

  • Also supported are devices that adhere to the Opal Storage Specification. These are data storage devices (read: disk drives) that self-encrypt their contents and can only be decrypted by the owner of the device.

For a full list of changes, some in-depth explanations, and links to the commits, take a look at this entry on the Kernel Newbies website.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Disruptive Collaboration: The Next Generation of Network Software and Hardware

About 10 years ago, mobile networks began experiencing massive increases in demand with the launch of the iPhone and the introduction of other smartphones. In a keynote at the Open Networking Summit, Andre Fuetsch, President of AT&T Labs and CTO of AT&T, says that demand has increased over 250,000% in the past 10 years. What AT&T quickly realized was that the hardware-centric approach they’d been taking for decades wasn’t going to be enough, and they believed that shifting to software was their best bet to meet this accelerating demand. However, individual companies working alone tend to build similar solutions and duplicate effort, so AT&T isn’t doing this alone. They are collaborating with other companies in a consolidated effort around ONAP, the Open Network Automation Platform.

So far, AT&T has shifted more than 30 percent of their network functions to Software Defined Networking (SDN), with the goal of reaching over 55 percent this year, according to John Donovan, Chief Strategy Officer and Group President of AT&T Technology and Operations. He went on to point out that reaching this goal of becoming more software-defined than not means they need to figure out how to capitalize on this new software-defined network. What they’re architecting today is an abstraction layer, Indigo, designed to evolve and accelerate over time as part of what they are calling Network 3.0, a data-powered network.

When they began this journey of shifting to SDN, there wasn’t any existing software that met their needs, so Fuetsch says that they decided to build ECOMP, a modular, scalable, and secure network operating system for SDN automation that has been in production for over two and a half years. However, over the past year, they realized that there was an opportunity to align the industry on a single consolidated effort by open sourcing ECOMP and combining it with the OPEN-O project to create ONAP, a Linux Foundation project. “ONAP will become the global standard for service providers to introduce and operate and manage SDN,” Fuetsch predicts.

He closed by pointing out that “networking is really going to change the world. It’s more than just making SDN better. This is about connecting lives, creating new opportunities, and helping make life easier and happier around the world.”

Watch the video of this Open Networking Summit keynote to get more details about AT&T’s approach to using software and hardware to evolve their network:

https://www.youtube.com/watch?v=l-QjVrVe9Lo&list=PLbzoR-pLrL6p01ZHHvEeSozpGeVFkFBQZ

Interested in open source SDN? The “Software Defined Networking Fundamentals” training course from The Linux Foundation provides system and network administrators and engineers with the skills to maintain an SDN deployment in a virtual networking environment. Download the sample chapter today!

Check back with Open Networking Summit for upcoming news on ONS 2018. 

Red Hat Launches OpenShift.io, an Online IDE for Building Container-Based Applications

Red Hat is launching OpenShift.io today, its first major foray into offering cloud-based developer tools. As the name implies, OpenShift.io sits on top of the company’s Kubernetes-based OpenShift container management platform and provides developers with the tools they need to build cloud-native, container-based apps. That includes team collaboration services, Agile planning tools, developer workspace management, an IDE for coding and testing, as well as monitoring and — of course — continuous integration and delivery services.

While its focus is somewhat different, this does look a lot like Red Hat’s version of Microsoft’s Visual Studio Team Services. What Red Hat has done here, though, is tie together a number of existing open source projects like fabric8, Jenkins, Eclipse Che and, of course, OpenShift into a free service that provides developers with a similar experience, but with a strong focus on container-based applications.

Read more at TechCrunch

Nine Ways to Compare Files on Unix

Sometimes you want to know if some files are different. Sometimes you want to know how they’re different. Sometimes you might want to compare files that are compressed, and sometimes you might want to compare executables. And, regardless of what you want to compare, you probably want to select the most convenient way to see those differences. The good news is that you have a lot more options than you probably imagine when you need to focus on file differences.

First: diff

The command most likely to come to mind for this task is diff. The diff command will show you the differences between two text files or tell you if two binaries are different, but it also has quite a few very useful options.
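As a quick, hedged sketch of what that looks like in practice (the file names and contents here are invented for illustration; the `diff` and `cmp` options shown are standard ones, not an exhaustive tour of the article's nine methods):

```shell
# Work in a throwaway directory with two slightly different files.
cd "$(mktemp -d)"
printf 'alpha\nbeta\ngamma\n'  > old.txt
printf 'alpha\nBETA\ngamma\n' > new.txt

# Classic diff output. Note: diff exits with status 1 when the files
# differ, so we ignore the status to keep the script running.
diff old.txt new.txt || true

# Unified format, the style used by patches and most code review tools:
diff -u old.txt new.txt || true

# Only report *whether* the files differ, without the details:
diff -q old.txt new.txt || true

# cmp reports the first differing byte and works on binaries too:
cmp old.txt new.txt || true
```

The exit-status convention is worth remembering in scripts: `diff` and `cmp` return 0 when the files match and 1 when they differ, so you can branch on the comparison without parsing any output.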

Read more at Computer World

Uptime: Cloud Gives Many Enterprise Data Centers New Lease on Life

The volume of corporate software workloads being deployed in the cloud is quickly growing, but that does not mean the on-premises enterprise data center footprint is shrinking at a similar rate. While enterprises are not investing in new data centers to expand capacity – cloud and colocation providers satisfy that need – many are spending money to upgrade their existing facilities, extending their useful life for many years to come.

That’s according to the latest survey of enterprise data center operators by The 451 Group’s Uptime Institute. Uptime surveys senior executives, IT, and facilities managers who operate data centers for traditional enterprise companies, such as banks, retailers, manufacturers, etc. 

The percentage of respondents who said they were planning to build new data centers was also notably high: 30 percent.

Read more at Data Center Knowledge