
HPC Storage Grows Cloudier, Flashier

Organizations running high performance computing (HPC) workloads are increasingly seeking out cloud-based storage solutions and speedy flash-enabled systems to help them cope with growing complexity and the sheer amounts of data they are managing nowadays, according to new research from DataDirect Networks (DDN).

For starters, organizations are making use of more data, the company found in its survey of over 100 HPC professionals. Eighty-five percent of respondents reported that they are using or managing more than one petabyte (PB) of storage, a 12 percent increase over last year’s results. Nearly 30 percent said they are in charge of more than 10PB of storage.

Nearly half (48 percent) of all respondents said they planned to stash at least some of their data on a public or private cloud, an 11 percent jump compared to 2016. Yet, only five percent of those polled said they expect to place more than 30 percent of their data in the cloud.

Read more at Datamation

3 Essential Questions to Ask at Your Next Tech Interview

The annual Open Source Jobs Report from Dice and The Linux Foundation reveals a lot about prospects for open source professionals and hiring activity in the year ahead. In this year’s report, 86 percent of tech professionals said that knowing open source has advanced their careers. Yet what happens with all that experience when it comes time to advance within their own organization or to apply for a new role elsewhere?

Interviewing for a new job is never easy. Aside from the complexities of juggling your current work while preparing for a new role, there’s the added pressure of coming up with the necessary response when the interviewer asks “Do you have any questions for me?”

At Dice, we’re in the business of careers, advice, and connecting tech professionals with employers. But we also hire tech talent at our organization to work on open source projects. In fact, the Dice platform is based on a number of Linux distributions, and we leverage open source databases as the basis for our search functionality. In short, we couldn’t run Dice without open source software, so it’s vital that we hire professionals who understand, and love, open source.

Over the years, I’ve learned the importance of asking good questions during an interview. It’s an opportunity to learn about your potential new employer, as well as better understand if they are a good match for your skills.

Here are three essential questions to ask and the reason they’re important:

1. What is the company’s position on employees contributing to open source projects or writing code in their spare time?

The answer to this question will tell you a lot about the company you’re interviewing with. In general, companies want tech pros who contribute to websites or projects, as long as that work doesn’t conflict with what they’re doing at the firm. Allowing contributions outside the company also fosters an entrepreneurial spirit within the tech organization and teaches tech skills that you may not otherwise pick up in the normal course of your day.

2. How are projects prioritized here?

As all companies have become tech companies, there is often a division between innovative, customer-facing tech projects and those that improve the platform itself. Will you be working on keeping the existing platform up to date, or on new products for the public? Depending on where your interests lie, the answer could determine whether the company is the right fit for you.

3. Who primarily makes decisions on new products and how much input do developers have in the decision-making process?

This question is one part understanding who is responsible for innovation at the company (and how closely you’ll be working with that person) and one part discovering your career path at the firm. A good company will talk to its developers and open source talent before developing new products. It seems like a no-brainer, but it’s a step that’s sometimes missed, and it can mean the difference between a collaborative environment and a chaotic process ahead of new product releases.

Interviewing can be stressful. However, with 58 percent of companies telling Dice and The Linux Foundation that they need to hire open source talent in the months ahead, it’s important to remember that this heightened demand puts professionals like you in the driver’s seat. Steer your career in the direction you desire.

Download the full 2017 Open Source Jobs Report now.

What Open Means to OpenStack

In his keynote at OpenStack Summit in Australia, Jonathan Bryce (Executive Director of the OpenStack Foundation) stressed the meaning of both “Open” and “Stack” in the project’s name and focused on the importance of collaboration within the OpenStack ecosystem.

OpenStack has enjoyed unprecedented success since its early days. It has excited the IT industry about applications at scale and created new ways to consume cloud services. The adoption rate of OpenStack and the growth of its community have exceeded even those of the biggest open source project on the planet, Linux. In its short life of six years, OpenStack has achieved more than Linux did in a similar time span.

So, why does OpenStack need to redefine the meaning of the project and stress collaboration? Why now?

“We have reached a point where the technology has proven itself,” said Mark Collier, the COO of the OpenStack Foundation. “You have seen all the massive use cases of OpenStack all around the globe.”

Collier said that the OpenStack community is all about solving problems. Although they continue to refine compute, storage, and networking, they also look beyond that.

Read more at The Linux Foundation

Many Cloud-Native Hands Try to Make Light Work of Kubernetes

The Cloud Native Computing Foundation, home of the Kubernetes open-source community, grew wildly this year. It welcomed membership from industry giants like Amazon Web Services Inc. and broke attendance records at last week’s KubeCon + CloudNativeCon conference in Austin, Texas. This is all happy news for Kubernetes — the favored platform for orchestrating containers (a virtualized method for running distributed applications). The technology needs all the untangling, simplifying fingers it can get.

This is also why most in the community are happy to tamp down their competitive instincts to chip away at common difficulties. “You kind of have to,” said Michelle Noorali (pictured), senior software engineer at Microsoft and co-chair of KubeCon + CloudNativeCon North America & Europe 2017. “These problems are really hard.”

Read more at SiliconAngle

Asynchronous Decision-Making: Helping Remote Teams Succeed

Asynchronous decision-making is a strategy that enables geographically and culturally distributed software teams to make decisions more efficiently. In this article, I’ll discuss some of the principles and tools that make this approach possible.

Synchronous decision-making, in which participants interact with each other in real time, can be expensive for people who work on a Maker’s Schedule, and it is often impractical for remote teams. We’ve all seen how such meetings can devolve into inefficient time wasters that we all dread and avoid.

In contrast, asynchronous decision-making, which is often used in large open source projects—for example, at the Apache Software Foundation (ASF), where I’m most active—provides an efficient way for teams to move forward with minimal meetings. Many open source projects involve only a few meetings each year (and some none at all), yet development teams consistently produce high-quality software.

How does asynchronous decision-making work?

Read more at OpenSource.com

Leveraging NFV and SDN for Network Slicing

Network slicing is poised to play a pivotal role in the enablement of 5G. The technology allows operators to run multiple virtual networks on top of a single, physical infrastructure. With 5G commercialization set for 2020, many are wondering to what extent network functions virtualization (NFV) and software-defined networking (SDN) can help move network slicing forward.

Virtualized infrastructure

NFV and SDN are two similar but distinct technologies that are spearheading the digital transformation of network infrastructure in the telecom industry. NFV is an initiative to deliver network services that conventionally ran on proprietary hardware using virtual machines instead, where a virtual machine is understood as software that imitates dedicated hardware. With NFV, network functions such as routing, load balancing, and firewalls are delivered by virtual machines. Using NFV, resources are no longer bound to data centers but pervade the network, accelerating the productivity of internal operations.
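
To make the idea concrete, here is a deliberately simplified, hypothetical sketch (plain Python, not any real NFV framework) of a network function implemented as ordinary software: a toy firewall of the kind that could run in a virtual machine instead of on a dedicated appliance.

    # Hypothetical sketch of the NFV idea: a network function (a toy stateless
    # firewall) written as ordinary software that could run inside a virtual
    # machine rather than on dedicated hardware. Not a real NFV framework.
    from dataclasses import dataclass

    @dataclass
    class Packet:
        src_ip: str
        dst_ip: str
        dst_port: int

    class FirewallVNF:
        """A 'virtual network function' that drops traffic to blocked ports."""
        def __init__(self, blocked_ports):
            self.blocked_ports = set(blocked_ports)

        def process(self, packet: Packet) -> bool:
            """Return True to forward the packet, False to drop it."""
            return packet.dst_port not in self.blocked_ports

    if __name__ == "__main__":
        fw = FirewallVNF(blocked_ports=[23, 445])   # block telnet and SMB
        for p in [Packet("10.0.0.5", "10.0.0.9", 443),
                  Packet("10.0.0.5", "10.0.0.9", 23)]:
            print(p, "->", "forward" if fw.process(p) else "drop")

A real deployment would wire such a function into the data plane and manage its lifecycle with an NFV orchestrator; the point here is only that the function is software, so it can be placed wherever the network needs it.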

Read more at RCR Wireless News

Juniper Moves OpenContrail to the Linux Foundation

Juniper Networks is moving the codebase for its OpenContrail network virtualization platform to the Linux Foundation.

Juniper first released its Contrail products as open source in 2013 and built a community around the project. However, many stakeholders complained that Juniper didn’t work very hard to build the community, and some called it “faux-pen source.”

In today’s announcement, Juniper said adding OpenContrail’s codebase to the Linux Foundation will further its objective to grow the use of open source platforms in cloud ecosystems.

Read more at SDxCentral

Language Bugs Infest Downstream Software, Fuzzer Finds

Developers working within secure development guidelines can still be bitten by upstream bugs in the languages they use. That’s the conclusion of research presented last week at Black Hat Europe by IOActive’s Fernando Arnaboldi.

As Arnaboldi wrote in his Black Hat Europe paper [PDF]: “software developers may unknowingly include code in an application that can be used in a way that the designer did not foresee. Some of these behaviors pose a security risk to applications that were securely developed according to guidelines.”

Arnaboldi found bugs in the major programming languages JavaScript, Perl, PHP, Python, and Ruby, and in all cases, he said, the vulnerabilities could expose software written in those languages.
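
As a rough illustration of the general technique, feeding a language’s built-in functions unusual inputs and recording any surprising behavior, here is a minimal, hypothetical Python sketch; it is not Arnaboldi’s tool.

    # Minimal, hypothetical illustration of fuzzing language built-ins with
    # unusual inputs and logging what happens. This sketches the general
    # technique only, not the fuzzer described in the Black Hat Europe paper.
    import math

    INPUTS = ["", "0", "1e309", "-0.0", "nan", " 42 ", "\x00", "9" * 400]
    TARGETS = [int, float, math.sqrt, len]   # a few built-ins to probe

    for func in TARGETS:
        for value in INPUTS:
            try:
                outcome = f"returned {func(value)!r}"
            except Exception as exc:         # record the failure, keep fuzzing
                outcome = f"raised {type(exc).__name__}"
            print(f"{func.__name__}({value!r}) {outcome}")

A full differential fuzzer would run equivalent inputs through several language runtimes and flag cases where their behavior diverges or is left undefined.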

Read more at The Register

Challenges and Solutions in Edge Computing: The Future

This article was sponsored by Intel and written by Linux.com.

This year’s Open Source Summit Europe (formerly LinuxCon) took place in Prague. The conference is part of a series of annual events that are always popular in the open source community, and the lineup featured many different tracks, reflecting the upsurge in the number of open source projects and adoptions. One of the more popular topics there was edge computing and IoT.

Earlier, we spoke with Imad Sousou, vice president of the Software and Services Group and general manager of the Open Source Technology Center at Intel Corporation, about his thoughts on edge computing. Following Open Source Summit Europe, we spoke with Sousou again to learn more about the future of edge networks and the new technologies needed to handle the demands of the growing number of connected devices.

Linux.com: Earlier, you talked about pushing intelligence to the edge of the network. What benefits does edge computing offer?

Sousou: Billions of new connected devices are creating and collecting data that needs to be processed. Imagine if all that processing took place in the cloud. The network would very quickly get overwhelmed, regardless of how powerful it is. We would run into issues with network congestion and latency. With edge computing, we push some processing closer to the devices to help eliminate the latency and congestion problems and to improve the performance of the applications running on those devices. This is a bit of an oversimplification, but you can see the benefit. Another advantage has to do with the availability of services on devices. Because slow response times and outages are unacceptable, moving the computation closer to the device, or even onto the device itself, improves availability.

Linux.com:  What is driving this move to the edge? Why does it matter?

Sousou: That’s a great question. You can boil it down to four key reasons. The first reason is speed. I touched on this before, but edge computing reduces latency because data doesn’t have to travel over a network to a remote data center or the cloud for processing. The second reason is security. We could see improved security at the edge because the data stays closer to where it was created. The third is scalability. Edge computing is fundamentally ‘distributed computing,’ meaning it improves resiliency, reduces network load, and is easier to scale. And finally, it matters because it lowers cost. Data transmission costs are lower because the amount of data transferred back to a central location for storage is reduced.

Linux.com: What kinds of technologies are needed to make edge computing successful?

Sousou: At Intel, we think applying current cloud technologies for use at the edge is the right approach. For example, we can use existing cloud container solutions such as Intel Clear Containers. Today, applications at the edge run on bare metal. This creates security concerns if an application gets compromised. With Intel Clear Containers, we can provide hardware-based isolation with virtualization technology. If the application controlling your device gets hacked, your application will still be safe and other applications won’t be able to read/write your memory or data packets.

That is just one example. Of course we’ll innovate to address new use cases. We can use advancements in machine learning and artificial intelligence at the edge. It will really be a mix of new and existing technologies that will deliver edge computing.

Linux.com: You mentioned artificial intelligence and machine learning. Can you provide more detail on how that relates to edge computing?

Sousou: With the amount of data being generated and accessed, it’s becoming more important for edge devices to know what data is relevant and what isn’t. Devices must be more than smart. They must also be powerful enough to both train themselves and infer direction on the same small device or within a sensor. New artificial intelligence and machine learning technologies are making this possible. Machine learning algorithms that connect multiple points of input require powerful processing that supports the data movement needed to best take advantage of the information. At Intel, we want to ensure machine learning frameworks are optimized for Intel architecture.

Linux.com:  How does 5G wireless technology help enable edge computing?

Sousou: 5G networks support growing data rates, the increasing number of terminals, the need for higher service availability, and the desire for enhanced edge network coverage. To support new use cases, the new standard has identified three primary requirements: massive machine-to-machine (M2M) communications for Internet of Things (IoT) applications, ultra-low latency enabling life-saving car-to-car connectivity, for example, and gigabit speeds (high-bandwidth mobile broadband). No single wireless technology will meet these characteristics, so 5G goes beyond a single air interface and will be defined by a heterogeneous network that integrates 5G, 4G, Wi-Fi, and other wireless technologies.

Linux.com: We have heard a bit about time-sensitive networking (TSN). Can you explain what it is and how it relates to the edge?

Sousou: Intel has invested in time-sensitive networking for more than 5 years. TSN is a set of technologies that allows a network to deliver low-latency or guaranteed bandwidth to applications that require it while simultaneously supporting other less demanding applications. TSN uses packet prioritization, filtering, and network virtualization to support edge compute use cases on existing networking infrastructure. There are examples of uses in industrial infrastructures, autonomous devices, data centers, and communications infrastructures where the open implementations of core TSN infrastructure will help companies lower costs, make things easier to maintain, and provide the scale and accessibility needed for broad market acceleration and adoption. We are working with the industry, including groups like the Avnu Alliance, to deliver a maintainable, deterministic, open source network stack and associated hardware that can provide a coordinated time synchronization from cloud to fog to the edge.
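
TSN itself is implemented in switches and network interfaces at the Ethernet layer, but the basic notion of marking some traffic as higher priority can be seen with ordinary Linux socket options. The hypothetical sketch below is only a loose analogue of TSN’s packet prioritization, not a TSN implementation, and assumes a Linux host.

    # Linux-only sketch: mark a UDP socket's traffic as high priority so the
    # kernel's queueing disciplines (and 802.1p-aware VLAN interfaces) can
    # prefer it over best-effort traffic. Real TSN (IEEE 802.1Q scheduling)
    # lives in switches and NICs; this only illustrates prioritization.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    # SO_PRIORITY is exposed by Python on Linux; 12 is its kernel value.
    SO_PRIORITY = getattr(socket, "SO_PRIORITY", 12)
    sock.setsockopt(socket.SOL_SOCKET, SO_PRIORITY, 6)  # 0-6 without CAP_NET_ADMIN

    # Also set a DSCP code point (EF, "expedited forwarding") in the IP header.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xB8)

    sock.sendto(b"time-critical control message", ("192.0.2.10", 5005))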

Linux.com: Is there anything else that you would like to share with readers that we haven’t already covered?

Sousou: There is a lot of excitement about this connected world. It has the potential to change how we create, consume, and take advantage of information and could radically change how we live. Open source software is front and center in driving this change and provides the foundation for making edge computing a reality. I look forward  to continuing to work with the community to innovate and help achieve a safer, smarter world.

Linux Foundation Continues to Emphasize Diversity and Inclusiveness at Events

This has been a pivotal year for Linux Foundation events. Our largest gatherings, which include Open Source Summit, Embedded Linux Conference, KubeCon + CloudNativeCon, Open Networking Summit, and Cloud Foundry Summit, attracted a combined 25,000 people from 4,500 different organizations globally. Attendance was up 25 percent over 2016.

Over the past few years, one of our core objectives has been to work with projects and communities to promote diversity and inclusiveness in open source. We’re relentlessly focused on this not only because more diverse teams make smarter decisions and generate better business results, but because they create more productive open source projects. Most important, we think supporting diversity and inclusiveness in open source is simply the right thing to do.

While there’s still progress to be made, we’ve made remarkable headway at our events this year. Here are a few of our initiatives:

Read more at The Linux Foundation