
Open Source 25-Core Chip Can Be Strung Into a 200,000-Core Computer

Researchers want to give a 25-core open-source processor called Piton some serious bite. The developers of the chip at Princeton University have in mind a 200,000-core computer crammed with 8,000 64-bit Piton chips.
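The headline figure follows directly from the chip count; a quick sanity check of the arithmetic, assuming the full 8,000-chip configuration described above:

```python
# Each Piton chip carries 25 cores; the proposed machine packs 8,000 chips.
chips = 8_000
cores_per_chip = 25
total_cores = chips * cores_per_chip
print(total_cores)  # 200000
```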

It won’t happen anytime soon, but that’s one possible usage scenario for Piton. The chip is designed to be flexible and quickly scalable, and it will have to keep such a giant collection of cores in sync when processing applications in parallel.

Details about Piton were provided at the Hot Chips conference this week. The goal was to design a chip that could be used in large data centers that handle social networking requests, search, and cloud services. The response time in social networking and search is tied to the horsepower of servers in data centers. Piton is a rare open-source processor, a modified version of Oracle’s OpenSPARC T1 design.

Read more at InfoWorld

Citrix Gives Away Netscaler Containers for Free

Having put Netscaler’s load balancing software into containers, Citrix is now handing out free samples.

Netscaler CPX Express, a developer version of the CPX container, is available as a free download, the company announced yesterday at LinuxCon North America in Toronto. There’s even a catchy URL for it: microloadbalancer.com.

Revealed earlier this year, CPX is a container version of Netscaler’s application delivery controller (ADC). CPX Express does the same thing but in a sample size. It handles only 20 Mb/s of traffic, whereas the commercial CPX offering supports 1 Gb/s. CPX Express is also missing TCP optimization and Layer 7 distributed denial-of-service protection.

Read more at SDx Central

Live From LinuxCon: Red Hat and Microsoft Embrace On Stage

If any LinuxCon moment so far has underscored the evolution of Linux over 25 years, it was during the transition between keynotes this morning when Red Hat CEO Jim Whitehurst found himself on stage with Microsoft’s new vice president of open source, Wim Coekaerts.

The men laughed nervously at the irony of the moment and paused for a brief photo op, arms around each other’s shoulders.

“It was cool to be with Jim Whitehurst on stage. Microsoft and Red Hat together; that’s a big difference from many years ago,” Coekaerts said later on in his keynote.

Red Hat’s open source leadership

Whitehurst took the stage at LinuxCon first today to discuss how Linux and open source have changed corporate culture. Red Hat has always been a top contributor to the Linux kernel, Whitehurst said. But alongside its technical contributions, the company has most distinguished itself as a business model innovator. And, more recently, as a model for a culture of open innovation within corporations.

After realizing that selling t-shirts and coffee mugs wouldn’t pay the bills, said Whitehurst, Red Hat set about creating “enterprise open source” software – building enterprise features into a commercially supported version of the open source software. So while technical innovation was happening from the bottom up, driven by passionate volunteer developers, enterprise use was being driven from the top down, said Whitehurst. The first enterprise Linux users were large investment banks running their trading platforms on Linux.

What followed was a massive tide of enterprise adoption, which Red Hat rode to become the first billion-dollar open source company. Meanwhile, the Linux kernel became the backbone of most of modern technology, from phones and supercomputers to light bulbs and nuclear submarines.

“Linux has become the platform on which most net new innovation happens,” Whitehurst said.

Now, companies like Nike, Ikea and Toyota have also embraced the “social DNA” of Linux — the open source mindset — to drive innovation, he said.

This is the legacy of Linux, which is celebrating its 25th anniversary this week: It’s created a new way to organize people and coordinate behavior to get things done.

Microsoft’s new open source strategy

After a brief moment on stage with Whitehurst, Wim Coekaerts – a longtime Linux kernel contributor and the former head of Linux engineering at Oracle – spoke about Microsoft’s relatively recent embrace of Linux and its shift to open source.

He presented a very different picture of the company whose former CEO once called Linux a cancer. Microsoft now recognizes that Linux and open source are necessary to its growth prospects and plans, Coekaerts said.

The company’s transition to open source started with its contribution of Hyper-V drivers to the Linux kernel in 2009. And Microsoft has made continual progress toward becoming an open source company since then, Coekaerts said. Last week Microsoft announced that it would open source PowerShell and make it available on Linux.

Microsoft’s open source evolution, in a presentation by Wim Coekaerts, vice president of open source at Microsoft, at LinuxCon 2016.

Microsoft also uses Linux internally now; many of Microsoft’s services in Azure run on Linux. Developers work out of public Github trees to better collaborate with each other and the outside world, Coekaerts said. And, perhaps most telling, the company’s engineers now have the freedom to build new products and services using whichever operating system works best for their purposes.

“There’s no longer a rule that ‘it has to be on Windows,’” he said. “It’s actually very exciting to see.”

Going forward, Coekaerts expects to see more Linux kernel contributions coming from Microsoft — and on projects that don’t directly benefit the company or its products but that are intended to advance Linux itself.

“If there are cases where we can make Linux work better,” he said, “we will do that.”
 

Watch Linus Torvalds speak live tomorrow on The Linux Foundation’s free streaming video. Sign up now.

Can’t catch the live stream? You can still register and receive recordings of the keynotes after the conference ends.

 

How IBM’s LinuxONE Has Evolved For the New Open Source Cloud

One year ago at LinuxCon 2015 in Seattle, IBM announced IBM LinuxONE, its enterprise-grade system specifically designed for Linux and open source workloads. Today in their keynote at LinuxCon 2016 in Toronto, IBM executives Jim Wasko and Donna Dillenberger will give us an update on how the technology has evolved since then and how IBM is involved now in the open source community. (You can watch all the morning keynotes on our live video stream starting at 9 a.m. Eastern.)  

In this Q&A, Mary Hall, who does marketing for LinuxONE and Blockchain at IBM, tells us more about LinuxONE, how it has evolved, and some of the challenges that remain for open source cloud computing today.

Linux.com: Can you please briefly describe LinuxONE? How is it uniquely tailored for Linux and open source?

Mary Hall: LinuxONE is IBM’s Linux server. The LinuxONE server runs the major distributions of Linux: SUSE, Red Hat, and Canonical’s Ubuntu. The server also runs open source databases like MongoDB, PostgreSQL, and MariaDB, allowing for both horizontal growth and vertical scale, as demonstrated by running a 2TB MongoDB database without sharding. Several of the features built into this system support the constant innovation inherent in the open source movement while maintaining the performance and reliability required by enterprise clients; for example, Logical Partitions (LPARs) allow clients to host a development environment on the same system as production with zero risk.

Linux.com: How has LinuxONE evolved since you announced it last year?

Mary: The LinuxONE servers have undergone a significant refresh in 2016, adding even more features and capabilities including faster processors, more memory and support for larger amounts of data.

IBM LinuxONE continues to provide additional flexibility for developers by building out capabilities for both enterprise and open source software. As an example, IBM recently ported the Go programming language, which was developed by Google, to LinuxONE. Go is designed for building simple, reliable, and efficient software, making it easier for developers to combine the software tools they know and love with the speed, security, and scale offered by LinuxONE. IBM has begun contributing code to the Go community.

The Swift language now runs on IBM LinuxONE. There are also new hybrid cloud capabilities in the product. IBM has optimized its Cloudant and StrongLoop technologies for LinuxONE. The new features offer a highly scalable environment on Node.js, which enables developers to write applications for the server side using the language they prefer.

IBM Open Platform (IOP) is now available in 2016 for the IBM LinuxONE portfolio at no cost. IOP represents a broad set of industry standard Apache-based capabilities for analytics and big data. 

Beyond the technology enhancements, we now see customers further along with their LinuxONE deployment. Customers have reported economies of scale from consolidating their data on LinuxONE versus running server farms.

Linux.com: What has been its biggest accomplishment?

Mary:

  1. Marrying the openness, flexibility, and amazing innovation of Linux and open source software with the availability, scalability, security, and robustness of a platform like LinuxONE.  

  2. We’ve seen the competitive edge users achieve with IBM LinuxONE. For example, our customer ICU IT Services has used IBM LinuxONE to help businesses slash their IT costs by up to 50 percent.

Linux.com: Why has a platform designed for open source workloads become necessary for the enterprise?

Mary: Our clients have made it clear that they need the innovation and creativity of the Open Source movement in an environment upon which they can “bet their business” (actual quote from a customer).

It’s essential to be able to run large enterprise workloads that scale reliably and efficiently.  LinuxONE is highly scalable and delivers unprecedented performance. It has the world’s fastest commercial microprocessor running at 5GHz, large memory pools, and 4 layers of cache. The shared memory, vertical scale architecture is vastly better for stateful workloads like databases and systems of record.

LinuxONE is designed to be more efficient for large, cache-intensive business workloads and those that require high I/O bandwidth. The LinuxONE server has massive I/O throughput with up to 640 dedicated I/O processors. It is designed to support tens of thousands of concurrent users, while delivering consistent sub-second end user response times. And it achieves those fast response times at up to 100% utilization, which simplifies the solution and reduces costs.

Linux.com: What are the challenges with the open source cloud still?

Mary: Well, obviously, security remains a huge challenge in the cloud. If users run an open source cloud on LinuxONE, they get a level of enhanced security that is built into the hardware.

Running cloud offerings on IBM LinuxONE can help meet stringent industry and compliance security requirements, especially in the banking industry. LinuxONE provides isolation at every level — applications, containers, virtual servers, and partitions. LinuxONE features security capabilities built into all elements of the system, including unique capabilities such as dedicated, tamper-proof cryptographic processors. LinuxONE hardware and memory are also fully checked for data integrity. It delivers unmatched secure transaction throughput.

Another challenge in the cloud is restrictions on software of any kind, including open source software. Users typically have little to no control over how software operates in the cloud. With LinuxONE, users maintain complete control. Maximizing infrastructure investments in the public cloud requires in-house expertise in the cloud vendor’s offerings, from pricing to redundancy, latency, disaster recovery, and provisioning; if you don’t know what you’re doing, you can pay a very heavy price. This expertise does not add value to the client; it is more a cost of doing business in the cloud.

Linux.com: What is the role of hybrid cloud as the enterprise transitions to containers and microservices architectures?

Mary: The transition to containers and microservices should make the hybrid cloud environment more agile and give MSPs the ability to be more flexible and connect to more devices in the era of IoT. The leaders will be able to put the right workload in the right place at the right time for the right reasons; hybrid cloud is essential to providing that flexibility. It also opens up new revenue streams as companies are able to expose services they’ve created for internal constituents to external clients.

Sign up to watch the live video stream from LinuxCon and ContainerCon 2016.

 

Networking, Security & Storage with Docker & Containers: A Free eBook Covers the Essentials

With this week’s ContainerCon event underway in Toronto, new ways to manage and automate workloads are taking center stage. Container education is in the spotlight as well, and that’s where a new, free eBook from the editors at The New Stack comes in. Networking, Security & Storage with Docker & Containers, edited and curated by The New Stack’s Editor-in-Chief Alex Williams, covers the latest approaches to networking containers, including native efforts by Docker to create efficient and secure networking practices.

The New Stack analyzes how the new stack affects enterprises and enterprise startups, the various networks of developer communities, the DevOps movement and the business models that encompass the new world. The comprehensive, 99-page eBook emphasizes that working with containers necessitates a hard evaluation of security, especially at the networking and storage level.

Under the Hood

Networking, Security & Storage with Docker & Containers explores best practices for security at scale, data persistence and storage, database management, and networking all the components of today’s technology stacks. It includes discussion of composing applications with containers, dealing with the software delivery pipeline, securely networking containers, and maintaining persistent storage.

On the networking front, the new eBook covers:

- the evolution of container network types
- competing container networking specifications
- the role of software-defined networking
- network configuration and service discovery
- networking with OpenStack

Multimedia Extras

Docker is, of course, the darling of the container world, and the new eBook contains much discussion of it, as well as an embedded audio discussion on Docker and secure containers. The audio discussion features Nathan McCauley, Director of Security at Docker.

There are numerous other embedded SoundCloud audio discussions throughout the eBook. They feature leaders from IBM, Joyent, Twistlock, Nuage Networks and other companies (several of which are sponsors of the eBook series). These audio discussions are in-depth, and give the eBook a multi-dimensional, multimedia feel.

Networking, Security & Storage with Docker & Containers also provides a landscape view of important technology tools and platforms that are not solely in the container space or solely focused on Docker. For example, it delves into the interesting work that Mesosphere is doing with its Data Center Operating System (DC/OS). Within this discussion, Mesosphere’s Founder and Chief Architect Ben Hindman evaluates the role of plug-ins in extending what we can do with containers. He notes that the plug-ins defined by Docker will not necessarily prevail as the universal plug-ins in the container networking arena.

Flocker, which is ClusterHQ’s persistent container solution, is another non-Docker tool that deserves, and gets, a solid discussion.

Security in Focus

The eBook also provides a comprehensive survey of security scanning solutions. Many organizations are reaching for these as they deploy disparate components in their stacks, including open source components.

Smart networking and storage are essential parts of a good container strategy, but container security is an often-cited barrier to entry for some organizations. With that in mind, Williams stays very focused on security throughout his eBook.

“Containers can facilitate a more secure environment by addressing practices around security workflows,” the eBook notes. Indeed, vulnerability scans and signed container images are becoming well-known practices.

It also notes the following: “A major security benefit of containers is the extra tooling around isolation. Containers work by creating a system with a separate view of the world — separate namespaces — with regard to the filesystem, networking and processes.”
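The quoted point about separate namespaces can be observed directly on any Linux machine; a minimal sketch (an illustration, not from the eBook) that lists the namespace memberships of the current process:

```python
# On Linux, each process's namespace memberships appear as symlinks
# under /proc/self/ns, one per namespace type. Two processes inside
# the same container share these identifiers; processes in different
# containers do not. (Linux-only illustration.)
import os

namespaces = sorted(os.listdir("/proc/self/ns"))
print(namespaces)
# Typically includes entries such as 'mnt', 'net', 'pid', and 'uts' --
# the filesystem, networking, and process views the eBook refers to.
```

Container runtimes like Docker create fresh instances of these namespaces for each container, which is what gives it that "separate view of the world."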

Are your approaches to containers, networking, and storage secure and robust? Networking, Security & Storage with Docker & Containers provides an opportunity to self-audit your practices in these areas — one worth taking.

You can instantly get PDF versions of all four of The New Stack’s free eBooks here by entering your email address. Each eBook in the series focuses on the Docker and container ecosystems, and the other titles delve into orchestration, application management, microservices, and more.
 

How Hardware Can Boost NFV Adoption

Hardware acceleration can boost adoption of network function virtualization (NFV) technology by providing a more powerful platform.

These days there’s clearly a massive amount of interest in all things relating to network function virtualization (NFV). And if historical trends hold, high-performance hardware can boost NFV adoption by providing a stronger platform for applications.

The history of computing is one balanced between hardware and software. Time and again, hardware advances have proven to be a boon to software, because the hardware innovation can mitigate the overhead introduced by new software. NFV is not likely to be any different. As SDxCentral has been covering as part of its Business Insights series, virtualization introduces a performance penalty that must be solved with hardware.

Read more at SDx Central

Datera’s Elastic Data Fabric Integrates With Kubernetes

Today Datera announced a new integration with Google’s Kubernetes system. Datera states that its intent-defined universal data fabric complements the Kubernetes operational model well. An integration of the two enables automatic provisioning and deployment of stateful applications at scale. According to Datera, this integration with Kubernetes will let it translate application service-level objectives, such as performance, durability, security, and capacity, into its universal data fabric. Datera goes on to claim that the integration will allow enterprise and service provider clouds to seamlessly and cost-effectively scale applications of any kind.

Read more at Storage Review.com

PLUMgrid Advances SDN with CloudSecure

Software Defined Networking (SDN) vendor PLUMgrid is helping to secure its product portfolio and its customers with a new technology it calls CloudSecure. The goal of CloudSecure is to provide policy and structure for organizations to build secure, micro-segmented networking in the cloud.

“PLUMgrid CloudSecure is a virtual security solution that consists of ONS, Cloudapex and the ecosystem partners to isolate, protect, and monitor north-south, east-west, and intra-host traffic between VMs and containers,” Pere Monclus, CTO of PLUMgrid, told Enterprise Networking Planet. “We are building on top of micro-segmentation/security policies/service insertion, and introducing policy-based virtual tap with ONS 6.0 and Security View with CloudApex 2.0.”

ONS is PLUMgrid’s OpenStack Networking Suite, which provides overlay networking capabilities for cloud deployments.

Read more at Enterprise Networking Planet

 

Huawei Launches a Kubernetes-based Container Engine

Joining an increasing number of companies, Asian telecommunications giant Huawei Technologies has released its own container orchestration engine, the Cloud Container Engine (CCE).

Ying Xiong, Huawei’s chief architect of cloud computing, announced CCE version 1.0 at LinuxCon North America, being held this week in Toronto. Like orchestration engines from CoreOS and Apprenda, CCE is based on Google’s open-source Kubernetes platform. During his talk, Xiong discussed the growing use of containers in China. In 2016, Huawei found that 14 percent of companies were using containers in production, and another 23 percent were using them for test and development. About 44 percent had plans to adopt container technologies within the next six months.

While 14 percent is still fairly low, adoption is growing rapidly; it is up 250 percent in the past year. “To me, that means the tools are maturing,” Xiong said.
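The survey doesn’t state the prior-year baseline, but it can be inferred from the growth figure; a quick back-of-the-envelope check, reading “up 250 percent” as a 250% year-over-year increase:

```python
# If production container use is 14% now and that is a 250% increase,
# then last year's figure was 14 / (1 + 2.5).
current_pct = 14.0
growth = 2.50  # "up 250 percent", read as a 250% increase
baseline_pct = current_pct / (1 + growth)
print(round(baseline_pct, 1))  # 4.0 -- roughly 4% a year earlier
```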

 

Read more at The New Stack

Linux Rules the World. Where to Next?

On the back of some significant improvements in the last year and a half, Linux is now the model for software development.

At LinuxCon, Jim Zemlin, the Linux Foundation’s executive director, said “Linux has gone far beyond what anyone could have expected” and that it’s been the “most successful software project in history.” He’s right. From Android phones to supercomputers to clouds to cars, it’s all Linux all the time. Linux is the poster child for the open-source revolution.

The latest Linux kernel report, Linux Kernel Development: How Fast It is Going, Who is Doing It, What They Are Doing, and Who is Sponsoring It, details just how quickly Linux changes. In the last 15 months, more than 3 million lines of code have been added to the Linux kernel. For those of you keeping score at home, changes were accepted at an average rate of 7.8 patches per hour.
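Note that the two figures measure different things (lines of code versus accepted changes); a rough cross-check, assuming roughly 15 months of 30-day months:

```python
# 3 million lines over ~15 months works out to hundreds of lines per
# hour, so the 7.8/hour figure must count accepted changes (patches),
# not lines of code.
lines_added = 3_000_000
hours = 15 * 30 * 24  # ~15 months expressed in hours
lines_per_hour = lines_added / hours
print(round(lines_per_hour))  # 278
```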

Read more at ZDNet