
Open Source Helps Drive Cloud Adoption, Says 2016 Future of Cloud Survey

Scalability, agility, cost, and innovation are the main factors driving cloud adoption, according to the 6th annual Future of Cloud Computing study released today by North Bridge Venture Partners and Wikibon analysts. And, this year, mobile and open source are twice as likely to be cited as drivers for cloud computing as they were in 2015.

“Open source software is the cornerstone for cloud computing. Open, collaborative development has nurtured a vibrant ecosystem that is fueling new services that in turn spur commercial adoption,” said Jim Zemlin, Executive Director of The Linux Foundation.

“From container technologies to high-velocity projects such as Kubernetes and Cloud Foundry, open source and open development are further accelerating the move to the cloud for businesses of all sizes across all industries,” Zemlin said.

DevOps adoption is also accelerating. Last year’s survey indicated that DevOps was limited to “pioneers” and small teams with adoption at 37 percent. This year, 51 percent of respondents have begun implementing DevOps in small teams and 30 percent are using DevOps in large teams or company-wide. These DevOps adopters cite agility, reliability, and cost efficiency as key competitive differentiators.

Hybrid Growth

Overall, the survey shows continued growth of hybrid cloud technologies and deeper cloud integration within businesses. Nearly half of respondents indicated that they are now using or will use an industry cloud offering within the next two years.

“Cloud environments will remain predominantly hybrid in the coming years, enhancing the importance of a clearly defined cloud governance and orchestration strategy to optimize for security, self-service and agility, while minimizing costs,” said Holly Maloney McConnell of North Bridge.

Specifically, the survey shows the following breakdown in cloud use:

  • 47 percent hybrid

  • 30 percent public

  • 23 percent private

All types of cloud technologies — including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Database as a Service (DBaaS), and Software-Defined Networking (SDN) — are expected to increase during the next two years, with the exception of Software as a Service (SaaS), which is expected to stay constant, according to the survey.

“The Future of Cloud data shows that while adoption of SaaS, public and private cloud have rapid growth, most companies are working tactically rather than strategically. Innovation and agility require commitment from the business to embrace transformation by changing processes along with tools,” said Wikibon senior analyst, Stu Miniman.

Data and Business

In terms of data storage, only 28 percent of companies store more than half their data in a public cloud, but the survey expects that share to grow by 18 percent within two years. Currently, 59 percent of companies store more than half of their data in a private cloud, a share expected to fall by 16 percent over the same period.

IT and business services are moving forward as well, with more than 70 percent of respondents saying the following areas are either already in the cloud or are moving there: disaster recovery, helpdesk, web content management, and communications. More than 75 percent said their sales/marketing, business analytics, and customer service functions were also in the cloud or on their way. Other services, such as accounting, back office, and manufacturing, are moving more slowly, with 30 percent citing adoption in these areas.

Security: Benefit or Barrier?

Respondents are evenly divided on the issue of cloud security. In fact, the survey reports that 50 percent see cloud security as a benefit of the cloud, while 50 percent see it as a barrier to cloud adoption. Other barriers mentioned include lock-in, privacy, complexity, and regulatory concerns.

Emerging areas of cloud investment cited by respondents include:

  • Analytics (58 percent)

  • Containers (52 percent)

  • AI/cognitive computing (32 percent)

  • Virtual reality (16 percent)

This year dozens of organizations, including The Linux Foundation, partnered in sponsoring the survey, which drew 1,351 responses (40 percent vendors, 60 percent users) from organizations ranging from cloud-native startups to large enterprises, across all industry sectors.

Check out the complete Future of Cloud Computing survey for more insights about cloud trends and technologies.

 

Essentials of OpenStack Administration Part 2: The Problem With Conventional Data Centers

Start exploring Essentials of OpenStack Administration by downloading the free sample chapter today.

In part 1 of this series, we defined cloud computing and discussed different cloud service models and the needs of users and platform providers. This time we’ll discuss some of the challenges that conventional data centers face and why automation and virtualization alone cannot fully address them. Part 3 will cover the fundamental components of clouds and existing cloud solutions.

For more on the basic tenets of cloud computing and a high-level look at OpenStack architecture, download the full sample chapter from The Linux Foundation’s online Essentials of OpenStack Administration course.

Conventional Data Centers

Conventional data centers are known for having a lot of hardware that is, by current standards at least, grossly underutilized. In addition, all of that hardware (and the software that runs on it) is usually managed with relatively little automation.

Even though many things happen automatically these days (configuration management systems such as Puppet and Chef help here), the overall level of automation is typically not very high.

In conventional data centers it is very hard to find the right balance between capacity and utilization. This is complicated by the fact that many workloads do not fully utilize a modern server: for instance, some may use a lot of CPU but little memory, or a lot of disk IO but little CPU. Still, data center operators want enough capacity to handle spikes in load, but don’t want the cost of idle hardware.
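
To make this concrete, here is a small, purely illustrative Python sketch (the host specs and workload numbers are invented) showing how one resource can be exhausted while most of another sits idle:

    # Purely illustrative: two hypothetical workloads packed onto one host.
    # The numbers are invented to show how one resource can be exhausted
    # while another is mostly idle.
    HOST = {"cpu_cores": 32, "mem_gb": 256}

    workloads = [
        {"name": "batch-analytics", "cpu_cores": 20, "mem_gb": 24},  # CPU-heavy
        {"name": "report-server",   "cpu_cores": 10, "mem_gb": 40},  # also CPU-heavy
    ]

    for resource in ("cpu_cores", "mem_gb"):
        used = sum(w[resource] for w in workloads)
        print(f"{resource}: {used}/{HOST[resource]} "
              f"({100 * used / HOST[resource]:.0f}% used)")

    # Prints roughly: cpu_cores 94% used, mem_gb 25% used. The remaining
    # memory is effectively stranded, yet another CPU-heavy workload no
    # longer fits on this host.

That stranded memory is exactly the kind of imbalance Figure 1 below illustrates.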

Whatever the case, it is clear that modern data centers require a lot of physical space, power, and cooling. The more efficient they run, the better for all parties involved.


Figure 1: In a conventional data center some servers may use a lot of CPU but little memory (MEM), or a lot of disk IO but little CPU.

A conventional data center faces several challenges to efficiency. Often there are silos, or divisions of duties among teams: a systems team that handles ongoing maintenance of operating systems, a hardware team that does physical and plant maintenance, database and network teams, and perhaps storage and backup teams as well. While this does allow for specialization in a particular area, the efficiency of producing a new instance that meets customer requirements is often low.

A conventional data center also tends to grow organically. By that I mean the changes may not be well thought out: if it’s 2 a.m. and something needs doing, a person from one team may make whatever changes they think are necessary. Without proper documentation, the other teams are unaware of those changes, and figuring them out later takes time, energy, and resources, which further lowers efficiency.

Manual Intervention

One of the problems arises when a data center needs to expand: new hardware is ordered, and, once it arrives, it’s installed and provisioned manually. Hardware is likely specialized, making it expensive. Provisioning processes are manual and, in turn, costly, slow, and inflexible.

“What is so bad about manual provisioning?” Think about it: network integration, monitoring, setting up high availability, billing… There is a lot to do, and some of it is not simple. These are things that are not hard to automate, but up until recently, this was hardly ever done.

Automation frameworks such as Puppet, Chef, Juju, Crowbar, or Ansible can take care of a fair amount of the work in modern data centers. However, even though these frameworks exist, there are many data center tasks they cannot do, or do not do well.
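
To give a flavor of what these tools automate, here is a toy Python sketch of the idempotent “desired state” idea behind them. It is not how any of these frameworks is actually implemented, and the monitoring file path and its contents are invented:

    # Toy sketch of the idempotent "desired state" idea behind tools such as
    # Puppet, Chef, and Ansible. Not how any of them is actually implemented.
    import hashlib
    from pathlib import Path

    def ensure_file(path: str, content: str) -> bool:
        """Ensure `path` exists with exactly `content`; return True if changed."""
        target = Path(path)
        desired = hashlib.sha256(content.encode()).hexdigest()
        if target.exists():
            current = hashlib.sha256(target.read_bytes()).hexdigest()
            if current == desired:
                return False              # already in the desired state, do nothing
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(content)        # converge to the desired state
        return True

    # Hypothetical example: keep a monitoring entry in sync for a new host.
    changed = ensure_file("/tmp/monitoring/web01.cfg",
                          "host web01.example.com\ncheck http port 80\n")
    print("changed" if changed else "no change needed")

Running it a second time reports “no change needed”; that convergence property is what lets such tools be run repeatedly and safely across an entire data center.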

Virtualization

A platform provider needs automation, flexibility, efficiency, and speed, all at low cost. We have automation tools, so what is the missing piece? Virtualization!

Virtualization is not a new thing. It has been around for years, and many people have been using it extensively. Virtualization comes with the huge advantage of decoupling the software from the hardware it runs on. Modern server hardware can be used much more efficiently when combined with virtualization. Virtualization also allows for a much higher level of automation than standard IT setups do.


Figure 2: Virtualization flexibility.

Virtualization and Automation

For instance, deploying a new system in a virtualized environment is fairly easy, because all it takes is creating a new Virtual Machine (VM). This helps us plan better when buying new hardware, preparing it, and integrating it into the platform provider’s data center. VMware, KVM on Linux, and Microsoft Hyper-V are typical examples of such virtualization environments.
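
For example, on a KVM host an administrator can script VM creation through libvirt. The following is a minimal, hypothetical sketch using the libvirt Python bindings; the domain name, disk image path, and resource sizes are placeholders:

    # Minimal sketch of scripted VM creation on a KVM host via the libvirt
    # Python bindings (pip install libvirt-python). The domain name, disk
    # image path, memory, and CPU values are placeholders.
    import libvirt

    DOMAIN_XML = """
    <domain type='kvm'>
      <name>demo-vm</name>
      <memory unit='MiB'>1024</memory>
      <vcpu>1</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/var/lib/libvirt/images/demo-vm.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
      </devices>
    </domain>
    """

    conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
    dom = conn.defineXML(DOMAIN_XML)        # register the domain definition
    dom.create()                            # boot the VM
    print(dom.name(), "active:", dom.isActive() == 1)
    conn.close()

Because the definition is just text, it can be templated, versioned, and repeated, which is a big step up from racking and cabling a physical server.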

Yet the situation is not ideal, because in standard virtualized environments many things still need to be done by hand.

Customers will typically not be able to create new VMs on their own; they need to wait for the provider to do it for them. The infrastructure provider will first create storage (for example, a Ceph volume, SAN share, or iSCSI LUN), attach it to the VM, and then perform OS installation and basic configuration.

In other words, standard virtualization is not enough to fulfill either providers’ or their customers’ needs. Enter cloud computing!
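
As a small preview of what that self-service model looks like, here is a minimal sketch using the OpenStack SDK for Python; the cloud name, image, flavor, and network names are placeholders for whatever your deployment provides:

    # Minimal sketch of cloud self-service provisioning with the OpenStack SDK
    # for Python (pip install openstacksdk). The cloud, image, flavor, and
    # network names below are placeholders.
    import openstack

    conn = openstack.connect(cloud="mycloud")        # credentials from clouds.yaml

    image = conn.compute.find_image("ubuntu-16.04")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("private")

    server = conn.compute.create_server(
        name="demo-server",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)    # block until the VM is ACTIVE
    print(server.name, server.status)

The whole create-storage, attach, and install workflow described above collapses into an API call that the customer can make directly, without waiting for an operator.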

In Part 3 of this series, we’ll contrast what we’ve learned about conventional, un-automated infrastructure offerings with what happens in the cloud.

Read the other articles in this series: 

Essentials of OpenStack Administration Part 4: Cloud Design, Software-Defined Networking and Storage

Essentials of OpenStack Administration Part 5: OpenStack Releases and Use Cases

The Essentials of OpenStack Administration course teaches you everything you need to know to create and manage private and public clouds with OpenStack. Download a sample chapter today!

Linux Kernel 4.9 Is Here, and It’s the Largest Release Ever

Linus Torvalds released Linux kernel 4.9 on Sunday and christened it “Roaring Lionus” in honour of the anonymous barista who flubbed his name on his morning cup of coffee.

In terms of volume, this is probably the largest kernel release ever. More than 16,000 non-merge changesets went into the mainline repository over this roughly two-month development cycle, well over the 13,722 changesets for 3.15, which held the record until now.

The most active group of developers has been the team formed by Johan Hovold, Viresh Kumar, Greg Kroah-Hartman, and Alex Elder, who are working on the Greybus code. The most active employer by number of changesets was Linaro, with 1,876 changesets, mostly related, again, to Greybus.

Interestingly, the Raspberry Pi Foundation makes the list of the 20 most active employers by lines of code for the first time, with 12,816 lines contributed. However, that number is still far behind the two leaders, AMD and Red Hat, each of which contributed more than 100,000 lines.

The details

What the heck is this Greybus that is stirring up such a ruckus? You may have heard of Project Ara, a Google-backed phone project that would let users build their own device using a variety of modular hardware blocks. Given a basic frame, a user could add on a camera or two, a wide range of sensors, or screens with different resolutions, loudspeakers, and so forth, using a set of Lego-like blocks.

UniPro is the protocol developed back in the day by Nokia and Philips that allows the modules to interconnect with each other at the hardware level, and Greybus is the software driver layer that gives the Linux kernel support for UniPro-based modules. Although Google pulled the plug on Project Ara back in September, the idea of modules lives on in other phones, such as the Moto Z series and the LG G5. If the concept of modular phones catches on, the effort put into Greybus will not have been in vain.

In other news, as Linus remarks in his post, about two-thirds of the changes are drivers. Apart from the Greybus drivers, there is also the usual slew of new and patched GPU drivers. AMD came in first among employers by lines contributed, largely DRM driver code for its Southern Islands cards, and drivers for the Vulkan API continue to improve and support more software and hardware.

Drivers for Intel’s Skylake family of processors are also getting improvements. Sound on many newer HP, Dell, and Lenovo machines is now a lot less buggy, and the same goes for Intel’s GPUs.

Also new in 4.9

The number of ARM machines supported by the mainline kernel continues to grow. New additions are the Raspberry Pi Zero, the BeagleBoard-X15 rev B1, and LG’s Nexus 5 phone. That means no more custom kernel builds for these devices.

Kernel 4.9 also adds the new BBR congestion control algorithm. LWN says these bits of code “allow network protocols (usually TCP) to maximize the throughput of any given connection while simultaneously sharing the available bandwidth equitably with other users.” That means your shared network speed should go up. When we’re talking network connectivity, faster is always better, right?
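
On a 4.9 or newer kernel with BBR available, an application can even opt in per connection through the TCP_CONGESTION socket option. Here is a hedged little Python sketch; it is Linux-only, the tcp_bbr module must be available, and non-default algorithms may need to be listed in net.ipv4.tcp_allowed_congestion_control (or require elevated privileges):

    # Select BBR for a single TCP socket via the TCP_CONGESTION option.
    # Requires Linux with a 4.9+ kernel and the tcp_bbr module available.
    import socket

    # socket.TCP_CONGESTION exists in recent Python 3 releases on Linux; fall
    # back to the raw option number (13, from linux/tcp.h) if it is missing.
    TCP_CONGESTION = getattr(socket, "TCP_CONGESTION", 13)

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, b"bbr")
    # Read the setting back; the kernel pads the name with NUL bytes.
    algo = s.getsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, 16)
    print("congestion control:", algo.split(b"\x00", 1)[0].decode())
    s.close()

System-wide, the default algorithm is chosen with the net.ipv4.tcp_congestion_control sysctl.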

Finally, several commonly used file systems, including Btrfs, XFS, F2FS, and ext4, see performance and reliability improvements across the board. If you’re looking for even more details, Michael Larabel at Phoronix has an extensive breakdown of all that’s new in 4.9.

To improve your Linux skills, check out the Linux Security Fundamentals course from The Linux Foundation. 

Guide to the Open Cloud: The State of IaaS and PaaS

The Linux Foundation recently announced the release of its 2016 report “Guide to the Open Cloud: Current Trends and Open Source Projects.” This third annual report provides a comprehensive look at the state of open cloud computing. You can download the report now; one of the first things to notice is that it aggregates and analyzes research illustrating how trends in containers, microservices, and more are shaping cloud computing. From IaaS to virtualization to DevOps configuration management, it provides descriptions of, and links to, categorized projects central to today’s open cloud environment.

In a series of posts to appear here, we’ll call out many of these projects, by category, providing extra insight on how each category is evolving. In this post, inaugurating the series, let’s consider the key open Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) projects to know about.

Consider this, from researchers at Gartner: “Most organizations are already using a combination of cloud services from different cloud providers. While public cloud usage will continue to increase, the use of private cloud and hosted private cloud services is also expected to increase at least through 2017. The increased use of multiple public cloud providers, plus growth in various types of private cloud services, will create a multi-cloud environment in most enterprises and a need to coordinate cloud usage using hybrid scenarios.”

As open cloud solutions have proliferated under this hybrid model, many organizations are leveraging both IaaS and PaaS solutions. IaaS solutions remain popular because they provide an instant computing infrastructure, provisioned and managed online. Meanwhile, PaaS solutions let users develop, run, and manage applications without the hassle of building and maintaining the infrastructure typically required to run them.

As the open cloud matures and hybrid cloud deployments remain dominant, predictions about whether the IaaS model or PaaS model will “win” have fallen by the wayside. They have both won — hands down. Open PaaS solutions like Red Hat’s OpenShift are firmly entrenched, and IaaS solutions like OpenStack are ushering in massive transformation in technology stacks large and small.

IDC researchers predict that more than 80 percent of enterprise IT organizations will commit to hybrid cloud architectures by 2017. As these hybrid cloud architectures evolve, open IaaS and PaaS tools are flourishing.

The Guide to the Open Cloud 2016 includes a comprehensive look at the IaaS and PaaS solutions that you should know about. Here, directly from the report, are descriptions of these projects, by category, with links to their GitHub repositories:

Infrastructure as a Service (IaaS)

Apache CloudStack

Apache CloudStack, an Apache Software Foundation project, is software designed to deploy and manage large networks of virtual machines, as a highly available, highly scalable Infrastructure as a Service (IaaS) cloud computing platform. It can be used to offer public cloud services, to provide an on-premises (private) cloud offering, or as part of a hybrid cloud solution. Users can manage their cloud with a web interface, command-line tools, and/or a full-featured RESTful API.  See CloudStack on GitHub.

HPE Helion Eucalyptus

HPE Helion Eucalyptus is an open solution for building private clouds that are compatible with Amazon Web Services (AWS). It provides open source implementations of many Amazon Web Services to deploy AWS applications into a private cloud without changing tools, processes, or application code. See HPE Helion Eucalyptus on GitHub.

OpenNebula

OpenNebula is software to manage virtualized data centers for private, public, and hybrid IaaS clouds. Use OpenNebula to manage data center virtualization, consolidate servers, and integrate existing IT assets for computing, storage, and networking. Or provide a multi-tenant, cloud-like provisioning layer on top of an existing infrastructure management solution. See OpenNebula on GitHub.

OpenStack

OpenStack, an OpenStack Foundation project, is open source software for creating private and public clouds. The software controls large pools of compute, storage, and networking resources throughout a data center, and is managed through a dashboard or via the OpenStack API. OpenStack works with other enterprise and open source technologies, making it ideal for heterogeneous infrastructure. See OpenStack on GitHub.

Platform as a Service (PaaS)

Apache Stratos

Apache Stratos, an Apache Software Foundation project, is a highly extensible PaaS framework that helps run Apache Tomcat, PHP, and MySQL applications and can be extended to support many more environments on all major cloud infrastructures. For developers, Stratos provides a cloud-based environment for developing, testing, and running scalable applications. IT providers benefit from high utilization rates, automated resource management, and platform-wide insight including monitoring and billing. See Stratos on GitHub.

Cloud Foundry

Cloud Foundry, a Cloud Foundry Foundation project at The Linux Foundation, is an open source cloud application platform that provides a choice of clouds, developer frameworks, and application services. It supports applications built in virtually any programming language, run as either Linux containers or Windows-based applications, and deployed across many infrastructure types: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), OpenStack, VMware vSphere, VMware Photon Platform, IBM SoftLayer, and more. See Cloud Foundry on GitHub.

Deis Workflow

Deis Workflow is Engine Yard’s version 2.0 (beta) of the open source Deis PaaS; it makes it easy to deploy and manage applications on Kubernetes. It works in both public and private clouds, as well as on bare metal. See Deis on GitHub.

Flynn

Flynn is an open source PaaS for running applications in production. In its 1.0 version, it’s designed to run anything that can run on Linux, and includes built-in service discovery as well as Postgres, MySQL, and MongoDB databases. See Flynn on GitHub.

Heroku

Heroku is a cloud platform that lets companies build, deliver, monitor and scale applications. It’s based on a managed container system, with integrated data services and a powerful ecosystem, for deploying and running modern apps in multiple languages: Node, Ruby, Java, Scala, PHP, and more. See Heroku on GitHub.

OpenShift

OpenShift is Red Hat’s PaaS that allows developers to quickly develop, host, and scale applications in a cloud environment. Built on top of Docker containers and the Kubernetes container cluster manager, OpenShift offers online, on-premises, and open source project options. OpenShift Origin is the open source upstream of OpenShift. See OpenShift on GitHub.

Learn more about trends in open source cloud computing and see the full list of the top open source cloud computing projects. Download The Linux Foundation’s Guide to the Open Cloud report today!

5 Simple Tips for Building Your First Docker Image with Java

If you’re an enterprise developer who’s been eagerly anticipating the move to container technology within your organization, you have more than a passing interest in learning the basic concepts behind Docker and the commonly used orchestration frameworks around them. In this article, I’d like to expand on these basic concepts and provide some simple yet practical tips for building your first Docker images using the Java programming language.

Choose a small base JDK image size

Docker images are built by reading instructions from a Dockerfile. You can find basic instructions for building your first Docker image here.

A common base image for including the JDK in your application is the default openjdk:latest image, which is based on the Debian operating system and weighs in at 640.9 MB. The JDK version can be seen by running the image.
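
For a quick way to compare candidate base images (and to peek at the JDK version), here is a hedged sketch using the Docker SDK for Python; the alpine tag below is an assumption, so check Docker Hub for the variants available for your JDK version:

    # Quick size comparison of candidate JDK base images, using the Docker SDK
    # for Python (pip install docker). The alpine tag is an assumption; check
    # Docker Hub for the variants that exist for your JDK version.
    import docker

    client = docker.from_env()

    for repo, tag in (("openjdk", "latest"), ("openjdk", "8-jdk-alpine")):
        image = client.images.pull(repo, tag=tag)
        print(f"{repo}:{tag} -> {image.attrs['Size'] / 1024**2:.1f} MB")

    # The JDK version inside an image can be checked by running it:
    out = client.containers.run(
        "openjdk:latest", ["java", "-version"],
        stderr=True, remove=True,            # `java -version` prints to stderr
    )
    print(out.decode())

Alpine-based variants are typically far smaller than the Debian-based default, which pays off every time the image is pulled or pushed.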

Read more at O’Reilly

5 Enterprise-Related Things You Can Do with Blockchain Technology Today

Diamonds. Bitcoin. Pork. If you think you’ve spotted the odd one out, think again: All three are things you can track using blockchain technologies today.

Blockchains are distributed, tamper-proof, public ledgers of transactions, brought to public attention by the cryptocurrency bitcoin, which is based on what is still the most widespread blockchain. But blockchains are being used for a whole lot more than making pseudonymous payments outside the traditional banking system.

Because blockchains are distributed, an industry or a marketplace can use them without the risk of a single point of failure. And because they can’t be modified, there is no question of whether the record keeper can be trusted. 
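
To see why tampering is detectable, here is a toy Python sketch of the hash-chaining idea; a real blockchain adds distributed consensus, digital signatures, and proof-of-work or proof-of-stake on top of this, and the transactions below are invented:

    # Toy sketch of hash chaining, the core of a tamper-evident ledger.
    import hashlib
    import json

    def block_hash(block: dict) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append_block(chain: list, transactions: list) -> None:
        prev = block_hash(chain[-1]) if chain else "0" * 64
        chain.append({"prev_hash": prev, "transactions": transactions})

    def verify(chain: list) -> bool:
        """Recompute every link; any edited block breaks the chain after it."""
        return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
                   for i in range(1, len(chain)))

    ledger = []
    append_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
    append_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
    print(verify(ledger))                          # True
    ledger[0]["transactions"][0]["amount"] = 500   # tamper with an old record
    print(verify(ledger))                          # False: tampering is detectable

Editing an old record changes its hash, which breaks every link after it, so a forger would have to rewrite the rest of the chain and convince the rest of the network to accept it.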

Read more at PCWorld

CoreOS Updates its Tectonic Container Platform to Make Updates Easy

CoreOS today launched an update to its Kubernetes-based Tectonic container management service that makes it easy for its users to enable automatic updates of both Kubernetes itself and the containers it manages.

Until now, it was surprisingly hard to keep a Kubernetes cluster updated without downtime. With this new service, which CoreOS calls “self-driving infrastructure,” users can choose to have Tectonic manage these updates for them without having to take their applications down. CoreOS previously enabled a similar functionality for its operating system (which it now calls Container Linux in a nod to how people are actually using it) and it’s now bringing the same features to Tectonic and the applications that run on top of it.

Read more at TechCrunch

How Getting Your Project in the CNCF Just Got Easier

Managing and making sense of these new, cloud-native architectures is something that the Cloud Native Computing Foundation (CNCF) aims to help make easier for developers worldwide. On today’s episode of The New Stack Makers podcast, we talk with CNCF Executive Director Dan Kohn and CNCF Chief Operating Officer Chris Aniszczyk about the direction of the CNCF and cloud-native computing as a whole. The interview was recorded at KubeCon/CloudNativeCon, held last month in Seattle.

CNCF has introduced … a new category of earlier-stage project for developers to submit to, called the ‘inception stage.’ “What it does is it allows less mature projects to join CNCF at an earlier stage, but unlike the incubation process where if they’re in they’re basically in, the inception stage requires a new TOC vote every 12 months.”

Read more at The New Stack

AWS Sets Cloud Networking Example For IT Organizations

Industry standard servers have played a big role in reducing the cost of networking across the enterprise. But there is a fair amount of nuance that needs to be appreciated to understand how to achieve that goal. One of the best examples is the way Amazon Web Services offloads network services from industry standard servers.

AWS has the largest amount of x86 server infrastructure on the planet. But even with all that infrastructure, AWS spent several million dollars developing its own network infrastructure to offload networking functions from those servers. At the recent AWS re:Invent 2016 conference, James Hamilton, vice president and distinguished engineer at AWS, described how AWS is employing custom 25G routers and 10G network interface controller (NIC) cards based on commodity processors to scale networking services in the cloud.

Read more at SDxCentral

Help Move the Networking Industry Forward at Open Networking Summit 2017

I am honored to join The Linux Foundation this month as General Manager of Open Source Networking & Orchestration. As I look at the last three decades, we (networking geeks) have always stepped up to stay ahead of major technology disruptions. Now we are at the next big revolution: open networking, fueled by open source communities.

Through open source projects such as The Linux Foundation’s OpenDaylight, OPNFV, OPEN-O, FD.io, Open vSwitch, OpenSwitch, IO Visor, ON.Lab, CORD and ONOS, hundreds of developers, DevOps professionals and business executives from around the world are working together to undertake a massive transition and to change an industry.

Such rapid transformation is exhilarating. However, if you are an enterprise, carrier, cloud provider, or creator of the networking ecosystem, it can also be mind-boggling. The choices and options to provide services to your customers in this new open source ecosystem are limitless and leave many questions.

  1. How do we harmonize all the open initiatives across the entire stack and industry?

  2. How can I participate in the ‘Open Revolution’, saving potentially millions of dollars and providing a head-start to my core competency?

  3. How has networking had a profound impact on adjacent “hot” industries like Cloud, Big Data, IoT, Analytics, Security, Intelligence, and others?

Open Networking Summit (ONS) 2017 is the place to find the answers to these questions, and more. Developing a formal strategy around the next wave of open networking will be an integral theme at next year’s event.

ONS 2017 will be better than ever! We have taken your feedback and set the stage for the largest, most comprehensive, and most innovative Networking and Orchestration event of 2017, held in Silicon Valley April 3-6, 2017, at the Santa Clara Convention Center. This is the only industry event where you can:

  • Hear from industry visionaries and leaders on the future of Networking beyond SDN/NFV

  • Attend deep technical tracks on topics that are here today, tomorrow and on the horizon

  • Learn from the use cases of your peers as consumption of Open Source Networking is the “new norm” and mandated by most Enterprise CIOs, Carrier CTOs and Cloud Executives.

Join the leading Enterprises, Carriers and Cloud Service providers in moving the Networking industry forward.  Submit a proposal to speak in one of our five new tracks for 2017 and share your vision and expertise. The deadline for submissions is Jan. 21, 2017.  

Register now with the discount code, LINUXRD5, for 5% off the attendee registration price. And don’t miss the chance to save over $850 with early-bird registration through Feb. 19.

Arpit Joshipura is GM, Networking & Orchestration at The Linux Foundation. Joshipura has served as CMO/VP in startups and larger enterprises such as Prevoty, Dell/Force10, Ericsson/Redback, ONI/CIENA and BNR/Nortel leading strategy, product management, marketing, engineering and technology standards functions.