Shining a Light on the Enterprise Appeal of Multi-Cloud Deployments

Rather than settle for the services of a single cloud provider, enterprises are expected, over time, to source off-premise capacity from a variety of suppliers.

According to market watcher Gartner, the trend is being driven by the fact that cloud customers are increasingly aware of the merits and drawbacks of individual providers, which enables them to make informed decisions about where best to run specific workloads.

Mark D’Cunha, product manager at Pivotal, told Computer Weekly at the Cloud Foundry Summit in Frankfurt that parts of the industry have been surprised by the speed at which enterprises are looking to adopt a multi-cloud approach to IT consumption.  

Read more at ComputerWeekly

Coaches, Managers, Collaboration, and Agile: Part III

I started this series writing about the need for coaches in Coaches, Managers, Collaboration, and Agile, Part 1. I continued in Coaches, Managers, Collaboration, and Agile, Part 2, talking about the changed role of managers in Agile. In this part, let me address the role of senior managers in Agile and how coaches might help.

For years, we have organized our people into silos. That meant we had middle managers who (with any luck) understood the function (testing or development) and/or the problem domain (think about the major chunks of your product such as Search, Admin, Diagnostics, the feature sets, etc.). I often saw technical organizations structured into product areas with directors at the top, plus some functional directors, such as those responsible for test and quality and/or performance.

Read more at DZone

How Continuous Integration Can Help You Keep Pace With the Linux Kernel

Written by Tomeu Vizoso, Principal Software Engineer at Collabora.

Almost all of Collabora’s customers use the Linux kernel on their products. Often they will use the exact code as delivered by the SBC vendors and we’ll work with them in other parts of their software stack. But it’s becoming increasingly common for our customers to adapt the kernel sources to the specific needs of their particular products.

A very big problem most of them have is that the kernel version they based their work on no longer gets security updates, because it's already several years old. And the reason companies ship such old kernels is that their trees have been so heavily modified compared to the upstream versions that rebasing them on top of newer mainline releases is too expensive to budget and plan for.
 
To avoid that, we always recommend that our customers stay close to their upstreams, which implies rebasing often on top of new releases (typically long-term support, or LTS, releases). For the budgeting of that work to become possible, the size of the delta between mainline and downstream sources needs to be manageable, which is why we recommend contributing back any changes that aren't strictly specific to their products.
 
But even for those few companies that already have processes in place for upstreaming their changes and are rebasing regularly on top of new LTS releases, keeping up with mainline can be a substantial disruption to their production schedules. This is partly because the new mainline release will contain new bugs, and partly because the downstream changes will pick up new bugs as they are reapplied on top of the new version.
 
Those companies that are already keeping close to their upstreams typically have advanced QA infrastructure that will detect those bugs long before production, but a long stabilization phase after every rebase can significantly slow product development.
 
To improve this situation and encourage more companies to keep their efforts close to upstream, we at Collabora have been working for a few years now on continuous integration of FOSS components across a diverse array of hardware. The initial work was sponsored by Bosch for one of their automotive projects, and since the start of 2016 Google has been sponsoring work on continuous integration of the mainline kernel.
 
One of the major efforts to continuously integrate the mainline Linux kernel codebase is kernelci.org, which builds several configurations of different trees and submits boot jobs to several labs around the world, collating the results. It is already a great help in detecting, at a very early stage, any changes that either break the builds or prevent a specific piece of hardware from completing the boot stage.
 
Though kernelci.org can easily detect when an update to a source code repository has introduced a bug, such updates can contain several dozen new commits, and without knowing which specific commit introduced the bug, we cannot identify a culprit to notify of the problem. This means that either someone has to monitor the dashboard for problems, or email notifications are sent to the owners of the repositories, who then have to manually look for suspicious commits before getting in contact with their authors.
 
To address this limitation, Google has asked us to look into improving the existing code for automatic bisection, so it can run as soon as a regression is detected and the likely culprits can be notified immediately, without any manual intervention.
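For readers unfamiliar with how such automation tends to work, the usual building block is `git bisect run`, which binary-searches a commit range using a pass/fail script. Below is a minimal sketch driven from Python; the tree path, tags, and test script name are placeholders of our own, not part of kernelci.org's actual code:

```python
import subprocess

def git(*args, tree="/path/to/linux"):
    """Run a git command in the kernel tree and return its exit code."""
    return subprocess.run(("git", *args), cwd=tree).returncode

# `git bisect run` binary-searches the good..bad range, calling the script
# at every step: exit 0 marks a commit good, 1-127 (except 125) marks it
# bad, and 125 tells git to skip the commit (e.g. it doesn't build).
git("bisect", "start")
git("bisect", "bad", "HEAD")                      # regression seen at the tip
git("bisect", "good", "v4.8")                     # last release known to boot
git("bisect", "run", "./build-and-boot-test.sh")  # hypothetical CI script
git("bisect", "reset")                            # return to the original HEAD
```

Once the first bad commit is identified this way, notifying its author is a simple lookup in the commit metadata, which is what makes fully automatic culprit notification feasible.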
 
Another area in which kernelci.org is currently lacking is test coverage. Build and boot regressions are very annoying for developers because they negatively impact everybody working with the affected configurations and hardware, but regressions in peripheral support or other subsystems that aren't critical during boot can also make rebases much costlier.
 
At Collabora we have had a strong interest in having the DRM subsystem under continuous integration, and some time ago we started an R&D project to make the test suite in IGT generically useful for all DRM drivers. IGT started out i915-specific, but as most of the tests exercise the generic DRM ABI, they could just as well test other drivers with a moderate amount of effort. Early in 2016 Google started sponsoring this work, and as of today submitters of new drivers are using it to validate their code.
 
Another related effort has been the addition to DRM of a generic ABI for retrieving CRCs of frames from different components in the graphics pipeline, so two frames can be compared when we know that they should match. Yet another is adding support to IGT for the Chamelium board, which can simulate several display connections and hotplug events.
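As a rough illustration of how such a CRC interface can be consumed from userspace, here is a sketch that reads frame CRCs through DRM's debugfs files; the card/CRTC numbers and exact line format depend on your kernel and driver, so treat those details as assumptions:

```python
# Sketch: capture per-frame CRCs through DRM's debugfs CRC interface and
# compare two frames that should be identical. Assumes a single-GPU system
# (card 0, CRTC 0) and requires root to access debugfs.

CRC_DIR = "/sys/kernel/debug/dri/0/crtc-0/crc"

def capture_crcs(n_frames):
    """Arm CRC generation, then collect n_frames 'frame crc...' entries."""
    with open(f"{CRC_DIR}/control", "w") as ctl:
        ctl.write("auto")  # let the driver pick its default pipeline tap point
    frames = []
    with open(f"{CRC_DIR}/data") as data:
        for _ in range(n_frames):
            frame, *crcs = data.readline().split()
            frames.append((frame, crcs))
    return frames

if __name__ == "__main__":
    a, b = capture_crcs(2)
    print("match" if a[1] == b[1] else "mismatch", a, b)
```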
 
A side-effect of having continuous integration of changes in mainline is that when downstreams are sending back changes to reduce their delta, the risk of introducing regressions is much smaller and their contributions can be accepted faster and with less effort.
 
We believe that improved QA of FOSS components will expand the base of companies that can benefit from involvement in development upstream and are very excited by the changes that this will bring to the industry.
 
If you are an engineer who cares about QA and FOSS, and would like to work with us on projects such as kernelci.org, LAVA, IGT and Chamelium, get in touch!

Provision Bare Metal Servers for OpenStack with Ironic

The day-to-day life of a developer can change drastically from one moment to the next, particularly if one is working on open source projects. Intel Cloud Software Senior Developer Ruby Loo spends her days as a Core member of OpenStack Ironic, the software that allows OpenStack to provision bare metal servers.

While Loo is employed by Intel, the bulk of her daily interactions are based in the upstream open source community. As patches come in, Loo reviews them alongside other Core members, ensuring that OpenStack Ironic's feature priorities are met. In today's episode of The New Stack Makers, recorded at OpenStack Summit Barcelona, Loo sat down with TNS Founder Alex Williams to explore her background, the daily tasks of an OpenStack Core project member and active open source community participant, and what's next for OpenStack Ironic.

Read more at The New Stack

China Adopts Cybersecurity Law in Face of Overseas Opposition

China adopted a controversial cybersecurity law on Monday to counter what Beijing says are growing threats such as hacking and terrorism, but the law triggered concerns among foreign businesses and rights groups.

The legislation, passed by China’s largely rubber-stamp parliament and set to take effect in June 2017, is an “objective need” of China as a major internet power, a parliament official said.

Overseas critics of the law say it threatens to shut foreign technology companies out of various sectors deemed “critical”, and includes contentious requirements for security reviews and for data to be stored on servers in China.

Read more at Reuters

Top 3 Questions Job Seekers Ask in Open Source

As a recruiter working in the open source world, I love that I interact every day with some of the smartest people around. I get to hear about the cool projects they’re working on and what they think about the industry, and when they are ready for a new challenge. I get to connect them to companies that are quietly changing the world.

But one thing I enjoy most about working with them is their curiosity: they ask questions, and in my conversations, I hear a lot of inquiries about the job search and application process. 

Read more at OpenSource.com

How to Recover a Deleted File in Linux

Has this ever happened to you? You realize that you have mistakenly deleted a file – either with the Del key, or using rm on the command line.

In the first case, you can always go to the Trash, search for the file, and restore it to its original location. But what about the second case? As I am sure you know, the Linux command line does not send removed files anywhere – it REMOVES them. Boom. They're gone.
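One trick worth knowing before reaching for low-level forensics: if a running process still holds the deleted file open, its data remains reachable under /proc and can simply be copied back out. A minimal sketch (the search logic is generic; the recovery destination is our own choice):

```python
import os
import shutil

def find_deleted_open_files():
    """Yield (pid, fd, original_path) for open files whose name was unlinked."""
    for pid in filter(str.isdigit, os.listdir("/proc")):
        fd_dir = f"/proc/{pid}/fd"
        try:
            fds = os.listdir(fd_dir)
        except OSError:  # process exited, or permission denied
            continue
        for fd in fds:
            try:
                target = os.readlink(f"{fd_dir}/{fd}")
            except OSError:
                continue
            if target.endswith(" (deleted)"):
                yield pid, fd, target

# Copy the still-open data back out of /proc before the process exits.
for pid, fd, path in find_deleted_open_files():
    print(f"recovering {path} held by pid {pid}")
    shutil.copyfile(f"/proc/{pid}/fd/{fd}", f"/tmp/recovered-{pid}-{fd}")
```

Run it as root to see descriptors belonging to other users' processes. Once the last process holding the file exits, this window closes and filesystem-level recovery tools are the only option.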

Read complete article at Tecmint

Get Trained and Certified on Kubernetes with The Linux Foundation and CNCF

Companies in diverse industries are increasingly building applications designed to run in the cloud at a massive, distributed scale. That means they are also seeking talent with experience deploying and managing such cloud native applications using containers in microservices architectures.

Kubernetes has quickly become the most popular container orchestration tool, according to The New Stack, and is thus a hot new area for career development as demand for IT practitioners skilled in Kubernetes surges. Apprenda, which runs its PaaS on top of Kubernetes, reported a spike in Kubernetes job postings this summer, and the need is only growing.

To meet this demand, The Linux Foundation and the Cloud Native Computing Foundation today announced they have partnered to provide training and certification for Kubernetes.  

The Linux Foundation will offer training through a free, massive open online course (MOOC) on edX as well as a self-paced, online course. The MOOC will cover the introductory concepts and skills involved, while the online course will teach the more advanced skills needed to create and configure a real-world working Kubernetes cluster.

The training course will be available soon, and the MOOC and certification program are expected in 2017. Pre-registration for the course is open now at the discounted price of $99 (regularly $199) for a limited time. Sign up here to pre-register for the course.

The course curriculum will also be open source and available on GitHub, Dan Kohn, CNCF Executive Director, said in his keynote today at CloudNativeCon in Seattle.

Certification will be offered by Kubernetes Managed Service Providers (KMSP) trained and certified by the CNCF. Nine companies with experience helping enterprises successfully adopt Kubernetes are committing engineers to participate in a CNCF working group that will develop the certification requirements. These early supporters include Apprenda, Canonical, Cisco, Container Solutions, CoreOS, Deis, Huawei, LiveWyer, and Samsung SDS. The companies are also interested in becoming certified KMSPs once the program is available next year.

Kubernetes is a software platform that makes it easier for developers to run containerized applications across diverse cloud infrastructures — from public cloud providers, to on-premise clouds and bare metal. Core functions include scheduling, service discovery, remote storage, autoscaling, and load balancing.
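To give a flavor of what working with a cluster looks like in practice, here is a minimal sketch using the official Python client (installed with `pip install kubernetes`); it assumes a reachable cluster and a standard kubeconfig:

```python
from kubernetes import client, config

# Load credentials from ~/.kube/config (use load_incluster_config() when
# running inside a pod instead).
config.load_kube_config()
v1 = client.CoreV1Api()

# List every pod the scheduler has placed, across all namespaces.
for pod in v1.list_pod_for_all_namespaces().items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```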

Google originally engineered the software to manage containers on its Borg infrastructure, but open sourced the project in 2014 and donated it earlier this year to the Cloud Native Computing Foundation at The Linux Foundation. It is now one of the most active open source projects on GitHub and has been one of the fastest growing projects of all time with a diverse community of contributors.

“Kubernetes has the opportunity to become the new cloud platform,” said Sam Ghods, a co-founder and Services Architect at Box, in his keynote at CloudNativeCon. “We have the opportunity to do what AWS did for infrastructure but this time in an open, universal, community-driven way.”

With more than 170 user groups worldwide, it’s already easy to hire people who are experts in Kubernetes, said Chen Goldberg, director of engineering for the Container Engine and Kubernetes team at Google, in her keynote at CloudNativeCon.

The training and certification from CNCF and The Linux Foundation will go even further to help develop the pool of Kubernetes talent worldwide.

Pre-register now for the online, self-paced Kubernetes Fundamentals course from The Linux Foundation and pay only $99 ($100 off registration)!

OpenSDS for Industry-Wide Software Defined Storage Collaboration

Software defined storage (SDS) brings cloud benefits to storage, but the challenge is that it must be highly reliable – you can't lose a single byte of data. Storage can be difficult to manage in the cloud, where many frameworks and technologies work together in virtualized/containerized environments.

At LinuxCon Europe, Cameron Bahar, SVP and Global CTO of Huawei Storage, launched the project proposal for a new open source initiative called OpenSDS:

“What we’re proposing effectively is this virtualization layer that effectively does discovery, provisioning, management, and orchestration of advanced storage services. It allows the open source vendors to plug in their OpenSDS adapters to manage storage. Ceph, Gluster, and ZFS and what have you can plug in through that stack. It allows the vendors from EMC, Huawei, Intel, HP to plug in their adapters. … We define the interfaces once, you make them general enough, and then we’re able to update, both in an open source way and in a proprietary way, these vendor APIs.”

In this keynote, Bahar invites vendors, customers and other open source collaborators to work with them to make this mission a reality: “We want to develop an open source SDS controller platform that allows us to manage both virtualized and containerized and bare metal environments, and facilitate collaborations. Adherence to standards, leverage existing open source, and have a customer and vendor community that comes together to solve the problem together.”

Steven Tan, Chief Architect at Huawei, talked about some of the technical details for how OpenSDS could benefit people using Kubernetes and OpenStack. He outlined three key benefits: “The first one is to be able to plug in to be able to provide a seamless plugin for any framework. Second, to be able to provide an end-to-end storage management with a single solution. Third, to support a broad set of storage including closed storage.”
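Since OpenSDS had not yet published any code, the interface itself was still an open question; but the adapter pattern Bahar and Tan describe might look something like the following purely hypothetical sketch (all class and method names are ours), with the controller talking only to a generic interface that each backend plugs into:

```python
from abc import ABC, abstractmethod

class StorageAdapter(ABC):
    """Hypothetical plugin interface that each backend (Ceph, Gluster, a
    vendor array) would implement, keeping the controller backend-agnostic."""

    @abstractmethod
    def discover(self) -> list:
        """Return the storage pools this backend exposes."""

    @abstractmethod
    def provision(self, name: str, size_gb: int) -> str:
        """Create a volume and return an identifier for it."""

class CephAdapter(StorageAdapter):
    """Illustrative only: a real plugin would call librbd or shell out to rbd."""

    def discover(self):
        return ["rbd-pool"]

    def provision(self, name, size_gb):
        return f"rbd-pool/{name}"  # stand-in for an actual `rbd create`

def provision_volume(adapter: StorageAdapter, name: str, size_gb: int) -> str:
    # The controller only ever talks to the generic interface, so vendor
    # and open source backends are interchangeable, as the proposal argues.
    return adapter.provision(name, size_gb)

print(provision_volume(CephAdapter(), "vol0", 10))
```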

Tan went on to talk more about how the project will be managed as a Linux Foundation project with light governance and a technical steering committee providing technical oversight. The source code will be hosted on GitHub with Gerrit code reviews, and there will be regular IRC meetings as well as meetups.

They were joined on stage by a special guest, Reddy Chagam, Chief Architect of SDS at Intel, who talked about Intel’s involvement in the project.  

While the project hasn’t launched quite yet, stay tuned for more information about how to participate in OpenSDS. For now, you can watch the entire keynote below to learn more about the project!

LinuxCon Europe videos

Using Apache Hadoop to Turn Big Data Into Insights

The Apache Hadoop framework for distributed processing of large data sets is supported and used by a wide-ranging community — including businesses, governments, academia, and technology vendors. According to John Mertic, Director of ODPi and the Open Mainframe Project at The Linux Foundation, Apache Hadoop provides these diverse users with a solid base and allows them to add on different pieces depending on what they want to accomplish.

John Mertic, Director, ODPi and Open Mainframe Project
As a preview to Mertic’s talk at Apache: Big Data Europe in Seville, Spain, we spoke with him about some of the challenges facing the project and its goals for growth and development.

Apache Hadoop has a large and diverse community and user base. What are some of the various ways the project is being used for business and how can the community meet those needs?

If you think of a use case where a business needs to answer a question with data, the chances that they are using Apache Hadoop are fairly high. The platform is evolving to become the go-to strategy for working with data in a business. Hadoop’s ability to turn data into business insights speaks to the flexibility and depth of both Hadoop and the Big Data ecosystem as a whole.

The Big Data community can help to increase the adoption of Apache Hadoop through consistent encouragement of interoperability and standardization across Hadoop offerings. These efforts will not only help to mitigate the risks associated with implementing such differing platforms, but also streamline new development, promote open source architectures, and eliminate confusion over functionality.

What is the most common misconception about Apache Hadoop?

The most common misconception about Apache Hadoop is that it is just a project of The Apache Software Foundation, one containing only YARN, MapReduce, and HDFS. In reality, as it's brought to market by platform providers like Hortonworks, IBM, Cloudera, or MapR, Hadoop can be equipped with 15-20 additional projects that vary across platform vendors, like Hive, Ambari, HCFS, etc. To use an analogy, Apache Hadoop is like Mr. Potato Head: you start with a solid base and add different pieces depending on what you are trying to accomplish. What an end user thinks of as Apache Hadoop is thus often much more than the core project itself, which can make it seem quite amorphous.
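As a reminder of how small the core pieces are once a platform is in place, here is the classic word count written for Hadoop Streaming, which lets ordinary scripts act as the mapper and reducer; the jar and HDFS paths in the comment are placeholders for whatever your cluster provides:

```python
#!/usr/bin/env python3
# wordcount.py -- run under Hadoop Streaming, e.g.:
#   hadoop jar hadoop-streaming.jar \
#     -input /data/in -output /data/out \
#     -mapper "wordcount.py map" -reducer "wordcount.py reduce"
import sys

def mapper():
    # Emit "word<TAB>1" per token; the framework sorts mapper output by key.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Input arrives grouped by word, so a running total per key suffices.
    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rstrip("\n").split("\t")
        if word != current and current is not None:
            print(f"{current}\t{count}")
            count = 0
        current = word
        count += int(n)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```

Everything else in a vendor distribution (Hive, Ambari, and the rest) layers on top of this simple contract, which is why the "solid base plus add-on pieces" analogy holds.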

What are its strengths, and what value does it bring to users?

The Hadoop ecosystem enables a multitude of strategies for dealing with and capitalizing on data in any enterprise environment. The breadth and depth of the evolving platform now enables businesses to consider this growing ecosystem as part of their strategy for data management.

Can you describe some of the current challenges facing the project?

There certainly are compatibility gaps with Apache Hadoop and, while technologists are tackling some of these by creating innovative new projects, I think having a tighter feedback loop of real-life usage from businesses – to help the technologists closest to the project understand the challenges and opportunities – will be crucial to increasing adoption. Getting those use cases directly from users to the project can help solidify and mature these projects quickly.

The effects of the broad ecosystem – most commonly end-user confusion and mismatched enterprise software expectations – appear when end users come to Hadoop from the mature world of enterprise data warehouses expecting the same stability and not finding it in this newer ecosystem.

What are the project’s goals and strategies for growth?

ODPi’s goals for the Big Data community at-large are to solve end-user challenges more directly, remove the investment risks for legacy companies considering a move to Hadoop through universal standardization, and connect the technology more directly to business outcomes for potential enterprise users.

Attending Apache: Big Data Europe? Join Apache project members and speakers at the ODPi Community Lounge!