In light of the recent Dirty Cow exploit, described by experts as the most serious Linux privilege-escalation bug ever, CloudLinux has decided to accelerate its prior plans and offer KernelCare free to nonprofit organizations so that they can protect themselves from critical vulnerabilities such as Dirty Cow (CVE-2016-5195).
KernelCare provides Linux kernel security updates without the need to reboot servers. Once installed, KernelCare brings the kernel up to date with all security patches instantly, without a reboot. It supports the most popular Linux distributions and installs in minutes with a single command.
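As a rough sketch of that workflow (the installer URL and `kcarectl` flags below follow KernelCare's documented usage, but should be verified against the current docs):

```shell
# Install the KernelCare agent with a single command (no reboot needed)
curl -s -L https://kernelcare.com/installer | bash

# Apply all available kernel patches immediately
kcarectl --update

# Show the effective (patched) kernel version the server is running
kcarectl --info
```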
Since most nonprofit organizations have limited IT resources and are unable to consistently update kernels, KernelCare can help tremendously by bringing kernels up to date with all security patches and keeping them secure going forward.
KernelCare delivers a super-fast release of security patches for new vulnerabilities, like the recent Dirty Cow privilege escalation. It helps keep Linux secure and stable, and it is now also free for nonprofit organizations. Regular KernelCare pricing is under $3 per server per month, but if you are a nonprofit organization and would like to use KernelCare out of the box, you can request a complimentary unlimited license by writing to nonprofit [ at ] kernelcare.com.
(As originally published on Softpedia, November 3, 2016)
Today, November 3, 2016, Collabora informs us about the contributions its multimedia team made to the release of the powerful, free, open-source, and cross-platform GStreamer 1.10 multimedia framework.
We reported the other day on the release of GStreamer 1.10, a major update that has been in development for the past seven months. During that time, various developers made contributions large and small, but it looks like Collabora’s developers contributed a great deal of work to make GStreamer a lot better.
“Our contributions had two main targets: improve GStreamer’s overall reliability and improve support for hardware accelerated plugins. We’ve also contributed a number of improvements that we’ve done in relation to the work we do with our clients,” said Olivier Crête, Multimedia Domain Lead at Collabora, in a blog announcement.
Here are Collabora’s major contributions to GStreamer 1.10
Among the major contributions Collabora’s developers added to GStreamer 1.10, we can mention a GstTracer plugin for tracing memory leaks in GStreamer plugins and apps, which helped them address many leaks; support for ALSA devices with multiple audio channels, common in industrial environments; and memory leak fixes in the new decodebin3/playbin3 code.
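In practice, the new leaks tracer is enabled through environment variables; a minimal sketch (the pipeline shown is just an example):

```shell
# Enable the "leaks" tracer and raise the tracer log level, then run
# any pipeline; leaked GStreamer objects are reported when it exits.
GST_TRACERS="leaks" GST_DEBUG="GST_TRACER:7" \
  gst-launch-1.0 videotestsrc num-buffers=10 ! fakesink
```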
There’s also Acoustic Echo Cancellation (AEC) support, which might come in handy if you have a microphone that captures the speaker’s output during phone calls; support for multiple threads in the libvpx decoder (for VP8 and VP9 streams) on multi-core systems; various improvements to the V4L2 (Video4Linux) elements; and support for the video meta, which allows for zero-copy operations.
Furthermore, Collabora’s developers enabled GObject property notification for name changes of GstObject, cleaned up the rfbsrc element, added Wayland support for the new wl_viewporter extension to allow video cropping and scaling, improved both the AAC parser and the Ogg Vorbis elements, fixed bugs in the fdkaac elements and gst-rtsp-server, and added Enhanced AC-3 support to the MPEG Transport Stream demultiplexer.
And it looks like their work won’t stop here, as Olivier Crête notes on a second blog post that “We’re already working on new improvements for the next major GStreamer version, in particular, Nicolas is working hard to have perfectly controlled latency in waylandsink to have guaranteed A/V sync under 15ms and automatic negotiation of dmabuf between the Wayland, vaapi and OpenGL plugins.”
All the above have been contributed by a total of seven Collabora developers, namely Guillaume Desmottes, Nicolas Dufresne, Vincent Penquer’ch, Xavier Claessens, Wonchul Lee, Thibault Saunier, and Olivier Crête. For more details about these new GStreamer improvements, also check out the links above. In the meantime, you can download GStreamer 1.10 right now via the Softpedia website.
Open source software is increasingly becoming available on the mainframe. MongoDB is among the most popular of several programs supporting Linux for mainframe. Yes, the mainframe. Surprisingly to some, mainframe computing is still in heavy use in large organizations. Indeed, 92 of the top 100 banks still run critical data on the mainframe, as do many top retailers, airlines and government organizations.
But that’s not to say that mainframe computing has remained the same over all these years. Earlier, mainframes primarily ran IBM’s own z/OS operating system with databases such as DB2 and IMS, along with a smattering of other vendors’ products, such as CA’s IDMS and Datacom offerings. Over the past several years, however, there has been a mainstream shift to Linux on the mainframe, and that trend is continuing.
Initially, cost was driving this shift but it wasn’t long before flexibility and a strong community became equally compelling to the Fortune 500 set and academic organizations as well.
And so it is that mainframe computing is not only still relevant, but thriving, particularly with Linux. It’s also a lucrative career option for those with the open source chops.
The shift from RDBMS to open source apps for mainframe
Before the shift to Linux, mainframe users turned to the traditional RDBMS offerings such as Oracle and DB2. After the shift, users looked increasingly to open source apps, such as MongoDB.
Among the many fans and supporters of open source on mainframes is the Open Mainframe Project, a Linux Foundation effort aimed at increasing deployment and use of the Linux OS in mainframe computing. Members are eagerly embracing the shift to Linux and open source apps.
Member organizations include ADP, SUSE, CA, Marist College, Velocity Software, RSM Partners, and IBM, all of which see open source as vital to their success. In turn, members are working through the Open Mainframe Project to help build — and to contribute to — a strong and vibrant community working to advance open source in mainframe environments.
Why MongoDB specifically?
MongoDB made the move to support Linux running on the mainframe in 2013. The one-two-three punch of MongoDB’s innovative features, the impressive performance of Linux on the mainframe, and the platform’s muscle in scalability delivered a knockout: new levels of availability, security, speed, scale, and flexibility.
It also helped that MongoDB’s NoSQL technology ditches the overhead of object-relational mapping. That unique setup allows developers to rapidly create and deploy modern applications since there’s no need to define a data schema first and struggle with its restrictions later.
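That schema flexibility is visible directly in the mongo shell; a minimal sketch, with illustrative database, collection, and field names:

```shell
# No schema defined up front: just insert documents, each of which
# may have a different shape, then query them back.
mongo --quiet localhost/inventory --eval '
  db.parts.insert({sku: "Z-100", qty: 25});
  db.parts.insert({sku: "Z-200", qty: 10, tags: ["mainframe", "spare"]});
  printjson(db.parts.findOne({sku: "Z-200"}));
'
```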
In general, MongoDB is the heavy favorite for projects where traditional RDBMS options are too costly, or where flexibility of the data model is a critical consideration. MongoDB on mainframe systems is popular for these same reasons plus several more, including:
High-performance data serving, scalable to billions of interactions;
Reduced overhead, since it achieves vertical scale through increased capacity rather than scaling horizontally by sharding the data;
Higher levels of security and resilience.
In short, MongoDB and other open source apps offer distinct and quantifiable advantages on Linux for mainframe systems that organizations find compelling not only for immediate competitive advantage but for the future as well.
Many organizations run Kubernetes clusters in a single public cloud, such as GCE or AWS, so they have reasonably homogenous infrastructure needs, says Alena Prokharchyk, Principal Software Engineer at Rancher Labs. In these situations, deploying Kubernetes clusters is relatively straightforward. Other organizations, however, may need to deploy Kubernetes across multiple clouds and data centers, which can lead to challenges.
Prokharchyk, who will be speaking along with Brian Scott of The Walt Disney Company at KubeCon in Seattle, shared more about these challenges and how Rancher Labs has worked with various organizations to solve them.
Alena Prokharchyk, Principal Software Engineer at Rancher Labs
Linux.com: Are there any challenges when deploying Kubernetes clusters within an organization with diverse infrastructure?
Alena Prokharchyk: While Kubernetes is designed to run on diverse infrastructure, organizations still face the challenge of preparing each of these infrastructure environments in different ways. Setting up the etcd cluster, starting a Kubernetes master and kubelets, configuring various storage and networking drivers, and setting up a load balancer often require different scripts and steps for different infrastructure environments.
As we’ll discuss at KubeCon, we address these challenges by creating a common set of infrastructure services (networking, storage, and load balancer) across diverse public clouds, private clouds, virtualization clusters, and bare metal servers. From there, a common set of tools based on Rancher can be used to automate the setup, ongoing management, and upgrade of heterogeneous Kubernetes clusters. Introducing a new declarative configuration language to solve this problem is something we tried to avoid, as it would have been another learning step for system administrators.
On Rancher, we also decided to containerize the entire Kubernetes cluster deployment, and to orchestrate those deployments. This approach allows users to describe the application itself, as well as the dependencies between different services. It also makes it simple to scale the cluster as new resources are added.
Linux.com: Are there any best practices for automating the deployment of multiple Kubernetes clusters?
Alena: There are a couple of ways to do this. Kubernetes now ships with a rich set of cloud provider support that enables easy setup of Kubernetes clusters. There is also an increasing number of tools (such as the kubeadm tool in 1.4) that automate the deployment of Kubernetes clusters. However, we still lack tools that can fully automate both the deployment of Kubernetes and the infrastructure elements on which Kubernetes relies. The industry has not yet established a set of best practices to deploy multiple Kubernetes clusters. In our talk, we will show how we might be able to accomplish this using the Rancher container management software.
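As a rough sketch of the kubeadm workflow mentioned above (the token and address are placeholders, following the 1.4-era syntax):

```shell
# On the master node: bootstrap etcd, the API server, and the other
# control-plane components; a join token is printed when it finishes.
kubeadm init

# On each worker node: join the cluster using the printed token.
kubeadm join --token=<token> <master-ip>
```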
Managing infrastructure is just as important as managing Kubernetes deployments. It is critical to provide an easy way of adding and removing hosts, to provide an overlay network and DNS, and to detect host failures; all of this is necessary to ensure a smoothly running Kubernetes cluster. This part should always be automated first.
Lastly, protecting your data is always important, and we advise users to pay extra attention to etcd, high availability (HA), and disaster recovery; automating this process always pays off. Losing etcd quorum is not uncommon, even for large enterprises, so we advise periodically backing up etcd clusters so they can be easily restored and recovered after quorum is lost.
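A periodic etcd backup can be a simple cron job around etcdctl; a minimal sketch using the etcd v3 API (endpoint and file paths are illustrative):

```shell
# Take a point-in-time snapshot of the etcd keyspace
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  snapshot save /var/backups/etcd-$(date +%F).db

# After losing quorum, restore the cluster data from a snapshot
ETCDCTL_API=3 etcdctl snapshot restore /var/backups/etcd-2016-11-03.db
```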
Linux.com: What can organizations, large or small, do to simplify Kubernetes deployments?
Alena: Teams need an easy way to both deploy and upgrade Kubernetes clusters. It should only take one click for users to upgrade their Kubernetes deployment; distributing the latest templates and notifying users that their clusters are due for updates are initial steps organizations can take to simplify the process.
Linux.com: What makes deploying Kubernetes clusters relatively straightforward for enterprises running them in a single public cloud like GCE or AWS?
Alena: Native support for Kubernetes on GCE and AWS is very good. Services like GKE make running Kubernetes on Google Cloud even easier. We actually encourage users to use these tools when they are only interested in running Kubernetes in a single public cloud, as they’re built natively to work with that cloud. If your cloud (and Kubernetes cluster) is homogenous, you can leverage provider-specific functionality for features like load balancing and persistent storage.
But in our experience, enterprise users are interested in running Kubernetes on multiple public clouds, or on mixed infrastructure; if you want to build a cluster of GCE and AWS instances, AWS ELB or EBS features won’t be available for GCE. With Kubernetes on Rancher, we offer an alternative solution for that: the Rancher Load Balancer. Its implementation allows users to balance traffic across clouds and lets them choose a load-balancing provider from options such as HAProxy, Nginx, or Traefik.
Linux.com: What have been your biggest learnings when working with enterprise IT organizations to solve Kubernetes deployment problems?
Alena: For enterprise IT organizations, managing access control to the Kubernetes cluster is incredibly important; providing a variety of options for managing access control is advisable, as most organizations want to integrate with the solutions they already use. Rancher integrates with ActiveDirectory, AzureAD, GitHub, Local Authentication, and OpenLDAP, and we are planning to add more.
With large-scale Kubernetes clusters, we find that users encounter node and networking failures fairly frequently. As a result, when it comes to defining Kubernetes cluster system services, we include a monitoring option. Furthermore, when such failures occur, Rancher implements self-healing measures to automatically keep the Kubernetes cluster running as expected; those self-healing measures are just as important as automating the deployment of the cluster itself.
Registration for this event is sold out, but you can still watch the keynotes via livestream and catch the session recordings on CNCF’s YouTube channel. Sign up for the livestream now.
In our amazing Linux world, we have not one, not two, but three, count ’em, three major-league enterprise Linux distributions: Red Hat Enterprise Linux, Canonical’s Ubuntu Linux, and SUSE Enterprise Linux. In this series, we will contrast and compare all three. Each one is so large it would take a book to thoroughly cover them, so we’ll hit the high points of major products, services, important partnerships, and support.
Red Hat Enterprise Linux Background
Red Hat, like SUSE, is one of the oldest Linux distributions, founded in 1993. As a foundational distribution, it spawned a large family of derivatives, including Caldera, Mandrake, Turbolinux, Yellow Dog, and Red Flag.
In 2003, Red Hat Linux split into Red Hat Enterprise Linux (RHEL) and Fedora Linux, making a clear distinction between the commercial enterprise version and the free community version. Fedora is 100% free and open source software (FOSS); it showcases new technologies while providing a good usable system.
RHEL promises super-reliability and long support cycles. Each release is supported for 10 years, and RHEL 5 customers can purchase extended support beyond that.
Red Hat’s code is open, and anyone can take it for free and clone it or build competitive derivatives. CentOS and Scientific Linux are popular clones, and competitor Oracle maintains its own Oracle Unbreakable Linux clone, which is exactly the same as RHEL with one difference: customers have the option of using Oracle’s customized kernel in place of the RHEL kernel. Even so, RHEL is one of the big open source success stories: it was the first open source business to reach $1 billion in revenues, and in 2016 it cracked the $2 billion mark.
Getting RHEL For Free
Linux users are used to getting great software free of cost, even though that is not a requirement of most FOSS licenses. Users who want RHEL for free can build it from source RPMs (which is not a trivial task) or use one of the clones. A third option is to get the official binaries from their Get Started download page, which has images for bare metal and virtual machines. This is a self-supported, free of cost version that is the same as the paid version, and it uses all the same tools including Subscription Manager and the Red Hat Customer Portal. You have to register and join the Red Hat Developer Program, and you may not use it as a production server — only for testing and development. Read all about it at FAQ: no-cost Red Hat Enterprise Linux Developer Suite.
Many individual products have live online demos and free 30-day downloads.
Buying Red Hat
You can talk to the nice Red Hat salespeople, who really are nice and knowledgeable, and you also have the option of purchasing online.
Product Line
RHEL includes almost everything under the sun: the Linux operating system, JBoss Middleware, KVM-based hypervisor, cloud, storage, mobile development and management platforms, desktop, workstation, Internet of Things, and of course all of the major servers and productivity applications that are included in most Linux distributions. It runs on everything from embedded devices to mainframes and supercomputers.
As containers are all the rage now, check out Red Hat’s Atomic Host. This is a specialized, scaled-down RHEL 7 optimized to run Docker-format containers. Atomic Host reduces the complexity of developing and running containers by providing a central management console for creating and managing them; it incorporates Docker, Kubernetes, SELinux, systemd, and other standard components. See the Product Documentation for Red Hat Enterprise Linux Atomic Host for a complete walk-through of installation and configuration. This is a good starting point if you’re new to container technologies.
We hear so much hype about containers and Internet of Things that it fades into background noise. To get a good perspective on the amazing possibilities of these technologies, watch “Microservices and Smart Networks Will Save the Internet,” which brings it all into the real world.
Red Hat has partnerships with many major tech vendors, including Dell, SAP, Cisco, Hewlett-Packard, Intel, IBM, Amazon, and, yes, Microsoft. Like most FOSS projects you get interoperability rather than lock-in.
What about the desktop? Red Hat has a desktop and a workstation edition, but they’ve always been quiet about them. I’ve never understood why so many businesses stick with Microsoft Windows on the desktop when it’s such an overpriced hassle. Linux on the enterprise desktop makes perfect sense: way more secure, stable, lightweight, easy to customize, and easy to manage centrally. Just one of life’s mysteries, I suppose.
Management tools are the #1 most important tools in the datacenter, in my needlessly humble opinion. Red Hat’s Satellite provides a central console for full management of the entire Red Hat stack: provisioning, configuration, license tracking, and auditing.
Visit the ecosystem catalog to look up certified hardware, software, and service providers.
Support
Red Hat’s customer and product support generally gets high marks. They also offer a full complement of training and certification courses. These are tailored for Red Hat software, but Linux and FOSS are pretty much the same everywhere so everything you learn is transferable to other Linux distributions and open source software.
Red Hat’s documentation is famous for being excellent and thorough, with manuals for everything, plus videos and knowledge base.
Cons
So far, this probably sounds like a gooey love letter. In a way it is, because Red Hat is a fine company that has been a major supporter and funder of FOSS development from its inception. Their products and support are first-rate. Of course, everyone has their quirks and flaws. These are some that I have experienced:
The Mystery of the Broken Download. When RHEL 6 was released, I tried to download a 30-day evaluation. I could not get a full download, so I filed a bug ticket. I received many nice replies but not one helpful reply. So then I requested the DVD. At the time, the evaluation disk cost $25, and as a tech journalist I figured I should receive a free review copy. Again, my request was met with abundant niceness, but nobody could just pop a disk in the mail. I gave up and found a friend who gave me access to his RHEL server to check it out.
Ancient Software. Many businesses hate to upgrade anything ever. They think computers are like staplers: when you buy a stapler, you have a stapler for life. Who upgrades staplers? Nobody, that’s who, so why upgrade computers? This causes problems when you want to run applications that have newer dependencies. For example, LAMP stacks are moving targets, and wise admins keep them updated religiously. But RHEL 6 ships with PHP 5.3.3 and RHEL 7 ships with PHP 5.4, both of which are so old and unsafe that they’ve been deprecated and are unsupported by the PHP team. Red Hat keeps them patched, but most apps and servers require newer PHP versions. Getting newer versions was quite a hassle until Red Hat created Software Collections, which is both a software repository and a toolset for building your own packages. Not all SCL packages are supported; see Red Hat Software Collections for the supported list.
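Using Software Collections to get a newer PHP on RHEL 7 looks roughly like this (rh-php56 is one of the collections shipped at the time; check the supported list for current names):

```shell
# Enable the Software Collections repository and install a newer PHP
subscription-manager repos --enable rhel-server-rhscl-7-rpms
yum install -y rh-php56

# Run the SCL PHP without touching the system PHP
scl enable rh-php56 -- php --version
```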
Up Next: Ubuntu Linux
See our next installment, in which we explore Canonical’s Ubuntu Linux. Ubuntu is the easiest of the enterprise Linuxes to obtain; simply download it without jumping through any hoops. Ubuntu is the youngest major enterprise Linux, and they are making their mark in a number of interesting ways.
HPE remains a member of OpenSwitch, but today’s announcement signals a new direction for the project. Dell has contributed a base operating system, while SnapRoute is providing routing and switching stacks. HPE founded OpenSwitch but handed off the project to the Linux Foundation in June. At the time, HPE said the move was a way to show the community that this wouldn’t be an effort controlled by one vendor.
OpenSwitch is an effort to develop an open source software stack for network switches. It’s a rival to the open source operating systems offered by the likes of Cumulus and Pica8.
Mesosphere, the main commercial outfit behind the Apache Mesos datacenter and container orchestration project, has taken a good look at its user base and found that they gravitate toward a few fundamental use cases.
Survey data released recently by Mesosphere in the “Apache Mesos 2016 Survey Report,” indicates that Mesos users focus on running containers at scale, using Mesos to deploy big data frameworks, and relying heavily on the core tool set that Mesos and DC/OS provide rather than using substitutes.
This contributed piece is from a speaker at Node.js Interactive North America, an event offering an in-depth look at the future of Node.js from the developers who are driving the code forward, taking place in Austin, TX from Nov. 29 — Dec. 2.
There is no doubt that Node.js is one of the fastest growing platforms today. It can be found at start-ups and enterprises throughout all industries from high-tech to healthcare.
A lot of people have written about the reasons for its popularity and why it has made sense in “digital transformation” efforts. But when you implement Node.js, do you have to replace your mainframes and legacy software with a shiny new Node.js-based microservice architecture?
AWS has launched a new Linux container image in response to customer demand, designed for use with cloud and on-premises workloads.
The Amazon Linux AMI is a secure environment for firing up applications running on EC2, but due to customer demand, AWS has now made the image available for on-premises as well as cloud infrastructures, addressing more businesses’ needs.
“Many of our customers have asked us to make this Linux image available for use on-premises, often as part of their development and testing workloads,” Jeff Barr, chief evangelist for AWS, said.
On May 12, 1996, like a benevolent mad scientist, Brewster Kahle brought the Internet Archive to life. The World Wide Web was in its infancy and the Archive was there to capture its growing pains. Inspired by and emulating the Library at Alexandria, the Internet Archive began its mission to preserve and provide universal access to all knowledge.