
3 Cutting-Edge Frameworks on Apache Mesos

The three cutting-edge frameworks showcased in these talks from MesosCon North America demonstrate the amazing power and flexibility of Apache Mesos for solving large-scale problems.

Perhaps you have noticed, in our Apache Mesos series, the importance of frameworks. Mesos frameworks are the essential glue that makes everything work in a Mesos cluster: the layer between Mesos and your applications. They perform a multitude of tasks, including launching and scaling applications, monitoring and health checks, configuration management, and scheduling. In these talks, you’ll learn how:

  • Netflix uses Mesos to power their recommendation engines.

  • Huawei Technologies uses Mesos to make a distributed Redis framework.

  • Crate.IO runs a distributed, scalable, shared-nothing SQL Mesos framework.
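The scheduling half of that glue follows a common pattern: Mesos offers a framework spare cluster resources, and the framework’s scheduler decides which queued tasks to launch on which offers. The toy sketch below illustrates that offer cycle only; the class and field names are invented for illustration and are not the real Mesos API.

```python
# Toy sketch of the resource-offer cycle a Mesos framework scheduler drives.
# These class and field names are illustrative, not the actual Mesos API.

class Offer:
    """A simplified resource offer: some CPU and memory on one agent."""
    def __init__(self, offer_id, cpus, mem):
        self.offer_id, self.cpus, self.mem = offer_id, cpus, mem

class ToyScheduler:
    """Launches queued tasks onto whatever offers can fit them."""
    def __init__(self, task_queue):
        self.task_queue = list(task_queue)  # (name, cpus, mem) triples
        self.launched = []                  # (task name, offer id) pairs

    def resource_offers(self, offers):
        # Called whenever Mesos hands the framework a batch of offers.
        for offer in offers:
            if not self.task_queue:
                break
            name, cpus, mem = self.task_queue[0]
            if offer.cpus >= cpus and offer.mem >= mem:
                self.task_queue.pop(0)
                self.launched.append((name, offer.offer_id))
            # otherwise decline the offer and wait for a better one

scheduler = ToyScheduler([("web-1", 1.0, 512), ("web-2", 2.0, 1024)])
scheduler.resource_offers([Offer("o1", 2.0, 2048), Offer("o2", 4.0, 4096)])
print(scheduler.launched)  # both tasks placed
```

Real frameworks layer health checks, restarts, and persistence on top of exactly this decision loop.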

Building a Machine-Learning Orchestration Framework on Apache Mesos

Antony Arokiasamy and Kedar Sadekar, Netflix

Have you ever wondered what powers recommendations on Netflix? It isn’t hordes of employees studying your viewing habits, and it isn’t PigeonRank. Rather, it is a self-learning framework called Meson, which is built on Apache Mesos.

Antony Arokiasamy and Kedar Sadekar lead the personalization infrastructure team at Netflix. They build the infrastructure for the algorithmic teams, who build the machine-learning algorithms that power recommendations at Netflix.

“We want to delight the customer every time you interact with Netflix. We have over 81 million subscribers, and delighting a subscriber or a member means any time you turn on Netflix we want to put forth content to you that we feel you will be really happy to watch…Everything you see once you turn on Netflix is a recommendation. For example, on this page every row is sorted in a particular order which is personalized for that particular user,” Sadekar says.

Netflix is a global business, so they use both global and regional recommendation models, and different tools for different models. “Let’s say certain kinds of movies, certain action movies or martial arts movies carry well in all markets. So let’s train those sets of users using a global model. So for this we want to use Spark. Let’s say in India, you like Bollywood and something else, you like another set of movies, another set of genres. In this one, the technology of choice is R. At the end of it we have a Scala-based thing that’s actually doing the model validation, or choosing the best model that needed to be fit” explains Sadekar.
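The last step Sadekar describes, scoring per-region candidate models and keeping the winner, can be sketched in a few lines. This is a hypothetical illustration of that validation step, not Netflix’s code; the region names, model names, and scores are all invented.

```python
# Hypothetical sketch of model selection: per-region candidates (trained
# with different tools) are scored on held-out data, and the best-scoring
# model wins for that region. All names and scores here are invented.

def select_best_model(candidates):
    """candidates maps model name -> validation score (higher is better)."""
    return max(candidates, key=candidates.get)

regional_candidates = {
    "us": {"spark-global": 0.81, "r-regional": 0.79},
    "in": {"spark-global": 0.74, "r-regional": 0.83},
}

chosen = {region: select_best_model(models)
          for region, models in regional_candidates.items()}
print(chosen)  # {'us': 'spark-global', 'in': 'r-regional'}
```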

Meson is flexible and complex, incorporating Hadoop, Docker, Spark, R, Python, and Scala. Watch Arokiasamy and Sadekar’s talk (below) to learn the details of how it all goes together.

https://www.youtube.com/watch?v=UyjUf1xT6Qg&list=PLGeM09tlguZQVL7ZsfNMffX9h1rGNVqnC

Redis on Apache Mesos, a New Framework

Dhilip Kumar S, Huawei Technologies

Redis is a popular key-value store for persistent caching, but running it in a distributed environment is a complex endeavor. Hosting Redis-as-a-service is especially difficult. Dhilip Kumar S of Huawei Technologies shares how he built a thin, high-performing Redis framework on Mesos that preserves Redis’s performance while simplifying running it in a cluster.

“One of the most popularly requested middleware is Redis,” says Dhilip, “because it’s absolutely lightweight and it’s easy to create, and almost all Web applications require Redis. We think that the biggest problem down the lane two years as a public cloud provider would be to actually maintain this huge number of Redis instances.”

Dhilip’s team forecast three major Redis problems to solve: customers who need a simple setup with a single Redis binary, high availability with a Redis master and slaves on the same hosts or on different hosts, and 3.0 clusters with multiple Redis masters. They also had to solve the problems of creating, administering, monitoring, clustering, and metering thousands of Redis instances.
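For the multi-master cluster case, any framework that places Redis masters has to respect how Redis Cluster distributes keys: per the published Redis Cluster specification, every key maps to one of 16384 hash slots via CRC16 (the XModem variant), and keys sharing a `{...}` hash tag land in the same slot. A minimal sketch of that mapping:

```python
# Redis Cluster key-slot mapping, per the Redis Cluster specification:
# slot = CRC16(key) mod 16384, where CRC16 is the XModem variant, and a
# {...} hash tag restricts hashing to the tag contents so related keys
# can be forced onto the same slot (and thus the same master).

def crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM: poly 0x1021, init 0, no reflection."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # non-empty hash tag: hash only its contents
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Keys sharing a hash tag always map to the same slot:
print(key_slot("{user1000}.following") == key_slot("{user1000}.followers"))
```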

Watch the complete presentation to learn how they did it, and to see a live demo of creating multiple Redis instances using Mesos.

https://www.youtube.com/watch?v=xe-Gom5tOl0&list=PLGeM09tlguZQVL7ZsfNMffX9h1rGNVqnC

Managing Large SQL Database Clusters with the Apache Mesos Crate Framework

Aslan Bakirov and Christian Lutz, Crate.IO

The good news is Mesos makes it possible to perform amazing creative large-scale computing feats, as we have seen previously in this blog series. The bad news is these wonderful technologies are still young and present new challenges, such as horizontally scaling databases. Our old reliable warhorses, MySQL, MariaDB, and PostgreSQL, can’t do that. Christian Lutz, CEO of Crate.IO, built the Crate distributed, highly scalable, highly available, shared-nothing SQL database to meet modern challenges.

Aslan Bakirov describes the features that make Crate the best SQL database for Mesos. “Crate has a shared-nothing architecture, which means that nodes in the Crate cluster do not share any states, so that if any node in your cluster fails, the other nodes will not be affected from that. The second main feature is all nodes are equal. Every node can be treated as a master node whenever it’s needed, so if you lose any of your master nodes, one of your slave node can become a master node and expose the cluster state to other nodes in the Crate cluster. The third main feature of the shared-nothing architecture is each node in a Crate cluster can perform any type of a query on every node in the Crate cluster.”
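The shared-nothing properties Bakirov lists can be made concrete with a toy simulation: rows are sharded across nodes, no node holds privileged state, and any node can coordinate a query by fanning out to the others and merging partial results. This is an illustration of the architecture only, not Crate’s implementation.

```python
# Toy illustration (not Crate's implementation) of a shared-nothing cluster:
# rows are hash-sharded across nodes, and *any* node can coordinate a query
# by scanning every shard and merging the partial results.

class Node:
    def __init__(self, name):
        self.name, self.rows = name, []

class Cluster:
    def __init__(self, nodes):
        self.nodes = nodes

    def insert(self, key, row):
        # simple hash sharding; each row lives on exactly one node
        self.nodes[hash(key) % len(self.nodes)].rows.append(row)

    def query(self, coordinator, predicate):
        # every node is equal: any node may act as coordinator
        assert coordinator in self.nodes
        return [r for n in self.nodes for r in n.rows if predicate(r)]

cluster = Cluster([Node("n1"), Node("n2"), Node("n3")])
for i in range(10):
    cluster.insert(f"k{i}", {"id": i})

# The result is the same no matter which node coordinates the query.
print(len(cluster.query(cluster.nodes[0], lambda r: r["id"] % 2 == 0)))  # 5
```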

The Crate Mesos Framework integrates the data storage layer with Mesos.

Watch the full presentation to learn more, and to see a live demo.

https://www.youtube.com/watch?v=kyMZ7s7dq2I&list=PLGeM09tlguZQVL7ZsfNMffX9h1rGNVqnC

Mesos Large-Scale Solutions

Please enjoy the previous blogs in this series to see some of the ingenious and creative ways to hack Mesos for large-scale tasks.


Apache, Apache Mesos, and Mesos are either registered trademarks or trademarks of the Apache Software Foundation (ASF) in the United States and/or other countries. MesosCon is run in partnership with the ASF.

Bridging Tech’s Diversity Gap

Recently, the OpenStack Foundation conducted a survey to dig deeper into who was actually involved with its community. The results were quite shocking, showing that only 11 percent of the entire OpenStack population identify as women. Team leaders across the industry took notice, with many asking how they could improve diversity not only within their communities but their hiring practices.

In this episode of The New Stack Makers, embedded below, we address the issue of how to build diverse communities within open source, how and why companies should focus on enacting hiring practices that are inclusive and welcoming of diverse candidates, and the ways in which the increased visibility of diversity in technology impacts marginalized individuals in the workplace. The New Stack founder Alex Williams sat down with DreamFactory developer relations manager Jessica Rose, Bitergia co-founder and chief data officer Daniel Izquierdo, and Red Hat OpenShift director of community development Diane Mueller to hear their thoughts…

Read more at The New Stack

The PocketC.H.I.P. Is the Handheld Linux Machine I’ve Been Looking For

The variety of ways people have found to cram the palm-sized Raspberry Pi computer inside a handheld device makes for some of my favorite Pi projects. But those projects are usually expensive, and some even require a 3D printer. The PocketC.H.I.P. isn’t nearly as powerful as a Pi, but it’s still the handheld machine I’ve wanted for a long time. Plus, it’s just $50.

Read more at LifeHacker

Architectural Considerations for Open-Source PaaS and Container Platforms

The market for open source PaaS (Platform-as-a-Service) and Container platforms is rapidly evolving, both in terms of technologies and the breadth of offerings being brought to market to accelerate application development. Many IT organizations and developers are now mandating that any new software usage (on-demand consumption) or purchase must be based on open source software so that they have greater control over the evaluation process. In addition, many organizations want the option to choose whether their deployments are on-premises or use a public cloud service. As the pace of change accelerates for many open source technologies, IT organizations and developers are evaluating the architectural trade-offs that will impact new cloud-native applications as well as integrations with existing applications and data.

Read more at WikiBon

Microsoft’s Project Malmo AI Platform Goes Open Source

Microsoft has released artificial intelligence system Project Malmo to the open-source community. The system, now available to all, uses Minecraft to test artificial intelligence protocols. On Thursday, the Redmond giant revealed the shift of Project Malmo from the hands of a small group of computer scientists in a private preview to GitHub, a code repository for open-source projects.

Formerly referred to as Project AIX, the platform has been developed in order to give startups a cheap, effective way to test out artificial intelligence programming without the need to build robots to test commands and comprehension with physical subjects.

Read more at ZDNet

The Wi-Fi Network Edge Leads in an SDN World

New thinking around software-defined networking makes the Wi-Fi network edge especially powerful. Two decades ago, the core was the place to be in campus networking. The networking battles of the 1990s concluded with the edge specialists humbled and assimilated by core product lines. Control the core, we declared, and the edge will fall into place.

But now the edge is fruitful, and the core is sterile—and for two reasons. First, the wireless interface adds mobility and complexity to the edge. Second, the new architectures of software-defined networking (SDN) and IoT are based on centralized models that take sensed information, manipulate a software representation of the network, then send control signals back to network nodes. Nodes are peers under the controller. Their importance is based on the quantity and quality of the information they can report, as well as the sophistication of the control they can apply.

Read more at NetworkWorld

15 Useful ‘sed’ Command Tips and Tricks for Daily Linux System Administration Tasks

Every system administrator has to deal with plain text files on a daily basis. Knowing how to view certain sections, how to replace words, and how to filter content from those files are skills…


Read the full article here: http://www.tecmint.com/linux-sed-command-tips-tricks/

Greg Kroah-Hartman Tells Google’s Kubernetes Team How to Go Faster

What has 21 million lines of code, 4000 contributors, and more changes per day than most software projects have in months, or even years? The Linux kernel, of course. In this video, Greg Kroah-Hartman provides an inside view of how the largest, fastest software project of all absorbs so many changes while maintaining a high level of quality and stability.

A Fabulous Machine

Kroah-Hartman presented this talk to Google’s Kubernetes development team. Kubernetes is also undergoing rapid growth, and Kroah-Hartman draws on his extensive experience to provide tips on how to manage such a high-velocity project.

The recently released 4.6 kernel contains more than 21,400,000 lines of code in more than 53,600 files. Of those 21.4 million lines, only 1.4 million are needed to run your laptop or PC. Smartphones need more, around 2.5 million. (See the video to learn where the remainder goes.) The number of contributors keeps growing, and currently the project includes more than 4000 contributors and nearly 500 companies.

The rate of change is like a high-speed treadmill that never stops. The 4.6 kernel averaged 11,600 lines added, 5800 removed, and 2000 modified per day. The average number of changes per hour was 8.9, and that’s just the number of changes accepted; the average rate of acceptance is one-third to one-half. Somehow, kernel maintainers managed to review 18 to 27 patches per hour.
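The review figure follows directly from the acceptance rate: if 8.9 changes per hour are merged and only one-third to one-half of submissions make it in, maintainers must be looking at roughly two to three times that many patches. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the review numbers above: 8.9 accepted
# changes per hour, with an acceptance rate between one-half and one-third,
# implies roughly 18 to 27 patches reviewed every hour.

accepted_per_hour = 8.9
low = accepted_per_hour / (1 / 2)   # if every other patch is accepted
high = accepted_per_hour / (1 / 3)  # if only one in three is accepted
print(round(low), round(high))  # 18 27
```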

Greg Kroah-Hartman explains the intricacies of Linux kernel maintenance.

New releases come out about every 2.5 months. Kroah-Hartman says that it is safe to routinely update Linux kernels, and cites how Facebook tested three years of kernel releases, and nothing broke. But, of course, not everyone wants to do this. Enterprise users tend to hang on to old software for years, and some Linux distros, such as Debian Stable, maintain old versions for several years. It is more work to maintain old software because the world moves on, and old code becomes increasingly out of sync.

How does the Linux kernel bridge the gap between old and new? With Linus Torvalds’ ironclad rule of “Never break userspace.”

What Does the Linux Kernel Look Like in the Future?

The Linux kernel was born in 1991. (See the famous debate between a young Linus and computer science professor Andrew Tanenbaum, who said that Linus was wrong about everything.) Its rate of adoption and growth is phenomenal, and Linus thinks it can go faster. At a mere 25 years old, it is still a youth. And, so many advances in computer technology are ephemeral. Where are your 5.25″ floppy disks? CRT monitors? DB-25 printer ports? Now Apple is trying to do away with the 3.5mm headphone jack. 

With all the buzz around containers and abstracting datacenter resources into clouds, what does the Linux kernel look like in the future? Will the future of tech reinvent everything we use now to run in large-scale distributed environments?

David Aronchick, Kubernetes product manager, says there will be room for the old and new. “I think that distributed computing is a paradigm shift, and folks will be able to focus on applications (not kernels) if it suits them. But not everyone! There will be places for folks up and down the stack. Distributed computing comes when you need more than a single device can provide. It’s highly unlikely the brakes on your car, your radio or your PC will ever need to be part of a distributed cluster.”

So, young Tuxes, if your desire is to become a Linux kernel contributor, there is plenty of opportunity. How do you get started? How do kernel maintainers keep their sanity in such a whirlwind? How do they maintain a high level of quality and rapid development? Kroah-Hartman is an excellent guide for any new contributor.

Watch the complete presentation below to learn how it all works and where to start.

Greg Kroah-Hartman is a Linux Foundation Fellow, and a longtime kernel contributor and maintainer. He maintains the stable kernel branch and several subsystems including USB, the TTY layer, and sysfs, and contributes to the annual Who Writes Linux report. He wrote Linux Device Drivers and Linux Kernel in a Nutshell, is a popular speaker, and founded the Linux Driver Project, which has been very successful in bringing hardware vendors into the Linux fold.

 

Canonical-Pivotal Partnership Makes Ubuntu Preferred Linux Distro for Cloud Foundry

Pivotal, developer of the Cloud Foundry open source cloud development platform, and Canonical, the company behind the popular Ubuntu Linux distribution, announced a partnership today under which Ubuntu becomes the preferred operating system for Cloud Foundry.

In fact, the two companies have been BFFs since the earliest days of Cloud Foundry when it was an open source project developed at VMware. When VMware, EMC and GE spun out Pivotal as a separate company in 2013, Cloud Foundry was a big part of that and the relationship continued through today. Dustin Kirkland, head of Ubuntu product and strategy at Canonical, said he was surprised it took so long to formalize it, but today’s announcement marks a more official partnership.

It should help make life easier for Cloud Foundry customers running Ubuntu Linux in a number of important ways. 

Read more at TechCrunch

10 Biggest Mistakes in Using Static Analysis

Using static analysis the right way can provide us with cleaner code, higher quality, fewer bugs, and better maintenance. But, not everybody knows how to do it the right way. Check out this list of mistakes to avoid when performing static analysis.

Static analysis was introduced to the software engineering process for many important reasons. Developers use static analysis tools as part of the development and component testing process. The key aspect of static analysis is that the code (or another artifact) is never executed or run; instead, a tool runs with the source code we are interested in as its input.
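To make "inspecting code without running it" concrete, here is a minimal checker built on Python's standard `ast` module. The rule it enforces, flagging bare `except:` clauses, is just one illustrative example of the kind of check real linters perform.

```python
# A minimal taste of static analysis: inspect source code without running it.
# This toy checker uses Python's ast module to flag bare `except:` clauses,
# a common linter target (the choice of rule here is illustrative).

import ast

def find_bare_excepts(source: str) -> list:
    """Return line numbers of `except:` clauses with no exception type."""
    tree = ast.parse(source)  # parses the code; never executes it
    return [node.lineno
            for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

sample = """\
try:
    risky()
except:
    pass
"""
print(find_bare_excepts(sample))  # [3]
```

Note that `risky()` is never called: the analyzer reasons only about the parsed structure of the code, which is exactly what distinguishes static analysis from testing.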

Read more at DZone