Serverless computing is the latest trend in cloud computing brought about by the enterprise shift to containers and microservices.
Serverless computing platforms promise new capabilities that make writing scalable microservices easier and more cost effective, say IBM software engineer Diana Arroyo and research staff member Alek Slominski.
In their upcoming talk at MesosCon Europe, Arroyo and Slominski, who work at IBM’s Watson Research Center, will share lessons learned from running serverless workloads in an Apache Mesos environment to meet the performance demands of OpenWhisk, IBM’s open source serverless computing platform.
Here, they define serverless computing, discuss how it makes microservices easier to implement, and start to define what makes Mesos an ideal platform for serverless workloads.
Linux.com: What is serverless computing?
Diana Arroyo & Alek Slominski: At the most basic level, serverless computing is about running a piece of code (a function, event handler, action, etc.) on demand without having to manage which server it executes on or how to scale it. In our work we focused on the essential characteristics of serverless workloads: running, in a Mesos cluster, thousands of concurrent short-lived containers that are created and destroyed in hundreds of milliseconds (or less).
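To make the idea concrete, here is a minimal sketch of such an on-demand function in the style of an OpenWhisk Python action, where the platform invokes a single `main` function with a dictionary of parameters and the developer never touches a server:

```python
# Minimal OpenWhisk-style action: one function, invoked on demand by the
# platform, with no server management or scaling logic in the code itself.
def main(params):
    """Receive a dict of parameters and return a dict result."""
    name = params.get("name", "world")
    return {"greeting": "Hello, " + name}
```

The platform creates a container to run this function when a request or event arrives and tears it down afterward, which is exactly the short-lived-container pattern described above.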
Linux.com: How does it make microservices easier to implement?
Diana & Alek: Microservices started by focusing on creating a service that provides one well-defined piece of functionality. It was less about making the service small (a microservice may have many users and need to scale) and more about keeping it simple enough to implement in a short time. From that perspective, serverless computing may become an ideal way to implement microservices: there is no longer a need to worry about managing servers for them!
Linux.com: In what circumstances does it make the most sense to use a serverless architecture?
Diana & Alek: Serverless computing is ideal for pieces of code that run for a short amount of time (milliseconds to seconds). A typical example is running an event handler as a serverless function: it processes events as they arrive, and we do not need to worry about where the code runs or how to scale it when there is a very large number of events to process.
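A hypothetical event handler of this kind might look like the following sketch: the function processes exactly one event and returns, and the platform runs as many concurrent instances as the event rate requires.

```python
# Hypothetical event handler run as a serverless function. Each invocation
# processes one event and exits; scaling to many events is the platform's
# job, not the handler's.
def handle_event(event):
    """Validate a single event payload and return a small summary."""
    if "id" not in event:
        raise ValueError("event is missing an id")
    payload = event.get("payload", "")
    return {"id": event["id"], "bytes": len(payload)}
```

Because each invocation is independent and short-lived, a burst of a million events simply means a million invocations, with no capacity planning on the developer's side.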
Linux.com: Why is Mesos a good platform for serverless workloads?
Diana & Alek: It is very unlikely that one computing paradigm (such as serverless, containers, or VMs) will completely dominate. Much more likely is that different paradigms will need to work together, and Mesos frameworks provide a great abstraction that lets us run all of them in one shared cluster. And because serverless workloads consist of many very short-lived jobs, serverless functions can be scheduled to soak up spare cluster capacity, leading to additional efficiencies.
Linux.com: What is one tuning tip you have for running serverless workloads in Mesos?
Diana & Alek: For serverless workloads it is important to have Mesos resource offers passed to the framework as quickly as possible. We found significant gains by reducing the offer refuse timeout filter from its default of 5 seconds to 10 milliseconds. Filters let a framework short-circuit offer declines by telling the Mesos allocator not to send offers that match the filter criteria. In the case of the Swarm framework, which we used to orchestrate the serverless workloads, reducing the timeout parameter (mesos.offerrefusetimeout) to 10 milliseconds resulted in approximately a 10x speedup.
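In Mesos terms, this tuning corresponds to the `refuse_seconds` field of the `Filters` message a framework passes when declining an offer. The helper below is a hypothetical illustration (the function name and the 10 ms constant are ours) of building such a filter, assuming a dict-based scheduler API in the style of pymesos:

```python
# Illustrative only: building the Filters value a framework passes when it
# declines a Mesos offer. refuse_seconds defaults to 5.0 in Mesos; for
# short-lived serverless jobs a much smaller value makes the allocator
# re-offer the declined resources almost immediately.
DEFAULT_REFUSE_SECONDS = 5.0   # Mesos default
TUNED_REFUSE_SECONDS = 0.010   # 10 ms, the value discussed above

def decline_filters(refuse_seconds=TUNED_REFUSE_SECONDS):
    """Return the filters dict to send alongside an offer decline."""
    return {"refuse_seconds": refuse_seconds}
```

With a dict-based driver this would be used roughly as `driver.declineOffer(offer_id, decline_filters())`, so that resources declined by one framework become available to the serverless scheduler in milliseconds rather than seconds.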