Serverless Computing Is the Stack Reimagined


Alan Ho’s presentation at Node.js Interactive centered on serverless — a technology he has been working on since he founded InstaOps (acquired by Apigee, which was then acquired by Google) a few years ago.

What is Serverless?

In Ho’s own words, “Serverless computing is the code execution model that the cloud provider abstracts the complexity of managing individual servers.” This basically means the provider worries about the servers; you just run your code on them.

Right off the bat, you get the advantage of not having to bother with managing servers and load balancers. According to Ho, serverless is also more cost effective — more on this later.

Ho, who now works on serverless for Google, also describes serverless computing as “the stack re-imagined”: networking and communication are handled by an API gateway or a Pub/Sub mechanism; compute is handled by a FaaS (Function as a Service — a place where you deploy code and the provider executes it); and storage is handled by a BaaS (Backend as a Service).

FaaS differs from container and VM technologies because you send it code, not a machine image. In more traditional deployments, you would ship a VM image that executes the code; with FaaS, you ship only the code.

There are several gotchas to using FaaS, however. First, you don’t have access to the file system. This lets the provider pack in more tenants while preserving security; it also speeds up container boot times and reduces costs. But, as a developer, you can no longer read configuration from files, the memory cache can’t spill over to disk, and you can’t use your own monitoring and logging tools; you have to rely on the tools supplied by the provider.
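In practice, this means configuration comes from environment variables set at deploy time rather than from files, and any cache lives purely in memory. The sketch below illustrates both patterns; the variable names `API_BASE_URL` and `CACHE_TTL_MS` are hypothetical, not part of any platform's contract.

```javascript
// With no readable config files, configuration is injected through
// environment variables at deploy time. (Both names are made up for
// this example.)
const config = {
  apiBaseUrl: process.env.API_BASE_URL || 'https://example.com/api',
  cacheTtlMs: Number(process.env.CACHE_TTL_MS || 60000),
};

// With no disk to spill to, caches must live in memory — and may
// vanish whenever the provider recycles the instance.
const cache = new Map();

function cached(key, computeFn) {
  const hit = cache.get(key);
  if (hit && Date.now() - hit.at < config.cacheTtlMs) return hit.value;
  const value = computeFn();
  cache.set(key, { value, at: Date.now() });
  return value;
}
```

Treating the cache as best-effort (correct even when empty) is the safe design here, since instances are created and destroyed outside your control.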

Likewise, a BaaS is usually multi-tenant (i.e., cheaper) and frees you from having to run the clusters yourself. BaaS offerings usually involve NoSQL database services. The tradeoff is that you have limited ability to configure indexes, and you don’t get the configuration flexibility provided by, say, Elasticsearch or Cassandra.

On the flip side, serverless scales really well. Ho mentioned Pokémon Go as an example. Its creators gave Google Cloud an original launch target and a worst-case scenario. When the game was deployed, however, traffic shot up to 50 times the original launch target, and the creators blew through their worst-case scenario in 12 hours. Fortunately, the serverless systems were able to handle the deluge.


Probably the most palpable advantage of serverless is its low cost. Because you don’t manage the servers yourself, you save on administration. Providers also share resources across many customers, which lowers their costs; theoretically, those savings are then passed on to the customer.

Serverless also implies less Ops. If, for example, you have Cassandra clusters backing hundreds of microservices, you would ideally need a separate database for each service. Managing hundreds of databases individually is very resource-intensive.

Serverless also presents a simpler programming model, saving development time. Ho demonstrated a natural language-based support chatbot that he developed and deployed in four hours.


Serverless computing is not for everything. Its limitations exclude it from services that require control over the underlying operating system, the file system, or even customized deployments of Node.js. Ho also does not recommend it for deployments that will move massive amounts of traffic.

However, if none of the above is necessary, the simplified coding model, the speed at which applications can be deployed, and the cost effectiveness of serverless make it an excellent platform for online applications that may need to scale.

Watch the complete presentation below:

If you are interested in speaking at or attending Node.js Interactive North America 2017 – happening in Vancouver, Canada next fall – please subscribe to the Node.js community newsletter to keep abreast of dates and times.