As Yubi adopts a microservice architecture by decomposing its current monoliths, core and non-core functionalities need to be segregated so that they can be served independently.
While Yubi has pivoted towards being a polyglot shop by incorporating languages apart from Ruby, such as Java, Go, Python, and Node.js, the bulk of its services are still written on top of the Ruby on Rails (RoR) framework.
While the core services can be deployed out of the box on AWS Elastic Kubernetes Service (EKS) or Elastic Container Service (ECS), a few non-core and utility services need a cost-effective, highly available, easily configurable, and highly scalable serverless model. Enter AWS Lambda: a serverless, event-driven, pay-per-use compute service that lets us run code for virtually any type of application or backend service without provisioning or managing servers.
What Were the Roadblocks?
Running an MVC framework such as Ruby on Rails (RoR) on Lambda is not possible out of the box. Lambda currently supports only language-specific runtimes for Python, Ruby, Go, Node.js, etc.; it does not natively support frameworks such as RoR or Django.
How Did We Remove the Roadblocks?
AWS Lambda is a serverless Function-as-a-Service (FaaS) offering: Lambda executes the function loaded into it whenever a configured event is triggered.
To help Rails run smoothly on Lambda and serve incoming Lambda events, we incorporated a third-party gem, Lamby. Lamby is a Rack adapter that converts AWS Lambda integration events into native Rack environment objects, which are sent directly to your application. Lamby can do this when using API Gateway REST APIs, API Gateway HTTP API v1/v2 payloads, or even Application Load Balancer (ALB) integrations.
Phase 1: Developing RoR for Lambda
The main objective of the first phase was to develop a new, lightweight Rails application that would run on AWS Lambda. Keeping compatibility across infrastructures in mind, we built a Docker image for the Rails application so that it could also be deployed on AWS ECS containers with minimal configuration changes.
Phase 2: Incorporating Lamby Gem
In the second phase we focused on converting the incoming Lambda events into Rack objects with the help of the Lamby gem.
Lamby is a Rack adapter that converts AWS Lambda events into native Rack environment objects, which are then sent directly to your application.
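To build intuition for what such an adapter does, here is a simplified, hypothetical sketch (not Lamby's actual implementation) of mapping an API Gateway HTTP API v2 event into a handful of Rack environment keys:

```ruby
require 'stringio'

# Illustrative only: map a few fields of an API Gateway HTTP API (v2)
# event hash into the corresponding Rack environment keys.
def event_to_rack_env(event)
  http = event.dig('requestContext', 'http') || {}
  {
    'REQUEST_METHOD'  => http['method'] || 'GET',
    'PATH_INFO'       => event['rawPath'] || '/',
    'QUERY_STRING'    => event['rawQueryString'] || '',
    'SERVER_NAME'     => event.dig('headers', 'host') || 'localhost',
    'rack.url_scheme' => 'https',
    'rack.input'      => StringIO.new(event['body'].to_s)
  }
end

event = {
  'rawPath'        => '/users',
  'rawQueryString' => 'page=2',
  'requestContext' => { 'http' => { 'method' => 'GET' } },
  'headers'        => { 'host' => 'example.com' }
}
env = event_to_rack_env(event)
puts env['REQUEST_METHOD'] # prints "GET"
```

Lamby itself handles many more keys (headers, body encoding, multi-value parameters) and supports all three integration payload formats.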
To run the Rails application on Lambda, changes have to be made in only two core files.
The app.rb file plays a role similar to Rails' config.ru for Rack. Commonly called your handler, this file should remain relatively simple.
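A minimal handler, adapted from Lamby's documentation (assuming a standard Rails project layout):

```ruby
# app.rb — the Lambda handler for the Rails application.
require_relative 'config/boot'
require 'lamby'
require_relative 'config/application'
require_relative 'config/environment'

# Build the Rack app once, outside the handler, so it is reused
# across warm invocations of the same Lambda container.
$app = Rack::Builder.new { run Rails.application }.to_app

def handler(event:, context:)
  # Lamby converts the Lambda event into a Rack env, calls the app,
  # and converts the Rack response back into a Lambda-shaped response.
  Lamby.handler $app, event, context
end
```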
The Dockerfile should use one of the AWS-provided runtimes from their public ECR repository and typically does a simple copy of your built project. For example:
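A sketch of such a Dockerfile; the Ruby version tag is illustrative, and the handler string assumes the app.rb/`handler` naming above:

```dockerfile
# AWS-provided Ruby runtime base image from the public ECR repository.
FROM public.ecr.aws/lambda/ruby:3.2

# Copy the built project (with gems bundled into the image) into
# the Lambda task root, which is the base image's working directory.
COPY . .

# Point Lambda at the `handler` method defined in app.rb.
CMD ["app.handler"]
```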
Phase 3: Integrating API Gateway
We then developed the Lambda to serve our clients' REST/HTTP requests. Lamby can do this using API Gateway HTTP API v1/v2 payloads (the default), API Gateway REST APIs, or even Application Load Balancer (ALB) integrations.
This means Lamby removes the need for any Rack web server such as WEBrick, Passenger, or Puma. It also means Lamby can be used by any Rack web application, such as Sinatra, Hanami, or Rails, as long as the framework uses Rack v2.0 or higher.
For Rails, however, we need v5.0 or higher. Lamby does all of this through a simple one-line interface.
What Were the Roadblocks?
- Deploying the Rails application on Lambda is not straightforward. We experimented with many approaches before arriving at a solution that ran successfully.
- Converting the incoming Lambda events into proper native Rack environment objects that the Rails application can serve.
- Serving REST/HTTP requests as Lambda events to the Lambda service with the assistance of AWS API Gateway.
What Did We Try?
- Our approach of bundling the required gems inside the vendor folder and deploying them directly to Lambda turned out to be a washout.
- Our approach of deploying the application as a Docker image came to our aid, with the help of:
- configuration changes: incrementing the timeout limit in Lambda
- code changes in the Dockerfile
- For the Rails application to recognise the incoming parameters from Lambda events, we explicitly converted params with permitted attributes to a hash object in the controller files.
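As an illustration of that last point (the controller, model, and attribute names here are hypothetical), the explicit conversion looks along these lines:

```ruby
class UsersController < ApplicationController
  def create
    # Strong parameters return an ActionController::Parameters object;
    # converting the permitted subset to a plain Hash with #to_h ensures
    # downstream code invoked from the Lambda event handles it correctly.
    attrs = params.require(:user).permit(:name, :email).to_h
    user = User.new(attrs)
    # ... persistence and response handling elided
  end
end
```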
Pros and Cons of Lambda
- Lambda is cost-effective compared to ECS, EBS, or EKS, since it is billed only for the running (active) time of the Lambda function.
- Lambda has a significant number of cold starts, which increase the latency (turnaround time, TAT) of each service.
- Lambda has limited storage capacity for its functions.
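To make the billing difference concrete, here is a back-of-the-envelope comparison. All prices and workload numbers below are illustrative assumptions (roughly in line with public us-east-1 rates at the time of writing; verify against current AWS pricing):

```ruby
# Illustrative, assumed prices — check the current AWS pricing pages.
LAMBDA_GB_SECOND   = 0.0000166667     # USD per GB-second of execution
LAMBDA_PER_REQUEST = 0.20 / 1_000_000 # USD per request
CONTAINER_HOURLY   = 0.05             # USD/hour for a small always-on task (assumed)

# Assumed workload: 1M requests/month, 200 ms average, 512 MB memory.
requests_per_month = 1_000_000
avg_duration_s     = 0.2
memory_gb          = 0.5

lambda_cost = requests_per_month *
              (avg_duration_s * memory_gb * LAMBDA_GB_SECOND + LAMBDA_PER_REQUEST)
container_cost = CONTAINER_HOURLY * 24 * 30 # billed for all uptime

puts format('Lambda:    $%.2f/month', lambda_cost)
puts format('Container: $%.2f/month', container_cost)
```

Under these assumptions a lightly used service is far cheaper on Lambda, while a constantly busy one would quickly tip the other way, which mirrors our conclusion below.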
From our experiments, we can say that Lambda is suitable for our lightweight, non-core functionalities and modules. It is not the best fit for our core functionalities and applications, which require heavy computational power and an always-running model.
Trade-Off – Cost vs. Scale
Cost is a huge deciding factor when deploying the same framework on Lambda versus other container infrastructures such as EKS and ECS. Lambda is billed on a per-invocation, computational basis, while the others are billed for their entire uptime.
At scale, container-based infrastructures such as EKS and ECS take the upper hand in terms of auto-scaling, threshold limits, etc. Database connection pooling also poses a challenge on Lambda, since functions are supposed to be stateless, and any effort to make them stateful proves costly.