The Serverless Framework is quickly becoming one of the more popular frameworks used in managing serverless deployments. David Wells, an engineer working on the framework, talks with Wes Reisz about serverless adoption and the use of the open source Serverless Framework.
On this week’s podcast, the two dive into what it looks like to use the tool, the development experience, and why a developer might want to consider a tool like the Serverless Framework, and finally wrap up with what the tool offers in areas like CI/CD, canaries, and blue/green deployments.
Key Takeaways
- Serverless allows you to focus on the core business functionality and less on the infrastructure required to run your systems.
- Serverless Framework reduces the amount of configuration you need for each cloud provider (for example, it can generate much of the CloudFormation configuration required on AWS).
- Serverless Framework is an open source CLI tool that supports all major cloud providers and several on-prem solutions for managing serverless functions.
- The serverless space still has room to grow in its local development story. Much of the workflow today involves frequent deploys, scoped to different stages.
- Serverless Framework is open source and invites contributions from the community.
Subscribe on:
Is serverless the silver bullet?
- 2:40 Serverless is the silver bullet in most cases - it abstracts most of the things you had to worry about previously.
- 2:55 There are some cases where it might not be as cost effective or the right way to go.
- 3:05 It abstracts away the need for servers - of course there are still servers somewhere, but the developer no longer needs to worry about or manage them.
Have you got any examples of systems that benefit from serverless?
- 3:50 FINRA is a financial regulatory institute - not part of the government - that validates stock trades.
- 4:00 They are validating half a trillion stock trades a day.
- 4:05 That’s all running on Lambda, due to the load it takes.
- 4:15 I recommend checking out the Serverless State of the Union talk from each year’s re:Invent.
- 4:25 Media companies like Vevo - whenever Kanye drops a music video, they need to handle a huge influx of traffic, and they use AWS Lambda to handle those loads.
- 4:50 Vevo handles 80× their normal load during these kinds of spiky events.
What are some of the cases where serverless makes sense, and doesn’t make sense?
- 5:15 Where serverless makes sense is for transactions that run for five minutes or less.
- 5:30 If you’re running huge batch jobs that take longer, then you would want to run that in EC2 instances or in a Kubernetes cluster.
- 5:50 One way to avoid the hard-limit of 5 minutes is to recursively break down the units of work and call the functions repeatedly.
- 5:55 Doing that is not recommended.
- 6:00 You can’t do websockets very well with Lambda, because you have a five minute limit.
- 6:25 If you want to do something real-time, that might also be an issue for you.
- 6:30 A cold start is when a Lambda function is invoked and the container it runs in needs to be spun up first.
- 6:40 If the container isn’t already started, then there is a bit of latency added.
- 6:55 The latency depends on what language you write your lambda in, as well as how big the code and dependencies are that run.
- 7:05 If you have requirements that are low-latency, then you might run into those issues.
How do Just-in-time compilers work for serverless?
- 7:35 I don’t have exact numbers on Java, but it has the slowest cold start.
- 7:45 If it’s a request-response where the client is waiting for something, you might want to look into that.
- 7:50 You could always use a newer runtime, like Node.js or Go - and Go is the fastest because it’s a compiled binary.
What runtimes are supported across the cloud providers?
- 8:10 Natively you have Python, Node.js, Java and Go - and you can shim a lot of languages that they don’t natively support; someone got Rust working in Lambda.
- 9:15 You can shove in whatever language you want into your Lambda function.
- 9:20 You might run into performance issues if you’re combining too many things into one function, and the number of dependencies can impact cold-start times.
What are the size requirements for Lambda?
- 9:45 There is a maximum limit on the size of your deployment package.
- 10:00 Brian LeRoux, the author of arc.codes, has a lot of information on this - they try to keep their Lambdas under 5MB to keep their cold starts to a minimum.
- 10:10 You really have to choose your dependencies wisely, or re-write some that are too big.
- 10:25 In the JavaScript ecosystem, there’s a concept of tree-shaking, which strips out unused code.
- 10:30 If you’re using webpack, instead of bundling all of Lodash, you only include what you need.
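One way to keep the bundle small, as discussed above, is to require only the module you need (e.g. `lodash/get` instead of all of `lodash`) - or, for something tiny, to inline it. Here is a minimal stand-in for a deep property getter (a hypothetical sketch, not the real Lodash implementation):

```javascript
// Instead of shipping all of Lodash for one helper, inline the
// small piece you actually need. A minimal _.get-style lookup:
function get(obj, path, fallback) {
  // Accept 'a.b.c' strings or ['a', 'b', 'c'] arrays
  const keys = Array.isArray(path) ? path : path.split('.');
  let current = obj;
  for (const key of keys) {
    if (current == null) return fallback; // bail out on missing branch
    current = current[key];
  }
  return current === undefined ? fallback : current;
}

// get({ a: { b: 2 } }, 'a.b')        -> 2
// get({}, 'a.b', 'default')          -> 'default'
```

A few lines like this can replace tens of kilobytes of dependency code in the deployment package, which feeds directly into faster cold starts.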
What is the serverless framework?
- 10:50 The serverless framework was born out of a need - when you’re starting with Lambda, it’s very simple.
- 11:10 It’s super easy to zip it up with your dependencies, and it’s live.
- 11:20 As you start to build out your project and add more functions, and other teams add more functions, it becomes untenable to manage through the web UI.
- 11:45 Whether the serverless framework or something else, you’re going to have a bad time if you don’t have some form of orchestration tool to manage large projects.
- 11:55 Real world code typically never gets smaller.
What does it do for you?
- 12:25 What it will do is let you write a config file (serverless.yml) which couples your functions to the events that trigger them.
- 12:40 It might be an HTTP event, a cron trigger, an SQS queue or an SNS topic - there are a number of different event sources.
- 12:50 The serverless.yml also defines required dependencies - a database, an S3 bucket, and so on.
- 13:05 The framework will bundle up your service, deploy it to an S3 bucket, and wire up the endpoints and events.
- 13:40 The other benefit is that it transpiles down to CloudFormation, where you get a versioned snapshot.
- 13:50 This allows you to roll back deployments, or deploy the same version again elsewhere into multiple stages.
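As a rough illustration of the configuration described above, a minimal config might look like the following (the service name, runtime, and resources are illustrative, not from the talk):

```yaml
# Minimal serverless.yml sketch - names are hypothetical
service: my-service

provider:
  name: aws
  runtime: nodejs8.10          # a runtime current at the time of the talk
  region: us-east-1

functions:
  hello:
    handler: handler.hello     # exports.hello in handler.js
    events:
      - http:                  # HTTP event source
          path: hello
          method: get
      - schedule: rate(5 minutes)  # cron-style event source

resources:                     # raw CloudFormation for extra dependencies
  Resources:
    UploadsBucket:
      Type: AWS::S3::Bucket
```

Everything under `resources` is passed through to CloudFormation as-is, which is how the framework stays a thin layer over the provider’s native tooling.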
Is serverless framework an abstraction over AWS?
- 14:15 Originally yes - it only supported AWS.
- 14:25 In serverless.yml you can define a provider for whatever platform you want, so now we support eight or nine different providers.
- 14:35 We support AWS Lambda, Azure Functions, Google Cloud Functions, OpenWhisk, Kubeless, Spotinst, Auth0 Webtasks and Oracle Fn.
- 15:00 As a company, we’re vendor agnostic.
- 15:10 We’re trying to push the idea of writing your code in functions - do one thing and do it well.
How does it abstract differences between different platforms?
- 15:45 Across the different providers, there are some slight differences.
- 15:55 For the most part, the serverless.yml is just config of which events trigger which functions.
- 16:10 HTTP events are pretty similar across the board, as are blob storage and queues.
- 16:35 One of the biggest challenges in developing the serverless framework is that the providers have those differences.
- 17:00 The cross-cloud world is still a work in progress.
Are you now locking yourself into serverless framework as well as the cloud provider?
- 17:30 There is a break-glass exit - if you’re targeting a cloud provider like AWS, we’re transpiling down to their native resource manager.
- 17:45 When you run ‘serverless deploy’, we’re translating that into CloudFormation (in the hidden .serverless directory).
- 18:05 If you wanted to use native CloudFormation, you could take that template and run with it.
- 18:40 I don’t like locking developers into things.
What about on-premise solutions like Kubeless?
- 19:10 Kubeless is very similar to Lambda, except you’re running it in your own cluster.
- 19:15 They expose a number of different events, so you can schedule your functions on cron.
- 19:25 They also have pub-sub to subscribe to a given queue.
- 19:35 I think that Kubeless is serverless, as long as I’m not responsible for running the Kubernetes cluster.
- 19:40 There’s also the Fn project from Oracle, which also runs in a Kubernetes cluster.
- 19:50 That’s a bring-your-own Docker container, so you can run any function you want.
- 20:00 You need to be careful about whether it’s a latency sensitive service.
- 20:25 The FN project is more convention driven; there’s a folder that contains functions.
- 20:40 It tries to keep the containers warm.
How do I go about getting serverless and deploy a simple function?
- 21:05 To install serverless, run ‘npm install serverless’ - either locally in your project, or globally with the -g flag.
- 21:20 Once it’s on your machine, you need to set up whatever provider you’re using.
- 21:30 If you’ve already got an AWS account then you should have that already in your AWS profile.
- 21:35 You can use the ‘create’ command to use a pre-made template, or you can use the ‘install’ command to pull down a Git repository.
- 22:00 We have a set of examples at https://github.com/serverless/examples with a huge list of projects.
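The install-create-deploy flow described above looks roughly like this (the template and path names are illustrative; check ‘serverless --help’ for the current flags):

```bash
# Install the CLI globally (or drop -g to install per-project)
npm install -g serverless

# Scaffold a service from a pre-made template
serverless create --template aws-nodejs --path my-service
cd my-service

# Package the service and deploy its CloudFormation stack
serverless deploy

# Invoke the deployed function (function name is illustrative)
serverless invoke -f hello
```

The ‘install’ command mentioned above works similarly, but pulls an existing service down from a Git repository instead of starting from a template.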
When you deploy the function, what happens?
- 22:20 It will package everything up and, based on your profile, create a CloudFormation stack for you with all of the resources.
- 22:35 The functions and events that you define end up creating a verbose CloudFormation template.
- 22:45 It will then deploy that into the AWS account and region that you specified.
What does the development and debug experience look like?
- 23:10 Local development in the serverless world is still lacking.
- 23:20 You can invoke the functions locally with ‘serverless invoke local’ but it breaks down if you need other infrastructure.
- 23:40 Mocking all of the AWS services locally is excessive.
- 23:55 There’s a project called LocalStack that emulates AWS services locally, but it depends on what you’re using.
- 24:10 You can deploy just the function code, and that deploys within a few seconds.
- 24:25 I’ve had too many horror stories of “worked on my machine” but didn’t work in production.
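The local invocation mentioned above looks roughly like this (the function name and payload are illustrative):

```bash
# Run a single function on your machine without deploying;
# the --data payload stands in for the triggering event
serverless invoke local --function hello --data '{"name": "world"}'
```

This is handy for quick iteration on function logic, but as noted above it breaks down once the function depends on real cloud infrastructure.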
How do you go through a development/stage/production cycle?
- 24:45 The beauty of CloudFormation stacks is that you can deploy a stage name with them.
- 24:55 This goes for any infrastructure-as-code tool; if you write it in such a way that none of the resource names are hard-coded, then you can prefix or postfix the stage onto all of the resources.
- 25:10 The way that I typically do it is to work in a development stage, or you can pass a --stage flag to any CLI command.
- 25:25 You can use whatever stage names you want.
- 25:35 Once you have something deployed into a dev stage or a QA stage, you can test it out and then promote the same deployment to production.
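A sketch of the stage-parameterized config described above, so that dev, QA, and production stacks don’t collide (the service and bucket names are illustrative):

```yaml
# Stage is read from the --stage CLI flag, defaulting to 'dev'
provider:
  name: aws
  stage: ${opt:stage, 'dev'}

resources:
  Resources:
    UploadsBucket:
      Type: AWS::S3::Bucket
      Properties:
        # Postfix the stage so each stage gets its own bucket
        BucketName: my-service-uploads-${opt:stage, 'dev'}
```

Running ‘serverless deploy --stage qa’ then spins up a parallel stack whose resources all carry the qa postfix.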
How does serverless framework fit in with CI/CD?
- 26:05 It’s very similar to how you would do it now - you basically install the Serverless Framework into the CI system, and then deploy any PR into a stage environment.
- 26:35 The entire stack spins up with that unique postfix.
- 26:45 If everything passes, it gets deployed into the next level of QA/staging.
- 27:10 Once the deployment and integration tests have run, ‘serverless remove’ tears down the entire stack.
- 27:30 As long as you don’t hit any limits - it depends on what runtime you’re using.
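A hedged sketch of the per-PR flow described above, assuming the CI system exposes a variable like PR_NUMBER (the variable and test command are illustrative):

```bash
# Give each pull request its own ephemeral stack
STAGE="pr-${PR_NUMBER}"
serverless deploy --stage "$STAGE"

# Run integration tests against the freshly deployed stack
npm test

# Tear the whole stack back down when the tests pass
serverless remove --stage "$STAGE"
```

Because every resource carries the stage postfix, removing the stack is a single command with no leftover infrastructure.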
How does serverless framework hook into AWS canarying?
- 27:55 Canarying is a first-class citizen in SAM right now.
- 28:00 You can do it with the serverless framework; you just need a plugin.
- 28:05 The Serverless Framework has a lot of defaults; if they don’t work for your case, there are lifecycle hooks that you can override.
- 28:20 For example, if you want to use TypeScript, or you want to use Blue/Green deployments, there are plugins that can help.
- 28:30 Plugins allow you to work around what limitations exist based on what’s currently supported.
- 28:40 There’s another plugin to keep functions warm, by pinging them every 5 minutes.
- 28:50 You can see the list of plugins with the CLI or by going to https://github.com/serverless/plugins
- 29:15 With the warm plugin, after you’ve deployed your function, it runs custom SDK calls to set up a cron job that pings your function.
- 29:30 It’s similar to setting up your own function, triggered by cron, to do the work for you - but you don’t see it in the list.
- 29:45 One of my favourite plugins is the serverless manifest plugin, which takes all the outputs from the CloudFormation stack and writes them to a manifest file.
- 30:20 There’s also a plugin for using Step Functions, where you define your state machine, and after you deploy, the functions are wired up.
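Plugins are enabled from the config file; a hedged sketch (these were real community plugin names at the time, but verify the current ones against the plugin list):

```yaml
# Illustrative plugins block in serverless.yml
plugins:
  - serverless-plugin-typescript   # compile TypeScript handlers on deploy
  - serverless-plugin-warmup       # ping functions on a schedule to keep them warm
```

Each plugin hooks into the framework’s lifecycle events, which is how they can extend or override the default deploy behaviour.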
Is the serverless framework open source?
- 30:55 It started as an open-source project, and will always be open source at https://github.com/serverless/serverless/
- 31:15 It’s getting to the stage where it’s almost entirely run by the open source community.
- 31:20 We have some great community core maintainers that are really helping.
- 31:25 If you’re interested in getting involved - we love new ideas, feature requests - just open up an issue or a PR.