
6 Best Practices for High-Performance Serverless Engineering

Nov 13th, 2018 6:00am by Toby Fee

Stackery sponsored this post.

When you write your first few lambdas, performance is the last thing on your mind. Permissions, security, identity and access management (IAM) roles and triggers all conspire to make even your first couple of lambdas a struggle; just getting a “hello world” deployment up and working is an accomplishment. But once your users begin to rely on the services your lambdas provide, it’s time to focus on high-performance serverless.

Here are some key things to remember when you’re trying to produce high-performance serverless applications.

1. Observability

Toby Fee
Toby is a community developer at Stackery. Her roles and experience combine working as a software engineer, writer and technology instructor, building interesting projects with emerging tools and sharing her findings with the world. Prior to joining Stackery, Toby was an engineer at NWEA, Vacasa and New Relic.

Serverless handles scaling really well. But as scale interacts with complexity, slowdowns and bugs are inevitable. I’ll be frank: these can be a bear if you don’t plan for observability from the start.

Observability is a lot more than just logging or metrics — both of which serverless has in abundance. You can dig into CloudWatch to get logging information from every single time your lambda was called, and the AWS console provides ample ways to see averages of execution time, memory use and other key metrics.

But observability is how well we can analyze a system from the outside without cracking open its internals. Neither logging nor metrics can really do that since logging offers information too fine-grained to tell you how you’re generally handling requests, and metrics only really point to the symptoms of problems and not their causes. The solution is true instrumentation that samples tracing information and gives you general and outlier data.
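If you do roll your own, the core idea is small: wrap each handler so every invocation emits one structured trace event with timing and outcome. A minimal sketch, assuming nothing vendor-specific — the event shape and the `emit` function here are illustrative placeholders, not any particular tracing product’s API:

```javascript
// Wrap a Lambda-style handler so each invocation emits one trace event.
// `emit` is a stand-in for shipping data to your tracing backend;
// logging JSON to stdout (and thus CloudWatch) is a workable start.
function instrument(name, handler) {
  return async (event, context) => {
    const start = Date.now();
    const trace = { name, requestId: context && context.awsRequestId };
    try {
      const result = await handler(event, context);
      trace.outcome = "success";
      return result;
    } catch (err) {
      trace.outcome = "error";
      trace.error = err.message;
      throw err;
    } finally {
      trace.durationMs = Date.now() - start;
      emit(trace);
    }
  };
}

function emit(trace) {
  console.log(JSON.stringify(trace));
}
```

One trace event per invocation is what lets you sample, aggregate and hunt outliers later, which neither raw logs nor pre-averaged metrics support well.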

If you want to write your own instrumentation, Honeycomb.io can collect all your data and present it in a useful dashboard. If you’re looking for something that’s built for serverless and instruments your stacks automatically, Epsagon can give you a great overview *and* great detail.

2. Think About What You’re Requesting

Often I see Lambdas that, once triggered, start looking around for beaucoup information. They make web requests, ping other lambdas or check profiles all to get some context about what’s being requested.

The fastest, cheapest improvement that can be made here is to log what the Lambda was called with and look through every key. A lambda triggered by an S3 upload should *not* need to query that same S3 bucket: it was called with all the object data it needs!
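To make that concrete, here is a hypothetical S3-triggered handler. The notification event already carries the bucket name, object key and size, so there is no reason to call S3 again just to learn what was uploaded:

```javascript
// S3 put-notification events arrive with the object metadata inlined.
// Reading it off the event costs nothing; a GetObject round trip does not.
const handler = async (event) => {
  return event.Records.map((record) => ({
    bucket: record.s3.bucket.name,
    key: record.s3.object.key,
    sizeBytes: record.s3.object.size, // already in the event; no S3 call needed
  }));
};

exports.handler = handler;
```

You only need to reach back to S3 when you need the object *body*, and even then the event tells you exactly which key to fetch.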

3. Don’t Rewrite Your Code

Here is a list of common falsehoods:

  • “Be sure to make a copy of your array before iterating over it, it’s more performative to mutate a copy”
  • “Don’t ever concatenate strings, even if it’s just two strings still join an array instead, this will save memory”
  • “Global variables will perform a lot worse than well-scoped variables”

All of these were offered by well-meaning people who wanted to help others write more performant JavaScript. All ignore one basic fact: JavaScript is a high-level language.

In practice, this means that the JavaScript code you write is interpreted by an “engine” that will try to execute your code in the most performant way possible. In fact, there are multiple competing teams trying to produce engines that run your code even faster. Recommendations like those above, which claim that “quirks” of JavaScript require special work to perform well, can only apply to a single version of one JavaScript engine.

Is optimization impossible? Not at all! You can always improve a program by reducing the amount of stuff it has to do:

  • Combining multiple requests to other services;
  • Stopping loops if/when you have enough matches;
  • Returning useful error information;
  • Failing gracefully.

But beyond these most basic of best practices, rewriting your code is not the way to solve performance problems.
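The first two items above — combining requests and stopping early — can be sketched with plain functions. The service calls here are stand-ins for real network requests:

```javascript
// 1. Combine independent requests: fire them together instead of awaiting
//    each one in sequence. Total latency becomes the slowest call, not the sum.
async function fetchContext(getUser, getProfile) {
  const [user, profile] = await Promise.all([getUser(), getProfile()]);
  return { user, profile };
}

// 2. Stop looping once you have enough matches instead of scanning everything.
function firstMatches(items, predicate, limit) {
  const matches = [];
  for (const item of items) {
    if (predicate(item)) {
      matches.push(item);
      if (matches.length >= limit) break; // enough; stop early
    }
  }
  return matches;
}
```

Neither trick is engine-specific: both simply reduce the amount of work the program does, which is the kind of optimization that survives engine upgrades.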

4. Start Locally

Everyone I know has had a similar experience early in their serverless development: right when your lambda starts doing things that are a bit complicated, you find yourself making dozens of small tweaks to your configuration and function code. With each change, you have to wait for the code to deploy and the rest of the stack to go live. Instead of “save, build, refresh,” your dev cycle looks more like “save, open the console, deploy, wait, refresh, wait, refresh, wait, wait, refresh.”

At least with AWS Lambda, this no longer needs to slow you down: the AWS Serverless Application Model command line interface (SAM CLI) can replicate lambdas and API endpoints, along with a number of other resources, all in a local Docker container.
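Assuming your stack is described in a SAM template, the dev loop above collapses to a few local commands (the function name and event file here are placeholders):

```shell
# Build the functions defined in template.yaml
sam build

# Invoke one lambda locally, in a Docker container, with a sample event
sam local invoke HelloFunction --event events/s3-upload.json

# Or stand up the API endpoints on localhost for "save, refresh" iteration
sam local start-api
```

`sam local invoke` is especially handy for event-triggered lambdas, since you can replay a captured event payload as many times as you like without touching the deployed stack.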

5. Local Will Never Replicate Your Whole Stack

To truly simulate your serverless flow you’ll need to have a stack to deploy your code to.

The specifics may vary by team size, but essentially you’ll need some kind of staging or test version of your stack where you can test out your code. On the permissions side, AWS has all the permission levels you need to let the bulk of your developers “propose” changes to production but change things as needed on test. This is also a good place to mention that Stackery makes it very easy to stand up the same stack in multiple environments, start to finish.

6. Manage Code, Not Configuration

AWS consists of, at first glance, a lot of menus, and plenty of its services can be configured through those visual menus in the AWS console. The problems with such an approach should be obvious: when trying to build replica environments (see the rule above), hand copying is involved, some changes aren’t documented and, after a crisis, many changes aren’t particularly well understood by the people who made them!

How to fix this mess? CloudFormation is AWS’s path out of config that is only stored in the UI: its Serverless Application Model (SAM) lets you create YAML that defines your stack in a file you can use to track changes.
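A minimal SAM template looks like this — one function plus the API event that triggers it. The resource name, path and code location are illustrative placeholders:

```yaml
# Sketch of a SAM template: the whole stack lives in version control,
# not in console menus.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs18.x
      CodeUri: ./src
      Events:
        HelloApi:
          Type: Api
          Properties:
            Path: /hello
            Method: get
```

Because the template is just a file, replicating an environment becomes a redeploy of the same YAML rather than an afternoon of clicking through menus.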

Unfamiliar with SAM, or even just YAML in general? Another selling point for Stackery is its ability to create templates automatically and update them as your stack changes.

Conclusion

High performance is a value, not a finish line. As your project begins to succeed, your users can tell you what needs to be improved. Follow the guidance above to know where to start as you build a stack that hums and scales.

Feature image via Pixabay.

TNS owner Insight Partners is an investor in: Honeycomb.io.