
Hey cool, you went serverless. Now you just have to worry about all those stale functions

No more unpatched servers, but the alternative is hardly watertight

For hackers, unpatched servers are the best thing since sliced bread. From Heartbleed to WannaCry, slow-to-update servers roll out the red carpet for attackers. Many of the most significant breaches were caused by unpatched servers, and analysts expect things only to get worse. Will we ever rid ourselves of the need to update these pesky servers?!

The answer, it appears, is yes! Serverless and its core component, Function-as-a-Service (FaaS), let us deploy small apps (functions) simply by submitting their code. There are clearly servers behind the scenes, but with serverless the pros running the platform take care of server management for us, including the patching, and they are doing a splendid job of closing security loopholes by updating these short-lived servers.

And yet in the shadows lurks another risk. Vulnerable libraries, fetched from registries such as Maven and npm, are embedded in our functions. As demonstrated by the Equifax breach, Spring Break and more, these packages are just as prevalent and can be just as vulnerable as servers.

These packages fall into our blind spot in the world of serverless. They are missed by both the platform and the application owners, landing in the twilight zone between infrastructure and code. As FaaS adoption rises and functions get deployed en masse, these functions can grow stale and vulnerable. So are unpatched functions the new unpatched servers?

Why not deploy a function?

Functions are small in scope and easy to write, can be deployed with a few simple commands, and then scale to practically any volume of traffic – with no effort on your part. If this simplicity isn't appealing enough, serverless is also incredibly cheap. Most FaaS platforms only charge for what you use, and do so at a minuscule 100ms granularity, driving the price way down. Serverless is the epitome of easy and cheap, and the tech world is embracing it at a rapid pace. This disruptive cost model flips the question of "what's worth deploying?" on its head.
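To get a feel for just how low that price goes, here's a rough back-of-the-envelope sketch in Python. The per-request and per-GB-second rates are assumptions loosely based on AWS Lambda's published list prices, so treat the figures as illustrative rather than a quote:

```python
# Back-of-the-envelope cost of a modest function on a Lambda-style platform.
# Illustrative rates only (assumed, roughly in line with AWS Lambda's list prices):
# $0.20 per million requests, $0.0000166667 per GB-second, billed in 100ms slices.
PRICE_PER_MILLION_REQUESTS = 0.20
PRICE_PER_GB_SECOND = 0.0000166667

invocations = 1_000_000          # requests per month
memory_gb = 128 / 1024           # a 128MB function
billed_seconds = 0.2             # 200ms per call, already a 100ms multiple

compute_cost = invocations * billed_seconds * memory_gb * PRICE_PER_GB_SECOND
request_cost = (invocations / 1_000_000) * PRICE_PER_MILLION_REQUESTS

print(f"compute: ${compute_cost:.2f}, requests: ${request_cost:.2f}, "
      f"total: ${compute_cost + request_cost:.2f} per month")
# ~ $0.42 + $0.20 = roughly $0.62 a month, before any free tier is applied
```

A million invocations a month for well under a dollar is why "why not deploy it?" becomes the default answer.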

In the "serverful" world, deploying code has significant costs – you need to work harder to deploy the code (which takes time), allocate ongoing compute resources (which costs money), set up constant capacity monitoring (more time), and on top of that you need to continuously patch these servers and secure them against the bad people out there. These costs mean you only deploy code that is worth deploying, providing enough value to justify the price.

In serverless, there's almost no reason not to deploy a function. It's super easy to do, costs practically nothing until the function is used in anger, and there's no need for ongoing capacity monitoring. Cheap, easy, beautiful. Indeed, you keep hearing about people turning their cron jobs into functions and posting production functions straight from a hackathon, because... why not?!

While deploying a function is easy, removing it is tricky, as you can never be sure who might be depending on it. Even if the function is rarely invoked, it may still be required for disaster recovery or an annual report, or perhaps it's simply a useful utility a colleague may need. This question gets harder to answer the longer a function has been around, as organisational knowledge about it and its role fades. The same challenge exists when decommissioning servers, but since functions don't cost us anything, there's little incentive to make an effort or take a risk.

And so, quickly but surely, you can see the horde of functions amassing. From little utilities to full-blown production services, your AWS Lambda list starts building up. Your invoice may remain flat, but your risk doesn't.
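If you want a sense of how big that horde has grown, a short inventory script helps. The sketch below uses boto3 to list Lambda functions with their runtime and last-modified date; the "stale after 180 days" threshold is an arbitrary assumption for illustration, not a rule:

```python
# Rough inventory of deployed Lambda functions, flagging ones that haven't
# been touched in a while. Assumes AWS credentials are already configured.
from datetime import datetime, timedelta, timezone

import boto3

STALE_AFTER = timedelta(days=180)  # arbitrary threshold for this sketch

client = boto3.client("lambda")
now = datetime.now(timezone.utc)

paginator = client.get_paginator("list_functions")
for page in paginator.paginate():
    for fn in page["Functions"]:
        # LastModified looks like "2018-01-15T10:00:00.000+0000"
        modified = datetime.strptime(fn["LastModified"], "%Y-%m-%dT%H:%M:%S.%f%z")
        age = now - modified
        marker = "STALE" if age > STALE_AFTER else "ok"
        print(f"{marker:5} {fn['FunctionName']:40} {fn['Runtime']:12} "
              f"last modified {age.days} days ago")
```

Run it against an account that has been using FaaS for a year or two and the list tends to be longer than anyone expects.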

The security cost of a function

What we're forgetting while indulging this new function-collecting hobby is that every function you deploy is a security liability.

For starters, every function exposes new interfaces that invoke your specific business logic. The exact access is defined by the configuration of the function or a front-end API gateway. In hacker terms, that means a bigger attack surface, and more opportunities to manipulate or exploit this business logic for unintended gain, such as scraping your product catalogue, reserving tickets to scalp or overwhelming a back-end system to deny your users service.
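As a concrete, if simplified, picture of what "new interface" means, here's a minimal Python sketch of an HTTP-triggered function sitting behind an API gateway. The event shape follows the usual Lambda proxy pattern, and the catalogue and page-size cap are hypothetical; the point is that every field an outside caller controls is part of your attack surface:

```python
import json

# Hypothetical catalogue-lookup function exposed through an API gateway.
# Everything in `event` arrives from the outside world – path, query string,
# headers, body – so it is all attack surface.
CATALOGUE = [{"sku": f"ITEM-{i}", "price": 10 + i} for i in range(500)]
MAX_PAGE_SIZE = 50  # assumed cap, so one caller can't pull everything at once


def handler(event, context):
    params = event.get("queryStringParameters") or {}

    try:
        page_size = int(params.get("page_size", 10))
    except (TypeError, ValueError):
        return {"statusCode": 400, "body": json.dumps({"error": "bad page_size"})}

    # Without the cap, a single caller could scrape the whole catalogue in one
    # request, or lean on the back end hard enough to deny other users service.
    page_size = max(1, min(page_size, MAX_PAGE_SIZE))

    return {"statusCode": 200, "body": json.dumps(CATALOGUE[:page_size])}
```

Multiply that by hundreds of functions, each with its own event shape and its own gateway configuration, and the surface area adds up quickly.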

Beyond business logic manipulation, every function may have security vulnerabilities, and many likely do. Depending on the function's permissions, attackers can exploit such vulnerabilities to steal customer data, hijack CPU cycles for Bitcoin mining, or penetrate deeper into your network. It doesn't matter what your function was supposed to do, only what permissions it was granted.


Permissions are another finicky beast that grows but never shrinks. Adding permissions is easy, but removing them is hard, as you never know what you might break. In addition, managing granular permissions is painful, resulting in most functions (and systems for that matter) being able to do more than they should be allowed to. These expanding permissions are not a serverless-specific problem, but FaaS exacerbates the risk by significantly growing the number of entities – functions – that require permission management.
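For a quick illustration of the gap between "can do more than it should" and least privilege, here's a hedged boto3 sketch that attaches an inline policy to a hypothetical function role. The role, table and policy names are made up for the example:

```python
import json

import boto3

iam = boto3.client("iam")

# A wildcard policy such as {"Action": "dynamodb:*", "Resource": "*"} is the
# easy default, and exactly what lets a compromised function roam the account.
# The scoped version below grants only what this hypothetical function needs:
# reads from a single table.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/orders",
        }
    ],
}

iam.put_role_policy(
    RoleName="orders-report-function-role",  # hypothetical, per-function role
    PolicyName="orders-read-only",
    PolicyDocument=json.dumps(least_privilege_policy),
)
```

Giving each function its own narrowly scoped role is more work up front, which is precisely why so few teams do it once the function count climbs into the hundreds.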

Last, and potentially worst, most functions contain open-source application dependencies. These libraries are statically embedded inside the function, and so they grow stale even as new versions of the library are released to the public. Over time, vulnerabilities are discovered in these older versions, including some that are very severe, and yet nothing in the serverless flow informs you they exist. To avoid this, you need to be diligent in patching your functions at scale and repeatedly updating them before attackers exploit them. Sound familiar?
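One low-tech way to spot the drift is to compare the versions pinned into a function's project against what the registry currently offers. The sketch below does that for a Python requirements.txt using PyPI's public JSON API; it only flags staleness, not known vulnerabilities, so treat it as a starting point rather than a scanner:

```python
# Flag pinned dependencies in a requirements.txt that have fallen behind the
# latest release on PyPI. Staleness isn't the same as vulnerability, but
# libraries nobody updates are where vulnerabilities quietly pile up.
import json
import urllib.request


def latest_version(package: str) -> str:
    with urllib.request.urlopen(f"https://pypi.org/pypi/{package}/json") as resp:
        return json.load(resp)["info"]["version"]


with open("requirements.txt") as f:
    for line in f:
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        package, pinned = line.split("==", 1)
        latest = latest_version(package)
        if pinned != latest:
            print(f"{package}: pinned {pinned}, latest is {latest}")
```

The same idea applies to npm, Maven and every other registry: the dependencies were frozen the day the function was deployed, and nothing will thaw them unless you do.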

Hopefully by now it's a bit clearer why a large number of functions is cause for concern. Part of this risk comes purely from having more of our code in production, but it's made worse by thinking code is all we have left to protect.

Sprinkles of infrastructure

Serverless has always been a controversial name, as there are obviously still servers running our functions. In practice, what it offers is "server-management-less", offloading the need to deal with elastic capacity and to keep our server operating systems secure. However, while you're not managing servers, you are in fact still managing infrastructure.

Above and beyond the code concerns, the security issues I mentioned earlier include three components – configuration, permission management and stale dependencies – that are not code problems at all. In fact, they look and behave an awful lot like the risks you need to tackle when managing servers.

The fact is, these are sprinkles of infrastructure strewn amid our functions. In true infrastructure-as-code fashion, they are often defined and packaged as part of the function's project, but to handle them properly we have to acknowledge them for what they are – infrastructure. This perspective will help guide us in managing this risk well, and at scale.

Beyond the infrastructure inside our functions, managing the functions themselves also qualifies as infra-management. We need to manage these just like we do our servers, VMs and containers, even if the underlying OS has been taken off our hands.

Where are we headed?

If the industry keeps its current trajectory, we're each likely to wake up next year surrounded by hundreds – if not thousands – of deployed functions. Many of these functions will have excessive permissions and stale embedded libraries, and their tenure will make it hard to delete them without breaking functionality that someone else depends on.

Fortunately, it's still very early days for serverless, and there are plenty of opportunities to correct course. It's important to be very conscious of what you deploy and why. Ask yourself whether a function is worth deploying, and stop to consider how you intend to track it, manage it and eventually retire it. Serverless is still a powerful new trend that you should leverage, but it needs to be done right.

Beyond that, there are tools and practices that can help. Tools like Terraform, platform KMSs and Snyk can help with some of the serverless infrastructure management, and best practices from early adopters like iRobot and Nordstrom can offer guidance. ®
