
Cutting Through the Hype: When Serverless Works and When It Doesn’t

Jul 18th, 2018 6:00am

CircleCI sponsored this post.

There’s plenty of positive buzz around serverless architectures and how they can help development teams cut costs and offload management tasks. There are also misconceptions about what serverless can do, and when and how to use it. To dig into the practical uses for serverless architectures, Rob Zuber, CTO of CircleCI, spoke with Nate Taggart, CEO of Stackery, which offers a serverless operations console that helps development teams build serverless applications. Below are some of the highlights from the conversation — to learn more, watch the video.

Rob: Can you give us a quick overview of what it means to be serverless?

Rob Zuber
Rob Zuber is a 20-year veteran of software startups — a four-time founder and a three-time CTO. Since joining CircleCI as CTO three and a half years ago, Rob has seen the company through its $18 million Series B and $31 million Series C and delivered on product innovation at scale. Over that same period, Rob has grown CircleCI’s engineering team four-fold as it has been recognized as one of the Bay Area’s Best Places to Work by the “San Francisco Business Times.” Prior to joining CircleCI, Rob was the CTO and co-founder of Distiller, a continuous integration and deployment platform for mobile applications acquired by CircleCI in 2014. Before Distiller, he co-founded Copious, an online social marketplace. Prior to Copious, Rob was the CTO and co-founder of Yoohoot, a technology company that enabled local businesses to connect with nearby consumers, which was acquired by Appconomy in 2011. Rob holds a Bachelor’s degree in Applied Science from Queen’s University in Kingston, Ontario.

Nate: I think the best place to start is to acknowledge that “serverless” is the world’s dumbest name. There are servers — your code is going to run on servers.

Amazon has a great way to phrase this: You outsource the undifferentiated heavy lifting of managing servers. The idea here is that you focus on your application, while you ship your code to the cloud provider with a serverless solution. They handle the scaling, availability and orchestration, while you focus on application development and delivery.

Rob: Where should people get started with serverless computing? What are the easiest applications to model in a serverless world?

Nate: We see most companies start with low-visibility, low-criticality workloads to test the waters. It’s almost always a background task — something like a cron job that runs once a night or once a week on an AWS EC2 instance. This is a great entry point for your first serverless application.

What happens with the serverless version is that your function starts on demand. You pay for the few seconds or minutes that it’s running — then it shuts off, and you aren’t paying for the rest of the time. The cost savings are what people point to, but the real advantage is not having to manage the infrastructure that these little services run on.
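For concreteness, here is a minimal sketch of that kind of background task as an AWS Lambda handler, assuming the nightly trigger is configured separately (for example, an EventBridge schedule rule); the cleanup_stale_records helper is hypothetical and stands in for whatever the job actually does.

import json

def cleanup_stale_records():
    # Placeholder for the actual nightly work (pruning old data, reports, etc.).
    return {"records_removed": 0}

def handler(event, context):
    # Runs only when the schedule fires; you pay for the seconds it executes
    # and nothing while it sits idle.
    result = cleanup_stale_records()
    print(json.dumps({"job": "nightly-cleanup", **result}))
    return result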

Once people start with these tasks, they look at higher-visibility and more mission-critical applications for serverless. Typically the path forward is using serverless applications to back an API or as part of a microservices pattern. Instead of deciding to rebuild a whole application so that it can run on serverless, you can take little components of the application and migrate them to serverless.

Rob: I think it’s hard for people to conceptualize serverless for something like APIs in the same way they can see the value for a cron job. They don’t think of an API as a function, since it’s always on and always listening. What’s the mental connection that people need to make if they’re going to think about serverless for APIs?

Nate: I think we’re talking about functions in a few different ways here. You have your code, your building blocks and your components. You’ve written a code function, and now we’re going to want to run that on demand. This doesn’t seem terribly confusing. But what gets a little tricky is that these ephemeral compute instances can encapsulate more code than a single literal code block.

What you can do is build an application and then run that application on demand. We can decide that when an API endpoint gets hit, it’ll trigger a code block to run — that’s the function-as-a-service model. We can also get more sophisticated: we can build an entire application, ship it into whatever compute function we want to use and then decide that when the API endpoint gets hit, we’ll route the request through the application — just as we would with a long-lived server application. Then we’ll select the right code block to run within that code.
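As a rough illustration of those two approaches, the sketch below assumes an Amazon API Gateway proxy-style event handed to a single AWS Lambda function; the route table and the get_user handler are hypothetical, not a prescribed layout.

import json

# Pattern 1: one endpoint triggers one small code block (function as a service).
def get_user(event):
    return {"statusCode": 200, "body": json.dumps({"user": "example"})}

# Pattern 2: ship a whole application and route inside it, much as a
# long-lived server would.
ROUTES = {
    ("GET", "/users"): get_user,
    # ("POST", "/orders"): create_order, ...
}

def handler(event, context):
    key = (event.get("httpMethod"), event.get("path"))
    route = ROUTES.get(key)
    if route is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return route(event)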

Rob: What are the misconceptions about serverless? What ideas about serverless don’t quite line up with what it offers?

Nate: Serverless is certainly high up in the hype cycle right now, and there’s a lot of buzz. As people start to embrace it, we hear some common misconceptions. One of the biggest is that serverless is “no ops” and you don’t have to manage anything on the infrastructure side. That’s just patently untrue. There are certainly some ops responsibilities that you can outsource: you no longer have to manage orchestration for the application, for example, and things like availability and load balancing happen under the covers as part of the managed service.

When we talk about serverless, we’re really talking about compute — and that’s it. But no real application is being built with only compute. You’re going to have dependencies, third-party services, internal services, data stores and networking needs. These are all pieces that you still have to manage.
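To make that concrete, here is a minimal sketch of a function whose compute is serverless but whose surroundings are not. It assumes a hypothetical DynamoDB table named "orders"; the table itself, the IAM permissions to read it and any VPC or networking configuration all remain things you provision and operate.

import os
import boto3

# Hypothetical data store: the table, its capacity and its access policy
# are still managed by you, even though the compute is not.
TABLE_NAME = os.environ.get("ORDERS_TABLE", "orders")
dynamodb = boto3.resource("dynamodb")

def handler(event, context):
    table = dynamodb.Table(TABLE_NAME)
    item = table.get_item(Key={"order_id": event["order_id"]})
    return item.get("Item", {})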

Rob: Who shouldn’t use serverless? What are the bad-use cases?

Nate: One of the obvious cases is a long-running, very predictable workload: say, a three-hour batch job that runs every day. If that’s the case, and you’re good at spinning up a server, running the job and then shutting the server down, that’s frankly probably a better solution.

If you need a lot of resources, like high memory and lots of disk space, serverless is probably not a great approach. But if you can’t predict the volume and you’re doing lots of small transactional workloads, then serverless is really good.
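A back-of-the-envelope comparison, using purely illustrative rates rather than any provider's current pricing, shows why the economics tend to break this way (note too that many serverless platforms cap how long a single invocation may run, so a multi-hour job may not fit in one function at all).

# Illustrative rates only; not current pricing for any provider.
GB_SECOND_PRICE = 0.0000167   # assumed per-GB-second serverless compute rate
INSTANCE_HOURLY = 0.10        # assumed hourly rate for a small server

# Predictable batch: 3 hours/day at 4 GB of memory, 30 days/month.
batch_serverless = 3 * 3600 * 4 * GB_SECOND_PRICE * 30
batch_server = INSTANCE_HOURLY * 3 * 30          # spin up, run, shut down daily

# Bursty API: 100k short requests/month, 200 ms each at 0.5 GB.
bursty_serverless = 100_000 * 0.2 * 0.5 * GB_SECOND_PRICE
bursty_server = INSTANCE_HOURLY * 24 * 30        # always-on instance

print(f"batch:  serverless ${batch_serverless:.2f} vs server ${batch_server:.2f}")
print(f"bursty: serverless ${bursty_serverless:.2f} vs server ${bursty_server:.2f}")

With these assumed numbers the predictable batch job is cheaper on a plain server, while the bursty, small-transaction workload is dramatically cheaper run serverless, which matches the rule of thumb above.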

