
CloudFlare JavaScript Workers and the Multicloud

Aug 14th, 2018 9:25am

Server-side JavaScript isn’t new; it’s been around about as long as JavaScript has been running in clients like web browsers. The popularity of Node.js is making JavaScript on servers popular again, especially for constructs like Service Workers, which don’t have any user interface. Service Workers can intercept HTTP requests to your site and filter them, make subrequests, combine or conditionally route those requests, and return whatever final response you want.

Cloudflare Workers combines that with another interesting approach; Workers is a framework for running serverless apps written in JavaScript, using the W3C standard Service Workers API, in an environment that’s built on the V8 JavaScript engine (but isn’t Node.js because that would lose the advantage of the well-tested V8 sandbox).
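
To get a sense of what that looks like in practice, here is a minimal sketch of a Worker written against that Service Workers-style fetch event: it intercepts a request and answers it directly from the edge. The response body is just a placeholder.

```javascript
// Minimal Worker sketch: intercept the fetch event and respond from the
// edge without ever contacting an origin server.
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  // Swap this for `return fetch(request)` to pass the request through
  // to the origin unchanged.
  return new Response('Hello from the edge', {
    headers: { 'content-type': 'text/plain' }
  })
}
```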

“The way we think of Workers is serverless on a massively distributed scale,” Cloudflare CEO Matthew Prince told The New Stack. “You write code and it lives wherever it needs to; you shouldn’t even have to think about it. You want a network that makes intelligent routing decisions programmatically and routes the code to the right spot.”

That V8 environment runs on Cloudflare’s servers, an extremely distributed infrastructure for putting code at the edge. The company currently has 152 data centers around the world and plans to expand that to 200 by the end of the year, all of them in large population centers for the lowest latency when websites use Cloudflare as a CDN or for DDoS protection.

Running the Service Worker code on servers in Cloudflare data centers saves users the time and bandwidth it would take to download that code (which is particularly important for locations with poor connectivity and for devices like IoT systems that are constrained by processing power or battery life), without simply moving the workload to your own servers.


Cloudflare already works as a reverse proxy: traffic to your site is routed through Cloudflare infrastructure, which passes requests to your site and collects the responses to serve back to visitors. Workers are a simple way to do load balancing, security filtering, redirecting visitors from specific locations, sanitizing and validating data (for example, deduplicating content and rejecting badly formed requests before they even reach the server), or adding custom logic to choose which requests are cached.
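
As a concrete illustration of the filtering case, a sketch like the following could reject badly formed JSON before it ever reaches the origin; the exact validation rules would of course be application-specific.

```javascript
// Sketch: simple security filtering at the edge. Badly formed requests
// are rejected before they reach the origin server.
addEventListener('fetch', event => {
  event.respondWith(filterRequest(event.request))
})

async function filterRequest(request) {
  // Reject POSTs that claim to be JSON but aren't parseable.
  if (request.method === 'POST' &&
      (request.headers.get('Content-Type') || '').includes('application/json')) {
    const body = await request.clone().text()
    try {
      JSON.parse(body)
    } catch (e) {
      return new Response('Malformed JSON', { status: 400 })
    }
  }

  // Everything else passes through to the origin as usual.
  return fetch(request)
}
```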

Because Service Workers implement endpoints and event handlers rather than request and response hooks, they’re asynchronous and can even manipulate the cache. Workers can fetch and combine content from multiple services in parallel using the Fetch API (there’s a temporary limit of 50 subrequests, but that will be removed soon), or respond directly to requests without connecting to a server at all. The Streams API is supported for response bodies. You can use event.waitUntil() to set up asynchronous tasks that continue after the worker has sent its first response to the end user, which is useful for logging and analytics.
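
A sketch of those patterns together might look like this, with hypothetical upstream and logging URLs: two services are fetched in parallel and combined, and an analytics call runs via event.waitUntil() after the response has gone out.

```javascript
// Sketch: combine two upstream services in parallel and log asynchronously
// after the response is returned. The service URLs are hypothetical.
addEventListener('fetch', event => {
  event.respondWith(combine(event))
})

async function combine(event) {
  // Fetch both upstream services at once rather than one after the other.
  const [a, b] = await Promise.all([
    fetch('https://api-a.example.com/data'),
    fetch('https://api-b.example.com/data')
  ])
  const merged = { a: await a.json(), b: await b.json() }

  // Continue after the response has been sent, e.g. for analytics.
  event.waitUntil(
    fetch('https://logs.example.com/collect', {
      method: 'POST',
      body: JSON.stringify({ url: event.request.url, ts: Date.now() })
    })
  )

  return new Response(JSON.stringify(merged), {
    headers: { 'content-type': 'application/json' }
  })
}
```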

You could also use them to deploy quick fixes for bugs without having to reboot the server (because you can route visitors to the corrected code), or for developer testing of one specific service on a live site without exposing it to normal visitors. The same approach works for A/B testing, or for gradually moving traffic over to a new service so you can shut down an older system without any impact.
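
That traffic-shifting idea could be sketched roughly like this; both origin hostnames and the opt-in header are hypothetical placeholders. Developers opt in explicitly, while a small share of normal visitors is routed to the new backend.

```javascript
// Sketch: gradually shift traffic to a new backend, or expose it only
// to developers. Origin hostnames and the header name are hypothetical.
const NEW_ORIGIN = 'https://new.example.com'
const OLD_ORIGIN = 'https://old.example.com'

addEventListener('fetch', event => {
  event.respondWith(split(event.request))
})

async function split(request) {
  const url = new URL(request.url)

  // Developers opt in with a header; everyone else is split 10/90.
  const isDev = request.headers.get('X-Use-New-Backend') === '1'
  const useNew = isDev || Math.random() < 0.1

  const origin = useNew ? NEW_ORIGIN : OLD_ORIGIN
  // Forward method and headers; bodies would also need copying for POSTs.
  return fetch(origin + url.pathname + url.search, {
    method: request.method,
    headers: request.headers
  })
}
```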

By running JavaScript outside the browser, you also get more flexibility and security options, because you can perform authentication at the edge. Workers can restrict access to cloud storage buckets using signed URLs, or pre-validate the JSON Web Tokens used to authenticate to REST APIs, so browsers never see your API keys. It also simplifies handling CORS (cross-origin resource sharing).
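
As an illustration of edge authentication, here is a deliberately simplified sketch that only checks that a bearer token is present and unexpired; a real deployment would also verify the token’s signature (for example with the Web Crypto API) rather than trusting the decoded payload.

```javascript
// Sketch: pre-check a JWT at the edge before the request reaches an API.
// This only inspects the expiry claim; it does NOT verify the signature.
addEventListener('fetch', event => {
  event.respondWith(checkAuth(event.request))
})

async function checkAuth(request) {
  const auth = request.headers.get('Authorization') || ''
  const token = auth.replace(/^Bearer /, '')
  if (!token) {
    return new Response('Missing token', { status: 401 })
  }

  try {
    // A JWT is header.payload.signature, each part base64url-encoded.
    const payloadPart = token.split('.')[1].replace(/-/g, '+').replace(/_/g, '/')
    const payload = JSON.parse(atob(payloadPart))
    if (payload.exp && payload.exp * 1000 < Date.now()) {
      return new Response('Token expired', { status: 401 })
    }
  } catch (e) {
    return new Response('Malformed token', { status: 401 })
  }

  // Forward valid-looking requests to the API.
  return fetch(request)
}
```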

Initially, Workers are JavaScript-only and cost $0.50 per million requests (with a minimum $5 monthly bill), but Cloudflare will soon add support for C/C++, Go and Rust using WebAssembly.

Orchestrating Clouds

Prince has some ambitious ideas about using Workers to orchestrate cross-cloud workflows using emerging platforms like Kubernetes. Multicloud isn’t having different applications on different clouds, he explained; it’s running the same application across multiple clouds, either for availability and reliability or to take advantage of cheaper compute options. “As the feature set of each of the cloud providers gets to parity, with Kubernetes and so on, you can have a standardized platform across them. Workers can help by being the fabric that ensures you get incredibly high performance and a high degree of availability across providers.”

One Y Combinator startup is using a Cloudflare Worker to shift its workload between the different cloud providers that have given it free credits to try out their services. “They wrote a worker that queries the API for how much they have left on their account; as the balance burns down to zero, they switch over to another cloud service,” Prince told us. The same idea would let you place new workloads on the cheapest cloud, as well as keep up with pricing changes. “When a request comes in, a worker could check what the spot price of compute is across different clouds.”
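
That credit-burning pattern might be sketched something like the following; the billing endpoints, response fields and origin URLs are entirely hypothetical stand-ins for whatever APIs the providers actually expose.

```javascript
// Sketch: pick an origin based on remaining free credits. All endpoints,
// fields and hostnames here are hypothetical placeholders.
const ORIGINS = [
  { url: 'https://app.cloud-a.example.com', billing: 'https://billing.cloud-a.example.com/credits' },
  { url: 'https://app.cloud-b.example.com', billing: 'https://billing.cloud-b.example.com/credits' }
]

addEventListener('fetch', event => {
  event.respondWith(routeByCredits(event.request))
})

async function routeByCredits(request) {
  // Ask each provider how much credit is left (hypothetical APIs).
  const balances = await Promise.all(
    ORIGINS.map(o =>
      fetch(o.billing).then(r => r.json()).then(b => b.remaining).catch(() => 0)
    )
  )

  // Stay on the first provider while it still has credit, then switch.
  const target = balances[0] > 0 ? ORIGINS[0] : ORIGINS[1]
  const url = new URL(request.url)
  return fetch(target.url + url.pathname + url.search, {
    method: request.method,
    headers: request.headers
  })
}
```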

It’s also useful for policy requirements, like storing data in particular locations, he suggested. “If you have internal policy requirements where all user data from Germany has to be stored in Frankfurt, you can have the network intelligently make those routing decisions with fine-grained control.”
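
A minimal sketch of that kind of data-residency routing, assuming Cloudflare’s CF-IPCountry geolocation header is available on the request and using hypothetical origin hostnames:

```javascript
// Sketch: send requests from German users to an origin in Frankfurt.
// CF-IPCountry is Cloudflare's geolocation header; hostnames are
// hypothetical placeholders.
const DEFAULT_ORIGIN = 'https://app.example.com'
const FRANKFURT_ORIGIN = 'https://fra.app.example.com'

addEventListener('fetch', event => {
  event.respondWith(routeByCountry(event.request))
})

async function routeByCountry(request) {
  const country = request.headers.get('CF-IPCountry')
  const origin = country === 'DE' ? FRANKFURT_ORIGIN : DEFAULT_ORIGIN
  const url = new URL(request.url)
  return fetch(origin + url.pathname + url.search, {
    method: request.method,
    headers: request.headers
  })
}
```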

The problem with a true multicloud system will be moving not just the workload but the data. “We’re starting to think more about storage and how can we ensure people don’t get locked in the big cloud providers around storage,” Prince told us.

“An S3-compatible object store running at the edge of the network would be pretty simple; you could say ‘I want this data stored across these two different cloud providers, or across whichever two cloud providers have the best price at this moment in time’ and we could handle the logistics. The fabric would move the data around, so the data store would be where you need it to be at that particular time. If you moved your compute, we could make the data be available at the place where the user of the app is actually running it.”

Workers doesn’t yet have a good solution for data storage though, he admitted. “That’s the piece that’s still missing. We can do a data store but it’s kind of janky right now, it’s based on how we cache data.”

Workers run on requests before they reach Cloudflare’s cache, so worker responses aren’t cached, but any outgoing subrequests made with the Fetch API do go through the cache. That includes subrequests to sites that don’t use Cloudflare (as long as those servers return the correct caching headers), and Cloudflare is working on adding finer-grained control over caching through Workers. In the longer term, Prince suggested Cloudflare could build a distributed data service that would work across multiple cloud backends.
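
To make that caching behavior concrete, here is a small sketch: the worker’s assembled response is generated fresh on every request, while the subrequest to a third-party origin (a placeholder URL here) can be served from Cloudflare’s cache when that origin returns cacheable headers.

```javascript
// Sketch: the worker's own response isn't cached, but the subrequest
// below can be served from Cloudflare's cache if the third-party origin
// returns cacheable headers (e.g. Cache-Control). The URL is a placeholder.
addEventListener('fetch', event => {
  event.respondWith(handle(event.request))
})

async function handle(request) {
  // Subrequests via the Fetch API pass through Cloudflare's cache,
  // even for origins that aren't Cloudflare customers.
  const upstream = await fetch('https://static.example.org/data.json')

  // This assembled response is generated on every request.
  return new Response(await upstream.text(), {
    headers: { 'content-type': 'application/json' }
  })
}
```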

Before that could happen, cloud providers would have to drop their data egress charges, but Prince is confident that will happen. “Because at the end of the day, providers have to treat customers fairly. When Microsoft sends data to Cloudflare, it goes across a 50m fiber optic cable from their router to ours; neither of us pays for that, so why should our customers?”

Feature image via Pixabay.
