Reducing latency, saving bandwidth, and acting on real-time data have emerged as top priorities for businesses across industries. Hence the unprecedented growth in edge computing adoption. Its distributed computing architecture brings computation and data storage closer to the sources of data. EdgeWorkers simplifies the process by helping developers create and deploy microservices on edge servers deployed across the globe.
As explained on Akamai’s site, “EdgeWorkers enables developers to create and deploy microservices across more than a quarter of a million edge servers deployed around the globe. When development teams activate code at the edge, they push data, insights, and logic closer to their end users.” There are various edge worker providers, such as Cloudflare, AWS CloudFront, and Akamai, whose applications can be deployed on edge servers.
In this blog, I will focus on the Cloudflare Worker implementation: how we deploy code on a Cloudflare Worker and how we measure its performance.
How to deploy a project on Cloudflare Workers?
A “Cloudflare Worker” runs on Cloudflare’s edge network using V8, the same JavaScript engine developed for Google Chrome. It can securely run scripts from multiple customers on its servers in the same way that Chrome runs scripts from multiple websites. Below are the steps to run JavaScript using Cloudflare Workers once we sign up on Cloudflare.
- Install Wrangler (CLI): the Wrangler CLI helps us manage projects on Cloudflare Workers from the terminal. To install it (ensure you have npm installed), run npm install -g wrangler. Using the Wrangler CLI, you can create and publish your projects on Cloudflare Workers.
- Authenticate Wrangler by running “wrangler login” (it opens the Cloudflare sign-in page in the default browser, where you log in with your Cloudflare credentials).
- Create your project by running “wrangler init” followed by your project name (it creates a project directory with the same name as your project).
Now, wrangler init has generated the files below in your project directory:
- wrangler.toml: your Wrangler configuration file.
- index.js (in /src): a simple Worker written in JavaScript that returns “Hello World!” as the response.
- package.json: a basic Node configuration file, generated only if indicated in the wrangler init command.
- tsconfig.json: a TypeScript configuration that includes the Workers types, generated only if indicated in the wrangler init command.
- Run “wrangler dev” to start a local development server for your project.
- In order to write your project code, you need to modify the fetch method in the index.js file, which must return a Response object. Below is the code snippet for returning “Hello World!” as the response from your Worker (a slightly extended variant is sketched after these deployment steps):
export default {
  async fetch(request) {
    return new Response("Hello World!");
  },
};
- Publish your project by running “wrangler publish”. It publishes your Worker to your workers.dev subdomain (the subdomain name is configured while signing up on Cloudflare).
You can preview your Worker at <YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev.
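As promised above, here is a slightly extended sketch showing how the fetch handler can inspect the incoming request and branch on it. The /api/time route is a hypothetical placeholder; you could drop something like this into index.js and test it with wrangler dev:
export default {
  async fetch(request) {
    const { pathname } = new URL(request.url);

    if (pathname === "/api/time") {
      // Return a small JSON payload directly from the edge.
      return new Response(JSON.stringify({ now: new Date().toISOString() }), {
        headers: { "Content-Type": "application/json" },
      });
    }

    if (request.method !== "GET") {
      return new Response("Method not allowed", { status: 405 });
    }

    return new Response("Hello World!");
  },
};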
There are some useful use cases, such as A/B testing with same-URL direct access and a fetch HTML / CORS header proxy. Once the Worker is published to the Cloudflare network, logs can be accessed in the Workers section of your Cloudflare account.
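As an illustration, below is a minimal sketch of the cookie-based A/B testing pattern, loosely following the approach in Cloudflare’s published examples. The cookie name and the /variant-a and /variant-b origin paths are placeholders you would replace with your own:
export default {
  async fetch(request) {
    const COOKIE_NAME = "ab-test-variant"; // placeholder cookie name
    const cookie = request.headers.get("Cookie") || "";
    const url = new URL(request.url);

    // Keep returning visitors in the bucket stored in their cookie.
    let variant = cookie.includes(`${COOKIE_NAME}=b`) ? "b"
                : cookie.includes(`${COOKIE_NAME}=a`) ? "a"
                : null;

    // New visitors are assigned a bucket at random.
    const isNewVisitor = variant === null;
    if (isNewVisitor) variant = Math.random() < 0.5 ? "a" : "b";

    // Serve the chosen variant from the origin under the same public URL.
    url.pathname = variant === "a" ? "/variant-a" : "/variant-b";
    const response = await fetch(url.toString(), request);

    if (!isNewVisitor) return response;

    // Persist the bucket so the user sees a consistent experience next time.
    const withCookie = new Response(response.body, response);
    withCookie.headers.append("Set-Cookie", `${COOKIE_NAME}=${variant}; Path=/`);
    return withCookie;
  },
};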
Performance measurement
It is important to do a performance analysis of the application deployed on EdgeWorkers, as it helps you improve the performance of your web applications.
Below are the performance metrics that can be used to benchmark a web app, measure its performance, and analyse the user experience (a browser-side sketch for capturing a few of them follows the list).
- Time to First Byte (TTFB)
TTFB is an important parameter for measuring the responsiveness of any network resource. It measures the time between the user sending a request to a webpage and receiving the first byte of the response.
- First Contentful Paint (FCP)
The FCP metric measures the time from when the page starts loading until any part of the page’s content is rendered on the screen. Here, content can be text, an image, or any other HTML content.
- Largest Contentful Paint (LCP)
LCP helps measure perceived load speed, as it measures the time from when the page starts loading until the largest content element on the webpage (typically the largest image or text block) is rendered on the screen.
- First Input Delay (FID)
FID is an important measurement of load responsiveness. It measures the amount of time it takes for the browser to start processing event handlers once a user interacts with a website for the first time.
- Time to Interactive (TTI)
TTI measures the time from when a page starts loading until its main sub-resources have loaded and the page is capable of reliably responding to user input quickly.
- Total Blocking Time (TBT)
TBT measures the degree of non-interactivity of a page before it becomes reliably interactive. It measures the total amount of time between First Contentful Paint (FCP) and Time to Interactive (TTI) during which the main thread was blocked.
- Cumulative Layout Shift (CLS)
CLS helps quantify how often users experience unexpected layout shifts. It measures the largest burst of layout shift scores for the unexpected layout shifts that occur over the lifespan of a page.
- Interaction to Next Paint (INP)
INP indicates the overall interaction latency of a page by reporting roughly the single longest interaction observed while a user is on the page.
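To make these definitions concrete, here is the browser-side sketch mentioned above. It captures a few of these metrics using the standard Navigation Timing and PerformanceObserver APIs and is deliberately simplified (for example, the CLS calculation just sums shifts rather than using session windows); in practice, a library such as Google’s web-vitals package handles the edge cases for you:
// Time to First Byte, from the Navigation Timing entry.
const [nav] = performance.getEntriesByType("navigation");
if (nav) console.log("TTFB (ms):", nav.responseStart - nav.startTime);

// First Contentful Paint.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.name === "first-contentful-paint") {
      console.log("FCP (ms):", entry.startTime);
    }
  }
}).observe({ type: "paint", buffered: true });

// Largest Contentful Paint (the last candidate before user input is the final value).
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const last = entries[entries.length - 1];
  console.log("LCP candidate (ms):", last.startTime);
}).observe({ type: "largest-contentful-paint", buffered: true });

// Simplified Cumulative Layout Shift: sum of shifts not caused by recent user input.
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) cls += entry.value;
  }
  console.log("CLS so far:", cls);
}).observe({ type: "layout-shift", buffered: true });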
Tools to measure EdgeWorkers performance
As EdgeWorkers deploy the code to many edge locations across the world, analysing performance while simulating requests from different regions is the right way to measure the performance of projects deployed using EdgeWorkers. Below are a few tools that will help you with it:
Pingdom
Pingdom is a website monitoring tool that assists in analysing the performance of a page by tracking variables such as the time it takes to receive the first byte, load DOM content, complete an SSL handshake, wait time, etc. You can submit requests from various regions and monitor the report details. Using Pingdom real user monitoring, you can also define performance thresholds for what is deemed acceptable, ensuring a flawless user experience for your clients.
PageSpeed Insights
Regardless of where a webpage came from, PageSpeed Insights offers real-time performance analysis for both mobile and desktop platforms. It incorporates information from the Chrome User Experience Report (CrUX), which provides user experience metrics based on real-world Chrome users.
It examines performance by taking measurements for factors like First Contentful Paint, Time to Interactive, Largest Contentful Paint, Total Blocking Time, etc. Even better, this tool offers an optimization score that assesses how closely a webpage adheres to performance best practices.
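For automation, the same data can be pulled programmatically. Below is a rough sketch of querying the public PageSpeed Insights API (v5); the endpoint and response field names reflect the public API documentation as I understand it, the target URL is a placeholder, and an API key is recommended for anything beyond light usage:
// Query PageSpeed Insights for a page and print lab and field metrics.
async function runPageSpeed(pageUrl) {
  const endpoint =
    "https://www.googleapis.com/pagespeedonline/v5/runPagespeed" +
    `?url=${encodeURIComponent(pageUrl)}&strategy=mobile`;

  const response = await fetch(endpoint);
  const report = await response.json();

  // Lab data from Lighthouse and field data from CrUX, when available.
  console.log("Performance score:",
    report.lighthouseResult?.categories?.performance?.score);
  console.log("Field LCP percentile (ms):",
    report.loadingExperience?.metrics?.LARGEST_CONTENTFUL_PAINT_MS?.percentile);
}

runPageSpeed("https://example.com"); // placeholder URL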
GTmetrix
It is one of the most widely used tools for measuring website speed. Internally, the tool employs PageSpeed Insights and YSlow to produce performance rankings and a thorough report on your website’s current state. It analyses performance using performance metrics similar to PageSpeed Insights, along with some browser timing characteristics, such as connection timing and DOM interaction time.
The tool allows you to simulate requests from 30 test servers dispersed across six regions on various web browsers. After analysing the performance, it also aids in pinpointing the problems influencing the website’s performance. To evaluate the effectiveness of projects running on EdgeWorkers, additional tools can be utilized, such as WebPageTest (WPT), Dotcom-Monitor, Uptrends, and Yellow Lab Tools.
Another way to measure the performance of websites is the Core Web Vitals (CWV) report. It combines Google tools to audit, improve, and monitor your website effectively. Tools like Search Console and PageSpeed Insights provide page-level performance data, but they lack the ability to operate at the macro level. Combining the real-user experiences in the Chrome UX Report (CrUX) dataset with the web technology detections in HTTP Archive gives us a glimpse into how architectural decisions, such as the choice of CMS platform or JavaScript framework, play a role in sites’ CWV performance. Merging these datasets yields a dashboard known as the Core Web Vitals Technology Report.
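If you want the raw field data behind such reports, the CrUX dataset can also be queried directly. Below is a hypothetical sketch using the public CrUX API; the API key and origin are placeholders, and the exact response shape should be confirmed against the current API reference:
// Fetch real-user Core Web Vitals field data for an origin from the CrUX API.
async function queryCrux(origin, apiKey) {
  const endpoint =
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${apiKey}`;

  const response = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ origin }),
  });
  const data = await response.json();

  // 75th-percentile LCP for real Chrome users visiting this origin.
  const lcp = data.record?.metrics?.largest_contentful_paint?.percentiles?.p75;
  console.log(`p75 LCP for ${origin}:`, lcp);
  return data;
}

queryCrux("https://example.com", "<YOUR_API_KEY>"); // placeholder values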
Conclusion
This blog covered the Cloudflare Workers implementation and how we measure a Worker’s performance. Cloudflare Workers, as a platform, helps us run serverless functions as close as possible to the end user.
The serverless code itself is ‘cached’ on the Workers network and runs when it receives the right type of request. I discussed how to run serverless functions on the Cloudflare network, the performance metrics that can be used to analyse Workers’ performance more effectively, and some useful tools to measure those metrics.
Now that you are aware of the Workers implementation, you can implement some use cases to reduce latency and execute requests faster, and you can use the performance metrics and tools to evaluate the performance of those use cases.
Next, you can think of scenarios where EdgeWorkers can solve problems or make things faster, and then implement them using Cloudflare Workers.
Reference Links:
https://www.techtarget.com/searchdatacenter/definition/edge-computing
https://en.wikipedia.org/wiki/Edge_computing
https://developer.akamai.com/akamai-edgeworkers-overview#:~:text=EdgeWorkers
https://blog.cloudflare.com/introducing-cloudflare-workers/