There, I said it. Someone needed to say it. Cloudflare Workers are kind of terrible, today, 11-09-2023.
Specifically, the deployment and publishing workflow, language support, and Wrangler are all bad. The speed, dependability, and concept are all top notch, but everything surrounding the ecosystem is just not good.
Full disclosure: Cloudflare hosts this website, and I am not sponsored by or affiliated with them in any way other than paying for their services.
Now, I am not saying any of this to be sensational or to specifically put out a hit piece on Cloudflare as a business or their products. I really like Cloudflare as a business and I love the innovative spirit at the core of their ethos. I am writing this article specifically to highlight how their product compares to both my expectations of a modern tool and my experience with other similar platforms. You will see my bias throughout this article, and I will always concede that my bias and experiences are not always reflective of a broader population of people.
Join me as we walk through what Cloudflare Workers are, what they are meant to do, and how they just seem to fall short of being something truly amazing.
What are Cloudflare Workers?
In the rapidly evolving landscape of software development, the shift from unwieldy monolithic architectures to the sleeker, more scalable microservices paradigm has been nothing short of revolutionary. Within this transformation, edge services have emerged as a cutting-edge technological advancement, capturing the interest of DevOps and development communities alike. These services are designed to be highly efficient and nimble, enabling developers to craft robust applications with minimal code overhead.
The essence of edge services lies in their ability to offload the complexities of server management and infrastructure maintenance. By leveraging shared computing resources distributed across various locations, developers can deploy code into a virtualized environment, often referred to as "the ether." This approach abstracts away the underlying hardware concerns, allowing developers to focus on writing trigger-based, streamlined code that responds to specific events or requests.
Among the various edge service technologies available today, AWS Lambda stands out as a prominent example. It epitomizes the notion of serverless architecture, where you write code tailored to process particular request objects. Once the code is deployed to AWS, you set up the necessary triggers, and your application becomes operational in a remarkably short span of time. AWS Lambda's serverless framework takes away the burden of server provisioning and scaling, providing a seamless, maintenance-free platform for running applications.
The implication of this technology is profound for DevOps practices. It encourages a lean approach to coding, promotes continuous integration and delivery, and aligns perfectly with the ethos of DevOps, which advocates for automated, repeatable processes. The operational overhead traditionally associated with deploying and managing applications is greatly reduced, allowing teams to release features and updates more rapidly and reliably.
As edge services like AWS Lambda become more integrated into the fabric of modern software development, they are setting a new standard for building and running applications. They are not just an incremental improvement over the old ways but represent a significant leap forward in how we think about and execute software delivery. This is a thrilling time for DevOps evangelists, as these technologies align perfectly with the goal of fostering more efficient, resilient, and scalable systems.
How Does This Website Work?
My website operates on an array of streamlined tools. I utilize Hugo for static site generation, NodeJS to run various scripts, and Gulp to watch for file changes and execute script sequences. These are bolstered by foundational web technologies: HTML, CSS, and JavaScript. The deployment process is elegantly automated; any push to a development branch on GitHub triggers Cloudflare to conduct a build via their Pages service, producing a non-production version of the site for quality assurance.
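As an aside, the local build loop is nothing exotic. Below is a minimal sketch of the kind of Gulp watch task I mean; the globs and Hugo flags are illustrative rather than my exact gulpfile, and it assumes ts-node is installed so Gulp can load a TypeScript gulpfile.

```ts
// gulpfile.ts -- a sketch of a "watch and rebuild" task (assumes ts-node is available).
import { watch, series } from "gulp";
import { exec } from "node:child_process";

// Shell out to Hugo to rebuild the static site; the flags are illustrative.
function buildSite(done: (err?: Error) => void): void {
  exec("hugo --minify", (err) => done(err ?? undefined));
}

// Re-run the build whenever content or assets change.
export default function dev(): void {
  watch(["content/**/*", "assets/**/*"], series(buildSite));
}
```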
Upon validating the changes, a pull request merges the development branch into the main repository. Cloudflare Pages engages once more, seamlessly performing the build and updating the live site. The objective is to maintain agility and speed, allowing the focus to remain on content creation rather than the complexities of website operation.
My prior experience with content management systems like WordPress and Drupal, while extensive, led me to seek a lighter alternative for a content-centric blog without the cumbersome overhead.
This workflow is in line with GitOps principles, wherein Git serves as the central mechanism for triggering updates to my website. Aside from crafting and merging pull requests, I am freed from manual processes to publish new content online—a process I find quite remarkable.
Enter Cloudflare Workers
My aspiration to keep things streamlined continually drives me to explore ways to enhance my website's functionality without the burden of managing the extensive infrastructure typical of a large-scale enterprise site. This quest for efficiency is what initially attracted me to the offerings of Cloudflare Pages.
Cloudflare's suite boasts an array of compelling tools, such as KV for key-value data storage, Queues for task management, R2 for object storage, among others. These services collectively eliminate the necessity for traditional, resource-intensive systems like Redis, Kafka, relational databases, or S3 storage solutions.
Moreover, Cloudflare provides a compute service known as Workers, designed to operate at the network edge. These Workers enable the execution of custom code, seamlessly integrating with Cloudflare's additional services like KV. The gateway to leveraging these capabilities is through Wrangler, a NodeJS package that serves as a conduit to Cloudflare's APIs. Starting is as simple as executing a wrangler login command and entering your credentials, setting the stage for a smooth and swift development experience.
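To give a sense of how small these things can be, here is a rough sketch of a Worker that reads a value out of KV. The ARTICLES binding name and the key are made up for illustration; the binding would be declared in wrangler.toml, and the types come from @cloudflare/workers-types.

```ts
// A minimal sketch of a Worker with a KV binding; "ARTICLES" is a hypothetical
// binding name that would be declared in wrangler.toml.
export interface Env {
  ARTICLES: KVNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Look the key up in the bound KV namespace and return whatever is there.
    const value = await env.ARTICLES.get("latest-post");
    return new Response(value ?? "not found", { status: value ? 200 : 404 });
  },
};
```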
Where Things Start to Break Down
Here are the requirements for the micro-servicey application I wanted to write. I needed a simple place to pull down a JSON file that is compiled by Hugo and served by Cloudflare Pages. That JSON file holds information for scheduled Tweets (or X's or whatever they are now). Once the data is pulled down, randomize the list and then call the Twitter API to post the tweet. This needs to run on a cron timer, and the API keys need to be kept secret. This operation is very synchronous and straightforward, so I do not need anything fancy.
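Stripped of any platform specifics, the whole job is roughly the sketch below. The URL, the JSON shape, and the Twitter call are placeholders rather than my actual code, and the real Twitter v2 endpoint needs proper user-context OAuth rather than the bare token shown here.

```ts
// The entire job, platform-agnostic: pull the JSON, pick a random entry, post it.
// URL, JSON shape, and auth are placeholders; real Twitter auth is elided.
interface ScheduledTweet {
  text: string;
}

async function postScheduledTweet(twitterToken: string): Promise<void> {
  // 1. Pull the JSON file that Hugo compiles and Cloudflare Pages serves.
  const res = await fetch("https://example.com/scheduled-tweets.json"); // placeholder URL
  const tweets: ScheduledTweet[] = await res.json();

  // 2. Pick a random tweet from the list.
  const pick = tweets[Math.floor(Math.random() * tweets.length)];

  // 3. Post it. The v2 create-tweet endpoint is shown, but the auth is simplified.
  await fetch("https://api.twitter.com/2/tweets", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${twitterToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ text: pick.text }),
  });
}
```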
I did some googling before I started on this project to see if anyone else had done something similar. With any project, for me, this is a good frame of reference to see if a new idea is at least somewhat in line with what other, smarter people are already doing. I found this example project, which was the source of inspiration for the code I ended up using.
Taking the information I found on the internet plus my requirements, I set to work on creating a Twitter bot to post new articles into the X-O-Sphere. What I found when writing this bot was anything but simple and streamlined.
Event-Driven JavaScript
Workers in Cloudflare's ecosystem are predominantly authored in TypeScript or JavaScript, and are designed to thrive in an event-driven framework. With a background in JavaScript and experience in other programming languages, one would expect a relatively easy transition. Nevertheless, the dearth of applicable examples and the necessity to navigate the intricacies of Cloudflare's specific Wrangler tool present a hurdle.
By contrast, AWS Lambda allows the flexibility to use your preferred programming language, operating under a transparent contract with AWS's system. This contract ensures that Lambda provides a request object and expects a corresponding response object from you, granting you full control over the process within.
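For comparison, a Node.js Lambda handler is essentially just a function that takes the event AWS hands you and returns a result. A minimal sketch, assuming an API Gateway-style trigger:

```ts
// A minimal sketch of the Lambda contract: take the event object AWS provides,
// return a response object. The event shape depends on the trigger you configure.
export const handler = async (event: { body?: string }) => {
  // Whatever work the event calls for happens here; then hand back a response.
  return {
    statusCode: 200,
    body: JSON.stringify({ received: event.body ?? null }),
  };
};
```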
However, with Cloudflare Workers, there's an expectation to integrate your application into their proprietary event system, which requires employing their JavaScript framework tailored to event-driven operations. This specificity constrains the portability and flexibility of microservice-like "jobs," binding them to the Cloudflare infrastructure.
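Concretely, a cron-style Worker has to be expressed through Cloudflare's scheduled event handler. A sketch of that shape, with the body stubbed out; the secret binding name is hypothetical and the types come from @cloudflare/workers-types.

```ts
// The shape Cloudflare expects for a cron-triggered Worker (modular syntax).
// The actual work is stubbed out; secrets arrive through the env parameter.
export interface Env {
  TWITTER_API_KEY: string; // hypothetical secret binding
}

export default {
  async scheduled(controller: ScheduledController, env: Env, ctx: ExecutionContext): Promise<void> {
    // waitUntil keeps the async work alive until it settles.
    ctx.waitUntil(
      (async () => {
        // ...pull the JSON, randomize, call the Twitter API with env.TWITTER_API_KEY...
      })()
    );
  },
};
```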
This requirement is at odds with the philosophy of Cloudflare Pages, which is much more inclusive, inviting developers to bring their existing workflows. With Cloudflare Pages, you simply provide a directory of your website; the choice to utilize their build system is optional, keeping the terms of engagement straightforward: supply the content, and they manage the hosting.
As I engaged with this system to write my worker, it felt like the only way to accomplish what I was looking for was to spend weeks training on more advanced JavaScript and TypeScript paradigms in order to interact with this system in a meaningful way.
Obviously I didn't do that and just brute forced my way to victory. Because of my brute forcing, I now have a script that I only partially understand and I am afraid to make any changes to.
Deployments?
As stated earlier, I am looking to follow a GitOps model for my website. This allows me to channel all of my activities through a singular pipeline with a singular workflow.
The Cloudflare Workers SDLC pattern utilizes Wrangler locally for development. You simply run wrangler dev, which will compile your TS, load your environment variables, and finally give you a webserver that allows you to hook into your worker locally. This mechanism works fairly well; but, since I am a bad JavaScript developer (at least on the async side of things), I was able to crash the process multiple times, needing to restart it. Additionally, the auto-reloading behavior is inconsistent. When you change the app, auto-reloading works as advertised, but making environment variable changes in .dev.vars requires a process restart.
When you are done writing and testing your worker, the primary out-of-the-box way to get your code into production is through wrangler deploy (formerly wrangler publish). When working from a local desktop, this is not a big deal and again works as advertised on the box. This is an absolute NIGHTMARE when working in a GitOps flow.
If you want to be able to run wrangler deploy from your CI/CD pipeline, you would need to recreate your wrangler login session on your CI/CD server to facilitate that interaction. They do publish a GitHub Action that can help in this space, but the real question here is WHY?
Cloudflare Pages pulls down my code when I push to GitHub. I do not have any workers or configuration information about Cloudflare living in my GitHub repository. Cloudflare is watching my repository and performing pulls on my behalf. This is exactly the interaction that makes sense for Cloudflare Workers as well, but it is not an option anywhere in their UI.
The real tragedy in all of this is my now disjointed delivery pipeline. If I need changes to go out in near real time with each other between the website and Cloudflare Workers, I need to manually coordinate that timing on my own instead of it being a singular cohesive pipeline getting all of my code into Cloudflare at roughly the same time.
Manually Triggering your Worker... pffft.
The job I have referenced here is a cron job. Through their wrangler.toml file you can specify how often you want the job to run using run-of-the-mill cron syntax. My worker is currently set up with this configuration:
crons = ["8 */6 * * *"] # Run every 6 hours on the 8th minute
I would expect any platform like this to have a manual button I can use to run my code for testing purposes. Instead, Cloudflare has you utilize the tried-and-tested method that every system administrator ever has used, which is:
- Set your cron job to run every minute
- Waste time waiting for it to run with anxious anticipation
- Quickly revert your change once it has run so it does not run again while you analyze the results
- Analyze the results (which is its own nightmare)
- Make code tweaks
- Rerun the cycle if needed
At the very least, on Linux I can just copy-pasta the cron command to the CLI if I want to run it manually. There is no button to do this unless I want to spend additional time programming a secure webhook directly into the worker, which is odd, bloated overhead that this worker doesn't need.
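For completeness, that webhook workaround would look something like the sketch below: a fetch handler bolted onto the same worker, gated by a shared secret. The header name, secret binding, and runJob helper are all hypothetical.

```ts
// A sketch of the "manual trigger" workaround: expose a fetch handler on the same
// worker that runs the cron job's logic when a shared secret is presented.
export interface Env {
  TRIGGER_SECRET: string; // hypothetical secret binding
}

// Stand-in for the pull-randomize-post logic the cron trigger normally runs.
async function runJob(env: Env): Promise<void> {
  // ...job logic...
}

export default {
  async scheduled(controller: ScheduledController, env: Env, ctx: ExecutionContext): Promise<void> {
    ctx.waitUntil(runJob(env));
  },

  async fetch(request: Request, env: Env): Promise<Response> {
    // Only honor requests that present the shared secret header.
    if (request.headers.get("x-trigger-secret") !== env.TRIGGER_SECRET) {
      return new Response("forbidden", { status: 403 });
    }
    await runJob(env);
    return new Response("job ran", { status: 200 });
  },
};
```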
Observability
It frankly doesn't exist except for smoke and mirrors.
The web UI is completely useless here, and running wrangler tail only provides useful information on a failure. You are not able to see if your app is throwing a false success because wrangler does not pull down logs for successes.
This is my #1 most frustrating aspect of this entire endeavor, and I would urge Cloudflare to focus here first if they were to make any much-needed updates quickly. Because of the level of frustration here, I am keeping this section short because I have zero nice things to say.
Conclusion
I don't normally get on the internet and provide a negative opinion on technology. I don't think it is my place to tear down the hard work of others, and I hope that if anyone from Cloudflare reads this, they do not view it in a purely negative light.
I am still using this service and it is still posting my tweets for me.
If I were in Cloudflare's position, I would spend some time with members of the community who are passionate about DevOps and delivery flow to get some real insight on how to improve this system so it works for a broader segment of the market. Right now, it feels like the system has too much personal design choice and not enough industry best practice backing the overall solution.
I also think there is a broader lesson for DevOps engineers everywhere. Consider your end users and their needs as part of your system design. It is clear that Cloudflare focused on a specific set of use cases for a specific set of users, which is actually OK. Because I am not one of those targeted use cases, my technology implementation feels janky at best, but that is entirely Cloudflare's prerogative. I have faith and confidence that Cloudflare will adapt and change as their product offering matures and they continue to receive feedback on how their users are really trying to utilize their platforms. The only way to adapt and change is to try something, put yourself out there, listen to feedback, decide which actions provide the most value, and make updates. That is the DevOps cycle in a nutshell.
Would I Recommend This To Other Developers?
Yes! But temper your expectations.
Pros: It's free-ish, it does what it says on the box even if the route to get there is convoluted, and it is running on the backbone of an industry leader in internet services.
Cons: You are going to do things the Cloudflare way so be ready to turn your world upside down to fit their mold.