Building a Serverless Laravel queue processor on GCP
Queue processing is a standard way in Laravel to offload time-consuming tasks from your synchronous request handling. Things like image processing, PDF generation and sending emails shouldn’t slow down your responses.
Using queues improves the experience for your customers through reduced latency and a faster Time To Last Byte.
Queue workers are traditionally long-running processes that continuously check for new tasks to handle. A dedicated VM, for instance, could handle this workload, but when the queue is empty you still pay for that instance, even though it sits idle most of the time. And if you launch a lot of jobs at once, scaling a VM dynamically is not as straightforward as it is with serverless components.
In this article, I will show you a way to change this behavior into an event-driven system using serverless components that scale automatically on Google Cloud Platform.
Before you start
Be sure to read my previous article on how to use Pub/Sub as a Laravel Queue driver. I will assume you already have a Laravel project that populates a Pub/Sub topic and go from there.
Going serverless
You have a couple of options to link a Pub/Sub topic to other components. I will use Cloud Functions and Cloud Run which are both serverless products.
The first option for handling queue jobs is Cloud Functions. At the time of writing, these are the supported runtimes:
- Node.js 10, 12 and 14
- Python 3.7 and 3.8
- Go 1.11 and 1.13
- Java 11
- .NET Core 3.1
- Ruby 2.6 and 2.7
So no PHP runtime for now. Cloud Functions are a good option if you don’t mind rewriting some of your codebase in one of the languages above. They scale automatically and are very easy to get started with. I like to use Cloud Functions to chop up applications into smaller parts that are easy to manage and change without impacting other parts: email sending, post-upload tasks and so on.
So let’s set this up. Head over to the Google Cloud Console and create a Pub/Sub topic. To connect this topic to a Cloud Function, hit the ‘Trigger Cloud Function’ button.
On the next screen, choose a name and a region for your function. The Pub/Sub trigger is automatically created.
In the advanced settings you can configure the memory, environment variables and a VPC access connector to reach other resources in your project. You will need the latter if, for example, you want to connect to a Cloud SQL instance.
The last step is to add your code to the Cloud Function. As a proof of concept, I will just output the content of the Pub/Sub message. My go-to language in this case is Python. Note that Pub/Sub message data is base64 encoded, so we need to decode it to see the content:
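A minimal sketch of what this could look like, using the standard Python background-function signature for Pub/Sub triggers (the function name is just a placeholder):

```python
import base64

def handle_job(event, context):
    """Triggered by a message published to the Pub/Sub topic.

    Args:
        event (dict): The Pub/Sub event payload.
        context: Metadata about the event.
    """
    # The message data arrives base64 encoded.
    payload = base64.b64decode(event['data']).decode('utf-8')
    print(payload)
```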
After creating the Cloud Function this way, a push subscription is automatically created for you. This will invoke the function every time a message is published to the topic.
Using the code from my previous article, the output would be this:
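Your exact output will differ, but a standard Laravel queue payload is shaped roughly like this (trimmed for readability, with the serialized job abbreviated):

```json
{
    "displayName": "App\\Jobs\\EchoOutput",
    "job": "Illuminate\\Queue\\CallQueuedHandler@call",
    "maxTries": null,
    "data": {
        "commandName": "App\\Jobs\\EchoOutput",
        "command": "O:19:\"App\\Jobs\\EchoOutput\"…"
    }
}
```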
As you can see, you get the Job information as the payload of the Pub/Sub message. How you process it is up to you.
Next, we will take a look at Cloud Run.
Cloud Run queue workers
Migrating part of your code to other languages might be a deal breaker. Luckily, we have the option to run PHP Job code from our Laravel project on a serverless component: Cloud Run.
For this to work, we need to build a container capable of running PHP code. I start from an Nginx base image, then install PHP 7.4 and the dependencies needed to run Laravel.
The great part of working with containers and Cloud Run is that you can use built-in tools: Cloud Build triggers to automatically deploy your code and Container Registry to store your images.
The focus of this article is queue processing, so I will skip that part for now. Perhaps in a future post I can discuss how to build and run Laravel Apps on Cloud Run.
Assuming you have a working container, the next step is to create an endpoint we can connect the Pub/Sub topic to. In a real scenario, this could be one generic endpoint, or many endpoints to handle all kinds of jobs.
A quick and dirty way of getting started is to create a jobs route in your routes/web.php file:
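A minimal sketch of such a route, assuming the payload format produced by the Pub/Sub queue driver from my previous article (the route path and decoding details are illustrative):

```php
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

Route::post('/jobs', function (Request $request) {
    // Pub/Sub push wraps the message in a JSON envelope;
    // the actual message data is base64 encoded.
    $payload = json_decode(
        base64_decode($request->input('message.data')),
        true
    );

    // Rebuild the queued Job from the serialized command and run it.
    $job = unserialize($payload['data']['command']);
    $job->handle();

    // A 2xx response acknowledges the Pub/Sub message.
    // Don't forget to exempt this route from CSRF verification.
    return response()->noContent();
});
```

In a real application you would move this into a controller and verify that the request actually originates from your subscription.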
This will take the request coming from Pub/Sub and trigger the execution of the EchoOutput Job. Just a reminder from my previous post: this Job receives the current datetime and echoes it back once it is processed.
The only thing missing is the link between the Pub/Sub topic and the Cloud Run instance. This is done by creating a Pub/Sub Push subscription.
Connecting the pieces
Get the URL of your Cloud Run Instance and go to the Pub/Sub topic. Create a new subscription that pushes each Pub/Sub message to your Cloud Run service:
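You can do this in the console, or with gcloud; here is what that could look like (the topic, subscription name and URL are placeholders):

```sh
gcloud pubsub subscriptions create laravel-jobs-sub \
  --topic=laravel-jobs \
  --push-endpoint=https://my-laravel-app-abc123-ew.a.run.app/jobs \
  --ack-deadline=600
```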
To keep things simple, we will not use authentication at this time. If you do need it, simply tick the authentication box and select a service account with at least the ‘Cloud Run Invoker’ IAM role.
Time for a spin. Call the code that dispatches the Job; this pushes a message to Pub/Sub, which in turn triggers the Cloud Run service.
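For example, from a tinker session (assuming the EchoOutput Job from the previous article takes the datetime as its constructor argument):

```php
use App\Jobs\EchoOutput;

// The Pub/Sub queue driver publishes this Job to the topic.
EchoOutput::dispatch(now()->toDateTimeString());
```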
If all goes well, you will see the echoed datetime appear in the Cloud Run logs.
With this setup, there are a few rules you need to keep in mind. A Pub/Sub message needs to be acknowledged by returning an HTTP success status code. If you return an error code, Pub/Sub will deliver the same message again.
By default you have 10 seconds to process the request and return a valid response. This acknowledgement deadline can be increased up to 600 seconds (10 minutes). If your processing takes longer than that, you either need to chunk the work into smaller pieces or implement your own retry logic in case of failure.
You will also need to align the Cloud Run timeout and concurrency settings to make sure the container can handle the load and scale when needed. Cloud Monitoring is a really helpful tool for analyzing and troubleshooting this.
That’s it! You have built a highly scalable, event-driven Laravel Queue processing workflow that no longer requires a VM that listens constantly and sits idle most of the time.
But don’t stop here. You can use another Pub/Sub topic to catch the failed jobs and process them any way you like. Or connect multiple subscriptions to the same Pub/Sub topic to not only process the Job, but also log execution. Or send out an alert when too many Jobs are failing.
Thanks for reading! If you ever want to chat on building awesome cloud stuff, ping us at www.vbridge.eu