How to build a web app based on the Symfony PHP framework and Google Cloud Platform

In web app development that uses cloud computing resources, it is necessary to choose a tech stack that will allow for scalability. We also need to bear in mind that cloud-based apps are different from the typical apps deployed in hosting environments or on dedicated servers. In this article, I will discuss how to use Google Kubernetes Engine and analyze typical problems and their solutions for both approaches: server-based and cloud-based. I hope this will help you to bring your web app idea to life.

 

Problem #1: storing files uploaded by users

At first glance, storing files may seem straightforward. In a server-based solution, we create a folder on the server, set the appropriate permissions (remember not to allow execution!), and assign access to the user that runs the application.

Unfortunately, such a solution is impractical when we use containerization in the project. Bear in mind that containers in Kubernetes are dynamically replicated. When we observe high traffic in the application, we create another replica that is an exact copy of those already running.

Now, consider the following scenario. In the application, we store 100 files uploaded by users (in a subfolder of the public folder, which is typical for the Symfony PHP framework). In a traditional server solution, this does not make a huge difference. But how does it work in Kubernetes? Let us assume we start with one replica. It means that we have one copy of the files.

The client launches a marketing campaign, and our application gets featured on television and radio. Before the campaign, we had around 100 requests per second, but we suddenly have 10,000 requests. Kubernetes has limited resources per pod—let us say 300 MB of RAM. The higher number of requests increases memory load, and the Horizontal Pod Autoscaler (HPA)—the Kubernetes mechanism for scaling pods horizontally—decides to increase the number of replicas to four.
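This scaling behavior is declared rather than scripted. A minimal HPA manifest scaling on memory could look like the sketch below (the deployment name and thresholds are assumptions, not values from the original setup):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: symfony-app-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: symfony-app              # hypothetical deployment name
  minReplicas: 1
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80   # scale out when average memory usage exceeds 80%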

Now, the application responds in a reasonable time. But what happened underneath? Our 100 user-uploaded files had to be “copied” to the new replicas, and they will be deleted along with those replicas when traffic decreases (e.g., at night). For 100 files, this is not a significant operation, but imagine 100 GB of files. Replica creation would take exceedingly long, users would experience slow response times, and they would leave the application. They would simply find it too slow.

 

Solution: store files on an external platform

In this case, it makes sense to move the files to a Content Delivery Network (CDN), a network of interconnected servers that accelerates the loading of web pages, particularly for applications with substantial data requirements. Our replicas then deal only with the code; they are not concerned with the files uploaded and deleted by users. In our application, we utilized the cloud storage service available in the GCP infrastructure. Besides providing a CDN, GCP also enables automatic backups and a quick restore to a specified point in time, addressing the issue of disappearing files. So, here the problem solves itself.

But how do we access files in the CDN without explicitly indicating that we are using one? In our application, we opted for the flysystem-bundle package. It is based on the Flysystem library, which provides a filesystem abstraction for PHP applications. The package configuration is straightforward. We specify the type of adapter and the bucket name, as in the code snippet below:

flysystem:
    storages:
        upload.storage:
            adapter: gcloud
            options:
                client: 'gcloud_client_service'
                bucket: '%env(FLYSYSTEM_GCS_DATA_COPY_BUCKET)%'

Retrieving data from the CDN is just as straightforward:

use League\Flysystem\FilesystemException;
use League\Flysystem\FilesystemOperator;
use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\Filesystem\Filesystem;
use Symfony\Component\HttpFoundation\BinaryFileResponse;
use Symfony\Component\HttpKernel\Exception\ServiceUnavailableHttpException;

final class GetUploadedFileController extends AbstractController
{
    public function __construct(
        private readonly FilesystemOperator $uploadStorage,
    ) {}

    public function __invoke(string $filename): BinaryFileResponse
    {
        try {
            // The operator returns the file as a stream, so we never load it fully into memory.
            $fileStream = $this->uploadStorage->readStream($filename);
        } catch (FilesystemException) {
            throw new ServiceUnavailableHttpException();
        }

        // Create a temporary file and copy the stream into it chunk by chunk.
        $tmpName = (new Filesystem())->tempnam(sys_get_temp_dir(), 'tmp_', $filename);
        $tmpFile = fopen($tmpName, 'wb+');

        if (!is_resource($tmpFile)) {
            throw new \RuntimeException();
        }

        stream_copy_to_stream($fileStream, $tmpFile);
        fclose($tmpFile);

        return $this->file($tmpName, $filename);
    }
}

As seen in the code snippet, the filesystem operator is injected through the constructor via dependency injection. We look up a file by its name (note: we must ensure each filename is unique), then create a temporary file and return it to the user. It is important to note that the filesystem operator returns the file to us as a stream, which is highly efficient—we do not load the entire file into the operating memory; instead, we read it on the fly.
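The write path looks much the same. Below is a minimal sketch of the upload side, assuming a hypothetical UploadFileController (not from the original application) that uses the same upload.storage configured above; writeStream() keeps large uploads streamed as well:

use League\Flysystem\FilesystemException;
use League\Flysystem\FilesystemOperator;
use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\HttpKernel\Exception\ServiceUnavailableHttpException;

// Hypothetical controller, for illustration only.
final class UploadFileController extends AbstractController
{
    public function __construct(
        private readonly FilesystemOperator $uploadStorage,
    ) {}

    public function __invoke(Request $request): Response
    {
        $uploadedFile = $request->files->get('file');

        if (null === $uploadedFile) {
            return new Response('No file uploaded.', Response::HTTP_BAD_REQUEST);
        }

        // Prefix with a unique id so filenames never collide in the bucket.
        $filename = uniqid('', true) . '_' . $uploadedFile->getClientOriginalName();

        $stream = fopen($uploadedFile->getPathname(), 'rb');

        if (!is_resource($stream)) {
            throw new \RuntimeException();
        }

        try {
            $this->uploadStorage->writeStream($filename, $stream);
        } catch (FilesystemException) {
            throw new ServiceUnavailableHttpException();
        } finally {
            if (is_resource($stream)) {
                fclose($stream);
            }
        }

        return new Response($filename, Response::HTTP_CREATED);
    }
}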

Problem #2: asynchronous tasks

Another important aspect to address is asynchronous tasks. Unfortunately, PHP does not natively allow executing processes “in the background.” A well-known solution for asynchronous tasks is Symfony Messenger, which enables delegating tasks for synchronous or asynchronous execution.

Synchronous tasks can be useful when the same task needs to be executed in multiple parts of a system and you want to encapsulate it in a separate handler class.

However, the real game-changer is asynchronous tasks. Let us imagine a situation where a major marketing campaign is underway. The client wants to encourage new users to use the paid section of the service by generating 100,000 discount codes. Executing such a task synchronously would lock the system for several minutes. Obviously, such a situation occurring during peak usage hours could negatively impact the brand’s image.

Here asynchronous tasks come in handy. When dealing with such tasks, we are not interested in when they will be executed. The crucial aspect is the result: the codes need to be generated. Whether they are available in ten seconds or ten hours is less important. After all, everyone plans marketing campaigns well in advance. This is where asynchronous tasks find their ideal application. We inform the system that we want our 100K discount codes but we do not need them right now. The queue of asynchronous tasks will wait for a less busy time (e.g., at night) and then generate what we need. Of course, tasks in the queue can be assigned different priorities.
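In Symfony Messenger terms, this boils down to a message class, a handler, and transport routing. Below is a minimal sketch, assuming a hypothetical GenerateDiscountCodes message, an async transport behind MESSENGER_TRANSPORT_DSN, and an injected MessageBusInterface instance $bus:

use Symfony\Component\Messenger\Attribute\AsMessageHandler;

// src/Message/GenerateDiscountCodes.php (hypothetical message class)
final class GenerateDiscountCodes
{
    public function __construct(
        public readonly int $amount,
    ) {}
}

// src/MessageHandler/GenerateDiscountCodesHandler.php
#[AsMessageHandler]
final class GenerateDiscountCodesHandler
{
    public function __invoke(GenerateDiscountCodes $message): void
    {
        // Generate $message->amount discount codes here, e.g., in batches,
        // so a single failure does not void the whole run.
    }
}

// Anywhere in the application: dispatch and return immediately.
$bus->dispatch(new GenerateDiscountCodes(100_000));

The routing decides whether the message is handled synchronously or asynchronously (config/packages/messenger.yaml, sketch):

framework:
    messenger:
        transports:
            async: '%env(MESSENGER_TRANSPORT_DSN)%'
        routing:
            'App\Message\GenerateDiscountCodes': async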

 

Classic solution

In the case of a traditional server infrastructure, the queue is a process operating within the system. The queue consumer, which is the part of the application collecting data from the queue and executing tasks, is also a separate process running in the background. This is all fine, but we are still operating on a single server, and as a result, we have shared resources that are not infinite.
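In practice, such a consumer process is usually kept alive by a process manager like Supervisor, as in the sketch below (the program name, path, and transport name are assumptions):

[program:messenger-consume]
command=php /var/www/app/bin/console messenger:consume async --time-limit=3600
numprocs=2
autostart=true
autorestart=true
process_name=%(program_name)s_%(process_num)02d

The --time-limit option makes the worker exit and restart periodically, which protects long-running PHP processes against memory leaks.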

The pros of a classic solution:

  • Tasks are not executed synchronously (relieving the server during peak task loads).

 

The cons of a classic solution:

  • We ultimately execute tasks on the same server, which can lead to a decrease in performance or non-execution of tasks if resources are overstretched.
  • We need to configure the queue process.

 

Solution using GCP

In our application, we have chosen to leverage the Pub/Sub service, which enables publishing and subscribing to tasks in a queue. Consequently, we move the entire queue handling process outside the application’s resources, providing a substantial performance improvement. But what about the consumer? Here, we revisit Kubernetes. Operating within containers allows us to run the queue consumer as a separate container built from our application code. With a single Docker image, the configuration remains identical, but the consumer operates as an independent entity with its own resources, thus avoiding any burden on the application’s resources.
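In Kubernetes terms, this can be a second Deployment that uses the same image but a different command, sketched below (the image name, transport name, and resource figures are assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: messenger-consumer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: messenger-consumer
  template:
    metadata:
      labels:
        app: messenger-consumer
    spec:
      containers:
        - name: consumer
          image: registry.example.com/symfony-app:latest   # same image as the app
          command: ["php", "bin/console", "messenger:consume", "async", "--time-limit=3600"]
          resources:
            limits:
              memory: "300Mi"   # the consumer's own budget, separate from the app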

The pros of such a solution:

  • Queue handling moved to an external service.
  • The consumer extracted into a separate container that has its own resources.

 

Its cons:

  • Pub/Sub faces challenges with executing delayed tasks (we encountered this issue in the implementation of our application, leading us to move some tasks to recurring tasks).
  • From the consumer’s perspective, we do not see the baseline load on the rest of the system, so we do not know whether resource-intensive database operations will slow it down.

 

Problem #3: recurring tasks

There is no application without recurring tasks. There are always tasks that should be executed at regular intervals. In the case of an application working with company data, this could include a cyclic data refresh from the Central Statistical Office’s (Główny Urząd Statystyczny, GUS) BIR service. Such a task ensures the accuracy of the data we store. It should definitely be a recurring task, because no one wants to click a button for each of the thousands of saved companies just to read and potentially update data from the GUS database.

 

Classic solution

In server-side applications, we usually address the issue of recurring tasks by setting up a crontab, which is a mechanism for cyclically running commands. However, it is important to consider that a crontab operates on the same server, utilizing resources shared with the application. In the case of frequently executed tasks, this can become burdensome.
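A typical entry could look like the one below (the console command app:refresh-gus-data is a hypothetical example):

# Refresh company data from the GUS BIR service every day at 3 a.m.
0 3 * * * php /var/www/app/bin/console app:refresh-gus-data >> /var/log/app/gus-refresh.log 2>&1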

Pros:

  • We determine the execution time of the task.

 

Cons:

  • Monitoring the operation of cyclical tasks is very rudimentary. Of course, external monitoring tools can be used, but these are additional services installed on the server.
  • We rely on the resources of a single server, so we must have a sufficient “reserve” of them. Most resources will remain unused during normal operation, yet they still generate costs.

 

Solution using Kubernetes

Once again, Kubernetes comes to the rescue. Since we have a ready Docker image, we can run cyclical tasks from it as Kubernetes CronJobs. Each task is invoked with its own resources, and consequently, the application will not even feel their impact. When a specific task is due, its container is created, and upon completion, it is shut down.
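A minimal CronJob sketch for the GUS refresh mentioned earlier (again, the image name, command, and schedule are assumptions):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: gus-data-refresh
spec:
  schedule: "0 3 * * *"              # every day at 3 a.m.
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: gus-data-refresh
              image: registry.example.com/symfony-app:latest   # same image as the app
              command: ["php", "bin/console", "app:refresh-gus-data"]
              resources:
                limits:
                  memory: "256Mi"    # the task's own budget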

Pros:

  • Tasks have separate resources.
  • We can specify the execution time for tasks.
  • There is no need to oversize the server resources.

 

Cons:

  • Costs: in a cloud solution, we pay for the computing resources actually utilized.

 

Problem #4: costs

Maintaining an application will always mean incurring costs. We cannot avoid this. Of course, one can opt for free hosting, but let us be honest—the performance is usually terrible. Moreover, free hosting often has terms of service that prohibit commercial use.

 

Classic solution

In the classic approach, before deploying the application, we need to calculate the required resources. It is important to remember that we should account for the aforementioned queue systems or periodic tasks. In some extreme scenarios, it might turn out that our calculations show we will typically use only 10% of available resources, but during periods of increased activity in cyclic tasks and queues, resources will be insufficient. In such a scenario, we should reconsider implementation details or the timing of cyclic task execution.

Pros:

  • Server solutions are cheaper.
  • Configuration for server-based solutions is relatively straightforward.

 

Cons:

  • In the case of performance issues, we have limited scaling options. We can, of course, move the application to a “more powerful” server, but that will increase costs.

 

Cloud solution

A cloud solution provides us with much greater resource flexibility. After all, we determine the resources for each specific service, whether that is the application itself, the queue consumer, or a cyclical task.

Pros:

  • High scalability flexibility.

 

Cons:

  • The costs of cloud solutions are typically much higher than server-based solutions.
  • The configuration of a cloud solution is more complex: it usually requires specialized knowledge about the specific cloud service provider and cannot be done without DevOps expertise.

Problem #5: downtime

We have no control over downtime. Of course, we should limit downtime and errors caused by application code, but let us be honest: there is no application without bugs. There are only applications where bugs have not been discovered yet (which often indicates low popularity).

 

Classic solution

In classic solutions, we typically purchase SLA packages from server providers. Alternatively, we may have our own IT department take care of the server. Naturally, we have no control over random events such as power outages, server room fires (yes, this is a reference to OVH), or interruptions in Internet connectivity. More expensive hosting providers guarantee backups and availability, but issues can still arise.

 

Cloud solution

Cloud service providers guarantee high service availability (often at a level of 99.9%). It is worth noting that these are usually the biggest players in the market, such as Google, Microsoft, and Amazon. Such companies have incomparably better infrastructure and huge teams responding to outages. With an application running in the cloud, we can sleep peacefully.

 

How to build a web app—summary

Using cloud solutions in web application development provides us with many optimization possibilities. When choosing infrastructure, we must, of course, estimate the predicted traffic. It makes no sense to place web apps that will be used by ten users a day in an expensive cloud environment.

When deciding on a cloud solution, we must remember the implementation differences arising from such architecture. We must not forget about replication: it is undoubtedly an advantage of such a solution, but it brings consequences related to data synchronization between containers, which makes it also a “curse” at the web app design stage.

Still, if we overcome these challenges, web app development in the cloud will have more pros than a classic, server-based approach.
