03 Common Serverless Architecture Patterns

What is Serverless architecture exactly?

According to the CNCF’s definition of Serverless computing, a Serverless architecture is one that solves problems by combining FaaS (Function as a Service) and BaaS (Backend as a Service) services. This definition makes Serverless more concrete, but it also invites some confusion and debate:

  • As demand and technology have evolved, other forms of Serverless computing have emerged alongside FaaS, such as Google Cloud Run, Alibaba Cloud’s application-centric Serverless Application Engine, and Serverless Kubernetes offerings. These services also provide elastic scaling and usage-based billing, expanding the Serverless computing camp;
  • To eliminate the impact of cold starts, FaaS services such as Alibaba Cloud Function Compute and AWS Lambda have introduced reserved (provisioned) instances, making them somewhat less purely “pay-as-you-go”;
  • Some traditionally server-based backend services have also launched Serverless products, such as Amazon Aurora Serverless and Alibaba Cloud’s Serverless HBase.

As a result, the boundaries of Serverless have become somewhat blurred, with many cloud services evolving in a Serverless direction. How can something this ambiguous guide us in solving business problems? The answer is that one fundamental idea of Serverless has never changed: maximize the focus on business logic. The other characteristics, such as not having to care about servers, automatic elasticity, and usage-based billing, all exist to serve this idea.

The well-known Serverless practitioner Ben Kehoe describes the Serverless-native mindset as a series of questions to ask when deciding what to work on:

  • What is my business?
  • Does doing this make my business stand out?
  • If not, why should I do it myself instead of letting someone else solve the problem?
  • Don’t solve technology problems before solving the business problem.

When practicing Serverless architecture, the most important thing is not which popular services and technologies to choose or which technical challenges to overcome, but to always keep the focus on business logic in mind. This makes it easier to choose the right technologies and services and to decide how to design the application architecture. Human energy is limited and organizational resources are limited; the Serverless mindset lets us solve real problems with those limited resources. It is precisely because we do less ourselves, and let others handle those things, that we can accomplish more for the business.

Next, we will introduce some common scenarios and discuss how to use Serverless architecture to support these scenarios. We will mainly use technologies such as computing, storage, and messaging to design architectures, and evaluate the advantages and disadvantages of architectures from the perspectives of operability, security, reliability, scalability, and cost. In order to make this discussion less abstract, we will use some specific services as references, but the ideas behind these architectures are universal and can be implemented with similar products.

Scenario 1: Static Website


Suppose we want to build a simple information display website, similar to the early China Yellow Pages, whose content is rarely updated. We have a few main options:

  • Buy a server and host it in an IDC data center to run the website;
  • Buy a cloud server from a cloud vendor and run the website on it. In order to address high availability, also purchase a load balancing service and multiple servers;
  • Build a static website served directly from an object storage service (such as OSS), with a CDN in front of OSS to cache and deliver the content.


These three approaches move from on-premise to the cloud, and from managing servers to managing no servers at all, which is Serverless. What does this series of transitions change for the user? The first two options require budgeting for capacity, scaling, building high availability, setting up monitoring, and more. None of that is what Jack Ma wanted back then; he just wanted to publish information and let the world learn about China. That was his business logic. Serverless is precisely the idea of maximizing the focus on business logic. The third option builds a static website on a Serverless architecture, which has clear advantages over the other approaches:

  • Operability: No servers to manage; operating system security patching, failure handling, and high availability are taken care of by the cloud services (OSS, CDN).
  • Scalability: No need to estimate future resource needs, because OSS itself is elastic. Using a CDN also gives the system lower latency, lower cost, and higher availability.
  • Cost: Pay for the resources actually used, including storage costs and request costs. No request costs are charged when there are no requests.
  • Security: This type of system doesn’t even have visible servers, so there’s no need to log in via SSH. DDoS attacks are also handled by cloud services.
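
To make this concrete, here is a minimal sketch of publishing such a site with the OSS Python SDK (oss2): it enables static website hosting on a bucket and uploads the page files. The bucket name, endpoint, and credentials are placeholders, and a CDN domain would then be configured to use the bucket as its origin, which is a console or API operation rather than code.

```python
# Minimal sketch: publish a static site to an OSS bucket (names and credentials are placeholders).
import os
import oss2

auth = oss2.Auth("<access-key-id>", "<access-key-secret>")
bucket = oss2.Bucket(auth, "https://oss-cn-hangzhou.aliyuncs.com", "my-static-site")

# Turn on static website hosting: default index page and error page.
bucket.put_bucket_website(oss2.models.BucketWebsite("index.html", "error.html"))

# Upload everything under the local ./site directory, keeping relative paths as object keys.
for root, _, files in os.walk("site"):
    for name in files:
        path = os.path.join(root, name)
        key = os.path.relpath(path, "site").replace(os.sep, "/")
        bucket.put_object_from_file(key, path)
        print("uploaded", key)
```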

Scenario 2: Monolithic and Microservices Applications


Static pages and sites suit scenarios with limited content and a low update frequency; otherwise, a dynamic site is required. For example, maintaining Taobao product pages as static pages is simply not realistic. So how can we dynamically return results based on user requests? Let’s look at two common solutions:

  • Monolithic web application: all application logic lives in a single application, paired with a database. This multi-tier architecture can quickly deliver applications of low complexity.
  • Microservices application: as the business grows, with more functionality, higher traffic, and larger teams, it usually becomes necessary to split the logic of the monolith into multiple execution units. For example, the comments, sales information, and delivery information on a product page can each correspond to a separate microservice. The advantage of this architecture is that each unit is highly autonomous and easy to develop (even with different technologies), deploy, and scale independently. However, it also introduces distributed-system problems, such as load balancing and failure handling in inter-service communication.

Organizations of different stages and sizes should choose the approach that best solves their primary business problems; Taobao was not initially embraced because it used any particular technical architecture. Whatever architecture is chosen, the Serverless-native mindset described above helps us stay focused on the business. For example:

  • Do we need to purchase servers, install databases, and implement high availability, backup management, and version upgrades ourselves, or can we delegate these tasks to managed services such as RDS? Can we use services like Table Store and Serverless HBase to achieve elastic scaling and pay-as-you-go based on usage?
  • Do we need to purchase servers to run monolithic applications, or can we delegate this to managed services such as Function Compute and Serverless Application Engine?
  • Can lightweight microservices be implemented using functions, relying on the load balancing, automatic scaling, pay-as-you-go, log collection, and system monitoring capabilities provided by Function Compute?
  • Do we need to purchase servers, deploy applications, manage service discovery, load balancing, elastic scaling, resilience, and system monitoring for microservices applications implemented using Spring Cloud, Dubbo, HSF, and other frameworks, or can we delegate these tasks to services like Serverless Application Engine?

The architecture on the right side of the figure above introduces an API Gateway, Function Compute, or Serverless Application Engine to implement the computing layer, delegating a lot of work to the cloud services. This allows users to focus on implementing business logic as much as possible. The interactions between multiple microservices within the system are shown in the figure below. By providing a product aggregation service, multiple internal microservices are presented to the outside world in a unified manner. These microservices can be implemented using Serverless Application Engine or functions.
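
As an illustration, the sketch below shows what such a product aggregation function might look like in Python. The internal service URLs and the shape of the incoming event are assumptions, not a prescribed interface; the point is simply that the function only contains fan-out and merge logic, with everything else delegated to the platform.

```python
# Minimal sketch of a product aggregation function (internal service URLs are hypothetical).
import json
import urllib.request

BASE = "http://internal.example.com"  # placeholder for internal service discovery

def fetch(path):
    # Call an internal microservice and parse its JSON response.
    with urllib.request.urlopen(BASE + path, timeout=3) as resp:
        return json.load(resp)

def handler(event, context):
    # The event shape depends on the trigger; assume it carries a product id.
    product_id = json.loads(event)["productId"]
    aggregated = {
        "product": fetch(f"/products/{product_id}"),
        "comments": fetch(f"/comments?productId={product_id}"),
        "delivery": fetch(f"/delivery?productId={product_id}"),
    }
    return json.dumps(aggregated)
```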


This architecture can be expanded further, for example to support access from different clients, as shown on the right side of the figure above. This is a common real-world requirement: different clients need different information. A mobile phone, for instance, can receive recommendations based on its location. How can mobile apps and various browser clients all benefit from the Serverless architecture? This brings up another term, Backend for Frontend (BFF), a backend customized for each frontend. The pattern is popular with frontend engineers, and Serverless has made it even more attractive, because frontend engineers can write the BFF themselves from a business perspective without dealing with server-related headaches. For more practices, refer to: BFF architecture based on Function Compute.
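
A BFF built on functions can be equally small. The sketch below is a hypothetical Python handler that shapes the same product data differently for mobile and browser clients; the helper functions are stand-ins for calls to real backend services.

```python
# Minimal BFF sketch: shape the same product data per client (helpers are stand-ins).
import json

def get_product(product_id):
    # Stand-in for a call to the product aggregation service above.
    return {"id": product_id, "title": "demo", "price": 99, "detail": "full description ..."}

def recommend_nearby(location):
    # Stand-in for a location-based recommendation service.
    return [] if location is None else [{"title": "nearby pick"}]

def handler(event, context):
    request = json.loads(event)
    product = get_product(request["productId"])
    if request.get("client") == "mobile":
        # Mobile clients get a slim payload plus location-based recommendations.
        return json.dumps({
            "title": product["title"],
            "price": product["price"],
            "recommendations": recommend_nearby(request.get("location")),
        })
    # Browser clients get the full product data.
    return json.dumps(product)
```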

Scenario 3: Event Triggering

The dynamic page generation discussed above is handled synchronously. There is another common class of scenarios where processing a request takes a long time or consumes significant resources, for example handling the images and videos in user reviews: the files are uploaded, images are processed (thumbnails, watermarks, content moderation, etc.), and videos are transcoded to meet the playback requirements of different clients.


How can we process uploaded media files in real time? The architecture for this scenario has typically evolved as follows:

  • Server-based Monolithic Architecture: Multimedia files are uploaded to the server, processed by the server, and display requests for multimedia are also handled by the server.
  • Server-based Microservice Architecture: Multimedia files are uploaded to the server, processed and stored in OSS (Object Storage Service), and then the file address is added to a message queue. Another group of servers handles the files and saves the processing results to OSS. Display requests for multimedia are handled by OSS and CDN (Content Delivery Network).
  • Serverless Architecture: Multimedia files are directly uploaded to OSS, and the event-triggering capability of OSS directly triggers functions. The function’s processing results are saved to OSS, and display requests for multimedia are handled by OSS and CDN.

Server-based Monolithic Architecture faces the following problems:

  • How to handle a large number of files? The storage space of a single server is limited, so more servers need to be purchased.
  • How to scale the web application server? Is the web application server suitable for CPU-intensive tasks?
  • How to ensure high availability of upload requests?
  • How to ensure high availability of display requests?
  • How to handle fluctuations in request loads?

Server-based Microservice Architecture solves most of the above problems, but still faces some challenges:

  • Managing the high availability and resilience of application servers.
  • Managing the resilience of file processing servers.
  • Managing the resilience of message queues.

On the other hand, Serverless architecture solves all of the above problems. Developers no longer need to deal with load balancing, high availability and scalability of servers, and message queues. As the architecture evolves, developers do less and the system becomes more mature, allowing them to focus more on the business, greatly improving delivery speed.

The main values of Serverless architecture are:

  • Event-triggering capability: the native integration between Function Compute and event sources such as OSS eliminates the need for users to manage queue resources; the underlying queues scale automatically, and uploaded media files are processed in real time.
  • High elasticity and pay-as-you-go: images and videos (and videos of different sizes) need compute resources of different specifications, and demand fluctuates with traffic. The service provides this elasticity, scaling up and down with actual usage, so resources are fully utilized and users do not pay for idle capacity.

Event triggering capability is an important feature of FaaS (Function as a Service). The Pub/Sub event-driven model is not a new concept, but before Serverless became popular, users were responsible for producing events, consuming events, and maintaining the connecting hub, just like in the second architecture described earlier.

With Serverless, the user is no longer responsible for producing the events or maintaining the connecting hub, and only writes the consumer logic. This is where the value of Serverless lies.

Function Compute also integrates with event sources from other cloud services, making it easy to apply common patterns such as Pub/Sub, event streaming, and Event Sourcing in your business. For more details on function composition patterns, refer to N ways of function composition.
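
As a concrete illustration of event triggering, here is a minimal sketch of an OSS-triggered thumbnail function for Function Compute’s Python runtime. The event field names follow the commonly documented OSS trigger format and should be verified against the service documentation; the endpoint, output prefix, and thumbnail size are placeholders, credentials are assumed to come from the function’s execution role, and Pillow would need to be packaged with the function.

```python
# Minimal sketch: an OSS upload event triggers thumbnail generation (field names assumed).
import io
import json
import oss2
from PIL import Image  # Pillow must be included in the deployment package

def handler(event, context):
    evt = json.loads(event)["events"][0]
    bucket_name = evt["oss"]["bucket"]["name"]
    key = evt["oss"]["object"]["key"]
    if key.startswith("thumbnails/"):
        return  # avoid re-processing our own output

    # Temporary credentials come from the function's execution role.
    creds = context.credentials
    auth = oss2.StsAuth(creds.access_key_id, creds.access_key_secret, creds.security_token)
    bucket = oss2.Bucket(auth, "https://oss-cn-hangzhou-internal.aliyuncs.com", bucket_name)

    # Download the original image, create a thumbnail, and write it back under a prefix.
    data = bucket.get_object(key).read()
    image = Image.open(io.BytesIO(data)).convert("RGB")
    image.thumbnail((128, 128))
    buf = io.BytesIO()
    image.save(buf, format="JPEG")
    bucket.put_object("thumbnails/" + key + ".jpg", buf.getvalue())
```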


Scenario 4: Service Orchestration

Although the product page discussed earlier is complex, all of its operations are reads, and the aggregation API is stateless and synchronous. Let’s now look at an essential e-commerce scenario: the order process.


This scenario involves writes across multiple distributed services, which is one of the most troublesome problems introduced by a microservices architecture. A monolithic application can handle such a process relatively easily, to a point, because it uses a single database and can maintain data consistency through database transactions. In reality, however, the process may also need to interact with external services, so a mechanism is required to guarantee that the process either runs to completion or is rolled back cleanly. A classic pattern for solving this problem is the Saga pattern, and there are two different architectures for implementing it:

One approach uses an event-driven model to drive the process forward. In this architecture there is a message bus, and interested services, such as the inventory service, listen for events. The listeners can be implemented with servers or with functions; thanks to the integration between Function Compute and message topics, this architecture can also be built without any servers.

The modules in this architecture are loosely coupled with clear responsibilities. The downside is that as the process grows longer and more complex, the system becomes difficult to maintain: the business logic is hard to grasp at a glance and the execution state is hard to track, which hurts operability.
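
To make the event-driven variant concrete, here is a minimal sketch of one participant: an inventory function that consumes an OrderCreated event and emits a follow-up event. The event names and the publish helper are hypothetical stand-ins for a real message service.

```python
# Minimal sketch of one event-driven Saga participant (event names and publisher are hypothetical).
import json

def publish(topic, message):
    # Stand-in for publishing to the message bus (e.g. a message service SDK call).
    print("publish", topic, json.dumps(message))

def reserve_stock(order):
    # Stand-in for the inventory service's own business logic.
    return order.get("quantity", 0) <= 10

def handler(event, context):
    order = json.loads(event)  # assume the trigger delivers an OrderCreated payload
    if reserve_stock(order):
        publish("inventory-reserved", {"orderId": order["orderId"]})
    else:
        # Emit a failure event so upstream services can run their compensations.
        publish("inventory-rejected", {"orderId": order["orderId"]})
```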


The other architecture implements the Saga pattern with a workflow. Here the services are independent and do not communicate through events; instead, a centralized coordinator service schedules the individual business services, and the business logic and state are maintained by that coordinator. However, implementing such a coordinator yourself typically runs into the following problems (see the sketch after this list):

  • Writing a large amount of code for orchestration logic, state management, error retries, and so on, little of which can be reused by other applications.
  • Maintaining the infrastructure for running orchestration applications to ensure high availability and scalability.
  • Considering state persistence to support multi-step long-running processes and ensure the transactionality of the process.
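
To show the kind of orchestration logic such a coordinator has to encapsulate, here is a minimal, framework-agnostic Python sketch of a Saga with compensations. The step functions are hypothetical stand-ins, and a real workflow service would express this declaratively while adding persistence and retries.

```python
# Minimal Saga coordinator sketch (hypothetical steps; no persistence or retries).
def create_order(ctx):   ctx["orderId"] = "o-1001"
def reserve_stock(ctx):  ctx["reserved"] = True
def charge_payment(ctx): raise RuntimeError("payment declined")  # simulate a failure

def cancel_order(ctx):   print("compensate: cancel order", ctx.get("orderId"))
def release_stock(ctx):  print("compensate: release stock")

# Each step is paired with its compensating action.
SAGA = [
    (create_order,  cancel_order),
    (reserve_stock, release_stock),
    (charge_payment, None),
]

def run_saga():
    ctx, compensations = {}, []
    try:
        for step, compensate in SAGA:
            step(ctx)
            compensations.append(compensate)
    except Exception as err:
        # Roll back the steps that completed, in reverse order.
        for compensate in reversed(compensations):
            if compensate:
                compensate(ctx)
        print("saga failed and was compensated:", err)

run_saga()
```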

With cloud services such as Alibaba Cloud’s Serverless Workflow, all of this can be handed over to the platform, letting users focus on the business logic once again.

The right side of the following figure shows the process definition, which achieves the same effect as the event-based Saga pattern above while greatly simplifying the process and improving observability.


Scenario 5: Data Pipeline

As the business develops further, data becomes increasingly abundant and its value can be mined, for example by analyzing users’ behavior on the site and making corresponding recommendations. A data pipeline spans multiple stages, including data collection, processing, and analysis. Building such a service from scratch is feasible but complex, and our business here is e-commerce, not providing a data pipeline service. With that goal in mind, the choices become simple and clear:

  • The Log Service (SLS) provides data collection, analysis, and delivery capabilities.
  • Function Compute (FC) can perform real-time processing of data from the Log Service and write the results to other services such as the Log Service and OSS.
  • Serverless Workflow Service can process data in batches on a scheduled basis, using function definitions to flexibly define data processing logic and build ETL jobs.
  • Data Lake Analytics (DLA) provides a serverless, interactive query service that uses standard SQL to analyze data from multiple data sources such as Object Storage Service (OSS), databases (PostgreSQL / MySQL, etc.), and NoSQL (TableStore, etc.).
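
As a small illustration of the processing stage, the sketch below assumes a function that receives a batch of log records as a JSON array, counts page views, and writes the aggregate to OSS. The record fields, bucket name, and delivery format are placeholders and depend on how the Log Service trigger is configured.

```python
# Minimal sketch: aggregate a batch of log records and persist the result to OSS (names are placeholders).
import json
from collections import Counter
import oss2

def handler(event, context):
    # Assume the trigger or an upstream step hands the function a JSON array of log records.
    records = json.loads(event)
    views = Counter(r["page"] for r in records if r.get("event") == "page_view")

    creds = context.credentials
    auth = oss2.StsAuth(creds.access_key_id, creds.access_key_secret, creds.security_token)
    bucket = oss2.Bucket(auth, "https://oss-cn-hangzhou-internal.aliyuncs.com", "analytics-results")
    bucket.put_object("pageviews/latest.json", json.dumps(views.most_common(10)))
```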

Conclusion

Due to limited space, we have only discussed Serverless architecture in a handful of scenarios, but a common thread runs through all of them: separate the work that is unrelated to business logic and hand it over to platforms and services. This kind of division of labor, where each party does what it does best, is familiar from other contexts; Serverless simply makes it more explicit. Less is more: the “less” refers not only to servers and server-related chores but to everything outside the business, while the “more” refers to the focus on the core competitiveness of the business and product.