
09 Small Trials and Bold Moves | How to Use Function Invocation to Solve Business Problems #

Hello, I’m Jingyuan.

In the previous lessons, I shared with you the core technology of Serverless based on the FaaS model. From the perspectives of users and platforms, we learned the basic usage of functions and gained an understanding of their underlying principles and design concepts.

Although the introduction was based on FaaS, whether it is FaaS or the managed service model, the basic characteristics of Serverless are the same in terms of scalability, metering and billing, and traffic control. FaaS is just a concrete manifestation of these basic features. I believe that now you are already familiar enough to study other specific Serverless products.

Next, let's not rush to dive into its extended capabilities. Instead, we will start with two hands-on lessons grounded in front-line practice. Through these lessons on function invocation and cross-VPC access, you can keep deepening your understanding while putting it to work. As soon as you use cloud resources from a function invocation, VPC access is usually involved, so studying these two topics together will give you a better sense of how closely they are related.

Today, I will talk to you about function invocation. I hope that through this lesson, you can understand how functions are assembled together and how they cooperate with other cloud resources. You will also be able to determine which invocation method is suitable for which scenarios, and what should be paid attention to when making these invocations.

The Necessity of Splitting Functions #

First, let's understand why it is necessary to split functions and invoke them separately. As soon as you use functions for real business purposes, a single function is rarely enough: business functions call shared utility functions, functions call cloud resources, and so on.

Beyond that, splitting functions also brings concrete management benefits. Let's discuss them one by one.

  • Cost: The cost of cloud functions consists of three parts: the number of calls, public network traffic, and resource occupation time. Of these, resource occupation time is the most expensive. For example, some customers make relatively few calls yet still face high bills because their functions run for a long time. In such cases, you can reduce costs by shortening timeouts, optimizing the efficiency of third-party service calls (such as query statements and object pools), and similar methods.

  • Reusability: If all the logic is written in one function body or file, it is unlikely to ever be extracted into a template or reused again, which runs counter to our concept of componentization.

  • Performance: For logic that does not have to run serially, splitting it into multiple functions enables concurrency, and each function gets its own concurrency quota. Moreover, if one concurrent function fails, the other split calls are not dragged into a timeout the way they would be in a serial chain.

These three benefits are in fact our goals and directions when splitting functions and invoking them separately. With this awareness in mind, we can decide which method to use in which scenario.
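To make the cost point above concrete, here is a back-of-the-envelope estimate. This is a simplified sketch of a pay-per-use pricing model; the unit prices are made-up placeholders (not any vendor's real rates), and `monthly_cost` is a hypothetical helper, but it shows why long-running functions dominate the bill even when call counts are low.

```python
# Illustrative, simplified cost model for a pay-per-use function platform.
# The unit prices below are placeholders, NOT any vendor's real rates.

def monthly_cost(calls, avg_duration_s, memory_gb, egress_gb,
                 price_per_million_calls=1.3,
                 price_per_gb_second=0.000016,
                 price_per_egress_gb=0.09):
    invocation_fee = calls / 1_000_000 * price_per_million_calls
    compute_fee = calls * avg_duration_s * memory_gb * price_per_gb_second
    traffic_fee = egress_gb * price_per_egress_gb
    return invocation_fee, compute_fee, traffic_fee

# Few calls, but each runs 10 s: resource occupation time dominates the bill.
fees = monthly_cost(calls=100_000, avg_duration_s=10, memory_gb=0.5, egress_gb=5)
print(fees)
```

Halving the average duration would halve the dominant compute fee, which is why shortening timeouts and speeding up third-party calls pays off so directly.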

Implementation #

From an intuitive perspective, function calls can be divided into synchronous, asynchronous, and orchestration. Orchestration can actually be considered a subset of asynchronous invocation, but I list it separately here because it is a product-level concept that carries scheduling and management implications.

This mind map illustrates the idea of function calls. Next, let’s look at the specific implementations of each.

Synchronous #

In synchronous mode, there are mainly three methods: direct calling, gateway calling, and trigger-based calling. These are the methods we usually use in lightweight application scenarios, such as BFF layer development, conversational interfaces, and campaign landing pages. Let’s take a closer look at their characteristics.

  • Direct Calling

Usually, we use the SDK provided by the cloud vendor to call a specified function directly. Taking Alibaba Cloud Function Compute's Python SDK (fc2) as an example, the main code looks like this:

```python
import fc2

# Create a client with your endpoint and credentials.
client = fc2.Client(
    endpoint='<Your Endpoint>',
    accessKeyID='<Your AccessKeyID>',
    accessKeySecret='<Your AccessKeySecret>')

# Synchronous call: blocks until the function returns.
client.invoke_function('service_name', 'function_name')
```

This method is usually used for scenarios where the **call time is short, the processing result needs to be returned promptly, and the business logic is relatively simple**. It shares some similarities with the development of microservices, such as caching the call results and timeout retries, and we can apply the experience from microservice development to function computing.

However, it is worth noting that function compute has its own timeout limit, so you need to factor in the cost of timeouts: pay-per-use is an important feature that distinguishes function compute from microservices.

  • Gateway Calling

    This method involves calling functions through an API gateway. It is similar to microservices and is mainly used for rate limiting, authentication, parameter mapping, and domain mapping. If it is a call between two functional modules, you can choose gateway calling. However, if it is function-to-function invocation within your own service, you can use the direct calling method mentioned above.

  • Trigger-based Calling

    When working with business clients, I often see cases where customers build triggers and functions on the function computing platform, and then write a service on their own platform to use function computing through HTTP triggers. This method is generally used for serverless transformation of a part of services and function detection scenarios.

Finally, in synchronous mode, you also need to watch for the cascading effects of call delays and timeouts, as these add to your costs.
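One common way to contain those delay and timeout effects on a direct call is a bounded retry with backoff. Below is a minimal sketch, not any vendor's mechanism: `invoke` stands in for a real SDK call such as `client.invoke_function`, and for simplicity any exception is treated as a transient failure.

```python
import time

# Minimal retry-with-backoff wrapper around a direct synchronous invocation.
# `invoke` is any zero-argument callable that performs the real call.
def invoke_with_retry(invoke, max_attempts=3, base_delay_s=0.1):
    for attempt in range(1, max_attempts + 1):
        try:
            return invoke()
        except Exception:
            if attempt == max_attempts:
                raise  # exhausted: surface the failure to the caller
            time.sleep(base_delay_s * 2 ** (attempt - 1))  # exponential backoff

# Usage with a stub that fails twice, then succeeds:
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise TimeoutError("simulated timeout")
    return "ok"

print(invoke_with_retry(flaky))  # succeeds on the third attempt
```

Keep `max_attempts` small for synchronous paths: each retry adds latency for the end user and billable execution time for you.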

Asynchronous #

In asynchronous mode, we usually have three methods: direct asynchronous calling, configuration of platform-level asynchronous strategies, and triggering through intermediaries. Let me explain each of them to you.

  • Direct Asynchronous Calling

    This method is relatively easy to understand. Taking Alibaba Cloud Function Compute as an example, the only difference from a synchronous invocation is one extra header, "x-fc-invocation-type", set to "Async":

```python
# Asynchronously invoke the function.
client.invoke_function('service_name', 'function_name',
                       headers={'x-fc-invocation-type': 'Async'})
```

With this method, invoke_function returns immediately and does not track the execution status of the invoked function; the platform ensures reliable execution.

This method is similar to the asynchronous invocation of microservices. In addition to increasing concurrency, it also minimizes the cost of function invocation. However, its limitation is that we cannot immediately know the execution result, so it is best to use it in scenarios where timing is not sensitive and there is no need to wait for the result to be returned to the client.

  • Asynchronous Strategy Configuration

In this method, you do not need to develop additional code. This capability is usually provided by platform vendors to make the product more competitive and allow you to quickly process the results and exceptions of functions. So I refer to it as “strategy configuration”.

By setting conditions such as “maximum retry count” and “message lifespan”, you can ensure reliable processing of requests. Moreover, you can further enhance the processing capability by calling other cloud functions or integrated cloud functions, message queues, etc., based on “success” or “failure”. With this kind of configuration provided by cloud vendors, we can quickly build a capability similar to a microservice asynchronous processing architecture.
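As a sketch of what such a strategy looks like in practice, here is an illustrative configuration. The field names (retry attempts, event age, success/failure destinations) are schematic, modeled on the async-invoke configurations common across vendors rather than any exact schema, and the destination identifiers are placeholders; consult your platform's documentation for the real format.

```json
{
  "maxAsyncRetryAttempts": 3,
  "maxAsyncEventAgeInSeconds": 3600,
  "destinationConfig": {
    "onSuccess": { "destination": "<function-or-queue-for-successful-results>" },
    "onFailure": { "destination": "<dead-letter-queue-or-alert-function>" }
  }
}
```

With a few lines like these, you get retry, message-lifetime, and result-routing behavior that would otherwise require hand-built asynchronous plumbing.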

  • Triggering through an Intermediary

In this method, we cleverly use the integration between functions and cloud services. At the same time, we shift function compute from its original passively triggered mode into a "producer" mode, which truly reflects the "glue" nature of function compute.

As shown in the diagram below, the number of triggers a vendor offers reflects how many cloud services it has integrated. You can then either call the function directly (for example, via an HTTP trigger's asynchronous mode), or write data through a cloud service's API and let the matching event trigger fire the target function. Take object storage as an example: you store a file in a bucket, and the upload event performs the triggering.
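On the receiving side, the triggered function simply parses the event the intermediary delivers. The handler below is a sketch: the event shape is a simplified stand-in for an object-storage trigger payload (real payloads differ by vendor), and `handler` follows the common `(event, context)` convention.

```python
import json

# Sketch of a function handler fired by an object-storage trigger.
# The event structure below is simplified and illustrative only.
def handler(event, context=None):
    records = json.loads(event)["events"]
    keys = [r["oss"]["object"]["key"] for r in records]
    # Real business logic would go here, e.g. transcoding each uploaded file.
    return {"processed": keys}

# Simulated event: one file uploaded to a bucket.
sample = json.dumps({
    "events": [{"oss": {"object": {"key": "videos/demo.mp4"}}}]
})
print(handler(sample))  # {'processed': ['videos/demo.mp4']}
```

Notice that the producer (whoever uploads the file) and this consumer never call each other directly; the storage service is the intermediary that decouples them.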

This method can greatly reduce your operation and maintenance costs and allow you to focus more on business logic processing, which is the essence of Serverless and one of its major advantages over microservices.

We all know that microservice governance is already quite complicated. As function granularity becomes finer, flexibility increases, but in complex scenarios, when the processing logic is intricate, the call chain is long, waiting times are uncertain, or rollback operations are required, even the synchronous and asynchronous methods above can feel inadequate.

Next, I will introduce you to a killer tool in Serverless: workflows. If the methods above are "point-to-point operations between functions", then workflows are something like the "Kubernetes of functions", scheduling and coordinating them centrally.

Orchestration #

As shown in the following figure, functions (or services) no longer interact by actively calling or passively triggering one another, but through predefined steps, usually written in JSON or YAML, that describe how the functions collaborate.

We usually call this a Serverless workflow, which allows you to coordinate one or more distributed tasks through sequential, branching, and parallel methods. These tasks not only include functions, but can also take the form of services and applications. By using the platform’s provided state tracking, logging, and exception retry logic, you can be freed from tedious work and enjoy fully managed service capabilities.
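To give a feel for such a definition, here is an illustrative YAML sketch. The field names are schematic, loosely modeled on flow-definition languages like those of the major vendors, not any exact schema, and the step and resource names are invented for the example.

```yaml
# Illustrative flow definition (schematic field names, placeholder resources):
# reserve inventory first, then charge and notify in parallel,
# with a retry policy on the payment step.
type: flow
name: order-flow
steps:
  - type: task
    name: reserveInventory
    resource: functions/reserve-inventory
  - type: parallel
    name: chargeAndNotify
    branches:
      - steps:
          - type: task
            name: charge
            resource: functions/charge
            retry:
              maxAttempts: 3
      - steps:
          - type: task
            name: notify
            resource: functions/notify
```

The sequential, parallel, and retry semantics live in the definition, so none of the individual functions needs to know about the others.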

So, what kind of scenarios can it specifically apply to? Let me explain each of them to you.

  1. Long processes: If your business process takes a long time, you can use workflow orchestration to ensure the execution completion and status tracking of the process.
  2. Transactional business processes: For example, the common e-commerce order process involves reserving inventory, placing orders, settlement, delivery, refunds, and other stateful processes. Using Serverless workflows, you can provide this kind of long process distributed transaction guarantee.
  3. Concurrent business processes: This usually refers to large-scale computing scenarios with long execution times and high concurrency, such as machine learning training, which requires decomposing small-file computations and then aggregating the results.
  4. Scenarios requiring full chain monitoring: Since the workflows developed by cloud vendors are equipped with visualization functions such as observability, execution recording, etc., for long chains that require monitoring, you can easily view status, execution records, and locate faults through Serverless workflows.

In summary, Serverless workflows are suitable for solving complex, long-execution, stateful, multi-step, and concurrent aggregation business processes. In fact, when the conventional methods of synchronous and asynchronous are not suitable for handling, you can consider if your business can be solved through orchestration. I will also discuss the core technical implementation of orchestration in the second module.

Key Points to Consider #

Now that you understand the common invocation methods and their usage scenarios, I believe you can experiment on a cloud platform with these ideas in mind. And if you work on a Serverless platform yourself, you should have a clear picture of the capabilities your platform already has or should have. Below, I will point out some things to watch for during use so you can avoid common pitfalls.

First, there is the issue of security. Cloud vendors ensure code security through encryption, execution security through isolation mechanisms, and resource-request security through authentication. What you need to do is keep your access keys safe when using them. In addition, when exposing services such as HTTP triggers, remember to enable identity authentication in production environments.

Second, there is the issue of monitoring and alerting. You can probe your important functions using the pre-warming methods I mentioned in the warm-start section to ensure they remain reachable. If a function cannot be accessed properly, you can reuse the governance mechanisms of your original microservices to raise alerts, or directly use the cloud vendor's alert-policy mechanism (which may incur extra fees, depending on the platform).
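A minimal pre-warm probe can be sketched as a periodic ping. This is an illustration only: `invoke` stands in for a real SDK call, `keep_warm` is a hypothetical helper, and in production you would normally use the platform's timer/cron trigger rather than a long-lived thread.

```python
import threading

# Minimal pre-warm scheduler: pings a function at a fixed interval so its
# instances stay warm. `invoke` is any zero-argument callable.
def keep_warm(invoke, interval_s=300.0, stop_event=None):
    stop_event = stop_event or threading.Event()

    def loop():
        while not stop_event.is_set():
            try:
                invoke()          # lightweight "ping" request
            except Exception:
                pass              # a failed ping is where you would raise an alert
            stop_event.wait(interval_s)

    threading.Thread(target=loop, daemon=True).start()
    return stop_event             # call .set() to stop pinging

# Usage with a stub invoker and a short interval for demonstration:
pings = []
stop = keep_warm(lambda: pings.append(1), interval_s=0.01)
```

The same probe doubles as a liveness check: if the ping starts failing, that is your signal to fire an alert.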

Lastly, there is the issue of fault tolerance. If a downstream service runs into problems (such as timeouts, unavailability, or throttling), it will not only affect your function but also add extra time costs. You can therefore selectively add caching to avoid redundant invocations for stateless requests. Other methods mirror microservice practice, such as circuit-breaking and degradation, combined with the alerts mentioned above. In fact, choosing an asynchronous approach in suitable scenarios is itself a highly fault-tolerant and loosely coupled design.
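The caching idea can be sketched with a small TTL cache in front of the downstream call. This is an illustrative pattern, not any vendor's feature: `CachedInvoker` is a hypothetical helper, and `invoke` stands in for a real SDK call.

```python
import time

# Simple TTL cache in front of a downstream invocation: repeated stateless
# requests are served from cache, skipping extra calls (and their cost)
# when the downstream is slow or briefly unavailable.
class CachedInvoker:
    def __init__(self, invoke, ttl_s=60.0):
        self.invoke = invoke
        self.ttl_s = ttl_s
        self.cache = {}  # key -> (expires_at, value)

    def call(self, key):
        entry = self.cache.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]                      # cache hit: no downstream call
        value = self.invoke(key)                 # cache miss: invoke downstream
        self.cache[key] = (time.monotonic() + self.ttl_s, value)
        return value

# Usage: the second call for the same key never reaches the downstream.
calls = []
def downstream(key):
    calls.append(key)
    return f"result-for-{key}"

cached = CachedInvoker(downstream, ttl_s=60)
cached.call("user-42")
cached.call("user-42")
print(len(calls))  # 1
```

Only cache genuinely stateless, idempotent results; anything that mutates state must go downstream every time.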

Summary #

Finally, let me summarize today’s content. In this lesson, I have introduced the necessity and implementation methods of function decomposition and invocation, and how to use function invocations to solve daily business needs. It is important to consider the cost, reusability, and performance of function computation in order to obtain the best ROI.

In terms of implementation methods, I have categorized function invocations into synchronous, asynchronous, and orchestration approaches. Synchronous invocations are suitable for scenarios that require immediate response and short execution times. In most cases, I still recommend using asynchronous processing to ensure business stability, reliability, and service decoupling.

For complex business scenarios that require long-lasting, stateful, and multi-step execution workflows, I recommend using Serverless workflows.

Of course, I hope you can flexibly apply the invocation methods learned today. Building upon the foundation of learning single functions, you should strive to implement more complex business logic through a “building block” approach.

Exercise #

Okay, this class is coming to an end. Finally, I have prepared a small exercise for you.

Think about the methods for audio and video processing through function calls in different scenarios. You can choose a cloud provider to practice and experience the convenience of serverless computing.

In the hands-on experience section of Module 2, I will explain to you the multiple approaches and detailed methods to implement it. Thank you for reading, and feel free to share this article with more friends for discussion and learning.