
16 Hands-On Experience (I): How to Realize High-Efficiency Business Development and Go Online #

Hello, I’m Jingyuan.

In previous lessons, I have mentioned several times that with Serverless products you only need to focus on business logic rather than manage complex infrastructure, which allows you to deliver and trial application products quickly.

However, if you are new to Serverless, you may encounter some difficulties in getting started. This mainly involves getting familiar with the change in development approach, mastering the framework syntax, and understanding the integration of various services. I believe you have experienced this in your previous learning and practice.

In fact, in the field of Serverless applications, there are already many templates for common scenarios that make getting started much more convenient. Have you ever used any of them?

Today, I will introduce the concept of “templates” and show you how to use them to release business applications quickly.

I hope that through this lesson, you will experience the advantages of Serverless technology in terms of “improving quality, increasing efficiency, and quick delivery” from a practical perspective.

What is a Serverless template? #

When you choose a template in the console or in a development tool, a piece of template code is usually generated in the WebIDE or your local editor. You then modify the code in the template based on your actual needs, and finally package and upload the modified code to complete the deployment of a cloud function.

In more complex scenarios, multiple function templates may make up an application-level template. We can compose multiple functions to cover the workflow scenarios mentioned in Lesson 12. As we saw in that lesson, one of the core aspects of a workflow is its “orchestration structure”, and this JSON or YAML structure can be distilled into a reusable template for the business scenario.

In other words, Serverless templates exist at three levels: single-function templates, application templates composed of multiple functions, and workflow templates for function and application orchestration. These templates give you the ability to code quickly and to wire up upstream and downstream configurations.

Now let’s take a look at how to use the templates.

Transcoding in Action Based on a Function Template #

In common business scenarios, we often need to transcode videos into different formats to adapt to various terminals and network conditions. Below, I will show you how to use function templates to implement audio and video transcoding, and deepen your understanding of the core capabilities of templates.

Setup #

I will use Alibaba Cloud Function Compute (FC) as the platform for this exercise.

First, let’s go to the “Application Center” and click “Create Application”. You will see many application templates that the platform provides for common scenarios, including web applications, file processing, and audio and video transcoding. Today, we will choose the “Audio and Video Processing” use case and select the “Audio and Video Transcoding Job” template.


Next, click “Create Directly” to go to the “Create Application” page. Fill in settings such as “Deployment Type”, “Application Name”, and “RAM Role”, then click “Create” to build the audio and video transcoding template application.


After creation, the application will undergo a series of resource checks. If this is your first time creating one, you also need to create and bind a RAM role. This step is required by most cloud service providers.

After creation, click to enter the specific application details page, where you will find that the audio and video application consists of multiple functions, including dest-fail, dest-succ, and transcode.


You can probably guess what they do from their names: dest-fail and dest-succ handle the failure and success outcomes of transcoding jobs. For example, when a transcoding job fails, you may want to send a notification or run other custom logic.
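What might such a destination function contain? As a minimal sketch (not the template’s actual code), a failure handler could simply parse the payload and plug in your own notification logic:

# A minimal sketch of a failure-destination handler; replace the logging
# with your own alerting or compensation logic
import json
import logging

LOGGER = logging.getLogger()

def handler(event, context):
    evt = json.loads(event)
    # the payload carries the original request plus the error details
    LOGGER.error("transcode failed, payload: %s", evt)
    # e.g. push a message to your on-call or IM channel here
    return {}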

The core function in this case is the template function transcode. What does it do? Let’s take a look at the implementation of the entry function handler:

# Imports used by this file; the helpers get_fileNameExt, handle_m3u8,
# and handle_common are defined elsewhere in the same template file.
import json
import logging
import os
import subprocess

import oss2

LOGGER = logging.getLogger()

def handler(event, context):
    LOGGER.info(event)
    evt = json.loads(event)
    oss_bucket_name = evt["bucket"]
    object_key = evt["object"]
    output_dir = evt["output_dir"]
    dst_format = evt["dst_format"]
    shortname, _ = get_fileNameExt(object_key)

    # Build an OSS client from the function's temporary STS credentials
    creds = context.credentials
    auth = oss2.StsAuth(creds.accessKeyId,
                        creds.accessKeySecret, creds.securityToken)
    oss_client = oss2.Bucket(auth, 'oss-%s-internal.aliyuncs.com' %
                             context.region, oss_bucket_name)

    # simplifiedmeta = oss_client.get_object_meta(object_key)
    # size = float(simplifiedmeta.headers['Content-Length'])
    # M_size = round(size / 1024.0 / 1024.0, 2)

    # A signed URL (valid for 6 hours) lets ffmpeg read the object directly
    input_path = oss_client.sign_url('GET', object_key, 6 * 3600)

    rid = context.request_id
    if dst_format == "m3u8":
        # m3u8 needs special handling: it produces a playlist plus segments
        return handle_m3u8(rid, oss_client, input_path, shortname, output_dir)
    else:
        return handle_common(rid, oss_client, input_path, shortname, output_dir, dst_format)

First, handler retrieves the bucket, object, output_dir, and dst_format fields from the event. What do they mean?

  • bucket: the name of the bucket where the video is stored in Object Storage Service (OSS).
  • object: the objectName of the video in OSS.
  • output_dir: the path in OSS where the transcoded video needs to be stored.
  • dst_format: the desired video format to be transcoded into.

handler creates an OSS client object and generates a signed URL for the OSS file based on the authentication information.

Next, it transcodes the video based on the target format: if the target format is “m3u8”, it takes a special handling path (handle_m3u8); otherwise, it transcodes via the common path (handle_common).
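The template implements handle_m3u8 for you. As a rough mental model, here is a simplified sketch of what such a branch typically does, not the template’s exact code; the HLS flags shown are standard ffmpeg options:

# A simplified sketch, not the template's exact implementation: HLS output
# is a playlist (.m3u8) plus multiple .ts segments, so every generated
# file has to be written back to OSS, not just one.
import os
import subprocess

def handle_m3u8(request_id, oss_client, input_path, shortname, output_dir):
    work_dir = os.path.join('/tmp', request_id)
    os.makedirs(work_dir, exist_ok=True)
    playlist = os.path.join(work_dir, shortname + '.m3u8')
    # -hls_time sets the segment length in seconds;
    # -hls_list_size 0 keeps every segment in the playlist
    cmd = ["ffmpeg", "-y", "-i", input_path,
           "-hls_time", "10", "-hls_list_size", "0", playlist]
    subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, check=True)
    # Upload the playlist and all generated segments under output_dir
    for name in os.listdir(work_dir):
        oss_client.put_object_from_file(
            os.path.join(output_dir, name), os.path.join(work_dir, name))
    return {}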

Let’s take a look at the specific implementation of one of the code branches handle_common:

def handle_common(request_id, oss_client, input_path, shortname, output_dir, dst_format):
    transcoded_filepath = os.path.join('/tmp', shortname + '.' + dst_format)
    if os.path.exists(transcoded_filepath):
        os.remove(transcoded_filepath)
    # Transcode the signed source URL into the target format under /tmp
    cmd = ["ffmpeg", "-y", "-i", input_path, transcoded_filepath]
    try:
        subprocess.run(
            cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, check=True)

        # Write the transcoded file back to OSS under output_dir
        oss_client.put_object_from_file(
            os.path.join(output_dir, shortname + '.' + dst_format), transcoded_filepath)
    except subprocess.CalledProcessError as exc:
        # If transcoding fails, raising triggers the dest-fail destination function
        raise Exception(request_id +
                        " transcode failure, detail: " + str(exc))
    finally:
        # Clean up /tmp so repeated invocations don't exhaust instance disk
        if os.path.exists(transcoded_filepath):
            os.remove(transcoded_filepath)

    return {}

As you can see, handle_common is essentially a wrapper around ffmpeg: it transcodes the audio or video file by running an ffmpeg command, then writes the result back to OSS through the OSS client.
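In real business, a bare format change is often not enough. As a hypothetical extension (the format-to-flags mapping below is illustrative; the flags themselves are standard ffmpeg options), you could vary the command per target format:

def build_ffmpeg_cmd(input_path, transcoded_filepath, dst_format):
    # Hypothetical per-format options; adjust to your own business needs
    extra = {
        "flv": ["-b:v", "800k"],         # cap the video bitrate
        "mp4": ["-vf", "scale=-2:720"],  # downscale to 720p, keep aspect ratio
    }
    return (["ffmpeg", "-y", "-i", input_path]
            + extra.get(dst_format, [])
            + [transcoded_filepath])

Swapping such a helper into handle_common keeps the upload and cleanup logic unchanged.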

So far, we have created a template for audio and video transcoding and understood its implementation mechanism. Next, I will guide you to configure specific business parameters to make it executable and process specific business requirements.

Execution #

First, in the same region as Function Compute, create a bucket in Object Storage Service (OSS), and upload a video to that bucket. This process is quite straightforward, and you can refer to the OSS documentation for quick implementation.

For example, I created a bucket named “geekbangbucket” and uploaded a video named “demo.mp4”.


Next, we need to verify whether audio and video transcoding can work by using the function’s test event. Before clicking “Test Function”, we also need to configure some parameters so that the transcode function knows where to read from, what format to convert to, and where to output:

{
    "bucket": "geekbangbucket",   // the name of the bucket you created on OSS
    "object": "demo.mp4",         // the name of the uploaded audio/video file
    "output_dir": "result",       // the OSS directory for the transcoded video
    "dst_format": "mov"           // the desired output format
}

These parameters are the inputs we analyzed during the setup phase, and they will be passed into the function through the event object.

Finally, click “Test Function”. After the function execution is successful, you can check whether demo.mov has been generated in the result directory on OSS.


In this way, we have completed the development of a serverless function with video transcoding capability. In your actual business work, you can also use an OSS trigger to automatically trigger the execution of the audio and video transcoding function. After going through this process, do you find it more efficient than your usual development workflow?
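Besides clicking “Test Function” in the console, you can also fire the same event programmatically. Here is a minimal sketch using the fc2 Python SDK, assuming the service name VideoTranscoder-t6su from the template shown earlier; the endpoint and credentials are placeholders:

# pip install aliyun-fc2; endpoint and credentials below are placeholders
import json
import fc2

client = fc2.Client(
    endpoint='https://<account-id>.cn-hangzhou.fc.aliyuncs.com',
    accessKeyID='<your-access-key-id>',
    accessKeySecret='<your-access-key-secret>')

resp = client.invoke_function(
    'VideoTranscoder-t6su',   # service name created by the template
    'transcode',              # the transcode function
    payload=json.dumps({
        "bucket": "geekbangbucket",
        "object": "demo.mp4",
        "output_dir": "result",
        "dst_format": "mov"
    }))
print(resp.data)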

Advanced Audio and Video Transcoding #

Now we can meet general business needs, but sometimes we also need to transcode multiple audio and video files simultaneously. So how can we quickly build a parallel transcoding service based on the function template above?

Before we start, recall the workflow covered in the orchestration lesson. Could it be exactly what this scenario needs?

Overall Design #

In the orchestration lesson, I introduced the parallel step of a workflow, which contains multiple branches that run concurrently during execution. You can associate each transcoding task with a parallel branch and use the workflow’s input mapping to pass the target format to the transcoding task.

Considering that video transcoding tasks are usually run in batches in practical business scenarios, you can also utilize the looping capability to handle them.

Combining this idea, I provide you with an execution diagram of the workflow to help you better understand it:


Here, to make the simulated scenario closer to real business, I assume we need to transcode into three formats: avi, mov, and flv. After each transcoding task, a sub-task notifies the corresponding platform or playback terminal. Therefore, each branch you see contains two tasks: the transcoding task itself and a post-processing task that runs after it.

Implementation Details #

Although each branch corresponds to a transcoding task for one particular format, you can in fact control the different format conversions through the parameters passed to the function you implemented earlier. Therefore, the tasks in these branches can all reuse the cloud function you built before.

Let me take avi as an example to demonstrate.

As mentioned in the overall design, we first need to define a parallel step to control the processing of multiple parallel tasks. In my YAML, it corresponds to parallel_for_videos.

steps:
  - type: parallel
    name: parallel_for_videos
    branches:

Next, we can configure parallel branch tasks under branches. Since we are using the foreach step for looping, the loop name created in the YAML is foreach_to_avi, as shown below:

- steps:
    - type: foreach
      name: foreach_to_avi
      iterationMapping:
        collection: $.videos
        item: video
      inputMappings:
        - target: videos
          source: $input.videos

Please note that you need to define the iteration variable in the foreach step, which is the iterationMapping in the YAML. The sub-steps then use the item field (here, video) as their input variable. In my case, the input is defined as a videos array, but you can define it in your preferred way, as illustrated below.
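To make the mapping concrete, here is a hypothetical illustration of how one loop iteration sees its input, assuming the same videos array we will pass in during execution:

// workflow input
{
  "videos": [
    {"bucket": "geekbangbucket", "object": "a.mp4"},
    {"bucket": "geekbangbucket", "object": "b.mp4"}
  ]
}

// what the sub-steps of one iteration receive (item: video)
{
  "video": {"bucket": "geekbangbucket", "object": "a.mp4"}
}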

Next, let’s define the sub-steps: the transcoding task and the notification task. Let’s start with the transcoding task. It invokes the audio and video transcoding function created earlier through the resourceArn field (if you don’t remember its usage, refer to the task step in the workflow definition language).

Also, because the workflow passes input through inputMappings, the mapped fields must be consistent with the event fields the function expects. As you can see, I still pass the dst_format, output_dir, bucket, and object parameters here. Below is the definition of the transcoding task, which I name avi_transcode_task.

  steps:
      - type: task
        name: avi_transcode_task
        resourceArn: acs:fc:*******:services/VideoTranscoder-t6su/functions/transcode
        inputMappings:
          - target: dst_format
            source: avi
          - target: output_dir
            source: avi_videos
          - target: bucket
            source: $input.video.bucket
          - target: object
            source: $input.video.object

After completing the definition of the avi_transcode_task, we can continue to define another task called avi_inform_task.

- type: task
  name: avi_inform_task
  resourceArn: acs:fc:*******:services/VideoTranscoder-54jr.LATEST/functions/dest-succ
  inputMappings:
    - target: platform
      source: avi_platform

Now we have completed the definition of one branch. Here, I also list the complete YAML for this branch for your reference. I believe you can now use it as an example to define the other two branches.

- steps:
    - type: foreach
      name: foreach_to_avi
      iterationMapping:
        collection: $.videos
        item: video
      inputMappings:
        - target: videos
          source: $input.videos
      steps: 
        - type: task
          name: avi_transcode_task
          resourceArn: acs:fc:cn-hangzhou:*******:services/VideoTranscoder-54jr.LATEST/functions/transcode
          inputMappings:
            - target: dst_format
              source: avi
            - target: output_dir
              source: avi_videos   
            - target: bucket
              source: $input.video.bucket
            - target: object
              source: $input.video.object
        - type: task
          name: avi_inform_task
          resourceArn: acs:fc:cn-hangzhou:*******:services/VideoTranscoder-54jr.LATEST/functions/dest-succ
          inputMappings:
            - target: platform
              source: avi_platform

After defining the workflow YAML, let’s validate it. Before clicking “Execute”, we also need to configure the workflow’s test input.

For the input, we can use the same demo video as before. Since output_dir and dst_format are already defined per branch in the workflow, these two parameters do not need to be passed in. And as mentioned earlier, the foreach input must be defined as the videos array. Therefore, my input is as follows:

{
  "videos": [
    {"bucket": "geekbangbucket","object": "demo.mp4"}
  ]
}

Then we can click “Execute” and see the successful execution feedback in the status bar.


Finally, we can check in OSS whether the transcoding succeeded. If it did, you will see three different target directories generated in OSS.


You can start the workflow through the API or on a schedule, depending on your actual business scenario. With that, a workflow with parallel multi-format transcoding capability is complete.
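For example, to start an execution from code, a minimal sketch with the aliyun-python-sdk-fnf package might look as follows; the flow name, region, and credentials are placeholders, and the request class follows the SDK’s generated naming:

# pip install aliyun-python-sdk-fnf; all identifiers below are placeholders
import json

from aliyunsdkcore.client import AcsClient
from aliyunsdkfnf.request.v20190315 import StartExecutionRequest

client = AcsClient('<your-access-key-id>', '<your-access-key-secret>', 'cn-hangzhou')

request = StartExecutionRequest.StartExecutionRequest()
request.set_FlowName('video-transcode-flow')   # hypothetical flow name
request.set_Input(json.dumps({
    "videos": [
        {"bucket": "geekbangbucket", "object": "demo.mp4"}
    ]
}))

print(client.do_action_with_exception(request))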

At this point, you can save the YAML file and share it with your colleagues. They only need to modify a few parameters, such as the resourceArn values and their own input bucket variables, to reproduce the same orchestration.

This saved YAML file can itself serve as a template. To improve developer efficiency, some cloud providers have productized this template capability in their function orchestration offerings.

Summary #

Finally, let me summarize what we covered today. From the perspective of efficiently developing and launching business applications, I used templates as the starting point to show you the Serverless advantages of “improving efficiency and delivering quickly”.

For a newcomer to Serverless, building business applications from templates is undoubtedly a shortcut. Depending on the complexity of the business, we can choose single-function templates or application templates that combine multiple functions.

In more complex scenarios, we can also take advantage of workflow orchestration to combine functions with other cloud services such as file storage, object storage, and log services to quickly build a complex application system.

Furthermore, we can consolidate such orchestration YAML files and share them with colleagues, truly realizing the power of templates.

Homework #

Alright, this lesson comes to an end. Finally, I have prepared an extension assignment for you.

Throughout this lesson, we ran our experiments through manual invocations. Can you achieve the same result with triggers? Give it a try and get hands-on.

Please feel free to write down your thoughts and answers in the comments section. Let’s exchange and discuss together.

Thank you for reading, and feel free to share this lesson with more friends for further discussion and learning.