
11 WebIDE: How to Free Functions from the Complexity of Local Development #

Hello, I’m Jingyuan.

Before we start today’s topic, I’d like to share some of my customers’ demands around WebIDE. I have selected two typical questions from inquiries by clients in the finance industry:

  • The bank does not allow me to bring my own computer into the production environment. How am I supposed to work on our code?
  • The built-in editor is a bit cramped. I use Python; can you provide online debugging capability?

You will find that, whether driven by hard job requirements or by the pursuit of development efficiency, the ultimate goal is the same: a code editor that is ready to use anytime and anywhere, that is, a WebIDE.

I think you must be familiar with the concept of WebIDE, but how can we make it better for development with function computing? What details should we pay attention to? What different technical points are there?

Today, I will talk about how a WebIDE works on a function compute platform. Building on your understanding of how VS Code Server works, I hope this lesson gives you a clear picture of a function-compute WebIDE in terms of its plugin mechanism, environment dependencies, and elastic scalability.

What are the advantages of WebIDE? #

First of all, let’s experience the WebIDE provided by cloud service providers to intuitively feel its advantages.

Taking Alibaba Cloud Function Compute (FC) as an example, we can create a function with Python 3 runtime. By clicking on the “Function Code” button on the console page, we can load the online editing workspace.

Image

As you can see, there is hardly any difference between this and our local VS Code IDE. The familiar “File”, “Explorer”, and “Search” menu buttons are located in the upper-left corner. Moreover, the experience of online code editing is almost the same as that of a local IDE, including code completion and other features. After you finish coding, you can click the “Test” button to directly test the written function. After confirming that everything is correct, you can click the “Deploy” button to deploy the tested code directly to the cloud.

The benefit of this approach is that when we want to make small adjustments to a function, we don’t need to download the code to our local machine. We can directly edit, test, and deploy the code online.

Now you should have an intuitive sense of WebIDE’s advantages: the ability to quickly edit, debug, run, and deploy cloud-based code. WebIDE also matters in the financial and banking sectors: for security reasons, engineers are sometimes not allowed to bring their own computers and IDEs, and a WebIDE is the only way to work.

Different cloud service providers have different implementations for WebIDE in Function Compute. Although there may be some differences in implementation, the underlying ideas and mechanisms are generally the same, whether on public or private cloud environments. Next, let us delve into the overall architecture of WebIDE.

Overall Architecture #

To help you better understand the online function computing IDE, I have used different colors in the architecture diagram to highlight different parts. Let me explain each part:

Image

  1. Blue section: This is the core of the WebIDE client. We have chosen the well-known VS Code Server as the foundation. It solves most of the client-side issues and enables VS Code to be accessed and operated through a web browser from anywhere.
  2. Green section: This is the core that integrates the WebIDE with function computing. It includes function computing plugins, tools, SDKs, and runtime environment capabilities.
  3. Orange section: This is the essential support for the Serverless paradigm. It handles code storage, security control, and resource scheduling and scaling for user operations. It includes object storage, permission authentication, resource scaling, and health monitoring capabilities.

In the diagram, I have also labeled the brief execution process of the WebIDE in the function computing platform. Now, let’s take a deploy request of a function as an example to help you understand the overall process of each request.

In steps 1-2, the user sends an online-editing request for a function from the VS Code frontend page to the backend; let’s say the request is “XXX/deployFunction”. The server side, that is, the control layer (Controller) of the function compute platform, receives the request, verifies permissions, and forwards it to the VS Code Server container instance.

In steps 3-4, the VS Code Server container retrieves the user’s code, and the function compute platform’s resource scheduling system dynamically scales the WebIDE Pod resources based on the current state of the container pool (Container Pool).

In steps 5-7, the server side invokes the plugin’s backend functionality (Serverless Extension BE) according to the user’s request, executes the deploy operation in the current language environment, and returns the execution result to the client.
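The steps above can be sketched in a few lines of Python. Everything here (the `Controller` and `VSCodeServerInstance` classes, the `deployFunction` action) is a hypothetical simplification of the flow, not any vendor’s real API:

```python
class VSCodeServerInstance:
    """Stands in for the VS Code Server container holding the user's code."""

    def __init__(self, user_id):
        self.user_id = user_id

    def handle(self, action, payload):
        # Steps 5-7: invoke the Serverless Extension backend, run the
        # operation in the language runtime, and return the result.
        if action == "deployFunction":
            return {"status": "ok", "deployed": payload["function_name"]}
        raise ValueError(f"unknown action: {action}")


class Controller:
    """Steps 1-2: the control layer that authenticates and routes requests."""

    def __init__(self, authorized_users):
        self.authorized = set(authorized_users)
        self.instances = {}  # user_id -> VSCodeServerInstance

    def dispatch(self, user_id, action, payload):
        if user_id not in self.authorized:
            raise PermissionError(f"user {user_id} is not authorized")
        # Steps 3-4: reuse or allocate a container instance for this user
        # (a real platform would consult the resource scheduler here).
        instance = self.instances.setdefault(user_id, VSCodeServerInstance(user_id))
        return instance.handle(action, payload)


controller = Controller(authorized_users=["alice"])
result = controller.dispatch("alice", "deployFunction", {"function_name": "hello"})
print(result)  # {'status': 'ok', 'deployed': 'hello'}
```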

Here are three additional points to note:

First, we can integrate the Serverless Extension plugin into the image of the VS Code Server in advance.

Second, the Runtime can be deployed based on the original architecture of function computing execution. You can package each language runtime into an image and load it dynamically, or you can integrate it into the large image of the WebIDE, although this may make it heavier.

Third, health monitoring is mainly used to monitor the status of the VS Code Server. Although the web-based service runs on the server-side, there is a heartbeat association between the client-side and the server-side. The purpose of health monitoring is to check the heartbeat according to certain logic to determine whether the page is no longer in use. This allows the resource scheduling service to be notified and release the backend container instance.

From this walkthrough of the underlying architecture and process, we can see the main job of a function compute WebIDE: enhance a visual online editor with a Serverless function compute plugin, and give users the ability to develop, debug, deploy, and run functions online, backed by integrated runtime dependencies and elastic scaling.

Next, let’s proceed with this idea and provide an explanation, one by one, of the visual online editing software (VS Code Server), the functionality of the Serverless Extension plugin, and the environment dependencies and elastic scaling capabilities. Let’s see what technical points we need to pay attention to.

VS Code Server #

As I mentioned earlier, this discussion is based on VS Code Server, mainly because VS Code, which Microsoft has open-sourced, has a huge user base, and code-server further repackages VS Code so that it can be deployed on any machine and accessed through a web browser.

Let’s have a hands-on experience, as it will be more intuitive.

First, you need to prepare a machine with 2 cores and 4 GB of memory (more than the 2-core, 1 GB minimum that the code-server project lists on GitHub). Make sure it has enough disk space and a working Docker environment.

Next, you can download the code-server image to your local machine and start it as a container.

# Pull the code-server image to your local machine
$ docker pull jmcdice/vscode-server
# Start a Docker container
# -d runs the container in the background
# -p maps the local port (9000) to the container port (8080)
# --restart=always automatically restarts the container if it exits or the Docker daemon restarts
# --name specifies the name of the container
$ docker run -d -p 9000:8080 --restart=always --name=myVscodeServer jmcdice/vscode-server

After the startup is complete, we can use the docker ps command to check whether the container is running. Finally, open a web browser and visit http://127.0.0.1:9000 to use the online VS Code. The whole process is convenient and fast.

Please note that here we have only started VS Code locally with Docker. To make it available externally, we just need to deploy it on a cloud server and expose the access URL.

Image

Having tried it locally, you will find that deploying VS Code Server on a function compute platform is essentially the same process. The focus shifts to integrating the capabilities of the function service, which is the Serverless Extension functionality we will discuss next.

Serverless Extension #

Major cloud vendors have integrated common function platform features into the WebIDE by developing extensions, such as Baidu Intelligent Cloud Function Compute’s Baidu Serverless VS Code plugin, Tencent Cloud Function’s Tencent Serverless Toolkit for VS Code plugin, and Alibaba Cloud Function Compute’s Aliyun Serverless VSCode Extension plugin. From a product perspective, they are generally similar, and you can pick any one to try.

However, it is important to note that the underlying implementation of the VS Code plugins for each cloud provider integrates their own command-line tools and SDKs. For example, Baidu Intelligent Cloud integrates the BSAM CLI, Tencent Cloud integrates the Serverless Framework syntax, and Aliyun’s plugin combines the functionality of the Funcraft command-line tool and Function Computing SDK.

So, how are they implemented? Although the integrated tools and SDKs of each cloud provider have their own characteristics, the general principles of design remain the same. Next, I will use this universal design principle and approach to explain how to build a Serverless VS Code Extension.

First, we can see that all the plugins are hosted on the Microsoft Visual Studio Marketplace. You can click on any plugin here to view the corresponding source code, documentation, and other maintenance information.

Image

Second, the plugin is only the representation on the client-side, and the actual implementation is done by the SDK and CLI command-line tools of the function computing platform. For example, the underlying implementation of the function upload and download functionality is done by calling the SDK’s function interfaces, while the deployment, dependency installation, debugging, and execution of functions are generally implemented in the CLI.

In summary, there are two parts: the UI logic representation on the client-side, integrated into VS Code, and the integration of the CLI, SDK, Serverless Extension, and VS Code Server, which are used to execute the relevant commands and run them in the server-side container. The following figure shows the architecture:

Image

We can package the CLI, SDK, Serverless Extension, and VS Code Server in one large image and download it when requested to start the container and provide services.
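As a sketch of this split, the extension backend can decide per command whether to call an SDK interface or shell out to a CLI. The command names, the SDK method names, and the `sls` CLI used here are hypothetical placeholders, not a real vendor’s tooling:

```python
import shlex


def plan_operation(command, function_name):
    """Return ('sdk', api_name) or ('cli', argv) for a given WebIDE command."""
    if command in ("uploadCode", "downloadCode"):
        # Code upload/download is backed by the platform SDK's function-code
        # interfaces (hypothetical method names).
        api = "get_function_code" if command == "downloadCode" else "update_function_code"
        return ("sdk", api)
    if command in ("deployFunction", "installDependencies", "invokeFunction"):
        # Deploy, dependency install, and invocation are backed by the vendor
        # CLI running inside the VS Code Server container.
        verb = {"deployFunction": "deploy",
                "installDependencies": "install",
                "invokeFunction": "invoke"}[command]
        return ("cli", ["sls", verb, "--function", function_name])
    raise ValueError(f"unsupported command: {command}")


kind, detail = plan_operation("deployFunction", "hello")
print(kind, shlex.join(detail))  # cli sls deploy --function hello
```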

Environment Dependencies #

Next, we need to understand a critical step - how to make the function “run”. This involves environment dependencies.

Depending on the different characteristics of programming languages, function computing platforms usually require different runtimes for each language. To distinguish them from the runtime in the production environment, I will name them dev-runtime.

Image

The main difference is that dev-runtime needs to perform different operations based on different commands. For example, it performs the “deploy” operation based on deployment commands sent from the UI layer. As mentioned in the first module, the runtime is mounted directly in the running state. Typically, function computing platforms also differentiate between runtime and dev-runtime and manage them separately.
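A dev-runtime entry script can be modeled as a small command dispatcher. The instruction names and the returned descriptions below are purely illustrative; a real platform would wire each handler to an actual tool invocation:

```python
# Hypothetical dev-runtime dispatcher: maps an instruction from the UI layer
# to the operation the runtime container should perform.
def dev_runtime_dispatch(instruction):
    handlers = {
        "deploy": "package the code and invoke the platform deploy pipeline",
        "install": "run pip3 install for the function's requirements",
        "debug": "start the function handler under debugpy",
        "invoke": "execute the handler with the provided test event",
    }
    if instruction not in handlers:
        raise ValueError(f"dev-runtime received unknown instruction: {instruction}")
    return handlers[instruction]


print(dev_runtime_dispatch("deploy"))
```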

Here, let’s take a look at the execution process of dev-runtime. I will use the debugging of Python 3 as an example to briefly discuss this process.

For Python 3, if we need to support debugging, we need to specify the installation of debugpy in the Docker image in advance. Some operating systems may also require the installation of GCC. We can usually use a script to receive various instructions and invoke various dependent tools. Therefore, for the Python3 language, if we receive an instruction to install extension dependencies for the WebIDE, we can use pip3 to install them.

# -r installs the dependencies in the requirements file
# -t specifies the target location
# -i specifies the mirror source, in this case, Tsinghua mirror is specified, you can customize it
# -I (--ignore-installed) reinstalls packages even if they are already installed
/XXX/bin/pip3 install -I -r /XXX/requirements.txt -t /XXX/site-packages -i https://pypi.tuna.tsinghua.edu.cn/simple

Similarly, if you need to support debugging and execution, you can launch the function with the respective Python 3 and debugpy commands. The debugging implementation for other languages depends on each language’s own tooling; for example, Node.js relies on its built-in inspector (started with the node --inspect flag).
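As a concrete sketch, a dev-runtime might build the debug launch command per language like this. The ports and entry files are illustrative; debugpy’s `--listen`/`--wait-for-client` flags and Node’s `--inspect-brk` are the tools’ real options:

```python
def build_debug_command(language, entry_file, port):
    """Build the argv to start a function under a debugger (sketch)."""
    if language == "python3":
        # debugpy ships a CLI entry point: python3 -m debugpy ...
        return ["python3", "-m", "debugpy",
                "--listen", f"0.0.0.0:{port}", "--wait-for-client", entry_file]
    if language == "nodejs":
        # Node.js built-in inspector, paused until a debugger attaches
        return ["node", f"--inspect-brk=0.0.0.0:{port}", entry_file]
    raise ValueError(f"no debug support wired up for {language}")


print(" ".join(build_debug_command("python3", "index.py", 5678)))
```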

It is worth noting that although, as mentioned at the beginning of the article, most cloud providers do not support Java in their public-cloud WebIDE, Java can in fact be supported on the web as well. We only need to add the corresponding options to the Java debug startup script to enable debugging.
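For example (a sketch, with an illustrative port and jar name), enabling Java debugging boils down to adding the standard JDWP agent option to the startup command:

```python
# Sketch: build a Java launch command with the JDWP debug agent enabled.
# app.jar and the port are illustrative; -agentlib:jdwp is the JVM's
# standard remote-debugging option.
def java_debug_args(port=5005):
    jdwp = f"-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:{port}"
    return ["java", jdwp, "-jar", "app.jar"]


print(" ".join(java_debug_args()))
```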

Elastic Scaling #

So far, we have covered most of the concepts of the WebIDE for serverless computing. However, if it stops here, it cannot be called “Serverless”. We need to add a layer of “scalability” to it.

This is similar to the scaling discussed earlier. Let me give you an idea, and then you can go through the entire process of the Serverless WebIDE on your own.

To achieve this “scalability” feature, you can use KEDA, Prometheus, HPA, and custom metrics to support dynamic scaling of the WebIDE in a Kubernetes environment. Why do we need to introduce KEDA? If you have reviewed the section on scaling, you should be able to answer this quickly: KEDA supports scaling a workload between 0 and 1 replicas, which the native HPA does not.
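To make the 0-to-1 point concrete, here is a sketch (with illustrative numbers) of the replica decision KEDA enables: the native HPA always keeps at least one replica, while KEDA can deactivate the workload to zero when no WebIDE sessions are active:

```python
import math


def desired_webide_pods(active_sessions, sessions_per_pod=1, max_pods=50):
    """Illustrative scaling rule for WebIDE pods."""
    if active_sessions == 0:
        return 0  # the 0 <-> 1 switch is what KEDA adds over plain HPA
    return min(max_pods, math.ceil(active_sessions / sessions_per_pod))


assert desired_webide_pods(0) == 0
assert desired_webide_pods(3) == 3
assert desired_webide_pods(7, sessions_per_pod=2) == 4
print("scaling sketch ok")
```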

Finally, there is one more important question. It is about the “health monitoring” mentioned in the infrastructure diagram. Why do we need this feature?

We can think of it this way: after the frontend page establishes a connection with the VS Code Server that provides the service, it needs to trigger a page event to proceed to the next step. But what if the user doesn’t trigger any events? Should we release the container instance or keep it running? How long should it be kept running?

A reasonable design is to run two processes in the VS Code Server: a Server process responsible for handling edit requests from the frontend page, and a Status Management process responsible for status management and reporting. Typically, the WebSocket protocol is used for communication between the VS Code Server and the frontend.

Based on this design approach, when the WebIDE frontend and the Server do not respond within a certain period of time, it may be due to a network issue. In this case, the VS Code Server container instance will not be immediately released. It will retry connecting a certain number of times. If the connection is successful, the same container instance will continue to be used. If the retry still fails, it means that the container instance is no longer in use, and it will be marked as unavailable. A cleanup task will be executed periodically to reclaim the unavailable container instances.
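The retry-and-reclaim logic above can be sketched as a tiny state machine. The state names, counters, and retry threshold here are illustrative:

```python
# Sketch of instance reclamation: after repeated missed heartbeats an
# instance is marked unavailable, and a periodic cleanup task reclaims it.
def next_state(state, heartbeat_ok, max_retries=3):
    """state = (status, missed_count); returns the updated state."""
    status, missed = state
    if heartbeat_ok:
        return ("in_use", 0)            # connection recovered: keep the instance
    missed += 1
    if missed >= max_retries:
        return ("unavailable", missed)  # give up; the cleanup task reclaims it
    return ("retrying", missed)


state = ("in_use", 0)
for ok in [False, False, False]:        # three consecutive failed heartbeats
    state = next_state(state, ok)
print(state)  # ('unavailable', 3)
```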

By implementing this status monitoring and management, our WebIDE on the serverless computing platform becomes “scalable”.

Summary #

Finally, let me summarize today’s content. In this lesson, I started with the overall architecture of the Function Compute WebIDE and introduced its four core points: VS Code Server, Serverless Extension, environment dependencies, and elastic scalability.

At the beginning of the course, we experienced a cloud vendor’s WebIDE and got an intuitive feel for the value of today’s content: a WebIDE lets you not only edit and modify user code quickly, but also load the code’s external dependencies efficiently, greatly reducing both the frequency and the time cost of redeployments.

Next, through the introduction of the overall architecture, I believe you have a general understanding of the four major points to consider when designing a serverless WebIDE. Let’s talk about them one by one.

First, the explanation of VS Code Server mainly conveys two things: code-server is a secondary packaging of VS Code that lets us access it from anywhere, just like the VS Code on our local machines; and the function compute platform likewise stands on the shoulders of giants, building a WebIDE for the serverless paradigm on top of it. This is also an inspiration for architecture design and daily work: there is no need to reinvent every wheel.

Secondly, the Serverless Extension and environment dependencies are the “soul” of this lesson. Without these two functions, it cannot be called a serverless WebIDE. They are mainly based on the function compute tool CLI and SDK, as well as the encapsulation of different runtimes required for debugging.

Finally, in order to conform to the “scalability” feature of serverless, I also introduced two processes of the VS Code Server in containers from the perspective of scaling in and out. I hope you can have a clear understanding of the linkage between the web version “client” and the service.

Reflection Question #

Alright, this lesson comes to an end, and I have a final question for you to ponder.

If you were tasked with adding online debugging functionality for Java in an existing cloud platform, how would you go about implementing it?

Please share your thoughts and answers in the comments section. Let’s discuss and learn together. Thank you for reading, and feel free to share this lesson with more friends for further discussions and learning.

Further Reading #

You can search for the corresponding plugin repository on the VS Code Marketplace to gain a more detailed understanding of how extensions are implemented.