
22 Private Cloud Era: Which Core Serverless Engine Can Win? #

Hello, I’m Jingyuan.

Serverless platforms are implemented in two settings: public cloud and private cloud. In the previous hands-on lessons, we focused mainly on public cloud Serverless platforms. From this lesson onwards, I will take you into the field of private cloud deployment.

Why do we need private cloud deployment? I believe you must have some hesitation when answering this question.

As we know, Serverless is operated and provided as a service by cloud vendors, enabling efficient development and significantly reduced costs. However, as I mentioned in the previous lesson, while we enjoy these “benefits,” we also give up control of the platform and face data security concerns.

Many of my private cloud clients have built Serverless platforms inside their organizations precisely because of these two factors. The company’s business teams still enjoy the same value that Serverless brings, but the maintenance work shifts from the cloud vendor to the enterprise’s internal platform operations team.

So, how is private cloud deployment usually implemented, and what deployment methods are available? I hope this lesson gives you a comprehensive understanding of private deployment methods, the current state of open-source engines, and how to select an open-source engine as the foundation of a Serverless platform.

What are the methods of private deployment? #

Currently, the majority of Serverless platforms are based on container technology. The maturity and prevalence of container technology in recent years have made it relatively easy for cloud vendors and open-source Serverless engines to be deployed in private enterprise data centers.

Different enterprises generally adopt two approaches for private deployment:

  • Self-development: Some enterprises with strong infrastructure capabilities choose to build their own solutions. Some customize open-source Serverless engines, while others develop directly on top of their existing IaaS and PaaS resources, with dedicated teams for upgrades and maintenance.
  • Purchasing: Other enterprises collaborate with ToB (business-to-business) cloud vendors, adapting and upgrading the Serverless products these vendors sell. For example, in banking and finance, publicly available documents such as tender notices from banks like ICBC and CEB show that these banks have used Baidu’s Serverless FaaS product for private deployment and upgrades.

Currently, most enterprises choose to “purchase” rather than self-develop. The resulting partnership combines the traditional enterprise’s demand for Serverless transformation with the ToB cloud vendor’s rich practical experience and technology.

However, this does not mean the purchaser can rely entirely on the vendor and needs no knowledge of Serverless technology itself.

On one hand, you need to understand which Serverless engine the vendor builds on: apart from a few vendors with fully self-developed platforms, most build and adapt their offerings on open-source Serverless engines (self-hosted frameworks). On the other hand, you need to understand how the entire solution is built around this core engine. In this lesson, we will focus on the Serverless engine part.

[Image: CNCF Serverless Landscape]

You can learn about the development of engines and platforms from the CNCF Serverless Landscape. The “Hosted Platform” on the left side includes the Serverless hosted platforms of companies such as AWS, Alibaba Cloud FC, Baidu Cloud CFC, Huawei FunctionStage, Azure Functions, and Tencent Cloud SCF that we mentioned earlier. The “Installable Platform” on the right side includes the open-source frameworks that we often pay attention to, such as Knative, OpenWhisk, Fission, and OpenFaaS.

So, what are their specific characteristics and how should we choose and use them?

Current Situation and Selection of Open Source Serverless Engines #

Let’s start by looking at the 2020 CNCF report to get a sense of the usage of open source Serverless:

[Image: 2020 CNCF survey results on open source Serverless usage]

We can see that 35% of survey respondents use “installable software”: 27% use Knative, 10% use OpenFaaS, and 5% use Kubeless. Over time, some open-source platforms have not stood the test of time. Fn and IronFunctions, for example, have gradually faded away, and Kubeless, as shown in the graph, announced the end of maintenance earlier this year.

Here, based on the CNCF Serverless Landscape and the CNCF survey report, and weighing popularity, stability, contributions, tooling, and ease of use, I have selected four open-source platforms to introduce: Knative, OpenFaaS, OpenWhisk, and Fission. Note that Nuclio is also a good open-source framework, but it focuses on data, I/O, and compute-intensive processing scenarios and does not itself address scale-to-zero, so we won’t discuss it for now.

  • OpenWhisk is considered one of the “senior” open source Serverless frameworks. Its codebase currently has a significant number of commits. However, its development speed is not as fast as some of the other platforms. As shown in the research graph, only 3% of Serverless users have chosen this product, which is much lower than Knative and OpenFaaS. Additionally, it has not released a new version since January 2020.
  • Although Knative was only released in 2018, it has developed rapidly. With strong backing from Google and tight integration with Kubernetes, it gained nearly 10,000 stars on GitHub within a few years. The research graph shows that 27% of Serverless users have chosen it, and Knative has since become a CNCF incubating project.
  • OpenFaaS is arguably the most stable of these platforms. It still receives a steady stream of commits and has the highest star count of any open-source Serverless platform, nearing 30,000 in total. The research graph shows it is supported by 10% of users.
  • Fission, although supported by only 2% of users according to the research report, still has stable contributors and releases. It remains active.

I have provided an overview and comparison of these four open source Serverless platforms from 12 aspects including implementation language, supported programming languages, and number of contributors, making it easier for you to further explore.

[Image: Comparison of the four open-source Serverless platforms across 12 aspects]

Differences in Open Source Engines #

After comparing these aspects from the CNCF report and the official summaries of the various open-source Serverless platforms, you may be able to quickly settle on a platform to focus your research on. However, these data are still on paper, so let’s go a little deeper. I will offer a few reference points based on Serverless’s core feature, the “scaling mechanism,” and the related usage limitations.

  • Scaling Mechanism and Limitations of Fission

Fission has two execution modes: pool-based and new-deploy. It supports resource pooling as well as scaling from 0 to 1, and the project claims cold starts complete within milliseconds, making it a good choice for latency-sensitive scenarios. Its limitation is that it controls the number of instances based only on CPU metrics, so in many web scenarios it may not accurately reflect actual resource usage.
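To make the two modes concrete, here is a minimal sketch of a Fission function definition. The field names follow the Fission Function CRD as I understand it; the `node` environment name and the scaling values are assumptions for illustration, not values from this lesson:

```yaml
# Sketch: a Fission Function using the newdeploy executor,
# which scales replicas on CPU utilization (Fission's only scaling metric).
apiVersion: fission.io/v1
kind: Function
metadata:
  name: hello
  namespace: default
spec:
  environment:
    name: node            # assumed pre-created language environment
    namespace: default
  invokeStrategy:
    strategyType: execution
    executionStrategy:
      executorType: newdeploy   # "poolmgr" selects the pool-based mode instead
      minScale: 0               # allows scaling from 0 to 1
      maxScale: 5
      targetCPUPercent: 70      # scale out when CPU exceeds this target
```

The same settings can also be passed on the command line with `fission fn create --executortype newdeploy`; the pool-based mode trades idle pool resources for faster starts.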

  • Scaling Mechanism and Limitations of OpenWhisk

Unlike Fission, OpenWhisk’s scaling is simpler. Each incoming request causes an invoker to create a business container, inject and execute the code, and destroy the container after execution. This process leaves no idle resources behind, but it may not perform well under high concurrency.

As the table above shows, one limitation of OpenWhisk is its reliance on multiple components, which both developers and users need to understand. If you want to do secondary development, you also need to be familiar with Scala; otherwise, consider the other three frameworks, which are written primarily in Go.

  • Scaling mechanism and limitations of Knative

We introduced Knative’s KPA in the earlier lesson on scaling. Its scaling logic is relatively complex: in addition to Serving’s AutoScaler and Activator components, it injects a queue-proxy container into the user’s pod to collect real-time request statistics. On top of these components, Knative supports scaling from 0 to 1, from 1 to N, and from N to 0, with scaling algorithms for both Stable and Panic modes.

So what are its shortcomings? Knative depends completely on Kubernetes, and because it releases new versions quickly, recent versions require Kubernetes 1.22 or above; some enterprises upgrade more slowly and need support for older versions. In addition, the pluggable networking layer is a double-edged sword: it offers flexibility, but operations staff must also understand components such as Istio. Knative therefore places high demands on enterprises and their engineers.
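As a concrete reference, here is a minimal Knative Service sketch showing how the KPA behavior described above is tuned through revision annotations. The annotation keys come from Knative Serving; the service name and container image are placeholders:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      annotations:
        # Select the Knative Pod Autoscaler (KPA, the default class)
        autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
        autoscaling.knative.dev/metric: concurrency   # scale on in-flight requests per pod
        autoscaling.knative.dev/target: "10"          # desired concurrency per pod
        autoscaling.knative.dev/min-scale: "0"        # allow scale to zero
        autoscaling.knative.dev/max-scale: "20"
    spec:
      containers:
        - image: example.com/hello:latest   # placeholder image
```

Setting `min-scale` to a value above zero keeps warm instances around and avoids cold starts, at the cost of idle resources.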

  • Scaling mechanism and limitations of OpenFaaS

OpenFaaS Pro delegates scaling decisions entirely to Prometheus: OpenFaaS reads monitoring metrics from Prometheus, and AlertManager fires scaling requests according to the configured parameters until the desired resource level is reached. OpenFaaS currently supports three scaling modes: rps, capacity, and cpu. Scale-to-zero is controlled via com.openfaas.scale.zero.

However, these two traits are also its limitations: scaling depends entirely on the stability and load of Prometheus, and scale-to-zero is disabled by default. You can enable it, but doing so introduces a certain cold-start delay.
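The scaling mode and the scale-to-zero switch are configured per function through labels in the stack file. Here is a minimal sketch; the function name, handler path, and image are placeholders, and the `com.openfaas.scale.*` labels apply to the OpenFaaS Pro autoscaler:

```yaml
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  hello:
    lang: python3
    handler: ./hello              # placeholder handler directory
    image: example/hello:latest   # placeholder image
    labels:
      com.openfaas.scale.type: rps      # rps | capacity | cpu
      com.openfaas.scale.target: "50"   # target load per replica for the chosen mode
      com.openfaas.scale.min: "1"
      com.openfaas.scale.max: "10"
      com.openfaas.scale.zero: "true"   # off by default; enabling adds cold starts
```

Deploying with `faas-cli deploy -f stack.yml` applies these labels; setting `com.openfaas.scale.min` above zero avoids cold starts entirely at the cost of idle replicas.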

In summary, currently Knative, OpenWhisk, and OpenFaaS all support dynamic scaling based on request volume, while Fission currently only supports scaling based on CPU metrics. Compared to Fission and OpenWhisk, OpenFaaS and Knative have more diverse scaling modes.

If your scenario is relatively simple, OpenWhisk lets you get started quickly. If you have high performance requirements, Fission and Knative are good choices. These boundaries are not hard lines, though, and you can be flexible when choosing.

Your Choice #

With the analysis above, I believe you already have a general direction in mind. Here are a few additional considerations beyond technology:

  • Consider your company or team’s technology stack, such as programming languages, existing service governance systems, middleware, etc.
  • Consider the nature of your business, whether it’s focused on data computation, web services, or a combination of both.
  • Consider future trends, evaluating the maturity and momentum of the open-source frameworks from their current state. Knative, for example, backed by Google and the CNCF and tightly integrated with Kubernetes, looks increasingly promising over time.

What should I pay attention to when purchasing Serverless products? #

Finally, let’s talk about the knowledge you should have as “Party A” (the buyer) or “Party B” (the vendor) in the “purchasing” approach.

If you are “Party A”, you need a technical foundation: a certain amount of cloud-native knowledge, including Docker, Kubernetes, Service Mesh, and observability, plus a cloud-native mindset. On the Serverless side, you should understand the competing vendors’ product features, reference cases, and deployment scale, and know which key points to verify during the POC stage. Finally, you need an architectural mindset: clearly identify which features land in the first phase and which in the second, and iterate in a planned way within a reasonable architectural scope.

With the knowledge reserves above, your communication with the product provider will go more smoothly. And if you go further and master the finer points of bidding, contracts, profit-and-loss evaluation, and onboarding, you can truly be considered an expert in this field.

If you are “Party B”, do you remember the “leveling up” path I described in the lesson on building a mindset? You must reach the “king” stage: able to deliver products and components, and able to win the POC stage. I have listed a few key points for your reference.

  • Technical knowledge: As I mentioned before, “join a cloud-native Serverless team, spend more than 10,000 hours in it, and solidly understand every detail.” Only by doing so can you possibly solve the problems encountered during product delivery. If you have experience deploying private products, you will understand the meaning of this sentence. The environment of the “Party A” may not always be “as you wish”.
  • Product mindset: For a basic-service product, building a bespoke system for each client is not feasible. You must start from a “standard product” and, through communication with “Party A,” arrive at an adaptable architectural solution. For example, for IAM adaptation, authentication and authorization can be split out as an independent module and adapted to different enterprises through configuration. Building products as “independent standard product” plus “plug-and-play adaptation” is better suited to private deployment.
  • Soft skills: Usually, when the “Party A” tries to implement new technologies, they themselves do not have much experience. You need to have enough patience to help and guide them in delivering and implementing scenarios, so that there can be subsequent cooperation in the “second phase”, “third phase”, “peripheral products”, and so on.

In fact, whether they self-develop or do secondary development, many companies draw on the ideas of these open-source Serverless engines. Studying an open-source product is therefore much like studying a cloud vendor’s service; the difference is that vendors also fold in their own technical strengths and infrastructure services.

I hope that these insights from my experience in dealing with private deployment can help you as the “Party A” or “Party B”. I also welcome further discussion with you in the comments section.

Conclusion #

Finally, let me summarize today’s content. Private deployment is chosen mainly for data security and platform control. Given the threshold for privately deploying a Serverless platform, there are generally two approaches: “self-development” and “purchasing.” Companies with strong infrastructure capabilities tend to choose the former; others choose the more convenient latter.

Currently, a large proportion of companies choose to “purchase.” If you are the client, you need a solid grounding in cloud-native knowledge and a grasp of Serverless’s core points and architectural thinking; this makes you better at communicating and choosing partners. If you are the service provider, you must meet the bar in technology, product, and soft skills to win the client’s recognition and close the deal.

To understand private deployment, the key is to study open-source Serverless platforms and build on them in practice. We therefore compared four open-source frameworks, OpenWhisk, OpenFaaS, Fission, and Knative, across the CNCF survey data, popularity, stability, and contributions. Knative, the newcomer, backed by Google and the CNCF and closely integrated with Kubernetes, has earned the title of “cross-platform serverless computing platform.” Its future looks promising.

When it comes to understanding a product’s features, my suggestion has always been to get hands-on first, then read up on the latest features on the official website. If, combining this with ideas from the course, you develop your own way of learning new frameworks and skills, you have reached a new level.

In the next lesson, we move on to hands-on practice: taking Knative as an example, we will see how to deploy the Serverless core engine from 0 to 1.

Reflection Question #

Alright, that wraps up this lesson. Finally, I have a reflection question for you.

Regarding the Serverless engine frameworks mentioned above, what aspect did you find most interesting during your research? Did you make any new discoveries?

Thank you for reading, and feel free to share this lesson with more friends for them to read together.

Additional Reading #

  • OpenWhisk: https://openwhisk.apache.org/
  • Knative: https://knative.dev/docs/
  • Fission: https://fission.io/
  • OpenFaaS: https://www.openfaas.com/