
31 Q&A Session: Module 5 Critical Thinking Problems Collection #

Hello, I’m Liu Chao.

In Module 5, we have been discussing design patterns. In my opinion, design patterns not only optimize our code structure, making it more scalable and readable, but also help optimize system performance, which is the intention behind this module. In high-concurrency scenarios especially, design patterns related to thread collaboration can greatly improve program performance.

With this week's lecture, the content on design patterns has come to an end. You may have noticed that the reflection questions in this module have been quite diverse, and many students have shared valuable information in the comments section, which has sparked good technical discussions. In this Q&A session, I will work through the after-class reflection questions, and I hope my answers bring you new insights.

[Lecture 26] #

Aside from the singleton implementation methods mentioned in the lecture, do you know any other methods?

In [Lecture 9], I mentioned a singleton serialization issue; the answer is to use an enumeration to implement the singleton, which prevents Java serialization from breaking a class's singleton property.

Enumerations are inherently singletons. The constants of an enum class are instances of that enum type, and since enums are a kind of syntactic sugar in Java, those constants are compiled into static fields of the class.

In [Lecture 26], I explained in detail how the JVM ensures that static member variables are initialized only once; let's review it again. The initialization of static member variables is collected into the class initializer, the <clinit> method, during class initialization. In a multi-threaded scenario, the JVM guarantees that only one thread executes the <clinit> method while the other threads block and wait. Once the <clinit> method has run, the other threads will not execute it again and will proceed with their own code. In other words, static member variables are guaranteed to be initialized only once, even in a multi-threaded context.
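This same <clinit> guarantee is what makes another classic lazy singleton implementation, the static-holder idiom, thread-safe without any explicit locking. Here is a minimal sketch (the class names are mine, not from the lecture):

```java
// Lazy, thread-safe singleton via the static-holder idiom.
// The nested Holder class is not initialized until getInstance()
// first references it, and the JVM's <clinit> guarantee ensures
// its INSTANCE field is assigned exactly once.
public class HolderSingleton {

    private HolderSingleton() { // Private constructor
    }

    private static class Holder {
        static final HolderSingleton INSTANCE = new HolderSingleton();
    }

    public static HolderSingleton getInstance() {
        return Holder.INSTANCE; // Triggers Holder's initialization on first call
    }
}
```

Every call to `getInstance()` returns the same object, and the instance is not created until the method is first invoked.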

We can understand the singleton implementation using an enumeration through the following code:

// Eager singleton implementation using an enumeration
import java.util.ArrayList;
import java.util.List;

public enum Singleton {
    INSTANCE; // The single instance, created when the enum class is initialized

    public List<String> list = null; // list property

    private Singleton() { // Constructor, runs once when INSTANCE is created
        list = new ArrayList<String>();
    }

    public static Singleton getInstance() {
        return INSTANCE; // Return the existing instance
    }
}
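To verify the claim from [Lecture 9] that serialization cannot break an enum singleton, a quick round-trip check like the following can be used (it uses a minimal stand-in enum so the snippet compiles on its own; the check works the same for the enum above):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

enum Singleton { INSTANCE } // Minimal stand-in for the enum singleton

public class EnumSerializationCheck {

    // Serialize the enum constant and deserialize it back;
    // return true if the round trip yields the same instance
    static boolean roundTripPreservesIdentity() {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
                out.writeObject(Singleton.INSTANCE);
            }
            Singleton copy;
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray()))) {
                copy = (Singleton) in.readObject();
            }
            // Enum deserialization resolves to the existing constant by name,
            // so the singleton property survives
            return copy == Singleton.INSTANCE;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTripPreservesIdentity()); // prints true
    }
}
```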

This method of implementation does not have lazy loading functionality. So, what if we want to implement lazy loading? In this case, we can use an inner class:

// Lazy singleton implementation using an enumeration as an inner class
import java.util.ArrayList;
import java.util.List;

public class Singleton {

    public List<String> list = null; // list property

    private Singleton() { // Constructor
        list = new ArrayList<String>();
    }

    // The enum is only initialized when it is first referenced,
    // which gives us lazy loading
    private enum EnumSingleton {
        INSTANCE; // The single enum constant

        private final Singleton instance;

        private EnumSingleton() { // Constructor, runs once when INSTANCE is created
            instance = new Singleton();
        }

        public Singleton getSingleton() {
            return instance; // Return the existing object
        }
    }

    public static Singleton getInstance() {
        return EnumSingleton.INSTANCE.getSingleton(); // Return the existing object
    }
}


[Lecture 27] #

Do you know the difference between the singleton pattern discussed in the previous lecture and the flyweight pattern discussed in this lecture?

First of all, the two patterns are implemented differently. The singleton pattern avoids re-instantiating a class every time it is used; the class itself guarantees the uniqueness of its instance. The flyweight pattern, by contrast, uses a shared container to cache and share a whole set of objects.

Secondly, there is also a difference in their usage scenarios. The singleton pattern focuses more on reducing instantiation to improve performance. Therefore, it is generally used in classes that need to frequently create and destroy instantiated objects or in classes where creating and destroying instantiated objects consumes a lot of resources.

For example, connections in connection pools and threads in thread pools are implemented using the singleton pattern. Database operations are very frequent, and creating and closing connections are required for each operation. By using the singleton pattern, we can save the performance overhead caused by constantly creating and closing database connections. On the other hand, the flyweight pattern focuses more on sharing the same objects or object attributes in order to save memory usage.

In addition to these differences, the two patterns also have similarities: the singleton pattern avoids creating duplicate objects and saves memory, and the flyweight pattern likewise avoids repeatedly instantiating a class. In summary, although the two patterns look alike, their emphasis differs; when choosing between them, decide based on that emphasis.
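To make the contrast concrete, here is a minimal flyweight sketch (the class and method names are mine, invented for illustration). A shared container hands out cached instances keyed by their intrinsic state, so the container, not the class itself, enforces sharing, which is exactly the difference from a singleton:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal flyweight: share one Glyph instance per character
public class GlyphFactory {

    private static final Map<Character, Glyph> CACHE = new ConcurrentHashMap<>();

    public static Glyph get(char c) {
        // Return the cached instance if one exists,
        // otherwise create and cache a new one
        return CACHE.computeIfAbsent(c, Glyph::new);
    }

    public static class Glyph {
        private final char symbol; // intrinsic, shared state

        Glyph(char symbol) {
            this.symbol = symbol;
        }

        public char symbol() {
            return symbol;
        }
    }
}
```

Calling `GlyphFactory.get('a')` twice returns the same object, while `get('a')` and `get('b')` return different objects: many distinct instances exist, but each one is shared.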

[Lecture 28] #

In addition to the aforementioned multi-threading design patterns (Thread Context Pattern, Thread-Per-Message Pattern, Worker-Thread Pattern), have you used any other design patterns to optimize multi-threading in your everyday work?

In the comment section of this lecture, a student asked how we can obtain the return result when using the Worker-Thread pattern. Specifically, when a worker thread handles a request asynchronously after receiving it, how can we get the result and return it to the main thread?

To answer this question, we will need to use some other design patterns, which we can look at together.

If we want to get the result of an asynchronous thread execution, we can use the Future design pattern. Suppose a task must be executed by a machine: a worker assigns the task to the machine, and when the machine finishes, it notifies the worker of the result. We can design a Future pattern to implement this business logic.

First, let’s define a task interface that is mainly used for task design:

public interface Task<T, P> {
    T doTask(P param); // Complete the task
}

Next, let’s define a task submission interface, TaskService, which is used to submit tasks. Task submission can be divided into two types: those that require a return result and those that don’t:

public interface TaskService<T, P> {
    Future<?> submit(Runnable runnable); // Submit a task with no return result
    Future<?> submit(Task<T, P> task, P param); // Submit a task that returns a result
}

Then, let’s declare an interface for querying the execution result. This interface is used to submit a task and then query the execution result in the main thread:

public interface Future<T> {
    T get(); // Get the return result, blocking until it is available
    boolean done(); // Check whether the task is done
}

Now, let's implement the task submission interface. When a result needs to be returned, the worker thread calls the finish method of the result class to pass the result back to the class querying for the execution result:

public class TaskServiceImpl<T, P> implements TaskService<T, P> {

    /**
     * Implementation of task submission without a return result
     */
    @Override
    public Future<?> submit(Runnable runnable) {
        final FutureTask<Void> future = new FutureTask<Void>();
        new Thread(() -> {
            runnable.run();
            future.finish(null); // Mark the task as done so get() can return
        }).start();
        return future;
    }

    /**
     * Implementation of task submission with a return result
     */
    @Override
    public Future<?> submit(Task<T, P> task, P param) {
        final FutureTask<T> future = new FutureTask<T>();
        new Thread(() -> {
            T result = task.doTask(param);
            future.finish(result); // Pass the result back and wake the waiting thread
        }).start();
        return future;
    }
}

Finally, let's implement the interface for querying the execution result. The get and finish methods in the FutureTask class use wait and notifyAll for thread communication, blocking and waking threads. If get is called before the task completes, the main thread blocks; the task thread later calls finish, which stores the result and wakes the blocked thread so the result can be returned to the main thread:

public class FutureTask<T> implements Future<T> {

    private T result;
    private boolean isDone = false;
    private final Object LOCK = new Object();

    @Override
    public T get() {
        synchronized (LOCK) {
            while (!isDone) {
                try {
                    LOCK.wait(); // Block and wait until finish() is called
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            return result;
        }
    }

    /**
     * Store the result and wake up the blocked threads
     * @param result the task's return value
     */
    public void finish(T result) {
        synchronized (LOCK) {
            if (isDone) {
                return;
            }
            this.result = result;
            this.isDone = true;
            LOCK.notifyAll();
        }
    }

    @Override
    public boolean done() {
        synchronized (LOCK) { // Read under the same lock for visibility
            return isDone;
        }
    }
}

We can implement a task to build a car and then submit it using the TaskService class:

public class MakeCarTask<T, P> implements Task<T, P> {

    @SuppressWarnings("unchecked")
    @Override
    public T doTask(P param) {
        String car = param + " is created successfully";
        try {
            Thread.sleep(2000); // Simulate the time needed to build the car
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return (T) car;
    }
}

Finally, let’s run this task:

public class App {

    public static void main(String[] args) {
        TaskServiceImpl<String, String> taskService = new TaskServiceImpl<String, String>(); // Create the task submission class
        MakeCarTask<String, String> task = new MakeCarTask<String, String>(); // Create a task

        Future<?> future = taskService.submit(task, "car1"); // Submit the task
        String result = (String) future.get(); // Block until the result is available

        System.out.println(result);
    }
}

Running the code will give the following result:

car1 is created successfully

Starting from JDK 1.5, Java has provided the Future interface, whose get() method blocks and waits for the result of an asynchronous execution. However, this blocking approach can hurt performance. JDK 1.8 introduced the CompletableFuture class, which supports asynchronous functional programming: instead of blocking for the result, CompletableFuture handles the computation result through callbacks, making it asynchronous and non-blocking, and thus more efficient.
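As a small illustration of the callback style (the task and message here are made up for the example, echoing the car-building task above):

```java
import java.util.concurrent.CompletableFuture;

public class CompletableFutureDemo {

    // Build the pipeline: run the task asynchronously, then transform
    // the result via a callback instead of blocking for it
    static CompletableFuture<String> build(String name) {
        return CompletableFuture
                .supplyAsync(() -> name)
                .thenApply(n -> n + " is created successfully");
    }

    public static void main(String[] args) {
        // join() is used here only so the demo can print the result;
        // real callers would chain further callbacks instead of blocking
        System.out.println(build("car1").join());
    }
}
```

Further stages can be chained with thenApply, thenAccept, or thenCompose, so the calling thread never has to block on get().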

Starting with version 2.7.0, Dubbo also uses CompletableFuture to implement callback-based, asynchronous, non-blocking communication. It is quite simple and convenient to use.

[Lecture 29] #

We can use the producer-consumer pattern to achieve peak shaving for instant high concurrency. However, although this alleviates the pressure on the consumer side, the producer side will experience a large number of thread blocks due to the instant high concurrency. In this situation, do you know any ways to optimize the performance problems caused by thread blocking?

No matter how well our program is optimized, there will still be bottlenecks once the concurrency increases. Although the producer-consumer pattern can help us achieve peak shaving, it can still lead to a large number of thread blocks and waiting on the producer side when concurrency increases, causing context switching and increasing system performance overhead. In this case, we can consider implementing rate limiting at the access layer.

There are many ways to implement rate limiting, such as using thread pools or Guava's RateLimiter, but most are based on one of two rate-limiting algorithms: the leaky bucket algorithm and the token bucket algorithm.

The leaky bucket algorithm is based on a leaky bucket. In order for our requests to enter the business layer, they must pass through the leaky bucket. The rate at which requests exit the leaky bucket is balanced. When the incoming request volume is large, if the leaky bucket is already full, the request will overflow (be rejected). This ensures that the request volume coming out of the leaky bucket is always balanced and will not cause the system to crash due to a sudden increase in incoming request volume and excessive concurrency in the business layer.

The token bucket algorithm refers to the system putting tokens into a bucket at a constant rate. In order for a request to enter, it needs to acquire a token. If there are no tokens available in the bucket, the request will be rejected. Google’s Guava library has a RateLimiter class that is implemented based on the token bucket algorithm.

We can see that the leaky bucket algorithm controls traffic by limiting the capacity pool size, while the token bucket algorithm controls traffic by limiting the rate of token issuance.
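With Guava, rate limiting is as simple as `RateLimiter.create(permitsPerSecond)` plus `tryAcquire()`. To show the underlying idea without the dependency, here is a minimal token bucket sketch (all names are mine; a production limiter would need more care around burst handling and time sources):

```java
// Minimal token bucket: tokens accrue at a fixed rate up to a cap,
// and a request is admitted only if a token is available.
public class TokenBucket {

    private final long capacity;        // maximum tokens the bucket holds
    private final double refillPerNano; // token refill rate per nanosecond
    private double tokens;              // current token count
    private long lastRefill;            // timestamp of the last refill

    public TokenBucket(long capacity, double tokensPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = tokensPerSecond / 1_000_000_000.0;
        this.tokens = capacity; // start with a full bucket
        this.lastRefill = System.nanoTime();
    }

    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        // Add the tokens accrued since the last call, capped at capacity
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true; // admit the request
        }
        return false; // no token available: reject the request
    }
}
```

A burst of up to `capacity` requests is admitted immediately from the full bucket; after that, requests are admitted at roughly `tokensPerSecond`, which is exactly the rate-of-issuance control described above.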

[Lecture 30] #

The chain of responsibility pattern, the strategy pattern, and the decorator pattern have many similarities. Besides being used in business scenarios, these design patterns are also frequently used in architectural design. Have you encountered the use of these design patterns in source code? Please share with everyone.

The chain of responsibility pattern is often used in scenarios where multiple event handling is required. In order to avoid coupling one handler to multiple events, this pattern connects multiple events into a chain and passes the processing result of each event to the next event through this chain. The chain of responsibility pattern consists of two main implementation classes: the abstract handler class and the concrete handler class.
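A minimal sketch of those two roles might look like this (the handler names and the string-tagging behavior are mine, invented for illustration):

```java
// Abstract handler: holds a reference to the next node in the chain
abstract class Handler {
    protected Handler next;

    public Handler setNext(Handler next) {
        this.next = next;
        return next; // allow fluent chaining
    }

    public String handle(String request) {
        if (next != null) {
            return next.handle(request); // pass the result along the chain
        }
        return request; // end of the chain
    }
}

// Concrete handlers, each adding its own processing step
class LogHandler extends Handler {
    @Override
    public String handle(String request) {
        return super.handle("[log]" + request);
    }
}

class AuthHandler extends Handler {
    @Override
    public String handle(String request) {
        return super.handle("[auth]" + request);
    }
}

public class ChainDemo {
    public static void main(String[] args) {
        Handler chain = new LogHandler();
        chain.setNext(new AuthHandler());
        System.out.println(chain.handle("req")); // prints [auth][log]req
    }
}
```

Each handler does its own work and forwards the request, so new processing steps can be added or reordered without the handlers knowing about each other.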

In addition, many open-source frameworks also use the chain of responsibility pattern. For example, the Filter in Dubbo is implemented based on this pattern. Many of Dubbo’s features are implemented through filter extension, such as caching, logging, monitoring, security, telnet, and RPC itself. Each node in the chain of responsibility implements the Filter interface, and they are connected by ProtocolFilterWrapper.

The strategy pattern is more similar to the decorator pattern. The strategy pattern is mainly composed of a strategy base class, specific strategy classes, and a factory environment class. Unlike the decorator pattern, the strategy pattern refers to the selection of different implementation strategies for an object in different scenarios. For example, for the same price strategy, we can use the strategy pattern in some scenarios. Products in promotional activities based on red envelopes can only use the red envelope strategy, while products in promotional activities based on discount coupons can only use the discount coupon strategy.