20 Algorithm Ideas in JS: Divide and Conquer, Greedy, Backtracking, and Dynamic Programming #

Hello, I’m Ishikawa.

In algorithms, concepts such as recursion, divide and conquer, greedy, backtracking, and dynamic programming are usually discussed separately, but in reality they are interconnected. For example, recursion and backtracking can solve problems where a greedy algorithm commits to a choice it cannot revisit, and dynamic programming can address some of the shortcomings of recursion by building on recurrence formulas.

A good example that can incorporate these concepts is the problem of coin change, which involves finding the minimum number of coins needed to make change using different denominations of coins.

Greedy and Recursive Divide and Conquer #

First, let’s see how the greedy algorithm solves this problem. The core of the change-making problem is to find the minimum number of coins needed to make change for a given amount, using denominations such as 1, 5, and 10 cents. The simplest solution is the greedy algorithm: select the largest denomination first and gradually move to smaller ones. Why do we choose from largest to smallest rather than the other way around? Because, in general, the larger the denomination, the fewer coins are needed.

function minCoinChange(coins, amount) {
  var change = [];
  var total = 0;
  for (let i = coins.length - 1; i >= 0; i--) { // loop from the largest denomination to the smallest (assumes coins are sorted ascending)
    var coin = coins[i];
    while (total + coin <= amount) { // keep adding this coin while it does not exceed the amount
      change.push(coin);  // add the coin to the result
      total += coin; // add the coin to the running total
    }
  }
  return change;
}
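
For example, assuming the coins array is sorted in ascending order (the call below is my own illustration, not from the original lesson), the greedy version behaves like this:

console.log(minCoinChange([1, 5, 10], 36)); // [10, 10, 10, 5, 1]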

Greedy is the simplest solution, but it has two key problems:

  1. This approach sometimes fails to find an answer at all. For example, suppose we need to make change for 17 yuan with coins of 3 yuan and 5 yuan. If we greedily use three 5 yuan coins, we are still 2 yuan short and cannot make exact change, but four 3 yuan coins plus one 5 yuan coin make exactly 17 yuan.
  2. Even when it finds an answer, the answer may not be optimal. For example, with coins of 1 yuan, 3 yuan, and 4 yuan and an amount of 6 yuan, the greedy approach takes a 4 yuan coin and then two 1 yuan coins, three coins in total, whereas two 3 yuan coins would do.

From these examples we can see that the greedy algorithm sometimes gives a locally optimal solution, but it cannot guarantee a globally optimal one, and sometimes it cannot find a solution at all. So what is a better way to make change? Another approach is recursion and divide and conquer. Let’s go back to the Fibonacci sequence we discussed earlier. If we look at the process of computing Fibonacci numbers recursively, what does it look like? That’s right, it resembles a tree structure, and traversing that tree is similar to depth-first search (DFS).
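
As a refresher, a minimal naive recursive Fibonacci looks something like this (a sketch for illustration, not necessarily the exact code from the earlier lesson); every call fans out into two more calls, which is exactly the tree shown in the figure:

function fib(n) {
  if (n < 2) return n; // base cases: fib(0) = 0, fib(1) = 1
  return fib(n - 1) + fib(n - 2); // each call branches into two further calls, forming a tree
}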

(Figure: the recursion tree produced when computing a Fibonacci number recursively)

Backtracking and Memoization #

The coin change problem, like Fibonacci, can also be expanded into a tree and solved with a brute-force recursive algorithm. This naive enumeration fixes the problems caused by the greedy strategy, but it introduces a new one: overlapping subproblems.

Consider making change for an amount of 5 using coins with denominations of 1 and 2. In the process, we can see that many calculations are repeated. This is the same duplication problem we encountered when computing Fibonacci numbers recursively.
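
A brute-force recursion for the change-making problem might look like the sketch below (the function name and structure are my own, purely for illustration): at every node we try each denomination and take the best result among the subtrees.

function minCoinsBruteForce(coins, amount) {
  if (amount === 0) return 0;        // nothing left to make change for
  if (amount < 0) return Infinity;   // this branch overshot the amount, no solution here
  let best = Infinity;
  for (let coin of coins) {          // try every denomination at this node of the tree
    best = Math.min(best, minCoinsBruteForce(coins, amount - coin) + 1);
  }
  return best; // minimum number of coins for this amount, or Infinity if impossible
}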

(Figure: the recursion tree for making change for 5 with coins of 1 and 2, showing the overlapping subproblems)

So how do we deal with overlapping subproblems? Staying with the change-making example, we can combine the greedy strategy with recursion and optimize the recursive algorithm by pruning. As we mentioned before, one of the core problems with the greedy strategy is that it cannot backtrack. But when we discussed recursion, we covered the principle of call stacks and stack frames: we can start with the greedy choice and keep going while it works, and when it does not, go back to the previous step and try a different choice. That is the idea of backtracking. Similarly, we can use a memoization function to build a lookup table that removes the duplicated calculations during execution.
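
As one possible sketch (again, the names and details are mine), adding a memoization table to the brute-force recursion above only takes a few lines: before recursing we check whether the remaining amount has already been solved, and after solving it we record the answer.

function minCoinsMemo(coins, amount, memo = new Map()) {
  if (amount === 0) return 0;
  if (amount < 0) return Infinity;
  if (memo.has(amount)) return memo.get(amount); // reuse a previously solved subproblem
  let best = Infinity;
  for (let coin of coins) {
    best = Math.min(best, minCoinsMemo(coins, amount - coin, memo) + 1);
  }
  memo.set(amount, best); // record the result so each overlapping subproblem is computed once
  return best;
}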

Recursion and Dynamic Programming #

Looking at the example above, it is not much different from what we discussed earlier about recursion. Is there a more efficient way to solve this problem? Let’s look at dynamic programming. There are two important concepts here: backward independence and optimal substructure. Backward independence means that, as shown in the diagram, the dependencies along the branches from a vertex point in one direction only: later decisions do not affect the state of an earlier stage. Optimal substructure means that the subproblems are independent of each other: we can derive the solution from the previous states along a subproblem’s own path, without being influenced by the states of other paths. This is similar to ordering takeout. If two items are discounted at the same time, then to maximize the discount we simply buy both discounted items. But if the discount can be applied to only one item per order, we may have to pick the item with the highest unit price, and now the choice in one branch affects the choice in another branch, so the problem no longer has optimal substructure.

In the change-making problem we do not have this kind of interference, so we can consider dynamic programming. Dynamic programming solves problems with a state transition equation, which involves several key elements. The first is the initial state; although it is called the initial state, it represents the final goal. In the change-making example we want the remaining amount to end up at 0, so the initial state is 0. The second element is the state parameter, the amount of change still needed; it keeps decreasing during the process and should finally return to the initial state of 0. In the decision tree above, at each step we choose a coin denomination from the set, and that choice is the decision.

For the change-making problem, we can write the state transition equation as $DP[i] = \min(DP[i], DP[i - coin] + 1)$. The thinking behind this equation is similar to the recurrence formulas we discussed before: we reason from the bottom up. If we decide to use a coin, the cost becomes $DP[i - coin] + 1$; if we do not use it, it stays $DP[i]$. Since we are looking for the minimum number of coins, we use $\min()$ to take the smaller of the two decisions.

At this point you may have a question: if the state transition equation is just a recurrence formula, why not use recursion directly? What is the difference between dynamic programming and recursion? As we mentioned before, recursion must descend and then return, and it is implemented on top of the call stack, which incurs extra cost. Moreover, the top-down process cannot completely avoid recomputing branches. Dynamic programming solves this: it uses two nested loops, and in the inner loop the transition equation is applied bottom-up, eliminating the unnecessary time cost. From this perspective, dynamic programming may be less functional (declarative), but it is indeed more efficient in execution.

function minCoinChange(coins, amount) {
  var dp = Array(amount + 1).fill(Infinity);  // dp[i] = minimum number of coins needed to make amount i
  dp[0] = 0; // making change for 0 requires 0 coins
  for (let coin of coins) { // loop through each coin denomination
    for (let i = coin; i <= amount; i++) {  // traverse every amount this coin can contribute to
      dp[i] = Math.min(dp[i], dp[i - coin] + 1);  // take the better of: not using this coin vs. using it
    }
  }
  return dp[amount] === Infinity ? -1 : dp[amount];  // Infinity means the amount cannot be made with these coins
}
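
Using the earlier counterexample, the dynamic programming version finds the optimal answer that greedy missed (the call below is just an illustration):

console.log(minCoinChange([1, 3, 4], 6)); // 2 (two 3s), while the greedy version above returns three coins (4 + 1 + 1)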

So if we pursue efficiency, dynamic programming is the most efficient in this example.

Extension: Bitwise Operators #

After discussing the various algorithms mentioned above, I would like to further extend our knowledge. In addition to dynamic programming, what other algorithmic concepts can improve performance in data structures and algorithms? Here, we can explore the usage and concepts of bitwise operations in conjunction with the features of JavaScript.

In JavaScript, bitwise operators treat their operands as 32-bit binary sequences of 0s and 1s. For example, 5 is represented as 101 and 9 as 1001.
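
One quick way to see these representations is Number.prototype.toString with a radix of 2:

console.log((5).toString(2)); // "101"
console.log((9).toString(2)); // "1001"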

Bitwise AND (&) evaluates each bit position and if both operands have a 1 in the corresponding bit position, the result is 1; otherwise, it is 0. For example, the result of 5 & 9 is 0001, which is 1.

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 // 5
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 // 9
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 // 5 & 9

Following the same logic, for Bitwise OR (|), if either of the corresponding bits in the operands is 1, the result is 1; otherwise, it is 0. For example, the result of 5 | 9 is 1101, which is 13.

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 // 5
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 // 9
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 1 // 5 | 9

For Bitwise NOT (~), it reverses the bits of the operand, changing 0 to 1 and 1 to 0. Therefore, ~5 results in -6, and ~9 results in -10.

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 // 5
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 // ~5 = -6
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 // 9
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 0 // ~9 = -10

Next, let’s take a look at Bitwise XOR (^). It evaluates each bit position: if exactly one of the corresponding bits is 1, the result bit is 1; otherwise, it is 0. Therefore, the result of 5 ^ 9 is 1100, which is 12.

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 // 5
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 // 9
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 // 5 ^ 9

Next, let’s take a look at bit shifting, which comes in three forms: left shift (<<), signed right shift (>>), and unsigned right shift (>>>). A left shift a << b moves the binary representation of a to the left by b (< 32) bits, filling the right side with 0s, so 9 << 1 is 18. A signed right shift a >> b moves the binary representation of a to the right by b (< 32) bits, discarding the bits that are shifted out and preserving the sign, so 9 >> 1 is 4. An unsigned right shift a >>> b also moves the bits of a to the right by b (< 32) bits and discards the shifted-out bits, but it always fills the left side with 0, so for a negative number the sign bit is lost: -9 >>> 1 is 2147483643.

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 // 9
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 // 9 << 1
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 // 9 >> 1
0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 // -9 >>> 1

To make them easier to remember, we can summarize these operators: AND (&), OR (|), NOT (~), XOR (^), left shift (<<), signed right shift (>>), and unsigned right shift (>>>).
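
All of the results above can also be verified directly in a browser console or Node.js:

console.log(5 & 9);    // 1
console.log(5 | 9);    // 13
console.log(~5);       // -6
console.log(5 ^ 9);    // 12
console.log(9 << 1);   // 18
console.log(9 >> 1);   // 4
console.log(-9 >>> 1); // 2147483643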

Once we understand bitwise operations, we can use them to implement addition, subtraction, multiplication, and division. For example, addition can be done with bitwise operations, much like the column addition we learned in elementary school: in base 10, when a column reaches 10 we carry over to the next place, and the same idea applies in binary. This can be written with recursion or a loop; here is a recursive version.

var add = function(a, b) {
    if (b == 0) {
        return a; // no carry left, so a holds the final sum
    } else {
        // a ^ b adds each bit without carrying; (a & b) << 1 is the carry shifted to the next bit
        return add(a ^ b, (a & b) << 1);
    }
};
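
A couple of quick checks (my own examples) show that it behaves like ordinary addition for non-negative integers:

console.log(add(5, 9));   // 14
console.log(add(13, 29)); // 42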

Bitwise operations can increase computational efficiency to some extent, but they can also hurt readability. Therefore, when using bitwise operations, it is important to weigh both factors.

Conclusion #

That’s it for this class; let me summarize briefly. Today we have seen that greedy algorithms, recursion, divide and conquer, backtracking, and dynamic programming are not completely independent; they are connected in a certain sense. In fact, algorithmic thinking is not only used in programming. If we observe successful individuals and teams in different industries, we find that they all apply algorithmic thinking to some extent. For example, during the War of Liberation, why could the Liberation Army win? Because they used strategies similar to divide and conquer and dynamic programming. They appeared to be fighting with fewer against many, but through divide and conquer they could fight each local battle with more against fewer. At the same time, much like dynamic programming, they kept local successes from breeding a greedy mindset and instead stayed focused on the overall optimal solution, ultimately achieving complete victory. Therefore, by understanding the principles of algorithms through JavaScript, we can go beyond talking on paper and apply them in our work and life, helping us “devise strategies from behind the scenes and win victories a thousand miles away”.

Thought Question #

Today’s thought question is again about making change. In addition to the greedy and dynamic programming implementations, can you try solving the change-making problem using the recursion and backtracking ideas mentioned in the text?

I look forward to seeing your solutions in the comments section; let’s discuss and exchange ideas together. Also, feel free to share today’s content with more friends. See you next time!