12 Should JS Semantic Parsing Use Iteration or Recursion #

Hello, I’m Ishikawa.

In the previous two lectures, we learned about JavaScript’s data types and used the stack data structure to understand how closures work. In this lecture, let’s talk about algorithms. When we discussed programming paradigms, we mentioned that if functional programming is about input, computation, and output, then the computation part is where algorithms come in. Iteration and recursion are among the most fundamental of these algorithms.

Iteration is relatively easy to understand: if you have ever used a for loop, it will feel familiar. Recursion, on the other hand, is harder to grasp. But like closures, once you master how to apply it, you will appreciate its power.

When we discussed functional programming, we also mentioned a very important concept called “side effects.” Recursion naturally has many side effects, and accordingly, many solutions have been developed. Today, let’s take a look at what side effects recursion has and what methods we can use to address these side effects.

But before that, let’s first understand what iteration and recursion are.

The Difference Between Iteration and Recursion #

First, we need to understand the difference between iteration and recursion.

Let’s start with iteration. Anyone who has worked in software engineering has probably experienced it. In an agile project, the idea is to take small, quick steps: the functionality is not all developed at once; instead, high-priority features are implemented first and lower-priority ones later. This is a process of continuous improvement through repetition.

Recursion, on the other hand, is like sending a package: the outbound journey is the “recursive descent,” and receiving the delivery confirmation is the “return.” This process involves more than one person. On the way out, one person carries the package to the transit station, another takes it to the pickup point, and finally it reaches the recipient. Even if the delivery confirmation is electronic, in a network the information still has to pass back through each node. This round trip is what recursion looks like.

Image

The same problem can often be solved with either iteration or recursion.

For example, let’s calculate a factorial. The factorial of 7, written “7!”, equals 7 * 6 * 5 * 4 * 3 * 2 * 1, which is 5040. An iterative implementation looks like the following: in each pass of the loop, we multiply the running product by the current number n.

function factorialIterative(number) {
  // Factorial is not defined for negative numbers
  if (number < 0) return undefined;
  let total = 1;
  // Multiply the running product by n, counting down from number to 2
  for (let n = number; n > 1; n--) {
    total = total * n;
  }
  return total;
}
console.log(factorialIterative(7)); // 5040

What if we solve it using recursion?

In recursion, there are usually two basic elements: the base case (also called the stop point) and the recursion itself.

Now, if we transform the above example into a recursive form, it would look like this. In recursion, the function being called is generally the function itself.

function factorialRecursive(n) {
  // Base case
  if (n === 1 || n === 0) { 
    return 1;
  }
  // Recursive call
  return n * factorialRecursive(n - 1); 
}
console.log(factorialRecursive(7)); // 5040

In the execution of the above code, we can see that the initial steps are the recursion process. After reaching the base case, the return process begins.

Image
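To make the two phases concrete, here is a rough trace of factorialRecursive(3): the calls expand on the way down (recursion), and the multiplications happen on the way back up (return).

// Recursion phase: calls expand until the base case is reached
// factorialRecursive(3)
//   = 3 * factorialRecursive(2)
//   = 3 * (2 * factorialRecursive(1))
// Base case: factorialRecursive(1) returns 1
// Return phase: the products are computed on the way back
//   = 3 * (2 * 1)
//   = 6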

Divide and Conquer in Recursion #

Next, let’s compare iteration and recursion using the classic Fibonacci sequence. In the Fibonacci sequence, from the third term onward, each number is the sum of the two preceding ones. According to this rule, the 10th term is 55.

1, 1, 2, 3, 5, 8, 13, 21, 34, 55
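Written as a recurrence relation, the rule looks like this (with the first two terms fixed at 1):

\[F(1) = F(2) = 1, \qquad F(n) = F(n-1) + F(n-2) \quad \text{for } n > 2\]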

If we write a function to calculate the nth term of the Fibonacci sequence using iteration, it would look something like this:

function fibIterative(n) {
  if (n < 1) return 0;
  if (n <= 2) return 1;
  let fibNMinus2 = 0;
  let fibNMinus1 = 1;
  let fibN = n;
  // Build each term from the 2nd up to the nth
  for (let i = 2; i <= n; i++) {
    // f(n) = f(n - 1) + f(n - 2)
    fibN = fibNMinus1 + fibNMinus2;
    fibNMinus2 = fibNMinus1;
    fibNMinus1 = fibN;
  }
  return fibN;
}
console.log(fibIterative(10)); // 55

If we use recursion instead, the code is more concise. Here we also use another core algorithmic idea: divide and conquer, which means breaking a problem down into smaller subproblems. Since each term of the Fibonacci sequence is the sum of the two preceding terms, we can recursively compute those two terms separately and then add them together to obtain the result.

function fibRecursive(n) {
  // Base case
  if (n < 1) return 0; 
  // Base case
  if (n <= 2) return 1; 
  // Recursion + divide and conquer
  return fibRecursive(n - 1) + fibRecursive(n - 2); 
}
console.log(fibRecursive(10)); // 55

However, there is a problem here. When we calculate fibRecursive(5), the function fibRecursive(3) is computed twice.

Image
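A rough sketch of the call tree for fibRecursive(5) makes the duplication visible: fibRecursive(3) shows up in two different branches.

fibRecursive(5)
 ├─ fibRecursive(4)
 │   ├─ fibRecursive(3)   <-- computed here ...
 │   │   ├─ fibRecursive(2)
 │   │   └─ fibRecursive(1)
 │   └─ fibRecursive(2)
 └─ fibRecursive(3)        <-- ... and computed again here
     ├─ fibRecursive(2)
     └─ fibRecursive(1)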

Is there a way to remember the previously calculated results and avoid this redundant computation?

Memoization in Recursion #

Yes, we can make use of the concepts of scope and closure from the previous lesson to make recursion more powerful by adding memoization. In the example below, fibMemo is a recursive function that takes an extra memo parameter in which previously computed values are cached, so redundant calculations are avoided (the same cache could also live in an enclosing scope and be captured via a closure, as sketched after the example). This is why memoization is so often combined with recursion. The redundant-calculation problem we are solving here is known in algorithms as overlapping subproblems, and the memoization cache acts as the “memo” that records their answers.

function fibMemo(n, memo = [0, 1, 1]) {
  // Return the cached value if this term has already been computed
  // (check against undefined so that memo[0] = 0 still counts as cached)
  if (memo[n] !== undefined) {
    return memo[n];
  }
  // Recursion + divide and conquer + memoization
  memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
  return memo[n];
}
console.log(fibMemo(10)); // 55
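If we prefer the cache to live in an enclosing scope rather than being passed around as a parameter, a closure-based variant might look like the following sketch (makeFibMemo and fibClosure are illustrative names, not part of the original example):

function makeFibMemo() {
  // The memo array lives in the outer scope and is captured by the inner function via closure
  const memo = [0, 1, 1];
  return function fib(n) {
    // Check against undefined so that memo[0] = 0 still counts as cached
    if (memo[n] !== undefined) {
      return memo[n];
    }
    memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
  };
}

const fibClosure = makeFibMemo();
console.log(fibClosure(10)); // 55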

Tail Recursion in Recursion #

In the naive recursive example (before memoization), the time complexity is \(O(2^{n})\). Besides memoization, is there another way to bring both the time and space complexity down to \(O(n)\)? Here we can look at tail recursion. Tail recursion means making the recursive call as the very last action of the function. With tail recursion we need at most n recursive calls, because each call reduces n by 1.

// lastlast holds f(n-2) and last holds f(n-1); the running results are
// carried in the arguments, so the recursive call is the function's last action
function fibTailRecursive(n, lastlast, last) {
  if (n === 0) {
    return lastlast;
  }
  if (n === 1) {
    return last;
  }
  // Tail call: shift the pair one step forward
  return fibTailRecursive(n - 1, last, lastlast + last);
}
console.log(fibTailRecursive(10, 0, 1)); // 55
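A quick trace of the arguments shows how the mechanism works: every call shifts the pair (lastlast, last) one step forward, so the intermediate results travel in the parameters rather than piling up on the stack.

// fibTailRecursive(10, 0, 1)
// -> fibTailRecursive(9, 1, 1)
// -> fibTailRecursive(8, 1, 2)
// -> fibTailRecursive(7, 2, 3)
// -> fibTailRecursive(6, 3, 5)
// -> fibTailRecursive(5, 5, 8)
// -> fibTailRecursive(4, 8, 13)
// -> fibTailRecursive(3, 13, 21)
// -> fibTailRecursive(2, 21, 34)
// -> fibTailRecursive(1, 34, 55)
// n === 1, so the result is last, i.e. 55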

Memory Management in Recursion #

In the previous lecture, while exploring closures, we also discussed the function call stack. The famous Q&A website Stack Overflow is in fact named after this very error: a stack overflow. If we use recursion carelessly, we can run into exactly this problem, so let’s take a closer look at memory management in recursion.

In the previous examples, we saw that using recursion as an alternative to iteration is more concise in terms of syntax, but it comes with a performance cost. Because recursion involves a process where a function calls itself repeatedly, it occupies a significant amount of stack space. Therefore, apart from considering time complexity, we also need to consider the higher space complexity. Additionally, if we are not careful and the recursive function does not have a terminating condition, it may lead to a stack overflow.

For example, in the previous factorial example, we can observe the process of each function call in the call stack.

Image

Starting with ES6, the JavaScript specification introduced tail call optimization: when a function call is the last action inside another function, the engine can treat it as a jump rather than a subroutine call, reusing the current stack frame instead of pushing a new one. In effect, the tail call behaves like a loop, which is why having a base condition is so important. Moreover, in practice browsers impose their own limits on call stack size to prevent stack overflow. In Google Chrome, for example, the recursive function in the following infinite-recursion example was called 13,952 times before the engine threw an error saying the maximum call stack size had been exceeded and stopped the recursion.

let i = 0;
function recursiveFn() {
  i++;
  // No base case: keep recursing until the engine throws an error
  recursiveFn();
}

try {
  recursiveFn();
} catch (ex) {
  console.log('i = ' + i + ' error: ' + ex);
}

Image

Extension: Recursive Complexity Calculation #

In iteration, the analysis of Big-O is relatively simple because loops clearly define when to increase, decrease, or stop. However, when analyzing recursion, we need to analyze two parts: the base case and the recursive case. Therefore, when calculating the complexity of recursion, we usually use the master theorem. Let’s take a look at the components of this theorem.

\[T(n) = aT(n/b) + O(n^{c})\]

In this formula, \(n\) is the size of the problem, \(a\) is the number of subproblems, \(n/b\) is the size of each subproblem, and \(O(n^{c})\) is the time required to divide the original problem and to merge the solutions of the subproblems.

Depending on how \(c\) compares with \(\log_{b} a\), there are three possible results: \(\log_{b} a\) reflects the \(aT(n/b)\) term, that is, the cost of solving the subproblems at the current level, while \(c\) reflects the \(O(n^{c})\) term, the cost of dividing the problem and merging the sub-solutions.

Image
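For reference, the three results can be written out by comparing \(c\) with \(\log_{b} a\) (this is the standard simplified statement of the theorem):

\[T(n) = \begin{cases} O(n^{\log_{b} a}) & \text{if } c < \log_{b} a \\ O(n^{c} \log n) & \text{if } c = \log_{b} a \\ O(n^{c}) & \text{if } c > \log_{b} a \end{cases}\]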

Of course, not all recursive problems can be expressed in the form of the master theorem. So what should we do when encountering such problems? In this case, we can also try to use the recurrence relation and recursive tree.

Next, let’s look at the approach of using recurrence relations. Recurrence relations can help us both design the recursive function and calculate its complexity. To calculate the time complexity of the Fibonacci series this way, we first extract its recurrence relation and then expand it. As shown below, each call generates two further calls, so the amount of computation grows exponentially.

// Recurrence relation of Fibonacci series
T(n) = T(n − 1) + T(n − 2)

// Derivation of time complexity
T(n) = T(n − 1) + T(n − 2) + O(1);
T(n − 1) = T(n − 2) + T(n − 3) + O(1);
T(n − 2) = T(n − 3) + T(n − 4) + O(1);

// Exponential increase each time
f(6)                * <-- once
f(5)                *
f(4)                **
f(3)               ****
f(2)             ********
f(1)         ****************         <-- 16
f(0) ******************************** <-- 32

Calculating the complexity from the recurrence relation as shown above is still fairly involved. Is there a simpler, more intuitive way? Let’s look at the recursion tree. With a recursion tree we can read the result off directly: if the longest path has length n, the tree contains on the order of \(2^{n}-1\) calls, so the worst-case time complexity is \(O(2^{n})\).

Image
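As a rough derivation of the same bound, assuming each call does a constant amount of work besides its two recursive calls:

\[T(n) = T(n-1) + T(n-2) + O(1) \le 2T(n-1) + O(1) \;\Rightarrow\; T(n) = O(2^{n})\]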

Summary #

In this lesson, we learned about the two most fundamental techniques in algorithms: iteration and recursion. Just as closures in functions and this in objects kept coming up when we discussed JavaScript’s programming paradigms, you will find that 80% of the algorithms we discuss cannot escape iteration and recursion. Remember the “80/20 rule”: once you thoroughly understand these two concepts, the rest of your study of algorithms will be much more effective.

If we compare iteration and recursion purely on performance, iteration is superior. If we consider code clarity, recursion is usually more concise. Moreover, recursion combined with scope and closures gives rise to memoization and tail recursion, both of which reduce its “side effects” to some extent. The following diagram summarizes these mitigation methods.

Image

So, in algorithms, should we use iteration or recursion? There is no absolute answer. In practice, you can apply each based on its strengths and weaknesses, taking the specific circumstances into account. Personally, I believe the code we write is primarily for people to read, not just for machines to execute. So my suggestion is to prioritize the “simplicity and readability of the code,” and then manually optimize the parts affected by “side effects” that the machine cannot optimize away for us.

Thought Question #

Earlier, we discussed tail call optimization as a way to avoid stack overflow. Do you know how tail call optimization is actually implemented?

I look forward to seeing your thoughts in the comments so we can discuss them together. Also, feel free to share today’s content with more friends.