Why for backtracking we sometimes need to explicitly pop after recursion, and sometimes we don't


For example, let's consider a task where we need to find all permutations of a given string, preserving the character sequence but changing the case of its letters.

Here is backtracking solution without .pop():

def letterCasePermutation(S):
    """
    :type S: str
    :rtype: List[str]
    """
    def backtrack(sub="", i=0):
        if len(sub) == len(S):
            res.append(sub)
        else:
            if S[i].isalpha():
                backtrack(sub + S[i].swapcase(), i + 1)
            backtrack(sub + S[i], i + 1)

    res = []
    backtrack()
    return res

And here is a solution with .pop():

def letterCasePermutation(s):
    def backtrack(idx, path):
        if idx == n:
            res.append("".join(path))
            return

        ele = s[idx]
        if ele.isnumeric():
            path.append(ele)
            backtrack(idx + 1, path)
            path.pop()
        else:
            path.append(ele.lower())
            backtrack(idx + 1, path)
            path.pop()
            path.append(ele.upper())
            backtrack(idx + 1, path)
            path.pop()

    n = len(s)
    res = []
    backtrack(0, [])
    return res

Are both code samples backtracking, or should I call the first one DFS and the second one backtracking?

CodePudding user response:

With backtracking (and most recursive functions in general) the critical invariant for every function call is that it doesn't corrupt state in parent calls.

This should make intuitive sense because recursion relies on self-similarity. If unpredictable changes to state occur elsewhere in the call stack that affect data structures shared with ancestor calls, it's easy to see how the property of self-similarity is lost.

Recursive function calls work by pushing a frame onto the call stack, manipulating state locally as needed, then popping the call stack. Before returning to the parent frame, the child call is responsible for restoring state so that from the parent frame's perspective, execution can carry on without any surprising state modifications by some descendant call further down the chain.

To give a metaphor, you could think of each call frame as a run through the plot of The Cat in the Hat or Risky Business where the protagonists make a mess (in their call frame), then must restore order before the story ends (the function returns).

Now, given this high-level goal, there are multiple ways to achieve it as your snippets show. One way is to allocate some sort of data structure such as a list object once, then push (append) and pop on it per call frame, mirroring the call stack.
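To see why the explicit pop matters in this style, here's a small illustrative sketch (not from the question; the subset-generation task and function names are my own) contrasting a version that forgets to restore the shared list with one that pops correctly:

```python
def subsets_buggy(nums):
    """Generate subsets, but forget to restore shared state."""
    res, path = [], []

    def walk(i):
        if i == len(nums):
            res.append(list(path))
            return
        path.append(nums[i])  # choose nums[i]
        walk(i + 1)
        # missing path.pop(): the chosen element leaks into the
        # "skip nums[i]" branch below, corrupting sibling calls
        walk(i + 1)

    walk(0)
    return res

def subsets_fixed(nums):
    """Generate subsets, restoring state before the sibling branch."""
    res, path = [], []

    def walk(i):
        if i == len(nums):
            res.append(list(path))
            return
        path.append(nums[i])  # choose nums[i]
        walk(i + 1)
        path.pop()            # restore state, then skip nums[i]
        walk(i + 1)

    walk(0)
    return res

print(subsets_buggy([1, 2]))  # duplicates and phantom elements
print(subsets_fixed([1, 2]))  # the four real subsets
```

The buggy version produces results like [1, 2, 2] that don't correspond to any real subset, precisely because a child frame violated the invariant and its parent carried on with corrupted state.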

Another approach is to copy state when spawning child calls so that each frame receives fresh versions of the relevant data, and no modifications they make will upset their parent state. This typically requires a bit less bookkeeping and could be less susceptible to subtle bugs than mutating a single data structure, but tends to have higher overhead due to memory allocator and garbage collector action and copying data structures for every frame.
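As a concrete sketch of this copy-on-spawn style (the function name is mine), the second solution could be rewritten so that each child call receives a fresh string, making any pop/restore bookkeeping unnecessary:

```python
def letter_case_permutation_copying(s):
    """Each recursive call gets its own copy of the partial result,
    so no explicit state restoration is needed."""
    res = []

    def backtrack(idx, path):
        if idx == len(s):
            res.append(path)
            return
        ch = s[idx]
        if ch.isalpha():
            # strings are immutable: path + ch builds a new string,
            # leaving the parent's path untouched
            backtrack(idx + 1, path + ch.lower())
            backtrack(idx + 1, path + ch.upper())
        else:
            backtrack(idx + 1, path + ch)

    backtrack(0, "")
    return res
```

Because Python strings are immutable, `path + ch` allocates a new object per call, which is exactly the memory overhead described above; in exchange, a frame cannot possibly corrupt its parent's state.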

In short, don't confuse the high-level goal of keeping state intact per call frame and how code goes about implementing it.


As far as backtracking versus DFS, I think of backtracking as a specialized DFS that prunes off branches of the search tree that a heuristic determines aren't worth exploring further because they cannot lead to a solution. As before, how the code actually achieves state restoration to implement the backtracking (copying data structures or pushing/popping an explicit stack) shouldn't change the fact that it's the same fundamental algorithmic technique.
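To make the pruning distinction concrete, here's a sketch of a problem where backtracking genuinely abandons branches (a combination-sum search over positive candidates; the example and names are mine, not from the question):

```python
def combination_sum(candidates, target):
    """Find all combinations of distinct positive candidates that sum
    to target, pruning branches that cannot possibly succeed."""
    candidates = sorted(candidates)
    res, path = [], []

    def backtrack(start, remaining):
        if remaining == 0:
            res.append(list(path))
            return
        for i in range(start, len(candidates)):
            if candidates[i] > remaining:
                break  # prune: sorted, so every later candidate is larger
            path.append(candidates[i])
            backtrack(i + 1, remaining - candidates[i])
            path.pop()  # restore state before trying the next candidate

    backtrack(0, target)
    return res
```

Here the `break` is the heuristic pruning: whole subtrees are skipped because the partial sum already proves they contain no solution. The letter-case walk, by contrast, descends into every branch unconditionally.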

I've seen the term "backtracking" applied to permutation algorithms like this. Although the terminology may be fairly common, it seems like a misuse since the permutation algorithm is a full-state recursive walk that will always visit all of the nodes in the tree and isn't doing any intelligent heuristic pruning as backtracking does.
