Divide and Conquer vs Backtracking


Let’s use the problem LeetCode 322. Coin Change as an example.

I know it is best solved by using Dynamic Programming, but I want to focus on my Brute Force solution:

from typing import List


class Solution:
    def coinChange(self, coins: List[int], amount: int) -> int:

        def helper(amount):
            # Overshot the target: this branch cannot be completed.
            if amount < 0:
                return float('inf')

            # Exact change reached: no further coins needed.
            if amount == 0:
                return 0

            # Try every coin and keep the cheapest way to finish from here.
            curr_min = float('inf')
            for coin in coins:
                curr_min = min(curr_min, helper(amount - coin) + 1)

            return curr_min

        ans = helper(amount)
        return -1 if ans == float('inf') else ans

The recursion tree looks like: [recursion tree image]

I can say it is Divide and Conquer: we are dividing the problem into smaller sub-problems, solving them individually, and using those individual results to construct the result for the original problem.

I can also say it is Backtracking: we are enumerating all combinations of coin frequencies which satisfy the constraints.

I know both are implemented via Recursion, but I would like to know which paradigm my Brute Force solution belongs to: Divide and Conquer or Backtracking.

CodePudding user response:

To me this doesn't fit with either paradigm.

Backtracking, to me, is associated with reaching a point where the candidate cannot be developed further. Here, though, we develop it to its end (infinity), and we don't throw it away; we use it in comparisons.

Divide and conquer I associate with a division into a relatively small number of candidate groups (the classic example is two, like binary search). To call each path in a recursion a group for the sake of Divide and Conquer would lose the latter's meaning.
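For contrast, here is a minimal sketch of binary search (my own illustrative code, not from the question), the classic case I mean: each step splits the problem into two halves and conquers only one of them.

def binary_search(items, target):
    # Classic divide and conquer: split the range in half, keep one half.
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid           # found the target
        if items[mid] < target:
            lo = mid + 1         # discard the left half
        else:
            hi = mid - 1         # discard the right half
    return -1                    # target is not present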

CodePudding user response:

The most practical answer is that it doesn't matter. The safest answer is recursion. My best interpretation is that it's backtracking.

I think the options here are recursion, backtracking, divide-and-conquer, and dynamic programming.

Recursion is the most general of these, encompassing backtracking, D&C, and DP. If the solution indeed has elements of both backtracking and D&C, then recursion would be the best answer, as it contains both.

In Skiena's ADM (Section 5.3.1), it says:

A typical divide-and-conquer algorithm breaks a given problem into a smaller pieces, each of which is of size n/b.

By this interpretation it doesn't fit: a is the number of pieces and each piece has size n/b, whereas here we branch once per coin and the resulting sub-amounts are all of different sizes.

In Erickson's Algorithms (section 1.6), it says:

divide and conquer:

  1. Divide the given instance of the problem into several independent smaller instances of exactly the same problem.

So in this case, according to the recursion tree, the smaller instances are not always independent (they overlap).
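To make the overlap concrete, caching the shared sub-amounts is exactly what a memoized solution exploits. A rough sketch (the function name is mine, not from the question):

from functools import lru_cache
from typing import List

def coin_change_memo(coins: List[int], amount: int) -> int:
    # Same recursion as the question's brute force, but each sub-amount
    # is computed once and reused by every path that reaches it.
    @lru_cache(maxsize=None)
    def helper(remaining):
        if remaining < 0:
            return float('inf')
        if remaining == 0:
            return 0
        return min(helper(remaining - coin) + 1 for coin in coins)

    ans = helper(amount)
    return -1 if ans == float('inf') else ans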

Which leaves backtracking. Erickson describes this 'recursive strategy' as follows:

A backtracking algorithm tries to construct a solution to a computational problem incrementally, one small piece at a time.

This seems general enough that all DP problems fit under it. The provided code can be said to backtrack when a solution path fails (when the remaining amount goes negative and the branch returns infinity).

Additionally, according to Wikipedia:

It is often the most convenient technique for parsing, for the knapsack problem and other combinatorial optimization problems.

Since Coin Change is an Unbounded Knapsack type problem, it fits into the description of backtracking.
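To illustrate that reading, the same search can be rewritten so that it visibly builds a combination one coin at a time and abandons (backtracks from) any partial combination that cannot lead to a better answer. This is only a sketch of the interpretation, not the code from the question:

from typing import List

def coin_change_backtrack(coins: List[int], amount: int) -> int:
    # Illustrative backtracking-style rewrite: extend a partial combination,
    # prune it as soon as it cannot beat the best complete one found so far.
    best = float('inf')

    def extend(remaining, used):
        nonlocal best
        if remaining == 0:
            best = min(best, used)   # a complete combination
            return
        if remaining < 0 or used + 1 >= best:
            return                   # dead end: abandon this partial solution
        for coin in coins:
            extend(remaining - coin, used + 1)   # choose one more coin
            # the choice is implicitly undone when the call returns

    extend(amount, 0)
    return -1 if best == float('inf') else best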

CodePudding user response:

A complication in categorizing your algorithm is that there aren’t clear, well-defined boundaries between different classes of algorithms and different people might have slightly different definitions in mind.

For example, generally speaking, divide-and-conquer algorithms involve breaking the problem apart into non-overlapping subproblems. (See, for example, mergesort, quicksort, binary search, closest pair of points, etc.) In that sense, your algorithm doesn’t nicely map onto the divide-and-conquer paradigm, since the subproblems you’re considering involve some degree of overlap in the subproblems they solve. (Then again, not all divide-and-conquer algorithms have this property. See, for example, stoogesort.)

Similarly, backtracking algorithms usually, but not always, work by committing to a decision, recursively searching to see whether a solution exists given that decision, then unwinding the choice if it turns out not to lead to a solution. Your algorithm doesn’t have this property, since it explores all options and then takes the best. (When I teach intro programming, I usually classify algorithms this way. But my colleagues sometimes describe what you’re doing as backtracking!)
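For comparison, backtracking in that stricter sense usually answers a question like "is there a solution at all?" and stops at the first decision sequence that works. A hypothetical sketch along those lines (for instance, "can the amount be made with at most k coins?"):

from typing import List

def can_make(coins: List[int], amount: int, k: int) -> bool:
    # Commit to a coin, recurse, and only unwind the choice if it fails.
    if amount == 0:
        return True
    if amount < 0 or k == 0:
        return False
    for coin in coins:
        if can_make(coins, amount - coin, k - 1):
            return True      # this decision worked; no need to try the rest
    return False             # every choice failed; the caller backtracks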

I would classify your algorithm as belonging to a different family: exhaustive search. The algorithm you’ve proposed essentially works by enumerating all possible ways of making change, then returning the one that uses the fewest coins. Exhaustive search algorithms are ones that work by trying all possible options and returning the best, and I think that’s the best way of classifying your strategy.
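Making that description literal, an exhaustive-search version would generate every way of making change and then return the size of the smallest one. A sketch under those assumptions (names are mine):

from typing import Iterator, List, Tuple

def all_ways(coins: List[int], amount: int) -> Iterator[Tuple[int, ...]]:
    # Yield every combination of coins (with repetition) summing to amount.
    # Coins are chosen in non-decreasing index order to avoid reorderings.
    def rec(remaining, start, chosen):
        if remaining == 0:
            yield chosen
            return
        for i in range(start, len(coins)):
            if coins[i] <= remaining:
                yield from rec(remaining - coins[i], i, chosen + (coins[i],))

    yield from rec(amount, 0, ())

def coin_change_exhaustive(coins: List[int], amount: int) -> int:
    # Enumerate all ways, then take the one that uses the fewest coins.
    return min((len(way) for way in all_ways(coins, amount)), default=-1)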
