A function to decide whether a given number of coins can form an exact dollar amount

Time:11-17

(Using loops or recursion) I'm trying to write a Python function where the user enters an amount of dollars (say 1.25) and a number of coins (say 6), and the function decides whether or not it is possible to form the exact dollar amount using exactly that many coins, assuming the coins are quarters (0.25), dimes (0.10), nickels (0.05) and pennies (0.01). The function can use any one of the coins multiple times, but the total number of coins used must equal the exact number passed to the function.
E.g., if we pass 1.00 dollar and 6 coins, it should return True because we can use (3 quarters 2 dimes 1 nickel).

  • 1.25 dollars using 5 coins: True >> (5 quarters)

  • 1.25 dollars using 8 coins: True >> (3 quarters 5 dimes)

  • 1.25 dollars using 7 coins: True >> (4 quarters 2 dimes 1 nickel)

  • I have the idea of the solution in my mind but couldn't turn it into Python code: the function starts iterating through the group of coins we have (beginning with the highest coin, 0.25) and multiplies each by the number passed. While the result is higher than the given amount of dollars, the number of coins is decremented by 1. When we reach a point where (number * coin) is less than the given amount of dollars, the amount becomes (the given amount - (number * coin)) and the number of coins becomes (the given number - the number used so far). I have been trying for a few days to turn this into Python code. This is what I've done so far:

def total(dollars, num):
    dollars = float(dollars)
    sofar = 0
    num = round(num, 2)
    coins = [0.25, 0.10, 0.05, 0.01]
    possible = False 
    if not possible:
        for x in range(len(coins)):
            if num * coins[x] == dollars:
                possible = True 
            elif num * coins[x] > dollars:
                num -= 1 
                sofar += 1
            else:
                dollars -= num * coins[x]
                num = sofar 

    return possible 


  • When I pass (1.25, 5) to the function >> True
  • (1.25, 6) >> False
  • (1.25, 7) >> False
  • (1.25, 8) >> False (which is a wrong return value)
    Thanks in advance
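To make the failure concrete, here is the arithmetic (a sketch in integer cents, not the posted code) showing why a single greedy pass cannot decide the (1.25, 8) case: once it commits to the maximum number of quarters, it never backtracks, yet the valid split uses fewer quarters.

```python
# Working in integer cents to avoid float issues.
cents, num = 125, 8

# A greedy pass takes as many quarters as fit...
greedy_quarters = cents // 25
print(greedy_quarters)  # 5 -> uses all 125 cents but only 5 of the 8 coins

# ...yet the valid 8-coin split uses fewer quarters:
print(3 * 25 + 5 * 10 == cents)  # True: 3 quarters + 5 dimes
```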

CodePudding user response:

Here's a working solution without recursion, though it does use a list comprehension. I'm not sure how large your coin set is expected to grow, and since this computes the sum of every combination it won't scale nicely.

from itertools import combinations_with_replacement
from math import isclose

list_of_coins = [0.1, 0.05, 0.25] # dimes, nickels, quarters
number_of_coins = 6 # as your example gives
sum_value = 1.0 # as your example gives

def will_it_sum(list_of_coins, number_of_coins, sum_value):
    list_of_combos = combinations_with_replacement(list_of_coins, number_of_coins)
    summed_list = [sum(item) for item in list_of_combos]
    # Compare with a tolerance: sums of binary floats like 0.1 and 0.05
    # rarely equal the target exactly, so == would miss valid combinations.
    summed_to_desired_value = [i for i in summed_list if isclose(i, sum_value)]
    return len(summed_to_desired_value) > 0

number_of_coins = 7
sum_value = 1.25

print(will_it_sum(list_of_coins, number_of_coins, sum_value))
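To gauge that scaling concern: the number of multisets of k coins drawn from n denominations is C(n + k - 1, k), which is exactly how many candidate sums this approach evaluates. A quick check with Python's `math.comb` (figures below are for the 4-denomination case):

```python
from math import comb

# Combinations with replacement: C(n + k - 1, k) candidate sums
# for n denominations and k coins.
n = 4
for k in (6, 20, 50):
    print(k, comb(n + k - 1, k))  # 84, 1771, 23426
```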

CodePudding user response:

Below is a solution that uses what Python already has built in. It will probably not go over well as an exercise solution.

from itertools import combinations_with_replacement

def total(dollars, num):
    cents = round(dollars * 100)  # integer cents avoids float comparisons
    coins = [25, 10, 5, 1]
    return any(sum(comb) == cents for comb in combinations_with_replacement(coins, num))
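For reference, this is how that version behaves on the amounts from the question (the function is repeated so the snippet runs on its own). Note that, contrary to the question's assumption, 1.25 with 7 coins is achievable: 4 quarters, 2 dimes, 1 nickel.

```python
from itertools import combinations_with_replacement

def total(dollars, num):
    cents = round(dollars * 100)  # integer cents avoids float comparisons
    coins = [25, 10, 5, 1]
    return any(sum(comb) == cents
               for comb in combinations_with_replacement(coins, num))

print(total(1.25, 5))  # True  (5 quarters)
print(total(1.25, 6))  # False
print(total(1.25, 7))  # True  (4 quarters, 2 dimes, 1 nickel)
print(total(1.25, 8))  # True  (3 quarters, 5 dimes)
```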

Alternatively, if the combination should be returned:

from itertools import combinations_with_replacement

def total(dollars, num):
    cents = round(dollars * 100)  # integer cents avoids float comparisons
    coins = [25, 10, 5, 1]
    try:
        return next(filter(lambda comb: sum(comb) == cents, combinations_with_replacement(coins, num)))
    except StopIteration:
        return None
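A standalone usage sketch of this variant (function repeated for completeness): `combinations_with_replacement` yields tuples in the order of the coins list, so the first match found favors the larger coins.

```python
from itertools import combinations_with_replacement

def total(dollars, num):
    cents = round(dollars * 100)
    coins = [25, 10, 5, 1]
    try:
        # First exact-sum combination in the generator's order, else None.
        return next(filter(lambda comb: sum(comb) == cents,
                           combinations_with_replacement(coins, num)))
    except StopIteration:
        return None

print(total(1.25, 5))  # (25, 25, 25, 25, 25)
print(total(1.25, 6))  # None
print(total(1.25, 7))  # (25, 25, 25, 25, 10, 10, 5)
```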

CodePudding user response:

This solves the problem without generating all combinations. The idea: start out using as many of the largest coin as you can, then recursively see if you can fill in with the smaller coins. As soon as you get a match, you win.

Note that I use a helper so I can convert the dollars to integer cents, to avoid floating-point issues.

coins = [25, 10, 5, 1]

def totalhelp(cents, num, which=0):
    if which >= len(coins):
        return False
    cts = coins[which]
    # What's the most of this coin we can use?
    maxn = min(cents // cts, num)
    for i in range(maxn, -1, -1):
        amt = i * cts
        # A match counts only if it also uses exactly the required coins.
        if amt == cents and i == num:
            return True
        if totalhelp(cents - amt, num - i, which + 1):
            return True
    return False

def total(dollars, num):
    cents = int(dollars * 100 + 0.5)
    return totalhelp(cents, num)

print(total(1.25, 4))
print(total(1.25, 5))
print(total(1.25, 6))
print(total(1.25, 7))
print(total(1.25, 8))

Output:

False
True
False
True
True
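If the inputs grow, the same recursion can be memoized so that repeated (cents, coins-left) states are computed only once. A sketch of that idea (my own variant, not the answer above; `can_make` and `COINS` are names introduced here) using `functools.lru_cache`:

```python
from functools import lru_cache

COINS = (25, 10, 5, 1)  # a tuple, so all cached arguments stay hashable

@lru_cache(maxsize=None)
def can_make(cents, num, which=0):
    # Success means the amount AND the coin count are both exhausted.
    if cents == 0:
        return num == 0
    if num == 0 or which >= len(COINS):
        return False
    cts = COINS[which]
    maxn = min(cents // cts, num)
    # Try every count of the current coin, largest first, then recurse
    # on the remaining amount with the smaller denominations.
    return any(can_make(cents - i * cts, num - i, which + 1)
               for i in range(maxn, -1, -1))

print(can_make(125, 5))  # True
print(can_make(125, 6))  # False
```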