Controlling Mutation in 39bit string as a candidate solution in genetic algorithm


I am working on an optimization problem. I have X ambulance locations, where X ranges from 1 to 39.

There are 39 numbers [ambulance locations] to choose from (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39), and we choose 3 of them since I have 3 ambulances.

I can only place ambulances in three of the 39 locations (restriction). Assume that I want to place my ambulances at positions 5, 19, and 31 -- Chromosome 1 = [000010000000000000100000000000100000000]. In this representation, bits 5, 19, and 31 are turned on.
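For reference, a minimal sketch of this encoding in Python (build_chromosome is an illustrative helper name, not from the original post):

def build_chromosome(locations, n_bits=39):
    # turn a list of 1-based locations into an n_bits-long bit string
    bits = ["0"] * n_bits
    for loc in locations:
        bits[loc - 1] = "1"  # 1-based location -> 0-based index
    return "".join(bits)

print(build_chromosome([5, 19, 31]))  # 000010000000000000100000000000100000000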

Is it possible to flip a bit close to the original solution? For example, keeping two bits in their original positions and randomly moving the third bit to a nearby position. It is important that exactly 3 of the 39 bits stay on. I want a controlled mutation that produces only a small change.

My goal is to make small changes, since each bit represents a location. The purpose of the mutation is to make a small change and then evaluate the result. So the code should do something like this: given CS1 = (111000000000000000000000000000000000000), I want something like (011010000000000000000000000000000000000), or (011001000000000000000000000000000000000), or (010110000000000000000000000000000000000), or (101010000000000000000000000000000000000), or (001011000000000000000000000000000000000), etc.

To achieve this mutation, what would be a good way to randomly move the current positions to other positions while staying within the 1-39 range (restriction)?

CodePudding user response:

You could use numpy and do something like this:

import numpy

s = "111" + "0" * 36  # 39-bit example, same as CS1 above

def mutate(s):
    arr = numpy.array(list(s))
    mask = arr == "1"
    indices_of_ones = numpy.argwhere(mask).flatten()
    # pick one of the "on" bits to move
    pick_one_1_index = numpy.random.choice(indices_of_ones)
    # candidate destinations are the positions that are currently "off"
    potential_swaps = numpy.argwhere(~mask).flatten()
    distances = numpy.abs(pick_one_1_index - potential_swaps)
    probabilities = 1 / distances  # higher probability the closer a position is to the original
    # probabilities = 1 / distances**2  # even stronger bias toward nearby positions
    pick_one_0_index = numpy.random.choice(potential_swaps, p=probabilities / probabilities.sum())
    arr[pick_one_1_index] = "0"
    arr[pick_one_0_index] = "1"
    return "".join(arr)
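A single call on the CS1 string looks like this (the exact output will vary, because the mutation is random):

print(mutate(s))
# e.g. 110100000000000000000000000000000000000  (one of the three set bits moved to a nearby position)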

There is likely a more efficient solution, but this keeps exactly three bits on while biasing the moved bit toward nearby positions.

Alternatively, you can raise the distances to a power to penalize distance more strongly (a plain scalar multiplier cancels out once the probabilities are normalized).
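A quick standalone check of that effect (the distance values below are made up purely for illustration):

import numpy

distances = numpy.array([1, 2, 3, 4, 5])  # example distances, for illustration only
for power in (1, 2, 3):  # illustrative exponents
    weights = 1 / distances**power
    print(power, numpy.round(weights / weights.sum(), 3))
# a scalar multiplier such as 1/(2*distances) gives exactly the same normalized values as power 1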

If you want to test different powers for the probabilities, you can measure how far the moved bit travels with something like:

def score_solution(s1, s2):
    # positions of the "on" bits in each string
    ix1 = set(i for i, v in enumerate(s1) if v == "1")
    ix2 = set(i for i, v in enumerate(s2) if v == "1")
    # after a single swap the symmetric difference holds exactly two indices:
    # the bit that was turned off and the bit that was turned on
    a, b = ix1 ^ ix2
    return numpy.abs(a - b)  # how far the bit moved

def get_solution_score_quantiles(sample_size=100, quantiles=[0.25, 0.5, 0.75]):
    scores = []
    for i in range(sample_size):
        s1 = mutate(s)
        scores.append(score_solution(s, s1))
    return numpy.quantile(scores, quantiles)

print(get_solution_score_quantiles(50))
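If you want to compare exponents directly with that score, one sketch (mutate_with_power is just a variant of mutate above with the exponent exposed as a parameter, not part of the original answer) is:

def mutate_with_power(s, power=1):
    # same logic as mutate(), but the exponent on the distances is a parameter
    arr = numpy.array(list(s))
    mask = arr == "1"
    indices_of_ones = numpy.argwhere(mask).flatten()
    pick_one_1_index = numpy.random.choice(indices_of_ones)
    potential_swaps = numpy.argwhere(~mask).flatten()
    distances = numpy.abs(pick_one_1_index - potential_swaps)
    probabilities = 1 / distances**power  # larger power -> stronger bias toward nearby positions
    pick_one_0_index = numpy.random.choice(potential_swaps, p=probabilities / probabilities.sum())
    arr[pick_one_1_index] = "0"
    arr[pick_one_0_index] = "1"
    return "".join(arr)

for power in (1, 2, 3):
    scores = [score_solution(s, mutate_with_power(s, power)) for _ in range(100)]
    print(power, numpy.quantile(scores, [0.25, 0.5, 0.75]))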