Recursion function, random choices with probability from list


I need to make a simulator that, for an input list, builds a list of lists whose elements are random choices from rdm_lst. I have:

import numpy as np

lst = ["a", "a", "a"]
rdm_lst = ["a", "b", "c"]

def simulator(lst, rdm_lst):
    sim = []
    for i in lst:
        if i == "a":
            sim.append(np.random.choice(rdm_lst, size=1, p=[0.6, 0.2, 0.2]).tolist()[0])
        elif i == "b":
            sim.append(np.random.choice(rdm_lst, size=1, p=[0.2, 0.6, 0.2]).tolist()[0])
        elif i == "c":
            sim.append(np.random.choice(rdm_lst, size=1, p=[0.2, 0.2, 0.6]).tolist()[0])
    return sim

simulator(lst, rdm_lst)

The output is one list. The problem I have is that I do not know how to repeat this function k times, getting as a result a list of lists, where the input of each simulator() call is the previous list, not the first one. I tried putting:

return simulator(sim)

instead of:

return sim

but an error occurs. Kindly help.

CodePudding user response:

It sounds like you are trying to implement a Markov chain. (As for the error: simulator is defined with two parameters, so simulator(sim) raises a TypeError, and even with both arguments the unconditional recursion would never terminate.)

You could simplify your code a bit by defining your transitions with a dict (since your states are labeled and you are already using numpy; a pandas DataFrame would be even more intuitive here, but this will do).

import numpy as np

Px = {
    'a': np.array([0.6, 0.2, 0.2]),
    'b': np.array([0.2, 0.6, 0.2]),
    'c': np.array([0.2, 0.2, 0.6]),
}

def simulator(xs, Px, k=1):
    labels = list(Px)
    return [
        # draw a successor for each current state, rebind xs, keep the new list
        xs := [np.random.choice(labels, p=Px[x]) for x in xs]
        for _ in range(k)
    ]

Example

np.random.seed(0)
>>> simulator(['a', 'b', 'c'], Px, k=10)
[['a', 'b', 'c'],
 ['a', 'b', 'c'],
 ['a', 'c', 'c'],
 ['a', 'c', 'c'],
 ['a', 'c', 'a'],
 ['a', 'a', 'c'],
 ['b', 'c', 'c'],
 ['b', 'c', 'c'],
 ['a', 'c', 'a'],
 ['c', 'c', 'a']]

Explanation

Given xs, a list of initial states, we build a new list of states with the list comprehension [np.random.choice(labels, p=Px[x]) for x in xs]. We then assign this back to xs (with the "walrus operator" :=) and iterate k times, collecting each intermediate list.
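
Note that the walrus operator requires Python 3.8+. On older versions, an equivalent explicit loop (a sketch; simulator_loop is an illustrative name) does the same thing:

def simulator_loop(xs, Px, k=1):
    # same logic as simulator above, without the walrus operator
    labels = list(Px)
    out = []
    for _ in range(k):
        # draw one successor state per current state
        xs = [np.random.choice(labels, p=Px[x]) for x in xs]
        out.append(xs)
    return out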

Fun stuff

The more usual definition of a finite-state Markov chain is via the transition matrix:

P = np.c_[list(Px.values())]
>>> P
array([[0.6, 0.2, 0.2],
       [0.2, 0.6, 0.2],
       [0.2, 0.2, 0.6]])

(And you can map the labels separately, or use pandas with labels as index and columns).
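
For instance (a minimal sketch assuming pandas is installed; P_df is an illustrative name):

import pandas as pd

labels = list(Px)
P_df = pd.DataFrame(P, index=labels, columns=labels)

>>> P_df.loc['a']  # transition distribution out of state 'a'
a    0.6
b    0.2
c    0.2
Name: a, dtype: float64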

What is the expected state distribution after two transitions given an initial state (1,0,0) ('a')?

>>> np.array((1,0,0)) @ P @ P
array([0.44, 0.28, 0.28])

What is the terminal expected state distribution given an initial state (1,0,0)?

# approximation of v @ P^inf
>>> np.array((1,0,0)) @ np.linalg.matrix_power(P, int(10**9))
array([0.33333334, 0.33333334, 0.33333334])

More precisely: what is the stationary distribution (a state distribution such that it is unchanged by transition P)?
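
In matrix terms, we want a row vector e with e @ P == e and e.sum() == 1, i.e. a left eigenvector of P with eigenvalue 1; this is why the code below works on P.T, whose right eigenvectors are the left eigenvectors of P.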

w, v = np.linalg.eig(P.T)  # left eigen system
k = np.argmax(w)
assert np.allclose(w[k], 1), 'no stationary state!'
e = v[:, k]
e /= e.sum()
>>> e
array([0.33333333, 0.33333333, 0.33333333])

>>> e @ P
array([0.33333333, 0.33333333, 0.33333333])

So, the expected distribution after a large number of transitions is 1/3 for each state, regardless of the initial state. (This chain is irreducible and aperiodic: every state is reachable from every other, so a unique stationary distribution exists and the chain converges to it.)
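
As a quick empirical check (a sketch; the seed, number of walkers, and run length are arbitrary), we can run many chains with the simulator above and look at the state frequencies at the final step:

from collections import Counter

np.random.seed(1)                            # arbitrary seed
chains = simulator(['a'] * 1000, Px, k=200)  # 1000 parallel walkers, 200 steps
counts = Counter(chains[-1])                 # states at the final step
freqs = {s: counts[s] / 1000 for s in 'abc'}
# freqs should come out close to 1/3 for each state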
