Scale/resize a square matrix into a larger size whilst retaining the grid structure/pattern (Python)

Time:06-27

arr = [[1 0 0]    # 3x3
       [0 1 0]
       [0 0 1]]

largeArr = [[1 1 0 0 0 0]   # 6x6
            [1 1 0 0 0 0]
            [0 0 1 1 0 0]
            [0 0 1 1 0 0]
            [0 0 0 0 1 1]
            [0 0 0 0 1 1]]

As above, I want to retain the same 'grid' pattern while increasing the dimensions of the 2D array. How would I go about doing this? I assume the original matrix can only be scaled up by an integer factor n.

CodePudding user response:

You could use scipy.ndimage.zoom:

In [1]: import numpy as np

In [2]: from scipy import ndimage

In [3]: arr = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])

In [4]: ndimage.zoom(arr, 2.0)
Out[4]: 
array([[1, 1, 0, 0, 0, 0],
       [1, 1, 0, 0, 0, 0],
       [0, 0, 1, 1, 0, 0],
       [0, 0, 1, 1, 0, 0],
       [0, 0, 0, 0, 1, 1],
       [0, 0, 0, 0, 1, 1]])
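Note that zoom interpolates with cubic splines by default, which is why its output can deviate from pure block replication on other inputs (see the last answer below). A minimal sketch of forcing exact block replication with nearest-neighbour interpolation, assuming a SciPy version recent enough to support the grid_mode parameter:

```python
import numpy as np
from scipy import ndimage

arr = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])

# order=0 disables spline interpolation (nearest neighbour); grid_mode=True
# treats samples as pixel areas, so each cell becomes an exact n x n block.
large = ndimage.zoom(arr, 2, order=0, grid_mode=True, mode='grid-constant')
```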

CodePudding user response:

You can use repeat() twice on a NumPy array:

arr.repeat(2, 0).repeat(2, 1)

This outputs:

[[1 1 0 0 0 0]
 [1 1 0 0 0 0]
 [0 0 1 1 0 0]
 [0 0 1 1 0 0]
 [0 0 0 0 1 1]
 [0 0 0 0 1 1]]
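For completeness, a self-contained sketch of this approach, alongside np.kron with an all-ones block, which produces the same result for integer scale factors:

```python
import numpy as np

arr = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
n = 2

# Repeat each row n times, then each column n times.
via_repeat = arr.repeat(n, axis=0).repeat(n, axis=1)

# Kronecker product with an n x n block of ones is an equivalent one-liner.
via_kron = np.kron(arr, np.ones((n, n), dtype=arr.dtype))
```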

CodePudding user response:

scipy.ndimage.zoom, as in fabda01's answer, is the easiest to use, but as far as I know (if I understand the question correctly) its result depends on the contents of arr, and it would need to be modified to handle arbitrary inputs (as I show in my example below), if that is possible at all. You can use numba if performance is important (similar post), with no Python object mode and in parallel mode if needed (this code could be made faster with further optimizations):

import numba as nb
import numpy as np

@nb.njit           # or @nb.njit("int64[:, ::1](int64[:, ::1], int64)", parallel=True)
def numba_(arr, n):
    res = np.empty((arr.shape[0] * n, arr.shape[1] * n), dtype=np.int64)
    for i in range(arr.shape[0]):     # use nb.prange here in parallel mode
        for j in range(arr.shape[1]):
            res[n * i: n * (i + 1), n * j: n * (j + 1)] = arr[i, j]
    return res
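If numba is not installed, the same block-fill loop can be run as plain Python over a NumPy array (slower, but handy for checking the jitted result); this is a sketch, not part of the original answer:

```python
import numpy as np

def block_upscale(arr, n):
    """Plain-NumPy version of the loop above: each cell becomes an n x n block."""
    rows, cols = arr.shape
    res = np.empty((rows * n, cols * n), dtype=arr.dtype)
    for i in range(rows):
        for j in range(cols):
            res[n * i: n * (i + 1), n * j: n * (j + 1)] = arr[i, j]
    return res
```

Keeping the logic in an undecorated function like this also makes it easy to unit-test before jitting.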

Assume we have:

arr = [[0 0 0 1 1]
       [0 1 1 1 1]
       [1 1 0 0 1]
       [0 0 1 0 1]
       [0 1 1 0 1]]

The results of the proposed methods will be:

fabda01's answer:
[[0 0 0 0 0 0 0 0 0 1 1 1 1 1 1]
 [0 0 0 0 0 0 0 0 0 1 1 1 1 1 1]
 [0 0 0 0 1 1 1 1 1 1 1 1 1 1 1]
 [0 0 0 1 1 1 1 1 1 1 1 1 1 1 1]
 [0 0 1 1 1 1 1 1 1 1 1 1 1 1 1]
 [0 1 1 1 1 1 1 1 1 0 1 1 1 1 1]
 [1 1 1 1 1 1 1 0 0 0 0 0 1 1 1]
 [1 1 1 1 1 1 0 0 0 0 0 0 1 1 1]
 [1 1 1 1 1 0 0 0 0 0 0 0 0 1 1]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 1 1]
 [0 0 0 0 0 0 1 1 1 0 0 0 0 1 1]
 [0 0 0 0 0 1 1 1 1 0 0 0 0 1 1]
 [0 0 0 0 1 1 1 1 1 0 0 0 0 1 1]
 [0 0 0 1 1 1 1 1 1 0 0 0 0 1 1]
 [0 0 0 1 1 1 1 1 1 0 0 0 0 1 1]]

This solution and BrokenBenchmark's answer:
[[0 0 0 0 0 0 0 0 0 1 1 1 1 1 1]
 [0 0 0 0 0 0 0 0 0 1 1 1 1 1 1]
 [0 0 0 0 0 0 0 0 0 1 1 1 1 1 1]
 [0 0 0 1 1 1 1 1 1 1 1 1 1 1 1]
 [0 0 0 1 1 1 1 1 1 1 1 1 1 1 1]
 [0 0 0 1 1 1 1 1 1 1 1 1 1 1 1]
 [1 1 1 1 1 1 0 0 0 0 0 0 1 1 1]
 [1 1 1 1 1 1 0 0 0 0 0 0 1 1 1]
 [1 1 1 1 1 1 0 0 0 0 0 0 1 1 1]
 [0 0 0 0 0 0 1 1 1 0 0 0 1 1 1]
 [0 0 0 0 0 0 1 1 1 0 0 0 1 1 1]
 [0 0 0 0 0 0 1 1 1 0 0 0 1 1 1]
 [0 0 0 1 1 1 1 1 1 0 0 0 1 1 1]
 [0 0 0 1 1 1 1 1 1 0 0 0 1 1 1]
 [0 0 0 1 1 1 1 1 1 0 0 0 1 1 1]]

Performance

In my benchmarks, numba is the fastest (for large n, parallel mode does better); after that, BrokenBenchmark's repeat-based answer is faster than scipy.ndimage.zoom.
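Timings vary with the matrix size, the scale factor, and hardware; a rough way to reproduce the NumPy-vs-SciPy part of the comparison (numba left out so the snippet needs no extra dependency) might be:

```python
import timeit

import numpy as np
from scipy import ndimage

arr = np.random.randint(0, 2, (200, 200))
n = 4

# Time 20 runs of each approach on the same input.
t_repeat = timeit.timeit(lambda: arr.repeat(n, 0).repeat(n, 1), number=20)
t_zoom = timeit.timeit(lambda: ndimage.zoom(arr, n, order=0), number=20)
print(f"repeat: {t_repeat:.4f}s, zoom: {t_zoom:.4f}s")
```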
