The most efficient way, rather than using np.setdiff1d and np.in1d, to remove common values of 1D arrays

I need much faster code to remove values of a 1D array (array length ~ 10-15) that are common with another 1D array (array length ~ 1e5-5e5, rarely up to 7e5). Both are index arrays containing integers. There are no duplicates in the arrays, they are not sorted, and the order of the values must be kept in the main array after the modification. I know this can be achieved using np.setdiff1d or np.in1d (neither of which is supported by Numba in no-python mode), and other similar posts (e.g. this) do not have a much more efficient way to do so, but performance is important here because all the values in the main index array will gradually be removed in loops.

import numpy as np
import numba as nb

n = 500000
r = 10
arr1 = np.random.permutation(n)
arr2 = np.random.randint(0, n, r)

# @nb.jit
def setdif1d_np(a, b):
    return np.setdiff1d(a, b, assume_unique=True)


# @nb.jit
def setdif1d_in1d_np(a, b):
    return a[~np.in1d(a, b)]

There is another related post where norok2 proposed a solution for 2D arrays that is ~15 times faster (a hashing-like way using Numba) than the usual methods described there. This solution may be the best if it could be prepared for 1D arrays:

@nb.njit
def mul_xor_hash(arr, init=65537, k=37):
    result = init
    for x in arr.view(np.uint64):
        result = (result * k) ^ x
    return result


@nb.njit
def setdiff2d_nb(arr1, arr2):
    # : build `delta` set using hashes
    delta = {mul_xor_hash(arr2[0])}
    for i in range(1, arr2.shape[0]):
        delta.add(mul_xor_hash(arr2[i]))
    # : compute the size of the result
    n = 0
    for i in range(arr1.shape[0]):
        if mul_xor_hash(arr1[i]) not in delta:
            n += 1
    # : build the result
    result = np.empty((n, arr1.shape[-1]), dtype=arr1.dtype)
    j = 0
    for i in range(arr1.shape[0]):
        if mul_xor_hash(arr1[i]) not in delta:
            result[j] = arr1[i]
            j += 1
    return result
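
For context, here is a hypothetical usage of this 2D version (my own example, not from the referenced post; int64 rows are used so that each row's byte size is divisible by 8):

a = np.random.randint(0, 100, (1000, 4)).astype(np.int64)
b = a[::2].copy()
res = setdiff2d_nb(a, b)  # rows of `a` whose hash is not found among the rows of `b`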

I tried to prepare that for 1D arrays, but I have some problems/questions with it.

  • At first, I do not understand what mul_xor_hash exactly does, and whether init and k are arbitrarily selected or not
  • Why mul_xor_hash will not work without nb.njit:
  File "C:/Users/Ali/Desktop/test - Copy - Copy.py", line 21, in mul_xor_hash
    result = (result * k) ^ x
TypeError: ufunc 'bitwise_xor' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
  • I do not know how to implement mul_xor_hash on 1D arrays (if it could be), which I guess may make it even faster than for 2D arrays, so I broadcast the input arrays to 2D using [None, :], which gets the following error just for arr2:
    print(mul_xor_hash(arr2[0]))
ValueError: new type not compatible with array
  • And what does delta do?

I am searching for the most efficient way in this regard. In the absence of a better method than norok2's solution, how to prepare this solution for 1D arrays?

CodePudding user response:

What I found is that hashing does not help. It is just a trick for the 2D case, to convert 1D arrays into single numbers and put them as such in a set.

Below is the method of norok2 that I converted to 1D arrays (with a type signature added for faster compilation). Note that it is only slightly (20-30%) faster than the methods you already have. Of course, the first call is slower due to compilation; the timings apply from the second call on.


@nb.njit('int32[:](int32[:], int32[:])')
def setdiff1d_nb(arr1, arr2):
    # : build `delta` set using hashes
    delta = set()
    for i in range(0, arr2.shape[0]):
        delta.add(arr2[i])

    # : build the result
    # note: this size assumes every value of arr2 occurs (exactly once) in arr1,
    # which holds for the setup in the question
    n = len(arr1) - len(arr2)
    result = np.empty(n, dtype=arr1.dtype)
    j = 0
    for i in range(arr1.shape[0]):
        if arr1[i] not in delta:
            result[j] = arr1[i]
            j += 1
    return result

CodePudding user response:

Understanding the hash-based solution

At first, I do not understand what mul_xor_hash exactly does, and whether init and k are arbitrarily selected or not

mul_xor_hash is a custom hash function. Functions mixing xor and multiply (possibly with shifts) are known to be relatively fast for computing the hash of a raw data buffer. The multiplication tends to shuffle bits, and the xor is used to combine/accumulate the result into a fixed-size small value (i.e. the final hash). There are many different hashing functions. Some are faster than others, some cause more collisions than others in a given context. A fast hashing function that causes too many collisions can be useless in practice, as it would result in a pathological situation where all conflicting values need to be compared. This is why fast hash functions are hard to implement.

init and k are parameters chosen so that the hash is reasonably balanced. This is pretty common in such hash functions. k needs to be sufficiently big for the multiplication to shuffle bits, and it should typically also be a prime number (values like powers of two tend to increase collisions due to modular arithmetic behaviours). init plays a significant role only for very small arrays (e.g. with 1 item): it helps to reduce collisions by xoring the final hash with a non-trivial constant. Indeed, if arr.size == 1, then result = (init * k) ^ arr[0], where init * k is a constant. Having an identity hash function equal to arr[0] is known to be bad since it tends to result in many collisions (this is a complex topic, but put shortly, arr[0] can for example be divisible by the number of buckets in the hash table). Thus, init should be a relatively big number and init * k should also be a big non-trivial value (a prime number is a good target value).
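
For illustration, here is a small check of the single-item case described above (my own sketch, not from the original answer):

import numpy as np

init, k = 65537, 37
x = np.uint64(123)
# With a single 64-bit item, the loop body runs exactly once, so the hash
# degenerates to a xor with the fixed constant init * k == 2424869.
h = np.uint64(init * k) ^ x
assert h == np.uint64(2424869 ^ 123)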

Why mul_xor_hash will not work without nb.njit

It depends on the input. The input needs to be a 1D array and have a raw size in bytes divisible by 8 (e.g. 64-bit items, 2n × 32-bit ones, 4n × 16-bit ones or 8n × 8-bit ones), so that arr.view(np.uint64) succeeds. Here are some examples:

mul_xor_hash(np.random.rand(10))  # OK: 10 float64 items = 80 bytes, divisible by 8
mul_xor_hash(np.arange(10))       # OK when items are 4-byte ints (40 bytes), but then
                                  # 9 items (36 bytes, not divisible by 8) do not work

And what does delta do?

It is a set containing the hashes of the arr2 rows, so that matching rows can be found faster than by comparing them without hashes.

how to prepare this solution for 1D arrays?

AFAIK, hashes are only used to avoid comparing whole rows, and this is only needed because the input is a 2D array. In 1D, there is no such problem.

There is a big catch with this method: it only works if there are no hash collisions. Otherwise, the implementation wrongly assumes that values are equal even if they are not! @norok2 explicitly mentioned it in the comments though:

Note that the collision handling for the hashings should also be implemented


Faster implementation

Using the 2D solution of @norok2 for 1D arrays is not a good idea, since hashes will not make it faster the way they are used. In fact, a set already uses a hash function internally anyway. Not to mention that collisions need to be properly handled (which a set does).

Using a set is a relatively good idea since it makes the complexity O(n + m), where n = len(arr1) and m = len(arr2). That being said, if arr1 is converted to a set, then it will be too big to fit in the L1 cache (due to the size of arr1 in your case), resulting in slow cache misses. Additionally, the growing size of the set will cause values to be re-hashed, which is not efficient. If arr2 is converted to a set, then the many hash table fetches will not be very efficient since arr2 is very small in your case. This is why this solution is sub-optimal.

One solution is to split arr1 into chunks and then build a set based on the target chunk. You can then check whether a value is in the set efficiently. Building the set is still not very efficient due to the growing size. This problem is due to Python itself, which does not provide a way to reserve space for the data structure like other languages do (e.g. C++). One way to avoid this issue is simply to reimplement a hash table, which is not trivial and is cumbersome. Actually, Bloom filters can be used to speed up this process, since they can quickly determine on average that there is no collision between the two sets arr1 and arr2 (though they are not trivial to implement).
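
Here is a minimal serial sketch of that chunking idea (the function name and the chunk size of 4096 are illustrative choices of mine, not from the original answer):

@nb.njit('int32[:](int32[:], int32[:])')
def setdiff1d_nb_chunked(arr1, arr2):
    chunk_size = 4096  # illustrative: keeps the per-chunk set small and cache-friendly
    out = np.empty(arr1.size, dtype=arr1.dtype)
    cur = 0
    for start in range(0, arr1.size, chunk_size):
        end = min(start + chunk_size, arr1.size)
        # Build a bounded-size set from the current chunk of arr1
        chunk_set = set()
        for i in range(start, end):
            chunk_set.add(arr1[i])
        # Find which arr2 values appear in this chunk
        to_remove = set()
        for j in range(arr2.size):
            if arr2[j] in chunk_set:
                to_remove.add(arr2[j])
        # Copy the chunk while skipping the matched values (order is preserved)
        for i in range(start, end):
            if arr1[i] not in to_remove:
                out[cur] = arr1[i]
                cur += 1
    return out[:cur]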

Another optimization is to use multiple threads to compute the chunks in parallel, since they are independent. That being said, appending to the final array is not easy to do efficiently in parallel, especially since you do not want the order to be modified. One solution is to move the copy out of the parallel loop and do it serially, but this is slow and AFAIK there is currently no simple way to do that in Numba (since its parallelism layer is very limited). Consider using native languages like C/C++ for an efficient parallel implementation.
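
A rough sketch of that structure (parallel flagging, then a serial ordered copy) might look as follows; the function name is mine and this is only an illustration of the idea, not the original answer's code:

@nb.njit('int32[:](int32[:], int32[:])', parallel=True)
def setdiff1d_nb_parallel(arr1, arr2):
    # Parallel phase: flag the values to keep (iterations are independent)
    keep = np.empty(arr1.size, dtype=np.bool_)
    for i in nb.prange(arr1.size):
        keep[i] = arr1[i] not in arr2
    # Serial phase: compact the kept values so the original order is preserved
    out = np.empty(arr1.size, dtype=arr1.dtype)
    cur = 0
    for i in range(arr1.size):
        if keep[i]:
            out[cur] = arr1[i]
            cur += 1
    return out[:cur]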

In the end, hashing can be pretty complex, and the speed-up can be quite small compared to a naive implementation with two nested loops, since arr2 only has a few items and modern processors can compare values quickly using SIMD instructions (while hash-based methods can hardly benefit from them on mainstream processors). Unrolling can help to write a pretty simple and fast implementation. Again, unfortunately, Numba uses LLVM-JIT internally, which appears to fail to vectorize such simple code (certainly due to missing optimizations in either LLVM-JIT or LLVM itself). As a result, the non-vectorized code is finally a bit slower (rather than 4~10 times faster on a modern mainstream processor). One solution is to use C/C++ code instead (or possibly Cython).
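
For reference, the naive two-nested-loops baseline discussed here could look like this (my sketch, assuming the int32 setup used throughout):

@nb.njit('int32[:](int32[:], int32[:])')
def setdiff1d_nb_naive(arr1, arr2):
    out = np.empty(arr1.size, dtype=arr1.dtype)
    cur = 0
    for i in range(arr1.size):
        found = False
        for j in range(arr2.size):  # arr2 is tiny (~10-15 items)
            if arr1[i] == arr2[j]:
                found = True
                break
        if not found:
            out[cur] = arr1[i]
            cur += 1
    return out[:cur]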

Here is a serial implementation using a basic Bloom filter:

@nb.njit('uint32(int32)')
def hash_32bit_4k(value):
    return (np.uint32(value) * np.uint32(27_644_437)) & np.uint32(0x0FFF)

@nb.njit(['int32[:](int32[:], int32[:])', 'int32[:](int32[::1], int32[::1])'])
def setdiff1d_nb_faster(arr1, arr2):
    out = np.empty_like(arr1)
    bloomFilter = np.zeros(4096, dtype=np.uint8)
    for j in range(arr2.size):
        bloomFilter[hash_32bit_4k(arr2[j])] = True
    cur = 0
    for i in range(arr1.size):
        # If the Bloom filter entry is unset, arr1[i] is certainly not in arr2.
        # If it is set, there may be a false positive (conflict), so check arr2
        # explicitly to be sure before dropping the value.
        if bloomFilter[hash_32bit_4k(arr1[i])] and arr1[i] in arr2:
            continue
        out[cur] = arr1[i]
        cur += 1
    return out[:cur]
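
A hypothetical usage, based on the question's setup (the explicit casts are needed because the compiled signatures are int32-only):

n, r = 500000, 10
arr1 = np.random.permutation(n).astype(np.int32)
arr2 = np.random.randint(0, n, r).astype(np.int32)
res = setdiff1d_nb_faster(arr1, arr2)  # kept values of arr1, original order preserved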

Results

Here are results on my i5-9600KF-based machine:

setdif1d_np:          2.65 ms
setdif1d_in1d_np:     2.61 ms
setdiff1d_nb:         2.33 ms
setdiff1d_nb_faster:  0.73 ms

The provided implementation is about 3~4 times faster than the other ones.
