Fast algorithm for counting intersection of sorted arrays with cyclic shift
I'm looking for a faster algorithm that solves the following problem:

  • Input: Two arrays of integers A and B in the range [0, N), both of fixed length d, assumed to be given in sorted order with no repeated elements.
  • Output: Whether the largest possible intersection (i.e. number of elements in common) between A and a cyclic shift of B is greater than some specified threshold t. By a cyclic shift of B, I mean the array [(b + s) % N for b in B] for some integer s.

If it matters, I'm implementing this in Rust (though I'm more interested in general algorithmic improvements than language-specific optimizations), and in practice, t will be less than 10, d will typically be in the range of 15 to 150, and N will be roughly on the order of 2*d*d.

My current algorithm is essentially as follows (note, d and N are constants defined at compile-time):

fn max_shifted_overlap_geq(A: [u32; d], B: [u32; d], threshold: u32) -> bool {
    for i in 0..d {
        for j in 0..d {
            let s = N + A[i] - B[j];
            let mut B_s = [0; d];
            for k in 0..d {
                B_s[k] = (B[k] + s) % N;
            }
            B_s.sort();
            // Actually, I do an insertion-sort in place as I construct B_s,
            // but I'm writing it this way here for simplicity.
            if sorted_intersection_count(&A, &B_s) >= threshold {
                return true;
            }
        }
    }
    false
}

So I'm only choosing shifts from the possible values of A[i] - B[j] (mod N), since a shift not of this form gives zero intersection, and then I just construct the cyclic shift of B and count the number of elements in common in a fairly naive way.

Is there a more efficient algorithm for this, keeping in mind the fairly small size of the arrays? In particular, is there a better way of finding shifts that are more likely to yield large overlaps?

Edit: To provide additional context (as requested below), this arises in the study of QC-MDPC codes: The arrays represent the supports of binary vectors that generate the circulant blocks of the parity-check matrix, and this condition on the intersection with cyclic shifts defines a class of "weak keys" with some cryptographic implications. (I initially didn't mention this because the problem seems interesting in isolation and doesn't require any knowledge of coding theory or cryptography.)

Edit 2: Fixed some typos in the code and switched to a better method of counting intersections of sorted lists. (Weirdly, I actually had used that improved algorithm in an earlier version and the code ran slower, but that might've been due to an implementation bug or now-fixed problems elsewhere in the code.)

Edit 3: For future reference of anyone who runs into a similar problem, here's my current implementation, using the key idea from virchau13's answer below plus some small additional optimizations. This seems quite efficient in practice. (I've renamed some variables for clarity—arr1 and arr2 for the input arrays, and LEN instead of d for the array length.)

fn relative_shifts(arr1: &[u32; LEN], arr2: &[u32; LEN]) -> [[u32; LEN]; LEN] {
    let n = N as u32;
    let mut shifts = [[0; LEN]; LEN];
    for i in 0..LEN {
        for j in 0..LEN {
            shifts[i][j] = if arr1[i] < arr2[j] {
                n + arr1[i] - arr2[j]
            } else {
                arr1[i] - arr2[j]
            }; // this equals (arr1[i] - arr2[j]) % n
        }
    }
    shifts
}
fn max_shifted_overlap_geq(arr1: &[u32; LEN], arr2: &[u32; LEN], threshold: u8) -> bool {
    let shifts = relative_shifts(arr1, arr2);
    let mut shift_counts = [0; N];
    for i in 0..LEN {
        for j in 0..LEN {
            let count = &mut shift_counts[shifts[i][j] as usize];
            *count += 1;
            if *count >= threshold {
                return true;
            }
        }
    }
    false
}

A couple implementation notes:

  1. This could easily be modified to produce the largest possible intersection as a value (by taking a maximum instead of short-circuiting when the threshold is exceeded) or a set of index pairs (by also appending the index pairs (i, j) to a list associated to each shift s as it's computed).
  2. We do not need to assume the arrays are sorted for this to work. For that matter, I don't think we need to assume the arrays are of the same length, either, though I haven't tested this for arrays of different lengths.
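To illustrate note 1, here's a rough sketch of the max-valued variant (this is only an illustration: LEN and N are small placeholder values here, not the actual compile-time constants from my application):

```rust
// Variant of max_shifted_overlap_geq that returns the largest intersection
// size over all cyclic shifts, instead of short-circuiting at a threshold.
// LEN and N are illustrative stand-ins for the compile-time constants.
const LEN: usize = 5;
const N: usize = 50;

fn max_shifted_overlap(arr1: &[u32; LEN], arr2: &[u32; LEN]) -> u32 {
    let n = N as u32;
    let mut shift_counts = [0u32; N];
    let mut best = 0;
    for i in 0..LEN {
        for j in 0..LEN {
            // (arr1[i] - arr2[j]) mod N, written to avoid unsigned underflow
            let s = if arr1[i] < arr2[j] {
                n + arr1[i] - arr2[j]
            } else {
                arr1[i] - arr2[j]
            };
            let count = &mut shift_counts[s as usize];
            *count += 1;
            best = best.max(*count);
        }
    }
    best
}

fn main() {
    let a = [0, 1, 2, 10, 20];
    let b = [5, 6, 7, 15, 30];
    // Shifting b by s = 45 (i.e. -5 mod 50) maps 5, 6, 7, 15 to 0, 1, 2, 10.
    println!("{}", max_shifted_overlap(&a, &b)); // prints 4
}
```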

CodePudding user response:

I think it's possible to get the algorithm down to O(d^2). This is just (untested) speculation.

For two elements A[i] and B[j] to be cyclically equal, (B[j] + s) % N must equal A[i]. If s = s_orig satisfies this equation, then s = s_orig % N also satisfies it, meaning we can restrict s to 0 <= s < N. Using this restriction, we can show that the two elements are cyclically equal if and only if B[j] + s equals either A[i] or A[i] + N (since 0 <= A[i], B[j] < N), which is the same as saying that s must equal either A[i] - B[j] or N + A[i] - B[j]. However, since 0 <= s < N, the first expression only makes sense when the difference is non-negative and the second only when it is negative; i.e. we can say that s must equal the expression if A[i] - B[j] < 0 { N + A[i] - B[j] } else { A[i] - B[j] }. Another way to write this is s = (N + A[i] - B[j]) % N.

Note that since there is exactly one value of s for each (i, j) pair, two pairs (i1, j1) and (i2, j2) overlap under the same shift if and only if their s values are equal.

So here's the final algorithm:

  1. Start by enumerating all possible cyclic differences s between A and B and putting them in a 2D array possible_values: [[usize; d]; d], where possible_values[i][j] = (N + A[i] - B[j]) % N. This is O(d^2).

  2. Next, find all unique s values (i.e. unique values of possible_values[i][j]) and store, for each one, the list of indexes that produce it in a hashmap unique_possible_values: HashMap<usize, Vec<(usize, usize)>>. That sentence isn't very clear, so here's what I mean:

let mut unique_possible_values: HashMap<usize, Vec<(usize, usize)>> = HashMap::new();
for i in 0..d {
    for j in 0..d {
        let indexes_with_same_value = 
            unique_possible_values
                .entry(possible_values[i][j])
                .or_insert(Vec::new());
        indexes_with_same_value.push((i, j));
    }
}

In other words, each entry of the hashmap stores the list of 2D indexes (i,j) that share the same possible_values[i][j] value. This is O(d^2).

  3. Then, for each unique s value (for (s, indexes) in &unique_possible_values), count the number of cyclically equal elements it has. This equals the number of unique i-values (equivalently, the number of unique j-values), which can be computed in O(indexes.len()) time. I'm not going to write the code out for this, but it shouldn't be difficult, and it's O(d^2) overall (since each 2D index you iterate over occurs exactly once).

  4. Take the maximum of all the counts from step 3. This is worst-case O(d^2) and average-case significantly lower. This final value is the maximum possible size of the intersection of A with a cyclic shift of B.

  5. Check whether that value meets the threshold. If it does, return true; otherwise, return false.

This algorithm basically enumerates all possible s-values and calculates the maximum intersection length, but in an efficient manner.
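Putting the steps together, an (untested) sketch might look like the following, with small placeholder constants D and N standing in for the question's compile-time values:

```rust
use std::collections::{HashMap, HashSet};

// Illustrative constants; in the real code these are compile-time parameters.
const D: usize = 4;
const N: usize = 40;

fn max_cyclic_intersection(a: &[usize; D], b: &[usize; D]) -> usize {
    // Steps 1-2: bucket every index pair (i, j) by its shift
    // s = (N + a[i] - b[j]) % N. No underflow since b[j] < N.
    let mut unique_possible_values: HashMap<usize, Vec<(usize, usize)>> = HashMap::new();
    for i in 0..D {
        for j in 0..D {
            let s = (N + a[i] - b[j]) % N;
            unique_possible_values.entry(s).or_insert(Vec::new()).push((i, j));
        }
    }
    // Steps 3-4: for each shift, count matched elements (unique i-values)
    // and take the maximum over all shifts.
    let mut best = 0;
    for (_s, indexes) in &unique_possible_values {
        let distinct_i: HashSet<usize> = indexes.iter().map(|&(i, _)| i).collect();
        best = best.max(distinct_i.len());
    }
    best
}

fn main() {
    let a = [0, 1, 2, 10];
    let b = [5, 6, 7, 15];
    println!("{}", max_cyclic_intersection(&a, &b)); // prints 4
}
```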

CodePudding user response:

You can convert each input array into a new vector of length N, with a 1 at positions whose values occur in the array and a 0 everywhere else.

Then calculate the cyclic cross correlation between the two vectors, and find the highest value -- that's the size of the maximum intersection.

The time is dominated by the correlation calculation, which takes O(N log N) time using FFT techniques. That is better than O(d^3), but it's pretty complicated, so whether it ends up faster will probably depend on whether N is a power of 2 and on the specific implementation. If you have a GPU available for the FFT, it can be much faster.
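To show the indicator-vector idea without pulling in an FFT library, here is a direct version that evaluates the circular cross-correlation lag by lag (N is a placeholder constant; an FFT-based version would replace the per-lag loop and drop the cost to O(N log N)):

```rust
// Direct O(N*d) illustration: the circular cross-correlation of the two 0/1
// indicator vectors at lag s is exactly the size of the intersection of A
// with B shifted by s. N is an assumed small constant for the example.
const N: usize = 40;

fn max_intersection_by_correlation(a: &[u32], b: &[u32]) -> usize {
    // Build the 0/1 indicator vector for A.
    let mut ind_a = [false; N];
    for &x in a {
        ind_a[x as usize] = true;
    }
    // For each shift s, count elements of B that land on elements of A,
    // then take the maximum over all shifts.
    (0..N)
        .map(|s| {
            b.iter()
                .filter(|&&x| ind_a[(x as usize + s) % N])
                .count()
        })
        .max()
        .unwrap()
}

fn main() {
    println!("{}", max_intersection_by_correlation(&[0, 1, 2, 10], &[5, 6, 7, 15])); // prints 4
}
```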
