Efficient equality test for bitstrings with arbitrary offsets


I have more than 1e7 sequences of tokens, where each token can only take one of four possible values. In order to make this dataset fit into memory, I decided to encode each token in 2 bits, which allows me to store 4 tokens in a byte instead of just one (as when using a char for each token / a std::string per sequence). I store each sequence in a char array.

For some algorithm, I need to test arbitrary subsequences of two token sequences for exact equality. Each subsequence can have an arbitrary offset. The length is typically between 10 and 30 tokens (random) and is the same for the two subsequences.

My current method is to operate in chunks:

  • Copy up to 32 tokens (2 bits each) from each subsequence into a uint64_t. This is done in a loop over the tokens that selects the correct char in the array and writes the bits into the correct position of the uint64_t.
  • Compare the two uint64_t. If they are not equal, return.
  • Repeat until all tokens in the subsequences have been processed.
#include <climits>
#include <cstdint>

using Block = char;
constexpr int BitsPerToken = 2;
constexpr int TokenPerBlock = sizeof(Block) * CHAR_BIT / BitsPerToken;

Block getTokenFromBlock(Block b, int nt) noexcept
{
    // Extract the nt-th 2-bit token from a block (LSB-first packing)
    return (b >> (nt * BitsPerToken)) & ((1UL << BitsPerToken) - 1);
}

bool seqEqual(Block const* seqA, int startA, int endA, Block const* seqB, int startB, int endB) noexcept
{
    using CompareBlock = uint64_t;
    constexpr int TokenPerCompareBlock = sizeof(CompareBlock) * CHAR_BIT / BitsPerToken;

    const int len = endA - startA;

    int posA = startA;
    int posB = startB;

    CompareBlock curA = 0;
    CompareBlock curB = 0;
    for (int i = 0; i < len; ++i, ++posA, ++posB)
    {
        const int cmpIdx = i % TokenPerCompareBlock;
        const int blockA = posA / TokenPerBlock;
        const int idxA = posA % TokenPerBlock;
        const int blockB = posB / TokenPerBlock;
        const int idxB = posB % TokenPerBlock;

        // Every 32 tokens, compare the filled blocks and start new ones
        if ((i % TokenPerCompareBlock) == 0)
        {
            if (curA != curB)
                return false;

            curA = 0;
            curB = 0;
        }

        curA |= static_cast<CompareBlock>(getTokenFromBlock(seqA[blockA], idxA)) << (BitsPerToken * cmpIdx);
        curB |= static_cast<CompareBlock>(getTokenFromBlock(seqB[blockB], idxB)) << (BitsPerToken * cmpIdx);
    }

    return curA == curB;
}

I figured that this should be quite fast (comparing 32 tokens simultaneously), but it is more than twice as slow as using a std::string (with each token stored in a char) and its operator==.

I have looked into std::memcmp, but cannot use it because the subsequence might start somewhere within a byte (at a multiple of 2 bits, though).

Another candidate would be boost::dynamic_bitset, which basically implements the same storage format. However, it does not provide equality tests for subranges at arbitrary bit offsets.

How can I achieve fast equality tests using this compressed format?

CodePudding user response:

First of all, this is the kind of computation where the target processor, RAM, compiler and compiler flags can drastically change the results. Unfortunately, this critical information is not provided. Let's assume you use a fairly recent mainstream x86-64 processor, common DDR4-SDRAM, a relatively up-to-date compiler like Clang/GCC, and that optimizations are enabled (i.e. -O3 and possibly -march=native).

Clang and GCC use fast comparison functions for comparing strings: memcmp for GCC 12 and bcmp for Clang 15, respectively. The two functions are highly optimized on most platforms: they typically compare short strings in blocks of 8 bytes (uint64_t) and large strings using SIMD instructions.

Your optimization is good for reducing the memory footprint, but it introduces more computation, and there is a high chance that the operation is already compute-bound when the input buffer is in the CPU cache. In addition, the computation is not SIMD-friendly due to the inner loop: the compiler will certainly not generate efficient code because of the bit-wise operations. The thing is, scalar code is slow. In fact, scalar byte-per-byte computations are generally so slow that they are usually far from saturating the RAM bandwidth (at least the bandwidth achievable using only 1 core), as opposed to memcmp. For example, a Skylake/Coffee Lake processor at 4 GHz can only read 8 GiB/s from the L1 cache using scalar byte-per-byte code, while AVX2 SIMD code can read 256 GiB/s. For writes it is half that: 4 GiB/s vs 128 GiB/s. A 1-channel DDR4-SDRAM @ 3200 MHz can theoretically reach ~24 GiB/s, that is, far more than a byte-per-byte scalar sequential code can. The L3 cache has a much bigger bandwidth.

If you want fast code for large sequences, then you need either to help your compiler so that it can use SIMD instructions (not so easy in this case), to use non-portable SIMD intrinsics, or possibly to use a relatively portable SIMD library to generate quite good SIMD code (though low-level platform-dependent intrinsics are more flexible/featureful).

I expect the main bottleneck to be the "loop over the tokens that selects the correct char in the array and writes the bits into the correct position of the uint64_t". Indeed, this loop will likely generate a dependency chain of instructions (operating on the same uint64_t variable) that can neither be executed efficiently by the processor nor easily optimized by the compiler.

A typical solution would be to read blocks of 8 bytes (using memcpy to do it correctly, and hoping the compiler optimizes it properly). The bytes can be reordered using a bswap instruction on x86-64 processors; this is not needed on big-endian processors. A shift and a mask can then be applied so as to compare only the useful part. Here is an (untested) example to show the idea:

if(length >= 16)
{
    uint64_t block1, block2;

    unsigned int shift1 = (start1 % 4) * 2;
    unsigned int shift2 = (start2 % 4) * 2;
    uint64_t mask = 0xFFFFFFFFFFFFFF00ull;

    // Read blocks 7 bytes at a time for the sake of simplicity
    for(size_t i=0; i<length-7 ; i+=7)
    {
        // Safe and cheap on GCC/Clang
        memcpy(&block1, &charArray1[i], 8);
        memcpy(&block2, &charArray2[i], 8);

        // Architecture-dependent: reorder bytes on little-endian processors.
        // There is a fast instruction for that on x86-64 processors: bswap.
        // See: https://stackoverflow.com/questions/36497605
        block1 = reorder_bytes(block1);
        block2 = reorder_bytes(block2);

        // Drop the bits before the start offset and the incomplete low byte
        block1 = (block1 << shift1) & mask;
        block2 = (block2 << shift2) & mask;

        if(block1 != block2)
            return false;
    }
}

// TODO: compare the remainder in the last block
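Note that reorder_bytes above is a placeholder, not a standard function. A possible definition, assuming GCC or Clang (both provide __builtin_bswap64, which compiles to a single bswap instruction on x86-64; MSVC offers _byteswap_uint64 and C++23 adds std::byteswap):

#include <cstdint>

static inline uint64_t reorder_bytes(uint64_t v)
{
    // Make the first byte in memory the most significant one so that the
    // left shifts above discard the bits located before the start offset.
    return __builtin_bswap64(v);
}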

This operation can be done using the SSE/AVX instruction set so as to be faster for large sequences. Note that you can apply a special optimization when shift1 == shift2 (especially when both are equal to 0); a sketch of this aligned case follows.
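Here is a minimal (untested) sketch of that aligned special case, assuming the LSB-first packing from the question: p1/p2 point at the byte containing the first token of each subsequence, s is the common in-byte bit offset (0, 2, 4 or 6), and len is the length in tokens (all names are illustrative). The bulk of the work degenerates into a plain memcmp:

#include <cstring>
#include <cstdint>

bool seqEqualAligned(unsigned char const* p1, unsigned char const* p2,
                     unsigned s, int len) noexcept
{
    unsigned bits = 2u * len;  // total payload in bits

    if (s != 0) {
        // Head: ignore the s low-order bits preceding the first token
        unsigned char headMask = 0xFFu << s;
        if ((*p1 ^ *p2) & headMask)
            return false;
        ++p1; ++p2;
        bits -= 8 - s;  // assumes len is large enough to fill the first byte
    }

    // Middle: whole bytes, compared with a plain (fast) memcmp
    size_t fullBytes = bits / 8;
    if (std::memcmp(p1, p2, fullBytes) != 0)
        return false;

    // Tail: ignore the bits past the last token, if any
    unsigned tailBits = bits % 8;
    if (tailBits != 0) {
        unsigned char tailMask = (1u << tailBits) - 1;
        if ((p1[fullBytes] ^ p2[fullBytes]) & tailMask)
            return false;
    }
    return true;
}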

One should keep in mind that the bit-packing computation is pretty expensive, even using SIMD code. It will certainly not be faster than a memcmp unless the operation is memory-bound, which is unlikely to be the case here. For example, a Skylake/Coffee Lake processor can load and compare 2 blocks of 32 bytes (i.e. 32 tokens per block in the one-byte-per-token representation) in only 1 cycle (reciprocal throughput) using the AVX2 SIMD instruction set, while there is no chance each iteration of the above bit-packing loop takes less than 2 cycles to process 7 bytes (i.e. 28 tokens). Using AVX2 to optimize the above code is possible, but the AVX lanes and the byte reordering require several additional instructions, so it will certainly still be slightly slower than a plain very fast comparison (a few cycles to process ~120 tokens).
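For reference, a minimal sketch of the byte-aligned 32-byte comparison discussed above, using standard AVX2 intrinsics from <immintrin.h> (compile with -mavx2). With one byte per token this covers 32 tokens per call; with the 2-bit packing and matching offsets it would cover 128:

#include <immintrin.h>

static inline bool blocksEqual32(unsigned char const* a, unsigned char const* b)
{
    __m256i va = _mm256_loadu_si256(reinterpret_cast<__m256i const*>(a));
    __m256i vb = _mm256_loadu_si256(reinterpret_cast<__m256i const*>(b));
    __m256i eq = _mm256_cmpeq_epi8(va, vb);  // 0xFF where the bytes match
    return _mm256_movemask_epi8(eq) == -1;   // -1 <=> all 32 bytes equal
}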

The only use-case where packing can help is when multiple cores are used for the computation. Indeed, in that case, the bit-packing code can scale well because it is likely compute-bound, while the string-based version will quickly be limited by the speed of the RAM since it is likely memory-bound.

CodePudding user response:

If there are only 10 million tokens total, that is 20 Mbit, or 2-3 MB. If you keep shifted versions of the data in different arrays, from 2-bit-shifted up to 30-bit-shifted (assuming 4-byte comparisons at once; a 32-bit shift would just be a different starting position), you can do a direct comparison (std::memcmp) with no shifting involved (fast), after selecting the right array with the modulo of the arbitrary offset. But this requires the token sequence to stay constant across many function calls (if not for the lifetime of the program).

If these tokens are part of much bigger data, you can put a caching layer (one that caches fixed-length chunks and joins them to get the requested subsequence for A and B) just before the shifted initialization. Maybe LRU/LFU works fast enough if the token access pattern is cache-friendly. If it is not cache-friendly, then just reaching the arrays could be the bottleneck, with or without shifting.

If you do the checking per byte instead of per 4 bytes, it requires only 4 arrays instead of 16 and shouldn't add too big a memory requirement with caching, as sketched below.
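A minimal (untested) sketch of this per-byte variant, assuming an immutable sequence packed LSB-first at 2 bits per token as in the question (all names are illustrative):

#include <cstdint>
#include <vector>

// Build 4 copies of the packed sequence, with all tokens shifted 0..3
// positions earlier, so that a subsequence starting at token s is
// byte-aligned in shifted[s % 4].
std::vector<std::vector<unsigned char>> makeShiftedCopies(
    unsigned char const* packed, size_t nTokens)
{
    std::vector<std::vector<unsigned char>> shifted(4);
    for (unsigned k = 0; k < 4; ++k) {
        size_t n = (nTokens - k + 3) / 4;  // bytes needed for nTokens - k tokens
        shifted[k].assign(n, 0);
        for (size_t t = k; t < nTokens; ++t) {
            unsigned tok = (packed[t / 4] >> ((t % 4) * 2)) & 3u;
            size_t d = t - k;  // position of token t in copy k
            shifted[k][d / 4] |= tok << ((d % 4) * 2);
        }
    }
    return shifted;
}

Two subsequences starting at tokens sa and sb can then be compared with std::memcmp on shifted[sa % 4] (starting at byte (sa - sa % 4) / 4) and shifted[sb % 4], plus a masked check of the last partial byte.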


You can also store an XOR result of fixed-length (say 50-100 token) subsequences for every offset as a way of exiting earlier; a sketch follows below. Again, this requires 4x more memory space. If the XOR results of the first (fixed-length) parts are not equal, then the subsequences are not equal. This would at least reduce the number of full comparisons.
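A minimal sketch of such a prefilter, building on the makeShiftedCopies snippet above (an assumption of this sketch, not part of the original answer): sig[k][j] holds the XOR of W consecutive bytes (i.e. 4*W tokens) of shifted[k] starting at byte j, computed with a rolling window. Equal subsequences of at least 4*W tokens must have equal signatures, so a signature mismatch allows an immediate early exit:

#include <vector>

std::vector<std::vector<unsigned char>> makeXorSignatures(
    std::vector<std::vector<unsigned char>> const& shifted, size_t W)
{
    std::vector<std::vector<unsigned char>> sig(4);
    for (unsigned k = 0; k < 4; ++k) {
        auto const& s = shifted[k];
        if (s.size() < W)
            continue;
        sig[k].resize(s.size() - W + 1);
        unsigned char acc = 0;
        for (size_t j = 0; j < W; ++j)
            acc ^= s[j];  // signature of the first window
        sig[k][0] = acc;
        for (size_t j = 1; j + W <= s.size(); ++j) {
            acc ^= s[j - 1] ^ s[j + W - 1];  // slide the window by one byte
            sig[k][j] = acc;
        }
    }
    return sig;
}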


Another way is to directly cache f(x,y) -> bool, much like the Python language does with its own function-result caching. But this would be much worse than fixed-length chunked caching and joining, due to the non-reusable parts and a lot of duplication.
