Does Python have a library that generates vectors based on a superset of features?


I need to generate a vector for each sample in a dataset, based on the full set of features in the dataset. Assume the dataset has 6 features:

features = ['a', 'b', 'c', 'd', 'e', 'f']

A sample s1 has only 3 of those features:

s1 = ['a', 'b', 'c']

I want to generate a vector for s1 that represents its features: s1 = [1, 1, 1, 0, 0, 0]

Another example: if s2 = ['a', 'c', 'f'], then the vector should be [1, 0, 1, 0, 0, 1].

Are there any Python libraries that do this? If not, how should I accomplish this task?

CodePudding user response:

Probably not the most optimized, but if you want a vector for every possible sample, you just have to create a binary array for every number from 0 to 2**6 - 1 (i.e., 63):

features = ['a', 'b', 'c', 'd', 'e', 'f']
l = len(features)
# The zero-padded binary representation of each number 0..2**l - 1
# gives one possible feature vector.
vectors = [[int(y) for y in f'{x:0{l}b}'] for x in range(2 ** l)]

print(vectors)
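
To map any of those vectors back to the feature subset it encodes, a small sketch (not part of the answer above):

subsets = [[f for f, bit in zip(features, v) if bit] for v in vectors]
print(subsets[:4])  # [[], ['f'], ['e'], ['e', 'f']]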

CodePudding user response:

This is pretty straightforward and not really something you need a library for.

Pure Python solution

features = ['a', 'b', 'c', 'd', 'e', 'f']
# Map each feature to its index: {'a': 0, 'b': 1, ...}
features_lookup = dict(map(reversed, enumerate(features)))


s1 = ['a', 'b', 'c']
s2 = ['a', 'c', 'f']


def create_feature_vector(sample, lookup):
    # Start with all zeros, then set a 1 at the index of each feature present
    vec = [0]*len(lookup)
    for value in sample:
        vec[lookup[value]] = 1
    return vec

Output:

>>> create_feature_vector(s1, features_lookup)
[1, 1, 1, 0, 0, 0]

>>> create_feature_vector(s2, features_lookup)
[1, 0, 1, 0, 0, 1]
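
That said, if you would rather use a library, scikit-learn's MultiLabelBinarizer implements exactly this kind of binary indicator encoding. A minimal sketch, assuming scikit-learn is installed:

from sklearn.preprocessing import MultiLabelBinarizer

samples = [['a', 'b', 'c'], ['a', 'c', 'f']]
# classes= pins the column order to the full feature set
mlb = MultiLabelBinarizer(classes=features)
print(mlb.fit_transform(samples))
# [[1 1 1 0 0 0]
#  [1 0 1 0 0 1]]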

Numpy alternative for a single feature vector

If you happen to already be using numpy, this will be much more efficient if your feature set is large. Note that the function below draws a random sample of features and builds its vector at the same time:

import numpy as np


features = np.array(['a', 'b', 'c', 'd', 'e', 'f'])
sample_size = 3


def feature_sample_and_vector(sample_size, features):
    n = features.size
    # Draw sample_size distinct feature indices at random
    sample_indices = np.random.choice(range(n), sample_size, replace=False)
    sample = features[sample_indices]
    # Binary vector with a 1 at each sampled index
    vector = np.zeros(n, dtype="uint8")
    vector[sample_indices] = 1
    return sample, vector
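
For example (output varies, since the sample is random), and if you instead already have a specific sample in hand, np.isin builds its vector directly. A small sketch, reusing s1 from the question:

sample, vector = feature_sample_and_vector(sample_size, features)
# e.g. sample -> array(['b', 'd', 'f'], dtype='<U1'), vector -> array([0, 1, 0, 1, 0, 1], dtype=uint8)

# For a given sample rather than a random one:
s1 = np.array(['a', 'b', 'c'])
vec = np.isin(features, s1).astype("uint8")  # array([1, 1, 1, 0, 0, 0], dtype=uint8)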

Numpy alternative for a large number of samples and their feature vectors

Using numpy allows us to scale very well for large feature sets and/or large sample sets. Note that this approach can produce duplicate samples:

import random
import numpy as np


# Assumes features is already a numpy array
def generate_samples(features, num_samples, sample_size):
    n = features.size
    vectors = np.zeros((num_samples, n), dtype="uint8")
    idxs = [random.sample(range(n), k=sample_size) for _ in range(num_samples)]
    cols = np.sort(np.array(idxs), axis=1)  # You can remove the sort if having the features in order isn't important
    rows = np.repeat(np.arange(num_samples).reshape(-1, 1), sample_size, axis=1)
    vectors[rows, cols] = 1
    samples = features[cols]
    return samples, vectors

Demo:

>>> generate_samples(features, 10, 3)
(array([['d', 'e', 'f'],
        ['a', 'b', 'c'],
        ['c', 'd', 'e'],
        ['c', 'd', 'f'],
        ['a', 'b', 'f'],
        ['a', 'e', 'f'],
        ['c', 'd', 'f'],
        ['b', 'e', 'f'],
        ['b', 'd', 'f'],
        ['a', 'c', 'e']], dtype='<U1'),
 array([[0, 0, 0, 1, 1, 1],
        [1, 1, 1, 0, 0, 0],
        [0, 0, 1, 1, 1, 0],
        [0, 0, 1, 1, 0, 1],
        [1, 1, 0, 0, 0, 1],
        [1, 0, 0, 0, 1, 1],
        [0, 0, 1, 1, 0, 1],
        [0, 1, 0, 0, 1, 1],
        [0, 1, 0, 1, 0, 1],
        [1, 0, 1, 0, 1, 0]], dtype=uint8))

A very simple timing benchmark for 100,000 samples of size 12 from a feature set of 26 features:

In [2]: features = np.array(list("abcdefghijklmnopqrstuvwxyz"))

In [3]: num_samples = 100000

In [4]: sample_size = 12

In [5]: %timeit generate_samples(features, num_samples, sample_size)
645 ms ± 9.86 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

The only real bottleneck is the list comprehension that produces the indices. Unfortunately np.random.choice() has no 2-dimensional variant for sampling without replacement, so you still have to fall back on a relatively slow method for generating the random sample indices.
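
One common vectorized workaround (a sketch with a hypothetical helper name, not part of the answer above) is the argsort-of-random-noise trick: argsorting each row of a uniform random matrix yields an independent random permutation per row, so the first sample_size columns of each row form a sample without replacement:

def generate_sample_indices_vectorized(num_samples, n, sample_size):
    # argsort of i.i.d. uniform noise gives a random permutation per row;
    # the first sample_size columns sample without replacement
    return np.argsort(np.random.rand(num_samples, n), axis=1)[:, :sample_size]

This avoids the Python-level loop entirely; np.sort the result along axis 1 if you want the features in order, as in generate_samples() above.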
