ElasticSearch | TypeError: string indices must be integers


I'm using this Notebook, with the Apply DocumentClassifier section altered as shown below.

Environment: JupyterLab, kernel conda_mxnet_latest_p37.


I understand the error means something is being indexed with a str instead of an int. However, this should not be a problem, as the same code works with other .pdf/.txt files from the original Notebook.
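For reference, the same TypeError can be reproduced in plain Python whenever a string ends up where a dict was expected (a minimal, illustrative snippet, not taken from the notebook):

prediction = "contradiction"   # a plain string instead of a dict like {"labels": [...], "scores": [...]}
prediction["labels"]           # TypeError: string indices must be integers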

Code Cell:

doc_dir = "GRIs/"  # contains 2 .pdfs

with open('filt_gri.txt', 'r') as filehandle:
    tags = [current_place.rstrip() for current_place in filehandle.readlines()]


doc_classifier = TransformersDocumentClassifier(model_name_or_path="cross-encoder/nli-distilroberta-base",
                                                task="zero-shot-classification",
                                                labels=tags,
                                                batch_size=2)

# convert to Document using a fieldmap for custom content fields the classification should run on
docs_to_classify = [Document.from_dict(d) for d in docs_sliding_window]

# classify using gpu, batch_size makes sure we do not run out of memory
classified_docs = doc_classifier.predict(docs_to_classify)

# let's see how it looks: there should be a classification result in the meta entry containing labels and scores.
print(classified_docs[0].to_dict())

all_docs = convert_files_to_dicts(dir_path=doc_dir)

preprocessor_sliding_window = PreProcessor(split_overlap=3,
                                           split_length=10,
                                           split_respect_sentence_boundary=False,
                                           split_by='passage')
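Note that docs_sliding_window itself is never created in the cell above. In the original Notebook it is presumably produced by running the preprocessor over the converted files; assuming a Haystack 1.x install (which matches the traceback paths below), the imports and the missing step would look roughly like this:

from haystack.nodes import PreProcessor, TransformersDocumentClassifier
from haystack.schema import Document
from haystack.utils import convert_files_to_dicts

# run after all_docs and preprocessor_sliding_window are defined (last lines of the cell above)
docs_sliding_window = preprocessor_sliding_window.process(all_docs)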

Output Error:

INFO - haystack.modeling.utils -  Using devices: CUDA
INFO - haystack.modeling.utils -  Number of GPUs: 1
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-11-82b54cd162ff> in <module>
     14 
     15 # classify using gpu, batch_size makes sure we do not run out of memory
---> 16 classified_docs = doc_classifier.predict(docs_to_classify)
     17 
     18 # let's see how it looks: there should be a classification result in the meta entry containing labels and scores.

~/anaconda3/envs/mxnet_latest_p37/lib/python3.7/site-packages/haystack/nodes/document_classifier/transformers.py in predict(self, documents)
    144         for prediction, doc in zip(predictions, documents):
    145             if self.task == 'zero-shot-classification':
--> 146                 prediction["label"] = prediction["labels"][0]
    147             doc.meta["classification"] = prediction
    148 

TypeError: string indices must be integers

Please let me know if there is anything else I should add to the post or clarify.

CodePudding user response:

I swapped out the docs_sliding_window variable for my_dsw.

my_dsw only keeps documents whose content is <= 1000 characters long. This helps the shape of my data fit better.

my_dsw = []
for dsw in docs_sliding_window:
    # keep only passages whose content is at most 1000 characters long
    if len(dsw['content']) <= 1000:
        my_dsw.append(dsw)

Then swap it in on the docs_to_classify line:

# convert to Document using a fieldmap for custom content fields the classification should run on
docs_to_classify = [Document.from_dict(d) for d in my_dsw]

Admittedly, I'm not sure how this relates specifically to the error, but it does help the data fit better, as I can now increase batch_size=4.
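For completeness, here is roughly how the filtered list plugs back into the rest of the cell (with batch_size raised to 4, as mentioned above):

doc_classifier = TransformersDocumentClassifier(model_name_or_path="cross-encoder/nli-distilroberta-base",
                                                task="zero-shot-classification",
                                                labels=tags,
                                                batch_size=4)

docs_to_classify = [Document.from_dict(d) for d in my_dsw]
classified_docs = doc_classifier.predict(docs_to_classify)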
