Python Transformers Error: 'dict object has no attribute architectures'


I tried to make a simple text generation tool.

This is my code:

from transformers import pipeline

pipe = pipeline('text-generation', model='dbmdz/german-gpt2', tokenizer='dbmdz/german-gpt2', config={'max_length': 800})

text = pipe("Der Sinn des Lebens ist es")[0]['generated_text']

print(text)

When I try to run it, I get the following error:

2021-11-04 18:56:13.996698: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2021-11-04 18:56:13.996798: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
  File "D:\Projects\Coding\VoiceAssistant\Moduls\TextGeneration\main.py", line 3, in <module>
    pipe = pipeline('text-generation', model='dbmdz/german-gpt2', tokenizer='dbmdz/german-gpt2', config={'max_length': 800})
  File "C:\Users\lukas\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\transformers\pipelines\__init__.py", line 462, in pipeline
    framework, model = infer_framework_load_model(
  File "C:\Users\lukas\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\transformers\pipelines\base.py", line 118, in infer_framework_load_model
    if config.architectures:
AttributeError: 'dict' object has no attribute 'architectures'

My Python version: 3.9.7. My Windows version: Windows 11 Pro.

CodePudding user response:

You are passing the wrong type for config here: config={'max_length': 800}. It must be a configuration object that has an architectures attribute, not a plain dict.
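
A minimal sketch of one way to keep the max_length override (this is my own suggestion, not part of the original answer): load the model's own configuration with AutoConfig, which returns a PretrainedConfig subclass that does have the architectures attribute, and pass that object instead of a dict.

from transformers import pipeline, AutoConfig

# Load the model's PretrainedConfig and override max_length on it,
# instead of passing a plain dict to the pipeline.
config = AutoConfig.from_pretrained('dbmdz/german-gpt2', max_length=800)

pipe = pipeline(
    'text-generation',
    model='dbmdz/german-gpt2',
    tokenizer='dbmdz/german-gpt2',
    config=config,
)

text = pipe("Der Sinn des Lebens ist es")[0]['generated_text']
print(text)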

CodePudding user response:

You may want to remove config={'max_length': 800}. I suspect it is not being used correctly here.

Per the docs,

The configuration that will be used by the pipeline to instantiate the model. This can be a model identifier or an actual pretrained model configuration inheriting from PretrainedConfig.

The code runs through without the config parameter:

Python 3.9.2 (default, Feb 24 2021, 13:26:09) 
[Clang 12.0.0 (clang-1200.0.32.29)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import pipeline
>>> pipe = pipeline('text-generation', model='dbmdz/german-gpt2', tokenizer='dbmdz/german-gpt2')
2021-11-04 11:17:32.211688: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-11-04 11:17:32.221882: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
All model checkpoint layers were used when initializing TFGPT2LMHeadModel.

All the layers of TFGPT2LMHeadModel were initialized from the model checkpoint at dbmdz/german-gpt2.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFGPT2LMHeadModel for predictions without further training.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
>>> text = pipe("Der Sinn des Lebens ist es")[0]['generated_text']
Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence
>>> print(text)
Der Sinn des Lebens ist es, sich von seinen eigenen Vorstellungen zu befreien und das Leben zu sehen, das er im Augenblick hat."
Wir wissen aber auch, dass wir auf dem Weg zum Glück sind und sich auch durch unsere Erlebnisse im eigenen Leben und
>>> 
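
If you still want to cap the generated length, a minimal sketch (a suggestion on top of the answer above, not part of it) is to pass max_length as a generation argument when calling the pipeline, so no config object is needed at all:

from transformers import pipeline

pipe = pipeline('text-generation', model='dbmdz/german-gpt2', tokenizer='dbmdz/german-gpt2')

# max_length is forwarded to the model's generate() call at inference time.
text = pipe("Der Sinn des Lebens ist es", max_length=800)[0]['generated_text']
print(text)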