Translation with multilingual BERT model


I want to translate my DataFrame using multilingual BERT. I have copied this code, but in place of the example text I want to use my own DataFrame.

from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
model = TFBertModel.from_pretrained("bert-base-multilingual-cased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)

However, I get an error when I use it like this:

df = pd.read_csv("/content/drive/text.csv")
encoded_input = tokenizer(df, return_tensors='tf')

Error:

ValueError: text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples).

My DataFrame looks like this:

0    There is XXXX increased opacity within the rig...
1    There is XXXX increased opacity within the rig...
2    There is XXXX increased opacity within the rig...
3    Interstitial markings are diffusely prominent ...
4    Interstitial markings are diffusely prominent ...
Name: findings, dtype: object

CodePudding user response:

In the first example you pass a single string to the tokenizer. In the second you are trying to tokenize an entire DataFrame, not a string. As the error message says, the tokenizer only accepts a `str`, a `List[str]`, or a `List[List[str]]`, so pass the column values as a list of strings instead of the DataFrame object itself.
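
A minimal sketch of one way to do this, assuming the column is named findings (as in the output you posted) and reusing the CSV path from your question:

import pandas as pd
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
model = TFBertModel.from_pretrained('bert-base-multilingual-cased')

df = pd.read_csv("/content/drive/text.csv")

# Convert the column to a plain Python list of strings (List[str]),
# which is one of the input types the tokenizer accepts.
texts = df["findings"].astype(str).tolist()

# padding/truncation make all sequences the same length so they can be
# returned as a single TensorFlow tensor batch.
encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='tf')
output = model(encoded_input)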
