Python: TypeError: 'module' object is not subscriptable

I was following this tutorial and copied over the code he wrote, but with some changes to the variable names and a few other things. Then I got the error shown below.

Here's the code (main.py):

import nltk
from nltk import *
import numpy
import tflearn
import json
import random
import tensorflow

stemmer = LancasterStemmer()
prefix = "[Bot]"

trainer_load = json.load(open("trainer.json"))

words = []
labels = []
docs_a = []
docs_b = []

for trainer in data["dictionary"]: # Error here
    for inputs in trainer["inputs"]:
        words_tokenize = nltk.word_tokenize(inputs)
        words.extend(words_tokenize)
        docs_a.append(inputs)
        docs_b.append(trainer["tag"])

    if trainer["tag"] not in labels:
        labels.append(trainer["tag"])

words = [stemmer.stem(w.lower()) for w in words]
words = sorted(list(set(words)))

labels = sorted(labels)

training = []
output = []

out_empty = [0 for _ in range(len(classes))]

for a, doc in enumerate(docs_a):
    bag = []

    words_tokenize = [stemmer.stem(a) for a in doc]

    for a in words:
        if a in words_tokenize:
            bag.append(1)
        else:
            bag.append(0)

    output_row = out_empty[:]
    output_row[classes.index(docs_b[a])] = 1

    training.append(bag)
    output.append(output_row)

training = numpy.array(training)
output = numpy.array(output)

Here's the error:

Traceback (most recent call last):
  File "C:\Users\(I hid my name)\OneDrive\Desktop\Codes\Python codes\Chatbot\main.py", line 19, in <module>
    for trainer in data["dictionary"]:
TypeError: 'module' object is not subscriptable

I've searched online, and none of the answers I found relate to my question. My JSON file exists; it's in the same folder as main.py.

This is the JSON file: (trainer.json)

{"dictionary": [
    {"tag": "greetings", 
     "inputs": ["Hi", "Hello", "How are you", "Whats up", "Greetings", "Good day"],
     "responses": ["Hello!", "Hi!", "I'm good! Always good.", "Greetings!", "I know, right!"],
     "context_set": ""
    },
    {"tag": "identity", 
     "inputs": ["Who are you", "Who you"],
     "responses": ["I'm a chatting bot built by FighterLoveNoob, he was watching a tutorial while building me"],
     "context_set": ""
    }
]}

CodePudding user response:

Explanation:

  • data is never defined by you; your JSON data is loaded into trainer_load. (The name data still resolves to a module, most likely because from nltk import * pulls in nltk's data submodule, which is why Python raises TypeError: 'module' object is not subscriptable instead of a NameError.)
  • You mention that you copied the tutorial's code "with some changes to the variable names". That renaming is the cause of the error: if you rename a variable, you must rename it at every occurrence, not only where it is defined.
  • In the tutorial you referred to, the code is:

with open('intents.json') as file:
    # Here the name data is used, NOT trainer_load.
    # If you want to change data to trainer_load, change it everywhere.
    data = json.load(file)
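
For example, a minimal sketch of the fix with your file and your variable name, keeping trainer_load everywhere:

import json

# Load the JSON file into a dict and keep referring to it by the same name.
with open("trainer.json") as file:
    trainer_load = json.load(file)

for trainer in trainer_load["dictionary"]:
    print(trainer["tag"], trainer["inputs"])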

Code:

import nltk
from nltk import *
import numpy
import tflearn
import json
import random
import tensorflow

stemmer = LancasterStemmer()
prefix = "[Bot]"


trainer_load = json.load(open("trainer.json"))

words = []
labels = []
docs_a = []
docs_b = []

# data is never defined; the JSON data was loaded into trainer_load
for trainer in trainer_load["dictionary"]: 
    for inputs in trainer["inputs"]:
        words_tokenize = nltk.word_tokenize(inputs)
        words.extend(words_tokenize)
        docs_a.append(inputs)
        docs_b.append(trainer["tag"])

    if trainer["tag"] not in labels:
        labels.append(trainer["tag"])

words = [stemmer.stem(w.lower()) for w in words]
words = sorted(list(set(words)))

labels = sorted(labels)

training = []
output = []

out_empty = [0 for _ in range(len(labels))]  # the label list is called labels, not classes

for i, doc in enumerate(docs_a):
    bag = []

    # docs_a holds the raw sentences, so tokenize them before stemming.
    # Reusing the name a for the index, the stemming loop and the word loop
    # would otherwise clobber the index needed below.
    doc_words = [stemmer.stem(w.lower()) for w in nltk.word_tokenize(doc)]

    for w in words:
        if w in doc_words:
            bag.append(1)
        else:
            bag.append(0)

    output_row = out_empty[:]
    # again, the label list is called labels, not classes
    output_row[labels.index(docs_b[i])] = 1

    training.append(bag)
    output.append(output_row)

training = numpy.array(training)
output = numpy.array(output)
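
With the names kept consistent, a quick sanity check (assuming the trainer.json shown in the question) is to print the shapes of the resulting arrays:

print(training.shape)  # (number of training sentences, size of the stemmed vocabulary)
print(output.shape)    # (number of training sentences, number of labels)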