PyTorch: Checking Model Accuracy Results in "AttributeError: 'bool' object has no attribute 'sum'"


I am training a neural network and would like to check its accuracy. I've used librosa and scikit-learn to represent audio as 1D NumPy arrays, so x_train, x_test, y_train, and y_test are all 1D NumPy arrays; the x_* arrays contain floats and the y_* arrays contain strings corresponding to classes of data. For example:

x_train = [0.235, 1.101, 3.497]
y_train = ['happy', 'angry', 'neutral'] 

I've written a dictionary to represent these classes (strings) as integers:

emotions = {
    '01': 'neutral',
    '02': 'calm',
    '03': 'happy',
    '04': 'sad',
    '05': 'angry',
    '06': 'fearful',
    '07': 'disgust',
    '08': 'surprised'
}

emotion_list = list(emotions.values())
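
So each class string maps to its position in that list, for example:

emotion_list.index('neutral')    # 0
emotion_list.index('happy')      # 2
emotion_list.index('surprised')  # 7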

Next I've defined a class to transform this data such that it can be passed to torch.utils.data.DataLoader():

class MakeDataset(Dataset):
    def __init__(self, x_train, y_train):
        self.x_train = torch.FloatTensor(x_train)
        self.y_train = torch.FloatTensor([emotion_list.index(each) for each in y_train])

    def __len__(self):
        return self.x_train.shape[0]

    def __getitem__(self, ind):
        x = self.x_train[ind]
        y = emotion_list.index(y_train[ind])
        return x, y

I define a training set, testing set, batch size, and load the data:

train_set = MakeDataset(x_train, y_train)
test_set = MakeDataset(x_test, y_test)

batch_size = 512

train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_set, batch_size=batch_size, shuffle=False)

I define the model, train, and test as follows:

class TwoLayerMLP(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        super(TwoLayerMLP, self).__init__()
        self.linear1 = torch.nn.Linear(D_in, H)
        self.linear2 = torch.nn.Linear(H, D_out)

    def forward(self, x):
        h_relu = self.linear1(x).clamp(min=0)
        y_pred = self.linear2(h_relu)
        return y_pred


model = TwoLayerMLP(180, 90, 8)
optimizer = torch.optim.Adam(model.parameters())
criterion = nn.CrossEntropyLoss()

epochs = 4001

total_train = 0 
correct_train = 0
for epoch in range(epochs):  
    model.train()
    running_loss = 0.0
    for i, data in enumerate(train_loader):
        audio, label = data
        optimizer.zero_grad()
        outputs = model(audio)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        
        # Variables for printing the training accuracy   
        predicted = torch.max(outputs.data,1)
        total_train += float(label.size(0))

        # The line below results in AttributeError 'bool' object has no attribute 'sum'
        correct_train += float((predicted == label)).sum()
        
    model.eval()
    total = 0
    correct = 0
    with torch.no_grad():
        for data in enumerate(test_loader):
            audio, label = data
            outputs = model(audio)

            # Variables for printing the testing accuracy
            predicted = torch.max(outputs.data, 1)
            total += float(label.size(0))

            # The line below results in AttributeError 'bool' object has no attribute 'sum'
            correct += float((predicted == label)).sum()
    
    print("Epoch: ", epoch, "Loss: ", loss.item(), "Training Accuracy: ",
    100.*correct_train/total_train, "Testing Accuracy: ", 100.*correct/total)

Can anyone explain why this is happening? It is worth noting that the network as implemented does classify audio samples I have pulled from media clips effectively (e.g. angry voices are correctly classified as angry). I have printed predicted, and it appears to be a torch.return_types.max object containing two tensors, values and indices. label is a single tensor:

    Predicted:  torch.return_types.max(
values=tensor([ 9.1376, 14.7075, 13.8887, 12.9374, 14.6852, 12.9356, 10.8034, 12.1842,
        14.5007, 12.1388, 14.0793, 12.4416, 11.9563, 12.6768, 13.0111,  9.7837,
        11.6815, 10.0132, 13.0655,  9.3486, 10.3831, 13.3035, 12.2962, 13.1725,
        12.9111,  9.9076, 12.1036, 10.4550, 15.2908, 14.8847,  9.7669, 14.2672,
        11.6631, 12.1898, 12.6906, 15.4904, 13.0693, 11.9331, 12.8776, 13.0361,
        14.0445, 11.6117, 13.1249, 11.9780, 12.9732, 14.9221, 14.4835, 13.5883,
indices=tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
        6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
        6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
        6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
        6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,


Label:  tensor([4, 3, 7, 3, 1, 7, 4, 5, 6, 7, 1, 7, 3, 7, 7, 5, 3, 5, 0, 5, 5, 6, 6, 1,
        1, 2, 7, 6, 5, 0, 4, 1, 5, 4, 1, 3, 0, 6, 0, 3, 1, 2, 7, 5, 7, 1, 3, 3,
        1, 7, 6, 7, 1, 7, 2, 5, 2, 7, 4, 6, 3, 2, 1, 7, 1, 7, 6, 2, 0, 1, 6, 1,
        3, 3, 6, 2, 4, 5, 0, 7, 7, 2, 7, 4, 4, 7, 6, 7, 0, 2, 2, 0, 6, 7, 7, 5,
        3, 3, 0, 5, 1, 4, 0, 7, 2, 1, 2, 3, 6, 5, 7, 3, 1, 6, 2, 2, 4, 5, 7, 5,

CodePudding user response:

You don't need to convert to float before summing; you can use:

(predicted == label).sum().item()

(predicted == label) returns a BoolTensor, which can be summed directly (True counts as 1, False as 0); calling .item() on the result then gives you the count as a plain Python number.
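
For illustration, here is a minimal sketch with made-up index tensors (note that in your code predicted is still the torch.return_types.max namedtuple shown in your printout, so its indices field is the tensor that should be compared with label):

import torch

# Made-up batch of predicted class indices and ground-truth labels
predicted = torch.tensor([6, 3, 7, 3, 1])
label = torch.tensor([4, 3, 7, 3, 1])

matches = predicted == label               # tensor([False, True, True, True, True])
correct = matches.sum().item()             # 4, as a plain Python int
accuracy = 100. * correct / label.size(0)
print(accuracy)                            # 80.0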

PS: it is weird that float((predicted == label)) did not throw an error for you. On my machine, with PyTorch version 1.9.1, running that expression on a tensor containing more than one element raises an error saying that a float conversion only works on single-element tensors.

e.g.

tx = torch.ones(5)
ty = torch.ones(5)
c = float((tx == ty)).sum()

throws the error

----> 1 float((tx == ty))

ValueError: only one element tensors can be converted to Python scalars
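
For completeness, a quick sketch of when the float conversion does work -- only on tensors with a single element (or after reducing to one):

import torch

float(torch.tensor([True]))                     # 1.0 -- single-element tensor converts fine
float((torch.ones(5) == torch.ones(5)).all())   # 1.0 -- reduce to one element first, then convert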

Also, there are a number of bugs in the code you copy-pasted to reproduce the issue; I would double-check that the reproduction code is actually runnable.

CodePudding user response:

Replace

correct_train += float((predicted == label)).sum()

with

correct_train += sum(predicted == label)

You don't need to convert the boolean tensor to float; the sum function is smart enough to treat False as 0 and True as 1.
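
As a quick check with made-up index tensors: Python's built-in sum iterates over the boolean tensor and returns a zero-dimensional tensor, which still works in the accuracy expression (call .item() on it if you want a plain Python number; the tensor's own .sum() method is also faster because it avoids the element-by-element Python loop):

import torch

# Made-up batch of predicted class indices and ground-truth labels
predicted = torch.tensor([6, 3, 7, 3, 1])
label = torch.tensor([4, 3, 7, 3, 1])

correct = sum(predicted == label)          # tensor(4), a zero-dim tensor
accuracy = 100. * correct / label.size(0)
print(accuracy)                            # tensor(80.)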
