What are some ways to figure out why your neural network classifies your data the way it does


I have a couple of opportunities to write a paper, or papers, about some of the neural networks I have built.

I was wondering if there are any ways to figure out why a neural network classifies my data the way it does, i.e., which features of the data the network is using to make its classifications. The neural networks I'm using consist mostly of LSTM layers.

I have thought about plotting the network's weights at every output, but this doesn't really help much: there are so many weights going into each node in a layer that it is very hard to determine what's happening. I could also plot the biases, but I don't know how much influence they have relative to the weights.

Another thing I considered was adjusting the values of the input data a little bit at a time and seeing where the classification changes (a rough sketch of this idea follows below). This would work to some extent, but it wouldn't give me the full picture of what the neural network is doing.
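For reference, that perturbation idea can be written down in a few lines. Below is a minimal sketch, assuming a PyTorch classifier (e.g., LSTM-based) that maps a single (seq_len, features) sequence to class logits; `perturbation_sensitivity` is a hypothetical helper name and `eps` an arbitrary perturbation size:

```python
import torch

def perturbation_sensitivity(model, x, target_class, eps=0.1):
    """Score how much the target-class logit drops when each
    timestep/feature of a single input sequence x (seq_len, features)
    is nudged by eps. Larger drops suggest more important inputs."""
    model.eval()
    with torch.no_grad():
        # Baseline score for the unmodified input.
        base = model(x.unsqueeze(0))[0, target_class].item()
        scores = torch.zeros(x.shape)
        for t in range(x.shape[0]):          # each timestep
            for f in range(x.shape[1]):      # each feature
                perturbed = x.clone()
                perturbed[t, f] += eps       # nudge one value
                out = model(perturbed.unsqueeze(0))[0, target_class].item()
                scores[t, f] = base - out    # drop in target score
    return scores
```

As the question notes, this only probes one value at a time, so it misses interactions between inputs; it is cheap and model-agnostic but not a full explanation.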

So any suggestions on how to do this?

CodePudding user response:

I ended up using Guided Backprop from TorchRay: https://facebookresearch.github.io/TorchRay/attribution.html#module-torchray.attribution.guided_backprop
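For anyone landing here later, here is a minimal sketch of that approach, following the example pattern in the TorchRay documentation. `get_example_data` loads a pretrained image model and a sample input, so for an LSTM classifier you would substitute your own model, input tensor, and target class id:

```python
from torchray.attribution.guided_backprop import guided_backprop
from torchray.benchmark import get_example_data, plot_example

# Load a pretrained model, an example input, and a target class id.
model, x, category_id, _ = get_example_data()

# Attribute the prediction for category_id back to the input values.
saliency = guided_backprop(model, x, category_id)

# Visualize which parts of the input most influenced the classification.
plot_example(x, saliency, 'guided backprop', category_id)
```

One caveat: guided backprop works by filtering gradients as they flow back through ReLU activations, so on an LSTM (which uses sigmoid/tanh gates) it behaves much like plain gradient saliency.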
