
After a neural network has been fully trained (say, a normal feed-forward network), is there a way to calculate how much weight one input has with respect to the final output? Note that I'm not talking about the weight of the input with respect to one neuron (that value is calculated and adjusted by the NN during the training process).

For example, if I have 3 inputs x1, x2 and x3, and one output y: after the network has been trained, can I know how much x1 affects y? I guess it should be calculated as the partial derivative of y with respect to x1. But how do I know the non-linear function that the network represents? Is this possible at all?
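For concreteness, here is what I imagine the assembled function might look like, just assuming a single hidden layer with activation σ, output non-linearity f, and learned parameters W1, b1, w2, b2 (an illustrative architecture, not necessarily my actual one):

y = f\left( w_2^\top \sigma(W_1 x + b_1) + b_2 \right)

and then the quantity I am after would follow from the chain rule:

\frac{\partial y}{\partial x_1} = f'\left( w_2^\top h + b_2 \right) \sum_j w_{2,j}\, \sigma'\!\left( (W_1 x + b_1)_j \right) (W_1)_{j,1}, \qquad h = \sigma(W_1 x + b_1)

Is that the right way to think about it?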

Thanks!

Victor
  • What do you mean by "how do I know the non-linear function that the network represents"? After all, you trained it, so you should know the hyperparameters, activation functions, and so on. Or are you using some black box library implementation? – Fred Foo Nov 26 '12 at 21:26
  • @larsmans Please forgive my ignorance since I am a newbie on neural networks. I don't know what 'hyperparameters' means. I am just trying to find a single function which represents the relationship between x1 and y (in the above example). Although I do know the weights and activation functions, I still don't know how to 'assemble' those values into a single function. – Victor Nov 26 '12 at 21:34
  • The function computed by a feedforward ANN is typically `Y = f(σ(X × W1.T + b1) × W2.T + b2)` for a two-layer network, where *f* is task-specific and σ = tanh. Taking the derivative of *Y* wrt any input feature is explained in any textbook, e.g. [Rojas](http://page.mi.fu-berlin.de/rojas/neural/chapter/K7.pdf). – Fred Foo Nov 26 '12 at 22:32
  • @larsmans Thank you very much for the information. I'll read the textbook. – Victor Nov 27 '12 at 17:27
  • Possible duplicate of [Clarification on a Neural Net that plays Snake](http://stackoverflow.com/questions/42099814/clarification-on-a-neural-net-that-plays-snake) – devinbost Feb 15 '17 at 20:37

2 Answers


Great question...

There are two methods that come to mind. One is a visual inspection using a "Hinton diagram" (check it out via Google). Another, simpler method is to feed a large value into a single input and a small value (zero?) into the other inputs, and see what that does to each output value.
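For instance, a minimal sketch of that probe idea in Lua Torch might look like the following (assuming an already-trained `model` that is an nn module taking a 1 x n_inputs tensor; `model`, `n_inputs`, and `probe_value` are placeholder names):

require 'torch'

local n_inputs = 3        -- number of input features (placeholder)
local probe_value = 1.0   -- the "large" value fed into one input at a time

for i = 1, n_inputs do
  local probe = torch.zeros(1, n_inputs)  -- all other inputs held at zero
  probe[1][i] = probe_value               -- only input i gets the large value
  local output = model:forward(probe)     -- assumes `model` is a trained nn module
  print(string.format("input %d active -> output:", i))
  print(output)
end

Comparing the printed outputs across the probes gives a rough, qualitative feel for which inputs the network reacts to most.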

There are other, more advanced approaches but these are a great way to start.

Good luck to you! Let us know if you find anything interesting and what calculations worked best for you.

danelliottster

I've found a good paper on this:

"How to measure importance of inputs" by Warren S. Sarle, SAS Institute Inc., Cary, NC, USA ftp://ftp.sas.com/pub/neural/importance.html

Briefly:

  • Summing weights does not work.
  • Summing normalized weights does not work.
  • Summing gradients does not work well.
  • Removing (zeroing or setting to mean) inputs one after another [and re-training] - works but takes a lot of time.
  • Summing small differences of output with respect to inputs - works pretty well!

Now, briefly, about the last method, which is the one I prefer to use:
For an output function Y = f(X1, X2, X3), you can compute:

   D1 = f( X1+h, X2, X3) - f( X1, X2, X3)
   D2 = f( X1, X2+h, X3) - f( X1, X2, X3)
   D3 = f( X1, X2, X3+h) - f( X1, X2, X3)

Averaging the absolute values of these differences over all input samples (and over a range of step sizes h) gives a good estimate of each input's importance.

This is how I do it in Lua Torch:

Note 1: I take squared differences instead of absolute values.
Note 2: My input matrix is normalized, which is why I can choose values of h in [-1, 1].

-- Assumes `model` (a trained nn module) and `inputs` (a samples_count x
-- inputs_count tensor, normalized) are already defined.
local samples_count = inputs:size(1)
local inputs_count = inputs:size(2)

-- Baseline outputs for the unmodified inputs; cloned because later
-- forward() calls reuse the same output buffer.
local outputs = model:forward(inputs):clone()

local importance = torch.zeros(inputs_count)

print("Processing inputs 1 to "..tostring(inputs_count)); io.flush()
for i = 1, inputs_count do
  io.write("\rProcessing "..tostring(i)); io.flush()
  -- Shift input column i by several offsets h and accumulate the squared
  -- change in the outputs.
  for h = -1, 1, 0.2 do
    local inputs_h = inputs:clone()
    if h ~= 0 then inputs_h[{{},{i,i}}]:add(h) end
    local outputs_h = model:forward(inputs_h)
    importance[i] = importance[i] + torch.add(outputs_h, -1, outputs):pow(2):sum()
  end -- for h
end -- for inputs_count

-- Average the accumulated squared differences over all samples.
importance:div(samples_count)
print("\nimportance:\n", importance)
Massinissa
Pavel Chernov