I am interested in building a neural network with custom nonlinear activation functions that are not 1-to-1 (element-wise) functions.
I see that it is possible to add custom nonlinear activation functions to PyTorch, but the only functions considered are 1-to-1. That is, there is a linear layer which performs a dot product, and its result is fed element-wise through a nonlinear function that takes a single input and returns a single output.
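For reference, this is the conventional pattern I mean, as a minimal sketch using stock PyTorch modules:

```python
import torch
import torch.nn as nn

# Conventional pattern: a linear layer followed by an
# element-wise (1-to-1) activation such as ReLU.
layer = nn.Linear(4, 3)
activation = nn.ReLU()

x = torch.randn(2, 4)      # batch of 2 samples, 4 features each
y = activation(layer(x))   # each output depends on one pre-activation value
```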
Is it possible to have a custom nonlinear activation function that depends on multiple outputs of the previous layer?
So instead of taking a single number, each output would depend on all of the inputs to the layer. In general it would be a function f(x, A) of the inputs x and tunable weights A that cannot be expressed as f(x · A). One such function, for example, might look like the sketch below.
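To make this concrete, here is a minimal sketch of the kind of layer I mean. The specific function (a product of tanh terms over all inputs) and the class name are just made-up illustrations; the point is that each output mixes every input nonlinearly, so it cannot be written as an element-wise function applied to x · A:

```python
import torch
import torch.nn as nn

class ProductActivation(nn.Module):
    # Hypothetical example: out_j = prod_i tanh(A_ji * x_i),
    # a function of ALL inputs, not of the form f(x @ A).
    def __init__(self, in_features, out_features):
        super().__init__()
        # Tunable weights A; nn.Parameter registers them with autograd
        # and exposes them to optimizers via model.parameters().
        self.A = nn.Parameter(torch.randn(out_features, in_features))

    def forward(self, x):
        # x: (batch, in_features)
        # Broadcast to (batch, out_features, in_features), apply the
        # nonlinearity per (output, input) pair, then reduce over inputs.
        pairwise = torch.tanh(self.A.unsqueeze(0) * x.unsqueeze(1))
        return pairwise.prod(dim=-1)   # (batch, out_features)

model = nn.Sequential(nn.Linear(8, 4), ProductActivation(4, 3))
out = model(torch.randn(2, 8))   # autograd handles the backward pass
```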
Is it possible to use such a complex activation layer in a neural network in PyTorch, or is this too unconventional?