Take a look at the source code where the activation functions of Keras are defined:
keras/activations.py
For example:
def relu(x, alpha=0., max_value=None):
    """Rectified Linear Unit.

    # Arguments
        x: Input tensor.
        alpha: Slope of the negative part. Defaults to zero.
        max_value: Maximum value for the output.

    # Returns
        The (leaky) rectified linear unit activation: `x` if `x > 0`,
        `alpha * x` if `x < 0`. If `max_value` is defined, the result
        is truncated to this value.
    """
    return K.relu(x, alpha=alpha, max_value=max_value)
Also note how Keras layers call the activation functions: `self.activation = activations.get(activation)`. Here `activation` can be a string or a callable.
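For example, passing the string name and passing the function object are equivalent, because the string is resolved through `activations.get()` (a quick sketch; the layer width of 128 is arbitrary):

from keras import activations
from keras.layers import Dense

# 'relu' is resolved to activations.relu via activations.get() inside the layer
layer_a = Dense(128, activation='relu')
# passing the callable directly is equivalent
layer_b = Dense(128, activation=activations.relu)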
Thus, similarly, you can define your own activation function, for example:
def my_activ(x, p1, p2):
    ...
    return ...
Suppose you want to use this activation in a Dense layer; you just pass the layer a callable that takes the tensor as its only argument, for example:
x = Dense(128, activation=lambda t: my_activ(t, p1, p2))(input)
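As a concrete sketch (the name scaled_tanh, its parameter values, and the layer sizes are made up for illustration), a parameterized activation built only from backend ops could look like this:

from keras import backend as K
from keras.layers import Dense, Input
from keras.models import Model

def scaled_tanh(x, scale=1.0, slope=1.0):
    # hypothetical parameterized activation: scale * tanh(slope * x)
    return scale * K.tanh(slope * x)

inputs = Input(shape=(64,))
# wrap it in a lambda so Dense receives a callable of a single tensor argument
outputs = Dense(128, activation=lambda t: scaled_tanh(t, scale=2.0, slope=0.5))(inputs)
model = Model(inputs, outputs)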
If you mean you want to implement your own derivative:
If your activation function is written with TensorFlow/Keras operations that are differentiable (e.g. K.dot(), tf.matmul(), tf.concat(), etc.), then the derivatives are obtained by automatic differentiation (https://en.wikipedia.org/wiki/Automatic_differentiation). In that case you don't need to write your own derivative.
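A minimal sketch of this (assuming graph-mode TensorFlow 1.x, which matches the era of the linked documentation): the gradient of an expression built from differentiable ops is produced by tf.gradients with no manual work.

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(None,))
# activation built only from differentiable TensorFlow ops
y = tf.nn.relu(x) * 2.0
# TensorFlow derives dy/dx by automatic differentiation
dy_dx = tf.gradients(y, x)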
If you still want to write the derivatives yourself, check this document: https://www.tensorflow.org/extend/adding_an_op, which describes how to register your gradients using tf.RegisterGradient.
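A minimal TF 1.x-style sketch of that mechanism (the gradient name "CustomClipGrad", the choice of tf.identity as the overridden op, and the clipping behaviour are all made up for illustration):

import tensorflow as tf

# Register a custom gradient function under a (hypothetical) name.
@tf.RegisterGradient("CustomClipGrad")
def _custom_clip_grad(op, grad):
    # Replace the true gradient with a clipped version of the incoming gradient.
    return tf.clip_by_value(grad, -0.1, 0.1)

x = tf.placeholder(tf.float32, shape=(None,))
g = tf.get_default_graph()
# Within this scope, the gradient of Identity is swapped for the custom one.
with g.gradient_override_map({"Identity": "CustomClipGrad"}):
    y = tf.identity(x)
dy_dx = tf.gradients(y, x)

In more recent TensorFlow versions, tf.custom_gradient offers a simpler way to attach a custom gradient directly to a Python function.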