I am trying to optimise a loss function which takes two inputs, m and d. Both are (32, 32, 1) matrices, and I am not able to figure out how to bound/constrain their values between 0 and 1. m and d are filters that I apply to some input being fed into a trained ML model.
I have looked at this documentation:

https://scipy-lectures.org/advanced/mathematical_optimization/index.html#id54 (see Box-Bounds; hyperlink in Chapter Contents)
https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html

and at the related question scipy minimize with constraints.
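For a small problem the box-bounds pattern from those pages seems clear enough. Here is my own minimal two-variable example (not taken from the docs), which runs without complaint:

from scipy import optimize

# two variables -> two (min, max) pairs, so this works
res = optimize.minimize(lambda v: (v[0] - 2) ** 2 + (v[1] - 3) ** 2,
                        x0=[0.0, 0.0],
                        bounds=[(0, 1), (0, 1)])
print(res.x)  # both components end up pinned at the upper bound 1.0

My problem only appears when the input is a (32, 32, 2) array rather than a flat vector of two variables: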
import numpy as np
import tensorflow as tf
from scipy import optimize

def lossfunction(MD):
    m = MD[:, :, 0]
    d = MD[:, :, 1]
    # keep only the examples whose label is not 6
    x = data[np.argwhere(label != 6)]
    # apply the filter: blend the original input with d, weighted by the mask m
    xt = np.multiply((1 - m), x) + np.multiply(m, d)
    num_examples = xt.shape[0]
    sess = tf.get_default_session()
    totalloss = 0
    for offset in range(0, num_examples, BATCH_SIZE):
        batchx, batchy = xt[offset:offset + BATCH_SIZE], np.ones(BATCH_SIZE) * targetlabel
        loss = sess.run(loss_operation, feed_dict={x: batchx, y: batchy, prob: 0.8})
        totalloss = totalloss + loss
    # regularise with the 1-norm of the mask m
    finalloss = totalloss + lam * np.linalg.norm(m, 1)
    return finalloss

optimize.minimize(lossfunction, np.zeros((32, 32, 2)), bounds=((0, 1), (0, 1)))
I get this error message:

ValueError: length of x0 != length of bounds
I understand that the number of bounds has to match the number of variables in x0. Is there a convenient way of passing bounds for an input of this shape?
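From the bounds documentation I believe minimize flattens x0 to a 1-D vector and expects one (min, max) pair per scalar variable, so I would presumably need 32 * 32 * 2 = 2048 pairs and a reshape inside the loss function. A minimal sketch of what I think that looks like, with a dummy stand-in for my real loss:

import numpy as np
from scipy import optimize

def dummyloss(MD_flat):
    # minimize passes a flat vector, so undo the flattening first
    MD = MD_flat.reshape(32, 32, 2)
    m = MD[:, :, 0]
    d = MD[:, :, 1]
    # stand-in objective just to check that the shapes and bounds line up
    return np.sum(m ** 2) + np.sum((d - 0.5) ** 2)

x0 = np.zeros((32, 32, 2))
bounds = [(0, 1)] * x0.size  # one pair per flattened element: 2048 in total

result = optimize.minimize(dummyloss, x0.flatten(), bounds=bounds)
print(result.x.reshape(32, 32, 2).shape)  # (32, 32, 2)

Newer SciPy versions also seem to accept a scipy.optimize.Bounds(0, 1) instance, which I believe broadcasts the same (0, 1) bound to every variable, but I am not sure whether that is the intended/convenient way.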