
NOTE: I have already tried solutions from several SO questions without success; details follow.

I'm studying the cleverhans Python tutorials, focusing on this code (Keras model case). I have basic Keras knowledge, but I've just started with TensorFlow (total newbie).

I'm trying to visualize the adversarial images generated in this piece of code (quoted from the linked cleverhans sources):

# Initialize the Fast Gradient Sign Method (FGSM) attack object and graph
fgsm = FastGradientMethod(wrap, sess=sess)
fgsm_params = {'eps': 0.3,
               'clip_min': 0.,
               'clip_max': 1.}
adv_x = fgsm.generate(x, **fgsm_params)
# Consider the attack to be constant
adv_x = tf.stop_gradient(adv_x)
preds_adv = model(adv_x)

From what I understand, adv_x should contain the generated adversarial images, and I have tried to convert the tensor to an ndarray in order to visualize it through matplotlib. I have tried the following, both before and after the model(adv_x) call:

1) adv_x.eval()
2) adv_x.eval(sess)
3) sess.run(adv_x) 
4) ...and minor variations of these

Nothing works as expected; I get various errors:

ValueError: Cannot evaluate tensor using `eval()`: No default session is registered. Use `with sess.as_default()` or pass an explicit session to `eval(session=sess)`

and

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [?,28,28,1]
 [[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[?,28,28,1], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]

and

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [?,28,28,1]
     [[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[?,28,28,1], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
     [[Node: strided_slice/_115 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_152_strided_slice", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

I also tried with sess.as_default():, with no success.

The type of adv_x is <class 'tensorflow.python.framework.ops.Tensor'>, and its shape is TensorShape([Dimension(None), Dimension(28), Dimension(28), Dimension(1)]). Printing adv_x in the debug console, I obtain: <tf.Tensor 'StopGradient_4:0' shape=(?, 28, 28, 1) dtype=float32>

I also tried working on a slice of the tensor, adv_x[0], with no success.

I'm a bit lost; I think I'm missing some TensorFlow basics, or I have misunderstood the tutorial (is adv_x actually populated with data?).

How do I convert adv_x to an ndarray? Any tip is appreciated.

Regards

Fabiano Tarlao

1 Answer


I have found the solution.

It seems that the tensor adv_x is more like a function than a value, and it needs an input (I currently don't grasp the convoluted TensorFlow reasoning behind this), so you need to call eval() providing both the session and a feed dictionary. The dictionary contains one entry whose key is the adv_x input placeholder and whose value is the data to feed it. In my case I provide the full set of 60000 input examples (images), x_train.
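
To make the idea concrete, here is a tiny TF1-style sketch, unrelated to the cleverhans code (all names are made up), showing that a tensor built from a placeholder has no value until you evaluate it with a feed_dict:

import numpy as np
import tensorflow as tf

inp = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])  # no data yet, just a description of the input
doubled = inp * 2.0                                         # symbolic: nothing is computed here

with tf.Session() as s:
    batch = np.random.rand(5, 28, 28, 1).astype(np.float32)
    # Without feed_dict you get the "You must feed a value for placeholder tensor" error;
    # with it, eval() returns a plain ndarray.
    result = doubled.eval(session=s, feed_dict={inp: batch})
    print(result.shape)  # (5, 28, 28, 1)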

Please note that the placeholder name is x in my case, but in general you should use the placeholder variable you passed to fgsm.generate() (x in the quoted code).

adv_images = adv_x.eval(session=sess, feed_dict={x: x_train})

adv_images is an array of shape (60000, 28, 28, 1); ad1 = adv_images[1] is a greyscale image of shape (28, 28, 1).

You can use matplotlib, but you need to adjust the array shape a bit: matplotlib expects greyscale images as 2D arrays:

matplotlib.pyplot.imshow(ad1[:,:,0])
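
Putting it all together, here is a minimal sketch (assuming x, sess, and x_train from the tutorial are in scope; the cmap argument is an optional extra to force a grey rendering):

import matplotlib.pyplot as plt

# Evaluate a small slice to keep memory usage low
adv_images = adv_x.eval(session=sess, feed_dict={x: x_train[0:10]})
ad1 = adv_images[1]                    # shape (28, 28, 1)
plt.imshow(ad1[:, :, 0], cmap='gray')  # drop the channel axis for a 2D greyscale plot
plt.show()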

This is my solution; perhaps not all the steps are mandatory, but, you know, you have to be careful with black magic :-)

P.S.: in order to avoid out-of-memory errors you can truncate x_train, e.g. x_train2 = x_train[0:100], or evaluate in batches as in the sketch below.
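
If you need the adversarial versions of all 60000 images without exhausting memory, a simple batching sketch along the same lines (again assuming x, sess, x_train and adv_x as above):

import numpy as np

batch_size = 256
batches = []
for start in range(0, len(x_train), batch_size):
    # Feed one small chunk at a time and collect the resulting ndarrays
    chunk = x_train[start:start + batch_size]
    batches.append(adv_x.eval(session=sess, feed_dict={x: chunk}))
adv_images = np.concatenate(batches, axis=0)  # shape (60000, 28, 28, 1)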

  • Kindly, does this work during model building? I mean, can I convert conv1 to an nparray to do some computations and then convert the nparray back to a tensor to feed it to the next convolution layer (conv2)? Below is a simple code snippet to clarify my point: inputs = Input(shape=(48,48,3)) conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv1) ## here i need to get the activation maps of conv1 ## conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(pool1) – ALI Q SAEED Nov 10 '21 at 16:04
  • I'm sorry, I've been out of this for 3 years, and I'd need to reload a few reasoning modules. I have not tried this route, so... I don't know – Fabiano Tarlao Nov 11 '21 at 20:52