
I'm trying to understand how blind detection (detection without the original cover Work) works when linear correlation is used. This is my understanding so far:

Embedding (one-bit):

  1. We generate a reference pattern w_r using the watermarking key.
  2. W_m: we multiply w_r by a strength factor a, and negate it if we want to embed a zero bit.
  3. Then: C = C_0 + W_m + N, where N is noise.
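For reference, here's how I'd sketch the embedding in numpy (my own toy code; the Gaussian reference pattern seeded by the key is an assumption, since the scheme only requires that w_r be derived from the key):

```python
import numpy as np

def embed_bit(C0, bit, key, alpha=1.0, noise_std=0.0):
    """One-bit blind embedding: C = C_0 + W_m + N (sketch of the steps above)."""
    rng = np.random.default_rng(key)            # step 1: reference pattern from the key
    w_r = rng.standard_normal(C0.shape)
    W_m = alpha * w_r if bit == 1 else -alpha * w_r  # step 2: scale by a, negate for a 0-bit
    N = noise_std * np.random.default_rng().standard_normal(C0.shape)  # channel noise
    return C0 + W_m + N                          # step 3
```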

Blind detection (found in literature):

  1. We calculate the linear correlation between w_r and C to detect the presence of w_r in C. Linear correlation in general is the normalized scalar product: LC(C, w_r) = 1/(j*i) * C·w_r.
  2. C·w_r decomposes into C_0·w_r + W_m·w_r + N·w_r. It is said that, because the first and the last term are probably small while W_m·w_r has large magnitude, LC(C, w_r) is approximately +-a * |w_r|^2/(j*i).
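My numpy sketch of that detector would look like this (the names `linear_correlation` and `detect_bit`, and the threshold of 0, are my own choices; 0 is just the midpoint between the expected values +a|w_r|^2/(j*i) and -a|w_r|^2/(j*i)):

```python
import numpy as np

def linear_correlation(C, w_r):
    """Normalized scalar product: LC(C, w_r) = (1/(j*i)) * sum(C * w_r)."""
    return (C * w_r).sum() / C.size

def detect_bit(C, key, threshold=0.0):
    """Blind detection: regenerate w_r from the key and correlate it with C."""
    rng = np.random.default_rng(key)
    w_r = rng.standard_normal(C.shape)
    lc = linear_correlation(C, w_r)
    return 1 if lc > threshold else 0   # high LC -> 1-bit, low (negative) LC -> 0-bit
```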

This makes no sense to me. Why should we only consider +-a * |w_r|^2/(j*i) for detecting watermarks, without using C? That term is independent of C, isn't it?

Or does this only explain why a low (negative) linear correlation corresponds to a zero bit and a high value to a one bit, while we still compute LC(C, w_r) as usual via the scalar product?
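Here's a small numpy experiment I tried (toy data, made-up sizes) that seems to support the second reading: LC is always computed from the received C, and +-a * |w_r|^2/(j*i) is only its approximate/expected value, since the C_0 and N cross terms don't vanish exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
C0 = rng.standard_normal((64, 64))        # cover Work (unknown to the detector)
w_key = np.random.default_rng(7)
w_r = w_key.standard_normal((64, 64))     # reference pattern from the key
a = 0.5
C = C0 + a * w_r                          # embed a 1-bit (no noise, for simplicity)

lc = (C * w_r).sum() / C.size             # computed from C, as usual
expected = a * (w_r**2).sum() / C.size    # the approximation from the literature
print(lc, expected)                       # close, but not identical: LC still depends on C_0
```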

Thanks!
