I'm trying to understand how blind detection (detection without the original cover work) works when using linear correlation. This is my understanding so far:
Embedding (one-bit):
- We generate a reference pattern w_r using the watermarking key.
- We multiply w_r by a strength factor a, and take the negative if we want to embed a zero bit: W_m = ±a * w_r.
- Then C = C_0 + W_m + N, where N is noise.
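If I sketch this embedding step in numpy (the dimensions, seed, and value of a are just my own toy choices, and the seed stands in for the watermarking key), I get something like:

```python
import numpy as np

# Toy sketch of one-bit blind embedding -- all concrete values are my assumptions.
rng = np.random.default_rng(seed=42)        # seed plays the role of the watermarking key
j, i = 64, 64                               # dimensions of the work
c0 = rng.uniform(0, 255, size=(j, i))       # cover work C_0
w_r = rng.choice([-1.0, 1.0], size=(j, i))  # reference pattern w_r derived from the key
a = 10.0                                    # strength factor

def embed(c0, w_r, a, bit):
    """Add W_m = +a*w_r for a one bit, W_m = -a*w_r for a zero bit."""
    w_m = a * w_r if bit == 1 else -a * w_r
    return c0 + w_m  # channel noise N would be added on top of this

c_one = embed(c0, w_r, a, bit=1)   # C = C_0 + a*w_r
c_zero = embed(c0, w_r, a, bit=0)  # C = C_0 - a*w_r
```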
Blind detection (found in literature):
- We calculate the linear correlation between w_r and C to detect the presence of w_r in C. Linear correlation in general is the normalized scalar product: LC(C, w_r) = (1/(j*i)) * C · w_r.
- Expanding C, the product C · w_r consists of C_0 · w_r + W_m · w_r + w_r · N. It is said that, because the left and the right terms are probably small but W_m · w_r has large magnitude, LC(C, w_r) ≈ ±a * |w_r|^2 / (j*i).
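To check my understanding, here is a self-contained numpy sketch of the detector (again, the seed, dimensions, noise level, and a are my own toy assumptions). Note that the detector only ever computes the full product with C:

```python
import numpy as np

# Self-contained sketch of blind detection via linear correlation.
rng = np.random.default_rng(seed=7)         # seed stands in for the watermarking key
j, i = 64, 64
c0 = rng.uniform(0, 255, size=(j, i))       # cover work (unknown to the detector)
w_r = rng.choice([-1.0, 1.0], size=(j, i))  # reference pattern regenerated from the key
a = 10.0                                    # strength factor
n = rng.normal(0, 1.0, size=(j, i))         # channel noise N

def linear_correlation(c, w_r):
    """Normalized scalar product: LC(C, w_r) = (1/(j*i)) * sum(C * w_r)."""
    j, i = c.shape
    return float((c * w_r).sum()) / (j * i)

lc_one  = linear_correlation(c0 + a * w_r + n, w_r)  # one bit embedded
lc_zero = linear_correlation(c0 - a * w_r + n, w_r)  # zero bit embedded
lc_none = linear_correlation(c0 + n, w_r)            # nothing embedded

# Since w_r here has +-1 entries, |w_r|^2 / (j*i) = 1, so lc_one lands near +a
# and lc_zero near -a, while lc_none stays near 0 (only the small C_0 and N terms).
```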
This makes no sense to me. Why should we consider only ±a * |w_r|^2 / (j*i) for detecting watermarks, without using C? That term LC(C, w_r) ≈ ±a * |w_r|^2 / (j*i) is independent of C, isn't it?
Or does this only explain why we can say that a low linear correlation corresponds to a zero bit and a high value to a one bit, and we still compute LC(C, w_r) like we usually do, via the scalar product?
Thanks!