I have only started learning feature hashing, so I need help understanding whether I can apply, as the hash function, the map expressed mathematically as the Tent map (https://en.wikipedia.org/wiki/Tent_map).
One such application of the Tent map is in cryptography -- Paper 1: Implementation of Hash Function Based On Neural Cryptography (download link).
In feature hashing:
Let x be a data point of dimension D, i.e., it has D elements. A linear hash function is used to transform the D-dimensional data point into a lower, k-dimensional one in such a way that distances in the reduced-dimension feature space are (approximately) preserved.
The k-th hash bit is obtained through the operation
h_k(x) = sign(y(x)) = sign(f(w_k^T x + b))    (1)
The output h_k(x) is a 0 or 1 bit.
In essence, we are classifying whether the data point x belongs to the 0 class or the 1 class by creating random hyperplanes.
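To make sure I understand the random-hyperplane picture, here is a minimal NumPy sketch of equation (1) with random hyperplanes (the function name `random_hyperplane_hash` and the choice of standard-normal weights with zero bias are my own assumptions, not something taken from Paper 1):

```python
import numpy as np

def random_hyperplane_hash(X, k, seed=0):
    """Hash D-dimensional rows of X to k bits via random hyperplanes.

    Bit j of a point x is 1 if w_j^T x + b_j > 0, else 0, with the
    hyperplanes (w_j, b_j) drawn at random.
    """
    rng = np.random.default_rng(seed)
    D = X.shape[1]
    W = rng.standard_normal((D, k))   # one random hyperplane per bit (assumed Gaussian)
    b = np.zeros(k)                   # bias term; taken as zero here
    return (X @ W + b > 0).astype(np.uint8)

# Example: 5 points in D = 10 dimensions reduced to k = 4 bits each
X = np.random.default_rng(1).standard_normal((5, 10))
print(random_hyperplane_hash(X, k=4))   # shape (5, 4), one binary code per row
```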
There are various choices of the transfer function f in feature hashing for dimensionality reduction: f = tanh(), or simply random sampling to obtain the hyperplanes. Another choice is to use kernel functions when the data is not linearly separable; such a hashing technique is implemented using kernels, and one popular choice is the Gaussian RBF kernel.
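As a concrete illustration of the kernel variant, here is a loose sketch in the spirit of kernelized hashing with a Gaussian RBF: each point is first mapped to RBF similarities against a few anchor points, and those kernel features are then cut by random hyperplanes as in equation (1). The anchor selection, gamma value, and centering step are assumptions for illustration, not a specific published algorithm:

```python
import numpy as np

def rbf_kernel_hash(X, anchors, k, gamma=0.5, seed=0):
    """Sketch of kernelized hashing: RBF similarities to anchors,
    then k random hyperplanes in that kernel feature space."""
    rng = np.random.default_rng(seed)
    # Gaussian RBF similarities K(x, a) = exp(-gamma * ||x - a||^2)
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-gamma * d2)            # shape (n_points, n_anchors)
    K = K - K.mean(axis=0)             # center the kernel features (assumed)
    W = rng.standard_normal((anchors.shape[0], k))
    return (K @ W > 0).astype(np.uint8)

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 10))
anchors = X[rng.choice(6, size=3, replace=False)]   # anchors sampled from the data
print(rbf_kernel_hash(X, anchors, k=4))
```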
Question: In Paper 1, the authors have used the asymmetric Tent map (https://en.wikipedia.org/wiki/Tent_map), which is piecewise linear on the unit interval, as the transfer function. To me, the formulation of hashing in that paper appears similar to the hash equation (1). How can I apply this piecewise-linear map to create hyperplanes in order to do feature hashing?
Or am I mixing the two concepts?
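For concreteness, here is roughly what I have in mind, i.e., simply replacing the transfer function f in equation (1) with the asymmetric Tent map. The sigmoid squashing into (0, 1) and the 0.5 threshold are my own assumptions about how the pieces would fit together, not something stated in Paper 1:

```python
import numpy as np

def tent_map(z, mu=0.7):
    """Asymmetric tent map on the unit interval [0, 1]."""
    return np.where(z < mu, z / mu, (1.0 - z) / (1.0 - mu))

def tent_map_hash(X, k, seed=0):
    """Assumed scheme: squash w_j^T x + b_j into (0, 1), apply the tent
    map as the transfer function f, and threshold at 0.5 to get a bit."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], k))
    b = rng.standard_normal(k)
    z = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # projections mapped into (0, 1)
    y = tent_map(z)                          # piecewise-linear transfer function
    return (y > 0.5).astype(np.uint8)

X = np.random.default_rng(3).standard_normal((5, 10))
print(tent_map_hash(X, k=4))
```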