Assuming you have some row vector of weights, W (or in your case probably some function of the eigenvalues of the matrix), you should first create a vector representing the normalized cumulative sum of the weights as follows:
cumNormW = cumsum(W)./sum(W);
The values of cumNormW should now be monotonically increasing and should range from W(1)/sum(W) to 1.
Next you'll want to take a draw from a uniform distribution over [0,1]:
randVal = rand(1);
You will use this random value to look up the column you have just randomly selected. Finally, you want to find the first index of your cumulative normalized weight vector whose value is greater than or equal to randVal.
randCol = find(cumNormW >= randVal, 1, 'first')
This provides you with your final random column selection.
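If you'll be doing this repeatedly, the normalization and lookup steps can be wrapped in a small helper function. This is just a sketch, and the name weightedColumnDraw is made up for illustration:
function col = weightedColumnDraw(W)
% Draw one column index with probability proportional to the weights in W
cumNormW = cumsum(W)./sum(W);   % normalized cumulative weights
randVal = rand(1);              % uniform draw on [0,1]
col = find(cumNormW >= randVal, 1, 'first');
end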
Now if you want to ensure that you select t different columns, you'll need to keep track of the previous values of randCol and simply repeat the randVal and randCol steps above until you produce a yet-unchosen column, as sketched below.
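A minimal sketch of that loop, assuming W and the desired count t are already defined:
cumNormW = cumsum(W)./sum(W);
chosenCols = [];                              % columns selected so far
while numel(chosenCols) < t
    randVal = rand(1);
    randCol = find(cumNormW >= randVal, 1, 'first');
    if ~ismember(randCol, chosenCols)         % only keep yet-unchosen columns
        chosenCols(end+1) = randCol;
    end
end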
Example
Let's say you have a matrix with 4 columns and you have calculated the eigenvalues for that matrix and stored them in W. We'll assume that columns correspond to eigenvectors and you want to choose a column with probability proportional to that column's eigenvalue compared to the sum of eigenvalues.
W = [4 3 2 1];
After taking the normalized cumulative sum, you get cumNormW = [0.40 0.70 0.90 1.00];
As you can see, you have a significantly higher probability of drawing a number that results in choosing the first column (randVal <= 0.4) than the last column (0.9 < randVal <= 1).
You draw a random number via rand(1), and for this example let's say randVal = 0.82. Using the final randCol step, for this iteration you would get randCol = 3, since cumNormW(3) = 0.90 is the first entry that is at least 0.82.
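Putting the example together as a runnable sketch (the 0.82 draw is hard-coded here only so it reproduces the walkthrough above):
W = [4 3 2 1];                                % weights, e.g. eigenvalues
cumNormW = cumsum(W)./sum(W);                 % [0.40 0.70 0.90 1.00]
randVal = 0.82;                               % in practice: randVal = rand(1);
randCol = find(cumNormW >= randVal, 1, 'first')   % returns 3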