I am trying to solve the following optimization problem to obtain a set of values x_1, x_2, ..., x_k:
argmin_{x_1, ..., x_k} Σ_i a_i * x_i
subject to (x_1, x_2, ..., x_k) ~ Laplace(m, b)
The terms a_i are constants, and the values x_i are drawn from a Laplace distribution with mean m and scale parameter b, so the resulting solution vector should itself look like a sample from a Laplace distribution. What is this kind of constraint called?
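For concreteness, here is a minimal sketch of the setup (purely illustrative; the array `a` and the values of `m` and `b` below are placeholders, not part of the question): the objective is linear in the x_i, and the requirement is that the candidate vector x behaves like a sample from Laplace(m, b).

```python
# Illustrative sketch of the setup, not a solution method.
import numpy as np

rng = np.random.default_rng(0)

a = np.array([1.5, -0.3, 2.0, 0.7])   # placeholder constants a_i
m, b = 0.0, 1.0                        # Laplace location (mean) and scale

def objective(x, a):
    """Linear objective: sum_i a_i * x_i."""
    return np.dot(a, x)

# A candidate x that satisfies the distributional requirement:
# each x_i is drawn i.i.d. from Laplace(m, b).
x = rng.laplace(loc=m, scale=b, size=a.shape)
print(objective(x, a))
```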