
Problem

I have a vector w containing n elements. I do not know n in advance.

I want to generate an n-dimensional grid g whose values range from grid_min to grid_max and obtain the "dimension-wise" product of w and g.

How can I do this for an arbitrary n?


Examples

For simplicity, let's say that grid_min = 0 and grid_max = 5.

Case: n=1

>> w = [0.75];
>> g = 0:5

g =

     0     1     2     3     4     5

>> w * g

ans =

         0    0.7500    1.5000    2.2500    3.0000    3.7500

Case: n=2

>> w = [0.1, 0.2];
>> [g1, g2] = meshgrid(0:5, 0:5)

g1 =

     0     1     2     3     4     5
     0     1     2     3     4     5
     0     1     2     3     4     5
     0     1     2     3     4     5
     0     1     2     3     4     5
     0     1     2     3     4     5


g2 =

     0     0     0     0     0     0
     1     1     1     1     1     1
     2     2     2     2     2     2
     3     3     3     3     3     3
     4     4     4     4     4     4
     5     5     5     5     5     5

>> w(1) * g1 + w(2) * g2

ans =

         0    0.1000    0.2000    0.3000    0.4000    0.5000
    0.2000    0.3000    0.4000    0.5000    0.6000    0.7000
    0.4000    0.5000    0.6000    0.7000    0.8000    0.9000
    0.6000    0.7000    0.8000    0.9000    1.0000    1.1000
    0.8000    0.9000    1.0000    1.1000    1.2000    1.3000
    1.0000    1.1000    1.2000    1.3000    1.4000    1.5000

Now suppose a user passes in the vector w and we do not know how many elements (n) it contains. How can I create the grid and obtain the product?

rhombidodecahedron
  • Kudos for such a clearly expressed question – Luis Mendo Jan 22 '15 at 11:50
  • If you run into memory problems for "large" `n` (this seems to explode real quick anyway...) you could get a tiny bit further if you take *Luis*' approach and modify it to only generate one of the `n` `gg`s and sum it in a `for`-loop while permuting it. – knedlsepp Jan 22 '15 at 12:57
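
A rough sketch of the memory-saving idea from the comment above (an interpretation, not code from the thread; assumes `grid_min`, `grid_max` and `w` are already defined and `n >= 2`; the helper names `v`, `m`, `g1`, `order` are illustrative): build a single grid that varies along dimension 1, then accumulate its weighted, permuted copies in a loop instead of storing all n grids at once.

%// Sketch only: one full grid plus one permuted temporary at a time
v = grid_min:grid_max;
m = numel(v);
n = numel(w);
g1 = repmat(v(:), [1, repmat(m, 1, n-1)]);  %// n-dimensional grid varying along dimension 1
result = zeros(size(g1));
for k = 1:n
    order = circshift(1:n, [0, k-1]);       %// send the varying dimension to position k
    result = result + w(k) * permute(g1, order);
end

Peak memory here is roughly two grids plus the result (permute makes a temporary copy), rather than all n grids concatenated along an extra dimension as in the answer below.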

1 Answer

%// Data:
grid_min = 0;
grid_max = 5;
w = [.1 .2 .3];

%// Let's go:    
n = numel(w);
gg = cell(1,n);
[gg{:}] = ndgrid(grid_min:grid_max);
gg = cat(n+1, gg{:});
result = sum(bsxfun(@times, gg, shiftdim(w(:), -n)), n+1);

How this works:

The grid (variable gg) is generated with ndgrid, using as output a comma-separated list of n elements obtained from a cell array. The resulting n-dimensional arrays (gg{1}, gg{2}, etc.) are concatenated along the (n+1)-th dimension (using cat), which turns gg into an (n+1)-dimensional array. The vector w is reshaped into the (n+1)-th dimension (shiftdim), multiplied by gg using bsxfun, and the result is summed along the (n+1)-th dimension.
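
As a quick sanity check (not part of the original answer), the code can be compared against the n=2 example from the question. Because it uses ndgrid rather than meshgrid, the result is the transpose of the matrix shown in the question, as pointed out in the comments:

%// Sanity check for n = 2 (grid_min = 0, grid_max = 5 as above)
w = [0.1 0.2];
n = numel(w);
gg = cell(1,n);
[gg{:}] = ndgrid(grid_min:grid_max);
gg = cat(n+1, gg{:});
result = sum(bsxfun(@times, gg, shiftdim(w(:), -n)), n+1);
[g1, g2] = meshgrid(grid_min:grid_max);
isequal(result.', w(1)*g1 + w(2)*g2)   %// returns 1 (true)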

Edit:

Following @Divakar's insightful comment, the last line can be replaced by

sz_gg = size(gg);
result = zeros(sz_gg(1:end-1));
result(:) = reshape(gg,[],numel(w))*w(:);

which results in a significant speedup, because Matlab is even better at matrix multiplication than at bsxfun (see for example here and here).
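
For reference, one way to compare the two versions is timeit (a rough sketch, not from the original answer; choosing n = 6 dimensions is an arbitrary assumption, and timings will vary by machine and MATLAB version):

%// Rough timing sketch: bsxfun-based vs. matrix-multiplication-based version
grid_min = 0;
grid_max = 5;
w = rand(1,6);
n = numel(w);
gg = cell(1,n);
[gg{:}] = ndgrid(grid_min:grid_max);
gg = cat(n+1, gg{:});
sz_gg = size(gg);

f_bsxfun = @() sum(bsxfun(@times, gg, shiftdim(w(:), -n)), n+1);
f_matmul = @() reshape(reshape(gg,[],numel(w))*w(:), sz_gg(1:end-1));

timeit(f_bsxfun)
timeit(f_matmul)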

Luis Mendo
  • These tensor-multiplications should really be built-ins! BTW: It wouldn't be a good idea to change the answer, as this is more natural, but as a sidenote: for `n==2` it's actually the transpose of the OP's question due to `meshgrid` vs. `ndgrid`. I'm pretty sure that `ndgrid` is what he really wants. – knedlsepp Jan 22 '15 at 12:01
  • `sz_gg = size(gg); result = zeros(sz_gg(1:end-1)); result(:) = reshape(gg,[],numel(w))*w(:);` seems to be much more memory efficient and shows good speedup. – Divakar Jan 22 '15 at 13:34
  • @knedlsepp Good point! As you say, I think I'll leave it as it is – Luis Mendo Jan 22 '15 at 14:26
  • @Divakar Good alternative! However, it doesn't seem to speed things up much. But I only tested with `tic`, `toc` – Luis Mendo Jan 22 '15 at 14:31
  • @Divakar: Octave's JIT doesn't seem that good, so ideone profiling won't give the best results, but on my machine [this `for`-approach](https://ideone.com/REJyDQ) will even be faster for large n. (Though not as pretty.) – knedlsepp Jan 22 '15 at 14:55
  • @Divakar You're right. I was too sloppy and didn't apply iterations. With your code, the speedup on my computer is even greater than yours. Edited! – Luis Mendo Jan 22 '15 at 14:57
  • @Divakar I remember you often use this technique of substituting `bsxfun` by matrix product. And I think you had a "canonical" answer for that, but I can't find it. Can you tell me what it is, so that I can link it in my edited answer? – Luis Mendo Jan 22 '15 at 14:58
  • Well, I am not sure if I had any canonical answer for replacing `sum(bsxfun(@times..)` with `matrix-multiplication`. See if you meant one of these - [1](http://stackoverflow.com/a/26994722/3293881), [2](http://stackoverflow.com/a/27142285/3293881), [3](http://stackoverflow.com/a/26905520/3293881) – Divakar Jan 22 '15 at 15:10
  • @Divakar It was 3, and the one you were based on there. Thanks! – Luis Mendo Jan 22 '15 at 15:20