Since you don't like loops, how about recursive functions?
% iif: inline conditional -- evaluates and returns the value that follows
% the first condition which is true
iif = @(varargin) varargin{2 * find([varargin{1:2:end}], 1, 'first')}();
% gcdrec: recursive gcd of a vector; the handle is passed to itself as gcdr,
% since an anonymous function cannot refer to itself directly
gcdrec = @(v,gcdr) iif(length(v)==1,v, ...
                       v(1)==1,1, ...
                       length(v)==2,@()gcd(v(1),v(2)), ...
                       true,@()gcdr([gcd(v(1),v(2)),v(3:end)],gcdr));
% mygcd: convenience front-end that flattens the input into a row vector
mygcd = @(v) gcdrec(v(:)',gcdrec);

A = [0,0,0; 2,4,2; -2,0,8];
divisor = mygcd(A);
A = A/divisor;
The first function, iif, defines an inline conditional construct. This makes it possible to define a recursive function, gcdrec, that finds the greatest common divisor of your array. iif works like this: it tests whether the first argument is true; if it is, it returns the second argument. Otherwise it tests the third argument, and if that's true it returns the fourth, and so on. You need to protect recursive calls, and sometimes other quantities appearing inside it, with @(); otherwise every branch is evaluated as soon as iif is called, and you can get errors (or infinite recursion).
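As a small illustration of why the @() protection matters (my own toy example, not part of the solution above): wrapping a branch in @() turns it into a function handle, so it is only evaluated if iif actually selects it.

iif = @(varargin) varargin{2 * find([varargin{1:2:end}], 1, 'first')}();

v = 5;
r = iif(isscalar(v), v, ...            % a plain value is fine here
        true,        @() v(1)+v(2));   % would error if evaluated for scalar v
% r is 5; v(1)+v(2) is never evaluated because its branch is wrapped in @()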
Using iif, the recursive function gcdrec works like this:
- if the input vector is a scalar, it returns it
- else, if the first element of the vector is 1, the gcd of the whole vector can only be 1, so it returns 1 immediately (this quick exit matters for large matrices)
- else, if the input vector has length 2, it returns their greatest common divisor via gcd
- else, it calls itself with a shortened vector in which the first two elements are replaced by their greatest common divisor.
The function mygcd is just a convenience front-end.
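Concretely, for the example matrix at the top, the recursion unwinds like this (a hand trace, using the definitions above):

A = [0,0,0; 2,4,2; -2,0,8];
% A(:)' flattens the matrix column-wise: [0 2 -2 0 4 0 0 2 8]
% each recursive step replaces the first two elements with their gcd:
% [0 2 -2 0 4 0 0 2 8] -> [2 -2 0 4 0 0 2 8] -> [2 0 4 0 0 2 8] -> ...
%                         ... -> [2 2 8] -> [2 8] -> gcd(2,8) = 2
divisor = mygcd(A)   % 2
A = A/divisor        % [0 0 0; 1 2 1; -1 0 4]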
Should be pretty fast, and I guess only the recursion depth could be a problem for very large inputs. I did a quick timing check against the looping version of @Adriaan, using A=randi(100,N,N)-50 with N=100, N=1000 and N=5000, measured with tic/toc.
N=100:
- looping 0.008 seconds
- recursive 0.002 seconds

N=1000:
- looping 0.46 seconds
- recursive 0.04 seconds

N=5000:
- looping 11.8 seconds
- recursive 0.6 seconds
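For reference, the measurement can be reproduced with something along these lines (my own sketch, using the mygcd defined above; @Adriaan's looping baseline is not repeated here):

for N = [100 1000 5000]
    A = randi(100,N,N) - 50;            % test matrix as described above
    tic, d = mygcd(A); t = toc;
    fprintf('N = %4d: gcd = %d, %.3f s\n', N, d, t);
end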
Update: the interesting thing is that the only reason I didn't hit the recursion limit (500 by default) is that my data didn't have a common divisor greater than 1, so the v(1)==1 shortcut ended the recursion after only a few steps. Take a random matrix and double it (so that every element is even) and you hit the recursion limit already for N=100. So for large matrices this won't work. Then again, for small matrices @Adriaan's solution is perfectly fine.
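To reproduce the failure (my own check, not from the original test): force a common divisor, so the quick exit never triggers and the recursion depth grows with numel(A). Raising the limit via set(0,'RecursionLimit',...) only postpones the problem.

N = 100;
A = 2*(randi(100,N,N) - 50);     % every element even => running gcd never hits 1
% divisor = mygcd(A);            % errors: maximum recursion limit reached
% set(0,'RecursionLimit',50000); % raising the limit is possible, but the
                                 % required depth still scales with numel(A)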
I also tried rewriting it to halve the input vector in each recursive step: this indeed solves the recursion-limit problem, but it is very slow (2 seconds for N=100, 261 seconds for N=1000). There might be a middle ground somewhere, where the matrix size is large(ish) and the runtime's not that bad, but I haven't found it yet.
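For completeness, a halving variant could look roughly like the sketch below (my reconstruction, not necessarily the exact version timed above): the recursion depth drops to about log2(numel(A)), but it loses the quick v(1)==1 exit and copies sub-vectors at every call, which may explain the slowdown.

iif = @(varargin) varargin{2 * find([varargin{1:2:end}], 1, 'first')}();
% split the vector in half, take the gcd of each half recursively, then combine
gcdhalf = @(v,gcdr) iif( ...
    length(v)==1, v, ...
    length(v)==2, @()gcd(v(1),v(2)), ...
    true,         @()gcd(gcdr(v(1:floor(numel(v)/2)),     gcdr), ...
                         gcdr(v(floor(numel(v)/2)+1:end), gcdr)));
mygcdhalf = @(v) gcdhalf(v(:)',gcdhalf);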