
I'm trying to solve a linear system in Julia with `ldltfact`. Why am I getting different results in the following two cases?

Setup:

srand(10)
n = 7
pre = sprand(n,n,0.5)
H = pre + pre' + speye(n,n)
p_true = rand(n)
g = H*-p_true
fac = ldltfact(H; shift=0.0)
perm = fac[:p]
p1 = zeros(n)
p2 = zeros(n)

Case 1:

LDs = sparse(fac[:LD])
q1 = LDs\-g[perm]
p1[perm] = fac[:U]\q1
H*p1 - H*p_true

Case 2:

q2 = fac[:LD]\-g[perm]
p2[perm] = fac[:U]\q2
H*p2 - H*p_true

The solution p1 is wrong in the first case.

jjjjjj
  • Any specific reason why you access internal fields of the factorization? Why don't you just do x = ldltfact(A) \ b to solve Ax = b? – fredrikekre Jan 30 '18 at 09:07 (a sketch of this follows the comments)
  • `sparse(fac[:LD])` should not be used in algebra, as it is basically a "montage" of the matrix `L` with the diagonal `D`. As a regular sparse matrix the operations (`\ `, etc.) can be applied, but the results can be unexpected and wrong - as you discovered. To solve correctly using `LDs`, separate it into `L` and `D` and solve `L*D*Lt \ -g[perm]`. To do it faster you can solve factor by factor: first `L \ -g[perm]`, then `D \` that result, then `Lt \` that. – Dan Getz Jan 30 '18 at 10:38
  • @fredrikekre: I would, but I need to perform a positive definite modification where I change any of the block diagonal elements corresponding to an eigenvalue <= 0 to some positive delta, and I'd prefer not to modify the entire diagonal with `shift=delta`. – jjjjjj Jan 30 '18 at 14:53
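
For reference, the direct solve fredrikekre suggests looks roughly like this; a minimal sketch in the question's Julia 0.6 syntax, where `p_direct` is just an illustrative name:

p_direct = ldltfact(H) \ -g    # the factorization object handles permutation, L, D, and L' itself
norm(p_direct - p_true)        # should be ~0 up to round-off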

1 Answer


I couldn't post this nicely as a comment, so I wanted to add it for posterity. Solving Case 1 in the following way worked for this example (thanks to @DanGetz's comment).

# Extract the unit lower-triangular L and the diagonal D from the combined LD matrix
L = copy(LDs)
for i = 1:size(L,1)
  L[i,i] = 1.0                 # LD stores D on its diagonal; L has an implicit unit diagonal
end
D = sparse(1:size(L,1), 1:size(L,1), diag(LDs))

q1 = (L*D)\-g[perm]            # forward solve with L*D
p1[perm] = L'\q1               # back solve with L', then undo the fill-reducing permutation
H*p1 - H*p_true                # residual check: should be ~0
jjjjjj
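
Following up on Dan Getz's comment, here is a minimal sketch of solving factor by factor instead of forming `L*D`, reusing `L`, `D`, `perm`, and `g` from above. `p3` and `delta` are illustrative names not in the original, and the commented-out clamp is only a guess at the positive definite modification mentioned in the comments:

q = L \ -g[perm]               # forward substitution with the unit lower-triangular L
q = D \ q                      # divide by the pivots in D
p3 = zeros(n)
p3[perm] = L' \ q              # back substitution with L', then undo the permutation
H*p3 - H*p_true                # residual check: should be ~0

# To modify non-positive pivots before solving, something like:
# delta = 1e-4
# d = full(diag(LDs)); d[d .<= 0] = delta
# D = sparse(1:n, 1:n, d)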