I am putting together a simple meta-interpreter which outputs the steps of a proof. I am having trouble getting the proof steps out as an output argument. My predicate explain1 returns the proof in the detailed form that I would like, but not as an output argument. My predicate explain2 returns the proof as an output argument, but not with the level of detail that I would like. Can explain2 be modified so that it yields as much info as explain1? I don't need it to output the text "Explaining..." and "An explanation is...", just the actual explanans and explanandum.
The toy data at the bottom of the program ("if healthy and rich, then happy") is just an example; the idea is to have a database with more facts about other things. I want to write a predicate that accepts an effect, e.g. happy(john), and returns an explanation for it. So the E argument of explain is supposed to be supplied by the user; another query might thus be explain(_, smokes(mary), _), and so on. I can't get what I want directly from the C and E variables in explain, because I want the program to output the steps of the proof process, where C and E vary, e.g. "rich and healthy, so happy; wins, so rich; TRUE, so rich; TRUE, so happy" and so on. That is, it should return all the causal links that lead up to an effect.
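Concretely, for the second proof of happy(john) that main1 finds (see the output below), I would want T to come out as one [Cause, Effect, Probability] triple per causal link, something like:

[[[wins(john)], rich(john), 0.3],
 [['True'], healthy(john), 0.9],
 [[wins(john), 'True'], happy(john), 0.162]]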
The excellent site by Markus Triska has some details on this, but I am having trouble adapting that code to my problem.
Any help would be greatly appreciated!
Thanks/JCR
My program:
main1 :- explain1(_, happy(john), _), fail.
main2 :- explain2(_, happy(john), _, T), writeln(T), fail.

% Direct link whose cause is the root 'True'.
explain1(C, E, P) :-
    C = ['True'],
    p(C, E, P),
    write('Explaining '), write(E),
    write('. An explanation is: '), write(C),
    write(' with probability '), write(P), nl.

% Direct link whose cause is not 'True'.
explain1(C, E, P) :-
    p(C, E, P),
    C \= ['True'],
    write('Explaining '), write(E),
    write('. An explanation is: '), write(C),
    write(' with probability '), write(P), nl.

% Recursive case: explain each cause in turn and multiply the probabilities.
explain1(C, E, P) :-
    p(C0, E, P0),
    maplist(explain1, C1, C0, P1),
    flatten(C1, C),
    append([P0], P1, P2),
    flatten(P2, P3),
    foldl(multiply, P3, 1, P),
    write('Explaining '), write(E),
    write('. An explanation is: '), write(C),
    write(' with probability '), write(P), nl.
% explain2/4: the same three cases, but the step is returned as
% T = [C, E, P] instead of being written out.
explain2(C, E, P, T) :-
    C = ['True'],
    p(C, E, P),
    T = [C, E, P].

explain2(C, E, P, T) :-
    p(C, E, P),
    C \= ['True'],
    T = [C, E, P].

explain2(C, E, P, T) :-
    p(C0, E, P0),
    maplist(explain2, C1, C0, P1, _),   % the sub-proof terms are thrown away here
    flatten(C1, C),
    append([P0], P1, P2),
    flatten(P2, P3),
    foldl(multiply, P3, 1, P),
    T = [C, E, P].
multiply(V1, V2, R) :- R is V1 * V2.
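% Toy data: "if healthy and rich, then happy".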
p(['True'], wins(john), 0.7).
p([wins(john)], rich(john), 0.3).
p(['True'], healthy(john), 0.9).
p([rich(john), healthy(john)], happy(john), 0.6).
The output of main1:
Explaining happy(john). An explanation is: [rich(john), healthy(john)] with probability 0.6
Explaining rich(john). An explanation is: [wins(john)] with probability 0.3
Explaining healthy(john). An explanation is: [True] with probability 0.9
Explaining happy(john). An explanation is: [wins(john), True] with probability 0.162
Explaining wins(john). An explanation is: [True] with probability 0.7
Explaining rich(john). An explanation is: [True] with probability 0.21
Explaining healthy(john). An explanation is: [True] with probability 0.9
Explaining happy(john). An explanation is: [True, True] with probability 0.1134
The output of main2:
[[rich(john), healthy(john)], happy(john), 0.6]
[[wins(john), True], happy(john), 0.162]
[[True, True], happy(john), 0.1134]
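To make the target concrete, here is a rough sketch of the shape I imagine a solution having (explain3 and main3 are just placeholder names of mine): the two base clauses of explain2 collapse into one, since together they cover any direct p/3 link, and each sub-proof's steps get threaded through the extra argument.

% Sketch: T collects one [C, E, P] step per causal link in the proof,
% sub-steps first and the final conclusion last.
explain3(C, E, P, [[C, E, P]]) :-
    p(C, E, P).
explain3(C, E, P, T) :-
    p(C0, E, P0),
    maplist(explain3, C1, C0, P1, Ts),  % keep the sub-proof step lists
    flatten(C1, C),
    foldl(multiply, P1, P0, P),         % P0 times the product of P1
    append(Ts, SubSteps),               % concatenate the sub-proofs
    append(SubSteps, [[C, E, P]], T).   % this step goes last

main3 :- explain3(_, happy(john), _, T), maplist(writeln, T), nl, fail.

If something like this is on the right track, main3 would print one [Cause, Effect, Probability] line per causal link for each of the three proofs that main1 finds.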