In particular I am trying to determine the best approach to solving the following type of problem:
The example I am interested in is the Find-S algorithm in Mitchell's Machine Learning book, where it is applied to four training examples.
The basic idea is, for each training example x and hypothesis h, to determine h', where h' incorporates x by making h more general. I need to map h to h' for each x in the training set. The problem I am having is how best to approach this in a logic programming language. I am using miniKanren, which is roughly Prolog embedded in Scheme.
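For reference, the single generalization step of Find-S can be sketched as follows. This is a hedged Python sketch, not the miniKanren relation; it follows Mitchell's conventions, where 0 is the maximally specific value ("no value allowed") and '?' is the maximally general ("any value allowed"):

```python
def generalize(h, x):
    """Return h', the minimal generalization of hypothesis h covering positive example x.

    Mitchell's conventions: 0 means "no value allowed" (maximally specific),
    '?' means "any value allowed" (maximally general).
    """
    h_new = []
    for hv, xv in zip(h, x):
        if hv == 0:          # first positive example: adopt its attribute value
            h_new.append(xv)
        elif hv == xv:       # attribute already consistent with the example
            h_new.append(hv)
        else:                # conflicting values: generalize this attribute to '?'
            h_new.append('?')
    return h_new
```

For example, generalizing the all-0 hypothesis with a first positive example just copies the example, and a second example that differs in one attribute turns that attribute into '?'.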
After computing each h', I need to set! it to a global variable h and then proceed to the next training example x. The code below is the main part of the program.
(define h '(0 0 0 0 0 0))

(define seto
  (lambda (x)
    (project (x)
      (lambda (s) (set! h x) (succeed s)))))

(run* (q)
  (fresh (x h0 h1)
    (trainingo x)
    (== h h0)
    (find-so h0 x h1)
    (seto h1)
    (== h1 q)))
Here h is the global variable; seto mutates h with h1, the next hypothesis computed from h0 and the training example x using the Find-S algorithm (find-so).
In Prolog it would (I think) be equivalent to calling assert('hypothesis'(H)) after each training example X (overwriting the previous one) and retract('hypothesis'(H)) after all the training examples have been applied.
Again, my question is whether this approach (via side effects) is the best way to solve this kind of problem.
Edit: I accepted @mat's answer in conjunction with his comment. In summary, I needed to treat the training examples as a list and use forward recursion on that list until I reached the empty list. Where I was getting stuck was in making the training examples part of the backtracking along with finding the next hypothesis, instead of putting them in a list I could recur on until it is empty.
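The accepted idea, recurring over the list of examples and threading the hypothesis forward instead of mutating a global, amounts to a fold. A hedged Python sketch of that shape (the generalization step is the standard Find-S one; it assumes the list contains only the positive examples, since Find-S ignores negatives):

```python
def find_s(examples, n_attributes=6):
    """Fold the Find-S generalization step over a list of positive examples."""
    def generalize(h, x):
        # 0 = maximally specific, '?' = any value allowed (Mitchell's notation)
        return [xv if hv == 0 else hv if hv == xv else '?'
                for hv, xv in zip(h, x)]

    h = [0] * n_attributes   # start with the maximally specific hypothesis
    for x in examples:       # forward recursion over the list, written as a loop
        h = generalize(h, x) # thread h' into the next step; no set! needed
    return h
```

The same structure translates directly into a pure miniKanren relation: relate an empty example list to the current hypothesis, and otherwise relate (find-so h x h1) for the head and recur on the tail with h1, so the hypothesis is an argument threaded through the recursion rather than a global.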