
I've been using Z3 with the Java bindings for two years now. For some reason, I've always generated the SMT-LIB2 code myself as a String and then used parseSMTLIB2String to build the corresponding Z3 Expr. As far as I can remember, every time I entered the exact same input twice with this method, I got the same model.

But I recently decided to change and use the Java API directly, building the expressions with ctx.mk...(). Basically, I'm no longer generating the String and then parsing it; I let Z3 do the job of building the Z3 Expr.

What happens now is that I get different models, even though I've checked that the solver does indeed store the exact same code. My Java code looks something like this:

import com.microsoft.z3.Context;
import com.microsoft.z3.Solver;
import com.microsoft.z3.Status;

static final Context context = new Context();
static final Solver solver = context.mkSolver();

public static void someFunction(){
    solver.add(context.mk...()); // Add some bool expr to the solver
    Status status = solver.check();
    if (status == Status.SATISFIABLE) {
        System.out.println(solver.getModel()); // Prints different models for the same expr
    }
}

I'm making more than one call to someFunction() during runtime, and the checked expression context.mk...() changes between calls. But if I run my program twice, the same sequence of expressions is checked, and it sometimes gives me different models from one run to another.

I've tried disabling the auto-config parameter and setting my own random seed, but Z3 still sometimes produces different models. I'm only using bounded integer variables and uninterpreted functions. Am I using the API in the wrong way?
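For reference, here is roughly how I set those parameters through the Java API (a sketch; the parameter names are the ones I found in Z3's global parameter list and may differ between versions):

import com.microsoft.z3.Global;

// Disable automatic configuration and pin the random seeds.
// I set these before creating the Context.
Global.setParameter("auto_config", "false");
Global.setParameter("smt.random_seed", "42");
Global.setParameter("sat.random_seed", "42");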

I could add the whole SMT-LIB2 code to this question if needed, but it isn't really short and contains multiple solver calls (I don't even know which of them produce a different model from one execution to another; I just know that some do).

I should point out that I've read the following threads, but found the answers to be either outdated or (if I understood correctly) in favour of "Z3 is deterministic and should produce the same model for the same input":

Z3 timing variation

Randomness in Z3 Results

different run time for the same code in Z3

Edit: Surprisingly enough, with the following code I seem to always get the same models, and Z3 now seems deterministic. However, the memory consumption is huge compared to my previous code, since I need to keep each context in memory for a while. Any idea what I could do to achieve the same behaviour with less memory use?

public static void someFunction(){
    Context context = new Context();
    Solver solver = context.mkSolver();
    solver.add(context.mk...()); // Add some bool expr to the solver
    Status status = solver.check();
    if (status == Status.SATISFIABLE) {
        System.out.println(solver.getModel()); // Seems to always print the same model :-)
    }
}

Here is the memory consumption I get from calling the method someFunction multiple times: (figure: memory consumption over repeated calls to someFunction)
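One thing I'm considering: if Context implements AutoCloseable in this version of the bindings (it does in recent ones), closing it explicitly should release the native memory as soon as the solver call is done, instead of waiting for the garbage collector. A sketch, following the same structure as above:

public static void someFunction(){
    // Assumes Context implements AutoCloseable in this Z3 version,
    // so native memory is freed when the block exits rather than
    // whenever the GC eventually runs.
    try (Context context = new Context()) {
        Solver solver = context.mkSolver();
        solver.add(context.mk...()); // Add some bool expr to the solver
        Status status = solver.check();
        if (status == Status.SATISFIABLE) {
            System.out.println(solver.getModel());
        }
    }
}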

Guillaume Voiron

2 Answers


As long as it doesn't toggle between SAT and UNSAT on the same problem, it's not a bug.

One of the answers you linked explains what's happening:

Randomness in Z3 Results

"That being said, if we solve the same problem twice in the same execution path, then Z3 can produce different models. Z3 assigns internal unique IDs to expressions. The internal IDs are used to break ties in some heuristics used by Z3. Note that the loop in your program is creating/deleting expressions. So, in each iteration, the expressions representing your constraints may have different internal IDs, and consequently the solver may produce different solutions."

Perhaps the parser assigns the same internal IDs each time, whereas the API may assign different ones, although I'd find that a bit hard to believe...

If you need this behavior and you're sure you were getting it with the SMT encoding, you could always print the expressions from the API and then parse them.
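Something along these lines, given your Context ctx and populated Solver solver (an untested sketch; note that parseSMTLIB2String returns a BoolExpr[] in recent versions of the bindings and a single BoolExpr in older ones, and I'm assuming Solver.toString() emits the declarations along with the assertions):

// Serialize the solver's assertions to SMT-LIB2 text, then re-parse
// them so the solver always sees the same textual input.
String smtlib = solver.toString();
BoolExpr[] parsed = ctx.parseSMTLIB2String(smtlib, null, null, null, null);

Solver fresh = ctx.mkSolver();
fresh.add(parsed);
if (fresh.check() == Status.SATISFIABLE) {
    System.out.println(fresh.getModel());
}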

Taylor T. Johnson
  • Thank you for this answer. The solver doesn't toggle from SAT to UNSAT; it always gives the right answer, so no problem on that side. I've already tried encoding with the API and then parsing, but the models still seem to change, so it doesn't seem to come from there. Could you please check my edited question? I found a workaround, but it's still not as "satisfying" as I'm sure it could be. – Guillaume Voiron Dec 17 '17 at 01:11
  • Actually, I tried my old versions using the parser and not the API, and I noticed that the models weren't always the same when I gave identical formulas twice to the solver during a single run of my program. However, when I run my program twice and give identical formulas, the solver always gives the same models... I tried running my program a hundred times and always got the same models. When I try with the API and not the parser, however, the models are very, very often different. I'm struggling to find an explanation for why the behaviours are so different here. – Guillaume Voiron Dec 18 '17 at 00:43

I think I spotted the specific parts of the code producing these strange opposite behaviours. Maybe the Z3 experts around can tell me if I'm completely wrong.

First of all, if I check the same code (no matter whether it's manually generated or generated with the API) twice in a single run of my program, I sometimes end up with different models. That is something I hadn't noticed before, and it actually isn't a real problem for me.

My main concern, however, is what happens if I run my program twice, checking the exact same code during the two runs.

When I generate the code manually, I end up with function definitions like this:

(declare-fun var!0 () Int)
(declare-fun var!2 () Int)
(declare-fun var!42 () Int)

(assert (and
    (or (= var!0 0) (= var!0 1))
    (or (= var!2 0) (= var!2 1))
    (or (= var!42 0) (= var!42 1))
))

(define-fun fun ((i! Int)) Int
    (ite (= i! 0) var!0
        (ite (= i! 1) var!2
            (ite (= i! 2) var!42 -1)
        )
    )            
)

As far as I can tell (and from what I've read about it (see here)), the API doesn't offer a way to build the define-fun style definition of "fun" directly. So what I did to define it with the API was something like this:

(declare-fun var!0 () Int)
(declare-fun var!2 () Int)
(declare-fun var!42 () Int)

(assert (and
    (or (= var!0 0) (= var!0 1))
    (or (= var!2 0) (= var!2 1))
    (or (= var!42 0) (= var!42 1))
))

(declare-fun fun (Int) Int)
(assert (forall
    ((i! Int))
    (ite (= i! 0) (= (fun i!) var!0)
        (ite (= i! 1) (= (fun i!) var!2)
            (ite (= i! 2) (= (fun i!) var!42) (= (fun i!) -1))
        )
    )
))
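In Java, that second encoding looks roughly like this (a sketch; the variable names are illustrative and the exact overloads may differ between versions of the bindings):

IntExpr var0  = context.mkIntConst("var!0");
IntExpr var2  = context.mkIntConst("var!2");
IntExpr var42 = context.mkIntConst("var!42");
IntExpr i     = context.mkIntConst("i!");

FuncDecl fun = context.mkFuncDecl("fun", context.getIntSort(), context.getIntSort());
Expr fi = fun.apply(i);

// Nested ite mirroring the SMT-LIB above: fun(0) = var!0, fun(1) = var!2,
// fun(2) = var!42, and fun(i) = -1 everywhere else.
BoolExpr body = (BoolExpr) context.mkITE(context.mkEq(i, context.mkInt(0)), context.mkEq(fi, var0),
        context.mkITE(context.mkEq(i, context.mkInt(1)), context.mkEq(fi, var2),
        context.mkITE(context.mkEq(i, context.mkInt(2)), context.mkEq(fi, var42),
                      context.mkEq(fi, context.mkInt(-1)))));

solver.add(context.mkForall(new Expr[]{i}, body, 1, null, null, null, null));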

With the first method, checking the same code across different runs always gives the same models (or at least often enough that it isn't a real problem for me).

With the second method, checking the same code across different runs very often gives different models.

Can anybody tell me whether there is indeed some logic behind what I've described, in terms of how Z3 actually works?

Since I need my results to be as reproducible as possible, I went back to manual code generation, and it seems to work perfectly fine. I would love to see a function in the API that allows defining functions directly, without having to go through the "forall" encoding, and then check whether what I just described holds.

Guillaume Voiron
  • There's also define-fun and the macro-finder tactic, which inlines function definitions provided as forall x . f(x) = ...; both have the effect that the SMT solver won't see that function symbol at all. – Christoph Wintersteiger Dec 18 '17 at 19:13
  • Thank you, so basically the solver will "understand" that the "forall" is a placeholder for the function definition of "fun"? I think I remember reading that the solver will then inline the body of this definition wherever the function "fun" is called, am I right? I thought the "macro-finder" tactic was applied by default, but I guess I was wrong... I'll try that and see what happens. – Guillaume Voiron Dec 18 '17 at 19:51
  • Yes, that's correct. The macro-finder is not applied by default for everything (only for some specific tactics and logics like UFBV). – Christoph Wintersteiger Dec 20 '17 at 16:39
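In code, the tactic-based setup described in these comments would look something like this (a sketch; it assumes the "macro-finder" and "smt" tactics are available in this Z3 build):

// Run macro-finder before the SMT core so that definitions of the form
// forall x . f(x) = ... are inlined and the solver never sees "fun".
Tactic withMacros = context.andThen(context.mkTactic("macro-finder"), context.mkTactic("smt"));
Solver solver = context.mkSolver(withMacros);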