
I am trying to solve formulas containing thousands of variables. The major portion of these formulas is linear, and z3 chews through them at what is, from my point of view, unbelievable speed. However, a few constraints introduce some non-linearity, and the computation time then grows from a few minutes to no solution at all after days of computation.

I thought it would be interesting to try bit-vectors just for those non-linear constraints, losing some precision along the way, which is not an issue for the problem I am trying to solve. So, more or less, I want to solve small SAT problems by switching to a bit representation of numbers only when needed. I have seen in another post that int2bv and bv2int are treated as uninterpreted, so it seems these functions cannot be used. However, I am still unsure why they are uninterpreted. Are there theoretical issues, or is it for performance reasons?

I have also seen that the latest stable version of z3 can handle floating points (here). However, it seems that going from FP to Reals introduces non-linearity, so using floating point just for the non-linear constraints and solving the rest with reals and integers also seems impossible. I haven't tried yet, but I am assuming that using floating points for all variables would not scale for the kind of problems I have. I came up with very naïve functions equivalent to int2bv and bv2int; obviously, they are very slow even for small examples of non-linearity. Here is the implementation in SMT2 for 8-bit vectors, working for positive integers.

; Interpret an 8-bit vector as an unsigned integer by summing the weighted bits.
(define-fun BitVecToInt ((x (_ BitVec 8))) Int
    (+
        (ite (= #b1 ((_ extract 0 0) x)) 1 0)
        (ite (= #b1 ((_ extract 1 1) x)) 2 0)
        (ite (= #b1 ((_ extract 2 2) x)) 4 0)
        (ite (= #b1 ((_ extract 3 3) x)) 8 0)
        (ite (= #b1 ((_ extract 4 4) x)) 16 0)
        (ite (= #b1 ((_ extract 5 5) x)) 32 0)
        (ite (= #b1 ((_ extract 6 6) x)) 64 0)
        (ite (= #b1 ((_ extract 7 7) x)) 128 0)
    )
)
; Convert a non-negative integer to its 8-bit vector representation,
; one bit at a time (the value is taken modulo 256).
(define-fun IntToBitVec ((x Int)) (_ BitVec 8)
    (bvor
        #b00000000
        (ite (> (rem x 2) 0) #b00000001 #b00000000)
        (ite (>= (rem x 4) 2) #b00000010 #b00000000)
        (ite (>= (rem x 8) 4) #b00000100 #b00000000)
        (ite (>= (rem x 16) 8) #b00001000 #b00000000)
        (ite (>= (rem x 32) 16) #b00010000 #b00000000)
        (ite (>= (rem x 64) 32) #b00100000 #b00000000)
        (ite (>= (rem x 128) 64) #b01000000 #b00000000)
        (ite (>= (rem x 256) 128) #b10000000 #b00000000)
    )
)
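
For completeness, here is a minimal sketch of how I intend to use these helpers; the variable names x and y and the squaring constraint are just an illustrative toy example. An Int variable y is linked to a bit-vector x, and the non-linear multiplication is done on the bit-vector side.

; Toy usage sketch: bridge an Int y to a bit-vector x (names are illustrative).
(declare-const y Int)
(declare-const x (_ BitVec 8))

; Keep y in the unsigned 8-bit range assumed by the helpers.
(assert (and (>= y 0) (< y 256)))

; Link the two representations (both directions shown, though one suffices).
(assert (= x (IntToBitVec y)))
(assert (= y (BitVecToInt x)))

; Do the non-linear part on the bit-vector side: x * x = 49 (mod 256).
(assert (= (bvmul x x) #x31))

(check-sat)
(get-model)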

Any thoughts on the best way to solve this problem? Or is there anything in z3 that I have missed that could make my life easier?


1 Answer


Your BitVecToInt function makes the implicit assumption that the number represented is an unsigned quantity. That might be just fine, of course; but if you are using the (_ BitVec 8) type as a 2's complement number, then you will have to account for that explicitly. So perhaps you need UBitVec8ToInt and SBitVec8ToInt variants, to make that distinction clear.
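
For illustration only, here is one way such a signed variant could be written, reusing your BitVecToInt from above and correcting for the sign bit; the name SBitVec8ToInt is just the one suggested in the previous paragraph.

(define-fun SBitVec8ToInt ((x (_ BitVec 8))) Int
    ; Sketch of a signed (2's complement) variant: reuse the unsigned
    ; conversion and subtract 256 whenever the sign bit is set.
    (ite (= #b1 ((_ extract 7 7) x))
         (- (BitVecToInt x) 256)
         (BitVecToInt x)))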

I think the reason int2bv and bv2int are left uninterpreted is precisely the performance implications: imagine converting an integer to a very large bit-vector, thousands of bits long. The resulting formulas would be huge, with a corresponding performance penalty. For smaller target sizes, such as the 8 bits you have here, the problem is quite tractable.

My (limited) experience with floats in Z3 is that the support is fairly good, provided you don't convert to/from Reals. As soon as you do, the problems become intractable. There are some comments from the Z3 developers on that very question here: https://github.com/Z3Prover/z3/issues/14

alias