
I would appreciate some help in finding literature specifically addressing K-map optimality.

I understand how, for example, you can map between SOP (sum-of-product) expressions and a K-map, and why in general you would expect the K-map optimized expression to be simpler, since finding a maximal grouping of 1's corresponds to finding some of the redundancies in a naive SOP expression.

I can vaguely see that the K-map method might not produce optimal solutions, because it seems like the only thing we are actually doing is taking advantage of the distributive and complement (A + A' = 1) laws of the Boolean algebra. But I don't really understand which algebraic operations we are not performing with the K-map that might allow us to reach a more optimal solution.

The upshot is that I do not know how to start a proof that would show that K-maps are not always optimal.

I was trying to read this, but that paper just cites that the problem of finding an optimal Boolean expression is NP-hard, and I think the author is implicitly saying that K-maps cannot be optimal, since the K-map procedure does not run in that kind of time.

Why are K-maps non-optimal, and not just in a "counterexample" kind of way? Actually why? Can you either prove it to me, or direct me to a proof?

  • No, but I see why you might think that. I am teaching myself machine structures out of one of my dad's old books and this question has been bothering me. – user1623303 Aug 24 '12 at 18:04
  • ok, cool. I have a success rate of about 80% at classifying as homework. Might need to tune my algorithms a bit. – smcg Aug 24 '12 at 18:07

2 Answers


What do you consider optimal? The K-map only gives you optimal equations in SOP or POS form. So it depends on what you want.

Algebraic operations not performed include, for example, the distributivity of ∧ over ∨. Applying that rule might give you a function with fewer terms.

The K-map will not give you equations using, e.g., xor, since the resulting equations use only or, and, and not. So, if I take the truth table derived from this function (with ^ being xor):

lambda a, b, c, d: a ^ b ^ c ^ d

the resulting truth table will have no rectangles, and the SOP form could be considered suboptimal:

lambda a, b, c, d: ((not a and b and c and d) or
                    (not a and not b and not c and d) or
                    (not a and not b and c and not d) or
                    (not a and b and not c and not d) or
                    (a and b and c and not d) or
                    (a and b and not c and d) or
                    (a and not b and c and d) or
                    (a and not b and not c and not d))

[Image: truth table and K-map]
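A quick sanity check that the 8-term SOP above really is the same function as the 4-input xor (my own verification sketch, not part of the original answer):

```python
from itertools import product

xor4 = lambda a, b, c, d: a ^ b ^ c ^ d

sop = lambda a, b, c, d: ((not a and b and c and d) or
                          (not a and not b and not c and d) or
                          (not a and not b and c and not d) or
                          (not a and b and not c and not d) or
                          (a and b and c and not d) or
                          (a and b and not c and d) or
                          (a and not b and c and d) or
                          (a and not b and not c and not d))

# the SOP agrees with xor on all 16 input combinations
assert all(bool(sop(a, b, c, d)) == bool(xor4(a, b, c, d))
           for a, b, c, d in product([0, 1], repeat=4))
```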

If you are only using or and and, and your input function is shorter than your output function, then your input must be using parentheses. In that case you will also be able to factor out some variables in the output of the K-map (using Boolean algebra), and you'll get an equation that is at least as short.
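A tiny hypothetical illustration of that factoring point: a parenthesized input, the SOP a K-map would produce for the same truth table, and the factored form that recovers the shorter expression.

```python
from itertools import product

# a 3-variable function written with parentheses (not plain SOP)
parenthesized = lambda a, b, c: (a or b) and c

# the SOP form a K-map would give for the same truth table
sop = lambda a, b, c: (a and c) or (b and c)

# factoring c back out of the SOP recovers the parenthesized form,
# so the K-map output is at least as short after factoring
assert all(bool(parenthesized(a, b, c)) == bool(sop(a, b, c))
           for a, b, c in product([0, 1], repeat=3))
```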

Generalization: minimizing a Boolean function is like finding the equation for any other series of numbers. There will always be multiple solutions, so which function is the simplest? I might say: "give me a function that returns 1 for 0, 2 for 1, 4 for 2 and 8 for 3". You could say "the function is pow(2, x)". And I could say "wrong! I was thinking of 1 << x". All inputs where the functions are not equal are outside the domain of the specification. They correspond to "don't cares" in the K-map.

When I say "my function is simpler because I am simply xor'ing all terms", you could say: "but if I add an extra variable and it doesn't follow your pattern, your function gets unwieldy and complex, while mine still simply follows the SOP or POS pattern".
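To make the xor point concrete: the n-input parity function has 2^(n-1) true minterms, and no two of them are adjacent on the K-map (any two inputs of equal parity differ in an even number of bits), so no grouping larger than a single cell is possible and a minimal SOP needs all 2^(n-1) terms. A quick check (my own sketch, not from the answer):

```python
from itertools import product

for n in range(2, 6):
    # inputs where the n-input XOR (parity) is 1
    odd = [bits for bits in product([0, 1], repeat=n)
           if sum(bits) % 2 == 1]
    assert len(odd) == 2 ** (n - 1)
    # no two true minterms differ in exactly one bit, so no two
    # K-map cells containing a 1 are adjacent and nothing can be grouped
    assert all(sum(u != v for u, v in zip(a, b)) != 1
               for a in odd for b in odd)
```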

Janus Troelsen

If it helps someone, I shall just describe the challenges that I faced in terms of optimality when I tried to implement the K-map method in Python. Here is the greedy algorithm I used for a 4-variable Boolean function in SOP form:

  1. First check all one-variable product terms (e.g., x or ¬x) and choose the ones that cover only a subset of the ones in the function.
  2. Next check all two-variable product terms (e.g., xy or ¬wz) and choose the ones that cover only ones in the function, including at least one one not yet covered.
  3. Next check all three-variable product terms (e.g., xyz or ¬wxy) in the same way.
  4. Finally, add the 4-variable minterms corresponding to any ones left uncovered.
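The steps above can be sketched roughly as follows. This is a hypothetical reconstruction, not the author's actual code: minterms are encoded as indices 0–15 (bit 3 = w, ..., bit 0 = z), and a product term as a (mask, value) pair fixing some variables and leaving the rest free.

```python
from itertools import combinations, product

N = 4  # variables w, x, y, z

def covered(mask, value):
    """Minterm indices matched by the product term that fixes the
    variables selected by `mask` to the bits in `value`."""
    return {m for m in range(2 ** N) if m & mask == value}

def greedy_sop(ones):
    """Greedy cover in the spirit of the steps above: try 1-literal
    terms first, then 2-, 3- and 4-literal terms, keeping any term
    that lies entirely inside `ones` and covers a still-uncovered one."""
    ones = set(ones)
    uncovered, chosen = set(ones), []
    for k in range(1, N + 1):                   # number of literals
        for bits in combinations(range(N), k):  # which variables are fixed
            mask = sum(1 << b for b in bits)
            for vals in product([0, 1], repeat=k):
                value = sum(v << b for v, b in zip(vals, bits))
                c = covered(mask, value)
                if c <= ones and c & uncovered:
                    chosen.append((mask, value))
                    uncovered -= c
        if not uncovered:
            break
    return chosen
```

Because the terms of each size are scanned in a fixed order, this kind of sketch returns a valid SOP cover but not necessarily a minimum one, which is exactly the local-minimum behaviour described next.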

Now, let's consider the function f(w,x,y,z)=∑(0,2,4,5,8,10,11,12,13,15) as shown below:

[Image: K-map for f]

Using the above steps, I first got stuck at the following local minimum, which was a minimal SOP representation but not a minimum one: f(w,x,y,z) = x¬y + ¬x¬z + w¬xy + wxz. It took 4 product terms to represent f, which is suboptimal. The critical mistake the algorithm made was choosing the two 3-variable terms w¬xy and wxz when a single one (wyz) would have sufficed: it checked the terms in a fixed order and picked the first one that overlapped the ones not yet covered. Once it chose w¬xy, it was forced to choose another term, since a one was still uncovered.

[Image: K-map showing the suboptimal grouping]

Now, if instead at that step the algorithm uses an order of the 3-variable terms that considers wyz before the other two, it will output the optimal SOP: f(w,x,y,z) = x¬y + ¬x¬z + wyz, as shown below.

[Image: K-map showing the optimal grouping]
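Both expressions can be checked against the minterm list (a quick verification sketch of my own):

```python
from itertools import product

ONES = {0, 2, 4, 5, 8, 10, 11, 12, 13, 15}  # f(w,x,y,z) = sum of these minterms

def f(w, x, y, z):
    return (w * 8 + x * 4 + y * 2 + z) in ONES

def suboptimal(w, x, y, z):
    # x¬y + ¬x¬z + w¬xy + wxz  (4 terms, the local minimum)
    return ((x and not y) or (not x and not z) or
            (w and not x and y) or (w and x and z))

def optimal(w, x, y, z):
    # x¬y + ¬x¬z + wyz  (3 terms)
    return (x and not y) or (not x and not z) or (w and y and z)

# both SOPs realize f; the optimal one just uses one term fewer
for bits in product([0, 1], repeat=4):
    assert bool(suboptimal(*bits)) == bool(optimal(*bits)) == f(*bits)
```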

Sandipan Dey
  • Assuming that a term with a smaller number of variables should be preferred can still lead you to a local minimum. Consider the function `f(w,x,y,z)=∑(3,4,5,7,9,13,14,15)`. `xz` is a viable term, but the optimal solution is `¬wyz + ¬wx¬y + w¬yz + wxy`. – Piotr Siupa Jul 30 '23 at 06:06