So, you are in a state where you have identified all prime implicants with the Quine-McCluskey method. Then you have created the prime implicant table and extracted the essential prime implicants. You checked row and column dominance and eliminated redundant rows and columns. But then you came to an end and are left with a cyclic core, a rest chart in which every remaining minterm is covered by more than one prime implicant and no further dominance reduction is possible:
     | Ac | Bc | Ab | bC | aB | aC
   3 |    |    |    | X  |    | X
   5 |    | X  |    |    | X  |
   7 |    |    |    |    | X  | X
   9 | X  |    | X  |    |    |
  11 |    |    | X  | X  |    |
  13 | X  | X  |    |    |    |
And now you want to solve the resulting set cover problem to find all minimum sets of the remaining prime implicants. For this you can use Petrick's method. It is an exact method and will give you all minimum solutions.
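Petrick's method turns the chart into a product of sums: each row contributes one sum consisting of the prime implicants that cover that minterm (a lowercase letter denotes a negated variable). For the cyclic core above this gives, reading the rows top to bottom:

P = (bC + aC)(Bc + aB)(aB + aC)(Ac + Ab)(Ab + bC)(Ac + Bc)

Multiplying this out into a sum of products and simplifying with the idempotence law (X·X = X, X + X = X), every resulting product term is one possible cover, and the terms of minimum size are the minimum covers. For this chart there are two of them, Ab, Bc, aC and Ac, aB, bC, each using three prime implicants.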
The implementation of the multiplying-out step is rather simple. The core is about 10 lines of code:
#include <set>
#include <vector>

// MinTermNumber, MaxTermNumber and BooleanVariable are types defined elsewhere in the project
using MintermSet = std::set<MinTermNumber>;
using Minterm = std::set<BooleanVariable>;
using MintermVector = std::vector<MinTermNumber>;
using MaxtermSet = std::set<MaxTermNumber>;
using ConjunctiveNormalForm = std::set<MaxtermSet>;
// A product term is a set of (possibly negated) Boolean variables, ANDed together
using ProductTerm = std::set<BooleanVariable>;
using ProductTermVector = std::vector<ProductTerm>;
// Disjunctive Normal Form: a sum (set) of product terms
using DNF = std::set<ProductTerm>;
// Conjunctive Normal Form: a product (vector) of sums, each given as a DNF
using CNF = std::vector<DNF>;
class PetricksMethod
{
public:
    // Function call operator: apply Petrick's method to the given CNF
    ProductTermVector operator()(const CNF& cnf);
};
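To make the nesting of these aliases concrete: an expression like (1+2)(3+4), as used in the comments below, would be encoded along the following lines. This is only an illustrative sketch that assumes BooleanVariable is a char-like type; the real definition lives elsewhere in the project.

// Hypothetical encoding of the product of sums (A + B)(C + AB)
const CNF example
{
    DNF{ ProductTerm{ 'A' }, ProductTerm{ 'B' } },     // the sum (A + B)
    DNF{ ProductTerm{ 'C' }, ProductTerm{ 'A', 'B' } } // the sum (C + AB)
};

And this is the implementation: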
#include "petrick.hpp"
#include <algorithm>
// Functor operator for applying Petricks methhod
ProductTermVector PetricksMethod::operator ()(const CNF& cnf)
{
    // We select an iterative approach. We start with the first element of the CNF (which is a DNF)
    // and store the result of each iterative operation in this element again
    DNF resultingDNF{ cnf[0] };
    // We will always start with element 1 (not element 0) because element 0 holds the initial value
    // or, respectively, the intermediate result
    for (CNF::size_type dnfInCnfIndex = 1; dnfInCnfIndex < cnf.size(); ++dnfInCnfIndex)
    {
        // Result of multiplying out the intermediate (initial) value with the current DNF of the CNF
        DNF intermediateCalculatedDNF;
        // Now go through all elements of the intermediate (initial) DNF
        // For (1+2)(3+4) this would be the (1+2) part
        for (const ProductTerm& productTermLeftSide : resultingDNF)
        {
            // Next we will iterate over all product terms in the next DNF
            // For (1+2)(3+4) this would be the (3+4) part
            for (const ProductTerm& productTermRightSide : cnf[dnfInCnfIndex])
            {
                ProductTerm productTerm{ productTermLeftSide }; // Resulting product term is now 1
                // Add all elements from the right side
                productTerm.insert(productTermRightSide.begin(), productTermRightSide.end()); // Resulting product term is now 1,3
                intermediateCalculatedDNF.insert(std::move(productTerm)); // Store this one
                // And continue to add more product terms. The std::set will enforce the idempotence law and prevent memory waste
            }
        }
        // And now take all found terms as the new intermediate result and continue with the next element of the right-hand side
        // Please note: also here the set will prevent duplicate terms
        resultingDNF = std::move(intermediateCalculatedDNF);
    }
    // Now we have the result (with roughly 10 lines of code). It contains all product terms of the multiplied-out expression in DNF
    // But for our purpose we are only interested in the terms with minimum size
    // So, let's find the minimum size (there can be more than one term having it; this assumes a non-empty result)
    const std::size_t minLength{ std::min_element(resultingDNF.begin(), resultingDNF.end(),
        [](const ProductTerm& left, const ProductTerm& right) noexcept { return left.size() < right.size(); })->size() };
    // And from the big DNF with all product terms, we copy all elements having the minimum size to the result
    // These are our best coverage sets
    ProductTermVector cheapestVector;
    // Copy the result and return it to the caller
    std::copy_if(resultingDNF.begin(), resultingDNF.end(), std::back_inserter(cheapestVector),
        [minLength](const ProductTerm& pt) noexcept { return pt.size() == minLength; });
    return cheapestVector;
}
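To tie this back to the chart at the top, here is a minimal, hypothetical usage sketch. It assumes that BooleanVariable is a char-like type and encodes each prime implicant column as a single letter (P=Ac, Q=Bc, R=Ab, S=bC, T=aB, U=aC); these names are mine and not part of the project.

#include "petrick.hpp"
#include <iostream>

int main()
{
    // One DNF per chart row, listing the prime implicants that cover the row's minterm
    const CNF cnf
    {
        DNF{ ProductTerm{ 'S' }, ProductTerm{ 'U' } }, // minterm  3: bC + aC
        DNF{ ProductTerm{ 'Q' }, ProductTerm{ 'T' } }, // minterm  5: Bc + aB
        DNF{ ProductTerm{ 'T' }, ProductTerm{ 'U' } }, // minterm  7: aB + aC
        DNF{ ProductTerm{ 'P' }, ProductTerm{ 'R' } }, // minterm  9: Ac + Ab
        DNF{ ProductTerm{ 'R' }, ProductTerm{ 'S' } }, // minterm 11: Ab + bC
        DNF{ ProductTerm{ 'P' }, ProductTerm{ 'Q' } }  // minterm 13: Ac + Bc
    };
    // Each element of the result is one minimum cover
    for (const ProductTerm& cover : PetricksMethod{}(cnf))
    {
        for (const BooleanVariable v : cover) { std::cout << v << ' '; }
        std::cout << '\n';
    }
    return 0;
}

With the chart above this should print the two minimum covers P S T and Q R U, i.e. Ac, aB, bC and Ab, Bc, aC.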
All this is part of a software that you can find on GitHub. There you can also see a fully implemented Quine-McCluskey algorithm.
Hope this helps . . .