Per Dunes' suggestion, I dropped the int concept entirely. As he pointed out, any vanilla object can implicitly serve as a unique key! In fact, MyId could be defined as simply: class MyId: pass. Often, that alone would be a perfectly usable, implicitly unique key!
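To make that concrete, here is a minimal sketch of the vanilla-object-as-key idea (the registry dict is just an illustrative stand-in for a real collection):

```python
# A bare class: instances hash by identity, so each one is an implicitly unique key.
class MyId:
    pass

registry = {}              # illustrative stand-in for a real collection
key = MyId()
registry[key] = "payload"

assert registry[key] == "payload"   # the same instance finds its entry
assert MyId() not in registry       # a fresh instance never collides
```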
For my use case, however, I need to pass these keys back and forth between subprocesses (via multiprocessing queues). I ran into trouble with that ultra-lightweight approach: the hash value would change when the objects were pickled and pushed across processes. A minor secondary concern was that I wanted these objects to be easy to log and manually read / match up through logs. As such, I went with this:
class _MyIdPrivate:
    pass

class MyId:
    def __init__(self):
        self.__priv = _MyIdPrivate()
        self.__i = hash(self.__priv)   # captured once, at creation time
    def __str__(self):
        return str(self.__i)
    def __hash__(self):
        return self.__i
    def __eq__(self, other):
        try:
            return self.__i == other.__i
        except AttributeError:
            return False
class MyCollection:
    def __init__(self):
        self.__objs = {}
    def uniqueId(self):
        return MyId()
    def push(self, i, obj):
        self.__objs[i] = obj
    def pop(self, i):
        return self.__objs.pop(i, None)
c = MyCollection()
uId = c.uniqueId()
print("uId", uId)
print("isinstance", isinstance(uId, MyId))
c.push(uId, "A")
print(c.pop(MyId()))   # None: a different key never matches
print(c.pop(uId))      # "A"
As you can see, I wrapped the short-and-sweet approach in a more comprehensive (if verbose) one. When I create a MyId object, I create a _MyIdPrivate member and capture its hash at the moment of creation. When the object is pickled and pushed across subprocesses, the _MyIdPrivate hash will change, but it doesn't matter: I captured the initial value, and everything pivots off of that.
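A quick sanity check of that claim (re-declaring the classes so the snippet stands alone; the pickle round trip below is the same serialization a multiprocessing queue performs on CPython):

```python
import pickle

class _MyIdPrivate:
    pass

class MyId:
    def __init__(self):
        self.__priv = _MyIdPrivate()
        self.__i = hash(self.__priv)   # captured once, at creation time
    def __hash__(self):
        return self.__i
    def __eq__(self, other):
        try:
            return self.__i == other.__i
        except AttributeError:
            return False

# A bare object's identity-based hash does NOT survive a pickle round trip...
bare = _MyIdPrivate()
assert hash(pickle.loads(pickle.dumps(bare))) != hash(bare)

# ...but MyId's captured hash does, so equality and dict lookups keep working.
original = MyId()
clone = pickle.loads(pickle.dumps(original))
assert hash(clone) == hash(original)
assert clone == original
```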
The main benefit of this approach over the original int plan is that I get a unique key without "calculating" or assigning it directly.
As Dunes suggested, I could also have used a uuid. I can see pros and cons to that versus this approach...
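For comparison, a uuid-based variant might look like this (a hedged sketch; UuidId is a hypothetical name, not code from the original):

```python
import uuid

class UuidId:
    """Sketch of the alternative: key off a random uuid4 instead of a captured hash."""
    def __init__(self):
        self.__u = uuid.uuid4()     # random 128-bit id; pickles verbatim
    def __str__(self):
        return str(self.__u)
    def __hash__(self):
        return hash(self.__u)
    def __eq__(self, other):
        try:
            return self.__u == other.__u
        except AttributeError:
            return False

a, b = UuidId(), UuidId()
assert a != b                  # distinct keys
assert len(str(a)) == 36       # readable in logs: 8-4-4-4-12 hex form
```

One trade-off: uuid4 values stay unique across processes and machines, while the captured object hash is only as unique as object identity within the creating process.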