Say I have a python dictionary:
d = {"a":1, "b":2}
This represents the number of occurrences of each character in a string, so the above dictionary could generate the string "abb", "bab", or "bba".
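To make "generate" concrete, here's a quick sketch of how the distinct arrangements could be enumerated from such a dictionary (just for illustration, not part of my actual code):

```python
from itertools import permutations

d = {"a": 1, "b": 2}

# Expand the counts into a multiset of characters: "abb"
chars = "".join(ch * n for ch, n in d.items())

# Deduplicate permutations, since repeated characters produce duplicates
arrangements = {"".join(p) for p in permutations(chars)}
# arrangements == {"abb", "bab", "bba"}
```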
The max similarity between two dictionaries is a ratio between 0 and 1 (inclusive) that describes how similar the two most similarly generated strings are.
For example,
d1 = {"a":1, "b":2}
d2 = {"c": 3}
d3 = {"a":1, "d":2}
max_sim(d1, d2) # equals 0.0 because no index
# of any arrangement of ccc matches the same index of any arrangement of abb
max_sim(d1, d3) # equals 0.333 because an arrangement of add matches
# one out of three characters of an arrangement of abb
# note that if we compared dda and abb, the similarity ratio would be 0.0
# but we always take into account the most similarly generated strings
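For reference, the behavior described above could be checked with a brute-force version that actually generates the strings and compares every pair (function names here are just illustrative; this is exactly the redundant computation I want to avoid):

```python
from itertools import permutations

def arrangements(d):
    """All distinct strings a count dictionary can generate."""
    chars = "".join(ch * n for ch, n in d.items())
    return {"".join(p) for p in permutations(chars)}

def max_sim_bruteforce(d1, d2):
    """Best position-wise match ratio over all pairs of arrangements."""
    n = sum(d1.values())  # both dicts describe strings of the same length
    best = 0.0
    for s1 in arrangements(d1):
        for s2 in arrangements(d2):
            matches = sum(a == b for a, b in zip(s1, s2))
            best = max(best, matches / n)
    return best
```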
How can I compute the max similarity of any two dictionaries (representing strings of the same length) simply by looking at the number of occurrences per character? I.e., analyze only the dictionaries rather than actually generating the strings and checking the similarity ratio of each pair.
Note: I'm using max_sim on dictionaries rather than strings because I've already looped through two strings to gather their dictionary data (in addition to something else). If I used max_sim on two strings (either the original strings or the dictionaries converted back to strings), I figure I'd just be doing redundant computation. So I'd appreciate it if the answer took two dictionaries as inputs.