You could compute the Levenshtein distance between a string and a regular expression, but it wouldn't make much sense: Levenshtein distance measures the similarity of two raw strings.
What you want is probably to measure the number of operations that the string would need to undergo in order to match the pattern.
Here is a solution, based on graph theory.
First, we'll need to build the automaton that represents the language defined by the regular expression.
Thompson's algorithm will help you here.
Once we've constructed this automaton, we can try our best to make the string match, modifying it whenever the match goes off track.
The number of modifications is then the distance from the string to the language described by the regex.
Here is an example, for a basic regex: `.*a.*`.
The corresponding automaton has two states: 1, the initial state, which loops on any character and moves to state 2 on an `a`; and 2, the final state, which also loops on any character.
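Written as a transition table, it could look like this (the state numbers and the `None`-means-"any character" convention are choices of this sketch, not part of Thompson's algorithm itself):

```python
# Hand-built NFA for the regex .*a.*
START, FINAL = 1, 2
ACCEPTING = {FINAL}

# (state, symbol) -> set of next states; None stands for "any character".
TRANSITIONS = {
    (START, None): {START},  # leading .* : loop on state 1
    (START, "a"): {FINAL},   # the mandatory 'a'
    (FINAL, None): {FINAL},  # trailing .* : loop on state 2
}
```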

Let's check the distance of the string `bbb`:

- The first character is `b`, so we stay in state 1.
- `b` again, so we stay in state 1.
- Still `b`, we stay in state 1.
- The string is exhausted, and we're still not in the final state 2. So let's add an `a`: now we can go to state 2.
- We're in state 2, and no character is left in the string, so we're done.
We made one modification to the original string, so the distance from `bbb` to the regex `.*a.*` is 1.
In fact, the distance of any word to that language is either 0 or 1, because matching is equivalent to the word containing an `a`, which is a binary property.
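The walkthrough can be mechanized as a shortest-path search: treat each pair (automaton state, position in the string) as a node, let matching a character cost 0 and let each edit (insert, delete, substitute) cost 1, then run Dijkstra. This is only a sketch under my own conventions: `distance_to_regex` is a made-up name, the NFA is written by hand rather than produced by Thompson's algorithm, and `None` again means "any character".

```python
from heapq import heappush, heappop

def distance_to_regex(s, transitions, start, accepting):
    """Minimum number of single-character edits needed to turn s
    into a word of the automaton's language.

    transitions maps (state, symbol) -> set of next states,
    where symbol None matches any character."""
    best = {}
    heap = [(0, start, 0)]  # (edits so far, automaton state, index in s)
    while heap:
        cost, state, i = heappop(heap)
        if (state, i) in best:
            continue  # already settled with a smaller or equal cost
        best[(state, i)] = cost
        if i == len(s) and state in accepting:
            return cost  # whole string consumed in a final state
        # Outgoing edges of the current automaton state.
        edges = [(sym, t)
                 for (q, sym), targets in transitions.items() if q == state
                 for t in targets]
        if i < len(s):
            for sym, t in edges:
                if sym is None or sym == s[i]:
                    heappush(heap, (cost, t, i + 1))      # match s[i], free
                else:
                    heappush(heap, (cost + 1, t, i + 1))  # substitute s[i]
            heappush(heap, (cost + 1, state, i + 1))      # delete s[i]
        for sym, t in edges:
            heappush(heap, (cost + 1, t, i))              # insert a character
    return None  # the language is empty

# NFA for .*a.* as in the example above.
nfa = {(1, None): {1}, (1, "a"): {2}, (2, None): {2}}
```

With this automaton, `distance_to_regex("bbb", nfa, 1, {2})` reproduces the walkthrough: one inserted `a`, so the distance is 1, while a word that already contains an `a` comes out at 0.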
This is only an idea, and it might have flaws, but I think that you can do something with that.