That's… probably not going to work. Escape sequences are handled during lexical analysis (parsing); what you have in your string is already a single backslash, and the doubled backslash you see is just its escaped representation:
>>> r'\u3d5f'
'\\u3d5f'
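You can confirm that it really is six characters, the first of which is a single backslash:
>>> len(r'\u3d5f')
6
>>> r'\u3d5f'[0]
'\\'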
What you need to do is encode the string back to bytes (as if it were Python source), then re-decode it while applying the unicode escapes.
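For illustration, I'll assume my_str looks roughly like this (an assumption on my part; adapt to your actual data):
>>> my_str = '\\ud83d\\ude01\n\\ud83d\\ude01'  # assumed value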
>>> my_str.encode('utf-8').decode('unicode_escape')
'\ud83d\ude01\n\ud83d\ude01'
However, note that these code points are surrogates, so your string is still pretty much broken/invalid; you won't be able to e.g. print it, because the UTF-8 encoder rejects lone surrogates:
>>> print(my_str.encode('utf-8').decode('unicode_escape'))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'utf-8' codec can't encode characters in position 0-1: surrogates not allowed
To fix that, you need a second fixup pass: encode to UTF-16 while letting the surrogates pass through untouched (using the "surrogatepass" error handler), then decode back from UTF-16, which pairs the surrogates up into actual well-formed characters:
>>> print(my_str.encode('utf-8').decode('unicode_escape').encode('utf-16', 'surrogatepass').decode('utf-16'))
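If you need this in more than one place, the whole round-trip can be wrapped in a small helper (just a sketch; unescape_surrogates is my own name for it):
>>> def unescape_surrogates(s):
...     # apply the \uXXXX escapes, which yields lone UTF-16 surrogates
...     fixed = s.encode('utf-8').decode('unicode_escape')
...     # round-trip through UTF-16 so the surrogates get paired back into real code points
...     return fixed.encode('utf-16', 'surrogatepass').decode('utf-16')
...
>>> print(unescape_surrogates(my_str))
😁
😁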
You may really want to track down where your data comes from, though: it's not normal to end up with a (unicode) string that still contains unicode escapes, and it might point to e.g. JSON data being loaded incorrectly. If fixing that is an option (I realise it's not always the case), it would be much better than applying hacky fixups after the fact.