If you don't want parse to decode that string then you could escape the backslashes, e.g. "\\ud83d\\udc4d"
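For example (a minimal sketch, runnable in Node or a browser console), doubling the backslashes keeps the escape sequence as plain text instead of letting JSON.parse decode it:

```javascript
// "\ud83d\udc4d" in JSON text decodes to the thumbs-up emoji;
// doubling the backslashes keeps it as a plain 12-character string.
const decoded = JSON.parse('"\\ud83d\\udc4d"');     // the emoji character
const escaped = JSON.parse('"\\\\ud83d\\\\udc4d"'); // the text \ud83d\udc4d

console.log(decoded.length); // 2 (a surrogate pair)
console.log(escaped.length); // 12 (plain text, backslashes intact)
```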
Do you control where that data comes from? Perhaps you want to provide a "replacer" in JSON.stringify to escape those, or a "reviver" in JSON.parse.
What options do you have for exercising control over the stringify or parse?
Apply a reviver
const myReviver = (key, val) => key === "reaction" ? val.replace(/\\/g, "\\\\") : val;
const safeObj = JSON.parse(myJson, myReviver);
CAUTION: This doesn't work: the \uXXXX escape is decoded while the string is being parsed, before the reviver is able to operate on the value, and therefore there are no backslashes left to escape!
Multiple escaping
Following on from a chat with the OP, it transpired that adding multiple escaped backslashes to the property containing the Unicode escapes did eventually lead to the desired value being stored in the database. A number of processing steps were each unescaping one level of backslashes until the real Unicode character was eventually exposed.
This is brittle and far from advisable, but it did help to identify what was/wasn't to blame.
NO backslashes
This appears to be the best solution. Strip all backslashes from the data before it is converted into Unicode characters or processed in any way, essentially storing deactivated "uxxxxuxxxx" codes in the database.
Those codes can be revived into Unicode characters at the point of rendering by reinserting the backslashes with a regular expression:
const myUtfString = database_field.replace(/(u[0-9a-fA-F]{4})/g, "\\$1");
Ironically, that replacement skips Unicode interpretation entirely: the result is the literal "\uXXXX" string that was wanted in the first place. So to force it on into the character that was previously seen, it can be processed with:
const emoji = JSON.parse(`{"utf": "${myUtfString}"}`).utf;