"The reason for this is that the Java compiler parses the Unicode character \u000d as a new line".
If true, then that's precisely where the error occurs.
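For reference, the behavior in question is easy to reproduce (the class and method names below are mine; the whole trick is the escape sequence inside the comment, which the compiler translates to a carriage return before lexing even begins):

```java
public class UnicodeEscapeDemo {
    static String demo() {
        String s = "comment only";
        // The escape on the next line (backslash, u, 0, 0, 0, d) is
        // translated to a carriage return *before* lexing, so everything
        // after it on that source line is live code, not comment text.
        // \u000d s = "executed";
        return s;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "executed", not "comment only"
    }
}
```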
Java compilers should arguably refuse to compile this source, because (as Java source code) it is ill-formed: it was either bad to begin with, tampered with en route, or mutated by something in the toolchain that does not understand the transformation rules. They should not blindly transform it.
If the editor in question is an ASCII-only tool, then said editor is doing the right thing--treating the Unicode escape sequence as a meaningless string of characters in (an ill-formed) comment.
If the editor in question is a Unicode-aware tool, then it is also doing the right thing--leaving the Unicode escape sequence "as is", and treating it as a meaningless string of characters in (an ill-formed) comment.
Lossless, reversible conversion requires a transformation that maps one-to-one between the escaped and unescaped forms--thus the set of sequences the transformation produces and the set it leaves untouched must not intersect. Here the two sets in question can overlap even if no characters are modified by a correctly implemented escape-ify-ing transformation, because escaped-Unicode in the range (0000-007F) might already be present in the input stream.
If the goal is lossless, reversible conversion between Unicode and ASCII, the requirement for transforming to/from ASCII is to escape-ify/re-encode any Unicode characters greater than hex 007F, and leave the rest alone.
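A minimal sketch of that rule (the method name toAsciiEscapes is mine, not from any standard tool; utilities like native2ascii do something similar and also handle the reverse direction):

```java
public class AsciiEscaper {
    // Escape every character above U+007F as a backslash-u sequence
    // with four hex digits; leave ASCII characters alone.
    // Note: as discussed above, this is not reversible if the input
    // already contains a literal backslash-u sequence.
    static String toAsciiEscapes(String s) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c > 0x7F) {
                out.append(String.format("\\u%04x", (int) c));
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // U+00E9 (e-acute) becomes the six ASCII characters backslash-u-0-0-e-9.
        System.out.println(toAsciiEscapes("caf\u00e9"));
    }
}
```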
Having done that, a language that is Unicode-aware will treat escaped-Unicode characters as an error anywhere other than inside a comment or a string. They must not be converted within comments, but they must be converted within strings; therefore conversion must not happen until after lexical analysis has turned the source into tokens (i.e. lexemes), allowing conversions to be done in a type-safe manner.
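That ordering can be sketched as a post-lex pass over tokens (the Token type and unescape helper here are illustrative, not real compiler internals; the unescaper is simplified and ignores Java's actual rules about repeated u's and preceding backslashes):

```java
public class TokenDecoder {
    enum Kind { COMMENT, STRING, OTHER }

    static final class Token {
        final Kind kind;
        final String text;
        Token(Kind kind, String text) { this.kind = kind; this.text = text; }
    }

    // Decode escapes only in string-literal tokens; comment tokens
    // pass through untouched. Because this runs on tokens, the decision
    // is type-safe: the token kind tells us whether to convert.
    static Token decodeEscapes(Token t) {
        return t.kind == Kind.STRING ? new Token(Kind.STRING, unescape(t.text)) : t;
    }

    // Simplified unescaper: one backslash, one 'u', four hex digits.
    static String unescape(String s) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < s.length()) {
            if (s.charAt(i) == '\\' && i + 5 < s.length() && s.charAt(i + 1) == 'u') {
                out.append((char) Integer.parseInt(s.substring(i + 2, i + 6), 16));
                i += 6;
            } else {
                out.append(s.charAt(i++));
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        Token comment = new Token(Kind.COMMENT, "// ends with: \\u000d");
        Token string  = new Token(Kind.STRING, "caf\\u00e9");
        System.out.println(decodeEscapes(comment).text); // unchanged
        System.out.println(decodeEscapes(string).text);  // decoded
    }
}
```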