String is attempting to be a sequence of abstract characters; from the point of view of its users, it does not have any encoding. Of course, it must have an internal encoding, but that's an implementation detail.
It makes no sense to encode a String as UTF-8 and then decode the result back as UTF-8. That is a no-op, in that:
(new String(str.getBytes("UTF-8"), "UTF-8")).equals(str) == true;
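As a self-contained sketch of that equality (the sample string is arbitrary):

    import java.nio.charset.StandardCharsets;

    public class RoundTrip {
        public static void main(String[] args) {
            String str = "hello שלום";                                // ordinary text, no unpaired surrogates
            byte[] bytes = str.getBytes(StandardCharsets.UTF_8);     // encode to UTF-8
            String back = new String(bytes, StandardCharsets.UTF_8); // decode the same bytes back
            System.out.println(back.equals(str));                    // prints true: the round trip changed nothing
        }
    }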
But there are cases where the String abstraction falls apart and the above becomes a "lossy" conversion. Because of those internal implementation details, a String can contain unpaired UTF-16 surrogates, which cannot be represented in UTF-8 (or in any encoding for that matter, including the internal UTF-16 encoding*). They are lost in the encoding step (String.getBytes substitutes the charset's default replacement, typically '?'), so when you decode back you do not get the original string: the invalid unpaired surrogates are gone.
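A small sketch of that failure mode (the \uD800 below is just an arbitrary unpaired high surrogate):

    import java.nio.charset.StandardCharsets;

    public class LoneSurrogate {
        public static void main(String[] args) {
            String str = "a\uD800b";                                  // contains an unpaired high surrogate
            byte[] bytes = str.getBytes(StandardCharsets.UTF_8);     // the lone surrogate cannot be encoded,
                                                                     // so it is replaced during encoding
            String back = new String(bytes, StandardCharsets.UTF_8);
            System.out.println(back.equals(str));                    // prints false: this round trip is lossy
        }
    }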
The only thing I can take from your question is that you have a String that resulted from interpreting binary data as Windows-1255, where it should have been interpreted as UTF-8.
To fix this, you would have to go back to the source of that data and use UTF-8 decoding explicitly.
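For example, if you control the point where the bytes are read, decode them yourself; the file name and class below are purely illustrative:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class ReadAsUtf8 {
        public static void main(String[] args) throws IOException {
            byte[] raw = Files.readAllBytes(Paths.get("message.txt")); // illustrative source of the raw bytes
            String text = new String(raw, StandardCharsets.UTF_8);     // decode explicitly as UTF-8, before anything
                                                                       // gets a chance to guess Windows-1255
            System.out.println(text);
        }
    }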
If, however, you only have the String that resulted from the misinterpretation, you can't really do much: many byte sequences have no representation in Windows-1255 at all and would never have made it into the string.
If that weren't the case, you could fully restore the originally intended message with:
new String(str.getBytes("Windows-1255"), "UTF-8");
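As a sketch, this is what that repair looks like in the lucky case; the sample text is deliberately chosen so that every one of its UTF-8 bytes happens to have a Windows-1255 mapping, which real Hebrew text usually does not:

    public class RepairMojibake {
        public static void main(String[] args) throws java.io.UnsupportedEncodingException {
            String original = "café";                                                 // assumption: all of its UTF-8 bytes survive Windows-1255
            String garbled = new String(original.getBytes("UTF-8"), "Windows-1255");  // simulate the upstream misinterpretation
            String repaired = new String(garbled.getBytes("Windows-1255"), "UTF-8");  // recover the bytes, decode them correctly
            System.out.println(repaired.equals(original));                            // true only because no byte was lost
        }
    }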
* It is actually wrong of Java to allow unpaired surrogates to exist in its Strings in the first place, since that is not valid UTF-16.