
I'm trying to encrypt some content that I have in a JSON file.

I have this content localized in several languages such as Spanish, German, Japanese or Chinese (traditional and simplified), and others.

The content encrypts fine, but it cannot be decrypted correctly: some characters come back wrong. I have confirmed that the problematic characters are the Japanese and Chinese ones, and I have the same problem with some German and Russian characters. It crashes when I try to parse the decrypted content (which should be plain text):

JSON.parse(decrypted_plain_text)

Then, I get the error.

Does this algorithm support characters such as Japanese or Chinese ones? I've tried changing the encoding from UTF-8 to UTF-8 without BOM, but that doesn't work either.

The content-encryption algorithm is A256GCM and the CEK is wrapped with A128KW.
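The symptom described above can be reproduced without any encryption at all: if JSON text is turned into UTF-8 bytes but those bytes are later decoded with a different encoding, multibyte characters are mangled while plain ASCII survives. A minimal sketch in Node.js (the JWE library actually in use isn't shown in the question, so this only illustrates the encoding mismatch, using `latin1` as a stand-in for "the wrong encoding"):

```javascript
// JSON containing multibyte (Japanese) characters, encoded as UTF-8 bytes.
const utf8Bytes = Buffer.from(JSON.stringify({ ja: 'こんにちは' }), 'utf8');

// Decoding the bytes with the same encoding round-trips cleanly.
const correct = utf8Bytes.toString('utf8');
console.log(JSON.parse(correct).ja); // こんにちは

// Decoding the same bytes with the wrong encoding garbles every
// non-ASCII character, while the ASCII JSON structure looks intact.
const wrong = utf8Bytes.toString('latin1');
console.log(wrong === correct); // false
```

The encryption step only ever sees bytes, so a mismatch like this on either side of it produces exactly the pattern in the question: ASCII-only content survives, everything else breaks.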

asked by yisus
    The usual encryption and decryption algorithms deal with *bytes*, not characters. There are no Japanese or Chinese or Sindarin bytes. All bytes have equal standing; there are exactly 256 of them, from 0 to 255. If you wrote a program and it crashes, show the program and tag your question with a programming-language tag. – n. m. could be an AI Dec 12 '17 at 12:46
    The problem is probably not in the encryption/decryption, which is byte-to-byte as @n.m. says. Carefully check your conversion from text to bytes at the encryption end, and the conversion from bytes back to text at the decryption end. Make sure that you explicitly specify UTF-8 (or whatever encoding you use) both times. – rossum Dec 12 '17 at 13:35
  • I have found the solution here: https://stackoverflow.com/questions/5396560/how-do-i-convert-special-utf-8-chars-to-their-iso-8859-1-equivalent-using-javasc Thanks! – yisus Dec 12 '17 at 14:12

0 Answers