I have a value containing a special character that comes from an external REST/JSON datasource. I convert it with a pre-existing utility, CharDecoder.java, and at that point it is still correct, but after inserting it into a MySQL database (whose default charset is UTF-8) it turns from ć into ?.
The flow of my program is this:
External datasource sends JSON --> CharDecoder (inside a WAR deployed on Tomcat 7, handles special chars) populates a row --> inside a MySQL database.
The end result in the MySQL database is an invalid character.
Dev environment information:
I am using Java 1.7.
Maven 3.3.3; inside my pom.xml's <properties> tag:
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
Eclipse Oxygen on macOS - in the project's properties view (select the project and press ⌘I, also known as COMMAND+I), it states that the text file encoding is UTF-8.
When I convert the value using a utility class that is in the codebase, it works, but when updating the row in the MySQL database (the table's default charset is UTF-8) it becomes an invalid character.
So, I added this character to my chars array: "ć" (it's located on the same line as the entry that starts with "î").
import java.util.StringTokenizer;

public class CharDecoder {

    public final static String chars[] = {
        "ö", "ä", "ü", "Ö", "Ä", "Ü", "ß",
        "?", "\\", ",", ":", ";", "#", "+", "~", "!", "\"", "§", "$", "%",
        "&", "(", ")", "=", "<", ">", "{", "[", "]", "}", "/", "â", "ê",
        "î", "ô", "û", "Â", "Ê", "Î", "Ô", "Û", "á", "ć", "é", "í", "ó", "ú",
        "Á", "É", "Í", "Ó", "Ú", "à", "è", "ì", "ò", "ó", "ù", "Á", "É", "Í",
        "Ó", "Ú", "°", "³", "²", "€", "|", "^", "`", "´", "'", " ", "@",
        "~", "*"
    };

    public final static String charsHtml[] = {
        "ö", "ä", "ü", "Ö", "Ä", "Ü",
        "ß", "?", "\\", ",", ":", ";", "#", "+", "˜", "!", "\"",
        "§", "$", "%", "&", "(", ")", "=", "<", ">", "{",
        "[", "]", "}", "/", "â", "ê", "î", "ô",
        "û", "Â", "Ê", "Î", "Ô", "Û",
        "á", "é", "í", "ó", "ú",
        "Á", "É", "Í", "Ó", "Ú",
        "à", "è", "ì", "ò", "Ù",
        "À", "È", "Ì", "Ò", "Ù",
        "°", "³", "²", "€", "|", "ˆ", "`",
        "´", "'", " ", "@", "~", "*"
    };

    // Hex codes; entities[i] maps to chars[i] / charsHtml[i]
    public final static String entities[] = {
        "F6", "E4", "FC", "D6", "C4",
        "DC", "DF", "3F", "5C", "2C", "3A", "3B", "23", "2B", "7E", "21",
        "22", "A7", "24", "25", "26", "28", "29", "3D", "3C", "3E", "7B",
        "5B", "5D", "7D", "2F", "E2", "EA", "EE", "F4", "FB", "C2", "CA",
        "CE", "D4", "DB", "E1", "E9", "ED", "F3", "FA", "C1", "C9", "CD",
        "D3", "DA", "E0", "E8", "EC", "F2", "F9", "C1", "C9", "CD", "D3",
        "DA", "B0", "B3", "B2", "80", "7C", "5E", "60", "B4", "27", "20",
        "40", "98", "2A"
    };

    public static String inputToChar(String input) {
        return inputTo(input, chars);
    }

    public static String inputTo(String input, String[] tc) {
        StringBuilder sb = new StringBuilder();
        boolean entity = false;
        input = input.replace('+', ' ');
        String tokens = tc == charsHtml ? "%<>" : "%";
        for (StringTokenizer st = new StringTokenizer(input, tokens, true); st.hasMoreTokens(); ) {
            String token = st.nextToken();
            if (entity) {
                // The previous token was "%": replace the leading two-char hex code
                boolean replaced = false;
                for (int i = 0; i < entities.length; i++) {
                    if (token.startsWith(entities[i])) {
                        sb.append(tc[i]);
                        sb.append(token.substring(2));
                        replaced = true;
                        break;
                    }
                }
                if (!replaced) {
                    sb.append(token);
                }
                entity = false;
            } else if (token.equals("%")) {
                entity = true;
            } else if (token.equals("<")) {
                sb.append("<");
            } else if (token.equals(">")) {
                sb.append(">");
            } else {
                sb.append(token);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String person1 = CharDecoder.inputToChar("Lukić");
        System.out.println(person1);
    }
}
In order to make this question more straightforward, I removed the JDBC code (a simple JDBC update query) and just created a main() method. When I run this main() method, the output is:
Lukić
This is fine and what I want. However, when I update the row using Spring JDBC, the value stored in the MySQL database (the table's default charset is UTF-8) becomes:
Luki?
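To convince myself that the ? comes from a lossy charset conversion somewhere between the JVM and MySQL (and not from CharDecoder itself), I reproduced the symptom with this standalone snippet. My assumption is that some layer in between is encoding the string as Latin-1, where ć (U+0107) has no mapping:

import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    public static void main(String[] args) {
        String name = "Lukić";

        // U+0107 (ć) has no Latin-1 mapping, so the encoder substitutes '?'
        byte[] latin1Bytes = name.getBytes(StandardCharsets.ISO_8859_1);
        System.out.println(new String(latin1Bytes, StandardCharsets.ISO_8859_1)); // Luki?

        // UTF-8 round-trips the same string losslessly
        byte[] utf8Bytes = name.getBytes(StandardCharsets.UTF_8);
        System.out.println(new String(utf8Bytes, StandardCharsets.UTF_8)); // Lukić
    }
}

This produces exactly the corrupted value I see in the database, which is why I suspect the encoding of the connection rather than CharDecoder.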
This definitely happens on the database side - should I change the table's default charset to LATIN1?
Would I have to change the entire database's default charset to LATIN1? I'm only throwing ideas out there...
Is there a way to fix this without changing the default charset (I don't want to corrupt any existing data)?
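One idea I'd like to validate, since it wouldn't touch the table charset at all: telling MySQL Connector/J explicitly to use UTF-8 on the wire via the JDBC URL's useUnicode and characterEncoding parameters. As I understand it, if the URL doesn't specify an encoding, the driver may fall back to a default (often latin1), which would produce exactly this ? substitution. The host, port, and database name below are placeholders for my real connection settings:

public class JdbcUrlSketch {
    public static void main(String[] args) {
        // useUnicode + characterEncoding tell Connector/J to send and
        // receive text as UTF-8, regardless of the platform default encoding
        String url = "jdbc:mysql://localhost:3306/mydb"
                + "?useUnicode=true&characterEncoding=UTF-8";
        System.out.println(url);
        // This URL would then be passed to the Spring JDBC DataSource as usual.
    }
}

Would this be the right direction, or does the table/column charset still have to change as well?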