
I've been trying to use the Java SAX parser to parse an XML file in the ISO-8859-1 character encoding. This otherwise goes well, but special characters such as ä and ö are giving me a headache. In short, the ContentHandler.characters(...) method gives me garbled characters, and you cannot even construct a String from a char array with a specified encoding (that constructor exists only for byte arrays).

Here's a complete minimum working example in two files:

latin1.xml:

<?xml version='1.0' encoding='ISO-8859-1' standalone='no' ?>
<x>Motörhead</x>

That file is saved in the declared Latin-1 encoding, so hexdump gives this:

$ hexdump -C latin1.xml 
00000000  3c 3f 78 6d 6c 20 76 65  72 73 69 6f 6e 3d 27 31  |<?xml version='1|
00000010  2e 30 27 20 65 6e 63 6f  64 69 6e 67 3d 27 49 53  |.0' encoding='IS|
00000020  4f 2d 38 38 35 39 2d 31  27 20 73 74 61 6e 64 61  |O-8859-1' standa|
00000030  6c 6f 6e 65 3d 27 6e 6f  27 20 3f 3e 0a 3c 78 3e  |lone='no' ?>.<x>|
00000040  4d 6f 74 f6 72 68 65 61  64 3c 2f 78 3e           |Mot.rhead</x>|

So the "ö" is encoded with a single byte, f6, as you'd expect.

Then, here's the Java file, saved in the UTF-8 format:

MySAXHandler.java:

import java.io.File;
import java.io.FileReader;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.InputSource;
import org.xml.sax.XMLReader;
import org.xml.sax.helpers.DefaultHandler;

public class MySAXHandler extends DefaultHandler {

    private static final String FILE = "latin1.xml"; // Edit this to point to the correct file

    @Override
    public void characters(char[] ch, int start, int length) {
        char[] dstCharArray = new char[length];
        System.arraycopy(ch, start, dstCharArray, 0, length);
        String strValue = new String(dstCharArray);
        System.out.println("Read: '" + strValue + "'");
        assert "Motörhead".equals(strValue);
    }

    private XMLReader getXMLReader() {
        try {
            SAXParser saxParser = SAXParserFactory.newInstance().newSAXParser();
            XMLReader xmlReader = saxParser.getXMLReader();
            xmlReader.setContentHandler(new MySAXHandler());
            return xmlReader;
        } catch (Exception ex) {
            throw new RuntimeException("Epic fail.", ex);
        }
    }

    public void go() {
        try {
            XMLReader reader = getXMLReader();
            reader.parse(new InputSource(new FileReader(new File(FILE))));
        } catch (Exception ex) {
            throw new RuntimeException("The most epic fail.", ex);
        }
    }

    public static void main(String[] args) {
        MySAXHandler tester = new MySAXHandler();
        tester.go();
    }
}

The result of running this program is that it outputs Read: 'Mot�rhead' (ö replaced with a "? in a box") and then crashes due to an assertion error. If you look into the char array, you'll see that the char that encodes the letter ö consists of three bytes. They don't make any sense to me, as in UTF-8 an ö should be encoded with two bytes.
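A plausible explanation for the three bytes (a reconstruction on my part, not code from the original program): if the platform default charset is UTF-8, the lone Latin-1 byte f6 is an invalid UTF-8 sequence, so the decoder replaces it with the replacement character U+FFFD, and U+FFFD itself encodes to the three UTF-8 bytes EF BF BD. The sketch below demonstrates this round trip:

import java.nio.charset.StandardCharsets;

public class ReplacementCharDemo {
    public static void main(String[] args) {
        // 0xF6 is 'ö' in ISO-8859-1, but an invalid byte sequence in UTF-8.
        byte[] latin1Bytes = { (byte) 0xF6 };

        // Decoding it as UTF-8 yields the replacement character U+FFFD.
        String decoded = new String(latin1Bytes, StandardCharsets.UTF_8);
        System.out.printf("U+%04X%n", (int) decoded.charAt(0)); // U+FFFD

        // Re-encoding U+FFFD as UTF-8 gives the three bytes EF BF BD,
        // which matches the "three bytes" seen when dumping the char array.
        for (byte b : decoded.getBytes(StandardCharsets.UTF_8)) {
            System.out.printf("%02X ", b); // EF BF BD
        }
        System.out.println();
    }
}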

What I have tried

I have tried converting the character array to a string, then getting the bytes of that string to pass to another string constructor with a charset encoding parameter. I have also played with CharBuffers and tried to find something that might possibly work with the Locale class to solve this problem, but nothing I try seems to work.

ZeroOne
  • What happens if you use `new FileInputStream()` instead of `new FileReader()`? Or `new InputStreamReader(new FileInputStream(FILE), "ISO-8859-1")`? – JB Nizet May 04 '12 at 14:50
  • See http://stackoverflow.com/questions/3482494/howto-let-the-sax-parser-determine-the-encoding-from-the-xml-declaration – JB Nizet May 04 '12 at 14:54
  • Note: the characters() method does not guarantee that all bytes of a multibyte character will appear together in the same characters() event. – Colin D May 04 '12 at 14:55
  • @JBNizet Thank you! That solved it. Since comments cannot be accepted, I've just upvoted yours and accepted Jonathan's. – ZeroOne May 04 '12 at 15:47
  • @ColinD But I should be safe because the input is in ISO-8859-1, right? – ZeroOne May 04 '12 at 15:49
  • @ZeroOne I am not 100% sure, but I believe ISO-8859-1 has only single-byte characters and no surrogate pairs, so yes. – Colin D May 04 '12 at 15:54
  • @ZeroOne It does not look like the following is a problem for you specifically, but the characters() method also does not guarantee that you will get all adjacent character events in one event, i.e., Motörhead could come as the two events Motör + head. – Colin D May 04 '12 at 15:57
  • So why the three bytes? What was the default encoding used? – Mr_and_Mrs_D Apr 10 '13 at 01:27
  • Sorry Mr_and_Mrs_D, I never got around to trying to solve why three bytes. I think the default encoding must have been UTF-8 anyway. With ISO-8859-1 (or ISO-8859-15) I never should have had this problem in the first place. The results should be replicable, so you're free to try and work out the details. :) Should be doable with a debugger. – ZeroOne Apr 10 '13 at 10:17

3 Answers


The problem is that you're using a FileReader to read the file instead of a FileInputStream, as a commenter previously suggested. In the go method, take out the FileReader and replace it with a FileInputStream:

public void go() {
    try {
        XMLReader reader = getXMLReader();
        reader.parse(new InputSource(new FileInputStream(new File(FILE))));
    } catch (Exception ex) {
        throw new RuntimeException("The most epic fail.", ex);
    }
}

The way you have it now, the FileReader decodes the characters using the default platform encoding before passing them to the SAX parser, which is not what you want. If you pass a FileInputStream instead, the XML parser reads the character set from the XML declaration and handles the decoding for you.

Because FileReader is doing the decoding, you're seeing the invalid characters. If you let the SAX parser handle it, it should go through fine.
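If you do want a Reader for some reason, the alternative from the comments is to wrap a FileInputStream in an InputStreamReader with an explicit charset. A sketch of that variant of the go method (it also needs imports for java.io.InputStreamReader and java.nio.charset.StandardCharsets; note that once the parser is handed a Reader, the encoding in the XML declaration is no longer used for decoding, so the charset you pass must match the file):

public void go() {
    try {
        XMLReader reader = getXMLReader();
        // Decode explicitly as ISO-8859-1 instead of the platform default.
        reader.parse(new InputSource(new InputStreamReader(
                new FileInputStream(FILE), StandardCharsets.ISO_8859_1)));
    } catch (Exception ex) {
        throw new RuntimeException("The most epic fail.", ex);
    }
}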

Jonathan
  • Well, this is bittersweet. I just spent a good part of my Friday afternoon trying to figure out what's wrong with my `characters(...)` method, when the actual perpetrator was the seemingly innocent FileReader! Like it says in the RuntimeException: The most epic fail. Which of course makes it an epic win for you. Thank you very much! :) – ZeroOne May 04 '12 at 15:46
  • FileReader should almost never be used. At least not until Sun/Oracle finally decides to provide a constructor taking a charset as argument. To read characters, use an InputStreamReader wrapping a FileInputStream, and specify a charset. If you don't, the platform's default is used. – JB Nizet May 04 '12 at 15:56
  • @JBNizet I'll make at least a mental note of that, thank you. :) It does sound like FileReader could cause a lot of problems. – ZeroOne May 04 '12 at 16:03

In the characters() method, when you construct the String, first convert your char[] into a byte[], then invoke the constructor `new String(byte[], String charsetName)` instead of the default `new String(char[])`.

If you need more help, try: http://www.exampledepot.com/egs/java.nio.charset/ConvertChar.html
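Taken literally, that advice looks something like the sketch below (the charset choices are assumptions for illustration, and it uses java.nio.charset.StandardCharsets). Note that this kind of round trip can only repair text whose original mis-decoding was lossless, e.g., UTF-8 bytes that were wrongly read as ISO-8859-1:

@Override
public void characters(char[] ch, int start, int length) {
    // Re-encode with the charset that was (wrongly) used for decoding,
    // then re-decode with the charset the bytes were actually in.
    String raw = new String(ch, start, length);
    byte[] bytes = raw.getBytes(StandardCharsets.ISO_8859_1);
    String fixed = new String(bytes, StandardCharsets.UTF_8);
    System.out.println("Read: '" + fixed + "'");
}

In this question's setup, however, the FileReader has already collapsed the f6 byte into U+FFFD, so there is nothing left to recover; fixing the input stream, as in the accepted answer, is the robust route.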

raTM

You are fishing in murky waters; several things here are misleading. As @JBNizet indicated, a Reader reads text in some encoding, i.e., it already performs a conversion on top of an InputStream, which reads raw bytes. If you do not specify the encoding, the platform default is used.

    reader.parse(new InputSource(new FileInputStream(new File(FILE))));

This is neutral with respect to the encoding declared in the XML: the parser gets the raw bytes and can pick the encoding from the XML declaration itself.

The Java source encoding must coincide with the encoding your editor saved the file in (e.g., compile with `javac -encoding UTF-8` if the source is saved as UTF-8); otherwise the string literal "Motörhead" itself would already be wrong.

System.out.println can misrepresent things too, since the console output uses its own (platform) encoding.

Furthermore "ISO-8859-1" is a subset of Windows Latin-1, "Windows-1252". If you ever encounter problems with special characters propose "Windows-1252" (in java one can use "Cp1252").

Joop Eggen