
I use read.delim(filename) without any parameters to read a tab-delimited text file in R.

df = read.delim(file)

This worked as intended. Now I have a weird error message and I can't make any sense of it:

Error in type.convert(data[[i]], as.is = as.is[i], dec = dec, na.strings = character(0L)) : 
invalid multibyte string at '<fd>'
Calls: read.delim -> read.table -> type.convert
Execution halted

Can anybody explain what a multibyte string is? What does `<fd>` mean? Are there other ways to read a tab-delimited file in R? I have column headers, and some lines do not have data for all columns.

Martin Preusse
  • Check the file encoding (UTF-8? Latin-1?) and pass it to read.delim's fileEncoding parameter – Eduardo Leoni Feb 14 '11 at 15:58
  • Tried that, no effect. I think the bug was in my Java program which put some weird characters in the text file. However, I would appreciate more comments on this because I'm not sure. – Martin Preusse Feb 14 '11 at 19:13
  • You could post the file and a reproducible example; then we could help out more. – Eduardo Leoni Feb 14 '11 at 19:16
  • Open your file in a text editor and use your eyeballs to find the weird characters, or search for ``. A multibyte string is one which uses more than one byte to store each character (probably a Unicode string). – Richie Cotton Feb 16 '11 at 17:11
  • The strategy Richie suggests is sound; just make sure you use different editors. Some may show you the offending characters while others may not. – Roman Luštrik Aug 30 '12 at 09:05
  • That usually happens to me when I receive files from Windows. In most cases they are fixed with `read.table(file = "file.txt", fileEncoding = "latin1")`. – Erick Chacon Apr 02 '17 at 15:18

5 Answers


I realize this is pretty late, but I had a similar problem and I figured I'd post what worked for me. I used the iconv utility (e.g., `iconv file.pcl -f UTF-8 -t ISO-8859-1 -c`). The `-c` option skips characters that can't be translated.
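If you'd rather stay inside R, base R's iconv() function can do much the same thing (this is my own sketch, not part of the answer above): sub = "" drops any bytes that cannot be converted, much like the command-line -c flag.

```r
# Hypothetical example string: 0x96 is not a valid UTF-8 byte, so it is
# the kind of byte that triggers the "invalid multibyte string" error
x <- "Se\x96ora"

# sub = "" silently drops non-convertible bytes, like iconv's -c option
clean <- iconv(x, from = "UTF-8", to = "ISO-8859-1", sub = "")
print(clean)  # the stray byte is gone
```

Note that this throws data away; if you know the file's real encoding, passing the correct from = value converts rather than deletes.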

Patrick B.

If you want an R solution, here's a small convenience function I sometimes use to find where the offending (multibyte) character is lurking. Note that the offending character is the one right after the last character that gets printed. This works because print handles it fine, but substr throws an error when it hits a multibyte character.

find_offending_character <- function(x, maxStringLength = 256) {
  print(x)
  for (i in 1:maxStringLength) {
    offendingChar <- substr(x, i, i)
    # print(offendingChar)  # uncomment if you want the individual characters printed
    # substr() errors on the multibyte character, so the character after
    # the last one printed is the offending one
  }
}

string_vector <- c("test", "Se\x96ora", "works fine")

lapply(string_vector, find_offending_character)

I fix that character and run it again. Hope this helps someone who encounters the invalid multibyte string error.

Ram Narasimhan

I had a similarly strange problem with a file from the program E-Prime (edat -> SPSS conversion), but then I discovered that there are many additional encodings you can use. This did the trick for me:

tbl <- read.delim("dir/file.txt", fileEncoding="UCS-2LE")
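If you have no idea which encoding a file uses, one approach (my own sketch, not from the answer; "dir/file.txt" is the hypothetical file name above) is to try a few likely candidates until one reads cleanly. iconvlist() shows every encoding name your platform supports.

```r
# Try a handful of plausible encodings until read.delim() succeeds
# without an error or warning; iconvlist() lists all supported names
read_with_guessed_encoding <- function(path,
                                       encodings = c("UTF-8", "latin1", "UCS-2LE")) {
  for (enc in encodings) {
    tbl <- tryCatch(read.delim(path, fileEncoding = enc),
                    error   = function(e) NULL,
                    warning = function(w) NULL)
    if (!is.null(tbl)) {
      message("read succeeded with fileEncoding = ", enc)
      return(tbl)
    }
  }
  stop("none of the candidate encodings worked")
}

# tbl <- read_with_guessed_encoding("dir/file.txt")
```

A file that merely decodes without complaint isn't guaranteed to be decoded correctly, so eyeball the result before trusting it.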
rcs
dani

This happened to me because I had the 'copyright' symbol in one of my strings! Once it was removed, problem solved.

A good rule of thumb: if you are seeing this error, make sure that characters which don't appear on your keyboard are removed.
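One blunt way to apply that rule of thumb in R (my own sketch, not part of the answer) is to strip everything outside printable ASCII before parsing:

```r
# Hypothetical string with a stray non-ASCII byte (0xE9)
x <- "caf\xe9 2011"

# useBytes = TRUE makes gsub operate on raw bytes, so it doesn't choke
# on the invalid multibyte sequence; the pattern keeps only printable
# ASCII (0x20 through 0x7E)
ascii_only <- gsub("[^\x20-\x7E]", "", x, useBytes = TRUE)
print(ascii_only)
```

Be aware this also deletes legitimate accented characters; converting with iconv() and the correct source encoding is gentler when you know what the encoding is.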

Dr Nick Engerer

I found Leafpad to be an adequate and simple text editor for viewing and saving/converting files in certain character sets, at least in the Linux world.

I used it to convert the file from Latin-15 to UTF-8 and it worked.

Michael Ohlrogge
Robin