82

I've got a bunch of CSV files that I'm reading into R and including in a package's data/ folder in .RData format. Unfortunately, the non-ASCII characters in the data fail the package check. The tools package has two functions to check for non-ASCII characters (showNonASCII and showNonASCIIfile), but I can't seem to locate one to remove/clean them.

Before I explore other UNIX tools, it would be great to do this all in R so I can maintain a complete workflow from raw data to final product. Are there any existing packages/functions to help me get rid of the non-ASCII characters?
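
For reference, here's how I'm using those check functions so far (the example vector is made up):

```r
# tools::showNonASCII() prints the elements that contain non-ASCII
# characters (with the offending bytes escaped) and invisibly returns them
x <- c("Ekstr\u00f8m", "plain ascii", "J\u00f6reskog")
flagged <- tools::showNonASCII(x)
length(flagged)  # 2 of the 3 elements contain non-ASCII characters
```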

smci
  • 32,567
  • 20
  • 113
  • 146
Maiasaura
  • 32,226
  • 27
  • 104
  • 108
  • Try with regular expressions, for instance the function gsub. Check ?regexp – aatrujillob Mar 29 '12 at 23:22
  • You are aware that `read.csv()` takes an `encoding` argument, so you can handle these, at least in R? What specific check do the non-ASCII characters fail, is it in R (if so post it here), or external? – smci Aug 12 '16 at 08:02

5 Answers

96

These days, a slightly better approach is to use the stringi package, which provides a function for general Unicode transliteration. This lets you preserve as much of the original text as possible:

x <- c("Ekstr\u00f8m", "J\u00f6reskog", "bi\u00dfchen Z\u00fcrcher")
x
#> [1] "Ekstrøm"         "Jöreskog"        "bißchen Zürcher"

stringi::stri_trans_general(x, "latin-ascii")
#> [1] "Ekstrom"          "Joreskog"         "bisschen Zurcher"
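
For the CSV-to-.RData workflow in the question, the same call can be mapped over every character column of a data frame; a sketch (the data frame here is invented for illustration):

```r
# Hypothetical data frame standing in for one of the imported CSVs
df <- data.frame(
  name  = c("Ekstr\u00f8m", "J\u00f6reskog"),
  score = c(1, 2),
  stringsAsFactors = FALSE
)

# Transliterate every character column to plain ASCII
is_chr <- vapply(df, is.character, logical(1))
df[is_chr] <- lapply(df[is_chr], stringi::stri_trans_general, id = "latin-ascii")

df$name
#> [1] "Ekstrom"  "Joreskog"
```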
hadley
  • 102,019
  • 32
  • 183
  • 245
  • 1
    Any thoughts how I can make it work with stringi -- `iconv("Klinik. der Univ. zu K_ln (AA\u0090R)","latin1","ASCII",sub="") => [1] "Klinik. der Univ. zu K_ln (AAR)"` but `stringi::stri_trans_general("Klinik. der Univ. zu K_ln (AA\u0090R)", "latin-ascii") => [1] "Klinik. der Univ. zu K_ln (AA\u0090R)"` – xbsd Nov 13 '17 at 01:27
  • 4
    `stringi::stri_trans_general(x, "latin-ascii")` removes some of the non-ASCII characters in my text, but not others. `tools::showNonASCII` reveals the non-removed characters are:zero width space, trademark sign, Euro sign, narrow no-break space. Does this mean `"latin-ascii"` is the wrong transform identifier for my string? Is there a straightforward way to figure out the correct transform identifier? Thanks – Josh Jul 04 '20 at 23:19
91

To simply remove the non-ASCII characters, you could use base R's iconv(), setting sub = "". Something like this should work:

x <- c("Ekstr\xf8m", "J\xf6reskog", "bi\xdfchen Z\xfcrcher") # e.g. from ?iconv
Encoding(x) <- "latin1"  # (just to make sure)
x
# [1] "Ekstrøm"         "Jöreskog"        "bißchen Zürcher"

iconv(x, "latin1", "ASCII", sub="")
# [1] "Ekstrm"        "Jreskog"       "bichen Zrcher"

To locate non-ASCII characters, or to find if there were any at all in your files, you could likely adapt the following ideas:

## Do *any* lines contain non-ASCII characters?
any(grepl("I_WAS_NOT_ASCII", iconv(x, "latin1", "ASCII", sub="I_WAS_NOT_ASCII")))
# [1] TRUE

## Find which lines (e.g. read in by readLines()) contain non-ASCII characters
grep("I_WAS_NOT_ASCII", iconv(x, "latin1", "ASCII", sub="I_WAS_NOT_ASCII"))
# [1] 1 2 3
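
The same idea extends to whole files read in with readLines(); a sketch, where the data-raw/ path and *.csv pattern are just placeholders for your own layout:

```r
# Flag which files contain non-ASCII characters, using the iconv() trick above
has_non_ascii <- function(path) {
  lines <- readLines(path, warn = FALSE)
  any(grepl("I_WAS_NOT_ASCII",
            iconv(lines, "latin1", "ASCII", sub = "I_WAS_NOT_ASCII")))
}

# "data-raw" is a placeholder for wherever your raw csv files live
csvs <- list.files("data-raw", pattern = "\\.csv$", full.names = TRUE)
csvs[vapply(csvs, has_non_ascii, logical(1))]
```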
Josh O'Brien
  • 159,210
  • 26
  • 366
  • 455
4

I often have trouble with iconv, and I'm a base R fan.

So instead, to remove Unicode or other non-ASCII characters, I use gsub, with lapply to apply it to an entire data frame.

gsub("[^\u0001-\u007F]+|<U\\+\\w+>","", string)

The benefit of this gsub call is that it matches a range of notation formats. Below I show the individual matches for the two patterns.

x1 <- c("Ekstr\xf8m", "J\xf6reskog", "bi\xdfchen Z\xfcrcher")
Encoding(x1) <- "latin1"  # mark the raw bytes as latin1 so gsub doesn't error in a UTF-8 locale
gsub("[^\u0001-\u007F]+","", x1)
## "Ekstrm"        "Jreskog"       "bichen Zrcher"
x2 <- c("Ekstr\u00f8m", "J\u00f6reskog", "bi\u00dfchen Z\u00fcrcher")
gsub("[^\u0001-\u007F]+","", x2)
## Same as x1
## "Ekstrm"        "Jreskog"       "bichen Zrcher"
x3 <- c("<U+FDFA>", "1<U+2009>00", "X<U+203E>")
gsub("<U\\+\\w+>","", x3)
## ""    "100" "X"
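
The lapply-over-a-data-frame step mentioned above might look like this (the data frame is invented for illustration):

```r
# Apply the cleaning gsub to every character column of a data frame
df <- data.frame(
  name = c("Ekstr\u00f8m", "1<U+2009>00"),
  id   = 1:2,
  stringsAsFactors = FALSE
)

clean <- function(s) gsub("[^\u0001-\u007F]+|<U\\+\\w+>", "", s)
df[] <- lapply(df, function(col) if (is.character(col)) clean(col) else col)

df$name
## "Ekstrm" "100"
```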
FantasyGutsy
  • 81
  • 1
  • 5
3

To drop all strings that contain non-ASCII characters (borrowing code from @hadley), you can use the xfun package together with filter from dplyr:

library(dplyr)

x <- c("Ekstr\u00f8m", "J\u00f6reskog", "bi\u00dfchen Z\u00fcrcher", "alex")
x

x %>%
  tibble(name = .) %>%
  filter(xfun::is_ascii(name))
Nick
  • 417
  • 4
  • 14
2

textclean::replace_non_ascii() did the job for me. This function removes not only special letters but also euro, trademark, and other symbols.

x <- c("Ekstr\u00f8m \u2605", "J\u00f6reskog \u20ac", "bi\u00dfchen Z\u00fcrcher \u2122")

stringi::stri_trans_general(x, "latin-ascii")
# [1] "Ekstrom ★"          "Joreskog €"         "bisschen Zurcher ™"

textclean::replace_non_ascii(x)
# [1] "Ekstrom"               "Joreskog"              "bisschen Zurcher cent"
Yuriy Barvinchenko
  • 1,465
  • 1
  • 12
  • 17