Writing a data frame that mixes small integer entries (values below 1000) and "large" ones (1000 or more) to a CSV file with write_csv() produces a mix of scientific and non-scientific entries. If the first 1000 rows contain only small values but a large value appears later, read_csv() seems to get confused by this mix and outputs NA for the scientific-notation entries:
library(tidyverse)

test_write_read <- function(small_value,
                            n_fills,
                            position,
                            large_value) {
  tib <- tibble(a = rep(small_value, n_fills))
  tib$a[position] <- large_value
  write_csv(tib, "tib.csv")
  read_csv("tib.csv")
}
The following calls do not cause any problems:
tib <- test_write_read(small_value = 1,
                       n_fills = 1001,
                       position = 1000, # position <= 1000
                       large_value = 1000)
tib <- test_write_read(1, 1001, 1001, 999)
tib <- test_write_read(1000, 1001, 1000, 1)
However, the following ones do:
tib <- test_write_read(small_value = 1,
                       n_fills = 1001,
                       position = 1001, # position > 1000
                       large_value = 1000)
tib <- test_write_read(1, 1002, 1001, 1000)
tib <- test_write_read(999, 1001, 1001, 1000)
A typical output:
problems(tib)
## A tibble: 1 x 5
# row col expected actual file
# <int> <chr> <chr> <chr> <chr>
#1 1001 a no trailing characters e3 'tib.csv'
tib %>% tail(n = 3)
## A tibble: 3 x 1
# a
# <int>
#1 999
#2 999
#3 NA
The csv file:
$ tail -n3 tib.csv
#999
#999
#1e3
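For what it's worth, the failure looks consistent with readr's type guessing: by default the column type is guessed from the first 1000 rows, so the column is guessed as integer and the later "1e3" then fails the integer parser. Assuming that is the mechanism, two workarounds seem to avoid the NA (guess_max and col_types/col_double are standard read_csv arguments):

library(tidyverse)

# Reproduce a failing case: 1001 small values, a large one at the end
tib <- tibble(a = rep(999, 1001))
tib$a[1001] <- 1000
write_csv(tib, "tib.csv")

# Workaround 1: guess the type from enough rows to see the large value
tib1 <- read_csv("tib.csv", guess_max = 1001)

# Workaround 2: skip guessing entirely with an explicit column spec
tib2 <- read_csv("tib.csv", col_types = cols(a = col_double()))

stopifnot(tib1$a[1001] == 1000, tib2$a[1001] == 1000)

Neither workaround changes what write_csv emits, of course; they only make read_csv parse "1e3" as a double.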
I am running:
R version 3.4.3 (2017-11-30)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 16.04.3 LTS
with tidyverse_1.2.1 (loading readr_1.1.1)
Is this a bug that should be reported?