
My question is related to assignment by reference versus copying in data.table. I want to know if one can delete rows by reference, similar to

DT[ , someCol := NULL]

I want to know about

DT[someRow := NULL, ]

I guess there's a good reason for why this function doesn't exist, so maybe you could just point out a good alternative to the usual copying approach, as below. In particular, going with my favourite from example(data.table),

DT = data.table(x = rep(c("a", "b", "c"), each = 3), y = c(1, 3, 6), v = 1:9)
#      x y v
# [1,] a 1 1
# [2,] a 3 2
# [3,] a 6 3
# [4,] b 1 4
# [5,] b 3 5
# [6,] b 6 6
# [7,] c 1 7
# [8,] c 3 8
# [9,] c 6 9

Say I want to delete the first row from this data.table. I know I can do this:

DT <- DT[-1, ]

but often we may want to avoid that, because we are copying the object (and that requires about 3*N memory, where N = object.size(DT), as pointed out here). Now I found set(DT, i, j, value). I know how to set specific values (like here: set all values in rows 1 and 2 and columns 2 and 3 to zero)

set(DT, 1:2, 2:3, 0) 
DT
#      x y v
# [1,] a 0 0
# [2,] a 0 0
# [3,] a 6 3
# [4,] b 1 4
# [5,] b 3 5
# [6,] b 6 6
# [7,] c 1 7
# [8,] c 3 8
# [9,] c 6 9

But how can I erase the first two rows, say? Doing

set(DT, 1:2, 1:3, NULL)

sets the entire DT to NULL.

My SQL knowledge is very limited, so you guys tell me: given data.table uses SQL technology, is there an equivalent to the SQL command

DELETE FROM table_name
WHERE some_column=some_value

in data.table?
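(For concreteness, the closest analogue I know of is the copying subset, which is exactly what I'm trying to avoid:)

```r
library(data.table)

DT <- data.table(x = rep(c("a", "b", "c"), each = 3), y = c(1, 3, 6), v = 1:9)

# the copying analogue of DELETE FROM DT WHERE x = 'a':
DT <- DT[x != "a"]
```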

Florian Oswald
    I don't think it is that `data.table()` uses SQL technology so much as one can draw a parallel between the different operations in SQL and the various arguments to a `data.table`. To me, the reference to "technology" somewhat implies that `data.table` is sitting on top of a SQL database somewhere, which AFAIK is not the case. – Chase May 28 '12 at 21:15
    thanks chase. yeah, i guess that sql analogy was a wild guess. – Florian Oswald May 29 '12 at 21:44
    Often it should be sufficient to define a flag for keeping rows, like `DT[ , keep := .I > 1]`, then subset for later operations: `DT[(keep), ...]`, perhaps even `setindex(DT, keep)` to speed up this subsetting. Not a panacea, but worthwhile to consider as a design choice in your workflow -- do you really want to _delete all those rows from memory_, or would you prefer to exclude them? The answer differs by use case. – MichaelChirico Dec 19 '17 at 05:54
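The flag-and-subset pattern from that comment can be sketched as follows (a minimal sketch using the DT from the question):

```r
library(data.table)

DT <- data.table(x = rep(c("a", "b", "c"), each = 3), y = c(1, 3, 6), v = 1:9)

DT[, keep := .I > 1]   # flag all rows except the first, added by reference
setindex(DT, keep)     # optional: index the flag to speed up subsetting
DT[(keep), sum(v)]     # downstream work subsets on the flag instead of deleting
```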

7 Answers


Good question. data.table can't delete rows by reference yet.

data.table can add and delete columns by reference since it over-allocates the vector of column pointers, as you know. The plan is to do something similar for rows and allow fast insert and delete. A row delete would use memmove in C to budge up the items (in each and every column) after the deleted rows. Deleting a row in the middle of the table would still be quite inefficient compared to a row store database such as SQL, which is more suited for fast insert and delete of rows wherever those rows are in the table. But still, it would be a lot faster than copying a new large object without the deleted rows.

On the other hand, since column vectors would be over-allocated, rows could be inserted (and deleted) at the end, instantly; e.g., a growing time series.


It's filed as an issue: Delete rows by reference.

Matt Dowle
    @Matthew Dowle Is there some news on this ? – statquant Apr 19 '13 at 16:08
    @statquant I think I should fix the 37 bugs, and finish `fread` first. After that it's pretty high. – Matt Dowle Apr 19 '13 at 18:07
    @MatthewDowle sure, thanks again for everything you are doing. – statquant Apr 19 '13 at 18:26
  • @MattDowle This would be great. To be clear, if I have `orig <- data.table(a=1:10, b=10:1)`, and I have some `mod <- function(X){X[,b:=NULL]}`, then I can drop the column in `orig` by doing `mod(orig)`, but I don't have to assign the output to overwrite orig. However, am I correct to understand that I currently can't use `mod2 <- function(X){X[b<8 & a>3]}` to do `mod2(orig)` and expect that to drop some rows in `orig` w/o reassigning the output, as in `orig <- mod2(orig)`, right? – rbatt Oct 29 '15 at 17:02
    @rbatt Correct. `DT[b<8 & a>3]` returns a new data.table. We'd like to add `delete(DT, b>=8 | a<=3)` and `DT[b>=8 | a<=8, .ROW:=NULL]`. The advantage of the latter would be combining with other features of `[]` such as row numbers in `i`, join in `i` and `roll` benefiting from `[i,j,by]` optimization. – Matt Dowle Oct 29 '15 at 18:46
    @charliealpha No update. Contributions welcome. I'm willing to guide. It needs C skills - again, I'm willing to guide. – Matt Dowle Jul 17 '17 at 17:50
  • @MattDowle..! Thanks for your response. I'm new to C so not sure what's doable but willing to be led. I figure I should first learn how data.table operates at its basic level. Any suggestions where I should start please? Many thanks – charliealpha Jul 18 '17 at 06:24
    Still hoping for this feature ;-) – Humpelstielzchen Apr 28 '20 at 06:13
  • It's a shame there is no way to easily delete rows of data.table by indices. – johnny Sep 19 '22 at 03:13

The approach I have taken to keep memory use similar to in-place deletion is to subset one column at a time and delete as I go. It is not as fast as a proper C memmove solution, but memory use is all I care about here. Something like this:

DT = data.table(col1 = 1:1e6)
cols = paste0('col', 2:100)
for (col in cols){ DT[, (col) := 1:1e6] }
keep.idxs = sample(1e6, 9e5, FALSE) # keep 90% of entries
DT.subset = data.table(col1 = DT[['col1']][keep.idxs]) # this is the subsetted table
for (col in cols){
  DT.subset[, (col) := DT[[col]][keep.idxs]]
  DT[, (col) := NULL] #delete
}
vc273
    +1 Nice memory efficient approach. So ideally we need to delete a set of rows by reference actually don't we, I hadn't thought of that. It'll have to be a series of `memmove`s to budge up the gaps, but that's ok. – Matt Dowle Jan 21 '14 at 20:50
  • Would this work as a function, or does the use in a function and return force it to make memory copies? – russellpierce Feb 21 '14 at 16:06
    it would work in a function, since data.tables are always references. – vc273 Feb 21 '14 at 19:26
    thanks, nice one. To speed up a little bit (especially with many columns) you change `DT[, col:= NULL, with = F]` in `set(DT, NULL, col, NULL)` – Michele Jul 07 '14 at 17:13
    Updating in light of changing idiom and warning "with=FALSE together with := was deprecated in v1.9.4 released Oct 2014. Please wrap the LHS of := with parentheses; e.g., DT[,(myVar):=sum(b),by=a] to assign to column name(s) held in variable myVar. See ?':=' for other examples. As warned in 2014, this is now a warning." – Frank Nov 18 '16 at 17:39
  • @russellpierce https://stackoverflow.com/questions/13756178/writings-functions-procedures-for-data-table-objects – Alex Dec 19 '17 at 05:04

Here is a working function based on @vc273's answer and @Frank's feedback.

delete <- function(DT, del.idxs) {           # pls note 'del.idxs' vs. 'keep.idxs'
  keep.idxs <- setdiff(DT[, .I], del.idxs);  # select row indexes to keep
  cols = names(DT);
  DT.subset <- data.table(DT[[1]][keep.idxs]); # this is the subsetted table
  setnames(DT.subset, cols[1]);
  for (col in cols[2:length(cols)]) {
    DT.subset[, (col) := DT[[col]][keep.idxs]];
    DT[, (col) := NULL];  # delete
  }
   return(DT.subset);
}

An example of its usage:

dat <- delete(dat,del.idxs)   ## Pls note 'del.idxs' instead of 'keep.idxs'

Where "dat" is a data.table. Removing 14k rows from 1.4M rows takes 0.25 sec on my laptop.

> dim(dat)
[1] 1419393      25
> system.time(dat <- delete(dat,del.idxs))
   user  system elapsed 
   0.23    0.02    0.25 
> dim(dat)
[1] 1404715      25
> 
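For a reproducible illustration (using the DT from the question rather than my own data), the snippet below restates the function so it runs standalone:

```r
library(data.table)

# restated from above so this snippet is self-contained
delete <- function(DT, del.idxs) {
  keep.idxs <- setdiff(DT[, .I], del.idxs)
  cols <- names(DT)
  DT.subset <- data.table(DT[[1]][keep.idxs])
  setnames(DT.subset, cols[1])
  for (col in cols[2:length(cols)]) {
    DT.subset[, (col) := DT[[col]][keep.idxs]]
    DT[, (col) := NULL]
  }
  DT.subset
}

DT <- data.table(x = rep(c("a", "b", "c"), each = 3), y = c(1, 3, 6), v = 1:9)
DT <- delete(DT, 1:2)   # drop rows 1 and 2
```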

PS. Since I am new to SO, I could not add a comment to @vc273's thread :-(

  • I commented under vc's answer explaining the changed syntax for (col) :=. Kind of odd to have a function named "delete" but an arg related to what to keep. Btw, generally it's preferred to use a reproducible example rather than to show dim for your own data. You could reuse DT from the question, for example. – Frank Nov 18 '16 at 17:42
  • I don't understand why you do it by reference but later use an assignment _dat <-_ – skan Jan 10 '17 at 00:16
  • 1
    @skan, That assignment points "dat" at the modified data.table, which was itself created by subsetting the original data.table. The <- assignment does not copy the returned data; it just assigns a new name to it. [link](http://stackoverflow.com/questions/10225098/understanding-exactly-when-a-data-table-is-a-reference-to-vs-a-copy-of-another) –  Jan 11 '17 at 02:54
  • @Frank , I have updated the function for the oddity you pointed out. –  Jan 11 '17 at 02:57
  • Ok, thanks. I'm leaving the comment since I still think it's worth noting that showing console output instead of a reproducible example is not encouraged here. Also, a single benchmark isn't so informative. If you also measured the time taken for the subsetting, it'd be more informative (since most of us don't intuitively know how long that takes, much less how long it takes on your comp). Anyway, I don't mean to suggest this is a bad answer; I'm one of its upvoters. – Frank Jan 11 '17 at 03:20
  • Why not replace `setdiff(DT[, .I], del.idxs);` with `setdiff(seq_len(nrow(DT)), del.idxs);` – Alex Dec 19 '17 at 05:31
  • I also found it helpful to insert a `gc()` at the end of each for loop iteration. – Alex Dec 19 '17 at 05:44

This topic still interests many people (me included).

What about the following? I used assign() to replace the object in the global environment with the subsetted table built by the code described previously. It would be better to capture the calling environment, but at least via globalenv() it is memory efficient and acts like a change by reference.

delete <- function(DT, del.idxs) 
{ 
  varname = deparse(substitute(DT))

  keep.idxs <- setdiff(DT[, .I], del.idxs)
  cols = names(DT);
  DT.subset <- data.table(DT[[1]][keep.idxs])
  setnames(DT.subset, cols[1])

  for (col in cols[2:length(cols)]) 
  {
    DT.subset[, (col) := DT[[col]][keep.idxs]]
    DT[, (col) := NULL];  # delete
  }

  assign(varname, DT.subset, envir = globalenv())
  return(invisible())
}

DT = data.table(x = rep(c("a", "b", "c"), each = 3), y = c(1, 3, 6), v = 1:9)
delete(DT, 3)
JRR
  • Just to be clear, this does not delete by reference (based on `address(DT); delete(DT, 3); address(DT)`), though it may be efficient in some sense. – Frank Aug 28 '17 at 16:41
    No it does not. It emulates the behavior and is memory efficient. That's why I said: *it acts like*. But strictly speaking you're right the address changed. – JRR Aug 28 '17 at 18:22
  • This approach changes the column types at least partially: column with `list(POSIXct)` value inside becomes `POSIXct` – vladli Jun 01 '22 at 16:35
  • You do not in fact need to do this, check out my recent answer! – Lazy Jan 04 '23 at 15:57

Instead of trying to set to NULL, try setting to NA (matching the NA type of the first column):

set(DT, 1:2, 1:3, NA_character_)
IRTFM
    yeah, that works I guess. My problem is that I have a lot of data and I want to get rid of exactly those rows with NA, possibly without having to copy DT to get rid of those rows. thanks for your comment anyway! – Florian Oswald May 29 '12 at 21:48

Here are some strategies I have used. I believe a .ROW function may be coming. None of the approaches below is fast; they go a little beyond plain subsetting or filtering. I tried to think like a DBA just trying to clean up data. As noted above, you can select or remove rows in data.table:

data(iris)
iris <- data.table(iris)

iris[3] # Select row three

iris[-3] # Remove row three

You can also use .SD to select or remove rows:

iris[,.SD[3]] # Select row three

iris[, .SD[3:6], by = .(Species)] # Select rows 3 - 6 for each Species

iris[,.SD[-3]] # Remove row three

iris[, .SD[-3:-6], by = .(Species)] # Remove rows 3 - 6 for each Species

Note: .SD creates a subset of the original data and allows you to do quite a bit of work in j or in a subsequent data.table. See https://stackoverflow.com/a/47406952/305675. Here I order the irises by Sepal.Length, take a specified Sepal.Length as the minimum, select the top three (by Sepal.Length) of each Species, and return all accompanying data:

iris[order(-Sepal.Length)][Sepal.Length > 3, .SD[1:3], by = .(Species)]

The approaches above all reorder a data.table sequentially when removing rows. You can transpose a data.table and remove or replace the old rows which are now transposed columns. When using ':=NULL' to remove a transposed row, the subsequent column name is removed as well:

m_iris <- data.table(t(iris))[,V3:=NULL] # V3 column removed

d_iris <- data.table(t(iris))[,V3:=V2] # V3 column replaced with V2

When you transpose back to a data.table, you may want to restore the column names from the original data.table and the class attributes: transposing coerces all columns to character, so applying ":=NULL" to a transposed data.table leaves you with all-character columns.

m_iris <- data.table(t(m_iris));
setnames(m_iris, names(iris))

d_iris <- data.table(t(d_iris));
setnames(d_iris, names(iris))

You may just want to remove duplicate rows which you can do with or without a Key:

d_iris[,Key:=paste0(Sepal.Length,Sepal.Width,Petal.Length,Petal.Width,Species)]     

d_iris[!duplicated(Key),]

d_iris[!duplicated(paste0(Sepal.Length,Sepal.Width,Petal.Length,Petal.Width,Species)),]  
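As an aside, if deduplication is the whole goal, data.table's built-in `unique()` does this directly, without constructing a Key column (a minimal sketch on the iris data used above):

```r
library(data.table)

d_iris <- data.table(iris)

nrow(unique(d_iris))                  # keeps the first occurrence of each fully duplicated row
nrow(unique(d_iris, by = "Species"))  # first row per Species
```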

It is also possible to add an incremental counter with '.I'. You can then search for duplicated keys or fields and remove them by removing the record with the counter. This is computationally expensive, but has some advantages since you can print the lines to be removed.

d_iris[,I:=.I,] # add a counter field

d_iris[,Key:=paste0(Sepal.Length,Sepal.Width,Petal.Length,Petal.Width,Species)]

for(i in d_iris[duplicated(Key),I]) {print(i)} # See lines with duplicated Key or Field

for(i in d_iris[duplicated(Key),I]) {d_iris <- d_iris[!I == i,]} # Remove lines with duplicated Key or any particular field.

You can also just fill a row with 0s or NAs and then use an i query to delete them:

X <- data.table(x = c("c", "b"), v = c(8, 7), foo = c(4, 2))
X
   x v foo
1: c 8   4
2: b 7   2

X[1] <- c(0)

X
   x v foo
1: 0 0   0
2: b 7   2

X[2] <- c(NA)
X
    x  v foo
1:  0  0   0
2: NA NA  NA

X <- X[x != 0,]
X <- X[!is.na(x),]
rferrisx
  • This doesn't really answer the question (about removal by reference) and using `t` on a data.frame is usually not a good idea; check `str(m_iris)` to see that all data has become string/character. Btw, you can also get row numbers by using `d_iris[duplicated(Key), which = TRUE]` without making a counter column. – Frank Feb 06 '18 at 20:39
    Yes, you are right. I don't answer the question specifically. But removing a row by reference doesn't have official functionality or documentation yet and many people are going to come to this post looking for generic functionality to do exactly that. We could create a post to just answer the question on how to remove a row. Stack overflow is very useful and I really understand the necessity to keep answers exact to the question. Sometimes though, I think SO can be a just a little fascist in this regard...but maybe there is a good reason for that. – rferrisx Feb 08 '18 at 18:28
  • Ok, thanks for explaining. I think for now our discussion here is enough of a signpost for anyone who gets confused in this case. – Frank Feb 08 '18 at 19:18

This is a version inspired by the answers from vc273 and user7114184. When we want to delete "by reference" we do not want to create a new DT. But that is in fact not necessary: if we remove all columns from a data.table it becomes a null data.table, which allows any number of rows. So instead of shifting the columns to a new data.table and continuing with that, we can shift the columns out and back into the original data.table, and keep using it.

This gives us two functions, one data_table_add_rows which allows us to add "by-reference" additional rows to a data.table. The other one data_table_remove_rows removes rows "by-reference". The first takes a list of values, while the second will evaluate a DT-call for filtering which allows us to do nice things.

#' Add rows to a data table in a memory efficient, by-referencesque manner
#'
#' This mimics the by-reference functionality `DT[, new_col := value]`, but
#' for rows instead. The rows in question are assigned at the end of the data
#' table. If the data table is keyed it is automatically reordered after the
#' operation. If not this function will preserve order of existing rows, but
#' will not preserve sortedness.
#'
#' This function will take the rows to add from a list of columns or generally
#' anything that can be named and converted or coerced to data frame.
#' The list may specify less columns than present in the data table. In this
#' case the rest is filled with NA. The list may not specify more columns than
#' present in the data table. Columns are matched by names if the list is named
#' or by position if not. The list may not have names not present in the data
#' table.
#'
#' Note that this operation is memory efficient as it will add the rows for
#' one column at a time, only requiring reallocation of single columns at a
#' time. This function will change the original data table by reference.
#'
#' This function will not affect shallow copies of the data table.
#'
#' @param .dt A data table
#' @param value A list (or a data frame). Must have at most as many elements as
#'        there are columns in \param{.dt}. If unnamed this will be applied to
#'        first columns in \param{.dt}, else it will by applied by name. Must
#'        not have names not present in \param{.dt}.
#' @return \param{.dt} (invisible)
data_table_add_rows <- function(.dt, value) {
  if (length(value) > ncol(.dt)) {
    rlang::abort(glue::glue("Trying to update data table with {ncol(.dt)
      } columns with {length(value)} columns."))
  }
  if (is.null(names(value))) names(value) <- names(.dt)[seq_len(length(value))]
  value <- as.data.frame(value)
  if (any(!(names(value) %in% names(.dt)))) {
    rlang::abort(glue::glue("Trying to update data table with columns {
        paste(setdiff(names(value), names(.dt)), collapse = ', ')
      } not present in original data table."))
  }
  value[setdiff(names(.dt), names(value))] <- NA
  
  k <- data.table::key(.dt)
  
  temp_dt <- data.table::data.table()
  
  for (col in c(names(.dt))) {
    set(temp_dt, j = col,value = c(.dt[[col]], value[[col]]))
    set(.dt, j = col, value = NULL)
  }
  
  for (col in c(names(temp_dt))) {
    set(.dt, j = col, value = temp_dt[[col]])
    set(temp_dt, j = col, value = NULL)
  }
  
  if (!is.null(k)) data.table::setkeyv(.dt, k)
  
  .dt
}

#' Remove rows from a data table in a memory efficient, by-referencesque manner
#'
#' This mimics the by-reference functionality `DT[, new_col := NULL]`, but
#' for rows instead. This operation preserves order. If the data table is keyed
#' it will preserve the key.
#'
#' This function will determine the rows to delete by passing all additional
#' arguments to a data.table filter call of the form
#' \code{DT[, .idx = .I][..., j = .idx]}
#' Thus we can pass a simple index vector or a condition, or even delete by
#' using join syntax \code{data_table_remove_rows(DT1, DT2, on = cols)} (or
#' reversely keep by join using
#' \code{data_table_remove_rows(DT1, !DT2, on = cols)}
#'
#' Note that this operation is memory efficient as it will add the rows for
#' one column at a time, only requiring reallocation of single columns at a
#' time. This function will change the original data table by reference.
#'
#' This function will not affect shallow copies of the data table.
#'
#' @param .dt A data table
#' @param ... Any arguments passed to `[` for filtering the data.table. Must not
#'        specify `j`.
#' @return \param{.dt} (invisible)
data_table_remove_rows <- function(.dt, ...) {
  k <- data.table::key(.dt)
  
  env <- parent.frame()
  args <- as.list(sys.call()[-1])
  if (!is.null(names(args)) && ".dt" %in% names(args)) args[[".dt"]] <- NULL
  else args <- args[-1]
  
  if (!is.null(names(args)) && "j" %in% names(args)) {
    rlang::abort("... must not specify j")
  }
  
  call <- substitute(
    .dt[, .idx := .I][j = .idx],
    env = list(.dt = .dt))
  
  .nc <- names(call)
  
  for (i in seq_along(args)) {
    call[[i + 3]] <- args[[i]]
  }
  
  if (!is.null(names(args))) names(call) <- c(.nc, names(args))
  which <- eval(call, envir = env)
  set(.dt, j = ".idx", value = NULL)
  
  temp_dt <- data.table::data.table()
  
  for (col in c(names(.dt))) {
    set(temp_dt, j = col,value = .dt[[col]][-which])
    set(.dt, j = col, value = NULL)
  }
  
  for (col in c(names(temp_dt))) {
    set(.dt,j = col, value = temp_dt[[col]])
    set(temp_dt, j = col, value = NULL)
  }
  
  if (!is.null(k)) data.table::setattr(.dt, "sorted", k)
  
  .dt
}

Now this allows us to do quite nice calls. For example we can do:

library(data.table)

d <- data.table(x = 1:10, y = runif(10))

#>         x          y
#>     <int>      <num>
#>  1:     1 0.77326131
#>  2:     2 0.88699627
#>  3:     3 0.15553784
#>  4:     4 0.71221778
#>  5:     5 0.11964578
#>  6:     6 0.73692709
#>  7:     7 0.05382835
#>  8:     8 0.61129007
#>  9:     9 0.18292229
#> 10:    10 0.22569555

# add some rows (y = NA)
data_table_add_rows(d, list(x=11:13))
# add some rows (y = 0)
data_table_add_rows(d, list(x=14:15, y = 0))

#>         x          y
#>     <int>      <num>
#>  1:     1 0.77326131
#>  2:     2 0.88699627
#>  3:     3 0.15553784
#>  4:     4 0.71221778
#>  5:     5 0.11964578
#>  6:     6 0.73692709
#>  7:     7 0.05382835
#>  8:     8 0.61129007
#>  9:     9 0.18292229
#> 10:    10 0.22569555
#> 11:    11         NA
#> 12:    12         NA
#> 13:    13         NA
#> 14:    14 0.00000000
#> 15:    15 0.00000000

# remove all added rows
data_table_remove_rows(d, is.na(y) | y == 0)

#>         x          y
#>     <int>      <num>
#>  1:     1 0.77326131
#>  2:     2 0.88699627
#>  3:     3 0.15553784
#>  4:     4 0.71221778
#>  5:     5 0.11964578
#>  6:     6 0.73692709
#>  7:     7 0.05382835
#>  8:     8 0.61129007
#>  9:     9 0.18292229
#> 10:    10 0.22569555

# remove by join
e <- data.table(x = 2:5)
data_table_remove_rows(d, e, on = "x")

#>        x          y
#>    <int>      <num>
#> 1:     1 0.77326131
#> 2:     6 0.73692709
#> 3:     7 0.05382835
#> 4:     8 0.61129007
#> 5:     9 0.18292229
#> 6:    10 0.22569555

# add back
data_table_add_rows(d, c(e, list(y = runif(nrow(e)))))

#>         x          y
#>     <int>      <num>
#>  1:     1 0.77326131
#>  2:     6 0.73692709
#>  3:     7 0.05382835
#>  4:     8 0.61129007
#>  5:     9 0.18292229
#>  6:    10 0.22569555
#>  7:     2 0.99372144
#>  8:     3 0.03363720
#>  9:     4 0.69880083
#> 10:     5 0.67863547

# keep by join
data_table_remove_rows(d, !e, on = "x")

#>        x         y
#>    <int>     <num>
#> 1:     2 0.9937214
#> 2:     3 0.0336372
#> 3:     4 0.6988008
#> 4:     5 0.6786355

EDIT: Thanks to Matt Summersgill for a slightly better-performing version of this!

Lazy
    Cool answer! I did notice these were slower than the equivalent of `DT <- DT[!x ==y]` or `DT <- rbindlist(list(DT,NewRows))` so I did a quick look at getting performance closer to parity, feel free to take a look here: https://gist.github.com/msummersgill/10a2a25273f2018df946a14b1f755496 – Matt Summersgill Jan 14 '23 at 18:42
  • @MattSummersgill Hello Matt. Thank your for your input, I’ve changed the code. I’ve found that most of the time spent in deleting rows comes from a `setdiff` that is not in fact necessary, as we are handling index vectors here. So instead of `setdiff(..., which)` one can do the direct `...[-which]`. But rather than that we can directly subset the columns using `[-which]`, which is even a bit faster. And thank you for suggesting `set`, this does actually make a significant difference (about 300ms overhead in using `[` in your benchmark). – Lazy Jan 15 '23 at 11:12