I have an R dataframe and I'm trying to subtract one column from another. I extract the columns using the $
operator but the class of the columns is 'factor' and R won't perform arithmetic operations on factors. Are there special functions to do this?
- Factors in R are generally meant for categorical (or ordinal) data. How do you define arithmetic for categorical data? – Andrie Aug 08 '11 at 10:21
4 Answers
If you really want the levels of the factor to be used, you're either doing something very wrong or something too clever for your own good.
If what you have is a factor containing numbers stored in its levels, then you want to coerce it to numeric first using as.numeric(as.character(...)):
dat <- data.frame(f = as.character(runif(10)),
                  stringsAsFactors = TRUE)  # explicit, so f is a factor in R >= 4.0
You can see the difference between accessing the factor's level codes and converting the factor's contents here:
> as.numeric(dat$f)
[1] 9 7 2 1 4 6 5 3 10 8
> as.numeric(as.character(dat$f))
[1] 0.6369432 0.4455214 0.1204000 0.0336245 0.2731787 0.4219241 0.2910194
[8] 0.1868443 0.9443593 0.5784658
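Applied to the question itself, subtracting one factor column from another (a hypothetical two-column data frame, assuming the levels hold numbers) would go like this:

```r
# hypothetical data frame with two numeric-looking factor columns
df <- data.frame(a = factor(c("5", "6", "5")),
                 b = factor(c("3", "2", "1")))

# df$a - df$b warns that '-' is not meaningful for factors and returns NA;
# converting via character first uses the stored values, not the level codes
as.numeric(as.character(df$a)) - as.numeric(as.character(df$b))  # 2 4 4
```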
Timings against an alternative approach that does the conversion only on the levels show it's faster when the levels are not unique to each element:
dat <- data.frame(f = sample(as.character(runif(10)), 10^4, replace = TRUE),
                  stringsAsFactors = TRUE)
library(microbenchmark)
microbenchmark(
  as.numeric(as.character(dat$f)),
  as.numeric(levels(dat$f))[dat$f],
  as.numeric(levels(dat$f)[dat$f]),
  times = 50
)
expr min lq median uq max
1 as.numeric(as.character(dat$f)) 7835865 7869228 7919699 7998399 9576694
2 as.numeric(levels(dat$f))[dat$f] 237814 242947 255778 270321 371263
3 as.numeric(levels(dat$f)[dat$f]) 7817045 7905156 7964610 8121583 9297819
Therefore, if `length(levels(dat$f))` is less than `length(dat$f)`, use `as.numeric(levels(dat$f))[dat$f]` for a substantial speed gain. If `length(levels(dat$f))` is approximately equal to `length(dat$f)`, there is no speed gain:
dat <- data.frame(f = as.character(runif(10^4)),
                  stringsAsFactors = TRUE)
library(microbenchmark)
microbenchmark(
  as.numeric(as.character(dat$f)),
  as.numeric(levels(dat$f))[dat$f],
  as.numeric(levels(dat$f)[dat$f]),
  times = 50
)
expr min lq median uq max
1 as.numeric(as.character(dat$f)) 7986423 8036895 8101480 8202850 12522842
2 as.numeric(levels(dat$f))[dat$f] 7815335 7866661 7949640 8102764 15809456
3 as.numeric(levels(dat$f)[dat$f]) 7989845 8040316 8122012 8330312 10420161
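Whichever form you choose, it's worth confirming that the fast and slow variants agree; a quick sanity check (seed and sizes here are arbitrary):

```r
set.seed(1)  # arbitrary seed, for reproducibility only
dat <- data.frame(f = sample(as.character(runif(10)), 100, replace = TRUE),
                  stringsAsFactors = TRUE)  # explicit in R >= 4.0

x <- as.numeric(as.character(dat$f))   # slow: converts every element
y <- as.numeric(levels(dat$f))[dat$f]  # fast: converts only the levels
identical(x, y)  # TRUE
```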

- Although, R is smart about sorting before factoring, so if they are whole numbers this problem is irrelevant. – Brandon Bertelsen Aug 08 '11 at 10:30
- @Brandon: Unless someone has used `relevel` or the integer sequence is not continuous. Assuming the level indices are the same as the level contents seems like a dangerous assumption to make. – Ari B. Friedman Aug 08 '11 at 10:57
- A tip: use rbenchmark instead of microbenchmark to get more readable output and relative speeds. – Joris Meys Aug 08 '11 at 11:40
- @Joris: I like the output of rbenchmark, but I thought microbenchmark was more accurate since it doesn't include some of the calling overhead that system.time() induces... – Ari B. Friedman Aug 08 '11 at 11:57
- Well, accurate is a relative concept here. Redo the analysis three times and each time you get different numbers. Accurate milliseconds are a good thing, but beyond that you get into randomness... – Joris Meys Aug 08 '11 at 12:07
- Well, I'd say less biased in expectation. Since mb subtracts an estimate of the overhead of the benchmarking itself, even though results are random for both, they will be unbiased (unless the estimates are biased) in mb vs. rb. But I'm kind of picking nits, since the numbers we're talking about are so small. Good presentation trumps precise timings in most use situations, and I really wish microbenchmark had better output (both `print.microbenchmark` and `plot.microbenchmark` could use substantial improvement). – Ari B. Friedman Aug 08 '11 at 12:30
You can define your own operators to do that; see `?Arith`. Without group generics, you can define your own binary operator `%-%`:
`%-%` <- function(factor1, factor2) {
  # put the code to calculate the difference of two
  # factors here (e.g. factor1 level cat - factor2 level mouse = ?)
}
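One possible body, assuming both factors store numbers in their levels (a sketch, not the only sensible definition):

```r
`%-%` <- function(factor1, factor2) {
  # convert each factor's levels to numeric, then index by the factor's codes
  as.numeric(levels(factor1))[factor1] - as.numeric(levels(factor2))[factor2]
}

factor(c(5, 6, 5)) %-% factor(c(3, 2, 1))  # 2 4 4
```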

You should double-check how you're pulling in the data first. If these are truly numeric columns, R should recognize this (Excel messes up sometimes). Either way, the column may be coerced to a factor because there are other undesirables in it. The responses that you've received so far haven't mentioned that as.numeric() only returns the level codes, meaning that you won't be performing the operation on the actual numbers that were converted to factors but rather on the level codes associated with each factor.
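A small made-up example makes that distinction concrete:

```r
f <- factor(c(10, 20, 30))
as.numeric(f)                # 1 2 3   -- the level codes
as.numeric(as.character(f))  # 10 20 30 -- the actual values
```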

You'll need to convert the factors to numeric vectors.
a <- factor(c(5,6,5))
b <- factor(c(3,2,1))
df <- data.frame(a, b)
# WRONG: Factors can't be subtracted.
df$a - df$b
# CORRECT: get the levels and subtract
as.numeric(levels(df$a)[df$a]) - as.numeric(levels(df$b)[df$b])

- -1 This assumes that a) your factor is ordered and b) that the data is interval-scaled. If that were the case, the data shouldn't be in a factor in the first place. – Andrie Aug 08 '11 at 10:23
- +1 as this is a better way to convert your factors than the as.numeric(as.character()) given in one of the other solutions. – Joris Meys Aug 08 '11 at 10:50
- Andrie: Does subtraction have a meaningful interpretation if the vectors are not ordered (granted, one might want to do a set intersection)? I suspect that there's a problem with data import which is causing the data to be factored in the first place. It's happened to me on several occasions. Then, of course, the right way to go is to de-factor the data and fix the import. – Janne Peltola Aug 08 '11 at 10:54
- @Joris: This is not the correct way to do it, but it looks similar to the correct approach. The call to `as.numeric` should wrap only the levels if you hope to achieve efficiency gains. See my answer for benchmarks. – Ari B. Friedman Aug 08 '11 at 11:11
- @gsk3: Thanks, I didn't know about the performance issues involved. Of course, your way is more efficient. – Janne Peltola Aug 08 '11 at 11:14