I am working in RStudio and am trying to develop a custom objective function for XGBoost. To make sure I have understood how the process works, I have tried to write an objective function that reproduces the built-in "binary:logistic" objective. However, my custom objective function yields significantly different results (often a lot worse).
Based on the examples in the XGBoost GitHub repo, my custom objective function looks like this:
# Custom objective function
logloss <- function(preds, dtrain){
  # Get the labels from the DMatrix
  labels <- getinfo(dtrain, "label")
  # Apply the logistic transform to the raw predictions
  preds <- 1 / (1 + exp(-preds))
  # Gradient and hessian of the log loss with respect to the raw score
  grad <- preds - labels
  hess <- preds * (1 - preds)
  return(list("grad" = grad, "hess" = hess))
}
Based on this Medium blog post, this seems to match what is implemented in the XGBoost binary objective.
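To double-check the maths rather than just the code, I also compared the analytic gradient and hessian to finite differences of the log loss on the margin scale. This snippet is just my own sanity check on a single toy margin and label, not something from the blog post:

# Sanity check: analytic vs. numeric gradient/hessian of the log loss
# with respect to the raw (pre-sigmoid) score, at one toy point
margin <- 0.37   # arbitrary raw score
label <- 1       # toy label
eps <- 1e-4
loss <- function(m){
  p <- 1 / (1 + exp(-m))
  -(label * log(p) + (1 - label) * log(1 - p))
}
p <- 1 / (1 + exp(-margin))
# Gradient: p - label should match the central difference
c(analytic = p - label,
  numeric = (loss(margin + eps) - loss(margin - eps)) / (2 * eps))
# Hessian: p * (1 - p) should match the second difference
c(analytic = p * (1 - p),
  numeric = (loss(margin + eps) - 2 * loss(margin) + loss(margin - eps)) / eps^2)

For me both pairs agree closely, so I am fairly confident the gradient and hessian formulas themselves are correct.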
Using some simple test data, my final training RMSE with the built-in objective is ~0.468, while with my custom objective it is ~0.72.
The code below can be used to generate the test data and reproduce the problem (after the script I have also included the small snippet I use to compare the two fitted models).
Can somebody explain why my code does not reproduce the behaviour of the "binary:logistic" objective? I am using the XGBoost R package v0.90.0.2.
library(data.table)
library(xgboost)
# Generate test data
generate_test_data <- function(n_rows = 1e5, feature_count = 5, train_fraction = 0.5){
  # Make targets (-1 or +1)
  test_data <- data.table(
    target = sign(runif(n = n_rows, min = -1, max = 1))
  )
  # Add feature columns. These are normally distributed and shifted by the target
  # in order to create a noisy signal
  for(feature in 1:feature_count){
    # Randomly choose the noise parameters for this feature
    mu <- runif(1, min = -1, max = 1)
    sdev <- runif(1, min = 5, max = 10)
    # Create the noisy signal
    test_data[, paste0("feature_", feature) := rnorm(
      n = n_rows, mean = mu, sd = sdev) * target + target]
  }
  # Split the data into test/train
  test_data[, index_fraction := .I / .N]
  split_data <- list(
    "train" = test_data[index_fraction < train_fraction],
    "test" = test_data[index_fraction >= train_fraction]
  )
  # Make a vector of feature names
  feature_names <- paste0("feature_", 1:feature_count)
  # Make the test/train matrices and labels
  split_data[["test_trix"]] <- as.matrix(split_data$test[, feature_names, with = FALSE])
  split_data[["train_trix"]] <- as.matrix(split_data$train[, feature_names, with = FALSE])
  split_data[["test_labels"]] <- as.logical(split_data$test$target + 1)
  split_data[["train_labels"]] <- as.logical(split_data$train$target + 1)
  return(split_data)
}
# Build the model
build_model <- function(split_data, objective){
  # Make the training DMatrix (also used for evaluation)
  train_dtrix <- xgb.DMatrix(
    data = split_data$train_trix, label = split_data$train_labels)
  # Train the model, evaluating RMSE on the training set each round
  model <- xgb.train(
    data = train_dtrix,
    watchlist = list(train = train_dtrix),
    nrounds = 5,
    objective = objective,
    eval_metric = "rmse"
  )
  return(model)
}
split_data <- generate_test_data()
cat("\nUsing built-in binary:logistic objective.\n")
test_1 <- build_model(split_data, "binary:logistic")
cat("\n\nUsing custom objective")
test_2 <- build_model(split_data, logloss)
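In case it helps, this is the snippet I run afterwards to compare what the two fitted boosters actually predict. It is my own addition on top of the script (it assumes test_1, test_2 and split_data from above) rather than part of the minimal reproduction:

# Compare the two fitted boosters by hand: predict on the training matrix
# and compute RMSE against the labels used to build the DMatrix
rmse <- function(pred, label) sqrt(mean((pred - label)^2))
pred_builtin <- predict(test_1, split_data$train_trix)
pred_custom <- predict(test_2, split_data$train_trix)
cat("\nBuilt-in objective, training RMSE:", rmse(pred_builtin, split_data$train_labels), "\n")
cat("Custom objective, training RMSE:", rmse(pred_custom, split_data$train_labels), "\n")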