
Taking a cue from the question xgboost xgb.dump tree coefficient.

Specifically, I want to know: if eta = 0.1 or 0.01, how would the probability calculation differ from the one in the answer provided there?

I want to make predictions using the tree dump.

My code is:

# Define the training label and feature matrix
library(xgboost)

y <- train_data$esc_ind
train_data <- as.matrix(train_data)
trainX <- as.matrix(train_data[, -1])

param <- list("objective" = "binary:logistic",
              "eval_metric" = "logloss",
              "eta" = 0.5,
              "max_depth" = 2,
              "colsample_bytree" = 0.8,
              "subsample" = 0.8,  # 0.75
              "alpha" = 1)

# Train xgboost for 2 rounds
bst <- xgboost(param = param, data = trainX, label = y, nrounds = 2)

# Write the feature map (genFMap is a helper that generates it), save the model,
# and dump the trees with their statistics
trainX1 <- data.frame(trainX)
mpg.fmap <- genFMap(trainX1, "xgboost.fmap")
xgb.save(bst, "xgboost.model")
xgb.dump(bst, "xgboost.model_6.txt", with.stats = TRUE, fmap = "xgboost.fmap")

The tree looks like:

booster[0]
0:[order.1<12.2496] yes=1,no=2,missing=2,gain=1359.61,cover=7215.25
    1:[access.1<0.196687] yes=3,no=4,missing=4,gain=3.19685,cover=103.25
        3:leaf=-0,cover=1
        4:leaf=0.898305,cover=102.25
    2:[team<6.46722] yes=5,no=6,missing=6,gain=753.317,cover=7112
        5:leaf=0.893333,cover=55.25
        6:leaf=-0.943396,cover=7056.75
booster[1]
0:[issu.1<6.4512] yes=1,no=2,missing=2,gain=794.308,cover=5836.81
    1:[team<3.23361] yes=3,no=4,missing=4,gain=18.6294,cover=67.9586
        3:leaf=0.609363,cover=21.4575
        4:leaf=1.28181,cover=46.5012
    2:[case<6.74709] yes=5,no=6,missing=6,gain=508.34,cover=5768.85
        5:leaf=1.15253,cover=39.2126
        6:leaf=-0.629773,cover=5729.64

Will the coefficient applied to the tree leaf scores in xgboost be 1 even when eta is chosen to be less than 1?

  • Please check my answer in the following link - may be useful - http://stackoverflow.com/questions/39858916/xgboost-how-to-get-probabilities-of-class-from-xgb-dump-multisoftprob-objecti/40632862#40632862 – Run2 Nov 16 '16 at 14:21

1 Answer


Actually this turns out to be quite practical; I had overlooked it earlier.

Using the tree structure above, one can compute the probability for each training example.

The parameter list was:

param <- list("objective" = "binary:logistic",
              "eval_metric" = "logloss",
              "eta" = 0.5,
              "max_depth" = 2, 
              "colsample_bytree" = .8,
              "subsample" = 0.8,
              "alpha" = 1)

For an instance that lands in booster[0], leaf 0-3 (value -0), the probability will be exp(-0) / (1 + exp(-0)).

And for an instance that lands in booster[0], leaf 0-3 plus booster[1], leaf 0-3, the probability will be exp(0 + 0.609363) / (1 + exp(0 + 0.609363)).

And so on, adding one leaf score per booster as the number of iterations increases.
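As a minimal sketch in R (leaf values copied from the dump above; the chosen leaves are just the ones from the example, and plogis is the inverse logit), the calculation is simply the sum of the dumped leaf scores passed through the logistic function:

leaf_b0 <- -0          # booster[0], leaf 3 (value from the dump)
leaf_b1 <- 0.609363    # booster[1], leaf 3 (value from the dump)

# After 1 boosting round only booster[0] contributes
p1 <- plogis(leaf_b0)             # exp(-0) / (1 + exp(-0)) = 0.5

# After 2 boosting rounds, sum the leaf scores and then apply the logistic
p2 <- plogis(leaf_b0 + leaf_b1)   # exp(0.609363) / (1 + exp(0.609363))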

I matched these values with R's predicted probabilities; they differ by about 10^(-7), probably because the leaf quality scores in the dump are truncated floating-point values.
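A sketch of how that check can be automated in R, assuming bst and trainX from the question are still in the workspace (predict(..., predleaf = TRUE) returns the leaf node index per tree, and xgb.model.dt.tree() exposes each node's Quality, i.e. the leaf score; this is only an illustration, not the exact code I used):

library(xgboost)
library(data.table)

# Node index of the leaf each row falls into, one column per booster
leaf_idx <- predict(bst, trainX, predleaf = TRUE)

# Per-node table of the model; Quality holds the leaf score for leaf nodes
tree_dt <- xgb.model.dt.tree(colnames(trainX), model = bst)

# Sum the leaf scores across boosters for row 1, then apply the logistic
row_score <- sum(sapply(seq_len(ncol(leaf_idx)), function(k)
  tree_dt[Tree == k - 1 & Node == leaf_idx[1, k], Quality]))

plogis(row_score)   # should match predict(bst, trainX)[1] up to ~1e-7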

This approach can serve as a production-level solution when boosted trees trained in R are used for prediction in a different environment.

Any comments on this will be highly appreciated.
