
Similar SO questions:

Catboost tutorials

Question

I have a binary classification problem. After modelling I get the test predictions y_pred, and I already have the true test labels y_true.

I would like to get the custom evaluation metric defined by the following equation:

profit = 400 * truePositive - 200 * falseNegative - 100 * falsePositive

Also, since higher profit is better, I would like to maximize this function rather than minimize it.

How do I get this eval_metric in CatBoost?

Using sklearn

import sklearn.metrics

def get_profit(y_true, y_pred):
    # confusion_matrix returns [[tn, fp], [fn, tp]] for binary labels
    tn, fp, fn, tp = sklearn.metrics.confusion_matrix(y_true, y_pred).ravel()
    profit = 400*tp - 200*fn - 100*fp
    return profit

scoring = sklearn.metrics.make_scorer(get_profit, greater_is_better=True)
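For reference, a scorer built this way can be plugged straight into cross-validation (this is only an illustrative sketch; the estimator and toy data below are placeholders, not part of my actual pipeline):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# toy data purely for illustration
X_demo, y_demo = make_classification(n_samples=500, random_state=0)
clf = LogisticRegression(max_iter=1000)

# each fold's score is the profit; greater_is_better=True means no sign flip is applied
print(cross_val_score(clf, X_demo, y_demo, cv=5, scoring=scoring))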

Using catboost

class ProfitMetric(object):
    def get_final_error(self, error, weight):
        return error / (weight + 1e-38)

    def is_max_optimal(self):
        return True

    def evaluate(self, approxes, target, weight):
        assert len(approxes) == 1
        assert len(target) == len(approxes[0])

        approx = approxes[0]

        error_sum = 0.0
        weight_sum = 0.0

        # <-- I don't know what goes here

        return error_sum, weight_sum

Question

How do I complete the custom eval metric in CatBoost?

UPDATE

My update so far

import numpy as np
import pandas as pd
import seaborn as sns
import sklearn.metrics

from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split

def get_profit(y_true, y_pred):
    tn, fp, fn, tp = sklearn.metrics.confusion_matrix(y_true,y_pred).ravel()
    profit = 400*tp - 200*fn - 100*fp
    return profit


class ProfitMetric:
    def is_max_optimal(self):
        return True # greater is better

    def evaluate(self, approxes, target, weight):
        assert len(approxes) == 1
        assert len(target) == len(approxes[0])

        approx = approxes[0]

        y_pred = np.rint(approx)
        y_true = np.array(target).astype(int)

        output_weight = 1 # weight is not used

        score = get_profit(y_true, y_pred)
 
        return score, output_weight

    def get_final_error(self, error, weight):
        return error


df = sns.load_dataset('titanic')
X = df[['survived','pclass','age','sibsp','fare']]
y = X.pop('survived')

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=100)


model = CatBoostClassifier(metric_period=50,
  n_estimators=200,
  eval_metric=ProfitMetric()
)

model.fit(X, y, eval_set=(X_test, y_test)) # this fails
BhishanPoudel
  • You linked custom loss, which is used for training, and eval metric, which is used for evaluation only and doesn't affect training. Which one are you interested in? – Sergey Bushmanov Dec 29 '20 at 20:42
  • @SergeyBushmanov The original dataset is about customer churn and I have defined a custom metric which calculates "profit" based on TP, TN, FP, FN of binary classification. I would like to directly optimize that metric "profit" instead of "auc"; how is that possible in CatBoost? – BhishanPoudel Dec 29 '20 at 21:37
  • Eval metric will not affect training. If you want your training to optimize (maximize) your custom metric, you need to (1) write a gradient and hess for your function to optimize (a sketch of that custom-objective interface follows these comments) or (2) find a readily available one that closely replicates yours – Sergey Bushmanov Dec 29 '20 at 21:39
  • @SergeyBushmanov Ok, I got that. Then I would only like to get the eval metric with the default loss function. Still, my eval_metric is not working. If you could guide me through an example, I would greatly appreciate it. – BhishanPoudel Dec 29 '20 at 21:42
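For reference, option (1) in the comment above refers to CatBoost's custom-objective interface, where a class exposes calc_ders_range and returns first and second derivatives per object. The following is only a rough logloss-shaped sketch of that interface, not the profit metric itself (the profit metric has no usable gradient, which is why an eval metric is the better fit here):

import numpy as np

class LoglossObjective:
    def calc_ders_range(self, approxes, targets, weights):
        # approxes are raw scores; return one (der1, der2) pair per object
        assert len(approxes) == len(targets)
        result = []
        for i in range(len(targets)):
            p = 1.0 / (1.0 + np.exp(-approxes[i]))   # sigmoid of the raw score
            der1 = targets[i] - p                     # first derivative of logloss
            der2 = -p * (1.0 - p)                     # second derivative
            if weights is not None:
                der1 *= weights[i]
                der2 *= weights[i]
            result.append((der1, der2))
        return result

# a custom objective would then be passed as loss_function, e.g.
# model = CatBoostClassifier(loss_function=LoglossObjective(), eval_metric=ProfitMetric())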

2 Answers


The main difference from yours is:

@staticmethod
def get_profit(y_true, y_pred):
    # y_pred arrives as raw log-odds: convert to probabilities, then to 0/1 labels
    y_pred = (expit(y_pred) > 0.5).astype(int)
    y_true = y_true.astype(int)
    #print("ACCURACY:",(y_pred==y_true).mean())
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    profit = 400*tp - 200*fn - 100*fp
    return profit

It's not obvious from the example you linked what the predictions are, but after inspecting it, it turns out CatBoost treats predictions internally as raw log-odds (hat tip @Ben). So, to properly use confusion_matrix, you need to make sure both y_true and y_pred are integer class labels. This is done via:

y_pred = (scipy.special.expit(y_pred) > 0.5).astype(int)
y_true = y_true.astype(int)
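For instance (numbers made up purely for illustration):

from scipy.special import expit
import numpy as np

raw = np.array([-2.3, 0.4, 1.7])        # raw log-odds coming from CatBoost
proba = expit(raw)                      # roughly [0.09, 0.60, 0.85]
labels = (proba > 0.5).astype(int)      # [0, 1, 1]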

So the full working code is:

import numpy as np
import seaborn as sns
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from scipy.special import expit

df = sns.load_dataset('titanic')
X = df[['survived','pclass','age','sibsp','fare']]
y = X.pop('survived')

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=100)

class ProfitMetric:
    
    @staticmethod
    def get_profit(y_true, y_pred):
        # y_pred arrives as raw log-odds: convert to probabilities, then to 0/1 labels
        y_pred = (expit(y_pred) > 0.5).astype(int)
        y_true = y_true.astype(int)
        #print("ACCURACY:",(y_pred==y_true).mean())
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
        profit = 400*tp - 200*fn - 100*fp
        return profit
    
    def is_max_optimal(self):
        return True # greater is better

    def evaluate(self, approxes, target, weight):            
        assert len(approxes) == 1
        assert len(target) == len(approxes[0])
        y_true = np.array(target).astype(int)
        approx = approxes[0]
        score = self.get_profit(y_true, approx)
        return score, 1

    def get_final_error(self, error, weight):
        return error

model = CatBoostClassifier(metric_period=50,
  n_estimators=200,
  eval_metric=ProfitMetric()
)

model.fit(X, y, eval_set=(X_test, y_test))
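Once fitted, the per-iteration value of the custom metric on the eval set can be read back with get_evals_result() (the exact dictionary keys depend on the run; printing the whole result shows them):

evals = model.get_evals_result()
print(evals)                                  # nested dict of per-iteration metric values
print(evals['validation']['ProfitMetric'])    # assuming the metric is reported under its class name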
Sergey Bushmanov
  • This runs perfectly. I did `print(model.get_evals_result())` and this gives me the profit. Thanks a lot. – BhishanPoudel Dec 29 '20 at 22:15
  • Why loop when you already have the confusion matrix? – Ben Reiniger Dec 29 '20 at 22:17
  • If we do tuple unpacking, it does not work (`tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()` does not work). Only the for-loop works (see the sketch after these comments). – BhishanPoudel Dec 29 '20 at 22:38
  • @BenReiniger There is a problem in catboost's inner workings: at some iterations, instead of `[549 0 342 0]` it returns `[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 16 90 287 126 30 0 0 5 82 110 129 16 0 0 0 0 0 0]`, meaning it's not a binary classifier anymore. This is why the unpacking fails. – Sergey Bushmanov Dec 29 '20 at 22:45
  • You might need to check that this is working as intended: is the `np.rint` always producing 0 or 1? The Accuracy example in OP's link takes an argmax, and the Logloss example seems to suggest that `approxes` is the log-odds output. – Ben Reiniger Dec 29 '20 at 23:02
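For completeness, the for-loop mentioned in the comments above might look something like this (a sketch, assuming y_true and y_pred are already 0/1 label arrays; it is not the exact code from the discussion):

tp = fn = fp = 0
for yt, yp in zip(y_true, y_pred):
    if yt == 1 and yp == 1:
        tp += 1
    elif yt == 1 and yp == 0:
        fn += 1
    elif yt == 0 and yp == 1:
        fp += 1
profit = 400*tp - 200*fn - 100*fp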

By way of example, I implemented a very simple metric.

It counts the number of times y_pred != y_true in a multi-class classifier.

import numpy as np

class CountErrors:
    '''Count of wrong predictions'''

    def is_max_optimal(self):
        return False  # lower is better

    def evaluate(self, approxes, target, weight):
        # approxes holds one row of raw scores per class; argmax over rows gives the predicted class
        y_pred = np.array(approxes).argmax(0)
        y_true = np.array(target)
        return sum(y_pred != y_true), 1

    def get_final_error(self, error, weight):
        return error

You can see it used if you run this code:

import numpy as np
import pandas as pd

from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split

class CountErrors:
    '''Count number of wrong predictions'''
    
    def is_max_optimal(self):
        return False # Lower is better

    def evaluate(self, approxes, target, weight):  
        
        y_pred = np.array(approxes).argmax(0)
        y_true = np.array(target)
                                    
        return sum(y_pred!=y_true), 1

    def get_final_error(self, error, weight):
        return error
    

df = pd.read_csv('https://raw.githubusercontent.com/mkleinbort/resource-datasets/master/abalone/abalone.csv')
y = df['sex']
X = df.drop(columns=['sex'])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=12)

model = CatBoostClassifier(metric_period=50, n_estimators=200, eval_metric=CountErrors())

model.fit(X, y, eval_set=(X_test, y_test))

Hope you can adapt this to your use-case.

Myccha
  • This is great and works, but I am looking for a way to get a custom score for binary classification. – BhishanPoudel Dec 29 '20 at 17:05
  • This should work with binary classification. Try replacing this line: `return sum(y_pred!=y_true), 1` with your custom metric (see the sketch below). – Myccha Jan 02 '21 at 00:09
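Following that suggestion, the evaluate method could be adapted along these lines for the binary profit metric (a sketch only; it assumes a binary model, so approxes has a single row of raw log-odds, and is_max_optimal would then need to return True since higher profit is better):

def evaluate(self, approxes, target, weight):
    # threshold raw log-odds at 0, which is equivalent to probability > 0.5
    y_pred = (np.array(approxes[0]) > 0).astype(int)
    y_true = np.array(target).astype(int)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return 400*tp - 200*fn - 100*fp, 1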