Custom Objective Tutorial
Perpetual allows you to define your own objective functions for optimization. This is useful when the standard loss functions (like SquaredLoss or LogLoss) do not fit your specific use case.
In this tutorial, we will demonstrate how to define a custom objective and use it with PerpetualBooster.
1. Imports and Data Preparation
We’ll use the California Housing dataset for this regression example.
[ ]:
import numpy as np
import pandas as pd
from perpetual import PerpetualBooster
from sklearn.datasets import fetch_california_housing
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
# Load data
data = fetch_california_housing()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = data.target
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
2. Defining a Custom Objective
A custom objective in Perpetual is a tuple of three functions:
loss(y, pred, weight, group): Computes the loss for each sample.
gradient(y, pred, weight, group): Computes the gradient and hessian for each sample. It should return a tuple (gradient, hessian). If the hessian is constant (e.g., 1.0 for SquaredLoss), return None to improve performance.
initial_value(y, weight, group): Computes the initial prediction for the model (usually the mean or median of the target).
Let’s implement a simple Squared Error objective as an example.
[ ]:
def custom_loss(y, pred, weight, group):
    return (y - pred) ** 2


def custom_gradient(y, pred, weight, group):
    # Gradient of (y - pred)^2 with respect to pred is 2 * (pred - y).
    # Perpetual handles the constant scaling, so (pred - y) is sufficient.
    grad = pred - y
    # The hessian is constant (1.0), so return None to improve performance.
    hess = None
    return grad, hess


def custom_init(y, weight, group):
    return np.mean(y)
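Before training, it can be worth sanity-checking the analytic gradient against a numerical one. The snippet below is a minimal sketch using NumPy only (the variable names are ours, and nothing here is Perpetual API). Because our gradient drops the constant factor of 2, we compare it against half of the finite-difference estimate.
[ ]:
# Compare the analytic gradient with a central finite difference.
rng = np.random.default_rng(0)
y_chk = rng.normal(size=5)
pred_chk = rng.normal(size=5)
eps = 1e-6

grad, _ = custom_gradient(y_chk, pred_chk, None, None)
numeric = (
    custom_loss(y_chk, pred_chk + eps, None, None)
    - custom_loss(y_chk, pred_chk - eps, None, None)
) / (2 * eps)

# Our gradient omits the constant factor 2, so compare against numeric / 2.
assert np.allclose(grad, numeric / 2, atol=1e-4)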
3. Training and Evaluation
Now we can pass our custom objective to the PerpetualBooster constructor.
[ ]:
# Initialize booster with custom objective
model = PerpetualBooster(objective=(custom_loss, custom_gradient, custom_init))
# Fit the model
model.fit(X_train, y_train)
# Make predictions
preds = model.predict(X_test)
# Evaluate
mse = mean_squared_error(y_test, preds)
print(f"Mean Squared Error with Custom Objective: {mse:.4f}")
4. Comparison with Standard Objective
Let’s compare this with the built-in SquaredLoss objective.
[ ]:
standard_model = PerpetualBooster(objective="SquaredLoss")
standard_model.fit(X_train, y_train)
standard_preds = standard_model.predict(X_test)
standard_mse = mean_squared_error(y_test, standard_preds)
print(f"Mean Squared Error with Standard Objective: {standard_mse:.4f}")
assert np.allclose(preds, standard_preds, atol=1e-5)
print("Results are identical!")