The canonical cost function aggregates the residuals into a single scalar value:

$$\Phi(x) \propto r(x)^\top r(x) = \sum_{i}^{N} r_i(x)^2$$

This sum-of-squares formulation is fundamental to least-squares optimization.
The basic sum of squared residuals:

$$\Phi_{\mathrm{SSE}}(x) = \sum_{i}^{N} r_i^2 = r^\top r$$

This can be represented in both array and scalar forms, making it compatible with all optimization algorithms.
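As a minimal sketch, the equivalence of the scalar and array forms can be checked directly with NumPy (the residual vector here is a made-up toy example, not output from any real fit):

```python
import numpy as np

# Toy residual vector, purely illustrative
r = np.array([1.0, -2.0, 3.0])

# Scalar form: sum of squared residuals
phi_scalar = np.sum(r**2)

# Array form: the optimizer works with the residual vector itself;
# the implied cost is the inner product r^T r
phi_array = r @ r

# Both forms yield the same cost, which is why SSE is compatible with
# scalar optimizers and residual-array (least-squares) solvers alike
print(phi_scalar, phi_array)  # 14.0 14.0
```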
Normalized by the number of data points:

$$\Phi_{\mathrm{MSE}}(x) = \frac{1}{N} \sum_{i}^{N} r_i^2$$
The square root of the MSE:

$$\Phi_{\mathrm{RMSE}}(x) = \sqrt{\frac{1}{N} \sum_{i}^{N} r_i^2}$$
RMSE cannot be represented in residual array form: because the square root is applied after the residuals are aggregated, there is no per-residual vector whose sum of squares reproduces the RMSE. This limits its compatibility with optimization algorithms that operate on residual arrays, such as least-squares solvers.
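A short NumPy sketch of this point (toy residuals, purely illustrative): the square root acts on the aggregated mean, not on the individual residuals, so RMSE only exists as a scalar.

```python
import numpy as np

# Toy residual vector, purely illustrative
r = np.array([1.0, -2.0, 3.0])

# MSE decomposes as a sum over residuals...
mse = np.mean(r**2)

# ...but the square root is applied AFTER aggregation, so RMSE does not:
# there is no vector s of "per-point residuals" with sum(s_i^2) == rmse
# in general, hence RMSE is scalar-only
rmse = np.sqrt(mse)
```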
Under the assumption of independent, identically distributed Gaussian errors, the MLE cost function is:

$$r_{\mathrm{MLE}}(x) = y_{\mathrm{model}} - y_{\mathrm{data}}$$

$$\Phi_{\mathrm{MLE}}(x) = \sum_{i}^{N} r_i^2 = r^\top r$$

This is equivalent to the sum squared error formulation, providing a probabilistic interpretation of least-squares fitting.
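The equivalence can be seen in one step. For i.i.d. Gaussian errors with variance $\sigma^2$ (a standard textbook derivation, not taken from the source), the negative log-likelihood is:

```latex
-\log L(x)
  = \frac{N}{2}\log\left(2\pi\sigma^2\right)
  + \frac{1}{2\sigma^2} \sum_{i}^{N} r_i(x)^2
```

The first term and the factor $\frac{1}{2\sigma^2}$ do not depend on $x$, so minimizing $-\log L(x)$ is equivalent to minimizing $\sum_{i}^{N} r_i^2 = r^\top r$, i.e. the SSE.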
```python
import ionworkspipeline as iwp

# Default: Sum Squared Error
objective = iwp.objectives.FittingObjective(
    options={"model": model, "data": data},
    cost=iwp.costs.SumSquaredError(),
)

# Mean Squared Error
objective = iwp.objectives.FittingObjective(
    options={"model": model, "data": data},
    cost=iwp.costs.MeanSquaredError(),
)

# Root Mean Squared Error (scalar-only)
objective = iwp.objectives.FittingObjective(
    options={"model": model, "data": data},
    cost=iwp.costs.RMSE(),
)
```
For most optimization algorithms, the sum squared error formulation provides the best compatibility and performance. Use MSE or RMSE when you need interpretable, scale-independent metrics.
The cost functions and usage examples above are not exhaustive. See the API reference for full details on costs and objectives.