Remember to Correct the Bias When Using Deep Learning for Regression!

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Documents

  • Full text

    Publisher's published version, 1.16 MB, PDF document

When training deep learning models for least-squares regression, we cannot expect that the training error residuals of the final model, selected after a fixed training time or based on performance on a hold-out data set, sum to zero. This can introduce a systematic error that accumulates if we are interested in the total aggregated performance over many data points (e.g., the sum of the residuals on previously unseen data). We suggest adjusting the bias of the machine learning model after training as a default post-processing step, which efficiently solves the problem. The severity of the error accumulation and the effectiveness of the bias correction are demonstrated in exemplary experiments.
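The correction described in the abstract amounts to shifting the model's output bias by the mean residual so that the residuals sum to zero. Below is a minimal sketch of that idea, assuming a PyTorch regressor whose final layer is a `torch.nn.Linear` with a bias term; the function name `correct_bias`, the toy network, and the toy data are illustrative and not taken from the paper.

```python
# Minimal sketch (assumption): post-hoc bias correction for a PyTorch regressor
# whose last layer is torch.nn.Linear. The helper name and toy data are hypothetical.
import torch


def correct_bias(model: torch.nn.Module, X: torch.Tensor, y: torch.Tensor,
                 final_layer: torch.nn.Linear) -> None:
    """Shift the output bias so that the residuals on (X, y) sum to zero."""
    model.eval()
    with torch.no_grad():
        # Mean residual of the trained model on the given data.
        residual_mean = (y - model(X).squeeze(-1)).mean()
        # Adding the mean residual to the output bias removes the systematic offset,
        # since the prediction is linear in the bias of the final layer.
        final_layer.bias += residual_mean


if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy regression data (hypothetical): targets are the sum of the inputs plus noise.
    X_train = torch.randn(256, 10)
    y_train = X_train.sum(dim=1) + 0.1 * torch.randn(256)

    net = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                              torch.nn.Linear(32, 1))
    # ... training would happen here; even after training, the residuals of the
    # selected model need not sum to zero ...

    correct_bias(net, X_train, y_train, final_layer=net[-1])
    with torch.no_grad():
        mean_res = (y_train - net(X_train).squeeze(-1)).mean().item()
    print("mean residual after correction:", mean_res)  # ~0 up to numerical precision
```

The same shift could equally be applied as a constant added to the predictions at inference time; updating the final layer's bias is simply a convenient way to bake the correction into the model.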

Original language: English
Journal: KI - Künstliche Intelligenz
Volume: 37
Issue number: 1
Pages (from-to): 33-40
ISSN: 0933-1875
DOI
Status: Published - 2023

Bibliographic note

Publisher Copyright:
© 2023, The Author(s).
