
DTU Findit

Interpreted as:

title:(Bayesian AND Leave-One-Out AND Cross-Validation AND for AND Large AND Data)


1 Preprint article

Bayesian leave-one-out cross-validation for large data

Model inference, such as model comparison, model checking, and model selection, is an important part of model development. Leave-one-out cross-validation (LOO) is a general approach for assessing the generalizability of a model, but unfortunately, LOO does not scale well to large datasets. We …

Year: 2019

Language: Undetermined

2 Conference paper

Bayesian Leave-One-Out Cross-Validation for Large Data

Andersen, Michael Riis; Magnusson, Måns; Jonasson, Johan; Vehtari, Aki

Proceedings of the 36th International Conference on Machine Learning, 2019, pp. 7505-7525

Model inference, such as model comparison, model checking, and model selection, is an important part of model development. Leave-one-out cross-validation (LOO-CV) is a general approach for assessing the generalizability of a model, but unfortunately, LOO-CV does not scale well to large datasets. We …

Year: 2019

Language: English

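The snippet above points at the scaling problem: exact LOO-CV needs a pointwise predictive term for every one of the n observations, and naive refitting needs n posterior fits. Below is a minimal NumPy sketch of plain importance-sampling LOO on a toy conjugate normal-mean model, fitting the posterior once and reusing the draws for every held-out point. The model, sample sizes, and estimator are illustrative assumptions; the sketch omits the Pareto smoothing and subsampling machinery developed in the papers listed here.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)

# Toy conjugate model (assumption, not from the paper):
#   y_i ~ Normal(mu, 1),  mu ~ Normal(0, 10^2)
# so the full-data posterior for mu is Normal(mu_n, tau_n2) in closed form.
n = 1_000
y = rng.normal(0.7, 1.0, size=n)
prior_var, lik_var = 100.0, 1.0
tau_n2 = 1.0 / (1.0 / prior_var + n / lik_var)
mu_n = tau_n2 * y.sum() / lik_var

# "Fit" once: S draws from the full-data posterior.
S = 4_000
mu_draws = rng.normal(mu_n, np.sqrt(tau_n2), size=S)

# Pointwise log-likelihood matrix of shape (S, n) -- the part that grows with n.
log_lik = (-0.5 * np.log(2 * np.pi * lik_var)
           - 0.5 * (y[None, :] - mu_draws[:, None]) ** 2 / lik_var)

# Plain importance-sampling LOO (no Pareto smoothing):
#   p(y_i | y_{-i}) is approximated by 1 / mean_s[ 1 / p(y_i | mu_s) ],
#   i.e. elpd_i = log(S) - logsumexp_s(-log_lik[s, i]).
elpd_i = np.log(S) - logsumexp(-log_lik, axis=0)
print(f"IS-LOO estimate of elpd_loo: {elpd_i.sum():.1f}")
```

Even with a single posterior fit, the (S, n) log-likelihood matrix and the per-observation estimates grow linearly with n, which is the cost the subsampling approaches in the records below aim to cut.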
3 Book chapter

Leave-One-Out Cross-Validation for Bayesian Model Comparison in Large Data

Recently, new methods for model assessment, based on subsampling and posterior approximations, have been proposed for scaling leave-one-out cross-validation (LOO) to large datasets. Although these methods work well for estimating predictive performance for individual models, they are less powerful …

Year: 2020

Language: English

4 Preprint article

Leave-One-Out Cross-Validation for Bayesian Model Comparison in Large Data

Recently, new methods for model assessment, based on subsampling and posterior approximations, have been proposed for scaling leave-one-out cross-validation (LOO) to large datasets. Although these methods work well for estimating predictive performance for individual models, they are less powerful …

Year: 2020

Language: Undetermined

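The abstracts of records 3 and 4 refer to subsampling and posterior approximations for scaling LOO to large data. The hedged sketch below shows the simplest version of that idea: compute IS-LOO terms for a random subsample of observations only, then scale up with an expansion estimator and report its sampling error. The toy model, subsample size, and simple-random-sampling estimator are assumptions for illustration; the listed papers use a more refined difference estimator combined with posterior approximations for model comparison.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(1)

# Same toy conjugate normal-mean model as in the previous sketch (an assumption),
# but with many more observations so that full IS-LOO would be expensive.
n, S, m = 100_000, 4_000, 200          # m = subsample size << n
y = rng.normal(0.7, 1.0, size=n)
prior_var, lik_var = 100.0, 1.0
tau_n2 = 1.0 / (1.0 / prior_var + n / lik_var)
mu_n = tau_n2 * y.sum() / lik_var
mu_draws = rng.normal(mu_n, np.sqrt(tau_n2), size=S)

# Compute IS-LOO terms only for a simple random subsample of observations.
idx = rng.choice(n, size=m, replace=False)
log_lik_sub = (-0.5 * np.log(2 * np.pi * lik_var)
               - 0.5 * (y[idx][None, :] - mu_draws[:, None]) ** 2 / lik_var)
elpd_sub = np.log(S) - logsumexp(-log_lik_sub, axis=0)

# Expansion (scale-up) estimator of the total elpd_loo and its subsampling error.
elpd_loo_hat = n * elpd_sub.mean()
se_subsampling = n * elpd_sub.std(ddof=1) / np.sqrt(m)
print(f"estimated elpd_loo: {elpd_loo_hat:.0f} +/- {se_subsampling:.0f}")
```

The per-observation work now scales with m rather than n, at the price of the extra subsampling variance reported as se_subsampling.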
