Bayesian (after Thomas Bayes) refers to methods in probability and statistics that involve quantifying uncertainty about parameter or latent variable estimates by incorporating both prior and observed information. Bayesian modeling, inference, optimization, and model comparison techniques are on topic. A programming element is expected; theoretical/methodological questions should go to https://stats.stackexchange.com.
Overview
Bayesian inference is a method of statistical inference that uses Bayes' theorem - named after Thomas Bayes (1702-1761) - to quantify the uncertainty of parameters or latent variables. The statement of Bayes' theorem in Bayesian inference is

    P(θ|d) = P(d|θ) P(θ) / P(d)
Here θ represents the parameters to be inferred and d the data. P(θ|d) is the posterior probability and P(d|θ) is the likelihood function. P(θ) is the prior: a function encoding previous beliefs about θ within a model appropriate for the data. P(d) is a normalization factor.
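As a minimal sketch of how these quantities combine, the example below (using made-up data: 7 heads in 10 coin flips, and a flat prior over a discrete grid of θ values) computes an unnormalized posterior as likelihood times prior and then normalizes it, which plays the role of dividing by P(d):

    import numpy as np

    # Illustrative data: 7 heads in 10 flips of a coin with unknown heads probability theta
    theta = np.linspace(0, 1, 101)        # grid of candidate parameter values
    prior = np.ones_like(theta)           # flat prior P(theta)
    prior /= prior.sum()

    heads, flips = 7, 10
    likelihood = theta**heads * (1 - theta)**(flips - heads)   # P(d|theta)

    unnormalized = likelihood * prior
    posterior = unnormalized / unnormalized.sum()              # normalization stands in for P(d)

    print(theta[np.argmax(posterior)])    # posterior mode, approximately 0.7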
The formula is used as an updating procedure: as more data become available, the posterior can be updated successively. In the first instance, the prior must be specified by the user. In later updates, the prior is usually taken to be the posterior from the previous update.
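The following sketch illustrates sequential updating with a conjugate Beta prior for a Bernoulli parameter; the batch sizes and counts are invented for illustration. With a Beta(a, b) prior and k successes in n trials, the posterior is Beta(a + k, b + n - k), so each update simply adds the new counts, and the posterior from one batch serves as the prior for the next:

    from scipy import stats

    a, b = 1.0, 1.0                  # Beta(1, 1), a uniform prior

    # First batch of data: 3 successes in 5 trials
    a, b = a + 3, b + (5 - 3)        # posterior Beta(4, 3)

    # Later batch: the previous posterior acts as the new prior
    a, b = a + 6, b + (10 - 6)       # posterior Beta(10, 7)

    print(stats.beta(a, b).mean())   # posterior mean, approximately 0.588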
References
The following threads contain lists of references:
The following journals are dedicated to research in Bayesian statistics:
- Bayesian Analysis (Open Access)
Tag usage
Questions tagged bayesian should be about implementation and programming problems, not about the statistical or theoretical properties of the technique. Consider whether your question might be better suited to Cross Validated, the Stack Exchange site for statistics, machine learning, and data analysis.