Word  Bayesian inference
Definitions

Bayesian inference

Physics
  • A technique of statistical inference that estimates the probability of an event in the light of the frequency with which the event has occurred previously, and updates that estimate as new evidence becomes available. It depends on Bayes’ theorem.


Mathematics
  • In Bayesian statistics, parameters have probability distributions, whereas in frequentist statistics parameters have fixed values. In Bayesian inference, a prior distribution is proposed for a parameter and, after further data are collected, Bayes’ theorem is used to calculate a posterior distribution in the light of the new data. Frequentist inference, in contrast, would use a hypothesis test to assign a p-value to the null hypothesis that the data came from a population with a suggested parameter value.
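The prior-to-posterior update via Bayes’ theorem can be sketched for the simplest case, a discrete set of candidate parameter values. This is an illustrative example, not from the source; the coin scenario and all numbers are invented.

```python
# Sketch (illustrative): Bayes' theorem applied to two candidate
# parameter values. A coin is either fair (theta = 0.5) or biased
# (theta = 0.8), with equal prior probability. We observe one head.

priors = {0.5: 0.5, 0.8: 0.5}          # p(theta): prior distribution
likelihood = {0.5: 0.5, 0.8: 0.8}      # f(head | theta)

# Posterior via Bayes' theorem: p(theta | data) proportional to
# f(data | theta) * p(theta), then normalised to sum to 1.
unnorm = {t: likelihood[t] * priors[t] for t in priors}
total = sum(unnorm.values())
posterior = {t: v / total for t, v in unnorm.items()}
# posterior[0.8] = 0.4 / 0.65, roughly 0.615
```

Observing a head shifts probability toward the biased coin; the posterior would in turn serve as the prior for the next observation.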


Statistics
  • An approach concerned with the consequences of modifying our previous beliefs as a result of receiving new data. By contrast with the ‘classical’ approach, which begins with a hypothesis test proposing a specific value for an unknown parameter, θ, Bayesian inference proposes a prior distribution (often simply called a prior), p(θ), for this parameter. Data x1, x2,…, xn are collected and the likelihood f(x1, x2,…, xn | θ) is calculated. Bayes’ theorem is then used to calculate the posterior distribution, g(θ | x1, x2,…, xn). The change from the prior to the posterior distribution reflects the information the data provide about the parameter value. For any particular event the initial probability is described as the prior probability and the subsequent probability as the posterior probability.
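The prior–likelihood–posterior pipeline described above can be computed directly on a discrete grid of θ values. A minimal sketch, assuming binomial data; the grid resolution and data are invented for illustration.

```python
# Grid approximation of g(theta | data) for binomial data:
# posterior proportional to likelihood times prior.
from math import comb

n, k = 10, 7                                 # data: 7 successes in 10 trials
grid = [i / 100 for i in range(1, 100)]      # candidate theta values
prior = [1 / len(grid)] * len(grid)          # uniform (non-informative) prior

# Likelihood f(x1,...,xn | theta) for k successes in n trials
like = [comb(n, k) * t**k * (1 - t) ** (n - k) for t in grid]

# Posterior: multiply pointwise, then normalise so it sums to 1
unnorm = [l * p for l, p in zip(like, prior)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]

# With a uniform prior, the posterior mode sits at the sample
# proportion k / n = 0.7
mode = grid[posterior.index(max(posterior))]
```

With a non-uniform prior, the same two lines of multiplication and normalisation show exactly how the data pull the prior toward the likelihood.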

    If nothing is known about the value of a parameter, then a non-informative prior is used—typically, this is a uniform distribution over the feasible set of values of the parameter. Another approach, the empirical Bayes method, utilizes the data to inform the prior distribution.

    In a similar way, if nothing is known about the underlying distribution, then the principle of indifference effectively states that all possible values should be assigned the same probability of occurrence. This is also called the principle of insufficient reason.

    Subjective probability measures the degree of belief an individual has in an uncertain proposition. This could form the basis for a prior distribution. Another term is personal probability, though this may be used to suggest that the person’s selected probability is misguided.

    Often, however, a more useful choice for a prior distribution is a member of a family of distributions chosen so that the posterior distribution is another member of the same family; the effect of the data can then be interpreted simply as a change in parameter values. Such a prior is called a conjugate prior.
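Conjugacy can be sketched with the standard Beta–Binomial pair (a textbook result, not from the source): for binomial data, a Beta(a, b) prior and k successes in n trials give a Beta(a + k, b + n − k) posterior, so the data’s entire effect is a shift in two parameters.

```python
# Beta-Binomial conjugate update: the posterior stays in the Beta family.
def beta_binomial_posterior(a, b, k, n):
    """Return the (a, b) parameters of the posterior Beta distribution
    after observing k successes in n trials."""
    return a + k, b + n - k

# Beta(2, 2) prior; observe 7 successes in 10 trials.
post_a, post_b = beta_binomial_posterior(2, 2, 7, 10)
posterior_mean = post_a / (post_a + post_b)   # 9 / 14
```

The prior parameters a and b act like counts of previously seen successes and failures, which is why the update is just addition.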

    Sometimes useful information is available. For example, an appropriate prior for the amount taken by a supermarket on a Saturday might be a normal distribution centred on the amount taken the previous Saturday. This would be an informative prior.
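The supermarket example can be sketched with the standard Normal–Normal conjugate update (the takings figures and variances below are invented for illustration): a Normal prior centred on last Saturday’s amount, combined with this Saturday’s observation, gives a Normal posterior.

```python
# Normal-Normal update for a mean with known sampling variance:
# posterior precision is the sum of the precisions, and the posterior
# mean is the precision-weighted average of prior mean and observation.
def normal_update(mu0, tau2, x, sigma2):
    """Posterior (mean, variance) for a Normal mean, given a
    N(mu0, tau2) prior and one observation x ~ N(mean, sigma2)."""
    post_var = 1.0 / (1.0 / tau2 + 1.0 / sigma2)
    post_mean = post_var * (mu0 / tau2 + x / sigma2)
    return post_mean, post_var

# Informative prior centred on last Saturday's takings (50,000);
# this Saturday's observed takings are 54,000.
mean, var = normal_update(50_000.0, 4e6, 54_000.0, 4e6)
# Equal precisions: posterior mean is midway (52,000), variance halves.
```

An informative prior with small variance would instead keep the posterior mean close to 50,000, showing how prior confidence weights the update.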

    Jeffreys argued that an appropriate prior should be unaffected by the way a model is expressed: this leads to the Jeffreys prior, which is proportional to √I(θ), where I(θ) is the Fisher information. Since it is only the relative sizes of the prior values that matter, those values need not sum or integrate to 1. Such a prior is called an improper prior.
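A standard worked case (not from the source) is the Bernoulli(θ) model, where the Fisher information is I(θ) = 1/(θ(1 − θ)), so the Jeffreys prior is proportional to θ^(−1/2)(1 − θ)^(−1/2), the shape of a Beta(1/2, 1/2) distribution.

```python
# Jeffreys prior for a Bernoulli(theta) model, up to a constant.
from math import sqrt

def fisher_information_bernoulli(theta):
    """I(theta) = 1 / (theta * (1 - theta)) for a single Bernoulli trial."""
    return 1.0 / (theta * (1.0 - theta))

def jeffreys_prior_unnormalised(theta):
    """Jeffreys prior is proportional to sqrt(I(theta)); only relative
    sizes matter, so no normalising constant is applied."""
    return sqrt(fisher_information_bernoulli(theta))

# The prior places more weight near the boundaries than at theta = 0.5.
ratio = jeffreys_prior_unnormalised(0.1) / jeffreys_prior_unnormalised(0.5)
```

Here the prior happens to be proper (it integrates to π before normalisation); for other models, such as a location parameter on the whole real line, the same recipe yields a genuinely improper prior.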

    In the Dempster–Shafer theory of evidence (suggested by Dempster in 1967 and later developed by his research student, Glenn Shafer) the Bayesian approach is developed to handle events with imprecisely known probabilities. The theory uses concepts termed ‘belief’ and ‘plausibility’ as lower and upper bounds on event probabilities.
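The belief/plausibility bounds can be sketched with a tiny mass assignment (the frame and numbers are invented for illustration): mass is assigned to subsets of the frame of discernment, Bel(A) sums the masses of subsets of A, and Pl(A) sums the masses of subsets intersecting A, so Bel(A) ≤ Pl(A).

```python
# Dempster-Shafer belief and plausibility from a basic mass assignment.
# Mass on the whole frame {"rain", "sun"} is evidence left uncommitted.
masses = {
    frozenset({"rain"}): 0.5,
    frozenset({"sun"}): 0.2,
    frozenset({"rain", "sun"}): 0.3,
}

def belief(event):
    """Sum of masses of all subsets wholly contained in the event."""
    return sum(m for s, m in masses.items() if s <= event)

def plausibility(event):
    """Sum of masses of all subsets that intersect the event."""
    return sum(m for s, m in masses.items() if s & event)

rain = frozenset({"rain"})
# Bel(rain) = 0.5 and Pl(rain) = 0.5 + 0.3 = 0.8: the probability of
# rain is only pinned down to the interval [0.5, 0.8].
```

A precise Bayesian probability corresponds to the special case where every mass sits on a singleton, collapsing the interval to a point.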


Economics
  • An approach to hypothesis testing that involves making an assessment of which of two hypotheses, the null (H0) or the alternative (H1), has a higher probability of being correct. First, prior probabilities of each of the hypotheses being correct, P(H0) and P(H1), are assumed, and the prior odds ratio, P(H0)/P(H1), is formed. Then, based on the prior density functions and the likelihood functions of the data conditioned on each of the hypotheses, the prior odds ratio is modified to form a posterior odds ratio. In contrast to the classical approach, it is not necessary to accept or reject each hypothesis. If needed, such a decision can be made by minimizing the expected loss from making a wrong decision, using some specified loss function, where the expectations are calculated with respect to the posterior probabilities on each hypothesis.
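The odds-ratio mechanics above reduce to one multiplication: posterior odds = prior odds × Bayes factor, where the Bayes factor is the ratio of the likelihoods of the data under the two hypotheses. A minimal sketch with invented numbers:

```python
# Posterior odds ratio for H0 versus H1.
prior_h0, prior_h1 = 0.5, 0.5          # P(H0), P(H1): equal priors
like_h0, like_h1 = 0.02, 0.08          # f(data | H0), f(data | H1)

prior_odds = prior_h0 / prior_h1       # P(H0) / P(H1) = 1
bayes_factor = like_h0 / like_h1       # data favour H1 by a factor of 4
posterior_odds = prior_odds * bayes_factor     # 0.25

# If a decision is required, convert odds to a posterior probability:
post_h0 = posterior_odds / (1.0 + posterior_odds)   # P(H0 | data) = 0.2
```

As the entry notes, no accept/reject decision is forced; the posterior odds can simply be reported, or fed into an expected-loss calculation if a decision is needed.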


Copyright © 2000-2023 Sciref.net All Rights Reserved
Beijing ICP License No. 2021023879 · Updated: 2024/11/6 9:48:31