To make this work, you need prior information about how likely the event is before you see any new data. Bayesian statisticians typically express this with p(A), the prior probability that A occurs, and p(B), the probability that B occurs. If you choose different priors you will get different results, because the answer depends on both the data and the assumptions you have made. To get a Bayesian estimate of A given B, first make sure that p(B), the probability of the evidence, is greater than zero; then calculate the posterior p(A|B) = p(B|A) p(A) / p(B).
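To see this concretely, here is a minimal sketch in Python (the prior and likelihood values, and the posterior helper, are invented for illustration) of computing p(A|B) from a prior and the probability of the evidence:

```python
def posterior(p_a, p_b_given_a, p_b_given_not_a):
    """Bayes' theorem: p(A|B) = p(B|A) * p(A) / p(B).

    p(B) is expanded with the law of total probability:
    p(B) = p(B|A) * p(A) + p(B|not A) * (1 - p(A)).
    """
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
    if p_b == 0:
        raise ValueError("p(B) must be greater than zero")
    return p_b_given_a * p_a / p_b

# Illustrative numbers: prior p(A) = 0.3, p(B|A) = 0.8, p(B|not A) = 0.2.
print(posterior(0.3, 0.8, 0.2))  # ~0.63: seeing B raises the estimate of A
```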
This way of calculating Bayesian estimates relies on a model that lets you combine information obtained in different ways and estimate the probabilities of A and B together. In other words, a Bayesian statistician has to specify a model that combines evidence from different sources into estimates of both probabilities.
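For example, combining evidence from two independent sources is just repeated application of Bayes' theorem: the posterior after the first source becomes the prior for the second. A minimal sketch, with invented likelihoods:

```python
# Combine two independent pieces of evidence B1 and B2 about the same event A.
# All numbers are illustrative assumptions.
def update(prior, p_evidence_given_a, p_evidence_given_not_a):
    p_evidence = p_evidence_given_a * prior + p_evidence_given_not_a * (1 - prior)
    return p_evidence_given_a * prior / p_evidence

p_a = 0.3                    # starting prior p(A)
p_a = update(p_a, 0.8, 0.2)  # update on evidence from the first source
p_a = update(p_a, 0.6, 0.4)  # update on evidence from the second source
print(p_a)                   # ~0.72: both sources together strengthen belief in A
```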
Bayesian statisticians use many models to build their estimates, but two popular ones are the conditional random variable model and the random equation model. The conditional model is used when you have a set of joint A/B data and want to know whether the Bayesian estimate of A given B fits that data well; the random equation model is used when you have separate A and B data and want the posterior probability of A given B that the data support.
In the conditional model, you look at the A/B data and ask: what is the chance that you would have made the same decision if the data had come up tails as if they had come up heads? If those odds are high, the data fit both possibilities about equally well, so they do not tell the two apart and the Bayesian estimate of A given B stays close to the prior; if the odds are low, the data fit one possibility much better than the other, and the estimate moves accordingly.
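One way to picture that comparison is to compute how likely the same observed flips are under two competing coins and take the ratio. A minimal sketch, with the coin biases invented for illustration:

```python
# How well do two hypotheses about a coin explain the same data?
def likelihood(p_heads, n_heads, n_tails):
    return (p_heads ** n_heads) * ((1 - p_heads) ** n_tails)

n_heads, n_tails = 8, 2  # illustrative data: 8 heads, 2 tails

like_heads_biased = likelihood(0.7, n_heads, n_tails)  # hypothesis: coin favours heads
like_tails_biased = likelihood(0.3, n_heads, n_tails)  # hypothesis: coin favours tails

# A ratio near 1 means the data do not tell the hypotheses apart;
# a ratio far from 1 means the data fit one hypothesis much better.
print(like_heads_biased / like_tails_biased)  # ~161: strongly favours the heads-biased coin
```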
Then you model the odds as follows. If you have joint data on A and B, you use the conditional model with the same prior throughout, and the conditional probability is given by the formula p(A|B) = p(A and B) / p(B). If you have no information linking them, you treat the data as independent, in which case observing B tells you nothing new and the Bayesian estimate reduces to the prior, p(A|B) = p(A). The posterior is then obtained by dividing the count of observations in which both A and B occurred by the total number of observations in which B occurred.
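A minimal sketch of that count-based calculation, with the counts invented for illustration:

```python
# Estimate p(A|B) from raw counts: observations where both A and B occurred,
# divided by the total number of observations in which B occurred.
n_total = 200      # illustrative sample size
n_b = 80           # observations where B occurred
n_a_and_b = 60     # observations where both A and B occurred

p_b = n_b / n_total              # p(B)
p_a_and_b = n_a_and_b / n_total  # p(A and B)
print(p_a_and_b / p_b)           # p(A|B) = p(A and B) / p(B) = 0.75
```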
The Bayesian model lets you get a better fit to the data, especially when you start from a strong prior and the data pull the estimate in a different direction. A lot of Bayesian calculations are built on this basic model. You may also assume that if the data come up tails, you would have leaned toward tails anyway had the prior favoured tails. If the posterior ends up below the prior, that means the data have come up tails, so to speak, and count as evidence against A; if it ends up above the prior, the data count as evidence for A, and vice versa.
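One way to make that direction-of-pull idea precise is to work in log-odds, where a negative value simply means the balance of evidence now favours not-A. A minimal sketch, with the prior and likelihood ratio invented for illustration:

```python
import math

# Posterior log-odds = prior log-odds + log likelihood ratio.
def log_odds(p):
    return math.log(p / (1 - p))

prior = 0.5              # illustrative prior p(A): no initial lean either way
likelihood_ratio = 0.25  # illustrative p(B|A) / p(B|not A); below 1 is evidence against A

post_log_odds = log_odds(prior) + math.log(likelihood_ratio)
post = 1 / (1 + math.exp(-post_log_odds))
print(post_log_odds, post)  # about -1.39 and 0.2: negative log-odds, posterior below the prior
```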
Random variables are the basic building blocks of the Bayesian method: each unknown quantity is treated as a random variable with its own prior distribution, which is what makes it possible to fit the data well. A person who is not familiar with Bayes can still use this kind of model, but it is much easier to estimate the probabilities, and to get the posterior distributions right, once they understand Bayes' theorem. Bayesian statisticians often use simple random variables as examples when explaining the method to students who are not yet fully comfortable with Bayesian theory.