Core Principles of Probability and Statistical Inference

Fundamental Theorems in Probability

The Law of Large Numbers

The Law of Large Numbers is a theorem in probability that describes the behavior of the average of a sequence of random variables as the number of variables grows. It gives sufficient conditions under which the sample mean converges to the mean of the expectations of the random variables involved.
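
A minimal sketch in Python of this convergence, using a fair six-sided die whose expectation is 3.5 (the die and the sample sizes are chosen only for illustration):

import random

# Running sample mean of repeated fair-die rolls; by the Law of Large
# Numbers it should approach the expectation E[X] = 3.5 as n grows.
random.seed(0)
total = 0
for n in range(1, 100_001):
    total += random.randint(1, 6)
    if n in (10, 100, 1_000, 10_000, 100_000):
        print(f"n = {n:>7}: sample mean = {total / n:.4f}")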

Chebyshev's Theorem

Chebyshev's Theorem (Chebyshev's inequality) gives an upper bound on the probability that a random variable deviates from its mean by a given amount: for any k > 0, P(|X - μ| ≥ kσ) ≤ 1/k², where μ is the mean and σ the standard deviation. The bound holds for any distribution with finite variance.
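
A minimal sketch in Python that checks the bound empirically on a skewed distribution (the exponential distribution and the sample size are arbitrary choices for illustration):

import random
import statistics

# Draw from a skewed distribution and compare the observed tail fraction
# beyond k standard deviations with the Chebyshev bound 1/k**2.
random.seed(1)
samples = [random.expovariate(1.0) for _ in range(100_000)]
mu = statistics.fmean(samples)
sigma = statistics.pstdev(samples)
for k in (1.5, 2, 3):
    frac = sum(abs(x - mu) >= k * sigma for x in samples) / len(samples)
    print(f"k = {k}: observed tail = {frac:.4f}, bound = {1 / k**2:.4f}")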

Bernoulli's Theorem

Bernoulli's Theorem is a special case of the Law of Large Numbers. It states that the relative frequency of an event converges to the probability p of that event as the number of independent repetitions of the experiment grows.
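
A minimal sketch in Python of this convergence of relative frequency to p (the value p = 0.3 is assumed only for the demonstration):

import random

# Count "successes" in repeated independent trials; the relative frequency
# should approach the true probability p as the number of trials grows.
random.seed(2)
p = 0.3  # assumed true success probability (illustrative)
successes = 0
for n in range(1, 100_001):
    successes += random.random() < p
    if n in (100, 10_000, 100_000):
        print(f"n = {n:>7}: relative frequency = {successes / n:.4f}")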

The Central Limit Theorem

The Central Limit Theorem states that, under quite general conditions, the distribution of the sum of independent random variables, once suitably standardized, approaches a normal distribution as the number of variables grows large.
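
A minimal sketch in Python: standardized sums of independent uniform variables should fall within 1 and 2 standard deviations of zero at roughly the normal rates, about 68% and 95% (the uniform summands, n = 30, and the trial count are arbitrary choices):

import math
import random

# Standardize sums of n Uniform(0, 1) variables, which have mean 0.5 and
# variance 1/12, and check how often |Z| <= 1 and |Z| <= 2.
random.seed(3)
n, trials = 30, 20_000
mu, var = 0.5, 1 / 12
z = [(sum(random.random() for _ in range(n)) - n * mu) / math.sqrt(n * var)
     for _ in range(trials)]
for k in (1, 2):
    frac = sum(abs(v) <= k for v in z) / trials
    print(f"P(|Z| <= {k}) is about {frac:.3f}")  # expect about 0.683 and 0.954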

Key Statistical Concepts

Sampling Techniques

There are two ways to select samples from a population: non-random (or judgment) sampling and random (or probability) sampling. In the latter, every element of the population has a chance of being selected for the sample. A judgment sample is chosen on the basis of someone's experience with the population, and is sometimes used as a preliminary guide for deciding how to take a random sample later. Judgment samples can inform statistical analysis, but probability samples are required for rigorous inference.
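
A minimal sketch in Python of simple random sampling, the basic form of probability sampling (the population here is just made-up unit labels):

import random

# Every element of the population has the same chance of being drawn,
# which is what makes this a probability sample.
random.seed(4)
population = list(range(1, 1001))         # hypothetical population of 1000 units
sample = random.sample(population, k=50)  # simple random sample without replacement
print(sample[:10])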

Mean Squared Error (MSE)

MSE is the average of the squared differences between the values predicted by a model and the actual values. The formula is: MSE = Σ (pᵢ - rᵢ)² / N, where pᵢ is the predicted value, rᵢ is the real value, and N is the sample size.
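
A minimal sketch in Python of this formula (the input values are invented for illustration):

def mean_squared_error(predicted, real):
    # MSE = sum of (p_i - r_i)**2 over all i, divided by N.
    return sum((p - r) ** 2 for p, r in zip(predicted, real)) / len(real)

print(mean_squared_error([3.0, 4.0], [1.0, 2.0]))  # (2**2 + 2**2) / 2 = 4.0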

Estimator

An estimator is a statistic (i.e., a function of the sample) used to estimate an unknown population parameter. For example, if we want to know the average price of an article (the unknown parameter), we can collect observations of the price of that article at various sites (the sample), and the arithmetic mean of those observations can be used as an estimate of the average price. Consistency, meaning that the estimator converges to the true parameter value as the sample size grows, is a desirable property for an estimator.
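
A minimal sketch in Python of this example (the observed prices are invented):

import statistics

# The arithmetic mean of the sampled prices serves as an estimate of the
# unknown population average price.
observed_prices = [19.99, 21.50, 20.75, 18.90, 22.10]
print(f"estimated average price: {statistics.fmean(observed_prices):.2f}")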

Parametric Hypothesis

A parametric hypothesis is a conjecture about the values of the parameters of a given probability law. For example, the conjecture that a coin lands heads with probability p = 0.5 is a parametric hypothesis about the parameter of a Bernoulli distribution.

Hypothesis Testing

Hypothesis testing is a decision rule that leads us to accept or reject the null hypothesis. The rule is established on the basis of the probabilities of making errors, of which there are two types (a sketch after this list illustrates the decision rule and the first kind of error):

  • Type I Error: Rejecting the null hypothesis when it is actually true.
  • Type II Error: Accepting the null hypothesis when it is actually false.
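
A minimal sketch in Python of such a decision rule and its Type I error: a two-sided z-test of H0: mu = 0 at level alpha = 0.05, run many times on data for which H0 is actually true (known sigma = 1; the sample size and trial count are arbitrary). The observed rejection rate is the probability of a Type I error, which is also the level of significance discussed below:

import math
import random

# Generate samples under H0, apply the rejection rule |z| > 1.96, and
# count how often the true null hypothesis is wrongly rejected.
random.seed(5)
z_crit, n, trials = 1.96, 50, 10_000  # 1.96 is the two-sided cutoff for alpha = 0.05
rejections = 0
for _ in range(trials):
    data = [random.gauss(0, 1) for _ in range(n)]   # H0 is true here
    z = (sum(data) / n) * math.sqrt(n)              # z = sample mean / (sigma/sqrt(n))
    rejections += abs(z) > z_crit
print(f"estimated Type I error rate: {rejections / trials:.3f}")  # about 0.05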

Power of the Test

The power of the test is a function that assigns to each point of the parameter space the probability of rejecting the null hypothesis; at points where the null hypothesis is false, it is the probability of correctly rejecting it.
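
A minimal sketch in Python of the power at one point of the parameter space: the same z-test as in the previous sketch, but with data generated from mu = 0.5, so the null hypothesis is false and every rejection is a correct one (mu = 0.5 and the other constants are illustrative):

import math
import random

# Generate samples under the alternative mu = 0.5 and count how often the
# false null hypothesis H0: mu = 0 is correctly rejected.
random.seed(6)
z_crit, n, trials = 1.96, 50, 10_000
mu_true = 0.5
rejections = 0
for _ in range(trials):
    data = [random.gauss(mu_true, 1) for _ in range(n)]
    z = (sum(data) / n) * math.sqrt(n)
    rejections += abs(z) > z_crit
print(f"estimated power at mu = {mu_true}: {rejections / trials:.3f}")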

Level of Significance

The level of significance is the probability of committing a Type I error, i.e., the accepted risk of rejecting a true null hypothesis.
