**Why do I need to learn about probability and statistics?**

Probability and statistics are fundamental tools for understanding many modern theories and techniques such as artificial intelligence, machine learning, deep learning, data mining, security, digital image processing, and natural language processing.

**What can I do after finishing learning about probability and statistics?**

You will be prepared to learn the modern theories and techniques needed to create security, machine learning, data mining, image processing, or natural language processing software.

**That sounds useful! What should I do now?**

Please read

– the book: Dimitri P. Bertsekas and John N. Tsitsiklis (2008). *Introduction to Probability*. Athena Scientific, or

– the book: Hossein Pishro-Nik (2014). *Introduction to Probability, Statistics, and Random Processes*. Kappa Research, LLC.

Alternatively, please read these notes, then watch

– the MIT 6.041SC course – Probabilistic Systems Analysis and Applied Probability, Fall 2011 (Lecture Notes), and

– the MIT RES.6-012 course – Introduction to Probability, Spring 2018 (Lecture Notes).

Probability and statistics are difficult topics, so you may need to *study them 2 or 3 times* using different sources to actually master the concepts. For example, you may audit the Coursera course Probability & Statistics for Machine Learning & Data Science to get more examples and intuition about the core concepts.

**Terminology Review:**

- Sample Space (Ω): Set of possible outcomes.
- Event: Subset of the sample space.
- Probability Law: Law specified by giving the probabilities of all possible outcomes.
- Probability Model = Sample Space + Probability Law.
- Probability Axioms: Nonnegativity: P(A) ≥ 0; Normalization: P(Ω)=1; Additivity: If A ∩ B = Ø, then P(A ∪ B)= P(A)+ P(B).
- Conditional Probability: P(A|B) = P (A ∩ B) / P(B).
- Multiplication Rule.
- Total Probability Theorem.
- Bayes’ Rule: Given P(Aᵢ) (initial “beliefs”) and P(B|Aᵢ), find P(Aᵢ|B) (revised “beliefs”, given that B occurred): P(Aᵢ|B) = P(Aᵢ)P(B|Aᵢ) / P(B).
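A minimal sketch of Bayes’ rule in Python, foreshadowing the spam-detection example below; all of the probabilities are made-up numbers for illustration:

```python
# Revise the prior belief P(spam) after observing the word "lottery".
# Bayes' rule: P(A|B) = P(A)·P(B|A) / P(B), with P(B) from total probability.

def bayes_posterior(prior, likelihood, likelihood_given_not):
    """P(A|B) = P(A)·P(B|A) / (P(A)·P(B|A) + P(Aᶜ)·P(B|Aᶜ))."""
    evidence = prior * likelihood + (1 - prior) * likelihood_given_not
    return prior * likelihood / evidence

p_spam = 0.2        # prior: P(spam) -- made-up number
p_word_spam = 0.5   # P("lottery" | spam) -- made-up number
p_word_ham = 0.01   # P("lottery" | not spam) -- made-up number

posterior = bayes_posterior(p_spam, p_word_spam, p_word_ham)
print(round(posterior, 4))  # the evidence sharply revises the prior upward
```

Even a modest prior gets revised dramatically when the observed word is much more likely under one hypothesis than the other.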
- The Monty Hall Problem: 3 doors, behind which are two goats and a car.
- The Spam Detection Problem: “Lottery” word in spam emails.
- Independence of Two Events: P(B|A) = P(B) or P(A ∩ B) = P(A) · P(B).
- The Birthday Problem: P(Same Birthday of 23 People) > 50%.
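The Birthday Problem result can be checked exactly, assuming 365 equally likely birthdays; a short sketch:

```python
# Exact probability that at least two of n people share a birthday:
# 1 minus the probability that all n birthdays are distinct.

def p_shared_birthday(n):
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365
    return 1 - p_all_distinct

print(round(p_shared_birthday(23), 4))  # just over 0.5
```

23 is the smallest group size for which the probability exceeds 50%.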
- The Naive Bayes Model: “Naive” means features independence assumption.
- Discrete Uniform Law: P(A) = Number of elements of A / Total number of sample points = |A| / |Ω|
- Basic Counting Principle: r stages, nᵢ choices at stage i, number of choices = n₁ n₂ · · · nᵣ
- Permutations: Number of ways of ordering n elements. With no repetition, filling n slots gives n · (n − 1) · (n − 2) · · · 1 = n! choices.
- Combinations: Number of k-element subsets of a given n-element set: C(n, k) = n! / (k!(n − k)!).
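These counting formulas are available directly in Python’s standard library; a quick sketch (the 4 × 3 × 2 outfit numbers are made up for illustration):

```python
import math

# Basic counting principle: r stages with nᵢ choices each multiply.
outfits = 4 * 3 * 2          # e.g. 4 shirts, 3 trousers, 2 pairs of shoes

# Permutations: orderings of n distinct elements = n!
print(math.perm(5))          # 5! = 120

# Combinations: k-element subsets of an n-element set = C(n, k)
print(math.comb(5, 2))       # 10
```

`math.perm` and `math.comb` require Python 3.8+.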
- Binomial Probabilities: P(any particular sequence) = p^(# heads) · (1 − p)^(# tails).
- Random Variable: A function from the sample space to the real numbers. It is not random. It is not a variable. It is a function: X: Ω → ℝ. A random variable is used to model the whole experiment at once.
- Discrete Random Variables.
- Probability Mass Function: P(X = 𝑥) or Pₓ(𝑥): A function from the values of X to [0, 1] that gives the probability that X equals 𝑥. The PMF gives probabilities: 0 ≤ PMF ≤ 1, and all the values of the PMF must sum to 1. The PMF is used to model a discrete random variable.

- Bernoulli Random Variable (Indicator Random Variable): X: Ω → {1, 0}. Only *2 outcomes*: 1 and 0. p(1) = p and p(0) = 1 − p.
- Binomial Random Variable: X = Number of successes in *n trials*, e.g. X = Number of heads in n independent coin tosses.
- Binomial Probability Mass Function: C(n, k) pᵏ(1 − p)ⁿ⁻ᵏ.
- Geometric Random Variable: X = Number of coin tosses until first head.
- Geometric Probability Mass Function: (1 − p)ᵏ⁻¹p.
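The binomial and geometric PMFs translate directly into code; a sketch using a fair coin:

```python
import math

def binomial_pmf(k, n, p):
    """P(k successes in n trials) = C(n, k) p^k (1-p)^(n-k)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def geometric_pmf(k, p):
    """P(first head on toss k) = (1-p)^(k-1) p."""
    return (1 - p)**(k - 1) * p

# Sanity check: a PMF must sum to 1 over its support (here n = 10, p = 0.5).
print(round(sum(binomial_pmf(k, 10, 0.5) for k in range(11)), 10))  # 1.0
print(geometric_pmf(3, 0.5))  # tail, tail, head with a fair coin: 0.125
```

Summing a PMF over its whole support is a cheap way to catch off-by-one errors in the formula.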
- Expectation: E[X] = Sum of xpₓ(x).
- Let Y=g(X): E[Y] = E[g(X)] = Sum of g(x)pₓ(x). Caution: E[g(X)] ≠ g(E[X]) in general.
- Variance: var(X) = E[(X−E[X])²].
- var(aX)=a²var(X).
- If X and Y are independent: var(X + Y) = var(X) + var(Y). Caution: without independence, var(X + Y) ≠ var(X) + var(Y) in general.
- Standard Deviation: Square root of var(X).
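Expectation, variance, and standard deviation can all be computed directly from a PMF; a sketch using a fair six-sided die:

```python
# E[X] = Σ x·pₓ(x); var(X) = E[(X − E[X])²]; σ = √var(X).

pmf = {x: 1 / 6 for x in range(1, 7)}  # fair die

mean = sum(x * p for x, p in pmf.items())
var = sum((x - mean) ** 2 * p for x, p in pmf.items())
std = var ** 0.5

print(round(mean, 4))  # 3.5
print(round(var, 4))   # 35/12 ≈ 2.9167
```

Working the definitions by hand on a small PMF like this is a good way to internalize them before reaching for a library.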
- Conditional Probability Mass Function: P(X=x|A).
- Conditional Expectation: E[X|A].
- Joint Probability Mass Function: Pₓᵧ(x,y) = P(X=x, Y=y) = P((X=x) and (Y=y)).
- Marginal Probability Mass Function: Pₓ(x) = Σᵧ Pₓᵧ(x, y).
- Total Expectation Theorem: E[X] = Σᵧ Pᵧ(y) E[X | Y = y].
- Independent Random Variables: P(X=x, Y=y)=P(X=x)·P(Y=y).
- Expectation of Multiple Random Variables: E[X + Y + Z] = E[X] + E[Y] + E[Z].
- Binomial Random Variable: X = Sum of Bernoulli Random Variables.
- The Hat Problem.
- Continuous Random Variables.
- Probability Density Function: fₓ(𝑥), with P(a ≤ X ≤ b) = area under fₓ between a and b. *(a ≤ X ≤ b)* means the X function produces a real number value *within the [a, b] range*. Programming language: X(outcome) = 𝑥, where a ≤ 𝑥 ≤ b. The PDF does NOT give probabilities and does NOT have to be less than 1; it gives probabilities per unit length. The total area under the PDF must be 1. The PDF is used to describe the probability of the random variable falling within a given range of values.
- Cumulative Distribution Function: Fₓ(b) = P(X ≤ b). *(X ≤ b)* means the X function produces a real number value *within the (−∞, b] range*. Programming language: X(outcome) = 𝑥, where 𝑥 ≤ b.
- Continuous Uniform Random Variables: fₓ(x) = 1/(b − a) if a ≤ x ≤ b, otherwise fₓ(x) = 0.
- Normal Random Variable, Gaussian Distribution, Normal Distribution: Fitting bell shaped data.
- Chi-Squared Distribution: Modelling communication noise.
- Sampling from a Distribution: The process of drawing a random value (or set of values) from a probability distribution.
- Joint Probability Density Function.
- Conditional Probability Density Function.
- Marginal Probability Density Function.
- Derived Distributions.
- Convolution: A mathematical operation on two functions (f and g) that produces a third function.
- Covariance.
- Correlation Coefficient.
- Conditional Expectation: E[X | Y = y] = Sum of xpₓ|ᵧ(x|y). If Y is unknown then E[X | Y] is a random variable, i.e. a function of Y. So E[X | Y] also has its expectation and variance.
- Law of Iterated Expectations: E[E[X | Y]] = E[X].
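The Law of Iterated Expectations can be verified exactly on a small discrete example; the joint probabilities below are made up for illustration:

```python
# Check E[E[X | Y]] = E[X] on a 2×2 joint PMF.

joint = {  # (x, y): P(X=x, Y=y) -- made-up numbers
    (0, 0): 0.1, (0, 1): 0.2,
    (1, 0): 0.3, (1, 1): 0.4,
}

e_x = sum(x * p for (x, y), p in joint.items())

# Marginal P(Y=y), then E[X | Y = y] = Σₓ x·P(X=x, Y=y) / P(Y=y).
p_y = {}
for (x, y), p in joint.items():
    p_y[y] = p_y.get(y, 0) + p
e_x_given_y = {y: sum(x * p for (x, yy), p in joint.items() if yy == y) / p_y[y]
               for y in p_y}

# Average the conditional expectations over P(Y=y).
iterated = sum(p_y[y] * e_x_given_y[y] for y in p_y)

print(round(e_x, 10), round(iterated, 10))  # both 0.7
```

Note that E[X | Y] here really is a function of Y, as the list item above says: it takes a different value for each y.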
- Conditional Variance: var(X | Y) is a function of Y.
- Law of Total Variance: var(X) = E[var(X | Y)] + var(E[X | Y]).
- Bernoulli Process: A sequence of independent Bernoulli trials. At each trial i: P(Xᵢ=1) = p, P(Xᵢ=0) = 1 − p.
- Poisson Process.
- Markov Chain.
- Population: N.
- Sample: n.
- Random Sampling.
- Population Mean: μ.
- Sample Mean: x̄.
- Population Proportion: p.
- Sample Proportion: p̂.
- Population Variance: σ².
- Sample Variance: s².
- Markov’s Inequality: P(X ≥ a) ≤ E(X)/a (X ≥ 0, a > 0).
- Chebyshev’s Inequality: P(|X – E(X)| ≥ a) ≤ var(X)/a².
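Chebyshev’s bound can be compared with the exact probability on a small PMF; a sketch using a fair die:

```python
# Chebyshev: P(|X − E[X]| ≥ a) ≤ var(X)/a², here checked against the
# exact tail probability for a fair six-sided die with a = 2.

pmf = {x: 1 / 6 for x in range(1, 7)}
mean = sum(x * p for x, p in pmf.items())             # 3.5
var = sum((x - mean) ** 2 * p for x, p in pmf.items())  # 35/12

a = 2.0
exact = sum(p for x, p in pmf.items() if abs(x - mean) >= a)  # X ∈ {1, 6}
bound = var / a ** 2

print(round(exact, 4), round(bound, 4))  # the exact value never exceeds the bound
```

The bound is loose (here roughly double the exact probability), which is typical: Chebyshev trades tightness for complete generality.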
- Weak Law of Large Numbers: The average of the samples will get closer to the population mean as the sample **size** (not the number of samples) increases.
- Central Limit Theorem: The distribution of sample means approximates a normal distribution as the sample **size** (not the number of samples) gets larger, regardless of the population’s distribution.
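A simulation makes the Central Limit Theorem concrete; this sketch draws samples from an exponential distribution (mean 1, heavily skewed, nothing like a normal) with a fixed seed:

```python
import random
import statistics

random.seed(0)
n = 40          # sample size
trials = 2000   # number of samples drawn

# Each entry is the mean of one sample of size n from Exponential(1).
sample_means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
                for _ in range(trials)]

print(round(statistics.fmean(sample_means), 3))  # near the population mean 1.0
print(round(statistics.stdev(sample_means), 3))  # near σ/√n = 1/√40 ≈ 0.158
```

Plotting a histogram of `sample_means` would show the familiar bell shape, even though the underlying exponential distribution is strongly skewed.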
- Sampling Distributions: Distribution of Sample Mean, Distribution of Sample Proportion, Distribution of Sample Variance.
- Point Estimate: A single number, calculated from a *sample*, that estimates a *parameter* of the *population*.
- Maximum Likelihood Estimation: Given data, the maximum likelihood estimate (MLE) for the *parameter* p is the *value* of p that maximizes the likelihood P(data | p). P(data | p) is the *likelihood function*. For continuous distributions, we use the probability density function to define the likelihood.
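For Bernoulli data the MLE has a closed form, p̂ = k/n (the sample proportion); a sketch confirming this with a brute-force grid search over candidate values of p (the observations are made up):

```python
# Likelihood of Bernoulli data: P(data | p) = p^k (1-p)^(n-k),
# where k = number of ones out of n observations.

data = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]   # made-up observations
k, n = sum(data), len(data)

# Grid search: pick the candidate p with the highest likelihood.
candidates = [i / 1000 for i in range(1, 1000)]
mle = max(candidates, key=lambda p: p ** k * (1 - p) ** (n - k))

print(mle, k / n)  # the grid search lands on the closed-form answer k/n
```

In practice one maximizes the log likelihood instead (see the next item): it turns products into sums and avoids floating-point underflow for large n.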
- Log Likelihood: The natural log of the likelihood function.
- Frequentists: Assume no prior belief, the goal is to find the model that most likely generated observed data.
- Bayesians: Assume prior belief, the goal is to update prior belief based on observed data.
- Maximum A Posteriori (MAP): Good for instances when you have limited data or strong prior beliefs. Wrong priors, wrong conclusions. MAP with uninformative priors is just MLE.
- Margin of Error: A *bound* that we can confidently place on the difference between an estimate of something and the true value.
- Significance Level: α, the *probability* that the event *could have occurred* by chance.
- Confidence Level: 1 − α, a *measure* of how confident we are in a given margin of error.
- Confidence Interval: A *95% confidence interval (CI) of the mean* is __a range__ with an upper and lower number calculated from a sample. Because the true population mean is unknown, this range describes *possible values* that the mean __could be__. If *multiple samples* were drawn from the same population and a 95% CI calculated for each sample, we would expect the population mean to be found within 95% of these CIs.
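A minimal sketch of a 95% CI for the mean with known σ, using x̄ ± z·σ/√n; the sample data and σ are made-up numbers, and z ≈ 1.96 is the standard 95% normal critical value:

```python
import math

sample = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2, 5.1, 4.9]  # made-up measurements
sigma = 0.2   # assumed known population standard deviation
z = 1.96      # normal critical value for 95% confidence

n = len(sample)
x_bar = sum(sample) / n
margin = z * sigma / math.sqrt(n)   # margin of error

print(round(x_bar - margin, 3), round(x_bar + margin, 3))  # (lower, upper)
```

When σ is unknown (the usual case), the sample standard deviation s replaces σ and the critical value comes from the *t*-distribution instead.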
- z-score: the number of standard deviations from the mean value of the reference population.
- Confidence Interval with Unknown *σ*: Use the sample standard deviation s and the *t*-distribution instead of the normal distribution.
- Confidence Interval for Proportions.
- Hypothesis: A statement about a population developed for the purpose of testing.
- Hypothesis Testing.
- Null Hypothesis (H₀): A statement about the value of a population parameter; it contains the equal sign.
- Alternate Hypothesis (H₁): A statement that is accepted if the sample data provide sufficient evidence that the __null hypothesis is false__; it never contains the equal sign.
- Type I Error: Reject the null hypothesis when it is true.
- Type II Error: Do not reject the null hypothesis when it is false.
- Significance Level, α: The maximum probability of rejecting the null hypothesis when it is true.
- Test Statistic: A *number*, calculated from *samples*, used to find out if your data could have occurred under the __null hypothesis__.
- Right-Tailed Test: The alternative hypothesis states that the true value of the parameter specified in the null hypothesis is greater than the null hypothesis claims.
- Left-Tailed Test: The alternative hypothesis states that the true value of the parameter specified in the null hypothesis is less than the null hypothesis claims.
- Two-Tailed Test: The alternative hypothesis does not specify a direction, i.e. it states only that the null hypothesis is wrong.
- p-value: The probability of obtaining test results at least as extreme as the result actually observed, under the assumption that the null hypothesis is correct. μ₀ is assumed to be known and H₀ is assumed to be true.
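A sketch of a two-tailed one-sample z-test (σ assumed known), computing the p-value with the standard normal CDF via `math.erf`; the sample numbers are made up:

```python
import math

def normal_cdf(z):
    """Standard normal CDF Φ(z), via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu_0 = 100    # H₀: μ = 100
sigma = 15    # assumed known population standard deviation
n = 36
x_bar = 106   # observed sample mean -- made-up number

# z-score of the sample mean under H₀, then the two-tailed p-value.
z = (x_bar - mu_0) / (sigma / math.sqrt(n))
p_value = 2 * (1 - normal_cdf(abs(z)))

print(round(z, 2), round(p_value, 4))  # reject H₀ at α = 0.05 if p < α
```

Here the p-value falls below 0.05, so at that significance level the sample mean would be considered too extreme to have plausibly occurred under H₀.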
- Decision Rules: If H₀ is true, then an acceptable x̄ must fall in the (1 − α) region.
- Critical Value or k-value: A value on a test distribution that is used to decide whether the null hypothesis should be rejected or not.
- Power of a Test: The probability of rejecting the null hypothesis when it is false; in other words, it is the probability of avoiding a type II error.
- *t*-Distribution.
- T-Statistic.
- *t*-Tests: Unknown σ, use the T-Statistic.
- Independent Two-Sample *t*-Tests.
- Paired *t*-Tests.
- A/B Testing: A methodology for comparing two variations (A/B) that uses *t*-Tests for statistical analysis and making a decision.
- Model Building: X = a·S + W, where X: output, S: “signal”, a: parameters, W: noise. Know S, assume W, observe X, find a.
- Inferring: X = a·S + W. Know a, assume W, observe X, find S.
- Hypothesis Testing: X = a·S + W. Know a, observe X, find S. S can take *one* of a few possible values.
- Estimation: X = a·S + W. Know a, observe X, find S. S can take an *unlimited* number of possible values.
- Bayesian Inference can be used for both Hypothesis Testing and Estimation by leveraging Bayes’ rule. The output is a posterior distribution; a single answer can be the Maximum a posteriori probability (MAP) estimate or the Conditional Expectation.
- Least Mean Squares Estimation of Θ based on X.
- Classical Inference can be used for both Hypothesis Testing and Estimation.

After finishing learning about probability and statistics please click Topic 20 – Discrete Mathematics to continue.