INTRODUCTORY STATISTICS, 6/E

Neil A. Weiss

FORMULAS

NOTATION

The following notation is used on this card:

  n = sample size               σ = population stdev
  x̄ = sample mean               d = paired difference
  s = sample stdev              p̂ = sample proportion
  Qj = jth quartile             p = population proportion
  N = population size           O = observed frequency
  µ = population mean           E = expected frequency

CHAPTER 3 Descriptive Measures

• Sample mean: x̄ = Σx/n
• Range: Range = Max − Min
• Sample standard deviation:
  s = √[Σ(x − x̄)²/(n − 1)]  or  s = √[(Σx² − (Σx)²/n)/(n − 1)]
• Interquartile range: IQR = Q₃ − Q₁
• Lower limit = Q₁ − 1.5·IQR,  Upper limit = Q₃ + 1.5·IQR
• Population mean (mean of a variable): µ = Σx/N
• Population standard deviation (standard deviation of a variable):
  σ = √[Σ(x − µ)²/N]  or  σ = √(Σx²/N − µ²)
• Standardized variable: z = (x − µ)/σ
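The Chapter 3 measures translate directly into a few lines of code. The sketch below is illustrative only (the data values are invented) and uses Python's standard library; note that quartile conventions vary slightly between texts and software, so statistics.quantiles may not match a hand computation exactly.

```python
import statistics

# Hypothetical sample data, for illustration only.
x = [3, 5, 6, 7, 8, 9, 11, 14, 15, 22]

xbar = statistics.mean(x)               # x̄ = Σx/n
s = statistics.stdev(x)                 # s = √[Σ(x − x̄)²/(n − 1)]
rng = max(x) - min(x)                   # Range = Max − Min

# statistics.quantiles with n=4 returns [Q1, Q2, Q3].
q1, q2, q3 = statistics.quantiles(x, n=4)
iqr = q3 - q1                           # IQR = Q3 − Q1
lower_limit = q1 - 1.5 * iqr
upper_limit = q3 + 1.5 * iqr

print(xbar, s, rng, iqr, lower_limit, upper_limit)
```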
CHAPTER 4 Probability Concepts

• Probability for equally likely outcomes:
  P(E) = f/N,
  where f denotes the number of ways event E can occur and
  N denotes the total number of outcomes possible.
• Special addition rule: P(A or B or C or ···) = P(A) + P(B) + P(C) + ···
  (A, B, C, ... mutually exclusive)
• Complementation rule: P(E) = 1 − P(not E)
• General addition rule: P(A or B) = P(A) + P(B) − P(A & B)
• Conditional probability rule: P(B | A) = P(A & B)/P(A)
• General multiplication rule: P(A & B) = P(A) · P(B | A)
• Special multiplication rule: P(A & B & C & ···) = P(A) · P(B) · P(C) ···
  (A, B, C, ... independent)
• Rule of total probability:
  P(B) = Σ P(Aj) · P(B | Aj)   (sum over j = 1, ..., k)
  (A1, A2, ..., Ak mutually exclusive and exhaustive)
• Bayes's rule:
  P(Ai | B) = P(Ai) · P(B | Ai) / Σ P(Aj) · P(B | Aj)   (sum over j = 1, ..., k)
  (A1, A2, ..., Ak mutually exclusive and exhaustive)
• Factorial: k! = k(k − 1) ··· 2 · 1
• Permutations rule: mPr = m!/(m − r)!
• Special permutations rule: mPm = m!
• Combinations rule: mCr = m!/[r!(m − r)!]
• Number of possible samples: NCn = N!/[n!(N − n)!]
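As a quick numerical check of the total-probability and Bayes formulas, and of the counting rules, here is a minimal sketch. The priors and conditional probabilities are invented for illustration; math.perm and math.comb (Python 3.8+) implement the permutations and combinations rules directly.

```python
import math

# Hypothetical priors P(Aj) and conditionals P(B | Aj) for three
# mutually exclusive and exhaustive events A1, A2, A3.
prior = [0.5, 0.3, 0.2]
cond = [0.10, 0.25, 0.40]

# Rule of total probability: P(B) = Σ P(Aj)·P(B | Aj)
p_b = sum(p * c for p, c in zip(prior, cond))

# Bayes's rule: P(A1 | B) = P(A1)·P(B | A1) / P(B)
p_a1_given_b = prior[0] * cond[0] / p_b

# Counting rules: mPr, mCr, and the number of possible samples NCn.
m, r = 10, 3
print(math.perm(m, r))       # mPr = m!/(m − r)!
print(math.comb(m, r))       # mCr = m!/[r!(m − r)!]
print(math.comb(50, 5))      # NCn with N = 50, n = 5
print(round(p_b, 4), round(p_a1_given_b, 4))
```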
CHAPTER 5 Discrete Random Variables

• Mean of a discrete random variable X: µ = Σ x·P(X = x)
• Standard deviation of a discrete random variable X:
  σ = √[Σ (x − µ)²·P(X = x)]  or  σ = √[Σ x²·P(X = x) − µ²]
• Factorial: k! = k(k − 1) ··· 2 · 1
• Binomial coefficient: C(n, x) = n!/[x!(n − x)!]
• Binomial probability formula:
  P(X = x) = C(n, x) · p^x · (1 − p)^(n−x),
  where n denotes the number of trials and p denotes the success probability.
• Mean of a binomial random variable: µ = np
• Standard deviation of a binomial random variable: σ = √[np(1 − p)]
• Poisson probability formula: P(X = x) = e^(−λ) · λ^x/x!
• Mean of a Poisson random variable: µ = λ
• Standard deviation of a Poisson random variable: σ = √λ
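A short sketch of the binomial and Poisson formulas above, using only the standard library; the parameter values (n = 10, p = 0.3, λ = 2.5, x = 4) are arbitrary examples.

```python
import math

# Binomial: n trials, success probability p (values chosen for illustration).
n, p = 10, 0.3
x = 4
binom_pmf = math.comb(n, x) * p**x * (1 - p)**(n - x)   # C(n, x) p^x (1 − p)^(n−x)
binom_mean = n * p                                      # µ = np
binom_sd = math.sqrt(n * p * (1 - p))                   # σ = √[np(1 − p)]

# Poisson with rate λ (again an arbitrary example value).
lam = 2.5
pois_pmf = math.exp(-lam) * lam**x / math.factorial(x)  # e^(−λ) λ^x / x!
pois_mean, pois_sd = lam, math.sqrt(lam)

print(round(binom_pmf, 4), binom_mean, round(binom_sd, 4))
print(round(pois_pmf, 4), pois_mean, round(pois_sd, 4))
```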
CHAPTER 7 The Sampling Distribution of the Sample Mean

• Mean of the variable x̄: µx̄ = µ
• Standard deviation of the variable x̄: σx̄ = σ/√n
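These two facts are easy to check by simulation. The sketch below uses an invented population and sample size, draws many samples with replacement, and compares the mean and standard deviation of the simulated x̄ values with µ and σ/√n.

```python
import math
import random
import statistics

random.seed(1)
# A hypothetical population: the integers 1..100.
population = list(range(1, 101))
mu = statistics.mean(population)
sigma = statistics.pstdev(population)

n = 25
xbars = [statistics.mean(random.choices(population, k=n)) for _ in range(5000)]

print(round(mu, 2), round(statistics.mean(xbars), 2))                      # µx̄ ≈ µ
print(round(sigma / math.sqrt(n), 2), round(statistics.stdev(xbars), 2))   # σx̄ ≈ σ/√n
```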
CHAPTER 8 Confidence Intervals for One Population Mean

• Standardized version of the variable x̄:
  z = (x̄ − µ)/(σ/√n)
• z-interval for µ (σ known, normal population or large sample):
  x̄ ± zα/2 · σ/√n
• Margin of error for the estimate of µ: E = zα/2 · σ/√n
• Sample size for estimating µ:
  n = (zα/2 · σ/E)²,
  rounded up to the nearest whole number.
• Studentized version of the variable x̄:
  t = (x̄ − µ)/(s/√n)
• t-interval for µ (σ unknown, normal population or large sample):
  x̄ ± tα/2 · s/√n
  with df = n − 1.
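A minimal sketch of the z- and t-intervals, assuming an invented sample, a 95% confidence level, and a pretend known σ; scipy.stats is assumed available and is used only for the critical values (a table lookup works just as well).

```python
import math
from statistics import mean, stdev
from scipy.stats import norm, t   # used only for the critical values

# Hypothetical sample data and a 95% confidence level.
x = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3, 12.2, 11.9, 12.4, 12.0]
n, xbar, s = len(x), mean(x), stdev(x)
alpha = 0.05

# z-interval for µ (σ assumed known; here we pretend σ = 0.3).
sigma = 0.3
z_crit = norm.ppf(1 - alpha / 2)
z_me = z_crit * sigma / math.sqrt(n)          # margin of error E
print(xbar - z_me, xbar + z_me)

# t-interval for µ (σ unknown), df = n − 1.
t_crit = t.ppf(1 - alpha / 2, n - 1)
t_me = t_crit * s / math.sqrt(n)
print(xbar - t_me, xbar + t_me)

# Sample size needed for a target margin of error E, rounded up.
E = 0.1
print(math.ceil((z_crit * sigma / E) ** 2))
```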
CHAPTER 9 Hypothesis Tests for One Population Mean

• z-test statistic for H₀: µ = µ₀ (σ known, normal population or large sample):
  z = (x̄ − µ₀)/(σ/√n)
• t-test statistic for H₀: µ = µ₀ (σ unknown, normal population or large sample):
  t = (x̄ − µ₀)/(s/√n)
  with df = n − 1.
• Wilcoxon signed-rank test statistic for H₀: µ = µ₀ (symmetric population):
  W = sum of the positive ranks
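The one-sample t statistic is a one-liner. The sketch below reuses a hypothetical sample with the null value µ₀ = 12 and gets a two-tailed P-value from scipy.stats (assumed available; a t-table gives the same answer).

```python
import math
from statistics import mean, stdev
from scipy.stats import t   # only for the P-value

# Hypothetical data and null hypothesis H0: µ = 12.
x = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3, 12.2, 11.9, 12.4, 12.0]
mu0 = 12.0
n = len(x)

t_stat = (mean(x) - mu0) / (stdev(x) / math.sqrt(n))   # t = (x̄ − µ0)/(s/√n)
df = n - 1
p_value = 2 * t.sf(abs(t_stat), df)                    # two-tailed P-value
print(round(t_stat, 3), df, round(p_value, 3))
```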
CHAPTER 10 Inferences for Two Population Means
• Pooled sample standard deviation:
  sp = √[((n₁ − 1)s₁² + (n₂ − 1)s₂²)/(n₁ + n₂ − 2)]
• Pooled t-test statistic for H₀: µ₁ = µ₂ (independent samples, normal
  populations or large samples, and equal population standard deviations):
  t = (x̄₁ − x̄₂)/[sp·√(1/n₁ + 1/n₂)]
  with df = n₁ + n₂ − 2.
• Pooled t-interval for µ₁ − µ₂ (independent samples, normal populations
  or large samples, and equal population standard deviations):
  (x̄₁ − x̄₂) ± tα/2 · sp·√(1/n₁ + 1/n₂)
  with df = n₁ + n₂ − 2.
• Degrees of freedom Δ for nonpooled t-procedures:
  Δ = (s₁²/n₁ + s₂²/n₂)² / [(s₁²/n₁)²/(n₁ − 1) + (s₂²/n₂)²/(n₂ − 1)],
  rounded down to the nearest integer.
• Nonpooled t-test statistic for H₀: µ₁ = µ₂ (independent samples, and
  normal populations or large samples):
  t = (x̄₁ − x̄₂)/√(s₁²/n₁ + s₂²/n₂)
  with df = Δ.
• Nonpooled t-interval for µ₁ − µ₂ (independent samples, and normal
  populations or large samples):
  (x̄₁ − x̄₂) ± tα/2 · √(s₁²/n₁ + s₂²/n₂)
  with df = Δ.
• Mann–Whitney test statistic for H₀: µ₁ = µ₂ (independent samples and
  same-shape populations):
  M = sum of the ranks for sample data from Population 1
• Paired t-test statistic for H₀: µ₁ = µ₂ (paired sample, and normal
  differences or large sample):
  t = d̄/(sd/√n)
  with df = n − 1.
• Paired t-interval for µ₁ − µ₂ (paired sample, and normal differences
  or large sample):
  d̄ ± tα/2 · sd/√n
  with df = n − 1.
• Paired Wilcoxon signed-rank test statistic for H₀: µ₁ = µ₂ (paired
  sample and symmetric differences):
  W = sum of the positive ranks
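The nonpooled (Welch-type) procedure is the most formula-heavy item above, so here is a sketch that computes the test statistic and the Δ degrees of freedom from two invented independent samples.

```python
import math
from statistics import mean, stdev

# Two hypothetical independent samples.
x1 = [23.1, 25.4, 24.8, 26.0, 22.9, 25.1, 24.3]
x2 = [21.7, 22.9, 23.5, 22.1, 24.0, 21.9, 23.2, 22.6]
n1, n2 = len(x1), len(x2)
v1, v2 = stdev(x1) ** 2 / n1, stdev(x2) ** 2 / n2      # s1²/n1 and s2²/n2

t_stat = (mean(x1) - mean(x2)) / math.sqrt(v1 + v2)    # nonpooled t statistic

# Degrees of freedom Δ, rounded down to the nearest integer.
delta = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
df = math.floor(delta)

print(round(t_stat, 3), df)
```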
CHAPTER 11 Inferences for Population Standard Deviations

• χ²-test statistic for H₀: σ = σ₀ (normal population):
  χ² = [(n − 1)/σ₀²]·s²
  with df = n − 1.
• χ²-interval for σ (normal population):
  √[(n − 1)/χ²α/2] · s   to   √[(n − 1)/χ²1−α/2] · s
  with df = n − 1.
• F-test statistic for H₀: σ₁ = σ₂ (independent samples and normal populations):
  F = s₁²/s₂²
  with df = (n₁ − 1, n₂ − 1).
• F-interval for σ₁/σ₂ (independent samples and normal populations):
  (1/√Fα/2)·(s₁/s₂)   to   (1/√F1−α/2)·(s₁/s₂)
  with df = (n₁ − 1, n₂ − 1).
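A sketch of the χ²-interval for σ with a hypothetical sample and a 95% level; scipy.stats.chi2 is assumed available and supplies the χ² quantiles (a table works too). Recall that χ²α/2 denotes the value with area α/2 to its right.

```python
import math
from statistics import stdev
from scipy.stats import chi2   # only for the χ² quantiles

# Hypothetical sample, assumed drawn from a normal population.
x = [4.2, 3.9, 4.5, 4.1, 4.4, 3.8, 4.0, 4.3, 4.6, 4.2]
n, s = len(x), stdev(x)
alpha = 0.05
df = n - 1

lower = math.sqrt(df / chi2.ppf(1 - alpha / 2, df)) * s   # divides by χ²α/2 (area α/2 to the right)
upper = math.sqrt(df / chi2.ppf(alpha / 2, df)) * s       # divides by χ²1−α/2
print(round(lower, 3), round(upper, 3))
```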
CHAPTER 12 Inferences for Population Proportions

• Sample proportion:
  p̂ = x/n,
  where x denotes the number of members in the sample that have the
  specified attribute.
• One-sample z-interval for p:
  p̂ ± zα/2 · √[p̂(1 − p̂)/n]
  (Assumption: both x and n − x are 5 or greater)
• Margin of error for the estimate of p:
  E = zα/2 · √[p̂(1 − p̂)/n]
• Sample size for estimating p:
  n = 0.25 (zα/2/E)²  or  n = p̂g(1 − p̂g)(zα/2/E)²,
  rounded up to the nearest whole number (g = "educated guess")
• One-sample z-test statistic for H₀: p = p₀:
  z = (p̂ − p₀)/√[p₀(1 − p₀)/n]
  (Assumption: both np₀ and n(1 − p₀) are 5 or greater)
• Pooled sample proportion: p̂p = (x₁ + x₂)/(n₁ + n₂)
• Two-sample z-test statistic for H₀: p₁ = p₂:
  z = (p̂₁ − p̂₂)/√[p̂p(1 − p̂p)·(1/n₁ + 1/n₂)]
  (Assumptions: independent samples; x₁, n₁ − x₁, x₂, n₂ − x₂ are all 5 or greater)
• Two-sample z-interval for p₁ − p₂:
  (p̂₁ − p̂₂) ± zα/2 · √[p̂₁(1 − p̂₁)/n₁ + p̂₂(1 − p̂₂)/n₂]
  (Assumptions: independent samples; x₁, n₁ − x₁, x₂, n₂ − x₂ are all 5 or greater)
• Margin of error for the estimate of p₁ − p₂:
  E = zα/2 · √[p̂₁(1 − p̂₁)/n₁ + p̂₂(1 − p̂₂)/n₂]
• Sample size for estimating p₁ − p₂:
  n₁ = n₂ = 0.5 (zα/2/E)²  or  n₁ = n₂ = [p̂₁g(1 − p̂₁g) + p̂₂g(1 − p̂₂g)](zα/2/E)²,
  rounded up to the nearest whole number (g = "educated guess")
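A sketch of the two-sample z-test for p₁ = p₂ with made-up counts; the two-tailed P-value comes from the standard normal CDF via math.erf, so no extra libraries are needed.

```python
import math

# Hypothetical counts: x successes out of n in each independent sample.
x1, n1 = 84, 200
x2, n2 = 62, 180

p1_hat, p2_hat = x1 / n1, x2 / n2
pp_hat = (x1 + x2) / (n1 + n2)          # pooled sample proportion p̂p

z = (p1_hat - p2_hat) / math.sqrt(pp_hat * (1 - pp_hat) * (1 / n1 + 1 / n2))

# Two-tailed P-value from the standard normal CDF, Φ(t) = ½[1 + erf(t/√2)].
phi = lambda t: 0.5 * (1 + math.erf(t / math.sqrt(2)))
p_value = 2 * (1 - phi(abs(z)))
print(round(z, 3), round(p_value, 4))
```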

CHAPTER 13 Chi-Square Procedures

• Expected frequencies for a chi-square goodness-of-fit test: E = np
• Test statistic for a chi-square goodness-of-fit test:
  χ² = Σ (O − E)²/E
  with df = k − 1, where k is the number of possible values for the
  variable under consideration.
• Expected frequencies for a chi-square independence test:
  E = R·C/n,
  where R = row total and C = column total.
• Test statistic for a chi-square independence test:
  χ² = Σ (O − E)²/E
  with df = (r − 1)(c − 1), where r and c are the number of possible
  values for the two variables under consideration.
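The independence test is mostly bookkeeping over a contingency table. The sketch below uses an invented 2×3 table of observed frequencies, builds each expected frequency from the row and column totals, and accumulates the χ² statistic.

```python
# Hypothetical 2×3 contingency table of observed frequencies.
observed = [
    [30, 45, 25],
    [20, 35, 45],
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Expected frequency for each cell: E = (row total)·(column total)/n.
chi_sq = 0.0
for i, row in enumerate(observed):
    for j, O in enumerate(row):
        E = row_totals[i] * col_totals[j] / n
        chi_sq += (O - E) ** 2 / E

df = (len(observed) - 1) * (len(observed[0]) - 1)   # (r − 1)(c − 1)
print(round(chi_sq, 3), df)
```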
CHAPTER 14 Descriptive Methods in Regression and Correlation

• Sxx, Sxy, and Syy:
  Sxx = Σ(x − x̄)² = Σx² − (Σx)²/n
  Sxy = Σ(x − x̄)(y − ȳ) = Σxy − (Σx)(Σy)/n
  Syy = Σ(y − ȳ)² = Σy² − (Σy)²/n
• Regression equation: ŷ = b₀ + b₁x, where
  b₁ = Sxy/Sxx  and  b₀ = (1/n)(Σy − b₁Σx) = ȳ − b₁x̄
• Total sum of squares: SST = Σ(y − ȳ)² = Syy
• Regression sum of squares: SSR = Σ(ŷ − ȳ)² = Sxy²/Sxx
• Error sum of squares: SSE = Σ(y − ŷ)² = Syy − Sxy²/Sxx
• Regression identity: SST = SSR + SSE
• Coefficient of determination: r² = SSR/SST
• Linear correlation coefficient:
  r = [1/(n − 1)]·Σ(x − x̄)(y − ȳ)/(sx·sy)  or  r = Sxy/√(Sxx·Syy)
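The Chapter 14 quantities chain together naturally in code. This sketch computes Sxx, Sxy, Syy, the regression coefficients, the sums of squares, and r for a small invented data set.

```python
import math

# Hypothetical paired data (x, y).
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 2.9, 3.6, 4.4, 5.2, 5.8, 6.9, 7.5]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

Sxx = sum((xi - xbar) ** 2 for xi in x)
Syy = sum((yi - ybar) ** 2 for yi in y)
Sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))

b1 = Sxy / Sxx                  # slope
b0 = ybar - b1 * xbar           # intercept
r = Sxy / math.sqrt(Sxx * Syy)  # linear correlation coefficient

SST = Syy
SSR = Sxy ** 2 / Sxx
SSE = SST - SSR                 # regression identity: SST = SSR + SSE
print(round(b0, 3), round(b1, 3), round(r, 4), round(SSR / SST, 4))  # r² = SSR/SST
```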


CHAPTER 15 Inferential Methods in Regression and Correlation
• Population regression equation: y = β₀ + β₁x
• Standard error of the estimate: se = √[SSE/(n − 2)]
• Test statistic for H₀: β₁ = 0:
  t = b₁/(se/√Sxx)
  with df = n − 2.
• Confidence interval for β₁:
  b₁ ± tα/2 · se/√Sxx
  with df = n − 2.
• Confidence interval for the conditional mean of the response variable
  corresponding to xp:
  ŷp ± tα/2 · se·√[1/n + (xp − Σx/n)²/Sxx]
  with df = n − 2.
• Prediction interval for an observed value of the response variable
  corresponding to xp:
  ŷp ± tα/2 · se·√[1 + 1/n + (xp − Σx/n)²/Sxx]
  with df = n − 2.
• Test statistic for H₀: ρ = 0:
  t = r/√[(1 − r²)/(n − 2)]
  with df = n − 2.
• Test statistic for a correlation test for normality:
  Rp = Σxw/√(Sxx·Σw²),
  where x and w denote observations of the variable and the
  corresponding normal scores, respectively.
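Continuing the regression example, this self-contained sketch recomputes the Chapter 14 quantities for the same invented data and then forms se, the t statistic for H₀: β₁ = 0, and a 95% confidence interval for β₁; scipy.stats is assumed available and is used only for the t critical value.

```python
import math
from scipy.stats import t as t_dist   # only for the critical value

# Same hypothetical data as the Chapter 14 sketch.
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 2.9, 3.6, 4.4, 5.2, 5.8, 6.9, 7.5]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
Sxx = sum((xi - xbar) ** 2 for xi in x)
Sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
Syy = sum((yi - ybar) ** 2 for yi in y)
b1 = Sxy / Sxx
SSE = Syy - Sxy ** 2 / Sxx

se = math.sqrt(SSE / (n - 2))            # standard error of the estimate
t_stat = b1 / (se / math.sqrt(Sxx))      # test statistic for H0: β1 = 0

t_crit = t_dist.ppf(0.975, n - 2)        # 95% level, df = n − 2
half_width = t_crit * se / math.sqrt(Sxx)
print(round(t_stat, 3), round(b1 - half_width, 3), round(b1 + half_width, 3))
```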

CHAPTER 16 Analysis of Variance (ANOVA)

• Notation in one-way ANOVA:
  k = number of populations
  n = total number of observations
  x̄ = mean of all n observations
  nj = size of sample from Population j
  x̄j = mean of sample from Population j
  sj² = variance of sample from Population j
  Tj = sum of sample data from Population j
• Defining formulas for sums of squares in one-way ANOVA:
  SST = Σ(x − x̄)²
  SSTR = Σ nj(x̄j − x̄)²
  SSE = Σ(nj − 1)sj²
• One-way ANOVA identity: SST = SSTR + SSE
• Computing formulas for sums of squares in one-way ANOVA:
  SST = Σx² − (Σx)²/n
  SSTR = Σ(Tj²/nj) − (Σx)²/n
  SSE = SST − SSTR
• Mean squares in one-way ANOVA:
  MSTR = SSTR/(k − 1),   MSE = SSE/(n − k)
• Test statistic for one-way ANOVA (independent samples, normal
  populations, and equal population standard deviations):
  F = MSTR/MSE
  with df = (k − 1, n − k).
• Confidence interval for µi − µj in the Tukey multiple-comparison
  method (independent samples, normal populations, and equal
  population standard deviations):
  (x̄i − x̄j) ± (qα/√2) · s·√(1/ni + 1/nj),
  where s = √MSE and qα is obtained for a q-curve with parameters k
  and n − k.
• Test statistic for a Kruskal–Wallis test (independent samples,
  same-shape populations, all sample sizes 5 or greater):
  H = SSTR/[SST/(n − 1)]  or  H = [12/(n(n + 1))]·Σ(Rj²/nj) − 3(n + 1),
  where SSTR and SST are computed for the ranks of the data, and
  Rj denotes the sum of the ranks for the sample data from
  Population j. H is approximately chi-square with df = k − 1.
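Finally, a sketch of the one-way ANOVA computing formulas with three invented samples; it produces SSTR, SSE, the mean squares, and the F statistic (the P-value would come from an F-table or from a package such as scipy.stats).

```python
# Three hypothetical independent samples (k = 3 populations).
samples = [
    [8, 9, 11, 10, 12],
    [7, 6, 9, 8, 8, 7],
    [12, 11, 13, 14, 12],
]

k = len(samples)
all_obs = [v for s in samples for v in s]
n = len(all_obs)
grand_sum = sum(all_obs)

# Computing formulas for the sums of squares.
SST = sum(v ** 2 for v in all_obs) - grand_sum ** 2 / n
SSTR = sum(sum(s) ** 2 / len(s) for s in samples) - grand_sum ** 2 / n
SSE = SST - SSTR                       # one-way ANOVA identity

MSTR = SSTR / (k - 1)
MSE = SSE / (n - k)
F = MSTR / MSE                         # compare with the F-curve, df = (k − 1, n − k)
print(round(SSTR, 2), round(SSE, 2), round(F, 3))
```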
