# Statistical Treatment of Experimental Data

### From charlesreid1

"Statistical Treatment of Experimental Data" by Green and Margerison (Elsevier)

## Contents

- 1 Chapter 2 - Probability
- 2 Chapter 3 - Random Variables
- 3 Chapter 4: Important Probability Distributions
- 3.1 Outline
- 3.2 Uniform distribution
- 3.3 Binomial distribution
- 3.4 Binomial distribution example
- 3.5 Binomial distribution mean and variance
- 3.6 Poisson distribution
- 3.7 Poisson distribution mean and variance
- 3.8 Poisson distribution example
- 3.9 Poisson process distribution
- 3.10 Exponential distribution
- 3.11 Exponential distribution example
- 3.12 Gamma distribution
- 3.13 Gamma distribution mean and variance
- 3.14 Connecting Gamma distribution and Poisson distribution
- 3.15 Gamma distribution example
- 3.16 Normal distribution
- 3.17 Normal distribution mean and variance
- 3.18 Normal distribution example
- 3.19 Chi squared distribution

- 4 Flags

## Chapter 2 - Probability

Basic definitions:

- set of all possible outcomes from random experiment - sample space
- discrete - countable number of possible outcomes (can also be infinite - as in, number of particles emitted)
- continuous - all possible real values in certain interval or series of intervals may occur
- univariate - only one number is recorded
- multivariate - more than one value obtained from a single performance of an experiment
- event - set of outcomes in the sample space
- probability of an event A as outcome is P(A)
- addition law (for mutually exclusive events): P(A U B) = P(A) + P(B)
- venn diagram: if two events are not mutually exclusive, split into three mutually exclusive events (D - (D and E)), (E - (D and E)), (D and E)
- product law (for independent events): P(A and B) = P(A) * P(B)
- conditional probability: P(C | D) = P(C and D)/P(D)
- independent - two or more performances of an experiment are called independent if probabilities of different outcomes in one are unaffected by outcomes in the other
- replicates - independent repeat performances of an experiment

Probability models:

- discrete uniform model - each outcome equally likely (e.g., tossing unbiased fair die)
- random sampling - drawing random sample of size s from batch of size N (random means, all samples of size s equally likely to be chosen); number of possible samples is N choose s

- if r of the N items are special, the number of ways of drawing a sample containing d specials (number of ways of choosing d specials and s-d non-specials) is (r choose d) * (N-r choose s-d)

- Another way to write this: the probability of drawing exactly d specials is P(d) = (r choose d) * (N-r choose s-d) / (N choose s)

- this is the definition of hypergeometric distribution (special case of the uniform model)
- example: bag with 3 red and 4 blue discs, no replacement; random sample of size 2 (=s) from batch of size 7 (=N) with 3 (=r) special (red). Probability that d=1 of the sample is special (one red, one blue) is P(R and B) = (3 choose 1) * (4 choose 1) / (7 choose 2) = 12/21 = 4/7
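A quick numeric check of this example (a sketch using Python's `math.comb`):

```python
from math import comb

# Hypergeometric probability from the disc example:
# batch N=7 (3 red "specials", 4 blue), sample s=2 without replacement,
# probability that d=1 of the sample is red.
N, r, s, d = 7, 3, 2, 1
p = comb(r, d) * comb(N - r, s - d) / comb(N, s)
print(p)  # 12/21 = 4/7 ≈ 0.5714
```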

## Chapter 3 - Random Variables

More definitions/concepts:

- Random variables are a function on the sample space (corresponding to each outcome, random variable takes a particular value that is a realization of it)
- Sample space comprises all possible values of random variable
- Convention - capital letters denote random variables, small letters denote realizations
- e.g., if X is a discrete random variable, P(X = x) denotes the probability of the event comprising all outcomes for which X takes the value x; this can also be written p(x)
- e.g., if X is a continuous random variable, f(x) dx is the probability of the event comprising all outcomes for which X falls into the interval (x, x+dx)
- Realizations of random variables are not necessarily outcomes in the sample space. Example: if tossing a die, could assign outcome as 0 if even and 1 if odd
- Random variables also called statistics or variates

### Probability Density Functions

Density function:

- If random variable X is continuous, can specify probability density function f(x)
- The integral of f(x) over any interval A gives the probability of X belonging to A, denoted P(X in A), equivalent to P(X in A) = ∫_A f(x) dx

- Integral over the entire space, -infinity to +infinity, yields 1 by definition (f takes the value 0 where X cannot occur)
- Discrete case: use a sum instead of an integral, summing the probabilities p(x) of single outcomes x: P(X in A) = Σ_{x in A} p(x)

Joint density:

- Can extend definitions above to joint density
- Two outcomes are recorded for each performance of experiment
- Two corresponding random variables X and Y
- If continuous, the joint density f(x,y) is such that: P(x < X ≤ x+dx and y < Y ≤ y+dy) = f(x,y) dx dy

Likewise, integral over entire space of possible outcomes for X and Y will yield 1.

Independence:

- Two random variables are independent if: f(x,y) = f_X(x) * f_Y(y) for all x, y

### Cumulative Distribution Function

Cumulative distribution function F for a random variable X is defined for discrete and continuous random variables as F(x) = P(X ≤ x).

for continuous: F(x) = ∫_{-∞}^{x} f(t) dt

for discrete: F(x) = Σ_{t ≤ x} p(t)

It follows that, in the continuous case, f(x) = dF(x)/dx.

Statisticians use the term "distribution function" differently from physicists/chemists, who usually apply the term to the probability density. The density and the distribution function are genuinely different functions - in the normal case, for example, the density is the bell curve while the distribution function is its S-shaped integral.

For a probability P we can denote the quantile as x_P - this is the quantity such that F(x_P) = P

### Expectation

Define expectation using the distribution function:

for discrete: E(X) = Σ_x x p(x)

for continuous: E(X) = ∫_{-∞}^{∞} x f(x) dx

Both forms are included in the Stieltjes integral form:

E(X) = ∫ x dF(x)

which represents whichever of the two (discrete or continuous) forms defined above.

Distribution mean of X, E(X) = μ, is also called the mean of the distribution F(x)

The rth non-central moment of X or of distribution F(x) is given by: μ'_r = E(X^r)

The rth central moment of X or of distribution F(x) is given by: μ_r = E[(X - μ)^r]

(Integral must be finite, of course.)

Distribution variance of X or of F(x) is the second central moment, μ_2, also denoted σ², defined by: σ² = E[(X - μ)²]

Can represent variance of X by symbol V(X).

Standard deviation is the square root of the distribution variance, σ; it is more useful in practice because it has units that match X and μ themselves.

Moment generating function, represented by the symbol M_X(t) (t is a dummy variable), is defined through the expression:

M_X(t) = E(e^{tX})

Expanding the exponential function using a Taylor series yields:

M_X(t) = 1 + t E(X) + (t²/2!) E(X²) + (t³/3!) E(X³) + ...

so the rth non-central moment is the coefficient of t^r/r! in the expansion.

Characteristic function and probability generating function:

- closely related to moment generating function

Characteristic function definition: φ_X(t) = E(e^{itX})

Probability generating function (for discrete variables): G_X(t) = E(t^X)

### Covariance

Covariance of two random variables X and Y: C(X,Y) = E[(X - E(X))(Y - E(Y))]

Variance is special case of covariance, C(X,X)

Distribution correlation coefficient is a "normalized" covariance - normalized by the standard deviations of the individual variables: ρ = C(X,Y) / (σ_X σ_Y)

### Properties of Expectation

Useful properties of expectation include:

Expectation of a constant is the constant: E(c) = c

Expectation is linear, which simplifies applying the expectation operator to a linear model: E(a + bX) = a + b E(X)

Expectation of a sum is the sum of the individual expectations: E(X + Y) = E(X) + E(Y)

### Properties of Variance

The linearity of expectation can be applied to the variance expression to get a useful identity:

V(X) = E[(X - μ)²] = E(X² - 2μX + μ²) = E(X²) - 2μ E(X) + μ² = E(X²) - μ²

The last line yields the identity:

V(X) = E(X²) - [E(X)]²

Likewise, for a linear transformation,

V(a + bX) = b² V(X)

A covariance identity can likewise be derived:

C(X,Y) = E(XY) - E(X) E(Y)

In the special case where X and Y are independent, the expectation of the product becomes the product of the expectations, E(XY) = E(X) E(Y). In this special case C(X,Y) = 0, and therefore ρ = 0.

If we consider the variance of the sum of two random variables, we can find a relationship between the variance of the individual variables and their covariance:

V(X + Y) = E[((X - E(X)) + (Y - E(Y)))²]

This yields the identity:

V(X + Y) = V(X) + V(Y) + 2 C(X,Y)

Likewise,

V(X - Y) = V(X) + V(Y) - 2 C(X,Y)

### Example

Evaluate the mean and variance of a rectangular distribution.

Definition of rectangular distribution: f(x) = c (a constant) for a ≤ x ≤ b, and f(x) = 0 otherwise.

We know that the density must integrate to 1:

∫_a^b c dx = c(b - a) = 1

Therefore c = 1/(b - a).

Now the mean can be computed as:

E(X) = ∫_a^b x/(b - a) dx = (b² - a²)/(2(b - a)) = (a + b)/2

which is simply the midpoint of the interval.

The density is symmetrical about this value.

Further, the expectation of x² is:

E(X²) = ∫_a^b x²/(b - a) dx = (b³ - a³)/(3(b - a)) = (a² + ab + b²)/3

This result can be used to compute the variance:

V(X) = E(X²) - [E(X)]² = (a² + ab + b²)/3 - (a + b)²/4 = (b - a)²/12

In the special case of an interval symmetric about zero, a = -h and b = h, we get E(X) = 0 and V(X) = h²/3.
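The mean and variance results above can be checked numerically; a minimal sketch using midpoint-rule integration, with a and b chosen arbitrarily:

```python
# Numeric check of the rectangular-distribution results:
# mean should be (a+b)/2, variance (b-a)^2/12 (a, b arbitrary).
a, b = 2.0, 5.0
n = 100000
dx = (b - a) / n
xs = [a + (i + 0.5) * dx for i in range(n)]
f = 1.0 / (b - a)
mean = sum(x * f * dx for x in xs)
ex2 = sum(x * x * f * dx for x in xs)
var = ex2 - mean ** 2
print(mean, var)  # ≈ 3.5 and 0.75
```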

### Sampling

If we replicate an experiment n times, we produce a vector of observations x = (x_1, x_2, ..., x_n)

Subscripts label the observations.

Consider a function of these observations, g(x_1, ..., x_n); such a function is itself the realization of a random variable.

Two valuable statistics are the sample mean x̄ and sample variance s². These are defined as:

x̄ = (1/n) Σ_{i=1}^{n} x_i

s² = (1/(n-1)) Σ_{i=1}^{n} (x_i - x̄)²

Note the notation: the lowercase letter s is used for the sample standard deviation (s² for the sample variance), following the convention that lowercase letters denote realizations rather than random variables.

Important to distinguish the *sample* parameters from the *distribution* parameters. The sample estimates the entire population, just as the sample parameters estimate the distribution parameters. In the limit of the sample size approaching the population size, the sample parameters equal the distribution parameters.

However, we also have to remember that x̄ and s² themselves have a distribution. Using different sample populations leads to different values for these two parameters.

Properties of the distributions of x̄ and s²:

E(X̄) = μ

The Xs are independent, so we can also get

V(X̄) = V((1/n) Σ X_i) = (1/n²) Σ V(X_i) = σ²/n

As n increases, the distribution of X̄ becomes more concentrated about the mean, but only slowly - increasing n fourfold merely halves the standard deviation of X̄, which is σ/√n.

The expectation of s² can be derived using an identity:

Σ (X_i - X̄)² = Σ (X_i - μ)² - n(X̄ - μ)²

Now we get:

E[Σ (X_i - X̄)²] = n σ² - n V(X̄) = n σ² - σ² = (n - 1) σ²

and therefore,

E(s²) = σ²

Thus the sample variance is an unbiased estimate of the distribution's variance.

The sample variance (and the sum of squares about the mean from which it is computed) is said to have n-1 degrees of freedom.
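The unbiasedness of s² (the n-1 denominator) can be illustrated with a small Monte Carlo sketch; the normal distribution, seed, and sample size here are arbitrary choices:

```python
import random

# Monte Carlo check that the (n-1)-denominator sample variance is
# an unbiased estimate of the distribution variance sigma^2 = 4.
random.seed(42)
n, reps = 5, 20000
total = 0.0
for _ in range(reps):
    xs = [random.gauss(0.0, 2.0) for _ in range(n)]   # sigma = 2
    xbar = sum(xs) / n
    total += sum((x - xbar) ** 2 for x in xs) / (n - 1)
print(total / reps)  # ≈ 4.0
```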

## Chapter 4: Important Probability Distributions

### Outline

This chapter covers the following distributions:

- uniform distribution - dice etc.
- binomial distribution - used for trials (binary outcomes)
- poisson distribution - used for distribution of counts (outcomes that are non-negative integers)
- poisson process - used for distribution of event times/frequencies
- exponential distribution - used for distribution of time elapsed
- gamma distribution - distribution of sum of n independent exponential variates with same mean
- normal distribution - most important and widely-used distribution, used for distribution of continuous random variables
- chi squared distribution - another widely-used distribution, models distribution of e.g., sum of squares of n independent standard normal variates
- student's t distribution - used for tests on, and confidence intervals for, normal distributions
- F distribution - used in tests involving comparison of two distribution variances (ANOVA)
- distribution of sample mean and sample variance for normal case - important extension of discussion of normal distribution

### Uniform distribution

When a "fair" process (such as a six-sided die) occurs, it has a uniform distribution.

In general, a continuous uniform variable x lies between a and b, with density f(x) = 1/(b - a) on the interval a ≤ x ≤ b.

### Binomial distribution

If outcome of experiment is divided into two complementary events, A and not A, the experiment outcomes can be modeled using the binomial distribution.

Running n binomial trials results in n outcomes.

For K successes out of n trials, we have a discrete random variable on the sample space. The sample space is the number of times an outcome may occur, k = 0, 1, ..., n.

K has a binomial distribution B(p, n). The name comes from the fact that the probabilities P(K = k), k = 0, 1, ..., n, are found from the binomial expansion of (q + p)^n, where q = 1 - p.

Probability of any particular sequence, e.g., SSFSSSFFSF..., comprised of k S's and (n-k) F's is p^k q^(n-k), because the trials are independent

Number of sequences containing k S's is the number of ways of choosing k items from n, (n choose k). Need to sum the probabilities of these simple events to find the probability P(K = k) = (n choose k) p^k q^(n-k).

Note that by definition q = 1 - p, and P(K = k) is the term in p^k in the expansion of (q + p)^n.

Total probability for all k is:

Σ_{k=0}^{n} (n choose k) p^k q^(n-k) = (q + p)^n = 1

To compute probabilities of successive values of k, use a recurrence relation:

P(k+1) = P(k) * ((n - k)/(k + 1)) * (p/q)

This can be used to calculate P(0), P(1), P(2), etc. It is a good idea to independently calculate the last probability in the sequence to check it, or sum the probabilities to ensure they sum to 1.
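The recurrence can be sketched in a few lines (`binomial_probs` is a hypothetical helper name), with the sum-to-1 check suggested above:

```python
# Recurrence method for binomial probabilities:
# P(k+1) = P(k) * (n-k)/(k+1) * p/q, starting from P(0) = q^n.
def binomial_probs(n, p):
    q = 1.0 - p
    probs = [q ** n]
    for k in range(n):
        probs.append(probs[-1] * (n - k) / (k + 1) * p / q)
    return probs

probs = binomial_probs(5, 0.6)
print(probs)
print(sum(probs))  # should be 1 - a handy check on the recurrence
```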

### Binomial distribution example

Probability of single performance of experiment will yield usable result is 60%.

We perform the experiment 5 times.

Question 1: What is distribution of number of usable results?

Question 2: What is probability of at least 2 unusable results?

Question 1:

Start by calculating probabilities using the direct method. Example:

P(0) = (5 choose 0) (0.6)^0 (0.4)^5 = 0.01024

or by the recurrence method:

P(1) = P(0) * (5/1) * (0.6/0.4) = 0.0768

etc...

Question 2:

To find the probability of at least 2 unusable results, note that this means at most 3 usable results, so we need to find P(K ≤ 3) = P(0) + P(1) + P(2) + P(3)

To do this in a simpler way: P(K ≤ 3) = 1 - P(4) - P(5)

This is: 1 - 0.2592 - 0.07776 = 0.66304
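A direct-method check of this answer (a sketch using `math.comb`, with the numbers from the example):

```python
from math import comb

# Usable-result probability p = 0.6, n = 5 trials.
# "At least 2 unusable" means at most 3 usable: P(K<=3) = 1 - P(4) - P(5).
n, p = 5, 0.6

def P(k):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

answer = 1 - P(4) - P(5)
print(answer)  # ≈ 0.66304
```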

### Binomial distribution mean and variance

Mean can be written as:

E(K) = Σ_{k=0}^{n} k (n choose k) p^k q^(n-k)

Simplifying (take out a factor np and re-index the sum, which then sums to 1):

E(K) = np

For the variance, compute E(K(K-1)) to find E(K²):

E(K(K-1)) = n(n-1) p²

Now,

V(K) = E(K(K-1)) + E(K) - [E(K)]² = n(n-1)p² + np - n²p²

which gives

V(K) = np(1 - p) = npq

Additive property: if K1 and K2 are independently distributed as B(p, n1) and B(p, n2) (same p), the distribution of their sum K1+K2 is given by B(p, n1+n2). This holds for the sum of m independent, binomially distributed random variables with a common p.

Relation to other distributions: Binomial distribution can be used to approximate the hypergeometric distribution, when sample size s is small compared to batch size. In this case, sampling without replacement (hypergeometric distribution) is well-approximated by sampling with replacement (binomial).

### Poisson distribution

Relates to the number of events that occur per given segment of time or space, when the events occur randomly in time or space at a certain average rate.

Examples: number of particles emitted by radioactive source, number of faults per given length of yarn, number of typing errors per page of manuscript, number of vehicles passing a given point on a road.

Use K to represent the random variable on this space. Define the Poisson distribution as the distribution in which the probability that K = k is given by:

P(K = k) = e^(-m) m^k / k!, k = 0, 1, 2, ...

Shorthand: K ~ Pn(m)

Poisson distribution has free parameter m.

Recurrence relation:

P(k+1) = P(k) * m/(k+1)

If we increase the size of each segment by a factor a, the number of events per segment is distributed according to Pn(am)
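The recurrence relation gives a convenient way to tabulate Poisson probabilities; a short sketch (`poisson_probs` is a hypothetical helper name, and m = 2.5 an arbitrary illustration):

```python
from math import exp

# Poisson probabilities via the recurrence P(k+1) = P(k) * m/(k+1),
# starting from P(0) = e^{-m}.
def poisson_probs(m, kmax):
    probs = [exp(-m)]
    for k in range(kmax):
        probs.append(probs[-1] * m / (k + 1))
    return probs

probs = poisson_probs(2.5, 40)
mean = sum(k * pk for k, pk in enumerate(probs))
print(sum(probs), mean)  # total ≈ 1, mean ≈ m = 2.5
```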

### Poisson distribution mean and variance

To compute the mean via the direct method:

E(K) = Σ_{k=0}^{∞} k e^(-m) m^k / k!

this becomes

E(K) = m e^(-m) Σ_{k=1}^{∞} m^(k-1)/(k-1)! = m e^(-m) e^m

so for the Poisson distribution Pn(m),

E(K) = m

The free parameter m is therefore the expected value of K.

The variance can be calculated as above by first computing E(K(K-1)):

E(K(K-1)) = Σ_{k=0}^{∞} k(k-1) e^(-m) m^k / k! = m²

Therefore,

V(K) = E(K(K-1)) + E(K) - [E(K)]² = m² + m - m²

which becomes

V(K) = m

Therefore, *the mean and variance of a Poisson distribution are the same.*

Additivity property: if two variables K1 and K2 are independently distributed as Pn(m1) and Pn(m2), then the distribution of their sum is Pn(m1 + m2)

Relationship to other distributions: Poisson distribution is useful approximation to binomial distribution B(p,n) for small p and large n. Number of successes approximately distributed as Pn(np). (Also, the normal distribution can be used to approximate the Poisson distribution.)

### Poisson distribution example

Suppose a laboratory counter is arranged to measure the cosmic ray background. It records the number of particles arriving in intervals of 0.1 s. A very large number of measurements is made, a histogram obtained, and an estimate of the mean computed.

Plotting P(k) vs. k shows the distribution is not quite symmetrical. (The smaller m is, the more skewed the distribution becomes.)

Mean obtained this way is 11.60, giving the parameter m for the distribution.

Repeating the experiment with a radioactive source close to the detector, mean number of particles over same interval 0.1 s is 98.73. We assume number of particles arriving at detector from radioactive source and from cosmic rays are independent, so we have two independent variables distributed according to Poisson distribution with different mean values.

The additivity theorem allows us to find the mean number of particles from the radioactive source alone as:

m_source = 98.73 - 11.60 = 87.13

### Poisson process distribution

Closely related to Poisson distribution - a Poisson process is a process in which events occur randomly in time or space. The Poisson process thinks in terms of TIME PER EVENT (or space) rather than in terms of number of events per time.

Number of events per given time have a Poisson distribution, while intervals between consecutive events have exponential distribution.

Probability of an occurrence of an event in a small time interval δt is λδt, where λ is a constant characteristic of the process and δt is small compared with 1/λ.

Consider the probability of the occurrence of n events in the interval (0, t + δt), where δt is small.

We only need to consider two possibilities:

A: n events occur in the interval (0,t) and none occur in the next δt

B: n-1 events occur in the interval (0,t) and 1 occurs in the next δt

(Other possibilities have an extremely small probability.)

We use P(n,t) to denote the probability that n events have occurred in the interval (0,t).

P(A) = P(n,t) (1 - λδt)

and

P(B) = P(n-1,t) λδt

Therefore, we can get

P(n, t+δt) = P(n,t)(1 - λδt) + P(n-1,t) λδt

and that gives an approximation to the time derivative,

[P(n, t+δt) - P(n,t)] / δt = λ [P(n-1,t) - P(n,t)]

In the limit of δt → 0 the derivative becomes

dP(n,t)/dt = λ [P(n-1,t) - P(n,t)]

which, when integrated, gives a recurrence formula. Cutting to the chase, the initial probability (n=0) is:

P(0,t) = e^(-λt)

and the recurrence relation gives

P(n,t) = e^(-λt) (λt)^n / n!

Number of occurrences in the time interval (0,t) is distributed as Pn(λt)

Density of the distribution of time to first occurrence of an event:

f(t) = λ e^(-λt)

Similarly, the density of the distribution of the time to the nth event is:

f(t) = λ^n t^(n-1) e^(-λt) / (n-1)!

### Exponential distribution

Distribution of time elapsed, space covered, etc., before a randomly located event occurs.

Time elapsed between consecutive events in a Poisson process has an exponential distribution.

Example: lifetime of a component in a piece of apparatus; distance traveled between successive collisions in a low pressure gas.

Continuous random variable for which the sample space is the positive real numbers, x ≥ 0

Random variable X has the exponential distribution if the density f(x) is given by:

f(x) = λ e^(-λx), x ≥ 0

As required of a density function,

∫_0^∞ λ e^(-λx) dx = 1

Mean is given by:

E(X) = ∫_0^∞ x λ e^(-λx) dx = 1/λ

The mean or expectation of the exponential distribution is therefore 1/λ.

To find the variance, start by finding E(X²):

E(X²) = ∫_0^∞ x² λ e^(-λx) dx = 2/λ²

Now,

V(X) = E(X²) - [E(X)]² = 2/λ² - 1/λ² = 1/λ²

Relationship to other distributions: exponential distribution is connected with Poisson processes. Also closely related to the Gamma distribution - it is the simplest case of the Gamma distribution.
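These mean and variance results can be verified by crude numerical integration; a sketch with an arbitrarily chosen λ:

```python
from math import exp

# Midpoint-rule check that the exponential density lam*exp(-lam*x)
# has mean 1/lam and variance 1/lam^2 (lam = 0.5 chosen arbitrarily).
lam = 0.5
n, upper = 200000, 80.0   # upper limit far into the negligible tail
dx = upper / n
xs = [(i + 0.5) * dx for i in range(n)]
total = sum(lam * exp(-lam * x) * dx for x in xs)
mean = sum(x * lam * exp(-lam * x) * dx for x in xs)
ex2 = sum(x * x * lam * exp(-lam * x) * dx for x in xs)
var = ex2 - mean ** 2
print(total, mean, var)  # ≈ 1, 1/lam = 2, 1/lam^2 = 4
```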

### Exponential distribution example

Ditertiary butyl peroxide (DTBP) decomposes at 154.6 °C in the gas phase by a first order process, with rate constant k = 3.46e-4 1/s.

Number of molecules N(t) of DTBP remaining at time t after the start of reaction is given by:

N(t) = N(0) e^(-kt)

Decrease in the number of molecules of DTBP, -dN(t), during the time interval t to t+dt is:

-dN(t) = k N(t) dt = k N(0) e^(-kt) dt

This leads to

-dN(t)/N(0) = k e^(-kt) dt

which is the fraction of molecules whose survival time lies in (t, t+dt). Thus the density of the survival time is

f(t) = k e^(-kt)

The average survival time of DTBP molecules is 1/k = 2.89e3 s
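The arithmetic of this example in a few lines (values from the text; the exp(-1) fraction is an extra illustration of the meaning of the mean life):

```python
from math import exp

# DTBP survival time: exponential with rate k = 3.46e-4 1/s, so E(T) = 1/k.
k = 3.46e-4
mean_survival = 1.0 / k
frac_surviving = exp(-k * mean_survival)   # fraction still intact at t = 1/k
print(mean_survival, frac_surviving)  # ≈ 2890 s, e^-1 ≈ 0.368
```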

### Gamma distribution

Gamma distribution is related to the exponential distribution. It is used to model the distribution of the sum of n independent exponential variates, each with the same mean.

(Also related to chi-squared distribution.)

Random variable X has the gamma distribution if

f(x) = λ^b x^(b-1) e^(-λx) / Γ(b), x ≥ 0

Shorthand, denote as X ~ G(λ, b)

b (often an integer) called the number of degrees of freedom

The normalizing constant involves the gamma function, Γ(b) = ∫_0^∞ u^(b-1) e^(-u) du.

If b is an integer, Γ(b) = (b-1)!

Gamma distribution with one degree of freedom is same as exponential distribution

### Gamma distribution mean and variance

We can use the identity/property Γ(b+1) = b Γ(b)

Now,

E(X) = ∫_0^∞ x λ^b x^(b-1) e^(-λx) / Γ(b) dx = Γ(b+1) / (λ Γ(b))

simplifying,

E(X) = b/λ

Thus, by the same approach,

E(X²) = Γ(b+2) / (λ² Γ(b)) = b(b+1)/λ²

and

V(X) = E(X²) - [E(X)]² = b(b+1)/λ² - b²/λ²

which becomes

V(X) = b/λ²

Additivity property: if we have two random variables X1 and X2 independently distributed as G(λ, b1) and G(λ, b2) (same λ), then the sum of these variables X1 + X2 is distributed as G(λ, b1 + b2)

(This can be extended to sums of multiple variables.)

### Connecting Gamma distribution and Poisson distribution

If we have a random variable Z that is distributed according to the Gamma distribution, Z ~ G(1, m), where m is an integer, then we can obtain the following result:

P(Z > c) = Σ_{k=0}^{m-1} e^(-c) c^k / k!

To interpret: consider a Poisson process in which events occur at an average rate of 1 per second; Z seconds represents the waiting time until the occurrence of the mth event. The probability that this waiting time is greater than c seconds is just the probability that not more than m-1 events have occurred during the time interval (0,c) seconds, i.e.,

Can also be expressed as: P(Z > c) = P(K ≤ m-1), where K ~ Pn(c)
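The identity can be checked numerically by comparing the Poisson sum against a direct integration of the G(1, m) density tail; a sketch with m = 4 and c = 4.5 (arbitrary choices):

```python
from math import exp, factorial

# Gamma/Poisson identity: for Z ~ G(1, m),
# P(Z > c) should equal sum_{k=0}^{m-1} e^{-c} c^k / k!.
m, c = 4, 4.5

# Poisson side of the identity
poisson_sum = sum(exp(-c) * c ** k / factorial(k) for k in range(m))

# Gamma side: midpoint-rule integral of x^{m-1} e^{-x} / (m-1)! over (c, upper)
n, upper = 200000, 60.0
dx = (upper - c) / n
gamma_tail = sum(
    (c + (i + 0.5) * dx) ** (m - 1)
    * exp(-(c + (i + 0.5) * dx)) / factorial(m - 1) * dx
    for i in range(n)
)
print(poisson_sum, gamma_tail)  # the two sides should agree
```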

### Gamma distribution example

A car is fifth in a queue of vehicles waiting at a toll booth. Waiting time is the sum of four service times for preceding vehicles. Service times are independently exponentially distributed with mean of 20 seconds.

Q: What is probability that car in question will have to wait more than 90 seconds?

Let the service time be denoted T seconds. Then T is distributed as G(λ, 1), i.e. exponentially, with λ = 1/20 s^-1 (mean 20 s).

If the waiting time is W seconds, W is the sum of 4 independent exponential variates, each with parameter λ = 1/20

Hence, W ~ G(1/20, 4)

Therefore P(W > 90) can be obtained by using Z = λW ~ G(1, 4) in the eqn from the preceding section:

P(W > 90) = P(Z > c) = Σ_{k=0}^{3} e^(-c) c^k / k!

where

c = λ * 90 = 90/20 = 4.5

Plugging in:

P(W > 90) = e^(-4.5) (1 + 4.5 + 4.5²/2! + 4.5³/3!) = 0.342
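Evaluating the final expression (a two-line check of the arithmetic):

```python
from math import exp

# Toll-booth answer: W ~ G(1/20, 4), so with c = 90/20 = 4.5,
# P(W > 90) = e^{-4.5} * (1 + 4.5 + 4.5^2/2! + 4.5^3/3!).
c = 4.5
p_wait = exp(-c) * (1 + c + c ** 2 / 2 + c ** 3 / 6)
print(p_wait)  # ≈ 0.342
```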

### Normal distribution

Random variable X is normally distributed if its probability density is given by

f(x) = (1/(σ√(2π))) exp(-(x - μ)²/(2σ²))

Shorthand: X ~ N(μ, σ²)

Random variable Z can be written as a "standardized form" of X:

Z = (X - μ)/σ

Probability density of Z is obtained by substituting x = μ + σz (with dx = σ dz). The density becomes:

φ(z) = (1/√(2π)) e^(-z²/2)

Z is the standard normal variate, and is denoted Z ~ N(0, 1)

### Normal distribution mean and variance

Expectation of Z:

E(Z) = (1/√(2π)) ∫_{-∞}^{∞} z e^(-z²/2) dz = 0

(because the integrand is odd.)

Hence,

V(Z) = E(Z²) = (1/√(2π)) ∫_{-∞}^{∞} z² e^(-z²/2) dz = 1

Now we can use these, with X = μ + σZ, to find E(X) and V(X):

E(X) = μ + σ E(Z) = μ

therefore,

V(X) = σ² V(Z) = σ²

Additive property: if we have two normally distributed random variables X1 and X2, described by normal distributions N(μ1, σ1²) and N(μ2, σ2²), the distribution of their sum X1 + X2 can be described with the normal distribution N(μ1 + μ2, σ1² + σ2²)

### Normal distribution example

Use tabulated values of Φ(z), the standard normal distribution function, to answer the questions.

Note that Φ(-z) = 1 - Φ(z)

Suppose you have a physical quantity distributed as N(3,4), i.e., μ = 3 and σ² = 4, so σ = 2.

Q1: What is probability of observing X > 3.5?

Q2: What is probability of observing X < 1.2?

Q3: What is probability of observing 2.5 < X < 3.5?

Question 1: convert X to Z by plugging into the definition: z = (3.5 - 3)/2 = 0.25. Now:

P(X > 3.5) = P(Z > 0.25) = 1 - Φ(0.25) = 0.401

Question 2: again, convert X to Z: z = (1.2 - 3)/2 = -0.9. Now:

P(X < 1.2) = P(Z < -0.9) = 1 - Φ(0.9) = 0.184

Question 3: convert from X to Z, which gives

P(2.5 < X < 3.5) = P(-0.25 < Z < 0.25) = 2Φ(0.25) - 1 = 0.197
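All three answers can be checked with the standard normal CDF written in terms of the error function; a short sketch:

```python
from math import erf, sqrt

# Standard normal CDF via the error function; used to check the
# three answers for X ~ N(3, 4), i.e. mu = 3, sigma = 2.
def Phi(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

mu, sigma = 3.0, 2.0
q1 = 1 - Phi((3.5 - mu) / sigma)                         # P(X > 3.5)
q2 = Phi((1.2 - mu) / sigma)                             # P(X < 1.2)
q3 = Phi((3.5 - mu) / sigma) - Phi((2.5 - mu) / sigma)   # P(2.5 < X < 3.5)
print(q1, q2, q3)  # ≈ 0.401, 0.184, 0.197
```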

### Chi squared distribution

Random variable X is distributed as chi squared with ν degrees of freedom if the density is given by:

f(x) = x^(ν/2 - 1) e^(-x/2) / (2^(ν/2) Γ(ν/2)), x ≥ 0

Shorthand: X ~ χ²(ν)

(Note this is the gamma distribution G(1/2, ν/2).)

Example of this type of random variable: the sum of squares of n independent standard normal variates, Σ Z_i², is distributed as χ²(n)

Equivalently, if X_1, ..., X_n are independent random variables, each distributed as N(μ, σ²), then Σ (X_i - μ)²/σ² is distributed as χ²(n)

It can also be shown that Σ (X_i - x̄)²/σ² is distributed as χ²(n-1), independently of x̄

If X ~ χ²(ν), the mean and variance are given by:

Mean: E(X) = ν

Variance: V(X) = 2ν

Additive property: if X1 and X2 are independently distributed as χ²(ν1) and χ²(ν2), then their sum X1+X2 is distributed as χ²(ν1 + ν2)
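The mean and variance results can be checked by numerically integrating the χ² density; a sketch with ν = 5 (an arbitrary choice):

```python
from math import exp, gamma

# Midpoint-rule check that the chi-squared density with nu degrees of
# freedom integrates to 1 and has mean nu, variance 2*nu.
nu = 5

def f(x):
    return x ** (nu / 2 - 1) * exp(-x / 2) / (2 ** (nu / 2) * gamma(nu / 2))

n, upper = 400000, 120.0   # upper limit far into the negligible tail
dx = upper / n
xs = [(i + 0.5) * dx for i in range(n)]
total = sum(f(x) * dx for x in xs)
mean = sum(x * f(x) * dx for x in xs)
var = sum(x * x * f(x) * dx for x in xs) - mean ** 2
print(total, mean, var)  # ≈ 1, nu = 5, 2*nu = 10
```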