From charlesreid1

October 7, 2010

Statistical Inference (Casella and Berger)

wikipedia:Set (mathematics)

wikipedia:Probability interpretations




Set Theory:

Union -  A \bigcup B - combination of two sets

Intersection -  A \bigcap B - elements contained in both A and B

Complement -  A^c - everything that's not in A

Empty Set -  \emptyset - set containing no elements


Definitions:

experiment - any activity generating observable results

outcome - result of experiment (IMPORTANT TO KEEP STRAIGHT! don't confuse events and outcomes)

trial - single performance of experiment

sample space - set of all possible outcomes

countable/uncountable:

  • countable = a one-to-one correspondence with the natural numbers can be made (e.g. the set {1/n})
  • uncountable = no such one-to-one correspondence can be made
  • a countable set may still be infinite - you can keep counting forever, but you are still counting
  • flipping a coin: countable; temperature: uncountable

event - any subset of the sample space


Example:

experiment - roll a die
outcome - 1, or 2, or 3, or 4, or 5, or 6
trial - one roll of the die
sample space - {1,2,3,4,5,6} (COUNTABLE)
event - may be {1}, or {1,2,3}, etc...


Operator Properties:


Commutative:

 A \bigcup B = B \bigcup A

Associative:

 A \bigcup (B \bigcup C) = (A \bigcup B) \bigcup C

Distributive:

 A \bigcap (B \bigcup C) = (A \bigcap B) \bigcup (A \bigcap C)

DeMorgan's Law:

 (A \bigcup B)^c = A^c \bigcap B^c \qquad (A \bigcap B)^c = A^c \bigcup B^c


Call a set abnormal if it contains itself as a member (otherwise it's normal)

Example: the set of all squares is not itself a square, so it does not contain itself, and is normal. The complementary set, containing all non-squares, is itself not a square, so it does contain itself, and is abnormal.

Consider the set of all normal sets. Is it normal or abnormal? If it were normal, it would be contained in itself (since it contains all normal sets), and would therefore be abnormal. If it were abnormal, it would be contained in itself; but it contains only normal sets, so it would have to be normal. (This is Russell's paradox.)

You can resolve this using more rigorous (axiomatic) set theory...


More Definitions:

Disjoint ("set" term) / mutually exclusive ("probability" term) - if the intersection of two sets is the null set, they are mutually exclusive

ie  A \bigcap B = \emptyset

Partition - take a group of sets; if the union of these sets is the sample space, and they are mutually exclusive, this is a partition

ie  \bigcup_i^\infty A_i = S \mbox{  and  } A_i \bigcap A_j = \emptyset \mbox{ for all } i \neq j



Distinction between probability theory that has a physical meaning (and is therefore "contaminated" by intuition) and a more abstract probability theory that doesn't have a corresponding physical meaning

Axiomatic probability theory (Kolmogorov)

A probability is a function that follows 3 axioms:

Sample space S

(The domain) \sigma-algebra \mathfrak{B} (a collection of subsets of S that contains S and is closed under complements and countable unions)

Function P -> probability over the domain \mathfrak{B}

1. P(A) \geq 0 for all A \in \mathfrak{B}

2. P(S) = 1

3. If A_1, A_2, \dots \in \mathfrak{B} are pairwise disjoint, then P( \bigcup_{i=1}^{\infty} A_i )= \sum_{i=1}^{\infty} P(A_{i}) (countable additivity)

In particular, for two disjoint sets A and B, P(A \bigcup B) = P(A) + P(B)

This is a mathematician's viewpoint: a clean definition, as long as we follow these rules, the function is a probability.


What is the probability of the null set?

Create a partition: S = {S, \emptyset}

The probability of the sample space is P(S) = 1

So P(\emptyset) = 1 - P(S) = 0

P(A) \leq 1

P(A^c) = 1-P(A)

If A \subset B then P(A) \leq P(B)

The size of the set is directly related to the probability...

Another way to do this is using measure theory (another route, besides rigorous set theory, that leads to probability theory)

wikipedia:Measure theory

wikipedia:Sigma-algebra

Bonferroni's inequality: P(A \bigcap B) \geq P(A) + P(B) - 1

Reading Assignment Discussion

Classical Definition of Probability (Laplace, 1812)

"If a random experiment can result in N mutually exclusive and equally likely outcomes and if N_A of these outcomes result in the occurrence of the event A, the probability of A is defined by P(A) = P{A} = {N_A \over N}."

Example: rolling a dice

Event A might be how many times we roll a 1... or how many times we roll a 1 or a 2...

What if one side of the die is weighted to favor 5? This definition doesn't work... There is no mathematical proof to show that the outcomes are mutually exclusive and equally likely.

Frequency

This is the limit, as the number of experimental trials performed N (trials must be performed under "identical" conditions) goes to infinity, of the relative frequency of the event of interest:


P(A) = \lim_{N \rightarrow \infty} {N_A \over N}

Limitations:

  • can never actually perform an infinite number of trials
  • what does "identical" mean? e.g. if you're performing a turbulence experiment, how can you initialize everything the exact same way?
  • (Tony): what's your infinity?
    • (Sean): whatever it is, it's not finite


Probability can be seen as lack of information (e.g. fluid mechanics is deterministic, so we could describe it if we know the state perfectly - we use probability to fill in/make up for the lack of knowledge)

Quantum theory: in the double-slit experiment, the very state of nature is random and needs to be described using probability

Probability of an outcome in the future: judging confidence based on prior experience... statement of confidence

Bayesian vs. frequentist


Side discussion:

wikipedia:Noether's theorem


Advantages/Disadvantages of Axiomatic Approach

It provides a rigorous mathematical framework, removing bias/preference

But, when you get a result, you can't necessarily specify the meaning

October 11, 2010

Derivation of Bonferroni's Inequality

Show:


P(A \bigcap B) \geq P(A) + P(B) - 1

Proof:


P(A \bigcup B) = P(A) + P(B) - P(A \bigcap B)

P(A \bigcap B) = P(A) + P(B) - P(A \bigcup B)

But in general,


P(A \bigcup B) \leq 1

Plugging this back in,


P(A \bigcap B) \geq P(A) + P(B) - 1

On Wikipedia: more general case; more than two sets

wikipedia:Boole's inequality


Conditional Probability and Bayes' Rule

if A,B \in S, and P(B) \neq 0,


P(A|B) = \frac{P(A \bigcap B)}{P(B)}

And as a result, we get Bayes' Rule:


P(A|B) = P(B|A) \frac{ P(A) }{ P(B) }

Derivation:


P(B|A) \times \frac{ P(A) }{ P(B) } = \frac{ P(B \bigcap A) }{ P(A) } \times \frac{ P(A) }{ P(B) } = \frac{ P(A \bigcap B) }{ P(B) } = P(A|B)

Example application of conditional probability:

Run a combustion simulation... "Given that the temperature is in range X, what is the concentration range?"

Statistical Independence

Definition: P(A \bigcap B) = P(A) * P(B)

Consequences:

P(A|B) = P(A)


Random Variable

wikipedia:Random variable

A random variable is a mapping from a sample space to real numbers.

Example: Dice roll

Mapping rolls to a set {1,2,3,4,5,6}

Example: Morse code

Looking at statistics of morse code...

Mapping dots and dashes to {0,1} (or alternatively {1,2})

IMPORTANT: Sample space is different from the random variable


Induced Probability Function

Let P(B) be defined for events B \in \mathfrak{B} on S, with outcomes s_{i}

(e.g. there's a sample space, and in the sample space there are events...

Before, we were talking about probability of events. Now, we're talking about probability of a random variable)

Random variable X(s) with range \mathcal{X}, and a set A \subset \mathcal{X}

P_{X} (X \in A) = P( \{ s_{i} \in S : X(s_{i}) \in A \} )

The s_{i} are the outcomes that X maps into A


Cumulative distribution function

Definition:

F_{X}(x) = P_{X} (X \leq x) \forall x

Example: Dice roll

Probability of rolling a given number is constant (1/6)

Cumulative distribution function is a staircase (a step function): for x=1, cumulative probability is 1/6; for x=2, cumulative probability is 2/6; and so on, with a jump of 1/6 at each integer and flat in between.
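As a quick illustration (a minimal Python sketch, not from the lecture), the CDF of a fair die can be built directly from its PMF, and the staircase behavior checked:

```python
from fractions import Fraction

# PMF of a fair six-sided die: P(X = k) = 1/6 for k = 1..6
pmf = {k: Fraction(1, 6) for k in range(1, 7)}

def cdf(x):
    """F_X(x) = P(X <= x): sum the PMF over all k <= x (a staircase)."""
    return sum((p for k, p in pmf.items() if k <= x), Fraction(0))

assert cdf(0.5) == 0                # no mass below 1
assert cdf(1) == Fraction(1, 6)
assert cdf(2.9) == Fraction(2, 6)   # flat between the jumps at 2 and 3
assert cdf(6) == 1
```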

This definition is more general than the integral (density) definition, because the integral definition requires a cumulative distribution function that is differentiable

Sometimes there will be situations where a probability density function can't be defined, and only a cumulative distribution function can be defined

F_{X} \geq 0

Further information:


\displaystyle{ \lim_{x \rightarrow -\infty} F_{X}(x) } = 0


\displaystyle{ \lim_{x \rightarrow +\infty} F_{X}(x) } = 1


Example

Given 5% of men are colorblind, and 0.25% of women are colorblind: if a person is chosen at random (assume the population is half men, half women) and they are colorblind, what is the probability of their gender?


\begin{align}
P(CB|M) &=& 0.05 \\
P(CB|W) &=& 0.0025 \\
P(M|CB) &=& ?
\end{align}


P(M|CB) = \frac{ P(M \bigcap CB)}{P(CB)} = \frac{ P(CB|M) P(M) }{ P(CB) } = \frac{ (0.05) (0.50) }{ P(M) P(CB|M) + P(CB|W)P(W) } = 0.952
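The arithmetic can be checked with a short Python sketch (the 50/50 gender split is the prior assumed in the problem):

```python
# Assumed priors from the problem setup: half men, half women
p_m, p_w = 0.50, 0.50
p_cb_m, p_cb_w = 0.05, 0.0025      # P(CB|M) and P(CB|W)

# Law of total probability gives the denominator P(CB)
p_cb = p_cb_m * p_m + p_cb_w * p_w

# Bayes' rule: P(M|CB) = P(CB|M) P(M) / P(CB)
p_m_cb = p_cb_m * p_m / p_cb
assert round(p_m_cb, 3) == 0.952
```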


Identically Distributed

For two random variables X, Y, and an event A: they are identically distributed if P(X \in A) = P(Y \in A)

We have an experiment, which we run a trial of, and we get an outcome. Two random variables are identically distributed if they assign the same probability to every event A.

The actual values of X, Y are not important. e.g., the probability of rolling a 2 on a die is the same as rolling a 4 on a die, so they are identically distributed, even though 2 and 4 are not equal.


Probability Mass Function, Probability Density Function

Difference: one's continuous, one's discrete

Probability Mass Function (PMF): f_{X}(x) = P(X=x)

  • Discrete cases only

Probability Density Function (PDF): f_{X}(x) = \frac{ d }{ dx } F_{X}(x)

  • Continuous cases only

where F_{X} is the cumulative distribution function:


F_{X}(x) = \displaystyle{ \int_{-\infty}^{x} f_{X}(x') dx' }


Given properties of the CDF, what are the properties of the PDF?

  • \displaystyle{ \int_{-\infty}^{\infty} f_{X}(x) dx } = 1
  • If CDF is monotonically increasing, analogous property for PDF is f_{X}(x) \geq 0

Above two properties are necessary and sufficient conditions for a function to be considered a PDF.

Further properties:

  •  P(a \leq X \leq b) = \displaystyle{ \int_{a}^{b} f_{X}(x) dx } = F_{X}(b) - F_{X}(a)


Example

Assume \lambda > 0 is a fixed positive constant

define function:


f(x) = 
\begin{cases}
\frac{1}{2} \lambda \exp(-\lambda x) & x \geq 0 \\
\frac{1}{2} \lambda \exp(\lambda x) & x < 0 
\end{cases}

Part a

Show that f(x) is a PDF.

f(x) \geq 0 \qquad \forall x


\begin{align}
\int_{-\infty}^{+\infty} f(x) dx &=& \int_{-\infty}^{0} f(x) dx + \int_{0}^{\infty} f(x) dx \\
 & = & \frac{1}{2} \lambda \frac{1}{\lambda} \exp(\lambda x) \vert_{-\infty}^{0} + 
\frac{1}{2} \lambda ( - \frac{1}{\lambda} ) \exp(- \lambda x) \vert_{0}^{+\infty}  \\
 & = & 1
\end{align}

Part b

Find the probability that the random variable X is less than t, i.e. P(X<t) \quad \forall t \in \mathbb{R}


P(X < t) = \int_{-\infty}^{t} f(x) dx

For this one, you have to treat the two cases t < 0 and t \geq 0 separately.


\int_{-\infty}^{t} f(x) dx =
\begin{cases}
\frac{1}{2} \exp( \lambda t ) & t < 0 \\
1 - \frac{1}{2} \exp(- \lambda t) & t \geq 0 
\end{cases}

Another name for this is the cumulative distribution function.
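A numeric sanity check (Python sketch; the value lambda = 1.3 is an arbitrary choice, not from the notes) that the density integrates to 1 and that the piecewise CDF matches direct integration:

```python
import math

lam = 1.3   # arbitrary fixed lambda > 0 for the check

def f(x):
    """Two-sided exponential density from the example."""
    return 0.5 * lam * math.exp(-lam * abs(x))

def F(t):
    """Piecewise CDF derived above."""
    if t < 0:
        return 0.5 * math.exp(lam * t)
    return 1.0 - 0.5 * math.exp(-lam * t)

def riemann(a, b, n=200000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Total probability is 1 (the tails beyond |x| = 30 are negligible)
assert abs(riemann(-30, 30) - 1.0) < 1e-6
# The integral up to t matches the closed-form CDF
for t in (-1.0, 0.5, 2.0):
    assert abs(riemann(-30, t) - F(t)) < 1e-6
```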

Transformations

Expectation

Definition:


E( g(x) ) = \displaystyle{ \int_{-\infty}^{+\infty} g(x) f_{X}(x) dx }

Linearity:


E( a g_{1}(X) + b g_{2}(X) + c) = a E(g_{1}(X)) + b E(g_{2}(X)) + c

Positivity:

If the function g(x) > 0 \forall x, then E(g(X)) > 0

If g_{1}(x) > g_{2}(x) \forall x, then E(g_{1}(X)) > E(g_{2}(X))

Moments:

nth moment \mu_{n} = E( X^{n} )

Central moments:

nth central moment \mu^{\prime}_{n} = E( (X - \mu_{1})^{n} )

2nd central moment: variance


Moment Generating Function (MGF)


M_{X}(t) = E( \exp( t X ) )

If you're looking at M(-t), it's a Laplace transform. If it's M(-it), it's a Fourier transform.

wikipedia:Moment generating function


\begin{align}
M_{X}(t) &=& \int_{-\infty}^{+\infty} \exp( tx ) f_{X}(x) dx \\
 &=& \int_{-\infty}^{+\infty} ( 1 + tx + \frac{t^2 x^2}{2!} + \frac{t^3 x^3}{3!} + ... ) f_{X}(x) dx \\
 &=& \mu_{0} + t \mu_{1} + \frac{t^2}{2!} \mu_{2} + ... \\
\mu_{n} &=& M_{X}^{(n)}(0)
\end{align}

So knowing all the moments is equivalent to knowing the PDF (provided the MGF exists in a neighborhood of t = 0).
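The moment-extraction property \mu_n = M^{(n)}(0) can be checked numerically, e.g. for a fair die (a Python sketch using central finite differences; the step size h is an arbitrary choice):

```python
import math

def M(t):
    """MGF of a fair die: E[exp(tX)], X uniform on {1,...,6}."""
    return sum(math.exp(t * k) for k in range(1, 7)) / 6

h = 1e-5   # finite-difference step (arbitrary small value)

# First derivative of M at t=0 gives the first moment E(X) = 3.5
mu1 = (M(h) - M(-h)) / (2 * h)
# Second derivative at t=0 gives the second moment E(X^2) = 91/6
mu2 = (M(h) - 2 * M(0) + M(-h)) / h**2

assert abs(mu1 - 3.5) < 1e-6
assert abs(mu2 - 91 / 6) < 1e-3
```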


October 14, 2010

Bernoulli Trial - two outcomes, known probability for each (e.g. heads or tails, or picking black and white marbles out of an urn)

Binomial distribution - probability of k successes in n Bernoulli trials

Normal distribution (and central limit theorem) - the sum of a bunch of independent random variables with the same mean and variance approaches a Gaussian distribution

When doing an experiment - if there are a whole bunch of causes of error, all the errors add up, and you can expect a normal (Gaussian) distribution
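A quick simulation of this idea (Python sketch; summing twelve Uniform(0,1) "error sources" is an arbitrary choice, giving a sum with mean 6 and variance 1):

```python
import random
import statistics

random.seed(42)

# Each "error source" is Uniform(0,1): mean 1/2, variance 1/12.
# The sum of 12 of them has mean 6 and variance 1, and by the CLT
# its distribution is approximately Normal(6, 1).
sums = [sum(random.random() for _ in range(12)) for _ in range(50000)]

assert abs(statistics.mean(sums) - 6.0) < 0.05
assert abs(statistics.pvariance(sums) - 1.0) < 0.05
# About 68% of the mass lies within one standard deviation of the mean
frac = sum(1 for s in sums if abs(s - 6.0) < 1.0) / len(sums)
assert abs(frac - 0.68) < 0.02
```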


October 18, 2010

Example 1: Expectation, CDF

Let x be a continuous non-negative random variable

Let f denote the PDF

f(x) = 0 for x < 0

Show that E(X) = \displaystyle{ \int_{0}^{\infty} \left( 1-F_{X} (x) \right) dx }

x = \int_{0}^{x} 1 \, dx'

Next, substitute this into the definition of the expectation, and swap the order of integration:

E(X) = \int_{x=0}^{x=\infty} \left[ \int_{x'=0}^{x'=x} 1 \, dx' \right] f(x) dx = \int_{x' = 0}^{x' = \infty} \left[ \int_{x=x'}^{x=\infty} f(x) dx \right] dx'

where the quantity in square brackets is \int_{x'}^{\infty} f(x) dx = 1 - F_{X}(x'), so

 = \int_{0}^{\infty} \left( 1 - F_{X} (x') \right) dx'

Example 2: Choosing Keys

A man has a set of N keys. He wants to open his door, which will open with exactly 1 key, but he doesn't know which one, and he is trying keys at random.

Part A

Find the mean number of trial attempts.


Use a negative binomial: http://en.wikipedia.org/wiki/Negative_binomial_distribution

Specifically, use a geometric distribution: http://en.wikipedia.org/wiki/Geometric_distribution

So we know that the expectation of the geometric distribution is:

E(X) = \frac{1}{p}

where p is the probability of success in the Bernoulli trial,

p = \frac{1}{n}

So that the mean number of trial attempts is:

E(X) = n


Part B

What if there is no replacement?

With no attempts,

P(X = 0)=0

After the first attempt,

P(X=1) = \frac{1}{n}

After the second attempt,

P(X=2) = \frac{ (n-1) }{ n (n-1) }

And the third attempt,

P(X=3) = \frac{ (n-1)(n-2) }{ n (n-1) (n-2) }

And so on. Each time, all terms cancel out except

P(X=x) = \frac{1}{n}

So we can take the expectation of that:

E(X) = \displaystyle{ \sum_{x=1}^{n} } \frac{x}{n}

And using the formula for the first n counting numbers,

E(X) = \frac{1}{n} \frac{n (n+1)}{2}

E(X) = \frac{n+1}{2}
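Both parts can be checked by simulation (Python sketch; n = 10 keys and the trial count are arbitrary choices):

```python
import random
import statistics

random.seed(0)
n = 10          # number of keys (arbitrary choice)
trials = 20000

def tries_with_replacement():
    """Try keys uniformly at random, possibly repeating (geometric)."""
    count = 0
    while True:
        count += 1
        if random.randrange(n) == 0:   # say key 0 is the right one
            return count

def tries_without_replacement():
    """Try keys in a random order without repeats."""
    keys = list(range(n))
    random.shuffle(keys)
    return keys.index(0) + 1           # position of the right key

with_rep = statistics.mean(tries_with_replacement() for _ in range(trials))
no_rep = statistics.mean(tries_without_replacement() for _ in range(trials))

assert abs(with_rep - n) < 0.5             # geometric: E(X) = n
assert abs(no_rep - (n + 1) / 2) < 0.2     # uniform on 1..n: E(X) = (n+1)/2
```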

Multivariate

Set theory

Moved into random variables (mapping from event space to the real numbers)

Now we want to do a new type of mapping into multiple random variables


n-dimensional random vector - a mapping from a sample space into \mathbf{R}^n Euclidean space.

Joint Probability Mass Function: f_{X,Y,\dots} (x,y,\dots) = P(X=x, Y=y, \dots )

Joint Probability Density Function: P( {X, Y, \dots \in A} ) = \int_{A} \dots \int f_{X,Y,\dots} (x,y,\dots) dx dy \dots

Joint Cumulative Distribution Function: P(X \leq x, Y \leq y, \dots ) = F_{X,Y,\dots} (x,y,\dots) = \int_{-\infty}^{x} \dots \int_{-\infty}^{y} f_{X,Y,\dots} (x,y,\dots) dx dy \dots

Question: Why is joint PDF defined in terms of P, whereas the univariate PDF is defined in terms of the CDF?

Answer: Boundaries of multivariate PDFs are often non-trivial, and are not nice even "rectangles"... You need to know the boundaries of the PDF really well to use the CDF, so the joint CDF is not used as often.

Joint Conditional PDF: f_{X|Y} (x|y) = P(X=x|Y=y) = \frac{ f_{X,Y} (x,y) }{ f_{Y} (y) }

The conditional PDF is just a renormalization.

But how is f_{Y} (y) defined, if it's a multivariate PDF?

f_{Y} (y) = \int_{x=-\infty}^{\infty} f_{X,Y} (x,y) dx

This is the marginal PDF...

Marginal PDF:  f_{X} (x) = \int_{Y} \dots \int_{Z} f_{X,Y,\dots,Z} (x,y,\dots,z) dz \dots dy

Definition of independence of X and Y: f_{X,Y} (x,y) = f_{X} (x) f_{Y} (y)

  • P ( { X \in A, Y \in B } ) = P({ X \in A}) * P({ Y \in B })
  • E(g(X) h(Y)) = E(g(X)) * E(h(Y))
  • If Z = X + Y, then the moment generating function of Z, M_{Z} (t) = M_{X} (t) * M_{Y} (t)
    • This was used in deriving Central Limit Theorem


X ~ Norm(\mu_{x}, \sigma_{x}^2)

Y ~ Norm(\mu_{y}, \sigma_{y}^2)

If X and Y are independent, Z = X+Y ~ Norm( \mu_{x} + \mu_{y}, \sigma_{x}^2 + \sigma_{y}^2 )


E(X) = E(E(X|Y))

Define a new variable: covariance

Covariance: Cov(X,Y) = E((X-\mu_{x})(Y-\mu_{y})) = E(XY) - \mu_{X} \mu_{Y}

Correlation: \rho_{XY} = \frac{ Cov(X,Y) }{\sigma_{X} \sigma_{Y}}

Note: Just because the covariance is 0 does not mean that X and Y are independent, i.e. it doesn't imply E(XY) = E(X) E(Y)

Bivariate Normal Distribution

  • Means \mu_{X}, \mu_{Y}
  • Variances \sigma_{X}^2, \sigma_{Y}^2
  • Correlation \rho


f_{X,Y} (x,y) = 
\displaystyle{ 
  \frac{ 
    \exp{ \left( \frac{-1}{2(1-\rho^2)} 
      \left[ 
        \left( \frac{x-\mu_{x}}{\sigma_{x}} \right)^2 
        - 2 \rho \left( \frac{x - \mu_{x}}{\sigma_{x}} \right) \left( \frac{y-\mu_{y}}{\sigma_{y}} \right) 
        + \left( \frac{y-\mu_y}{\sigma_y} \right)^2 \right] \right)
    }
  }{   
    2 \pi \sigma_x \sigma_y \sqrt{ 1 - \rho^2 }   
  }
}

Transform of a PDF

\overrightarrow{X}, f_{ \overrightarrow{X} } ( \overrightarrow{x} )

\overrightarrow{ U } \rightarrow U_{1} = g_{1} ( \overrightarrow{X} ), U_{2} = g_{2} ( \overrightarrow{X} ), \dots

h = g^{-1}

J = det \left( \left[ \frac{\partial h_i}{\partial u_j} \right]_{i,j} \right)

The transformed PDF is:

f_{\overrightarrow{U}} (\overrightarrow{u}) = f_{x} \left[ h_1 (\overrightarrow{u}), h_2 (\overrightarrow{u}), \dots \right] \cdot \vert J \vert


Multivariate Example 1

X and Y have the distribution:

          X
          1             2             3
Y    2    \frac{1}{12}  \frac{1}{6}   \frac{1}{12}
     3    \frac{1}{6}   0             \frac{1}{6}
     4    0             \frac{1}{3}   0


Part A

Show that X and Y are not independent.


One way: show that the covariance is nonzero.


E(XY) = 
\displaystyle{
  \sum_{i=1}^{3} \sum_{j=2}^{4}
  x_{i} y_{j} f_{X,Y}
}


A more fundamental way: using the definition of independence


f_{X,Y} (x,y) =? f_{X} (x) * f_{Y} (y)

So sum up each row/column and put it in a new row/column


          X
          1             2             3             (sum)
Y    2    \frac{1}{12}  \frac{1}{6}   \frac{1}{12}  \frac{1}{3}
     3    \frac{1}{6}   0             \frac{1}{6}   \frac{1}{3}
     4    0             \frac{1}{3}   0             \frac{1}{3}
(sum)     \frac{1}{4}   \frac{1}{2}   \frac{1}{4}

Then show that the product of the two marginal PDFs is not equal to the joint PDF value

i.e. pick row i and column j; if the marginal for row i (the sum of the joint PDF across the whole row i) times the marginal for column j (the sum down the whole column j) does not equal the joint PDF at location (row i, col j), then the definition of independence is not met

Part B

Give a probability table for random variables U and V with the same marginals as X and Y but are independent.

So, we want to keep the "sum" column and row. Then we want to multiply the sum for row i by the sum for column j,


f_{U,V} = f_{U} * f_{V}


f_{U} = f_{X}; f_{V} = f_{Y}

U is a discrete binomial distribution

V is uniformly distributed
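Both parts can be verified with exact rational arithmetic (a Python sketch of the table bookkeeping described above):

```python
from fractions import Fraction as Fr
from itertools import product

xs, ys = (1, 2, 3), (2, 3, 4)
joint = {
    (1, 2): Fr(1, 12), (2, 2): Fr(1, 6), (3, 2): Fr(1, 12),
    (1, 3): Fr(1, 6),  (2, 3): Fr(0),    (3, 3): Fr(1, 6),
    (1, 4): Fr(0),     (2, 4): Fr(1, 3), (3, 4): Fr(0),
}

# Marginals: sum over the other variable
fx = {x: sum(joint[(x, y)] for y in ys) for x in xs}
fy = {y: sum(joint[(x, y)] for x in xs) for y in ys}

# Part A: at least one cell violates f(x,y) = f(x) f(y),
# so X and Y are not independent
assert any(joint[(x, y)] != fx[x] * fy[y] for x, y in product(xs, ys))

# Part B: the product of the same marginals IS a valid independent table
indep = {(x, y): fx[x] * fy[y] for x, y in product(xs, ys)}
assert sum(indep.values()) == 1
assert all(sum(indep[(x, y)] for y in ys) == fx[x] for x in xs)
assert all(sum(indep[(x, y)] for x in xs) == fy[y] for y in ys)
```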


Notes

If they're independent, they WILL have a zero covariance

(So it follows that, if the covariance is nonzero, there is no way they can be independent)

But, just because the covariance is zero doesn't mean they are independent


October 21, 2010

Review

Hypergeometric - out of n objects, picking k objects (Bernoulli-type draws) without replacement, the probability that x of them are successes

Binomial distribution - out of n objects, picking k objects (Bernoulli trials) with replacement, the probability that x of them are successes

Normal distribution - derived using the Central Limit Theorem; the central concept is, if you take the sum of a large number of independent random variables with the same mean and variance, the sum approaches a normal distribution

Independence of random variables - the joint PDF is equal to the products of the marginal PDFs

Joint normal distribution - determinant of covariance matrix shows up


\Sigma = 
\left[
\begin{array}{cc}
\sigma_{X_1}^2 & \sigma_{X_1 X_2} \\
\sigma_{X_1 X_2} & \sigma_{X_2}^2
\end{array}
\right]

If we then find the eigenvalues of this matrix, and use these as the covariance matrix (diagonal), then some ellipsoidal, skewed distribution would become a normal distribution composed of nice circles.

Also: can define Y = \Sigma^{-1/2} [X - \mu] to center the distribution around the means and decorrelate the components.

What if you have a kidney-shaped distribution?

You can define a new variable Y, and use the covariance matrix, and get a "rotated" distribution, centered around the means, that's sort-of circular. But the kidney shape will still remain.

\sigma_{Y_j Y_i} = 0, i \neq j

Covariance (correlation) is 0, but the two variables are not independent. (i.e. using a mathematical trick to make the covariance 0, but the joint distribution is not equal to the product of the two marginal PDFs).

You cannot make the distribution fit into a circle, because then you're throwing away information that's important.

To first order, you may be able to approximate it as a joint-normal distribution. (But this is like approximating a human as a sphere).


If you have an ellipsoidal distribution that's extremely squished in one direction,

X_2 \approx g(X_1) \approx a X_1

Homework Problem 1

Let the number of chocolate chips in a certain type of cookie have a Poisson distribution. We want the probability that a randomly chosen cookie has at least two chocolate chips to be greater than 0.99. Find the smallest value of the mean of the distribution that ensures this probability.

Poisson distribution:


P(X=x) = \frac{ \lambda^{x} e^{-\lambda} }{ x! }

The question is asking for: P({X \geq 2}) > 0.99

Rewriting using a less-than sign: P({ X < 2 }) < 0.01

What is the value of \lambda that satisfies

CDF: F(x) = \frac{ \Gamma( x+1 , \lambda ) }{ x! }, where \lambda is the mean and \Gamma is the incomplete Gamma function

We want to plug 2 into the CDF, set it equal to 0.01, and do a nonlinear solve to find values of \lambda


We know x < 2, so it can be 0 or 1, and plugging this into expression for Poisson distribution:

\frac{ \exp{(-\lambda)} \lambda^{0} }{ 0! } + \frac{ \exp{(-\lambda)} \lambda^{1} }{ 1! } < 0.01

(replace the < sign with an = sign... and solve for \lambda...)


\lambda = 6.638
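The nonlinear solve is easy to do by bisection, since P(X < 2) = e^{-\lambda}(1 + \lambda) is decreasing in \lambda (Python sketch; the bracket [0, 50] is an arbitrary choice):

```python
import math

def p_less_than_2(lam):
    """P(X < 2) = P(X=0) + P(X=1) for X ~ Poisson(lam)."""
    return math.exp(-lam) * (1 + lam)

# p_less_than_2 is decreasing in lam, so bisect for the root of = 0.01
lo, hi = 0.0, 50.0          # bracket (arbitrary upper bound)
for _ in range(100):
    mid = (lo + hi) / 2
    if p_less_than_2(mid) > 0.01:
        lo = mid            # still too much mass below 2: increase lam
    else:
        hi = mid

lam = (lo + hi) / 2
assert abs(p_less_than_2(lam) - 0.01) < 1e-12
assert abs(lam - 6.638) < 0.01
```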

Homework 2

Two movie theaters compete for the business of one thousand customers. Assume that each customer chooses between the movie theaters independently and with indifference (p=1/2). Find an expression for N, the number of seats in your theater, that will result in the probability of turning away a customer (due to filling the seats) being less than one percent. First use the discrete distribution and then use the continuous approximation.

Problem is asking for N, such that  P({K > N}) < 0.01

Binomial distribution: "What is the probability that n identical but independent bernoulli trials result in k successes?"

  • Are we doing this with replacement or not?
  • P stays the same - probability is 1/2 for each Bernoulli trial (whether someone picks our movie theater or not)
  • So that's like having replacement

K \sim Binomial(n=1000, p=0.5) = \left( \begin{array}{c} n \\ k \end{array} \right) p^k (1-p)^{n-k}

Want the probability of everything under the curve, for K > N:

For the CDF: since it is discrete, the actual value will fall somewhere between two discrete values. Which one do we use?

  • we want to pick the one to the right - to make sure we have an extra seat

Using a binomial distribution web app (n=1000, p=0.5) to plot the PDF and CDF:

Number of seats: 537 (makes sense - half of 1000 is 500, plus a margin of a couple standard deviations to cover a busier-than-average night)

Part 2:

How do we do this with the normal (continuous) distribution? How would d'Alembert do this?
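Both the exact (discrete) answer and the normal approximation can be computed with the standard library (Python sketch; the continuity correction of half a seat is a standard choice, not from the notes):

```python
import math

n, p = 1000, 0.5

def binom_cdf(k):
    """P(K <= k) for K ~ Binomial(n, p), summed exactly."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k + 1))

# Exact: smallest N with P(K > N) < 0.01, i.e. P(K <= N) > 0.99
N = 500
while 1.0 - binom_cdf(N) >= 0.01:
    N += 1
assert N == 537

# Continuous approximation: K ~ Normal(np, np(1-p)),
# with a continuity correction of half a seat
mu, sigma = n * p, math.sqrt(n * p * (1 - p))
def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

N_approx = 500
while 1.0 - phi((N_approx + 0.5 - mu) / sigma) >= 0.01:
    N_approx += 1
assert N_approx == 537
```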

Homework 3

For the joint pdf f_{X,Y}(x,y) = c(x+2y) over 0<y<1 and 0<x<2 (the pdf is zero elsewhere), find the coefficient c.

Also, find the joint CDF and the two marginal PDFs.

First part: The integral over the entire domain must be 1. c = \frac{1}{4}

Second part:

F_{X,Y} (x,y) =


\begin{align}
\frac{1}{8} xy (x+2y) & \dots & 0 < x < 2, 0 < y < 1 \\
\frac{1}{4} x(\frac{1}{2}x + 1) & \dots & 0 < x < 2, y \geq 1 \\
\frac{1}{2} y (1+y) & \dots & x \geq 2, 0 < y < 1 \\
1 & \dots & x \geq 2, y \geq 1 \\
0 & \dots & \mbox{elsewhere}
\end{align}

Challenge: define a new variable z = \frac{9}{ (X+1)^2 }, and find the marginal PDF of z.
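The coefficient and the interior CDF can be checked numerically (Python sketch; integrating f over (0,x) x (0,y) gives F(x,y) = xy(x+2y)/8 on the interior):

```python
def f(x, y):
    """Joint PDF with c = 1/4 on 0 < x < 2, 0 < y < 1, zero elsewhere."""
    return 0.25 * (x + 2 * y) if (0 < x < 2 and 0 < y < 1) else 0.0

def integral(x_hi, y_hi, nx=400, ny=200):
    """Midpoint-rule double integral of f over (0, x_hi) x (0, y_hi)."""
    hx, hy = x_hi / nx, y_hi / ny
    return sum(f((i + 0.5) * hx, (j + 0.5) * hy)
               for i in range(nx) for j in range(ny)) * hx * hy

def F(x, y):
    """Joint CDF on the interior region, from integrating f directly."""
    return 0.125 * x * y * (x + 2 * y)

# c = 1/4 normalizes the density: total probability is 1
assert abs(integral(2, 1) - 1.0) < 1e-9
# The numeric integral matches the closed-form CDF at interior points
for x, y in [(1.0, 0.5), (1.5, 0.8), (2.0, 1.0)]:
    assert abs(integral(x, y) - F(x, y)) < 1e-9
```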

Homework 4

For two independent random variables X and Y with moment generating functions M_X(t) and M_Y(t), show that the sum of these variables, Z = X+ Y, has a moment generating function M_Z(t) = M_X(t) M_Y(t).

Suggestion: if you first show that for independent variables E(g(X) h(Y)) = E(g(X)) E(h(Y)) then you can use the "cool" way of writing the moment generating function to give this proof in about four steps.

Given: definition of independence


f_{X,Y} (x,y) = f_{X}(x) f_{Y}(y)


E(g(X) h(Y)) = \int_{-\infty}^\infty \int_{-\infty}^\infty f_{X,Y}(x,y) g(x) h(y) \, dx \, dy

since  f_{X,Y} (x,y) = f_{X}(x) f_{Y}(y)


E(g(X) h(Y)) = \int_{-\infty}^\infty \int_{-\infty}^\infty f_{X} f_{Y} g(x) h(y) \, dx \, dy = \int_{-\infty}^\infty f_X g(x) dx \int_{-\infty}^\infty f_Y h(y) dy = E(g(X)) E(h(Y))

Now, let's look at two particular functions for g and h:

Suppose  g(X) = \exp{(tX)} and  h(Y) = \exp{(tY)}

E( \exp{(tX))} E( \exp{ (tY) } ) = E( \exp{(tX)} \exp{(tY)} ) = E( \exp{(t (X+Y)) } ) = E( \exp{(tZ)} ) = M_Z(t)

October 25, 2010

Conditional Expectation


\langle g(X) \vert Y \rangle = \displaystyle{
  \int_{-\infty}^{\infty} g(x) f_{X \vert Y} dx 
= \int_{-\infty}^{\infty} g(x) \frac{ f_{X,Y} }{ f_{Y} } dx 
= \frac{ 1 }{ f_{Y} } \int_{-\infty}^{\infty} g(x) f_{X,Y} dx
}

This is also denoted as E( g(X) \vert Y ). Also, Y is supposed to be an event, not just a variable, that you are conditioning on (all of this is built up from set theory).

E(g(X) \vert Y=a) = \langle g(X) \vert Y=a \rangle


Stochastic Processes

Sean's experience: intimidating topic, due to the approach of most professionals being one of rigor.

Want to dispel this: from what we've covered, we already know everything we need to know

Stochastic process - just like a random variable, but slightly more generalized

Random variable / trials / experiments:

We will allow an experimental trial to result in a family of outcomes, parameterized by a variable (universally denoted as t)

Now you can create a mapping, and the stochastic process is the result of the mapping


X(t)

When we substitute in a particular parameter value, e.g. t_1, X(t_1) becomes an ordinary random variable.

So X(t) is the family of outcomes, parameterized by t

Example: six random clock digits (e.g. \mathbf{7, 3, 6, 1, 2, 0}); we push a button, and the numbers all change

  • one approach: treat this as one large number
  • another approach: each number is random, and can find a joint PDF between them
  • stochastic process approach: assign a t value to each digit
    • t = 0, 1, 2, 3, 4, 5 - one for each digit
    • X(t=0) is the family of outcomes for the first digit
    • Set theory: mapping from the digits that show up, to the random variable X

Sample space: S = { \mathbf{0, 1, 2, 3, 4, 5, 6, 7, 8, 9} } (possible values that our clock digits can take on)

Random variable: X = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}

The interesting fact is, there may be a sequence, so that they all depend on one another.


Shower Example

Water hits the wall of the shower, runs down in serpentines

Random paths that jump around along the wall

Stochastic shower.png

One experiment: a snapshot of the serpentine, and it's particular path.

The coordinate of length down the shower is t

The coordinate of horiz. displacement is X(t)

Can look at f_{ X(t_n),X(t_m)}, or f_{X(t_1) \vert X(t_0)}, etc.

Important to maintain that we have a parameter t, because for a given experiment, there is (or may be) some connection between X(t) for all of the t's

e.g. for the serpentines - we know the path a serpentine takes follows some pattern (because it creates some curve)


Stock Market Example

Random variable: value of stock

The stock price depends only on how much someone is willing to pay for it. But we can only know the price of the stock when there is a transaction.

So actually the stock price is a set of steps - a piecewise-constant function, not a continuous one

Stochastic stock.png


Types of Stochastic Processes

A single realization of a stochastic process (a single realization of X(t)) can be:

  • Continuous/smooth
  • Discontinuous
  • Piecewise



Thoughts

Each value of t can have a separate probability distribution

And we can have X(t), and Y(t), and Z(t), etc.

Recall: frequentist

If we had a perfect flow machine, and a perfect experimental instrument... and running a turbulent flow experiment

Turbulent experiment is a stochastic process

We run the experiment, and we measure the velocity u,v,w as a function of time.

Velocity is smooth and continuous, and may be related by some physics that we know

We run the experiment over and over and over again to get statistics.

Maybe we look at the velocity at another point...

Or maybe we look at it EVERYWHERE

So then we have x,y,z,t as stochastic parameters - MULTIPLE stochastic process parameters

The value at a certain point u(x,y,z,t) is a random variable - parameterized on x,y,z,t

Distinction: ensemble average vs. time average

  • When we run 10 experiments, by gathering velocity data at a certain point
  • Averaging over time for a single experiment, is different from averaging over the 10 experimental values for a given t


U, V Velocity Example

Continuing with the flow experiment...

Looking at f_{U(t_1),U(t_2)} and their correlation \rho_{U(t_1), U(t_2)}

The correlation drops from 1 to 0 as time goes on

This is true of all turbulent velocity fields, due to the strong dependence on initial conditions

The correlation curve has to be continuous, since the velocity field is continuous and smooth in time.

Timescale it takes the correlation to go to zero: related to the wikipedia:Lyapunov exponent (the rate at which two nearby trajectories diverge and become uncorrelated)

Turbulence: tradeoff between strong correlation (due to direction-change by eddies), which will go from +1 to negative, and the decaying exponential

Turbulence autcorrelation.png


Markov Processes and Bernoulli Processes

We have a Bernoulli trial (outcome is 0 or 1), for a bunch of experiments.

The outcome X(t) (where t is the experiment number) is either 0 or 1 - but it is like the clock example given earlier, because each of the experiments is independent:

f_{X(t_1) \vert X(t_0)} = f_{X(t_1)}


Markov process - the PDF at some time parameter, conditioned on all of the previous times, is only dependent on the previous time


f_{ X(t_{n+1}) \vert X( t_{n} ), X( t_{n-1} ), X( t_{n-2} ), \dots } = f_{ X( t_{n+1} ) \vert X( t_{n} ) }

Markov chain: each link in the chain is only connected to the previous link

Examples:

Card games: cards represent a memory of the past moves (NOT a Markov process)

Chutes and Ladders: where you end up next depends ONLY on your current location and what you roll on the dice

Following are some examples of subsets of Markov processes. Each person made a set of assumptions, and showed that the result had some cool properties. No information is given about what they proved... just some nomenclature so we're familiar with it.


Martingale Process

http://en.wikipedia.org/wiki/Martingale_%28betting_system%29

This is similar to a Bernoulli process, but the outcomes add up:

X(t+1) = X(t) + B

where B is a random variable from a Bernoulli-type trial, taking the value -1 or 1, each with probability 0.5.

So if we take the expectation of B, it's 0 - it is equally likely to be 1 or -1.

\langle X(n+1) \vert X(n) \rangle = X(n)
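A quick simulation of this martingale (Python sketch; the walk length and sample count are arbitrary choices):

```python
import random
import statistics

random.seed(1)

def walk(steps):
    """Symmetric random walk: X(t+1) = X(t) + B, B = -1 or +1, p = 1/2."""
    x = 0
    for _ in range(steps):
        x += random.choice((-1, 1))
    return x

finals = [walk(50) for _ in range(20000)]

# E(B) = 0, so the expected position never drifts away from X(0) = 0
assert abs(statistics.mean(finals)) < 0.25
# The spread still grows with time: Var(X(n)) = n
assert abs(statistics.pvariance(finals) - 50) < 3
```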


Levy Process

Is a Markov process...

Condition 1: X(t=0) = 0

Condition 2: the increments X(t_n) - X(t_{n-1}) are independent, for all n

Condition 3: the increments are stationary - the distribution of X(t+s) - X(t) depends only on s

Example (to clarify)...


Wiener Process

wikipedia:Wiener process

Fits within Levy processes


X( t_{n+1} ) = X( t_n ) + \sqrt{ \delta^2 dt } N_{t}^{t + dt}(0,1)

where N_{t}^{t + dt}(0,1) is a normally-distributed random increment (mean 0, variance 1) governing the transition from t to t+dt

This is an initial value problem (think explicit Euler...)

Can get the next value from the current value PLUS the RHS

C++ random number generator --> normally-distributed number, mean 0, standard deviation 1: that's N_{t}^{t+dt}.

N is a value, but its value is distributed normally. A value comes out, we multiply it by \sqrt{\delta^2 dt}, and add it to our current value X(t_n) to get the next value.

So this means our distribution will be discontinuous (due to random number).
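The update rule above can be sketched in a few lines of Python, with the standard library's Gaussian generator standing in for the C++ one mentioned in the notes (function and parameter names are illustrative):

```python
import random

def wiener_path(x0, delta, dt, n_steps, seed=0):
    """X(t_{n+1}) = X(t_n) + sqrt(delta^2 * dt) * N(0, 1).

    rng.gauss(0, 1) plays the role of N_t^{t+dt}: an independent
    standard-normal draw for each transition from t to t + dt."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    step_std = (delta ** 2 * dt) ** 0.5
    for _ in range(n_steps):
        x += step_std * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

path = wiener_path(x0=0.0, delta=1.0, dt=0.01, n_steps=1000)
print(path[-1])  # one realization of X(t = 10); the path is jagged, not smooth
```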

Question: can we shrink dt small enough that our function is continuous?

For Euler method, this works fine:


\lim_{\Delta t \rightarrow 0} \frac{ X(t_{n+1}) - X(t_n) }{ \Delta t } = F(x)

and we get a derivative,


\frac{ dX }{ dt } = F

But now we have


\lim_{\Delta t \rightarrow 0} \frac{ X(t_{n+1}) - X(t_n) }{ \Delta t } = \lim_{\Delta t \rightarrow 0} \sqrt{ \frac{ \delta^2 }{ \Delta t } } N(0,1)

and so as \Delta t goes to zero, the coefficient on the RHS goes to infinity

We can't do a derivative of this, so calculus becomes difficult on these stochastic processes

This one is not really continuous

But we still may be able to do calculus, because there's still an area under the curve

Probability: may use more generalized definitions of a limit

e.g. we aren't looking at single-realization values of X - if we're looking at the PDF at two points, we can say there is a limit in the PDF


\lim_{\Delta t \rightarrow 0} \frac{ f_{X(t_{n+1})} - f_{X(t_n)} }{ \Delta t }

There are four different probability limits that can be defined, and you can start doing calculus

Ito calculus, Stratonovich calculus, stochastic calculus, etc. - all go down the path of "not differentiable, so we need to define a new calculus"

But we're always dealing with smooth and continuous processes, so we won't go down that road


\begin{array}{l}
X(t=0) = 1 \\
X(t=1) = 1 + N \sim Normal(1,1) \\
X(t=2) = X(t=1) + N \sim Normal + Normal
\end{array}

So you always have a normally-distributed process, but with increasing variance

Alternative definition of Markov process:


X(t_{n+1}) = X(t_n) + g( X(t), dt )

If your function g is normally distributed, with the same mean and variance at each step, X(t) will be normally distributed (and the Central Limit Theorem tells you X(t) tends toward normal even when the increments g are not normal)


Example of Wiener process:

Einstein's Brownian Motion Paper (1905)

3 papers:

  • Brownian motion
  • photoelectric effect
  • special relativity

Pollen particle in water - randomly moving around, being bumped to the left, bumped to the right

Considered the first directly observable evidence of kinetic theory

http://files.charlesmartinreid.com/Einstein_BrownianMotion_1905.pdf

Ornstein-Uhlenbeck Process

Also a Levy process


X(t_{n+1}) = X(t_n) - \gamma X(t_n) dt + \sqrt{ \beta^2 dt } N_{t}^{t+dt}(0,1)

Second term on RHS is a drift term (like an advection term), pulling X back toward zero

Third term is a random term like the one in the Wiener process

Example of Ornstein-Uhlenbeck process:

Langevin's Brownian Motion - let's let the position of a Brownian particle be governed by Newton's second law

Have two variables: X and V, where the position X is continuous, and a velocity V which is a Wiener process

Langevin considered this to be more physically sensible - because particle position isn't discontinuous
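The Ornstein-Uhlenbeck update rule can be sketched directly (names and parameter values are illustrative; the claim that the stationary variance is \beta^2 / (2 \gamma) is a standard property of the OU process, not something derived in the notes):

```python
import random

def ou_path(x0, gamma, beta, dt, n_steps, seed=0):
    """X(t_{n+1}) = X(t_n) - gamma * X(t_n) * dt + sqrt(beta^2 * dt) * N(0, 1).

    The -gamma*X*dt drift pulls the process back toward zero, so (unlike
    the Wiener process) the variance saturates instead of growing forever."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    noise_std = (beta ** 2 * dt) ** 0.5
    for _ in range(n_steps):
        x += -gamma * x * dt + noise_std * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

path = ou_path(x0=5.0, gamma=1.0, beta=1.0, dt=0.01, n_steps=1000)
print(path[-1])  # relaxed from 5.0 toward 0, fluctuating about it
```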


Poisson Process

Like a Bernoulli process, but now the jumps occur at random times:


X(t_{n+1}) = X(t_n) + 1

\Delta t \sim Exponential( \lambda )

The waiting time \Delta t between jumps has its own distribution - as opposed to the Wiener process, where the randomness enters through an expression in terms of dt. (The waiting times are exponentially distributed; the number of jumps in a fixed interval is then Poisson-distributed, which is what gives the process its name.)

So the jumps in the y-value (variable X) are uniform - every jump has size 1

But the jump times along the x-axis (variable t) are random, with exponentially distributed spacings

Examples:

  • Radioactive decay
  • Telephone calls arriving at a switchboard
  • Page view request at web site

Things that occur independently, at a constant average rate, over a certain interval
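A sketch of how such a process is usually simulated, using the standard exponential waiting-time construction (the function name and parameters are illustrative, not from the lecture):

```python
import random

def poisson_process(rate, t_max, seed=0):
    """Event times of a Poisson process on [0, t_max]: waiting times between
    events are Exponential(rate), so the count in any interval of length T
    is Poisson(rate * T)-distributed."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)  # random waiting time to the next jump
        if t > t_max:
            return times
        times.append(t)

events = poisson_process(rate=2.0, t_max=1000.0)
print(len(events) / 1000.0)  # empirical event rate, near 2.0
```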


Random Walk Processes

Depending on the conditions on your steps, you end up with different types of random walks

e.g. if your steps are normally distributed, you have a Wiener random walk

if your steps are Bernoulli steps, it is a Martingale random walk

etc...

But ALL are Markov processes


October 28, 2010

(Missed some...)

Wiener process - distributed randomly about X_{0}

We can get statistics from a Wiener process:

  • Means \mu_{ X_n } and \mu_{ X_{n+1} }
  • Variances \sigma_{X_n} and \sigma_{ X_{n+1} }
  • Correlation \rho_{X_n, X_{n+1}} (how narrow the 2-D joint PDF surface is)


Wiener Process Statistics

X_{n+1} = X_{n} + \sqrt{ \delta^2 dt } N(0,1)

where N(0,1) is the normal distribution with mean 0 and variance 1

This can also be written as

N(0, \delta^2 dt )

(a normal distribution with mean 0 and variance \delta^2 dt)

Property of normal distribution (see wikipedia:Normal distribution) - if we have independent X_1 \sim Normal(\mu_1,\sigma_1^2) and X_2 \sim Normal(\mu_2,\sigma_2^2), then for Y = a X_1 + b X_2 , Y \sim Normal( a \mu_1 + b \mu_2, a^2 \sigma_1^2 + b^2 \sigma_2^2)
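This linear-combination property is easy to verify by Monte Carlo (a minimal sketch; the coefficients and distribution parameters are arbitrary illustrative choices):

```python
import random

# Monte Carlo check of the linear-combination property of normals:
# Y = a*X1 + b*X2 should have mean a*mu1 + b*mu2 and
# variance a^2*sigma1^2 + b^2*sigma2^2 (X1, X2 independent).
rng = random.Random(0)
a, b = 2.0, -1.0
mu1, s1 = 1.0, 0.5   # X1 ~ Normal(1, 0.5^2)
mu2, s2 = 3.0, 2.0   # X2 ~ Normal(3, 2^2)
ys = [a * rng.gauss(mu1, s1) + b * rng.gauss(mu2, s2) for _ in range(200000)]
mean_y = sum(ys) / len(ys)
var_y = sum((y - mean_y) ** 2 for y in ys) / len(ys)
print(mean_y, var_y)  # near a*mu1 + b*mu2 = -1.0 and a^2*s1^2 + b^2*s2^2 = 5.0
```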


Lemons' book:

\displaystyle{
\lim_{dt \rightarrow 0} X_{n+1} = X_{n}
}

Which then leads to:

\displaystyle{
\lim_{dt \rightarrow 0} \frac{ X_{n+1} - X_{n} }{dt}
= \lim_{dt \rightarrow 0} \sqrt{ \frac{\delta^2}{dt} } N(0,1)
}

which is NOT SMOOTH!

Then we can say


X_{n+1} - X_{n} = ( X_{n+1} - X_{n+\frac{1}{2}} ) + ( X_{n+\frac{1}{2}} - X_{n} )


\sqrt{ \delta^2 dt } N_{t}^{t+dt} (0,1) 
= \sqrt{ \delta^2 \frac{dt}{2} } N_{t+\frac{dt}{2}}^{t+dt} (0,1)
+ \sqrt{ \delta^2 \frac{dt}{2} } N_{t}^{t+\frac{dt}{2}} (0,1)

The two half-step increments are independent normals, each with variance \delta^2 \frac{dt}{2}, so their sum is a normal with variance \delta^2 \frac{dt}{2} + \delta^2 \frac{dt}{2} = \delta^2 dt, and we get:

N(0, \delta^2 dt) = N(0, \delta^2 dt)

which means the Wiener process is consistent (and independent of the dt selection)
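This consistency can be checked numerically: composing two half steps (each carrying variance \delta^2 dt / 2) should reproduce the distribution of a single full step. A minimal sketch, with illustrative parameter values:

```python
import random

# Consistency check: one full Wiener step over dt vs. two half steps over dt/2.
# Each half step has variance delta^2 * dt/2, so the composed increment should
# match the single-step N(0, delta^2 * dt) distribution.
rng = random.Random(0)
delta, dt, n = 1.0, 0.1, 200000
full = [(delta ** 2 * dt) ** 0.5 * rng.gauss(0.0, 1.0) for _ in range(n)]
half = [(delta ** 2 * dt / 2) ** 0.5 * (rng.gauss(0.0, 1.0) + rng.gauss(0.0, 1.0))
        for _ in range(n)]
var_full = sum(x * x for x in full) / n
var_half = sum(x * x for x in half) / n
print(var_full, var_half)  # both near delta^2 * dt = 0.1
```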

Have a bunch of random processes, separated by intervals of dt, and construct a PDF

Definition of stochastic process: Set of random variables... performing a trial of an experiment... outcome is a set of random variables parameterized by t

X(t)

When we put in a specific t (say t_0), we plug it in and get a particular value of X(t = t_0)

So if we pick a value of t, we get a PDF for X at that t - e.g. f_{X(t_5)}, or f_{X(t_3)}, etc.

It would be great if we had the joint PDF, f_{ X(t_5), X(t_3) }, because then we can get the marginal PDF, the variances, the covariance, etc.


So what we've just shown is, no matter what our dt is, we always get the same PDF f_{X(t_0)} at a particular t_0

Wiener process is used more commonly because of this consistency

Analog for differential equations: if you have a differential equation, and you're integrating to a certain point in time, it doesn't matter if you take two timesteps or a thousand timesteps to get there, you will still get the same result.

Note: You can also get a similar consistency result for a Cauchy distribution, but because it doesn't have a finite variance, it isn't as useful


Now, using the method of moments,


E(X_{n+1}) = E(X_n) + E \left( \sqrt{ \delta^2 dt } N_{n}^{n+1} (0,1) \right)

We can take constants out of the expectation operator, so the last term becomes:


\begin{array}{rcl}
E \left( \sqrt{ \delta^2 dt } N_{n}^{n+1} (0,1) \right) &=& \sqrt{ \delta^2 dt } E \left( N_{n}^{n+1} (0,1) \right) \\
&=& 0
\end{array}

Since this holds for any dt, the mean doesn't change from step to step, so we can write this as:


\frac{d}{dt} E(X) = 0

or alternatively,


\frac{d}{dt} \langle X \rangle = 0

We have an ODE for the first moment.


\begin{array}{rcl}
\frac{d}{dt} Var(X) &=& \frac{d}{dt} \left( \langle X^2 \rangle - \langle X \rangle^2 \right) \\
&=& \frac{d \langle X^2 \rangle}{dt} - 2 \langle X \rangle \frac{ d \langle X \rangle }{ dt } \\
&=& \frac{d \langle X^2 \rangle}{dt}
\end{array}

Next,


\begin{array}{rcl}
X_{n+1}^2 &=& \left[ X_{n} + \sqrt{ \delta^2 dt } N(0,1) \right]^2 \\
&=& X_{n}^2 + 2 \sqrt{ \delta^2 dt } X_n N_{t}^{t+dt} (0,1) + \delta^2 dt \left( N_t^{t+dt} (0,1) \right)^2
\end{array}

And taking the expectation of both sides:


\begin{array}{rcl}
\langle X_{n+1}^2 \rangle &=& \langle X_{n}^2 \rangle + 2 \sqrt{ \delta^2 dt } \langle X_n N_{t}^{t+dt} (0,1) \rangle + \delta^2 dt \langle \left( N_t^{t+dt} (0,1) \right)^2 \rangle 
\end{array}

For a Wiener process, the normal distribution N_{t}^{t+dt} is added after X_n, and is independent of X_n (however, X_{n+1} is not independent of N_{t}^{t+dt}, because that normal draw is used to obtain X_{n+1})

So we can separate the expectation of these variables (because their joint PDF is equal to the product of their marginal PDFs), and the second expectation term becomes:


\langle X_n N_{t}^{t+dt} \rangle = \langle X_n \rangle \times \langle N_{t}^{t+dt} \rangle

and \langle N_{t}^{t+dt} \rangle = 0, so that whole term goes away.

In the third expectation term, the squared normal variable \left( N_{t}^{t+dt} (0,1) \right)^2 is NOT normally distributed - for example, it can never be negative.

The square of a standard normal variable follows a wikipedia:Chi-square distribution (with one degree of freedom), which has mean \mu=1... so the third expectation term equals 1

Plugging in and simplifying yields


\frac{ d \langle X^2 \rangle }{dt} = \frac{d}{dt} Var(X) = \delta^2

So the variance increases at a constant rate.
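The two moment results (constant mean, variance growing as \delta^2 t) can be checked by evolving an ensemble of realizations (a minimal sketch; the function name and parameter values are illustrative):

```python
import random

def wiener_finals(n_paths, n_steps, delta, dt, seed=0):
    """Evolve many independent Wiener realizations from X(0) = 0
    and return the final values X(t), where t = n_steps * dt."""
    rng = random.Random(seed)
    step_std = (delta ** 2 * dt) ** 0.5
    finals = []
    for _ in range(n_paths):
        x = 0.0
        for _ in range(n_steps):
            x += step_std * rng.gauss(0.0, 1.0)
        finals.append(x)
    return finals

delta, dt, n_steps = 0.7, 0.01, 500  # total time t = 5.0
finals = wiener_finals(2000, n_steps, delta, dt)
mean = sum(finals) / len(finals)
var = sum((x - mean) ** 2 for x in finals) / len(finals)
print(mean, var)  # mean stays near 0; variance near delta^2 * t = 2.45
```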

Evolution of PDF in Time

Use successive substitution and the same idea used before in showing that the Wiener process is independent of dt


f_{X} (x;t)

Inspection:


\begin{array}{rcl}
X(dt) &=& X(0) + \sqrt{ \delta^2 dt } N_{0}^{dt} (0,1) \\
X(2 dt) &=& X(0) + N(0, \delta^2 (2 dt) ) \\
 & \vdots & \\
X(t) &=& X(0) + N(0, \delta^2 t)
\end{array}

This just says that as we advance in time, the mean of the distribution doesn't change, and the variance is linear in time, with coefficient \delta^2.

Einstein Paper

In Einstein's 1905 paper (see above for link), he finds the result:

\frac{ \partial f_{X} }{ \partial t } = \frac{ \delta^2 }{2} \frac{ \partial^2 f_{X} }{ \partial x^2 }

and he says this is the same as the heat equation, but for a distribution instead of a scalar:


x \sim \sqrt{ D t }

The point being: there are three ways to get at the PDF:

  • Method of moments (how the PDF evolves in time)
  • Inspection (what we just did above)
  • Einstein (the PDE of the PDF)

For the Wiener process, this is a lot easier than for other processes - it took another 8 years for someone to do the same thing for the Wiener process with a drift term, e.g.


X_{n+1} = X_{n} + \sqrt{ \delta^2 dt } N(0,1) + V dt

This is still going to be a Markov process, but is no longer a Martingale process... Adding that one extra term made it extremely difficult to derive the equivalent governing equation PDE, and took 8 years
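The effect of the drift term itself is easy to see numerically, even though deriving the governing PDE was hard: the mean now moves as V t while the spread still grows as \delta^2 t. A minimal sketch with illustrative parameter values:

```python
import random

# Drifted Wiener process: X_{n+1} = X_n + sqrt(delta^2 * dt) * N(0,1) + V * dt.
# The mean drifts as V * t; the variance still grows as delta^2 * t.
rng = random.Random(1)
delta, V, dt, n_steps, n_paths = 1.0, 2.0, 0.01, 300, 2000  # total time t = 3.0
step_std = (delta ** 2 * dt) ** 0.5
finals = []
for _ in range(n_paths):
    x = 0.0
    for _ in range(n_steps):
        x += step_std * rng.gauss(0.0, 1.0) + V * dt
    finals.append(x)
mean_final = sum(finals) / n_paths
print(mean_final)  # near V * t = 6.0
```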

When it was derived, it was called the wikipedia:Fokker-Planck equation

Point: it's powerful when you can go from a stochastic differential equation (e.g. definition of Wiener process; difficult to deal with) to a PDE (easier to deal with)

To generalize the process and mathematics required to go from a general stochastic differential equation to a PDE requires wikipedia:Ito calculus, and we're not going to go there


Aside

Can take the joint PDF at two times,


f_{X_1 , X_2 }

and we can get the covariance Cov(X_1 , X_2) and the correlation function \rho_{X_1, X_2}

We can find an expression for f_{X_1, X_2} and then use that to find the covariances and correlations.... or, we can find them directly from our stochastic differential equation

Covariance

Definition of covariance is


Cov(X,Y) = E( (X - \langle X \rangle ) ( Y - \langle Y \rangle ) )

so,


Cov(X_1, X_2) = E \left( (X_1 - \langle X_1 \rangle) (X_2 - \langle X_2 \rangle) \right) = E( X_1^{\prime} X_2^{\prime})

Now we can use the Wiener process stochastic differential equation to get


Cov(X_1, X_2) = E \left(  (X_1^{\prime})^2 + X_{1}^{\prime} N( 0, \delta^2 dt ) \right)

And we just did a couple of these operations above, so we get


Cov(X_1, X_2) = \langle (X_1^{\prime})^2 \rangle = \delta^2 t

Correlation

Definition of correlation: Corr(X_1, X_2) = \frac{ Cov( X_1, X_2 ) }{ \sqrt{ Var(X_1) Var(X_2) } }


\rho_{X_1, X_2} = \frac{ Cov(X_1, X_2) }{ \sqrt{ Var(X_1) Var(X_2) } } = \frac{ \delta^2 t }{ \sqrt{ \delta^2 t \times \delta^2 (t + dt) } } = \sqrt{ \frac{1}{1 + \frac{dt}{t} } }

When you plot this:

  • Starts with a high correlation (starts at 1)
  • It has a very long tail
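This two-time correlation can be checked by Monte Carlo, sampling pairs X_1 = X(t) and X_2 = X(t + dt) directly from the increment construction (a sketch; parameter values are illustrative):

```python
import random

# Monte Carlo check of the two-time Wiener correlation: for X1 = X(t) and
# X2 = X(t + dt), rho should be sqrt(1 / (1 + dt/t)).
rng = random.Random(0)
delta, t, dt_gap, n = 1.0, 1.0, 1.0, 100000
x1s, x2s = [], []
for _ in range(n):
    x1 = (delta ** 2 * t) ** 0.5 * rng.gauss(0.0, 1.0)            # X(t), from X(0) = 0
    x2 = x1 + (delta ** 2 * dt_gap) ** 0.5 * rng.gauss(0.0, 1.0)  # X(t + dt)
    x1s.append(x1)
    x2s.append(x2)
cov = sum(a * b for a, b in zip(x1s, x2s)) / n
rho = cov / ((sum(a * a for a in x1s) / n) * (sum(b * b for b in x2s) / n)) ** 0.5
print(rho)  # near sqrt(1 / (1 + 1/1)) ~ 0.707
```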

Joint PDF: equal to the marginal PDF times the conditional PDF


f_{X_1, X_2} = f_{X_1} \times f_{X_2 \vert X_1 }

What is the conditional PDF?

We know the value of X_1 at some time... what's the probability of X_2 given X_1?

A couple of observations:

The value of X_2 is governed by the stochastic differential equation (the one that defines the Wiener process)

And we know that the Wiener process has consistency... so regardless of the timestep we take, the Wiener process stochastic differential equation will still hold

Meaning the relationship is one of:


X_{n+1} = X_{n} + \sqrt{ \delta^2 dt } N(0,1)

and this means that X_2 is related to X_1 by just adding a normal increment... that increment has variance \delta^2 dt, where dt is the time elapsed between the two samples, and is added to X_1 (which means X_2 given X_1 is normally distributed with mean X_1)


f_{X_2 \vert X_1} = N(X_{1}, \delta^2 dt)

Continuous and Smooth Realizations of a Continuous-Time Stochastic Process

Want to show both of these identities:


\langle \frac{ \partial U }{ \partial t } \rangle = \frac{ \partial \langle U \rangle }{ \partial t }

and


\langle \frac{ \partial Q(U) }{ \partial t } \rangle = \int_{-\infty}^{\infty} Q(u) \frac{ \partial f_{U} (u; t) }{ \partial t } du

This will make it straightforward to derive PDF transport equations.

For the first:


\frac{ \partial U }{ \partial t } = \lim_{t^{\star} \rightarrow t } \frac{ U(t^{\star}) - U(t) }{ t^{\star} - t }

which becomes


\begin{array}{rcl}
  \langle \frac{\partial U}{\partial t} \rangle &=& \langle \lim_{t^{\star} \rightarrow t } \frac{ U(t^{\star}) - U(t) }{ t^{\star} - t } \rangle = \lim_{t^{\star} \rightarrow t } \frac{ \langle U(t^{\star}) - U(t) \rangle }{ t^{\star} - t }
\end{array}

and the numerator becomes


\begin{array}{rcl}
\langle U(t^{\star}) - U(t) \rangle &=& \int \int_{-\infty}^{\infty} (u^{\star} - u) f_{U^{\star},U} (u^{\star}, u; t^{\star}, t) du^{\star} du \\
&=& \int_{-\infty}^{\infty} u^{\star} f_{U^{\star}} (u^{\star}; t^{\star}) du^{\star}
    - \int_{-\infty}^{\infty} u f_{U} (u; t) du \\
&=& \langle U(t^{\star}) \rangle - \langle U(t) \rangle
\end{array}

For the second identity:


\langle \frac{ \partial Q(U) }{ \partial t } \rangle = \lim_{t^{\star} \rightarrow t} \frac{ \langle Q(U(t^{\star})) - Q(U(t)) \rangle }{ t^{\star} - t }


\langle Q(U^{\star}) - Q(U) \rangle = \int \int_{-\infty}^{\infty} \left( Q(u^{\star}) - Q(u) \right) f_{U^{\star},U} (u^{\star}, u; t^{\star}, t) du^{\star} du

which becomes:


\langle Q(U^{\star}) - Q(U) \rangle = \int_{-\infty}^{\infty} Q(u^{\star}) f_{U^{\star}}(u; t^{\star}) du - \int_{-\infty}^{\infty} Q(u) f_{U}(u; t) du

(Note: removing the star from the dummy variable in the first integral, but still indicating a difference by including the star in the PDF variables)

Next step:


\langle Q(U^{\star}) - Q(U) \rangle = \int_{-\infty}^{\infty} Q(u) \left( f_{U^{\star}} (u; t^{\star}) - f_{U} (u; t) \right) du

and bringing in the limit from above:


\lim_{t^{\star} \rightarrow t } \frac{ \langle Q(U(t^{\star})) - Q(U(t)) \rangle }{ t^{\star} - t } = \int_{-\infty}^{\infty} Q(u) \lim_{t^{\star} \rightarrow t} \frac{ f_{U^{\star}} (u; t^{\star}) - f_{U}(u; t) }{ t^{\star} - t} du

which is equal to the final identity we were trying to prove in the first place...


\int_{-\infty}^{\infty} Q(u) \frac{ \partial f_{U}(u; t) }{ \partial t } du

Derivation of PDF Transport Equation

Momentum equation:


\rho \frac{ DU_i }{Dt} = - \frac{ \partial p}{\partial x_i} - \frac{ \partial \tau_{ij} }{\partial x_j} + \rho g_i = A_i

and species equation:


\rho \frac{ D \phi_{\alpha} }{ Dt } = - \frac{ \partial J_{j,\alpha} }{\partial x_j} + \rho S_{\alpha}(\boldsymbol{\phi}) = \Theta_{\alpha}

Now we start out with a function Q, which has the following properties:

  • Q(u,\phi)
  • Scalar
  • Only a function of one realization at one time (not U,\phi at different times...)
  • Q must be a nice function (it can't go to infinity so fast that, when we calculate the moments, Q overpowers the tails of the PDF and makes the moment integral infinite)


\langle \rho \frac{ D }{Dt} Q(U, \phi) \rangle 
= \langle \rho \frac{\partial Q}{\partial t} 
+ \rho U_j \frac{ \partial Q }{ \partial x_j } \rangle

= \langle \frac{ \partial \rho Q }{ \partial t } 
+ \frac{ \partial \rho U_j Q }{ \partial x_j } \rangle

= \frac{ \partial \langle \rho Q \rangle }{ \partial t } 
+ \frac{ \partial \langle \rho U_j Q \rangle }{ \partial x_j }

by the definition of the total derivative.

This can be expanded, using the definition of the expectation, as:


\langle \rho \frac{ D }{Dt} Q(U, \phi) \rangle 
= \int \dots \int_{-\infty}^{\infty} Q(u, \psi) 
\left( 
  \frac{ \partial \rho f_{U,\phi} }{ \partial t } + \frac{ \partial \rho u_j f_{U,\phi} }{ \partial x_j }
\right) du d \psi

Then, using the chain rule:


\langle \rho \frac{ DQ(U,\phi) }{ Dt } \rangle

= \langle \rho \frac{\partial Q}{\partial U_i} \frac{D U_i}{D t}
+ \rho \frac{ \partial Q } {\partial \phi_{\alpha} } \frac{D \phi_{\alpha}}{Dt} \rangle

= \langle \frac{\partial Q}{\partial U_i} A_i \rangle 
+ \langle \frac{\partial Q}{\partial \phi_{\alpha}} \Theta_{\alpha} \rangle

We are going to use the rule


f_{U,\phi,z} = f_{U,\phi \vert z} f_{z} = f_{z \vert U,\phi} f_{U,\phi}

to make


\langle \rho \frac{ DQ(U,\phi) }{ Dt } \rangle

= - \int \int_{-\infty}^{\infty} 
Q \frac{ \partial }{ \partial \psi_{\alpha}} \left( \langle \Theta_{\alpha} \vert u, \psi \rangle 
f_{U,\phi} \right) 
du d \psi 

  - \int \int_{-\infty}^{\infty} 
Q \frac{ \partial }{ \partial u_j} \left( \langle A_{j} \vert u, \psi \rangle 
f_{U,\phi} \right) 
du d \psi

So now we set these two equalities of \langle \rho \frac{ DQ(U,\phi) }{ Dt } \rangle equal to one another... and we're doing this for arbitrary Q... and as a result, we get the following:


\frac{ \partial \rho f_{U,\phi} }{ \partial t } + \frac{ \partial \rho u_j f_{U,\phi} }{\partial x_j} 
+ \frac{ \partial }{\partial \psi_{\alpha}} \left( \langle \Theta_{\alpha} \vert u, \psi \rangle f_{U,\phi} \right) 
+ \frac{ \partial }{\partial u_j } \left( \langle A_j \vert u, \psi \rangle f_{U,\phi} \right) = 0

Further Reading:

Don S. Lemons, An Introduction to Stochastic Processes in Physics

  • Very readable for undergraduates
  • Not in-depth, covers all the basic concepts

Review Article (1943): Stochastic Problems in Physics and Astronomy. S. Chandrasekhar. Reviews of Modern Physics. Vol. 15, p. 1-89.

  • Much more in-depth
  • But difficult

Gardiner, Handbook of Stochastic Methods.