

October 7, 2010

Statistical Inference (Casella and Berger)

wikipedia:Set (mathematics)

wikipedia:Probability interpretations




Set Theory:

Union: $A \cup B$ - combination of two sets (all elements in A or in B)

Intersection: $A \cap B$ - elements contained in both A and B

Complement: $A^c$ - everything that's not in A

Empty Set: $\emptyset$ - set containing no elements


Definitions:

experiment - any activity generating observable results

outcome - result of experiment (IMPORTANT TO KEEP STRAIGHT! don't confuse events and outcomes)

trial - single performance of experiment

sample space - set of all possible outcomes

countable/uncountable:
- countable = can be put in one-to-one correspondence with the natural numbers (e.g. the sequence 1/n)
- uncountable = no such one-to-one correspondence can be made
- a countable set can still be infinite: enumerating it would take an infinite loop, but you are still counting
- flipping a coin: countable; temperature: uncountable

event - any subset of the sample space


Example:

experiment - roll a die
outcome - 1, 2, 3, 4, 5, or 6
trial - one roll of the die
sample space - {1,2,3,4,5,6} (COUNTABLE)
event - may be {1}, or {1,2,3}, etc.


Operator Properties:


Commutative: $A \cup B = B \cup A$; $A \cap B = B \cap A$

Associative: $A \cup (B \cup C) = (A \cup B) \cup C$; $A \cap (B \cap C) = (A \cap B) \cap C$

Distributive: $A \cap (B \cup C) = (A \cap B) \cup (A \cap C)$; $A \cup (B \cap C) = (A \cup B) \cap (A \cup C)$

DeMorgan's Law: $(A \cup B)^c = A^c \cap B^c$; $(A \cap B)^c = A^c \cup B^c$


Call a set abnormal if it contains itself as a member (otherwise it's normal)

Example: the set of all squares is not itself a square, so it is not a member of the set of squares, and is therefore normal. The complementary set, containing all non-squares, is itself a non-square, so it contains itself and is abnormal.

Consider the set of all normal sets. Is it normal or abnormal? If it were normal, it would be contained in itself, and would therefore be abnormal. If it were abnormal, it would be contained in itself; but it contains only normal sets, so it would have to be normal. Either way, a contradiction.

You can resolve this using more rigorous set theory...


More Definitions:

Disjoint ("set" term) / mutually exclusive ("probability" term) - if the intersection of two sets is the null set, they are mutually exclusive

i.e. $A \cap B = \emptyset$

Partition - take a group of sets; if the union of these sets is the sample space, and they are pairwise mutually exclusive, this is a partition

i.e. $\bigcup_i A_i = S$ and $A_i \cap A_j = \emptyset$ for $i \neq j$



Distinction between probability theory that has a physical meaning (and is therefore "contaminated" by intuition) and a more abstract probability theory that doesn't have a corresponding physical meaning

Axiomatic probability theory (Kolmogorov)

A probability is a function that follows 3 axioms:

Sample space $S$

(The domain) $\sigma$-algebra $\mathcal{B}$: a collection of subsets of $S$ closed under complements and countable unions (means the set of events is fully consistent)

Function P -> probability over the domain

1. $P(A) \geq 0$ for all $A \in \mathcal{B}$

2. $P(S) = 1$

3. If $A_1, A_2, \ldots$ are disjoint, then $P\left(\bigcup_i A_i\right) = \sum_i P(A_i)$

In other words, probabilities are nonnegative, normalized, and additive over disjoint sets.

This is a mathematician's viewpoint: a clean definition, as long as we follow these rules, the function is a probability.


What is the probability of the null set?

Create a partition: $S = S \cup \emptyset$, with $S \cap \emptyset = \emptyset$

The probability of the sample space is $P(S) = 1$

So $P(S) = P(S) + P(\emptyset)$, which gives $P(\emptyset) = 0$

If $A \subset B$ then $P(A) \leq P(B)$

The size of the set is directly related to the probability...

Another way to do this is using measure theory (another route, besides rigorous set theory, that leads to probability theory)

wikipedia:Measure theory

wikipedia:Sigma-algebra

Bonferroni's inequality: $P(A \cap B) \geq P(A) + P(B) - 1$

Reading Assignment Discussion

Classical Definition of Probability (Laplace, 1812)

"If a random experiment can result in $n$ mutually exclusive and equally likely outcomes and if $n_A$ of these outcomes result in the occurrence of the event $A$, the probability of $A$ is defined by $P(A) = n_A / n$."

Example: rolling a die

Event A might be rolling a 1... or rolling a 1 or a 2...

What if one side of the die is weighted to favor 5? This definition doesn't work... there is no mathematical way to prove that the outcomes are mutually exclusive and equally likely.

Frequency

This is the limit, as the number of experimental trials $n$ performed (trials must be performed under "identical" conditions) goes to infinity, of the relative frequency of the event of interest $A$:

$P(A) = \lim_{n \to \infty} \dfrac{n_A}{n}$

Limitations:

  • can never actually perform an infinite number of trials
  • what does "identical" mean? e.g. if you're performing a turbulence experiment, how can you initialize everything the exact same way?
  • (Tony): what's your infinity?
    • (Sean): whatever it is, it's not finite


Probability can be seen as lack of information (e.g. fluid mechanics is deterministic, so we could describe it if we know the state perfectly - we use probability to fill in/make up for the lack of knowledge)

Quantum theory: in the double-slit experiment, the very state of nature is random and needs to be described using probability

Probability of an outcome in the future: judging confidence based on prior experience... statement of confidence

Bayesian vs. frequentist


Side discussion:

wikipedia:Noether's theorem


Advantages/Disadvantages of Axiomatic Approach

It provides a rigorous mathematical framework, removing bias/preference

But, when you get a result, you can't necessarily specify the meaning

October 11, 2010

Derivation of Bonferroni's Inequality

Show: $P(A \cap B) \geq P(A) + P(B) - 1$

Proof: $P(A \cup B) = P(A) + P(B) - P(A \cap B)$

But in general, $P(A \cup B) \leq 1$

Plugging this back in, $1 \geq P(A) + P(B) - P(A \cap B)$, so $P(A \cap B) \geq P(A) + P(B) - 1$

On Wikipedia: more general case; more than two sets

wikipedia:Boole's inequality


Conditional Probability and Bayes' Rule

if $A, B \subset S$, and $P(B) > 0$,

$P(A|B) = \dfrac{P(A \cap B)}{P(B)}$

And as a result, we get Bayes' Rule:

$P(A|B) = \dfrac{P(B|A)\,P(A)}{P(B)}$

Derivation: $P(A \cap B) = P(A|B)\,P(B) = P(B|A)\,P(A)$; divide through by $P(B)$.

Example application of conditional probability:

Run a combustion simulation... "Given that the temperature is in range X, what is the concentration range?"
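A minimal sketch of that idea (the data, ranges, and variable names here are hypothetical stand-ins, not from any actual simulation): draw joint samples, keep only those where the temperature falls in the range of interest, and look at the conditional statistics of the concentration.

    # Hedged sketch: conditional statistics by filtering joint samples.
    # The "simulation output" below is synthetic stand-in data.
    import numpy as np

    rng = np.random.default_rng(0)
    temperature = rng.normal(1500.0, 200.0, size=100_000)      # K (synthetic)
    concentration = (0.1 + 1e-5 * temperature
                     + rng.normal(0.0, 0.01, size=100_000))    # synthetic

    in_range = (temperature > 1400.0) & (temperature < 1600.0) # condition on T
    cond = concentration[in_range]
    print(cond.mean(), cond.min(), cond.max())  # conditional mean and range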

Statistical Independence

Definition: $P(A \cap B) = P(A)\,P(B)$

Consequences: $P(A|B) = P(A)$ and $P(B|A) = P(B)$


Random Variable

wikipedia:Random variable

A random variable is a mapping from a sample space to real numbers.

Example: Dice roll

Mapping rolls to a set

Example: Morse code

Looking at statistics of morse code...

Mapping dots and dashes to real numbers, e.g. $\{0, 1\}$ (or alternatively $\{-1, +1\}$)

IMPORTANT: Sample space is different from the random variable


Induced Probability Function

Let $P_X(B) = P(\{s \in S : X(s) \in B\})$ for an event $B$ on the range of $X$, outcome $s \in S$

(e.g. there's a sample space, and in the sample space there are events...

Before, we were talking about probability of events. Now, we're talking about probability of a random variable)

Random variable $X$, random variable realization $x$,

$\{s \in S : X(s) \in B\}$ are the outcomes that are in B


Cumulative distribution function

Definition: $F_X(x) = P(X \leq x)$

Example: Dice roll

Probability of rolling a given number is constant (1/6)

The cumulative distribution function is a staircase: for x=1, cumulative probability is 1/6; for x=2, cumulative probability is 2/6; and so on, so the step values lie along a line.

This definition is more general than the integral definition, because the integral definition requires a cumulative distribution function that is differentiable

Sometimes there will be situations where a probability distribution function can't be defined, and only a cumulative distribution function can be defined



Example

Given 5% of men are colorblind, and 0.25% of women are colorblind: if a person is chosen at random and they are colorblind, what is the probability of their gender?
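A sketch of the computation using Bayes' Rule from above, assuming the randomly chosen person is equally likely to be male or female:

$P(M \mid CB) = \dfrac{P(CB \mid M)\,P(M)}{P(CB \mid M)\,P(M) + P(CB \mid F)\,P(F)} = \dfrac{(0.05)(0.5)}{(0.05)(0.5) + (0.0025)(0.5)} \approx 0.952$

so a randomly chosen colorblind person is male with probability of about 95%.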


Identically Distributed

For two random variables $X, Y$ and an event $A$: they are identically distributed if $P(X \in A) = P(Y \in A)$ for every event $A$

We have an experiment, which we run a trial of, and we get an outcome. If the probabilities of the outcomes are the same, the two random variables are identically distributed.

The actual values of $X, Y$ are not important. e.g., the probability of rolling a 2 on a die is the same as rolling a 4 on a die, so they are identically distributed, even though 2 and 4 are not equal.


Probability Mass Function, Probability Density Function

Difference: one's continuous, one's discrete

Probability Mass Function (PMF): $f_X(x) = P(X = x)$

  • Discrete cases only

Probability Density Function (PDF): $f_X(x) = \dfrac{d F_X(x)}{dx}$

  • Continuous cases only

where $F_X$ is the cumulative distribution function: $F_X(x) = \int_{-\infty}^{x} f_X(t)\,dt$


Given properties of the CDF, what are the properties of the PDF?

  • If CDF is monotonically increasing, analogous property for PDF is $f_X(x) \geq 0$
  • Since $F_X(x) \to 1$ as $x \to \infty$, the PDF must satisfy $\int_{-\infty}^{\infty} f_X(x)\,dx = 1$

Above two properties are necessary and sufficient conditions for a function to be considered a PDF.

Further properties:


Example

Assume we are given a fixed positive constant

define function:

Part a

What is the pdf of the random variable?

Part b

Find the probability that the random variable $X$ is less than $t$, i.e. $P(X < t)$

For this one, you have to integrate twice.

Another name for this is the cumulative distribution function.

Transformations

Expectation

Definition: $E[g(X)] = \int_{-\infty}^{\infty} g(x)\,f_X(x)\,dx$

Linearity: $E[a\,g_1(X) + b\,g_2(X) + c] = a\,E[g_1(X)] + b\,E[g_2(X)] + c$

Positivity:

If the function $g(x) \geq 0$ for all $x$, then $E[g(X)] \geq 0$

If $g_1(x) \geq g_2(x)$ for all $x$, then $E[g_1(X)] \geq E[g_2(X)]$

Moments:

nth moment: $\mu_n' = E[X^n]$

Central moments:

nth central moment: $\mu_n = E[(X - E[X])^n]$

2nd central moment: variance, $\mathrm{Var}(X) = E[(X - E[X])^2]$


Moment Generating Function (MGF)

$M_X(t) = E[e^{tX}]$

If you're looking at $E[e^{-sX}]$, it's a Laplace transform. If it's $E[e^{itX}]$, it's a Fourier transform.

wikipedia:Moment generating function

The moments come from derivatives at zero: $E[X^n] = \dfrac{d^n M_X(t)}{dt^n}\bigg|_{t=0}$

So knowing all the moments is equivalent to knowing the PDF.
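One way to see the connection between the MGF and the moments (a standard expansion, not spelled out in the notes): expand the exponential inside the expectation,

$M_X(t) = E[e^{tX}] = E\left[\sum_{n=0}^{\infty} \dfrac{(tX)^n}{n!}\right] = \sum_{n=0}^{\infty} \dfrac{t^n}{n!}\,E[X^n]$

so differentiating n times and setting $t = 0$ picks out exactly $E[X^n]$.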


October 14, 2010

Bernoulli Trial - two outcomes, known probability for each (e.g. heads or tails, or picking black and white marbles out of an urn)

Binomial distribution - probability of k successes in n Bernoulli trials: $P(X = k) = \binom{n}{k} p^k (1-p)^{n-k}$

Normal distribution (and central limit theorem) - the sum of a large number of independent random variables with the same mean and variance approaches a Gaussian distribution

When doing an experiment - if there are a whole bunch of causes of error, all the errors add up, and you can expect a normal (Gaussian) distribution
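A quick numerical sketch of this "errors add up to a Gaussian" statement (synthetic data, purely to illustrate the Central Limit Theorem):

    # Sum many small i.i.d. uniform "error sources" and check that the sums
    # look normal: mean ~0 and variance ~ 30 * Var(U(-1,1)) = 30 * (1/3) = 10.
    import numpy as np

    rng = np.random.default_rng(1)
    errors = rng.uniform(-1.0, 1.0, size=(100_000, 30))  # 30 error sources each
    sums = errors.sum(axis=1)                            # one "measurement" each
    print(sums.mean(), sums.var())                       # ~0.0, ~10.0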


October 18, 2010

Example 1: Expectation, CDF

Let $X$ be a continuous non-negative random variable

Let $f_X$ denote the PDF

$f_X(x) = 0$ for $x < 0$

Show that $E[X] = \int_0^\infty [1 - F_X(x)]\,dx$

Next, using the definition of expectation... substitute this into the definition of the expectation.

(There's a problem going from this step to the next step - see the sketch below)

where the first quantity in square brackets is 1, and the second quantity in square brackets is the CDF of $X$.
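A sketch of the troublesome step: write $1 - F_X(x)$ as an integral of the PDF and swap the order of integration (valid for a non-negative random variable),

$\int_0^\infty [1 - F_X(x)]\,dx = \int_0^\infty \int_x^\infty f_X(y)\,dy\,dx = \int_0^\infty \left[\int_0^y dx\right] f_X(y)\,dy = \int_0^\infty y\,f_X(y)\,dy = E[X]$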

Example 2: Choosing Keys

A man has a set of N keys. He wants to open his door, which will open with exactly 1 key, but he doesn't know which one, and he is trying keys at random.

Part A

Find the mean number of trial attempts.


Use a negative binomial: http://en.wikipedia.org/wiki/Negative_binomial_distribution

Specifically, use a geometric distribution: http://en.wikipedia.org/wiki/Geometric_distribution

So we know that the expectation of the geometric distribution is:

$E[T] = \dfrac{1}{p}$

where $p$ is the probability of success in the Bernoulli trial, $p = \dfrac{1}{N}$

So that the mean number of trial attempts is:

$E[T] = \dfrac{1}{1/N} = N$


Part B

What if there is no replacement?

With no attempts, $P(\text{success on 1st try}) = \dfrac{1}{N}$

After the first attempt, $P(\text{success on 2nd}) = \left(\dfrac{N-1}{N}\right)\dfrac{1}{N-1} = \dfrac{1}{N}$

After the second attempt, $P(\text{success on 3rd}) = \left(\dfrac{N-1}{N}\right)\left(\dfrac{N-2}{N-1}\right)\dfrac{1}{N-2} = \dfrac{1}{N}$

And the third attempt, the same cancellation gives $\dfrac{1}{N}$

And so on. Each time, all terms cancel out except $\dfrac{1}{N}$

So we can take the expectation of that:

$E[T] = \sum_{k=1}^{N} k \cdot \dfrac{1}{N}$

And using the formula for the sum of the first $N$ counting numbers, $\sum_{k=1}^{N} k = \dfrac{N(N+1)}{2}$:

$E[T] = \dfrac{N+1}{2}$
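A simulation sketch checking both answers (`attempts` is a helper written for this note, not a library function; N = 10 is an arbitrary choice):

    # Mean number of key trials: ~N with replacement (geometric),
    # ~(N+1)/2 without replacement (uniform over attempt numbers).
    import random

    def attempts(N, replace):
        keys = list(range(N))
        correct = random.randrange(N)
        count = 0
        while True:
            count += 1
            k = random.choice(keys)
            if k == correct:
                return count
            if not replace:
                keys.remove(k)   # discard a key that failed

    N, trials = 10, 100_000
    for replace in (True, False):
        mean = sum(attempts(N, replace) for _ in range(trials)) / trials
        print(replace, mean)     # ~10.0 with replacement, ~5.5 without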

Multivariate

Set theory

Moved into random variables (mapping from event space to the real numbers)

Now we want to do a new type of mapping into multiple random variables


n-dimensional random vector - a mapping from a sample space into n-dimensional Euclidean space $\mathbb{R}^n$.

Joint Probability Mass Function: $f_{X,Y}(x,y) = P(X = x, Y = y)$

Joint Probability Density Function: $f_{X,Y}(x,y)$, defined by $P((X,Y) \in A) = \iint_A f_{X,Y}(x,y)\,dx\,dy$

Joint Cumulative Distribution Function: $F_{X,Y}(x,y) = P(X \leq x, Y \leq y)$

Question: Why is joint PDF defined in terms of P, whereas the univariate PDF is defined in terms of the CDF?

Answer: Boundaries of multivariate PDFs are often non-trivial, and are not nice even "rectangles"... You need to know the boundaries of the PDF really well to use the CDF, so the joint CDF is not used as often.

Joint Conditional PDF: $f_{Y|X}(y|x) = \dfrac{f_{X,Y}(x,y)}{f_X(x)}$

The conditional PDF is just a renormalization.

But how is $f_X(x)$ defined, if it's a multivariate PDF?

This is the marginal PDF...

Marginal PDF: $f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x,y)\,dy$

Definition of independence of $X$ and $Y$: $f_{X,Y}(x,y) = f_X(x)\,f_Y(y)$

  • If $X$ and $Y$ are independent, then the moment generating function of $Z = X + Y$ is $M_Z(t) = M_X(t)\,M_Y(t)$
    • This was used in deriving the Central Limit Theorem



Define a new variable: covariance

Covariance: $\mathrm{Cov}(X,Y) = E[(X - \mu_X)(Y - \mu_Y)] = E[XY] - \mu_X \mu_Y$

Correlation: $\rho_{XY} = \dfrac{\mathrm{Cov}(X,Y)}{\sigma_X \sigma_Y}$

Note: Just because the covariance is 0 does not mean that $X$ and $Y$ are independent, i.e. it doesn't imply $f_{X,Y}(x,y) = f_X(x)\,f_Y(y)$

Bivariate Normal Distribution

  • Means $\mu_X, \mu_Y$
  • Variances $\sigma_X^2, \sigma_Y^2$
  • Correlation $\rho$

Transform of a PDF

For a monotone transformation $Y = g(X)$, the transformed PDF is: $f_Y(y) = f_X\!\left(g^{-1}(y)\right)\left|\dfrac{d\,g^{-1}(y)}{dy}\right|$


Multivariate Example 1

X and Y have the distribution:

(Table: joint PMF values $f(x,y)$, with X = 1, 2, 3 across the columns and Y = 2, 3, 4 down the rows)


Part A

Show that X and Y are not independent.


One way: show that the covariance is nonzero.


A more fundamental way: using the definition of independence

So sum up each row/column and put it in a new row/column


(Table: the same joint PMF, with an added "(sum)" row and column holding the marginal PMFs of X and Y)

Then show that the product of the two marginal PDFs is not equal to the joint PDF value

i.e. pick row i and column j; if the sum of the joint PDF across the whole row i, times the sum of the joint PDF across the whole column j, does not equal the joint PDF at location (row i, col j), then we know the definition of independence is not met

Part B

Give a probability table for random variables U and V with the same marginals as X and Y but are independent.

So, we want to keep the "sum" column and row. Then we multiply the sum for row i by the sum for column j to get the (i,j) entry, as sketched after this example.

U is a discrete binomial distribution

V is uniformly distributed
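A small sketch of this construction (the joint values below are hypothetical, since the original table's entries are not shown here): the independent table is the outer product of the marginals.

    # Build the independent joint PMF with the same marginals:
    # entry (i,j) = (row i sum) * (column j sum).
    import numpy as np

    joint = np.array([[0.10, 0.20, 0.10],    # hypothetical f(x,y) values
                      [0.10, 0.00, 0.10],    # rows: Y = 2, 3, 4
                      [0.10, 0.20, 0.10]])   # cols: X = 1, 2, 3
    px = joint.sum(axis=0)                   # marginal PMF of X (column sums)
    py = joint.sum(axis=1)                   # marginal PMF of Y (row sums)
    independent = np.outer(py, px)           # same marginals, now independent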


Notes

If they're independent, they WILL have a zero covariance

(So it follows that, if the covariance is nonzero, there is no way they can be independent)

But, just because the covariance is zero doesn't mean they are independent
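The standard counterexample (a classic, not from the lecture): let $X \sim N(0,1)$ and $Y = X^2$. Then

$\mathrm{Cov}(X, Y) = E[X^3] - E[X]\,E[X^2] = 0 - 0 \cdot 1 = 0$

yet $Y$ is completely determined by $X$, so the two are certainly not independent.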


October 21, 2010

Review

Hypergeometric - out of n objects, picking k objects (Bernoulli trials) without replacement, the probability that x of them are successes

Binomial distribution - out of n objects, picking k objects (Bernoulli trials) with replacement, the probability that x of them are successes

Normal distribution - derived using Central Limit Theorem; central concept is, if you take a large number of independent random variables distributed with the same mean and variance, their sum approaches a normal distribution

Independence of random variables - the joint PDF is equal to the products of the marginal PDFs

Joint normal distribution - determinant of covariance matrix shows up

If we then find the eigenvalues of this matrix, and use these as the (diagonal) covariance matrix, then some ellipsoidal, skewed distribution would become a normal distribution composed of nice circles.

Also: can define a shifted variable, $y = x - \mu$, to center the distribution around the means.

What if you have a kidney-shaped distribution?

You can define a new (shifted and rotated) variable, and use the covariance matrix, and get a "rotated" distribution, centered around the means, that's sort-of circular. But the kidney shape will still remain.

Covariance (correlation) is 0, but the two variables are not independent. (i.e. we used a mathematical trick to make the covariance 0, but the joint distribution is not equal to the product of the two marginal PDFs.)

You cannot make the distribution fit into a circle, because then you're throwing away information that's important.

To first order, you may be able to approximate it as a joint-normal distribution. (But this is like approximating a human as a sphere).


If you have an ellipsoidal distribution that's extremely squished in one direction,

Homework Problem 1

Let the number of chocolate chips in a certain type of cookie have a Poisson distribution. We want the probability that a randomly chosen cookie has at least two chocolate chips to be greater than 0.99. Find the smallest value of the mean of the distribution that ensures this probability.

Poisson distribution: $P(X = k) = \dfrac{\lambda^k e^{-\lambda}}{k!}$

The question is asking for: $P(X \geq 2) > 0.99$

Rewriting using a less-than sign: $P(X \leq 1) < 0.01$

What is the value of $\lambda$ that satisfies this?

CDF: $P(X \leq k) = \dfrac{\Gamma(k+1, \lambda)}{k!}$, where $\lambda$ is the mean and $\Gamma$ is the upper incomplete Gamma function

We want to plug $k = 1$ into the CDF (i.e. evaluate $P(X < 2)$), set it equal to 0.01, and do a nonlinear solve to find values of $\lambda$


We know $X \leq 1$, so it can be 0 or 1, and plugging this into the expression for the Poisson distribution:

$P(X \leq 1) = e^{-\lambda} + \lambda e^{-\lambda} = e^{-\lambda}(1 + \lambda)$

(replace the $<$ sign with an $=$ sign... and solve for $\lambda$... see the sketch below)
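A sketch of that nonlinear solve (any bracketing root finder would do):

    # Find the smallest lam with P(X >= 2) > 0.99 for X ~ Poisson(lam),
    # i.e. solve exp(-lam) * (1 + lam) = 0.01.
    import math
    from scipy.optimize import brentq

    f = lambda lam: math.exp(-lam) * (1.0 + lam) - 0.01
    lam_min = brentq(f, 1.0, 20.0)
    print(lam_min)   # ~6.64 chips per cookie, on average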

Homework 2

Two movie theaters compete for the business of one thousand customers. Assume that each customer chooses between the movie theaters independently and with indifference (p=1/2). Find an expression for N, the number of seats in your theater, that will result in the probability of turning away a customer (due to filling the seats) being less than one percent. First use the discrete distribution and then use the continuous approximation.

The problem is asking for $N$ such that $P(X > N) < 0.01$, where $X$ is the number of customers who choose our theater

Binomial distribution: "What is the probability that n identical but independent Bernoulli trials result in k successes?"

  • Are we doing this with replacement or not?
  • p stays the same - the probability is 1/2 for each Bernoulli trial (whether someone picks our movie theater or not)
  • So that's like having replacement

Want the probability of everything under the curve, for $k \leq N$: $P(X \leq N) \geq 0.99$

For the CDF: since it is discrete, the target probability will fall somewhere between two discrete values. Which one do we use?

  • we want to pick the one to the right - to make sure we have an extra seat

Using a binomial distribution web app (n=1000, p=0.5) to plot the PDF and CDF:

Number of seats: 537 (makes sense - half of 1000 is 500, plus a margin for fluctuations on a random night)

Part 2:

How do we do this with the normal (continuous) distribution? How would d'Alembert do this?
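A sketch of both calculations, the discrete inverse-CDF and the continuous normal approximation:

    # Seats N so that P(demand > N) < 0.01, demand ~ Binomial(1000, 1/2).
    from scipy.stats import binom, norm

    n, p = 1000, 0.5
    N_discrete = int(binom.ppf(0.99, n, p))    # smallest N with CDF(N) >= 0.99
    mu, sigma = n * p, (n * p * (1 - p)) ** 0.5
    N_normal = mu + norm.ppf(0.99) * sigma     # continuous approximation
    print(N_discrete, N_normal)                # 537 and ~536.8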

Homework 3

For the joint pdf over and (the pdf is zero elsewhere), find the coefficient c.

Also, find the joint CDF and the two marginal PDFs.

First part: The integral over the entire domain must be 1.

Second part:

Challenge: define a new variable , and find the marginal PDF of z.

Homework 4

For two independent random variables X and Y with moment generating functions $M_X(t)$ and $M_Y(t)$, show that the sum of these variables, $Z = X + Y$, has the moment generating function $M_Z(t) = M_X(t)\,M_Y(t)$.

Suggestion: if you first show that for independent variables $E[g(X)\,h(Y)] = E[g(X)]\,E[h(Y)]$, then you can use the "cool" way of writing the moment generating function to give this proof in about four steps.

Given: definition of independence, $f_{X,Y}(x,y) = f_X(x)\,f_Y(y)$

$E[g(X)\,h(Y)] = \iint g(x)\,h(y)\,f_{X,Y}(x,y)\,dx\,dy = \left[\int g(x)\,f_X(x)\,dx\right]\left[\int h(y)\,f_Y(y)\,dy\right] = E[g(X)]\,E[h(Y)]$

since $f_{X,Y}(x,y) = f_X(x)\,f_Y(y)$

Now, let's look at two particular functions for g and h:

Suppose $g(X) = e^{tX}$ and $h(Y) = e^{tY}$
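With those choices of g and h, the proof is the promised four steps:

$M_Z(t) = E[e^{t(X+Y)}] = E[e^{tX} e^{tY}] = E[e^{tX}]\,E[e^{tY}] = M_X(t)\,M_Y(t)$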

October 25, 2010

Conditional Expectation

This is also denoted as $E[X \mid B]$. Also, $B$ is supposed to be an event, not just a variable, that you are conditioning on (all of this is built up from set theory).


Stochastic Processes

Sean's experience: intimidating topic, due to the approach of most professionals being one of rigor.

Want to dispel this: from what we've covered, we already know everything we need to know

Stochastic process - just like a random variable, but slightly more generalized

Random variable / trials / experiments:

We will allow an experimental trial to result in a family of outcomes, parameterized by a variable (universally denoted as t)

Now you can create a mapping, and the stochastic process $X(t)$ is the result of the mapping

When we substitute in a particular parameter value, e.g. $t = t_1$, this becomes a random variable.

So $X(t)$ is the family of outcomes, parameterized by $t$

Example: six random clock digits; we push a button, and the numbers all change

  • one approach: treat this as one large number
  • another approach: each number is random, and can find a joint PDF between them
  • stochastic process approach: assign a t value to each digit
    • $t = 1, 2, \ldots, 6$ - one for each digit
    • $X(1)$ is the family of outcomes for the first digit
    • Set theory: mapping from the digits that show up, to the random variable

Sample space: $\{0, 1, \ldots, 9\}$ (possible values that our clock digits can take on)

Random variable: $X(t)$, $t = 1, \ldots, 6$

The interesting fact is, there may be a sequence, so that they all depend on one another.


Shower Example

Water hits the wall of the shower, runs down in serpentines

Random paths that jump around along the wall

(Figure: Stochastic shower.png - serpentine paths of water running down the shower wall)

One experiment: a snapshot of the serpentine, and it's particular path.

The coordinate of length down the shower is $t$

The coordinate of horiz. displacement is $X(t)$

Can look at $X(t_1)$, or $X(t_2)$, etc.

Important to maintain that we have a parameter $t$, because for a given experiment, there is (or may be) some connection between the $X(t)$ for all of the t's

e.g. for the serpentines - we know the path a serpentine takes follows some pattern (because it creates some curve)


Stock Market Example

Random variable: value of stock

Stock is only dependent on how much someone is willing to pay for it. But we can only know the price of the stock when there is a transaction.

So actually the stock price is a set of steps - a piecewise, not a continuous, function

(Figure: Stochastic stock.png - stock price as a piecewise-constant, stair-step function of time)


Types of Stochastic Processes

A single realization of a stochastic process (a single realization of ) can be:

  • Continuous/smooth
  • Discontinuous
  • Piecewise



Thoughts

Each value of $t$ can have a separate probability distribution

And we can have $f_{X(t_1)}(x)$, and $f_{X(t_2)}(x)$, and $f_{X(t_3)}(x)$, etc.

Recall: frequentist

If we had a perfect flow machine, and a perfect experimental instrument... and running a turbulent flow experiment

Turbulent experiment is a stochastic process

We run the experiment, and we measure the velocity as a function of time.

Velocity is smooth and continuous, and may be related by some physics that we know

We run the experiment over and over and over again to get statistics.

Maybe we look at the velocity at another point...

Or maybe we look at it EVERYWHERE

So then we have position $(x, y, z)$ and time $t$ as stochastic parameters - MULTIPLE stochastic process parameters

The value at a certain point is a random variable - parameterized on $t$

Distinction: ensemble average vs. time average

  • When we run 10 experiments, by gathering velocity data at a certain point
  • Averaging over time for a single experiment, is different from averaging over the 10 experimental values for a given t


U, V Velocity Example

Continuing with the flow experiment...

Looking at $u(t)$ and $v(t)$ and their correlation

The correlation drops from 1 to 0 as time goes on

This is true of all turbulent velocity fields, due to the strong dependence on initial conditions

The correlation curve has to be continuous, since the velocity field is continuous and smooth in time.

Timescale it takes the correlation to go to zero: wikipedia:Lyapunov exponent (amount of time it takes for two curves to become completely uncorrelated)

Turbulence: tradeoff between strong correlation (due to direction-change by eddies), which will go from +1 to negative, and the decaying exponential

(Figure: Turbulence autcorrelation.png - turbulence velocity autocorrelation decaying in time)


Markov Processes and Bernoulli Processes

We have a Bernoulli trial (outcome is 0 or 1), for a bunch of experiments.

The outcome $X(n)$ (where $n$ is the experiment number) is either 0 or 1 - but it is like the clock example given earlier, because each of the experiments is independent: $f(x_n \mid x_{n-1}, \ldots, x_1) = f(x_n)$


Markov process - the PDF at some time parameter, conditioned on all of the previous times, is only dependent on the previous time: $f(x_n \mid x_{n-1}, \ldots, x_1) = f(x_n \mid x_{n-1})$

Markov chain: each link in the chain is only connected to the previous link

Examples:

Card games: cards represent a memory of the past moves (NOT a Markov process)

Chutes and Ladders: where you end up next depends ONLY on your current location and what you roll on the die

Following are some examples of subsets of Markov processes. Each person made a set of assumptions, and showed that the result had some cool properties. No information is given about what they proved... just some nomenclature so we're familiar with it.


Martingale Process

http://en.wikipedia.org/wiki/Martingale_%28betting_system%29

This is similar to a Bernoulli process, but it adds up - you add up the steps: $X(n) = \sum_{i=1}^{n} B_i$

where $B_i$ is a Bernoulli trial distributed as either -1 or 1, each with probability 0.5.

So if we take the expectation of $B_i$, it's 0 - it could be 1, or it could be -1.


Levy Process

Is a Markov process...

Condition 1: $X(0) = 0$

Condition 2: increments over disjoint time intervals are independent, for all choices of intervals

Condition 3: increments are stationary - the distribution of $X(t + \Delta t) - X(t)$ depends only on $\Delta t$

Example (to clarify)...


Wiener Process

wikipedia:Wiener process

Fits within Levy processes

$X(t + dt) = X(t) + \sqrt{c\,dt}\,N_t(0,1)$, which defines a transition probability $p(x_2, t+dt \mid x_1, t)$ - transitioning from $x_1$ to $x_2$

This is an initial value problem (think explicit Euler...)

Can get the next value from the current value PLUS the RHS

C++ random number generator --> normally-distributed number, mean 0, standard deviation 1: that's $N_t(0,1)$.

$N_t$ is a value, but its value is distributed normally. A value comes out, we multiply it by $\sqrt{c\,dt}$, and add it to our current value to get the next value.

So this means our distribution will be discontinuous (due to random number).
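A minimal sketch of the update rule just described (Python's `random.gauss` standing in for the C++ normal generator mentioned above; the parameter values are arbitrary):

    # One realization of a Wiener process: X(t+dt) = X(t) + sqrt(c*dt) * N(0,1).
    import random

    def wiener_path(c=1.0, dt=1e-3, n_steps=1000, x0=0.0):
        x, path = x0, [x0]
        for _ in range(n_steps):
            x += (c * dt) ** 0.5 * random.gauss(0.0, 1.0)  # add scaled normal
            path.append(x)
        return path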

Question: can we shrink $dt$ small enough that our function is continuous?

For Euler method, this works fine:

$x(t + dt) = x(t) + dt\,f(x, t)$

and we get a derivative,

$\dfrac{x(t+dt) - x(t)}{dt} = f(x, t)$

But now we have

$\dfrac{X(t+dt) - X(t)}{dt} = \sqrt{\dfrac{c}{dt}}\,N_t(0,1)$

and so as $dt$ goes to zero, the RHS goes to infinity

We can't do a derivative of this, so calculus becomes difficult on these stochastic processes

This one is not really smooth (the realization has no well-defined derivative)

But we still may be able to do calculus, because there's still an area under the curve

Probability: may use more generalized definitions of a limit

e.g. we aren't looking at single realization values of $X(t)$ - if we're looking at the PDF at two points, we can say there is a limit in the PDF

There are four different probability limits that can be defined, and you can start doing calculus

Ito calculus, Stratonovich calculus, stochastic calculus, etc. - all go down the road of "not differentiable, so we need to define a new calculus"

But we're always dealing with smooth and continuous processes, so we won't go down that road

So you always have a normally-distributed process, but with increasing variance

Alternative definition of Markov process:

If your function is normally distributed, with the same mean and variance, Central Limit Theorem tells you will be normally distributed


Example of Wiener process:

Einstein's Brownian Motion Paper (1905)

3 papers:

  • Brownian motion
  • photoelectric effect
  • special relativity

Pollen particle in water - randomly moving around, being bumped to the left, bumped to the right

Considered the first directly observable evidence of kinetic theory

http://files.charlesmartinreid.com/Einstein_BrownianMotion_1905.pdf

Ornstein-Uhlenbeck Process

Also a Markov process, built by adding a drift term to the Wiener process:

$X(t + dt) = X(t) - \gamma X(t)\,dt + \sqrt{c\,dt}\,N_t(0,1)$

Second term on RHS is like an advection term

Third term is a random term like the one in the Wiener process

Example of Ornstein-Uhlenbeck process:

Langevin's Brownian Motion - let's let the position of a Brownian particle be governed by Newton's second law

Have two variables: position $x$ and velocity $v$, where the position is continuous, and the velocity is a Wiener-type process

Langevin considered this to be more physically sensible - because particle position isn't discontinuous


Poisson Process

Like a Bernoulli process, but we're going to let the jumps occur at random times

The dt between jumps has a certain distribution - as opposed to the Wiener process, where we have an expression in terms of dt

So the y-value (variable $X$) is uniformly distributed (well, uniform - every jump is 1)

But the x-axis (variable $t$) jumps are exponentially distributed, so the number of jumps in an interval is Poisson-distributed

Examples:

  • Radioactive decay
  • Telephone calls arriving at a switchboard
  • Page view request at web site

Things that, over a certain interval, are uniformly distributed


Random Walk Processes

Depending on the conditions on your steps, you end up with different types of random walks

e.g. if your steps are normally distributed, you have a Wiener random walk

if your steps are Bernoulli steps, it is a Martingale random walk

etc...

But ALL are Markov processes


October 28, 2010

(Missed some...)

Wiener process - the next value $X(t+dt)$ is distributed randomly about the current value $X(t)$

We can get statistics from a Wiener process:

  • Means $E[X(t_1)]$ and $E[X(t_2)]$
  • Variances $\mathrm{Var}[X(t_1)]$ and $\mathrm{Var}[X(t_2)]$
  • Correlation (how narrow the 2-D joint PDF surface is)


Wiener Process Statistics

$X(t + dt) = X(t) + \sqrt{c\,dt}\,N_t(0,1)$

where $N_t(0,1)$ is the normal distribution with mean 0 and variance 1

This can also be written as $X(t+dt) - X(t) \sim N(0, c\,dt)$

Property of normal distribution (see wikipedia:Normal distribution) - if we have $X \sim N(\mu_X, \sigma_X^2)$ and $Y \sim N(\mu_Y, \sigma_Y^2)$ independent, then for $Z = X + Y$, $Z \sim N(\mu_X + \mu_Y,\ \sigma_X^2 + \sigma_Y^2)$


Lemons' book: start from

$X(t + dt) = X(t) + \sqrt{c\,dt}\,N_t(0,1)$

Which then leads to:

$\dfrac{X(t+dt) - X(t)}{dt} = \sqrt{\dfrac{c}{dt}}\,N_t(0,1)$

which is NOT SMOOTH!

Then we can say, splitting one step into two half-steps,

$X(t + dt) = X(t) + \sqrt{c\,dt/2}\,N_1(0,1) + \sqrt{c\,dt/2}\,N_2(0,1)$

all the terms cancel out (the two normal increments add, by the property above, to a single increment of variance $c\,dt$), and we get:

$X(t + dt) = X(t) + \sqrt{c\,dt}\,N_t(0,1)$

which means the Wiener process is consistent (and independent of the dt selection)

Have a bunch of random processes, separated by intervals of $dt$, and construct a PDF

Definition of stochastic process: Set of random variables... performing a trial of an experiment... outcome is a set of random variables parameterized by t

When we put in a specific t (say $t_1$), we plug it in and get a particular value of $X(t_1)$

So if we pick a value of t, we get a PDF for X at that t - e.g. $f_{X(t_1)}(x)$, or $f_{X(t_2)}(x)$, etc.

It would be great if we had the joint PDF, $f_{X(t_1), X(t_2)}(x_1, x_2)$, because then we can get the marginal PDF, the variances, the covariance, etc.


So what we've just shown is, no matter what our dt is, we always get the same PDF at a particular $t$

Wiener process is used more commonly because of this consistency

Analog for differential equations: if you have a differential equation, and you're integrating to a certain point in time, it doesn't matter if you take two timesteps or a thousand timesteps to get there, you will still get the same result.

Note: You can also get a similar consistency result for a Cauchy distribution, but because it doesn't have a finite variance, it isn't as useful


Now, using the method of moments, take the expectation of both sides of the Wiener process equation:

$E[X(t+dt)] = E[X(t)] + E[\sqrt{c\,dt}\,N_t(0,1)]$

We can take constants out of the expectation operator, so the last term becomes:

$\sqrt{c\,dt}\,E[N_t(0,1)] = 0$

Next, can write this as:

$E[X(t+dt)] - E[X(t)] = 0$

or alternatively,

$\dfrac{d\,E[X]}{dt} = 0$

We have an ODE for the first moment.

Next, square both sides:

$X^2(t+dt) = X^2(t) + 2\sqrt{c\,dt}\,X(t)\,N_t + c\,dt\,N_t^2$

And taking the expectation of both sides:

$E[X^2(t+dt)] = E[X^2(t)] + 2\sqrt{c\,dt}\,E[X(t)\,N_t] + c\,dt\,E[N_t^2]$

For a Wiener process, the normal distribution $N_t$ is added after $X(t)$, and is independent of $X(t)$ (however, $X(t+dt)$ is not independent of the normal distribution $N_t$, because that normal distribution is used to obtain $X(t+dt)$)

So we can separate the expectation of these variables (because their joint PDF is equal to the product of their marginal PDFs), and the second expectation term becomes:

$E[X(t)\,N_t] = E[X(t)]\,E[N_t]$

and the second factor $E[N_t]$ is 0, so that term goes away.

In the third expectation term, the expectation of the squared normal distribution is NOT a normal distribution - for example, the squared normal distribution can never be negative.

wikipedia:Chi-square distribution is the distribution of a sum of squared normal distributions... a single squared standard normal has mean 1... so the third expectation term $E[N_t^2]$ goes to 1

Plugging in and simplifying yields

$\dfrac{d\,E[X^2]}{dt} = c$

So the variance increases at a constant rate.

Evolution of PDF in Time

Use successive substitution and the same idea used before in finding that the Wiener process is independent of dt

Inspection: $X(t) \sim N(x_0, c\,t)$ for $X(0) = x_0$

This just says that as we advance in time, the mean of the distribution doesn't change, and the variance is linear in time, with coefficient $c$.

Einstein Paper

In Einstein's 1905 paper (see above for link), he finds the result:

$\dfrac{\partial f}{\partial t} = D\,\dfrac{\partial^2 f}{\partial x^2}$

and he says this is the same as the heat equation, but for a distribution instead of a scalar:

$\dfrac{\partial T}{\partial t} = \alpha\,\dfrac{\partial^2 T}{\partial x^2}$

The point being: there are three ways to get at the PDF:

  • Method of moments (how the PDF evolves in time)
  • Inspection (what we just did above)
  • Einstein (the PDE of the PDF)

For the Wiener process, this is a lot easier than for other distributions - it took another 8 yrs for someone to do the same thing for the Wiener process with a drift term, e.g. $X(t+dt) = X(t) + a\,dt + \sqrt{c\,dt}\,N_t(0,1)$

This is still going to be a Markov process, but is no longer a Martingale process... Adding that one extra term made it extremely difficult to derive the equivalent governing equation PDE, and took 8 years

When it was derived, it was called the wikipedia:Fokker-Planck equation

Point: it's powerful when you can go from a stochastic differential equation (e.g. definition of Wiener process; difficult to deal with) to a PDE (easier to deal with)

To generalize the process and mathematics required to go from a general stochastic differential equation to a PDE requires wikipedia:Ito calculus, and we're not going to go there


Aside

Can take the joint PDF at two times, $f_{X(t_1), X(t_2)}(x_1, x_2)$,

and we can get the covariance and the correlation function

We can find an expression for the joint PDF and then use that to find the covariances and correlations.... or, we can find them directly from our stochastic differential equation

Covariance

Definition of covariance is

$\mathrm{Cov}(X_1, X_2) = E[X_1 X_2] - E[X_1]\,E[X_2]$

so, for $X_1 = X(t_1)$ and $X_2 = X(t_2)$ with $t_1 < t_2$:

Now we can use the Wiener process stochastic differential equation to get $X_2$ in terms of $X_1$

And we just did a couple of these operations above, so we get

$\mathrm{Cov}(X_1, X_2) = c\,t_1$

Correlation

Definition of correlation:

$\rho(t_1, t_2) = \dfrac{\mathrm{Cov}(X_1, X_2)}{\sigma_{X_1}\,\sigma_{X_2}} = \dfrac{c\,t_1}{\sqrt{c\,t_1}\,\sqrt{c\,t_2}} = \sqrt{\dfrac{t_1}{t_2}}$

When you plot this (as a function of $t_2$):

  • Starts with a high correlation (starts at 1 when $t_2 = t_1$)
  • It has a very long tail

Joint PDF: equal to the marginal PDF times the conditional PDF

What is the conditional PDF?

We know the value of $X_1$ at some time $t_1$... what's the probability of $X_2$ given $X_1$?

A couple of observations:

The value of $X_2$ is governed by the stochastic differential equation (the one that defines the Wiener process)

And we know that the Wiener process has consistency... so regardless of the timestep we take, the Wiener process stochastic differential equation will still hold

Meaning the relationship is one of:

$X_2 = X_1 + \sqrt{c\,(t_2 - t_1)}\,N(0,1)$

and this means that $X_2$ is related to $X_1$ by just adding a normal distribution... and that distribution has a variance $c\,(t_2 - t_1)$ and is added to $x_1$ (which means it is a normal distribution with mean $x_1$)

Continuous and Smooth Realizations of a Continuous-Time Stochastic Process

Want to show both of these identities:

and

This will make it straightforward to derive PDF transport equations.

For the first:

which becomes

and the numerator becomes

For the second identity:

which becomes:

(Note: removing the star from the dummy variable in the first integral, but still indicating a difference by including the star in the PDF variables)

Next step:

and bringing in the limit from above:

which is equal to the final identity we were trying to prove in the first place...

Derivation of PDF Transport Equation

Momentum equation:

and species equation:

Now we start out with a function $Q(X(t))$, which has the following properties:

  • Scalar
  • Only a function of one realization at one time (not at different times...)
  • must be a nice function (can't go to infinity very fast, so that when we calculate the moments, the PDF times Q can't go to infinity - i.e. Q doesn't go to infinity so fast that it overpowers the tails of the PDF and makes the moment integral infinite)

by the definition of the total derivative.

This can be expanded, using the definition of the expectation, as:

Then, using the chain rule:

We are going to use the rule

to make

So now we set these two expressions for the time derivative of $E[Q]$ equal to one another... and we're doing this for arbitrary Q... and as a result, we get the following:

Further Reading:

Don S. Lemons, An Introduction to Stochastic Processes in Physics

  • Very readable for undergraduates
  • Not in-depth, covers all the basic concepts

Review Article (1943): "Stochastic Problems in Physics and Astronomy." S. Chandrasekhar. Reviews of Modern Physics, Vol. 15, pp. 1-89.

  • Much more in-depth
  • But difficult

Gardiner, Handbook of Stochastic Methods.