Tutorial 11: ERG2040C&D Probability Models and Applications
Wei Zhang
Dept. of Information Engineering, The Chinese University of Hong Kong
http://personal.ie.cuhk.edu.hk/˜zw007
April 9, 2009
Outline
Probability Generating Functions
Inequalities and Central Limit Theorem
Summary

Probability Generating Functions

- A p.g.f. is nothing more than a mathematician's trick. You should think of it in terms of the definition. The p.g.f. of a discrete random variable X is defined by
  g_X(z) = E(z^X)
- In this tutorial we only introduce the simplest case: X takes the values 0, 1, 2, ...
- Why bother with p.g.f.s?
  - They make calculations of expectations and of some probabilities very easy.
  - The distribution of a random variable is easy to obtain from its p.g.f.
  - They make sums of independent random variables easy to handle.
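
To make the definition concrete, here is a minimal Python sketch (not from the tutorial); the pmf, the function name pgf, and the evaluation points are illustrative choices.

    # Evaluate g_X(z) = E(z^X) for a small, assumed discrete distribution.
    def pgf(pmf, z):
        """g_X(z) = E(z^X) = sum over k of P(X = k) * z**k."""
        return sum(prob * z**k for k, prob in pmf.items())

    # Assumed pmf on {0, 1, 2}: P(X=0)=0.5, P(X=1)=0.3, P(X=2)=0.2
    pmf = {0: 0.5, 1: 0.3, 2: 0.2}
    print(pgf(pmf, 1.0))   # always 1, because the probabilities sum to 1
    print(pgf(pmf, 0.5))   # E(0.5^X)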

Probability Generating Functions (cont.)

- Distribution ↔ p.g.f.:
  g_X(z) = E(z^X) = \sum_{k=0}^{\infty} P(X = k) z^k
- P(X = k) is the coefficient of z^k in g_X(z):
  P(X = k) = \frac{1}{k!} \frac{d^k g_X(z)}{dz^k} \Big|_{z=0}
- Example: g_X(z) = \frac{1}{2 - z}.
  Method 1: g_X(z) = \frac{1}{2 - z} = \frac{1}{2} \sum_{k=0}^{\infty} \left(\frac{z}{2}\right)^k, so P(X = k) = \frac{1}{2^{k+1}}.
  Method 2: P(X = k) = \frac{1}{k!} \frac{d^k g_X(z)}{dz^k} \Big|_{z=0} = \frac{1}{k!} \cdot \frac{k!}{(2 - z)^{k+1}} \Big|_{z=0} = \frac{1}{2^{k+1}}.
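
Both methods can be checked mechanically. The sketch below uses sympy (an assumed tool, not part of the tutorial) to recover P(X = k) = 1/2^{k+1} from g_X(z) = 1/(2 - z), once from the series coefficients and once by repeated differentiation.

    import sympy as sp

    z = sp.symbols('z')
    g = 1 / (2 - z)                  # the p.g.f. from the example
    N = 5

    # Method 1: P(X = k) is the coefficient of z^k in the power series.
    series = sp.series(g, z, 0, N).removeO()
    for k in range(N):
        coeff = series.coeff(z, k)
        # Method 2: P(X = k) = (1/k!) * d^k g/dz^k evaluated at z = 0.
        deriv = sp.diff(g, z, k).subs(z, 0) / sp.factorial(k)
        print(k, coeff, deriv)       # both equal 1/2**(k + 1)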

Probability Generating Functions (cont.)

- p.g.f. ↔ Expectation:
  g_X(z) = E(z^X), so g_X'(z) = E(X z^{X-1}).
  So g_X'(1) = E(X).
- p.g.f. ↔ Variance:
  g_X''(z) = E(X(X-1) z^{X-2}).
  So g_X''(1) = E(X(X-1)) = E(X^2) - E(X), and
  Var(X) = E(X^2) - (E(X))^2 = g_X''(1) + g_X'(1) - (g_X'(1))^2.
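
Continuing with the example p.g.f. g_X(z) = 1/(2 - z) from the previous slide, a short sympy sketch (illustrative only) evaluates the two derivatives at z = 1 and plugs them into the variance formula above.

    import sympy as sp

    z = sp.symbols('z')
    g = 1 / (2 - z)                  # example p.g.f.

    g1 = sp.diff(g, z).subs(z, 1)    # g_X'(1)  = E(X)
    g2 = sp.diff(g, z, 2).subs(z, 1) # g_X''(1) = E(X(X - 1))

    mean = g1
    var = g2 + g1 - g1**2            # Var(X) = g''(1) + g'(1) - (g'(1))^2
    print(mean, var)                 # E(X) = 1, Var(X) = 2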

Method Summary

Methods for determining the distribution of functions of random variables:
- Distribution function method: Step A: find the c.d.f. of the function; Step B: find the p.d.f. of the function. Please recall Tutorial 7.
- Transformation method: let X denote a random variable with probability density function f(x) and U = h(X). Assume that h(x) is either strictly increasing (or decreasing); then the probability density of U is
  g(u) = f(x) \left| \frac{dx}{du} \right|,  with x = h^{-1}(u)
  (a small worked sketch follows this list).
- Generating function method (worked example on the next slide).
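
As the sketch referenced above, here is a small transformation-method example in sympy. The density f(x) = exp(-x) on x > 0 and the map U = h(X) = X^2 are assumed purely for illustration (they are not from the tutorial); the result is cross-checked against the distribution function method.

    import sympy as sp

    x, u = sp.symbols('x u', positive=True)

    f = sp.exp(-x)                 # assumed density of X on x > 0
    x_of_u = sp.sqrt(u)            # x = h^{-1}(u) for h(x) = x**2, increasing on x > 0

    # Transformation method: g(u) = f(x) * |dx/du| at x = h^{-1}(u)
    g = f.subs(x, x_of_u) * sp.Abs(sp.diff(x_of_u, u))
    print(sp.simplify(g))          # exp(-sqrt(u)) / (2*sqrt(u))

    # Distribution function method as a cross-check:
    # F_U(u) = P(X <= sqrt(u)) = 1 - exp(-sqrt(u)); its derivative equals g(u).
    F_U = 1 - sp.exp(-sp.sqrt(u))
    print(sp.simplify(sp.diff(F_U, u) - g))   # 0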

Generating function method

Example: A communication system has n links; P(link i fails) = p. Find P(exactly k links fail).

Solution:
1) Indicator function. Let X_i = 1 if link i fails and X_i = 0 otherwise. Then the X_i are i.i.d. Bernoulli(p). Define Y = X_1 + X_2 + ... + X_n, so P(exactly k links fail) = P(Y = k).
2) Generating function. Using the expectation property (independence),
   g_Y(z) = E[z^{X_1 + ... + X_n}] = E[z^{X_1}] E[z^{X_2}] ... E[z^{X_n}].
   Since E[z^{X_i}] = z^0 (1 - p) + z^1 p = (1 - p) + pz, we get g_Y(z) = (1 - p + pz)^n.
   Thus, for 0 ≤ k ≤ n,
   \frac{d^k g_Y(z)}{dz^k} = \frac{n!}{(n-k)!} p^k (1 - p + pz)^{n-k},
   so P(Y = k) = \frac{1}{k!} \frac{d^k g_Y(z)}{dz^k} \Big|_{z=0} = \binom{n}{k} p^k (1 - p)^{n-k}.
   Y is a binomial random variable!
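
The algebra can be verified symbolically for a small n. The sketch below (illustrative, using sympy; n = 5 is an arbitrary choice) expands (1 - p + pz)^n and confirms that the coefficient of z^k equals the binomial pmf.

    import sympy as sp

    z, p = sp.symbols('z p')
    n = 5                                        # small n for illustration

    gY = sp.expand((1 - p + p*z)**n)             # p.g.f. of Y

    for k in range(n + 1):
        from_pgf = gY.coeff(z, k)                # coefficient of z^k in g_Y(z)
        pmf = sp.binomial(n, k) * p**k * (1 - p)**(n - k)
        print(k, sp.simplify(from_pgf - pmf) == 0)   # True for every k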

Inequalities and Central Limit Theorem

Markov's inequality

- If X is a non-negative random variable with finite mean µ, then for any α > 0,
  P(X > α) ≤ µ/α.
- Intuition: take g(x) = 1{x > α}, the indicator of the event, and h(x) = x/α. Then g(x) ≤ h(x) for every x ≥ 0, so
  P(X > α) = E(g(X)) ≤ E(h(X)) = µ/α.
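
A quick numerical illustration of the bound; the exponential distribution, its mean, and the sample size below are assumed purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    mu = 2.0                                       # mean of X
    X = rng.exponential(scale=mu, size=1_000_000)  # non-negative samples

    for alpha in (2.0, 5.0, 10.0):
        empirical = np.mean(X > alpha)             # estimate of P(X > alpha)
        bound = mu / alpha                         # Markov bound
        print(alpha, empirical, bound)             # empirical <= bound each time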

Chebyshev's inequality

- If X is a random variable with finite mean µ and finite variance σ^2, then for any α > 0,
  P(|X - µ| ≥ α) ≤ σ^2/α^2.
- Intuition: take g(x) = 1{|x - µ| ≥ α} and h(x) = (x - µ)^2/α^2. Then g(x) ≤ h(x) for every x, so
  P(|X - µ| ≥ α) = E(g(X)) ≤ E(h(X)) = σ^2/α^2.
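
The same style of numerical check for Chebyshev's inequality; uniform samples are an arbitrary choice of a distribution with finite variance.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=1_000_000)
    mu, sigma2 = X.mean(), X.var()                 # sample mean and variance

    for alpha in (0.2, 0.3, 0.4):
        empirical = np.mean(np.abs(X - mu) >= alpha)   # estimate of P(|X - mu| >= alpha)
        bound = sigma2 / alpha**2                      # Chebyshev bound
        print(alpha, empirical, bound)                 # empirical <= bound each time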