Continuous Random Variables Guide
[email protected] https://web.facebook.com/OMEGACENTER2014
A–1 • Definition:
Let (Ω, F, P) be a probability space. A continuous random variable on (Ω, F, P) is any map X : Ω ⟶ ℝ, ω ⟼ X(ω), satisfying the following two conditions:
a • The image set X(Ω) ⊆ ℝ is an interval of ℝ (one may have X(Ω) = ℝ̄ = [−∞, +∞] = ℝ ∪ {±∞})
b • ∀x ∈ ℝ, P_X({x}) = 0
Remark: P_X({x}) = 0, yet P_X(X ∈ [x, x + dx[) ≠ 0
A–3 • Distribution function:
The distribution function of the random variable X is the function F_X defined on ℝ by:
∀x ∈ ℝ, F_X(x) = P_X(]−∞, x]) = P(X ≤ x) = ∫_{−∞}^{x} f_X(t) dt
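As a quick illustration (a sketch, not part of the original notes): for the standard normal density, the integral above can be evaluated numerically and compared with the closed form Φ(x) = (1 + erf(x/√2))/2. The truncation bound and step count below are arbitrary choices.

```python
import math

def phi_density(t):
    # standard normal density f_X(t)
    return math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)

def cdf_by_integration(x, lower=-10.0, n=100_000):
    # F_X(x) = ∫_{-inf}^{x} f_X(t) dt, truncated at `lower`, trapezoid rule
    h = (x - lower) / n
    s = 0.5 * (phi_density(lower) + phi_density(x))
    for i in range(1, n):
        s += phi_density(lower + i * h)
    return s * h

def cdf_closed_form(x):
    # Φ(x) expressed through the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

approx = cdf_by_integration(1.0)
exact = cdf_closed_form(1.0)  # Φ(1) ≈ 0.8413
```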
A–6 • Mathematical expectation:
Let X be a continuous random variable such that x f_X(x) is integrable on X(Ω) and ∫_{−∞}^{+∞} x f_X(x) dx converges. The mathematical expectation of X (also called the mean, or first non-central moment) is the real number E(X) defined by:
E(X) = ∫_{−∞}^{+∞} x f_X(x) dx
Remarks: If X and Y have the same law, it is clear that E(X) = E(Y). The converse is false.
One has: E(∑_{i=1}^{n} X_i) = ∑_{i=1}^{n} E(X_i)
d • Positivity of the expectation:
If E(X) exists and X ≥ 0, then E(X) ≥ 0
If X and Y have expectations and X ≤ Y, then E(X) ≤ E(Y)
e • Jensen's inequality: Let φ be a convex function from ℝ to itself and X a r.v. such that E[φ(X)] exists. Then: φ[E(X)] ≤ E[φ(X)]
Reminder: A function φ from ℝ to itself is convex if, for every pair (x, y) ∈ ℝ² and every λ ∈ [0, 1]: φ[λx + (1 − λ)y] ≤ λφ(x) + (1 − λ)φ(y)
Note, in particular, that a twice-differentiable function φ whose second derivative is non-negative (φ″(x) ≥ 0) is convex.
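A minimal Monte Carlo check of Jensen's inequality with the convex map φ(x) = x² (so φ[E(X)] = (E X)² and E[φ(X)] = E(X²)); the sample size and the choice of an Exp(1) law are arbitrary:

```python
import random

random.seed(0)
samples = [random.expovariate(1.0) for _ in range(200_000)]  # X ~ Exp(1)

mean = sum(samples) / len(samples)                    # ≈ E(X) = 1
mean_sq = sum(x * x for x in samples) / len(samples)  # ≈ E(X²) = 2
# Jensen with φ(x) = x²: (E X)² ≤ E(X²); here 1 ≤ 2, with strict gap Var(X)
```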
f • If E(X) exists, then |E(X)| ≤ E(|X|) < +∞
g • For any sequence of r.v.s (X_i)_{i=1,2,…,n}, each having an expectation, E(∑_{i=1}^{n} X_i) = ∑_{i=1}^{n} E(X_i)
A–9 • Variance:
A–10 • Properties of the variance:
a • ∀a ∈ ℝ, ∀b ∈ ℝ, Var(aX + b) = a² Var(X)    b • Var(c) = 0, where c is a constant.
g • Particular terminology: Let X be a r.v. The r.v.s X_c = X − E(X) and X* = (X − E(X))/σ(X)
are called, respectively, the centered r.v. and the standardized (centered-reduced) r.v. associated with X.
One has: E(X_c) = 0 and Var(X_c) = Var(X) = σ², while E(X*) = 0 and Var(X*) = 1
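A short sketch of the standardization X* = (X − E(X))/σ(X) using sample moments (the N(5, 9) law and sample size are arbitrary; with sample moments the result is exact up to floating point):

```python
import math
import random

random.seed(1)
xs = [random.gauss(5.0, 3.0) for _ in range(100_000)]  # X ~ N(5, 9)

m = sum(xs) / len(xs)
var = sum((x - m) ** 2 for x in xs) / len(xs)
sigma = math.sqrt(var)

std = [(x - m) / sigma for x in xs]       # X* = (X − E(X)) / σ(X)
m_std = sum(std) / len(std)               # ≈ 0
var_std = sum(z * z for z in std) / len(std)  # ≈ 1
```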
h • For any sequence of identically distributed r.v.s (X_i)_{i=1,2,…,n}, each having variance equal to
If r = 0 ⇒ μ₀ = 1    If r = 1 ⇒ μ₁ = 0
If r = 2 ⇒ μ₂ = E[(X − E(X))²] = Var(X) = E(X²) − (E(X))² ⇒ μ₂ = m₂ − m₁²
It generates the non-central moments: M_X(0) = 1, M′_X(0) = E(X), M″_X(0) = E(X²), …
M_X^{(r)}(0) = E(X^r) = m_r
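A small numerical sketch of this moment-generating property for the Exp(λ) law, whose MGF M_X(t) = λ/(λ − t) for t < λ is standard; the step h and λ = 2 are arbitrary. Finite differences at 0 recover E(X) = 1/λ and E(X²) = 2/λ².

```python
def mgf_exp(t, lam=2.0):
    # MGF of Exp(λ): M_X(t) = λ / (λ - t), valid for t < λ
    assert t < lam
    return lam / (lam - t)

h = 1e-5
# M'_X(0) ≈ central first difference → E(X) = 1/λ = 0.5
m1 = (mgf_exp(h) - mgf_exp(-h)) / (2 * h)
# M''_X(0) ≈ central second difference → E(X²) = 2/λ² = 0.5
m2 = (mgf_exp(h) - 2 * mgf_exp(0.0) + mgf_exp(-h)) / h**2
```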
random variable. In other words: (M_X(t) = M_Y(t)) ⇔ (X and Y have the same law)
continuous on J, has density: f_Y(y) = |(φ⁻¹)′(y)| f_X(φ⁻¹(y)), ∀y ∈ J
Use: Exponential laws are often used to model waiting times. This law is widely used in reliability studies.
Memorylessness property:
where a_i ∈ ℝ* ∀i = 1, 2, …, n and b ∈ ℝ
c • If (X₁, X₂, …, X_n) is an i.i.d. sequence of r.v.s with law N(m, σ²), then:
X̄ = (1/n) ∑_{i=1}^{n} X_i ↝ N(m, (σ/√n)²)
e • The normal law N(m, σ²) converges to the Dirac law at the point m (δ_m) as σ → 0
Approximation of a Binomial law and of the Poisson law by a Normal law:
For example, if X is the lifetime of a component, the longer it has already lived (X > x), the better its chances of living long: the system "rejuvenates".
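By contrast, the exponential law has no memory: P(X > s + t | X > s) = P(X > t) = e^{−λt}. A minimal simulation (λ, s, t and the sample size are arbitrary choices):

```python
import math
import random

random.seed(2)
lam, s, t = 1.5, 0.8, 0.6
xs = [random.expovariate(lam) for _ in range(400_000)]

surv_s = [x for x in xs if x > s]
p_cond = sum(1 for x in surv_s if x > s + t) / len(surv_s)  # P(X > s+t | X > s)
p_t = sum(1 for x in xs if x > t) / len(xs)                 # P(X > t)
p_theory = math.exp(-lam * t)                               # e^{-λt}
```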
The "long tail" behaviour can be handled in other applications by Pareto distributions, such as the size distribution of firms expressed in number of employees, in turnover, or in other measurable quantities.
Distribution function: ∀x ∈ ℝ, F_X(x) = exp(−e^{−(x−μ)/β})
Variance: Var(X) = (π²/6) β²
c • ∀α > 0, ∀n ∈ ℕ*, Γ(α) = Γ(α + n) / [(α + n − 1)⋯(α + 1)α]
d • ∀n ∈ ℕ, Γ(n + 1) = n!, in particular Γ(1) = Γ(2) = 1
e • ∀n ∈ ℕ, Γ(1/2 + n) = ((2n)!/(2^{2n} n!)) √π, in particular Γ(1/2) = √π and Γ(3/2) = √π/2
Mathematical expectation: E(X) = αβ    Variance: Var(X) = αβ²
Skewness coefficient: γ₁ = 2/√α
Kurtosis coefficient: γ₂ = 6/α
Moment generating function: M_X(t) = (1 − βt)^{−α}, for t < 1/β
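The moments above can be checked by simulation with the standard library's gamma sampler, `random.gammavariate(alpha, beta)` (shape–scale parameterization, matching E(X) = αβ and Var(X) = αβ²); the parameter values and sample size below are arbitrary:

```python
import random

random.seed(3)
alpha, beta = 2.5, 1.8        # shape α, scale β as in the text
xs = [random.gammavariate(alpha, beta) for _ in range(300_000)]

m = sum(xs) / len(xs)                        # ≈ E(X) = αβ = 4.5
v = sum((x - m) ** 2 for x in xs) / len(xs)  # ≈ Var(X) = αβ² = 8.1
```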
Properties of the Gamma law Γ(α, β):
a • If X₁, X₂, …, X_n are n independent r.v.s such that ∀i = 1, 2, …, n, X_i ↝ Γ(α_i, β), then:
∑_{i=1}^{n} X_i ↝ Γ(∑_{i=1}^{n} α_i, β)
b • If X ↝ Γ(α, θ) and Y ↝ Γ(β, θ), with X and Y two independent r.v.s, then X/(X + Y) ↝ 𝔅(α, β)
c • If X ↝ U_[0,1], then X² ↝ 𝔅(1/2, 1)
B–12 • Pearson (chi-square) law with k degrees of freedom χ²(k), k ∈ ℕ*:
Probability density function: f_X(x) = x^{k/2 − 1} e^{−x/2} / (Γ(k/2) 2^{k/2}), if x ∈ [0, +∞[; 0 otherwise
Mathematical expectation: E(X) = k    Variance: Var(X) = 2k
∑_{i=1}^{n} X_i ↝ χ²(∑_{i=1}^{n} k_i)
e • If X ↝ χ²(k), then ∀a > 0, aX ↝ Γ(k/2, 2a)
f • If X ↝ χ²(k) with k > 50, then √(2X) − √(2k − 1) ↝ N(0, 1) approximately, and χ²_α(k) ≈ (1/2)[Φ⁻¹(α) + √(2k − 1)]²
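A sketch of this large-k approximation by simulation: a χ²(k) draw is built as a sum of k squared standard normals, and the transformed variable √(2X) − √(2k − 1) should have mean ≈ 0 and variance ≈ 1. The values k = 100 and the replication count are arbitrary.

```python
import math
import random

random.seed(4)
k, reps = 100, 20_000
zs = []
for _ in range(reps):
    x = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(k))  # X ~ χ²(k)
    zs.append(math.sqrt(2 * x) - math.sqrt(2 * k - 1))       # ≈ N(0, 1)

m = sum(zs) / reps
v = sum((z - m) ** 2 for z in zs) / reps
```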
B–13 • Student law with k degrees of freedom T(k), k ∈ ℕ*:
Probability density function:
∀x ∈ ℝ, f_X(x) = (1/√(kπ)) (Γ((k+1)/2)/Γ(k/2)) (1 + x²/k)^{−(k+1)/2} = (1/(√k 𝔹(1/2, k/2))) (1 + x²/k)^{−(k+1)/2}
Mathematical expectation: E(X) = 0, if k > 1
Variance: Var(X) = k/(k − 2), if k > 2    Skewness coefficient: γ₁ = 0, if k > 3
Kurtosis coefficient: γ₂ = 6/(k − 4), if k > 4
Properties of the Student law with k degrees of freedom T(k), k ∈ ℕ*:
a • If Z ↝ N(0, 1) and X ↝ χ²(k), with Z and X two independent r.v.s, then T = Z/√(X/k) ↝ T(k)
b • For k = 1, X ↝ T(1) ⇔ X ↝ C(0, 1); in other words T(1) ≡ C(0, 1), where C(0, 1) is the Cauchy law with parameters 0 and 1
c • T(k) →^{L} N(0, 1); in practice for k > 30
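Property a can be sketched by construction: draw Z ~ N(0, 1) and X ~ χ²(k) independently, form T = Z/√(X/k), and compare the sample moments with E(T) = 0 and Var(T) = k/(k − 2). The choice k = 10 and the sample size are arbitrary.

```python
import math
import random

random.seed(5)
k, n = 10, 100_000
ts = []
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(k))  # X ~ χ²(k)
    ts.append(z / math.sqrt(chi2 / k))                          # T ~ T(k)

m = sum(ts) / n                  # ≈ E(T) = 0
v = sum(t * t for t in ts) / n   # ≈ Var(T) = k/(k-2) = 1.25
```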
C–1 • Markov inequality:
a • If X is a non-negative random variable having an expectation, one has:
∀t > 0, P(X ≥ t) ≤ E(X)/t
b • If X is a random variable having a moment of order r:
∀t > 0, P(|X| ≥ t) ≤ E(|X|^r)/t^r
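A quick empirical check of the Markov bound on an Exp(1) sample (thresholds and sample size are arbitrary):

```python
import random

random.seed(6)
xs = [random.expovariate(1.0) for _ in range(100_000)]  # X ≥ 0, E(X) = 1
mean = sum(xs) / len(xs)

# empirical P(X ≥ t) versus the Markov bound E(X)/t
results = {t: sum(1 for x in xs if x >= t) / len(xs) for t in (0.5, 1.0, 2.0, 4.0)}
```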
C–2 • Bienaymé–Tchebychev inequality:
Let X be a real random variable with finite variance. ∀ε > 0, P(|X − E(X)| ≥ ε) ≤ Var(X)/ε²
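The same kind of empirical check for the Bienaymé–Tchebychev bound, here on a N(3, 4) sample (the law, thresholds, and sample size are arbitrary; the bound is loose but always respected):

```python
import random

random.seed(7)
xs = [random.gauss(3.0, 2.0) for _ in range(100_000)]  # E(X) = 3, Var(X) = 4
m = sum(xs) / len(xs)
v = sum((x - m) ** 2 for x in xs) / len(xs)

# empirical P(|X − E(X)| ≥ ε) versus the Chebyshev bound Var(X)/ε²
checks = {eps: (sum(1 for x in xs if abs(x - m) >= eps) / len(xs), v / eps**2)
          for eps in (1.0, 2.0, 4.0)}
```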
C–3 • Almost sure convergence:
a • Definition 1: Let (X_n)_{n≥1} be a sequence of r.v.s and X a r.v., all defined on the same probability space (Ω, F, P). We say that X_n converges almost surely to X if the set of ω such that X_n(ω) converges to X(ω) has probability 1.
We write X_n →^{a.s.} X as n → +∞
b • Definition 2: The sequence (X_n) converges almost surely to X, written X_n →^{a.s.} X as n → +∞
C–4 • Convergence in probability:
Let (X_n)_{n≥1} be a sequence of r.v.s and X a r.v., all defined on the same probability space (Ω, F, P).
We say that X_n converges in probability to X
if: ∀ε > 0, lim_{n→+∞} P(|X_n − X| ≥ ε) = 0, or equivalently ∀ε > 0, lim_{n→+∞} P(|X_n − X| < ε) = 1
We write plim_{n→+∞}(X_n) = X, or X_n →^{p} X as n → +∞
a • Slutsky's theorem: X_n →^{p} X ⇒ g(X_n) →^{p} g(X), where g is a continuous function from ℝ to ℝ
b • X_n →^{a.s.} X ⇒ X_n →^{p} X
If the moments of order 1 and 2 exist, then (for X a constant): (X_n →^{m.q.} X) ⇔ { lim_{n→+∞} E(X_n) = X and lim_{n→+∞} Var(X_n) = 0 }
a • X_n →^{m.q.} X ⇒ X_n →^{p} X
b • X_n →^{a.s.} X ⇒ X_n →^{p} X
C–5 • Convergence in law:
Let (X_n)_{n≥1} be a sequence of r.v.s and X a r.v., all defined on the same probability space (Ω, F, P).
We say that X_n converges in law to X, written X_n →^{L} X as n → +∞, if F_{X_n}(x) → F_X(x) at every point x where F_X is continuous.
a • One observes that convergence in law of the sequence (X_n)_{n≥1} to X is equivalent to lim_{n→+∞} P(a < X_n ≤ b) = P(a < X ≤ b), where a and b are two continuity points of F_X
b • If (X_n)_{n≥1} is a sequence of discrete r.v.s and if X is also discrete such that
d • If (X_n)_{n≥1} is a sequence of r.v.s converging in law to a constant a ∈ ℝ, then it also converges in probability to the constant a: X_n →^{L} a ⇒ X_n →^{p} a
e • Suppose we have the convergences: X_n →^{L} X and Y_n →^{p} a, for a a real constant.
Then: ① X_n + Y_n →^{L} X + a    ② X_n Y_n →^{L} aX    ③ X_n/Y_n →^{L} X/a, if a ≠ 0
f • Let (X_N)_{N≥1} be a sequence of r.v.s with hypergeometric law H(N, n, p). Denote by S the set of integers N such that Np is an integer. Then the sequence (X_N)_{N∈S} converges in law to a binomial r.v. B(n, p). We write: X_N ↝ H(N, n, p) ⇒ X_N →^{L} B(n, p) as N → +∞
and Z_n the associated standardized variables: Z_n = (M_n − m)/(σ/√n) = √n(M_n − m)/σ.
Then for every interval [a, b]: lim_{n→+∞} P[a ≤ Z_n ≤ b] = ∫_a^b (e^{−t²/2}/√(2π)) dt.
We say that the r.v. Z_n = √n(M_n − m)/σ converges in law to the standard Normal law N(0, 1):
√n(M_n − m)/σ →^{L} N(0, 1) as n → +∞
C–1 • Random vectors:
(X, Y) is called a couple of continuous r.v.s on ℝ² if there exists a function f: ℝ² ⟶ ℝ⁺ such that, for all intervals I and J and for every bounded continuous function h:
P((X, Y) ∈ I × J) = ∬_{I×J} f_{X,Y}(x, y) dx dy and E[h(X, Y)] = ∬_{ℝ²} h(x, y) f_{X,Y}(x, y) dx dy
C–2 • Joint density:
f_{X,Y} is a joint density of the couple (X, Y) if and only if:
∫_{−∞}^{+∞} ∫_{−∞}^{+∞} f_{X,Y}(x, y) dx dy = 1 and f_{X,Y}(x, y) ≥ 0, ∀(x, y) ∈ ℝ²
C–3 • Marginal densities:
If (X, Y) is a couple of continuous r.v.s on ℝ², its marginal densities f_X and f_Y can be computed by:
f_X(x) = ∫_{−∞}^{+∞} f_{X,Y}(x, y) dy and f_Y(y) = ∫_{−∞}^{+∞} f_{X,Y}(x, y) dx
For mutually independent r.v.s: f_{X₁,X₂,…,X_m}(x_{1i}, x_{2i}, …, x_{mi}) = ∏_{k=1}^{m} f_{X_k}(x_{ki})
∂²F_{X,Y}(x, y)/∂y∂x = f_{X,Y}(x, y)    ∀(x, y), 0 ≤ F_{X,Y}(x, y) ≤ 1
F_{X,Y} is a non-decreasing function
lim_{x→−∞} F_{X,Y}(x, y) = lim_{y→−∞} F_{X,Y}(x, y) = 0, i.e. F_{X,Y}(−∞, y) = F_{X,Y}(x, −∞) = 0
P(a < X ≤ b, c < Y ≤ d) = F_{X,Y}(b, d) + F_{X,Y}(a, c) − F_{X,Y}(a, d) − F_{X,Y}(b, c) = ∫_a^b ∫_c^d f_{X,Y}(x, y) dy dx
C–13 • Conditional expectation:
The marginal expectation is the expectation of the conditional expectations (law of iterated expectations).
C–14 • Variance–Covariance:
Let X and Y be two random variables on (Ω, F, P). The covariance of X and Y, written Cov(X, Y), is the real number:
Cov(X, Y) = E[(X − E(X))(Y − E(Y))] = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} (x − E(X))(y − E(Y)) f_{X,Y}(x, y) dx dy
i • Cov(∑_i X_i, ∑_j Y_j) = ∑_i ∑_j Cov(X_i, Y_j)
j • Var(X) = E[(X − E(X))²] = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} (x − E(X))² f_{X,Y}(x, y) dx dy
k • Var(Y) = E[(Y − E(Y))²] = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} (y − E(Y))² f_{X,Y}(x, y) dx dy
If X and Y are independent ⇒ Cov(X, Y) = 0. The converse of this result is false.
If X and Y are independent ⇒ Var(XY) = Var(X)Var(Y) + (E(X))² Var(Y) + (E(Y))² Var(X). The converse of this result is false.
Properties:
a • M_{(X,Y)}(0, 0) = 1    b • M_{(X,Y)}(t₁, t₂) ≥ 0    c • M_{(X,Y)}(t₁, 0) = M_X(t₁)
D–1 • Fundamental example:
Consider n independent random variables X₁, …, X_n with respective normal laws. Then:
m = E(X) = (E(X₁) ⋯ E(X_n))′ = (m₁ ⋯ m_n)′, and the covariance matrix of the random vector X is diagonal: Σ_X = diag(σ₁², …, σ_n²), with inverse Σ_X⁻¹ = diag(1/σ₁², …, 1/σ_n²).
For any vector λ = (λ₁ ⋯ λ_n)′ of ℝⁿ, the r.v. λ′X = ∑_{i=1}^{n} λ_i X_i is a normally distributed variable:
λ′X ↝ N(λ′m, λ′Σ_X λ)
c • Proposition 2:
Let X be a Gaussian vector of ℝⁿ with mean m and covariance matrix Σ_X. When Σ_X is invertible, X is called a non-degenerate Gaussian random vector and admits the density:
f_X(x₁ ⋯ x_n) = (1/((√(2π))ⁿ √|Σ_X|)) exp{−(1/2)(x − m)′ Σ_X⁻¹ (x − m)}
For n = 2: Σ_X = [[σ_X², Cov(X, Y)], [Cov(X, Y), σ_Y²]], with |Σ_X| = σ_X²σ_Y² − Cov²(X, Y) = σ_X²σ_Y²(1 − ρ²(X, Y)), and:
f_{(X,Y)}(x, y) = (1/(2π√(σ_X²σ_Y²(1 − ρ²)))) exp{−(1/(2(1 − ρ²)))[(x − m_X)²/σ_X² + (y − m_Y)²/σ_Y² − 2ρ(x − m_X)(y − m_Y)/(σ_X σ_Y)]}
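A small sanity check of the bivariate formula above: when ρ = 0 the joint density factors into the product of the two univariate normal densities. The evaluation point and parameters are arbitrary.

```python
import math

def bvn_density(x, y, mx, my, sx, sy, rho):
    # bivariate normal density as written in the formula above
    q = ((x - mx) ** 2 / sx**2 + (y - my) ** 2 / sy**2
         - 2 * rho * (x - mx) * (y - my) / (sx * sy))
    norm = 2 * math.pi * math.sqrt(sx**2 * sy**2 * (1 - rho**2))
    return math.exp(-q / (2 * (1 - rho**2))) / norm

def normal_density(x, m, s):
    return math.exp(-((x - m) ** 2) / (2 * s**2)) / (s * math.sqrt(2 * math.pi))

# with ρ = 0 the joint density equals the product of the marginals
joint = bvn_density(0.3, -1.2, 0.0, 1.0, 1.0, 2.0, 0.0)
product = normal_density(0.3, 0.0, 1.0) * normal_density(-1.2, 1.0, 2.0)
```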
Solution
1) X ↝ N(0, 1) ⟹ Ω_X = ]−∞, +∞[
Hence G(y) = 2F(y) − 1
3) Let g(y) be the p.d.f. of the r.v. Y; then dG(y)/dy = g(y) and dF(y)/dy = f(y)
Hence the p.d.f. of the r.v. Y: g(y) = √(2/π) e^{−y²/2}, ∀y ∈ [0, +∞[; 0 elsewhere
4) E(Y²) = E(|X|²) = E(X²)
Now G(y) = 2F(y) − 1, so in particular G(Me_Y) = 2F(Me_Y) − 1, and G(Me_Y) = 1/2 ⟺ 2F(Me_Y) − 1 = 1/2
⟺ 2F(Me_Y) = 3/2 ⟺ F(Me_Y) = 3/4
Denote by Q3_X the third quartile of the r.v. X, so that F(Q3_X) = 3/4
As any distribution function, F is a bijection on Ω_X; consequently F(Q3_X) = G(Me_Y),
hence Q3_X = Me_Y
Solution
1) P(m − 2σ ≤ X ≤ m + 2σ) = P(−2 ≤ (X − m)/σ ≤ 2)
Thus P(m − 2σ ≤ X ≤ m + 2σ) = P(−2 ≤ Y ≤ 2), where Y = (X − m)/σ ↝ N(0, 1) if X ↝ N(m, σ²)
P(m − 2σ ≤ X ≤ m + 2σ) = F(2) − F(−2) = F(2) − (1 − F(2)) = 2F(2) − 1 = (2 × 0.9772) − 1
2)
a) X̄ = (1/n) ∑_{i=1}^{n} X_i : the mean return of the shares
S² = (1/n) ∑_{i=1}^{n} (X_i − X̄)² : a risk coefficient, measuring the observed deviations between the returns
Consequently S² = (1/2)[(X₁ − (X₁+X₂)/2)² + (X₂ − (X₁+X₂)/2)²] = (1/2)[((X₁−X₂)/2)² + ((X₂−X₁)/2)²]
S² = ((X₁ − X₂)/2)²
ii.
{X_i ↝ N(m, σ²), i = 1, 2; X₁ and X₂ independent} ⇒ (X₁ − X₂) ↝ N(0, 2σ²) ⇒ S = (X₁ − X₂)/2 ↝ N(0, σ²/2)
and X̄ ↝ N(m, σ²/2), so X̄ and S are two Gaussian r.v.s
Compute Cov(X̄, S):
Cov(X̄, S) = Cov[(X₁+X₂)/2, (X₁−X₂)/2] = (1/4) Cov[(X₁+X₂), (X₁−X₂)]
and E[g(S)] = E(S²) = σ²/2; indeed, the r.v.s f(X̄) and g(S) are integrable
Hence X̄ and S² are independent
iii.
certain.
Solution
Part 1:
1)
∙ Var(Y₁) = Var(X₁ + X₂) = Var(X₁) + Var(X₂) + 2Cov(X₁, X₂) = 1 + 1 + 2c ⇒ Var(Y₁) = 2(1 + c)
∙ Var(Y₂) = Var(X₁ + 2X₂) = Var(X₁) + 2² Var(X₂) + [2 × 1 × 2 × Cov(X₁, X₂)] = 1 + 4 + 4c
⇒ Var(Y₂) = 5 + 4c
Cov(Y₁, Y₂) = Cov((X₁ + X₂), (X₁ + 2X₂)) = Cov(X₁, (X₁ + 2X₂)) + Cov(X₂, (X₁ + 2X₂))
= [Cov(X₁, X₁) + Cov(X₁, 2X₂)] + [Cov(X₂, X₁) + Cov(X₂, 2X₂)] = 1 + 2c + c + 2
Cov(Y₁, Y₂) = 3(1 + c)
Hence: Ω_Y = Var([Y₁; Y₂]) = [[2(1 + c), 3(1 + c)], [3(1 + c), 5 + 4c]]
3) Remarks: We have: {X_i ↝ N(0, 1), i = 1, 2; and Y₁ (resp. Y₂) is a linear combination of X₁ and X₂}
1) Compute E(ȳ_n) and V(ȳ_n)
2)
a) Express ȳ_n as a function of ȳ_{n−1}
b) Determine the covariance between ȳ_{n−1} and ȳ_n
c) Compute the linear correlation coefficient between ȳ_{n−1} and ȳ_n
d) Interpret this last result
the difference of two chi-square laws whose degrees of freedom must be specified
c) Admitting that ȳ_n and ∑_{i=1}^{n}(y_i − m)² are independent,
deduce the law of nS²/σ²
Solution
1)
∙ E(ȳ_n) = E((1/n)∑_{i=1}^{n} y_i) = (1/n)∑_{i=1}^{n} E(y_i) = (1/n)(n·m). Hence E(ȳ_n) = E(y_i) = m
∙ V(ȳ_n) = V((1/n)∑_{i=1}^{n} y_i) = (1/n²)V(∑_{i=1}^{n} y_i) = V(y_i)/n. Hence V(ȳ_n) = σ²/n
2)
a) ȳ_n = (1/n)∑_{i=1}^{n} y_i = (1/n)(y_n + ∑_{i=1}^{n−1} y_i) = (1/n)[y_n + (n−1)ȳ_{n−1}] ⇒ ȳ_n = ((n−1)/n)ȳ_{n−1} + (1/n)y_n
b) Cov(ȳ_{n−1}, ȳ_n) = Cov(ȳ_{n−1}, ((n−1)/n)ȳ_{n−1} + (1/n)y_n) = ((n−1)/n)V(ȳ_{n−1}) + (1/n)Cov(ȳ_{n−1}, y_n)
⇒ Cov(ȳ_{n−1}, y_n) = 0
On the other hand V(ȳ_{n−1}) = σ²/(n−1), so Cov(ȳ_{n−1}, ȳ_n) = ((n−1)/n)(σ²/(n−1))
Hence Cov(ȳ_{n−1}, ȳ_n) = V(ȳ_n) = σ²/n
c) ρ_{ȳ_{n−1},ȳ_n} = Cov(ȳ_{n−1}, ȳ_n)/√(V(ȳ_{n−1})V(ȳ_n)) = (σ²/n)/√((σ²/(n−1))(σ²/n)) = (σ²/n)(√((n−1)n)/σ²) ⇒ ρ_{ȳ_{n−1},ȳ_n} = √(1 − 1/n)
3)
a) We have P[−α ≤ ȳ_n − m ≤ α]; since E(ȳ_n) = m,
P[−α ≤ ȳ_n − m ≤ α] = P[−α ≤ ȳ_n − E(ȳ_n) ≤ α] = P[|ȳ_n − E(ȳ_n)| ≤ α]
= 1 − P[|ȳ_n − E(ȳ_n)| > α]
By the Bienaymé–Tchebychev inequality: P[|ȳ_n − E(ȳ_n)| > α] ≤ V(ȳ_n)/α²
V(ȳ_n) = σ²/n, so P[|ȳ_n − E(ȳ_n)| > α] ≤ σ²/(nα²) ⇔ 1 − P[|ȳ_n − E(ȳ_n)| > α] ≥ 1 − σ²/(nα²)
Hence P[−α ≤ ȳ_n − m ≤ α] ≥ 1 − σ²/(nα²), with 1 − σ²/(nα²) a lower bound for P[−α ≤ ȳ_n − m ≤ α]
b) We thus have: 1 − σ²/(nα²) ≤ P[−α ≤ ȳ_n − m ≤ α] ≤ 1
Since lim_{n→∞}(1 − σ²/(nα²)) = 1 ⇒ lim_{n→∞} P[−α ≤ ȳ_n − m ≤ α] = 1, i.e. lim_{n→∞} P[|ȳ_n − m| ≤ α] = 1
Hence ȳ_n converges in probability to m: ȳ_n →^{P} m
4)
a) nS² = ∑_{i=1}^{n}(y_i − ȳ_n)² = ∑_{i=1}^{n}[(y_i − m) − (ȳ_n − m)]²
= ∑_{i=1}^{n}(y_i − m)² − 2(ȳ_n − m)(∑_{i=1}^{n} y_i − ∑_{i=1}^{n} m) + n(ȳ_n − m)², where ∑ y_i = nȳ_n and ∑ m = n·m
= ∑_{i=1}^{n}(y_i − m)² − 2n(ȳ_n − m)² + n(ȳ_n − m)² = ∑_{i=1}^{n}(y_i − m)² − n(ȳ_n − m)²
b)
y_i ↝ N(m, σ²) ⇒ (y_i − m)/σ ↝ N(0, 1) ⇒ (y_i − m)²/σ² ↝ χ²(1) ⇒ H = ∑_{i=1}^{n} (y_i − m)²/σ² ↝ χ²(n)  (1)
Now E(ȳ_n) = E(y_i) = m and V(ȳ_n) = V(y_i)/n = σ²/n = (σ/√n)²
We obtain ȳ_n ↝ N(m, (σ/√n)²) ⇒ (ȳ_n − m)/(σ/√n) ↝ N(0, 1) ⇒ ((ȳ_n − m)/(σ/√n))² ↝ χ²(1)
⇒ U = n(ȳ_n − m)²/σ² ↝ χ²(1)  (2)
Consequently nS²/σ² = [∑_{i=1}^{n}(y_i − m)² − n(ȳ_n − m)²]/σ² = ∑_{i=1}^{n}(y_i − m)²/σ² − n(ȳ_n − m)²/σ² = H − U
Hence W = nS²/σ² = H − U, with H ↝ χ²(n) and U ↝ χ²(1)
c) Let us restate question 4) b): admitting that ȳ_n and ∑_{i=1}^{n}(y_i − ȳ_n)² are independent, deduce the law of nS²/σ²
and that g(ȳ_n) = n(ȳ_n − m)²/σ² = U
Moreover: f and g are two continuous functions and:
E[g(ȳ_n)] = E(U) = 1 exists and is finite, since U ↝ χ²(1)
It finally follows that W = f(∑_{i=1}^{n}(y_i − ȳ_n)²) and U = g(ȳ_n) are independent.
Solution
1) Δy_t = y_t − y_{t−1} = (at + b + u_t) − (a(t − 1) + b + u_{t−1})
= (at + b + u_t) − (at − a + b + u_{t−1}) = a + (u_t − u_{t−1}) = a + Δu_t
mean [growth rate] of the quantity (y_t): (y_t − y_{t−1})/(t − (t − 1)) = Δy_t/1 = Δy_t
2)
Since the error terms (u_t) form an i.i.d. sequence of r.v.s with law N(0, 1):
⇒ Cov(u_t, u_s) = { 0 if t ≠ s; V(u_t) = 1 if t = s }
V(Δy_t) = V(u_t) + V(u_{t−1}) − 2Cov(u_t, u_{t−1}) = 1 + 1 − 0. Hence V(Δy_t) = 2
3)
Cov(Δy_t, Δy_{t−1}) = Cov(a + Δu_t, a + Δu_{t−1}) = Cov(Δu_t, Δu_{t−1}) = Cov((u_t − u_{t−1}), (u_{t−1} − u_{t−2}))
= Cov(u_t, u_{t−1}) − Cov(u_t, u_{t−2}) − Cov(u_{t−1}, u_{t−1}) + Cov(u_{t−1}, u_{t−2}) = 0 − 0 − 1 + 0
ρ_{Δy_t,Δy_{t−1}} = Cov(Δy_t, Δy_{t−1})/√(V(Δy_t)V(Δy_{t−1})) = −1/√(2 × 2). Hence ρ_{Δy_t,Δy_{t−1}} = −1/2
4)
Cov(Δy_t, Δy_{t−2}) = Cov(a + Δu_t, a + Δu_{t−2}) = Cov(Δu_t, Δu_{t−2}) = Cov((u_t − u_{t−1}), (u_{t−2} − u_{t−3}))
= Cov(u_t, u_{t−2}) − Cov(u_t, u_{t−3}) − Cov(u_{t−1}, u_{t−2}) + Cov(u_{t−1}, u_{t−3}) = 0
5)
i. y_t = ln x_t = at + u_t ⟹ ŷ_t = ln x̂_t = ât
û_t = y_t − ŷ_t
Ordinary least squares consists in minimizing the sum of squared residuals:
∑_{t=1}^{T} û_t² = Ψ(â) ⟹ Ψ(â) = ∑_{t=1}^{T}(y_t − ŷ_t)² = ∑_{t=1}^{T}(y_t − ât)²
Ψ′(â) = ∑_{t=1}^{T} 2(−t)(y_t − ât) = ∑_{t=1}^{T}(−2t y_t + 2ât²)
Ψ″(â) = ∑_{t=1}^{T} 2t²
â is the solution of (S): { Ψ′(â) = 0; Ψ″(â) > 0 } ⇔ { ∑_{t=1}^{T}(−2t y_t + 2ât²) = 0 (1); ∑_{t=1}^{T} 2t² > 0 (2) }
(1): ∑_{t=1}^{T}(2t y_t − 2ât²) = 0 ⇔ 2∑_{t=1}^{T} t y_t − 2â∑_{t=1}^{T} t² = 0 ⇔ â∑_{t=1}^{T} t² = ∑_{t=1}^{T} t y_t ⇔ â = (∑_{t=1}^{T} t y_t)/(∑_{t=1}^{T} t²)
(2): ∑_{t=1}^{T} 2t² > 0 is obvious. Hence â = (∑_{t=1}^{T} t y_t)/(∑_{t=1}^{T} t²)
with u_t ↝ N(0, 1), so the p.d.f. of the r.v. U is: f_U(u) = (1/√(2π)) e^{−u²/2}, where u ∈ Ω_U = ]−∞, +∞[
Denote by F_U(u) = P(U ≤ u) and F_X(x) = P(X ≤ x) the respective distribution functions of the r.v.s U and X
F_X(x) = P(X ≤ x) = P(e^U ≤ x); since ln is increasing on ]0, +∞[,
F_X(x) = F_U(ln(x)) = ∫_{−∞}^{ln(x)} (1/√(2π)) e^{−u²/2} du
ii. f_X(x) = F′_X(x) = (ln(x))′ F′_U(ln(x)) = (1/x) f_U(ln(x))
Consequently f_X(x) = (1/(x√(2π))) e^{−(ln(x))²/2}, where x ∈ Ω_X = ]0, +∞[
iii.
Ḡ_X = (∏_{t=1}^{T} x_t)^{1/T}
thus ln(Ḡ_X) = ln((∏_{t=1}^{T} x_t)^{1/T}) = (1/T) ln(∏_{t=1}^{T} x_t) = (1/T)∑_{t=1}^{T} ln x_t = (1/T)∑_{t=1}^{T} u_t. Hence ln(Ḡ_X) = Ū
Now M_Ū(α) = E(e^{αŪ}) = E((e^Ū)^α) = E((Ḡ_X)^α) ⇒ M_Ū(1) = E((Ḡ_X)¹) = E(Ḡ_X)
First compute M_Ū(α): M_Ū(α) = M_{(1/T)∑u_t}(α) = M_{∑u_t}(α/T) = ∏_{t=1}^{T} M_{u_t}(α/T)
Since u_t ↝ N(0, 1), M_{u_t}(α) = e^{α²/2} ⇒ M_{u_t}(α/T) = e^{(α/T)²/2} = e^{α²/(2T²)}
M_Ū(α) = ∏_{t=1}^{T} e^{α²/(2T²)} = (e^{α²/(2T²)})^T = e^{α²/(2T)} ⇒ M_Ū(1) = e^{1/(2T)}. Hence E(Ḡ_X) = e^{1/(2T)}
Solution
1) We have a = b = 0, so y_t = ε_t; since ε_t ↝ N(0, σ²), the same holds for the r.v. y_t:
y_t ↝ N(0, σ²), so the p.d.f. of the r.v. Y is: f_Y(y_t) = (1/(σ√(2π))) e^{−y_t²/(2σ²)}, where y_t ∈ Ω_Y = ]−∞, +∞[
Denote by F_Y(y_t) = P(Y ≤ y_t) and F_Z(z_t) = P(Z ≤ z_t) the respective distribution functions of the r.v.s Y and Z.
We have Z = Y², hence Ω_Y = ]−∞, +∞[ ⇒ Ω_Z ⊆ [0, +∞[
Part One:
1) Determine the distribution function of X
2) Compute, as a function of θ, the median of X
3) Define Z = ⌊X⌋, the integer part of X: that is, Z is the greatest integer less than or equal to X
i. Determine the probability law of Z
ii. Compute the mathematical expectation of the variable Z
Solution
1)
If x ≥ 0, then F_X(x) = 1 − e^{−θx}. Hence F_X(x) = { 0 if x < 0; 1 − e^{−θx} if x ≥ 0 }
First of all Z(Ω) = ℕ, which means that Z is a discrete random variable.
∙ { T = Z + 1; Z(Ω) = ℕ } ⇒ T(Ω) = ℕ*
F_Z(z) = { 0 if z < 0; 1 − (e^{−θ})^{z+1} if z ≥ 0 }
∙ ∀t ≥ 1, F_T(t) = F_Z(t − 1) = 1 − (e^{−θ})^{(t−1)+1}
P(T = t) = { (e^{−θ})^{t−1}(1 − e^{−θ}) = (1 − p)^{t−1} p if t ∈ ℕ*, with p = (1 − e^{−θ}) ∈ ]0, 1[; 0 otherwise }
Hence T ↝ G(1 − e^{−θ})
Indeed E(T) = 1/(1 − e^{−θ})
Since T = Z + 1, E(Z + 1) = 1/(1 − e^{−θ}) ⟺ E(Z) + 1 = 1/(1 − e^{−θ})
⟺ E(Z) = 1/(1 − e^{−θ}) − 1 = (1 − 1 + e^{−θ})/(1 − e^{−θ}) = e^{−θ}/(1 − e^{−θ}) = 1/(e^θ − 1). Hence E(Z) = 1/(e^θ − 1)
2)
E(X) = ∫_1^{+∞} x f_X(x) dx = ∫_1^{+∞} (4/x⁴) dx = −(4/3)[1/x³]_1^{+∞} = −(4/3)[0 − 1]
E(X) = 4/3 ≅ 1.33
3)
▪ F_X(x) = P(X ≤ x) = { 0 if x < 1; ∫_1^x f_X(t) dt if x ≥ 1 }
For every x ≥ 1: F_X(x) = ∫_1^x (4/t⁵) dt = −(4/4)[1/t⁴]_1^x = 1 − 1/x⁴
Hence F_X(x) = P(X ≤ x) = { 0 if x < 1; 1 − 1/x⁴ if x ≥ 1 }
▪ Me is a solution of the equation F_X(x) = 1/2 on the interval [1, +∞[
Now F_X(x) = 1/2 ⟺ 1 − 1/x⁴ = 1/2 ⟺ 1/x⁴ = 1/2 ⟺ x⁴ = 2 ⟺ x = 2^{1/4}
Hence Me = 2^{1/4} ≅ 1.19
4)
Me ≠ E(X), so the distribution is not symmetric
By computing the skewness coefficient γ₁ = μ₃/σ³ = E[(X − E(X))³] / [E((X − E(X))²)]^{3/2},
Solution
{ X, Y ↝ N(0, 1); X and Y two independent r.v.s } ⇒ { U = X − Y ↝ N(E(X − Y), V(X − Y)); V = X + Y ↝ N(E(X + Y), V(X + Y)) }
Cov(U, V) = Cov(X − Y, X + Y) = V(X) − V(Y) = 0
Cov(U, V) = 0 means not only that U and V are uncorrelated, but also, since:
ρ = Cov(U, V)/√(V(U)V(V)) = 0, that U and V are also two independent r.v.s
Solution
1) ∀x ≥ 1, F_X(x) = P(X ≤ x) = ∫_1^x f_X(t) dt = 2∫_1^x t^{−3} dt = 2[t^{−3+1}/(−3+1)]_1^x = −[1/t²]_1^x = 1 − 1/x²
Hence F_X(x) = { 0 if x < 1; 1 − 1/x² if x ≥ 1 }
2) F_X(Me) = 1/2 ⟺ 1 − 1/Me² = 1/2 ⟺ 1/Me² = 1/2 ⟺ Me² = 2. Hence Me = √2
3) { X(Ω) = [1, +∞[; Z = 1/X } ⇒ Z(Ω) = ]0, 1]
Hence f_Z(z) = { 2z if z ∈ ]0, 1]; 0 otherwise }
4) E(1/X) = E(Z) = ∫_0^1 z f_Z(z) dz = 2∫_0^1 z² dz = 2[z^{2+1}/(2+1)]_0^1 = (2/3)[z³]_0^1 = 2/3
Hence 1/E(X) < E(1/X)
Solution
1) X, Y ↝ N(0, 1) ⇒ E(X) = E(Y) = 0 and V(X) = V(Y) = 1
X and Z are uncorrelated ⟺ Cov(X, Z) = 0 ⟺ Cov[X, (X − aY)] = 0
⟺ Cov(X, X) − aCov(X, Y) = 0 ⟺ V(X) − aCov(X, Y) = 0 ⟺ 1 − ac = 0 ⟺ a = 1/c, c ≠ 0
provided X and Y are correlated (i.e. c ≠ 0)
Hence, X and Z are uncorrelated ⟺ { c ≠ 0; a = 1/c }
2) V(Z) = V(X − aY) = V(X) + a²V(Y) − 2aCov(X, Y) = 1 + a² − 2ac = 1 + 1/c² − 2
Hence, for c ≠ 0 and a = 1/c, V(Z) = (1 − c²)/c²
Solution
X ↝ E(1) ⇒ f_X(x) = { e^{−x} for x ≥ 0; 0 otherwise }
{ X(Ω) = ℝ⁺; Y = ln X } ⇒ Y(Ω) = ℝ, for x > 0
Determine the distribution function of the r.v. Y:
∀y ∈ ℝ and ∀x ∈ ]0, +∞[, F_Y(y) = P(Y ≤ y) = P(ln X ≤ y) = P(X ≤ e^y) = F_X(e^y)
The p.d.f. of the r.v. Y follows from its d.f. by differentiation:
∀y ∈ ℝ; f_Y(y) = dF_Y(y)/dy = dF_X(e^y)/dy = (e^y)′ f_X(e^y) = e^y e^{−e^y} = e^{(y − e^y)}
Hence f_Y(y) = e^{(y − e^y)}, ∀y ∈ ℝ
Solution
X ↝ N(0, 1) ⇒ ∀x ∈ ℝ, f_X(x) = (1/√(2π)) e^{−x²/2}
{ X(Ω) = ℝ; Y = e^X } ⇒ Y(Ω) = ]0, +∞[
Determine the distribution function of the r.v. Y; ∀x ∈ ℝ and ∀y ∈ ]0, +∞[:
F_Y(y) = P(Y ≤ y) = P(e^X ≤ y) = P(X ≤ ln y) = F_X(ln y)
The p.d.f. of the r.v. Y follows from its d.f. by differentiation:
∀y ∈ ]0, +∞[; f_Y(y) = dF_Y(y)/dy = dF_X(ln y)/dy = (ln y)′ f_X(ln y) = (1/y)(1/√(2π)) e^{−(ln y)²/2}
Hence f_Y(y) = { (1/(y√(2π))) e^{−(ln y)²/2} if y > 0; 0 otherwise }
Solution
1) We have X ↝ N(0, 1) ⇒ ∀x ∈ ℝ, f_X(x) = (1/√(2π)) e^{−x²/2}; { X(Ω) = ℝ; Y = X² } ⇒ Y(Ω) = [0, +∞[
Determine the distribution function of the r.v. Y; ∀(x, y) ∈ ℝ × [0, +∞[:
The p.d.f. of the r.v. Y follows from its d.f. by differentiation: ∀y ∈ ]0, +∞[; f_Y(y) = d[2Φ(√y) − 1]/dy
∀y ∈ ]0, +∞[; f_Y(y) = 2(√y)′ f_X(√y) = (1/√y) f_X(√y) = (1/(√(2π)√y)) e^{−(√y)²/2} = (1/√(2πy)) e^{−y/2}
f_Y(y) = { (1/√(2πy)) e^{−y/2} if y ∈ ]0, +∞[; 0 otherwise }
∀u ∈ ℝ; f_U(u) = (1/√(2π)) e^{(u − e^u)/2}
Solution
We have V ↝ U_[0,1] ⇒ ∀v ∈ ℝ, f_V(v) = { 1 if v ∈ [0, 1]; 0 otherwise }
∀v ∈ [0, 1]: ln(v) ≤ 0; since λ > 0, −(1/λ)ln(V) ≥ 0
{ V(Ω) = [0, 1]; W = −(1/λ)ln(V) } ⇒ W(Ω) = [0, +∞[
Determine the distribution function of the r.v. W; ∀v ∈ [0, 1] and ∀w ∈ [0, +∞[:
F_W(w) = P(W ≤ w) = P(−(1/λ)ln(V) ≤ w) = P(ln(V) ≥ −λw) = P(V ≥ e^{−λw}) = 1 − P(V < e^{−λw})
= 1 − P(V ≤ e^{−λw}) = 1 − F_V(e^{−λw})
∀w ∈ [0, +∞[; f_W(w) = d[1 − F_V(e^{−λw})]/dw = −(e^{−λw})′ f_V(e^{−λw}) = λe^{−λw} f_V(e^{−λw})
Now w ∈ [0, +∞[ ⟹ −λw ≤ 0 ⟹ 0 ≤ e^{−λw} ≤ 1 and f_V(e^{−λw}) = 1
∀(x, y) ∈ [0, +∞[², F_Y(y) = P(Y ≤ y) = P((X/α)^β ≤ y) = P(X/α ≤ y^{1/β}) = P(X ≤ αy^{1/β}) = F_X(αy^{1/β})
{ V ↝ C(0, 1); Z = 1/V } ⇒ Z ↝ C(0, 1)
Solution
1) f_X(x) is a p.d.f. ⟺ { ∫_a^{+∞} f_X(x) dx = 1 (1); f_X(x) ≥ 0, ∀x ∈ [a, +∞[ (2) }
(1): ∫_a^{+∞} f_X(x) dx = ∫_a^{+∞} (α/a)(a/x)^{α+1} dx = αa^α ∫_a^{+∞} (1/x^{α+1}) dx
Since α > 0 ⟹ α + 1 > 1, the integral ∫_a^{+∞} (1/x^{α+1}) dx converges (Riemann integral)
∫_a^{+∞} f_X(x) dx = αa^α ∫_a^{+∞} x^{−(α+1)} dx = αa^α [x^{−(α+1)+1}/(−(α+1)+1)]_a^{+∞} = −a^α [1/x^α]_a^{+∞}
(2): a > 0 and α > 0, so f_X(x) = (α/a)(a/x)^{α+1} ⇒ f_X(x) ≥ 0, ∀x ∈ [a, +∞[; thus (2) is verified
Hence f_X(x) is a p.d.f.
2) F_X(x) = 1 − P(X > x) = 1 − ∫_x^{+∞} f_X(t) dt = −a^α [1/t^α]_x^{+∞} evaluated: = 1 + a^α [(lim_{t→+∞} 1/t^α) − 1/x^α], where the limit is 0
= 1 − (a/x)^α, if x ≥ a
Hence F_X(x) = { 0 if x < a; 1 − (a/x)^α if x ≥ a }
3) P(X > x + y | X > x) = P[X ∈ {](x + y), +∞[ ∩ ]x, +∞[}] / P(X > x)
Now, for y > 0, ](x + y), +∞[ ∩ ]x, +∞[ = ](x + y), +∞[, consequently:
P(X > x + y | X > x) = P(X > x + y)/P(X > x) = (1 − F_X(x + y))/(1 − F_X(x)) = (a/(x + y))^α/(a/x)^α = (x/(x + y))^α
Indeed, lim_{x→+∞} P(X > x + y | X > x) = lim_{x→+∞} (x/(x + y))^α = 1
having observed at the beginning of the 20th century that 20% of the population owned 80% of the wealth. Other phenomena share the same type of property: for a given service, 20% of the customers are responsible for 80% of the complaints …
∙ This is not true for the exponential law, which has no memory:
∀x, y > 0: P(X > x + y | X > x) = P(X > y) = 1 − F_X(y) = (a/y)^α
Consequently, lim_{x→+∞} P(X > x + y | X > x) = (a/y)^α
4) E(X) = ∫_a^{+∞} x f_X(x) dx = αa^α ∫_a^{+∞} (1/x^α) dx, which converges if α > 1 (Riemann integral)
E(X) = αa^α ∫_a^{+∞} x^{−α} dx = αa^α [x^{−α+1}/(−α+1)]_a^{+∞} = −(αa^α/(α−1))[1/x^{α−1}]_a^{+∞} = (αa^α/(α−1))[1/a^{α−1} − (lim_{x→+∞} 1/x^{α−1})], where the limit is 0
= αa^α/((α−1)a^{α−1}); hence, for α > 1: E(X) = αa/(α − 1)
5) E(X²) = ∫_a^{+∞} x² f_X(x) dx = αa^α ∫_a^{+∞} (1/x^{α−1}) dx, which converges if α − 1 > 1, i.e. if α > 2
E(X²) = αa^α ∫_a^{+∞} x^{1−α} dx = αa^α [x^{1−α+1}/(1−α+1)]_a^{+∞} = −(αa^α/(α−2))[1/x^{α−2}]_a^{+∞} = αa²/(α − 2)
Indeed: V(X) = E(X²) − [E(X)]² = αa²/(α−2) − α²a²/(α−1)² = αa²[1/(α−2) − α/(α−1)²]
= αa²[((α−1)² − α(α−2))/((α−2)(α−1)²)] = αa²[(α² − 2α + 1 − α² + 2α)/((α−2)(α−1)²)]
Hence, for α > 2: V(X) = αa²/((α−2)(α−1)²)
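These Pareto moments can be sketched by simulation, sampling via the inverse CDF: from F(x) = 1 − (a/x)^α one gets x = a·U^{−1/α} for U ~ U[0,1]. The choices a = 1, α = 5 (so both E(X) and V(X) exist) and the sample size are arbitrary.

```python
import random

random.seed(9)
a, alpha = 1.0, 5.0   # scale a > 0, tail index α > 2 so the variance exists
n = 400_000
# inverse-CDF sampling: F(x) = 1 - (a/x)^α  ⟹  x = a·U^{-1/α}
xs = [a * random.random() ** (-1.0 / alpha) for _ in range(n)]

m = sum(xs) / n                              # ≈ αa/(α-1) = 1.25
v = sum((x - m) ** 2 for x in xs) / n        # ≈ αa²/((α-2)(α-1)²) = 5/48
```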
Solution
1) Given (X₁, X₂, …, X_n) a sequence of independent r.v.s with the same law, one has:
Hence f_T(t) = { 2n t^{2n−1}/θ^{2n} if 0 < t < θ; 0 otherwise }
= (2n/((2n + 1)θ^{2n}))(θ^{2n+1} − 0)
Hence E(T) = (2n/(2n + 1))θ
3) We now compute the probability:
P(T > a) = 1 − P(T ≤ a) = 1 − F_T(a) = 1 − [F_X(a)]^n
Hence P(T > a) = { 1 if a ≤ 0; 1 − (a/θ)^{2n} if 0 < a < θ; 0 if a ≥ θ }
Solution
∙ We have: Y₂ = Y₁ + U₂, Y₃ = Y₂ + U₃ = Y₁ + U₂ + U₃, Y₄ = Y₃ + U₄ = Y₁ + U₂ + U₃ + U₄ …
Since the random variables U₁, U₂, …, U_n are i.i.d., V(∑_{i=2}^{11} U_i) = ∑_{i=2}^{11} V(U_i) = 10σ² = 10
Thus V(Y₁₁) = 10
∙ P(95 < Y₁₁ < 105) = P(−5 < Y₁₁ − 100 < 5) = P(|Y₁₁ − E(Y₁₁)| < 5) = 1 − P(|Y₁₁ − E(Y₁₁)| ≥ 5)
By Tchebychev's inequality: P(|Y₁₁ − E(Y₁₁)| ≥ 5) ≤ Var(Y₁₁)/5²
⟺ P(|Y₁₁ − E(Y₁₁)| ≥ 5) ≤ 10/25 ⟺ −P(|Y₁₁ − E(Y₁₁)| ≥ 5) ≥ −0.4 ⟺ 1 − P(|Y₁₁ − E(Y₁₁)| ≥ 5) ≥ 0.6
i.e. there is at least a 60% chance that the share price lies between 95 and 105 in 10 days.
Let S_t be the value of an asset at the end of year t and R_{0,n} the rate of return over a horizon of n years, i.e. R_{0,n} is the solution of the equation: S_n = S₀(1 + R_{0,n})ⁿ.
Under the hypothesis that (S_t/S_{t−1})_{t∈ℕ*} is an i.i.d. sequence of r.v.s with log-normal law LN(μ, σ²)
2)
a) Y = ln X ⟺ X = e^Y
E(X) = E(e^Y) = M_Y(1), the moment generating function of the r.v. Y evaluated at 1
On the other hand Y ↝ N(μ, σ²) ⟹ Z = (Y − μ)/σ ↝ N(0, 1)
{ M_Z(t) = e^{t²/2}; Y = σZ + μ } ⟹ M_Y(t) = M_{σZ+μ}(t) = e^{μt} M_Z(σt) ⟹ M_Y(1) = e^μ M_Z(σ) = e^μ e^{σ²/2} = e^{(μ + σ²/2)}
Hence, X ↝ LN(μ, σ²) ⟹ E(X) = e^{(μ + σ²/2)}
b)
∙ E(X²) = E[(e^Y)²] = E(e^{2Y}) = M_Y(2) = e^{2μ} M_Z(2σ) = e^{2μ} e^{(2σ)²/2} = e^{2(σ² + μ)}
∙ V(X) = E(X²) − [E(X)]² = e^{2(σ²+μ)} − [e^{(μ+σ²/2)}]² = e^{(2μ+2σ²)} − e^{(2μ+σ²)} = e^{2(σ²+μ)}(1 − e^{−σ²})
Hence, X ↝ LN(μ, σ²) ⟹ V(X) = e^{2(σ²+μ)}(1 − e^{−σ²})
3)
S_n = S₀(1 + R_{0,n})ⁿ ⟺ (1 + R_{0,n})ⁿ = S_n/S₀ ⟺ 1 + R_{0,n} = (S_n/S₀)^{1/n} ⟺ R_{0,n} = (S_n/S₀)^{1/n} − 1
Using the telescoping-product property, one obtains: ∏_{t=1}^{n} S_t/S_{t−1} = S_n/S₀
Consequently, R_{0,n} = (∏_{t=1}^{n} S_t/S_{t−1})^{1/n} − 1 = ∏_{t=1}^{n}(S_t/S_{t−1})^{1/n} − 1 = exp(ln(∏_{t=1}^{n}(S_t/S_{t−1})^{1/n})) − 1
With Q_t = ln(S_t/S_{t−1})
Since (S_t/S_{t−1}) ↝ LN(μ, σ²) ⟹ Q_t = ln(S_t/S_{t−1}) ↝ N(μ, σ²)
Let f(u) = ln(u), so Q_t = f(S_t/S_{t−1}), with f continuous on (S_t/S_{t−1})(Ω) = ]0, +∞[  (1)
(S_t/S_{t−1})_{t∈ℕ*} is an i.i.d. sequence of r.v.s with log-normal law LN(μ, σ²)  (2)
E(f(S_t/S_{t−1})) = E(Q_t) = μ exists and is finite  (3)
(1) + (2) + (3) ⇒ (Q_t)_{t∈ℕ*} is an i.i.d. sequence of N(μ, σ²) r.v.s ⇒ Q̄ ↝ N(E(Q̄), V(Q̄))
Where E(Q̄) = E((1/n)∑_{t=1}^{n} Q_t) = (1/n)∑_{t=1}^{n} E(Q_t) = (1/n)∑_{t=1}^{n} μ = μ
And V(Q̄) = V((1/n)∑_{t=1}^{n} Q_t) = (1/n²)V(∑_{t=1}^{n} Q_t) = (1/n²)∑_{t=1}^{n} V(Q_t) = (1/n²)∑_{t=1}^{n} σ² = σ²/n
This gives: Q̄ ↝ N(μ, σ²/n)
We have shown that R_{0,n} = e^{Q̄} − 1, hence { E(R_{0,n}) = E(e^{Q̄} − 1) = E(e^{Q̄}) − 1 = E(H) − 1; V(R_{0,n}) = V(e^{Q̄} − 1) = V(e^{Q̄}) = V(H) }, where H = e^{Q̄}
Given the previous results: { Y ↝ N(μ, σ²); X = e^Y } ⇒ { E(X) = e^{(μ + σ²/2)}; V(X) = e^{2(σ²+μ)}(1 − e^{−σ²}) }
It follows that: { Q̄ ↝ N(μ, (σ/√n)²); H = e^{Q̄} } ⇒ { E(H) = e^{(μ + σ²/(2n))}; V(H) = e^{2(σ²/n + μ)}(1 − e^{−σ²/n}) }
Hence { E(R_{0,n}) = e^{(μ + σ²/(2n))} − 1; V(R_{0,n}) = e^{2(σ²/n + μ)}(1 − e^{−σ²/n}) }
⇔ R_{0,n} →^{m.q.} e^μ − 1 ⇒ R_{0,n} →^{p} e^μ − 1 as n → +∞
1) Determine the law of Y_n
2) Deduce the distribution function and the moment generating function of Y_n
3) Show that the sequence of r.v.s (Y_n)_{n∈ℕ} converges in probability to a constant to be determined
4) Let T_n = (1/n)∑_{i=1}^{n}(X_i − Y_n).
Show that the sequence of r.v.s (T_n)_{n∈ℕ} converges in probability to a constant, and determine its value.
Solution
1) Given (X₁, X₂, …, X_n) a sequence of independent r.v.s with the same law, one has:
Hence f_{Y_n}(y) = { n e^{−n(y−2)} if y ≥ 2; 0 otherwise }
2)
∙ ∀y ≥ 2, F_{Y_n}(y) = 1 − [1 − F_X(y)]ⁿ = 1 − [1 − (1 − e^{−(y−2)})]ⁿ = 1 − e^{−n(y−2)}
F_{Y_n}(y) = { 0 if y < 2; 1 − e^{−n(y−2)} if y ≥ 2 }
∙ ∀y ≥ 2, M_{Y_n}(t) = E(e^{tY_n}) = ∫_2^{+∞} e^{ty} f_{Y_n}(y) dy = ∫_2^{+∞} n e^{ty} e^{−n(y−2)} dy = n∫_2^{+∞} e^{[(t−n)y + 2n]} dy
Now M_{Y_n}(t) exists if and only if ∫_2^{+∞} e^{[(t−n)y + 2n]} dy converges, i.e. t − n < 0, or equivalently t ∈ ]−∞, n[
∀t ∈ ]−∞, n[, M_{Y_n}(t) = n[e^{[(t−n)y+2n]}/(t − n)]_2^{+∞} = (n/(t − n))[(lim_{y→+∞} e^{[(t−n)y+2n]}) − e^{[2(t−n)+2n]}], where the limit is 0
= (−n/(t − n))(e^{[2t−2n+2n]}) = ne^{2t}/(n − t)
∀t ∈ ]−∞, n[, M_{Y_n}(t) = ne^{2t}/(n − t)
3)
∙ Determine E(Y_n) and V(Y_n) from M_{Y_n}(t):
dM_{Y_n}(t)/dt = n(e^{2t}/(n − t))′ = n[(e^{2t})′(n − t) − e^{2t}(n − t)′]/(n − t)² = n[2e^{2t}(n − t) + e^{2t}]/(n − t)²
4)
∙ Let Z = X − 2; since { X(Ω) = [2, +∞[; Z = X − 2 } ⇒ Z(Ω) = [0, +∞[
Determine the distribution function of the r.v. Z:
F_Z(z) = P(Z ≤ z) = P(X − 2 ≤ z) = P(X ≤ z + 2) = F_X(z + 2)
The p.d.f. of the r.v. Z follows from its d.f. by differentiation:
f_Z(z) = d[F_Z(z)]/dz = (z + 2)′ f_X(z + 2) = e^{−((z+2)−2)} = e^{−z}
f_Z(z) = { e^{−z} if z ≥ 0; 0 otherwise } ⇒ Z ↝ E(1) ⇒ E(Z) = V(Z) = 1 ⇒ { E(X − 2) = 1; V(X − 2) = 1 } ⇒ { E(X) = 3; V(X) = 1 }
∙ We have T_n = (1/n)∑_{i=1}^{n}(X_i − Y_n) = (1/n)∑_{i=1}^{n} X_i − (1/n)∑_{i=1}^{n} Y_n = X̄ − Y_n
(lim_{n→+∞} E(X̄) = 3 and lim_{n→+∞} V(X̄) = lim_{n→+∞} 1/n = 0) ⇔ X̄ →^{m.q.} 3 ⇒ X̄ →^{p} 3
▪ { Y_n →^{p} 2; X̄ →^{p} 3 } ⇒ T_n = X̄ − Y_n →^{p} 1
Corrigé
1) 𝑿𝒏 ↝ 𝓑 (𝟏, 𝒑 ) ⇒ 𝑿𝒏 (𝛀) = 𝑿𝒏 (𝛀) = {𝟎, 𝟏}
𝑶𝒏 𝒂 ∶ 𝒀𝒏 = 𝟐𝑿𝒏 + 𝑿𝒏+𝟏 − 𝑿𝒏 𝑿𝒏+𝟏 , 𝒅𝒐𝒏𝒄 ∶
𝑿𝒏+𝟏 𝑿𝒏+𝟏 = 𝟎 𝑿𝒏+𝟏 = 𝟏
𝑿𝒏
𝑿𝒏 = 𝟎 𝒀𝒏 = 𝟐𝑿𝒏 + 𝑿𝒏+𝟏 − 𝑿𝒏 𝑿𝒏+𝟏 = 𝟎 𝒀𝒏 = 𝟐𝑿𝒏 + 𝑿𝒏+𝟏 − 𝑿𝒏 𝑿𝒏+𝟏 = 𝟏 ⇒ 𝒀𝒏 (𝛀) = {𝟎, 𝟏, 𝟐}
𝑿𝒏 = 𝟏 𝒀𝒏 = 𝟐𝑿𝒏 + 𝑿𝒏+𝟏 − 𝑿𝒏 𝑿𝒏+𝟏 = 𝟐 𝒀𝒏 = 𝟐𝑿𝒏 + 𝑿𝒏+𝟏 − 𝑿𝒏 𝑿𝒏+𝟏 = 𝟐
∙ 𝑷(𝒀𝒏 = 𝟎) = 𝑷(𝑿𝒏 = 𝟎, 𝑿𝒏+𝟏 = 𝟎) = 𝑷(𝑿𝒏 = 𝟎)𝑷(𝑿𝒏+𝟏 = 𝟎) = (𝟏 − 𝒑)𝟐
∙ 𝑷(𝒀𝒏 = 𝟏) = 𝑷(𝑿𝒏 = 𝟎, 𝑿𝒏+𝟏 = 𝟏) = 𝑷(𝑿𝒏 = 𝟎)𝑷(𝑿𝒏+𝟏 = 𝟏) = 𝒑(𝟏 − 𝒑)
∙ 𝑷(𝒀𝒏 = 𝟐) = 𝑷(𝑿𝒏 = 𝟏, 𝑿𝒏+𝟏 = 𝟎) + 𝑷(𝑿𝒏 = 𝟏, 𝑿𝒏+𝟏 = 𝟏)
2)
a)
𝒏 𝒏 𝒏 𝒏
𝟏 𝟏 𝟏 𝟏
𝑬(𝒁𝒏 ) = 𝑬 ( ∑ 𝒀𝒌 ) = 𝑬 (∑ 𝒀𝒌 ) = ∑ 𝑬(𝒀𝒌 ) = ∑[𝒑(𝟑 − 𝒑)] . 𝑫′ 𝒐ù 𝑬(𝒁𝒏 ) = 𝒑(𝟑 − 𝒑)
𝒏 𝒏 𝒏 𝒏
𝒌=𝟏 𝒌=𝟏 𝒌=𝟏 𝒌=𝟏
b)
∙ If i + 1 < j, then i \ne j, i \ne j+1, i+1 \ne j and i+1 \ne j+1, so each product below involves only independent factors:

Y_i Y_j = 4X_iX_j + 2X_iX_{j+1} - 2X_iX_jX_{j+1} + 2X_{i+1}X_j + X_{i+1}X_{j+1} - X_{i+1}X_jX_{j+1} - 2X_iX_{i+1}X_j - X_iX_{i+1}X_{j+1} + X_iX_{i+1}X_jX_{j+1}

Taking expectations term by term and factoring by independence gives E(Y_iY_j) = E(Y_i)E(Y_j).
Hence, if i + 1 < j: Cov(Y_i, Y_j) = 0

∙ If i + 1 = j:
Y_iY_j = Y_iY_{i+1} = 4X_iX_{i+1} + 2X_iX_{i+2} - 2X_iX_{i+1}X_{i+2} + 2X_{i+1}^2 + X_{i+1}X_{i+2} - X_{i+1}^2X_{i+2} - 2X_iX_{i+1}^2 - X_iX_{i+1}X_{i+2} + X_iX_{i+1}^2X_{i+2}

▪ X_k \sim \mathcal{B}(1, p) \Rightarrow X_k(\Omega) = X_k^2(\Omega) = \{0, 1\} and \begin{cases} P(X_k^2 = 0) = P(X_k = 0) = 1 - p \\ P(X_k^2 = 1) = P(X_k = 1) = p \end{cases} \Rightarrow X_k^2 \sim \mathcal{B}(1, p)

Thus, using the independence of the X_k:
E(Y_iY_j) = 2p + 7p^2 - 3p^3 - E(X_{i+1}^2)E(X_{i+2}) - 2E(X_i)E(X_{i+1}^2) + E(X_i)E(X_{i+1}^2)E(X_{i+2}) = 2p + 7p^2 - 3p^3 - p^2 - 2p^2 + p^3 = 2p + 4p^2 - 2p^3

Therefore Cov(Y_i, Y_{i+1}) = E(Y_iY_{i+1}) - E(Y_i)E(Y_{i+1}) = 2p + 4p^2 - 2p^3 - p^2(3-p)^2 = p(1-p)^2(2-p)

Hence Cov(Y_i, Y_j) = \begin{cases} 0 & \text{if } i+1 < j \\ p(1-p)^2(2-p) & \text{if } i+1 = j \end{cases}
c)
V(Z_n) = V\left( \frac{1}{n} \sum_{i=1}^{n} Y_i \right) = \frac{1}{n^2} \left[ \sum_{i=1}^{n} V(Y_i) + 2 \sum_{1 \le i < j \le n} Cov(Y_i, Y_j) \right]

= \frac{1}{n^2} \left[ \sum_{k=1}^{n} p(1-p)(p^2 - 5p + 5) + 2 \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} Cov(Y_i, Y_j) \right]

= \frac{1}{n^2} \left[ n\,p(1-p)(p^2 - 5p + 5) + 2 \sum_{i=1}^{n-1} \left( Cov(Y_i, Y_{i+1}) + \underbrace{\sum_{j=i+2}^{n} Cov(Y_i, Y_j)}_{0} \right) \right]

= \frac{1}{n^2} \left[ n\,p(1-p)(p^2 - 5p + 5) + 2 \sum_{i=1}^{n-1} p(1-p)^2(2-p) \right]

= \frac{n\,p(1-p)(p^2 - 5p + 5) + 2(n-1)\,p(1-p)^2(2-p)}{n^2} = \frac{p(1-p)\left[ n(p^2 - 5p + 5) + 2(n-1)(1-p)(2-p) \right]}{n^2}

= \frac{p(1-p)\left( 5n - 5np + np^2 + 4(n-1) - 6(n-1)p + 2(n-1)p^2 \right)}{n^2} = \frac{p(1-p)\left( 9n - 4 - 11np + 6p + 3np^2 - 2p^2 \right)}{n^2}

Hence V(Z_n) = \frac{\left[ n(9 - 11p + 3p^2) - 4 + 6p - 2p^2 \right] p(1-p)}{n^2}
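Both E(Z_n) = p(3-p) and the V(Z_n) formula above can be sketched by Monte Carlo; each replicate needs n+1 Bernoulli(p) draws, and p, n, reps are illustrative choices:

```python
import numpy as np

# Z_n = (1/n) * sum_{k=1}^n Y_k with Y_k = 2 X_k + X_{k+1} - X_k X_{k+1}.
rng = np.random.default_rng(3)
p, n, reps = 0.4, 10, 200_000
X = rng.binomial(1, p, size=(reps, n + 1))
Y = 2 * X[:, :n] + X[:, 1:] - X[:, :n] * X[:, 1:]   # Y_1 .. Y_n per row
Zn = Y.mean(axis=1)

mean_mc, var_mc = Zn.mean(), Zn.var()
mean_th = p * (3 - p)
var_th = (n * (9 - 11*p + 3*p**2) - 4 + 6*p - 2*p**2) * p * (1 - p) / n**2
print(mean_mc, mean_th, var_mc, var_th)
```

The empirical mean and variance should match the two closed forms up to sampling noise.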
Exercise 24: (Hazard rate, Weibull distribution, extreme-value distribution, exponential distribution)
STATEMENT
Let X be a positive r.v. with density f_X and c.d.f. F_X. The hazard rate (failure rate) of X is the quantity:
t(x) = \frac{f_X(x)}{1 - F_X(x)}, \quad x \in \mathbb{R}_+.
1) Show that F_X(x) = 1 - e^{-T(x)}, where we set T(x) = \int_0^x t(u)\,du, x \in \mathbb{R}_+.
2) Compute t(x) in the following cases:
a) X \sim \mathcal{E}(\theta), \theta > 0
b) X follows a Weibull distribution \mathcal{W}(\alpha, \theta) with p.d.f.:
f_X(x) = \begin{cases} \alpha\theta x^{\alpha-1} e^{-\theta x^\alpha} & \text{if } x > 0 \\ 0 & \text{otherwise} \end{cases}
\alpha and \theta being two strictly positive parameters.
c) X follows the so-called "extreme-value" distribution with p.d.f.:
f_X(x) = \begin{cases} \theta e^{x - \theta(e^x - 1)} & \text{if } x > 0 \\ 0 & \text{otherwise} \end{cases}, \quad \theta > 0.
(N.B.: one may use the change of variables u = \theta(e^t - 1) to compute F_X(x).)
Solution
1) T(x) = \int_0^x t(u)\,du = \int_0^x \frac{f_X(u)}{1 - F_X(u)}\,du = -\int_0^x \frac{(1 - F_X(u))'}{1 - F_X(u)}\,du = -\left[ \ln|1 - F_X(u)| \right]_0^x

= \ln(1 - F_X(0)) - \ln(1 - F_X(x)). Now F_X(x) = \begin{cases} 0 & \text{if } x < 0 \\ \int_0^x f_X(u)\,du & \text{if } x \ge 0 \end{cases} \Rightarrow F_X(0) = 0

Hence T(x) = -\ln(1 - F_X(x)), that is 1 - F_X(x) = e^{-T(x)}, i.e. F_X(x) = 1 - e^{-T(x)}.
2)
a) X \sim \mathcal{E}(\theta) \Leftrightarrow f_X(x) = \begin{cases} \theta e^{-\theta x} & \text{if } x \ge 0 \\ 0 & \text{otherwise} \end{cases} \Leftrightarrow F_X(x) = \begin{cases} 0 & \text{if } x < 0 \\ \int_0^x f_X(u)\,du & \text{if } x \ge 0 \end{cases}

\forall x \in \mathbb{R}_+, \ F_X(x) = -\int_0^x (-\theta u)'\,e^{-\theta u}\,du = -\left[ e^{-\theta u} \right]_0^x = e^0 - e^{-\theta x} = 1 - e^{-\theta x}

Consequently, \forall x \in \mathbb{R}_+, \ t(x) = \frac{f_X(x)}{1 - F_X(x)} = \frac{\theta e^{-\theta x}}{e^{-\theta x}} = \theta. Hence if X \sim \mathcal{E}(\theta), then t(x) = \theta: the exponential distribution has a constant hazard rate.
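A numerical sketch of both results so far, for the exponential case: the hazard t(x) = f_X(x)/(1 - F_X(x)) is constant, and integrating it back via T(x) = ∫₀ˣ t(u)du recovers F_X(x) = 1 - e^{-T(x)}; theta and the grid are illustrative choices:

```python
import numpy as np

# X ~ Exp(theta): hazard is constant, and 1 - exp(-T(x)) rebuilds the c.d.f.
theta = 2.0
x = np.linspace(0.0, 5.0, 2001)
f = theta * np.exp(-theta * x)          # density
F = 1.0 - np.exp(-theta * x)            # c.d.f.
t = f / (1.0 - F)                       # hazard rate, equal to theta everywhere

# cumulative hazard by trapezoidal integration, then back to the c.d.f.
T = np.concatenate(([0.0], np.cumsum((t[1:] + t[:-1]) / 2 * np.diff(x))))
F_rebuilt = 1.0 - np.exp(-T)
print(np.max(np.abs(F_rebuilt - F)))    # near 0
```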
c) f_X(x) = \begin{cases} \theta e^{x - \theta(e^x - 1)} & \text{if } x > 0 \\ 0 & \text{otherwise} \end{cases} \Leftrightarrow F_X(x) = \begin{cases} 0 & \text{if } x \le 0 \\ \int_0^x f_X(u)\,du & \text{if } x > 0 \end{cases}

\forall x \in \mathbb{R}_+^*, \ F_X(x) = \int_0^x \theta e^{u - \theta(e^u - 1)}\,du

Set z = \theta(e^u - 1) \Rightarrow z + \theta = \theta e^u \Rightarrow e^u = \frac{z + \theta}{\theta} \Rightarrow u = \ln\left( \frac{z + \theta}{\theta} \right)

and dz = [\theta(e^u - 1)]'\,du = \theta e^u\,du \Rightarrow du = \frac{dz}{\theta e^u} = \frac{dz}{z + \theta}; \quad \begin{cases} u = 0 \Rightarrow z = 0 \\ u = x \Rightarrow z = \theta(e^x - 1) \end{cases}

F_X(x) = \int_0^x \theta e^{u - \theta(e^u - 1)}\,du = \int_0^{\theta(e^x - 1)} \theta \exp\left[ \ln\left( \frac{z + \theta}{\theta} \right) - z \right] \frac{dz}{z + \theta} = \int_0^{\theta(e^x - 1)} \frac{\theta e^{\ln\left(\frac{z+\theta}{\theta}\right)}\,e^{-z}}{z + \theta}\,dz

= \int_0^{\theta(e^x - 1)} \theta\left( \frac{z + \theta}{\theta} \right) \frac{e^{-z}}{z + \theta}\,dz = \int_0^{\theta(e^x - 1)} e^{-z}\,dz = -\left[ e^{-z} \right]_0^{\theta(e^x - 1)}. Thus F_X(x) = 1 - e^{-\theta(e^x - 1)}

Consequently, \forall x \in \mathbb{R}_+, \ t(x) = \frac{f_X(x)}{1 - F_X(x)} = \frac{\theta e^{x - \theta(e^x - 1)}}{e^{-\theta(e^x - 1)}} = \theta e^x

Hence if f_X(x) = \begin{cases} \theta e^{x - \theta(e^x - 1)} & \text{if } x > 0 \\ 0 & \text{otherwise} \end{cases}, then t(x) = \theta e^x
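The same kind of numerical check applies to the extreme-value case: the ratio f_X/(1 - F_X) on a grid should coincide with θeˣ; theta and the grid are illustrative:

```python
import numpy as np

# Extreme-value law: f(x) = theta*exp(x - theta*(e^x - 1)),
# c.d.f. F(x) = 1 - exp(-theta*(e^x - 1)), hazard t(x) = theta*e^x.
theta = 0.5
x = np.linspace(1e-6, 3.0, 2001)
f = theta * np.exp(x - theta * (np.exp(x) - 1.0))
F = 1.0 - np.exp(-theta * (np.exp(x) - 1.0))
hazard = f / (1.0 - F)
print(np.max(np.abs(hazard - theta * np.exp(x))))  # near 0
```

Note the hazard grows exponentially here, unlike the constant hazard of the exponential case.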
Solution
1) G_\theta(y) = P(Y_\theta \le y) = P(X \le y \mid X \in ]-\infty, \theta]) = P(X \in ]-\infty, y] \mid X \in ]-\infty, \theta]) = \frac{P[X \in (]-\infty, y] \cap ]-\infty, \theta])]}{P(X \le \theta)}

∙ If y \ge \theta: ]-\infty, y] \cap ]-\infty, \theta] = ]-\infty, \theta] and G_\theta(y) = \frac{P(X \le \theta)}{P(X \le \theta)} = 1

∙ If y < \theta: ]-\infty, y] \cap ]-\infty, \theta] = ]-\infty, y] and G_\theta(y) = \frac{P(X \le y)}{P(X \le \theta)} = \frac{F_X(y)}{F_X(\theta)}

Hence G_\theta(y) = \begin{cases} 1 & \text{if } y \ge \theta \\ \frac{F_X(y)}{F_X(\theta)} & \text{if } y < \theta \end{cases}. Since g_\theta(y) = \frac{dG_\theta(y)}{dy}, we get g_\theta(y) = \begin{cases} \frac{f_X(y)}{F_X(\theta)} & \text{if } y < \theta \\ 0 & \text{otherwise} \end{cases}

▪ 0 \le F_X(\theta) \le 1 and F_X(\theta) is strictly increasing in \theta, so ((F_X(\theta))^n)' > 0.
Thus \theta \mapsto g_{n,\theta} is decreasing; we also have \lim_{\theta \to +\infty} (F_X(\theta))^n = 1.
Hence g_{n,\theta}(y_1, y_2, \ldots, y_n) attains its maximum at the point s_n(y_1, y_2, \ldots, y_n) = \sup_{1 \le i \le n} y_i

F_{S_n}(s) = \begin{cases} 1 & \text{if } s \ge \theta \\ \left( \frac{F_X(s)}{F_X(\theta)} \right)^n & \text{if } s < \theta \end{cases}

Consequently, for s < \theta: f_{S_n}(s) = \frac{dF_{S_n}(s)}{ds} = \frac{n\,f_X(s)}{(F_X(\theta))^n} (F_X(s))^{n-1}
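A simulation sketch of the law of S_n. Here the parent distribution X ~ Exp(1) is an assumption made for illustration (the derivation above holds for any F_X); truncated draws Y = X | X ≤ θ are produced by inverting G_θ, and the c.d.f. of S_n is compared with (F_X(s)/F_X(θ))ⁿ at a test point s < θ:

```python
import numpy as np

# Truncated-sample maximum: S_n = max of n draws of Y = X | X <= theta.
rng = np.random.default_rng(4)
theta, n, reps = 2.0, 5, 200_000
F_theta = 1.0 - np.exp(-theta)                  # F_X(theta) for X ~ Exp(1)
U = rng.random(size=(reps, n))
Y = -np.log1p(-U * F_theta)                     # inverse of F_X on [0, F_theta]
Sn = Y.max(axis=1)

s = 1.5                                         # test point, s < theta
cdf_mc = (Sn <= s).mean()
cdf_th = ((1.0 - np.exp(-s)) / F_theta) ** n    # (F_X(s)/F_X(theta))^n
print(cdf_mc, cdf_th)
```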
6)
a) \begin{cases} U_n = n(\theta - S_n) \\ S_n(\Omega) = ]-\infty, \theta[ \end{cases} \Rightarrow U_n(\Omega) = \mathbb{R}_+^*,

P(U_n < u) = P(n(\theta - S_n) < u) = P\left( \theta - S_n < \frac{u}{n} \right) = P\left( \theta - \frac{u}{n} < S_n \right) = 1 - P\left( S_n \le \theta - \frac{u}{n} \right)

b) P(U_n < u) = F_{U_n}(u) = \begin{cases} 0 & \text{if } u \le 0 \\ 1 - \left( \frac{F_X(\theta - \frac{u}{n})}{F_X(\theta)} \right)^n & \text{if } u > 0 \end{cases}

(indeed, for u \le 0 we have \theta - \frac{u}{n} \ge \theta, so P(S_n \le \theta - \frac{u}{n}) = 1.)

\lim_{n \to +\infty} F_{U_n}(u) = \lim_{n \to +\infty} \left[ 1 - \left( \frac{F_X(\theta - \frac{u}{n})}{F_X(\theta)} \right)^n \right] = \lim_{n \to +\infty} \left[ 1 - e^{n \ln\left( \frac{F_X(\theta - u/n)}{F_X(\theta)} \right)} \right] = \lim_{n \to +\infty} \left[ 1 - e^{n\left[ \ln(F_X(\theta - u/n)) - \ln(F_X(\theta)) \right]} \right]

Set h = \ln(F_X). Then:

\lim_{n \to +\infty} F_{U_n}(u) = \lim_{n \to +\infty} \left[ 1 - e^{n\left[ h(\theta - \frac{u}{n}) - h(\theta) \right]} \right] = \lim_{n \to +\infty} \left[ 1 - e^{u \cdot \frac{h(\theta - u/n) - h(\theta)}{u/n}} \right]

= \lim_{t \to 0} \left[ 1 - e^{u \cdot \frac{h(\theta - t) - h(\theta)}{t}} \right], \quad \text{with } t = \frac{u}{n} \ (\text{if } n \to +\infty \text{ then } t \to 0)

= \lim_{v \to \theta} \left[ 1 - e^{-u \cdot \frac{h(v) - h(\theta)}{v - \theta}} \right], \quad \text{with } v = \theta - t \ (\text{if } t \to 0 \text{ then } v \to \theta)

\lim_{n \to +\infty} F_{U_n}(u) = 1 - e^{-h'(\theta)\,u}

Let U \sim \mathcal{E}(h'(\theta)) \Leftrightarrow f_U(u) = \begin{cases} h'(\theta) e^{-h'(\theta) u} & \text{if } u \in [0, +\infty[ \\ 0 & \text{otherwise} \end{cases} \Leftrightarrow F_U(u) = \begin{cases} 0 & \text{if } u < 0 \\ 1 - e^{-h'(\theta) u} & \text{if } u \ge 0 \end{cases}

Hence U_n converges in law to U: U_n \xrightarrow{\mathcal{L}} U, where U \sim \mathcal{E}(h'(\theta)) and h'(\theta) = \frac{f_X(\theta)}{F_X(\theta)}.
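The limit law can be sketched numerically. As above, X ~ Exp(1) is an assumed parent distribution, for which h'(θ) = e^{-θ}/(1 - e^{-θ}); with n large, the mean of U_n = n(θ - S_n) should approach 1/h'(θ), the mean of the limiting exponential:

```python
import numpy as np

# Convergence sketch: U_n = n*(theta - S_n) -> Exp(h'(theta)), h = ln(F_X).
rng = np.random.default_rng(5)
theta, n, reps = 2.0, 1000, 20_000
F_theta = 1.0 - np.exp(-theta)
U = rng.random(size=(reps, n))
Y = -np.log1p(-U * F_theta)          # truncated-exponential draws, Y < theta
Un = n * (theta - Y.max(axis=1))     # rescaled gap below the truncation point

rate = np.exp(-theta) / F_theta      # h'(theta) for this parent law
print(Un.mean(), 1.0 / rate)         # Exp(rate) has mean 1/rate
```

For finite n the agreement is approximate (there is an O(1/n) bias on top of Monte Carlo noise).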
1) Admitting that, for all i, (X_i - \bar{X}_n) follows a normal distribution, determine the parameters of that distribution.
2) Show that Cov(\bar{X}_n, X_i - \bar{X}_n) = 0 and deduce that the random variables \bar{X}_n and (X_i - \bar{X}_n) are independent.
3) Finally, deduce that \bar{X}_n and S_n^2 are independent.
Solution
1)
∙ E(X_i - \bar{X}_n) = E(X_i) - E(\bar{X}_n) = \mu - \mu = 0

∙ X_i - \bar{X}_n = X_i - \frac{1}{n} \sum_{k=1}^{n} X_k = \left( X_i - \frac{1}{n} X_i \right) - \frac{1}{n} \sum_{k \ne i} X_k = \left( \frac{n-1}{n} \right) X_i - \frac{1}{n} \sum_{k \ne i} X_k

∙ V(X_i - \bar{X}_n) = V\left( \left( \frac{n-1}{n} \right) X_i - \frac{1}{n} \sum_{k \ne i} X_k \right) = \left( \frac{n-1}{n} \right)^2 V(X_i) + \frac{1}{n^2} V\left( \sum_{k \ne i} X_k \right) - \frac{2(n-1)}{n^2} Cov\left( X_i, \sum_{k \ne i} X_k \right)

By independence, Cov(X_i, \sum_{k \ne i} X_k) = 0, so V(X_i - \bar{X}_n) = \left( \frac{n-1}{n} \right)^2 \sigma^2 + \frac{n-1}{n^2} \sigma^2 = \left( \frac{n-1}{n} \right) \sigma^2

2) Cov(\bar{X}_n, X_i - \bar{X}_n) = Cov(\bar{X}_n, X_i) - V(\bar{X}_n) = \frac{\sigma^2}{n} - \frac{\sigma^2}{n} = 0

Hence Cov(\bar{X}_n, X_i - \bar{X}_n) = 0: the variables \bar{X}_n and (X_i - \bar{X}_n) are uncorrelated.

▪ We have:
\begin{cases} \bar{X}_n \sim \mathcal{N}\left( \mu, \frac{\sigma^2}{n} \right) \\ (X_i - \bar{X}_n) \sim \mathcal{N}\left( 0, \left( \frac{n-1}{n} \right) \sigma^2 \right) \\ Cov(\bar{X}_n, X_i - \bar{X}_n) = 0 \end{cases} \Leftrightarrow the variables \bar{X}_n and (X_i - \bar{X}_n), being jointly Gaussian with zero covariance, are independent.
3)
▪ Let f(u) = u^2, so (X_i - \bar{X}_n)^2 = f(X_i - \bar{X}_n), with f continuous on \mathbb{R} (1)
▪ Let g(u) = u, so \bar{X}_n = g(\bar{X}_n), with g continuous on \mathbb{R} (2)
▪ (X_i - \bar{X}_n) \sim \mathcal{N}\left( 0, \left( \frac{n-1}{n} \right) \sigma^2 \right) \Rightarrow \sqrt{\frac{n}{n-1}} \left( \frac{X_i - \bar{X}_n}{\sigma} \right) \sim \mathcal{N}(0, 1) \Rightarrow \frac{n(X_i - \bar{X}_n)^2}{(n-1)\sigma^2} \sim \chi^2(1)

Thus E\left[ \frac{n(X_i - \bar{X}_n)^2}{(n-1)\sigma^2} \right] = 1 \Rightarrow \frac{n}{(n-1)\sigma^2}\, E\left( (X_i - \bar{X}_n)^2 \right) = 1

This gives: E\left( f(X_i - \bar{X}_n) \right) = E\left( (X_i - \bar{X}_n)^2 \right) = \frac{(n-1)\sigma^2}{n}, which exists and is finite (3)

▪ \bar{X}_n \sim \mathcal{N}\left( \mu, \frac{\sigma^2}{n} \right) \Rightarrow E(g(\bar{X}_n)) = E(\bar{X}_n) = \mu, which exists and is finite (4)

▪ The variables \bar{X}_n and (X_i - \bar{X}_n) are independent (5)

(1) + (2) + (3) + (4) + (5) \Rightarrow the variables \bar{X}_n and (X_i - \bar{X}_n)^2 are independent.

Since the r.v.'s \bar{X}_n and (X_i - \bar{X}_n)^2, i = 1, \ldots, n, are independent, and S_n^2 = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X}_n)^2 is a function of the (X_i - \bar{X}_n)^2 alone, it follows that \bar{X}_n and S_n^2 are independent.
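The independence of the sample mean and sample variance for Gaussian samples can be sketched empirically: over many replicates their correlation should be near zero (zero correlation does not prove independence, but a clearly nonzero one would refute it, as happens for non-Gaussian parents); mu, sigma, n, reps are illustrative choices:

```python
import numpy as np

# For i.i.d. N(mu, sigma^2) samples, Xbar_n and S_n^2 are independent,
# so their empirical correlation across replicates should be near 0.
rng = np.random.default_rng(6)
mu, sigma, n, reps = 1.0, 2.0, 10, 200_000
X = rng.normal(mu, sigma, size=(reps, n))
xbar = X.mean(axis=1)
s2 = X.var(axis=1, ddof=1)              # unbiased sample variance S_n^2
corr = np.corrcoef(xbar, s2)[0, 1]
print(corr)   # near 0
```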