class="is-active"><a href="Regions_de_confiance.html">La méthode des régions de confiance</a></li></ul></nav><div class="docs-right"><a class="docs-edit-link" href="https://github.com//blob/master/docs/src/Regions_de_confiance.md" title="Edit on GitHub"><span class="docs-icon fab"></span><span class="docs-label is-hidden-touch">Edit on GitHub</span></a><a class="docs-settings-button fas fa-cog" id="documenter-settings-button" href="#" title="Settings"></a><a class="docs-sidebar-button fa fa-bars is-hidden-desktop" id="documenter-sidebar-button" href="#"></a></div></header><article class="content" id="documenter-page"><h1 id="Régions-de-confiance-partie-1-1"><a class="docs-heading-anchor" href="#Régions-de-confiance-partie-1-1">Régions de confiance partie 1</a><a class="docs-heading-anchor-permalink" href="#Régions-de-confiance-partie-1-1" title="Permalink"></a></h1><p>Lintroduction dune <em>région de confiance</em> dans la méthode de Newton permet de garantir la convergence globale de celle-ci, i.e. la convergence vers un optimum local quel que soit le point de départ. Cela suppose certaines conditions sur la résolution locale des sous- problèmes issus de la méthode, qui sont aisément imposables.</p><h2 id="Principe-1"><a class="docs-heading-anchor" href="#Principe-1">Principe</a><a class="docs-heading-anchor-permalink" href="#Principe-1" title="Permalink"></a></h2><p>Lidée de la méthode des régions de confiance est dapprocher <span>$f$</span> par une fonction modèle plus simple <span>$m_{k}$</span> dans une région <span>$R_{k}=\left\{x_{k}+s ;\|s\| \leq \Delta_{k}\right\}$</span> pour un <span>$\Delta_{k}$</span> fixé. Cette région dite “de confiance” doit être suffisament petite pour que</p><p><span>$\hspace*{2.5cm}$</span> <span>$m_{k}\left(x_{k}+s\right) \sim f\left(x_{k}+s\right).$</span></p><p>Le principe est que, au lieu de résoudre : <span>$f\left(x_{k+1}\right)=\min _{\|x\| \leq \Delta_{k}} f\left(x_{k}+s\right)$</span> on résout :</p><p><span>$\hspace*{2.5cm}$</span> <span>$m_{k}\left(x_{k+1}\right)=\min _{\|x\| \leq \Delta_{k}} m_{k}\left(x_{k}+s\right)$</span> <span>$\hspace*{2.5cm}.$</span>(2.1)</p><p>Si la différence entre <span>$f(x_{k+1})$</span> et <span>$m_{k} (x_{k+1} )$</span> est trop grande, on diminue le <span>$∆_{k}$</span> (et donc la région de confiance) et on résout le modèle (2.1) à nouveau. Un avantage de cette méthode est que toutes les directions sont prises en compte. 
## Algorithm

#### Algorithm 2

*Trust region method (general algorithm)*

##### Data:

$\Delta_{\max } > 0, \Delta_{0} \in(0, \Delta_{\max}), 0 < \gamma_{1} < 1 < \gamma_{2}, 0 < \eta_{1} < \eta_{2} < 1.$

##### Outputs:

an approximation of the solution of the problem: $\min_{x \in \mathbb{R}^{n}} f(x).$

##### 1. While the convergence test is not satisfied:

a. Compute an approximate solution $s_{k}$ of the subproblem (2.1).

b. Evaluate $f\left(x_{k}+s_{k}\right)$ and $\rho_{k}=\dfrac{f\left(x_{k}\right)-f\left(x_{k}+s_{k}\right)}{m_{k}\left(x_{k}\right)-m_{k}\left(x_{k}+s_{k}\right)}.$

c. Update the current iterate:

$$x_{k+1}=\left\{\begin{array}{ll} x_{k}+s_{k} & \text{if } \rho_{k} \geq \eta_{1} \\ x_{k} & \text{otherwise.} \end{array}\right.$$

d. Update the trust region:

$$\Delta_{k+1}=\left\{\begin{array}{cl}\min \{\gamma_{2} \Delta_{k}, \Delta_{\max }\} & \text{if } \rho_{k} \geq \eta_{2} \\ \Delta_{k} & \text{if } \rho_{k} \in [\eta_{1}, \eta_{2}) \\ \gamma_{1} \Delta_{k} & \text{otherwise.} \end{array}\right.$$

##### 2. Return $x_{k}$.
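As a reading aid, here is a minimal Julia sketch of Algorithm 2. It is an illustrative assumption, not the Optinum.jl interface: the helper `solve_subproblem(g, H, Δ)` stands for any approximate solver of (2.1) (the Cauchy step or the truncated conjugate gradient introduced below), and the stopping test on $\|\nabla f(x_k)\|$ is one possible choice of convergence test.

```julia
using LinearAlgebra

# Minimal sketch of Algorithm 2 (illustrative, not the Optinum.jl interface).
# `solve_subproblem(g, H, Δ)` must return an approximate solution s of (2.1).
function trust_region(f, ∇f, ∇²f, x0, solve_subproblem;
                      Δmax = 10.0, Δ0 = 2.0, γ1 = 0.5, γ2 = 2.0,
                      η1 = 0.25, η2 = 0.75, tol = 1e-8, max_iter = 1000)
    x, Δ = copy(x0), Δ0
    for _ in 1:max_iter
        g, H = ∇f(x), ∇²f(x)
        norm(g) ≤ tol && break                                # convergence test
        s = solve_subproblem(g, H, Δ)                         # step a.
        model_decrease = -(dot(g, s) + 0.5 * dot(s, H * s))   # m_k(x_k) - m_k(x_k + s)
        ρ = (f(x) - f(x + s)) / model_decrease                # step b.
        if ρ ≥ η1                                             # step c.
            x = x + s
        end
        if ρ ≥ η2                                             # step d.
            Δ = min(γ2 * Δ, Δmax)
        elseif ρ < η1
            Δ = γ1 * Δ
        end
    end
    return x
end
```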
This algorithm is a generic framework. We now look at two possible refinements of step a.

## The Cauchy step

We consider here the quadratic model $q_{k}(s)$. The corresponding trust region subproblem can turn out to be difficult to solve (sometimes as difficult as the original problem).

It is therefore worthwhile to settle for an approximate solution of this problem.

The Cauchy step belongs to this class of approximate solutions. The idea is to restrict the search to the subspace spanned by the vector $g_{k}$; the subproblem then reads

$$\left\{\begin{array}{cl} \min & q_{k}(s) \\ \text{s.t.} & s=-t g_{k} \\ & t>0 \\ & \|s\| \leq \Delta_{k} \end{array}\right. \qquad (2.3)$$
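Problem (2.3) is one-dimensional in $t$ and its solution is known in closed form: $q_{k}(-t g_{k}) = -t\, g_{k}^{\top} g_{k} + \tfrac{t^{2}}{2}\, g_{k}^{\top} H_{k} g_{k}$, so if the curvature $g_{k}^{\top} H_{k} g_{k}$ is nonpositive the model decreases all the way to the boundary of the trust region, and otherwise the unconstrained minimizer is capped at the boundary. The Julia sketch below is illustrative (the name `cauchy_step` is an assumption, not the Optinum.jl API):

```julia
using LinearAlgebra

# Illustrative resolution of (2.3): minimize q(-t*g) over t ∈ (0, Δ/‖g‖].
function cauchy_step(g, H, Δ)
    curvature = dot(g, H * g)                  # g' H g
    tmax = Δ / norm(g)                         # largest feasible t
    if curvature ≤ 0
        t = tmax                               # q decreases with t: stop at the boundary
    else
        t = min(dot(g, g) / curvature, tmax)   # unconstrained minimizer, capped at the boundary
    end
    return -t * g                              # s = -t g
end
```

Passed as the `solve_subproblem` argument of the sketch above, this gives the Cauchy-point variant of the method.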
## The Truncated Conjugate Gradient algorithm

We now turn to the approximate resolution of problem (2.1) at iteration $k$ of Algorithm 2 (trust regions). For this we consider the Truncated Conjugate Gradient algorithm (seen in the lectures), recalled below:

###### Algorithm 3

*Truncated conjugate gradient algorithm*

###### Data:

$\Delta_{k} > 0, x_{k}, g=\nabla f\left(x_{k}\right), H=\nabla^{2} f\left(x_{k}\right)$

###### Outputs:

the step $s$ approximating the solution of the problem: $\min_{\|s\| \leq \Delta_{k}} q(s)$, where

$$q(s)=g^{\top} s+\frac{1}{2} s^{\top} H s$$

###### Initializations:

$s_{0}=0, g_{0}=g, p_{0}=-g$

**1. For** $j = 0, 1, 2, \ldots$, **do**:

a. $\kappa_{j}=p_{j}^{\top} H p_{j}$

b. If $\kappa_{j} \leq 0$, determine $\sigma_{j}$, the root of the equation $\left\|s_{j}+\sigma p_{j}\right\|_{2}=\Delta_{k}$ for which the value of $q\left(s_{j}+\sigma p_{j}\right)$ is smallest. Set $s=s_{j}+\sigma_{j} p_{j}$ and exit the loop. End If

c. $\alpha_{j}=g_{j}^{\top} g_{j} / \kappa_{j}$

d. If $\left\|s_{j}+\alpha_{j} p_{j}\right\|_{2} \geq \Delta_{k}$, determine $\sigma_{j}$, the positive root of the equation $\left\|s_{j}+\sigma p_{j}\right\|_{2}=\Delta_{k}$. Set $s=s_{j}+\sigma_{j} p_{j}$ and exit the loop. End If

e. $s_{j+1}=s_{j}+\alpha_{j} p_{j}$

f. $g_{j+1}=g_{j}+\alpha_{j} H p_{j}$

g. $\beta_{j}=g_{j+1}^{\top} g_{j+1} / g_{j}^{\top} g_{j}$

h. $p_{j+1}=-g_{j+1}+\beta_{j} p_{j}$

i. If convergence is sufficient ($\|g_{j+1}\|\leq \text{Tol\_rel}\,\|g_{0}\|$), set $s=s_{j+1}$ and exit the loop.

###### Return $s$.
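A minimal Julia transcription of Algorithm 3 is sketched below; it is illustrative (the name `tcg`, the keyword `tol_rel` and the helper `boundary_roots` are assumptions, not the Optinum.jl interface). The roots $\sigma$ of $\|s_{j}+\sigma p_{j}\|_{2}=\Delta_{k}$ are those of the quadratic $\|p_{j}\|^{2}\sigma^{2}+2\,(s_{j}^{\top}p_{j})\,\sigma+\|s_{j}\|^{2}-\Delta_{k}^{2}=0$.

```julia
using LinearAlgebra

# Illustrative transcription of Algorithm 3 (truncated conjugate gradient).
# Returns an approximate minimizer s of q(s) = g's + 0.5 s'H s with ‖s‖ ≤ Δ.
function tcg(g, H, Δ; tol_rel = 1e-6, max_iter = 2 * length(g))
    s, gj, p = zero(g), copy(g), -g
    q(v) = dot(g, v) + 0.5 * dot(v, H * v)
    # Roots σ of ‖s + σ p‖² = Δ², a quadratic in σ (returned in increasing order).
    function boundary_roots(s, p)
        a, b, c = dot(p, p), 2 * dot(s, p), dot(s, s) - Δ^2
        d = sqrt(b^2 - 4 * a * c)
        ((-b - d) / (2 * a), (-b + d) / (2 * a))
    end
    for _ in 1:max_iter
        κ = dot(p, H * p)                                   # a.
        if κ ≤ 0                                            # b. nonpositive curvature:
            σ1, σ2 = boundary_roots(s, p)                   #    take the boundary root with
            σ = q(s + σ1 * p) < q(s + σ2 * p) ? σ1 : σ2     #    the smaller model value
            return s + σ * p
        end
        α = dot(gj, gj) / κ                                 # c.
        if norm(s + α * p) ≥ Δ                              # d. full step leaves the region:
            return s + boundary_roots(s, p)[2] * p          #    take the positive root
        end
        gj_sq = dot(gj, gj)
        s  = s + α * p                                      # e.
        gj = gj + α * (H * p)                               # f.
        β  = dot(gj, gj) / gj_sq                            # g.
        p  = -gj + β * p                                    # h.
        norm(gj) ≤ tol_rel * norm(g) && break               # i. sufficient convergence
    end
    return s
end
```

Used as the `solve_subproblem` argument of the trust region sketch above, this step reproduces the truncated-CG variant of the method.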