Taylor-based pseudo-metrics for random process fitting in dynamic programming.
Abstract
Stochastic optimization is the search for $x$ optimizing $E\, C(x,A)$, the expectation of $C(x,A)$, where $A$ is a random variable. Typically, $C(x,a)$ is the cost associated with a strategy $x$ facing the realization $a$ of the random process. Many stochastic optimization problems involve multiple time steps, leading to computationally difficult problems; efficient solutions exist, for example through Bellman's optimality principle, but only provided that the random process is represented by a well-structured process, typically an inhomogeneous Markovian process (hopefully with a finite number of states) or a scenario tree. The problem is that, in the general case, $A$ is far from being Markovian. So we look for $A'$, "looking like $A$", but belonging to a given family $\mathcal{A}'$ which does not contain $A$ at all. The problem is then the numerical evaluation of "$A'$ looks like $A$". A classical method is the use of the Kantorovitch-Rubinstein distance or other transportation metrics \cite{Pflug}, justified by straightforward bounds on the deviation $|E\, C(x,A)-E\, C(x,A')|$ through the Kantorovitch-Rubinstein distance and uniform Lipschitz conditions. These approaches might be better than the use of high-level statistics \cite{Keefer}. We propose other (pseudo-)distances, based upon refined inequalities, guaranteeing a good choice of $A'$. Moreover, since in many cases we actually prefer optimization with risk management, e.g. the optimization of $E\, C(x,\mathrm{noise}(A))$ where $\mathrm{noise}(\cdot)$ is a random noise modeling the lack of knowledge of the precise random variables, we propose distances which can handle a user-defined noise. Tests on artificial data sets with realistic loss functions show the relevance of the method.
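As a hedged sketch of the "straightforward bounds" alluded to above (the constant $L$ and the notation $W_1$ are assumptions of this sketch, not taken from the abstract): if the cost $a \mapsto C(x,a)$ is $L$-Lipschitz uniformly in $x$, the Kantorovitch-Rubinstein duality yields
\[
  \sup_x \bigl| E\, C(x,A) - E\, C(x,A') \bigr| \;\le\; L \cdot W_1(A,A'),
  \qquad
  W_1(A,A') \;=\; \inf_{\gamma \in \Pi(A,A')} E_{(a,a') \sim \gamma} \, \|a - a'\|,
\]
where $\Pi(A,A')$ denotes the set of couplings of the laws of $A$ and $A'$. Minimizing $W_1(A,A')$ over the structured family $\mathcal{A}'$ (e.g. finite-state Markovian processes or scenario trees) therefore controls the worst-case optimization error; the pseudo-distances proposed in the paper are built on refined versions of such inequalities.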
Domains
Optimization and control [math.OC]