One-Step Time-Dependent Future Video Frame Prediction with a Convolutional Encoder-Decoder Neural Network
Abstract
There is an inherent need for machines to have a notion of how entities within their environment behave and to anticipate changes in the near future. In this work, we focus on anticipating future appearance given the current frame of a video. Typical methods either predict the next frame of a video or predict future optical flow or trajectories from a single video frame. This work presents an experiment on stretching the ability of CNNs to anticipate appearance at an arbitrarily given near-future time, by conditioning the predicted video frames on a continuous time variable. We show that CNNs can learn an intrinsic representation of typical appearance changes over time and successfully generate realistic predictions in one step, at a deliberate time difference in the near future. The method is evaluated on the KTH human actions dataset and compared to a baseline consisting of an analogous CNN architecture that is not time-aware.
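To illustrate the idea of conditioning a convolutional encoder-decoder on a continuous time variable, the sketch below tiles a scalar future-time offset into a constant feature map and concatenates it with the encoder bottleneck, so a single forward pass produces the frame at an arbitrary near-future offset. This is a minimal PyTorch illustration, not the authors' implementation: the class name, layer sizes, and the normalized offset `dt` are all assumptions made for the example.

```python
import torch
import torch.nn as nn


class TimeConditionedEncoderDecoder(nn.Module):
    """Sketch of a one-step, time-conditioned frame predictor (hypothetical).

    The continuous time offset `dt` (how far into the future to predict)
    is broadcast to a constant feature map and concatenated with the
    encoder bottleneck, making the decoder time-aware.
    """

    def __init__(self, in_channels: int = 1, base: int = 32):
        super().__init__()
        # Encoder: two strided conv blocks compress the input frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder: mirrors the encoder; +1 input channel for the time map.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2 + 1, base, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, in_channels, 4, stride=2, padding=1),
            nn.Sigmoid(),  # pixel intensities in [0, 1]
        )

    def forward(self, frame: torch.Tensor, dt: torch.Tensor) -> torch.Tensor:
        z = self.encoder(frame)  # (B, C, H/4, W/4)
        # Tile the scalar time offset into a constant feature map.
        t_map = dt.view(-1, 1, 1, 1).expand(-1, 1, z.size(2), z.size(3))
        return self.decoder(torch.cat([z, t_map], dim=1))


if __name__ == "__main__":
    model = TimeConditionedEncoderDecoder()
    frame = torch.rand(2, 1, 64, 64)  # batch of grayscale frames
    dt = torch.tensor([0.1, 0.5])     # normalized future offsets (assumed)
    pred = model(frame, dt)
    print(pred.shape)                 # torch.Size([2, 1, 64, 64])
```

Because the offset enters as a continuous input rather than a fixed step count, the same weights can be queried at any temporal distance within the trained range, which is what distinguishes this setup from a plain next-frame baseline that is not time-aware.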
Origin: Files produced by the author(s)