Conference Paper, 2012

Top-Down and Bottom-Up Cues for Scene Text Recognition

Abstract

Scene text recognition has gained significant attention from the computer vision community in recent years. Recognizing such text is a challenging problem, even more so than the recognition of scanned documents. In this work, we focus on the problem of recognizing text extracted from street images. We present a framework that exploits both bottom-up and top-down cues. The bottom-up cues are derived from individual character detections in the image. We build a Conditional Random Field (CRF) model on these detections to jointly model their strengths and the interactions between them. We then impose top-down cues, obtained from a lexicon-based prior (i.e., language statistics), on the model. The optimal word represented by the text image is obtained by minimizing the energy function corresponding to the random field model. We show significant improvements in accuracy on two challenging public datasets, namely Street View Text (over 15%) and ICDAR 2003 (nearly 10%).
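
The pipeline the abstract describes (per-character detection costs combined with a pairwise lexicon prior, decoded by minimizing a chain-CRF energy) can be sketched with a toy model. The snippet below is only an illustration of that idea, not the paper's implementation: the candidate characters, the costs, the bigram prior, and the viterbi_decode helper are all hypothetical stand-ins, and the paper's actual energy has richer terms.

# Minimal sketch of the energy-minimization idea described above.
# Everything here is hypothetical: the candidate characters, the costs,
# and the bigram prior are toy stand-ins for the paper's richer model.

def viterbi_decode(unary, bigram_cost):
    """Minimize E(x) = sum_i unary[i][x_i] + sum_i bigram_cost(x_i, x_{i+1}).

    unary: one dict per character window, mapping a candidate character
           to its detection cost (lower = stronger bottom-up detection).
    bigram_cost: pairwise cost encoding the top-down lexicon prior.
    """
    # best[c] = (cost of the cheapest labeling ending in c, that labeling)
    best = {c: (cost, [c]) for c, cost in unary[0].items()}
    for site in unary[1:]:
        new_best = {}
        for c, u in site.items():
            # Cheapest predecessor under the pairwise lexicon term.
            prev = min(best, key=lambda p: best[p][0] + bigram_cost(p, c))
            prev_cost, path = best[prev]
            new_best[c] = (prev_cost + bigram_cost(prev, c) + u, path + [c])
        best = new_best
    word = min(best, key=lambda c: best[c][0])
    cost, path = best[word]
    return "".join(path), cost

# Toy usage: two windows with an ambiguous first character; the bigram
# prior (favoring the lexicon pair "o","k") overrides the slightly
# stronger detection of "0", so the decoder returns "ok".
unary = [{"o": 1.0, "0": 0.8}, {"k": 0.5}]
lexicon_bigrams = {("o", "k")}
bigram = lambda a, b: 0.0 if (a, b) in lexicon_bigrams else 1.0
print(viterbi_decode(unary, bigram))  # -> ('ok', 1.5); "0k" costs 2.3

In this toy example the bottom-up evidence alone would prefer "0" (cost 0.8 versus 1.0), but the lexicon-based pairwise term makes the complete word "ok" cheaper overall, which is exactly the interaction between cues the abstract highlights.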
Main file: mishra12.pdf (398.94 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00818178, version 1 (03-11-2014)

Identifiers

Cite

Anand Mishra, Karteek Alahari, C.V. Jawahar. Top-Down and Bottom-Up Cues for Scene Text Recognition. CVPR - IEEE Conference on Computer Vision and Pattern Recognition, Jun 2012, Providence, United States. ⟨10.1109/CVPR.2012.6247990⟩. ⟨hal-00818178⟩