TipiWiki2

[ PedestrianDetection.2007-02-02-14-54 ]


Appearance-based visual learning and recognition techniques that are based on models derived from a training set of 2D images are being widely used in computer vision applications. In robotics, they have received most attention in visual servoing and navigation. In this paper we discuss a framework for visual self-localization of mobile robots using a parametric model built from panoramic snapshots of the environment. In particular, we propose solutions to the problems related to robustness against occlusions and invariance to the rotation of the sensor. Our principal contribution is an “eigenspace of spinning-images”, i.e., a model of the environment which successfully exploits some of the specific properties of panoramic images in order to efficiently calculate the optimal subspace in terms of principal components analysis (PCA) of a set of training snapshots without actually decomposing the covariance matrix. By integrating a robust recover-and-select algorithm for the computation of image parameters we achieve reliable localization even in the case when the input images are partly occluded or noisy. In this way, the robot is capable of localizing itself in realistic environments. (http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V16-48V8497-1&_user=10&_coverDate=10%2F31%2F2003&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=7e247ec9f57c3a46154b927ff33bc6d5)
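The abstract above builds an eigenspace from training snapshots via PCA. A minimal, generic sketch of that idea (not the paper's spinning-image trick, which exploits panoramic structure to avoid decomposing the covariance matrix entirely) uses the SVD of the data matrix, so the large d-by-d covariance is never formed explicitly:

```python
import numpy as np

def build_eigenspace(images, k):
    """PCA subspace of a set of flattened training images.

    images: (n, d) array, one flattened snapshot per row.
    Returns (mean, basis) where basis has shape (k, d).
    Generic PCA-via-SVD sketch; the paper's "eigenspace of
    spinning-images" computes the same subspace more cheaply
    by exploiting properties of panoramic images.
    """
    mean = images.mean(axis=0)
    X = images - mean
    # SVD of the centered data matrix yields the principal axes
    # without explicitly forming the d x d covariance matrix.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return mean, Vt[:k]

def project(image, mean, basis):
    """Image parameters (coefficients) of a snapshot in the eigenspace."""
    return basis @ (image - mean)
```

Localization then amounts to projecting a new panoramic snapshot and comparing its coefficients against those of the training set; the robust recover-and-select step replaces this plain projection when parts of the input are occluded.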

The analysis of scenes containing multiple 3D objects remains an active research topic in computer vision. Particularly challenging are scenes containing non-polyhedral objects. In general, conventional object models based on junctions and line segments are not suitable for use in this type of recognition. To address this issue, we have developed a representation scheme in which objects are defined by the features that can be reliably extracted from a training set of real images. For a given object, the set of such features is called the appearance-based model of the object. We have created a database of appearance-based models of industrial objects containing both flat and curved surfaces, holes, and threads. A matching technique, called relational indexing, has been developed to work with the appearance-based representation of our objects. Each model in the database is described by a relational graph of its appearance-based features, and small relational subgraphs of the scene features are used to index the database and to retrieve appropriate 3D model hypotheses. This paper describes the new models, the matching algorithm, and preliminary results. (http://portal.acm.org/citation.cfm?id=849908)
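The relational-indexing idea can be sketched as an inverted index from small relational subgraphs to model names, with scene subgraphs voting for model hypotheses. The data layout below (feature-relation-feature triples as the "small subgraphs") is a hypothetical simplification of the paper's representation:

```python
from collections import defaultdict

def index_models(models):
    """Build an inverted index from relational subgraphs to models.

    models: {model_name: [(feat_a, relation, feat_b), ...]}
    Hypothetical layout: each edge of a model's relational graph
    of appearance-based features acts as an index key.
    """
    table = defaultdict(set)
    for name, edges in models.items():
        for edge in edges:
            table[edge].add(name)
    return table

def retrieve(table, scene_edges):
    """Let scene subgraphs vote; return model hypotheses, best first."""
    votes = defaultdict(int)
    for edge in scene_edges:
        for name in table.get(edge, ()):
            votes[name] += 1
    return sorted(votes, key=votes.get, reverse=True)
```

Indexing makes retrieval sublinear in the number of models: only models sharing at least one subgraph with the scene are ever touched.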

ROC
http://en.wikipedia.org/wiki/Receiver_operating_characteristic
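An ROC curve for a detector is traced by sweeping the decision threshold over the scores and recording (false-positive rate, true-positive rate) at each step. A minimal sketch:

```python
def roc_points(scores, labels):
    """(FPR, TPR) points of an ROC curve.

    scores: detector confidence per sample (higher = more positive).
    labels: 1 for positive (e.g. pedestrian), 0 for negative.
    Sweeps the threshold from highest to lowest score.
    """
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points
```

The area under this curve (AUC) is a common single-number summary for comparing detectors.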

JabRef

http://cbcl.mit.edu/publications/object-detection-recognition.html
http://www.lkl.ac.uk/niall/book_odv.pdf (Robot perception through omnidirectional vision)
http://cres.usc.edu/pubdb_html/files_upload/485.pdf
http://cvrr.ucsd.edu/publications/2005/Gandhi_ICIP2005.pdf
http://citeseer.ist.psu.edu/illmann01people.html

general

http://www.cs.unc.edu/Research/vision/comp256fall03/
http://www.ri.cmu.edu/people/rowley_henry.html
http://www.eecs.berkeley.edu/~cgeyer/OMNIVIS05/programme.html