Introduction
The project UrbanScan aims to develop computationally efficient algorithms for piecewise-planar reconstruction (PPR) from stereo and monocular image sequences.
Since man-made environments are dominated by planar surfaces, it makes sense to use planes, rather than points, as the primitives for Structure from Motion (SfM). The potential key advantages are:
- Better accuracy in camera motion estimation;
- Robustness to several issues and artefacts (wide baseline, weak texture, perceptual aliasing, illumination changes);
- Rendering of complete, visually pleasing models.
This website provides several datasets for developing and testing PPR algorithms, describes our PPR algorithm, dubbed StereoScan, presents several PPR results on challenging datasets, and discusses possibilities for acceleration using the GPU.
Camera Setup (Frontal and Lateral)
The outdoor scenes were acquired with Setup 1 (frontal configuration) and Setup 2 (lateral configuration). The indoor scenes were acquired with the Bumblebee setup.
Figure 1: Stereo pair pointing forward (left) and lateral (right) for dataset acquisition.
- Cameras: PointGrey Grasshopper2 GS2-FW-14S5C
- Acquisition frame rate: 7.5 FPS
- Image resolution: 1280x960 pixels
- Car velocity: 10-30 km/h
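Since the vehicle's motion between consecutive frames determines the effective baseline available to SfM, it is useful to know how far the car travels per frame at these settings. A minimal sketch, using only the frame rate and speed range listed above (the function name is our own):

```python
# Distance travelled by the car between consecutive frames,
# given the acquisition frame rate and the vehicle speed range.

FPS = 7.5  # acquisition frame rate [frames/s]

def metres_per_frame(speed_kmh, fps=FPS):
    """Convert vehicle speed in km/h to metres travelled per frame."""
    speed_ms = speed_kmh * 1000.0 / 3600.0  # km/h -> m/s
    return speed_ms / fps

for v in (10, 30):
    print(f"{v} km/h -> {metres_per_frame(v):.2f} m/frame")
```

At 10 km/h the car moves roughly 0.37 m between frames; at 30 km/h, roughly 1.11 m.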
Acquired Datasets
Our datasets can be downloaded and are ready to use. Each dataset .zip file contains four *.txt files with calibration results, the ordered original and rectified images, and the recovered motion whenever it exists. All *.txt files with calibration parameters follow the format shown in Table 1.
Calibration Files:
- cal_left.txt and cal_right.txt (Before rectification)
- cal_left_rectified.txt and cal_right_rectified.txt (After rectification)
Calibration Parameters (format):
- K - Intrinsic parameters (camera model)
- Kc - Radial and tangential distortion coefficients (3 radial, 2 tangential)
- R - Rotation matrix (rotation between cameras)
- t - Translation Vector (translation between cameras)
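The distortion coefficients in Kc can be applied with the standard Brown-Conrady model. The sketch below assumes the common [k1, k2, p1, p2, k3] ordering (as used by the Bouguet toolbox and OpenCV); verify this against the actual calibration output before relying on it. The function names are our own, and the numeric values are taken from Table 1.

```python
import numpy as np

# Values from Table 1 (cal_right.txt); kc ordering assumed [k1, k2, p1, p2, k3]
K = np.array([[1272.456504408348, 0.0, 671.623451160178],
              [0.0, 1272.456504408348, 479.257822799631],
              [0.0, 0.0, 1.0]])
kc = [-0.118136161770665, 0.145778662392242, 0.0, 0.0, 0.0]

def distort_normalized(x, y, kc):
    """Apply radial + tangential (Brown-Conrady) distortion to
    normalized image coordinates (x, y)."""
    k1, k2, p1, p2, k3 = kc
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

def to_pixels(xd, yd, K):
    """Map (distorted) normalized coordinates to pixel coordinates via K."""
    u, v, w = K @ np.array([xd, yd, 1.0])
    return u / w, v / w

# On the optical axis the distortion vanishes, so the point projects
# exactly to the principal point (cx, cy).
xd, yd = distort_normalized(0.0, 0.0, kc)
print(to_pixels(xd, yd, K))
```

This is only a forward-projection sketch; undistorting an image requires inverting this mapping (typically iteratively), which the rectified files already provide.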
| Parameter | Values |
|---|---|
| K (row 1) | 1272.456504408348, 0, 671.623451160178 |
| K (row 2) | 0, 1272.456504408348, 479.257822799631 |
| K (row 3) | 0, 0, 1 |
| Kc | -0.118136161770665, 0.145778662392242, 0, 0, 0 |
| R (row 1) | 0.998967461479298, 0.035293903953770, -0.028606839205163 |
| R (row 2) | -0.035180281505050, 0.999371005122405, 0.004465637004832 |
| R (row 3) | 0.028746455413380, -0.003454629406398, 0.999580765539649 |
| t | -795.4788321497704, 19.5651298561876, 19.2636003632926 |

Table 1: Format of the calibration parameters (example values read from a cal_right.txt file)
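The translation vector t in Table 1 gives the stereo baseline directly, and together with the focal length it fixes the depth-from-disparity relation Z = f·B/d for a rectified pair. A quick sanity check using NumPy and the values above (we assume t is in millimetres, which is consistent with a roughly 0.8 m rig; the function name is our own):

```python
import numpy as np

# Values from Table 1 (cal_right.txt)
K = np.array([[1272.456504408348, 0.0, 671.623451160178],
              [0.0, 1272.456504408348, 479.257822799631],
              [0.0, 0.0, 1.0]])
t = np.array([-795.4788321497704, 19.5651298561876, 19.2636003632926])

baseline = np.linalg.norm(t)  # stereo baseline, in the units of t
f = K[0, 0]                   # focal length in pixels

print(f"baseline ~ {baseline:.1f} (units of t)")

def depth_from_disparity(d_px, f_px=f, B=baseline):
    """Depth Z = f * B / d for a rectified pair; same units as the
    baseline, for a disparity of d_px pixels."""
    return f_px * B / d_px

print(f"Z at 50 px disparity ~ {depth_from_disparity(50):.0f}")
```

With these numbers the baseline comes out near 796 mm, so a 50-pixel disparity corresponds to a depth of roughly 20 m.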
- Calibration Images Frontal and Lateral
- Outdoor Frontal
- Outdoor Lateral
Dataset: Calibration Images Frontal and Lateral (full calibration preview and download available)
Dataset: Loop Ladeira Seminario (full dataset preview and download available)
Dataset: Se Velha (full dataset preview and download available)
Dataset: Loop in Santa Clara (full dataset preview and download available)
Dataset: Loop in Santa Clara Lateral (full dataset preview and download available)
- Indoor Bumblebee
Dataset: ISR Building Entrance (full dataset preview and download available)
- Outdoor Bumblebee
Dataset: Loop in Condeixa (full dataset preview and download available)
Citation
If you use one of our datasets in your research, please cite:
@incollection{ECCV2014_PPR,
  title={Piecewise-Planar StereoScan: Structure and Motion from Plane Primitives},
  author={Raposo, Carolina and Antunes, Michel and Barreto, Joao P.},
  year={2014},
  isbn={978-3-319-10604-5},
  booktitle={Computer Vision – ECCV 2014},
  volume={8690},
  series={Lecture Notes in Computer Science},
  editor={Fleet, David and Pajdla, Tomas and Schiele, Bernt and Tuytelaars, Tinne},
  doi={10.1007/978-3-319-10605-2_4},
  url={http://dx.doi.org/10.1007/978-3-319-10605-2_4},
  publisher={Springer International Publishing},
  pages={48--63},
  language={English}
}
Copyright
These datasets are property of the University of Coimbra (UC) and are available for academic use only; you may not use this work for commercial purposes. If you use one of our datasets in your research, please cite us.
Acknowledgments
The authors thank Google Inc., the Portuguese Foundation for Science and Technology (FCT), the Instituto de Sistemas e Robótica (ISR), the Instituto de Telecomunicações (IT), and the University of Coimbra for supporting this project. This work was supported by the FCT under Grants PDCS10: PTDC/EEA-AUT/113818/2009, AMS-HMI12: RECI/EEIAUT/0181/2012, and UID/EEA/50008/2013, and also by a Google Research Award from Google Inc.