Laser scans (LIDAR)
Images
LAStools: tools for converting, viewing, and compressing LIDAR data in the LAS format. Contains sample LAS files (mostly terrains).
MeshLab: an open-source, portable, and extensible system for processing and editing unstructured 3D triangular meshes. It is aimed at the typical not-so-small unstructured models arising in 3D scanning, and provides tools for editing, cleaning, healing, inspecting, rendering, and converting such meshes.
Unstructured textured polygons, as used in this paper by Johannes Kopf et al. from Microsoft Research.
A Connection between Partial Symmetry and Inverse Procedural Modeling. Martin Bokeloh, Michael Wand, Hans-Peter Seidel.
-
Docking sites are an interesting concept. Can they be extended to imperfect information? Can docking sites be identified in noisy point sets and used as a reference for building the grammar and/or the reconstruction?
Non-local Scan Consolidation for 3D Urban Scene. Qian Zheng, Andrei Sharf, Guowei Wan, Yangyan Li, Niloy J. Mitra, Baoquan Chen, Daniel Cohen-Or.
-
SmartBoxes for Interactive Urban Reconstruction. Liangliang Nan, Andrei Sharf, Hao Zhang, Daniel Cohen-Or, Baoquan Chen.
-
Summary:
CityFit: High-quality urban reconstructions by fitting shape grammars to images and derived textured point clouds. Bernhard Hohmann, Ulrich Krispel, Sven Havemann and Dieter Fellner.
-
Summary: The goal of the CityFit project is to reconstruct the facades of 80% of the buildings in the city of Graz fully automatically. The challenge is to establish a complete workflow, ranging from the acquisition of images and LIDAR data through 2D/3D feature detection and recognition to the generation of lean polygonal facade models.
The input data for CityFit consist of highly redundant roadside photographs and LIDAR scans acquired by Microsoft Photogrammetry. The LIDAR 3D point cloud is not sufficient for direct facade reconstruction; it does, however, allow the main orientation and the rough structure of the facade to be derived very reliably. A point cloud viewer was developed for examining the preprocessing results (section 3). The LIDAR data are used together with information extracted from the roadside photographs.
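On the "main orientation from the LIDAR point cloud" step: below is a minimal sketch (not from the CityFit paper, and not their actual pipeline) of how a facade's dominant plane can be derived by PCA, taking the plane normal as the eigenvector with the smallest eigenvalue. The function name and the synthetic test patch are placeholders.

```python
import numpy as np

def facade_orientation(points):
    """Estimate the dominant plane of a facade point cloud via PCA.

    points: (N, 3) array of LIDAR samples belonging to one facade.
    Returns the unit plane normal and the centroid.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The eigenvector with the smallest eigenvalue of the covariance
    # matrix is the normal of the least-squares plane through the points.
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    return eigvecs[:, 0], centroid

# Toy test: a noisy, nearly planar facade patch in the x-z plane.
rng = np.random.default_rng(0)
xz = rng.uniform(0, 10, size=(1000, 2))      # extent along the wall and height
noise = rng.normal(0, 0.02, size=1000)       # scanner noise off the wall plane
pts = np.column_stack([xz[:, 0], noise, xz[:, 1]])
normal, centroid = facade_orientation(pts)
print("facade normal ~", np.round(normal, 3))  # close to (0, +/-1, 0)
```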
Question for us: Do we want to use information from the images to compute the grammar?
Take input data from different sources (e.g., Arc3D, Pollefeys IJCV'08, etc.).
Use non-local consolidation (Zheng et al., 2010) to improve and complete the point set.
Use docking sites (from Bokeloh et al., 2010) to partition the point set? (A rough skeleton of these steps is sketched below.)
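None of the code below is the actual method of Zheng et al. 2010 or Bokeloh et al. 2010; it only wires the planned stages together on synthetic data, with crude stand-ins (k-nearest-neighbour averaging in place of non-local consolidation, equal-width slicing in place of docking-site partitioning). All function names and parameters are placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

def smooth_point_set(points, k=16):
    """Crude stand-in for consolidation (NOT Zheng et al. 2010):
    replace each point by the centroid of its k nearest neighbours."""
    _, idx = cKDTree(points).query(points, k=k)
    return points[idx].mean(axis=1)

def partition_along_axis(points, axis=0, n_slices=8):
    """Naive stand-in for docking-site partitioning (NOT Bokeloh et al. 2010):
    split the cloud into equal-width slices along one facade axis."""
    coords = points[:, axis]
    edges = np.linspace(coords.min(), coords.max(), n_slices + 1)
    labels = np.clip(np.digitize(coords, edges) - 1, 0, n_slices - 1)
    return [points[labels == i] for i in range(n_slices)]

# Wiring the planned steps together on placeholder input
# (in practice this would be the merged Arc3D / LIDAR point set).
rng = np.random.default_rng(1)
raw = rng.uniform(0, 10, size=(5000, 3))
clean = smooth_point_set(raw)
parts = partition_along_axis(clean)
print("points per slice:", [len(p) for p in parts])
```

Each stand-in would be swapped for the corresponding paper's method once implemented; only the data flow (input, consolidation, partitioning) is meant to carry over.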