Tanks and Temples: Benchmarking Large-Scale Scene Reconstruction
24 May, 2017
News
Existing image-based reconstruction techniques have significant limitations. Take a camera, walk around a building while recording a video, and use that footage to create a 3D model: you won’t get a clean, accurate reconstruction of the building. Walk around a house with your camera and try reconstructing it, and the result will be just as disappointing. Arno Knapitsch, Jaesik Park, Qian-Yi Zhou, and Vladlen Koltun from Intel Labs have presented a new benchmark for image-based 3D reconstruction to measure and drive progress on this problem.

The researchers used a state-of-the-art industrial laser scanner with a range of 330 meters and submillimeter accuracy to acquire ground-truth models of large-scale scenes: objects and environments were scanned from multiple viewpoints and the scans registered into complete ground-truth models. The same scenes were then captured on 8-megapixel video, which serves as the input from which the evaluated pipelines reconstruct their models.
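A common way to score a reconstruction against such a laser-scanned ground-truth model is to measure nearest-neighbour distances between the two point clouds and report precision, recall, and an F-score. The sketch below illustrates that idea; the distance threshold, function name, and use of NumPy/SciPy are assumptions made for illustration, not the benchmark's official evaluation code.

```python
# A rough sketch of distance-based evaluation of a reconstruction against a
# laser-scanned ground-truth point cloud. The threshold value, function name,
# and use of SciPy are illustrative assumptions, not the official benchmark code.
import numpy as np
from scipy.spatial import cKDTree

def precision_recall(reconstruction, ground_truth, threshold=0.05):
    """Both inputs are (N, 3) point arrays in the same metric coordinate frame."""
    gt_tree = cKDTree(ground_truth)
    rec_tree = cKDTree(reconstruction)

    # Precision: fraction of reconstructed points that lie within `threshold`
    # of some ground-truth point.
    d_rec_to_gt, _ = gt_tree.query(reconstruction)
    precision = float(np.mean(d_rec_to_gt < threshold))

    # Recall: fraction of ground-truth points covered by the reconstruction.
    d_gt_to_rec, _ = rec_tree.query(ground_truth)
    recall = float(np.mean(d_gt_to_rec < threshold))

    f_score = 2 * precision * recall / (precision + recall) if (precision + recall) > 0 else 0.0
    return precision, recall, f_score
```

For instance, calling `precision_recall(rec_pts, gt_pts, threshold=0.02)` would count points within 2 cm as correct, assuming both clouds use meters.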

The presented benchmark has a number of characteristics that can support the development of new reconstruction techniques:

  • The input modality is video. This can help future pipelines track the camera, reason about illumination and reflectance, and reconstruct small details (a minimal frame-sampling sketch follows this list).
  • The benchmark evaluates complete reconstruction pipelines. This leaves scope for tackling camera localization and dense reconstruction jointly, potentially increasing robustness and precision via co-adaptation to the performance characteristics of each task.
  • The benchmark includes both outdoor and indoor scans of complete scenes, pushing current reconstruction pipelines to their limits and beyond.
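Because the input modality is video, the first step of any pipeline entering the benchmark is to sample frames before camera localization and dense reconstruction. Below is a minimal sketch of that step; the file names, sampling stride, and use of OpenCV are illustrative assumptions rather than part of the benchmark.

```python
# A minimal sketch of sampling frames from a benchmark video before passing
# them to a reconstruction pipeline. File names, the sampling stride, and the
# use of OpenCV are assumptions for illustration, not part of the benchmark.
import os
import cv2  # pip install opencv-python

def extract_frames(video_path, out_dir, stride=10):
    """Write every `stride`-th frame of `video_path` to `out_dir` as JPEG files."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved = 0
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video (or read error)
            break
        if index % stride == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:06d}.jpg"), frame)
            saved += 1
        index += 1
    cap.release()
    return saved
```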

Arno Knapitsch, Jaesik Park, Qian-Yi Zhou, and Vladlen Koltun

This research can drastically change the way developers reconstruct outdoor and indoor scenes and stimulate the development of robust, broad-competence systems. The researchers will set up an evaluation server and online leaderboard that the community can use to track progress here.

You can find the full paper on the new benchmark here, and the input data for all the benchmark sequences is available here.
