<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>CV4DT Research Group</title><link>https://cv4dt.github.io/</link><atom:link href="https://cv4dt.github.io/index.xml" rel="self" type="application/rss+xml"/><description>CV4DT Research Group</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Mon, 24 Oct 2022 00:00:00 +0000</lastBuildDate><image><url>https://cv4dt.github.io/media/icon_hu_5ea2fd92564b9c08.png</url><title>CV4DT Research Group</title><link>https://cv4dt.github.io/</link></image><item><title>Example Event</title><link>https://cv4dt.github.io/event/example/</link><pubDate>Sat, 01 Jun 2030 13:00:00 +0000</pubDate><guid>https://cv4dt.github.io/event/example/</guid><description>&lt;p>Slides can be added in a few ways:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Create&lt;/strong> slides using Wowchemy&amp;rsquo;s &lt;a href="https://docs.hugoblox.com/managing-content/#create-slides" target="_blank" rel="noopener">&lt;em>Slides&lt;/em>&lt;/a> feature and link them using the &lt;code>slides&lt;/code> parameter in the front matter of the talk file (see the example below)&lt;/li>
&lt;li>&lt;strong>Upload&lt;/strong> an existing slide deck to &lt;code>static/&lt;/code> and link it using the &lt;code>url_slides&lt;/code> parameter in the front matter of the talk file&lt;/li>
&lt;li>&lt;strong>Embed&lt;/strong> your slides (e.g., Google Slides) or presentation video on this page using &lt;a href="https://docs.hugoblox.com/writing-markdown-latex/" target="_blank" rel="noopener">shortcodes&lt;/a>.&lt;/li>
&lt;/ul>
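&lt;p>For example, a talk file&amp;rsquo;s front matter might reference slides as follows (a minimal sketch; the &lt;code>example&lt;/code> slug and the PDF file name are illustrative placeholders):&lt;/p>
&lt;pre>&lt;code class="language-yaml">---
title: My Example Talk
# Option 1: slides created with the Slides feature, e.g. under content/slides/example/
slides: example
# Option 2: a deck uploaded to static/uploads/, served from the site root
url_slides: uploads/my-slides.pdf
---
&lt;/code>&lt;/pre>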
&lt;p>Further event details, including page elements such as image galleries, can be added to the body of this page.&lt;/p></description></item><item><title>ActionReasoning: Robot Action Reasoning in 3D Space with LLM for Robotic Brick Stacking</title><link>https://cv4dt.github.io/publication/wang-2026-actionreasoning/</link><pubDate>Tue, 03 Mar 2026 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/publication/wang-2026-actionreasoning/</guid><description/></item><item><title>TrueCity: Real and Simulated Urban Data for Cross-Domain 3D Scene Understanding</title><link>https://cv4dt.github.io/publication/nguyen-2026-truecity/</link><pubDate>Sun, 23 Nov 2025 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/publication/nguyen-2026-truecity/</guid><description/></item><item><title>OPAL: Visibility-aware Lidar-to-OpenStreetMap Place Recognition via Adaptive Radial Fusion</title><link>https://cv4dt.github.io/publication/kang-2025-opal/</link><pubDate>Sun, 03 Aug 2025 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/publication/kang-2025-opal/</guid><description/></item><item><title>Texture2LoD3: Enabling LoD3 Building Reconstruction With Panoramic Images</title><link>https://cv4dt.github.io/publication/tang-2025-texture-2-lod-3/</link><pubDate>Sat, 21 Jun 2025 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/publication/tang-2025-texture-2-lod-3/</guid><description/></item><item><title>RADLER: Radar Object Detection Leveraging Semantic 3D City Models and Self-Supervised Radar-Image Learning</title><link>https://cv4dt.github.io/publication/luo-2025-radler/</link><pubDate>Fri, 20 Jun 2025 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/publication/luo-2025-radler/</guid><description/></item><item><title>Zaha: Introducing the level of facade generalization and the large-scale point cloud facade semantic segmentation benchmark dataset</title><link>https://cv4dt.github.io/publication/wysocki-2025-zaha/</link><pubDate>Thu, 20 Mar 2025 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/publication/wysocki-2025-zaha/</guid><description/></item><item><title>FacaDiffy: Inpainting Unseen Facade Parts Using Diffusion Models</title><link>https://cv4dt.github.io/publication/froech-2025-facadiffy/</link><pubDate>Tue, 21 Jan 2025 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/publication/froech-2025-facadiffy/</guid><description/></item><item><title>CDGS: Confidence-Aware Depth Regularization for 3D Gaussian Splatting</title><link>https://cv4dt.github.io/publication/zhang-2025-cdgs/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/publication/zhang-2025-cdgs/</guid><description/></item><item><title>Computer Vision</title><link>https://cv4dt.github.io/projects/cv/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/projects/cv/</guid><description>&lt;p>We are the Computer Vision Working Group of CV4DT. Click for more!&lt;/p>
&lt;p>We are the Computer Vision Working Group of CV4DT, including (but not limited to) &lt;a href="https://cv4dt.github.io/author/dr-olaf-wysocki/">Olaf Wysocki&lt;/a>, &lt;a href="https://cv4dt.github.io/author/haibing-wu/">Haibing Wu&lt;/a>, &lt;a href="https://cv4dt.github.io/author/qilin-zhang/">Qilin Zhang&lt;/a>, &lt;a href="https://cv4dt.github.io/author/daniel-lehmberg/">Daniel Lehmberg&lt;/a>, and &lt;a href="https://cv4dt.github.io/author/wanru-yang/">Wanru Yang&lt;/a>.&lt;/p>
&lt;h1 id="research-projects">Research Projects&lt;/h1>
&lt;p>Our projects are primarily &lt;strong>research-oriented&lt;/strong>, aiming for publication in top-tier computer vision venues such as &lt;strong>CVPR&lt;/strong>, &lt;strong>ECCV&lt;/strong>, and &lt;strong>NeurIPS&lt;/strong>.&lt;br>
Below is an overview of our ongoing and upcoming research directions.&lt;/p>
&lt;hr>
&lt;h3 id="-structured-3d-object-reconstruction">🏠 Structured 3D Object Reconstruction&lt;/h3>
&lt;p>We aim to reconstruct &lt;strong>structured 3D models&lt;/strong> aligned with interpretable geometric and semantic representations.&lt;br>
This direction builds upon our prior work:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://openaccess.thecvf.com/content/CVPR2025W/USM3D/papers/Tang_Texture2LoD3_Enabling_LoD3_Building_Reconstruction_With_Panoramic_Images_CVPRW_2025_paper.pdf" target="_blank" rel="noopener">&lt;em>Texture2LoD3: Enabling LoD3 Building Reconstruction With Panoramic Images&lt;/em> (CVPR25)&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://openaccess.thecvf.com/content/CVPR2023W/PCV/papers/Wysocki_Scan2LoD3_Reconstructing_Semantic_3D_Building_Models_at_LoD3_Using_Ray_CVPRW_2023_paper.pdf" target="_blank" rel="noopener">&lt;em>Scan2LoD3: Reconstructing semantic 3D building models at LoD3 using ray casting and Bayesian networks&lt;/em> (CVPR23)&lt;/a>&lt;/li>
&lt;/ul>
&lt;p>Several project proposals are currently under review to expand this line of research.&lt;/p>
&lt;hr>
&lt;h3 id="-revisiting-geometric-features-for-3d-scene-understanding">🧩 Revisiting Geometric Features for 3D Scene Understanding&lt;/h3>
&lt;p>We revisit &lt;strong>geometric descriptors&lt;/strong> for large-scale &lt;strong>3D semantic segmentation&lt;/strong>, &lt;strong>self-supervised learning (SSL)&lt;/strong>, &lt;strong>3D instance segmentation&lt;/strong>, &lt;strong>3D object pose estimation&lt;/strong>, and &lt;strong>3D shape completion&lt;/strong>, studying how handcrafted and learned geometric features can be combined to achieve better generalization across domains. Preliminary findings are available in &lt;a href="https://arxiv.org/pdf/2402.06506" target="_blank" rel="noopener">&lt;em>arXiv:2402.06506&lt;/em>&lt;/a>.&lt;/p>
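&lt;p>As a minimal illustration of this idea (our own sketch, not a specific published architecture), classical covariance-based descriptors can simply be concatenated with learned per-point embeddings before the downstream head:&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np

def handcrafted_features(points, k=16):
    """Per-point eigenvalue features: linearity, planarity, sphericity."""
    feats = np.zeros((len(points), 3))
    for i, p in enumerate(points):
        # k nearest neighbors, brute force for clarity
        dists = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(dists)[:k]]
        l1, l2, l3 = sorted(np.linalg.eigvalsh(np.cov(nbrs.T)), reverse=True)
        eps = 1e-9
        feats[i] = [(l1 - l2) / (l1 + eps),  # linearity
                    (l2 - l3) / (l1 + eps),  # planarity
                    l3 / (l1 + eps)]         # sphericity
    return feats

def fuse(points, learned_embeddings):
    """Concatenate handcrafted and learned per-point features."""
    return np.concatenate([handcrafted_features(points), learned_embeddings], axis=1)
&lt;/code>&lt;/pre>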
&lt;p>Further papers expanding this line of research are in preparation.&lt;/p>
&lt;hr>
&lt;h3 id="-sim2real-3d-domain-gap">🏁 Sim2Real 3D Domain Gap&lt;/h3>
&lt;p>We still observe large domain gaps between simulated and real-world data, hampering the application of simulated data to real-world challenges and many downstream tasks. We believe in the power of &lt;em>diffusion models&lt;/em> to bridge this gap. Preliminary results, establishing a framework for running simulations within a unique real-world city twin, are available in &lt;a href="https://arxiv.org/abs/2505.17959" target="_blank" rel="noopener">&lt;em>arXiv:2505.17959&lt;/em>&lt;/a>.&lt;/p>
&lt;p>One paper is under review, while another draft is in preparation.&lt;/p>
&lt;hr>
&lt;h3 id="-6dof-estimation-using-structured-3d-models">🧭 6DoF Estimation Using Structured 3D Models&lt;/h3>
&lt;p>We explore &lt;strong>structured 3D model representations&lt;/strong> for &lt;strong>6-degree-of-freedom (6DoF) pose estimation&lt;/strong>, targeting improved robustness and interpretability compared to implicit or point-based methods.&lt;br>
This direction builds on related work of &lt;a href="https://proceedings.neurips.cc/paper_files/paper/2024/file/d78ece6613953f46501b958b7bb4582f-Paper-Conference.pdf" target="_blank" rel="noopener">&lt;em>LoD-Loc: Aerial Visual Localization using LoD 3D
Map with Neural Wireframe Alignment&lt;/em> (NeurIPS24)&lt;/a>.&lt;/p>
&lt;p>A new iteration of this work is in preparation for upcoming major conference deadlines.&lt;/p>
&lt;hr>
&lt;h3 id="-geometry-prior-guided-3d-gaussian-splatting">🌌 Geometry-Prior-Guided 3D Gaussian Splatting&lt;/h3>
&lt;p>This project investigates the integration of &lt;strong>geometry-aware priors&lt;/strong> into &lt;strong>3D Gaussian Splatting&lt;/strong> to enhance reconstruction quality and geometric fidelity.&lt;br>
Preliminary findings are available in &lt;a href="https://arxiv.org/pdf/2508.07355" target="_blank" rel="noopener">&lt;em>arXiv:2508.07355&lt;/em>&lt;/a>, and ongoing work extends the framework beyond building-specific scenarios toward &lt;strong>general-purpose 3D environments&lt;/strong>.&lt;/p>
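&lt;p>As a simplified sketch of how such a prior can enter the optimization (our assumption, not the exact formulation of the paper), a confidence-weighted depth term can be added to the photometric loss:&lt;/p>
&lt;pre>&lt;code class="language-python">import torch

def depth_regularized_loss(rendered_rgb, gt_rgb, rendered_depth,
                           prior_depth, confidence, lambda_depth=0.1):
    """Photometric loss plus a confidence-weighted depth-prior term.

    confidence holds per-pixel weights in [0, 1], e.g. reflecting how much
    the depth prior is trusted at each pixel (illustrative assumption).
    """
    photometric = torch.abs(rendered_rgb - gt_rgb).mean()
    depth_term = (confidence * torch.abs(rendered_depth - prior_depth)).mean()
    return photometric + lambda_depth * depth_term
&lt;/code>&lt;/pre>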
&lt;hr>
&lt;h3 id="-quantifying-uncertainty-of-x">📈 Quantifying Uncertainty of X&lt;/h3>
&lt;p>In this research direction, we explore the quantification of uncertainty across various modalities and downstream tasks: from data acquisition, through segmentation, to inference. Our rationale is often, though not exclusively, grounded in Bayesian modeling of uncertainty. We have previously published on, e.g., reconstruction uncertainty:
&lt;a href="https://openaccess.thecvf.com/content/CVPR2023W/PCV/papers/Wysocki_Scan2LoD3_Reconstructing_Semantic_3D_Building_Models_at_LoD3_Using_Ray_CVPRW_2023_paper.pdf" target="_blank" rel="noopener">&lt;em>Scan2LoD3: Reconstructing semantic 3D building models at LoD3 using ray casting and Bayesian networks&lt;/em> (CVPR23)&lt;/a>.
Currently, we are involved in the funded project &lt;a href="https://www.asg.ed.tum.de/en/gds/forschung-research/projects/nerf2bim/" target="_blank" rel="noopener">NeRF2BIM&lt;/a>, together with Profs. Petzold, Holst, and Niessner, in which we analyze laser scanning uncertainty and its influence on the final 3D object reconstruction.&lt;/p>
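&lt;p>A minimal sketch of the underlying idea (illustrative only, assuming isotropic Gaussian range noise; not the project&amp;rsquo;s actual pipeline): Monte Carlo propagation of scan noise through a reconstruction step yields an uncertainty estimate for the reconstructed geometry:&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np

def fit_plane(points):
    """Least-squares plane through points; returns unit normal and centroid."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid

def propagate_scan_noise(points, sigma=0.005, n_samples=200):
    """Monte Carlo: perturb the scan, refit, report spread of the plane offset."""
    rng = np.random.default_rng(0)
    ref_normal, _ = fit_plane(points)
    offsets = []
    for _ in range(n_samples):
        noisy = points + rng.normal(0.0, sigma, size=points.shape)
        normal, centroid = fit_plane(noisy)
        if normal @ ref_normal &lt; 0:  # resolve the sign ambiguity of the normal
            normal = -normal
        offsets.append(normal @ centroid)
    return float(np.std(offsets))  # uncertainty of the reconstructed plane offset
&lt;/code>&lt;/pre>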
&lt;hr>
&lt;h3 id="-dataset-development">🗂️ Dataset Development&lt;/h3>
&lt;p>We also curate and release datasets supporting our main research directions, including:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Facade Segmentation Dataset&lt;/strong> – for large-scale semantic façade parsing, building upon &lt;a href="https://openaccess.thecvf.com/content/WACV2025/html/Wysocki_ZAHA_Introducing_the_Level_of_Facade_Generalization_and_the_Large-Scale_WACV_2025_paper.html" target="_blank" rel="noopener">&lt;em>ZAHA&lt;/em> (WACV25)&lt;/a>, the world&amp;rsquo;s largest façade dataset&lt;/li>
&lt;li>&lt;strong>Point Cloud Completion Dataset&lt;/strong> – for partial-to-complete reconstruction learning&lt;/li>
&lt;li>&lt;strong>3D Object Reconstruction Dataset&lt;/strong> – for structured geometry prediction and analysis&lt;/li>
&lt;/ul>
&lt;p>These datasets promote &lt;strong>reproducible, data-rich 3D research&lt;/strong> across geometry, perception, and robotics.&lt;/p>
&lt;p>We are always looking for fantastic people to join us and collaborate on these projects!&lt;/p>
&lt;h1 id="robotics">Robotics&lt;/h1>
&lt;p>We are a computer vision &amp;amp; robotics working group (&lt;a href="https://cv4dt.github.io/author/dr-guangming-wang/">Guangming Wang&lt;/a>, &lt;a href="https://cv4dt.github.io/author/dr-yixiong-jing/">Yixiong Jing&lt;/a>, &lt;a href="https://cv4dt.github.io/author/qizhen-ying/">Qizhen Ying&lt;/a>), focusing on:&lt;/p>
&lt;ul>
&lt;li>Robotic manipulation and control&lt;/li>
&lt;li>3D vision for robotic perception&lt;/li>
&lt;li>Generative models for planning and world understanding&lt;/li>
&lt;/ul>
&lt;p>We are always looking for fantastic people to join us and collaborate on the following projects!&lt;/p>
&lt;hr>
&lt;h2 id="ongoing-research-projects">Ongoing Research Projects&lt;/h2>
&lt;h3 id="project-1-actionreasoning-robot-action-reasoning-in-3d-space-with-llm-for-robotic-brick-stacking">Project 1: &lt;strong>ActionReasoning: Robot Action Reasoning in 3D Space with LLM for Robotic Brick Stacking&lt;/strong>&lt;/h3>
&lt;img src="https://cv4dt.github.io/uploads/research_video_robotics/brick_stacking.gif" alt="brick demo" width="550">
&lt;p>Classical robotic systems typically rely on custom planners designed for constrained environments. While effective in restricted settings, these systems lack generalization capabilities, limiting the scalability of embodied AI and general-purpose robots. To address this gap, we propose ActionReasoning, an LLM-driven framework that performs explicit action reasoning to produce physics-consistent, prior-guided decisions for robotic manipulation. The experiments demonstrate that the proposed multi-agent LLM framework enables stable brick placement without task-specific programming, highlighting its potential to generalize beyond narrowly defined tasks (the paper is under submission at a top robotics conference).&lt;/p>
&lt;h3 id="project-2-robotic-perception-physics-aware-3d-gaussian-modeling">Project 2: &lt;strong>Robotic Perception: Physics-Aware 3D Gaussian Modeling&lt;/strong>&lt;/h3>
&lt;p>Our goal is to develop a unified 3D Gaussian modeling framework that integrates geometric, semantic, and physical attributes, enabling robots to achieve dynamic and adaptive understanding of their environments, thereby acquiring human-like adaptability and generalization capabilities.&lt;/p>
&lt;h3 id="project-3-robotic-manipulation-generalizable-manipulation-of-different-types-of-objects">Project 3: &lt;strong>Robotic Manipulation: Generalizable Manipulation of Different Types of Objects&lt;/strong>&lt;/h3>
&lt;p>Our goal is to build a general reasoning framework for the manipulation of deformable objects, hinged objects, and rigid objects, progressing from universal representations of different types of objects, to general reasoning, and ultimately to general manipulation. This will enable robots to attain human-like perception and manipulation skills for different types of objects.&lt;/p>
&lt;hr>
&lt;h2 id="past-research-projects">Past Research Projects&lt;/h2>
&lt;h3 id="rl-gsbridge-3d-gaussian-splatting-based-real2sim2real-method-for-robotic-manipulation-learning">&lt;strong>RL-GSBridge: 3D Gaussian Splatting Based Real2Sim2Real Method for Robotic Manipulation Learning&lt;/strong>&lt;/h3>
&lt;img src="https://cv4dt.github.io/uploads/research_video_robotics/Sim2real.gif" alt="RL-GSBridge demo" width="350">
&lt;p>Sim-to-Real refers to the process of transferring policies learned in simulation to the real world, which is crucial for achieving practical robotics applications. However, recent sim-to-real methods rely either on large amounts of augmented data or on large learning models, which is inefficient for specific tasks. To this end, we propose RL-GSBridge, a novel real-to-sim-to-real framework that incorporates 3D Gaussian Splatting into the conventional RL simulation pipeline, enabling zero-shot sim-to-real transfer for vision-based deep reinforcement learning.&lt;/p>
&lt;p>Through a series of sim-to-real experiments, including grasping and pick-and-place tasks, we demonstrate that RL-GSBridge maintains a satisfactory success rate in real-world task completion during sim-to-real transfer. Furthermore, a series of rendering metrics and visualization results indicate that our proposed mesh-based 3D GS reduces artifacts in unstructured objects, demonstrating more realistic rendering performance. The related work was published at the top robotics conference &lt;a href="https://ieeexplore.ieee.org/abstract/document/11128103" target="_blank" rel="noopener">ICRA&lt;/a>.&lt;/p>
&lt;h3 id="sni-slam-semantic-neural-implicit-slam">&lt;strong>SNI-SLAM: Semantic Neural Implicit SLAM&lt;/strong>&lt;/h3>
&lt;img src="https://cv4dt.github.io/uploads/research_video_robotics/SNI_SLAM.gif" alt="SLAM demo" width="550">
&lt;p>We propose SNI-SLAM, the first semantic SLAM system utilizing neural implicit representation that simultaneously performs accurate semantic mapping, high-quality surface reconstruction, and robust camera tracking. In this system, we introduce a hierarchical semantic representation to allow multi-level semantic comprehension for top-down structured semantic mapping of the scene. In addition, to fully utilize the correlation between multiple attributes of the environment, we integrate appearance, geometry, and semantic features through cross-attention for feature collaboration. Our SNI-SLAM method demonstrates superior performance over all recent NeRF-based SLAM methods in terms of mapping and tracking accuracy on multiple datasets, while also showing excellent capabilities in accurate semantic segmentation and real-time semantic mapping. The related work was published at the top computer vision conference &lt;a href="https://openaccess.thecvf.com/content/CVPR2024/papers/Zhu_SNI-SLAM_Semantic_Neural_Implicit_SLAM_CVPR_2024_paper.pdf" target="_blank" rel="noopener">CVPR&lt;/a>.&lt;/p>
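&lt;p>The cross-attention fusion step can be sketched as follows (a generic illustration under our assumptions, not the exact SNI-SLAM module): features of one attribute form the queries, while another attribute provides the keys and values:&lt;/p>
&lt;pre>&lt;code class="language-python">import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Fuse two feature sets: queries from one, keys/values from the other."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, appearance, geometry):
        # appearance, geometry: (batch, n_tokens, dim)
        fused, _ = self.attn(query=appearance, key=geometry, value=geometry)
        return fused + appearance  # residual keeps the original features
&lt;/code>&lt;/pre>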
&lt;h3 id="learning-of-long-horizon-sparse-reward-robotic-manipulator-tasks-with-base-controllers">&lt;strong>Learning of Long-Horizon Sparse-Reward Robotic Manipulator Tasks With Base Controllers&lt;/strong>&lt;/h3>
&lt;img src="https://cv4dt.github.io/uploads/research_video_robotics/20_arxiv_DDPGwB.gif" alt="RL robot arm demo" width="350">
&lt;p>Deep reinforcement learning (DRL) enables robots to perform intelligent tasks end-to-end. However, many challenges remain for long-horizon sparse-reward robotic manipulator tasks. We propose a method for learning long-horizon sparse-reward tasks utilizing one or more existing traditional controllers, termed base controllers. The experiments demonstrated that the learned policies steadily outperform the base controllers. Compared to previous works on learning from demonstrations, our method improves sample efficiency by orders of magnitude while also improving performance. Overall, our method has the potential to leverage existing industrial robot manipulation systems to build more flexible and intelligent controllers. The related work was published in the top AI journal &lt;a href="https://ieeexplore.ieee.org/abstract/document/9882014" target="_blank" rel="noopener">IEEE T-NNLS&lt;/a>.&lt;/p></description></item><item><title>To Glue or Not to Glue? Classical vs Learned Image Matching for Mobile Mapping Cameras to Textured Semantic 3D Building Models</title><link>https://cv4dt.github.io/publication/gaisbauer-2025-glue/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/publication/gaisbauer-2025-glue/</guid><description/></item><item><title>TUM2TWIN: Introducing the Large-Scale Multimodal Urban Digital Twin Benchmark Dataset</title><link>https://cv4dt.github.io/publication/tum-2-twin/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/publication/tum-2-twin/</guid><description/></item><item><title>Analyzing the impact of semantic LoD3 building models on image-based vehicle localization</title><link>https://cv4dt.github.io/publication/bieringer-2024-analyzing/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/publication/bieringer-2024-analyzing/</guid><description/></item><item><title>Enriching Thermal Point Clouds of Buildings using Semantic 3D building Models</title><link>https://cv4dt.github.io/publication/zhu-2024-enriching/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/publication/zhu-2024-enriching/</guid><description/></item><item><title>Reviewing Open Data Semantic 3D City Models to Develop Novel 3D Reconstruction Methods</title><link>https://cv4dt.github.io/publication/wysocki-2024-reviewing/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/publication/wysocki-2024-reviewing/</guid><description/></item><item><title>Classifying point clouds at the facade-level using geometric features and deep learning networks</title><link>https://cv4dt.github.io/publication/yuetan-deep-learning-official/</link><pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/publication/yuetan-deep-learning-official/</guid><description/></item><item><title>MLS2LoD3: Refining low LoDs building models with MLS point clouds to reconstruct semantic LoD3 building models</title><link>https://cv4dt.github.io/publication/wysocki-mls-2-lo-d-3/</link><pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/publication/wysocki-mls-2-lo-d-3/</guid><description/></item><item><title>Reconstructing facade details using MLS point clouds and Bag-of-Words approach</title><link>https://cv4dt.github.io/publication/froech-2023-reconstructing/</link><pubDate>Sun, 01 Jan 2023 00:00:00
+0000</pubDate><guid>https://cv4dt.github.io/publication/froech-2023-reconstructing/</guid><description/></item><item><title>Scan2LoD3: Reconstructing semantic 3D building models at LoD3 using ray casting and Bayesian networks</title><link>https://cv4dt.github.io/publication/wysocki-2023-scan-2-lod-3-reconstructingsemantic-3-d/</link><pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/publication/wysocki-2023-scan-2-lod-3-reconstructingsemantic-3-d/</guid><description/></item><item><title>Transferring facade labels between point clouds with semantic octrees while considering change detection</title><link>https://cv4dt.github.io/publication/schwarz-2023-transferring/</link><pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/publication/schwarz-2023-transferring/</guid><description/></item><item><title>Contact</title><link>https://cv4dt.github.io/contact/</link><pubDate>Mon, 24 Oct 2022 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/contact/</guid><description/></item><item><title>People</title><link>https://cv4dt.github.io/people/</link><pubDate>Mon, 24 Oct 2022 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/people/</guid><description/></item><item><title>Vision</title><link>https://cv4dt.github.io/vision/</link><pubDate>Mon, 24 Oct 2022 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/vision/</guid><description>&lt;p>&lt;strong>CV4DT&lt;/strong> aims to gather researchers willing to push forward the boundaries of machine learning, photogrammetry, and computer vision.
As shown above, our &lt;strong>CV4DT&lt;/strong> research agenda focuses on the following aspects:&lt;/p>
&lt;ul>
&lt;li>3D semantic understanding,&lt;/li>
&lt;li>3D semantic reconstruction,&lt;/li>
&lt;li>3D models as sensors,&lt;/li>
&lt;li>Uncertainty quantification &amp;ndash; overarching all three aspects.&lt;/li>
&lt;/ul>
&lt;p>The rationale is that those aspects are interdependent and indispensable in creating digital twins from any sensory data, enabling any digital simulations before real-world action occurs.&lt;/p>
&lt;p>We understand a &lt;em>digital twin&lt;/em> not as a mere 3D geometric representation of reality, but rather as a 3D model comprising: a) a minimum-viable, watertight 3D geometric representation; b) hierarchical semantics; and c) estimated uncertainty of both the predicted semantics and geometry, enabling updates of the digital twin in the presence of new evidence.&lt;/p>
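&lt;p>Read as a data structure, this definition could be sketched as follows (a hypothetical illustration of the three components, not an implemented schema):&lt;/p>
&lt;pre>&lt;code class="language-python">from dataclasses import dataclass
import numpy as np

@dataclass
class DigitalTwin:
    """A 3D model per the definition above: geometry, semantics, uncertainty."""
    vertices: np.ndarray               # a) watertight geometric representation
    faces: np.ndarray
    semantics: dict                    # b) hierarchical labels per element
    semantic_uncertainty: np.ndarray   # c) confidence of predicted labels
    geometric_uncertainty: np.ndarray  # c) positional uncertainty per vertex

    def update(self, evidence):
        """Revise geometry and semantics when new evidence arrives."""
        raise NotImplementedError  # the update rule depends on the sensing modality
&lt;/code>&lt;/pre>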
&lt;p>The ultimate goal is to create methods enabling robust digital twinning, delivering impact to society through real-time monitoring and simulation of physical systems, and leading to more efficient decision-making and reduced operational costs.
They shall also support sustainable development by optimising resource use and minimising environmental impact across industries.&lt;/p>
&lt;p>Beyond science itself, we aim to create an environment where researchers of any background will thrive and develop their skills and careers. But first and foremost&amp;hellip; have fun pursuing their passion!&lt;/p></description></item><item><title>Combining visibility analysis and deep learning for refinement of semantic 3D building models by conflict classification</title><link>https://cv4dt.github.io/publication/wysocki-visibility/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/publication/wysocki-visibility/</guid><description/></item><item><title>Refinement of semantic 3D building models by reconstructing underpasses from MLS point clouds</title><link>https://cv4dt.github.io/publication/wysocki-underpasses/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/publication/wysocki-underpasses/</guid><description/></item><item><title>TUM-FAÇADE: Reviewing and enriching point cloud benchmarks for façade segmentation</title><link>https://cv4dt.github.io/publication/tumfacade-paper/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/publication/tumfacade-paper/</guid><description/></item><item><title>Plastic surgery for 3D city models: A pipeline for automatic geometry refinement and semantic enrichment</title><link>https://cv4dt.github.io/publication/wysocki-2021-plastic/</link><pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/publication/wysocki-2021-plastic/</guid><description/></item><item><title>Unlocking point cloud potential: Fusing MLS point clouds with semantic 3D building models while considering uncertainty</title><link>https://cv4dt.github.io/publication/wysocki-2021-unlocking/</link><pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/publication/wysocki-2021-unlocking/</guid><description/></item><item><title>Jian Yang and Monica Hall Win the Best Paper Award at Wowchemy 2020</title><link>https://cv4dt.github.io/post/20-12-02-icml-best-paper/</link><pubDate>Wed, 02 Dec 2020 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/post/20-12-02-icml-best-paper/</guid><description>&lt;p>Congratulations to Jian Yang and Monica Hall for winning the Best Paper Award at the 2020 Conference on Wowchemy for their paper “Learning Wowchemy”.&lt;/p>
</description></item><item><title>Richard Hendricks Wins First Place in the Wowchemy Prize</title><link>https://cv4dt.github.io/post/20-12-01-wowchemy-prize/</link><pubDate>Tue, 01 Dec 2020 00:00:00 +0000</pubDate><guid>https://cv4dt.github.io/post/20-12-01-wowchemy-prize/</guid><description>&lt;p>Congratulations to Richard Hendricks for winning first place in the Wowchemy Prize.&lt;/p>
</description></item></channel></rss>