News
- [Jul 10, 2023] All presentation videos are publicly available online. Please check the Workshop Schedule page.
- [May 23, 2023] Workshop schedule and accepted papers announced!
- [Feb 15, 2023] We are happy to announce the BuildingNet challenge (hosted on EvalAI) as part of the current workshop.
- [Jan 12, 2023] Workshop website launched, with preliminary invited speakers announced.
Introduction
Dealing with the huge diversity and complexity of 3D data has become a central research challenge for various applications in computer vision, graphics, and robotics. One key approach that researchers have found promising is to decompose complex 3D data into smaller, more easily composable subcomponents. For 3D objects, this could be a decomposition into spatially localized parts and a sparse set of relationships between them; for scenes, it could be a scene graph that describes rich inter-object relationships. Similarly, a navigation or interaction task in robotics can be decomposed into separate concepts or submodules related by spatial, causal, or semantic relationships. Decomposing complex 3D data or tasks into meaningful sub-units also encourages learning more generalizable and interpretable representations, which are crucial for mitigating the black-box pitfalls of deep learning.
Unlike traditional connectionist approaches in deep learning, structural and compositional learning includes components that lean toward the symbolic end of the spectrum, which raises many challenging open research questions about how to represent the composable sub-units and how to learn efficiently over them. People from different fields and backgrounds use different structural and compositional representations of their 3D data for different applications. In this workshop series, we bring them together for an explicit discussion of the advantages and disadvantages of different representations and approaches, and to share, discuss, and debate diverse opinions on the following questions:
- How to properly decompose different kinds of 3D data and tasks into sub-components?
- What 3D data structures should we use for different data types, tasks and applications?
- How to conduct efficient learning over structured 3D representations?
- How are the approaches different for different downstream fields/applications?
We successfully hosted the first StruCo3D workshop at ICCV 2021, which was very well received and inspiring to the audience. For this second workshop, we are inviting an entirely new lineup of speakers who will present recent research progress and the latest trends related to the workshop topics, including but not limited to:
- Compositional Neural Radiance Fields for Rendering and Editing;
- Program Composition and Synthesis based on Large Language and Vision-Language Models;
- Unsupervised Object Discovery Methods (e.g., Slot Attention);
- Part-whole Hierarchies, Capsule Networks, Attention-based Transformers, etc.
Invited Keynote Speakers
Xin Tong | Angela Dai | Andrea Tagliasacchi | Dieter Fox
Invited Spotlight Speakers
Kiana Ehsani | Fei Xia | Jun Gao | Amir Hertz | Kenny Jones | Mikaela Uy
Call for Papers
We accept both archival and non-archival paper submissions. Accepted archival papers will be included in the CVPR 2023 conference proceedings, while non-archival papers will only be presented at the workshop. Authors of papers already accepted to the CVPR main conference or other previous conferences are welcome to present their work in the non-archival track. Every accepted paper will have the opportunity to give a 5-minute spotlight presentation and host a poster at the workshop.
Please use the official CVPR template for your submission. Make sure to anonymize your submission and keep the main content to at most eight pages, excluding references. Supplementary material is allowed as a single PDF or ZIP file. All new papers will be peer-reviewed by three experts in the field in a double-blind manner. Papers previously accepted to conferences do not require peer review (please clearly mark the accepting conference at the end of the paper title and attach a copy of the acceptance notification email at the end of the submission PDF).
Submission Site: https://cmt3.research.microsoft.com/StruCo3D2023
Timeline Table (11:59 PM Pacific Time)
- Mar 17 2023, Fri: Paper submission deadline
- Apr 4 2023, Tue: Review deadline
- Apr 5 2023, Wed: Decision announced to authors
- Apr 14 2023, Fri: Camera ready deadline
Program Committee
We would like to extend our heartfelt gratitude to the following program committee members who generously volunteered their time to review the paper submissions:
Xingguang Yan | Rundi Wu | Xiang Xu | Albert Matveev | Haoxiang Guo |
Jiteng Mu | Fangyin Wei | Ruojin Cai | Jialei Huang | Congyue Deng |
Yujia Liu | Mutian Xu | Mikaela Angelina Uy | Konstantinos Tertikas | Aleksei Bokhovkin |
Jiahui Lei | Xiaomeng Xu | Can Gumeli | Haoran Geng | Zhi-Hao Lin |
Jia-Mu Sun | Rohith Agaram | Nicklas Hansen | Renrui Zhang | Georg Hess |
Gopal Sharma | Siming Yan |
The BuildingNet Challenge
As part of this workshop, we are hosting the BuildingNet challenge. BuildingNet is a publicly available large-scale dataset of annotated 3D building models whose exteriors and surroundings are consistently labeled. For more information about the BuildingNet dataset, please visit the dataset's website.
Overview
The current challenge includes two main phases for mesh and point cloud semantic labeling. In the first phase, called "BuildingNet-Mesh", algorithms can access the mesh data, including subgroups. The second phase, called "BuildingNet-Points", is designed for large-scale point-based processing algorithms that must deal with unstructured point clouds. Both phases are evaluated using mean Part IoU and Shape IoU, along with classification accuracy.
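For reference, below is a minimal sketch of how Part IoU, Shape IoU, and classification accuracy are commonly computed from predicted and ground-truth per-point (or per-triangle) labels. The function names and the exact averaging conventions here are illustrative assumptions only; the official evaluation script on EvalAI is the authoritative definition of the challenge metrics.

```python
import numpy as np

def iou_per_label(pred, gt, num_labels):
    """Per-label IoU for one shape; labels absent from both pred and gt are NaN."""
    ious = np.full(num_labels, np.nan)
    for c in range(num_labels):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:
            ious[c] = inter / union
    return ious

def evaluate(pred_list, gt_list, num_labels):
    """pred_list / gt_list: per-shape integer label arrays (points or triangles)."""
    all_ious = np.stack([iou_per_label(p, g, num_labels)
                         for p, g in zip(pred_list, gt_list)])
    # Shape IoU: mean IoU within each shape (over labels present), averaged over shapes.
    shape_iou = np.nanmean(np.nanmean(all_ious, axis=1))
    # Part IoU: mean IoU per label across shapes, averaged over labels.
    part_iou = np.nanmean(np.nanmean(all_ious, axis=0))
    # Classification accuracy: fraction of correctly labeled elements over all shapes.
    correct = sum(np.sum(p == g) for p, g in zip(pred_list, gt_list))
    total = sum(len(g) for g in gt_list)
    return part_iou, shape_iou, correct / total
```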
Participate
The challenge is hosted on the EvalAI online evaluation platform. To participate, you will need to create an account on EvalAI and form a participant team. For more information, please refer to the following guide.
Timeline Table (11:59 PM Pacific Time)
- Mar 15 2023, Wed: Competition starts
- May 24 2023, Wed: Competition ends
- May 29 2023, Mon: Notification to Participants
Organizers
Contact Info
E-mail: kmo@nvidia.com
Acknowledgements
Website template borrowed from: https://futurecv.github.io/ (Thanks to Deepak Pathak)