PhD Proposal Presentation Announcement
Space AS Interface: A Spatial Interface for Human-Robot Interaction in Robotic-Supported Construction
Jingyang (Leo) Liu, Ph.D. Candidate in Computational Design
Date: Dec 5, 2022
Time: 11:30 am
Location: MMCH 107
Abstract
In architectural studies, space is more than a volume or container: it organizes resources, confers identities, and establishes socio-cultural protocols among inhabitants. Meanwhile, spatial information such as distance, orientation, and spatial layout may orchestrate the interaction between humans and machines in a shared space. The emergence of spatial computing technologies, such as augmented and mixed reality (AR/MR), brings new possibilities for reconnecting the digital world with physical space through digitization, registration, and spatial interaction. Situated in this context, my research explores the mutual constitution of physical space and spatial computing. With a specific focus on robotic-supported construction, I will investigate the role of physical space in human-robot interaction (HRI) by asking three primary questions:
1. How might we use physical space to contextualize virtual information, making it easier to understand and engage with?
2. How might we leverage the relationship between physical environments and well-entrenched bodily skills for intuitive programming?
3. How can physical space serve as a shared spatial reference frame for both face-to-face and remote collaboration?

The potential contributions of this study are threefold: (1) A novel computational framework for integrating spatial computing technologies into robotic-supported construction practices. Informed by contextual knowledge from the Architecture, Engineering, and Construction (AEC) industry, the framework can potentially bridge the gap between existing human-robot interfaces and the sectoral challenges of robotic-supported construction. The framework is built on three key components: a motion planner, a middleware, and a spatial interface. The motion planner devises a collision-free strategy for exploring and digitizing a job site that may not be known a priori. The middleware maps raw sensor data to a high-level semantic representation through an end-to-end pipeline. The spatial interface allows the operator to interact with virtual information using well-entrenched spatial skills. Each component will be optimized for domain-specific requirements in terms of efficiency, robustness, and accuracy. (2) Case studies on how the proposed framework may facilitate visualization, control, supervision, and collaboration in robotic-assisted quality assessment and management (QA&M) tasks. (3) A conceptual framework for tracing, from a historical perspective, the tight linkage between physical environments and computation.

A technical evaluation and a participant experiment will be conducted to assess the framework's usability. The objective metrics include task accuracy, task completion time, and total coverage rate, analyzed with a one-way Analysis of Variance (ANOVA) using the experimental setup as a fixed effect. A qualitative evaluation will be conducted through observation of and interviews with participants.
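The planned one-way ANOVA can be sketched as follows. This is a minimal, dependency-free illustration of the analysis described above, not the study's actual pipeline: the three condition names and all timing values are hypothetical placeholders, and a real analysis would likely use a statistics package (e.g. `scipy.stats.f_oneway`) rather than this hand-rolled computation.

```python
# Hypothetical task-completion times (seconds) under three experimental
# setups; the single (fixed) factor is the setup. All data are invented.
groups = {
    "2d_screen":         [412.0, 398.5, 441.2, 405.8, 420.1],
    "ar_headset":        [365.3, 350.9, 372.4, 341.7, 358.6],
    "spatial_interface": [310.2, 325.8, 298.4, 315.9, 307.3],
}

k = len(groups)                               # number of conditions
n = sum(len(g) for g in groups.values())      # total observations
grand_mean = sum(sum(g) for g in groups.values()) / n

# Between-group sum of squares: variation of group means around the grand mean.
ss_between = sum(
    len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups.values()
)
# Within-group sum of squares: variation of observations around their group mean.
ss_within = sum(
    sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups.values()
)

# F statistic: ratio of between-group to within-group mean squares.
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F({k - 1}, {n - k}) = {f_stat:.2f}")
```

A large F relative to the F(k−1, n−k) distribution would indicate that mean completion time differs across setups, which the qualitative observations and interviews would then help interpret.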
At the intersection of robotics and spatial computing, this study may engender new possibilities for robotic-supported practices in the AEC industry that build upon physical immersion. Amid computing's spatial turn, the study will also open a broader set of inquiries into the interdependency between computation and the physical environment.
Advisory Committee
Dr. Daniel Cardoso Llach (Chair)
Associate Professor, Computational Design Track Chair, School of Architecture
Carnegie Mellon University
Prof. Joshua Bard
Associate Professor, Associate Head of Design Research, School of Architecture
Carnegie Mellon University
Dr. Mayank Goel
Associate Professor, School of Computer Science
Carnegie Mellon University