
RESEARCH
Overview
MEIC (Mechanical Systems with Intelligence and Computer Vision) Lab
The MEIC Lab aims to establish an AI-embodied engineering intelligence framework in which perception, reasoning, and actuation are organically integrated. We define the Simulation-to-Real (Sim2Real) gap, the discrepancy between physical environments and virtual simulations, as the primary bottleneck in realizing Physical AI. To overcome this challenge, our research focuses on spatial, multimodal, and physics-informed convergence technologies that tightly couple digital models with real-world dynamics.
Representative Technologies
- Neural Rendering & Spatial AI Group
Through our proprietary Micro-Splatting and LiDAR-3DGS pipelines, we enable photorealistic digitization of large-scale facilities and develop perception-driven digital twin frameworks capable of interpreting sensor data in real time. Our work advances spatial AI systems that bridge high-fidelity 3D reconstruction with intelligent environmental understanding.
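The splatting idea behind pipelines of this kind can be illustrated with a toy renderer: depth-sorted Gaussians are composited front to back, each one attenuating the light reaching the splats behind it. The sketch below is a minimal, hypothetical illustration of that compositing rule, not the lab's Micro-Splatting or LiDAR-3DGS implementation (which operate on full 3D covariances and learned parameters).

```python
import numpy as np

def splat_gaussians(means, colors, opacities, sigmas, H=32, W=32):
    """Toy 2D Gaussian splatting via front-to-back alpha compositing.

    means: (N, 3) array of (x, y, depth) in pixel coordinates;
    colors: (N, 3) RGB in [0, 1]; opacities: (N,); sigmas: (N,) isotropic std.
    Returns an (H, W, 3) image.
    """
    ys, xs = np.mgrid[0:H, 0:W]
    image = np.zeros((H, W, 3))
    transmittance = np.ones((H, W))      # fraction of light still unblocked per pixel
    order = np.argsort(means[:, 2])      # sort splats front-to-back by depth
    for i in order:
        x0, y0 = means[i, 0], means[i, 1]
        # isotropic 2D Gaussian footprint of this splat
        g = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigmas[i] ** 2))
        alpha = np.clip(opacities[i] * g, 0.0, 0.999)
        image += (transmittance * alpha)[..., None] * colors[i]
        transmittance *= 1.0 - alpha     # nearer splats occlude later ones
    return image

img = splat_gaussians(
    means=np.array([[16.0, 16.0, 1.0], [20.0, 16.0, 2.0]]),   # red in front
    colors=np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]),
    opacities=np.array([0.9, 0.9]),
    sigmas=np.array([3.0, 3.0]),
)
```

Because the nearer (red) splat is composited first, it dominates the overlapping region, which is exactly the occlusion behavior that makes splatting usable for photorealistic reconstruction.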
- Physics-Digital Interaction & Multimodal AI Group
We develop multimodal fusion pipelines integrating AR, LiDAR, and inertial sensing data, alongside a Distributed Collaborative Remote Diagnostics Metaverse (DCRM) platform. These technologies enable real-time alignment of large-scale physical site dynamics with virtual environments, supporting immersive remote monitoring and collaborative decision-making.
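A core step in any such fusion pipeline is expressing every sensor's measurements in one shared world frame so the virtual environment can mirror the physical site. The sketch below shows that step for LiDAR points and an inertially estimated pose; the frame names and calibration inputs are illustrative assumptions, not the DCRM platform's actual interfaces.

```python
import numpy as np

def align_lidar_to_world(points_lidar, R_wi, t_wi, R_il, t_il):
    """Map LiDAR points into a shared world/virtual frame.

    points_lidar: (N, 3) points in the LiDAR frame.
    R_wi, t_wi: IMU pose in the world frame (e.g. from inertial odometry).
    R_il, t_il: fixed LiDAR-to-IMU extrinsic calibration.
    """
    p_imu = points_lidar @ R_il.T + t_il   # LiDAR frame -> IMU frame
    return p_imu @ R_wi.T + t_wi           # IMU frame -> world frame

# Example: IMU yawed 90 degrees about the world z-axis, offset 10 m along x
R_wi = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
pts = align_lidar_to_world(
    points_lidar=np.array([[1.0, 0.0, 0.0]]),
    R_wi=R_wi, t_wi=np.array([10.0, 0.0, 0.0]),
    R_il=np.eye(3), t_il=np.zeros(3),
)
```

Applying this transform to each incoming LiDAR sweep, at the pose interpolated from the inertial stream, is what keeps the virtual scene synchronized with physical site dynamics in real time.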
- Physics-Informed Actuation & Embodied AI Group
We advance goal-oriented predictive intelligence based on physics-informed AI. By embedding physical laws directly into neural networks through EV-PINN and PINO architectures, we enable accurate inference of complex physical behaviors and system states even under limited data conditions, supporting adaptive virtual sensing capabilities. Furthermore, by integrating the vision-based robotic control platform R-Zoom with learned models, we develop embodied AI systems in which hardware autonomously adapts to unforeseen real-world variations that were not captured in simulation, enabling robust operation beyond pre-defined control regimes.
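The "embedding physical laws into neural networks" idea can be made concrete with the loss structure shared by PINN-style models: a data term on sparse measurements plus a residual term that penalizes violations of the governing equation at unlabeled collocation points. The toy below uses a one-parameter model for the ODE du/dt = -k·u purely to expose that structure; real EV-PINN and PINO architectures use deep networks with automatic differentiation.

```python
import numpy as np

def physics_informed_loss(a, t_data, u_data, t_col, k=0.5):
    """Toy physics-informed loss for the ODE du/dt = -k * u.

    The 'network' is the one-parameter model u(t; a) = exp(a * t),
    so its time derivative du/dt = a * exp(a * t) is exact.
    """
    u = np.exp(a * t_data)
    data_loss = np.mean((u - u_data) ** 2)          # fit sparse sensor data
    # physics residual du/dt + k*u, evaluated at collocation points
    residual = a * np.exp(a * t_col) + k * np.exp(a * t_col)
    physics_loss = np.mean(residual ** 2)
    return data_loss + physics_loss

t_data = np.array([0.0, 1.0])            # only two measurements available
u_data = np.exp(-0.5 * t_data)           # generated by the true decay k = 0.5
t_col = np.linspace(0.0, 2.0, 20)        # unlabeled collocation points
losses = {a: physics_informed_loss(a, t_data, u_data, t_col)
          for a in (-1.0, -0.5, 0.0)}
```

The physics term supplies supervision everywhere the governing equation holds, which is why such models can infer system states accurately even under limited data conditions, the property the virtual-sensing work above exploits.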

