
Research

LiMRSF: Real-time Visualization of 3D Reconstruction in Mixed Reality

Simultaneous Localization and Mapping (SLAM) has been extensively studied in the fields of computer vision and robotics, enabling the real-time tracking of autonomous vehicles and the 3D reconstruction of environments. Despite advancements in Visual-SLAM and Visual-LiDAR SLAM, challenges persist in indoor applications, where sensor limitations and dynamic scenes often result in incomplete and inconsistent point cloud data. This study introduces the LiDAR-MR-RGB-Sensor-Fusion (LiMRSF) system, a novel approach that integrates LiDAR and RGB cameras with mixed reality (MR) devices to visualize and validate SLAM-generated colorized point clouds in real time. By utilizing the ORB algorithm and advanced error correction techniques, the LiMRSF system allows users to immediately detect and correct errors, such as drift and double walls, during the scanning process by visualizing the reconstructed 3D model on an MR device. This reduces the need for repetitive scans and enhances the quality of indoor 3D reconstructions. The system's real-time visualization capabilities offer significant potential for building information modeling (BIM) and structural engineering, providing a cost-effective and reliable solution in an era of declining expert availability.

Picture1.png

06/2024~Present

In Progress

This work is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2022M1A3C2085237).

Development of Mobile App to Enable Local Update on Mapping API: Construction Sites Monitoring through Digital Twin

Unmanned ground vehicles (UGVs) have emerged as a promising solution for reconnaissance missions, overcoming the labor cost, frequency, and subjectivity issues associated with manual procedures. However, in dynamic environments such as construction sites, the constantly changing conditions hinder a manager's planning of the UGV's paths. For an autonomous monitoring mission, path planning must rely on a map that reflects the site's most recent scene. In this study, we develop a mobile app capable of local map updates by overlaying an image on a mapping API (e.g., Google Maps), thus working as a digital twin that provides a dynamic representation of the updated terrain over the mapping API. UGV operators can draw a path on such an updated construction scene using a tablet PC or smartphone. Discrete GPS information (e.g., latitude and longitude) is then extracted from the drawn path for the UGV's controller. In the overlaying procedure, the homographic relation between the image and the map is automatically computed, and the image is then projected. We successfully demonstrated the capabilities of the technique with two construction sites and a soccer field, using images from the ground and from satellite, respectively. The resulting error is quantitatively measured and analyzed.
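The overlay step can be sketched as follows (a minimal, self-contained illustration with made-up coordinates, not the app's actual code): given four correspondences between image pixels and map coordinates, the homography is estimated by the standard direct linear transform (DLT) and then used to project any image point onto the map.

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT: solve A h = 0 for the 3x3 homography from >= 4 point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Apply the homography to one (x, y) point."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Hypothetical correspondences: image-corner pixels -> map (lat, lon) coords.
src = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]
dst = [(37.001, 127.000), (37.001, 127.004),
       (36.998, 127.004), (36.998, 127.000)]
H = estimate_homography(src, dst)
print(project(H, (960, 540)))  # center of the image lands mid-overlay
```

In practice the correspondences would come from the operator's taps on the map, and the projected coordinates would be handed to the UGV controller as waypoints.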

alfredo_paper1.png

10/2022~11/2023

Paper Published [1]

This work was funded by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. NRF-2022R1F1A106361711 and 2022M1A3C2085237).

LogPath: Log Data Based Energy Consumption Analysis Enabling Electric Vehicle Path Optimization

Vehicle navigation and path optimization require a more meticulous approach when dealing with EVs (electric vehicles) and SDVs (software-defined vehicles), due to lengthy charging times and the lack of charging infrastructure. Long-distance freight EV trucking needs path guidance with accurate energy consumption estimates to prevent charging-related failures. We developed a novel energy consumption estimation approach that uses only battery log data to extract major vehicle parameters, increasing EV navigation accuracy without additional sensors. This is enabled by extracting multiple drive modes from the log data for analysis. The system provides 1) routes, 2) charge locations, 3) charging times, and 4) optimal vehicle speeds that guarantee the shortest travel time. We successfully validated the system using log data collected from an EV and Tesla's Supercharging map in the US, and compared it with a commercially available navigation system, Tesla's trip planner, whose capabilities include only charging time and routing.
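The parameter-extraction idea can be illustrated with a hedged sketch (the model form and coefficients below are assumptions for illustration, not the paper's): instantaneous battery power from the log is regressed against a standard longitudinal-dynamics basis (rolling resistance ∝ v, aerodynamic drag ∝ v³, inertia ∝ a·v), and the fitted coefficients then feed a per-segment energy estimate.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "battery log": speed v (m/s) and acceleration a (m/s^2) samples.
v = rng.uniform(5.0, 30.0, 2000)
a = rng.normal(0.0, 0.5, 2000)

# Hypothetical ground-truth coefficients: rolling, drag, and inertia terms,
# so that P = c_roll*v + c_drag*v^3 + c_mass*a*v (watts).
c_roll, c_drag, c_mass = 180.0, 0.9, 9000.0
power = c_roll * v + c_drag * v**3 + c_mass * a * v
power += rng.normal(0.0, 200.0, v.size)  # sensor noise

# Least-squares fit recovers the vehicle parameters from the log alone.
X = np.column_stack([v, v**3, a * v])
coef, *_ = np.linalg.lstsq(X, power, rcond=None)
print(coef)  # close to [c_roll, c_drag, c_mass]

def segment_energy_wh(v_kmh, dist_km):
    """Energy (Wh) for a constant-speed segment: P(v) * time, a = 0."""
    vs = v_kmh / 3.6
    p = coef[0] * vs + coef[1] * vs**3
    return p * (dist_km * 1000 / vs) / 3600.0
```

Evaluating `segment_energy_wh` across candidate speeds is the kind of primitive an optimizer could use to trade travel time against charging stops.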

jonathan_paper1.png

01/2021 - 08/2024

Under Review

This work is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2022M1A3C2085237).

Automating Visual Assessment of Infrastructure exploiting Computer Vision and Big Visual Data

The infrastructure that forms our transportation networks, communication systems, power grids, and utility networks requires constant attention. Currently, human visual inspection is still the most trusted and pervasive method for assessing the condition of our infrastructure over its lifetime. However, this process is not as simple as it may seem: the limitations of human inspection revolve around consistency, accessibility, safety, and efficiency. We propose to establish the scientific foundations for fully automated UAV systems that can perform reliable visual inspection of large-scale infrastructure systems while accounting for the realistic challenges encountered in the field. These systems will be able to perform certain maintenance tasks, and their power generation system will allow them to operate indefinitely.

Research#1_image_fixed_edited.jpg

03/2021 - Present

Paper Published [1] [2]

supported by NRF (National Research Foundation of Korea) under Grant No. NRF-2021R1G1A1012298. Award funding of 30 million KRW (approximately 30k USD) for 03/01/2021 – 02/28/2022

Integrating Human and Machine for Post-Disaster Visual Data Analytics

Reconnaissance teams collect perishable data after each disaster to learn about building performance. However, these large image sets are often not adequately curated, nor do they have sufficient metadata (e.g., GPS), hindering any chance of identifying images of the same building when collected by different reconnaissance teams. In this study, Siamese convolutional neural networks (S-CNNs) are implemented and repurposed to establish a building search capability suitable for post-disaster imagery. This method can automatically rank and retrieve corresponding building images in response to a single image query. A quantitative performance evaluation is conducted by examining two metrics introduced for this application: Similarity Score (SS) and Similarity Rank (SR).
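The retrieval step can be sketched in plain NumPy (hypothetical random embeddings stand in for the S-CNN outputs): each image is mapped to a feature vector, the database is ranked by cosine similarity to the query, and the position of the true matching building in that ranking is its Similarity Rank.

```python
import numpy as np

rng = np.random.default_rng(7)

def similarity_scores(q, db):
    """Cosine similarity between a query vector and each database vector."""
    q = q / np.linalg.norm(q)
    db = db / np.linalg.norm(db, axis=1, keepdims=True)
    return db @ q

# Hypothetical 128-D embeddings for 50 database buildings.
database = rng.normal(size=(50, 128))

# The query is "another photo of building 17": same embedding plus small noise.
query = database[17] + 0.05 * rng.normal(size=128)

scores = similarity_scores(query, database)   # an SS per database image
ranking = np.argsort(-scores)                 # best match first
sr = int(np.where(ranking == 17)[0][0]) + 1   # Similarity Rank (1 = top)
print(sr)
```

With real S-CNN embeddings, photos of the same building taken by different teams would cluster in the feature space, so a low SR indicates a successful cross-team match.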

Research#2_image_fixed.png

01/2019 - Present

Paper Published [1] [2]

supported by NSF under Grant No. NSF-1835473

STORM: Safeguarding Cultural Heritage through Organisational Resources Management

Graffiti is common in many communities and affects even our historical and heritage structures. Though graffiti may be a form of modern art, in most cases these markings are undesirable and unsightly, and they can impact a community financially by consuming maintenance funds and influencing the perception of safety. Photographs can be captured quickly and are already posted frequently by ordinary citizens (e.g., tourists, residents, visitors). In this research, we have developed a vision-based graffiti detection technique using a convolutional neural network. Images collected from historical structures-of-interest within a community can be utilized to automatically inspect for graffiti markings. The robust graffiti detector is built using a database of damaged or contaminated structures gathered during a recent European Union project, entitled Safeguarding Cultural Heritage through Technical and Organisational Resources Management (STORM).

Research#3_image_fixed_edited.jpg

04/2017-05/2020

Paper Published [1]

collaborated with the EU (European Union) under Grant No. H2020 n. 700191

RETH: Resilient Extra-Terrestrial Habitat

Habitation on planets outside Earth has been gaining the interest of space agencies, such as NASA, and industry, e.g., SpaceX. The habitats should function safely and be resilient under different hazards, including meteorite impacts, thermal fluctuations, earthquakes, and radiation. Recent evidence has indicated the existence of underground openings on the Moon in the form of "lava tubes," believed to form as lava channels crust at the surface while the lava underneath flows away. These lava tubes could offer immediate shelter for temporary missions as well as permanent human settlements. Evidence for the presence of the tubes was first provided by careful inspection of data taken from the surface of the Moon by JAXA's SElenological and Engineering Explorer (SELENE) spacecraft and NASA's Lunar Reconnaissance Orbiter (LRO), and later by the gravity data from the Gravity Recovery and Interior Laboratory (GRAIL).

Research#4_image_fixed_edited.jpg

08/2018–01/2019

Paper Published [1]

supported by the New Horizon Program at Purdue University and NASA (the National Aeronautics and Space Administration); 3D models and videos were published in numerous articles worldwide (e.g., usatoday.com, space.com); available at https://phys.org/news/2019-07-humans-lava-tubes-moon.html

Active Citizen Engagement to Enable Lifecycle Management of Infrastructure Systems

Crowdsourcing provides a new opportunity to gather numerous photos of certain structures from various viewpoints and at frequent intervals, potentially enabling remote visual assessment. In this study, we exploit state-of-the-art computer vision techniques to streamline structural inspection and support lifecycle assessment by using visual data collected from ordinary citizens. One major inherent challenge in the use of such data is that they include a significant amount of irrelevant information, because they are not captured with inspection purposes in mind. To address this challenge, we develop an automated method to filter out unnecessary portions of the images and extract highly relevant regions-of-interest for reliable inspection.

Research#5_image_fixed_edited.jpg

05/2017–08/2018

Paper Published [1]

supported by NSF under Grant No. NSF-1645047

Vision-based Visual Inspection System for a Large Number of Aerial Images

After a disaster strikes an urban area, damage to the façades of a building may produce dangerous falling hazards that jeopardize pedestrians and vehicles. In this research, we have developed an approach to perform rapid and accurate visual inspection of building façades using images collected from UAVs. An orthophoto corresponding to any reasonably flat region on the building (e.g., a façade or building side) is automatically constructed using a structure-from-motion (SfM) technique, followed by image stitching and blending. Based on the geometric relationship between the collected images and the constructed orthophoto, high-resolution regions-of-interest are automatically extracted from the collected images, enabling efficient visual inspection.

Research#6_image_fixed.png

01/2017–12/2017

Paper Published [1] [2]

supported by NSF under Grant No. NSF-1645047

Sensor Integrated Autonomous Flight UAV System Development

The objective of this project is to connect a Pixhawk and an Arduino so that a UAV, or drone, can gather data by itself and identify the data needed. The Arduino is connected to the Pixhawk, the drone's main flight controller, and commands the Pixhawk how to fly. While the drone is flying, its onboard sensors are programmed to collect data and identify the data needed using image-based methods. The ultimate goal is to mount a ZED (3D camera) or another stereo vision camera on the quadcopter and perform real-time 3D mapping using the Arduino. Linux Ubuntu is used for the compiling process. This research would help people live in a more convenient and safer world. It will be useful in dangerous environments, such as biohazard areas or burned areas, where there is very little space to move around. The ArduCopter will fly over those regions and collect data safely.

Research#7_image_fixed_edited.jpg

05/2016–05/2021

supported by INDOT (Indiana Department of Transportation) under Grant No. SPR-4006

Automated Region-of-Interest Localization and Classification for Facility Visual Assessment

Complementary advances in computer vision and new sensing platforms have mobilized the research community to pursue automated methods for vision-based visual evaluation of our civil infrastructure. Spatial and temporal limitations typically associated with sensing in large-scale structures are being torn down through the use of low-cost aerial platforms with integrated high-resolution visual sensors. The large volumes of complex visual data, collected under uncontrolled circumstances (e.g., varied lighting, cluttered regions, occlusions, and variations in environmental conditions), impose a major challenge to such methods, especially when only a tiny fraction of the data is used for conducting the actual assessment. Regions of interest are extracted here using a structure-from-motion algorithm. The capability of the technique is successfully demonstrated using a full-scale highway sign truss with welded connections.

Research#8_image_fixed_edited.jpg

05/2015–05/2017

Paper Published [1] [2] [3]

Image-Based Collection and Measurements for Construction Pay Items

Measuring the actual quantity of pay items placed at a site is an important step in the timely completion of a construction project. Prior to each payment to contractors and suppliers, measurements are made to document the amount of pay items placed at the site. This manual process has substantial risk for personnel, but it could be made more efficient and, as a result, be less prone to human errors. In this project, a customized software tool package was developed to address these concerns. The major benefits of the tool package include (1) cost savings through accelerated pay item measurements; (2) reduced risk to on-site personnel; (3) consistency in measurements leading to greater efficiency in the measurement process; and (4) automated documentation of measurements made for improved record keeping.

Research#9_image_fixed.png

05/2015–08/2017

Paper Published [1]

supported by INDOT (Indiana Department of Transportation) under Grant No. SPR-4006

Parametric Analysis of Scramjet Engine Varying Material and Fuel

The performance of a hypersonic flight vehicle depends on existing materials and fuels; this work presents the performance of the ideal scramjet engine for three different combustion chamber materials and three different candidate fuels. Engine performance is explored by parametric cycle analysis of the ideal scramjet as a function of the material's maximum service temperature and the lower heating value of jet engine fuels. The objective of this work is to explore material operating temperatures and fuel possibilities for the combustion chamber of a scramjet propulsion system, showing how they relate to scramjet performance through seven engine parameters: specific thrust, fuel-to-air ratio, thrust-specific fuel consumption, thermal efficiency, propulsive efficiency, overall efficiency, and thrust flux. This work yields simple algebraic equations for scramjet performance that are similar to those of the ideal ramjet, ideal turbojet, and ideal turbofan engines.
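Since the closed-form results resemble those of the ideal ramjet, a hedged sketch can evaluate ideal-ramjet-style cycle relations at a hypothetical flight condition (the numbers and the ramjet-style formulas are assumptions for illustration, not the paper's scramjet equations):

```python
import math

# Hypothetical inputs: Mach 6 flight at T0 = 220 K, combustor limited by a
# material maximum service temperature Tmax = 2400 K, hydrocarbon fuel.
gamma, R, cp = 1.4, 287.0, 1004.5                  # calorically perfect air
M0, T0, Tmax, h_pr = 6.0, 220.0, 2400.0, 43.0e6    # h_pr: lower heating value, J/kg

a0 = math.sqrt(gamma * R * T0)           # ambient speed of sound, m/s
tau_r = 1.0 + 0.5 * (gamma - 1) * M0**2  # ram temperature ratio
tau_lam = Tmax / T0                      # cycle limit set by the material

# Ideal-ramjet-style performance parameters (Mattingly-type cycle analysis).
specific_thrust = a0 * M0 * (math.sqrt(tau_lam / tau_r) - 1.0)  # N per kg/s
f = cp * T0 * (tau_lam - tau_r) / h_pr                          # fuel-to-air ratio
tsfc = f / specific_thrust                                      # kg/(N*s)
eta_thermal = 1.0 - 1.0 / tau_r
eta_prop = 2.0 / (math.sqrt(tau_lam / tau_r) + 1.0)
eta_overall = eta_thermal * eta_prop
```

Sweeping `Tmax` over candidate materials and `h_pr` over candidate fuels reproduces the kind of parametric trade study the abstract describes.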

Research#10_image_fixed.png

08/2012–05/2014

Paper Published [1] [2] [3] [4]

supported by the graduate program at the University of Mississippi
