At Samsung Medical Center (SMC), the future of medical education is taking shape: students encounter an eerily lifelike face hovering in midair. With a wave of the student’s hand, the model pivots and rotates, revealing a web of muscles, blood vessels, and nerves; each structure, from the zygomatic muscles to the vascular system, is exposed in unprecedented detail.
Anatomy practice lies at the core of surgical expertise, but the traditional reliance on cadavers is increasingly impractical. Those available often come from elderly individuals in long-term care and lack the muscle and fat layers essential for effective learning. Additionally, the logistical and financial burdens of cadaver-based training impose significant constraints.
In response, a quiet revolution is underway in medical education—where surgeons hone their skills not in bustling operating rooms with real patients but within virtual, immersive environments.
This is the potential of extended reality (XR) – a blend of virtual reality (VR), augmented reality (AR), and mixed reality (MR) technologies.
This shift in medical practice through the use of XR was underscored at the Future Research Symposium held last Friday at Kangbuk Samsung Hospital. There, Professor Jung Yong-gi, an otorhinolaryngology specialist and Director of Research Strategy at SMC, explained XR’s potential to transform the future of surgery by combining precision and practical application in digitally secure environments.
“Medical VR has long been explored for patient care, but its implementation has been challenging,” Professor Jung said, reflecting on the field’s journey since the release of VR devices like Google Cardboard in 2014, followed by Samsung’s Gear VR in 2015.
Technical challenges and future of XR in surgery
XR encompasses a spectrum of technologies, from VR, which immerses users in digital environments, to AR, which overlays digital elements onto the real world, and MR, which integrates digital and physical interactions.
These technologies, collectively referred to as XR, have now found new purpose in fields like medicine, driving innovations in medical education with “improved user experience, higher resolution, increased computing power, and reduced latency,” as noted by Professor Jung, who also anticipates “significant growth in research and practical applications of XR in medical training.”
However, achieving realistic modeling, particularly in ear, nose, and throat (ENT) anatomy, requires an interactive user interface/user experience (UI/UX) that accurately simulates real dissection.
Addressing current shortcomings of AR, Professor Jung pointed out that existing models displaying organs like the heart or arteries often fall short of professional standards. “We need to enhance the quality of these models, ideally incorporating up to 1,000 anatomical structures,” he said.
Professor Jung stressed the importance of high-resolution displays, aiming for approximately 2,000 pixels per eye, and emphasized the value of haptic feedback for a realistic user experience. “With the increasing complexity of 3D models involving millions of voxels, robust computing power is essential to ensure smooth, crash-free anatomical practice,” he said.
At the symposium, Professor Jung introduced a project funded through the Sungkyunkwan University (SKKU) Academic Research Fund. He has developed a Unity-based platform incorporating around 1,000 anatomical structures, with the aim to “revolutionize medical education by leveraging XR technology to bridge the gap between theoretical knowledge and practical application, promising to transform the training of future surgeons.”
In his demonstration of the technology, Professor Jung showcased an educational platform designed for immersive anatomical study using the Meta Quest 3 headset, fitting the bulky standalone pass-through device on his head to illustrate its capabilities.
Professor Jung illustrated the platform’s capabilities by presenting a 3D model of a face that could be manipulated with precision. With a simple gesture, he demonstrated how the skin peels away to reveal a complex network of muscles, blood vessels, and nerves, all rendered in three-dimensional clarity.
"This tool offers a close-up view, which is invaluable since anatomy textbooks are in 2D," Professor Jung said, showcasing the platform's interactive features that allow users to examine anatomical relationships from every angle.
He detailed the platform’s functionality, explaining that users can dissect and reconstruct structures with a click, rotate the model, and even lay it down as if performing a real dissection using both hands.
The platform also offered customizable views, enabling users to isolate specific systems.
“If you only want to see blood vessels, nerves, or bones, you can do that,” he said. “Each structure is rendered in 3D, enabling a clear visualization of spatial relationships.”
With a clipping technique similar to CT or MRI scans, users can also view cross-sections and understand the spatial relationships of anatomical structures in detail.
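At its simplest, the cross-sectional view described here comes down to a clipping-plane test: every vertex of the 3D model is kept or culled depending on which side of a user-positioned plane it falls on. The following Python sketch illustrates the idea only; the article says the platform itself is Unity-based, and its actual implementation is not described.

```python
import numpy as np

def clip_vertices(vertices, plane_point, plane_normal):
    """Keep only vertices on the positive side of a clipping plane.

    vertices:     (N, 3) array of model-space points
    plane_point:  any point lying on the clipping plane
    plane_normal: the plane's outward normal (need not be unit length)
    """
    normal = np.asarray(plane_normal, dtype=float)
    offsets = np.asarray(vertices, dtype=float) - np.asarray(plane_point, dtype=float)
    # The sign of the distance along the normal decides visibility.
    signed = offsets @ normal
    return np.asarray(vertices)[signed >= 0.0]

# Example: clip a tiny point set against the x = 0 plane,
# keeping everything on the positive-x side.
pts = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0], [0.5, 2.0, 0.0]])
visible = clip_vertices(pts, plane_point=[0, 0, 0], plane_normal=[1, 0, 0])
```

Sweeping the plane through the model produces successive cross-sections, much like stepping through CT or MRI slices.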
Professor Jung also built multi-user functionality into the platform, supporting up to six simultaneous users, and added stereo sound for realistic audio cues during shared dissection sessions.
“When someone speaks, the sound doesn’t just come at you flatly,” Professor Jung said. “If someone to your left speaks, you’ll hear it from the left. If someone on the right speaks, you’ll hear it from the right.”
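The directional audio Professor Jung describes is, at its most basic, amplitude panning: a voice’s left and right gains are derived from the speaker’s azimuth relative to the listener. A minimal constant-power panning sketch follows; the platform’s actual spatializer is not described in the article, so this is illustrative only.

```python
import math

def stereo_gains(azimuth_deg):
    """Constant-power pan: azimuth -90 (hard left) to +90 (hard right).

    Returns (left_gain, right_gain). The gains satisfy L^2 + R^2 = 1,
    so perceived loudness stays constant as a speaker moves around you.
    """
    # Map the clamped azimuth to a pan angle in [0, pi/2].
    pan = (max(-90.0, min(90.0, azimuth_deg)) + 90.0) / 180.0 * (math.pi / 2)
    return math.cos(pan), math.sin(pan)

left, right = stereo_gains(-90)   # speaker on the listener's far left
center = stereo_gains(0)          # speaker straight ahead: equal gains
```

A full spatializer would also add head-related filtering and distance attenuation, but even this two-gain rule reproduces the left/right effect quoted above.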
Enhancing surgical precision with advanced tracking systems
Professor Jung has developed this surgical navigation system but has not yet brought it to market; he is currently exploring how to commercialize the technology. His mission is to create a highly reliable tracking system, built on high-performance sensors, that can revolutionize surgical operations.
“In surgery, planning and navigation are crucial,” Professor Jung said. “But the current AR navigation accuracy is around 10mm, which isn’t precise enough, especially when dealing with delicate structures like nerves.” He emphasized that even a few millimeters of drill deviation can cause significant damage.
Professor Jung has been working on a project involving a dual sensor setup—two RGBD sensors and one optical tracker. The two sensors work together, ensuring that even if one sensor is obstructed or fails, the others continue to provide accurate tracking.
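The redundancy described above can be sketched as a simple fusion rule: average the position estimates from whichever sensors currently have a valid view, and fall back to the remaining sensor when one is occluded. This Python sketch is illustrative only; the actual system pairs two RGBD sensors with an optical tracker, and its fusion algorithm is not described in the article.

```python
def fuse_positions(readings):
    """Fuse 3D position estimates from redundant sensors.

    readings: list of (position, valid) pairs, where position is an
    (x, y, z) tuple and valid is False when that sensor is occluded.
    Returns the mean of all valid estimates, or None when every
    sensor has lost tracking.
    """
    valid = [pos for pos, ok in readings if ok]
    if not valid:
        return None  # all sensors occluded: tracking is lost
    n = len(valid)
    return tuple(sum(axis) / n for axis in zip(*valid))

# One sensor occluded: tracking continues from the other sensor alone.
est = fuse_positions([((1.0, 2.0, 3.0), True), ((9.9, 9.9, 9.9), False)])
```

A production system would weight each sensor by its estimated uncertainty (e.g. with a Kalman filter) rather than averaging, but the fallback behavior is the same: losing one view does not interrupt tracking.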
During a recent presentation, Professor Jung introduced the third phase of the implementation: a model costing 5 million won ($6,905). The model, with skin-like surface reflectivity and a meticulously detailed bone structure beneath, projects the underlying bones to provide a true-to-life 3D view.
"In facial surgeries that involve drilling a hole in front of the maxillary sinus, beginners often struggle to see their target, leading to accidents," Professor Jung said. "With this system, you can clearly visualize the bone structure before cutting, and in case of error, an animation in the next frame demonstrates the correct technique."
The dual-sensor tracking ensures continuous monitoring even if one sensor is blocked, and maintains an accuracy within 2mm, a critical factor for successful outcomes in delicate procedures. A specialized shading technique further enhances the visualization of bone position.
Users can clearly see each step involved in the surgery, with the guide showing precisely where to cut and what the next steps are, reducing hesitation.
“We’re developing a surgical navigation system that accurately tracks surgical instruments, guiding surgeons on bone-cutting procedures,” Professor Jung said. “This might seem straightforward, but achieving this level of accuracy is a significant technical challenge.”
Professor Jung has advanced this surgical navigation technology as part of a multi-year project, now in its second year, aimed at enhancing surgical precision.
While Professor Jung acknowledged ongoing research efforts, he also highlighted the discomfort associated with the current technology. "So far, it's quite uncomfortable," he said. "Wearing the device becomes frustratingly hot and sweaty."
Professor Jung said current models are heavy and, because of chipset limitations, rely on tethered operation driven by an external computer. “As chipsets improve, heavier models should be able to operate as standalone units,” he said. He stressed the importance of improving headset structure, noting that even lighter models can be cumbersome to wear.
“Devices like Meta Quest 3, while convenient, are bulky,” Professor Jung said, underlining the need for further research to achieve the visual quality of glasses-type devices for practical use.
"Technically challenging as it may be, I look forward to the day when we can achieve such advancements."
