Projects
- L3S - Robot Learning Lab collaboration on Robot-Mediated Object Manipulation with Haptic Feedback (ROMEO)
ROMEO is a joint research project with the L3S - Robot Learning Lab, led by Dr. Nicolás Navarro-Guerrero. In everyday activities, humans rely heavily on tactile and kinesthetic feedback to perform delicate tasks, such as picking up fragile objects or palpating soft tissue. However, replicating these complex actions in robotic systems poses significant challenges. Whether in robot-assisted minimally invasive surgery, prosthetics, or extended reality, providing users with precise haptic feedback remains under-researched, especially for fine manipulation in complex scenarios. The ROMEO project seeks to address this gap by enhancing robot-mediated object manipulation with haptic feedback but without force feedback. One of the main focuses is the development of skin-stretch-based wearable actuators. In virtual reality scenarios, participants will engage in tasks such as twisting, pinching, grabbing, and rotating both virtual and physical objects with varying stiffness levels, using their hands or a manipulator. Through the collaboration with Dr. Nicolás Navarro-Guerrero of the L3S Research Center in Germany, known for its pioneering work in artificial intelligence and robotics, the project also extends its research into teleoperated robotic manipulation. The team seeks to integrate haptic feedback into teleoperation systems for environments such as surgery and other high-risk tasks where precise movement is critical. This interdisciplinary project promises significant advancements in haptics, potentially influencing future developments in medical robotics, virtual reality, and prosthetic devices.
- Teleoperation and haptics for medical applications; Human-Centered Control for Force-Reflecting Teleoperated Robot-Assisted Minimally-Invasive Surgery
In teleoperated robot-assisted minimally-invasive surgery (RAMIS), a surgeon operates master manipulators to control the motion of remote robotic manipulators. The manipulators on the remote side enter the patient’s body through very small incisions. In this project, we aim to push the boundary of the current tradeoff between stability and transparency in RAMIS by exploiting the human capability for stabilization. We work in parallel on modeling surgeons’ movements in robot-assisted surgery with existing unilateral teleoperation control, on modeling simple movements in contact with stable and unstable environments, and on developing new bilateral teleoperation controllers. We expect that our human-centered approach will result in a less conservative design and will allow the control parameters to be tuned such that performance is drastically improved. As part of this project, we established two robot-assisted surgery platforms in the lab – the dVRK, which was donated by Intuitive Surgical Inc., and the Raven.
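As a rough illustration of the stability-transparency tradeoff at the heart of this project, the sketch below simulates a single-degree-of-freedom master and slave coupled by a position-error-based bilateral controller across a transmission delay. The masses, gains, delay, and environment stiffness are illustrative placeholders, not the human-centered controllers developed in the project; raising the coupling gain makes the reflected contact feel stiffer (more transparent) but, in the presence of delay, pushes the coupled loop toward instability.

```python
import numpy as np

# Position-error-based bilateral coupling of a 1-DoF master and slave across a
# transmission delay. All numbers below are illustrative placeholders.
dt, T = 0.001, 2.0                 # time step and duration [s]
delay = int(0.02 / dt)             # 20 ms one-way transmission delay [samples]
m_m = m_s = 0.5                    # master / slave masses [kg]
kp, kd = 100.0, 20.0               # coupling gains; larger kp -> more transparent,
                                   # but with delay the loop approaches instability
k_env = 500.0                      # stiffness of the remote environment [N/m]
f_h = 1.0                          # constant force applied by the operator [N]

n = int(T / dt)
x_m = np.zeros(n); v_m = np.zeros(n)   # master position and velocity
x_s = np.zeros(n); v_s = np.zeros(n)   # slave position and velocity

for k in range(1, n):
    d = max(k - delay, 0)                              # index of delayed samples
    f_env = -k_env * max(x_s[k - 1], 0.0)              # contact when the slave crosses x = 0
    f_m = f_h + kp * (x_s[d] - x_m[k - 1]) + kd * (v_s[d] - v_m[k - 1])
    f_s = f_env + kp * (x_m[d] - x_s[k - 1]) + kd * (v_m[d] - v_s[k - 1])
    v_m[k] = v_m[k - 1] + dt * f_m / m_m; x_m[k] = x_m[k - 1] + dt * v_m[k]
    v_s[k] = v_s[k - 1] + dt * f_s / m_s; x_s[k] = x_s[k - 1] + dt * v_s[k]

# In steady state the master penetrates slightly past the slave, and the slave
# presses on the environment with approximately the operator's force.
print(f"master: {x_m[-1]*1000:.1f} mm, slave: {x_s[-1]*1000:.1f} mm")
```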
- Computational models of surgeon’s sensorimotor control and learning in robot-assisted surgery
It is currently unknown how to optimize training for robot-assisted surgeons, in particular to improve speed of learning and increase safety by reducing the likelihood of operative errors. RAMIS technology provides a unique opportunity to acquire data, develop models, and perform scientifically motivated and data-driven interventions and/or augmentation in real time, during both training tasks and clinical procedures. We seek to provide a novel and high-impact approach to understanding and affecting surgeon training performance within RAMIS by coupling insights derived from data recorded from da Vinci Surgical Systems in real and simulated environments and from carefully designed laboratory experiments with models of human sensorimotor control and learning.
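One standard building block for such models is the trial-by-trial state-space description of motor adaptation, in which an internal estimate of a perturbation is updated from movement errors. The sketch below is a generic single-state version with placeholder retention and learning-rate values, intended only to make the modeling approach concrete; it is not a model fitted to surgeon data.

```python
import numpy as np

# Generic single-state model of trial-by-trial adaptation:
#   x[n+1] = A * x[n] + B * e[n],   e[n] = p[n] - x[n]
# where x is the internal estimate of the perturbation p, e is the movement
# error, A is a retention factor, and B is a learning rate. A and B here are
# illustrative placeholders, not values fitted to data.
A, B = 0.99, 0.2
n_trials = 80
p = np.concatenate([np.zeros(20), np.ones(40), np.zeros(20)])  # perturbation on/off

x = np.zeros(n_trials + 1)        # internal estimate of the perturbation
e = np.zeros(n_trials)            # trial-by-trial error
for n in range(n_trials):
    e[n] = p[n] - x[n]
    x[n + 1] = A * x[n] + B * e[n]

print("error at perturbation onset :", round(e[20], 3))
print("error after 40 trials       :", round(e[59], 3))
print("after-effect at washout     :", round(e[60], 3))
```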
- Computational models of perception and action in teleoperation and physical human-robot interaction
- The effects of delay and sensorimotor transformations on representation and adaptation in the motor system
In collaboration with Prof. Sandro Mussa-Ivaldi, Rehabilitation Institute of Chicago, we studied two related hypotheses: (1) that alteration of simultaneity between the visual, haptic, and proprioceptive modalities results in deformations of the proprioceptive space, perceived impedance, and control of grip force in tool mediated interaction with objects; and that (2) such alteration is responsible for disturbances of space perception, such as hemi-neglect. We also study how altered causality during adaptation to state dependent force perturbations drives adaptation. This study is expected to contribute to understanding how internal representations are formed in the motor system in spite of internal delays in information transmission that vary between modalities. It will also lead to understanding and potentially treatment of timing-related disorders such as multiple sclerosis. Finally, understanding these effects is critical for designing successful remote teleoperation systems for various applications such as remote surgery and space teleoperation.
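The sketch below illustrates the kind of perturbation manipulated in these experiments, assuming a minimum-jerk reach and a viscous force that depends either on the current hand velocity (state-dependent) or on the velocity a fixed time earlier (delayed); the delay shifts the force peak relative to the movement and thus alters the apparent causality between movement and force. The movement amplitude, viscous gain, and delay are illustrative values.

```python
import numpy as np

# State-dependent vs. delayed velocity-dependent force perturbations applied
# to a minimum-jerk reach. All parameter values are illustrative placeholders.
dt, T = 0.001, 1.0
t = np.arange(0.0, T, dt)
tau = t / T
x = 0.15 * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)   # 15 cm minimum-jerk reach
v = np.gradient(x, dt)                                 # hand velocity [m/s]

b = 10.0                      # viscous gain [N*s/m]
delay = int(0.07 / dt)        # 70 ms delay, in samples

f_state = b * v                                                # force from current velocity
f_delayed = b * np.concatenate([np.zeros(delay), v[:-delay]])  # force from past velocity

print("time of peak force (state-dependent):", round(t[np.argmax(f_state)], 3), "s")
print("time of peak force (delayed)        :", round(t[np.argmax(f_delayed)], 3), "s")
```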
- Sensorimotor integration of kinesthetic, tactile, and proprioceptive information
In many critical scenarios of object manipulation such as writing or surgery, one must concurrently control and sense position and force. There are two kinds of force sensing modalities in our body – kinesthetic and tactile. In robotics, separation strategies are often used to simplify control. We hypothesize that in contrast to robotics, humans integrate kinesthetic and tactile information for perception, action, and learning, and that they use optimal estimators such as a Bayesian mixture model in this integration. In this project, we will develop and experimentally validate models of kinesthetic and tactile information integration in (I) perception, (II) control of manipulation and grip forces, and (III) motor learning. To achieve this goal, we will use computational modeling and recently developed programmable devices for tactile stimulation to selectively perturb the tactile and kinesthetic information channels to break the natural coupling and congruence between them in healthy individuals and stroke survivors.
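As a concrete special case of the optimal-integration hypothesis, the sketch below implements reliability-weighted (maximum-likelihood) fusion of a tactile and a kinesthetic stiffness estimate; the cue values and noise levels are arbitrary placeholders, and the full Bayesian mixture model studied in the project may differ.

```python
import numpy as np

# Reliability-weighted (maximum-likelihood) integration of two stiffness cues,
# a simple special case of the Bayesian integration hypothesized above.
# The cue values and variances below are illustrative placeholders.
k_tactile, sigma_tactile = 120.0, 15.0     # stiffness estimate [N/m] and noise SD
k_kinesthetic, sigma_kin = 150.0, 30.0

w_t = (1 / sigma_tactile**2) / (1 / sigma_tactile**2 + 1 / sigma_kin**2)
w_k = 1.0 - w_t
k_hat = w_t * k_tactile + w_k * k_kinesthetic              # integrated estimate
sigma_hat = np.sqrt(1 / (1 / sigma_tactile**2 + 1 / sigma_kin**2))  # its uncertainty

print(f"integrated estimate: {k_hat:.1f} N/m (SD {sigma_hat:.1f}),"
      f" tactile weight {w_t:.2f}")
```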
- Israel-Italy Virtual Lab on Artificial Somatosensation for Humans and Humanoids
To effectively manipulate delicate and fragile objects, humans use sensory information from many modalities to concurrently control and sense position and force. They rely strongly on their sense of touch for perception, such as when assessing the stiffness of an object or attempting to determine the details of its shape. In addition, they use this information together with proprioception to plan their interaction with these objects, and flexibly combine force and position control depending on the task and the mechanical properties of the environment and objects. This is achieved using a variety of sensors distributed in the skin, joints, muscles, and tendons, and with distributed sensorimotor loops that allow for efficient encoding and integration of information and its use for planning and control of action. Somatosensory deficits, which are a frequent outcome of neurological diseases, impact the ability to control movement and force as well as the possibility to recover. In recent years, robotic devices that interact with such delicate or fragile objects have become ubiquitous in many applications, including robotic rehabilitation, robot-assisted surgery, prosthetics, assistive robotics, and humanoid robots. However, the majority of these robotic and hybrid technologies have impoverished or completely missing somatosensory capabilities and rely predominantly on vision. Even when they can sense forces, these systems do not have the sensorimotor loops needed to interact effectively in these challenging scenarios. Together with Prof. Maura Casadio and Prof. Fulvio Mastrogiovanni, both from the University of Genova in Italy, our goal is to advance groundbreaking research towards a future in which humans and robots benefit from a synergetic integration between natural and synthetic somatosensation for perception and control. These systems will not only be bioinspired but will also go beyond human capabilities, with abilities such as tactile super-acuity and chemical sensing. The lab focuses on advancing the understanding of distributed human sensorimotor loops and on using this understanding to develop devices, representation models, and control algorithms for implementing human-like sensorimotor loops in robotic rehabilitation and assistance, bionic devices, and humanoid robots. The Italian and Israeli partners and their students are collaborating to make the necessary technological and scientific leaps in sensing, representation, and control. The intent is to educate the next generation of scientists and engineers in the development of bioinspired and bio-augmenting technologies and software for providing humans and humanoid robots with effective somatosensory feedback.
- Robot-assisted Laser Tissue Soldering
Almost every surgical intervention requires bonding of tissue and closing of incisions. Technologies such as sutures, staples, and clips have been used for generations to close surgical incisions and wounds. However, suturing is a delicate, demanding, and time-consuming procedure that requires technical skill, and sutured sites involve an increased risk of leaks, infections, and scarring. In robot-assisted minimally-invasive surgery, suturing is even more complex. Adhesives are also used, but they traditionally exhibit inadequate tensile strength, are difficult to apply to the tissue site, and are often toxic. In collaboration with Prof. Avraham Katzir, TAU, and Dr. Uri Netz, Soroka Medical Center, we are developing a composite Robot-assisted Laser Tissue Soldering (RLTS) system under temperature control. Our system includes two parts: (a) a medical device, consisting of a fiber-optic diode laser in which a laser beam is transmitted through an optical fiber that heats the incision, point by point, under temperature control; and (b) a robotic system that very accurately maneuvers the distal tip of the fiber at a constant height along the incision, leading to uniform heating of the incision and strong tissue bonding. The RLTS will have teleoperated, shared, and autonomous control modes. The system will be optimized and tested in a series of ex-vivo experiments and demonstrated in vivo. We predict that our RLTS will allow for a faster and well-controlled heating procedure, leading to more efficient wound repair without the need for extensive training.
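To make the temperature-control idea concrete, the sketch below simulates heating of a single solder spot, assuming a first-order thermal model of the heated tissue and a proportional-integral controller on laser power with simple anti-windup. The thermal constants, gains, power limit, and 60 °C target are illustrative assumptions, not the parameters of the RLTS system; the robot motion that advances the fiber tip along the incision is not modeled here.

```python
# Point-by-point temperature control for laser soldering: a minimal sketch with
# an assumed first-order thermal model of the heated spot and a PI controller
# on laser power. All parameter values are illustrative placeholders.
dt = 0.01                      # control period [s]
tau, gain = 1.5, 8.0           # spot thermal time constant [s], steady-state deg C per W
T_amb, T_target = 37.0, 60.0   # body and target soldering temperatures [deg C]
kp, ki = 0.5, 0.8              # PI gains [W/deg C], [W/(deg C * s)]
p_max = 5.0                    # laser power limit [W]

T_spot, integ = T_amb, 0.0
for step in range(int(10.0 / dt)):                 # 10 s dwell on one spot
    err = T_target - T_spot
    power = kp * err + ki * integ
    if 0.0 <= power <= p_max:
        integ += err * dt                          # integrate only when not saturated
    power = min(max(power, 0.0), p_max)
    # first-order spot dynamics: dT/dt = (-(T - T_amb) + gain * power) / tau
    T_spot += dt * (-(T_spot - T_amb) + gain * power) / tau

print(f"spot temperature after 10 s: {T_spot:.1f} deg C (target {T_target})")
```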
Support
We are thankful for the generous support of the following funding sources:
We are also proud members of the da Vinci Research Kit community and thank Intuitive Surgical Inc. for the donation of the kit to our lab.
Resources
BGU Community:
Department of Biomedical Engineering
ABC Robotics – Agricultural, Biological and Cognitive Robotics
Zlotowski Center for Neuroscience
BGU Computational Motor Control Workshop in memory of Prof. Amir Karniel
BGU Karniel Motor Control Journal Club
Research:
Raven users community
BGU stiffness exploration database
Robotics courses
Statistics
Professional Societies:
International:
- Society for Neuroscience
- Society for the Neural Control of Movement
- Eurohaptics society
- Technical Committee on Haptics
- IEEE Robotics and Automation Society
Israeli:
- Israeli Association for Automatic Control
- Israel Society for Medical and Biological Engineering
- Israeli Robotics Association
Career Development Resources
Useful tips