Teleoperation
Teleoperation is the remote control of a robot by a human operator, typically using VR controllers, haptic gloves, exoskeletons, or traditional joystick interfaces to command the robot's movements in real time. Teleoperation serves two distinct roles in the 2026 robotics landscape: it is a deployment mode (humans controlling robots in environments too dangerous, distant, or delicate for direct human presence) and a training data pipeline (humans demonstrating tasks through the robot to generate the imitation learning data that trains autonomous policies).
As Deployment Mode
Surgical teleoperation is the most commercially mature application. Intuitive Surgical's da Vinci system has performed over 12 million surgical procedures, with the surgeon seated at a console controlling miniaturized instruments inside the patient's body. The da Vinci 5 (2024) added haptic feedback for the first time, allowing surgeons to feel resistance and tissue compliance through the controls — a critical advance for delicate procedures.
Hazardous environment operation includes bomb disposal, nuclear facility inspection, deep-sea maintenance, and space operations. NASA's Robonaut and GITAI's station maintenance robots use teleoperation with varying degrees of autonomous assistance. The key challenge is latency: a teleoperated robot on the Moon faces a round-trip communication delay of roughly 2.6 seconds (about 1.3 seconds each way), making direct control sluggish. For Mars, the one-way delay ranges from roughly 4 to 24 minutes depending on the planets' relative positions, which makes real-time teleoperation impossible and is why Mars rovers must be autonomous.
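These delay figures follow directly from the speed of light. A quick back-of-the-envelope sketch (the distances are approximate illustrative values, not mission-specific numbers):

```python
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def round_trip_delay_s(distance_km: float) -> float:
    """Round-trip signal delay in seconds for a given one-way distance."""
    return 2 * distance_km / C_KM_PER_S

# Average Earth-Moon distance, and the Earth-Mars range at roughly
# closest and farthest approach (approximate values).
moon = round_trip_delay_s(384_400)          # about 2.6 s
mars_min = round_trip_delay_s(54_600_000)   # about 6 minutes
mars_max = round_trip_delay_s(401_000_000)  # about 45 minutes
```

The round trip matters because a teleoperator acts on what they see: every command is issued against sensor data that is already half the round-trip time old, and its effect is not visible until the other half has elapsed.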
Telepresence robots occupy the lighter end of the spectrum: mobile screens on wheels that let remote workers "walk around" an office or factory. These require minimal dexterity but demonstrate the broader principle of remote physical presence.
As Training Data Pipeline
The 2026 humanoid robot generation depends on teleoperation for training data. The process: a human operator wears a VR headset and hand controllers (or a full-body motion capture suit), guides the robot through a series of tasks, and the system records synchronized observations (camera images, joint positions, forces) and actions (motor commands). This produces the demonstration data that trains VLA models and other robot policies via imitation learning.
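The recording step above can be sketched as a minimal data structure. All names here (`Step`, `Demonstration`, `record_step`) are hypothetical illustrations, not any vendor's actual format:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Step:
    """One synchronized (observation, action) sample from a teleop session."""
    timestamp: float
    camera_frame: bytes        # encoded image from the robot's camera
    joint_positions: list      # observed joint angles, radians
    motor_commands: list       # operator's commanded joint targets

@dataclass
class Demonstration:
    """All steps from one attempt at one task."""
    task: str
    steps: list = field(default_factory=list)

def record_step(demo, camera_frame, joint_positions, motor_commands):
    # Observations and actions are stamped together so the learner
    # can pair "what the robot saw" with "what the human did".
    demo.steps.append(Step(time.time(), camera_frame,
                           joint_positions, motor_commands))

demo = Demonstration(task="pick_and_place")
record_step(demo, b"<jpeg bytes>", [0.0, 1.2, -0.4], [0.1, 1.1, -0.3])
```

Imitation learning then treats each step as a supervised example: the observation is the input, the operator's command is the target action.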
Figure AI trained its Helix model on 500+ hours of teleoperated demonstrations. Physical Intelligence uses teleoperation across multiple robot embodiments to build cross-platform training datasets. The quality of the training data — and therefore the quality of the autonomous policy — depends directly on the teleoperation interface: higher-fidelity control (more degrees of freedom, haptic feedback, lower latency) produces better demonstrations that lead to better learned policies.
Shared Autonomy
The frontier of teleoperation is shared autonomy: systems where the robot handles routine subtasks autonomously while the human intervenes for novel situations, error recovery, or high-stakes decisions. Rather than the binary of "fully teleoperated" or "fully autonomous," shared autonomy creates a sliding scale where the human's role shrinks as the robot's capabilities grow. A warehouse robot might autonomously navigate aisles and pick standard items, but request human teleoperation input when it encounters an unfamiliar object or a jammed bin. Each human intervention becomes training data that makes the next intervention less likely — a self-improving loop from teleoperation to autonomy.
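A minimal sketch of the warehouse example, assuming a confidence-gated hand-off; the policy, threshold, and operator interface are all hypothetical stand-ins, not a real system's API:

```python
CONFIDENCE_THRESHOLD = 0.8  # below this, control passes to the human

def policy_act(observation):
    """Stand-in for a learned policy returning (action, confidence)."""
    # Pretend the policy is confident only on familiar situations.
    confidence = 0.9 if observation == "standard_item" else 0.4
    return "autonomous_pick", confidence

def request_human_action(observation):
    """Stand-in for routing the situation to a remote teleoperator."""
    return "teleop_action"

def shared_autonomy_step(observation, intervention_log):
    action, confidence = policy_act(observation)
    if confidence < CONFIDENCE_THRESHOLD:
        # Human takes over; the correction is logged as new training
        # data, closing the teleoperation-to-autonomy loop.
        action = request_human_action(observation)
        intervention_log.append((observation, action))
    return action
```

In this sketch the intervention log is exactly the self-improving loop described above: retraining on logged corrections raises the policy's confidence on situations that previously triggered a hand-off.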
Hardware
Teleoperation hardware ranges from consumer VR controllers (Meta Quest, used by several research labs for low-cost teleop data collection) to purpose-built systems. Exoskeleton-based teleop rigs map the operator's full body movement to a humanoid robot one-to-one. Force-feedback gloves like HaptX transmit tactile information back to the operator, enabling them to "feel" what the robot is touching — critical for tasks requiring force sensitivity like surgical procedures or delicate assembly. The trend is toward higher-fidelity bilateral systems where information flows both ways: human-to-robot for commands, robot-to-human for sensory feedback.
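The bilateral flow can be illustrated as two mapping functions, one per direction. The motion scaling and force clipping here are illustrative assumptions, not any specific product's behavior:

```python
def operator_to_robot(controller_pose, scale=1.0):
    """Forward channel: map a controller position to a robot target.

    A scale below 1.0 gives motion scaling (large hand movements
    produce small robot movements), common in surgical teleop.
    """
    x, y, z = controller_pose
    return (scale * x, scale * y, scale * z)

def robot_to_operator(contact_force_n, max_feedback_n=5.0):
    """Return channel: clip sensed contact force into the haptic
    device's safe output range before rendering it to the operator."""
    return max(-max_feedback_n, min(max_feedback_n, contact_force_n))
```

Clipping the return channel matters for safety: a collision spike measured at the robot should not be reproduced at full strength against the operator's hand.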
Further Reading
- The State of AI Agents in 2026 — Jon Radoff
- The Age of Machine Societies Has Begun — Jon Radoff