# What are the potential working regimes in telerobotics design space?

While [[There are five heavily coupled areas of improvement for telerobotics technically]], it is important to build an opinionated set of potential solution regimes instead of "the solution could be *aaaaaaanything.*" It's very hard to wrap your head around a high-dimensional design space, so let's take a magically good solution to each of the five areas and trace the cascading consequences.

### Magical Components

* A magical low-latency network that is limited only by the speed of light.
* A magical human interface that gives high-fidelity haptic feedback, lets you look around, and gives you peripheral vision and spatial hearing. In any situation where there is a delay or the robot doesn't map directly onto your motion, it has a useful way of showing you that mismatch.

### Exercise: what the consequences of a fully-optimized axis might look like

1. **Latency** — If your latency were bounded by the speed of light, you could get [the transmission time between LA and NYC down to about 13.6 ms](https://www.answers.com/Q/How_long_does_it_take_light_to_tavel_between_Los_Angeles_and_new_york). For a full feedback loop that's 27.2 ms, which is barely on the edge of human perception. You could also divide telerobotics into two completely separate regimes: sub-human-perception latency and supra-human-perception latency.
2. **Human Interface**
3. **Actuators** The magical gold standard would be a compliant arm (i.e. it wouldn't break most things if it ran into them) that could still exert a large amount of force when it chose to. For this to work you would probably need to do (at least partial) force control. Force control in turn needs a tight feedback loop, either with the human operator via really good touch sensors and low latency, or with an on-board sensor system. (A minimal sketch of what such a loop could look like appears at the end of this note.)
4. **Sensors**
5. **Control Systems**

## Intuition-based Design Optima

### Supervisory control with interaction primitives

Imagine an interface that works something like a first-person shooter where everything can be highlighted and interacted with. The goal of the interface would be to give you as much awareness of what's going on as possible, let you know the possibility space of interactions, and give you an idea of what the robot is going to do before it does it.

This type of interaction would require a fairly sophisticated object-based physical model of the world to be generated and refreshed in real time. The robot would need some library of affordances for different things in the world and the ability to plan local interactions with them. Perhaps the person has direct control in 'free space' so that they can get the robot into the best position to plan from, making the planning problem as easy as possible. In this situation you would also need either a compliant arm, or a ton of sensors and a blisteringly fast control system to approximate one.

Things that are *less* necessary in this situation: simulations, haptics, fine motor capture.

### Direct control via simulation

Imagine an interface where you *are* the robot — however, in order to get this to work with inevitable latency, you're actually a *simulated* robot. In this scenario, the operator is interacting with a high-fidelity environment that is a simulation of the robot's environment at t+2Δ in the future, where Δ is the one-way delay between the robot and the operator.
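To make the "simulate the environment at t+2Δ" idea concrete, here is a minimal sketch of a predictive-display loop in Python. Everything in it is an assumption for illustration rather than a description of any particular system: the toy `step_dynamics` model, the dict-based state, and the idea that the operator side can measure the one-way delay from timestamped observations.

```python
def step_dynamics(state, command, dt):
    """Toy stand-in for a physics model of the robot and its environment.
    Here state is just a dict with 1-D position and velocity, and the
    command is treated as an acceleration."""
    pos, vel = state["pos"], state["vel"]
    vel += command * dt
    pos += vel * dt
    return {"pos": pos, "vel": vel}


def predicted_state(last_observed_state, in_flight_commands, one_way_delay, dt):
    """Roll the last state received from the robot forward by 2 * delay,
    replaying the commands that are still 'in flight', so the operator is
    reacting to (approximately) the world their next command will land in."""
    horizon = 2.0 * one_way_delay
    state = dict(last_observed_state)
    elapsed, i = 0.0, 0
    while elapsed < horizon:
        # Use the queued command for this time slice if one exists; otherwise coast.
        cmd = in_flight_commands[i] if i < len(in_flight_commands) else 0.0
        state = step_dynamics(state, cmd, dt)
        elapsed += dt
        i += 1
    return state


if __name__ == "__main__":
    last_obs = {"pos": 0.0, "vel": 0.1}   # most recent state the robot sent
    in_flight = [0.5, 0.5, 0.0]           # commands sent but not yet executed
    # With 50 ms each way, the operator should be shown the world ~100 ms ahead.
    print(predicted_state(last_obs, in_flight, one_way_delay=0.05, dt=0.01))
```

The interesting part is the structure rather than the model: the operator always acts against a forward-simulated state, so the experience is only as good as the delay estimate and the dynamics model, which is exactly why the next point about Δ matters.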
This system would need latency work to either make Δ very stable or to dynamically respond to time-varying delays. This regime would need to capture the robot's environment in a way that's legible to the human interface: not necessarily semantic objects, but definitely surfaces and some sense of what will happen to those surfaces over time so that they can be simulated.

It would also need better ways of capturing fine interactions and motions and of giving haptic feedback. Perhaps the magical best way to do this, instead of a haptic glove that couples picking up motion with giving touch feedback, is to decouple the two: maybe an external camera coupled with deep learning to pick up fine motor control, plus some kind of force-feedback-optimized glove (filled with fluid). Or maybe it looks like both the person and the robot using the same tool, instead of trying to map between a robot hand and a human hand, which would force you either to make the robot hand exactly like a human hand (probably unnecessary) or to accept an imperfect mapping.

While you wouldn't need the control system on the robot's end to do planning, per se, you would still need some kind of reactive system that predicts how the person would react to unexpected input. This is analogous to how [reflex arcs](https://en.wikipedia.org/wiki/Reflex_arc) work in the nervous system. < [[How could you put brains in robot fingers?]]

Some interesting similarities between the two regimes:

* Better compliant arms and how to control them
* Good environmental understanding
* Object recognition
* Some simulation of those objects

### Random Thoughts

* Would another regime be some kind of swarm thing?
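Both regimes converge on the compliant arm plus force control mentioned in the actuators item above, so here is the minimal sketch promised there. It is purely an assumption about what a one-joint impedance-style loop could look like, not a description of any existing controller: a virtual spring-damper pulls toward the operator's target, the commanded force is capped, and the measured contact force is used only to soften the spring when the arm runs into something.

```python
from dataclasses import dataclass


@dataclass
class ImpedanceParams:
    stiffness: float          # N/m: virtual spring pulling toward the target
    damping: float            # N*s/m: virtual damper resisting velocity
    force_limit: float        # N: hard cap on commanded force (the "compliance" knob)
    contact_threshold: float  # N: measured force above which we treat it as contact


def impedance_step(target_pos, pos, vel, measured_force, p: ImpedanceParams):
    """One tick of a 1-DOF impedance-style controller.

    The joint behaves like a spring-damper pulling toward target_pos, and the
    commanded force is clipped so that running into something never turns into
    a full-strength shove. The measured contact force (from the tight on-board
    sensor loop the note talks about) is used here only to soften the spring
    while in contact; a real controller would do far more with it.
    """
    stiffness = p.stiffness
    if abs(measured_force) > p.contact_threshold:
        stiffness *= 0.2  # yield: drop to 20% stiffness while touching something
    commanded = stiffness * (target_pos - pos) - p.damping * vel
    return max(-p.force_limit, min(p.force_limit, commanded))


# An arm that is "gentle but strong when it chooses to be": a high available
# force limit, but it softens as soon as it feels unexpected contact.
arm = ImpedanceParams(stiffness=400.0, damping=25.0, force_limit=120.0,
                      contact_threshold=5.0)

print(impedance_step(0.5, 0.3, 0.0, measured_force=0.0, p=arm))   # free space: 80 N
print(impedance_step(0.5, 0.3, 0.0, measured_force=30.0, p=arm))  # in contact: 16 N
```

Whether that measured-force loop closes locally on the robot or all the way through the operator's haptics is exactly the trade-off the actuators item points at: the local loop is fast but needs on-board sensing and smarts, while the remote loop needs the low-latency network and the high-fidelity touch interface.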