LQR-RRT
Class: Kinodynamic motion planning
Time: O(n log n)
Space: O(n)
Linear-quadratic regulator rapidly exploring random tree (LQR-RRT) is a sampling-based algorithm for kinodynamic planning. A solver produces random actions that form a funnel in the state space, and the generated tree contains the action sequence that fulfills the cost function. The method requires a prediction model, based on differential equations, that can simulate the physical system. It is an extension of the rapidly exploring random tree (RRT), a widely used approach to motion planning.
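The overall loop can be sketched abstractly. This is a minimal, hedged illustration, not the paper's reference implementation: the primitives `sample_state`, `lqr_nearest`, `lqr_steer` and `is_goal` are hypothetical placeholders for the random sampling, LQR-distance nearest-neighbor lookup, and LQR-based steering the method assumes, and the toy 1-D instantiation at the bottom exists only to exercise the loop.

```python
import random

def lqr_rrt(start, is_goal, sample_state, lqr_nearest, lqr_steer, max_iters=1000):
    """Grow a tree of states; each edge stores the action that an LQR-based
    steering policy used to locally connect two states. For sketch simplicity
    the tree is a dict mapping state -> (parent, action); a real planner
    would keep an explicit node list."""
    tree = {start: None}
    for _ in range(max_iters):
        x_rand = sample_state()                    # random sample in the state space
        x_near = lqr_nearest(tree, x_rand)         # nearest node under the LQR cost metric
        x_new, action = lqr_steer(x_near, x_rand)  # simulate the model toward the sample
        tree[x_new] = (x_near, action)
        if is_goal(x_new):
            # Walk back up the tree to recover the action sequence.
            path, node = [], x_new
            while tree[node] is not None:
                parent, act = tree[node]
                path.append(act)
                node = parent
            return list(reversed(path))
    return None  # no feasible trajectory found within the iteration budget

# Toy 1-D instantiation (Euclidean metric standing in for the LQR metric),
# purely illustrative: drive a scalar state from 0 to near 5 with bounded steps.
def _toy_steer(x_near, x_rand):
    action = max(-1.0, min(1.0, x_rand - x_near))
    return x_near + action, action

random.seed(0)
path = lqr_rrt(
    start=0.0,
    is_goal=lambda x: abs(x - 5.0) < 0.5,
    sample_state=lambda: random.uniform(-10.0, 10.0),
    lqr_nearest=lambda tree, x: min(tree, key=lambda n: abs(n - x)),
    lqr_steer=_toy_steer,
)
```

Because each edge stores its action, the returned `path` is exactly the action sequence mentioned above: replaying it from the start state reproduces the goal-reaching trajectory.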
Control theory uses differential equations to describe complex physical systems such as an inverted pendulum.[1] A set of differential equations forms a physics engine that maps the control input to the state space of the system. This forward model can simulate the given domain: for example, if the user pushes a cart to the left, a pendulum mounted on the cart will react with a motion, and the exact response is determined by Newton's laws of motion.
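Such a forward model can be sketched for the cart-pole example. The sketch below uses the classic cart-pole equations of motion with simple Euler integration; the parameter values are illustrative assumptions, not taken from the article.

```python
import math

# Illustrative cart-pole parameters (assumed values).
GRAVITY = 9.81      # m/s^2
CART_MASS = 1.0     # kg
POLE_MASS = 0.1     # kg
POLE_LENGTH = 0.5   # m, half-length of the pole
DT = 0.02           # Euler integration step in seconds

def cart_pole_step(state, force):
    """One Euler step of the standard cart-pole equations of motion.

    state = (x, x_dot, theta, theta_dot); force pushes the cart along x.
    """
    x, x_dot, theta, theta_dot = state
    total_mass = CART_MASS + POLE_MASS
    sin_t, cos_t = math.sin(theta), math.cos(theta)

    temp = (force + POLE_MASS * POLE_LENGTH * theta_dot ** 2 * sin_t) / total_mass
    theta_acc = (GRAVITY * sin_t - cos_t * temp) / (
        POLE_LENGTH * (4.0 / 3.0 - POLE_MASS * cos_t ** 2 / total_mass))
    x_acc = temp - POLE_MASS * POLE_LENGTH * theta_acc * cos_t / total_mass

    return (x + DT * x_dot,
            x_dot + DT * x_acc,
            theta + DT * theta_dot,
            theta_dot + DT * theta_acc)

# Pushing the cart to the left: the cart accelerates left (x becomes negative)
# and the upright pendulum reacts by tipping (theta moves away from zero).
state = (0.0, 0.0, 0.0, 0.0)
for _ in range(10):
    state = cart_pole_step(state, force=-10.0)
```

A planner calls such a step function many times to predict where a candidate action sequence will take the system.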
Solvers such as PID controllers and model predictive control are able to bring the simulated system into a goal state. From an abstract point of view, controlling a complex physical system is a kinodynamic motion planning problem.[2] In contrast to an ordinary path planning problem, the state space is not only a 2D map containing x and y coordinates; an underactuated physical system has many more dimensions, e.g. the applied forces, joint angles and friction with the ground.[3] Finding a feasible trajectory in such a high-dimensional state space is a mathematically demanding problem.
A linear-quadratic regulator (LQR) is a goal formulation for a system of differential equations.[4] It defines a cost function but does not answer the question of how to bring the system into the desired state. In contrast to linear problems, for example a line-following robot, kinodynamic problems cannot be solved with a single action but require a trajectory of many control signals. These signals are determined and constantly updated with the receding-horizon strategy, also known as model predictive control (MPC). LQR tracking means finding and evaluating trajectories that solve a system of differential equations.
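For a discrete-time linear system x_{t+1} = A x_t + B u_t, the LQR cost is the sum over time of x_tᵀQx_t + u_tᵀRu_t, and the optimal feedback u = -Kx follows from the Riccati equation. The sketch below iterates the discrete-time Riccati recursion to a fixed point; the double-integrator system and the Q, R weights are assumed illustrative values, not from the article.

```python
import numpy as np

def lqr_gain(A, B, Q, R, iterations=200):
    """Iterate the discrete-time Riccati equation to a fixed point P and
    return the feedback gain K, so that u = -K @ x minimizes
    sum_t (x_t^T Q x_t + u_t^T R u_t)."""
    P = Q.copy()
    for _ in range(iterations):
        P = A.T @ P @ A - A.T @ P @ B @ np.linalg.inv(
            R + B.T @ P @ B) @ B.T @ P @ A + Q
    return np.linalg.inv(R + B.T @ P @ B) @ B.T @ P @ A

# Illustrative double-integrator system (position, velocity).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)          # penalty on state deviation
R = np.array([[1.0]])  # penalty on control effort

K = lqr_gain(A, B, Q, R)

# Closed-loop rollout: the regulator drives the state toward the origin,
# trading off state error against control effort as the cost dictates.
x = np.array([[1.0], [0.0]])
for _ in range(100):
    u = -K @ x
    x = A @ x + B @ u
```

In LQR-RRT this kind of local LQR solution is used both as a distance metric between states and as a steering policy between tree nodes.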
In contrast to a PID controller, which only determines the next control action, an LQR tree is able to store a sequence of actions in advance.[5] This amounts to a multistage solver that keeps the time horizon in mind: an action taken now affects the system indirectly in the future through delayed feedback.
The algorithm is a university-driven research project. The first version was developed by Perez et al. at the Massachusetts Institute of Technology in 2012 in the AI laboratory. In 2016 the algorithm was listed in a survey of control techniques for autonomous vehicles[6] and was adapted by other academic robotics teams, such as the University of Florida, for building experimental path planners. In 2018, the algorithm was included in the PythonRobotics library.[7] The algorithm is currently being tested on the Astrobee, a six-degree-of-freedom (DOF) free-flyer with a 3-DOF robotic arm aboard the International Space Station.[8] [9] [10] [11] It is currently part of the Relative Satellite Swarming and Robotic Maneuvering (ReSWARM) experiments taking place on the International Space Station since April 2021, starting with expeditions 65 and 66.[12] [13] [14] [15] Future experiments will entail physical manipulation of objects to further validate the on-orbit assembly demonstration, consideration of physical objects for real-time mapping and collision avoidance, and bringing the information-theoretic framework to a greater set of uncertain robots.[16]