Robot learns fast and safe navigation strategy

Researchers have proposed a new framework that combines deep reinforcement learning (DRL) and curriculum learning for training mobile robots to quickly navigate while maintaining low collision rates.

One of the basic requirements for an autonomous mobile robot is navigation capability: the robot must be able to travel from its current position to a specified target position on the map, given its coordinates, while avoiding surrounding obstacles. In some cases, the robot must also move fast enough to reach its destination as quickly as possible. However, faster navigation usually carries a higher risk of collision, making the navigation unsafe and endangering both the robot and its surroundings.

To solve this problem, a research group from the Active Intelligent System Laboratory (AISL) in the Department of Computer Science and Engineering at Toyohashi University of Technology (TUT) proposed a new framework that balances speed and safety in robot navigation. The framework enables the robot to learn a policy for fast but safe navigation in an indoor environment by utilizing deep reinforcement learning and curriculum learning.

Figure: Plots of robot trajectories under several speed settings after training. In the experiments, various speed settings were applied to the mobile robot (red circles) for three goal positions (green stars). With the proposed framework, which schedules the range of the robot's linear velocity v during training (red lines), the robot traces trajectories very similar to those it follows at a slow, fixed speed (green lines).
Source: Toyohashi University of Technology

Chandra Kusuma Dewa, a doctoral student and the first author of the paper, explained that DRL enables the robot to learn appropriate actions based on the current state of the environment (e.g., the robot's position and obstacle placements) by repeatedly trying various actions. Furthermore, the paper explains that the execution of the current action stops immediately once the robot reaches the goal position or collides with an obstacle, because the learning algorithm assumes that each action has been fully executed and uses its consequence to improve the policy. The proposed framework thereby maintains the consistency of the learning environment, allowing the robot to learn a better navigation policy.
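The episode structure described above can be sketched as follows. This is a minimal, illustrative toy, not the authors' code: the 1-D environment, the tabular Q-learning update, the reward values, and all names (`ToyCorridor`, `run_episode`) are assumptions. The key point is that `step()` terminates the episode the moment the goal is reached, so every transition the learner observes corresponds to a fully executed action.

```python
import random

class ToyCorridor:
    """1-D corridor: the robot starts at cell 0, the goal is at cell 5.

    step() ends the episode as soon as the goal is reached, mirroring the
    requirement that each observed transition reflects a fully executed action.
    """
    GOAL = 5

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):               # action is -1 (back) or +1 (forward)
        self.pos = max(0, min(self.GOAL, self.pos + action))
        done = self.pos == self.GOAL      # goal reached -> stop immediately
        reward = 1.0 if done else -0.01   # small step cost, bonus at the goal
        return self.pos, reward, done

def run_episode(env, q, epsilon=0.1, alpha=0.5, gamma=0.9):
    """One training episode of tabular Q-learning with epsilon-greedy actions."""
    state, done = env.reset(), False
    while not done:
        if random.random() < epsilon:                       # explore
            action = random.choice([-1, 1])
        else:                                               # exploit
            action = max([-1, 1], key=lambda a: q.get((state, a), 0.0))
        next_state, reward, done = env.step(action)
        # The update is valid because the action was executed to completion.
        best_next = max(q.get((next_state, a), 0.0) for a in (-1, 1))
        old = q.get((state, action), 0.0)
        target = reward + (0.0 if done else gamma * best_next)
        q[(state, action)] = old + alpha * (target - old)
        state = next_state

random.seed(0)
q = {}
for _ in range(200):
    run_episode(ToyCorridor(), q)
```

After training, moving toward the goal is valued higher than moving away from it (e.g., `q[(0, 1)]` exceeds `q.get((0, -1), 0.0)`), which is what a consistent learning environment makes possible.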

In addition, Professor Jun Miura, the head of AISL at TUT, explained that the framework follows a curriculum learning strategy: the robot's velocity is set to a small value at the beginning of training and is increased gradually as the number of episodes grows, so that the robot learns the complex task of fast but safe navigation from the easiest level (slow movement) up to the most difficult level (fast movement).

Because collisions in the training phase are undesirable, research on learning algorithms is usually conducted in a simulated environment, and the group simulated an indoor environment for their experiments. The proposed framework enabled the robot to navigate faster while achieving the highest success rate compared with previously existing frameworks, both in training and in validation. Based on these evaluation results, the research group believes the framework is valuable and can be widely used to train mobile robots in any field that requires fast but safe navigation.
