Walking over to a friend's house or browsing the aisles of a grocery store may seem like simple tasks, but in reality they require sophisticated abilities: people effortlessly make sense of their surroundings, picking out patterns, recognizing objects, and keeping track of their own location within the environment.
What if robots could perceive their environment in a similar way? That is the question that drives MIT Laboratory for Information and Decision Systems (LIDS) researchers Luca Carlone and Jonathan How. In 2020, the team led by Carlone released the first iteration of Kimera, an open-source library that allows a single robot to build a three-dimensional map of its environment in real time while labeling the objects it sees. Last year, Carlone's and How's research groups (SPARK Lab and Aerospace Controls Lab) presented Kimera-Multi, a system in which multiple robots communicate with one another to build a unified map. A 2022 paper associated with the project recently received the IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award, given to the best paper published in the journal in 2022.
Carlone, the Leonardo Career Development Associate Professor in Aeronautics and Astronautics, and How, the Richard Cockburn MacLaurin Professor in Aeronautics and Astronautics, spoke with LIDS about Kimera-Multi and how robots can sense and interact with their environment.
Q: Currently, your labs are focused on increasing the number of robots that can work together to create 3D maps of the environment. What are the potential advantages of scaling this system?
How: The key benefit hinges on consistency. A single robot can create a map on its own, and that map will be self-consistent but not globally consistent. What we're aiming for is a map of the world that the whole team agrees on; that is the main difference between having the robots build consensus and having each one map on its own.
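As a toy illustration of that distinction, here is a hypothetical NumPy sketch (not Kimera-Multi code): two robots map the same landmarks in their own coordinate frames; each map is internally self-consistent, but the two agree globally only once one is re-expressed in the other's frame.

```python
import numpy as np

# Robot A's landmark positions, expressed in A's own coordinate frame.
map_a = np.array([[2.0, 1.0], [4.0, 3.0], [5.0, 0.0]])

# Robot B observed the same landmarks, but its frame is rotated and
# translated relative to A's (it started in a different pose).
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([1.0, -2.0])
map_b = (map_a - t) @ R  # B's self-consistent view of the same world

# Inter-landmark distances agree: each map is self-consistent ...
print(np.isclose(np.linalg.norm(map_a[0] - map_a[1]),
                 np.linalg.norm(map_b[0] - map_b[1])))  # True

# ... but the raw coordinates do not: no global consistency yet.
print(np.allclose(map_a, map_b))  # False

# Once the robots agree on their relative transform (estimated from
# shared measurements in a real system), the maps can be merged.
map_b_in_a = map_b @ R.T + t
print(np.allclose(map_a, map_b_in_a))  # True
```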
Carlone: Redundancy is also valuable in many scenarios. For example, if we deploy a single robot on a search-and-rescue mission and something happens to it, it will fail to find survivors. If many robots conduct the search, the chances of success are much higher. Scaling up the team also means that any given task can be completed in a shorter amount of time.
Q: What lessons have you learned from recent experiments and what challenges have you had to overcome while designing these systems?
Carlone: We recently conducted a large mapping experiment on the MIT campus in which eight robots traversed a total of 8 kilometers. The robots had no prior knowledge of the campus and no GPS; their main task was to estimate their own trajectories and build a map of their surroundings. You want robots to understand the environment the way humans do: humans not only grasp the shapes of obstacles well enough to avoid them without collisions, they also understand that an object is a chair, a table, and so on. That understanding is the semantics part.
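As a minimal, hypothetical illustration of what estimating your own trajectory without GPS involves (this is a sketch of plain dead reckoning, not Kimera's estimator), a robot integrates relative motion measurements step by step:

```python
import math

# Each odometry measurement: (distance traveled, change in heading).
odometry = [(1.0, 0.0), (1.0, math.pi / 2), (1.0, 0.0), (1.0, math.pi / 2)]

x, y, heading = 0.0, 0.0, 0.0
trajectory = [(x, y)]
for dist, dtheta in odometry:
    heading += dtheta
    x += dist * math.cos(heading)
    y += dist * math.sin(heading)
    trajectory.append((round(x, 2), round(y, 2)))

print(trajectory)  # [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (1.0, 2.0), (0.0, 2.0)]
```

Over 8 kilometers, noise in each measurement compounds, which is why systems like Kimera also detect revisited places and correct the whole trajectory, including when robots meet.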
The interesting thing is that when the robots meet, they exchange information to improve their maps of the environment. For example, when two robots communicate, each can use the other's information to correct its own trajectory. The challenge is that if you want to reach consensus among robots, you don't have the bandwidth to exchange very much data. One of the main contributions of our 2022 paper is a distributed protocol in which robots exchange limited information but can still agree on what the map looks like. They don't send camera images back and forth; they exchange only specific 3D coordinates and cues extracted from the sensor data. As they continue to exchange such data, they can reach consensus.
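Here is a minimal, hypothetical Python sketch of that kind of bandwidth-limited exchange. The `MapSummary` message and the averaging step are illustrative stand-ins; Kimera-Multi's actual protocol solves a joint distributed optimization rather than simple averaging.

```python
from dataclasses import dataclass, field

@dataclass
class MapSummary:
    """Compact message a robot shares in place of raw camera images."""
    robot_id: str
    landmarks: dict = field(default_factory=dict)  # landmark_id -> (x, y, z)

def merge_estimates(mine: MapSummary, theirs: MapSummary) -> None:
    """Naive consensus step: average estimates of commonly seen
    landmarks, and adopt landmarks this robot has not seen yet."""
    for lid, their_xyz in theirs.landmarks.items():
        if lid in mine.landmarks:
            my_xyz = mine.landmarks[lid]
            mine.landmarks[lid] = tuple(
                (a + b) / 2 for a, b in zip(my_xyz, their_xyz))
        else:
            mine.landmarks[lid] = their_xyz

# Two robots rendezvous and exchange compact summaries, not images.
a = MapSummary("robot_a", {"tree_1": (2.0, 1.0, 0.0), "door_3": (5.1, 0.2, 0.0)})
b = MapSummary("robot_b", {"tree_1": (2.2, 0.8, 0.0), "bench_7": (9.0, 4.0, 0.0)})
merge_estimates(a, b)
merge_estimates(b, a)
print(a.landmarks["tree_1"])  # averaged estimate, roughly (2.1, 0.9, 0.0)
```

Repeating such exchanges at every rendezvous is what lets the team converge toward a shared view of the map.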
Right now we build color-coded 3D meshes or maps, in which the color carries some semantic information; for example, green corresponds to grass and magenta to a building. But as humans we have a much more sophisticated understanding of reality, with a lot of prior knowledge about the relationships between objects. For instance, if I were looking for a bed, I would go to the bedroom instead of exploring the entire house. If you start to understand the complex relationships between things, you can be much smarter about what a robot can do in its environment. We're trying to move from a single layer of semantics to a more hierarchical representation in which the robots understand rooms, buildings, and other concepts.
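As a rough sketch of what a hierarchical layer adds on top of a flat, color-coded one, the hypothetical structure below (the names and schema are illustrative, not Kimera's actual representation) encodes object-room-building relations, letting a search for a bed be directed straight to bedrooms:

```python
# Flat semantic layer: a label simply maps to a display color.
SEMANTIC_COLORS = {"grass": "green", "building": "magenta"}

# Hierarchical layer: objects belong to rooms, rooms to buildings.
HIERARCHY = {
    "building_1": {"bedroom": ["bed", "wardrobe"],
                   "kitchen": ["table", "fridge"]},
    "building_2": {"office": ["desk", "chair"]},
}

def rooms_containing(obj: str) -> list:
    """Return (building, room) pairs where the object is expected."""
    return [(building, room)
            for building, rooms in HIERARCHY.items()
            for room, objects in rooms.items()
            if obj in objects]

print(rooms_containing("bed"))  # [('building_1', 'bedroom')]
```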
Q: What kinds of applications might Kimera and similar technologies have in the future?
How: Autonomous vehicle companies are doing a lot of mapping around the world and learning from the environments they drive through. The holy grail would be for these vehicles to communicate with one another and share information; then they could improve their models and maps much faster. Right now, each vehicle makes decisions on its own: if a truck pulls up next to you, you can't see in a certain direction. Could another vehicle provide a field of view that your vehicle otherwise lacks? It's a futuristic idea, because it requires vehicles to communicate in new ways, and there are privacy issues to overcome. But if we could resolve those issues, you could imagine a significantly improved safety situation in which you have access to data from multiple perspectives, not only your own field of view.
Carlone: These technologies will have many applications. I mentioned search and rescue earlier: imagine you want to explore a forest and look for survivors, or map buildings after an earthquake so that first responders can reach people who are trapped. Another setting where these technologies could be applied is factories. Robots deployed in factories today are very rigid; they follow patterns on the floor and cannot really understand their surroundings. But if you think about the much more flexible factories of the future, robots will have to cooperate with humans and operate in a much less structured environment.