The goal of the project is to create a proof of concept for an autonomous robot that aids in grocery/retail stores. The robot would be assigned items to grab, and it would plan an optimized path to retrieve them. The project is split between two teams; our team focuses on the robotic base that provides movement and navigation. The base must be able to localize itself and plan a path to and from each target. If the robot detects an obstruction during operation, it must recalculate a path around it while leaving adequate clearance to avoid collision. In addition to the functions above, the robot must be able to detect when a target item is within arm's reach, using the dimensions of each arm linkage and the distance (including height) to the target. This project could make the shopping process easier, especially for people with physical disabilities.
The robot operates in two sweeps. The first is an initial exploration sweep in which the robot generates a map of the operating environment. This sweep uses gmapping with a frontier-based exploration algorithm: each unexplored cell in the occupancy grid starts at a value of 0.5, and as the robot sweeps the environment, free-space values drop toward 0 while obstacle values rise toward 1. The result is a "template" map used by the following sweeps.
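The occupancy-grid convention and the frontier idea above can be sketched in a few lines. This is a toy illustration, not the gmapping implementation (which uses a particle filter); the function names, the 0.2 free-space threshold, and the 4-neighbor frontier rule are our own illustrative choices.

```python
import numpy as np

UNKNOWN = 0.5  # prior value assigned to every unexplored cell

def make_grid(rows, cols):
    # Every cell starts at the 0.5 "unknown" prior.
    return np.full((rows, cols), UNKNOWN)

def frontier_cells(grid, free_thresh=0.2):
    # A frontier is a confidently free cell (value near 0) that has at
    # least one unknown 4-neighbor; exploration drives the robot toward
    # the nearest frontier until none remain.
    rows, cols = grid.shape
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] > free_thresh:
                continue  # cell is occupied or still unknown
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and grid[rr, cc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers
```

When `frontier_cells` returns an empty list, the environment is fully explored and the template map is complete.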
The subsequent sweeps handle retrieval. Prior to implementation, we used MATLAB simulations to determine which planning algorithm was best suited to our purposes. Among these were A*, Theta*, D* Lite, Dijkstra's algorithm, and Breadth-First Search (BFS). We ultimately chose Theta*, an any-angle variant of A* that is not restricted to grid-edge moves, to "draw" an optimized path through all target items. During this sweep, the template map continues to update should any new obstacles be introduced.
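To illustrate what distinguishes Theta* from plain A*, here is a minimal sketch on a boolean grid (1 = blocked). The key step is "path 2": when a neighbor has line of sight back to its grandparent, it links there directly, letting paths cut across cells at any angle instead of hugging grid edges. Function names and the grid representation are our own; the on-robot planner differs in its details.

```python
import heapq, math

def line_of_sight(grid, a, b):
    # Bresenham-style walk: True if no blocked cell lies on segment a-b.
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err = dx - dy
    while True:
        if grid[x0][y0]:
            return False
        if (x0, y0) == (x1, y1):
            return True
        e2 = 2 * err
        if e2 > -dy:
            err -= dy; x0 += sx
        if e2 < dx:
            err += dx; y0 += sy

def theta_star(grid, start, goal):
    # Like A*, but each neighbor may link straight back to its
    # grandparent when line of sight holds (the "path 2" update).
    rows, cols = len(grid), len(grid[0])
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    g = {start: 0.0}
    parent = {start: start}
    open_heap = [(dist(start, goal), start)]
    closed = set()
    while open_heap:
        _, s = heapq.heappop(open_heap)
        if s == goal:
            path = [s]
            while s != start:
                s = parent[s]
                path.append(s)
            return path[::-1]
        if s in closed:
            continue
        closed.add(s)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                n = (s[0] + dx, s[1] + dy)
                if n == s or not (0 <= n[0] < rows and 0 <= n[1] < cols):
                    continue
                if grid[n[0]][n[1]] or n in closed:
                    continue
                # Path 2: link n to parent(s) directly when visible.
                p = parent[s] if line_of_sight(grid, parent[s], n) else s
                cand = g[p] + dist(p, n)
                if cand < g.get(n, float('inf')):
                    g[n] = cand
                    parent[n] = p
                    heapq.heappush(open_heap, (cand + dist(n, goal), n))
    return None  # no path exists
```

On an obstacle-free grid this returns a two-point path (start, goal), whereas A* would return a staircase of grid moves of the same length but many more waypoints.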
The latest iteration of our robot achieves automated environment mapping (Sweep 1) and optimized path planning (Sweep 2). Our autonomous mapping system demonstrated partial success, allowing the robot to explore roughly 90% of a 2 m × 3 m area within 5 minutes (averaged across 10 trials). Similarly, our path planning proved somewhat effective, enabling the robot to navigate the mapped space and position itself at an optimal distance from target items. However, mapping requires multiple processes to run simultaneously, pushing our Raspberry Pi to its computational limits and resulting in long runtimes. Additionally, we were unable to achieve the dynamic obstacle avoidance we had set out to accomplish. We attempted to implement D* Lite, an extension of A* designed for dynamic environments, but it too suffered from the computational constraints, which prevented concurrent execution of path planning and motion control. Despite these limitations, we successfully met many of our design requirements. We believe that with additional time (and a more powerful onboard computer), the robot could achieve all the goals we originally set.
