Where We'd Go Next
We faced many challenges and roadblocks on the path to our current implementation, and we learned not only what we could accomplish but also where to focus our effort and where not to. We spent much of the quarter trying to get pieces of our implementation working on the physical robot, which ultimately was not fruitful. Rather than pressing further on that front, we have learned that we are better served working out the rest of our implementation in simulation.
Our first next step would likely be to integrate some of the scripts we developed in parallel with our main Locobot simulation. We would start by randomly scattering more blocks throughout the map and adding our multiple-block detection algorithms. The first is the perception algorithm, described on the perception page, which detects the locations of multiple blocks of a given color so they can be stored in a saved map. To build that map, we would use our second algorithm, described on the localization and mapping page, which combines the robot's movement and trajectory with the perceived 3D block locations in its field of view to update the block map as the robot moves around.
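The map-building step described above could be sketched roughly as follows. This is a minimal illustration, not our actual scripts: the function names, the flat (x, y, heading) pose, and the merge radius are all assumptions, and the real pipeline would work from perceived 3D locations rather than this 2D simplification.

```python
import math

# Hypothetical block map: a list of dicts, one per believed block.
# A new detection within MERGE_RADIUS of an existing same-color entry
# refines that entry (running average); otherwise a new block is added.

MERGE_RADIUS = 0.15  # meters; assumed tolerance for "same block"

def camera_to_world(robot_pose, detection):
    """Transform a detection (x, y) in the robot frame into the world
    frame using the robot's pose (x, y, heading in radians)."""
    rx, ry, theta = robot_pose
    dx, dy = detection
    wx = rx + dx * math.cos(theta) - dy * math.sin(theta)
    wy = ry + dx * math.sin(theta) + dy * math.cos(theta)
    return wx, wy

def merge_detection(block_map, robot_pose, detection, color):
    """Fuse one perceived block into the map, returning its entry."""
    wx, wy = camera_to_world(robot_pose, detection)
    for entry in block_map:
        if entry["color"] != color:
            continue
        if math.hypot(entry["x"] - wx, entry["y"] - wy) < MERGE_RADIUS:
            # Refine the stored estimate with the new observation.
            n = entry["seen"]
            entry["x"] = (entry["x"] * n + wx) / (n + 1)
            entry["y"] = (entry["y"] * n + wy) / (n + 1)
            entry["seen"] = n + 1
            return entry
    entry = {"x": wx, "y": wy, "color": color, "seen": 1}
    block_map.append(entry)
    return entry
```

The design choice here is that re-observing a block tightens its estimated position instead of creating duplicates, which is what lets the map improve as the robot moves around.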
Next, we would add our movement algorithm, described on the motion page, to plan paths toward a target block without running into the other blocks in our map. We would build this algorithm first without a second robot in mind, but hope to add that functionality down the road.
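As a rough sketch of what planning around mapped blocks involves, the following uses a breadth-first search on a coarse occupancy grid; this stands in for whatever planner the motion page ultimately describes, and the grid resolution and obstacle inflation are assumed to happen upstream.

```python
from collections import deque

def plan_path(rows, cols, start, goal, blocked):
    """Return a list of (row, col) cells from start to goal that avoids
    the cells in `blocked` (mapped block locations, inflated by the
    robot's footprint), or None if the goal is unreachable.
    Uses a 4-connected breadth-first search."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk the parent links back to start to recover the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and nxt not in blocked and nxt not in came_from):
                came_from[nxt] = cell
                frontier.append(nxt)
    return None  # every route to the goal is blocked
```

Returning None for an unreachable goal gives the state machine a clean signal to pick a different target rather than driving into blocks.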
Finally, we would like to slightly refactor our code so that the resource station beliefs drive the state machine loop. Currently, our robot moves to a block, picks it up, and drops it off at a hard-coded location. We would add four stations and let our station beliefs drive the drop-off. This requires refactoring our dropping state to run a perception step that checks the current station's state, and adding a conditional that loops back to moving toward a new resource station when a configuration is invalid.
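The refactored loop described above might look something like this sketch. The state names, the station-belief structure, and the `perceive_station` callback are illustrative assumptions, not our actual code architecture; the point is that the dropping state runs a perception check first and an invalid configuration loops back to choosing a new station.

```python
from enum import Enum, auto

class State(Enum):
    MOVE_TO_BLOCK = auto()
    PICK_UP = auto()
    MOVE_TO_STATION = auto()
    DROP_OFF = auto()
    DONE = auto()

def choose_station(beliefs):
    """Pick the first station currently believed to accept a drop-off."""
    for name, belief in beliefs.items():
        if belief["accepts_block"]:
            return name
    return None

def step(state, beliefs, station, perceive_station):
    """Advance the machine one transition. `perceive_station` is a
    perception callback returning whether the station's observed
    configuration is valid for a drop-off."""
    if state is State.MOVE_TO_BLOCK:
        return State.PICK_UP, station
    if state is State.PICK_UP:
        # Beliefs, not a hard-coded location, select the target station.
        return State.MOVE_TO_STATION, choose_station(beliefs)
    if state is State.MOVE_TO_STATION:
        return State.DROP_OFF, station
    if state is State.DROP_OFF:
        # Perception check before dropping: if the configuration is
        # invalid, update the belief and loop back to a new station.
        if perceive_station(station):
            return State.DONE, station
        beliefs[station]["accepts_block"] = False
        return State.MOVE_TO_STATION, choose_station(beliefs)
    return State.DONE, station
```

The key change from the current hard-coded behavior is the `DROP_OFF` branch: a failed perception check updates the belief map and re-enters `MOVE_TO_STATION` instead of dropping blindly.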
While there is still much to do after this, including adding a second robot, developing an exploration strategy, and getting the implementation working on the real robot, we believe these are the best next steps toward a working implementation of our full state machine (see the code architecture page). We have largely cleared the hurdles that were blocking us in simulation, and we believe that, with more time, this portion of the implementation is within reach. Many roadblocks may still stand in the way of the physical robot, and we have learned that, for the sake of progress, those are best dealt with later.