Collaboration Strategy
To achieve the given collaboration task, we needed a strategy that would let our robot collaborate effectively with any other team in the class. We discussed many approaches, several of which relied on assumptions about how other teams operate. We ultimately added the requirement that our robot be able to collaborate with itself (ruling out leader-follower assumptions) or even complete the task on its own.
Initially, since collaboration is built on achieving a shared goal with a common, understood strategy, we discussed with some other teams the idea of agreeing on a common station set-up strategy (such as ordering stations by distance to the origin) to avoid working around ambiguity and assumptions. However, on a closer reading of the rules, we realized this was not allowed, so we shifted gears.
Instead, we created a strategy that relies on a continuously updated belief about each station. We start by grabbing the most commonly needed block of our color and bringing it to the nearest station whose believed configurations could need that block. We then check the blocks already at the station to see whether adding ours still allows a valid configuration. If so, we place the block, update our beliefs, and continue; otherwise, we update our beliefs and move on to the next nearest station that, according to our updated beliefs, may need the block. We repeat this process until the block is placed.
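The loop above can be sketched in Python. This is a minimal illustration, not our actual implementation: station names, block labels, and the model of a configuration as a set of required blocks are all assumptions made for the example.

```python
# Illustrative sketch of the belief-driven placement loop; station names,
# block labels, and the set-based configuration model are assumptions.

def nearest_candidate(beliefs, distances, block, visited):
    """Nearest unvisited station with a believed configuration needing `block`."""
    candidates = [s for s, configs in beliefs.items()
                  if s not in visited and any(block in c for c in configs)]
    return min(candidates, key=distances.get) if candidates else None

def try_place(beliefs, contents, distances, block):
    """Visit stations in belief-and-distance order until `block` fits validly."""
    visited = set()
    while True:
        station = nearest_candidate(beliefs, distances, block, visited)
        if station is None:
            return None  # no valid placement anywhere: fall back to partial matching
        observed = contents[station]
        # Belief update: keep only configurations consistent with what we observe.
        beliefs[station] = {c for c in beliefs[station] if observed <= c}
        if any(observed | {block} <= c for c in beliefs[station]):
            contents[station].add(block)  # placement preserves a valid configuration
            return station
        visited.add(station)

# Two stations, each believed to accept certain block sets (frozensets).
beliefs = {
    "A": {frozenset({"red", "blue"})},
    "B": {frozenset({"red", "green"}), frozenset({"green", "blue"})},
}
contents = {"A": {"blue"}, "B": {"green"}}
distances = {"A": 1.0, "B": 2.0}
placed_at = try_place(beliefs, contents, distances, "red")
```

Here the robot tries station A first (it is nearer and a believed configuration there needs a red block), finds that adding red still matches a valid configuration, and places the block.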
This strategy can break down if the other robot is not collaborating well and places blocks such that no valid configuration remains possible at any resource station. Because we maintain station beliefs, however, we can fall back from requiring a "valid configuration" to seeking the "most valid configuration": we place our block at the station that most closely matches a valid configuration, so as to maximize points.
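One way to realize this fallback is a simple overlap score, sketched below. The overlap-minus-extras scoring here is a hypothetical stand-in for the actual point rules, and all names are illustrative.

```python
# Hypothetical fallback scoring: rewards blocks that match a believed
# configuration and penalizes extras. Not our exact point formula.

def best_partial_station(beliefs, contents, block):
    """Pick the station where adding `block` comes closest to some
    believed valid configuration."""
    def score(station):
        hypothetical = contents[station] | {block}
        return max(len(hypothetical & c) - len(hypothetical - c)
                   for c in beliefs[station])
    candidates = [s for s in beliefs if beliefs[s]]
    return max(candidates, key=score) if candidates else None

beliefs = {
    "A": {frozenset({"red", "blue"})},
    "B": {frozenset({"green", "yellow"})},
}
# A stray green block at A means no fully valid configuration exists there,
# but adding red to A still matches more closely than adding it to B.
contents = {"A": {"blue", "green"}, "B": {"green"}}
fallback = best_partial_station(beliefs, contents, "red")
```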
As a stretch goal, we planned to fine-tune this algorithm once the core implementation was finished. One future step we wanted to add is the ability for the robot to remove its own blocks from resource stations, fixing configurations whose score would improve with fewer resources. However, this would have further complicated our collaboration strategy, so we chose to start simple and add complexity later.
This collaboration strategy drives our state machine, which lays out the robot's autonomous behavior as it works through the sub-tasks that make up our goal. Click below to see our state machine.