Programming Master Task List
Revision as of 06:58, 30 January 2010
'''Note:''' These tasks are not currently in any particular order!
{| border="1" cellspacing="1" cellpadding="15" width="100%"
|-
| '''Task'''
| '''Assigned To'''
| '''Description'''
| '''Status'''
|-
| Implement ThunderDrive
| Andy
| Implement the ThunderDrive class. Start with the basic functions and then work towards the more complex ones (DriveStraight() and DriveDistance()). Test it very, very, very well as you go! Use ThunderPlucker for testing. (Interface sketch below.)
| DONE
|-
| Implement Kicker
| Calvin
| Implement the Kicker class. Do the best you can with the information we have on the robot design. (Interface sketch below.)
| DONE
|-
| Design Camera class
| Calvin and Justin
| Based on what we've learned in the Vision example, design the interface for a "ThunderCam" class. It should allow the caller to do at least these things: get the left/right angle from straight ahead at which the target is "seen" (enabling the caller to instruct ThunderDrive to turn the robot an appropriate amount), and capture the image for sending to the Dashboard. Not required, but ideas we've talked about: having it pan/tilt around to look for the target, having pan/tilt lock to "straight ahead" for use by the driver as "what the robot sees", and selecting the target -- look for the ball or look for the goal. (Interface sketch below.)
| .
|-
| Refine autonomous
| Justin, Candido, Calvin
| Break down the autonomous strategies into smaller "on field" moves, and identify which moves are used in multiple autonomous strategies. Identifying these will let you break the autonomous code into reusable functions and smaller, more easily debugged steps. Candido started this already (on the wiki) - continue it. Once that is done, start writing function prototypes for the most useful moves. (Example prototypes below.)
| .
|-
| Dashboard implementation, driver's side
| Alex
| Continue work on the LabVIEW Dashboard. The number-one priority is being able to visualize the autonomous selection on the screen (in addition to the current functions). The selected strategy should be shown while the robot is in disabled/autonomous mode. We'll make JPEGs (or another image type) of each strategy; you just need to show these images on the screen. There will be one image per strategy, and each strategy will be identified by a number (1 - ?) sent from the cRIO to the Dashboard.
| .
|-
| Dashboard implementation, robot side
| Calvin and Alex
| Decide what information will be sent to the Dashboard, and then send that information each processing loop in autonomous and teleop. Use last year's code as an example of how to do it. (Sketch below.)
| .
|-
| Basic drive control
| Alex
| Continue work on basic driver control of the drive train. You may wish to move some of your code into its own class to keep it all in one place and organized. (Sketch below.)
| DONE
|-
| Meet with Strategy and Controls to develop operator console
| Alex and Calvin lead; everyone attend to listen
| Once the robot design is more finalized, meet with the potential drive team and Controls to develop a concept and drawings for the operator console, especially control of the main functions for driving and kicking. Make sure to let Controls know about the I/O we'll need for debugging, disabling sensors, and manual control of otherwise automated systems when sensors fail.
| .
|-
| Design hanger class
| Calvin, Alex, Andy
| Once the hanger design is a little more finalized, design the class for driving the hanger in teleop. (Skeleton below.)
| .
|-
| Design right-my-robot class
| Calvin, Alex, Andy
| Once the "right-my-robot" design is a little more finalized, design the class for controlling it in teleop. (Skeleton below.)
| .
|-
| Implement Hanger and "right-my-robot"
| Calvin
| Implement the hanger and "right-my-robot" classes once enough information on their design is available.
| .
|-
| Figure out where best to mount camera
| Candido, Justin, Calvin
| Take a serious look at what mounting height will be optimal for the camera. Take into account how we will find the range to the goal (if necessary). Report the results at the integration meeting and document them on our wiki sensors page.
| .
|}
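Below are rough C++ interface sketches for the classes and functions named in the tasks above. Only the class names and the member functions explicitly mentioned in the task descriptions come from the tasks themselves; every other name, parameter, and unit is an assumption to be revisited as the robot design and our WPILib usage firm up.

'''ThunderDrive''' -- a sketch of the kind of interface the task describes (the class Andy already implemented may differ). Only DriveStraight() and DriveDistance() are named in the task; the other members, the units, and the use of heading feedback are assumptions.

<pre>
// ThunderDrive.h -- interface sketch only; parameters, units, and feedback are assumptions.
#ifndef THUNDER_DRIVE_H
#define THUNDER_DRIVE_H

class ThunderDrive
{
public:
    ThunderDrive();

    // Basic functions: open-loop tank drive, speeds in the range -1.0 to 1.0.
    void TankDrive(float leftSpeed, float rightSpeed);
    void Stop();

    // Named in the task: drive at the given speed while holding heading
    // (assumed to use gyro or encoder feedback).
    void DriveStraight(float speed);

    // Named in the task: drive straight for the given distance (inches assumed);
    // returns true once the distance has been covered.
    bool DriveDistance(float inches, float speed);
};

#endif
</pre>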
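'''Kicker''' -- a minimal sketch assuming the kicker must be re-armed between kicks; none of the member names come from the task, so treat them all as placeholders until the mechanical design is known.

<pre>
// Kicker.h -- placeholder interface; every member here is an assumption.
#ifndef KICKER_H
#define KICKER_H

class Kicker
{
public:
    Kicker();

    // Fire the kicker (only if it is currently armed).
    void Kick();

    // Begin re-arming (winding/cocking) the mechanism after a kick.
    void Arm();

    // True when the kicker is armed and safe to fire.
    bool IsReady();
};

#endif
</pre>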
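'''ThunderCam''' -- the two required calls from the task are "angle from straight ahead to the target" and "capture an image for the Dashboard"; the optional pan/tilt and target-selection ideas are included as commented-out possibilities. Return types, units, and the image type are assumptions based loosely on the Vision example.

<pre>
// ThunderCam.h -- interface sketch; types, units, and parameters are assumptions.
#ifndef THUNDER_CAM_H
#define THUNDER_CAM_H

class ColorImage;   // forward declaration; assumed to match the image type used in the Vision example

class ThunderCam
{
public:
    // What the camera should look for (idea from the task; values assumed).
    enum TargetType { kGoal, kBall };

    ThunderCam();

    // Required: left/right angle (degrees assumed, negative = left) from straight
    // ahead to where the target is currently seen. Returns false if no target is seen.
    bool GetTargetAngle(float &degreesOffCenter);

    // Required: grab the current image so it can be sent to the Dashboard.
    ColorImage *CaptureImage();

    // Ideas discussed but not required:
    // void SearchWithPanTilt();          // pan/tilt around to look for the target
    // void LockStraightAhead();          // point pan/tilt straight ahead for the driver
    // void SelectTarget(TargetType t);   // choose ball vs. goal
};

#endif
</pre>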
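'''Autonomous moves''' -- once the breakdown on the wiki is finished, the reusable moves can be turned into prototypes along these lines. The three moves below are illustrations of the idea only (they are not taken from Candido's breakdown); each should be a small, separately testable function built on ThunderDrive, Kicker, and ThunderCam.

<pre>
// AutoMoves.h -- example prototypes for reusable autonomous moves.
// The specific moves below are illustrative placeholders, not the real breakdown.
#ifndef AUTO_MOVES_H
#define AUTO_MOVES_H

class ThunderDrive;
class Kicker;
class ThunderCam;

// Drive forward a fixed distance (inches assumed), e.g. to reach the next ball.
bool DriveForward(ThunderDrive &drive, float inches);

// Use the camera to turn in place until the selected target is centered.
bool TurnToTarget(ThunderDrive &drive, ThunderCam &cam);

// Kick once the kicker reports it is armed.
bool KickWhenReady(Kicker &kicker);

#endif
</pre>

Having each move return a bool (done / not done) makes it easy to chain moves into the larger strategies and to debug them one at a time.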
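'''Dashboard, robot side''' -- a sketch of gathering the data to send each processing loop, including the autonomous strategy number (1 - ?) that the driver-side LabVIEW Dashboard will use to pick which strategy image to display. The field list is only an example of what we might decide to send, and SendToDashboard() is a hypothetical stand-in: the real packing/sending code should be copied from last year's project, as the task says.

<pre>
// DashboardData.h -- sketch of the per-loop Dashboard update. The field list and
// SendToDashboard() are placeholders until the packing code is copied from last year.
#ifndef DASHBOARD_DATA_H
#define DASHBOARD_DATA_H

struct DashboardData
{
    int   autoStrategy;     // 1 - ?; tells the LabVIEW Dashboard which strategy image to show
    float leftDriveSpeed;   // example telemetry fields -- the actual list is to be decided
    float rightDriveSpeed;
    bool  kickerReady;
    float cameraAngle;      // degrees off center to the current target
};

// Hypothetical send routine: pack the struct and transmit it to the Dashboard.
// Replace the body with the packing code from last year's robot project.
void SendToDashboard(const DashboardData &data);

#endif
</pre>

Filling in one DashboardData and calling SendToDashboard() once at the end of each autonomous and teleop loop keeps the Dashboard traffic regular and easy to reason about.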
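'''Basic drive control''' -- a sketch of pulling the joystick-to-drive-train code into its own class, as the task suggests (the code Alex already wrote may be organized differently). It assumes WPILib's Joystick class, tank-style driving on two sticks, and the ThunderDrive sketch above; the axis choice and sign convention are assumptions.

<pre>
// TeleopDriveControl.h -- sketch of a small class that owns driver-input handling.
#ifndef TELEOP_DRIVE_CONTROL_H
#define TELEOP_DRIVE_CONTROL_H

#include "WPILib.h"       // for Joystick
#include "ThunderDrive.h" // sketch above

class TeleopDriveControl
{
public:
    TeleopDriveControl(Joystick &leftStick, Joystick &rightStick, ThunderDrive &drive)
        : m_left(leftStick), m_right(rightStick), m_drive(drive) {}

    // Call once per teleop loop: read the sticks and command the drive train.
    void Update()
    {
        // Negation assumes pushing a stick forward should drive that side forward.
        m_drive.TankDrive(-m_left.GetY(), -m_right.GetY());
    }

private:
    Joystick &m_left;
    Joystick &m_right;
    ThunderDrive &m_drive;
};

#endif
</pre>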
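'''Hanger and "right-my-robot"''' -- placeholder skeletons to fill in once those mechanisms are finalized. Nothing here beyond the class purposes comes from the tasks; every member name is an assumption.

<pre>
// Hanger.h -- placeholder skeletons for the hanger and "right-my-robot" classes;
// all members are assumptions until the mechanical designs are finalized.
#ifndef HANGER_H
#define HANGER_H

class Hanger
{
public:
    Hanger();
    // Teleop control of the hanger, e.g. from an operator-console axis (-1.0 to 1.0 assumed).
    void Drive(float speed);
    void Stop();
};

class RightMyRobot
{
public:
    RightMyRobot();
    // Trigger and reset the self-righting action from teleop.
    void Deploy();
    void Retract();
};

#endif
</pre>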