## Download and install [cosypose](https://github.com/ylabbe/cosypose):
### Follow the installation [here](https://github.com/ylabbe/cosypose#installation)
* __Note:__ If you don't have ROS installed on your machine and want to install it through Conda, you can replace the [environment.yaml](https://github.com/ylabbe/cosypose/blob/master/environment.yaml) from **_cosypose_** with the [environment.yaml](https://github.com/EtienneAr/ros_cosypose/blob/master/environment.yaml) from **_ros_cosypose_** and proceed with the installation. _(This environment file includes ROS and the other dependencies of ros_cosypose, while relaxing some version constraints so that conda is able to solve the environment.)_ You may also want to use [mamba](https://mamba.readthedocs.io/en/latest/) if the dependency conflicts are too hard for conda to solve.
### Download the data [here](https://github.com/ylabbe/cosypose#downloading-and-preparing-data):
This should start RViz and bring up your first USB camera (you can change which camera with the `camera_id` argument).
* __Note:__ The camera must be calibrated first. See [this section](#usb-camera-calibration)
## Details
### Nodes
#### pose_estimation.py node :
This node is in charge of running the cosypose inference and provides the interface with ROS.
##### Parameters
* `~dataset` (string, "ycbv" or "tless"): determines which dataset to use for running the inference.
* `~debug` (boolean): if true, publishes some extra messages on debug topics for displaying information in RViz.
* `~debug_models` (string): path to the models to display for debug (relative path from the cosypose local_data directory).
* `~detection_threshold` (float, 0 to 1): read at each inference to determine the detector threshold.
##### Services
This node creates a service `/pose_estimation`, which triggers the inference. Each call expects as input:
* The camera intrinsic matrix
* The image
* **Optionally,** a list of `CosyObjects` for warm-starting the pose estimation.
(If this list is not empty, the detection phase is skipped and the pose estimation runs only on the given objects.)
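For reference, here is a sketch of the first input, the pinhole intrinsic matrix. The focal lengths and principal point below are made-up illustrative values; use your own camera's calibrated ones.

```python
# Hypothetical intrinsics for illustration only.
fx, fy = 615.0, 615.0   # focal lengths, in pixels
cx, cy = 320.0, 240.0   # principal point, in pixels

# Standard 3x3 pinhole intrinsic matrix.
K = [
    [fx, 0.0, cx],
    [0.0, fy, cy],
    [0.0, 0.0, 1.0],
]

# sensor_msgs/CameraInfo stores it as a flat, row-major list of 9 floats.
K_flat = [v for row in K for v in row]
```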
##### Published topics
In debug mode, the node publishes messages on each call of the service. These messages can then be displayed in RViz:
* `/ros_cosypose/pose_debug`: a 6D PoseArray (which RViz can display as frames).
* `/ros_cosypose/marker_debug`: a MarkerArray containing the mesh and pose of each detected object.
#### pose_estimation_loop.py node :
This node subscribes to a camera image topic and runs the pose estimation in a loop on the latest image (by calling the `/pose_estimation` service).
(It can also "track" objects by warm-starting every estimation with the result of the previous one.)
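The tracking behaviour can be sketched in plain Python. The image source and service call below are illustrative stand-ins, not the actual ROS API of this package:

```python
def estimation_loop(get_latest_image, call_pose_estimation, iterations, allow_tracking=True):
    """Run pose estimation repeatedly, optionally warm-starting each call.

    get_latest_image / call_pose_estimation are hypothetical stand-ins for
    the camera subscription and the /pose_estimation service client.
    """
    previous_objects = []  # empty list: the detection phase runs normally
    history = []
    for _ in range(iterations):
        image = get_latest_image()
        # If warm-start objects are given, detection is skipped and the
        # pose estimation is only run on those objects.
        objects = call_pose_estimation(image, warm_start=previous_objects)
        if allow_tracking:
            previous_objects = objects  # "track": reuse the last result
        history.append(objects)
    return history
```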
##### Parameters
* `camera_name` (string): name of the camera to subscribe to.
* `allow_tracking` (boolean): if true, the detector is used only at the first iteration; afterwards, cosypose is constantly warm-started with previously seen objects.
##### Subscribed topics
* `/<camera_name>/camera_info`: camera intrinsic data.
* `/<camera_name>/image_rect_color`: camera images.
##### Published topics
* `cosyobject_list` (CosyObjectArray): list of every object seen during the last inference, along with its pose.
### Launchfiles
#### singleview_service.launch
Launches the `pose_estimation.py` node with its `/pose_estimation` service. Also launches RViz if `debug` is set to true.
##### Arguments
* `dataset`, `debug`, `debug_models`, `detection_threshold`: See [pose_estimation.py](#pose_estimationpy-node-) parameters.
#### singleview_loop.launch
Calls `singleview_service.launch` and then launches the `pose_estimation_loop.py` node in order to run the inference in a loop. This launchfile can also bring up your USB camera in ROS as the image source.
##### Arguments
* `dataset`, `debug`, `debug_models`, `detection_threshold`: See [pose_estimation.py](#pose_estimationpy-node-) parameters.
* `camera_name`, `allow_tracking`: See [pose_estimation_loop.py](#pose_estimation_looppy-node-) parameters.
* `bringup_camera` (boolean): if set to true, brings up the USB camera and rectifies its image. (See [camera calibration](#usb-camera-calibration) if necessary.)
* `camera_id` (integer, optional): id of the USB camera to bring up. If set, do not set the `camera_name` parameter, as it can create conflicts.
# Troubleshooting
## Common errors
### Cosypose throwing `LOCAL_DATA_DIR` error
Make sure that you have created the `local_data` directory in the right place (i.e. inside the directory where cosypose is installed).
### Python error related to `dataclasses` module
The `dataclasses` module is not needed anymore but gets installed anyway as a requirement of other packages. Simply uninstall it:
```
pip uninstall dataclasses
```
### Python error related to yaml constructor
For example:
`yaml.constructor.ConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:python/object:argparse.Namespace' in "<unicode string>", line 1, column 1: !!python/object:argparse.Namespace`
Force the correct version of pyyaml with:
```
pip install pyyaml==5.1
```
## USB Camera Calibration
You will need a large [checkerboard](http://wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration?action=AttachFile&do=get&target=check-108.pdf) with known dimensions; the linked one is an 8x6 checkerboard. (Calibration uses the interior vertex points of the checkerboard, so a "9x7" board uses the interior vertex parameter "8x6".)
A window should pop up, and the grid should be recognized with color markers.
After "taking" enough pictures in various configurations, click __CALIBRATE__, wait for it to compute, and then __COMMIT__ (this should store the calibration data in the correct place).
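The square-count versus interior-vertex convention above is a common source of confusion; it can be written down explicitly (a trivial sketch, with a function name of our own choosing):

```python
def interior_vertices(squares_per_row, squares_per_col):
    """Grid size expected by the calibrator: the interior vertex points,
    i.e. one fewer than the number of squares along each dimension."""
    return (squares_per_row - 1, squares_per_col - 1)

# A board with 9x7 squares is therefore passed to the calibrator as "8x6".
```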