diff --git a/ParticleDetection/src/ParticleDetection/modelling/__init__.py b/ParticleDetection/src/ParticleDetection/modelling/__init__.py
index f346084..408c30e 100644
--- a/ParticleDetection/src/ParticleDetection/modelling/__init__.py
+++ b/ParticleDetection/src/ParticleDetection/modelling/__init__.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 Adrian Niemann Dmitry Puzyrev
+# Copyright (c) 2023-2024 Adrian Niemann Dmitry Puzyrev
#
# This file is part of ParticleDetection.
# ParticleDetection is free software: you can redistribute it and/or modify
diff --git a/RodTracker/CHANGELOG.md b/RodTracker/CHANGELOG.md
index 8364f27..863f8f1 100644
--- a/RodTracker/CHANGELOG.md
+++ b/RodTracker/CHANGELOG.md
@@ -1,3 +1,14 @@
+## [v0.6.5]
+
+### Added
+- shows a warning message when loading coordinate data for only one camera (instead of producing an error)
+- shows a warning message when attempting to solve tracking or update the graphs while only one camera's data is loaded
+
+### Fixed
+- updated the documentation of RodTracker with a better explanation of the workflow and the usage of the example data
+- other small documentation updates
+
+
## [v0.6.4]
### Added
diff --git a/RodTracker/README.md b/RodTracker/README.md
index a8f1a2d..1707a85 100644
--- a/RodTracker/README.md
+++ b/RodTracker/README.md
@@ -53,7 +53,7 @@ Run the **RodTracker** GUI using one of the possibilities:
```shell
YOUR/REPO/PATH/RodTracker/src/RodTracker$ python main.py
```
- - *(Python Package)* Use the registered command:
+ - *(Python Package)* Run the GUI script entry point:
```shell
ARBITRARY/PATH$ RodTracker
```
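For context on the command above: when installed as a Python package, `RodTracker` is made available as an entry point by the installer. A minimal sketch of how such a GUI entry point is typically declared via setuptools; the module path `RodTracker.main:main` is an assumption for illustration, not necessarily this project's actual configuration:

```python
# Hypothetical sketch of a setuptools entry-point declaration; the target
# module path "RodTracker.main:main" is an assumption, not this project's
# verified configuration.
entry_points = {
    "gui_scripts": [
        "RodTracker = RodTracker.main:main",
    ],
}

# After `pip install`, the installer generates a `RodTracker` executable
# that imports the target module and calls the named function.
command, target = entry_points["gui_scripts"][0].split(" = ")
print(command, "->", target)
```

This is why the command works from an arbitrary directory: the generated script lives on the `PATH` of the active Python environment.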
diff --git a/RodTracker/src/RodTracker/backend/rod_data.py b/RodTracker/src/RodTracker/backend/rod_data.py
index d900139..cdb38dc 100644
--- a/RodTracker/src/RodTracker/backend/rod_data.py
+++ b/RodTracker/src/RodTracker/backend/rod_data.py
@@ -413,6 +413,31 @@ def open_rod_folder(self, chosen_folder: Path) -> bool:
cams = [
col.split("_")[-1] for col in columns if re.fullmatch(RE_SEEN, col)
]
+
+ if len(cams) == 1:
+ msg = QMessageBox()
+ msg.setWindowIcon(QtGui.QIcon(fl.icon_path()))
+ msg.setIcon(QMessageBox.Warning)
+ msg.setWindowTitle(RodTracker.APPNAME)
+ msg.setText(
+ "Warning: you are loading data with coordinates "
+ "for only one camera view."
+ )
+ btn_OK = msg.addButton("OK", QMessageBox.ActionRole)
+ btn_Reload = msg.addButton(
+ "Load another folder", QMessageBox.ActionRole
+ )
+ msg.exec()
+ user_decision = msg.clickedButton()
+ if user_decision == btn_OK:
+ # Load data for one camera only
+ pass
+ elif user_decision == btn_Reload:
+ # Abort loading and restart the folder selection process
+ return False
+ while len(cams) < 2:
+ cams.append("")
+
cols_pos_2d = [col for col in columns if re.fullmatch(RE_2D_POS, col)]
cols_seen = [col for col in columns if re.fullmatch(RE_SEEN, col)]
cols_pos_3d = [col for col in columns if re.fullmatch(RE_3D_POS, col)]
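The camera-handling logic in the hunk above (deriving camera IDs from the "seen" columns and padding the list so downstream code can always address two cameras) can be sketched standalone. The regex below is an illustrative stand-in: the actual `RE_SEEN` pattern and column names in `rod_data.py` may differ.

```python
import re

# Assumed pattern for "seen" columns, e.g. "seen_gp3" -> camera ID "gp3".
# The real RE_SEEN in rod_data.py may be stricter; this is a stand-in.
RE_SEEN = r"seen_.*"


def extract_cams(columns):
    """Derive camera IDs from dataset columns, padded to two entries."""
    cams = [
        col.split("_")[-1] for col in columns if re.fullmatch(RE_SEEN, col)
    ]
    # Pad with empty IDs so two-camera code paths stay usable when data
    # for only one camera view was loaded.
    while len(cams) < 2:
        cams.append("")
    return cams


print(extract_cams(["x1_gp3", "seen_gp3"]))  # -> ['gp3', '']
```

With only one matching column the second entry stays empty, which is exactly the single-camera situation the new warning dialog reports to the user.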
diff --git a/RodTracker/src/RodTracker/ui/dialogs.py b/RodTracker/src/RodTracker/ui/dialogs.py
index 71189e9..d5ee307 100644
--- a/RodTracker/src/RodTracker/ui/dialogs.py
+++ b/RodTracker/src/RodTracker/ui/dialogs.py
@@ -327,7 +327,7 @@ def show_about(parent: QtWidgets.QWidget):
-            Copyright © 2023 Adrian Niemann, Dmitry Puzyrev
+            Copyright © 2023-2024 Adrian Niemann, Dmitry Puzyrev
             """
         )
diff --git a/docs/source/RodTracker/RodTracker.md b/docs/source/RodTracker/RodTracker.md
index e2b86ee..74cb73d 100644
--- a/docs/source/RodTracker/RodTracker.md
+++ b/docs/source/RodTracker/RodTracker.md
@@ -9,7 +9,7 @@ Run the **RodTracker** GUI using one of the possibilities:
 ```shell
 YOUR/REPO/PATH/RodTracker/src/RodTracker$ python main.py
 ```
-  - Use the registered command:
+  - Run the GUI script entry point:
 ```shell
 ARBITRARY/PATH$ RodTracker
 ```
@@ -26,34 +26,41 @@ Open images from disk using the `File` dropdown menu, the `Load Images` button o
 Refer to [](#dataset-format--folder-structure) for the correct dataset structure. You can also use the example dataset located in `./RodTracker/src/RodTracker/resources/example_data`.
 ```{hint}
-You are supposed to select a folder containing images, **NOT** the images themselves.
+Select the folder that contains the images, **NOT** the images themselves.
+You can start by opening the example folder `./RodTracker/src/RodTracker/resources/example_data/gp3` with images from one of the cameras.
+```
+```{note}
+RodTracker is intended for working with stereo image data (two camera views). You can import the images, perform detection, and correct the 2D coordinate data for one view at a time, but this may produce warning messages.
+To combine data that was corrected separately, you should understand the dataset structure and know how to properly concatenate the coordinate data, see [](#saving).
 ```
-You can now switch between images in the folder using the `left`/`right` keys, the `Previous`/`Next` buttons or the `Slider` below.
+After loading the folder with the images, you can switch between them using the `left`/`right` keys, the `Previous`/`Next` buttons or the `Slider` below.
 ### Dataset format & folder structure
-For the RodTracker to work properly certain folder structures, naming conventions, and file structures must be followed, which is shown below.
-All of these can also be viewed in the `RodTracker/src/RodTracker/resources/example_data` directory and that is available at `resources/example_data` that in the bundled RodTracker app folder.
+For the RodTracker to work properly, certain folder structures, naming conventions, and file structures must be followed. A working folder and file structure for the stereo-camera images is shown below.
 ```
 |.
-├── gp1
+├── [FIRST CAMERA NAME]
 │ ├── 001.jpg
 │ ├── 002.jpg
 │ ...
 │ └── 321.jpg
-└── gp2
+└── [SECOND CAMERA NAME]
 ├── 001.jpg
 ├── 002.jpg
 ...
 └── 321.jpg
 ```
-When loading images the RodTracker will look for `*.png`, `*.jpeg`, and `*.jpg` files in the directory the file is in the user just chose.
+The example data, based on a short image sequence from one of the granular gas experiments, is provided in the `RodTracker/src/RodTracker/resources/example_data` directory, or at `resources/example_data` in the bundled RodTracker app folder.
+Note that in the example dataset, [FIRST CAMERA NAME] is `gp3`, [SECOND CAMERA NAME] is `gp4`, and the frame numbers count from 500 to 519.
+
+When loading images the RodTracker will look for `*.png`, `*.jpeg`, and `*.jpg` files in the chosen directory.
 It will attempt to convert the filename to an integer, therefore keep a naming convention that allows instant conversion to integers. Leading 0s are usually not a problem for this.
 The folder name is then used as an ID for the loaded images and respective particle position data associated with them.
@@ -61,7 +68,7 @@ The folder name is then used as an ID for the loaded images and respective parti
 Now the particles in the loaded images can be detected using a saved model.
 1. Load the model in the `Detection` tab and select a frame range that the model shall identify rods on.
-   - The `Use Example Model` button downloads [this model](https://zenodo.org/records/10255525) used for rod detection (specifically the *model_cpu.pt* file). It is intended for detecting rods in the example images.
+   - The `Use Example Model` button downloads [this model](https://zenodo.org/records/10255525) used for rod detection (specifically the *model_cpu.pt* or *model_cuda.pt* file, corresponding to CPU or GPU installations of **RodTracker**). It is intended for detecting rods in the example images.
 2. Deselect any color class that you are not interested in.
 3. Select the *default* expected number of particles, i.e. how many particles per color are expected in each image.
 4. Adjust the number of particles per color, if needed, in the table and check the box to use the customized value during detection.
@@ -149,7 +156,7 @@ The dataset can be saved as `*.csv` files. Each class(/color) is saved to an ind
 **Example output structure:**
 ```
 |.
-└── output
+└── [SAVE FOLDER NAME]
 ├── rods_df_black.csv
 ├── rods_df_blue.csv
 ...
@@ -164,6 +171,7 @@ The dataset can be saved as `*.csv` files. Each class(/color) is saved to an ind
 ## Loading rod position data
 To load a position dataset in the form shown in [](#saving), click on the `Load Rods` button and select the folder with the desired `*.csv` files.
+For the [example dataset](#dataset-format--folder-structure), pre-corrected rod position data is provided in the `resources/example_data/csv` folder.
 ```{hint}
 You are supposed to select a folder containing position data files, **NOT** the data files themselves.
@@ -175,19 +183,22 @@ If no rods are shown after selecting a folder check, that the correct image data
 ## Rod tracking and 3D coordinate reconstruction
-Eventually the 2D position data of the rods shall be used to reconstruct their 3D coordinates throughout the experiment. This can be achieved in the `3D-Reconstruct` tab. Prerequisites for this are to have the stereo-camera calibration data (and transformation from the first camera's coordinate system to the experiment's coordinate system).
+Eventually the 2D position data of the rods shall be used to reconstruct their 3D coordinates throughout the experiment. This can be achieved in the `3D-Reconstruct` tab. Prerequisites for this are:
+- corresponding images loaded for both cameras
+- rod position data for both cameras, detected or loaded from file (for optimal results, the data should be corrected)
+- stereo-camera calibration data (and a transformation from the first camera's coordinate system to the experiment's coordinate system)
-After successful reconstruction of 3D coordinates, the `3D-View` tab will display the 3D data for the current frame and the `Reconstruction performance` plots are available (after updating them).
+Prepare the reconstruction:
+1. Select a frame range.
+2. (De-)select particle colors.
+3. Toggle whether to track particles or just reconstruct their 3D coordinates on each frame (`Tracking` checkbox, see [](#tracking-vs-reconstruction-only)).
+4. Select a stereo camera calibration (see [](#calibration--transformation-data-format)). For the [example dataset](#dataset-format--folder-structure), use the stereo calibration file `resources/example_data/calibrations/gp34.json`.
+5. Select a transformation to experiment coordinates file *(this is not strictly necessary but will benefit the visualization in the `3D-View` tab)* (see [](#calibration--transformation-data-format)). For the [example dataset](#dataset-format--folder-structure), use the coordinate transformation file `resources/example_data/calibrations/transformation.json`.
+Start the reconstruction by pressing the `Solve` button after making all required settings.
-Prepare the reconstruction:
-- select a frame range
-- (de-)select colors
-- toggle whether to track particles or just reconstruct their 3D coordinates
-- select a camera calibration (see [](#calibration--transformation-data-format))
-- select a transformation to experiment coordinates file *(this is not strictly necessary but will benefit the visualization in the `3D-View` tab)* (see [](#calibration--transformation-data-format))
-Start the reconstruction by pressing the `Solve` button after making all required settings.
+After successful reconstruction of 3D coordinates, the `3D-View` tab will display the 3D data for the current frame, and the `Reconstruction performance` plots become available (after updating them).
 ```{note}
 Unlike during the detection of particles, the results will only be accessible after completion of the process or aborting. When the process is aborted all intermediate results will be integrated in the dataset and accessible in the GUI.
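The per-color file naming used when saving and loading position data (`rods_df_<color>.csv`, as shown in the documentation changes above) can be sketched as a small helper. The function name is hypothetical and for illustration only; it is not part of RodTracker's API.

```python
import re


# Hypothetical helper illustrating the per-color dataset naming scheme
# (rods_df_<color>.csv) described in the docs; not RodTracker's actual API.
def color_from_filename(name: str):
    """Extract the color class from a dataset file name, or None."""
    match = re.fullmatch(r"rods_df_(\w+)\.csv", name)
    return match.group(1) if match else None


print(color_from_filename("rods_df_black.csv"))  # -> black
```

A loader following this convention would scan the chosen folder for files matching the pattern and read one DataFrame per color, which is why the docs insist on selecting the folder rather than the individual `*.csv` files.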