Edit and execute pipeline
Once ArUco markers are placed in a scene, they can be detected by the ArUcoCamera class.
As ArUcoCamera inherits from ArFrame, it also benefits from all the services described in the gaze analysis pipeline section.
Once defined, an ArUco marker pipeline needs to be embedded inside a context that provides it with both the gaze positions and the camera images to process.
Edit JSON configuration
Here is a simple JSON ArUcoCamera configuration example:
```json
{
    "argaze.ArUcoMarker.ArUcoCamera.ArUcoCamera": {
        "name": "My FullHD camera",
        "size": [1920, 1080],
        "aruco_detector": {
            "dictionary": "DICT_APRILTAG_16h5"
        },
        "gaze_movement_identifier": {
            "argaze.GazeAnalysis.DispersionThresholdIdentification.GazeMovementIdentifier": {
                "deviation_max_threshold": 25,
                "duration_min_threshold": 150
            }
        },
        "image_parameters": {
            "background_weight": 1,
            "draw_detected_markers": {
                "color": [0, 255, 0],
                "draw_axes": {
                    "thickness": 3
                }
            },
            "draw_gaze_positions": {
                "color": [0, 255, 255],
                "size": 2
            },
            "draw_fixations": {
                "deviation_circle_color": [255, 0, 255],
                "duration_border_color": [127, 0, 127],
                "duration_factor": 1e-2
            },
            "draw_saccades": {
                "line_color": [255, 0, 255]
            }
        }
    }
}
```
Let's understand the meaning of each JSON entry.
argaze.ArUcoMarker.ArUcoCamera.ArUcoCamera
The class name of the object being loaded.
name - inherited from ArFrame
The name of the ArUcoCamera frame. It is mainly used for visualization purposes.
size - inherited from ArFrame
The size of the ArUcoCamera frame in pixels. Be aware that gaze positions have to be in the same range of values to be projected into it.
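Many eye trackers deliver gaze positions as normalized coordinates rather than pixels. The helper below is a minimal sketch, not part of the ArGaze API, showing how such coordinates could be mapped into the pixel space of the 1920x1080 frame declared above:

```python
# Hypothetical helper (not part of ArGaze): map normalized gaze
# coordinates in [0, 1] to the pixel space of the ArUcoCamera frame.

FRAME_SIZE = (1920, 1080)  # must match the "size" entry of the JSON configuration

def to_frame_pixels(norm_x: float, norm_y: float, size=FRAME_SIZE) -> tuple:
    """Scale normalized (0..1) coordinates to frame pixel coordinates."""
    width, height = size
    return (round(norm_x * width), round(norm_y * height))

print(to_frame_pixels(0.5, 0.5))  # center of the frame
```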
aruco_detector
The first ArUcoCamera pipeline step is to detect ArUco markers inside the input image.
The ArUcoDetector is in charge of detecting all markers from a specific dictionary.
Note: the aruco_detector JSON entry is mandatory.
gaze_movement_identifier - inherited from ArFrame
The first ArFrame pipeline step is dedicated to identifying fixations and saccades from consecutive timestamped gaze positions.
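The deviation_max_threshold (in pixels) and duration_min_threshold (in milliseconds) entries of the JSON example parameterize a dispersion-threshold identification. Here is a minimal sketch of the classic I-DT idea behind it; this is a simplified illustration, not the ArGaze DispersionThresholdIdentification implementation:

```python
# Simplified dispersion-threshold identification (I-DT) sketch.
# Thresholds mirror the JSON example above.
DEVIATION_MAX = 25   # pixels
DURATION_MIN = 150   # milliseconds

def identify_fixations(samples):
    """samples: list of (timestamp_ms, x, y) tuples.
    Return a list of (start_ms, end_ms, centroid_x, centroid_y) fixations."""

    def close(window, fixations):
        # Keep the window as a fixation if it lasted long enough
        if window and window[-1][0] - window[0][0] >= DURATION_MIN:
            cx = sum(p[1] for p in window) / len(window)
            cy = sum(p[2] for p in window) / len(window)
            fixations.append((window[0][0], window[-1][0], cx, cy))

    fixations, window = [], []
    for t, x, y in samples:
        window.append((t, x, y))
        xs = [p[1] for p in window]
        ys = [p[2] for p in window]
        # Dispersion as the summed spread of the window in each axis
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > DEVIATION_MAX:
            # Spread too far: the last sample starts a saccade,
            # so close the preceding window and restart from it
            close(window[:-1], fixations)
            window = [window[-1]]
    close(window, fixations)
    return fixations
```

For instance, gaze samples dwelling around one point for 200 ms, jumping away, then dwelling around another point would yield two fixations.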
image_parameters - inherited from ArFrame
The usual ArFrame visualization parameters plus one additional draw_detected_markers field.
Pipeline execution
A pipeline needs to be embedded into a context to be executed.
Copy the ArUcoCamera pipeline configuration defined above inside the following context configuration.
```json
{
    "argaze.utils.contexts.OpenCV.Movie": {
        "name": "Movie player",
        "path": "./src/argaze/utils/demo/tobii_record/segments/1/fullstream.mp4",
        "pipeline": JSON CONFIGURATION
    }
}
```
Then, use the load command to execute the context.
```shell
python -m argaze load CONFIGURATION
```
This command should open a GUI window displaying the detected markers, along with circles for the fixations identified from the cursor as the mouse moves over the window.
At this point, the pipeline only processes gaze movement identification, without any AOI support, as no scene description is provided in the JSON configuration file.
Read the next chapters to learn how to estimate the scene pose, how to describe the AOI of a 3D scene, and how to project them into the camera frame.