Edit and execute pipeline

The ArFrame class defines a rectangular area into which timestamped gaze positions are projected and inside which they are analyzed.

Once defined, a gaze analysis pipeline needs to be embedded inside a context that provides it with gaze positions to process.

[Figure: Frame]

Edit JSON configuration

Here is a simple JSON ArFrame configuration example:

{
    "argaze.ArFeatures.ArFrame": {
        "name": "My FullHD screen",
        "size": [1920, 1080],
        "gaze_movement_identifier": {
            "argaze.GazeAnalysis.DispersionThresholdIdentification.GazeMovementIdentifier": {
                "deviation_max_threshold": 50,
                "duration_min_threshold": 200
            }
        },
        "scan_path": {
            "duration_max": 30000
        },
        "scan_path_analyzers": {
            "argaze.GazeAnalysis.Basic.ScanPathAnalyzer": {},
            "argaze.GazeAnalysis.ExploreExploitRatio.ScanPathAnalyzer": {
                "short_fixation_duration_threshold": 0
            }
        }
    }
}
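
Such a configuration can also be loaded from Python. Here is a minimal sketch, assuming the JSON above is saved as frame_configuration.json (a hypothetical file name) and that the argaze.load function returns the ArFrame instance described by the file:

from argaze import load

# Load the ArFrame described by the JSON configuration above
with load('./frame_configuration.json') as ar_frame:

    # Each JSON entry becomes an attribute of the loaded object
    print(ar_frame.name)   # My FullHD screen
    print(ar_frame.size)   # [1920, 1080]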

Let's understand the meaning of each JSON entry.

argaze.ArFeatures.ArFrame

The class name of the object being loaded.

name

The name of the ArFrame. It is mostly useful for visualization purposes.

size

The size of the ArFrame defines the dimensions of the rectangular area where gaze positions are projected. Be aware that gaze positions have to be in the same range of values to be projected.

Free spatial unit

Gaze positions can be integers or floats, in pixels, millimeters, or whatever unit you need. The only constraint is that all spatial values used in further configuration have to be in the same unit.
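
For instance, with the 1920x1080 frame above, gaze positions are expected as pixel coordinates. A minimal sketch, assuming the GazeFeatures.GazePosition constructor accepts an (x, y) tuple:

from argaze import GazeFeatures

# Center of the 1920x1080 frame above, expressed in pixels
gaze_position = GazeFeatures.GazePosition((960, 540))

# Millimeters would work equally well, provided the frame size, the
# identifier thresholds and all other spatial values use millimeters too.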

gaze_movement_identifier

The first ArFrame pipeline step is to identify fixations or saccades from consecutive timestamped gaze positions.

[Figure: Gaze movement identifier]

The identification algorithm can be selected by instantiating a particular GazeMovementIdentifier from the GazeAnalysis submodule or from another Python package.

In the example file, the chosen identification algorithm is the Dispersion Threshold Identification (I-DT), which has two specific attributes: deviation_max_threshold and duration_min_threshold.

Note

In ArGaze, Fixation and Saccade are considered as particular GazeMovements.

Mandatory

The JSON gaze_movement_identifier entry is mandatory: without it, the ScanPath and ScanPathAnalyzers steps are disabled.
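
For reference, here is a minimal sketch of how the same I-DT identifier could be instantiated directly from Python, assuming the JSON entries map one-to-one to constructor keyword arguments and that durations are expressed in milliseconds:

from argaze.GazeAnalysis import DispersionThresholdIdentification

gaze_movement_identifier = DispersionThresholdIdentification.GazeMovementIdentifier(
    deviation_max_threshold=50,  # same spatial unit as the frame size
    duration_min_threshold=200   # assuming durations are in milliseconds
)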

scan_path

The second ArFrame pipeline step aims to build a ScanPath, defined as a list of ScanSteps, each made of a fixation and the saccade that follows it.

[Figure: Scan path]

Once fixations and saccades are identified, they are automatically appended to the ScanPath when this step is enabled.

The ScanPath.duration_max attribute is the duration beyond which older scan steps are removed each time new scan steps are added.

Optional

The JSON scan_path entry is not mandatory: if the scan_path_analyzers entry is not empty, the ScanPath step is automatically enabled.
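
To illustrate the rolling-window behavior described above, here is a hedged sketch, assuming a GazeFeatures.ScanPath class whose duration_max attribute can be set at creation:

from argaze import GazeFeatures

# Keep at most 30 seconds (30000 ms) of scan steps
scan_path = GazeFeatures.ScanPath(duration_max=30000)

# As fixations and saccades are appended by the pipeline, any scan step
# older than duration_max relative to the newest one is dropped.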

scan_path_analyzers

Finally, the last ArFrame pipeline step consists of passing the previously built ScanPath to each loaded ScanPathAnalyzer.

Each analysis algorithm can be selected by instantiating a particular ScanPathAnalyzer from the GazeAnalysis submodule or from another Python package.

In the example file, the chosen analysis algorithms are the Basic module and the ExploreExploitRatio module, which has one specific attribute: short_fixation_duration_threshold.
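
For reference, here is a minimal sketch of the equivalent Python instantiation, mirroring the JSON entries above and assuming the JSON attributes map to constructor keyword arguments:

from argaze.GazeAnalysis import Basic, ExploreExploitRatio

basic_analyzer = Basic.ScanPathAnalyzer()
xxr_analyzer = ExploreExploitRatio.ScanPathAnalyzer(short_fixation_duration_threshold=0)

# Each analyzer processes the ScanPath built by the previous pipeline step.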

Pipeline execution

A pipeline needs to be embedded into a context to be executed.

Copy the gaze analysis pipeline configuration defined above in place of the JSON CONFIGURATION placeholder in the following context configuration.

{
    "argaze.utils.contexts.Random.GazePositionGenerator": {
        "name": "Random gaze position generator",
        "range": [1920, 1080],
        "pipeline": JSON CONFIGURATION
    }
}

Then, use the load command to execute the context, where CONFIGURATION is the path to the resulting JSON file.

python -m argaze load CONFIGURATION

This command should open a GUI window with a random yellow dot and the identified fixation circles.

[Figure: ArGaze load GUI]
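
The same context can also be executed from a Python script. Here is a minimal sketch, assuming the merged configuration is saved as configuration.json (a hypothetical file name) and that the object returned by argaze.load is usable as a context manager:

from argaze import load

# Load and start the context described by the merged JSON configuration
with load('./configuration.json') as context:

    # The random generator feeds gaze positions to the pipeline while
    # the context is alive; wait here until the user stops the script.
    input('Press Enter to exit...')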

At this point, the pipeline only processes gaze movement identification and scan path analysis, without any AOI, recording, or visualization support.

Read the next chapters to learn how to describe AOIs, add AOI analysis, record gaze analysis, and visualize pipeline steps.