# Python API Documentation

# Module Functions

# get_sdk_version_string()

Returns the SDK version string.

Returns: str

# Tracker class

# class realeyes.emotion_detection.Tracker(model_file, max_concurrency=0)

The emotion tracker class.

# __init__(self, model_file, max_concurrency=0)

Tracker constructor: loads model file, sets up the processing.

Parameters:

  • model_file (str) - path to the model file to load
  • max_concurrency (int) - maximum allowed concurrency; 0 means automatic (use all cores). Default: 0

# track(image, timestamp_in_ms)

Tracks the given frame.

Parameters:

  • image (numpy.ndarray) - frame from the video
  • timestamp_in_ms (int) - timestamp of the frame

Returns: TrackingResult
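The constructor and `track()` together form the core loop. A minimal sketch of tracking a whole video, assuming the `realeyes` package and OpenCV (`cv2`) are installed; the video path, model path, and the `track_video` helper itself are illustrative, not part of the SDK:

```python
def track_video(video_path, model_file):
    """Hypothetical sketch: run the tracker over every frame of a video."""
    import cv2
    import realeyes.emotion_detection

    tracker = realeyes.emotion_detection.Tracker(model_file)
    capture = cv2.VideoCapture(video_path)
    results = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # Frame timestamp in milliseconds, as track() expects.
        timestamp_ms = int(capture.get(cv2.CAP_PROP_POS_MSEC))
        results.append(tracker.track(frame, timestamp_ms))
    capture.release()
    return results
```

Since a fresh `Tracker` is created per video here, `reset_tracking()` is not needed; reuse one tracker across videos only if you call `reset_tracking()` between them.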

# reset_tracking()

Resets the internal tracking state. Should be called when a new video sequence starts.

# get_emotion_ids()

Returns the emotion IDs provided by the loaded model. The order is the same as in the TrackingResult.

Returns: list[EmotionID]

# get_emotion_names()

Returns the emotion names provided by the loaded model. The order is the same as in the TrackingResult.

Returns: list[str]
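Because `get_emotion_ids()` and `get_emotion_names()` return parallel lists, zipping them gives a lookup table from ID to human-readable name. The values below are illustrative stand-ins; a real tracker would supply them via `ids = tracker.get_emotion_ids()` and `names = tracker.get_emotion_names()`:

```python
# Illustrative sample values in the parallel order the getters guarantee.
ids = [4, 6, 100]        # e.g. HAPPY, SURPRISE, ATTENTION
names = ["happy", "surprise", "attention"]

# Lookup table: emotion ID -> name.
id_to_name = dict(zip(ids, names))
print(id_to_name[4])  # happy
```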

# get_model_name()

Returns the name (including version information) of the loaded model.

Returns: str

# minimum_face_ratio: float

Current minimum face size as a ratio of the smaller image dimension.

# is_face_tracking_enabled()

Returns whether the face tracker is enabled.

Returns: bool

# set_face_tracking_enabled(enable: bool)

Sets the face tracker to be enabled or disabled.

Parameters:

  • enable (bool) - new value

# is_emotion_enabled(emotion_id)

Returns whether the specified emotion is enabled.

Parameters:

  • emotion_id (EmotionID) - emotion to query

Returns: bool

# set_emotion_enabled(emotion_id: EmotionID, enable: bool)

Sets the specified emotion to enabled or disabled.

Parameters:

  • emotion_id (EmotionID) - emotion to set
  • enable (bool) - new value
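Combining `get_emotion_ids()` with `set_emotion_enabled()` lets you restrict tracking to the emotions you care about. The `enable_only` helper and the stub class below are hypothetical illustrations, not part of the SDK; with a real `Tracker` you would pass the tracker instance directly:

```python
def enable_only(tracker, wanted_ids):
    """Hypothetical helper: enable exactly the emotions in wanted_ids
    and disable every other emotion the loaded model provides."""
    for emotion_id in tracker.get_emotion_ids():
        tracker.set_emotion_enabled(emotion_id, emotion_id in wanted_ids)

# Minimal stand-in exposing only the two methods used above:
class _StubTracker:
    def __init__(self, ids):
        self.enabled = {i: True for i in ids}
    def get_emotion_ids(self):
        return list(self.enabled)
    def set_emotion_enabled(self, emotion_id, enable):
        self.enabled[emotion_id] = enable

stub = _StubTracker([0, 1, 4])   # CONFUSION, CONTEMPT, HAPPY
enable_only(stub, {4})           # keep only HAPPY enabled
print(stub.enabled)  # {0: False, 1: False, 4: True}
```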

# __repr__()

Returns a string representation of the Tracker.

Returns: str

# Result classes

# class EmotionID

Attributes:

  • CONFUSION = 0
  • CONTEMPT = 1
  • DISGUST = 2
  • FEAR = 3
  • HAPPY = 4
  • EMPATHY = 5
  • SURPRISE = 6
  • ATTENTION = 100
  • PRESENCE = 101
  • EYES_ON_SCREEN = 102
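The constants above can be mirrored as a Python `IntEnum` for convenient name/value conversion in client code. This is a client-side sketch (the SDK ships its own `EmotionID` type); the values are copied from the table above, and the gap in the numbering suggests IDs below 100 are facial emotions while 100 and up are attention-style metrics:

```python
from enum import IntEnum

class EmotionID(IntEnum):
    # Values copied from the table above.
    CONFUSION = 0
    CONTEMPT = 1
    DISGUST = 2
    FEAR = 3
    HAPPY = 4
    EMPATHY = 5
    SURPRISE = 6
    ATTENTION = 100
    PRESENCE = 101
    EYES_ON_SCREEN = 102

print(EmotionID(4).name)  # HAPPY
```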

# class TrackingResult

Attributes:

  • emotions (list[EmotionData]) - Tracked emotions. See EmotionData
  • landmarks (LandmarkData) - Tracked landmarks. See LandmarkData

# to_json()

Converts the data to JSON-serializable objects (dicts and lists).

Returns: dict

# __repr__()

Returns a string representation of the TrackingResult.

Returns: str

# class LandmarkData

Attributes:

  • scale (float) - Scale of the face.
  • roll (float) - Roll pose angle.
  • yaw (float) - Yaw pose angle.
  • pitch (float) - Pitch pose angle.
  • translate (list[Point2d]) - Position of the head center in image coordinates.
  • landmarks2d (list[Point2d]) - Positions of the 49 landmarks, in image coordinates.
  • landmarks3d (list[Point3d]) - Positions of the 49 landmarks, in an un-scaled face-centered 3D space.
  • is_good (bool) - Whether the tracking is good quality or not.
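Since `landmarks2d` holds the landmark positions in image coordinates, an axis-aligned face bounding box falls out directly. The helper and the `SimpleNamespace` stub below are hypothetical illustrations; a real `LandmarkData` from the tracker would be passed in instead:

```python
from types import SimpleNamespace

def face_bounding_box(landmark_data):
    """Hypothetical helper: axis-aligned bounding box of the 2D landmarks,
    as (min_x, min_y, max_x, max_y) in image coordinates."""
    xs = [p.x for p in landmark_data.landmarks2d]
    ys = [p.y for p in landmark_data.landmarks2d]
    return min(xs), min(ys), max(xs), max(ys)

# Stub with Point2d-like objects standing in for real tracker output:
stub = SimpleNamespace(landmarks2d=[
    SimpleNamespace(x=10.0, y=20.0),
    SimpleNamespace(x=30.0, y=15.0),
    SimpleNamespace(x=25.0, y=40.0),
])
print(face_bounding_box(stub))  # (10.0, 15.0, 30.0, 40.0)
```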

# to_json()

Converts the data to JSON-serializable objects (dicts and lists).

Returns: dict

# __repr__()

Returns a string representation of the LandmarkData.

Returns: str

# class Point2d

Attributes:

  • x (float)
  • y (float)

# to_json()

Converts the data to JSON-serializable objects (dicts and lists).

Returns: dict

# __repr__()

Returns a string representation of the Point2d.

Returns: str

# class Point3d

Attributes:

  • x (float)
  • y (float)
  • z (float)

# to_json()

Converts the data to JSON-serializable objects (dicts and lists).

Returns: dict

# __repr__()

Returns a string representation of the Point3d.

Returns: str

# class EmotionData

Attributes:

  • probability (float) - Probability of the emotion.
  • is_active (bool) - Whether the probability is higher than an internal threshold.
  • is_detection_successful (bool) - Whether the tracking quality was good enough to reliably detect this emotion.
  • emotion_id (EmotionID) - ID of the emotion. See EmotionID
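A common pattern is to keep only emotions that are both reliably detected and above the SDK's internal activation threshold. The helper and the `SimpleNamespace` stub below are hypothetical illustrations; a real `TrackingResult` from `track()` would be passed in instead:

```python
from types import SimpleNamespace

def active_emotions(result):
    """Hypothetical helper: IDs of emotions that were reliably detected
    and whose probability crossed the internal threshold."""
    return [e.emotion_id for e in result.emotions
            if e.is_detection_successful and e.is_active]

# Stub standing in for a real TrackingResult:
result = SimpleNamespace(emotions=[
    SimpleNamespace(emotion_id=4, probability=0.92,
                    is_active=True, is_detection_successful=True),
    SimpleNamespace(emotion_id=3, probability=0.08,
                    is_active=False, is_detection_successful=True),
    SimpleNamespace(emotion_id=6, probability=0.75,
                    is_active=True, is_detection_successful=False),
])
print(active_emotions(result))  # [4]
```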

# to_json()

Converts the data to JSON-serializable objects (dicts and lists).

Returns: dict

# __repr__()

Returns a string representation of the EmotionData.

Returns: str