# C++ API Documentation
## Tracker class
### class nel::Tracker
The Emotion Tracker class.
### Constructor
`Tracker(const std::string& modelFile, int max_concurrency = 0)`

Constructs a Tracker: loads the model file and sets up processing.

Parameters:
- `modelFile` - path of the model file to load
- `max_concurrency` - maximum allowed concurrency; 0 means automatic (use all cores). Default: 0.
### Destructor
`~Tracker()`

Destructor.
### track (std::future version)
`std::future<ResultType> track(const nel::ImageHeader& imageHeader, std::chrono::milliseconds timestamp)`

Tracks the given frame asynchronously via the std::future API.

Note: The given ImageHeader does not own the image data; a copy is made internally, so it is safe to delete the data after the call returns. See nel::ImageHeader for details.

Note: This call is non-blocking, so it can be invoked again with the next frame without waiting for the previous result. See also get_concurrent_calculations().

Note: This is the std::future based API; for the callback API see nel::Tracker::track(const nel::ImageHeader&, std::chrono::milliseconds, std::function<void (ResultOrError)>).

Parameters:
- `imageHeader` - image descriptor
- `timestamp` - timestamp of the image

Returns: a std::future that yields the tracked landmarks and emotions.
### track (callback version)
`void track(const nel::ImageHeader& imageHeader, std::chrono::milliseconds timestamp, std::function<void (ResultOrError)> callback)`

Tracks the given frame asynchronously via a callback API.

Note: The given ImageHeader does not own the image data; a copy is made internally, so it is safe to delete the data after the call returns. See nel::ImageHeader for details.

Note: This call is non-blocking, so it can be invoked again with the next frame without waiting for the previous result. See also get_concurrent_calculations().

Note: This is the callback based API; for the std::future API see nel::Tracker::track(const nel::ImageHeader&, std::chrono::milliseconds).

Parameters:
- `imageHeader` - image descriptor
- `timestamp` - timestamp of the image
- `callback` - callback invoked with the result

The tracked landmarks and emotions are delivered to the callback rather than returned.
### resetTracking
`void resetTracking()`

Resets the internal tracking state. Should be called when a new video sequence starts.
### get_emotion_IDs
`const std::vector<nel::EmotionID>& get_emotion_IDs() const`

Returns the emotion IDs provided by the loaded model. The order is the same as in nel::EmotionResults.

See also: nel::EmotionResults

Returns: a vector of emotion IDs.
### get_emotion_names
`const std::vector<std::string>& get_emotion_names() const`

Returns the emotion names provided by the loaded model. The order is the same as in nel::EmotionResults.

See also: nel::EmotionResults

Returns: a vector of emotion names.
### get_concurrent_calculations
`uint16_t get_concurrent_calculations() const`

Returns the value of the atomic counter tracking the number of calculations currently running concurrently. This can be used to limit the number of concurrent calculations.

Returns: the (approximate) number of calculations currently in flight.
### is_emotion_enabled
`bool is_emotion_enabled(nel::EmotionID emoID) const`

Returns whether the specified emotion is enabled.

Parameters:
- `emoID` - emotion to query

Returns: true if the emotion is enabled.
### set_emotion_enabled
`void set_emotion_enabled(nel::EmotionID emoID, bool enable)`

Enables or disables the specified emotion.

Parameters:
- `emoID` - emotion to set
- `enable` - true to enable, false to disable
### is_face_tracking_enabled
`bool is_face_tracking_enabled() const`

Returns whether the face tracker is enabled.
### set_face_tracking_enabled
`void set_face_tracking_enabled(bool enable)`

Enables or disables the face tracker.

Parameters:
- `enable` - true to enable, false to disable
### get_minimum_face_ratio
`float get_minimum_face_ratio() const`

Gets the current minimum face ratio.

See also: set_minimum_face_ratio

Returns: the current minimum face size as a ratio of the smaller image dimension.
### set_minimum_face_ratio
`void set_minimum_face_ratio(float minimumFaceRatio)`

Sets the minimum face ratio.

The minimum face ratio defines the smallest face size the algorithm looks for. The actual size in pixels is the smaller image dimension multiplied by the ratio. For example, with a ratio of 1/4.8 and VGA input (640x480), the minimum face size is 100x100.

Warning: Shape alignment and classifier performance can degrade at low resolutions; tracking faces smaller than 75x75 is ill-advised.

Parameters:
- `minimumFaceRatio` - new minimum face size as a ratio of the smaller image dimension
### get_sdk_version
`static nel::Version get_sdk_version()`

Returns the version of the SDK (and not the model).

Returns: version of the SDK.
### get_sdk_version_string
`static std::string get_sdk_version_string()`

Returns the version string of the SDK (and not the model).

Returns: version string of the SDK.
## Image header class
### struct nel::ImageHeader
Descriptor for image data (non-owning).

Members:
- `const uint8_t* data` - pointer to the byte array of the image
- `int width` - width of the image in pixels
- `int height` - height of the image in pixels
- `int stride` - length of one row of pixels in bytes (e.g. 3*width + padding)
- `nel::ImageFormat format` - image format
### enum class nel::ImageFormat
Image format enum.

Values:
- `Grayscale = 0` - 8-bit grayscale
- `RGB = 1` - 24-bit RGB
- `RGBA = 2` - 32-bit RGBA or 32-bit RGB_
- `BGR = 3` - 24-bit BGR
- `BGRA = 4` - 32-bit BGRA or 32-bit BGR_
## Result classes
### enum class nel::EmotionID
IDs for the supported emotions/behaviours.

Values:
- `CONFUSION = 0`
- `CONTEMPT = 1`
- `DISGUST = 2`
- `FEAR = 3`
- `HAPPY = 4`
- `EMPATHY = 5`
- `SURPRISE = 6`
- `ATTENTION = 100`
- `PRESENCE = 101`
- `EYES_ON_SCREEN = 102`
- `FACE_DETECTION = 103`
### struct nel::Tracker::ResultType
The ResultType struct.

Members:
- `nel::LandmarkData landmarks` - tracked landmarks
- `nel::EmotionResults emotions` - detected emotions
### struct nel::LandmarkData
The LandmarkData struct.

Members:
- `double scale` - scale of the face
- `double roll` - roll pose angle
- `double yaw` - yaw pose angle
- `double pitch` - pitch pose angle
- `nel::Point2d translate` - position of the head center in image coordinates
- `std::vector<nel::Point2d> landmarks2d` - positions of the 49 landmarks, in image coordinates
- `std::vector<nel::Point3d> landmarks3d` - positions of the 49 landmarks, in an un-scaled, face-centered 3D space
- `bool isGood` - whether the tracking is good quality
### struct nel::Point2d
The Point2d struct.

Members:
- `double x` - x coordinate
- `double y` - y coordinate
### struct nel::Point3d
The Point3d struct.

Members:
- `double x` - x coordinate
- `double y` - y coordinate
- `double z` - z coordinate
### typedef nel::EmotionResults
`typedef std::vector<nel::EmotionData> EmotionResults`

Vector of emotion data; the order of emotions is the same as in nel::Tracker::get_emotion_names().

See also: nel::Tracker::get_emotion_names()
### struct nel::EmotionData
The EmotionData struct.

Members:
- `double probability` - probability of the emotion
- `bool isActive` - whether the probability is higher than an internal threshold
- `bool isDetectionSuccessful` - whether the tracking quality was good enough to reliably detect this emotion
- `EmotionID emotionID` - ID of the emotion
### struct nel::Version
Semantic version number for the SDK.

Members:
- `int major`
- `int minor`
- `int patch`

### get_model_name
`std::string get_model_name() const`

Returns the name (version etc.) of the loaded model.

Returns: name of the model.