Contact
https://verifeye-docs.realeyes.ai/contact/
# Contact Us
Get in touch with our team for support, sales inquiries, or questions.
**Email**: [verifeye@realeyes.ai](mailto:verifeye@realeyes.ai)
[!button variant="contrast" text="Email"](mailto:verifeye@realeyes.ai)
Index
https://verifeye-docs.realeyes.ai/introduction/
# Welcome to VerifEye
VerifEye provides comprehensive biometric verification solutions for identity verification, fraud prevention, and user authentication.
## What is VerifEye?
VerifEye provides APIs and SDKs that enable developers to build secure, reliable identity verification systems. Our platform combines multiple biometric verification technologies to ensure accurate, fast, and secure user authentication.
## Core Capabilities
### Face Verification
Detect faces, extract embeddings, and compare faces for identity verification with industry-leading accuracy.
### Demographic Estimation
Estimate age and gender from facial images for demographic analysis and age verification.
### Emotion & Attention Analysis
Analyze facial emotions and attention levels for user engagement and emotional state understanding.
### Face Recognition
Search and index faces in collections for duplicate detection and identity management.
---
## Integration Options
### REST API
Cloud-based REST APIs for easy integration without SDK installation. Perfect for server-side applications and microservices.
[!ref text="Explore REST API"](https://verifeye-docs.realeyes.ai/rest-api/)
### Native SDK
High-performance C++ libraries for Windows, Linux, macOS, iOS, and Android. Ideal for native applications requiring maximum performance.
[!ref text="Explore Native SDK"](https://verifeye-docs.realeyes.ai/native-sdk/)
---
## Getting Started
1. **Sign up** at the [VerifEye Developer Console](https://verifeye-console.realeyes.ai/)
2. **Generate** your API key
3. **Choose** your integration option (REST API or Native SDK)
4. **Start building** with our comprehensive documentation
---
## AI-Friendly Documentation
Use AI coding assistants to accelerate your VerifEye integration. Copy and paste this prompt into Claude Code, Cursor, or your preferred AI assistant:
```
Please read the VerifEye documentation from https://verifeye-docs.realeyes.ai/llms.txt and help me integrate face verification into my application.
```
[View the AI context file](https://verifeye-docs.realeyes.ai/llms.txt)
---
## Support
Need help getting started? We're here to assist you.
[!ref icon="mail" text="Contact Us"](https://verifeye-docs.realeyes.ai/contact/)
---
*Last updated: 2026-01-27*
Api C
https://verifeye-docs.realeyes.ai/native-sdk/demographic-estimation/api-c/
# C API Documentation
## Structs
### DELPoint2d
Point2d class for landmarks
```c
typedef struct DELPoint2d {
float x;
float y;
} DELPoint2d;
```
**Members:**
- `float x` - x coordinate of the point
- `float y` - y coordinate of the point
### DELPoint2dArray
Array of Point2d
```c
typedef struct DELPoint2dArray {
int count;
DELPoint2d* points;
} DELPoint2dArray;
```
**Members:**
- `int count` - number of points
- `DELPoint2d* points` - pointer to the array of points
### DELBoundingBox
Bounding Box class for the faces
```c
typedef struct DELBoundingBox {
int x;
int y;
int width;
int height;
} DELBoundingBox;
```
**Members:**
- `int x` - x coordinate of the top-left corner
- `int y` - y coordinate of the top-left corner
- `int width` - width of the bounding box in pixels
- `int height` - height of the bounding box in pixels
### DELImageHeader
Descriptor class for image data (non-owning)
```c
typedef struct DELImageHeader {
const uint8_t* data;
int width;
int height;
int stride;
DELImageFormat format;
} DELImageHeader;
```
**Members:**
- `const uint8_t* data` - pointer to the byte array of the image
- `int width` - width of the image in pixels
- `int height` - height of the image in pixels
- `int stride` - length of one row of pixels in bytes (e.g., 3*width + padding)
- `DELImageFormat format` - image format
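The `stride` member is the full row length in bytes, which can exceed `width × bytes-per-pixel` when rows are padded for alignment. A minimal sketch of that arithmetic (the `min_stride` helper and its `align` parameter are illustrative, not part of the SDK):

```python
# Illustrative helper (not part of the SDK): smallest valid stride for one
# image row, optionally rounded up to an alignment boundary. Bytes per pixel
# mirror the documented formats: Grayscale=1, RGB/BGR=3, RGBA/BGRA=4.
BYTES_PER_PIXEL = {
    "Grayscale": 1,
    "RGB": 3,
    "BGR": 3,
    "RGBA": 4,
    "BGRA": 4,
}

def min_stride(width: int, fmt: str, align: int = 1) -> int:
    """Minimum stride in bytes for one row, padded up to `align` bytes."""
    row = width * BYTES_PER_PIXEL[fmt]
    return (row + align - 1) // align * align

# A 101-pixel-wide RGB row is 303 bytes; with 4-byte row alignment it pads to 304.
print(min_stride(101, "RGB"))           # 303
print(min_stride(101, "RGB", align=4))  # 304
```

Any stride at least this large is valid; the library reads `stride` bytes per row regardless of padding content.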
### DELOutput
Output struct for various estimated outputs
```c
typedef struct DELOutput {
DELOutputType type;
union {
DELGender gender;
float float_value;
};
const char* name;
} DELOutput;
```
**Members:**
- `DELOutputType type` - type of the output
- `DELGender gender` - gender value (when type is DELOutputTypeGender)
- `float float_value` - float value (when type is DELOutputTypeAge or DELOutputTypeAgeUncertainty)
- `const char* name` - name of the output (valid only during callback)
### DELOutputArray
Array of Outputs
```c
typedef struct DELOutputArray {
int count;
DELOutput* outputs;
} DELOutputArray;
```
**Members:**
- `int count` - number of outputs
- `DELOutput* outputs` - pointer to the array of outputs
### DELVersion
Semantic version number for the SDK
```c
typedef struct DELVersion {
int major;
int minor;
int patch;
} DELVersion;
```
### DELFaceArray
Array of Faces
```c
typedef struct DELFaceArray {
int count;
DELFace** faces;
} DELFaceArray;
```
**Members:**
- `int count` - number of faces
- `DELFace** faces` - pointer to the array of face pointers
## Enums
### DELImageFormat
Image format enum
**Values:**
- `DELImageFormatGrayscale = 0` - 8-bit grayscale
- `DELImageFormatRGB = 1` - 24-bit RGB
- `DELImageFormatRGBA = 2` - 32-bit RGBA or RGB with padding
- `DELImageFormatBGR = 3` - 24-bit BGR
- `DELImageFormatBGRA = 4` - 32-bit BGRA or BGR with padding
### DELOutputType
Type of the output in Output struct
**Values:**
- `DELOutputTypeAge = 0` - Age estimation
- `DELOutputTypeGender = 1` - Gender classification
- `DELOutputTypeAgeUncertainty = 2` - Age uncertainty estimation
### DELGender
Gender enum
**Values:**
- `DELGenderFemale = 0` - Female
- `DELGenderMale = 1` - Male
## Callbacks
### DELDetectFacesCallback
```c
typedef void (*DELDetectFacesCallback)(void* user_data, DELFaceArray* faces, const char* error_msg);
```
Callback for face detection
**Parameters:**
- `user_data` - user data passed to the function
- `faces` - array of detected faces. The array and its contents are owned by the library and are valid only during the callback.
- `error_msg` - error message if any, otherwise NULL
### DELEstimateCallback
```c
typedef void (*DELEstimateCallback)(void* user_data, DELOutputArray* outputs, const char* error_msg);
```
Callback for demographic estimation
**Parameters:**
- `user_data` - user data passed to the function
- `outputs` - array of estimation outputs. The array and its contents are owned by the library and are valid only during the callback.
- `error_msg` - error message if any, otherwise NULL
## DemographicEstimator Functions
### del_demographic_estimator_new
```c
DELDemographicEstimator* del_demographic_estimator_new(const char* model_file,
                                                       int max_concurrency,
                                                       char** errorMessage);
```
Constructor: loads model file, sets up the processing.
**Parameters:**
- `model_file` - path for the used model
- `max_concurrency` - maximum allowed concurrency, 0 means automatic (using all cores), default: 0
- `errorMessage` - pointer to a char* that will be set to an error message string on failure, or NULL on success. The caller is responsible for freeing the string using free(). If errorMessage is a NULL pointer, no error message will be returned.
**Returns:** pointer to the new DemographicEstimator instance, or NULL on failure
### del_demographic_estimator_free
```c
void del_demographic_estimator_free(DELDemographicEstimator* estimator);
```
Destructor
**Parameters:**
- `estimator` - pointer to the DemographicEstimator instance to free
### del_demographic_estimator_detect_faces
```c
void del_demographic_estimator_detect_faces(DELDemographicEstimator* estimator,
                                            const DELImageHeader* image_header,
                                            DELDetectFacesCallback callback,
                                            void* user_data);
```
Detects the faces on an image with a callback API.
**Note:** The given ImageHeader doesn't own the image data, and it is safe to delete the data after the call, a copy is happening internally. See DELImageHeader for details.
**Note:** Calling this function is non-blocking, so calling it again with the next frame without waiting for the result is possible. Also see del_demographic_estimator_get_concurrent_calculations().
**Parameters:**
- `estimator` - pointer to the DemographicEstimator instance
- `image_header` - image descriptor
- `callback` - callback to call with the result
- `user_data` - user data to pass to the callback
### del_demographic_estimator_estimate
```c
void del_demographic_estimator_estimate(DELDemographicEstimator* estimator,
                                        const DELFace* face,
                                        DELEstimateCallback callback,
                                        void* user_data);
```
Returns the demographic estimation of the detected face.
**Note:** Calling this function is non-blocking, so calling it again with the next frame without waiting for the result is possible. Also see del_demographic_estimator_get_concurrent_calculations().
**Parameters:**
- `estimator` - pointer to the DemographicEstimator instance
- `face` - the previously detected face to estimate
- `callback` - callback to call with the result
- `user_data` - user data to pass to the callback
### del_demographic_estimator_get_concurrent_calculations
```c
int del_demographic_estimator_get_concurrent_calculations(const DELDemographicEstimator* estimator);
```
Returns the value of the atomic counter for the number of calculations currently running concurrently. You can use this to limit the number of concurrent calculations.
**Parameters:**
- `estimator` - pointer to the DemographicEstimator instance
**Returns:** The (approximate) number of calculations currently in-flight.
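This counter enables simple backpressure: poll it before submitting the next frame and wait while it is at your limit. A hedged sketch of that loop with a stand-in estimator (the `FakeEstimator` class and its method names are illustrative only; the real API is the C functions documented above):

```python
# Illustrative backpressure sketch (stand-in estimator, not the SDK):
# submit frames only while the number of in-flight calculations is below
# a limit, mirroring how the concurrent-calculations counter can be polled.
class FakeEstimator:
    """Stand-in that tracks in-flight work like the real atomic counter."""
    def __init__(self):
        self.in_flight = 0
        self.processed = 0

    def get_concurrent_calculations(self):
        return self.in_flight

    def detect_faces_async(self, frame):
        self.in_flight += 1

    def drain_one(self):  # simulates one callback completing
        self.in_flight -= 1
        self.processed += 1

def submit_with_backpressure(estimator, frames, limit=4):
    for frame in frames:
        while estimator.get_concurrent_calculations() >= limit:
            estimator.drain_one()  # real code would instead wait briefly for callbacks
        estimator.detect_faces_async(frame)
    while estimator.get_concurrent_calculations() > 0:
        estimator.drain_one()      # wait for the remaining results

est = FakeEstimator()
submit_with_backpressure(est, range(10), limit=4)
print(est.processed)  # 10
```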
### del_demographic_estimator_get_model_name
```c
char* del_demographic_estimator_get_model_name(const DELDemographicEstimator* estimator);
```
Returns the name (version etc) of the loaded model.
**Parameters:**
- `estimator` - pointer to the DemographicEstimator instance
**Returns:** name of the model. The caller is responsible for freeing the returned string using free().
### del_demographic_estimator_get_sdk_version
```c
DELVersion del_demographic_estimator_get_sdk_version(void);
```
Returns the version of the SDK (and not the model)
**Returns:** version of the SDK
### del_demographic_estimator_get_sdk_version_string
```c
char* del_demographic_estimator_get_sdk_version_string(void);
```
Returns the version string of the SDK (and not the model)
**Returns:** version string of the SDK. The caller is responsible for freeing the returned string using free().
## Face Functions
### del_face_new
```c
DELFace* del_face_new(const DELImageHeader* image_header,
                      const DELPoint2d* landmarks,
                      int num_landmarks,
                      const DELBoundingBox* bbox,
                      float confidence,
                      char** errorMessage);
```
Constructor for the Face object to support 3rd party face detectors
**Parameters:**
- `image_header` - image descriptor
- `landmarks` - face landmarks
- `num_landmarks` - number of landmarks
- `bbox` - face bounding box
- `confidence` - face detection confidence
- `errorMessage` - pointer to a char* that will be set to an error message string on failure, or NULL on success. The caller is responsible for freeing the string using free(). If errorMessage is a NULL pointer, no error message will be returned.
**Returns:** pointer to the new Face instance, or NULL on failure
### del_face_copy
```c
DELFace* del_face_copy(const DELFace* other);
```
Copy constructor
**Parameters:**
- `other` - pointer to the Face instance to copy
**Returns:** pointer to the new Face instance, or NULL on failure
### del_face_free
```c
void del_face_free(DELFace* face);
```
Destructor
**Parameters:**
- `face` - pointer to the Face instance to free
### del_face_bounding_box
```c
DELBoundingBox del_face_bounding_box(const DELFace* face);
```
Returns the bounding box of the detected face
**Parameters:**
- `face` - pointer to the Face instance
**Returns:** DELBoundingBox
### del_face_confidence
```c
float del_face_confidence(const DELFace* face);
```
Returns the confidence value of the detected face
**Parameters:**
- `face` - pointer to the Face instance
**Returns:** float
### del_face_landmarks
```c
DELPoint2dArray* del_face_landmarks(const DELFace* face);
```
Returns the landmarks of the face
**Parameters:**
- `face` - pointer to the Face instance
**Returns:** pointer to DELPoint2dArray. The caller is responsible for freeing the returned array using free().
Api Cpp
https://verifeye-docs.realeyes.ai/native-sdk/demographic-estimation/api-cpp/
# C++ API Documentation
## Estimator class
### class del::DemographicEstimator
The Demographic Estimator class
#### Constructors
```cpp
DemographicEstimator(const std::string& modelFile, int maxConcurrency = 0)
```
Constructor: loads model file, sets up the processing.
**Parameters:**
- `modelFile` - path for the used model
- `maxConcurrency` - maximum allowed concurrency, 0 means automatic (using all cores), default: 0
```cpp
~DemographicEstimator()
```
Destructor
#### Methods
```cpp
std::future<std::vector<del::Face>> detectFaces(const del::ImageHeader& imageHeader)
```
Detects the faces on an image with the std::future API.
**Note:** The given ImageHeader doesn't own the image data, and it is safe to delete the data after the call, a copy is happening internally. See del::ImageHeader for details.
**Note:** Calling this function is non-blocking, so calling it again with the next image without waiting for the result is possible. Also see getConcurrentCalculations().
**Note:** This is the std::future based API, for callback API see `del::DemographicEstimator::detectFaces(const del::ImageHeader&, std::function<void(std::vector<del::Face>)>)`.
**Parameters:**
- `imageHeader` - image descriptor
---
```cpp
void detectFaces(const del::ImageHeader& imageHeader,
                 std::function<void(std::vector<del::Face>)> callback)
```
Detects the faces on an image with a callback API.
**Note:** The given ImageHeader doesn't own the image data, and it is safe to delete the data after the call, a copy is happening internally. See del::ImageHeader for details.
**Note:** Calling this function is non-blocking, so calling it again with the next frame without waiting for the result is possible. Also see getConcurrentCalculations().
**Note:** This is the callback based API, for std::future API see `del::DemographicEstimator::detectFaces(const del::ImageHeader&)`.
**Parameters:**
- `imageHeader` - image descriptor
- `callback` - callback to call with the result
---
```cpp
std::future<std::vector<del::Output>> estimate(const del::Face& face)
```
Returns the demographic estimation of the detected face.
**Note:** Calling this function is non-blocking, so calling it again with the next frame without waiting for the result is possible. Also see getConcurrentCalculations().
**Note:** This is the std::future based API, for callback API see `del::DemographicEstimator::estimate(const del::Face&, std::function<void(std::vector<del::Output>)>)`.
**Parameters:**
- `face` - the previously detected face to estimate
**Returns:** estimations
---
```cpp
void estimate(const del::Face& face,
              std::function<void(std::vector<del::Output>)> callback)
```
Returns the demographic estimation of the detected face.
**Note:** Calling this function is non-blocking, so calling it again with the next frame without waiting for the result is possible. Also see getConcurrentCalculations().
**Note:** This is the callback based API, for std::future API see `del::DemographicEstimator::estimate(const del::Face&)`.
**Parameters:**
- `face` - the previously detected face to estimate
- `callback` - callback to call with the result
---
```cpp
int getConcurrentCalculations() const
```
Returns the value of the atomic counter for the number of calculations currently running concurrently. You can use this to limit the number of concurrent calculations.
**Returns:** The (approximate) number of calculations currently in-flight.
---
```cpp
std::string getModelName() const
```
Returns the name (version etc) of the loaded model.
**Returns:** name of the model
---
```cpp
static del::Version getSDKVersion()
```
Returns the version of the SDK (and not the model)
**Returns:** version of the SDK
---
```cpp
static std::string getSDKVersionString()
```
Returns the version string of the SDK (and not the model)
**Returns:** version string of the SDK
## Image header class
### struct del::ImageHeader
Descriptor class for image data (non-owning)
**Members:**
- `const uint8_t* data` - pointer to the byte array of the image
- `int width` - width of the image in pixels
- `int height` - height of the image in pixels
- `int stride` - length of one row of pixels in bytes (e.g., 3*width + padding)
- `del::ImageFormat format` - image format
### enum class del::ImageFormat
**Values:**
- `Grayscale = 0` - 8-bit grayscale
- `RGB = 1` - 24-bit RGB
- `RGBA = 2` - 32-bit RGBA or RGB with padding
- `BGR = 3` - 24-bit BGR
- `BGRA = 4` - 32-bit BGRA or BGR with padding
## Result classes
### Face
See also: [landmarks specification](https://verifeye-docs.realeyes.ai/overview.md#target-to-face-specs).
#### class del::Face
Face Class
##### Constructors
```cpp
Face(const del::ImageHeader& imageHeader,
     const std::vector<del::Point2d>& landmarks,
     const del::BoundingBox& bbox = del::BoundingBox(),
     float confidence = 0.0f)
```
Constructor for the Face object to support 3rd party face detectors
**Parameters:**
- `imageHeader` - image descriptor
- `landmarks` - face landmarks
- `bbox` - face bounding box
- `confidence` - face detection confidence
```cpp
~Face()
```
Destructor
##### Methods
```cpp
BoundingBox boundingBox() const
```
Returns the bounding box of the detected face
**Returns:** BoundingBox
---
```cpp
float confidence() const
```
Returns the confidence value of the detected face
**Returns:** float
---
```cpp
std::vector<del::Point2d> landmarks() const
```
Returns the landmarks of the face
**Returns:** std::vector<del::Point2d>
### Point2d
#### struct del::Point2d
Point2d class for landmarks
**Members:**
- `float x` - x coordinate of the point
- `float y` - y coordinate of the point
### BoundingBox
#### struct del::BoundingBox
Bounding Box class for the faces
**Members:**
- `int x` - x coordinate of the top-left corner
- `int y` - y coordinate of the top-left corner
- `int width` - width of the bounding box in pixels
- `int height` - height of the bounding box in pixels
### OutputType
#### enum class del::OutputType
Type of the output in Output struct.
**Values:**
- `AGE = 0`
- `GENDER = 1`
- `AGE_UNCERTAINTY = 2`
### Gender
#### enum class del::Gender
Gender enum.
**Values:**
- `FEMALE = 0`
- `MALE = 1`
### Output
#### struct del::Output
Output struct for various estimated outputs.
**Members:**
- `del::OutputType type` - type of the output
- `std::variant<del::Gender, float> value` - value of the output
- `std::string name` - name of the output
Api Dotnet
https://verifeye-docs.realeyes.ai/native-sdk/demographic-estimation/api-dotnet/
# .NET API Documentation
## DemographicEstimator class
### class Realeyes.DemographicEstimation.DemographicEstimator
Main entry point for face detection and demographic estimation operations.
This class is thread-safe and supports concurrent asynchronous operations.
#### Constructor
```csharp
DemographicEstimator(string modelPath, int maxConcurrency = 0)
```
Creates a new DemographicEstimator instance
**Parameters:**
- `modelPath` (string) - Path to the demographic estimation model file (.realZ)
- `maxConcurrency` (int) - Maximum concurrency (0 for automatic/all cores), default: 0
**Throws:**
- `ArgumentNullException` - When modelPath is null
- `DemographicEstimationException` - When model loading fails
#### Methods
##### DetectFacesAsync(imageHeader)
```csharp
Task<FaceList> DetectFacesAsync(ImageHeader imageHeader)
```
Detects faces in an image asynchronously.
This method is non-blocking and thread-safe - multiple calls can be made concurrently.
**Parameters:**
- `imageHeader` (ImageHeader) - Image to process
**Returns:** Task<FaceList>
**Throws:**
- `DemographicEstimationException` - When detection fails
##### EstimateAsync(face)
```csharp
Task<EstimationResult> EstimateAsync(Face face)
```
Estimates demographic information (age, gender, age uncertainty) for a detected face asynchronously.
This method is non-blocking and thread-safe - multiple calls can be made concurrently.
**Parameters:**
- `face` (Face) - Previously detected face
**Returns:** Task<EstimationResult>
**Throws:**
- `ArgumentNullException` - When face is null
- `DemographicEstimationException` - When estimation fails
#### Properties
##### ConcurrentCalculations
```csharp
int ConcurrentCalculations { get; }
```
Gets the number of calculations currently running concurrently.
Use this to limit the number of concurrent calculations if needed.
##### ModelName
```csharp
string ModelName { get; }
```
Gets the name/version of the loaded model
##### SdkVersion
```csharp
static Version SdkVersion { get; }
```
Gets the SDK version (static property)
##### SdkVersionString
```csharp
static string SdkVersionString { get; }
```
Gets the SDK version as a string (static property)
## ImageHeader class
### class Realeyes.DemographicEstimation.ImageHeader
Image descriptor for passing image data to the Demographic Estimation Library
#### Constructor
```csharp
ImageHeader(byte[] data, int width, int height, int stride, ImageFormat format)
```
Creates a new image header
**Parameters:**
- `data` (byte[]) - Image data bytes
- `width` (int) - Width in pixels
- `height` (int) - Height in pixels
- `stride` (int) - Length of one row in bytes (e.g., 3*width + padding)
- `format` (ImageFormat) - Pixel format
**Throws:**
- `ArgumentNullException` - When data is null
- `ArgumentOutOfRangeException` - When width, height, or stride is not positive
#### Properties
- `Data` (byte[]) - Image data bytes
- `Width` (int) - Width of the image in pixels
- `Height` (int) - Height of the image in pixels
- `Stride` (int) - Length of one row of pixels in bytes (e.g., 3*width + padding)
- `Format` (ImageFormat) - Image pixel format
## Result classes
### FaceList
#### class Realeyes.DemographicEstimation.FaceList
A disposable collection of Face objects that automatically disposes all faces when disposed.
Inherits from List<Face>.
##### Methods
###### Dispose()
```csharp
void Dispose()
```
Disposes all Face objects in the collection synchronously
###### DisposeAsync()
Disposes all Face objects in the collection asynchronously
### Face
#### class Realeyes.DemographicEstimation.Face
Represents a detected face with landmarks, bounding box, and confidence information
##### Constructor
```csharp
Face(ImageHeader imageHeader, Point2d[] landmarks, BoundingBox boundingBox, float confidence)
```
Creates a Face object from a third-party face detector
**Parameters:**
- `imageHeader` (ImageHeader) - Image containing the face
- `landmarks` (Point2d[]) - Face landmarks
- `boundingBox` (BoundingBox) - Bounding box of the face
- `confidence` (float) - Detection confidence score
**Throws:**
- `ArgumentNullException` - When imageHeader or landmarks is null
- `DemographicEstimationException` - When face creation fails
##### Properties
- `BoundingBox` (BoundingBox) - Gets the bounding box of this face
- `Confidence` (float) - Gets the detection confidence score
##### Methods
###### GetLandmarks()
```csharp
Point2d[] GetLandmarks()
```
Gets the facial landmarks
**Returns:** Point2d[]
###### Clone()
```csharp
Face Clone()
```
Creates a copy of this face
**Returns:** Face
**Throws:**
- `DemographicEstimationException` - When cloning fails
###### Dispose()
```csharp
void Dispose()
```
Disposes the face and releases native resources
### Point2d
#### struct Realeyes.DemographicEstimation.Point2d
2D point representing a landmark coordinate
##### Properties
- `X` (float) - X coordinate of landmark
- `Y` (float) - Y coordinate of landmark
### BoundingBox
#### struct Realeyes.DemographicEstimation.BoundingBox
Bounding box representing a rectangular region
##### Properties
- `X` (int) - X coordinate of the top-left corner
- `Y` (int) - Y coordinate of the top-left corner
- `Width` (int) - Width of the bounding box in pixels
- `Height` (int) - Height of the bounding box in pixels
- `Right` (int) - Gets the right edge X coordinate (computed property)
- `Bottom` (int) - Gets the bottom edge Y coordinate (computed property)
- `Area` (int) - Gets the area of the bounding box (computed property)
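Assuming the usual exclusive-edge convention (as in `System.Drawing.Rectangle`), the computed properties relate to the stored fields as Right = X + Width, Bottom = Y + Height, and Area = Width × Height. A quick illustrative check (a Python stand-in, not the SDK type):

```python
# Illustrative stand-in (not SDK code) showing the documented computed
# properties of the bounding box: Right, Bottom, and Area.
from dataclasses import dataclass

@dataclass
class BoundingBox:
    x: int
    y: int
    width: int
    height: int

    @property
    def right(self) -> int:
        return self.x + self.width      # exclusive right edge

    @property
    def bottom(self) -> int:
        return self.y + self.height     # exclusive bottom edge

    @property
    def area(self) -> int:
        return self.width * self.height

box = BoundingBox(x=10, y=20, width=64, height=48)
print(box.right, box.bottom, box.area)  # 74 68 3072
```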
### EstimationResult
#### class Realeyes.DemographicEstimation.EstimationResult
Result of demographic estimation containing optional age, gender, and age uncertainty values
##### Properties
- `Age` (float?) - Estimated age in years (if available)
- `Gender` (Gender?) - Estimated gender (if available)
- `AgeUncertainty` (float?) - Age uncertainty estimation (if available)
##### Methods
###### ToString()
```csharp
string ToString()
```
Returns a string representation of the estimation result
**Returns:** string
## Enumerations
### OutputType
#### enum Realeyes.DemographicEstimation.OutputType
Type of demographic estimation output
**Values:**
- `Age = 0` - Age estimation
- `Gender = 1` - Gender classification
- `AgeUncertainty = 2` - Age uncertainty estimation
### Gender
#### enum Realeyes.DemographicEstimation.Gender
Gender classification
**Values:**
- `Female = 0` - Female
- `Male = 1` - Male
### ImageFormat
#### enum Realeyes.DemographicEstimation.ImageFormat
Image pixel format
**Values:**
- `Grayscale = 0` - 8-bit grayscale
- `RGB = 1` - 24-bit RGB
- `RGBA = 2` - 32-bit RGBA or RGB with padding
- `BGR = 3` - 24-bit BGR
- `BGRA = 4` - 32-bit BGRA or BGR with padding
### Version
#### struct Realeyes.DemographicEstimation.Version
Semantic version number
##### Properties
- `Major` (int) - Major version number
- `Minor` (int) - Minor version number
- `Patch` (int) - Patch version number
##### Methods
###### ToString()
```csharp
string ToString()
```
Returns the version as a string
**Returns:** string
## Exceptions
### DemographicEstimationException
#### class Realeyes.DemographicEstimation.DemographicEstimationException
Exception thrown when demographic estimation operations fail
Api Python
https://verifeye-docs.realeyes.ai/native-sdk/demographic-estimation/api-python/
# Python API Documentation
## Module Functions
### get_sdk_version_string()
Returns the version string of the SDK (and not the model).
**Returns:** str
## DemographicEstimator class
### class realeyes.demographic_estimation.DemographicEstimator(model_file, max_concurrency=0)
The Demographic Estimator class
#### \_\_init\_\_(self, model_file, max_concurrency=0)
DemographicEstimator constructor: loads model file, sets up the processing.
**Parameters:**
- `model_file` (str) - path for the used model
- `max_concurrency` (int) - maximum allowed concurrency, 0 means automatic (using all cores), default: 0
#### detect_faces(self, image)
Detects the faces on an image.
**Parameters:**
- `image` (numpy.ndarray) - image of the face(s)
**Returns:** list[Face]
#### estimate(self, face)
Returns the estimated demographics of the detected face.
**Parameters:**
- `face` (Face) - face to estimate.
**Returns:** list[Output]
#### get_model_name(self)
Returns the name (version etc) of the loaded model.
**Returns:** str
## Result classes
### Face
#### class realeyes.demographic_estimation.Face
##### \_\_init\_\_(self, image, landmarks, bbox=BoundingBox(x=0, y=0, width=0, height=0), confidence=0.0)
Face constructor to use a 3rd party face detector as face source
**Parameters:**
- `image` (numpy.ndarray) - image of the face
- `landmarks` (list[Point2d]) - landmarks of the face, see [landmarks specification](https://verifeye-docs.realeyes.ai/overview.md#target-to-face-specs)
- `bbox` (BoundingBox) - bounding box of the face
- `confidence` (float) - confidence value of the detected face
##### bounding_box(self)
Returns the bounding box of the detected face.
**Returns:** BoundingBox
##### confidence(self)
Returns the confidence value of the detected face.
**Returns:** float
##### landmarks(self)
Returns the landmarks of the detected face.
**Returns:** list[Point2d]
See also: [landmarks specification](https://verifeye-docs.realeyes.ai/overview.md#target-to-face-specs).
### Point2d
#### class realeyes.demographic_estimation.Point2d
Point2d class for the landmarks
##### \_\_init\_\_(self, x, y)
Point2d constructor
**Parameters:**
- `x` (float) - X coordinate of the point
- `y` (float) - Y coordinate of the point
##### Attributes
- `x` (float) - X coordinate of the point.
- `y` (float) - Y coordinate of the point.
### BoundingBox
#### class realeyes.demographic_estimation.BoundingBox
Bounding Box class for the faces
##### \_\_init\_\_(self, x, y, width, height)
BoundingBox constructor
**Parameters:**
- `x` (int) - X coordinate of the top-left corner
- `y` (int) - Y coordinate of the top-left corner
- `width` (int) - Width of the bounding box in pixels
- `height` (int) - Height of the bounding box in pixels
##### Attributes
- `x` (int) - X coordinate of the top-left corner.
- `y` (int) - Y coordinate of the top-left corner.
- `width` (int) - Width of the bounding box in pixels.
- `height` (int) - Height of the bounding box in pixels.
### OutputType
#### class realeyes.demographic_estimation.OutputType
**Attributes:**
- `AGE` = 0
- `GENDER` = 1
- `AGE_UNCERTAINTY` = 2
### Gender
#### class realeyes.demographic_estimation.Gender
**Attributes:**
- `FEMALE` = 0
- `MALE` = 1
### Output
#### class realeyes.demographic_estimation.Output
**Attributes:**
- `name` (str) - Name of the output
- `type` (OutputType) - Type of the output
- `value` (Union[Gender, float]) - Value of the output
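`estimate()` returns a list of such outputs, and it is often convenient to fold them into a dict keyed by output type. A sketch with stand-in objects (`SimpleNamespace` substitutes here for the SDK's `Output`, and the `summarize` helper is illustrative, not part of the package):

```python
# Illustrative helper (not part of the SDK): fold a list of Output-like
# records returned by estimate() into a dict of named demographics.
from types import SimpleNamespace

AGE, GENDER, AGE_UNCERTAINTY = 0, 1, 2  # mirror OutputType values

def summarize(outputs):
    names = {AGE: "age", GENDER: "gender", AGE_UNCERTAINTY: "age_uncertainty"}
    return {names[o.type]: o.value for o in outputs}

# Fake outputs standing in for what estimate() would return.
outputs = [
    SimpleNamespace(name="age", type=AGE, value=29.5),
    SimpleNamespace(name="gender", type=GENDER, value=1),   # Gender.MALE
    SimpleNamespace(name="age_uncertainty", type=AGE_UNCERTAINTY, value=3.2),
]
print(summarize(outputs))  # {'age': 29.5, 'gender': 1, 'age_uncertainty': 3.2}
```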
Index
https://verifeye-docs.realeyes.ai/native-sdk/demographic-estimation/
# Demographic Estimation Library
Welcome to Demographic Estimation Library documentation!
## Contents:
- [Overview](https://verifeye-docs.realeyes.ai/overview.md)
- [C++ API](https://verifeye-docs.realeyes.ai/api-cpp.md)
- [C API](https://verifeye-docs.realeyes.ai/api-c.md)
- [Python API](https://verifeye-docs.realeyes.ai/api-python.md)
- [.NET API](https://verifeye-docs.realeyes.ai/api-dotnet.md)
Overview
https://verifeye-docs.realeyes.ai/native-sdk/demographic-estimation/overview/
# Overview
The Demographic Estimation Library is a portable C++ library for estimating a person's demographic characteristics (age, gender, etc.).
The SDK provides wrappers in the following languages:
* C++ (native)
* C
* Python
* C# / .NET
## Release Notes
* **Version 1.0.0 (24 Jan 2024)**
* Initial release
## Getting Started
### Hardware requirements
The SDK doesn't have any special hardware requirements:
- **CPU:** No special requirement, any modern 64 bit capable CPU (x86-64 with AVX, ARM8) is supported
- **GPU:** No special requirement
- **RAM:** 2 GB of available RAM required
- **Camera:** No special requirement, minimum resolution: 640x480.
### Software requirements
The SDK is regularly tested on the following Operating Systems:
- Windows 10
- Ubuntu 22.04
- macOS 12 Monterey
- iOS 14
- Android 12
### 3rd Party Licenses
While the SDK is released under a proprietary license, the following open-source projects were used in it under their respective licenses:
- OpenCV - [3 clause BSD](https://opencv.org/license/)
- Tensorflow - [Apache License 2.0](https://github.com/tensorflow/tensorflow/blob/master/LICENSE)
- Protobuf - [3 clause BSD](https://github.com/protocolbuffers/protobuf/blob/master/LICENSE)
- zlib - [zlib license](https://www.zlib.net/zlib_license.html)
- minizip-ng - [zlib license](https://github.com/zlib-ng/minizip-ng/blob/master/LICENSE)
- stlab - [Boost Software License 1.0](https://github.com/stlab/libraries/blob/main/LICENSE)
- docopt.cpp - [MIT License](https://github.com/docopt/docopt.cpp/blob/master/LICENSE-MIT)
- pybind11 - [3 clause BSD](https://github.com/pybind/pybind11/blob/master/LICENSE)
- fmtlib - [MIT License](https://github.com/fmtlib/fmt/blob/master/LICENSE.rst)
### Dependencies
The public C++ API hides all the implementation details from the user, and it only depends on the C++17 Standard Library.
It also provides a binary-compatible interface, making it possible to change the underlying implementation
without recompiling user code.
### Installation
#### C++
Extract the SDK contents, include the headers from the `include` folder and link `libDemographicEstimationLibrary` to your C++ project.
#### Python
The Python version of the SDK can be installed with pip:
```bash
$ pip install realeyes.demographic_estimation
```
#### C# / .NET
The .NET version of the SDK can be installed via NuGet:
```bash
$ dotnet add package Realeyes.DemographicEstimation
```
## Usage
### C++
The main entry point of this library is the `del::DemographicEstimator` class.
After an **estimator** object has been constructed, call `del::DemographicEstimator::detectFaces()` to get the faces
in a frame of a video or other frame source.
The resulting `del::Face` objects can then be passed to `del::DemographicEstimator::estimate()` to get the classifiers' estimations for the face.
The results are returned in `del::Output` structs.
Both `del::DemographicEstimator::detectFaces()` and `del::DemographicEstimator::estimate()` come in two non-blocking, asynchronous variants: one returns a **std::future**, the other
invokes a callback on completion. A subsequent call can be made without waiting for the previous result.
For the frame data, construct a `del::ImageHeader` object and pass it to `del::DemographicEstimator::detectFaces()`.
The `del::ImageHeader` is a non-owning view of the frame data, so the data only needs to remain valid during the
`del::DemographicEstimator::detectFaces()` call; the library copies it internally.
The following example shows the basic usage of the library using OpenCV for loading images and feeding them to the estimator:
```cpp
#include "demographicestimator.h"
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    del::DemographicEstimator estimator("model/model.realZ");
    cv::Mat image = cv::imread("image.jpg");
    std::vector<del::Face> faces = estimator.detectFaces({image.ptr(), image.cols, image.rows, static_cast<int>(image.step1()), del::ImageFormat::BGR}).get();
    for (const del::Face& face : faces) {
        auto results = estimator.estimate(face).get();
        del::BoundingBox bbox = face.boundingBox();
        std::cout << "User at (" << bbox.x << ", " << bbox.y << ", " << bbox.width << ", " << bbox.height << ") has the following estimated characteristics:" << std::endl;
        for (const del::Output& result : results) {
            std::cout << "    " << result.name << " = ";
            if (result.type == del::OutputType::AGE || result.type == del::OutputType::AGE_UNCERTAINTY)
                std::cout << std::get<float>(result.value) << std::endl;
            else
                std::cout << ((std::get<del::Gender>(result.value) == del::Gender::FEMALE) ? "FEMALE" : "MALE") << std::endl;
        }
        std::cout << std::endl;
    }
    return 0;
}
```
### Python
The main entry point of this library is the `realeyes.demographic_estimation.DemographicEstimator` class.
After an **estimator** object has been constructed, call `realeyes.demographic_estimation.DemographicEstimator.detect_faces()` to get the faces
in a frame of a video or other frame source.
The resulting `realeyes.demographic_estimation.Face` objects can then be passed to `realeyes.demographic_estimation.DemographicEstimator.estimate()` to get the classifiers' estimations for the face.
The following example shows the basic usage of the library using OpenCV for loading images and feeding them to the estimator:
```python
import realeyes.demographic_estimation as de  # note: 'del' is a reserved keyword in Python
import cv2

estimator = de.DemographicEstimator('model/model.realZ')
image = cv2.imread('image.jpg')[:, :, ::-1]  # OpenCV reads BGR; we need RGB
faces = estimator.detect_faces(image)
for face in faces:
    print(f'User at {face.bounding_box()} has the following estimated characteristics:')
    results = estimator.estimate(face)
    for result in results:
        print(f'    {result.name} = {result.value}')
```
### C# / .NET
The main entry point of this library is the `DemographicEstimator` class.
After an **estimator** object is constructed, you can call the `DetectFacesAsync()` method to detect faces
in a frame. The method returns a `Task`, allowing asynchronous, non-blocking operation.
The resulting `Face` objects can be used to call `EstimateAsync()` to get demographic estimations.
Both methods support concurrent execution: you can start multiple operations in parallel without waiting for results.
The following example demonstrates parallel processing of multiple frames using the async interface:
```csharp
using Realeyes.DemographicEstimation;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

class Program
{
    static async Task Main(string[] args)
    {
        using var estimator = new DemographicEstimator("model/model.realZ");

        // Load an image (example using raw byte array)
        byte[] imageData = LoadImageData("image.jpg");
        var imageHeader = new ImageHeader(imageData, 1920, 1080, 1920 * 3, ImageFormat.RGB);

        // Detect faces asynchronously
        await using var faces = await estimator.DetectFacesAsync(imageHeader);
        Console.WriteLine($"Detected {faces.Count} face(s)");

        // Start parallel demographic estimation for all faces
        var estimationTasks = faces.Select(face => estimator.EstimateAsync(face)).ToList();

        // Check concurrent calculations in flight
        Console.WriteLine($"Running {estimator.ConcurrentCalculations} concurrent calculations");

        // Wait for all estimations to complete
        var demographics = await Task.WhenAll(estimationTasks);

        // Process results
        for (int i = 0; i < faces.Count; i++)
        {
            var face = faces[i];
            var result = demographics[i];
            Console.WriteLine($"Face at ({face.BoundingBox.X}, {face.BoundingBox.Y}, " +
                              $"{face.BoundingBox.Width}, {face.BoundingBox.Height}):");
            if (result.Age.HasValue)
                Console.WriteLine($"  Age: {result.Age.Value:F1}");
            if (result.Gender.HasValue)
                Console.WriteLine($"  Gender: {result.Gender.Value}");
            if (result.AgeUncertainty.HasValue)
                Console.WriteLine($"  Age Uncertainty: {result.AgeUncertainty.Value:F2}");
        }
    }

    static byte[] LoadImageData(string path)
    {
        // Implementation depends on your image loading library
        throw new NotImplementedException();
    }
}
```
## Results {#target-to-face-specs}
The **Face** objects contain the following members:
* **bounding_box:** Bounding box of the detected face (left, top, width, height).
* **confidence:** Confidence of the detection (in [0, 1]; higher is better).
* **landmarks:** 5 landmarks from the face:
1. left eye
2. right eye
3. nose (tip)
4. left mouth corner
5. right mouth corner
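For illustration only, the documented landmark order makes simple geometric measurements easy. The helper below is hypothetical, not part of the SDK:

```python
import math

# Landmark order as documented above:
# 0: left eye, 1: right eye, 2: nose tip,
# 3: left mouth corner, 4: right mouth corner
def interocular_distance(landmarks):
    """Distance between the two eye landmarks, a rough proxy for face size."""
    (lx, ly), (rx, ry) = landmarks[0], landmarks[1]
    return math.hypot(rx - lx, ry - ly)
```

Such a measure could, for example, be used to skip faces that are too small for reliable estimation (an application choice, not an SDK requirement).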

## 3rd party face detector
It is possible to calculate the embedding of a face that was detected with a different library: create a
`Face` object by specifying the source image and the landmarks.
C
https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/c/
# C API Reference
The C API provides a plain C interface to the Face Verification Library, suitable for integration with C programs and other languages through FFI.
---
## Opaque Types
### FVLFaceVerifier
```c
typedef struct FVLFaceVerifier FVLFaceVerifier;
```
Opaque handle to a FaceVerifier instance.
### FVLFace
```c
typedef struct FVLFace FVLFace;
```
Opaque handle to a Face instance.
---
## FaceVerifier Functions
### fvl_face_verifier_new
```c
FVLFaceVerifier* fvl_face_verifier_new(const char* model_file, int max_concurrency, char** errorMessage);
```
Creates a new FaceVerifier instance. Loads the model file and sets up processing.
| Parameter | Type | Description |
|-----------|------|-------------|
| model_file | `const char*` | Path to the `.realZ` model file |
| max_concurrency | `int` | Maximum allowed concurrency. 0 means automatic (all cores) |
| errorMessage | `char**` | Output: set to error message on failure, `NULL` on success. Caller must `free()`. Pass `NULL` to skip. |
**Returns:** Pointer to the new instance, or `NULL` on failure.
### fvl_face_verifier_free
```c
void fvl_face_verifier_free(FVLFaceVerifier* verifier);
```
Frees a FaceVerifier instance.
| Parameter | Type | Description |
|-----------|------|-------------|
| verifier | [`FVLFaceVerifier`](#fvlfaceverifier)`*` | Instance to free |
### fvl_face_verifier_detect_faces
```c
void fvl_face_verifier_detect_faces(FVLFaceVerifier* verifier, const FVLImageHeader* image_header,
FVLDetectFacesCallback callback, void* user_data);
```
Detects faces on an image asynchronously. The callback is invoked with the result.
| Parameter | Type | Description |
|-----------|------|-------------|
| verifier | [`FVLFaceVerifier`](#fvlfaceverifier)`*` | FaceVerifier instance |
| image_header | `const` [`FVLImageHeader`](#fvlimageheader)`*` | Image descriptor |
| callback | [`FVLDetectFacesCallback`](#fvldetectfacescallback) | Callback invoked with the result |
| user_data | `void*` | User data passed to the callback |
The image data is copied internally — it is safe to free the data after the call. This call is non-blocking.
### fvl_face_verifier_embed_face
```c
void fvl_face_verifier_embed_face(FVLFaceVerifier* verifier, const FVLFace* face,
FVLEmbedFaceCallback callback, void* user_data);
```
Computes the embedding of a detected face asynchronously.
| Parameter | Type | Description |
|-----------|------|-------------|
| verifier | [`FVLFaceVerifier`](#fvlfaceverifier)`*` | FaceVerifier instance |
| face | `const` [`FVLFace`](#fvlface)`*` | The previously detected face |
| callback | [`FVLEmbedFaceCallback`](#fvlembedfacecallback) | Callback invoked with the result |
| user_data | `void*` | User data passed to the callback |
### fvl_face_verifier_compare_faces
```c
FVLMatch fvl_face_verifier_compare_faces(FVLFaceVerifier* verifier,
const float* embedding1, int size1,
const float* embedding2, int size2);
```
Compares two face embeddings.
| Parameter | Type | Description |
|-----------|------|-------------|
| verifier | [`FVLFaceVerifier`](#fvlfaceverifier)`*` | FaceVerifier instance |
| embedding1 | `const float*` | First face embedding |
| size1 | `int` | Size of the first embedding |
| embedding2 | `const float*` | Second face embedding |
| size2 | `int` | Size of the second embedding |
**Returns:** [`FVLMatch`](#fvlmatch) — match result with similarity metric.
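The similarity metric behind `FVLMatch` is not documented here. For intuition only, embedding comparison is commonly implemented as cosine similarity between the two vectors; a minimal sketch, not the SDK's actual metric:

```python
import math

def cosine_similarity(embedding1, embedding2):
    """Illustrative similarity score in [-1, 1]; higher means more alike."""
    if len(embedding1) != len(embedding2):
        raise ValueError("embeddings must have the same length")
    dot = sum(a * b for a, b in zip(embedding1, embedding2))
    norm1 = math.sqrt(sum(a * a for a in embedding1))
    norm2 = math.sqrt(sum(b * b for b in embedding2))
    return dot / (norm1 * norm2)
```

Applications typically compare the score against a threshold chosen for their target false-accept rate.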
### fvl_face_verifier_get_concurrent_calculations
```c
int fvl_face_verifier_get_concurrent_calculations(const FVLFaceVerifier* verifier);
```
Returns the approximate number of calculations currently in-flight.
| Parameter | Type | Description |
|-----------|------|-------------|
| verifier | `const` [`FVLFaceVerifier`](#fvlfaceverifier)`*` | FaceVerifier instance |
**Returns:** `int` — number of concurrent calculations.
### fvl_face_verifier_get_model_name
```c
char* fvl_face_verifier_get_model_name(const FVLFaceVerifier* verifier);
```
Returns the name (version, etc.) of the loaded model.
| Parameter | Type | Description |
|-----------|------|-------------|
| verifier | `const` [`FVLFaceVerifier`](#fvlfaceverifier)`*` | FaceVerifier instance |
**Returns:** `char*` — model name string. **Caller must `free()` the returned string.**
### fvl_face_verifier_get_sdk_version
```c
FVLVersion fvl_face_verifier_get_sdk_version();
```
Returns the SDK version.
**Returns:** [`FVLVersion`](#fvlversion) — SDK version.
### fvl_face_verifier_get_sdk_version_string
```c
char* fvl_face_verifier_get_sdk_version_string();
```
Returns the SDK version as a string.
**Returns:** `char*` — version string. **Caller must `free()` the returned string.**
---
## Face Functions
### fvl_face_new
```c
FVLFace* fvl_face_new(const FVLImageHeader* image_header, const FVLPoint2d* landmarks,
int num_landmarks, const FVLBoundingBox* bbox, float confidence,
char** errorMessage);
```
Creates a Face object to support 3rd party face detectors.
| Parameter | Type | Description |
|-----------|------|-------------|
| image_header | `const` [`FVLImageHeader`](#fvlimageheader)`*` | Image descriptor |
| landmarks | `const` [`FVLPoint2d`](#fvlpoint2d)`*` | Face landmarks array |
| num_landmarks | `int` | Number of landmarks |
| bbox | `const` [`FVLBoundingBox`](#fvlboundingbox)`*` | Face bounding box |
| confidence | `float` | Detection confidence |
| errorMessage | `char**` | Output: error message on failure. Caller must `free()`. Pass `NULL` to skip. |
**Returns:** Pointer to the new Face instance, or `NULL` on failure.
### fvl_face_copy
```c
FVLFace* fvl_face_copy(const FVLFace* other);
```
Creates a copy of a Face instance.
| Parameter | Type | Description |
|-----------|------|-------------|
| other | `const` [`FVLFace`](#fvlface)`*` | Face to copy |
**Returns:** Pointer to the new Face copy, or `NULL` on failure.
### fvl_face_free
```c
void fvl_face_free(FVLFace* face);
```
Frees a Face instance.
| Parameter | Type | Description |
|-----------|------|-------------|
| face | [`FVLFace`](#fvlface)`*` | Face to free |
### fvl_face_detection_quality
```c
FVLDetectionQuality fvl_face_detection_quality(const FVLFace* face);
```
Returns the detection quality of the face.
| Parameter | Type | Description |
|-----------|------|-------------|
| face | `const` [`FVLFace`](#fvlface)`*` | Face instance |
**Returns:** [`FVLDetectionQuality`](#fvldetectionquality)
### fvl_face_bounding_box
```c
FVLBoundingBox fvl_face_bounding_box(const FVLFace* face);
```
Returns the bounding box of the face.
| Parameter | Type | Description |
|-----------|------|-------------|
| face | `const` [`FVLFace`](#fvlface)`*` | Face instance |
**Returns:** [`FVLBoundingBox`](#fvlboundingbox)
### fvl_face_confidence
```c
float fvl_face_confidence(const FVLFace* face);
```
Returns the confidence value of the face.
| Parameter | Type | Description |
|-----------|------|-------------|
| face | `const` [`FVLFace`](#fvlface)`*` | Face instance |
**Returns:** `float` — detection confidence.
### fvl_face_landmarks
```c
FVLPoint2dArray* fvl_face_landmarks(const FVLFace* face);
```
Returns the landmarks of the face.
| Parameter | Type | Description |
|-----------|------|-------------|
| face | `const` [`FVLFace`](#fvlface)`*` | Face instance |
**Returns:** [`FVLPoint2dArray`](#fvlpoint2darray)`*` — landmark array. **Caller must `free()` the returned array.**
---
## Data Types
### FVLPoint2d
```c
typedef struct FVLPoint2d {
float x;
float y;
} FVLPoint2d;
```
| Field | Type | Description |
|-------|------|-------------|
| x | `float` | X coordinate of the point |
| y | `float` | Y coordinate of the point |
### FVLPoint2dArray
```c
typedef struct FVLPoint2dArray {
int count;
FVLPoint2d* points;
} FVLPoint2dArray;
```
| Field | Type | Description |
|-------|------|-------------|
| count | `int` | Number of points |
| points | [`FVLPoint2d`](#fvlpoint2d)`*` | Pointer to the array of points |
### FVLBoundingBox
```c
typedef struct FVLBoundingBox {
int x;
int y;
int width;
int height;
} FVLBoundingBox;
```
| Field | Type | Description |
|-------|------|-------------|
| x | `int` | X coordinate of the top-left corner |
| y | `int` | Y coordinate of the top-left corner |
| width | `int` | Width in pixels |
| height | `int` | Height in pixels |
### FVLImageHeader
```c
typedef struct FVLImageHeader {
const uint8_t* data;
int width;
int height;
int stride;
FVLImageFormat format;
} FVLImageHeader;
```
| Field | Type | Description |
|-------|------|-------------|
| data | `const uint8_t*` | Pointer to the byte array of the image |
| width | `int` | Width of the image in pixels |
| height | `int` | Height of the image in pixels |
| stride | `int` | Length of one row of pixels in bytes (e.g., `3*width + padding`) |
| format | [`FVLImageFormat`](#fvlimageformat) | Image pixel format |
### FVLMatch
```c
typedef struct FVLMatch {
float similarity;
} FVLMatch;
```
| Field | Type | Description |
|-------|------|-------------|
| similarity | `float` | Similarity score between the two faces |
### FVLVersion
```c
typedef struct FVLVersion {
int major;
int minor;
int patch;
} FVLVersion;
```
| Field | Type | Description |
|-------|------|-------------|
| major | `int` | Major version number |
| minor | `int` | Minor version number |
| patch | `int` | Patch version number |
### FVLFaceArray
```c
typedef struct FVLFaceArray {
int count;
FVLFace** faces;
} FVLFaceArray;
```
| Field | Type | Description |
|-------|------|-------------|
| count | `int` | Number of faces |
| faces | [`FVLFace`](#fvlface)`**` | Pointer to the array of face pointers |
### FVLFloatArray
```c
typedef struct FVLFloatArray {
int count;
float* data;
} FVLFloatArray;
```
| Field | Type | Description |
|-------|------|-------------|
| count | `int` | Number of floats |
| data | `float*` | Pointer to the array of floats |
---
## Enumerations
### FVLImageFormat
```c
typedef enum FVLImageFormat {
FVLImageFormatGrayscale = 0,
FVLImageFormatRGB = 1,
FVLImageFormatRGBA = 2,
FVLImageFormatBGR = 3,
FVLImageFormatBGRA = 4,
} FVLImageFormat;
```
| Value | Description |
|-------|-------------|
| `FVLImageFormatGrayscale` | 8-bit grayscale |
| `FVLImageFormatRGB` | 24-bit RGB |
| `FVLImageFormatRGBA` | 32-bit RGBA or RGB with padding |
| `FVLImageFormatBGR` | 24-bit BGR |
| `FVLImageFormatBGRA` | 32-bit BGRA or BGR with padding |
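The pixel format determines the bytes per pixel, which in turn determines the minimum `stride` of an `FVLImageHeader` (stride = bytes per pixel × width, plus any row padding). A small illustrative helper; the string keys and the helper itself are assumptions for the sketch, not SDK API:

```python
# Bytes per pixel, per the format descriptions above.
BYTES_PER_PIXEL = {
    "Grayscale": 1,  # 8-bit grayscale
    "RGB": 3,        # 24-bit
    "RGBA": 4,       # 32-bit
    "BGR": 3,        # 24-bit
    "BGRA": 4,       # 32-bit
}

def min_stride(width, image_format):
    """Minimum stride in bytes for one image row with no padding."""
    return width * BYTES_PER_PIXEL[image_format]
```

A 1920-pixel-wide RGB frame, for instance, needs a stride of at least 5760 bytes.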
### FVLDetectionQuality
```c
typedef enum FVLDetectionQuality {
FVLDetectionQualityGood = 0,
FVLDetectionQualityBadQuality = 1,
FVLDetectionQualityMaybeRolled = 2,
} FVLDetectionQuality;
```
| Value | Description |
|-------|-------------|
| `FVLDetectionQualityGood` | No issues detected |
| `FVLDetectionQualityBadQuality` | Bad quality detected |
| `FVLDetectionQualityMaybeRolled` | Face may be rolled; embeddings could be incorrect |
---
## Callback Types
### FVLDetectFacesCallback
```c
typedef void (*FVLDetectFacesCallback)(void* user_data, FVLFaceArray* faces, const char* error_msg);
```
Callback for face detection results.
| Parameter | Type | Description |
|-----------|------|-------------|
| user_data | `void*` | User data passed to the detection function |
| faces | [`FVLFaceArray`](#fvlfacearray)`*` | Array of detected faces. Valid only during the callback. |
| error_msg | `const char*` | Error message if failed, `NULL` on success. Valid only during the callback. |
### FVLEmbedFaceCallback
```c
typedef void (*FVLEmbedFaceCallback)(void* user_data, FVLFloatArray* embedding, const char* error_msg);
```
Callback for face embedding results.
| Parameter | Type | Description |
|-----------|------|-------------|
| user_data | `void*` | User data passed to the embedding function |
| embedding | [`FVLFloatArray`](#fvlfloatarray)`*` | Embedding vector. Valid only during the callback. |
| error_msg | `const char*` | Error message if failed, `NULL` on success. Valid only during the callback. |
**Important:** The `faces`/`embedding` and `error_msg` parameters are only valid during the callback execution. Copy any data you need to retain before the callback returns.
---
## Memory Management Summary
| Function Returns | Must Free? | How |
|------------------|------------|-----|
| [`fvl_face_verifier_new()`](#fvl_face_verifier_new) | Yes | [`fvl_face_verifier_free()`](#fvl_face_verifier_free) |
| [`fvl_face_new()`](#fvl_face_new) | Yes | [`fvl_face_free()`](#fvl_face_free) |
| [`fvl_face_copy()`](#fvl_face_copy) | Yes | [`fvl_face_free()`](#fvl_face_free) |
| [`fvl_face_landmarks()`](#fvl_face_landmarks) | Yes | `free()` |
| [`fvl_face_verifier_get_model_name()`](#fvl_face_verifier_get_model_name) | Yes | `free()` |
| [`fvl_face_verifier_get_sdk_version_string()`](#fvl_face_verifier_get_sdk_version_string) | Yes | `free()` |
| `errorMessage` output parameter | Yes | `free()` |
| Callback parameters | No | Valid only during callback |
Cpp
https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/cpp/
# C++ API Reference
## Namespace
`fvl`
---
## Classes
### FaceVerifier
The main entry point for face detection, embedding, and verification operations.
#### Constructor
```cpp
FaceVerifier(const std::string& modelFile, int maxConcurrency = 0);
```
Loads the model file and sets up processing.
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| modelFile | `const std::string&` | Path to the `.realZ` model file | — |
| maxConcurrency | `int` | Maximum allowed concurrency. 0 means automatic (using all cores) | `0` |
#### Destructor
```cpp
~FaceVerifier();
```
#### Methods
##### detectFaces (Future)
```cpp
std::future<std::vector<fvl::Face>> detectFaces(const fvl::ImageHeader& imageHeader);
```
Detects the faces on an image. Returns a future that resolves with the detected faces.
| Parameter | Type | Description |
|-----------|------|-------------|
| imageHeader | `const` [`ImageHeader`](#imageheader)`&` | Image descriptor |
**Returns:** `std::future<std::vector<fvl::Face>>` — the detected faces.
The given [`ImageHeader`](#imageheader) doesn't own the image data; it is safe to delete the data after the call — a copy is made internally.
This call is non-blocking. You can call it again with the next image without waiting for the result. See [`getConcurrentCalculations()`](#getconcurrentcalculations).
##### detectFaces (Callback)
```cpp
void detectFaces(const fvl::ImageHeader& imageHeader,
                 std::function<void(fvl::ResultOrError<std::vector<fvl::Face>>)> callback);
```
Detects the faces on an image and invokes the callback with the result.
| Parameter | Type | Description |
|-----------|------|-------------|
| imageHeader | `const` [`ImageHeader`](#imageheader)`&` | Image descriptor |
| callback | `std::function<void(fvl::ResultOrError<std::vector<fvl::Face>>)>` | Callback invoked with the result |
##### embedFace (Future)
```cpp
std::future<std::vector<float>> embedFace(const fvl::Face& face);
```
Returns the embedding of a detected face.
| Parameter | Type | Description |
|-----------|------|-------------|
| face | `const` [`Face`](#face)`&` | The previously detected face to embed |
**Returns:** `std::future<std::vector<float>>` — the face embedding vector.
This call is non-blocking.
##### embedFace (Callback)
```cpp
void embedFace(const fvl::Face& face,
               std::function<void(fvl::ResultOrError<std::vector<float>>)> callback);
```
Returns the embedding of a detected face via callback.
| Parameter | Type | Description |
|-----------|------|-------------|
| face | `const` [`Face`](#face)`&` | The previously detected face to embed |
| callback | `std::function<void(fvl::ResultOrError<std::vector<float>>)>` | Callback invoked with the result |
##### compareFaces
```cpp
fvl::Match compareFaces(const std::vector<float>& embedding1,
                        const std::vector<float>& embedding2);
```
Compares two face embeddings.
| Parameter | Type | Description |
|-----------|------|-------------|
| embedding1 | `const std::vector<float>&` | Embedding of the first face |
| embedding2 | `const std::vector<float>&` | Embedding of the second face |
**Returns:** [`Match`](#match) — match result with the similarity metric.
##### getConcurrentCalculations
```cpp
int getConcurrentCalculations() const;
```
Returns the approximate number of calculations currently in-flight. Use this to limit the number of concurrent calculations.
**Returns:** `int` — the number of concurrent calculations.
##### getModelName
```cpp
std::string getModelName() const;
```
Returns the name (version, etc.) of the loaded model.
**Returns:** `std::string` — name of the model.
##### getSDKVersion
```cpp
static fvl::Version getSDKVersion();
```
Returns the version of the SDK (not the model).
**Returns:** [`Version`](#version) — SDK version.
##### getSDKVersionString
```cpp
static std::string getSDKVersionString();
```
Returns the version of the SDK as a string.
**Returns:** `std::string` — SDK version string.
---
### Face
Represents a detected face with landmarks, bounding box, and quality information.
#### Constructor
```cpp
Face(const fvl::ImageHeader& imageHeader,
const std::vector<fvl::Point2d>& landmarks,
const fvl::BoundingBox& bbox = fvl::BoundingBox(),
float confidence = 0.0f);
```
Creates a Face object to support 3rd party face detectors.
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| imageHeader | `const` [`ImageHeader`](#imageheader)`&` | Image containing the face | — |
| landmarks | `const std::vector<`[`Point2d`](#point2d)`>&` | Face landmarks (5 points) | — |
| bbox | `const` [`BoundingBox`](#boundingbox)`&` | Bounding box of the face | `BoundingBox()` |
| confidence | `float` | Detection confidence score | `0.0f` |
#### Methods
##### detectionQuality
```cpp
DetectionQuality detectionQuality() const;
```
**Returns:** [`DetectionQuality`](#detectionquality) — the detection quality of the face.
##### boundingBox
```cpp
BoundingBox boundingBox() const;
```
**Returns:** [`BoundingBox`](#boundingbox) — the bounding box of the face.
##### confidence
```cpp
float confidence() const;
```
**Returns:** `float` — the detection confidence score.
##### landmarks
```cpp
std::vector<fvl::Point2d> landmarks() const;
```
**Returns:** `std::vector<`[`Point2d`](#point2d)`>` — the 5 facial landmarks. See [landmarks specification](https://verifeye-docs.realeyes.ai/../overview.md#results).
---
## Types
### ImageHeader
Descriptor for image data (non-owning).
```cpp
struct ImageHeader {
const uint8_t* data;
int width;
int height;
int stride;
fvl::ImageFormat format;
};
```
| Field | Type | Description |
|-------|------|-------------|
| data | `const uint8_t*` | Pointer to the byte array of the image |
| width | `int` | Width of the image in pixels |
| height | `int` | Height of the image in pixels |
| stride | `int` | Length of one row of pixels in bytes (e.g., `3*width + padding`) |
| format | [`ImageFormat`](#imageformat) | Image pixel format |
### Point2d
2D point for landmark coordinates.
```cpp
struct Point2d {
float x;
float y;
};
```
| Field | Type | Description |
|-------|------|-------------|
| x | `float` | X coordinate of the point |
| y | `float` | Y coordinate of the point |
### BoundingBox
Bounding box for detected faces.
```cpp
struct BoundingBox {
int x;
int y;
int width;
int height;
};
```
| Field | Type | Description |
|-------|------|-------------|
| x | `int` | X coordinate of the top-left corner |
| y | `int` | Y coordinate of the top-left corner |
| width | `int` | Width in pixels |
| height | `int` | Height in pixels |
### Match
Result of face comparison.
```cpp
struct Match {
float similarity;
};
```
| Field | Type | Description |
|-------|------|-------------|
| similarity | `float` | Similarity score between the two faces |
### Version
Semantic version number for the SDK.
```cpp
struct Version {
int major;
int minor;
int patch;
};
```
| Field | Type | Description |
|-------|------|-------------|
| major | `int` | Major version number |
| minor | `int` | Minor version number |
| patch | `int` | Patch version number |
### ErrorType
Error information for the callback interface.
```cpp
struct ErrorType {
std::string errorString;
};
```
| Field | Type | Description |
|-------|------|-------------|
| errorString | `std::string` | Human-readable description of the error |
### ResultOrError
Type representing the result or the error in the callback interface.
```cpp
template <typename T>
using ResultOrError = std::variant<T, ErrorType>;
```
Use `std::get_if` or `std::visit` to extract the result or error:
```cpp
// e.g. for a detectFaces() callback, the result type is std::vector<fvl::Face>
if (auto* value = std::get_if<std::vector<fvl::Face>>(&result)) {
    // success: *value holds the faces
} else {
    auto& error = std::get<fvl::ErrorType>(result);
    // handle error.errorString
}
```
---
## Enums
### ImageFormat
```cpp
enum class ImageFormat {
Grayscale = 0,
RGB = 1,
RGBA = 2,
BGR = 3,
BGRA = 4,
};
```
| Value | Description |
|-------|-------------|
| `Grayscale` | 8-bit grayscale |
| `RGB` | 24-bit RGB |
| `RGBA` | 32-bit RGBA or RGB with padding |
| `BGR` | 24-bit BGR |
| `BGRA` | 32-bit BGRA or BGR with padding |
### DetectionQuality
```cpp
enum class DetectionQuality {
Good = 0,
BadQuality = 1,
MaybeRolled = 2,
};
```
| Value | Description |
|-------|-------------|
| `Good` | No issues detected |
| `BadQuality` | Bad quality detected |
| `MaybeRolled` | Face may be rolled; embeddings could be incorrect |
---
## Thread Safety
- [`FaceVerifier`](#faceverifier) methods can be called concurrently from multiple threads.
- [`detectFaces()`](#detectfaces-future) and [`embedFace()`](#embedface-future) calls can execute concurrently.
- Use [`getConcurrentCalculations()`](#getconcurrentcalculations) to monitor and limit concurrency.
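The concurrency advice above applies regardless of language binding. As a general pattern only (a sketch, not SDK code; `detect` and `MAX_IN_FLIGHT` are placeholders), bounding the number of in-flight detections can look like:

```python
import asyncio

MAX_IN_FLIGHT = 4  # illustrative cap; tune against getConcurrentCalculations()

async def process_frames(frames, detect):
    """Run an async detect() over frames with at most MAX_IN_FLIGHT concurrent calls."""
    semaphore = asyncio.Semaphore(MAX_IN_FLIGHT)

    async def bounded(frame):
        async with semaphore:
            return await detect(frame)

    # gather() preserves input order in its results
    return await asyncio.gather(*(bounded(f) for f in frames))
```

The same throttling idea carries over to the C++ future/callback interfaces: check the in-flight count before issuing the next call.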
Dotnet
https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/dotnet/
# .NET API Reference
## Namespace
`Realeyes.FaceVerification`
---
## Classes
### FaceVerifier
Main entry point for face detection, embedding, and verification operations. Implements `IDisposable`.
#### Constructor
```csharp
public FaceVerifier(string modelPath, int maxConcurrency = 0)
```
Creates a new FaceVerifier instance.
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| modelPath | `string` | Path to the `.realZ` model file | — |
| maxConcurrency | `int` | Maximum concurrency. 0 for automatic (all cores) | `0` |
**Throws:**
- `ArgumentNullException` — when `modelPath` is null
- [`FaceVerificationException`](#faceverificationexception) — when model loading fails
#### Properties
| Property | Type | Description |
|----------|------|-------------|
| ConcurrentCalculations | `int` | Number of calculations currently running concurrently (read-only) |
| ModelName | `string` | Name/version of the loaded model (read-only) |
| SdkVersion | [`Version`](#version) | SDK version (read-only, static) |
| SdkVersionString | `string` | SDK version as a string (read-only, static) |
#### Methods
##### DetectFacesAsync
```csharp
public Task<FaceList> DetectFacesAsync(ImageHeader imageHeader)
```
Detects faces in an image asynchronously.
| Parameter | Type | Description |
|-----------|------|-------------|
| imageHeader | [`ImageHeader`](#imageheader) | Image to process |
**Returns:** `Task<`[`FaceList`](#facelist)`>` — disposable collection of detected faces.
**Throws:** [`FaceVerificationException`](#faceverificationexception) — when detection fails.
##### EmbedFaceAsync
```csharp
public Task<float[]> EmbedFaceAsync(Face face)
```
Computes the embedding vector for a detected face asynchronously.
| Parameter | Type | Description |
|-----------|------|-------------|
| face | [`Face`](#face) | Previously detected face |
**Returns:** `Task<float[]>` — embedding vector.
**Throws:**
- `ArgumentNullException` — when `face` is null
- [`FaceVerificationException`](#faceverificationexception) — when embedding fails
##### CompareFaces
```csharp
public Match CompareFaces(float[] embedding1, float[] embedding2)
```
Compares two face embeddings and returns a similarity score.
| Parameter | Type | Description |
|-----------|------|-------------|
| embedding1 | `float[]` | First face embedding |
| embedding2 | `float[]` | Second face embedding |
**Returns:** [`Match`](#match) — match result with similarity score.
**Throws:**
- `ArgumentNullException` — when either embedding is null
- `ArgumentException` — when embeddings have different lengths
##### Dispose
```csharp
public void Dispose()
```
Disposes the FaceVerifier and releases native resources.
---
### FaceList
A disposable collection of [`Face`](#face) objects that automatically disposes all faces when it is disposed. Inherits from `List<Face>` and implements `IDisposable` and `IAsyncDisposable`.
#### Methods
##### Dispose
```csharp
public void Dispose()
```
Disposes all Face objects in the collection synchronously.
##### DisposeAsync
```csharp
public ValueTask DisposeAsync()
```
Disposes all Face objects in the collection asynchronously.
---
### Face
Represents a detected face with landmarks, bounding box, and quality information. Implements `IDisposable`.
#### Constructor
```csharp
public Face(ImageHeader imageHeader, Point2d[] landmarks, BoundingBox boundingBox, float confidence)
```
Creates a Face object from a third-party face detector.
| Parameter | Type | Description |
|-----------|------|-------------|
| imageHeader | [`ImageHeader`](#imageheader) | Image containing the face |
| landmarks | [`Point2d`](#point2d)`[]` | Face landmarks (exactly 5 points) |
| boundingBox | [`BoundingBox`](#boundingbox) | Bounding box of the face |
| confidence | `float` | Detection confidence score |
**Throws:**
- `ArgumentNullException` — when `imageHeader` or `landmarks` is null
- `ArgumentException` — when landmarks count is not exactly 5
- [`FaceVerificationException`](#faceverificationexception) — when face creation fails
#### Properties
| Property | Type | Description |
|----------|------|-------------|
| DetectionQuality | [`DetectionQuality`](#detectionquality) | Detection quality of this face (read-only) |
| BoundingBox | [`BoundingBox`](#boundingbox) | Bounding box of this face (read-only) |
| Confidence | `float` | Detection confidence score (read-only) |
#### Methods
##### GetLandmarks
```csharp
public Point2d[] GetLandmarks()
```
Gets the facial landmarks.
**Returns:** [`Point2d`](#point2d)`[]` — array of 5 landmark points.
##### Clone
```csharp
public Face Clone()
```
Creates a copy of this face.
**Returns:** [`Face`](#face) — a new Face instance with the same data.
##### Dispose
```csharp
public void Dispose()
```
Disposes the face and releases native resources.
---
## Data Types
### ImageHeader
Image descriptor for passing image data to the Face Verification Library. (`readonly record struct`)
#### Constructor
```csharp
public ImageHeader(byte[] Data, int Width, int Height, int Stride, ImageFormat Format)
```
| Parameter | Type | Description |
|-----------|------|-------------|
| Data | `byte[]` | Image data bytes |
| Width | `int` | Width in pixels |
| Height | `int` | Height in pixels |
| Stride | `int` | Length of one row in bytes (e.g., `3 * width + padding`) |
| Format | [`ImageFormat`](#imageformat) | Pixel format |
**Throws:**
- `ArgumentNullException` — when `Data` is null
- `ArgumentOutOfRangeException` — when `Width`, `Height`, or `Stride` is not positive
#### Properties
| Property | Type | Description |
|----------|------|-------------|
| Data | `byte[]` | Image data bytes (read-only) |
| Width | `int` | Width of the image in pixels (read-only) |
| Height | `int` | Height of the image in pixels (read-only) |
| Stride | `int` | Length of one row of pixels in bytes (read-only) |
| Format | [`ImageFormat`](#imageformat) | Image pixel format (read-only) |
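To make the `Stride` parameter concrete: for a 24-bit RGB image each row holds `3 * Width` bytes, and some image sources pad each row up to an alignment boundary. A quick Python sketch of the arithmetic (the 4-byte alignment is only an example; use whatever row layout your image source actually produces):

```python
def rgb_stride(width: int, alignment: int = 1) -> int:
    """Bytes per row for 24-bit RGB, rounded up to a multiple of `alignment`."""
    row_bytes = 3 * width
    return (row_bytes + alignment - 1) // alignment * alignment

print(rgb_stride(640))     # 1920: tightly packed rows, no padding
print(rgb_stride(101, 4))  # 304: 303 pixel bytes + 1 padding byte
```

If `Stride` does not match the real row length of `Data`, rows will be read misaligned, so always take it from the image source rather than recomputing it from `Width` alone.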
### BoundingBox
Bounding box representing a rectangular region. (`readonly record struct`)
#### Constructor
```csharp
public BoundingBox(int X, int Y, int Width, int Height)
```
| Parameter | Type | Description |
|-----------|------|-------------|
| X | `int` | X coordinate of the top-left corner |
| Y | `int` | Y coordinate of the top-left corner |
| Width | `int` | Width of the bounding box |
| Height | `int` | Height of the bounding box |
#### Properties
| Property | Type | Description |
|----------|------|-------------|
| X | `int` | X coordinate of the top-left corner (read-only) |
| Y | `int` | Y coordinate of the top-left corner (read-only) |
| Width | `int` | Width of the bounding box (read-only) |
| Height | `int` | Height of the bounding box (read-only) |
| Right | `int` | Right edge X coordinate (`X + Width`) (read-only) |
| Bottom | `int` | Bottom edge Y coordinate (`Y + Height`) (read-only) |
| Area | `int` | Area of the bounding box (`Width * Height`) (read-only) |
### Point2d
2D point representing a landmark coordinate. (`readonly record struct`)
#### Constructor
```csharp
public Point2d(float X, float Y)
```
| Parameter | Type | Description |
|-----------|------|-------------|
| X | `float` | X coordinate |
| Y | `float` | Y coordinate |
#### Properties
| Property | Type | Description |
|----------|------|-------------|
| X | `float` | X coordinate (read-only) |
| Y | `float` | Y coordinate (read-only) |
### Match
Result of face comparison indicating similarity. (`readonly record struct`)
#### Constructor
```csharp
public Match(float Similarity)
```
| Parameter | Type | Description |
|-----------|------|-------------|
| Similarity | `float` | Similarity score |
#### Properties
| Property | Type | Description |
|----------|------|-------------|
| Similarity | `float` | Similarity score (read-only) |
#### Methods
##### ExceedsThreshold
```csharp
public bool ExceedsThreshold(float threshold)
```
Checks if the similarity score exceeds a threshold.
| Parameter | Type | Description |
|-----------|------|-------------|
| threshold | `float` | Similarity threshold |
**Returns:** `bool` — `true` if `Similarity >= threshold`.
### Version
Semantic version number. (`readonly record struct`)
#### Constructor
```csharp
public Version(int Major, int Minor, int Patch)
```
| Parameter | Type | Description |
|-----------|------|-------------|
| Major | `int` | Major version number |
| Minor | `int` | Minor version number |
| Patch | `int` | Patch version number |
#### Properties
| Property | Type | Description |
|----------|------|-------------|
| Major | `int` | Major version number (read-only) |
| Minor | `int` | Minor version number (read-only) |
| Patch | `int` | Patch version number (read-only) |
#### Methods
##### ToString
```csharp
public override string ToString()
```
**Returns:** `string` — version in `"Major.Minor.Patch"` format.
---
## Enumerations
### ImageFormat
Image pixel format.
| Value | Int | Description |
|-------|-----|-------------|
| `Grayscale` | 0 | 8-bit grayscale |
| `RGB` | 1 | 24-bit RGB |
| `RGBA` | 2 | 32-bit RGBA or RGB with padding |
| `BGR` | 3 | 24-bit BGR |
| `BGRA` | 4 | 32-bit BGRA or BGR with padding |
### DetectionQuality
Detection quality indicator.
| Value | Int | Description |
|-------|-----|-------------|
| `Good` | 0 | No issues detected |
| `BadQuality` | 1 | Bad quality detected |
| `MaybeRolled` | 2 | Face may be rolled; embeddings could be incorrect |
---
## Exceptions
### FaceVerificationException
Exception thrown when Face Verification Library operations fail. Inherits from `Exception`.
#### Constructors
```csharp
public FaceVerificationException()
public FaceVerificationException(string message)
public FaceVerificationException(string message, Exception innerException)
```
| Parameter | Type | Description |
|-----------|------|-------------|
| message | `string` | Error message |
| innerException | `Exception` | Inner exception |
Python
https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/python/
# Python API Reference
## Module
```python
import realeyes.face_verification
```
---
## Classes
### FaceVerifier
The main entry point for face detection, embedding, and verification operations.
#### Constructor
```python
FaceVerifier(model_file: str, max_concurrency: int = 0)
```
Loads the model file and sets up processing.
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| model_file | `str` | Path to the `.realZ` model file | — |
| max_concurrency | `int` | Maximum allowed concurrency. 0 means automatic (all cores) | `0` |
#### Methods
##### detect_faces
```python
def detect_faces(self, image: numpy.ndarray[numpy.uint8]) -> list[Face]
```
Detects faces on an image.
| Parameter | Type | Description |
|-----------|------|-------------|
| image | `numpy.ndarray[numpy.uint8]` | Image in RGB format, shape `(height, width, channels)` |
**Returns:** `list[`[`Face`](#face)`]` — the detected faces.
##### embed_face
```python
def embed_face(self, face: Face) -> list[float]
```
Returns the embedding vector of a detected face.
| Parameter | Type | Description |
|-----------|------|-------------|
| face | [`Face`](#face) | The previously detected face |
**Returns:** `list[float]` — the face embedding vector.
##### compare_faces
```python
def compare_faces(self, embedding1: list[float], embedding2: list[float]) -> Match
```
Compares two face embeddings.
| Parameter | Type | Description |
|-----------|------|-------------|
| embedding1 | `list[float]` | Embedding of the first face |
| embedding2 | `list[float]` | Embedding of the second face |
**Returns:** [`Match`](#match) — match result with similarity score.
##### get_model_name
```python
def get_model_name(self) -> str
```
Returns the name (version, etc.) of the loaded model.
**Returns:** `str` — model name.
---
## Module Functions
### get_sdk_version_string
```python
def get_sdk_version_string() -> str
```
Returns the version string of the SDK (not the model version).
**Returns:** `str` — SDK version string.
---
## Result Classes
### Face
Represents a detected face with landmarks, bounding box, and quality information.
#### Constructor
```python
Face(image: numpy.ndarray[numpy.uint8],
     landmarks: list[Point2d],
     bbox: BoundingBox = BoundingBox(x=0, y=0, width=0, height=0),
     confidence: float = 0.0)
```
Creates a Face object from a third-party face detector.
| Parameter | Type | Description | Default |
|-----------|------|-----------------------------------------------------------------------------------|---------|
| image | `numpy.ndarray[numpy.uint8]` | Image of the face (RGB format) | — |
| landmarks | `list[`[`Point2d`](#point2d)`]` | Face landmarks (5 points). See the [landmarks specification](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/overview/#results). | — |
| bbox | [`BoundingBox`](#boundingbox) | Bounding box of the face | `BoundingBox(x=0, y=0, width=0, height=0)` |
| confidence | `float` | Detection confidence score | `0.0` |
#### Methods
##### bounding_box
```python
def bounding_box(self) -> BoundingBox
```
**Returns:** [`BoundingBox`](#boundingbox) — the bounding box of the detected face.
##### confidence
```python
def confidence(self) -> float
```
**Returns:** `float` — the detection confidence score.
##### detection_quality
```python
def detection_quality(self) -> DetectionQuality
```
**Returns:** [`DetectionQuality`](#detectionquality) — the detection quality of the face.
##### landmarks
```python
def landmarks(self) -> list[Point2d]
```
**Returns:** `list[`[`Point2d`](#point2d)`]` — the 5 facial landmarks.
---
### Point2d
2D point for landmark coordinates.
```python
Point2d(x: float, y: float)
```
| Attribute | Type | Description |
|-----------|------|-------------|
| x | `float` | X coordinate of the point |
| y | `float` | Y coordinate of the point |
### BoundingBox
Bounding box for detected faces.
```python
BoundingBox(x: int, y: int, width: int, height: int)
```
| Attribute | Type | Description |
|-----------|------|-------------|
| x | `int` | X coordinate of the top-left corner |
| y | `int` | Y coordinate of the top-left corner |
| width | `int` | Width of the bounding box in pixels |
| height | `int` | Height of the bounding box in pixels |
### Match
Result of face comparison.
| Attribute | Type | Description |
|-----------|------|-------------|
| similarity | `float` | Similarity score between the two faces |
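The similarity score is computed internally by `compare_faces`; the exact metric is not part of the public contract. For intuition only, a cosine-style similarity over two embedding vectors looks like this (plain Python illustration, not the library's actual implementation):

```python
import math

def cosine_similarity(e1: list[float], e2: list[float]) -> float:
    # Illustrative only: the library computes its own similarity internally.
    dot = sum(a * b for a, b in zip(e1, e2))
    norm1 = math.sqrt(sum(a * a for a in e1))
    norm2 = math.sqrt(sum(b * b for b in e2))
    return dot / (norm1 * norm2)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0 (identical direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```

Whatever the underlying metric, treat the score as a relative measure and compare it against a threshold you have validated on your own data.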
---
## Enums
### DetectionQuality
Detection quality indicator.
| Value | Description |
|-------|-------------|
| `DetectionQuality.Good` | No issues detected |
| `DetectionQuality.BadQuality` | Bad quality detected |
| `DetectionQuality.MaybeRolled` | Face may be rolled; embeddings could be incorrect |
C
https://verifeye-docs.realeyes.ai/native-sdk/face-verification/getting-started/c/
# Getting Started with C API
## Prerequisites
- C99 compatible compiler
- The Face Verification Library shared library (`.so` / `.dylib` / `.dll`)
## Installation
Extract the SDK contents, include `faceverifier_c.h` from the `include` folder, and link against `libFaceVerificationLibrary` in your C project.
## Quick Start Example
```c
#include "faceverifier_c.h"
#include <stdio.h>
#include <stdlib.h>

/* Callback for face detection */
void on_faces_detected(void* user_data, FVLFaceArray* faces, const char* error_msg) {
    (void)user_data; /* unused in this example */
    if (error_msg) {
        fprintf(stderr, "Detection error: %s\n", error_msg);
        return;
    }
    printf("Detected %d face(s)\n", faces->count);
    /* Process each face... */
    for (int i = 0; i < faces->count; i++) {
        FVLBoundingBox bbox = fvl_face_bounding_box(faces->faces[i]);
        printf("  Face %d: (%d, %d, %d, %d)\n", i, bbox.x, bbox.y, bbox.width, bbox.height);
    }
}

int main() {
    char* error = NULL;
    /* Create verifier */
    FVLFaceVerifier* verifier = fvl_face_verifier_new("model/model.realZ", 0, &error);
    if (!verifier) {
        fprintf(stderr, "Failed to create verifier: %s\n", error);
        free(error);
        return 1;
    }
    /* Detect faces (async with callback); image_data, width, height and
       stride describe a frame you have already loaded */
    FVLImageHeader header = {image_data, width, height, stride, FVLImageFormatRGB};
    fvl_face_verifier_detect_faces(verifier, &header, on_faces_detected, NULL);
    /* Clean up (in real code, wait for the callback to complete before freeing) */
    fvl_face_verifier_free(verifier);
    return 0;
}
```
## Common Patterns
### Error Handling
Functions that can fail accept a `char** errorMessage` output parameter. On failure, the function returns `NULL` and sets the error message. The caller must free the error string:
```c
char* error = NULL;
FVLFaceVerifier* verifier = fvl_face_verifier_new("model.realZ", 0, &error);
if (!verifier) {
fprintf(stderr, "Error: %s\n", error);
free(error);
}
```
Pass `NULL` for `errorMessage` if you don't need the error details.
### Memory Management
| Function Returns | Must Free? | How |
|------------------|------------|-----|
| `fvl_face_verifier_new()` | Yes | `fvl_face_verifier_free()` |
| `fvl_face_new()` | Yes | `fvl_face_free()` |
| `fvl_face_copy()` | Yes | `fvl_face_free()` |
| `fvl_face_landmarks()` | Yes | `free()` |
| `fvl_face_verifier_get_model_name()` | Yes | `free()` |
| `fvl_face_verifier_get_sdk_version_string()` | Yes | `free()` |
| `errorMessage` output parameter | Yes | `free()` |
| Callback parameters (`faces`, `embedding`, `error_msg`) | No | Valid only during callback |
### Callbacks
Async operations use callbacks. The `result` and `error_msg` parameters are only valid during the callback — copy any data you need to retain:
```c
void on_embedding(void* user_data, FVLFloatArray* embedding, const char* error_msg) {
if (error_msg) {
/* handle error */
return;
}
/* Copy embedding data if needed beyond callback lifetime */
float* my_embedding = malloc(embedding->count * sizeof(float));
memcpy(my_embedding, embedding->data, embedding->count * sizeof(float));
}
```
## Next Steps
- [C API Reference](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/c/)
Cpp
https://verifeye-docs.realeyes.ai/native-sdk/face-verification/getting-started/cpp/
# Getting Started with C++ API
## Prerequisites
- C++17 compatible compiler
- CMake 3.23+
- Conan package manager
## Installation
Extract the SDK contents, include the headers from the `include` folder, and link against `libFaceVerificationLibrary` in your C++ project.
## Quick Start Example
```cpp
#include "faceverifier.h"
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    fvl::FaceVerifier verifier("model/model.realZ");
    cv::Mat image1 = cv::imread("image1.jpg");
    cv::Mat image2 = cv::imread("image2.jpg");
    // Detect faces
    auto faces1 = verifier.detectFaces({image1.ptr(), image1.cols, image1.rows,
                                        static_cast<int>(image1.step1()), fvl::ImageFormat::BGR}).get();
    auto faces2 = verifier.detectFaces({image2.ptr(), image2.cols, image2.rows,
                                        static_cast<int>(image2.step1()), fvl::ImageFormat::BGR}).get();
    // Compute embeddings
    std::vector<std::vector<float>> embeddings1, embeddings2;
    for (const auto& face : faces1)
        embeddings1.push_back(verifier.embedFace(face).get());
    for (const auto& face : faces2)
        embeddings2.push_back(verifier.embedFace(face).get());
    // Compare
    for (size_t i = 0; i < embeddings1.size(); ++i)
        for (size_t j = 0; j < embeddings2.size(); ++j)
            if (verifier.compareFaces(embeddings1[i], embeddings2[j]).similarity > 0.3)
                std::cout << "Match found!" << std::endl;
    return 0;
}
```
## Common Patterns
### Async Operations
Both `detectFaces()` and `embedFace()` are non-blocking. Each has two overloads:
**std::future API** — returns a `std::future` you can `.get()` on:
```cpp
std::future<std::vector<fvl::Face>> result = verifier.detectFaces(imageHeader);
auto faces = result.get(); // blocks until complete
```
**Callback API** — invokes a callback when complete:
```cpp
verifier.detectFaces(imageHeader, [](fvl::ResultOrError<std::vector<fvl::Face>> result) {
    if (auto* faces = std::get_if<std::vector<fvl::Face>>(&result)) {
        // process faces
    } else {
        // handle error: extract the error alternative from the variant
        // (see ResultOrError in the API reference for the error type)
    }
});
```
You can submit multiple frames without waiting for prior results. Use `getConcurrentCalculations()` to monitor in-flight work.
### Image Data
Construct an [`ImageHeader`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/cpp/#imageheader) to describe your frame data. The `ImageHeader` is a non-owning view: the pixel data is copied internally during the `detectFaces()` call, so the buffer only needs to remain valid until the call returns and can be freed immediately afterwards.
### Error Handling
The callback API uses [`ResultOrError`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/cpp/#resultorerror) (a `std::variant` holding either the result or an error) to represent success or failure.
## Next Steps
- [C++ API Reference](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/cpp/)
Dotnet
https://verifeye-docs.realeyes.ai/native-sdk/face-verification/getting-started/dotnet/
# Getting Started with .NET API
## Prerequisites
- .NET 8.0+
## Installation
```bash
dotnet add package Realeyes.FaceVerification
```
## Quick Start Example
```csharp
using Realeyes.FaceVerification;
using OpenCvSharp;
using var verifier = new FaceVerifier("model/model.realZ");
using var image1 = Cv2.ImRead("image1.jpg");
using var image2 = Cv2.ImRead("image2.jpg");
var imageData1 = image1.ToBytes();
var imageData2 = image2.ToBytes();
var header1 = new ImageHeader(imageData1, image1.Width, image1.Height,
image1.Width * image1.Channels(), ImageFormat.BGR);
var header2 = new ImageHeader(imageData2, image2.Width, image2.Height,
image2.Width * image2.Channels(), ImageFormat.BGR);
// Detect faces
using var faces1 = await verifier.DetectFacesAsync(header1);
using var faces2 = await verifier.DetectFacesAsync(header2);
// Embed all faces
var embeddings1 = new List<float[]>();
foreach (var face in faces1)
embeddings1.Add(await verifier.EmbedFaceAsync(face));
var embeddings2 = new List<float[]>();
foreach (var face in faces2)
embeddings2.Add(await verifier.EmbedFaceAsync(face));
// Compare embeddings
for (int i = 0; i < embeddings1.Count; i++)
for (int j = 0; j < embeddings2.Count; j++)
if (verifier.CompareFaces(embeddings1[i], embeddings2[j]).Similarity > 0.3f)
Console.WriteLine("Match found!");
```
## Common Patterns
### Async/Await
Face detection and embedding are asynchronous. Use `await` to get results:
```csharp
using var faces = await verifier.DetectFacesAsync(header);
var embedding = await verifier.EmbedFaceAsync(faces[0]);
```
### Resource Management
[`FaceVerifier`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/dotnet/#faceverifier), [`Face`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/dotnet/#face), and [`FaceList`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/dotnet/#facelist) implement `IDisposable`. Always use `using` statements to ensure native resources are released:
```csharp
using var verifier = new FaceVerifier("model.realZ");
using var faces = await verifier.DetectFacesAsync(header);
```
`FaceList` also implements `IAsyncDisposable`:
```csharp
await using var faces = await verifier.DetectFacesAsync(header);
```
### Error Handling
Operations throw [`FaceVerificationException`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/dotnet/#faceverificationexception) on failure:
```csharp
try
{
using var verifier = new FaceVerifier("model.realZ");
using var faces = await verifier.DetectFacesAsync(header);
}
catch (FaceVerificationException ex)
{
Console.WriteLine($"Error: {ex.Message}");
}
```
Standard .NET exceptions are also thrown for invalid arguments:
- `ArgumentNullException` — null parameters
- `ArgumentException` — invalid values (e.g., mismatched embedding lengths)
- `ArgumentOutOfRangeException` — non-positive image dimensions
- `ObjectDisposedException` — accessing disposed objects
### 3rd Party Face Detector
Create a [`Face`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/dotnet/#face) from an external detector by specifying image, landmarks, bounding box, and confidence:
```csharp
var landmarks = new Point2d[]
{
new(100f, 120f), // left eye
new(160f, 120f), // right eye
new(130f, 155f), // nose tip
new(105f, 180f), // left mouth corner
new(155f, 180f), // right mouth corner
};
using var face = new Face(header, landmarks, new BoundingBox(80, 90, 100, 120), 0.95f);
var embedding = await verifier.EmbedFaceAsync(face);
```
## Next Steps
- [.NET API Reference](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/dotnet/)
Python
https://verifeye-docs.realeyes.ai/native-sdk/face-verification/getting-started/python/
# Getting Started with Python API
## Prerequisites
- Python >= 3.10
- NumPy
## Installation
```bash
pip install realeyes.face_verification
```
## Quick Start Example
```python
import realeyes.face_verification as fvl
import cv2
verifier = fvl.FaceVerifier('model/model.realZ')
image1 = cv2.imread('image1.jpg')[:, :, ::-1] # OpenCV reads BGR, we need RGB
image2 = cv2.imread('image2.jpg')[:, :, ::-1]
faces1 = verifier.detect_faces(image1)
faces2 = verifier.detect_faces(image2)
embeddings1 = [verifier.embed_face(face) for face in faces1]
embeddings2 = [verifier.embed_face(face) for face in faces2]
for i, e1 in enumerate(embeddings1):
    for j, e2 in enumerate(embeddings2):
        if verifier.compare_faces(e1, e2).similarity > 0.3:
            print(f'{faces1[i]} from image 1 and {faces2[j]} from image 2 are the same person!')
```
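The nested loop above reports every pair whose similarity clears the threshold, so one face can match several others. If you instead want each face in the first image paired with at most its single best match, the selection logic is plain Python (the `0.3` threshold is just the value used in the example above, not an official recommendation):

```python
def best_match(sims, threshold=0.3):
    # sims[i][j] = similarity between face i (image 1) and face j (image 2),
    # e.g. sims[i][j] = verifier.compare_faces(embeddings1[i], embeddings2[j]).similarity
    # Returns {i: j} mapping each face in image 1 to its best match in image 2,
    # keeping only matches that clear the threshold.
    matches = {}
    for i, row in enumerate(sims):
        j = max(range(len(row)), key=lambda k: row[k])
        if row[j] > threshold:
            matches[i] = j
    return matches

print(best_match([[0.1, 0.8], [0.5, 0.2]]))  # {0: 1, 1: 0}
```

Note this still allows two faces from image 1 to claim the same face in image 2; for strict one-to-one assignment you would need something like the Hungarian algorithm.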
## Common Patterns
### Image Format
The Python API expects images as NumPy arrays (`numpy.ndarray` of `uint8`) in **RGB** format with shape `(height, width, channels)`. If you use OpenCV (which loads BGR), convert with:
```python
rgb_image = cv2.imread('photo.jpg')[:, :, ::-1]
```
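Note that `[:, :, ::-1]` produces a channel-reversed *view* with non-standard strides rather than a fresh array. If the native binding requires C-contiguous input (a common requirement for native wrappers, though not stated explicitly here), make a contiguous copy first:

```python
import numpy as np

# Stand-in for a cv2.imread result: HxWx3 uint8 array in BGR channel order
bgr = np.zeros((4, 6, 3), dtype=np.uint8)
bgr[..., 0] = 255  # fill the blue channel

rgb_view = bgr[:, :, ::-1]            # reversed-channel view, not C-contiguous
rgb = np.ascontiguousarray(rgb_view)  # contiguous copy, safe to hand to native code

print(rgb_view.flags['C_CONTIGUOUS'])  # False
print(rgb.flags['C_CONTIGUOUS'])       # True
print(rgb[0, 0, 2])                    # 255: blue is now the last channel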
### 3rd Party Face Detector
You can create [`Face`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/python/#face) objects from a 3rd party face detector by providing the image and landmarks:
```python
face = fvl.Face(
    image=rgb_image,
    landmarks=[
        fvl.Point2d(x=100.0, y=120.0),  # left eye
        fvl.Point2d(x=160.0, y=120.0),  # right eye
        fvl.Point2d(x=130.0, y=155.0),  # nose tip
        fvl.Point2d(x=105.0, y=180.0),  # left mouth corner
        fvl.Point2d(x=155.0, y=180.0),  # right mouth corner
    ],
    bbox=fvl.BoundingBox(x=80, y=90, width=100, height=120),
    confidence=0.95,
)
embedding = verifier.embed_face(face)
```
### Detection Quality
Check the quality of a detection before embedding:
```python
for face in faces:
    if face.detection_quality() == fvl.DetectionQuality.Good:
        embedding = verifier.embed_face(face)
```
## Next Steps
- [Python API Reference](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/python/)
Index
https://verifeye-docs.realeyes.ai/native-sdk/face-verification/
# Face Verification Library
Welcome to the Face Verification Library documentation!
## Contents:
- [Overview](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/overview/)
- [C++ API](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/cpp/)
- [C API](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/c/)
- [Python API](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/python/)
- [.NET API](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/dotnet/)
Overview
https://verifeye-docs.realeyes.ai/native-sdk/face-verification/overview/
# Face Verification Library
## Introduction
The Face Verification Library is a portable C++ library for face verification, designed for real-time processing on client devices (mobile and desktop).
The SDK provides wrappers in the following languages:
- C
- Python
- C# (.NET)
## Features
- Face detection
- Face embedding extraction for verification
- Embedding comparison with similarity scoring
- Async API with both `std::future` and callback interfaces
- Configurable concurrency
- Support for 3rd party face detectors
- Cross-platform (desktop, mobile)
## Supported Platforms
### Hardware Requirements
- **CPU:** Any modern 64-bit capable CPU (x86-64 with AVX, ARMv8)
- **GPU:** No special requirement
- **RAM:** 2 GB of available RAM required
- **Camera:** Minimum resolution: 640x480
### Software Requirements
The SDK is regularly tested on the following operating systems:
| Platform | Version |
|----------|---------|
| Windows | 11 |
| Linux | Ubuntu 24.04 |
| macOS | 15 Sequoia |
| iOS | 18 |
| Android | 8.1 (API level 27) |
### Platform / Architecture Support
| Platform | Architecture | Status |
|----------|-------------|--------|
| Linux | x86_64, ARM64 | Supported |
| Windows | x86_64 | Supported |
| macOS | ARM64 | Supported |
| iOS | ARM64 | Supported |
| Android | ARM64 | Supported |
## Available APIs
- **C++** - Core native API
- **C** - C-compatible API for FFI integration
- **Python** - Python bindings via pybind11
- **.NET** - C# bindings with async/await support
## API Comparison
**Create verifier**
- C++: [`FaceVerifier()`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/cpp/#faceverifier)
- C: [`fvl_face_verifier_new()`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/c/#fvl_face_verifier_new)
- Python: [`FaceVerifier()`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/python/#faceverifier)
- .NET: [`FaceVerifier()`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/dotnet/#faceverifier)
**Detect faces**
- C++: [`detectFaces()`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/cpp/#detectfaces-future)
- C: [`fvl_face_verifier_detect_faces()`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/c/#fvl_face_verifier_detect_faces)
- Python: [`detect_faces()`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/python/#detect_faces)
- .NET: [`DetectFacesAsync()`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/dotnet/#detectfacesasync)
**Embed face**
- C++: [`embedFace()`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/cpp/#embedface-future)
- C: [`fvl_face_verifier_embed_face()`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/c/#fvl_face_verifier_embed_face)
- Python: [`embed_face()`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/python/#embed_face)
- .NET: [`EmbedFaceAsync()`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/dotnet/#embedfaceasync)
**Compare faces**
- C++: [`compareFaces()`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/cpp/#comparefaces)
- C: [`fvl_face_verifier_compare_faces()`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/c/#fvl_face_verifier_compare_faces)
- Python: [`compare_faces()`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/python/#compare_faces)
- .NET: [`CompareFaces()`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/dotnet/#comparefaces)
**Get model name**
- C++: [`getModelName()`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/cpp/#getmodelname)
- C: [`fvl_face_verifier_get_model_name()`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/c/#fvl_face_verifier_get_model_name)
- Python: [`get_model_name()`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/python/#get_model_name)
- .NET: [`ModelName`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/dotnet/#properties)
**Get SDK version**
- C++: [`getSDKVersion()`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/cpp/#getsdkversion)
- C: [`fvl_face_verifier_get_sdk_version()`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/c/#fvl_face_verifier_get_sdk_version)
- Python: [`get_sdk_version_string()`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/python/#get_sdk_version_string)
- .NET: [`SdkVersion`](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/dotnet/#properties)
## Quick Links
### Getting Started
- [C++ Getting Started](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/getting-started/cpp/)
- [C Getting Started](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/getting-started/c/)
- [Python Getting Started](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/getting-started/python/)
- [.NET Getting Started](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/getting-started/dotnet/)
### API Reference
- [C++ API Reference](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/cpp/)
- [C API Reference](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/c/)
- [Python API Reference](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/python/)
- [.NET API Reference](https://verifeye-docs.realeyes.ai/native-sdk/face-verification/api-reference/dotnet/)
## Quick Examples
### C++
```cpp
#include "faceverifier.h"
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    fvl::FaceVerifier verifier("model/model.realZ");
    cv::Mat image1 = cv::imread("image1.jpg");
    cv::Mat image2 = cv::imread("image2.jpg");
    std::vector<fvl::Face> faces1 = verifier.detectFaces({image1.ptr(), image1.cols, image1.rows, static_cast<int>(image1.step1()), fvl::ImageFormat::BGR}).get();
    std::vector<fvl::Face> faces2 = verifier.detectFaces({image2.ptr(), image2.cols, image2.rows, static_cast<int>(image2.step1()), fvl::ImageFormat::BGR}).get();
    std::vector<std::vector<float>> embeddings1, embeddings2;
    for (const fvl::Face& face : faces1)
        embeddings1.push_back(verifier.embedFace(face).get());
    for (const fvl::Face& face : faces2)
        embeddings2.push_back(verifier.embedFace(face).get());
    for (size_t i = 0; i < embeddings1.size(); ++i)
        for (size_t j = 0; j < embeddings2.size(); ++j)
            if (verifier.compareFaces(embeddings1[i], embeddings2[j]).similarity > 0.3) {
                fvl::BoundingBox bbox1 = faces1[i].boundingBox();
                fvl::BoundingBox bbox2 = faces2[j].boundingBox();
                std::cout << "(" << bbox1.x << ", " << bbox1.y << ", " << bbox1.width << ", " << bbox1.height << ") from image 1 and ";
                std::cout << "(" << bbox2.x << ", " << bbox2.y << ", " << bbox2.width << ", " << bbox2.height << ") from image 2 are the same person";
                std::cout << std::endl;
            }
    return 0;
}
```
### Python
```python
import realeyes.face_verification as fvl
import cv2
verifier = fvl.FaceVerifier('model/model.realZ')
image1 = cv2.imread('image1.jpg')[:, :, ::-1]  # OpenCV reads BGR, we need RGB
image2 = cv2.imread('image2.jpg')[:, :, ::-1]  # OpenCV reads BGR, we need RGB
faces1 = verifier.detect_faces(image1)
faces2 = verifier.detect_faces(image2)
embeddings1 = [verifier.embed_face(face) for face in faces1]
embeddings2 = [verifier.embed_face(face) for face in faces2]
for i, e1 in enumerate(embeddings1):
    for j, e2 in enumerate(embeddings2):
        if verifier.compare_faces(e1, e2).similarity > 0.3:
            print(f'{faces1[i]} from image 1 and {faces2[j]} from image 2 are the same person!')
```
### C#
```csharp
using Realeyes.FaceVerification;
using OpenCvSharp;
using var verifier = new FaceVerifier("model/model.realZ");
using var image1 = Cv2.ImRead("image1.jpg");
using var image2 = Cv2.ImRead("image2.jpg");
// Convert OpenCV Mat to byte array
var imageData1 = image1.ToBytes();
var imageData2 = image2.ToBytes();
var header1 = new ImageHeader(imageData1, image1.Width, image1.Height,
image1.Width * image1.Channels(), ImageFormat.BGR);
var header2 = new ImageHeader(imageData2, image2.Width, image2.Height,
image2.Width * image2.Channels(), ImageFormat.BGR);
// Detect faces in parallel
var detectTask1 = verifier.DetectFacesAsync(header1);
var detectTask2 = verifier.DetectFacesAsync(header2);
using var faces1 = await detectTask1;
using var faces2 = await detectTask2;
// Embed all faces in parallel
var embeddingTasks = new List<Task<float[]>>();
foreach (var face in faces1)
embeddingTasks.Add(verifier.EmbedFaceAsync(face));
foreach (var face in faces2)
embeddingTasks.Add(verifier.EmbedFaceAsync(face));
var allEmbeddings = await Task.WhenAll(embeddingTasks);
var embeddings1 = allEmbeddings.Take(faces1.Count).ToArray();
var embeddings2 = allEmbeddings.Skip(faces1.Count).ToArray();
// Compare embeddings
for (int i = 0; i < embeddings1.Length; i++)
{
for (int j = 0; j < embeddings2.Length; j++)
{
var match = verifier.CompareFaces(embeddings1[i], embeddings2[j]);
if (match.Similarity > 0.3f)
{
var bbox1 = faces1[i].BoundingBox;
var bbox2 = faces2[j].BoundingBox;
Console.WriteLine($"({bbox1.X}, {bbox1.Y}, {bbox1.Width}, {bbox1.Height}) from image 1 and " +
$"({bbox2.X}, {bbox2.Y}, {bbox2.Width}, {bbox2.Height}) from image 2 are the same person!");
}
}
}
```
## Results
Each **Face** object exposes the following members:
- **bounding_box:** Bounding box of the detected face (left, top, width, height).
- **confidence:** Confidence of the detection ([0,1] — higher is better).
- **landmarks:** 5 landmarks from the face:
1. Left eye
2. Right eye
3. Nose (tip)
4. Left mouth corner
5. Right mouth corner
## 3rd Party Face Detector
It is possible to calculate the embedding of a face that was detected with a different library: create a **Face** object by specifying the source image and the landmarks.
## Dependencies
The public C++ API hides all implementation details from the user and depends only on the C++17 Standard Library. It also provides a binary-compatible interface, so the underlying implementation can change without requiring recompilation of user code.
## Release Notes
- **Version 1.5.0 (15 Dec 2025)**
- Improved performance on ARM CPUs
- Cleaner .NET API
- New public C API
- **Version 1.4.0 (7 Feb 2024)**
- Experimental .NET support
- **Version 1.3.0 (9 Jun 2023)**
- Add model config version 2 support
- **Version 1.2.0 (9 Jun 2023)**
- Add support for AES encryption
- **Version 1.1.0 (3 Apr 2023)**
- Add support for 3rd party face detectors
- **Version 1.0.0 (1 Mar 2023)**
- Initial release
## 3rd Party Licenses
While the SDK is released under a proprietary license, it incorporates the following open-source projects under their respective licenses:
| Library | License |
|---------|---------|
| [OpenCV](https://opencv.org/license/) | 3-clause BSD |
| [TensorFlow](https://github.com/tensorflow/tensorflow/blob/master/LICENSE) | Apache License 2.0 |
| [Protobuf](https://github.com/protocolbuffers/protobuf/blob/master/LICENSE) | 3-clause BSD |
| [zlib](https://www.zlib.net/zlib_license.html) | zlib License |
| [minizip-ng](https://github.com/zlib-ng/minizip-ng/blob/master/LICENSE) | zlib License |
| [stlab](https://github.com/stlab/libraries/blob/main/LICENSE) | Boost Software License 1.0 |
| [docopt.cpp](https://github.com/docopt/docopt.cpp/blob/master/LICENSE-MIT) | MIT License |
| [pybind11](https://github.com/pybind/pybind11/blob/master/LICENSE) | 3-clause BSD |
| [fmtlib](https://github.com/fmtlib/fmt/blob/master/LICENSE.rst) | MIT License |
## License
Proprietary - Realeyes Data Services Ltd.
Index
https://verifeye-docs.realeyes.ai/native-sdk/
# Native SDK
High-performance native libraries for cross-platform application development across Windows, Linux, macOS, iOS, and Android.
## Overview
The VerifEye Native SDK provides C++ libraries for building high-performance biometric verification applications. Our SDK is optimized for speed, accuracy, and cross-platform compatibility.
## Available Features
- **Face Verification** - Face detection, embedding extraction, and face comparison
- **Liveness Detection** - Detect live faces and prevent spoofing attacks
- **Demographic Estimation** - Age and gender estimation from facial images
- **Face Recognition** - Face recognition and duplicate detection
---
## Key Features
- **Native Performance** - Optimized C++ code for maximum speed
- **Cross-Platform** - Windows, Linux, macOS, iOS, Android
- **Offline Capable** - Works without internet connection
- **Low Latency** - Sub-100ms processing times
- **Small Footprint** - Minimal memory and storage requirements
- **Thread-Safe** - Concurrent processing support
---
## Supported Platforms
| Platform | Architecture | Minimum Version |
|----------|-------------|-----------------|
| **Windows** | x64, ARM64 | Windows 10+ |
| **Linux** | x64, ARM64 | Ubuntu 18.04+, CentOS 7+ |
| **macOS** | x64, ARM64 (M1/M2) | macOS 10.15+ |
| **iOS** | ARM64 | iOS 12.0+ |
| **Android** | ARM64, ARMv7 | Android 6.0+ (API 23+) |
---
## Prerequisites
- C++17 compatible compiler
- CMake 3.15 or higher
- OpenCV 4.x (optional, for image processing)
- Platform-specific build tools
---
## Installation
The Native SDK is available for download from the [VerifEye Developer Console](https://verifeye-console.realeyes.ai/).
### Steps to Get the SDK
1. Log in to the [VerifEye Developer Console](https://verifeye-console.realeyes.ai/)
2. Navigate to the **SDKs** or **Model Downloads** section
3. Request access to the Native SDK (if not already enabled)
4. Download the SDK package for your platform
5. Follow the platform-specific installation instructions included in the package
---
## Documentation
For detailed documentation, API reference, and code examples, please refer to the documentation included with the SDK package.
---
*Last updated: 2026-01-27*
C
https://verifeye-docs.realeyes.ai/native-sdk/liveness-detection/api-reference/c/
# C API Reference
The C API provides a stable ABI for FFI (Foreign Function Interface) integration with other languages.
## Header
```c
#include
```
---
## Opaque Types
```c
typedef struct LLLivenessChecker LLLivenessChecker;
typedef struct LLFace LLFace;
```
These are opaque handles. Use the provided functions to interact with them.
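The opaque-handle pattern keeps the struct layout out of the public header, which is what gives the C API its stable ABI. A minimal self-contained sketch of the same pattern (the `Counter` type here is a hypothetical illustration, not part of the SDK):

```c
#include <stdlib.h>

/* Public header view: only the pointer type is exposed; the layout is hidden. */
typedef struct Counter Counter;
Counter* counter_new(void);
void counter_increment(Counter* c);
int counter_value(const Counter* c);
void counter_free(Counter* c);

/* Implementation view (normally in a separate .c file): the layout can
 * change between releases without breaking callers, because callers only
 * ever hold a pointer to the incomplete type. */
struct Counter { int value; };
Counter* counter_new(void) { return calloc(1, sizeof(Counter)); }
void counter_increment(Counter* c) { c->value++; }
int counter_value(const Counter* c) { return c->value; }
void counter_free(Counter* c) { free(c); }
```

`LLLivenessChecker` and `LLFace` follow the same convention: create with an `ll_*_new` function, query through accessor functions, and release with the matching `ll_*_free`.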
---
## Enums
### LLDetectionQuality
```c
typedef enum LLDetectionQuality {
    LLDetectionQualityGood = 0,        // No issues detected
    LLDetectionQualityBadQuality = 1,  // Low-quality detection (blurry, partial, etc.)
    LLDetectionQualityMaybeRolled = 2  // Face may be rolled; results may be inaccurate
} LLDetectionQuality;
```
### LLImageFormat
```c
typedef enum LLImageFormat {
    LLImageFormatGrayscale = 0,  // 8-bit grayscale
    LLImageFormatRGB = 1,        // 24-bit RGB
    LLImageFormatRGBA = 2,       // 32-bit RGBA
    LLImageFormatBGR = 3,        // 24-bit BGR
    LLImageFormatBGRA = 4        // 32-bit BGRA
} LLImageFormat;
```
---
## Structs
### LLPoint2d
```c
typedef struct LLPoint2d {
    float x;  // X coordinate
    float y;  // Y coordinate
} LLPoint2d;
```
### LLPoint2dArray
```c
typedef struct LLPoint2dArray {
    int count;          // Number of points
    LLPoint2d* points;  // Array of points
} LLPoint2dArray;
```
### LLBoundingBox
```c
typedef struct LLBoundingBox {
    int x;       // Left edge
    int y;       // Top edge
    int width;   // Width in pixels
    int height;  // Height in pixels
} LLBoundingBox;
```
### LLImageHeader
```c
typedef struct LLImageHeader {
    const uint8_t* data;   // Pointer to pixel data (non-owning)
    int width;             // Width in pixels
    int height;            // Height in pixels
    int stride;            // Bytes per row
    LLImageFormat format;  // Pixel format
} LLImageHeader;
```
### LLCheckResult
```c
typedef struct LLCheckResult {
    float score;   // Liveness score (0.0-1.0)
    bool is_live;  // true if classified as live
} LLCheckResult;
```
### LLVersion
```c
typedef struct LLVersion {
    int major;
    int minor;
    int patch;
} LLVersion;
```
### LLFaceArray
```c
typedef struct LLFaceArray {
    int count;       // Number of faces
    LLFace** faces;  // Array of face pointers
} LLFaceArray;
```
---
## Callback Types
### LLDetectFacesCallback
```c
typedef void (*LLDetectFacesCallback)(
    void* user_data,
    LLFaceArray* faces,
    const char* error_msg
);
```
Called when face detection completes.
| Parameter | Description |
|-----------|-------------|
| `user_data` | User data passed to detection function |
| `faces` | Detected faces (valid only during callback) |
| `error_msg` | Error message if failed, NULL on success (valid only during callback, no need to free) |
**Important:** The `faces` array, its contents, and `error_msg` are only valid during the callback. Copy any data you need to retain.
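Because the array only lives for the duration of the call, a typical pattern is to pass a result holder through `user_data` and deep-copy inside the callback. A self-contained sketch of that pattern (the `Mock*` types here are stand-ins for the SDK's `LLFace`/`LLFaceArray`; with the real API you would copy via `ll_face_copy`):

```c
#include <stdlib.h>

/* Stand-ins for the SDK types; real code uses LLFace/LLFaceArray. */
typedef struct MockFace { float confidence; } MockFace;
typedef struct MockFaceArray { int count; MockFace** faces; } MockFaceArray;
typedef void (*DetectCallback)(void* user_data, MockFaceArray* faces,
                               const char* error_msg);

/* Result holder passed through user_data; it outlives the callback. */
typedef struct Collected { int count; MockFace* faces; } Collected;

static void on_detect(void* user_data, MockFaceArray* faces, const char* error_msg) {
    Collected* out = (Collected*)user_data;
    if (error_msg != NULL) { out->count = -1; return; }  /* do not free error_msg */
    out->count = faces->count;
    out->faces = malloc(sizeof(MockFace) * (size_t)faces->count);
    for (int i = 0; i < faces->count; i++)
        out->faces[i] = *faces->faces[i];  /* copy before the callback returns */
}

/* Mock detector: invokes the callback with stack-scoped data, just as the
 * SDK's arguments become invalid once the callback returns. */
static void mock_detect_faces(DetectCallback cb, void* user_data) {
    MockFace a = { 0.98f }, b = { 0.75f };
    MockFace* ptrs[2] = { &a, &b };
    MockFaceArray arr = { 2, ptrs };
    cb(user_data, &arr, NULL);
}
```

With the real API, call [`ll_face_copy`](#ll_face_copy) on each face you want to keep and later release the copies with [`ll_face_free`](#ll_face_free).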
### LLCheckImageCallback
```c
typedef void (*LLCheckImageCallback)(
    void* user_data,
    LLCheckResult* result,
    const char* error_msg
);
```
Called when liveness check completes.
| Parameter | Description |
|-----------|-------------|
| `user_data` | User data passed to check function |
| `result` | Liveness result (valid only during callback) |
| `error_msg` | Error message if failed, NULL on success (valid only during callback, no need to free) |
**Important:** The `result` struct and `error_msg` are only valid during the callback. Copy any data you need to retain.
---
## LivenessChecker Functions
### ll_liveness_checker_new
```c
LLLivenessChecker* ll_liveness_checker_new(
    const char* model_file,
    int max_concurrency,
    char** errorMessage
);
```
Creates a new LivenessChecker instance.
| Parameter | Type | Description |
|-----------|------|-------------|
| `model_file` | `const char*` | Path to `.realZ` model file |
| `max_concurrency` | `int` | Max concurrent operations (0 = auto) |
| `errorMessage` | `char**` | Output: error message on failure |
**Returns:** Pointer to new instance, or NULL on failure
**Memory:** Caller must free `*errorMessage` with `free()` if set
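The `errorMessage` convention — a NULL return plus a heap-allocated string the caller must `free()` — can be sketched with a hypothetical stand-in constructor (`mock_checker_new` is illustrative only; it is not an SDK call):

```c
#include <stdlib.h>
#include <string.h>

/* Mirrors the SDK convention: on failure, return NULL and (if errorMessage
 * is non-NULL) store a malloc'd string that the caller owns. */
static void* mock_checker_new(const char* model_file, char** errorMessage) {
    (void)model_file;  /* pretend the model failed to load */
    if (errorMessage != NULL) {
        const char* msg = "model file not found";
        *errorMessage = malloc(strlen(msg) + 1);
        if (*errorMessage != NULL)
            strcpy(*errorMessage, msg);
    }
    return NULL;
}
```

On the caller side, check the return value first, then consume and `free()` the message; the same sequence applies to `ll_liveness_checker_new`.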
---
### ll_liveness_checker_free
```c
void ll_liveness_checker_free(LLLivenessChecker* checker);
```
Frees a LivenessChecker instance.
| Parameter | Type | Description |
|-----------|------|-------------|
| `checker` | `LLLivenessChecker*` | Instance to free |
---
### ll_liveness_checker_detect_faces
```c
void ll_liveness_checker_detect_faces(
    LLLivenessChecker* checker,
    const LLImageHeader* image_header,
    LLDetectFacesCallback callback,
    void* user_data
);
```
Detects faces in an image asynchronously.
| Parameter | Type | Description |
|-----------|------|-------------|
| `checker` | [`LLLivenessChecker`](#opaque-types)`*` | LivenessChecker instance |
| `image_header` | `const `[`LLImageHeader`](#llimageheader)`*` | Image descriptor |
| `callback` | [`LLDetectFacesCallback`](#lldetectfacescallback) | Callback function |
| `user_data` | `void*` | User data passed to callback |
**Notes:**
- Non-blocking; callback is invoked when complete
- Image data is copied internally
---
### ll_liveness_checker_check_image
```c
void ll_liveness_checker_check_image(
    LLLivenessChecker* checker,
    const LLFace* face,
    LLCheckImageCallback callback,
    void* user_data
);
```
Performs liveness check on a face asynchronously.
| Parameter | Type | Description |
|-----------|------|-------------|
| `checker` | [`LLLivenessChecker`](#opaque-types)`*` | LivenessChecker instance |
| `face` | `const `[`LLFace`](#opaque-types)`*` | Face to check |
| `callback` | [`LLCheckImageCallback`](#llcheckimagecallback) | Callback function |
| `user_data` | `void*` | User data passed to callback |
---
### ll_liveness_checker_get_concurrent_calculations
```c
int ll_liveness_checker_get_concurrent_calculations(
    const LLLivenessChecker* checker
);
```
**Returns:** Approximate number of operations in flight
---
### ll_liveness_checker_get_model_name
```c
char* ll_liveness_checker_get_model_name(
    const LLLivenessChecker* checker
);
```
**Returns:** Model name string (caller must `free()`)
---
### ll_liveness_checker_get_sdk_version
```c
LLVersion ll_liveness_checker_get_sdk_version(void);
```
**Returns:** SDK version as [`LLVersion`](#llversion) struct
---
### ll_liveness_checker_get_sdk_version_string
```c
char* ll_liveness_checker_get_sdk_version_string(void);
```
**Returns:** SDK version string (caller must `free()`)
---
## Face Functions
### ll_face_new
```c
LLFace* ll_face_new(
    const LLImageHeader* image_header,
    const LLPoint2d* landmarks,
    int num_landmarks,
    const LLBoundingBox* bbox,
    float confidence,
    char** errorMessage
);
```
Creates a Face from external detection results.
| Parameter | Type | Description |
|-----------|------|-------------|
| `image_header` | `const `[`LLImageHeader`](#llimageheader)`*` | Image containing face |
| `landmarks` | `const `[`LLPoint2d`](#llpoint2d)`*` | Array of landmarks |
| `num_landmarks` | `int` | Number of landmarks (typically 5) |
| `bbox` | `const `[`LLBoundingBox`](#llboundingbox)`*` | Face bounding box |
| `confidence` | `float` | Detection confidence |
| `errorMessage` | `char**` | Output: error message on failure |
**Returns:** Pointer to new [`LLFace`](#opaque-types), or NULL on failure
**Landmark Order:**
1. Left eye center
2. Right eye center
3. Nose tip
4. Left mouth corner
5. Right mouth corner
---
### ll_face_copy
```c
LLFace* ll_face_copy(const LLFace* other);
```
Creates a copy of a Face.
| Parameter | Type | Description |
|-----------|------|-------------|
| `other` | `const LLFace*` | Face to copy |
**Returns:** Pointer to new [`LLFace`](#opaque-types) copy, or NULL on failure
**Important:** Use this to retain faces from detection callbacks (see [`LLDetectFacesCallback`](#lldetectfacescallback)).
---
### ll_face_free
```c
void ll_face_free(LLFace* face);
```
Frees a Face instance.
| Parameter | Type | Description |
|-----------|------|-------------|
| `face` | `LLFace*` | Face to free |
---
### ll_face_detection_quality
```c
LLDetectionQuality ll_face_detection_quality(const LLFace* face);
```
**Returns:** [`LLDetectionQuality`](#lldetectionquality) - Detection quality of the face
---
### ll_face_bounding_box
```c
LLBoundingBox ll_face_bounding_box(const LLFace* face);
```
**Returns:** [`LLBoundingBox`](#llboundingbox) - Bounding box of the face
---
### ll_face_confidence
```c
float ll_face_confidence(const LLFace* face);
```
**Returns:** Detection confidence (0.0-1.0)
---
### ll_face_landmarks
```c
LLPoint2dArray* ll_face_landmarks(const LLFace* face);
```
**Returns:** Pointer to [`LLPoint2dArray`](#llpoint2darray) landmarks array (caller must `free()`)
---
## Memory Management Summary
| Function Returns | Must Free? | How |
|------------------|------------|-----|
| [`ll_liveness_checker_new()`](#ll_liveness_checker_new) | Yes | [`ll_liveness_checker_free()`](#ll_liveness_checker_free) |
| [`ll_face_new()`](#ll_face_new) | Yes | [`ll_face_free()`](#ll_face_free) |
| [`ll_face_copy()`](#ll_face_copy) | Yes | [`ll_face_free()`](#ll_face_free) |
| [`ll_liveness_checker_get_model_name()`](#ll_liveness_checker_get_model_name) | Yes | `free()` |
| [`ll_liveness_checker_get_sdk_version_string()`](#ll_liveness_checker_get_sdk_version_string) | Yes | `free()` |
| [`ll_face_landmarks()`](#ll_face_landmarks) | Yes | `free()` |
| `errorMessage` output parameter | Yes | `free()` |
| Callback `faces` parameter | No | Valid only during callback |
| Callback `result` parameter | No | Valid only during callback |
| Callback `error_msg` parameter | No | Valid only during callback |
Cpp
https://verifeye-docs.realeyes.ai/native-sdk/liveness-detection/api-reference/cpp/
# C++ API Reference
## Namespace
```cpp
namespace ll { ... }
```
All public types are in the `ll` namespace.
## Headers
```cpp
#include // Main API (includes types.h)
#include // Data types only
```
---
## Classes
### LivenessChecker
Main entry point for face detection and liveness checking operations.
#### Constructor
```cpp
LivenessChecker(const std::string& modelFile, int maxConcurrency = 0);
```
Creates a new LivenessChecker instance.
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `modelFile` | `const std::string&` | Path to the `.realZ` model file | Required |
| `maxConcurrency` | `int` | Maximum concurrent operations (0 = auto) | `0` |
**Throws:** `std::runtime_error` - If model file cannot be loaded
#### Destructor
```cpp
~LivenessChecker();
```
Releases all resources. Waits for pending operations to complete.
---
#### detectFaces (Future)
```cpp
std::future<std::vector<Face>> detectFaces(const ImageHeader& imageHeader);
```
Detects faces in an image asynchronously.
| Parameter | Type | Description |
|-----------|------|-------------|
| `imageHeader` | `const `[`ImageHeader`](#imageheader)`&` | Image descriptor |
**Returns:** `std::future<std::vector<`[`Face`](#face)`>>` - Future resolving to detected faces
**Notes:**
- Non-blocking; multiple calls can overlap
- Image data is copied internally; safe to release after call returns
---
#### detectFaces (Callback)
```cpp
void detectFaces(const ImageHeader& imageHeader,
                 std::function<void(ResultOrError<std::vector<Face>>)> callback);
```
Detects faces with callback notification.
| Parameter | Type | Description |
|-----------|------|-------------|
| `imageHeader` | `const `[`ImageHeader`](#imageheader)`&` | Image descriptor |
| `callback` | `std::function<void(ResultOrError<std::vector<Face>>)>` | Callback function |
---
#### checkImage (Future)
```cpp
std::future<CheckResult> checkImage(const Face& face);
```
Performs liveness check on a detected face.
| Parameter | Type | Description |
|-----------|------|-------------|
| `face` | `const `[`Face`](#face)`&` | Face to check |
**Returns:** `std::future<`[`CheckResult`](#checkresult)`>` - Future resolving to liveness result
---
#### checkImage (Callback)
```cpp
void checkImage(const Face& face,
                std::function<void(ResultOrError<CheckResult>)> callback);
```
Performs liveness check with callback notification.
| Parameter | Type | Description |
|-----------|------|-------------|
| `face` | `const `[`Face`](#face)`&` | Face to check |
| `callback` | `std::function<void(ResultOrError<CheckResult>)>` | Callback function |
---
#### getConcurrentCalculations
```cpp
int getConcurrentCalculations() const;
```
**Returns:** Approximate number of operations currently in flight
---
#### getModelName
```cpp
std::string getModelName() const;
```
**Returns:** Name/version string of the loaded model
---
#### getSDKVersion (Static)
```cpp
static Version getSDKVersion();
```
**Returns:** SDK version as a [`Version`](#version) struct
---
#### getSDKVersionString (Static)
```cpp
static std::string getSDKVersionString();
```
**Returns:** SDK version as a formatted string (e.g., "1.0.0")
---
### Face
Represents a detected face with image data, landmarks, and metadata.
#### Constructor
```cpp
Face(const ImageHeader& imageHeader,
     const std::vector<Point2d>& landmarks,
     const BoundingBox& bbox,
     float confidence = 0.0f);
```
Creates a Face from external detection results (e.g., third-party detector).
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `imageHeader` | `const `[`ImageHeader`](#imageheader)`&` | Image containing the face | Required |
| `landmarks` | `const std::vector<`[`Point2d`](#point2d)`>&` | 5 facial landmarks | Required |
| `bbox` | `const `[`BoundingBox`](#boundingbox)`&` | Face bounding box | Required |
| `confidence` | `float` | Detection confidence (0.0-1.0) | `0.0f` |
**Landmark Order:**
1. Left eye center
2. Right eye center
3. Nose tip
4. Left mouth corner
5. Right mouth corner
#### Destructor
```cpp
~Face();
```
---
#### detectionQuality
```cpp
DetectionQuality detectionQuality() const;
```
**Returns:** [`DetectionQuality`](#detectionquality) - Quality assessment of the detection
---
#### boundingBox
```cpp
BoundingBox boundingBox() const;
```
**Returns:** [`BoundingBox`](#boundingbox) - Bounding box of the face
---
#### confidence
```cpp
float confidence() const;
```
**Returns:** Detection confidence score (0.0-1.0)
---
#### landmarks
```cpp
std::vector<Point2d> landmarks() const;
```
**Returns:** Vector of 5 [`Point2d`](#point2d) facial landmark points
---
## Types
### ImageHeader
```cpp
struct ImageHeader {
    const uint8_t* data;  // Pointer to pixel data (non-owning)
    int width;            // Width in pixels
    int height;           // Height in pixels
    int stride;           // Bytes per row
    ImageFormat format;   // Pixel format
};
```
**Notes:**
- `data` is a non-owning pointer; caller must ensure data remains valid
- `stride` may be greater than `width * bytesPerPixel` due to padding
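For example, a 10-pixel-wide RGB image whose rows are padded to a 32-byte boundary has `stride = 32`, not `30`, so pixel addressing must always go through the stride. A small self-contained sketch (written as plain C, also valid C++):

```c
#include <stdint.h>
#include <stddef.h>

/* Returns a pointer to the first byte of pixel (x, y), honoring the row
 * stride, which may exceed width * bytes_per_pixel because of padding. */
static const uint8_t* pixel_at(const uint8_t* data, int stride,
                               int bytes_per_pixel, int x, int y) {
    return data + (size_t)y * stride + (size_t)x * bytes_per_pixel;
}
```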
---
### ImageFormat
```cpp
enum class ImageFormat {
    Grayscale = 0,  // 8-bit grayscale (1 byte/pixel)
    RGB = 1,        // 24-bit RGB (3 bytes/pixel)
    RGBA = 2,       // 32-bit RGBA (4 bytes/pixel)
    BGR = 3,        // 24-bit BGR (3 bytes/pixel)
    BGRA = 4        // 32-bit BGRA (4 bytes/pixel)
};
```
---
### Point2d
```cpp
struct Point2d {
    float x;  // X coordinate
    float y;  // Y coordinate
};
```
---
### BoundingBox
```cpp
struct BoundingBox {
    int x;       // Left edge (pixels)
    int y;       // Top edge (pixels)
    int width;   // Width (pixels)
    int height;  // Height (pixels)
};
```
---
### CheckResult
```cpp
struct CheckResult {
    float score;   // Liveness score (0.0-1.0)
    bool is_live;  // true if classified as live
};
```
---
### Version
```cpp
struct Version {
    int major;
    int minor;
    int patch;
};
```
---
### DetectionQuality
```cpp
enum class DetectionQuality {
    Good = 0,        // High-quality detection
    BadQuality = 1,  // Low-quality (blurry, partial, etc.)
    MaybeRolled = 2  // Face may be significantly rotated
};
```
---
### ErrorType
```cpp
struct ErrorType {
    std::string errorString;  // Human-readable error description
};
```
---
### ResultOrError
```cpp
template <typename T>
using ResultOrError = std::variant<T, ErrorType>;
```
Type representing either a successful result or an [`ErrorType`](#errortype) in callback interfaces.
**Usage:**
```cpp
void handleResult(ResultOrError<CheckResult> result) {
    if (auto* checkResult = std::get_if<CheckResult>(&result)) {
        // Success
        std::cout << "Score: " << checkResult->score << std::endl;
    } else {
        // Error
        auto& error = std::get<ErrorType>(result);
        std::cerr << "Error: " << error.errorString << std::endl;
    }
}
```
---
## Thread Safety
- [`LivenessChecker`](#livenesschecker) is thread-safe for all methods
- Multiple [`detectFaces()`](#detectfaces-future) and [`checkImage()`](#checkimage-future) calls can execute concurrently
- [`Face`](#face) objects can be used from any thread but are not internally synchronized
- Image data referenced by [`ImageHeader`](#imageheader) must remain valid for async operations
---
## Exception Safety
| Method | Exceptions |
|--------|------------|
| [`LivenessChecker()`](#livenesschecker) | `std::runtime_error` on model load failure |
| [`detectFaces()`](#detectfaces-future) | None (errors via future/callback) |
| [`checkImage()`](#checkimage-future) | None (errors via future/callback) |
Future-based methods may throw when calling `.get()` if an error occurred.
Dotnet
https://verifeye-docs.realeyes.ai/native-sdk/liveness-detection/api-reference/dotnet/
# .NET API Reference
## Namespace
```csharp
using Realeyes.Liveness;
```
---
## Classes
### LivenessChecker
Main entry point for face detection and liveness checking operations.
**Implements:** `IDisposable`
#### Constructors
```csharp
public LivenessChecker(string modelPath, int maxConcurrency = 0)
```
Creates a new LivenessChecker instance.
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `modelPath` | `string` | Path to `.realZ` model file | Required |
| `maxConcurrency` | `int` | Max concurrent operations (0 = auto) | `0` |
**Throws:**
- `ArgumentNullException` - If `modelPath` is null or empty
- [`LivenessException`](#livenessexception) - If model loading fails
---
#### DetectFacesAsync
```csharp
public Task<FaceList> DetectFacesAsync(ImageHeader imageHeader)
```
Detects faces in an image asynchronously.
| Parameter | Type | Description |
|-----------|------|-------------|
| `imageHeader` | [`ImageHeader`](#imageheader) | Image to process |
**Returns:** `Task<`[`FaceList`](#facelist)`>` - Disposable collection of detected faces
**Throws:** [`LivenessException`](#livenessexception) - If detection fails
---
#### CheckImageAsync
```csharp
public Task<CheckResult> CheckImageAsync(Face face)
```
Performs liveness check on a detected face asynchronously.
| Parameter | Type | Description |
|-----------|------|-------------|
| `face` | [`Face`](#face) | Face to check |
**Returns:** `Task<`[`CheckResult`](#checkresult)`>` - Liveness check result
**Throws:**
- `ArgumentNullException` - If `face` is null
- [`LivenessException`](#livenessexception) - If liveness check fails
---
#### Properties
| Property | Type | Description |
|----------|------|-------------|
| `ConcurrentCalculations` | `int` | Number of operations currently in flight |
| `ModelName` | `string` | Name/version of loaded model |
#### Static Properties
| Property | Type | Description |
|----------|------|-------------|
| `SdkVersion` | [`Version`](#version) | SDK version as struct |
| `SdkVersionString` | `string` | SDK version as formatted string |
---
#### Dispose
```csharp
public void Dispose()
```
Releases all resources. Call when done with the checker.
---
### Face
Represents a detected face with landmarks, bounding box, and quality information.
**Implements:** `IDisposable`
#### Constructors
```csharp
public Face(
    ImageHeader imageHeader,
    Point2d[] landmarks,
    BoundingBox boundingBox,
    float confidence = 0.0f
)
```
Creates a Face from third-party face detector results.
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `imageHeader` | [`ImageHeader`](#imageheader) | Image containing the face | Required |
| `landmarks` | [`Point2d`](#point2d)`[]` | 5 facial landmarks | Required |
| `boundingBox` | [`BoundingBox`](#boundingbox) | Face bounding box | Required |
| `confidence` | `float` | Detection confidence | `0.0f` |
**Landmark Order:**
1. Left eye center
2. Right eye center
3. Nose tip
4. Left mouth corner
5. Right mouth corner
**Throws:**
- `ArgumentNullException` - If `landmarks` is null
- [`LivenessException`](#livenessexception) - If face creation fails
---
#### Properties
| Property | Type | Description |
|----------|------|-------------|
| `DetectionQuality` | [`DetectionQuality`](#detectionquality) | Quality assessment |
| `BoundingBox` | [`BoundingBox`](#boundingbox) | Face bounding box |
| `Confidence` | `float` | Detection confidence (0.0-1.0) |
---
#### GetLandmarks
```csharp
public Point2d[] GetLandmarks()
```
**Returns:** Array of 5 [`Point2d`](#point2d) facial landmark points
---
#### Clone
```csharp
public Face Clone()
```
Creates a copy of this face.
**Returns:** New [`Face`](#face) instance with same data
---
#### Dispose
```csharp
public void Dispose()
```
Releases native resources.
---
### FaceList
Disposable collection of [`Face`](#face) objects.
**Inherits:** `List<Face>`
**Implements:** `IDisposable`, `IAsyncDisposable`
#### Dispose
```csharp
public void Dispose()
```
Disposes all [`Face`](#face) objects in the collection.
#### DisposeAsync
```csharp
public ValueTask DisposeAsync()
```
Disposes all [`Face`](#face) objects asynchronously.
---
### LivenessException
Exception thrown when Liveness Library operations fail.
**Inherits:** `Exception`
#### Constructors
```csharp
public LivenessException()
public LivenessException(string message)
public LivenessException(string message, Exception innerException)
```
---
## Enums
### DetectionQuality
```csharp
public enum DetectionQuality
{
    Good = 0,        // No issues detected
    BadQuality = 1,  // Bad quality detected
    MaybeRolled = 2  // Face may be rolled; results may be incorrect
}
```
### ImageFormat
```csharp
public enum ImageFormat
{
    Grayscale = 0,  // 8-bit grayscale
    RGB = 1,        // 24-bit RGB
    RGBA = 2,       // 32-bit RGBA
    BGR = 3,        // 24-bit BGR
    BGRA = 4        // 32-bit BGRA
}
```
---
## Record Structs
### Point2d
```csharp
public readonly record struct Point2d(float X, float Y);
```
2D point for landmark coordinates.
### BoundingBox
```csharp
public readonly record struct BoundingBox(int X, int Y, int Width, int Height)
{
    public int Right => X + Width;
    public int Bottom => Y + Height;
    public int Area => Width * Height;
}
```
Bounding box for face region.
### ImageHeader
```csharp
public readonly record struct ImageHeader
{
    public ImageHeader(byte[] data, int width, int height, int stride, ImageFormat format);
    public byte[] Data { get; init; }
    public int Width { get; init; }
    public int Height { get; init; }
    public int Stride { get; init; }
    public ImageFormat Format { get; init; }
}
```
Image descriptor for passing image data.
**Throws:** `ArgumentNullException`, `ArgumentOutOfRangeException` on invalid parameters
### CheckResult
```csharp
public readonly record struct CheckResult(float Score, bool IsLive);
```
Result of liveness check.
### Version
```csharp
public readonly record struct Version(int Major, int Minor, int Patch)
{
    public override string ToString() => $"{Major}.{Minor}.{Patch}";
}
```
Semantic version number.
---
## Usage Examples
### Basic Usage
```csharp
using Realeyes.Liveness;
// Create checker
using var checker = new LivenessChecker("model.realZ");
Console.WriteLine($"SDK: {LivenessChecker.SdkVersionString}");
// Prepare image
var imageData = File.ReadAllBytes("photo.rgb");
var header = new ImageHeader(imageData, 640, 480, 640 * 3, ImageFormat.RGB);
// Detect and check
using var faces = await checker.DetectFacesAsync(header);
foreach (var face in faces)
{
    if (face.DetectionQuality == DetectionQuality.Good)
    {
        var result = await checker.CheckImageAsync(face);
        Console.WriteLine($"Live: {result.IsLive}, Score: {result.Score:F3}");
    }
}
```
### Third-Party Face Detector
```csharp
var landmarks = new Point2d[]
{
    new(100f, 120f),  // left eye
    new(150f, 120f),  // right eye
    new(125f, 150f),  // nose
    new(105f, 180f),  // left mouth
    new(145f, 180f)   // right mouth
};
var bbox = new BoundingBox(80, 100, 100, 120);
using var face = new Face(imageHeader, landmarks, bbox, 0.95f);
var result = await checker.CheckImageAsync(face);
```
### Error Handling
```csharp
try
{
    using var checker = new LivenessChecker("invalid.realZ");
}
catch (LivenessException ex)
{
    Console.WriteLine($"Failed: {ex.Message}");
}
```
### Parallel Processing
```csharp
using var checker = new LivenessChecker("model.realZ", maxConcurrency: 4);
var results = await Task.WhenAll(images.Select(async img =>
{
    using var faces = await checker.DetectFacesAsync(img);
    // Await the liveness checks before the FaceList is disposed
    var checks = faces
        .Where(f => f.DetectionQuality == DetectionQuality.Good)
        .Select(f => checker.CheckImageAsync(f));
    return await Task.WhenAll(checks);
}));
```
---
## Thread Safety
- [`LivenessChecker`](#livenesschecker) is thread-safe for all async methods
- Multiple [`DetectFacesAsync`](#detectfacesasync) and [`CheckImageAsync`](#checkimageasync) calls can execute concurrently
- [`Face`](#face) objects should not be shared across threads without synchronization
- Always dispose [`FaceList`](#facelist) and [`Face`](#face) objects when done
Python
https://verifeye-docs.realeyes.ai/native-sdk/liveness-detection/api-reference/python/
# Python API Reference
## Module
```python
from realeyes.liveness import (
    LivenessChecker,
    Face,
    CheckResult,
    BoundingBox,
    Point2d,
    DetectionQuality,
    get_sdk_version_string
)
```
---
## Classes
### LivenessChecker
Main entry point for face detection and liveness checking.
#### Constructor
```python
LivenessChecker(model_file: str, max_concurrency: int = 0) -> None
```
Creates a new LivenessChecker instance.
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `model_file` | `str` | Path to `.realZ` model file | Required |
| `max_concurrency` | `int` | Max concurrent operations (0 = auto) | `0` |
**Raises:** `RuntimeError` - If model file cannot be loaded
---
#### detect_faces
```python
def detect_faces(self, image: numpy.ndarray[numpy.uint8]) -> List[Face]
```
Detects faces in an image.
| Parameter | Type | Description |
|-----------|------|-------------|
| `image` | `numpy.ndarray[uint8]` | Image in RGB format `[H x W x C]` |
**Returns:** `List[`[`Face`](#face)`]` - List of detected faces
**Notes:**
- Image must be RGB format with shape `(height, width, 3)`
- Image must be C-contiguous (`numpy.ascontiguousarray()` if needed)
---
#### check_image
```python
def check_image(self, face: Face) -> CheckResult
```
Performs liveness check on a detected face.
| Parameter | Type | Description |
|-----------|------|-------------|
| `face` | [`Face`](#face) | Face to check |
**Returns:** [`CheckResult`](#checkresult) - Liveness check result
---
#### get_model_name
```python
def get_model_name(self) -> str
```
**Returns:** Name/version string of the loaded model
---
### Face
Represents a detected face with image data, landmarks, and metadata.
#### Constructor
```python
Face(
    image: numpy.ndarray[numpy.uint8],
    landmarks: List[Point2d],
    bbox: BoundingBox = BoundingBox(0, 0, 0, 0),
    confidence: float = 0.0
) -> None
```
Creates a Face from external detection results.
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `image` | `numpy.ndarray[uint8]` | Image in RGB format | Required |
| `landmarks` | `List[`[`Point2d`](#point2d)`]` | 5 facial landmarks | Required |
| `bbox` | [`BoundingBox`](#boundingbox) | Face bounding box | `BoundingBox(0,0,0,0)` |
| `confidence` | `float` | Detection confidence | `0.0` |
**Landmark Order:**
1. Left eye center
2. Right eye center
3. Nose tip
4. Left mouth corner
5. Right mouth corner
---
#### detection_quality
```python
def detection_quality(self) -> DetectionQuality
```
**Returns:** [`DetectionQuality`](#detectionquality) - Quality assessment of the detection
---
#### bounding_box
```python
def bounding_box(self) -> BoundingBox
```
**Returns:** [`BoundingBox`](#boundingbox) - Bounding box of the face
---
#### confidence
```python
def confidence(self) -> float
```
**Returns:** Detection confidence (0.0-1.0)
---
#### landmarks
```python
def landmarks(self) -> List[Point2d]
```
**Returns:** List of 5 [`Point2d`](#point2d) facial landmark points
---
### CheckResult
Result of liveness check (read-only).
#### Properties
| Property | Type | Description |
|----------|------|-------------|
| `score` | `float` | Raw liveness score (0.0-1.0) |
| `is_live` | `bool` | True if classified as live |
---
### BoundingBox
Bounding box for face region.
#### Constructor
```python
BoundingBox(x: int, y: int, width: int, height: int) -> None
```
#### Properties
| Property | Type | Description |
|----------|------|-------------|
| `x` | `int` | Left edge (pixels) |
| `y` | `int` | Top edge (pixels) |
| `width` | `int` | Width (pixels) |
| `height` | `int` | Height (pixels) |
---
### Point2d
2D point for landmark coordinates.
#### Constructor
```python
Point2d(x: float, y: float) -> None
```
#### Properties
| Property | Type | Description |
|----------|------|-------------|
| `x` | `float` | X coordinate |
| `y` | `float` | Y coordinate |
---
## Enums
### DetectionQuality
```python
class DetectionQuality(Enum):
    Good = 0         # High-quality detection
    BadQuality = 1   # Low-quality (blurry, partial, etc.)
    MaybeRolled = 2  # Face may be significantly rotated
```
---
## Functions
### get_sdk_version_string
```python
def get_sdk_version_string() -> str
```
**Returns:** SDK version as formatted string (e.g., "1.0.0")
---
## Module Attributes
### __version__
```python
__version__: str # SDK version string
```
---
## Usage Examples
### Basic Detection and Liveness Check
```python
from realeyes.liveness import LivenessChecker, DetectionQuality
import numpy as np
from PIL import Image
# Load image
image = np.array(Image.open("photo.jpg").convert("RGB"))
# Create checker and process
checker = LivenessChecker("model.realZ")
faces = checker.detect_faces(image)
for face in faces:
    if face.detection_quality() == DetectionQuality.Good:
        result = checker.check_image(face)
        print(f"Score: {result.score:.3f}, Live: {result.is_live}")
```
### Using OpenCV Images
```python
import cv2
import numpy as np
from realeyes.liveness import LivenessChecker
checker = LivenessChecker("model.realZ")
# OpenCV reads as BGR, convert to RGB
bgr_image = cv2.imread("photo.jpg")
rgb_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
# Ensure contiguous array
rgb_image = np.ascontiguousarray(rgb_image)
faces = checker.detect_faces(rgb_image)
```
### Third-Party Face Detector
```python
from realeyes.liveness import LivenessChecker, Face, Point2d, BoundingBox
import numpy as np
checker = LivenessChecker("model.realZ")
# Your detector results
landmarks = [
    Point2d(100.0, 120.0),  # left eye
    Point2d(150.0, 120.0),  # right eye
    Point2d(125.0, 150.0),  # nose
    Point2d(105.0, 180.0),  # left mouth
    Point2d(145.0, 180.0),  # right mouth
]
bbox = BoundingBox(80, 100, 100, 120)
face = Face(rgb_image, landmarks, bbox, confidence=0.95)
result = checker.check_image(face)
```
---
## Error Handling
```python
from realeyes.liveness import LivenessChecker
try:
    checker = LivenessChecker("invalid.realZ")
except RuntimeError as e:
    print(f"Failed to load model: {e}")
```
---
## Thread Safety
- [`LivenessChecker`](#livenesschecker) releases the GIL during blocking operations
- Safe to use from multiple threads
- Each thread should ideally use its own [`LivenessChecker`](#livenesschecker) instance for best performance
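One way to follow the per-thread recommendation is a `threading.local` holding one checker per worker. The sketch below uses a stub class in place of `realeyes.liveness.LivenessChecker` so the pattern is runnable anywhere; swap in the real class in production:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class LivenessChecker:
    """Stub standing in for realeyes.liveness.LivenessChecker."""
    def __init__(self, model_path):
        self.model_path = model_path
    def detect_faces(self, image):
        return []

_local = threading.local()

def get_checker():
    # Lazily create one checker per worker thread to avoid contention.
    if not hasattr(_local, "checker"):
        _local.checker = LivenessChecker("model.realZ")
    return _local.checker

def process(image):
    return get_checker().detect_faces(image)

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process, [object()] * 8))
print(len(results))  # 8
```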
C
https://verifeye-docs.realeyes.ai/native-sdk/liveness-detection/getting-started/c/
# Getting Started with C API
## Prerequisites
- C99 compatible compiler
- The Liveness Library shared library for your platform
## Installation
### Using Pre-built Package
1. Download the Liveness Library package for your platform.
2. Extract the package:
```
liveness-library/
├── include/
│ └── livenesschecker_c.h
└── lib/
├── libLivenessLibrary.so (Linux)
├── LivenessLibrary.dll (Windows)
└── libLivenessLibrary.dylib (macOS)
```
3. Include the header and link the library.
## Quick Start Example
```c
#include "livenesschecker_c.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

// Callback context structure
typedef struct {
    int completed;
    LLFace** faces;
    int face_count;
    char* error;
} DetectContext;

typedef struct {
    int completed;
    LLCheckResult result;
    char* error;
} CheckContext;

// Detection callback
void on_faces_detected(void* user_data, LLFaceArray* faces, const char* error_msg) {
    DetectContext* ctx = (DetectContext*)user_data;
    if (error_msg) {
        ctx->error = strdup(error_msg);
        ctx->completed = 1;
        return;
    }
    // Copy faces (they're only valid during the callback)
    ctx->face_count = faces->count;
    ctx->faces = malloc(sizeof(LLFace*) * faces->count);
    for (int i = 0; i < faces->count; i++) {
        ctx->faces[i] = ll_face_copy(faces->faces[i]);
    }
    ctx->completed = 1;
}

// Liveness check callback
void on_liveness_checked(void* user_data, LLCheckResult* result, const char* error_msg) {
    CheckContext* ctx = (CheckContext*)user_data;
    if (error_msg) {
        ctx->error = strdup(error_msg);
        ctx->completed = 1;
        return;
    }
    ctx->result = *result;
    ctx->completed = 1;
}

int main(int argc, char* argv[]) {
    char* error_msg = NULL;

    // 1. Create LivenessChecker
    LLLivenessChecker* checker = ll_liveness_checker_new("model.realZ", 0, &error_msg);
    if (!checker) {
        fprintf(stderr, "Failed to create checker: %s\n", error_msg ? error_msg : "Unknown error");
        free(error_msg);
        return 1;
    }

    // Print SDK version
    char* version = ll_liveness_checker_get_sdk_version_string();
    printf("SDK Version: %s\n", version);
    free(version);

    // 2. Prepare image data
    int width = 640;
    int height = 480;
    int stride = width * 3;
    uint8_t* image_data = malloc(stride * height);
    // ... fill image_data with actual pixels ...

    LLImageHeader header = {
        .data = image_data,
        .width = width,
        .height = height,
        .stride = stride,
        .format = LLImageFormatRGB
    };

    // 3. Detect faces
    DetectContext detect_ctx = {0};
    ll_liveness_checker_detect_faces(checker, &header, on_faces_detected, &detect_ctx);

    // Wait for completion (in real code, use proper synchronization)
    while (!detect_ctx.completed) {
        // Simple busy wait - use condition variables in production
    }

    if (detect_ctx.error) {
        fprintf(stderr, "Detection failed: %s\n", detect_ctx.error);
        free(detect_ctx.error);
        ll_liveness_checker_free(checker);
        free(image_data);
        return 1;
    }
    printf("Detected %d face(s)\n", detect_ctx.face_count);

    // 4. Check liveness for each face
    for (int i = 0; i < detect_ctx.face_count; i++) {
        LLFace* face = detect_ctx.faces[i];

        // Get face info
        LLDetectionQuality quality = ll_face_detection_quality(face);
        if (quality != LLDetectionQualityGood) {
            printf("Skipping low-quality detection\n");
            ll_face_free(face);
            continue;
        }

        LLBoundingBox bbox = ll_face_bounding_box(face);
        printf("Face at (%d, %d) %dx%d\n", bbox.x, bbox.y, bbox.width, bbox.height);

        // Check liveness
        CheckContext check_ctx = {0};
        ll_liveness_checker_check_image(checker, face, on_liveness_checked, &check_ctx);
        while (!check_ctx.completed) {
            // Wait for completion
        }

        if (check_ctx.error) {
            fprintf(stderr, "Liveness check failed: %s\n", check_ctx.error);
            free(check_ctx.error);
        } else {
            printf("Liveness score: %.3f\n", check_ctx.result.score);
            printf("Is live: %s\n", check_ctx.result.is_live ? "Yes" : "No");
        }
        ll_face_free(face);
    }

    // 5. Cleanup
    free(detect_ctx.faces);
    free(image_data);
    ll_liveness_checker_free(checker);
    return 0;
}
```
## Common Patterns
### Error Handling
The C API uses error output parameters and NULL returns. See [`ll_liveness_checker_new()`](https://verifeye-docs.realeyes.ai/../api-reference/c.md#ll_liveness_checker_new) for details:
```c
char* error_msg = NULL;
LLLivenessChecker* checker = ll_liveness_checker_new("model.realZ", 0, &error_msg);
if (!checker) {
    fprintf(stderr, "Error: %s\n", error_msg ? error_msg : "Unknown error");
    free(error_msg);  // Caller must free error string
    return 1;
}
```
### Memory Management
- Always call `*_free()` functions to release resources (e.g., [`ll_liveness_checker_free()`](https://verifeye-docs.realeyes.ai/../api-reference/c.md#ll_liveness_checker_free), [`ll_face_free()`](https://verifeye-docs.realeyes.ai/../api-reference/c.md#ll_face_free))
- Strings returned by the API must be freed with `free()`
- Face objects from callbacks must be copied with [`ll_face_copy()`](https://verifeye-docs.realeyes.ai/../api-reference/c.md#ll_face_copy) if needed beyond the callback
```c
// Free returned strings
char* model_name = ll_liveness_checker_get_model_name(checker);
printf("Model: %s\n", model_name);
free(model_name);

// Free resources once you are done with them
ll_face_free(face);
ll_liveness_checker_free(checker);

// Copy a face inside the callback if you need it afterwards
void callback(void* data, LLFaceArray* faces, const char* error) {
    if (faces && faces->count > 0) {
        // Face is only valid during callback - copy it
        LLFace* my_face = ll_face_copy(faces->faces[0]);
        // ... use my_face later ...
        ll_face_free(my_face);
    }
}
```
### Using Third-Party Face Detectors
Create an [`LLFace`](https://verifeye-docs.realeyes.ai/../api-reference/c.md#opaque-types) from your own detector results using [`ll_face_new()`](https://verifeye-docs.realeyes.ai/../api-reference/c.md#ll_face_new):
```c
LLPoint2d landmarks[5] = {
    {100.0f, 120.0f},  // left eye
    {150.0f, 120.0f},  // right eye
    {125.0f, 150.0f},  // nose
    {105.0f, 180.0f},  // left mouth
    {145.0f, 180.0f}   // right mouth
};
LLBoundingBox bbox = {80, 100, 100, 120};
char* error_msg = NULL;
LLFace* face = ll_face_new(&header, landmarks, 5, &bbox, 0.95f, &error_msg);
if (!face) {
    fprintf(stderr, "Failed to create face: %s\n", error_msg);
    free(error_msg);
    return;
}
// Use face for liveness check...
ll_face_free(face);
```
### Concurrency Monitoring
```c
int in_flight = ll_liveness_checker_get_concurrent_calculations(checker);
printf("Operations in flight: %d\n", in_flight);
```
## Next Steps
- [C API Reference](https://verifeye-docs.realeyes.ai/../api-reference/c.md) - Complete API documentation for [`LLLivenessChecker`](https://verifeye-docs.realeyes.ai/../api-reference/c.md#opaque-types), [`LLFace`](https://verifeye-docs.realeyes.ai/../api-reference/c.md#opaque-types), [`LLImageHeader`](https://verifeye-docs.realeyes.ai/../api-reference/c.md#llimageheader), and more
Cpp
https://verifeye-docs.realeyes.ai/native-sdk/liveness-detection/getting-started/cpp/
# Getting Started with C++ API
## Prerequisites
- C++17 compatible compiler (GCC 7+, Clang 5+, MSVC 2019+)
- CMake 3.15+
## Installation
### Using Pre-built Package
1. Download the Liveness Library package for your platform from the distribution channel.
2. Extract the package to your preferred location:
```
liveness-library/
├── include/
│ ├── livenesschecker.h
│ ├── livenesschecker_c.h
│ └── types.h
└── lib/
├── libLivenessLibrary.so (Linux)
├── LivenessLibrary.dll (Windows)
└── libLivenessLibrary.dylib (macOS)
```
3. Add the include directory and the library to your project.
## Quick Start Example
```cpp
#include "livenesschecker.h"
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    try {
        // 1. Create LivenessChecker with model file
        ll::LivenessChecker checker("model.realZ");
        std::cout << "SDK Version: " << ll::LivenessChecker::getSDKVersionString() << std::endl;
        std::cout << "Model: " << checker.getModelName() << std::endl;

        // 2. Prepare your image data
        // (In real code, load from file or camera)
        int width = 640;
        int height = 480;
        int stride = width * 3;  // RGB: 3 bytes per pixel
        std::vector<uint8_t> imageData(stride * height);
        // ... fill imageData with actual image pixels ...

        ll::ImageHeader header{
            imageData.data(),
            width,
            height,
            stride,
            ll::ImageFormat::RGB
        };

        // 3. Detect faces (async with future)
        auto detectFuture = checker.detectFaces(header);
        std::vector<ll::Face> faces = detectFuture.get();
        std::cout << "Detected " << faces.size() << " face(s)" << std::endl;

        // 4. Check liveness for each face
        for (const auto& face : faces) {
            // Check detection quality first
            if (face.detectionQuality() != ll::DetectionQuality::Good) {
                std::cout << "Skipping low-quality detection" << std::endl;
                continue;
            }

            // Get face info
            auto bbox = face.boundingBox();
            std::cout << "Face at (" << bbox.x << ", " << bbox.y << ") "
                      << bbox.width << "x" << bbox.height << std::endl;

            // Perform liveness check
            auto checkFuture = checker.checkImage(face);
            ll::CheckResult result = checkFuture.get();
            std::cout << "Liveness score: " << result.score << std::endl;
            std::cout << "Is live: " << (result.is_live ? "Yes" : "No") << std::endl;
        }
        return 0;
    }
    catch (const std::exception& e) {
        std::cerr << "Error: " << e.what() << std::endl;
        return 1;
    }
}
```
## Common Patterns
### Async Operations with Futures
The library provides non-blocking async APIs using [`std::future`](https://en.cppreference.com/w/cpp/thread/future):
```cpp
// Start multiple detections without waiting
auto future1 = checker.detectFaces(image1);
auto future2 = checker.detectFaces(image2);
// Do other work...
// Get results when needed
auto faces1 = future1.get();
auto faces2 = future2.get();
```
### Async Operations with Callbacks
For event-driven architectures, use the callback API with [`ResultOrError`](https://verifeye-docs.realeyes.ai/../api-reference/cpp.md#resultorerror):
```cpp
checker.detectFaces(header, [](ll::ResultOrError<std::vector<ll::Face>> result) {
    if (auto* faces = std::get_if<std::vector<ll::Face>>(&result)) {
        // Success - process faces
        for (const auto& face : *faces) {
            std::cout << "Face confidence: " << face.confidence() << std::endl;
        }
    } else {
        // Error
        auto& error = std::get<ll::Error>(result);
        std::cerr << "Detection failed: " << error.errorString << std::endl;
    }
});
```
### Using Third-Party Face Detectors
If you have your own face detector, create [`Face`](https://verifeye-docs.realeyes.ai/../api-reference/cpp.md#face) objects manually:
```cpp
// Your detection results
std::vector<ll::Point2d> landmarks = {
    {100.0f, 120.0f},  // left eye
    {150.0f, 120.0f},  // right eye
    {125.0f, 150.0f},  // nose tip
    {105.0f, 180.0f},  // left mouth corner
    {145.0f, 180.0f}   // right mouth corner
};
ll::BoundingBox bbox{80, 100, 100, 120};
float confidence = 0.95f;
// Create Face for liveness checking
ll::Face face(header, landmarks, bbox, confidence);
auto result = checker.checkImage(face).get();
```
### Concurrency Control
Monitor and limit concurrent operations:
```cpp
// Check current in-flight operations
int inFlight = checker.getConcurrentCalculations();
// Implement backpressure
const int maxConcurrent = 4;
while (checker.getConcurrentCalculations() >= maxConcurrent) {
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
}
checker.detectFaces(nextFrame);
```
### Error Handling
```cpp
try {
    ll::LivenessChecker checker("invalid_model.realZ");
} catch (const std::runtime_error& e) {
    std::cerr << "Failed to load model: " << e.what() << std::endl;
}
```
## Next Steps
- [C++ API Reference](https://verifeye-docs.realeyes.ai/../api-reference/cpp.md) - Complete API documentation for [`LivenessChecker`](https://verifeye-docs.realeyes.ai/../api-reference/cpp.md#livenesschecker), [`Face`](https://verifeye-docs.realeyes.ai/../api-reference/cpp.md#face), [`ImageHeader`](https://verifeye-docs.realeyes.ai/../api-reference/cpp.md#imageheader), and more
Dotnet
https://verifeye-docs.realeyes.ai/native-sdk/liveness-detection/getting-started/dotnet/
# Getting Started with .NET API
## Prerequisites
- .NET 8.0+
- Native Liveness Library for your platform
## Installation
### NuGet Package
```bash
dotnet add package Realeyes.Liveness
```
## Quick Start Example
```csharp
using System;
using System.Threading.Tasks;
using Realeyes.Liveness;
class Program
{
    static async Task Main(string[] args)
    {
        // Print SDK version
        Console.WriteLine($"SDK Version: {LivenessChecker.SdkVersionString}");

        // 1. Create LivenessChecker with model file
        using var checker = new LivenessChecker("model.realZ");
        Console.WriteLine($"Model: {checker.ModelName}");

        // 2. Prepare image data
        // Example: Load from file using System.Drawing or ImageSharp
        byte[] imageData = LoadImageAsRgbBytes("photo.jpg", out int width, out int height);
        int stride = width * 3; // RGB: 3 bytes per pixel
        var imageHeader = new ImageHeader(imageData, width, height, stride, ImageFormat.RGB);

        // 3. Detect faces asynchronously
        using var faces = await checker.DetectFacesAsync(imageHeader);
        Console.WriteLine($"Detected {faces.Count} face(s)");

        // 4. Check liveness for each face
        foreach (var face in faces)
        {
            // Check detection quality first
            if (face.DetectionQuality != DetectionQuality.Good)
            {
                Console.WriteLine("Skipping low-quality detection");
                continue;
            }

            // Get face info
            var bbox = face.BoundingBox;
            Console.WriteLine($"Face at ({bbox.X}, {bbox.Y}) {bbox.Width}x{bbox.Height}");
            Console.WriteLine($"Confidence: {face.Confidence:F3}");

            // Perform liveness check
            var result = await checker.CheckImageAsync(face);
            Console.WriteLine($"Liveness score: {result.Score:F3}");
            Console.WriteLine($"Is live: {result.IsLive}");
        }
    }

    static byte[] LoadImageAsRgbBytes(string path, out int width, out int height)
    {
        // Example using System.Drawing (Windows) or ImageSharp (cross-platform)
        // This is a placeholder - implement based on your image library
        throw new NotImplementedException("Implement image loading");
    }
}
```
## Common Patterns
### Async/Await
All detection and checking operations are async:
```csharp
using var checker = new LivenessChecker("model.realZ");
// Detect faces
using var faces = await checker.DetectFacesAsync(imageHeader);
// Check liveness
var result = await checker.CheckImageAsync(face);
```
### Resource Management
Use `using` statements or explicitly call [`Dispose()`](https://verifeye-docs.realeyes.ai/../api-reference/dotnet.md#dispose). Both [`LivenessChecker`](https://verifeye-docs.realeyes.ai/../api-reference/dotnet.md#livenesschecker) and [`FaceList`](https://verifeye-docs.realeyes.ai/../api-reference/dotnet.md#facelist) implement `IDisposable`:
```csharp
// Preferred: using statement
using var checker = new LivenessChecker("model.realZ");
using var faces = await checker.DetectFacesAsync(imageHeader);
// Alternative: explicit disposal
var checker = new LivenessChecker("model.realZ");
try
{
    var faces = await checker.DetectFacesAsync(imageHeader);
    try
    {
        // Process faces...
    }
    finally
    {
        faces.Dispose(); // Disposes all Face objects in the list
    }
}
finally
{
    checker.Dispose();
}
```
### Error Handling
Handle [`LivenessException`](https://verifeye-docs.realeyes.ai/../api-reference/dotnet.md#livenessexception) for library errors:
```csharp
try
{
    using var checker = new LivenessChecker("invalid_model.realZ");
}
catch (LivenessException ex)
{
    Console.WriteLine($"Failed to load model: {ex.Message}");
}
catch (ArgumentNullException ex)
{
    Console.WriteLine($"Invalid argument: {ex.Message}");
}
```
### Using Third-Party Face Detectors
Create a [`Face`](https://verifeye-docs.realeyes.ai/../api-reference/dotnet.md#face) from your own detector results using [`Point2d`](https://verifeye-docs.realeyes.ai/../api-reference/dotnet.md#point2d) and [`BoundingBox`](https://verifeye-docs.realeyes.ai/../api-reference/dotnet.md#boundingbox):
```csharp
using var checker = new LivenessChecker("model.realZ");
// Your face detector results
var landmarks = new Point2d[]
{
    new(100.0f, 120.0f), // left eye
    new(150.0f, 120.0f), // right eye
    new(125.0f, 150.0f), // nose
    new(105.0f, 180.0f), // left mouth
    new(145.0f, 180.0f), // right mouth
};
var bbox = new BoundingBox(80, 100, 100, 120);
float confidence = 0.95f;
// Create Face object for liveness checking
using var face = new Face(imageHeader, landmarks, bbox, confidence);
var result = await checker.CheckImageAsync(face);
Console.WriteLine($"Is live: {result.IsLive}");
```
### Processing Video Frames
```csharp
using var checker = new LivenessChecker("model.realZ");
// Process frames from video capture
while (await videoCapture.ReadFrameAsync() is byte[] frameData)
{
    var header = new ImageHeader(frameData, width, height, stride, ImageFormat.RGB);
    using var faces = await checker.DetectFacesAsync(header);

    foreach (var face in faces)
    {
        if (face.DetectionQuality == DetectionQuality.Good)
        {
            var result = await checker.CheckImageAsync(face);
            // Update UI or process result
            await UpdateLivenessDisplay(face.BoundingBox, result);
        }
    }
}
```
### Concurrency Monitoring
```csharp
// Check operations in flight
int inFlight = checker.ConcurrentCalculations;
Console.WriteLine($"Operations in flight: {inFlight}");
// Implement backpressure
const int maxConcurrent = 4;
while (checker.ConcurrentCalculations >= maxConcurrent)
{
    await Task.Delay(10);
}
```
### Parallel Processing
```csharp
using var checker = new LivenessChecker("model.realZ", maxConcurrency: 4);
var imagePaths = Directory.GetFiles("images", "*.jpg");
var tasks = imagePaths.Select(async path =>
{
    var imageData = await File.ReadAllBytesAsync(path);
    // Assume LoadImage converts to RGB and returns dimensions
    var (data, width, height) = LoadImage(imageData);
    var header = new ImageHeader(data, width, height, width * 3, ImageFormat.RGB);

    using var faces = await checker.DetectFacesAsync(header);
    var checks = faces.Where(f => f.DetectionQuality == DetectionQuality.Good)
                      .Select(async f => new
                      {
                          Path = path,
                          Result = await checker.CheckImageAsync(f)
                      });
    // Await the checks before the faces list is disposed
    return await Task.WhenAll(checks);
});
var results = await Task.WhenAll(tasks);
```
## Next Steps
- [.NET API Reference](https://verifeye-docs.realeyes.ai/../api-reference/dotnet.md) - Complete API documentation for [`LivenessChecker`](https://verifeye-docs.realeyes.ai/../api-reference/dotnet.md#livenesschecker), [`Face`](https://verifeye-docs.realeyes.ai/../api-reference/dotnet.md#face), [`ImageHeader`](https://verifeye-docs.realeyes.ai/../api-reference/dotnet.md#imageheader), and more
Python
https://verifeye-docs.realeyes.ai/native-sdk/liveness-detection/getting-started/python/
# Getting Started with Python API
## Prerequisites
- Python 3.10+
- NumPy
## Installation
```bash
pip install realeyes-liveness
```
Or install from a wheel file:
```bash
pip install realeyes_liveness-1.0.0-cp310-cp310-linux_x86_64.whl
```
## Quick Start Example
```python
from realeyes.liveness import (
LivenessChecker,
DetectionQuality,
get_sdk_version_string
)
import numpy as np
# Print SDK version
print(f"SDK Version: {get_sdk_version_string()}")
# 1. Create LivenessChecker with model file
checker = LivenessChecker("model.realZ")
print(f"Model: {checker.get_model_name()}")
# 2. Load your image as NumPy array
# Image must be RGB format with shape [height, width, channels]
# Example using PIL/Pillow:
from PIL import Image
image = np.array(Image.open("photo.jpg").convert("RGB"))
# Or using OpenCV (convert BGR to RGB):
# import cv2
# image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
# 3. Detect faces
faces = checker.detect_faces(image)
print(f"Detected {len(faces)} face(s)")
# 4. Check liveness for each face
for face in faces:
    # Check detection quality first
    if face.detection_quality() != DetectionQuality.Good:
        print("Skipping low-quality detection")
        continue

    # Get face info
    bbox = face.bounding_box()
    print(f"Face at ({bbox.x}, {bbox.y}) {bbox.width}x{bbox.height}")
    print(f"Confidence: {face.confidence():.3f}")

    # Perform liveness check
    result = checker.check_image(face)
    print(f"Liveness score: {result.score:.3f}")
    print(f"Is live: {result.is_live}")
```
## Common Patterns
### Processing Video Frames
```python
import cv2
from realeyes.liveness import LivenessChecker, DetectionQuality
checker = LivenessChecker("model.realZ")
cap = cv2.VideoCapture(0) # Webcam
while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Convert BGR to RGB
    rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # Detect and check faces
    faces = checker.detect_faces(rgb_frame)
    for face in faces:
        if face.detection_quality() == DetectionQuality.Good:
            result = checker.check_image(face)

            # Draw result on frame
            bbox = face.bounding_box()
            color = (0, 255, 0) if result.is_live else (0, 0, 255)
            cv2.rectangle(frame,
                          (bbox.x, bbox.y),
                          (bbox.x + bbox.width, bbox.y + bbox.height),
                          color, 2)
            label = f"Live: {result.score:.2f}" if result.is_live else f"Spoof: {result.score:.2f}"
            cv2.putText(frame, label, (bbox.x, bbox.y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)

    cv2.imshow("Liveness", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```
### Using Third-Party Face Detectors
Create a [`Face`](https://verifeye-docs.realeyes.ai/../api-reference/python.md#face) from your own detector results using [`Point2d`](https://verifeye-docs.realeyes.ai/../api-reference/python.md#point2d) and [`BoundingBox`](https://verifeye-docs.realeyes.ai/../api-reference/python.md#boundingbox):
```python
from realeyes.liveness import LivenessChecker, Face, Point2d, BoundingBox
checker = LivenessChecker("model.realZ")
# Your face detector results
landmarks = [
    Point2d(100.0, 120.0),  # left eye
    Point2d(150.0, 120.0),  # right eye
    Point2d(125.0, 150.0),  # nose
    Point2d(105.0, 180.0),  # left mouth
    Point2d(145.0, 180.0),  # right mouth
]
bbox = BoundingBox(x=80, y=100, width=100, height=120)
confidence = 0.95
# Create Face object for liveness checking
face = Face(image, landmarks, bbox, confidence)
result = checker.check_image(face)
print(f"Is live: {result.is_live}")
```
### Batch Processing
```python
from pathlib import Path
from PIL import Image
import numpy as np
from realeyes.liveness import LivenessChecker, DetectionQuality
checker = LivenessChecker("model.realZ")
image_dir = Path("images")
results = []
for image_path in image_dir.glob("*.jpg"):
    image = np.array(Image.open(image_path).convert("RGB"))
    faces = checker.detect_faces(image)
    for i, face in enumerate(faces):
        if face.detection_quality() == DetectionQuality.Good:
            result = checker.check_image(face)
            results.append({
                "file": image_path.name,
                "face_index": i,
                "score": result.score,
                "is_live": result.is_live
            })

# Print results
for r in results:
    status = "LIVE" if r["is_live"] else "SPOOF"
    print(f"{r['file']} face {r['face_index']}: {status} (score: {r['score']:.3f})")
```
### Error Handling
```python
from realeyes.liveness import LivenessChecker
try:
    checker = LivenessChecker("invalid_model.realZ")
except RuntimeError as e:
    print(f"Failed to load model: {e}")
```
## Next Steps
- [Python API Reference](https://verifeye-docs.realeyes.ai/../api-reference/python.md) - Complete API documentation for [`LivenessChecker`](https://verifeye-docs.realeyes.ai/../api-reference/python.md#livenesschecker), [`Face`](https://verifeye-docs.realeyes.ai/../api-reference/python.md#face), [`CheckResult`](https://verifeye-docs.realeyes.ai/../api-reference/python.md#checkresult), and more
Index
https://verifeye-docs.realeyes.ai/native-sdk/liveness-detection/
# Liveness Detection Library
Welcome to the Liveness Detection Library documentation!
## Contents
- [Overview](https://verifeye-docs.realeyes.ai/overview.md)
- [C++ API](https://verifeye-docs.realeyes.ai/api-reference/cpp.md)
- [C API](https://verifeye-docs.realeyes.ai/api-reference/c.md)
- [Python API](https://verifeye-docs.realeyes.ai/api-reference/python.md)
- [.NET API](https://verifeye-docs.realeyes.ai/api-reference/dotnet.md)
Overview
https://verifeye-docs.realeyes.ai/native-sdk/liveness-detection/overview/
# Liveness Library
## Introduction
Liveness Library (LL) is a native C++17 library for real-time face detection and liveness verification on mobile and desktop platforms.
The library detects faces in images and determines whether a detected face is "live" (a real person) or a spoof attempt (photo, video playback, mask, etc.).
## Features
- Real-time face detection
- Liveness detection to prevent spoofing attacks
- Cross-platform support (Windows, Linux, macOS, mobile)
- Thread-safe async APIs with futures and callbacks
- Language bindings for Python and .NET
- Configurable concurrency with automatic tuning
## Supported Platforms
| Platform | Architecture | Status |
|----------|------------------|-----------|
| Linux | x86_64, ARM64 | Supported |
| Windows | x86_64 | Supported |
| macOS | x86_64, ARM64 | Supported |
| iOS | ARM64 | Supported |
| Android | ARM64, ARMv7 | Supported |
## Available APIs
- **C++** - Core native API with async support via `std::future` and callbacks
- **C** - C-compatible API for FFI (Foreign Function Interface)
- **Python** - Python bindings via pybind11
- **.NET** - .NET bindings via P/Invoke
## API Comparison
| Operation | C++ | C | Python | .NET |
|-----------|-----|---|--------|------|
| Create checker | [`LivenessChecker(path)`](https://verifeye-docs.realeyes.ai/api-reference/cpp.md#livenesschecker) | [`ll_liveness_checker_new()`](https://verifeye-docs.realeyes.ai/api-reference/c.md#ll_liveness_checker_new) | [`LivenessChecker(path)`](https://verifeye-docs.realeyes.ai/api-reference/python.md#livenesschecker) | [`new LivenessChecker(path)`](https://verifeye-docs.realeyes.ai/api-reference/dotnet.md#livenesschecker) |
| Detect faces | [`detectFaces(image)`](https://verifeye-docs.realeyes.ai/api-reference/cpp.md#detectfaces-future) | [`ll_liveness_checker_detect_faces()`](https://verifeye-docs.realeyes.ai/api-reference/c.md#ll_liveness_checker_detect_faces) | [`detect_faces(image)`](https://verifeye-docs.realeyes.ai/api-reference/python.md#detect_faces) | [`DetectFacesAsync()`](https://verifeye-docs.realeyes.ai/api-reference/dotnet.md#detectfacesasync) |
| Check liveness | [`checkImage(face)`](https://verifeye-docs.realeyes.ai/api-reference/cpp.md#checkimage-future) | [`ll_liveness_checker_check_image()`](https://verifeye-docs.realeyes.ai/api-reference/c.md#ll_liveness_checker_check_image) | [`check_image(face)`](https://verifeye-docs.realeyes.ai/api-reference/python.md#check_image) | [`CheckImageAsync()`](https://verifeye-docs.realeyes.ai/api-reference/dotnet.md#checkimageasync) |
| Get SDK version | [`getSDKVersionString()`](https://verifeye-docs.realeyes.ai/api-reference/cpp.md#getsdkversionstring-static) | [`ll_liveness_checker_get_sdk_version_string()`](https://verifeye-docs.realeyes.ai/api-reference/c.md#ll_liveness_checker_get_sdk_version_string) | [`get_sdk_version_string()`](https://verifeye-docs.realeyes.ai/api-reference/python.md#get_sdk_version_string) | [`SdkVersionString`](https://verifeye-docs.realeyes.ai/api-reference/dotnet.md#static-properties) |
| Cleanup | Destructor | [`ll_liveness_checker_free()`](https://verifeye-docs.realeyes.ai/api-reference/c.md#ll_liveness_checker_free) | Automatic | [`Dispose()`](https://verifeye-docs.realeyes.ai/api-reference/dotnet.md#dispose) |
## Quick Links
### Getting Started
- [C++ Getting Started](https://verifeye-docs.realeyes.ai/getting-started/cpp.md)
- [C Getting Started](https://verifeye-docs.realeyes.ai/getting-started/c.md)
- [Python Getting Started](https://verifeye-docs.realeyes.ai/getting-started/python.md)
- [.NET Getting Started](https://verifeye-docs.realeyes.ai/getting-started/dotnet.md)
### API Reference
- [C++ API Reference](https://verifeye-docs.realeyes.ai/api-reference/cpp.md)
- [C API Reference](https://verifeye-docs.realeyes.ai/api-reference/c.md)
- [Python API Reference](https://verifeye-docs.realeyes.ai/api-reference/python.md)
- [.NET API Reference](https://verifeye-docs.realeyes.ai/api-reference/dotnet.md)
## Quick Example
### C++
```cpp
#include "livenesschecker.h"

ll::LivenessChecker checker("model.realZ");

// Create image header from your image data
ll::ImageHeader header{imageData, width, height, stride, ll::ImageFormat::BGR};

// Detect faces asynchronously
auto faces = checker.detectFaces(header).get();

// Check liveness for each face
for (const auto& face : faces) {
    auto result = checker.checkImage(face).get();
    if (result.is_live) {
        std::cout << "Face is live with score: " << result.score << std::endl;
    }
}
```
### Python
```python
from realeyes.liveness import LivenessChecker
import numpy as np
checker = LivenessChecker("model.realZ")
image = np.array(...) # Your image as NumPy array
faces = checker.detect_faces(image)
for face in faces:
    result = checker.check_image(face)
    print(f"Is live: {result.is_live}, Score: {result.score}")
```
### .NET
```csharp
using Realeyes.Liveness;

using var checker = new LivenessChecker("model.realZ");
var header = new ImageHeader(imageData, width, height, stride, ImageFormat.BGR);
using var faces = await checker.DetectFacesAsync(header);
foreach (var face in faces) {
    var result = await checker.CheckImageAsync(face);
    Console.WriteLine($"Is live: {result.IsLive}, Score: {result.Score}");
}
```
## License
Proprietary. Copyright Realeyes OU 2012-2026. All rights reserved.
Api C
https://verifeye-docs.realeyes.ai/native-sdk/native-emotions/api-c/
# C API Documentation
The C API provides a C interface to the Native Emotions Library, allowing integration with C projects and other languages that support C bindings.
## Module Functions
### nel_tracker_get_sdk_version
```c
NELVersion nel_tracker_get_sdk_version();
```
Returns the version of the SDK (not the model).
**Returns:** version of the SDK
### nel_tracker_get_sdk_version_string
```c
char* nel_tracker_get_sdk_version_string();
```
Returns the version string of the SDK (not the model).
**Returns:** version string of the SDK. The caller is responsible for freeing the returned string using free().
## Tracker Functions
### Constructor/Destructor
#### nel_tracker_new
```c
NELTracker* nel_tracker_new(const char* model_file, int max_concurrency, char** error_message);
```
Tracker constructor: loads the model file and sets up processing.
**Parameters:**
- `model_file` - path for the used model
- `max_concurrency` - maximum allowed concurrency, 0 means automatic (using all cores), default: 0
- `error_message` - pointer to a char* that will be set to an error message string on failure, or NULL on success. The caller is responsible for freeing the string using free(). If error_message is a NULL pointer, no error message will be returned.
**Returns:** pointer to the new Tracker instance, or NULL on failure
#### nel_tracker_free
```c
void nel_tracker_free(NELTracker* tracker);
```
Destructor
**Parameters:**
- `tracker` - pointer to the Tracker instance to free
### Tracking
#### nel_tracker_track
```c
void nel_tracker_track(NELTracker* tracker, const NELImageHeader* image_header,
int64_t timestamp_ms, NELTrackCallback callback, void* user_data);
```
Tracks the given frame asynchronously with a callback API.
**Note:** The given ImageHeader does not own the image data; it is safe to delete the data after the call returns, since a copy is made internally. See NELImageHeader for details.
**Note:** Calling this function is non-blocking, so calling it again with the next frame without waiting for the result is possible. Also see nel_tracker_get_concurrent_calculations().
**Note:** This API is not thread-safe. Do not call any tracker functions from multiple threads simultaneously.
**Parameters:**
- `tracker` - pointer to the Tracker instance
- `image_header` - image descriptor
- `timestamp_ms` - timestamp of the image in milliseconds
- `callback` - callback to call with the result
- `user_data` - user data to pass to the callback
#### NELTrackCallback
```c
typedef void (*NELTrackCallback)(void* user_data, NELResultType* result, const char* error_msg);
```
Callback for track function
**Parameters:**
- `user_data` - user data passed to the function
- `result` - tracked landmarks and emotions. The result and its contents are owned by the library and are valid only during the callback.
- `error_msg` - error message if any, otherwise NULL
### Query Functions
#### nel_tracker_get_emotion_ids
```c
NELEmotionIDArray* nel_tracker_get_emotion_ids(const NELTracker* tracker);
```
Returns the emotion IDs provided by the loaded model. The order is the same as in the NELEmotionResults.
**See also:** NELEmotionResults
**Parameters:**
- `tracker` - pointer to the Tracker instance
**Returns:** pointer to NELEmotionIDArray. The caller is responsible for freeing the returned array using free().
#### nel_tracker_get_emotion_names
```c
NELStringArray* nel_tracker_get_emotion_names(const NELTracker* tracker);
```
Returns the emotion names provided by the loaded model. The order is the same as in the NELEmotionResults.
**See also:** NELEmotionResults
**Parameters:**
- `tracker` - pointer to the Tracker instance
**Returns:** pointer to NELStringArray. The caller is responsible for freeing the returned array using free().
#### nel_tracker_get_model_name
```c
char* nel_tracker_get_model_name(const NELTracker* tracker);
```
Returns the name (version etc) of the loaded model.
**Parameters:**
- `tracker` - pointer to the Tracker instance
**Returns:** name of the model. The caller is responsible for freeing the returned string using free().
#### nel_tracker_get_concurrent_calculations
```c
uint16_t nel_tracker_get_concurrent_calculations(const NELTracker* tracker);
```
Returns the value of the atomic counter for the number of calculations currently running concurrently. You can use this to limit the number of concurrent calculations.
**Parameters:**
- `tracker` - pointer to the Tracker instance
**Returns:** The (approximate) number of calculations currently in-flight.
### Configuration
#### nel_tracker_is_emotion_enabled
```c
bool nel_tracker_is_emotion_enabled(const NELTracker* tracker, NELEmotionID emotion_id);
```
Returns whether the specified emotion is enabled.
**Parameters:**
- `tracker` - pointer to the Tracker instance
- `emotion_id` - emotion to query
**Returns:** true if enabled, false otherwise
#### nel_tracker_set_emotion_enabled
```c
void nel_tracker_set_emotion_enabled(NELTracker* tracker, NELEmotionID emotion_id, bool enable);
```
Sets the specified emotion to enabled or disabled
**Parameters:**
- `tracker` - pointer to the Tracker instance
- `emotion_id` - emotion to set
- `enable` - boolean to set to
#### nel_tracker_is_face_tracking_enabled
```c
bool nel_tracker_is_face_tracking_enabled(const NELTracker* tracker);
```
Returns whether the face tracker is enabled.
**Parameters:**
- `tracker` - pointer to the Tracker instance
**Returns:** true if enabled, false otherwise
#### nel_tracker_set_face_tracking_enabled
```c
void nel_tracker_set_face_tracking_enabled(NELTracker* tracker, bool enable);
```
Sets the face tracker to be enabled or disabled
**Parameters:**
- `tracker` - pointer to the Tracker instance
- `enable` - boolean to set to
#### nel_tracker_get_minimum_face_ratio
```c
float nel_tracker_get_minimum_face_ratio(const NELTracker* tracker);
```
Gets the current minimum face ratio
**See also:** nel_tracker_set_minimum_face_ratio
**Parameters:**
- `tracker` - pointer to the Tracker instance
**Returns:** current minimum face size as a ratio of the smaller image dimension
#### nel_tracker_set_minimum_face_ratio
```c
void nel_tracker_set_minimum_face_ratio(NELTracker* tracker, float minimum_face_ratio);
```
Sets the minimum face ratio
The minimum face ratio defines the smallest face size the algorithm looks for. The actual pixel size is the smaller image dimension multiplied by the configured ratio: with a value of 1/4.8 and VGA input (640x480), the minimum face size is 100x100.
**Warning:** Shape alignment and classifier performance can degrade at low resolutions; tracking faces smaller than 75x75 pixels is ill-advised.
**Parameters:**
- `tracker` - pointer to the Tracker instance
- `minimum_face_ratio` - new minimum face size as a ratio of the smaller image dimension
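The ratio-to-pixels relationship above can be sketched as a quick computation (plain Python, independent of the SDK; the helper name is illustrative):

```python
def minimum_face_size(width, height, minimum_face_ratio):
    """Pixel size of the smallest face the tracker will look for.

    The minimum face size is the smaller image dimension multiplied by
    the configured minimum face ratio; round() guards against floating
    point error in the multiplication.
    """
    return round(min(width, height) * minimum_face_ratio)

print(minimum_face_size(640, 480, 1 / 4.8))    # VGA -> 100
print(minimum_face_size(1920, 1080, 1 / 4.8))  # Full HD -> 225
```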
#### nel_tracker_reset_tracking
```c
void nel_tracker_reset_tracking(NELTracker* tracker);
```
Resets the internal tracking state. Should be called when a new video sequence starts.
**Parameters:**
- `tracker` - pointer to the Tracker instance
## Data Types
### Enumerations
#### NELImageFormat
```c
typedef enum NELImageFormat {
NELImageFormatGrayscale = 0,
NELImageFormatRGB = 1,
NELImageFormatRGBA = 2,
NELImageFormatBGR = 3,
NELImageFormatBGRA = 4,
} NELImageFormat;
```
Image format enum
**Values:**
- `NELImageFormatGrayscale = 0` - 8-bit grayscale
- `NELImageFormatRGB = 1` - 24-bit RGB
- `NELImageFormatRGBA = 2` - 32-bit RGBA or 32-bit RGB_
- `NELImageFormatBGR = 3` - 24-bit BGR
- `NELImageFormatBGRA = 4` - 32-bit BGRA or 32-bit BGR_
#### NELEmotionID
```c
typedef enum NELEmotionID {
NELEmotionIDConfusion = 0,
NELEmotionIDContempt = 1,
NELEmotionIDDisgust = 2,
NELEmotionIDFear = 3,
NELEmotionIDHappy = 4,
NELEmotionIDEmpathy = 5,
NELEmotionIDSurprise = 6,
NELEmotionIDAttention = 100,
NELEmotionIDPresence = 101,
NELEmotionIDEyesOnScreen = 102,
NELEmotionIDFaceDetection = 103,
} NELEmotionID;
```
IDs for the supported emotions/behaviours
**Values:**
- `NELEmotionIDConfusion = 0`
- `NELEmotionIDContempt = 1`
- `NELEmotionIDDisgust = 2`
- `NELEmotionIDFear = 3`
- `NELEmotionIDHappy = 4`
- `NELEmotionIDEmpathy = 5`
- `NELEmotionIDSurprise = 6`
- `NELEmotionIDAttention = 100`
- `NELEmotionIDPresence = 101`
- `NELEmotionIDEyesOnScreen = 102`
- `NELEmotionIDFaceDetection = 103`
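The ID values above leave a gap between 6 and 100, which suggests a split between facial emotions and behavioural signals. A small lookup sketch in Python (the table merely mirrors the enum; at runtime, query nel_tracker_get_emotion_ids() and nel_tracker_get_emotion_names() rather than hard-coding names):

```python
# Mirror of the NELEmotionID values documented above (illustrative only).
NEL_EMOTION_IDS = {
    0: "Confusion", 1: "Contempt", 2: "Disgust", 3: "Fear",
    4: "Happy", 5: "Empathy", 6: "Surprise",
    100: "Attention", 101: "Presence", 102: "EyesOnScreen",
    103: "FaceDetection",
}

def is_behaviour(emotion_id):
    # IDs at 100 and above appear to be behavioural signals
    # rather than facial emotions (an inference from the numbering).
    return emotion_id >= 100

print(NEL_EMOTION_IDS[4], is_behaviour(4))      # -> Happy False
print(NEL_EMOTION_IDS[102], is_behaviour(102))  # -> EyesOnScreen True
```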
### Structures
#### Version
##### NELVersion
```c
typedef struct NELVersion {
int major;
int minor;
int patch;
} NELVersion;
```
Semantic version number for the SDK
**Members:**
- `int major`
- `int minor`
- `int patch`
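A minimum-SDK-version check against the NELVersion fields reduces to lexicographic comparison of the (major, minor, patch) triple. A sketch (hypothetical helper, shown in Python):

```python
def version_at_least(version, required):
    """True if a (major, minor, patch) version meets the requirement.

    Fixed-width (major, minor, patch) tuples compare lexicographically,
    which matches semantic version ordering.
    """
    return tuple(version) >= tuple(required)

print(version_at_least((2, 3, 1), (2, 1, 0)))  # -> True
print(version_at_least((1, 9, 9), (2, 0, 0)))  # -> False
```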
#### Image
##### NELImageHeader
```c
typedef struct NELImageHeader {
const uint8_t* data;
int width;
int height;
int stride;
NELImageFormat format;
} NELImageHeader;
```
Descriptor class for image data (non-owning)
**Members:**
- `const uint8_t* data` - pointer to the byte array of the image
- `int width` - width of the image in pixels
- `int height` - height of the image in pixels
- `int stride` - length of one row of pixels in bytes (e.g: 3*width + padding)
- `NELImageFormat format` - image format
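Stride is at least width times bytes-per-pixel and may include row padding. A sketch of the arithmetic (Python; the 4-byte row alignment is an illustrative assumption, as the real padding depends on whatever produced the image buffer):

```python
# Bytes per pixel for each NELImageFormat value
BYTES_PER_PIXEL = {
    "Grayscale": 1,        # 8-bit
    "RGB": 3, "BGR": 3,    # 24-bit
    "RGBA": 4, "BGRA": 4,  # 32-bit
}

def row_stride(width, fmt, align=4):
    # Pad each row up to a multiple of `align` bytes (assumption for
    # illustration; pass align=1 for tightly packed rows).
    raw = width * BYTES_PER_PIXEL[fmt]
    return (raw + align - 1) // align * align

print(row_stride(640, "RGB"))  # 1920, already aligned
print(row_stride(639, "BGR"))  # 1917 raw -> padded to 1920
```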
#### Points
##### NELPoint2d
```c
typedef struct NELPoint2d {
double x;
double y;
} NELPoint2d;
```
Point2d struct
**Members:**
- `double x` - x coordinate
- `double y` - y coordinate
##### NELPoint3d
```c
typedef struct NELPoint3d {
double x;
double y;
double z;
} NELPoint3d;
```
Point3d struct
**Members:**
- `double x` - x coordinate
- `double y` - y coordinate
- `double z` - z coordinate
##### NELPoint2dArray
```c
typedef struct NELPoint2dArray {
int count;
NELPoint2d* points;
} NELPoint2dArray;
```
Array of Point2d
**Members:**
- `int count` - number of points
- `NELPoint2d* points` - pointer to the array of points
##### NELPoint3dArray
```c
typedef struct NELPoint3dArray {
int count;
NELPoint3d* points;
} NELPoint3dArray;
```
Array of Point3d
**Members:**
- `int count` - number of points
- `NELPoint3d* points` - pointer to the array of points
#### Results
##### NELResultType
```c
typedef struct NELResultType {
NELLandmarkData* landmarks;
NELEmotionResults* emotions;
} NELResultType;
```
Result type for the track function
**Members:**
- `NELLandmarkData* landmarks` - pointer to landmark data, or NULL if no face was detected
- `NELEmotionResults* emotions` - pointer to emotion results, or NULL if no face was detected
##### NELLandmarkData
```c
typedef struct NELLandmarkData {
double scale;
double roll;
double yaw;
double pitch;
NELPoint2d translate;
NELPoint2dArray* landmarks2d;
NELPoint3dArray* landmarks3d;
bool isGood;
} NELLandmarkData;
```
Landmark data for a tracked face
**Members:**
- `double scale` - scale of the face
- `double roll` - roll angle in radians
- `double yaw` - yaw angle in radians
- `double pitch` - pitch angle in radians
- `NELPoint2d translate` - translation of the face
- `NELPoint2dArray* landmarks2d` - pointer to 2D landmarks array
- `NELPoint3dArray* landmarks3d` - pointer to 3D landmarks array
- `bool isGood` - whether the tracking is good
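Since the pose angles are reported in radians, a conversion is needed for display or logging. A minimal sketch (hypothetical helper, shown in Python):

```python
import math

def pose_in_degrees(roll, yaw, pitch):
    # NELLandmarkData reports roll/yaw/pitch in radians;
    # convert each angle to degrees.
    return tuple(math.degrees(a) for a in (roll, yaw, pitch))

print(pose_in_degrees(0.0, math.pi / 2, -math.pi / 4))
```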
##### NELEmotionData
```c
typedef struct NELEmotionData {
double probability;
bool isActive;
bool isDetectionSuccessful;
NELEmotionID emotionID;
} NELEmotionData;
```
Emotion data for a single emotion
**Members:**
- `double probability` - probability of the emotion
- `bool isActive` - whether the emotion is active
- `bool isDetectionSuccessful` - whether the detection was successful
- `NELEmotionID emotionID` - ID of the emotion
##### NELEmotionResults
```c
typedef struct NELEmotionResults {
int count;
NELEmotionData* emotions;
} NELEmotionResults;
```
Array of emotion results
**Members:**
- `int count` - number of emotions
- `NELEmotionData* emotions` - pointer to the array of emotion data
#### Arrays
##### NELEmotionIDArray
```c
typedef struct NELEmotionIDArray {
int count;
NELEmotionID* ids;
} NELEmotionIDArray;
```
Array of emotion IDs
**Members:**
- `int count` - number of emotion IDs
- `NELEmotionID* ids` - pointer to the array of emotion IDs
##### NELStringArray
```c
typedef struct NELStringArray {
int count;
char** strings;
} NELStringArray;
```
Array of strings
**Members:**
- `int count` - number of strings
- `char** strings` - pointer to the array of string pointers
### Opaque Types
#### NELTracker
```c
typedef struct NELTracker NELTracker;
```
Opaque type for the Tracker instance
Api Cpp
https://verifeye-docs.realeyes.ai/native-sdk/native-emotions/api-cpp/
# C++ API Documentation
## Tracker class
### class nel::Tracker
The Emotion Tracker class
#### Constructor
```cpp
Tracker(const std::string& modelFile, int max_concurrency = 0)
```
Tracker constructor: loads model file, sets up the processing.
**Parameters:**
- `modelFile` - path for the used model
- `max_concurrency` - maximum allowed concurrency, 0 means automatic (using all cores), default: 0
#### Destructor
```cpp
~Tracker()
```
Destructor
#### track (std::future version)
```cpp
std::future<ResultType> track(const nel::ImageHeader& imageHeader, std::chrono::milliseconds timestamp)
```
Tracks the given frame asynchronously with the std::future API.
**Note:** The given ImageHeader does not own the image data; it is safe to delete the data after the call returns, since a copy is made internally. See nel::ImageHeader for details.
**Note:** Calling this function is non-blocking, so calling it again with the next frame without waiting for the result is possible. Also see get_concurrent_calculations().
**Note:** This is the std::future based API, for callback API see nel::Tracker::track(const nel::ImageHeader&, std::chrono::milliseconds, std::function).
**Parameters:**
- `imageHeader` - image descriptor
- `timestamp` - timestamp of the image
**Returns:** a std::future holding the tracked landmarks and emotions
#### track (callback version)
```cpp
void track(const nel::ImageHeader& imageHeader, std::chrono::milliseconds timestamp,
std::function callback)
```
Tracks the given frame asynchronously with a callback API.
**Note:** The given ImageHeader does not own the image data; it is safe to delete the data after the call returns, since a copy is made internally. See nel::ImageHeader for details.
**Note:** Calling this function is non-blocking, so calling it again with the next frame without waiting for the result is possible. Also see get_concurrent_calculations().
**Note:** This is the callback based API, for std::future API see nel::Tracker::track(const nel::ImageHeader&, std::chrono::milliseconds).
**Parameters:**
- `imageHeader` - image descriptor
- `timestamp` - timestamp of the image
- `callback` - callback to call with the result
The callback is invoked with the tracked landmarks and emotions.
#### resetTracking
```cpp
void resetTracking()
```
Resets the internal tracking state. Should be called when a new video sequence starts.
#### get_emotion_IDs
```cpp
const std::vector<nel::EmotionID>& get_emotion_IDs() const
```
Returns the emotion IDs provided by the loaded model. The order is the same as in the nel::EmotionResults.
**See also:** nel::EmotionResults
**Returns:** A vector of emotion IDs.
#### get_emotion_names
```cpp
const std::vector<std::string>& get_emotion_names() const
```
Returns the emotion names provided by the loaded model. The order is the same as in the nel::EmotionResults.
**See also:** nel::EmotionResults
**Returns:** A vector of emotion names.
#### get_concurrent_calculations
```cpp
uint16_t get_concurrent_calculations() const
```
Returns the value of the atomic counter for the number of calculations currently running concurrently. You can use this to limit the number of concurrent calculations.
**Returns:** The (approximate) number of calculations currently in-flight.
#### is_emotion_enabled
```cpp
bool is_emotion_enabled(nel::EmotionID emoID) const
```
Returns whether the specified emotion is enabled.
**Parameters:**
- `emoID` - emotion to query
**Returns:** true if enabled, false otherwise
#### set_emotion_enabled
```cpp
void set_emotion_enabled(nel::EmotionID emoID, bool enable)
```
Sets the specified emotion to enabled or disabled
**Parameters:**
- `emoID` - emotion to set
- `enable` - boolean to set to
#### is_face_tracking_enabled
```cpp
bool is_face_tracking_enabled() const
```
Returns whether the face tracker is enabled.
#### set_face_tracking_enabled
```cpp
void set_face_tracking_enabled(bool enable)
```
Sets the face tracker to be enabled or disabled
**Parameters:**
- `enable` - boolean to set to
#### get_minimum_face_ratio
```cpp
float get_minimum_face_ratio() const
```
Gets the current minimum face ratio
**See also:** set_minimum_face_ratio
**Returns:** current minimum face size as a ratio of the smaller image dimension
#### set_minimum_face_ratio
```cpp
void set_minimum_face_ratio(float minimumFaceRatio)
```
Sets the minimum face ratio
The minimum face ratio defines the smallest face size the algorithm looks for. The actual pixel size is the smaller image dimension multiplied by the configured ratio: with a value of 1/4.8 and VGA input (640x480), the minimum face size is 100x100.
**Warning:** Shape alignment and classifier performance can degrade at low resolutions; tracking faces smaller than 75x75 pixels is ill-advised.
**Parameters:**
- `minimumFaceRatio` - new minimum face size as a ratio of the smaller image dimension
#### get_sdk_version
```cpp
static nel::Version get_sdk_version()
```
Returns the version of the SDK (and not the model)
**Returns:** version of the SDK
#### get_sdk_version_string
```cpp
static std::string get_sdk_version_string()
```
Returns the version string of the SDK (and not the model)
**Returns:** version string of the SDK
## Image header class
### struct nel::ImageHeader
Descriptor class for image data (non-owning)
**Members:**
- `const uint8_t* data` - pointer to the byte array of the image
- `int width` - width of the image in pixels
- `int height` - height of the image in pixels
- `int stride` - length of one row of pixels in bytes (e.g: 3*width + padding)
- `nel::ImageFormat format` - image format
### enum class nel::ImageFormat
Image format enum
**Values:**
- `Grayscale = 0` - 8-bit grayscale
- `RGB = 1` - 24-bit RGB
- `RGBA = 2` - 32-bit RGBA or 32-bit RGB_
- `BGR = 3` - 24-bit BGR
- `BGRA = 4` - 32-bit BGRA or 32-bit BGR_
## Result classes
### enum class nel::EmotionID
IDs for the supported emotions/behaviours
**Values:**
- `CONFUSION = 0`
- `CONTEMPT = 1`
- `DISGUST = 2`
- `FEAR = 3`
- `HAPPY = 4`
- `EMPATHY = 5`
- `SURPRISE = 6`
- `ATTENTION = 100`
- `PRESENCE = 101`
- `EYES_ON_SCREEN = 102`
- `FACE_DETECTION = 103`
### struct nel::Tracker::ResultType
The ResultType struct
**Members:**
- `nel::LandmarkData landmarks` - Tracked landmarks
- `nel::EmotionResults emotions` - Detected emotions
### struct nel::LandmarkData
The LandmarkData struct
**Members:**
- `double scale` - scale of the face
- `double roll` - roll pose angle
- `double yaw` - yaw pose angle
- `double pitch` - pitch pose angle
- `nel::Point2d translate` - position of the head center in image coordinates
- `std::vector<nel::Point2d> landmarks2d` - position of the 49 landmarks, in image coordinates
- `std::vector<nel::Point3d> landmarks3d` - position of the 49 landmarks, in an un-scaled face-centered 3D space
- `bool isGood` - whether the tracking is good quality or not
### struct nel::Point2d
Point2d struct
**Members:**
- `double x` - x coordinate
- `double y` - y coordinate
### struct nel::Point3d
Point3d struct
**Members:**
- `double x` - x coordinate
- `double y` - y coordinate
- `double z` - z coordinate
### typedef nel::EmotionResults
```cpp
typedef std::vector<EmotionData> EmotionResults
```
Vector of emotion data, the order of emotions is the same as in nel::Tracker::get_emotion_names().
**See also:** nel::Tracker::get_emotion_names().
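Because the results vector is positional and shares its ordering with get_emotion_names(), pairing names with per-emotion results is a simple zip. A language-agnostic sketch in Python, with hypothetical stand-in data:

```python
# Hypothetical stand-ins for Tracker.get_emotion_names() and the
# probabilities from a returned EmotionResults vector; both sequences
# share the same ordering, so index i refers to the same emotion.
names = ["Happy", "Surprise", "Attention"]
probabilities = [0.82, 0.05, 0.91]

by_name = dict(zip(names, probabilities))
print(by_name["Happy"])  # -> 0.82
```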
### struct nel::EmotionData
The EmotionData struct
**Members:**
- `double probability` - probability of the emotion
- `bool isActive` - whether the probability is higher than an internal threshold
- `bool isDetectionSuccessful` - whether the tracking quality was good enough to reliably detect this emotion
- `EmotionID emotionID` - ID of the emotion
### struct nel::Version
Semantic version number for the SDK
**Members:**
- `int major`
- `int minor`
- `int patch`
#### get_model_name
```cpp
std::string get_model_name() const
```
Returns the name (version etc) of the loaded model.
**Returns:** name of the model
Api Dotnet
https://verifeye-docs.realeyes.ai/native-sdk/native-emotions/api-dotnet/
# .NET API Documentation
**Namespace:** Realeyes.EmotionTracking
## EmotionTracker class
### class EmotionTracker
The Emotion Tracker class. Implements IDisposable.
#### EmotionTracker(string modelPath, int maxConcurrency = 0)
```csharp
public EmotionTracker(string modelPath, int maxConcurrency = 0)
```
EmotionTracker constructor: loads model file, sets up the processing.
**Parameters:**
- `modelPath` (string) - Path to the tracking model file (.realZ)
- `maxConcurrency` (int) - Maximum concurrency (0 for automatic/all cores), default: 0
**Throws:**
- `ArgumentNullException` - When modelPath is null
- `EmotionTrackingException` - When model loading fails
#### TrackAsync(ImageHeader imageHeader, TimeSpan timestamp)
```csharp
public Task<TrackingResult> TrackAsync(ImageHeader imageHeader, TimeSpan timestamp)
```
Tracks emotions in an image asynchronously. Multiple images can be submitted without waiting and their results awaited later, allowing concurrent processing; frames must be submitted in chronological order.
**Parameters:**
- `imageHeader` (ImageHeader) - Image to process
- `timestamp` (TimeSpan) - Timestamp of the image
**Returns:** Task<TrackingResult> -- Tracking result with landmarks and emotions
**Throws:**
- `EmotionTrackingException` - When tracking fails
**Example:**
```csharp
var tracker = new EmotionTracker("model.realZ");
// Submit multiple images for processing
var task1 = tracker.TrackAsync(image1, TimeSpan.FromMilliseconds(0));
var task2 = tracker.TrackAsync(image2, TimeSpan.FromMilliseconds(33));
var task3 = tracker.TrackAsync(image3, TimeSpan.FromMilliseconds(66));
// Await results
var result1 = await task1;
var result2 = await task2;
var result3 = await task3;
```
#### ResetTracking()
```csharp
public void ResetTracking()
```
Resets the internal tracking state. Should be called when a new video sequence starts.
#### GetEmotionIDs()
```csharp
public EmotionID[] GetEmotionIDs()
```
Gets the emotion IDs provided by the loaded model.
**Returns:** EmotionID[] -- Array of emotion IDs
#### GetEmotionNames()
```csharp
public string[] GetEmotionNames()
```
Gets the names of emotions provided by the loaded model.
**Returns:** string[] -- Array of emotion names
#### IsEmotionEnabled(EmotionID emotionId)
```csharp
public bool IsEmotionEnabled(EmotionID emotionId)
```
Checks if a specific emotion is enabled for tracking.
**Parameters:**
- `emotionId` (EmotionID) - The emotion ID to check
**Returns:** bool -- True if enabled, false otherwise
#### SetEmotionEnabled(EmotionID emotionId, bool enabled)
```csharp
public void SetEmotionEnabled(EmotionID emotionId, bool enabled)
```
Enables or disables tracking for a specific emotion.
**Parameters:**
- `emotionId` (EmotionID) - The emotion ID to configure
- `enabled` (bool) - True to enable, false to disable
#### Dispose()
```csharp
public void Dispose()
```
Releases resources used by the tracker.
#### GetSdkVersion()
```csharp
public static Version GetSdkVersion()
```
Gets the SDK version.
**Returns:** Version -- SDK version information
#### GetSdkVersionString()
```csharp
public static string GetSdkVersionString()
```
Gets the SDK version as a string.
**Returns:** string -- SDK version string
#### ConcurrentCalculations
```csharp
public int ConcurrentCalculations { get; }
```
Gets the number of concurrent calculations.
#### IsFaceTrackingEnabled
Gets or sets whether face tracking is enabled.
## ImageFormat enum
### enum ImageFormat
**Values:**
- `Grayscale = 0`
- `RGB = 1`
- `RGBA = 2`
- `BGR = 3`
- `BGRA = 4`
## ImageHeader struct
### struct ImageHeader
The ImageHeader readonly record struct.
**Properties:**
- `Data` (byte[]) - Image data.
- `Width` (int) - Width of image.
- `Height` (int) - Height of image.
- `Stride` (int) - Stride of image.
- `Format` (ImageFormat) - Format of image.
## Result classes
### class EmotionTrackingException
Exception thrown when emotion tracking operations fail. Inherits from Exception.
### class TrackingResult
Contains the results after a successful tracking.
**Properties:**
- `Landmarks` (LandmarkData) - Landmark data from face tracking.
- `Emotions` (Emotions) - Detected emotion data.
### class Emotions
Record containing individual emotion detection results. Each property is nullable EmotionData.
**Properties:**
- `Confusion` (EmotionData?)
- `Contempt` (EmotionData?)
- `Disgust` (EmotionData?)
- `Fear` (EmotionData?)
- `Happy` (EmotionData?)
- `Empathy` (EmotionData?)
- `Surprise` (EmotionData?)
- `Attention` (EmotionData?)
- `Presence` (EmotionData?)
- `EyesOnScreen` (EmotionData?)
- `FaceDetection` (EmotionData?)
### class LandmarkData
Contains information about face landmarks.
**Properties:**
- `Scale` (double)
- `Roll` (double)
- `Yaw` (double)
- `Pitch` (double)
- `Translate` (Point2d)
- `Landmarks2d` (Point2d[])
- `Landmarks3d` (Point3d[])
- `IsGood` (bool)
### class EmotionData
Contains information about a specific emotion detection.
**Properties:**
- `Probability` (double)
- `IsActive` (bool)
- `IsDetectionSuccessful` (bool)
### struct Point2d
Readonly record struct representing a 2D point.
**Properties:**
- `X` (double)
- `Y` (double)
### struct Point3d
Readonly record struct representing a 3D point.
**Properties:**
- `X` (double)
- `Y` (double)
- `Z` (double)
### struct Version
Readonly record struct representing SDK version information.
**Properties:**
- `Major` (int)
- `Minor` (int)
- `Patch` (int)
## EmotionID enum
### enum EmotionID
Enum representing emotion types.
**Values:**
- `Confusion = 0`
- `Contempt = 1`
- `Disgust = 2`
- `Fear = 3`
- `Happy = 4`
- `Empathy = 5`
- `Surprise = 6`
- `Attention = 100`
- `Presence = 101`
- `EyesOnScreen = 102`
- `FaceDetection = 103`
Api Java
https://verifeye-docs.realeyes.ai/native-sdk/native-emotions/api-java/
# Java API Documentation (Experimental)
**Package:** com.realeyesit.nel
## Tracker interface
### interface Tracker
The Tracker interface
#### track(ImageHeader imageHeader, long timestamp)
```java
public TrackerResultFuture track(ImageHeader imageHeader, long timestamp)
```
Tracks the given frame asynchronously with the TrackerResultFuture API.
**Note:** Calling this function is non-blocking, so calling it again with the next frame without waiting for the result is possible. Also see get_concurrent_calculations().
**Parameters:**
- `imageHeader` - image descriptor
- `timestamp` - timestamp of the image (in ms)
#### resetTracking()
```java
public void resetTracking()
```
Resets the internal tracking state. Should be called when a new video sequence starts.
#### getEmotionIDs()
```java
public java.util.List<EmotionID> getEmotionIDs()
```
Returns the emotion IDs provided by the loaded model. The order is the same as returned by ResultType.getEmotions().
**See also:** ResultType
#### getEmotionNames()
```java
public java.util.List<String> getEmotionNames()
```
Returns the emotion names provided by the loaded model. The order is the same as returned by ResultType.getEmotions().
**See also:** ResultType
#### getMinimumFaceRatio()
```java
public float getMinimumFaceRatio()
```
Gets the current minimum face ratio
**See also:** setMinimumFaceRatio
#### setMinimumFaceRatio(float minimumFaceRatio)
```java
public void setMinimumFaceRatio(float minimumFaceRatio)
```
Sets the minimum face ratio
The minimum face ratio defines the smallest face size the algorithm looks for. The actual pixel size is the smaller image dimension multiplied by the configured ratio. The default value is 1/4.8, i.e., for VGA input (640x480) the minimum face size is 100x100.
**Warning:** Shape alignment and classifier performance can degrade at low resolutions; tracking faces smaller than 75x75 pixels is ill-advised.
**Parameters:**
- `minimumFaceRatio` - new minimum face size as a ratio of the smaller image dimension
#### isFaceTrackingEnabled()
```java
public boolean isFaceTrackingEnabled()
```
Returns whether the face tracker is enabled.
#### setFaceTrackingEnabled(boolean enable)
```java
public void setFaceTrackingEnabled(boolean enable)
```
Sets the face tracker to be enabled or disabled
**Parameters:**
- `enable` - boolean to set to
#### isEmotionEnabled(EmotionID emoID)
```java
public boolean isEmotionEnabled(EmotionID emoID)
```
Returns whether the specified emotion is enabled.
**Parameters:**
- `emoID` - emotion to query
#### setEmotionEnabled(EmotionID emoID, boolean enable)
```java
public void setEmotionEnabled(EmotionID emoID, boolean enable)
```
Sets the specified emotion to enabled or disabled
**Parameters:**
- `emoID` - emotion to set
- `enable` - boolean to set to
#### getModelName()
```java
public String getModelName()
```
Returns the name (version etc) of the loaded model.
**Returns:** name of the model
#### getSdkVersion()
```java
public Version getSdkVersion()
```
Returns the version of the SDK (and not the model)
**Returns:** version of the SDK
#### getSdkVersionString()
```java
public String getSdkVersionString()
```
Returns the version string of the SDK (and not the model)
**Returns:** version string of the SDK
## NelTracker class
#### NelTracker(String modelFile, int max_concurrency)
```java
public NelTracker(String modelFile, int max_concurrency)
```
Constructor
**Parameters:**
- `modelFile` (String) - path for the used model
- `max_concurrency` (int) - maximum allowed concurrency, 0 means automatic (using all cores), default: 0
## ImageHeader class
### class ImageHeader
Descriptor class for image data (non-owning)
#### ImageHeader()
```java
public ImageHeader()
```
Constructor
#### getData()
```java
public java.nio.ByteBuffer getData()
```
**Returns:** pointer to the byte array of the image
#### setData(java.nio.ByteBuffer value)
```java
public void setData(java.nio.ByteBuffer value)
```
**Parameters:**
- `value` - pointer to the byte array of the image
#### getFormat()
```java
public ImageFormat getFormat()
```
**Returns:** image format
#### setFormat(ImageFormat value)
```java
public void setFormat(ImageFormat value)
```
**Parameters:**
- `value` - image format
#### getHeight()
```java
public int getHeight()
```
**Returns:** height of the image in pixels
#### setHeight(int value)
```java
public void setHeight(int value)
```
**Parameters:**
- `value` - height of the image in pixels
#### getStride()
```java
public int getStride()
```
**Returns:** length of one row of pixels in bytes (e.g: 3*width + padding)
#### setStride(int value)
```java
public void setStride(int value)
```
**Parameters:**
- `value` - length of one row of pixels in bytes (e.g: 3*width + padding)
#### getWidth()
```java
public int getWidth()
```
**Returns:** width of the image in pixels
#### setWidth(int value)
```java
public void setWidth(int value)
```
**Parameters:**
- `value` - width of the image in pixels
### enum ImageFormat
**Values:**
- `BGR` - 24-bit BGR
- `BGRA` - 32-bit BGRA or 32-bit BGR_
- `Grayscale` - 8-bit grayscale
- `RGB` - 24-bit RGB
- `RGBA` - 32-bit RGBA or 32-bit RGB_
## Result classes
### enum EmotionID
IDs for the supported emotions/behaviours
**Values:**
- `ATTENTION`
- `CONFUSION`
- `CONTEMPT`
- `DISGUST`
- `EMPATHY`
- `FEAR`
- `HAPPY`
- `PRESENCE`
- `SURPRISE`
- `EYES_ON_SCREEN`
- `FACE_DETECTION`
### interface TrackerResultFuture
Simple wrapper over the C++ future class
#### get()
```java
ResultType get()
```
Blocks until the future is ready and returns the result.
### interface ResultType
The ResultType struct.
#### getEmotions()
```java
java.util.List<EmotionData> getEmotions()
```
Detected emotions.
#### getLandmarks()
```java
LandmarkData getLandmarks()
```
Tracked landmarks.
### interface LandmarkData
#### getScale()
```java
double getScale()
```
Scale of the face.
#### getRoll()
```java
double getRoll()
```
Roll pose angle.
#### getPitch()
```java
double getPitch()
```
Pitch pose angle.
#### getYaw()
```java
double getYaw()
```
Yaw pose angle.
#### getTranslate()
```java
Point2d getTranslate()
```
Position of the head center in image coordinates.
#### getLandmarks2d()
```java
java.util.List<Point2d> getLandmarks2d()
```
Positions of the 49 landmarks, in image coordinates.
#### getLandmarks3d()
```java
java.util.List<Point3d> getLandmarks3d()
```
Positions of the 49 landmarks, in an un-scaled face-centered 3D space.
#### getIsGood()
```java
boolean getIsGood()
```
Whether the tracking is good quality or not.
### interface Point2d
#### getX()
```java
double getX()
```
#### getY()
```java
double getY()
```
### interface Point3d
#### Point3d_GetInterfaceCPtr()
```java
long Point3d_GetInterfaceCPtr()
```
#### getX()
```java
double getX()
```
#### getY()
```java
double getY()
```
#### getZ()
```java
double getZ()
```
### interface EmotionData
#### getEmotionID()
```java
EmotionID getEmotionID()
```
ID of the emotion.
#### getIsActive()
```java
boolean getIsActive()
```
Whether the probability is higher than an internal threshold.
#### getIsDetectionSuccessful()
```java
boolean getIsDetectionSuccessful()
```
Whether the tracking quality was good enough to reliably detect this emotion.
#### getProbability()
```java
double getProbability()
```
Probability of the emotion.
### interface Version
Semantic version number for the SDK
#### getMajor()
```java
int getMajor()
```
#### getMinor()
```java
int getMinor()
```
#### getPatch()
```java
int getPatch()
```
Api Python
https://verifeye-docs.realeyes.ai/native-sdk/native-emotions/api-python/
# Python API Documentation
## Module Functions
### get_sdk_version_string()
Returns the SDK version string.
**Returns:** str
## Tracker class
### class realeyes.emotion_detection.Tracker(model_file, max_concurrency=0)
The Emotion Tracker class
#### \_\_init\_\_(self, model_file, max_concurrency=0)
Tracker constructor: loads model file, sets up the processing.
**Parameters:**
- `model_file` (str) - path for the used model
- `max_concurrency` (int) - maximum allowed concurrency, 0 means automatic (using all cores), default: 0
#### track(image, timestamp_in_ms)
Tracks the given frame.
**Parameters:**
- `image` (numpy.ndarray) - frame from the video
- `timestamp_in_ms` (int) - timestamp of the frame
**Returns:** TrackingResult
#### reset_tracking()
Resets the internal tracking state. Should be called when a new video sequence starts.
#### get_emotion_ids()
Returns the emotion IDs provided by the loaded model. The order is the same as in the TrackingResult.
**Returns:** list[EmotionID]
#### get_emotion_names()
Returns the emotion names provided by the loaded model. The order is the same as in the TrackingResult.
**Returns:** list[str]
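Since `get_emotion_ids()` and `get_emotion_names()` return their entries in the same order as `TrackingResult`, the two lists can be zipped into a lookup table. A minimal sketch with hypothetical return values (the real values come from the loaded model):

```python
# Hypothetical return values of tracker.get_emotion_ids() / get_emotion_names();
# with a real Tracker these come from the loaded model.
emotion_ids = [0, 4, 100]  # e.g. CONFUSION, HAPPY, ATTENTION
emotion_names = ["confusion", "happy", "attention"]

# Both lists share the ordering of TrackingResult.emotions,
# so zipping them yields an ID-to-name lookup table.
id_to_name = dict(zip(emotion_ids, emotion_names))
print(id_to_name[4])  # happy
```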
#### get_model_name()
Returns the name (version, etc.) of the loaded model.
**Returns:** str
#### minimum_face_ratio: float
Current minimum face size as a ratio of the smaller image dimension.
#### is_face_tracking_enabled()
Returns whether the face tracker is enabled.
**Returns:** bool
#### set_face_tracking_enabled(enable: bool)
Sets the face tracker to be enabled or disabled.
**Parameters:**
- `enable` (bool) - new value
#### is_emotion_enabled(emotion_id)
Returns whether the specified emotion is enabled.
**Parameters:**
- `emotion_id` (EmotionID) - emotion to query
**Returns:** bool
#### set_emotion_enabled(emotion_id: EmotionID, enable: bool)
Sets the specified emotion to enabled or disabled.
**Parameters:**
- `emotion_id` (EmotionID) - emotion to set
- `enable` (bool) - new value
#### \_\_repr\_\_()
Returns a string representation of the Tracker.
**Returns:** str
## Result classes
### class EmotionID
**Attributes:**
- `CONFUSION = 0`
- `CONTEMPT = 1`
- `DISGUST = 2`
- `FEAR = 3`
- `HAPPY = 4`
- `EMPATHY = 5`
- `SURPRISE = 6`
- `ATTENTION = 100`
- `PRESENCE = 101`
- `EYES_ON_SCREEN = 102`
### class TrackingResult
**Attributes:**
- `emotions` (list[EmotionData]) - Tracked emotions. See EmotionData
- `landmarks` (LandmarkData) - Tracked landmarks. See LandmarkData
#### to_json()
Converts the data to json (dicts and lists).
**Returns:** dict
#### \_\_repr\_\_()
Returns a string representation of the TrackingResult.
**Returns:** str
### class LandmarkData
**Attributes:**
- `scale` (float) - Scale of the face.
- `roll` (float) - Roll pose angle.
- `yaw` (float) - Yaw pose angle.
- `pitch` (float) - Pitch pose angle.
- `translate` (Point2d) - Position of the head center in image coordinates.
- `landmarks2d` (list[Point2d]) - Positions of the 49 landmarks, in image coordinates.
- `landmarks3d` (list[Point3d]) - Positions of the 49 landmarks, in an un-scaled face-centered 3D space.
- `is_good` (bool) - Whether the tracking is good quality or not.
#### to_json()
Converts the data to json (dicts and lists).
**Returns:** dict
#### \_\_repr\_\_()
Returns a string representation of the LandmarkData.
**Returns:** str
### class Point2d
**Attributes:**
- `x` (float)
- `y` (float)
#### to_json()
Converts the data to json (dicts and lists).
**Returns:** dict
#### \_\_repr\_\_()
Returns a string representation of the Point2d.
**Returns:** str
### class Point3d
**Attributes:**
- `x` (float)
- `y` (float)
- `z` (float)
#### to_json()
Converts the data to json (dicts and lists).
**Returns:** dict
#### \_\_repr\_\_()
Returns a string representation of the Point3d.
**Returns:** str
### class EmotionData
**Attributes:**
- `probability` (float) - Probability of the emotion.
- `is_active` (bool) - Whether the probability is higher than an internal threshold.
- `is_detection_successful` (bool) - Whether the tracking quality was good enough to reliably detect this emotion.
- `emotion_id` (EmotionID) - ID of the emotion. See EmotionID
#### to_json()
Converts the data to json (dicts and lists).
**Returns:** dict
#### \_\_repr\_\_()
Returns a string representation of the EmotionData.
**Returns:** str
Index
https://verifeye-docs.realeyes.ai/native-sdk/native-emotions/
# Native Emotions Library
Welcome to Native Emotions Library documentation!
## Contents:
- [Overview](https://verifeye-docs.realeyes.ai/native-sdk/native-emotions/overview/)
- [C++ API](https://verifeye-docs.realeyes.ai/native-sdk/native-emotions/api-cpp/)
- [C API](https://verifeye-docs.realeyes.ai/native-sdk/native-emotions/api-c/)
- [Python API](https://verifeye-docs.realeyes.ai/native-sdk/native-emotions/api-python/)
- [Java API](https://verifeye-docs.realeyes.ai/native-sdk/native-emotions/api-java/)
- [.NET API](https://verifeye-docs.realeyes.ai/native-sdk/native-emotions/api-dotnet/)
Overview
https://verifeye-docs.realeyes.ai/native-sdk/native-emotions/overview/
# Overview
The Native Emotions Library is a portable C++ library for real-time facial emotion tracking and analysis.
The SDK provides wrappers in the following languages:
* C++ (native)
* C
* Python
* C# / .NET
* Java (Android)
## Getting Started
### Hardware requirements
The SDK doesn't have any special hardware requirements:
- **CPU:** No special requirement, any modern 64-bit CPU (x86-64 with AVX, ARMv8) is supported
- **GPU:** No special requirement
- **RAM:** 2 GB of available RAM required
- **Camera:** No special requirement, minimum resolution: 640x480
### Software requirements
The SDK is regularly tested on the following Operating Systems:
- Windows 10+
- Ubuntu 24.04+
- macOS 15+
- iOS 18+
- Android 23+
### 3rd Party Licenses
While the SDK is released under a proprietary license, it uses the following open-source projects under their respective licenses:
- OpenCV - [3 clause BSD](https://opencv.org/license/)
- Tensorflow - [Apache License 2.0](https://github.com/tensorflow/tensorflow/blob/master/LICENSE)
- Protobuf - [3 clause BSD](https://github.com/protocolbuffers/protobuf/blob/master/LICENSE)
- zlib - [zlib license](https://www.zlib.net/zlib_license.html)
- minizip-ng - [zlib license](https://github.com/zlib-ng/minizip-ng/blob/master/LICENSE)
- stlab - [Boost Software License 1.0](https://github.com/stlab/libraries/blob/main/LICENSE)
- pybind11 - [3 clause BSD](https://github.com/pybind/pybind11/blob/master/LICENSE)
- fmtlib - [MIT License](https://github.com/fmtlib/fmt/blob/master/LICENSE.rst)
### Installation
#### C++
Extract the SDK contents, include the headers from the `include` folder and link `libNativeEmotionsLibrary` to your C++ project.
#### C
Extract the SDK contents, include `tracker_c.h` from the `include` folder and link `libNativeEmotionsLibrary` to your C project.
#### Python
The Python version of the SDK can be installed with pip:
```bash
$ pip install realeyes.emotion-detection
```
#### C# / .NET
The .NET version of the SDK can be installed via NuGet:
```bash
$ dotnet add package Realeyes.EmotionTracking
```
#### Java
For Android projects, add the library to your `build.gradle` dependencies.
## Usage
### C++
The main entry point of this library is the `nel::Tracker` class.
After a **tracker** object is constructed, the user can call the `nel::Tracker::track()` function to process
a frame from a video or other frame source.
The `nel::Tracker::track()` function has two versions, both non-blocking async calls: one returns
a `std::future` holding the result, the other accepts a callback that will be called on completion.
After one call, a subsequent call is possible without waiting for the result.
For the frame data, the user must construct a `nel::ImageHeader` object. The header is a non-owning
view, so the frame data must remain valid while the header is in use; however, it only needs to be
valid during the `nel::Tracker::track()` call, since the library copies the frame data internally.
The following example shows the basic usage of the library using OpenCV for loading images and feeding them to the tracker:
```cpp
#include "tracker.h"
#include <opencv2/opencv.hpp>
#include <chrono>
#include <cstdint>
#include <iostream>

int main()
{
    nel::Tracker tracker("model/model.realZ");
    cv::VideoCapture video("video.mp4");
    cv::Mat frame;
    while (video.read(frame)) {
        nel::ImageHeader header{
            frame.ptr(),
            frame.cols,
            frame.rows,
            static_cast<int>(frame.step1()),
            nel::ImageFormat::BGR
        };
        int64_t timestamp_in_ms = static_cast<int64_t>(video.get(cv::CAP_PROP_POS_MSEC));
        // Track asynchronously using std::future
        auto future = tracker.track(header, std::chrono::milliseconds(timestamp_in_ms));
        auto result = future.get();
        // Process results
        std::cout << "Face tracking: " << (result.landmarks.isGood ? "good" : "failed") << std::endl;
        for (const auto& emotion : result.emotions) {
            std::cout << " Probability: " << emotion.probability
                      << " Active: " << emotion.isActive << std::endl;
        }
    }
    return 0;
}
```
### C
The main entry point is the `NELTracker` opaque pointer type with associated functions.
After creating a tracker with `nel_tracker_new()`, you can track frames by calling `nel_tracker_track()`
with a callback function. The callback will be called asynchronously when tracking completes.
The following example shows basic usage:
```c
#include "tracker_c.h"
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

void track_callback(void* user_data, NELResultType* result, const char* error_msg) {
    if (error_msg != NULL) {
        printf("Error: %s\n", error_msg);
        return;
    }
    printf("Face tracking: %s\n", result->landmarks->isGood ? "good" : "failed");
    for (int i = 0; i < result->emotions->count; i++) {
        printf(" Emotion %d - Probability: %f, Active: %d\n",
               result->emotions->emotions[i].emotionID,
               result->emotions->emotions[i].probability,
               result->emotions->emotions[i].isActive);
    }
}

int main() {
    char* error_msg = NULL;
    NELTracker* tracker = nel_tracker_new("model/model.realZ", 0, &error_msg);
    if (tracker == NULL) {
        printf("Failed to load model: %s\n", error_msg);
        free(error_msg);
        return 1;
    }
    // Prepare image data (example with dummy data)
    uint8_t image_data[640 * 480 * 3]; // RGB image
    NELImageHeader header = {
        .data = image_data,
        .width = 640,
        .height = 480,
        .stride = 640 * 3,
        .format = NELImageFormatRGB
    };
    nel_tracker_track(tracker, &header, 0, track_callback, NULL);
    // Clean up (in real code, wait for the callback to complete before freeing)
    nel_tracker_free(tracker);
    return 0;
}
```
### Python
The main entry point of this library is the `realeyes.emotion_detection.Tracker` class.
After a **tracker** object is constructed, the user can call the `realeyes.emotion_detection.Tracker.track()`
function to process frames from a video or other frame source.
The following example shows the basic usage of the library using OpenCV for loading images:
```python
import realeyes.emotion_detection as nel
import cv2

# Initialize the tracker
tracker = nel.Tracker('model/model.realZ')
# Open video
video = cv2.VideoCapture('video.mp4')
while True:
    ret, frame = video.read()
    if not ret:
        break
    # Convert BGR to RGB (OpenCV uses BGR)
    frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # Track emotions (timestamp in milliseconds, taken from the video position)
    timestamp_ms = int(video.get(cv2.CAP_PROP_POS_MSEC))
    result = tracker.track(frame_rgb, timestamp_ms)
    # Process results
    print(f"Face tracking: {'good' if result.landmarks.is_good else 'failed'}")
    for emotion in result.emotions:
        print(f" Emotion ID {emotion.emotion_id}: "
              f"Probability={emotion.probability:.3f}, "
              f"Active={emotion.is_active}")
video.release()
```
### C# / .NET
The main entry point is the `EmotionTracker` class.
After a **tracker** object is constructed, you can call the `TrackAsync()` method to track faces
in a frame. The method returns a `Task` allowing for asynchronous, non-blocking operation.
Both the constructor and tracking method support concurrent execution - you can start multiple operations
in parallel without waiting for results.
The following example demonstrates processing a video frame:
```csharp
using Realeyes.EmotionTracking;
using System;
using System.Threading.Tasks;

class Program
{
    static async Task Main(string[] args)
    {
        // Create tracker with model file
        using var tracker = new EmotionTracker("model/model.realZ");
        // Prepare image data (example with dummy RGB data)
        byte[] imageData = new byte[640 * 480 * 3];
        var imageHeader = new ImageHeader
        {
            Data = imageData,
            Width = 640,
            Height = 480,
            Stride = 640 * 3,
            Format = ImageFormat.RGB
        };
        // Track emotions asynchronously
        var result = await tracker.TrackAsync(imageHeader, TimeSpan.Zero);
        // Process results
        Console.WriteLine($"Face tracking: {(result.LandmarkData?.IsGood ?? false ? "good" : "failed")}");
        if (result.Emotions.Happy is { } happy)
            Console.WriteLine($"Happy: {happy.Probability:P2}, Active: {happy.IsActive}");
        if (result.Emotions.Confusion is { } confusion)
            Console.WriteLine($"Confusion: {confusion.Probability:P2}, Active: {confusion.IsActive}");
    }
}
```
### Java
The main entry point is the `Tracker` interface.
After creating a **tracker** object, you can call the `track()` method to process frames.
The method returns a `TrackerResultFuture` for asynchronous result retrieval.
The following example shows basic usage:
```java
import com.realeyesit.nel.*;

public class Example {
    public static void main(String[] args) {
        // Create tracker with model file
        Tracker tracker = Emotion.createTracker("model/model.realZ", 0);
        // Prepare image data (example with dummy RGB data)
        byte[] imageData = new byte[640 * 480 * 3];
        ImageHeader header = new ImageHeader();
        header.setData(imageData);
        header.setWidth(640);
        header.setHeight(480);
        header.setStride(640 * 3);
        header.setFormat(ImageFormat.RGB);
        // Track emotions asynchronously
        TrackerResultFuture future = tracker.track(header, 0);
        ResultType result = future.get();
        // Process results
        System.out.println("Face tracking: " +
                (result.getLandmarks().getIsGood() ? "good" : "failed"));
        for (EmotionData emotion : result.getEmotions()) {
            System.out.println(" Emotion: " + emotion.getEmotionID() +
                    " Probability: " + emotion.getProbability() +
                    " Active: " + emotion.getIsActive());
        }
    }
}
```
## Results
The result of the tracking contains a `nel::LandmarkData` structure and a `nel::EmotionResults` vector.
- The `nel::LandmarkData` consists of the following members:
- **scale**, the size of the face (a larger value means the user is closer to the camera)
- **roll**, **pitch**, **yaw**, the 3 Euler angles of the face pose
- **translate**, the position of the head center on the frame
- the **landmarks2d** vector with either 0 or 49 points,
- the **landmarks3d** vector with either 0 or 49 points,
- and the **isGood** boolean value.
**isGood** indicates whether the tracking is deemed good enough.
**landmarks2d** and **landmarks3d** contain 0 points if the tracker failed to find a face in the image; otherwise they always contain 49 points in a fixed layout.
**landmarks3d** contains the 3D coordinates of the frontal face, with zero translation and unit scale.
- The `nel::EmotionResults` contains multiple `nel::EmotionData` elements with the following members:
- **probability**, probability of the emotion
- **isActive**, whether the probability is higher than an internal threshold
- **isDetectionSuccessful**, whether the tracking quality was good enough to reliably detect this emotion
The order of the `nel::EmotionData` elements is the same as the order of the emotions in `nel::Tracker::get_emotion_IDs()` and `nel::Tracker::get_emotion_names()`.
## Interpretation of the classifier output
The **probability** output of the Realeyes classifier (from the `nel::EmotionData` structure) has the following properties:
- It is a continuous value in the [0,1] range
- It changes depending on the type and number of facial features activated
- It typically indicates facial activity in regions of the face that correspond to a given facial expression
- Strong facial wrinkles or shadows can amplify the classifier's sensitivity to the corresponding facial regions
- It is purposefully sensitive, as the classifier is trained to capture slight expressions
- It **should not be interpreted as intensity** of a given facial expression
- It is not possible to prescribe which facial features correspond to what output levels, due to the nature of the ML models used
We recommend the following interpretation of the **probability** output:
- **values close to 0**
- no or very little activity on the face with respect to a given facial expression
- **values between 0 and binary threshold**
- some facial activity was perceived, though in the view of the classifier it does not amount to a basic facial expression
- **values just below binary threshold**
- high facial activity was perceived, which under some circumstances may be interpreted as true basic facial expression, while under others not (e.g. watching ads vs. playing games)
- **values above binary threshold**
- high facial activity was perceived, which in the view of the classifier amounts to a basic facial expression
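The recommended interpretation above can be expressed as a small helper. This is an illustrative sketch, not part of the SDK; the SDK reports the threshold comparison per emotion via `is_active`, so the `binary_threshold` and `margin` values here are hypothetical:

```python
def interpret_probability(probability: float, binary_threshold: float, margin: float = 0.1) -> str:
    """Map a classifier probability to the interpretation bands described above.

    Illustrative only; `binary_threshold` and `margin` are hypothetical
    parameters, not SDK values.
    """
    if probability < 0.05:
        return "no or very little facial activity"
    if probability >= binary_threshold:
        return "high activity, amounts to a basic facial expression"
    if probability >= binary_threshold - margin:
        return "high activity, context-dependent interpretation"
    return "some facial activity, below a basic expression"

print(interpret_probability(0.9, 0.5))
```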
Demographic Estimation
https://verifeye-docs.realeyes.ai/on-prem-docker/demographic-estimation/
# Realeyes Guide for Demographic Estimation API
## Overview
This guide presents the requirements for acquiring, running, and using the Docker image of the Demographic Estimation API service.
---
## Changelog
**Version 1.0:** Initial version.
---
## Accessing and pulling the latest docker image
The first two steps need to be done only once per user/computer.
### Prerequisites
You must have the AWS CLI installed. The commands below are supported in the latest AWS CLI version 2, and in version 1.17.10 or later of AWS CLI version 1.
See how to: [Install the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
### Configure AWS credential profile
You should have previously received your access key ID and Secret Access Key from Realeyes. Please use them here:
```
aws configure --profile demographicestimation
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
```
### Get authorization token and pass it to docker login
```
aws ecr get-login-password --profile demographicestimation --region eu-west-1 | docker login --username AWS --password-stdin 249265253269.dkr.ecr.eu-west-1.amazonaws.com
```
The get-login-password command retrieves an authentication token using the GetAuthorizationToken API; you will use this token to authenticate to an Amazon ECR registry. You can pass the token to the login command of your preferred container client, such as the Docker CLI. Once authenticated, you can pull images from that registry for as long as the token is valid, provided your IAM principal has the required access. **NOTE:** The authorization token is valid for 12 hours.
### Pull the latest docker image
```
docker pull 249265253269.dkr.ecr.eu-west-1.amazonaws.com/verifeye/demographic-estimation-api:latest
```
---
## Running the image
The service requires an activation key.
Set:
```
ACTIVATION_KEY=
```
You need to request the activation key from Realeyes.
### Run with docker
Run the container with the following command:
```
docker run --rm -ti -p 8080:8080/tcp \
-e ACTIVATION_KEY= \
--read-only \
--pids-limit=128 \
--security-opt=no-new-privileges \
--memory=16G \
249265253269.dkr.ecr.eu-west-1.amazonaws.com/verifeye/demographic-estimation-api:latest
```
### Run with docker compose
Alternatively, use the following `docker-compose.yaml`:
```yaml
services:
  demographic-estimation-api:
    image: 249265253269.dkr.ecr.eu-west-1.amazonaws.com/verifeye/demographic-estimation-api:latest
    environment:
      - ACTIVATION_KEY=
    ports:
      - 8080:8080
    read_only: true
    security_opt:
      - "no-new-privileges"
    deploy:
      resources:
        limits:
          pids: 128
          memory: 16G
```
---
## Interactive API Documentation (Swagger UI)
Once the service is running, you can access the interactive API documentation at:
```
http://localhost:8080/swagger/index.html
```
This Swagger UI provides **living documentation** of the API where you can:
- Browse all available endpoints with their detailed descriptions
- View request/response schemas and example payloads
- **Try out the API directly from your browser** - send real requests and see the responses in real-time
- Explore error codes and response formats
This is the recommended way to get familiar with the API and test your integration during development.
---
## API overview
The Demographic Estimation API service provides REST API endpoints for age estimation and gender detection.
Below is an outline of the API; more detailed documentation is available in the Swagger UI (see above).
---
### Get Age
Estimates the age of faces detected in an image.
**Endpoint:** `POST /v1/demographic-estimation/get-age`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
  "image": {
    "bytes": "base64-encoded-image-string",
    "url": null
  },
  "maxFaceCount": 1
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `image` | object | Yes | Image data provided as URL or Base64 encoded bytes |
| `image.url` | string (nullable) | No | URL of a JPEG or PNG image |
| `image.bytes` | string (nullable) | No | Base64 encoded binary JPEG or PNG image |
| `maxFaceCount` | integer | No | Maximum number of faces to be processed (default: 1) |
**Response Example:**
```json
{
  "faces": [
    {
      "face": {
        "confidence": 0.9987,
        "boundingBox": {
          "x": 120,
          "y": 80,
          "width": 200,
          "height": 250
        }
      },
      "age": {
        "prediction": 28.5,
        "uncertainty": 0.45
      }
    }
  ],
  "unprocessedFaceCount": 0
}
```
**Response Fields:**
| Field Path | Type | Description |
|------------|------|-------------|
| `faces` | array (nullable) | The faces that were processed and their age estimation results |
| `faces[].face` | object | Face detection information |
| `faces[].face.confidence` | number | Face detection score with value range [0.0, 1.0] (higher is better) |
| `faces[].face.boundingBox` | object | Bounding box of the detected face |
| `faces[].face.boundingBox.x` | integer | Horizontal position of the detected face bounding box |
| `faces[].face.boundingBox.y` | integer | Vertical position of the detected face bounding box |
| `faces[].face.boundingBox.width` | integer | Width of the detected face bounding box |
| `faces[].face.boundingBox.height` | integer | Height of the detected face bounding box |
| `faces[].age` | object | Age estimation information |
| `faces[].age.prediction` | number (nullable) | Estimated age |
| `faces[].age.uncertainty` | number (nullable) | Uncertainty score of the estimation, with value range [0.0, infinity); we recommend rejecting estimations with uncertainty higher than 1.0 |
| `unprocessedFaceCount` | integer | The number of faces that were not processed due to the maximum face count limit |
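Since the table above recommends rejecting estimations with uncertainty higher than 1.0, a response can be filtered accordingly. A sketch over a plain parsed-JSON dict (not an SDK helper):

```python
def accepted_ages(response: dict, max_uncertainty: float = 1.0) -> list:
    """Return age predictions whose uncertainty is within the recommended limit.

    `response` is the parsed JSON body of a get-age call; this filter is an
    illustrative sketch, not part of the API.
    """
    ages = []
    for entry in response.get("faces") or []:
        age = entry.get("age") or {}
        prediction, uncertainty = age.get("prediction"), age.get("uncertainty")
        if prediction is not None and uncertainty is not None and uncertainty <= max_uncertainty:
            ages.append(prediction)
    return ages

sample = {
    "faces": [
        {"age": {"prediction": 28.5, "uncertainty": 0.45}},
        {"age": {"prediction": 40.0, "uncertainty": 2.3}},  # rejected: too uncertain
    ],
    "unprocessedFaceCount": 0,
}
print(accepted_ages(sample))  # [28.5]
```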
**Example Request:**
```bash
curl -X POST "https://demographic-estimation-api-eu.realeyes.ai/v1/demographic-estimation/get-age" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"image": {
"bytes": "/9j/4AAQSkZJRgABAQEAYABgAAD..."
},
"maxFaceCount": 5
}'
```
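The same request can be issued from Python using only the standard library. A sketch; `API-KEY-FROM-DEV-CONSOLE` and the image path are placeholders:

```python
import base64
import json
import urllib.request

API_KEY = "API-KEY-FROM-DEV-CONSOLE"  # placeholder: use your key from the Developer Console
ENDPOINT = "https://demographic-estimation-api-eu.realeyes.ai/v1/demographic-estimation/get-age"

def build_request(image_path: str, max_face_count: int = 5) -> urllib.request.Request:
    """Build the get-age request with a Base64-encoded image payload."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    payload = {"image": {"bytes": image_b64}, "maxFaceCount": max_face_count}
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"ApiKey {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# req = build_request("face.jpg")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```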
**Response Codes:**
- `200` - Success
- `400` - Bad Request - Invalid image format, missing required fields, or invalid parameters
- `401` - Unauthorized - Missing or invalid authentication
---
### Get Gender
Detects the gender of faces in an image.
**Endpoint:** `POST /v1/demographic-estimation/get-gender`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
  "image": {
    "bytes": "base64-encoded-image-string",
    "url": null
  },
  "maxFaceCount": 1
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `image` | object | Yes | Image data provided as URL or Base64 encoded bytes |
| `image.url` | string (nullable) | No | URL of a JPEG or PNG image |
| `image.bytes` | string (nullable) | No | Base64 encoded binary JPEG or PNG image |
| `maxFaceCount` | integer | No | Maximum number of faces to be processed (default: 1) |
**Response Example:**
```json
{
  "faces": [
    {
      "face": {
        "confidence": 0.9987,
        "boundingBox": {
          "x": 120,
          "y": 80,
          "width": 200,
          "height": 250
        }
      },
      "gender": "Male"
    }
  ],
  "unprocessedFaceCount": 0
}
```
**Response Fields:**
| Field Path | Type | Description |
|------------|------|-------------|
| `faces` | array (nullable) | The faces that were processed and their gender detection results |
| `faces[].face` | object | Face detection information |
| `faces[].face.confidence` | number | Face detection score with value range [0.0, 1.0] (higher is better) |
| `faces[].face.boundingBox` | object | Bounding box of the detected face |
| `faces[].face.boundingBox.x` | integer | Horizontal position of the detected face bounding box |
| `faces[].face.boundingBox.y` | integer | Vertical position of the detected face bounding box |
| `faces[].face.boundingBox.width` | integer | Width of the detected face bounding box |
| `faces[].face.boundingBox.height` | integer | Height of the detected face bounding box |
| `faces[].gender` | string | Detected gender (Male or Female) |
| `unprocessedFaceCount` | integer | The number of faces that were not processed due to the maximum face count limit |
**Example Request:**
```bash
curl -X POST "https://demographic-estimation-api-eu.realeyes.ai/v1/demographic-estimation/get-gender" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"image": {
"bytes": "/9j/4AAQSkZJRgABAQEAYABgAAD..."
},
"maxFaceCount": 5
}'
```
**Response Codes:**
- `200` - Success
- `400` - Bad Request - Invalid image format, missing required fields, or invalid parameters
- `401` - Unauthorized - Missing or invalid authentication
Emotion Attention
https://verifeye-docs.realeyes.ai/on-prem-docker/emotion-attention/
# Realeyes Guide for Emotion Attention API
## Overview
This guide presents the requirements for acquiring, running, and using the Docker image of the Emotion Attention API service.
---
## Changelog
**Version 1.0:** Initial version.
---
## Accessing and pulling the latest docker image
The first two steps need to be done only once per user/computer.
### Prerequisites
You must have the AWS CLI installed. The commands below are supported in the latest AWS CLI version 2, and in version 1.17.10 or later of AWS CLI version 1.
See how to: [Install the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
### Configure AWS credential profile
You should have previously received your access key ID and Secret Access Key from Realeyes. Please use them here:
```
aws configure --profile emotionattention
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
```
### Get authorization token and pass it to docker login
```
aws ecr get-login-password --profile emotionattention --region eu-west-1 | docker login --username AWS --password-stdin 249265253269.dkr.ecr.eu-west-1.amazonaws.com
```
The get-login-password command retrieves an authentication token using the GetAuthorizationToken API; you will use this token to authenticate to an Amazon ECR registry. You can pass the token to the login command of your preferred container client, such as the Docker CLI. Once authenticated, you can pull images from that registry for as long as the token is valid, provided your IAM principal has the required access. **NOTE:** The authorization token is valid for 12 hours.
### Pull the latest docker image
```
docker pull 249265253269.dkr.ecr.eu-west-1.amazonaws.com/verifeye/emotion-attention-api:latest
```
---
## Running the image
The service requires an activation key.
Set:
```
ACTIVATION_KEY=
```
You need to request the activation key from Realeyes.
### Run with docker
Run the container with the following command:
```
docker run --rm -ti -p 8080:8080/tcp \
-e ACTIVATION_KEY= \
--read-only \
--pids-limit=128 \
--security-opt=no-new-privileges \
--memory=16G \
249265253269.dkr.ecr.eu-west-1.amazonaws.com/verifeye/emotion-attention-api:latest
```
### Run with docker compose
Alternatively, use the following `docker-compose.yaml`:
```yaml
services:
  emotion-attention-api:
    image: 249265253269.dkr.ecr.eu-west-1.amazonaws.com/verifeye/emotion-attention-api:latest
    environment:
      - ACTIVATION_KEY=
    ports:
      - 8080:8080
    read_only: true
    security_opt:
      - "no-new-privileges"
    deploy:
      resources:
        limits:
          pids: 128
          memory: 16G
```
---
## Interactive API Documentation (Swagger UI)
Once the service is running, you can access the interactive API documentation at:
```
http://localhost:8080/swagger/index.html
```
This Swagger UI provides **living documentation** of the API where you can:
- Browse all available endpoints with their detailed descriptions
- View request/response schemas and example payloads
- **Try out the API directly from your browser** – send real requests and see the responses in real-time
- Explore error codes and response formats
This is the recommended way to get familiar with the API and test your integration during development.
---
## API overview
The Emotion Attention API service provides REST API endpoints for emotion and attention detection.
Below is an outline of the API; more detailed documentation is available in the Swagger UI (see above).
### Detect Emotions and Attention
Returns whether a face was detected and, for the dominant face in the image, the detected emotions, attention state, and facial landmarks.
**Endpoint:** `POST /v1/emotion-attention/detect`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
  "image": {
    "bytes": "base64-encoded-image-string",
    "url": null
  }
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `image` | object | Yes | Image data provided as URL or Base64 encoded bytes |
| `image.url` | string (nullable) | No | URL of a JPEG or PNG image |
| `image.bytes` | string (nullable) | No | Base64 encoded binary JPEG or PNG image |
**Response Example:**
```json
{
  "emotionsAttention": {
    "hasFace": true,
    "presence": true,
    "eyesOnScreen": true,
    "attention": true,
    "confusion": false,
    "contempt": false,
    "disgust": false,
    "happy": true,
    "empathy": false,
    "surprise": false
  },
  "landmarks": {
    "scale": 1.23,
    "roll": -2.5,
    "yaw": 5.3,
    "pitch": -1.2,
    "translate": {
      "x": 320.5,
      "y": 240.8
    },
    "landmarks2D": [
      { "x": 310.2, "y": 235.6 },
      { "x": 330.8, "y": 236.1 }
    ],
    "landmarks3D": [
      { "x": 0.12, "y": -0.05, "z": 0.98 },
      { "x": 0.15, "y": -0.04, "z": 0.97 }
    ],
    "isGood": true
  }
}
```
**Response Fields:**
| Field Path | Type | Description |
|------------|------|-------------|
| `emotionsAttention` | object | Results of analyzing image for facial presence, attention, and emotional states |
| `emotionsAttention.hasFace` | boolean (nullable) | Whether a face is detected in the image (null means it could not be determined reliably) |
| `emotionsAttention.presence` | boolean (nullable) | Whether a person is present in the image (null means it could not be determined reliably) |
| `emotionsAttention.eyesOnScreen` | boolean (nullable) | Whether the person's eyes are on the screen (null means it could not be determined reliably) |
| `emotionsAttention.attention` | boolean (nullable) | Whether the person is attentive (null means it could not be determined reliably) |
| `emotionsAttention.confusion` | boolean (nullable) | Whether confusion emotion is detected (null means it could not be determined reliably) |
| `emotionsAttention.contempt` | boolean (nullable) | Whether contempt is detected (null means it could not be determined reliably) |
| `emotionsAttention.disgust` | boolean (nullable) | Whether disgust is detected (null means it could not be determined reliably) |
| `emotionsAttention.happy` | boolean (nullable) | Whether happiness is detected (null means it could not be determined reliably) |
| `emotionsAttention.empathy` | boolean (nullable) | Whether empathy is detected (null means it could not be determined reliably) |
| `emotionsAttention.surprise` | boolean (nullable) | Whether surprise is detected (null means it could not be determined reliably) |
| `landmarks` | object | Result of facial landmark detection, including pose, scale, and landmark positions in 2D and 3D space |
| `landmarks.scale` | number | Scale of the face |
| `landmarks.roll` | number | Roll pose angle |
| `landmarks.yaw` | number | Yaw pose angle |
| `landmarks.pitch` | number | Pitch pose angle |
| `landmarks.translate` | object | Translation coordinates in 2D space |
| `landmarks.translate.x` | number | The X axis coordinate of the head center in image space |
| `landmarks.translate.y` | number | The Y axis coordinate of the head center in image space |
| `landmarks.landmarks2D` | array (nullable) | Position of the 49 landmarks, in image coordinates |
| `landmarks.landmarks2D[].x` | number | The X axis coordinate of a landmark point in 2D space |
| `landmarks.landmarks2D[].y` | number | The Y axis coordinate of a landmark point in 2D space |
| `landmarks.landmarks3D` | array (nullable) | Position of the 49 landmarks, in an un-scaled face-centered 3D space |
| `landmarks.landmarks3D[].x` | number | The X axis coordinate of a landmark point in 3D space |
| `landmarks.landmarks3D[].y` | number | The Y axis coordinate of a landmark point in 3D space |
| `landmarks.landmarks3D[].z` | number | The Z axis coordinate of a landmark point in 3D space |
| `landmarks.isGood` | boolean | Whether the tracking is of good quality |
**Example Request:**
```bash
curl -X POST "https://emotion-attention-api-eu.realeyes.ai/v1/emotion-attention/detect" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"image": {
"bytes": "/9j/4AAQSkZJRgABAQEAYABgAAD..."
}
}'
```
**Response Codes:**
- `200` - Success
- `400` - Bad Request - Invalid image format, missing required fields, or invalid parameters
- `401` - Unauthorized - Missing or invalid authentication
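The `image.bytes` field accepts a Base64-encoded JPEG or PNG. As a minimal sketch, here is one way to build the request body from a local file in Python (the function name is illustrative, not part of the API):

```python
import base64


def image_payload_from_file(path: str) -> dict:
    # Read the image file and Base64-encode it into the `image.bytes`
    # field expected by the endpoint; `url` is left as null/None.
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return {"image": {"bytes": encoded, "url": None}}
```

The resulting dictionary can be serialized to JSON and sent as the request body, as in the curl example above.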
Face Verification
https://verifeye-docs.realeyes.ai/on-prem-docker/face-verification/
# Realeyes Guide for Face Verification API
## Overview
This guide presents the requirements for acquiring, running and using the docker image of the Face Verification API service.
---
## Changelog
**Version 1.0:** Initial version.
---
## Accessing and pulling the latest docker image
The first two steps need to be done only once per user/computer.
### Prerequisites
You must have the AWS CLI installed. The commands below are supported in the latest AWS CLI version 2, and in version 1.17.10 or later of AWS CLI version 1.
See how to: [Install the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
### Configure AWS credential profile
You should have previously received your access key ID and Secret Access Key from Realeyes. Please use them here:
```
aws configure --profile faceverification
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
```
### Get authorization token and pass it to docker login
```
aws ecr get-login-password --profile faceverification --region eu-west-1 | docker login --username AWS --password-stdin 249265253269.dkr.ecr.eu-west-1.amazonaws.com
```
The `get-login-password` command retrieves an authentication token using the GetAuthorizationToken API; you use this token to authenticate to an Amazon ECR registry. You can pass the token to the login command of your preferred container client, such as the Docker CLI. Once authenticated, you can pull images from that registry for as long as your IAM principal has access and the token remains valid. **NOTE:** The authorization token is valid for 12 hours.
### Pull the latest docker image
```
docker pull 249265253269.dkr.ecr.eu-west-1.amazonaws.com/verifeye/face-verification-api:latest
```
---
## Running the image
The service requires an activation key.
Set:
```
ACTIVATION_KEY=
```
You need to request the activation key from Realeyes.
### Run with docker
Run the container with the following command:
```
docker run --rm -ti -p 8080:8080/tcp \
-e ACTIVATION_KEY= \
--read-only \
--pids-limit=128 \
--security-opt=no-new-privileges \
--memory=16G \
249265253269.dkr.ecr.eu-west-1.amazonaws.com/verifeye/face-verification-api:latest
```
### Run with docker compose
Alternatively, you can use the following `docker-compose.yaml`:
```yaml
services:
face-verification-api:
image: 249265253269.dkr.ecr.eu-west-1.amazonaws.com/verifeye/face-verification-api:latest
environment:
- ACTIVATION_KEY=
ports:
- 8080:8080
read_only: true
security_opt:
- "no-new-privileges"
deploy:
resources:
limits:
pids: 128
memory: 16G
```
---
## Interactive API Documentation (Swagger UI)
Once the service is running, you can access the interactive API documentation at:
```
http://localhost:8080/swagger/index.html
```
This Swagger UI provides a **living documentation** of the API where you can:
- Browse all available endpoints with their detailed descriptions
- View request/response schemas and example payloads
- **Try out the API directly from your browser** – send real requests and see the responses in real-time
- Explore error codes and response formats
This is the recommended way to get familiar with the API and test your integration during development.
---
## API overview
The Face Verification API service provides the following REST API endpoints:
- detect-faces
- get-face-embeddings
- compare-face-embeddings
Below is an outline of the API; more detailed documentation is available in the Swagger UI (see above).
### Detect Faces
Returns a list of detected faces on the provided image with their respective bounding boxes.
**Endpoint:** `POST /v1/face-verification/detect-faces`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
"image": {
"bytes": "base64-encoded-image-string",
"url": null
},
"maxFaceCount": 10
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `image` | object | Yes | The image to process |
| `image.url` | string (nullable) | No | URL of a JPEG or PNG image |
| `image.bytes` | string (nullable) | No | Base64 encoded binary JPEG or PNG image |
| `maxFaceCount` | integer | No | Maximum number of faces to detect in the image |
**Response Example:**
```json
{
"faces": [
{
"confidence": 0.9876,
"boundingBox": {
"x": 120,
"y": 80,
"width": 200,
"height": 250
}
}
],
"unprocessedFaceCount": 0
}
```
**Response Fields:**
| Field Path | Type | Description |
|------------|------|-------------|
| `faces` | array (nullable) | Faces found on the image |
| `faces[].confidence` | number | Face detection score with value range [0.0, 1.0] (higher is better) |
| `faces[].boundingBox` | object | Model for the bounding box of a detected face |
| `faces[].boundingBox.x` | integer | Horizontal position of the detected face bounding box |
| `faces[].boundingBox.y` | integer | Vertical position of the detected face bounding box |
| `faces[].boundingBox.width` | integer | Width of the detected face bounding box |
| `faces[].boundingBox.height` | integer | Height of the detected face bounding box |
| `unprocessedFaceCount` | integer | Number of faces found in the image but not returned (filtered out by the `maxFaceCount` request parameter) |
**Example Request:**
```bash
curl -X POST "https://face-verification-api-eu.realeyes.ai/v1/face-verification/detect-faces" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"image": {
"bytes": "/9j/4AAQSkZJRgABAQEAYABgAAD..."
},
"maxFaceCount": 10
}'
```
**Response Codes:**
- `200` - Returns the detected faces results
---
### Get Face Embeddings
Returns a list of face embeddings for all the detected faces in the provided image.
**Endpoint:** `POST /v1/face-verification/get-face-embeddings`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
"image": {
"bytes": "base64-encoded-image-string",
"url": null
},
"maxFaceCount": 1
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `image` | object | Yes | The image to process |
| `image.url` | string (nullable) | No | URL of a JPEG or PNG image |
| `image.bytes` | string (nullable) | No | Base64 encoded binary JPEG or PNG image |
| `maxFaceCount` | integer | No | Maximum number of faces to get the embedding on |
**Response Example:**
```json
{
"faces": [
{
"face": {
"confidence": 0.9876,
"boundingBox": {
"x": 120,
"y": 80,
"width": 200,
"height": 250
}
},
"embedding": [0.123, -0.456, 0.789]
}
],
"unprocessedFaceCount": 0
}
```
**Response Fields:**
| Field Path | Type | Description |
|------------|------|-------------|
| `faces` | array (nullable) | Faces found on the image |
| `faces[].face` | object | Model for face detection |
| `faces[].face.confidence` | number | Face detection score with value range [0.0, 1.0] (higher is better) |
| `faces[].face.boundingBox` | object | Model for the bounding box of a detected face |
| `faces[].face.boundingBox.x` | integer | Horizontal position of the detected face bounding box |
| `faces[].face.boundingBox.y` | integer | Vertical position of the detected face bounding box |
| `faces[].face.boundingBox.width` | integer | Width of the detected face bounding box |
| `faces[].face.boundingBox.height` | integer | Height of the detected face bounding box |
| `faces[].embedding` | array (nullable) | Face verification embedding of the face |
| `unprocessedFaceCount` | integer | Number of faces found in the image for which no embedding was calculated (filtered out by the `maxFaceCount` request parameter) |
**Example Request:**
```bash
curl -X POST "https://face-verification-api-eu.realeyes.ai/v1/face-verification/get-face-embeddings" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"image": {
"bytes": "/9j/4AAQSkZJRgABAQEAYABgAAD..."
},
"maxFaceCount": 1
}'
```
**Response Codes:**
- `200` - Returns the face embeddings results
---
### Compare Face Embeddings
Returns the similarity between two face embeddings as an integer between 0 and 100.
**Endpoint:** `POST /v1/face-verification/compare-face-embeddings`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
"embedding1": [0.123, -0.456, 0.789],
"embedding2": [0.125, -0.450, 0.792]
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `embedding1` | array (nullable) | Yes | Embedding to compare |
| `embedding2` | array (nullable) | Yes | Embedding to compare with |
**Response Example:**
```json
{
"similarity": 85
}
```
**Response Fields:**
| Name | Type | Description |
|------|------|-------------|
| `similarity` | integer | Similarity between the two embeddings with value range [-1, 100] (higher is better). Reject any matches where similarity is less than 70. |

**Threshold reference** (computed using extensive in-the-wild datasets):
- **95** corresponds to an FPR of 1e-6 (or better)
- **90** corresponds to an FPR of 1e-5
- **80** corresponds to an FPR of 1e-4
- **70** corresponds to an FPR of 1e-3
**Example Request:**
```bash
curl -X POST "https://face-verification-api-eu.realeyes.ai/v1/face-verification/compare-face-embeddings" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"embedding1": [0.123, -0.456, 0.789],
"embedding2": [0.125, -0.450, 0.792]
}'
```
**Response Codes:**
- `200` - Returns the similarity result
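As a minimal sketch, a backend can map the `similarity` response field to an accept/reject decision using the documented thresholds. The function name is illustrative, and treating a negative value as a failed comparison is an assumption based on the documented [-1, 100] range:

```python
# Thresholds from the reference table above: similarity -> false positive rate.
FPR_THRESHOLDS = {95: 1e-6, 90: 1e-5, 80: 1e-4, 70: 1e-3}

# The documentation recommends rejecting matches with similarity below 70.
REJECT_BELOW = 70


def is_same_person(similarity: int, threshold: int = REJECT_BELOW):
    # ASSUMPTION: a negative similarity (e.g. -1) signals a failed comparison.
    if similarity < 0:
        return None
    return similarity >= threshold
```

Raising the threshold to 90 or 95 trades more false rejections for a lower false-positive rate, per the table above.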
Index
https://verifeye-docs.realeyes.ai/on-prem-docker/
# VerifEye On-Prem API (Docker)
This section contains documentation for running VerifEye APIs as Docker containers in your own infrastructure.
---
## Available APIs
### [Emotion Attention API](https://verifeye-docs.realeyes.ai/emotion-attention)
Guide for acquiring, running and using the docker image of the Emotion Attention API service.
### [Face Verification API](https://verifeye-docs.realeyes.ai/face-verification)
Guide for acquiring, running and using the docker image of the Face Verification API service.
### [Demographic Estimation API](https://verifeye-docs.realeyes.ai/demographic-estimation)
Guide for acquiring, running and using the docker image of the Demographic Estimation API service.
Liveness Detection
https://verifeye-docs.realeyes.ai/on-prem-docker/liveness-detection/
# Realeyes Guide for Liveness Detection API
## Overview
This guide presents the requirements for acquiring, running and using the docker image of the Liveness Detection API service.
---
## Changelog
**Version 1.0:** Initial version.
---
## Accessing and pulling the latest docker image
The first two steps need to be done only once per user/computer.
### Prerequisites
You must have the AWS CLI installed. The commands below are supported in the latest AWS CLI version 2, and in version 1.17.10 or later of AWS CLI version 1.
See how to: [Install the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
### Configure AWS credential profile
You should have previously received your access key ID and Secret Access Key from Realeyes. Please use them here:
```
aws configure --profile livenessdetection
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
```
### Get authorization token and pass it to docker login
```
aws ecr get-login-password --profile livenessdetection --region eu-west-1 | docker login --username AWS --password-stdin 249265253269.dkr.ecr.eu-west-1.amazonaws.com
```
The `get-login-password` command retrieves an authentication token using the GetAuthorizationToken API; you use this token to authenticate to an Amazon ECR registry. You can pass the token to the login command of your preferred container client, such as the Docker CLI. Once authenticated, you can pull images from that registry for as long as your IAM principal has access and the token remains valid. **NOTE:** The authorization token is valid for 12 hours.
### Pull the latest docker image
```
docker pull 249265253269.dkr.ecr.eu-west-1.amazonaws.com/verifeye/liveness-detection-api:latest
```
---
## Running the image
The service requires an activation key.
Set:
```
ACTIVATION_KEY=
```
You need to request the activation key from Realeyes.
### Run with docker
Run the container with the following command:
```
docker run --rm -ti -p 8080:8080/tcp \
-e ACTIVATION_KEY= \
--read-only \
--pids-limit=128 \
--security-opt=no-new-privileges \
--memory=16G \
249265253269.dkr.ecr.eu-west-1.amazonaws.com/verifeye/liveness-detection-api:latest
```
### Run with docker compose
Alternatively, you can use the following `docker-compose.yaml`:
```yaml
services:
liveness-detection-api:
image: 249265253269.dkr.ecr.eu-west-1.amazonaws.com/verifeye/liveness-detection-api:latest
environment:
- ACTIVATION_KEY=
ports:
- 8080:8080
read_only: true
security_opt:
- "no-new-privileges"
deploy:
resources:
limits:
pids: 128
memory: 16G
```
---
## Interactive API Documentation (Swagger UI)
Once the service is running, you can access the interactive API documentation at:
```
http://localhost:8080/swagger/index.html
```
This Swagger UI provides a **living documentation** of the API where you can:
- Browse all available endpoints with their detailed descriptions
- View request/response schemas and example payloads
- **Try out the API directly from your browser** - send real requests and see the responses in real-time
- Explore error codes and response formats
This is the recommended way to get familiar with the API and test your integration during development.
---
## API overview
The Liveness Detection API service provides the following REST API endpoint:
- check
Below is an outline of the API; more detailed documentation is available in the Swagger UI (see above).
### Check Video Liveness
Checks the liveness of a video. The video should contain a single person's face, in an upright position; sideways or upside-down faces are not supported. You can send a Base64-encoded video or a video URL. The preferred input video format is WebM (VP8 codec). Maximum video length is 10 seconds; maximum video size is 2,000,000 bytes.
**Endpoint:** `POST /v1/liveness/check`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
"video": {
"bytes": "base64-encoded-video-string",
"url": null
},
"includeBestFrameWithFace": false
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `video` | object | Yes | The video to process |
| `video.url` | string (nullable) | No | URL of a video |
| `video.bytes` | string (nullable) | No | Base 64 string encoded binary video |
| `includeBestFrameWithFace` | boolean | No | Whether the best frame containing a detected face should be included in the results |
**Response Example:**
```json
{
"face": {
"confidence": 0.9987,
"boundingBox": {
"x": 120,
"y": 80,
"width": 200,
"height": 250
}
},
"hasFace": true,
"unprocessedFaceCount": 0,
"isLive": true,
"bestFrameWithFace": null
}
```
**Response Fields:**
| Field Path | Type | Description |
|------------|------|-------------|
| `face` | object | Model for face detection |
| `face.confidence` | number | Face detection score with value range [0.0, 1.0] (higher is better) |
| `face.boundingBox` | object | Model for the bounding box of a detected face |
| `face.boundingBox.x` | integer | Horizontal position of the detected face bounding box |
| `face.boundingBox.y` | integer | Vertical position of the detected face bounding box |
| `face.boundingBox.width` | integer | Width of the detected face bounding box |
| `face.boundingBox.height` | integer | Height of the detected face bounding box |
| `hasFace` | boolean | Indicates whether a face was found in the video |
| `unprocessedFaceCount` | integer | If more than one face was detected, only the dominant face is used for recognition; this count indicates how many faces were not processed |
| `isLive` | boolean (nullable) | Whether the face in the video is live. Null when no face was detected |
| `bestFrameWithFace` | string (nullable) | Best frame with a face extracted from the video, in Base64 format |
**Example Request:**
```bash
curl -X POST "https://liveness-detection-api-eu.realeyes.ai/v1/liveness/check" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"video": {
"bytes": "UklGRiQAAABXQVZFZm10IBAAAAABAAEA..."
},
"includeBestFrameWithFace": false
}'
```
**Response Codes:**
- `200` - Returns the liveness check result
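On your backend, the response can be reduced to a simple outcome by honoring the nullable `isLive` semantics described above. A minimal sketch (the function name and outcome strings are illustrative):

```python
def liveness_outcome(resp: dict) -> str:
    # `isLive` is null when no face was detected, so check `hasFace` first.
    if not resp.get("hasFace") or resp.get("isLive") is None:
        return "no_face"
    return "live" if resp["isLive"] else "not_live"
```

Treat `"no_face"` as a retryable condition (e.g. prompt the user to re-record) rather than a failed liveness check.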
Index
https://verifeye-docs.realeyes.ai/redirect/concept/
This section describes the core concepts you will encounter when integrating with VerifEye's Redirect.
## Verification session
A **verification session** represents a single end-user attempt.
- A session has a unique identifier (`reVerificationSessionId` in the final redirect URL).
- The session aggregates the individual verifier outcomes into an overall verification result.
## Verification
A **Verification** represents a named, configurable verification experience. It defines how the verification flow must work (rules, enabled verifiers, thresholds, UX options, etc.), and it may also reference or contain a **face collection** used by the Face Recognition verifier.
A verification contains the following configurations:
- Verifier configurations
- Redirect configurations
- Result parameter configurations
- Security configurations
- Session Override configurations
### One-time setup (create once, reuse many times)
The API endpoints that create or modify a Verification are **one-time setup** operations. A Verification typically needs to be created **only once per face-collection scope**, either:
- via the developer/admin portal (recommended for most teams), or
- via a **single programmatic call** from a secure backend/service.
After creation, the same Verification configuration must be **reused** for all end-user sessions in that scope by referencing its `verification_id` (or `verification_name`, depending on the API design).
### What “scope” means (and why it matters)
A Verification is a configuration object that is tied to a particular **face collection scope**, which depends on your use case:
- **Login / authentication for an entire user base:**
You typically create **one Verification** for the whole user base (one shared face collection scope), and reuse it for every login attempt.
- **Project- or campaign-based access control (separate populations):**
You create **one Verification per project/campaign scope**, for example:
  - protecting a specific survey on a survey platform (each survey can have its own scope), or
  - a limited-stock promotion (e.g., a “Discounted shoes” campaign), where you want to admit new participants only while inventory lasts; each campaign/promo can be isolated via its own face collection scope and Verification.
### Security guidance (must be created server-side)
Creating a Verification is a **privileged administrative action** and must be performed from a **trusted, secure environment** (backend, admin service, or portal). It must **not** be created by the end user on the client side.
If a user could create or control their own Verification, they could potentially influence the verification behavior (e.g., choose weaker settings, point to a different face collection, or otherwise manipulate the process). Therefore:
- **Only admin/backend credentials** should be allowed to call “create/update Verification” endpoints.
- Client applications should only receive a reference to an existing Verification (e.g., `verification_id`) and use it to start verification sessions.
!!!
Every configuration can be easily set either in the Developer Console or via VerifEye Service API.
[!ref target="blank" text="Developer Console"](https://verifeye-console.realeyes.ai/)
[!ref text="VerifEye Service API"](https://verifeye-docs.realeyes.ai/rest-api/verifeye-service-api)
!!!
[!ref text="For more about how to configure Verification, check out Verification configurations page"](https://verifeye-docs.realeyes.ai/redirect/verification-configurations/)
---
*Last updated: 2026-02-03*
Index
https://verifeye-docs.realeyes.ai/redirect/
# What is VerifEye's Redirect?
Redirect is a hosted user verification SaaS (Software as a Service) that helps you confirm a user’s identity-related signals during a short, guided camera flow.
At a high level, VerifEye:
- guides the user through a browser-based camera experience
- runs one or more configurable **verifiers** (liveness, face recognition, age and gender)
- redirects the user back to your application with the verification outcome in the URL
From an integrator’s perspective the system consists of:
- a **hosted verification page** (served by VerifEye) that can be accessed via the provided **verification URL**
- a **target URL** (configured for your Verification) that receives the outcome as query parameters
You do not need to host, deploy, or run the server/client to use the SaaS.
For a detailed breakdown of supported integration patterns (full-page redirect, embedded iFrame, popup), see the **Use cases** section below.
## Concept
[!ref text="Check out the Concept page"](https://verifeye-docs.realeyes.ai/redirect/concept/)
## Use cases
[!ref text="Check out the Use cases page"](https://verifeye-docs.realeyes.ai/redirect/use-cases/)
## Security
[!ref text="Check out the Security page"](https://verifeye-docs.realeyes.ai/redirect/security/)
## Verification configuration
[!ref text="Check out Verification configurations page"](https://verifeye-docs.realeyes.ai/redirect/verification-configurations/)
## Parameters
[!ref text="Check out the Parameters page"](https://verifeye-docs.realeyes.ai/redirect/parameters/)
---
*Last updated: 2026-02-03*
Index
https://verifeye-docs.realeyes.ai/redirect/parameters/
!!!
All reserved parameters (whether input or output) use the **re** prefix (e.g., **reSignature**).
!!!
## Input parameters
If **IncludeCustomInputParameters** is `true` in the Verification Result configuration, all non-reserved input parameters are passed through to the result URL.
### Reserved parameters
#### Session Overrides
[!ref text="Check out the Session Override Configurations section"](https://verifeye-docs.realeyes.ai/redirect/verification-configurations/#session-override-configurations)
#### Signature (`reSignature`)
[!ref text="Check out the Incoming signed Verification Url section"](https://verifeye-docs.realeyes.ai/redirect/security/#incoming-signed-verification-url-request-integrity)
#### External identifier (`reExternalId`)
Some face recognition modes require a stable external identifier to link a user across sessions.
- `reExternalId` is **required** when face recognition is configured for:
- `MatchVerification`
- `UniqueMatchVerification`
- `reExternalId` is **optional** in other modes.
## Output parameters
### Result URL contract (integration output)
The **result redirect URL** is the primary integration contract.
- VerifEye redirects the browser to your configured result URL.
- The final URL contains query parameters describing the outcome.
- A signature parameter is appended to enable tamper detection.
The following parameter names are used:
- `reVerificationSessionId`
- `reVerificationResult`
- `reLivenessResult`
- `reFaceRecognitionResult`
- `reAgeVerificationResult`
- `reGenderVerificationResult`
- `reFaceId`
- `reAge`
- `reGender`
- `reCorrelationId`
- `reTotalFaceCount`
- `reFailedReason`
- `reSignature`
Parameter presence depends on your project configuration.
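The output parameters can be read on your backend with standard URL parsing. A minimal Python sketch that collects the `re`-prefixed parameters from a result redirect URL (the function name is illustrative):

```python
from urllib.parse import parse_qs, urlparse


def parse_result_params(result_url: str) -> dict:
    # All reserved output parameters use the `re` prefix (e.g. reSignature).
    query = parse_qs(urlparse(result_url).query)
    return {name: values[0] for name, values in query.items() if name.startswith("re")}
```

Remember that presence of any given parameter depends on your project configuration, so look parameters up defensively rather than assuming they exist.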
---
*Last updated: 2026-02-03*
Index
https://verifeye-docs.realeyes.ai/redirect/security/
This section describes the two security mechanisms most relevant to integrations:
1. **Incoming signed verification URL** (request integrity)
2. **Outgoing signed result URL** (result integrity)
These are independent features and can be enabled/used separately.
### Incoming signed verification URL (request integrity)
Some projects may require that the initial verification URL (the one your application sends the user to) is **signed**.
Purpose:
- prevent unauthorized tampering with query parameters before the verification starts
- ensure the verification session is initiated with an authentic, unmodified set of inputs
Integration guidance:
- treat the signing key as a secret
- sign URLs on a trusted system (typically your backend)
- do not generate signatures in browser JavaScript
If your project requires signed input and the signature is missing/invalid, the verification will be rejected.
### Outgoing signature on the result URL (result integrity)
The result redirect includes `reSignature`, which can be used for tamper detection.
- Signature validation is **optional**.
- If you need higher assurance, validate `reSignature` on your backend before trusting any other parameters.
Operational recommendations:
- consider all query parameters untrusted until signature verification passes
- validate on your backend (or another trusted environment)
- if verification fails, do not trust or persist the reported outcome
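As an illustration only: a common pattern for this kind of check is an HMAC comparison over the query string with the signature parameter removed. The actual signing algorithm, key handling, and canonicalization used by VerifEye are defined by your project setup, so treat everything below (the HMAC-SHA256 choice, the canonical string, the function name) as assumptions to confirm with Realeyes before relying on it:

```python
import hashlib
import hmac
from urllib.parse import parse_qsl, urlencode, urlparse


def signature_matches(result_url: str, secret: bytes) -> bool:
    # ASSUMPTION: HMAC-SHA256 over the query string minus `reSignature`,
    # hex-encoded. Confirm the real scheme with Realeyes before relying on this.
    parts = urlparse(result_url)
    all_pairs = parse_qsl(parts.query)
    received = dict(all_pairs).get("reSignature", "")
    signed_pairs = [(k, v) for k, v in all_pairs if k != "reSignature"]
    expected = hmac.new(secret, urlencode(signed_pairs).encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, received)
```

Whatever the actual scheme, keep the constant-time comparison and perform the check on your backend, never in browser JavaScript.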
### Origin validation (for iFrame / Popup window)
In embedded scenarios, the VerifEye result page posts a message to the parent/opener.
Recommendations:
- always validate `event.origin` against the expected VerifEye domain(s)
- treat `event.data.redirectedTo` as untrusted input until you have validated the URL via the `reSignature` check
---
*Last updated: 2026-02-03*
Index
https://verifeye-docs.realeyes.ai/redirect/use-cases/
This section describes the integration use cases for VerifEye's Redirect.
## Verification setup for the use cases
All the below examples use the following setup:
- Liveness Verifier in Verification mode
- Age Verifier in Threshold Verification mode (Above 18)
- Face Recognition Verifier in Calculation Only mode
- Gender Verifier in Calculation Only mode
- All verifiers use the same face collection
## Use case: Full-page redirect
**When to use**
- simplest integration; users leave your site temporarily and come back via redirect
**How it works (conceptually)**
- you send the user to the VerifEye verification URL
- the user completes the verification
- VerifEye redirects the browser to your result URL with the outcome in the query string parameters
**Recommended**
- validate `reSignature` server-side to avoid result tampering
[!ref text="For more information about the signature, check out the Security page"](https://verifeye-docs.realeyes.ai/redirect/security/)
**Example**
In this example we use the EU service.
- Verification URL: https://verifeye-service-eu.realeyes.ai/verification/e4fc930b-d780-47e3-ae4c-0d5d2f22e54e
- Target URL (configured for the Verification): https://example.com
- Result URL example (Target URL with the outcome for a specific session in the query string parameters):
```
https://example.com/?reVerificationSessionId=7a429371-f918-4900-bf1b-14a76e551042&reFaceId=99a9e4e3ac75406ebd638f95bc4f0b30&reVerificationResult=passed&reLivenessResult=passed&reFaceRecognitionResult=passed&reAgeVerificationResult=passed&reAge=31&reGenderVerificationResult=passed&reGender=male&reSignature=70ac1eb0762395c831a09f53f854ca7e9745bf5755b83ed8137ed21dec9c7307
```
[!ref text="Click here to try out" target="blank"](https://verifeye-service-eu.realeyes.ai/verification/e4fc930b-d780-47e3-ae4c-0d5d2f22e54e)
## Use case: Embedded iFrame
**When to use**
- you want the verification flow embedded in your page without full navigation
**How it works (conceptually)**
- you embed the hosted verification URL in an iFrame
- after completion, VerifEye navigates to a result page that posts a message to the parent window containing:
- `redirectedTo`: the final result URL
**Recommended**
- validate `event.origin` on the host page
- validate `reSignature` on your backend if you need tamper detection
[!ref text="For more information about origin check and the signature, check out the Security page"](https://verifeye-docs.realeyes.ai/redirect/security/)
**Example**
- Verification URL: https://verifeye-service-eu.realeyes.ai/verification/3bb1fb37-9d3a-423f-b5bc-1790a329f117
- Target URL (configured for the Verification): https://verifeye-docs.realeyes.ai/iframe_popup_result_example.html
- Result URL example (Target URL with the outcome for a specific session in the query string parameters):
```
https://verifeye-docs.realeyes.ai/iframe_popup_result_example.html?reVerificationSessionId=d7c09541-03c1-403e-9b3c-146468657f27&reFaceId=99a9e4e3ac75406ebd638f95bc4f0b30&reVerificationResult=passed&reLivenessResult=passed&reFaceRecognitionResult=passed&reAgeVerificationResult=passed&reAge=31&reGenderVerificationResult=passed&reGender=male&reSignature=462cfda7b1999f243867d5a2ab1725624ca7d85d807b7e8d3cb1f285648c0810
```
[!ref text="Click here to try out" target="blank"](https://verifeye-docs.realeyes.ai/static/iframe_example.html)
The following file is a minimal example of the VerifEye iFrame integration:
:::code source="/static/iframe_example.html" language="js" title="iframe_example.html" :::
This file is what the Target URL points to. It is a technical helper that receives the result query string parameters and sends the full URL back to the host page via `postMessage`.
:::code source="/static/iframe_popup_result_example.html" language="js" title="iframe_popup_result_example.html" :::
## Use case: Popup window
**When to use**
- you want to keep the host page intact and run verification in a separate window
**How it works (conceptually)**
- you open a popup to the hosted verification URL
- after completion, the popup navigates to the result page and posts a message to `window.opener` with:
- `redirectedTo`: the final result URL
- and finally the popup window closes
**Recommended**
- handle popup blockers and provide a fallback
- validate `event.origin` on the host page
- validate `reSignature` on your backend if you need tamper detection
[!ref text="For more information about origin check and the signature, check out the Security page"](https://verifeye-docs.realeyes.ai/redirect/security/)
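The popup flow can be sketched as below. This is a minimal sketch with illustrative names (`openVerification`, `handleVerificationMessage`, and the constants are placeholders); `popup_example.html` below is the full reference example. `openVerification` falls back to a full-page redirect when the popup is blocked.

```javascript
// Minimal popup sketch; names are illustrative.
const VERIFICATION_URL =
  "https://verifeye-service-eu.realeyes.ai/verification/3bb1fb37-9d3a-423f-b5bc-1790a329f117";
const RESULT_ORIGIN = "https://verifeye-docs.realeyes.ai"; // origin of your Target URL

// Returns the parsed result parameters if the message is a verification
// result from the expected origin, otherwise null.
function handleVerificationMessage(event, expectedOrigin) {
  if (event.origin !== expectedOrigin) return null; // reject foreign origins
  const redirectedTo = event.data && event.data.redirectedTo;
  if (!redirectedTo) return null;
  const result = {};
  for (const [key, value] of new URL(redirectedTo).searchParams) {
    result[key] = value;
  }
  return result;
}

if (typeof window !== "undefined") {
  window.addEventListener("message", (event) => {
    const result = handleVerificationMessage(event, RESULT_ORIGIN);
    if (result) console.log("Verification result:", result.reVerificationResult);
  });

  // Open the popup; fall back to a redirect if a popup blocker intervenes.
  window.openVerification = function openVerification() {
    const popup = window.open(VERIFICATION_URL, "verifeye", "width=480,height=720");
    if (!popup) window.location.href = VERIFICATION_URL;
  };
}
```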
**Example**
- Verification URL: https://verifeye-service-eu.realeyes.ai/verification/3bb1fb37-9d3a-423f-b5bc-1790a329f117
- Target URL (configured for the Verification): https://verifeye-docs.realeyes.ai/iframe_popup_result_example.html
- Result URL example (Target URL with the outcome for a specific session in the query string parameters):
```
https://verifeye-docs.realeyes.ai/iframe_popup_result_example.html?reVerificationSessionId=5048626a-8fb5-4116-bcdf-8754a85bba4a&reFaceId=99a9e4e3ac75406ebd638f95bc4f0b30&reVerificationResult=passed&reLivenessResult=passed&reFaceRecognitionResult=passed&reAgeVerificationResult=passed&reAge=31&reGenderVerificationResult=passed&reGender=male&reSignature=462cfda7b1999f243867d5a2ab1725624ca7d85d807b7e8d3cb1f285648c0810
```
[!ref text="Click here to try out" target="blank"](https://verifeye-docs.realeyes.ai/static/popup_example.html)
The following file is a minimal example of how to use the VerifEye popup integration:
:::code source="/static/popup_example.html" language="js" title="popup_example.html" :::
The following file is set as the Target URL. It is a technical page responsible for receiving the result query string parameters and posting the full URL back to the host page via `postMessage`.
:::code source="/static/iframe_popup_result_example.html" language="js" title="iframe_popup_result_example.html" :::
---
*Last updated: 2026-02-03*
Generic Oidc
https://verifeye-docs.realeyes.ai/redirect/use-cases/open-id-connect/generic-oidc/
# Generic OpenID Connect Integration with VerifEye
This guide walks you through setting up VerifEye as a generic OpenID Connect (OIDC) provider, enabling you to use VerifEye's biometric verification as an authentication factor in any OIDC-compliant identity provider or application.
---
## Prerequisites
Before configuring your identity provider, ensure you have:
- A VerifEye account and API key
- Admin access to your identity provider or application
- Understanding of the OpenID Connect flow
- A verification configuration set up for OIDC integration (see below)
---
## Configuring the VerifEye Verification
To integrate VerifEye with any OIDC-compliant system, you need to create a VerifEye verification configuration that will be used as the OIDC provider. Follow these steps:
You can create a new verification configuration in the [VerifEye Developer Console](https://verifeye-console.realeyes.ai/) under **Verification Configurations**. Make sure to note the configuration ID and API key, as you will need these for the identity provider setup.
### 1. Basic Settings
When creating the [verification configuration](https://verifeye-docs.realeyes.ai/redirect/verification-configurations/), set the following basic settings:
| Field | Value |
|-------|--------|
| **Name** | Generic OIDC Verification |
| **Passed URL** | The OpenID Connect verify-result endpoint for your VerifEye region, for example for the EU region: `https://verifeye-oidc-eu.realeyes.ai/v1/openid/verify-result` |
| **Failed URL** | The OpenID Connect verify-result endpoint for your VerifEye region, for example for the EU region: `https://verifeye-oidc-eu.realeyes.ai/v1/openid/verify-result` |
### 2. Verifier Configuration
For the OIDC integration, you must configure the verification configuration to include the following settings:
- Face Recognition
- **Unique Match Verification**: one person can be registered only once in the system; if the same person tries to register again with a different identifier (email address), the verification fails
- **Match Verification**: one person can be registered multiple times in the system; if the same person tries to register again with a different identifier (email address), the verification passes
Additionally, you can configure other settings such as liveness detection and age or gender verification.
### 3. Advanced Settings
For the OIDC integration, you must also configure the following advanced settings:
- **Include Signature**: enabled
- **Include Verification Result**: enabled
- **Include Custom Input Parameters**: enabled
- **Force Signed Input**: enabled
---
## OIDC Provider Configuration Details
When configuring VerifEye as an OpenID Connect provider in your identity provider or application, use the following values:
### Required Configuration Values
| Parameter | Value |
|-----------|--------|
| **Client ID** | Your VerifEye verification configuration ID. For example, if your verification URL is `https://verifeye-service-eu.realeyes.ai/verification/e4fc930b-d780-47e3-ae4c-0d5d2f22e54e`, the configuration ID is the final path segment: **e4fc930b-d780-47e3-ae4c-0d5d2f22e54e** |
| **Client Secret** | Your VerifEye API Key, which can be found in the VerifEye Console under **Settings > Account Information** |
| **Scopes** | `openid email` |
### Regional Endpoints
Configure the OpenID Connect endpoints based on your VerifEye region:
#### EU Region
| Endpoint | URL |
|----------|-----|
| **Issuer URL** | `https://verifeye-oidc-eu.realeyes.ai` |
| **Authorization URL** | `https://verifeye-oidc-eu.realeyes.ai/v1/openid/authorize` |
| **Token URL** | `https://verifeye-oidc-eu.realeyes.ai/v1/openid/token` |
| **JWKS URL** | `https://verifeye-oidc-eu.realeyes.ai/v1/openid/jwks` |
#### US Region
| Endpoint | URL |
|----------|-----|
| **Issuer URL** | `https://verifeye-oidc-us.realeyes.ai` |
| **Authorization URL** | `https://verifeye-oidc-us.realeyes.ai/v1/openid/authorize` |
| **Token URL** | `https://verifeye-oidc-us.realeyes.ai/v1/openid/token` |
| **JWKS URL** | `https://verifeye-oidc-us.realeyes.ai/v1/openid/jwks` |
---
## Authentication Flow
The VerifEye OIDC service follows the standard OAuth 2.0/OpenID Connect authorization code flow:
1. User attempts to authenticate with your application
2. Application redirects user to VerifEye authorization endpoint
3. User completes biometric verification using your configured verification settings
4. Upon successful verification, user is redirected back to your application with authorization code
5. Your application exchanges the authorization code for access and ID tokens
6. User authentication is complete
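Step 5 of the flow is a standard OAuth 2.0 authorization-code exchange against the token endpoint. The sketch below is illustrative only: the `buildTokenRequestBody` helper, the redirect URI, and the credential values are placeholders, and in practice your identity provider or OIDC client library performs this step for you.

```javascript
// Sketch of the authorization-code exchange against the EU token endpoint.
// Helper name, redirect URI, and credentials are illustrative placeholders.
const TOKEN_URL = "https://verifeye-oidc-eu.realeyes.ai/v1/openid/token";

function buildTokenRequestBody(code, clientId, clientSecret, redirectUri) {
  return new URLSearchParams({
    grant_type: "authorization_code",
    code,
    client_id: clientId,          // your verification configuration ID
    client_secret: clientSecret,  // your VerifEye API key
    redirect_uri: redirectUri,
  }).toString();
}

async function exchangeCode(code) {
  const response = await fetch(TOKEN_URL, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: buildTokenRequestBody(
      code,
      "e4fc930b-d780-47e3-ae4c-0d5d2f22e54e",   // example configuration ID
      "YOUR_API_KEY",                            // placeholder
      "https://your-app.example.com/callback"    // illustrative redirect URI
    ),
  });
  return response.json(); // contains the access and ID tokens
}
```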
---
Index
https://verifeye-docs.realeyes.ai/redirect/use-cases/open-id-connect/
# VerifEye OpenID Connect Service
VerifEye provides a specialized OpenID Connect (OIDC) service that enables biometric verification as an authentication factor within identity providers like Okta. This service acts as a bridge between your verification configurations and your user management system, allowing you to leverage VerifEye's biometric capabilities in a standardized authentication flow.
## Regional Availability
The VerifEye OIDC service is available in two regions for optimal performance and data compliance:
- **EU (Europe)**: `https://verifeye-oidc-eu.realeyes.ai/`
- **US (United States)**: `https://verifeye-oidc-us.realeyes.ai/`
Choose the region closest to your users or that aligns with your data residency requirements.
## Key Endpoints
The OIDC service provides standard OpenID Connect endpoints for integration:
| Endpoint | Path | Description |
|----------|------|-------------|
| **Authorization** | `/v1/openid/authorize` | Initiate authentication flow |
| **Token** | `/v1/openid/token` | Exchange authorization code for tokens |
| **JWKS** | `/v1/openid/jwks` | JSON Web Key Set for token verification |
| **Verify Result** | `/v1/openid/verify-result` | Callback endpoint for verification results |
## Authentication Flow
The service follows the standard OAuth 2.0/OIDC authorization code flow:
1. User is redirected to VerifEye authorization endpoint
2. Biometric verification is performed using your configured verification settings
3. Upon completion, user is redirected back to identity provider with authorization code
4. Identity provider exchanges the code for access and ID tokens
5. User authentication is complete
## Integration Guides
Learn how to integrate VerifEye OIDC with popular identity providers:
[!ref text="Okta Integration"](https://verifeye-docs.realeyes.ai/redirect/use-cases/open-id-connect/okta-integration/)
---
## Additional Resources
For detailed information about OpenID Connect specifications and best practices:
- [OpenID Connect Specification](https://openid.net/connect/)
- [OAuth 2.0 Authorization Framework](https://datatracker.ietf.org/doc/html/rfc6749)
- [JWT Token Specification](https://datatracker.ietf.org/doc/html/rfc7519)
Okta Integration
https://verifeye-docs.realeyes.ai/redirect/use-cases/open-id-connect/okta-integration/
# Okta Integration with VerifEye
This guide walks you through setting up VerifEye as a generic OpenID Connect (OIDC) provider in Okta, enabling you to use VerifEye's biometric verification as an authentication factor in your Okta-managed applications.
---
## Prerequisites
Before configuring Okta, ensure you have:
- Admin access to your Okta organization
- A VerifEye account and API key, and a verification configuration set up for OIDC integration (see below)
- Understanding of the OpenID Connect flow
---
## Configuring the VerifEye Verification
To integrate VerifEye with Okta, you need to create a VerifEye verification configuration that will be used as the OIDC provider. Follow these steps:
You can create a new verification configuration in the [VerifEye Developer Console](https://verifeye-console.realeyes.ai/) under **Verification Configurations**. Make sure to note the configuration ID and API key, as you will need these for the Okta setup.
### 1. Basic Settings
When creating the [verification configuration](https://verifeye-docs.realeyes.ai/redirect/verification-configurations/), set the following basic settings:
| Field | Value |
|-------|--------|
| **Name** | Okta OIDC Verification |
| **Passed URL** | The OpenID Connect verify-result endpoint for your VerifEye region, for example for the EU region: `https://verifeye-oidc-eu.realeyes.ai/v1/openid/verify-result` |
| **Failed URL** | The OpenID Connect verify-result endpoint for your VerifEye region, for example for the EU region: `https://verifeye-oidc-eu.realeyes.ai/v1/openid/verify-result` |
### 2. Verifier Configuration
For the OIDC integration, you must configure the verification configuration to include the following settings:
- Face Recognition
- **Unique Match Verification**: one person can be registered only once in the system; if the same person tries to register again with a different identifier (email address), the verification fails
- **Match Verification**: one person can be registered multiple times in the system; if the same person tries to register again with a different identifier (email address), the verification passes
Additionally, you can configure other settings such as liveness detection and age or gender verification.
### 3. Advanced Settings
For the OIDC integration, you must also configure the following advanced settings:
- **Include Signature**: enabled
- **Include Verification Result**: enabled
- **Include Custom Input Parameters**: enabled
- **Force Signed Input**: enabled
## Adding VerifEye as Generic OIDC Provider
### 1. Create New Identity Provider
1. Navigate to **Security > Identity Providers** in your Okta Admin Console
2. Click **Add Identity Provider**
3. Select **Add OpenID Connect IdP**
### 2. Configure Provider Settings
Configure the following basic settings:
| Field | Value |
|-------|--------|
| **Name** | VerifEye OIDC |
| **Client ID** | Your VerifEye verification configuration ID. For example, if your verification URL is `https://verifeye-service-eu.realeyes.ai/verification/e4fc930b-d780-47e3-ae4c-0d5d2f22e54e`, the configuration ID is the final path segment: **e4fc930b-d780-47e3-ae4c-0d5d2f22e54e** |
| **Client Secret** | Your VerifEye API Key, which can be found in the VerifEye Console under **Settings > Account Information** |
| **Scopes** | `openid email` |
| **IdP Usage** | `Factor only` |
### 3. Configure the Endpoints
Set the OpenID Connect endpoints depending on your VerifEye region, for example for the EU region:
| Field | Value |
|-------|--------|
| **Issuer URL** | `https://verifeye-oidc-eu.realeyes.ai` |
| **Authorization URL** | `https://verifeye-oidc-eu.realeyes.ai/v1/openid/authorize` |
| **Token URL** | `https://verifeye-oidc-eu.realeyes.ai/v1/openid/token` |
| **JWKS URL** | `https://verifeye-oidc-eu.realeyes.ai/v1/openid/jwks` |

---
## Required Additional Settings
After creating the OIDC provider, you **must** configure additional authentication settings:
#### 1. Set Up an IdP Authenticator
- Create a new authenticator that uses the VerifEye OIDC provider
#### 2. Create or Update your App sign-on Policies
- Define your rules for when the VerifEye authentication should be triggered in your authentication policies
---
## Additional Resources
For detailed configuration options and advanced settings, refer to the official Okta documentation:
- [Okta OpenID Connect Identity Providers](https://help.okta.com/oie/en-us/content/topics/integrations/open-id-connect.htm)
- [Configuring IdP Authenticators](https://help.okta.com/oie/en-us/content/topics/identity-engine/authenticators/configure-idp-authenticator.htm)
- [Multifactor authentication](https://help.okta.com/oie/en-us/content/topics/identity-engine/authenticators/about-authenticators.htm)
- [Authentication method chain](https://help.okta.com/oie/en-us/content/topics/identity-engine/policies/authentication-method-chain.htm)
!!!warning Important
Remember that configuring only the OIDC provider is not sufficient. You must also set up authenticators and authentication policies to ensure proper security and functionality.
!!!
Index
https://verifeye-docs.realeyes.ai/redirect/verification-configurations/
# Verification Configurations
This section describes the possible verification configurations.
## Verifier configurations
### What is a Verifier?
A **Verifier** is a configurable, optional verification module that analyzes the camera subject and returns either:
- a pass/fail decision, and/or
- derived attributes (e.g.: estimated age, estimated gender)
Each verifier can be **independently enabled or disabled**.
### Generic Verifier Settings
These settings are applied for all enabled verifiers.
| Setting | What it does |
|---|---|
| **FailOnMultipleFacesDetected** | If `true`, the verification fails when multiple faces are detected in the captured image. If `false`, the system always chooses the dominant face when multiple faces are detected. The default value is `false`. |
### Supported Verifiers
| Verifier | What it does | Configuration | Output |
|---|---|---|---|
| **Liveness** | Confirms the subject in front of the camera is a live human (not a spoof). | **Type**: `Disabled` or `Verification`<br>**Challenge type**: `Balanced` or `Advanced` (High Security) | **reLivenessResult**: `passed`, `failed`, or `not_executed` |
| **Face Recognition** | Extracts a face embedding from the captured face image and, depending on the configuration, stores it or compares it with the stored embeddings. | **Type**: `Disabled` or one of:<br>- `CalculationOnly`: stores the embedding and returns the faceId.<br>- `DuplicateVerification`: checks whether the embedding is already stored. If it is not, stores it and returns the associated faceId. Fails only if the embedding is already stored.<br>- `UniqueMatchVerification`: ensures the subject is uniquely represented by the provided external identifier (**reExternalId** is required). If the identifier is not present, the embedding is stored under that identifier. Fails if the embedding for the subject differs from the one already stored with the **reExternalId**, or if the embedding for the subject is already stored under a different **reExternalId**.<br>- `MatchVerification`: verifies that the subject exists in the collection and that the matched identity corresponds to the provided external identifier (**reExternalId** is required). If the external identifier is not present, the embedding is stored under that identifier. Fails if the embedding for the subject differs from the one already stored with the **reExternalId**. | **reFaceRecognitionResult**: `passed`, `failed`, or `not_executed`<br>**reFaceId**: the unique identifier of the stored face embedding |
| **Age** | Estimates the subject's age and (optionally) enforces an eligibility rule. | **Type**: `Disabled`, `CalculationOnly`, `ThresholdVerification`, or `RangeVerification`<br>**AgeThresholdConfig**: must be set if **Type** is `ThresholdVerification`: **Threshold** (int), **Direction**: `Above` or `Below` (the comparison is inclusive)<br>**AgeRangeConfig**: must be set if **Type** is `RangeVerification`: **Minimum** (int), **Maximum** (int) (the comparison is inclusive)<br>Note: age verification is designed for a single face; it fails if the image contains multiple faces | **reAgeVerificationResult**: `passed`, `failed`, or `not_executed`<br>**reAge**: the estimated age (int) |
| **Gender** | Estimates the subject's gender. | **Type**: `Disabled` or `CalculationOnly` | **reGenderVerificationResult**: `passed`, `failed`, or `not_executed`<br>**reGender**: the estimated gender (`male` or `female`) |
[!ref text="For more details about the settings, check out VerifEye Service API"](https://verifeye-docs.realeyes.ai/rest-api/verifeye-service-api/)
[!ref text="For more details about the parameters, check out the Parameters page"](https://verifeye-docs.realeyes.ai/redirect/parameters/)
## Redirect configurations
This section controls where the user is navigated after the verification flow finishes.
- **PassedUrl**: The URL to navigate to when the verification **passes**.
- **FailedUrl**: The URL to navigate to when the verification **fails**.
## Result parameter configurations
This section specifies the set of result fields to include in the redirect by appending them to the final URL as query string parameters. Only the configured variables are added.
### Configurable parameters
| Parameter | Result parameter | Type | Description |
|---|---|---|---|
| `IncludeCustomInputParameters` | All non-reserved parameters | `bool` | Appends custom (non-reserved) input parameters to the final redirect URL. |
| `IncludeSignature` | `reSignature` | `bool` | Appends the signature variable to the final redirect URL. |
| `IncludeSessionId` | `reVerificationSessionId` | `bool` | Appends the verification session ID to the final redirect URL. |
| `IncludeFaceId` | `reFaceId` | `bool` | Appends the face ID to the final redirect URL. |
| `IncludeAge` | `reAge` | `bool` | Appends the estimated age to the final redirect URL. |
| `IncludeGender` | `reGender` | `bool` | Appends the estimated gender to the final redirect URL. |
| `IncludeVerificationResult` | `reVerificationResult` | `bool` | Appends the overall verification result (passed/failed) to the final redirect URL. |
| `IncludeLivenessCheckResult` | `reLivenessResult` | `bool` | Appends the liveness check result to the final redirect URL. |
| `IncludeFaceRecognitionResult` | `reFaceRecognitionResult` | `bool` | Appends the face recognition result (passed/failed/not_executed) to the final redirect URL. |
| `IncludeAgeVerificationResult` | `reAgeVerificationResult` | `bool` | Appends the age verification result (passed/failed/not_executed) to the final redirect URL. |
| `IncludeGenderVerificationResult` | `reGenderVerificationResult` | `bool` | Appends the gender verification result (passed/failed/not_executed) to the final redirect URL. |
| `IncludeCorrelationId` | `reCorrelationId` | `bool` | Appends the correlation ID to the final redirect URL. |
| `IncludeTotalFaceCount` | `reTotalFaceCount` | `bool` | Appends the total number of detected faces to the final redirect URL. |
| `IncludeFailedReason` | `reFailedReason` | `bool` | Appends the failed reason to the final redirect URL. |
## Security configurations
This section groups security-related settings for the verification flow.
### Configurable security settings
| Parameter | Type | Description |
|---|---|---|
| `ForceSignedInput` | `bool` | When enabled, all input parameters must be signed. Requests with missing or invalid signatures are rejected. |
## Session Override configurations
This section controls whether session-level (runtime) requests are allowed to override the selected verification level configuration blocks.
If a specific override is not allowed, any provided session value is ignored and the project-level configuration remains in effect.
| Parameter | Type | Description |
|---|---|---|
| `AllowVerifierOverrides` | `bool` | Allows overriding verifier settings at session/runtime level. |
| `AllowResultParameterOverrides` | `bool` | Allows overriding which result parameters are appended to the final redirect URL at session/runtime level. |
| `AllowRedirectOverrides` | `bool` | Allows overriding the redirect settings (`PassedUrl` / `FailedUrl`) at session/runtime level. |
### Verifier Overrides
If `AllowVerifierOverrides` is `true` then adding one or more of the following query string parameters to the input URL will override the verification level settings for the current session.
| Key | Description |
|---|---|
| `reDisableLivenessVerifier` | If set to `true`, disables the liveness verifier for the session. |
| `reDisableFaceRecognitionVerifier` | If set to `true`, disables the face recognition verifier for the session. |
| `reDisableAgeVerifier` | If set to `true`, disables the age verifier for the session. |
| `reDisableGenderVerifier` | If set to `true`, disables the gender verifier for the session. |
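For example, a session URL that disables the age and gender verifiers could be built like this. This is a sketch: the `withOverrides` helper is an illustrative name, and the verification URL is the example URL used elsewhere on this page.

```javascript
// Append verifier override parameters to a verification URL.
// withOverrides is an illustrative helper name.
function withOverrides(verificationUrl, overrides) {
  const url = new URL(verificationUrl);
  for (const [key, value] of Object.entries(overrides)) {
    url.searchParams.set(key, String(value));
  }
  return url.toString();
}

const sessionUrl = withOverrides(
  "https://verifeye-service-eu.realeyes.ai/verification/3bb1fb37-9d3a-423f-b5bc-1790a329f117",
  { reDisableAgeVerifier: true, reDisableGenderVerifier: true }
);
// sessionUrl now carries reDisableAgeVerifier=true&reDisableGenderVerifier=true
```

The overrides take effect only when `AllowVerifierOverrides` is enabled on the verification configuration; otherwise the parameters are ignored.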
### Result Parameter Overrides
If `AllowResultParameterOverrides` is `true` then adding one or more of the following query string parameters to the input URL will override the verification level settings for the current session.
| Key | Description |
|---|---|
| `reIncludeCustomInputParameters` | Includes all non-reserved (custom) input parameters in the final redirect URL for the session. |
| `reIncludeSignature` | Includes the signature result parameter in the final redirect URL for the session. |
| `reIncludeSessionId` | Includes the session ID result parameter in the final redirect URL for the session. |
| `reIncludeFaceId` | Includes the face ID result parameter in the final redirect URL for the session. |
| `reIncludeAge` | Includes the estimated age result parameter in the final redirect URL for the session. |
| `reIncludeGender` | Includes the estimated gender result parameter in the final redirect URL for the session. |
| `reIncludeVerificationResult` | Includes the overall verification result parameter in the final redirect URL for the session. |
| `reIncludeLivenessResult` | Includes the liveness check result parameter in the final redirect URL for the session. |
| `reIncludeFaceRecognitionResult` | Includes the face recognition result parameter in the final redirect URL for the session. |
| `reIncludeAgeVerificationResult` | Includes the age verification result parameter in the final redirect URL for the session. |
| `reIncludeGenderVerificationResult` | Includes the gender verification result parameter in the final redirect URL for the session. |
| `reIncludeCorrelationId` | Includes the correlation ID result parameter in the final redirect URL for the session. |
| `reIncludeTotalFaceCount` | Includes the total detected face count result parameter in the final redirect URL for the session. |
| `reIncludeFailedReason` | Includes the failed reason result parameter in the final redirect URL for the session. |
### Redirect Overrides
If `AllowRedirectOverrides` is `true` then adding one or more of the following query string parameters to the input URL will override the verification level settings for the current session.
| Key | Description |
|---|---|
| `rePassedUrl` | Overrides the redirect target URL used when verification passes for the session. |
| `reFailedUrl` | Overrides the redirect target URL used when verification fails for the session. |
---
*Last updated: 2026-02-03*
Authentication
https://verifeye-docs.realeyes.ai/rest-api/authentication/
# Authentication
VerifEye REST APIs support two authentication methods.
---
## 1. API Key Authentication
VerifEye APIs use API Key authentication with the `Authorization` header.
### Format
**Header Format:**
```
Authorization: ApiKey <YOUR_API_KEY>
```
### Example
```bash
curl -X POST "https://demographic-estimation-api-eu.realeyes.ai/v1/demographic-estimation/get-age" \
  -H "Authorization: ApiKey <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{
"image": {
"bytes": "/9j/4AAQSkZJRgABAQEAYABgAAD..."
}
}'
```
---
## 2. JWT Bearer Token Authentication
JWT tokens provide time-limited authentication for client-side applications. Tokens are generated using the Security API's `/token/get` endpoint.
### Format
**Header Format:**
```
Authorization: Bearer <JWT_TOKEN>
```
### How to Get a JWT Token
**Step 1: Generate Token**
```bash
curl -X GET "https://security-api-eu.realeyes.ai/v1/token/get" \
  -H "Authorization: ApiKey <YOUR_API_KEY>"
```
**Response:**
```json
{
"token": "eyJhbGciOiJSUzUxMiIsInR5cCI6IkpXVCJ9...",
"expiresAt": "2026-01-27T12:00:00Z"
}
```
**Step 2: Use Token in API Requests**
```bash
curl -X POST "https://demographic-estimation-api-eu.realeyes.ai/v1/demographic-estimation/get-age" \
-H "Authorization: Bearer eyJhbGciOiJSUzUxMiIsInR5cCI6IkpXVCJ9..." \
-H "Content-Type: application/json" \
-d '{
"image": {
"bytes": "/9j/4AAQSkZJRgABAQEAYABgAAD..."
}
}'
```
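In a client-side application you would typically cache the token and refresh it shortly before `expiresAt`. The sketch below illustrates this; the `isTokenExpired` helper, the 60-second safety margin, and the caching scheme are illustrative assumptions, not part of the VerifEye API.

```javascript
// Decide whether a cached JWT should be refreshed, with a safety margin so
// the token does not expire mid-request. Helper name and margin are illustrative.
function isTokenExpired(expiresAt, now = new Date(), marginSeconds = 60) {
  return new Date(expiresAt).getTime() - marginSeconds * 1000 <= now.getTime();
}

let cached = null; // { token, expiresAt } from /v1/token/get

async function getToken(apiKey) {
  if (cached && !isTokenExpired(cached.expiresAt)) return cached.token;
  const response = await fetch("https://security-api-eu.realeyes.ai/v1/token/get", {
    headers: { Authorization: `ApiKey ${apiKey}` },
  });
  cached = await response.json();
  return cached.token;
}
```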
---
## Getting Your API Key
1. Sign up at the [VerifEye Developer Console](https://verifeye-console.realeyes.ai/)
2. Navigate to **Home → View API Keys** or **Profile image → API Key**
---
*Last updated: 2026-02-13*
Index
https://verifeye-docs.realeyes.ai/rest-api/demographic-estimation-api/
# Demographic Estimation API
## Overview
The Demographic Estimation API provides AI-powered age estimation and gender detection services for faces in images.
The Demographic Estimation API enables you to:
- Estimate the age of detected faces with uncertainty scores
- Detect the gender of faces in images
- Process multiple faces in a single image
- Utilize high-accuracy AI models for demographic analysis
## Base URLs
| Region | Base URL |
|--------|----------|
| **EU** | `https://demographic-estimation-api-eu.realeyes.ai/v1/` |
| **US** | `https://demographic-estimation-api-us.realeyes.ai/v1/` |
---
## API Endpoints
### Get Age
Estimates the age of faces detected in an image.
**Endpoint:** `POST /v1/demographic-estimation/get-age`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
"image": {
"bytes": "base64-encoded-image-string",
"url": null
},
"maxFaceCount": 1
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `image` | object | Yes | Image data provided as URL or Base64 encoded bytes |
| `image.url` | string (nullable) | No | URL of a JPEG or PNG image |
| `image.bytes` | string (nullable) | No | Base64 encoded binary JPEG or PNG image |
| `maxFaceCount` | integer | No | Maximum number of faces to be processed (default: 1) |
**Response Example:**
```json
{
"faces": [
{
"face": {
"confidence": 0.9987,
"boundingBox": {
"x": 120,
"y": 80,
"width": 200,
"height": 250
}
},
"age": {
"prediction": 28.5,
"uncertainty": 0.45
}
}
],
"unprocessedFaceCount": 0
}
```
**Response Fields:**
| Field Path | Type | Description |
|------------|------|-------------|
| `faces` | array (nullable) | The faces that were processed and their age estimation results |
| `faces[].face` | object | Face detection information |
| `faces[].face.confidence` | number | Face detection score with value range [0.0, 1.0] (higher is better) |
| `faces[].face.boundingBox` | object | Bounding box of the detected face |
| `faces[].face.boundingBox.x` | integer | Horizontal position of the detected face bounding box |
| `faces[].face.boundingBox.y` | integer | Vertical position of the detected face bounding box |
| `faces[].face.boundingBox.width` | integer | Width of the detected face bounding box |
| `faces[].face.boundingBox.height` | integer | Height of the detected face bounding box |
| `faces[].age` | object | Age estimation information |
| `faces[].age.prediction` | number (nullable) | Estimated age |
| `faces[].age.uncertainty` | number (nullable) | Uncertainty score of the estimation with value range [0.0, infinity); we recommend rejecting estimates with a value higher than 1.0 |
| `unprocessedFaceCount` | integer | The number of faces that were not processed due to the maximum face count limit |
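Applying the recommended uncertainty cutoff can be sketched as below. The `acceptedAges` helper name is illustrative; 1.0 is the rejection threshold recommended in the field table above.

```javascript
// Keep only age estimates whose uncertainty is at or below the recommended
// threshold. acceptedAges is an illustrative helper name.
function acceptedAges(response, maxUncertainty = 1.0) {
  return (response.faces || [])
    .filter((f) => f.age && f.age.uncertainty != null && f.age.uncertainty <= maxUncertainty)
    .map((f) => f.age.prediction);
}
```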
**Example Request:**
```bash
curl -X POST "https://demographic-estimation-api-eu.realeyes.ai/v1/demographic-estimation/get-age" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"image": {
"bytes": "/9j/4AAQSkZJRgABAQEAYABgAAD..."
},
"maxFaceCount": 5
}'
```
**Response Codes:**
- `200` - Success
- `400` - Bad Request - Invalid image format, missing required fields, or invalid parameters
- `401` - Unauthorized - Missing or invalid authentication
---
### Get Gender
Detects the gender of faces in an image.
**Endpoint:** `POST /v1/demographic-estimation/get-gender`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
"image": {
"bytes": "base64-encoded-image-string",
"url": null
},
"maxFaceCount": 1
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `image` | object | Yes | Image data provided as URL or Base64 encoded bytes |
| `image.url` | string (nullable) | No | URL of a JPEG or PNG image |
| `image.bytes` | string (nullable) | No | Base64 encoded binary JPEG or PNG image |
| `maxFaceCount` | integer | No | Maximum number of faces to be processed (default: 1) |
**Response Example:**
```json
{
"faces": [
{
"face": {
"confidence": 0.9987,
"boundingBox": {
"x": 120,
"y": 80,
"width": 200,
"height": 250
}
},
"gender": "Male"
}
],
"unprocessedFaceCount": 0
}
```
**Response Fields:**
| Field Path | Type | Description |
|------------|------|-------------|
| `faces` | array (nullable) | The faces that were processed and their gender detection results |
| `faces[].face` | object | Face detection information |
| `faces[].face.confidence` | number | Face detection score with value range [0.0, 1.0] (higher is better) |
| `faces[].face.boundingBox` | object | Bounding box of the detected face |
| `faces[].face.boundingBox.x` | integer | Horizontal position of the detected face bounding box |
| `faces[].face.boundingBox.y` | integer | Vertical position of the detected face bounding box |
| `faces[].face.boundingBox.width` | integer | Width of the detected face bounding box |
| `faces[].face.boundingBox.height` | integer | Height of the detected face bounding box |
| `faces[].gender` | string | Detected gender (Male or Female) |
| `unprocessedFaceCount` | integer | The number of faces that were not processed due to the maximum face count limit |
**Example Request:**
```bash
curl -X POST "https://demographic-estimation-api-eu.realeyes.ai/v1/demographic-estimation/get-gender" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"image": {
"bytes": "/9j/4AAQSkZJRgABAQEAYABgAAD..."
},
"maxFaceCount": 5
}'
```
**Response Codes:**
- `200` - Success
- `400` - Bad Request - Invalid image format, missing required fields, or invalid parameters
- `401` - Unauthorized - Missing or invalid authentication
---
### Health Check
Check the API health status.
**Endpoint:** `GET /v1/healthz`
**Authentication:** None required
**Response Example:**
```
2026-02-16T13:45:30.1234567Z
```
**Response Fields:**
| Name | Type | Description |
|------|------|-------------|
| (response body) | string | The server UTC time in ISO 8601 format |
**Example Request:**
```bash
curl -X GET "https://demographic-estimation-api-eu.realeyes.ai/v1/healthz"
```
**Response Codes:**
- `200` - API is healthy
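The health check body is a bare ISO 8601 timestamp, and as the example above shows it may carry seven fractional digits, while Python's `datetime` only supports microseconds. A small sketch of parsing it defensively (the function name is illustrative):

```python
import re
from datetime import datetime


def parse_healthz(body: str) -> datetime:
    """Parse the /v1/healthz response body (ISO 8601 UTC timestamp).

    Timestamps here may carry 7 fractional digits; Python's datetime
    supports at most 6 (microseconds), so truncate the extras first.
    """
    body = body.strip()
    body = re.sub(r"\.(\d{6})\d+", r".\1", body)  # 7+ fractional digits -> 6
    return datetime.fromisoformat(body.replace("Z", "+00:00"))


ts = parse_healthz("2026-02-16T13:45:30.1234567Z")
```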
---
## Common Response Codes
| Code | Description |
|------|-------------|
| `200` | Success |
| `400` | Bad Request - Invalid parameters |
| `401` | Unauthorized - Missing or invalid authentication |
| `403` | Forbidden - Valid authentication but account not found or insufficient permissions |
| `500` | Internal Server Error |
---
## Swagger Documentation
Interactive API documentation is available via Swagger UI:
- **EU**: [https://demographic-estimation-api-eu.realeyes.ai/swagger](https://demographic-estimation-api-eu.realeyes.ai/swagger)
- **US**: [https://demographic-estimation-api-us.realeyes.ai/swagger](https://demographic-estimation-api-us.realeyes.ai/swagger)
---
*Last updated: 2026-02-16*
Release Notes
https://verifeye-docs.realeyes.ai/rest-api/demographic-estimation-api/release-notes/
# Demographic Estimation API Release Notes
## Version History
### Version 1.0
**Release Date:** 2025-10-01
#### Features
- **Initial Release**: First public release of the Demographic Estimation API
---
*For the latest updates, visit the [VerifEye Developer Console](https://verifeye-console.realeyes.ai/)*
Index
https://verifeye-docs.realeyes.ai/rest-api/emotion-attention-api/
# Emotion & Attention API
## Overview
The Emotion & Attention API provides AI-powered facial emotion detection and attention analysis for understanding user engagement and emotional states.
The Emotion & Attention API enables you to:
- Detect multiple emotions (happiness, confusion, surprise, contempt, disgust, empathy)
- Analyze attention levels and determine if eyes are on screen
- Extract detailed facial landmark positions in 2D and 3D space
- Detect face and person presence in images
## Base URLs
| Region | Base URL |
|--------|----------|
| **EU** | `https://emotion-attention-api-eu.realeyes.ai/v1/` |
| **US** | `https://emotion-attention-api-us.realeyes.ai/v1/` |
---
## API Endpoints
### Detect Emotions and Attention
Returns whether a face was detected and, for the dominant face in the image, the detected emotions, attention state, and facial landmarks.
**Endpoint:** `POST /v1/emotion-attention/detect`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
"image": {
"bytes": "base64-encoded-image-string",
"url": null
}
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `image` | object | Yes | Image data provided as URL or Base64 encoded bytes |
| `image.url` | string (nullable) | No | URL of a JPEG or PNG image |
| `image.bytes` | string (nullable) | No | Base64 encoded binary JPEG or PNG image |
**Response Example:**
```json
{
"emotionsAttention": {
"hasFace": true,
"presence": true,
"eyesOnScreen": true,
"attention": true,
"confusion": false,
"contempt": false,
"disgust": false,
"happy": true,
"empathy": false,
"surprise": false
},
"landmarks": {
"scale": 1.23,
"roll": -2.5,
"yaw": 5.3,
"pitch": -1.2,
"translate": {
"x": 320.5,
"y": 240.8
},
"landmarks2D": [
{ "x": 310.2, "y": 235.6 },
{ "x": 330.8, "y": 236.1 }
],
"landmarks3D": [
{ "x": 0.12, "y": -0.05, "z": 0.98 },
{ "x": 0.15, "y": -0.04, "z": 0.97 }
],
"isGood": true
}
}
```
**Response Fields:**
| Field Path | Type | Description |
|------------|------|-------------|
| `emotionsAttention` | object | Results of analyzing image for facial presence, attention, and emotional states |
| `emotionsAttention.hasFace` | boolean (nullable) | Whether a face is detected in the image (null means it could not be determined reliably) |
| `emotionsAttention.presence` | boolean (nullable) | Whether a person is present in the image (null means it could not be determined reliably) |
| `emotionsAttention.eyesOnScreen` | boolean (nullable) | Whether the person's eyes are on the screen (null means it could not be determined reliably) |
| `emotionsAttention.attention` | boolean (nullable) | Whether the person is attentive (null means it could not be determined reliably) |
| `emotionsAttention.confusion` | boolean (nullable) | Whether confusion emotion is detected (null means it could not be determined reliably) |
| `emotionsAttention.contempt` | boolean (nullable) | Whether contempt is detected (null means it could not be determined reliably) |
| `emotionsAttention.disgust` | boolean (nullable) | Whether disgust is detected (null means it could not be determined reliably) |
| `emotionsAttention.happy` | boolean (nullable) | Whether happiness is detected (null means it could not be determined reliably) |
| `emotionsAttention.empathy` | boolean (nullable) | Whether empathy is detected (null means it could not be determined reliably) |
| `emotionsAttention.surprise` | boolean (nullable) | Whether surprise is detected (null means it could not be determined reliably) |
| `landmarks` | object | Result of facial landmark detection, including pose, scale, and landmark positions in 2D and 3D space |
| `landmarks.scale` | number | Scale of the face |
| `landmarks.roll` | number | Roll pose angle |
| `landmarks.yaw` | number | Yaw pose angle |
| `landmarks.pitch` | number | Pitch pose angle |
| `landmarks.translate` | object | Translation coordinates in 2D space |
| `landmarks.translate.x` | number | The X axis coordinate of the head center in image space |
| `landmarks.translate.y` | number | The Y axis coordinate of the head center in image space |
| `landmarks.landmarks2D` | array (nullable) | Position of the 49 landmarks, in image coordinates |
| `landmarks.landmarks2D[].x` | number | The X axis coordinate of a landmark point in 2D space |
| `landmarks.landmarks2D[].y` | number | The Y axis coordinate of a landmark point in 2D space |
| `landmarks.landmarks3D` | array (nullable) | Position of the 49 landmarks, in an un-scaled face-centered 3D space |
| `landmarks.landmarks3D[].x` | number | The X axis coordinate of a landmark point in 3D space |
| `landmarks.landmarks3D[].y` | number | The Y axis coordinate of a landmark point in 3D space |
| `landmarks.landmarks3D[].z` | number | The Z axis coordinate of a landmark point in 3D space |
| `landmarks.isGood` | boolean | Whether the tracking is good quality or not |
**Example Request:**
```bash
curl -X POST "https://emotion-attention-api-eu.realeyes.ai/v1/emotion-attention/detect" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"image": {
"bytes": "/9j/4AAQSkZJRgABAQEAYABgAAD..."
}
}'
```
**Response Codes:**
- `200` - Success
- `400` - Bad Request - Invalid image format, missing required fields, or invalid parameters
- `401` - Unauthorized - Missing or invalid authentication
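Because every `emotionsAttention` field is a nullable boolean, client code should treat `null` (decoded as `None` in Python) as a distinct "undetermined" state rather than as `False`. A minimal sketch of that tri-state handling (the helper name is illustrative):

```python
from typing import Optional


def describe_signal(name: str, value: Optional[bool]) -> str:
    """Render one nullable boolean from emotionsAttention.

    null means the signal could not be determined reliably,
    which is not the same as the signal being absent (False).
    """
    if value is None:
        return f"{name}: undetermined"
    return f"{name}: {'yes' if value else 'no'}"


emotions = {"hasFace": True, "attention": None, "happy": False}
lines = [describe_signal(k, v) for k, v in emotions.items()]
```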
---
### Health Check
Check the API health status.
**Endpoint:** `GET /v1/healthz`
**Authentication:** None required
**Response Example:**
```
2026-02-16T13:45:30.1234567Z
```
**Response Fields:**
| Name | Type | Description |
|------|------|-------------|
| (response body) | string | The server UTC time in ISO 8601 format |
**Example Request:**
```bash
curl -X GET "https://emotion-attention-api-eu.realeyes.ai/v1/healthz"
```
**Response Codes:**
- `200` - API is healthy
---
## Common Response Codes
| Code | Description |
|------|-------------|
| `200` | Success |
| `400` | Bad Request - Invalid parameters |
| `401` | Unauthorized - Missing or invalid authentication |
| `403` | Forbidden - Valid authentication but account not found or insufficient permissions |
| `500` | Internal Server Error |
---
## Swagger Documentation
Interactive API documentation is available via Swagger UI:
- **EU**: [https://emotion-attention-api-eu.realeyes.ai/swagger](https://emotion-attention-api-eu.realeyes.ai/swagger)
- **US**: [https://emotion-attention-api-us.realeyes.ai/swagger](https://emotion-attention-api-us.realeyes.ai/swagger)
---
*Last updated: 2026-02-16*
Release Notes
https://verifeye-docs.realeyes.ai/rest-api/emotion-attention-api/release-notes/
# Emotion & Attention API Release Notes
## Version History
### Version 1.0
**Release Date:** 2025-10-01
#### Features
- **Initial Release**: First public release of the Emotion & Attention API
---
*For the latest updates, visit the [VerifEye Developer Console](https://verifeye-console.realeyes.ai/)*
Index
https://verifeye-docs.realeyes.ai/rest-api/face-recognition-api/
# Face Recognition API
## Overview
The Face Recognition API provides AI-powered face recognition and management capabilities for building secure identity verification systems.
The Face Recognition API enables you to:
- Search for matching faces in your collections
- Index new faces to collections for future recognition
- Automatically search and index if not found (duplicate detection)
- Delete faces from collections
- Manage face recognition collections (create, retrieve, update, delete)
- Utilize high-accuracy AI models with configurable match thresholds
## Base URLs
| Region | Base URL |
|--------|----------|
| **EU** | `https://face-recognition-api-eu.realeyes.ai/v1/` |
| **US** | `https://face-recognition-api-us.realeyes.ai/v1/` |
---
## API Endpoints
### Get Collection
Retrieves a single collection by its identifier.
**Endpoint:** `GET /v1/collection/get`
**Authentication:** API Key or Bearer Token
**Query Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `collectionId` | string | Yes | The collection identifier |
**Response Example:**
```json
{
"recognitionCollection": {
"collectionId": "my-collection",
"description": "Collection for employee verification",
"createdAt": "2026-01-15T10:30:00Z",
"updatedAt": "2026-02-10T14:20:00Z"
}
}
```
**Response Fields:**
| Field Path | Type | Description |
|------------|------|-------------|
| `recognitionCollection` | object | The collection details |
| `recognitionCollection.collectionId` | string (nullable) | Identifier of the collection |
| `recognitionCollection.description` | string (nullable) | Optional textual description |
| `recognitionCollection.createdAt` | string | UTC timestamp when the collection was created |
| `recognitionCollection.updatedAt` | string | UTC timestamp when the collection was last updated |
**Example Request:**
```bash
curl -X GET "https://face-recognition-api-eu.realeyes.ai/v1/collection/get?collectionId=my-collection" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE"
```
**Response Codes:**
- `200` - Collection found
- `400` - Validation failure
- `401` - Authentication failure
- `404` - Collection not found
---
### Get All Collections
Lists all collections belonging to the authenticated account.
**Endpoint:** `GET /v1/collection/get-all`
**Authentication:** API Key or Bearer Token
**Response Example:**
```json
{
"collections": [
{
"collectionId": "my-collection",
"createdAt": "2026-01-15T10:30:00Z",
"updatedAt": "2026-02-10T14:20:00Z"
},
{
"collectionId": "employee-faces",
"createdAt": "2026-01-20T08:15:00Z",
"updatedAt": "2026-01-20T08:15:00Z"
}
]
}
```
**Response Fields:**
| Field Path | Type | Description |
|------------|------|-------------|
| `collections` | array (nullable) | Collections returned for the account (may be empty) |
| `collections[].collectionId` | string (nullable) | Identifier of the collection |
| `collections[].createdAt` | string | UTC timestamp when the collection was created |
| `collections[].updatedAt` | string | UTC timestamp when the collection was last updated |
**Example Request:**
```bash
curl -X GET "https://face-recognition-api-eu.realeyes.ai/v1/collection/get-all" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE"
```
**Response Codes:**
- `200` - Collections returned
- `400` - Validation failure
- `401` - Authentication failure
---
### Create Collection
Creates a new recognition collection.
**Endpoint:** `POST /v1/collection/create`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
"collectionId": "new-collection",
"description": "Collection for customer verification"
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `collectionId` | string | Yes | Unique identifier for the collection (max length 512) |
| `description` | string (nullable) | No | Optional description (max length 1024) |
**Response Example:**
```json
{
"collectionId": "new-collection"
}
```
**Response Fields:**
| Name | Type | Description |
|------|------|-------------|
| `collectionId` | string (nullable) | Identifier of the created collection |
**Example Request:**
```bash
curl -X POST "https://face-recognition-api-eu.realeyes.ai/v1/collection/create" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"collectionId": "new-collection",
"description": "Collection for customer verification"
}'
```
**Response Codes:**
- `200` - Collection created successfully
- `400` - Validation failure
- `401` - Authentication failure
- `409` - Collection already exists
---
### Update Collection
Updates an existing collection (only description is mutable).
**Endpoint:** `PUT /v1/collection/update`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
"collectionId": "my-collection",
"description": "Updated description for my collection"
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `collectionId` | string | Yes | Identifier of the collection to update |
| `description` | string (nullable) | No | New description (max length 1024) |
**Response Example:**
```json
{
"collectionId": "my-collection"
}
```
**Response Fields:**
| Name | Type | Description |
|------|------|-------------|
| `collectionId` | string (nullable) | Identifier of the updated collection |
**Example Request:**
```bash
curl -X PUT "https://face-recognition-api-eu.realeyes.ai/v1/collection/update" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"collectionId": "my-collection",
"description": "Updated description for my collection"
}'
```
**Response Codes:**
- `200` - Collection updated
- `400` - Validation failure
- `401` - Authentication failure
---
### Delete Collection
Deletes a collection by its identifier.
**Endpoint:** `DELETE /v1/collection/delete`
**Authentication:** API Key or Bearer Token
**Query Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `collectionId` | string | Yes | The collection identifier |
**Response Example:**
```json
{}
```
**Example Request:**
```bash
curl -X DELETE "https://face-recognition-api-eu.realeyes.ai/v1/collection/delete?collectionId=my-collection" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE"
```
**Response Codes:**
- `200` - Collection deleted (idempotent)
- `400` - Validation failure
- `401` - Authentication failure
---
### Search Face
Search for a face in a specified collection. Returns the face ID if a match is found. Faces in the image must be upright; sideways or upside-down faces are not supported.
**Endpoint:** `POST /v1/face/search`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
"image": {
"bytes": "base64-encoded-image-string",
"url": null
},
"collectionId": "my-collection",
"faceMatchThreshold": 80
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `image` | object | Yes | The image to search |
| `image.url` | string (nullable) | No | URL of a JPEG or PNG image |
| `image.bytes` | string (nullable) | No | Base64 encoded binary JPEG or PNG image |
| `collectionId` | string (nullable) | Yes | The id of the collection to search against |
| `faceMatchThreshold` | integer | No | Minimum confidence required for a face match to be returned. Default: **80**. Valid range: 0-100 |

**Threshold reference** (computed using extensive in-the-wild datasets):
- **95** corresponds to an FPR of 1e-6
- **90** corresponds to an FPR of 1e-5
- **80** corresponds to an FPR of 1e-4
- **70** corresponds to an FPR of 1e-3
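The threshold reference can be applied mechanically: pick the lowest documented threshold whose false-positive rate still meets your requirement. A small sketch (the mapping mirrors the documented values; the function name is illustrative):

```python
# Documented faceMatchThreshold values -> approximate false-positive rate.
THRESHOLD_FPR = {95: 1e-6, 90: 1e-5, 80: 1e-4, 70: 1e-3}


def threshold_for(max_fpr: float) -> int:
    """Pick the lowest documented threshold whose FPR is <= max_fpr.

    A lower threshold returns more matches, so among the thresholds
    that satisfy the FPR budget, the smallest one is the least strict
    choice that still meets it.
    """
    candidates = [t for t, fpr in THRESHOLD_FPR.items() if fpr <= max_fpr]
    if not candidates:
        raise ValueError(f"no documented threshold reaches FPR {max_fpr}")
    return min(candidates)
```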
**Response Example:**
```json
{
"face": {
"confidence": 0.9987,
"boundingBox": {
"x": 120,
"y": 80,
"width": 200,
"height": 250
}
},
"faceId": "face_abc123xyz789",
"hasFace": true,
"unprocessedFaceCount": 0
}
```
**Response Fields:**
| Field Path | Type | Description |
|------------|------|-------------|
| `face` | object | Detected face information |
| `face.confidence` | number | Face detection score with value range [0.0, 1.0] (higher is better) |
| `face.boundingBox` | object | Model for the bounding box of a detected face |
| `face.boundingBox.x` | integer | Horizontal position of the detected face bounding box |
| `face.boundingBox.y` | integer | Vertical position of the detected face bounding box |
| `face.boundingBox.width` | integer | Width of the detected face bounding box |
| `face.boundingBox.height` | integer | Height of the detected face bounding box |
| `faceId` | string (nullable) | The id of the face |
| `hasFace` | boolean | Indicates whether a face was found in the image |
| `unprocessedFaceCount` | integer | If more than one face is detected, only the dominant face is used for recognition; this count indicates how many faces were not processed |
**Example Request:**
```bash
curl -X POST "https://face-recognition-api-eu.realeyes.ai/v1/face/search" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"image": {
"bytes": "/9j/4AAQSkZJRgABAQEAYABgAAD..."
},
"collectionId": "my-collection",
"faceMatchThreshold": 80
}'
```
**Response Codes:**
- `200` - OK
- `400` - Bad Request
- `401` - Unauthorized
- `404` - Not Found
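A client typically needs to distinguish three outcomes of a search: no face in the image, a face with no match above the threshold, and a match. A minimal sketch of that branching, assuming (the docs do not state this explicitly) that `faceId` is null when no match clears the threshold:

```python
def interpret_search(resp: dict) -> str:
    """Classify a /v1/face/search response into one of three outcomes.

    Assumption: faceId is null when a face was detected but no match
    cleared the faceMatchThreshold.
    """
    if not resp.get("hasFace"):
        return "no-face"
    if resp.get("faceId") is None:
        return "no-match"
    return "match:" + resp["faceId"]
```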
---
### Index Face
Index a new face into a specified collection. Returns the generated face ID. Faces in the image must be upright; sideways or upside-down faces are not supported.
**Endpoint:** `POST /v1/face/index`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
"image": {
"bytes": "base64-encoded-image-string",
"url": null
},
"collectionId": "my-collection"
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `image` | object | Yes | The image to index |
| `image.url` | string (nullable) | No | URL of a JPEG or PNG image |
| `image.bytes` | string (nullable) | No | Base64 encoded binary JPEG or PNG image |
| `collectionId` | string (nullable) | Yes | The id of the collection to index the face into |
**Response Example:**
```json
{
"face": {
"confidence": 0.9987,
"boundingBox": {
"x": 120,
"y": 80,
"width": 200,
"height": 250
}
},
"faceId": "face_new456def012",
"hasFace": true,
"unprocessedFaceCount": 0
}
```
**Response Fields:**
| Field Path | Type | Description |
|------------|------|-------------|
| `face` | object | Detected face information |
| `face.confidence` | number | Face detection score with value range [0.0, 1.0] (higher is better) |
| `face.boundingBox` | object | Model for the bounding box of a detected face |
| `face.boundingBox.x` | integer | Horizontal position of the detected face bounding box |
| `face.boundingBox.y` | integer | Vertical position of the detected face bounding box |
| `face.boundingBox.width` | integer | Width of the detected face bounding box |
| `face.boundingBox.height` | integer | Height of the detected face bounding box |
| `faceId` | string (nullable) | The id of the face |
| `hasFace` | boolean | Indicates whether a face was found in the image |
| `unprocessedFaceCount` | integer | If more than one face is detected, only the dominant face is used for recognition; this count indicates how many faces were not processed |
**Example Request:**
```bash
curl -X POST "https://face-recognition-api-eu.realeyes.ai/v1/face/index" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"image": {
"bytes": "/9j/4AAQSkZJRgABAQEAYABgAAD..."
},
"collectionId": "my-collection"
}'
```
**Response Codes:**
- `200` - OK
- `400` - Bad Request
- `401` - Unauthorized
- `404` - Not Found
---
### Search or Index Face
Search for a face within a collection, or index the face if it is not found. This is useful for duplicate detection. Faces in the image must be upright; sideways or upside-down faces are not supported.
**Endpoint:** `POST /v1/face/search-or-index`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
"image": {
"bytes": "base64-encoded-image-string",
"url": null
},
"collectionId": "my-collection",
"faceMatchThreshold": 80
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `image` | object | Yes | The image to search or index |
| `image.url` | string (nullable) | No | URL of a JPEG or PNG image |
| `image.bytes` | string (nullable) | No | Base64 encoded binary JPEG or PNG image |
| `collectionId` | string (nullable) | Yes | The id of the collection to search against |
| `faceMatchThreshold` | integer | No | Minimum confidence required for a face match to be returned. Default: **80**. Valid range: 0-100 |

**Threshold reference** (computed using extensive in-the-wild datasets):
- **95** corresponds to an FPR of 1e-6
- **90** corresponds to an FPR of 1e-5
- **80** corresponds to an FPR of 1e-4
- **70** corresponds to an FPR of 1e-3
**Response Example:**
```json
{
"face": {
"confidence": 0.9987,
"boundingBox": {
"x": 120,
"y": 80,
"width": 200,
"height": 250
}
},
"faceId": "face_abc123xyz789",
"hasFace": true,
"unprocessedFaceCount": 0,
"resultSource": "Search"
}
```
**Response Fields:**
| Field Path | Type | Description |
|------------|------|-------------|
| `face` | object | Detected face information |
| `face.confidence` | number | Face detection score with value range [0.0, 1.0] (higher is better) |
| `face.boundingBox` | object | Model for the bounding box of a detected face |
| `face.boundingBox.x` | integer | Horizontal position of the detected face bounding box |
| `face.boundingBox.y` | integer | Vertical position of the detected face bounding box |
| `face.boundingBox.width` | integer | Width of the detected face bounding box |
| `face.boundingBox.height` | integer | Height of the detected face bounding box |
| `faceId` | string (nullable) | The id of the face |
| `hasFace` | boolean | Indicates whether a face was found in the image |
| `unprocessedFaceCount` | integer | If more than one face is detected, only the dominant face is used for recognition; this count indicates how many faces were not processed |
| `resultSource` | string | Specifies the source of a result in an operation. Possible values: "None", "Search", "Index" |
**Example Request:**
```bash
curl -X POST "https://face-recognition-api-eu.realeyes.ai/v1/face/search-or-index" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"image": {
"bytes": "/9j/4AAQSkZJRgABAQEAYABgAAD..."
},
"collectionId": "my-collection",
"faceMatchThreshold": 80
}'
```
**Response Codes:**
- `200` - OK
- `400` - Bad Request
- `401` - Unauthorized
- `404` - Not Found
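For duplicate detection, the `resultSource` field is the key signal: `"Search"` means the face already existed in the collection, `"Index"` means it was newly added. A minimal sketch of that check (the helper name is illustrative):

```python
def is_duplicate(resp: dict) -> bool:
    """Interpret a /v1/face/search-or-index response for duplicate detection.

    resultSource "Search" means the face matched an existing entry in
    the collection; "Index" means it was not found and was newly indexed.
    """
    return resp.get("resultSource") == "Search"
```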
---
### Delete Face
Delete a face from the specified collection.
**Endpoint:** `DELETE /v1/face/delete`
**Authentication:** API Key or Bearer Token
**Query Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `collectionId` | string | Yes | The id of the collection |
| `faceId` | string | Yes | The id of the face to be deleted |
**Response Example:**
```json
{}
```
**Example Request:**
```bash
curl -X DELETE "https://face-recognition-api-eu.realeyes.ai/v1/face/delete?collectionId=my-collection&faceId=face_abc123xyz789" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE"
```
**Response Codes:**
- `200` - OK
- `400` - Bad Request
- `401` - Unauthorized
- `404` - Not Found
---
### Health Check
Check the API health status.
**Endpoint:** `GET /v1/healthz`
**Authentication:** None required
**Response Example:**
```
2026-02-16T10:30:45Z
```
**Response Fields:**
| Name | Type | Description |
|------|------|-------------|
| (response body) | string | The server UTC time in ISO 8601 format |
**Example Request:**
```bash
curl -X GET "https://face-recognition-api-eu.realeyes.ai/v1/healthz"
```
**Response Codes:**
- `200` - API is healthy
---
## Common Response Codes
| Code | Description |
|------|-------------|
| `200` | Success |
| `400` | Bad Request - Invalid parameters |
| `401` | Unauthorized - Missing or invalid authentication |
| `403` | Forbidden - Valid authentication but account not found or insufficient permissions |
| `404` | Not Found - Resource not found |
| `500` | Internal Server Error |
---
## Swagger Documentation
Interactive API documentation is available via Swagger UI:
- **EU**: [https://face-recognition-api-eu.realeyes.ai/swagger](https://face-recognition-api-eu.realeyes.ai/swagger)
- **US**: [https://face-recognition-api-us.realeyes.ai/swagger](https://face-recognition-api-us.realeyes.ai/swagger)
---
*Last updated: 2026-02-16*
Release Notes
https://verifeye-docs.realeyes.ai/rest-api/face-recognition-api/release-notes/
# Face Recognition API Release Notes
## Version History
### Version 1.0
**Release Date:** 2025-10-01
#### Features
- **Initial Release**: First public release of the Face Recognition API
---
*For the latest updates, visit the [VerifEye Developer Console](https://verifeye-console.realeyes.ai/)*
Index
https://verifeye-docs.realeyes.ai/rest-api/face-verification-api/
# Face Verification API
## Overview
The Face Verification API provides face detection, embedding extraction, and face comparison services for identity verification and authentication use cases.
The Face Verification API enables you to:
- Detect faces in images with bounding boxes and confidence scores
- Extract face embeddings from images for identity verification
- Compare face embeddings to verify if two faces belong to the same person
- Process multiple faces in a single image
- Utilize high-accuracy AI models for face verification
## Base URLs
| Region | Base URL |
|--------|----------|
| **EU** | `https://face-verification-api-eu.realeyes.ai/v1/` |
| **US** | `https://face-verification-api-us.realeyes.ai/v1/` |
---
## API Endpoints
### Detect Faces
Returns a list of detected faces in the provided image with their respective bounding boxes.
**Endpoint:** `POST /v1/face-verification/detect-faces`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
"image": {
"bytes": "base64-encoded-image-string",
"url": null
},
"maxFaceCount": 10
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `image` | object | Yes | The image to process |
| `image.url` | string (nullable) | No | URL of a JPEG or PNG image |
| `image.bytes` | string (nullable) | No | Base64 encoded binary JPEG or PNG image |
| `maxFaceCount` | integer | No | Maximum number of faces to detect in the image |
**Response Example:**
```json
{
"faces": [
{
"confidence": 0.9876,
"boundingBox": {
"x": 120,
"y": 80,
"width": 200,
"height": 250
}
}
],
"unprocessedFaceCount": 0
}
```
**Response Fields:**
| Field Path | Type | Description |
|------------|------|-------------|
| `faces` | array (nullable) | Faces found on the image |
| `faces[].confidence` | number | Face detection score with value range [0.0, 1.0] (higher is better) |
| `faces[].boundingBox` | object | Model for the bounding box of a detected face |
| `faces[].boundingBox.x` | integer | Horizontal position of the detected face bounding box |
| `faces[].boundingBox.y` | integer | Vertical position of the detected face bounding box |
| `faces[].boundingBox.width` | integer | Width of the detected face bounding box |
| `faces[].boundingBox.height` | integer | Height of the detected face bounding box |
| `unprocessedFaceCount` | integer | Number of faces found in the image but not returned because the `maxFaceCount` request parameter filtered them out |
**Example Request:**
```bash
curl -X POST "https://face-verification-api-eu.realeyes.ai/v1/face-verification/detect-faces" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"image": {
"bytes": "/9j/4AAQSkZJRgABAQEAYABgAAD..."
},
"maxFaceCount": 10
}'
```
**Response Codes:**
- `200` - Returns the face detection results
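Responses can contain several detections with varying confidence. A minimal sketch of selecting the most confident face above a quality cutoff (the helper name and the 0.9 cutoff are illustrative, not API-mandated):

```python
def best_face(resp: dict, min_confidence: float = 0.9):
    """Pick the highest-confidence detection from a detect-faces
    response, ignoring detections below min_confidence.

    Returns None when no detection clears the cutoff (faces may
    also be null in the response, hence the `or []`).
    """
    faces = resp.get("faces") or []
    good = [f for f in faces if f["confidence"] >= min_confidence]
    return max(good, key=lambda f: f["confidence"], default=None)


sample = {
    "faces": [
        {"confidence": 0.9876, "boundingBox": {"x": 120, "y": 80, "width": 200, "height": 250}},
        {"confidence": 0.41, "boundingBox": {"x": 10, "y": 5, "width": 40, "height": 50}},
    ],
    "unprocessedFaceCount": 0,
}
best = best_face(sample)
```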
---
### Get Face Embeddings
Returns a list of face embeddings for all the detected faces in the provided image.
**Endpoint:** `POST /v1/face-verification/get-face-embeddings`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
"image": {
"bytes": "base64-encoded-image-string",
"url": null
},
"maxFaceCount": 1
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `image` | object | Yes | The image to process |
| `image.url` | string (nullable) | No | URL of a JPEG or PNG image |
| `image.bytes` | string (nullable) | No | Base64 encoded binary JPEG or PNG image |
| `maxFaceCount` | integer | No | Maximum number of faces to extract embeddings for |
**Response Example:**
```json
{
"faces": [
{
"face": {
"confidence": 0.9876,
"boundingBox": {
"x": 120,
"y": 80,
"width": 200,
"height": 250
}
},
"embedding": [0.123, -0.456, 0.789]
}
],
"unprocessedFaceCount": 0
}
```
**Response Fields:**
| Field Path | Type | Description |
|------------|------|-------------|
| `faces` | array (nullable) | Faces found on the image |
| `faces[].face` | object | Model for face detection |
| `faces[].face.confidence` | number | Face detection score with value range [0.0, 1.0] (higher is better) |
| `faces[].face.boundingBox` | object | Model for the bounding box of a detected face |
| `faces[].face.boundingBox.x` | integer | Horizontal position of the detected face bounding box |
| `faces[].face.boundingBox.y` | integer | Vertical position of the detected face bounding box |
| `faces[].face.boundingBox.width` | integer | Width of the detected face bounding box |
| `faces[].face.boundingBox.height` | integer | Height of the detected face bounding box |
| `faces[].embedding` | array (nullable) | Face verification embedding of the face |
| `unprocessedFaceCount` | integer | Number of faces found in the image for which no embedding was calculated because the `maxFaceCount` request parameter filtered them out |
**Example Request:**
```bash
curl -X POST "https://face-verification-api-eu.realeyes.ai/v1/face-verification/get-face-embeddings" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"image": {
"bytes": "/9j/4AAQSkZJRgABAQEAYABgAAD..."
},
"maxFaceCount": 1
}'
```
**Response Codes:**
- `200` - Returns the face embedding results
---
### Compare Face Embeddings
Returns the similarity between two face embeddings as an integer in the range [-1, 100].
**Endpoint:** `POST /v1/face-verification/compare-face-embeddings`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
"embedding1": [0.123, -0.456, 0.789],
"embedding2": [0.125, -0.450, 0.792]
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `embedding1` | array (nullable) | Yes | First embedding to compare |
| `embedding2` | array (nullable) | Yes | Second embedding to compare |
**Response Example:**
```json
{
"similarity": 85
}
```
**Response Fields:**
| Name | Type | Description |
|------|------|-------------|
| `similarity` | integer | Similarity between the two embeddings, in the range [-1, 100] (higher is better). Reject any match with a similarity below 70 |

**Threshold reference** (computed using extensive in-the-wild datasets):
- **95** corresponds to an FPR of 1e-6 (or better)
- **90** corresponds to an FPR of 1e-5
- **80** corresponds to an FPR of 1e-4
- **70** corresponds to an FPR of 1e-3
**Example Request:**
```bash
curl -X POST "https://face-verification-api-eu.realeyes.ai/v1/face-verification/compare-face-embeddings" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"embedding1": [0.123, -0.456, 0.789],
"embedding2": [0.125, -0.450, 0.792]
}'
```
**Response Codes:**
- `200` - Returns the similarity result
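The threshold reference above translates directly into an accept/reject policy. The sketch below (illustrative, not part of the API) maps a similarity score to a decision and the approximate false-positive-rate bound it implies:

```python
# FPR thresholds from the reference above, ordered strictest first.
THRESHOLDS = [
    (95, 1e-6),
    (90, 1e-5),
    (80, 1e-4),
    (70, 1e-3),
]

def classify_match(similarity):
    """Return (accepted, approximate FPR bound) for a similarity score."""
    for threshold, fpr in THRESHOLDS:
        if similarity >= threshold:
            return True, fpr
    return False, None  # similarity below 70: reject the match

accepted, fpr_bound = classify_match(85)
```

Pick the threshold that matches your application's tolerance for false positives; 70 is the documented minimum for accepting a match.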
---
### Health Check
Check the API health status.
**Endpoint:** `GET /v1/healthz`
**Authentication:** None required
**Response Example:**
```
2026-02-16T10:30:45Z
```
**Response Fields:**
| Name | Type | Description |
|------|------|-------------|
| (response body) | string | The server UTC time in ISO 8601 format |
**Example Request:**
```bash
curl -X GET "https://face-verification-api-eu.realeyes.ai/v1/healthz"
```
**Response Codes:**
- `200` - API is healthy
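The health check returns a plain-text ISO 8601 timestamp rather than JSON. A small Python helper (a sketch, assuming the response body is exactly the timestamp shown above) can parse it into a timezone-aware `datetime`; note that `datetime` accepts at most 6 fractional-second digits, while the server may return 7:

```python
from datetime import datetime, timezone

def parse_healthz(body):
    """Parse the plain-text ISO 8601 UTC timestamp returned by GET /v1/healthz."""
    text = body.strip()
    if text.endswith("Z"):
        text = text[:-1]
    if "." in text:
        # Truncate fractional seconds to the 6 digits datetime supports.
        head, frac = text.split(".", 1)
        text = head + "." + frac[:6]
    return datetime.fromisoformat(text).replace(tzinfo=timezone.utc)

server_time = parse_healthz("2026-02-16T10:30:45Z")
```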
---
## Common Response Codes
| Code | Description |
|------|-------------|
| `200` | Success |
| `400` | Bad Request - Invalid parameters |
| `401` | Unauthorized - Missing or invalid authentication |
| `403` | Forbidden - Valid authentication but account not found or insufficient permissions |
| `404` | Not Found - Resource not found |
| `500` | Internal Server Error |
---
## Swagger Documentation
Interactive API documentation is available via Swagger UI:
- **EU**: [https://face-verification-api-eu.realeyes.ai/swagger](https://face-verification-api-eu.realeyes.ai/swagger)
- **US**: [https://face-verification-api-us.realeyes.ai/swagger](https://face-verification-api-us.realeyes.ai/swagger)
---
*Last updated: 2026-02-16*
Release Notes
https://verifeye-docs.realeyes.ai/rest-api/face-verification-api/release-notes/
# Face Verification API Release Notes
## Version History
### Version 1.0
**Release Date:** 2025-10-01
#### Features
- **Initial Release**: First public release of the Face Verification API
---
*For the latest updates, visit the [VerifEye Developer Console](https://verifeye-console.realeyes.ai/)*
Image Requirements
https://verifeye-docs.realeyes.ai/rest-api/image-requirements/
# Image Requirements
This document describes the image requirements for VerifEye REST APIs.
---
| Condition | Requirement |
|-----------|-------------|
| **Image format** | PNG or JPEG colour image (RGB) |
| **Face position** | Face should be centered and upright in the image |
| **Face size** | The minimum face size is 150x150 pixels |
| **Image file size** | Min 50 KB / Max 2000 KB |
| **Image size** | Min 300x300 pixels / Max 2000x2000 pixels |
| **Face-to-image ratio** | The face bounding box should cover 20% to 80% of the image |
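These requirements can be checked client-side before uploading. The sketch below is illustrative: it interprets the face-to-image ratio as bounding-box area over image area (an assumption; the table does not specify area vs. linear ratio), and it skips face position/centering, which would require an actual face detector.

```python
def check_image_requirements(width, height, file_size_kb, face_w, face_h):
    """Return the list of violated requirements (empty = image acceptable)."""
    problems = []
    if not (300 <= width <= 2000 and 300 <= height <= 2000):
        problems.append("image size must be 300x300 to 2000x2000 pixels")
    if not (50 <= file_size_kb <= 2000):
        problems.append("file size must be 50 KB to 2000 KB")
    if face_w < 150 or face_h < 150:
        problems.append("face must be at least 150x150 pixels")
    if not (0.20 <= (face_w * face_h) / (width * height) <= 0.80):
        problems.append("face box must cover 20% to 80% of the image")
    return problems
```

Rejecting bad inputs early saves a round trip that would otherwise end in a `400` response.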
---
*Last updated: 2026-01-27*
Index
https://verifeye-docs.realeyes.ai/rest-api/
# VerifEye REST API
The VerifEye REST API provides secure, scalable, and region-specific endpoints for authentication, token management, and verification services.
## Getting Started
### 1. Create Your Account
Sign up at the [VerifEye Developer Console](https://verifeye-console.realeyes.ai/) to get started with the VerifEye REST APIs.
### 2. Get Your API Key
1. Log in to the [VerifEye Developer Console](https://verifeye-console.realeyes.ai/)
2. Navigate to **Settings → API Keys**
3. Generate a new API key for your project
4. Store your API key securely (never commit to version control)
### 3. Choose Your Region
VerifEye APIs are available in two regions:
- **EU (Europe)**: `https://*-api-eu.realeyes.ai/`
- **US (United States)**: `https://*-api-us.realeyes.ai/`
Choose the region closest to your users for optimal performance.
### 4. Authenticate Your Requests
All API requests require authentication. See the [Authentication](https://verifeye-docs.realeyes.ai/rest-api/authentication/) documentation for details on API Key and JWT Bearer Token authentication methods.
---
## Available APIs
### [Security API](https://verifeye-docs.realeyes.ai/rest-api/security-api/)
Authentication and token management services for secure API access.
### [Face Verification API](https://verifeye-docs.realeyes.ai/rest-api/face-verification-api/)
Face detection, embedding extraction, and face comparison services for identity verification.
### [Demographic Estimation API](https://verifeye-docs.realeyes.ai/rest-api/demographic-estimation-api/)
AI-powered age estimation and gender detection services for demographic analysis.
### [Emotion & Attention API](https://verifeye-docs.realeyes.ai/rest-api/emotion-attention-api/)
AI-powered facial emotion detection and attention analysis for understanding user engagement and emotional states.
### [Face Recognition API](https://verifeye-docs.realeyes.ai/rest-api/face-recognition-api/)
AI-powered face recognition and management for building secure identity verification systems with duplicate detection.
### [VerifEye API](https://verifeye-docs.realeyes.ai/rest-api/verifeye-service-api/)
Verification configuration management and signature validation services for building secure identity verification workflows.
---
## Common Response Codes
| Code | Description |
|------|-------------|
| `200` | Success |
| `400` | Bad Request - Invalid parameters |
| `401` | Unauthorized - Invalid or missing authentication |
| `403` | Forbidden - Valid authentication but insufficient permissions |
| `500` | Internal Server Error |
---
*Last updated: 2026-01-27*
Index
https://verifeye-docs.realeyes.ai/rest-api/liveness-detection-api/
# Liveness Detection API
## Overview
The Liveness Detection API provides AI-powered face liveness detection to distinguish between real, live faces and spoofing attempts.
The Liveness Detection API enables you to:
- Detect liveness from images with face detection and confidence scores
- Verify liveness from videos with best frame extraction
- Identify presentation attacks (photos, videos, masks)
- Process passive liveness detection without user interaction
- Utilize high-accuracy AI models trained on diverse spoofing scenarios
## Base URLs
| Region | Base URL |
|--------|----------|
| **EU** | `https://liveness-detection-api-eu.realeyes.ai/v1/` |
| **US** | `https://liveness-detection-api-us.realeyes.ai/v1/` |
---
## API Endpoints
### Check Video Liveness
Checks the liveness of a video. The video should contain a single person's face in an upright position; sideways or upside-down faces are not supported. You can send either a base64-encoded video or a video URL. The preferred input format is WebM (VP8 codec). The maximum video length is 10 seconds and the maximum video size is 2,000,000 bytes.
**Endpoint:** `POST /v1/liveness/check`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
"video": {
"bytes": "base64-encoded-video-string",
"url": null
},
"includeBestFrameWithFace": false
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `video` | object | Yes | The video to process |
| `video.url` | string (nullable) | No | URL of a video |
| `video.bytes` | string (nullable) | No | Base 64 string encoded binary video |
| `includeBestFrameWithFace` | boolean | No | Whether the best frame containing a detected face should be included in the results |
**Response Example:**
```json
{
"face": {
"confidence": 0.9987,
"boundingBox": {
"x": 120,
"y": 80,
"width": 200,
"height": 250
}
},
"hasFace": true,
"unprocessedFaceCount": 0,
"isLive": true,
"bestFrameWithFace": null
}
```
**Response Fields:**
| Field Path | Type | Description |
|------------|------|-------------|
| `face` | object | Model for face detection |
| `face.confidence` | number | Face detection score with value range [0.0, 1.0] (higher is better) |
| `face.boundingBox` | object | Model for the bounding box of a detected face |
| `face.boundingBox.x` | integer | Horizontal position of the detected face bounding box |
| `face.boundingBox.y` | integer | Vertical position of the detected face bounding box |
| `face.boundingBox.width` | integer | Width of the detected face bounding box |
| `face.boundingBox.height` | integer | Height of the detected face bounding box |
| `hasFace` | boolean | Indicates whether a face was found in the video |
| `unprocessedFaceCount` | integer | If more than one face is detected, only the dominant face is used for the liveness check; this count indicates how many faces were not processed |
| `isLive` | boolean (nullable) | Whether the face in the video is live. Null if no face was detected |
| `bestFrameWithFace` | string (nullable) | Best frame with a face, extracted from the video and returned in Base64 format |
**Example Request:**
```bash
curl -X POST "https://liveness-detection-api-eu.realeyes.ai/v1/liveness/check" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"video": {
"bytes": "UklGRiQAAABXQVZFZm10IBAAAAABAAEA..."
},
"includeBestFrameWithFace": false
}'
```
**Response Codes:**
- `200` - Returns the liveness check result
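The size limit can be enforced client-side before the upload. The Python sketch below (illustrative; it assumes the 2,000,000-byte limit applies to the raw video rather than its base64 encoding) builds the request body and rejects oversized input locally instead of waiting for a `400`:

```python
import base64

MAX_VIDEO_BYTES = 2_000_000  # documented limit for the liveness check

def build_liveness_request(video_bytes, include_best_frame=False):
    """Build the request body for POST /v1/liveness/check from raw video bytes."""
    if len(video_bytes) > MAX_VIDEO_BYTES:
        raise ValueError(f"video is {len(video_bytes)} bytes; "
                         f"maximum is {MAX_VIDEO_BYTES}")
    return {
        "video": {"bytes": base64.b64encode(video_bytes).decode("ascii"),
                  "url": None},
        "includeBestFrameWithFace": include_best_frame,
    }
```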
---
### Health Check
Check the API health status.
**Endpoint:** `GET /v1/healthz`
**Authentication:** None required
**Response Example:**
```
2026-02-16T10:30:45Z
```
**Response Fields:**
| Name | Type | Description |
|------|------|-------------|
| (response body) | string | The server UTC time in ISO 8601 format |
**Example Request:**
```bash
curl -X GET "https://liveness-detection-api-eu.realeyes.ai/v1/healthz"
```
**Response Codes:**
- `200` - API is healthy
---
## Common Response Codes
| Code | Description |
|------|-------------|
| `200` | Success |
| `400` | Bad Request - Invalid parameters |
| `401` | Unauthorized - Missing or invalid authentication |
| `403` | Forbidden - Valid authentication but account not found or insufficient permissions |
| `404` | Not Found - Resource not found |
| `500` | Internal Server Error |
---
## Swagger Documentation
Interactive API documentation is available via Swagger UI:
- **EU**: [https://liveness-detection-api-eu.realeyes.ai/swagger](https://liveness-detection-api-eu.realeyes.ai/swagger)
- **US**: [https://liveness-detection-api-us.realeyes.ai/swagger](https://liveness-detection-api-us.realeyes.ai/swagger)
---
*Last updated: 2026-02-16*
Release Notes
https://verifeye-docs.realeyes.ai/rest-api/liveness-detection-api/release-notes/
# Liveness Detection API Release Notes
## Version History
### Version 1.0
**Release Date:** 2025-10-01
#### Features
- **Initial Release**: First public release of the Liveness Detection API
---
*For the latest updates, visit the [VerifEye Developer Console](https://verifeye-console.realeyes.ai/)*
Index
https://verifeye-docs.realeyes.ai/rest-api/security-api/
# Security API
## Overview
The VerifEye Security API provides authentication and token management services for secure access to the VerifEye ecosystem.
The Security API enables you to:
- Generate JWT tokens for client-side authentication
- Retrieve server UTC time for synchronization
- Validate API credentials
- Monitor API health status
!!!tip
This API is intended for **server-side use only**. Using it directly from the client side would expose it to potential misuse and abuse.
!!!
## Base URLs
| Region | Base URL |
|--------|----------|
| **EU** | `https://security-api-eu.realeyes.ai/v1/` |
| **US** | `https://security-api-us.realeyes.ai/v1/` |
---
## API Endpoints
### Get JWT Token
Generate a JWT token for authentication.
**Endpoint:** `GET /v1/token/get`
**Authentication:** API Key only (Bearer tokens are not allowed for this endpoint)
**Query Parameters:**
| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `expirationInSeconds` | integer | No | 900 | The JWT token expiration in seconds. Minimum value is 60. Maximum value is 3600. |
**Response Example:**
```json
{
"expirationUtcTime": "2026-01-26T14:30:00.000Z",
"token": "eyJhbGciOiJSUzUxMiIsInR5cCI6IkpXVCJ9..."
}
```
**Response Fields:**
| Name | Type | Description |
|------|------|-------------|
| `expirationUtcTime` | string (datetime ISO 8601) | The UTC time when the token expires |
| `token` | string | The authentication token used to authorize requests. |
**Example Request:**
```bash
curl -X GET "https://security-api-eu.realeyes.ai/v1/token/get?expirationInSeconds=3600" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE"
```
**Response Codes:**
- `200` - Success
- `401` - Unauthorized (invalid or missing API key)
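A JWT's payload segment is base64url-encoded JSON, so clients can read claims such as the expiry without any secret. The sketch below is a generic JWT inspector, not a VerifEye-specific API; decoding a token this way does not prove it is authentic, since only the server can verify the signature.

```python
import base64
import json

def jwt_claims(token):
    """Decode a JWT's payload segment without verifying its signature."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

This is useful, for example, to decide client-side whether a cached token is close to its `expirationUtcTime` and should be refreshed.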
---
### Get Server UTC Time
Retrieve the current server time in UTC for time synchronization.
**Endpoint:** `GET /v1/datetime/utc-now`
**Authentication:** API Key or Bearer Token
**Response Example:**
```json
{
"utcNow": "2026-01-26T13:45:30.1234567Z"
}
```
**Response Fields:**
| Name | Type | Description |
|------|------|-------------|
| `utcNow` | string (datetime ISO 8601) | The current server UTC time in ISO 8601 format |
**Example Request:**
```bash
curl -X GET "https://security-api-eu.realeyes.ai/v1/datetime/utc-now" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE"
```
**Response Codes:**
- `200` - Success
- `401` - Unauthorized
- `403` - Forbidden (valid auth but account not found)
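A typical use of this endpoint is measuring how far the local clock drifts from the server. The sketch below (illustrative only) parses the `utcNow` value and returns the skew in seconds; note that `datetime.fromisoformat` accepts at most 6 fractional-second digits while the server returns 7, so the extra digit is truncated.

```python
from datetime import datetime, timezone

def clock_skew_seconds(server_utc_now, local_now):
    """Seconds the local clock is ahead of the server (negative = behind)."""
    text = server_utc_now.rstrip("Z")
    if "." in text:
        # Truncate fractional seconds to the 6 digits datetime supports.
        head, frac = text.split(".", 1)
        text = head + "." + frac[:6]
    server = datetime.fromisoformat(text).replace(tzinfo=timezone.utc)
    return (local_now - server).total_seconds()

local = datetime(2026, 1, 26, 13, 45, 32, tzinfo=timezone.utc)
skew = clock_skew_seconds("2026-01-26T13:45:30.1234567Z", local)
```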
---
### Health Check
Check the API health status.
**Endpoint:** `GET /v1/healthz`
**Authentication:** None required
**Response Example:**
```
2026-01-26T13:45:30.1234567Z
```
**Response Content:**
The server UTC time (ISO 8601)
**Example Request:**
```bash
curl -X GET "https://security-api-eu.realeyes.ai/v1/healthz"
```
**Response Codes:**
- `200` - API is healthy
---
## Common Response Codes
| Code | Description |
|------|-------------|
| `200` | Success |
| `400` | Bad Request - Invalid parameters |
| `401` | Unauthorized - Missing or invalid authentication |
| `403` | Forbidden - Valid authentication but account not found or insufficient permissions |
| `500` | Internal Server Error |
---
## Swagger Documentation
Interactive API documentation is available via Swagger UI:
- **EU**: [https://security-api-eu.realeyes.ai/swagger](https://security-api-eu.realeyes.ai/swagger)
- **US**: [https://security-api-us.realeyes.ai/swagger](https://security-api-us.realeyes.ai/swagger)
---
*Last updated: 2026-01-26*
Release Notes
https://verifeye-docs.realeyes.ai/rest-api/security-api/release-notes/
# Security API Release Notes
## Version History
### Version 1.0
**Release Date:** 2025-10-01
#### Features
- **Initial Release**: First public release of the Security API
---
*For the latest updates, visit the [VerifEye Developer Console](https://verifeye-console.realeyes.ai/)*
Index
https://verifeye-docs.realeyes.ai/rest-api/verifeye-service-api/
# VerifEye Service API
## Overview
The VerifEye Service API provides endpoints to manage verification configurations and to create and validate signed query strings.
The VerifEye Service API enables you to:
- Create, retrieve, list, update and delete verification configurations
- Configure verification settings for liveness, age, gender, and face recognition
- Create signed query strings for secure API requests
- Validate signed query strings to ensure request integrity
- Manage redirect URLs and result parameters
- Control session-level configuration overrides
!!!tip
This API is intended for **server-side use only**. Using it directly from the client side would expose it to potential misuse and abuse.
!!!
## Base URLs
| Region | Base URL |
|--------|----------|
| **EU** | `https://verifeye-service-api-eu.realeyes.ai/v1/` |
| **US** | `https://verifeye-service-api-us.realeyes.ai/v1/` |
---
## API Endpoints
### Create Signature
Creates a signed query string based on the query string provided in the request body.
**Endpoint:** `POST /v1/signature/create`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
"queryString": "key1=value1&key2=value2...keyN=valueN"
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `queryString` | string | Yes | The full query string to be signed |
**Response Example:**
```json
{
"signature": "THE_SIGNATURE",
"signedQueryString": "key1=value1&key2=value2...keyN=valueN&reSignature=THE_SIGNATURE"
}
```
**Response Fields:**
| Name | Type | Description |
|------|------|-------------|
| `signature` | string | The signature for the query string |
| `signedQueryString` | string | The signed query string that includes the signature |
**Example Request:**
```bash
curl -X POST "https://verifeye-service-api-eu.realeyes.ai/v1/signature/create" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"queryString": "key1=value1&key2=value2...keyN=valueN"
}'
```
**Response Codes:**
- `200` - Success
- `400` - Bad Request - Invalid parameters
- `401` - Unauthorized - Missing or invalid authentication
---
### Validate Signature
Validates the signature of the request using the query string and the signature inside the query string.
**Endpoint:** `POST /v1/signature/validate`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
"queryString": "key1=value1&key2=value2...keyN=valueN&reSignature=THE_SIGNATURE"
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `queryString` | string | Yes | The full query string of the request to validate |
**Response Example:**
```json
{
"result": "Valid"
}
```
**Response Fields:**
| Name | Type | Description |
|------|------|-------------|
| `result` | string | The result of the signature validation (Valid or Invalid) |
**Example Request:**
```bash
curl -X POST "https://verifeye-service-api-eu.realeyes.ai/v1/signature/validate" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"queryString": "key1=value1&key2=value2...keyN=valueN&reSignature=THE_SIGNATURE"
}'
```
**Response Codes:**
- `200` - Success
- `400` - Bad Request - Invalid parameters
- `401` - Unauthorized - Missing or invalid authentication
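When a verification redirect lands on your server, the signed query string arrives as part of the URL. The sketch below (illustrative; the URL and parameter names besides `reSignature` are made up) extracts the raw query string for the validate call. It is important to pass the query string through untouched, since re-encoding or reordering parameters would change the bytes that were signed.

```python
from urllib.parse import urlsplit

def query_string_for_validation(redirect_url):
    """Extract the raw query string (including reSignature) from a redirect URL."""
    return urlsplit(redirect_url).query

qs = query_string_for_validation(
    "https://example.com/landing?sessionId=abc&result=passed&reSignature=SIG")
```

The resulting string is what you place in the `queryString` field of the `POST /v1/signature/validate` request body.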
---
### Get Verification
Returns a verification configuration by `verificationId`.
**Endpoint:** `GET /v1/verification/get`
**Authentication:** API Key or Bearer Token
**Query Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `verificationId` | string | Yes | The ID of the verification configuration to retrieve |
**Response Example:**
```json
{
"verification": {
"created": "2025-11-07T00:00:00Z",
"lastModified": "2025-11-07T00:00:00Z",
"verificationId": "39e00c96-6431-4b2e-8707-2fb80e561fbf",
"name": "Test Verification",
"verificationConfig": {
"verifierConfigs": {
"liveness": {
"type": "Verification",
"challengeType": "Balanced"
},
"age": {
"type": "CalculationOnly"
},
"faceRecognition": {
"type": "DuplicateVerification"
},
"gender": {
"type": "CalculationOnly"
}
},
"redirectConfig": {
"passedUrl": "https://example.com?result=passed",
"failedUrl": "https://example.com?result=failed"
},
"resultParameterConfig": {
"includeSignature": true,
"includeSessionId": true,
"includeFaceId": true,
"includeAge": true,
"includeGender": true,
"includeVerificationResult": true,
"includeCustomInputParameters": true,
"includeLivenessCheckResult": true,
"includeAgeVerificationResult": true,
"includeGenderVerificationResult": false,
"includeFaceRecognitionResult": true,
"includeCorrelationId": true,
"includeTotalFaceCount": true,
"includeFailedReason": true
},
"securityConfig": {
"forceSignedInput": false
},
"sessionOverrideConfig": {
"allowVerifierOverrides": true,
"allowResultParameterOverrides": false,
"allowRedirectOverrides": false
}
}
}
}
```
**Response Fields:**
| Field Path | Type | Description |
|------------|------|-------------|
| `verification` | object | Detailed verification configuration model including metadata and nested configuration sections |
| `verification.verificationId` | string | Unique identifier of the verification configuration |
| `verification.name` | string | Human readable name of the verification configuration |
| `verification.created` | string | UTC timestamp when the configuration was created (ISO 8601 format) |
| `verification.lastModified` | string | UTC timestamp of the last modification (ISO 8601 format) |
| `verification.verificationConfig` | object | Root container for all adjustable verification configuration sections |
| `verification.verificationConfig.verifierConfigs` | object | Collection of verifier specific configuration sections |
| `verification.verificationConfig.verifierConfigs.liveness` | object | Liveness verifier configuration model |
| `verification.verificationConfig.verifierConfigs.liveness.type` | string | Supported liveness verification modes (Disabled, Verification) |
| `verification.verificationConfig.verifierConfigs.liveness.challengeType` | string | Supported liveness verification challenges (Balanced, Advanced) |
| `verification.verificationConfig.verifierConfigs.age` | object | Age verifier configuration model |
| `verification.verificationConfig.verifierConfigs.age.type` | string | Supported age verification modes (Disabled, CalculationOnly, ThresholdVerification, RangeVerification) |
| `verification.verificationConfig.verifierConfigs.age.thresholdConfig` | object | Configuration for threshold based age verification |
| `verification.verificationConfig.verifierConfigs.age.thresholdConfig.threshold` | integer | Age threshold value used for verification |
| `verification.verificationConfig.verifierConfigs.age.thresholdConfig.direction` | string | Direction used when evaluating age threshold comparisons (Above, Below) |
| `verification.verificationConfig.verifierConfigs.age.rangeConfig` | object | Configuration for range based age verification |
| `verification.verificationConfig.verifierConfigs.age.rangeConfig.minimum` | integer | Minimum allowed age |
| `verification.verificationConfig.verifierConfigs.age.rangeConfig.maximum` | integer | Maximum allowed age |
| `verification.verificationConfig.verifierConfigs.faceRecognition` | object | Face recognition verifier configuration model |
| `verification.verificationConfig.verifierConfigs.faceRecognition.type` | string | Supported face recognition verification modes (Disabled, CalculationOnly, DuplicateVerification, UniqueMatchVerification, MatchVerification) |
| `verification.verificationConfig.verifierConfigs.gender` | object | Gender verifier configuration model |
| `verification.verificationConfig.verifierConfigs.gender.type` | string | Supported gender verification modes (Disabled, CalculationOnly) |
| `verification.verificationConfig.verifierConfigs.commonSettings` | object | Common settings configuration model |
| `verification.verificationConfig.verifierConfigs.commonSettings.failOnMultipleFacesDetected` | boolean | Fail the verification if multiple faces are detected in the input image |
| `verification.verificationConfig.redirectConfig` | object | Redirect URLs used after different verification outcomes |
| `verification.verificationConfig.redirectConfig.passedUrl` | string | URL to redirect to when verification succeeds |
| `verification.verificationConfig.redirectConfig.failedUrl` | string | URL to redirect to when verification fails |
| `verification.verificationConfig.resultParameterConfig` | object | Controls which result parameters are included in API responses |
| `verification.verificationConfig.resultParameterConfig.includeSignature` | boolean | Include a signature value in the response |
| `verification.verificationConfig.resultParameterConfig.includeSessionId` | boolean | Include the session identifier in the response |
| `verification.verificationConfig.resultParameterConfig.includeFaceId` | boolean | Include a face identifier (if available) in the response |
| `verification.verificationConfig.resultParameterConfig.includeAge` | boolean | Include the estimated age |
| `verification.verificationConfig.resultParameterConfig.includeGender` | boolean | Include the estimated gender |
| `verification.verificationConfig.resultParameterConfig.includeVerificationResult` | boolean | Include the overall verification result status |
| `verification.verificationConfig.resultParameterConfig.includeCustomInputParameters` | boolean | Include any custom input parameters that were part of the request |
| `verification.verificationConfig.resultParameterConfig.includeLivenessCheckResult` | boolean | Include the liveness check result |
| `verification.verificationConfig.resultParameterConfig.includeAgeVerificationResult` | boolean | Include the age verification result |
| `verification.verificationConfig.resultParameterConfig.includeGenderVerificationResult` | boolean | Include the gender verification result |
| `verification.verificationConfig.resultParameterConfig.includeFaceRecognitionResult` | boolean | Include the face recognition verification result |
| `verification.verificationConfig.resultParameterConfig.includeCorrelationId` | boolean | Include the correlation identifier |
| `verification.verificationConfig.resultParameterConfig.includeTotalFaceCount` | boolean | Include the total face count detected |
| `verification.verificationConfig.resultParameterConfig.includeFailedReason` | boolean | Include the failed reason |
| `verification.verificationConfig.securityConfig` | object | Security related options influencing verification request validation |
| `verification.verificationConfig.securityConfig.forceSignedInput` | boolean | When true, input payloads must be signed |
| `verification.verificationConfig.sessionOverrideConfig` | object | Settings that allow selectively overriding verification behavior per session |
| `verification.verificationConfig.sessionOverrideConfig.allowVerifierOverrides` | boolean | Allows overriding which verifiers are enabled/disabled |
| `verification.verificationConfig.sessionOverrideConfig.allowResultParameterOverrides` | boolean | Allows overriding which result parameters are returned |
| `verification.verificationConfig.sessionOverrideConfig.allowRedirectOverrides` | boolean | Allows overriding redirect URLs |
**Example Request:**
```bash
curl -X GET "https://verifeye-service-api-eu.realeyes.ai/v1/verification/get?verificationId=39e00c96-6431-4b2e-8707-2fb80e561fbf" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE"
```
**Response Codes:**
- `200` - Success
- `400` - Bad Request - Invalid parameters
- `401` - Unauthorized - Missing or invalid authentication
- `404` - Not Found - Verification configuration not found
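When consuming this response, a common task is working out which verifiers a configuration actually enables. The helper below is a sketch that assumes the response shape shown above: it treats any verifier whose `type` is present and not `Disabled` as enabled, and skips non-verifier entries such as `commonSettings` (which has no `type`).

```python
def enabled_verifiers(verification):
    """Return the sorted names of verifiers whose type is not "Disabled"."""
    configs = (verification.get("verificationConfig", {})
                           .get("verifierConfigs", {}))
    return sorted(
        name for name, cfg in configs.items()
        if isinstance(cfg, dict) and cfg.get("type") not in (None, "Disabled")
    )

# Trimmed-down version of the response shown above.
sample = {
    "verificationConfig": {
        "verifierConfigs": {
            "liveness": {"type": "Verification", "challengeType": "Balanced"},
            "age": {"type": "CalculationOnly"},
            "faceRecognition": {"type": "DuplicateVerification"},
            "gender": {"type": "Disabled"},
        }
    }
}
```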
---
### Get All Verifications
Returns all verification configurations for the account.
**Endpoint:** `GET /v1/verification/get-all`
**Authentication:** API Key or Bearer Token
**Response Example:**
```json
{
"verifications": [
{
"verificationId": "39e00c96-6431-4b2e-8707-2fb80e561fbf",
"name": "Test Verification",
"created": "2026-02-01T00:00:00Z",
"lastModified": "2026-02-01T00:00:00Z"
}
]
}
```
**Response Fields:**
| Name | Type | Description |
|------|------|-------------|
| `verifications` | array | Collection of verification configuration overview models |
| `verifications[].verificationId` | string | Unique identifier of the verification configuration |
| `verifications[].name` | string | Human readable name of the configuration |
| `verifications[].created` | string | UTC timestamp when the configuration was created (ISO 8601 format) |
| `verifications[].lastModified` | string | UTC timestamp of the last modification (ISO 8601 format) |
**Example Request:**
```bash
curl -X GET "https://verifeye-service-api-eu.realeyes.ai/v1/verification/get-all" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE"
```
**Response Codes:**
- `200` - Success
- `401` - Unauthorized - Missing or invalid authentication
---
### Create Verification
Create a new verification configuration. If you require idempotent behavior — where the configuration is created when it does not yet exist or updated when it does — use [Create or Update Verification](#create-or-update-verification) instead.
!!!danger IMPORTANT
A verification is supposed to be a reusable configuration that can be used across multiple verification sessions and with the same face collection. This follows a one-time setup concept: create it once and reuse it many times. For more details, see [One-time setup: Create once, reuse many times](https://verifeye-docs.realeyes.ai/redirect/concept/#one-time-setup-create-once-reuse-many-times).
!!!
**Endpoint:** `POST /v1/verification/create`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
"name": "Test Verification",
"verificationConfig": {
"verifierConfigs": {
"liveness": {
"type": "Verification",
"challengeType": "Balanced"
},
"age": {
"type": "CalculationOnly"
},
"faceRecognition": {
"type": "DuplicateVerification"
},
"gender": {
"type": "CalculationOnly"
}
},
"redirectConfig": {
"passedUrl": "https://example.com?result=passed",
"failedUrl": "https://example.com?result=failed"
},
"resultParameterConfig": {
"includeSignature": true,
"includeSessionId": true,
"includeFaceId": true,
"includeAge": true,
"includeGender": true,
"includeVerificationResult": true,
"includeCustomInputParameters": true,
"includeLivenessCheckResult": true,
"includeAgeVerificationResult": true,
"includeGenderVerificationResult": false,
"includeFaceRecognitionResult": true,
"includeCorrelationId": true,
"includeTotalFaceCount": true,
"includeFailedReason": true
},
"securityConfig": {
"forceSignedInput": false
},
"sessionOverrideConfig": {
"allowVerifierOverrides": true,
"allowResultParameterOverrides": false,
"allowRedirectOverrides": false
}
}
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Human readable name for the verification configuration |
| `verificationConfig` | object | Yes | Root container for all adjustable verification configuration sections |
| `verificationConfig.verifierConfigs` | object | No | Collection of verifier specific configuration sections |
| `verificationConfig.verifierConfigs.liveness` | object | No | Liveness verifier configuration model |
| `verificationConfig.verifierConfigs.liveness.type` | string | No | Supported liveness verification modes (Disabled, Verification) |
| `verificationConfig.verifierConfigs.liveness.challengeType` | string | No | Supported liveness verification challenges (Balanced, Advanced) |
| `verificationConfig.verifierConfigs.age` | object | No | Age verifier configuration model |
| `verificationConfig.verifierConfigs.age.type` | string | No | Supported age verification modes (Disabled, CalculationOnly, ThresholdVerification, RangeVerification) |
| `verificationConfig.verifierConfigs.age.thresholdConfig` | object | No | Configuration for threshold based age verification |
| `verificationConfig.verifierConfigs.age.thresholdConfig.threshold` | integer | No | Age threshold value used for verification |
| `verificationConfig.verifierConfigs.age.thresholdConfig.direction` | string | No | Direction used when evaluating age threshold comparisons (Above, Below) |
| `verificationConfig.verifierConfigs.age.rangeConfig` | object | No | Configuration for range based age verification |
| `verificationConfig.verifierConfigs.age.rangeConfig.minimum` | integer | No | Minimum allowed age |
| `verificationConfig.verifierConfigs.age.rangeConfig.maximum` | integer | No | Maximum allowed age |
| `verificationConfig.verifierConfigs.faceRecognition` | object | No | Face recognition verifier configuration model |
| `verificationConfig.verifierConfigs.faceRecognition.type` | string | No | Supported face recognition verification modes (Disabled, CalculationOnly, DuplicateVerification, UniqueMatchVerification, MatchVerification) |
| `verificationConfig.verifierConfigs.gender` | object | No | Gender verifier configuration model |
| `verificationConfig.verifierConfigs.gender.type` | string | No | Supported gender verification modes (Disabled, CalculationOnly) |
| `verificationConfig.verifierConfigs.commonSettings` | object | No | Common settings configuration model |
| `verificationConfig.verifierConfigs.commonSettings.failOnMultipleFacesDetected` | boolean | No | Fail the verification if multiple faces are detected in the input image |
| `verificationConfig.redirectConfig` | object | No | Redirect URLs used after different verification outcomes |
| `verificationConfig.redirectConfig.passedUrl` | string | No | URL to redirect to when verification succeeds |
| `verificationConfig.redirectConfig.failedUrl` | string | No | URL to redirect to when verification fails |
| `verificationConfig.resultParameterConfig` | object | No | Controls which result parameters are included in API responses |
| `verificationConfig.resultParameterConfig.includeSignature` | boolean | No | Include a signature value in the response |
| `verificationConfig.resultParameterConfig.includeSessionId` | boolean | No | Include the session identifier in the response |
| `verificationConfig.resultParameterConfig.includeFaceId` | boolean | No | Include a face identifier (if available) in the response |
| `verificationConfig.resultParameterConfig.includeAge` | boolean | No | Include the estimated age |
| `verificationConfig.resultParameterConfig.includeGender` | boolean | No | Include the estimated gender |
| `verificationConfig.resultParameterConfig.includeVerificationResult` | boolean | No | Include the overall verification result status |
| `verificationConfig.resultParameterConfig.includeCustomInputParameters` | boolean | No | Include any custom input parameters that were part of the request |
| `verificationConfig.resultParameterConfig.includeLivenessCheckResult` | boolean | No | Include the liveness check result |
| `verificationConfig.resultParameterConfig.includeAgeVerificationResult` | boolean | No | Include the age verification result |
| `verificationConfig.resultParameterConfig.includeGenderVerificationResult` | boolean | No | Include the gender verification result |
| `verificationConfig.resultParameterConfig.includeFaceRecognitionResult` | boolean | No | Include the face recognition verification result |
| `verificationConfig.resultParameterConfig.includeCorrelationId` | boolean | No | Include the correlation identifier |
| `verificationConfig.resultParameterConfig.includeTotalFaceCount` | boolean | No | Include the total face count detected |
| `verificationConfig.resultParameterConfig.includeFailedReason` | boolean | No | Include the failed reason |
| `verificationConfig.securityConfig` | object | No | Security related options influencing verification request validation |
| `verificationConfig.securityConfig.forceSignedInput` | boolean | No | When true, input payloads must be signed |
| `verificationConfig.sessionOverrideConfig` | object | No | Settings that allow selectively overriding verification behavior per session |
| `verificationConfig.sessionOverrideConfig.allowVerifierOverrides` | boolean | No | Allows overriding which verifiers are enabled/disabled |
| `verificationConfig.sessionOverrideConfig.allowResultParameterOverrides` | boolean | No | Allows overriding which result parameters are returned |
| `verificationConfig.sessionOverrideConfig.allowRedirectOverrides` | boolean | No | Allows overriding redirect URLs |
**Response Example:**
```json
{
"verificationId": "39e00c96-6431-4b2e-8707-2fb80e561fbf"
}
```
**Response Fields:**
| Name | Type | Description |
|------|------|-------------|
| `verificationId` | string | Identifier of the newly created verification configuration |
**Example Request:**
```bash
curl -X POST "https://verifeye-service-api-eu.realeyes.ai/v1/verification/create" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"name": "Test Verification",
"verificationConfig": {
"verifierConfigs": {
"liveness": {
"type": "Verification",
"challengeType": "Balanced"
},
"age": {
"type": "CalculationOnly"
},
"faceRecognition": {
"type": "DuplicateVerification"
},
"gender": {
"type": "CalculationOnly"
}
},
"redirectConfig": {
"passedUrl": "https://example.com?result=passed",
"failedUrl": "https://example.com?result=failed"
},
"resultParameterConfig": {
"includeSignature": true,
"includeSessionId": true,
"includeFaceId": true,
"includeAge": true,
"includeGender": true,
"includeVerificationResult": true,
"includeCustomInputParameters": true,
"includeLivenessCheckResult": true,
"includeAgeVerificationResult": true,
"includeGenderVerificationResult": false,
"includeFaceRecognitionResult": true,
"includeCorrelationId": true,
"includeTotalFaceCount": true,
"includeFailedReason": true
},
"securityConfig": {
"forceSignedInput": false
},
"sessionOverrideConfig": {
"allowVerifierOverrides": true,
"allowResultParameterOverrides": false,
"allowRedirectOverrides": false
}
}
}'
```
**Response Codes:**
- `200` - Success
- `400` - Bad Request - Invalid parameters
- `401` - Unauthorized - Missing or invalid authentication
- `409` - Conflict - A verification configuration with the same name already exists
---
### Create or Update Verification
Create a new verification configuration, or update the existing one if a verification with the same name already exists.
!!!danger IMPORTANT
A verification is intended to be a reusable configuration that can be applied across multiple verification sessions and with the same face collection. This follows a one-time setup concept: create it once and reuse it many times. For more details, see [One-time setup: Create once, reuse many times](https://verifeye-docs.realeyes.ai/redirect/concept/#one-time-setup-create-once-reuse-many-times).
!!!
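The one-time-setup pattern can also be enforced client-side by caching the returned identifier. A minimal Python sketch, assuming a caller-supplied `create_fn` that stands in for the actual HTTP call (the helper name and cache are illustrative, not part of the API):

```python
# Illustrative client-side cache enforcing "create once, reuse many times".
# create_fn stands in for the POST to /v1/verification/create-or-update.
_verification_ids: dict[str, str] = {}

def get_or_create_verification(name: str, create_fn) -> str:
    """Return the cached verificationId for `name`, invoking create_fn
    at most once per configuration name."""
    if name not in _verification_ids:
        _verification_ids[name] = create_fn(name)
    return _verification_ids[name]
```

With this in place, every session setup path can call `get_or_create_verification` and only the first call ever reaches the API.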
**Endpoint:** `POST /v1/verification/create-or-update`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
"name": "Test Verification",
"verificationConfig": {
"verifierConfigs": {
"liveness": {
"type": "Verification",
"challengeType": "Balanced"
},
"age": {
"type": "CalculationOnly"
},
"faceRecognition": {
"type": "DuplicateVerification"
},
"gender": {
"type": "CalculationOnly"
}
},
"redirectConfig": {
"passedUrl": "https://example.com?result=passed",
"failedUrl": "https://example.com?result=failed"
},
"resultParameterConfig": {
"includeSignature": true,
"includeSessionId": true,
"includeFaceId": true,
"includeAge": true,
"includeGender": true,
"includeVerificationResult": true,
"includeCustomInputParameters": true,
"includeLivenessCheckResult": true,
"includeAgeVerificationResult": true,
"includeGenderVerificationResult": false,
"includeFaceRecognitionResult": true,
"includeCorrelationId":true,
"includeTotalFaceCount":true,
"includeFailedReason":true
},
"securityConfig": {
"forceSignedInput": false
},
"sessionOverrideConfig": {
"allowVerifierOverrides": true,
"allowResultParameterOverrides": false,
"allowRedirectOverrides": false
}
}
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Human-readable name for the verification configuration |
| `verificationConfig` | object | Yes | Root container for all adjustable verification configuration sections |
| `verificationConfig.verifierConfigs` | object | No | Collection of verifier specific configuration sections |
| `verificationConfig.verifierConfigs.liveness` | object | No | Liveness verifier configuration model |
| `verificationConfig.verifierConfigs.liveness.type` | string | No | Supported liveness verification modes (Disabled, Verification) |
| `verificationConfig.verifierConfigs.liveness.challengeType` | string | No | Supported liveness verification challenges (Balanced, Advanced) |
| `verificationConfig.verifierConfigs.age` | object | No | Age verifier configuration model |
| `verificationConfig.verifierConfigs.age.type` | string | No | Supported age verification modes (Disabled, CalculationOnly, ThresholdVerification, RangeVerification) |
| `verificationConfig.verifierConfigs.age.thresholdConfig` | object | No | Configuration for threshold based age verification |
| `verificationConfig.verifierConfigs.age.thresholdConfig.threshold` | integer | No | Age threshold value used for verification |
| `verificationConfig.verifierConfigs.age.thresholdConfig.direction` | string | No | Direction used when evaluating age threshold comparisons (Above, Below) |
| `verificationConfig.verifierConfigs.age.rangeConfig` | object | No | Configuration for range based age verification |
| `verificationConfig.verifierConfigs.age.rangeConfig.minimum` | integer | No | Minimum allowed age |
| `verificationConfig.verifierConfigs.age.rangeConfig.maximum` | integer | No | Maximum allowed age |
| `verificationConfig.verifierConfigs.faceRecognition` | object | No | Face recognition verifier configuration model |
| `verificationConfig.verifierConfigs.faceRecognition.type` | string | No | Supported face recognition verification modes (Disabled, CalculationOnly, DuplicateVerification, UniqueMatchVerification, MatchVerification) |
| `verificationConfig.verifierConfigs.gender` | object | No | Gender verifier configuration model |
| `verificationConfig.verifierConfigs.gender.type` | string | No | Supported gender verification modes (Disabled, CalculationOnly) |
| `verificationConfig.verifierConfigs.commonSettings` | object | No | Common settings configuration model |
| `verificationConfig.verifierConfigs.commonSettings.failOnMultipleFacesDetected` | boolean | No | Fail the verification if multiple faces are detected in the input image |
| `verificationConfig.redirectConfig` | object | No | Redirect URLs used after different verification outcomes |
| `verificationConfig.redirectConfig.passedUrl` | string | No | URL to redirect to when verification succeeds |
| `verificationConfig.redirectConfig.failedUrl` | string | No | URL to redirect to when verification fails |
| `verificationConfig.resultParameterConfig` | object | No | Controls which result parameters are included in API responses |
| `verificationConfig.resultParameterConfig.includeSignature` | boolean | No | Include a signature value in the response |
| `verificationConfig.resultParameterConfig.includeSessionId` | boolean | No | Include the session identifier in the response |
| `verificationConfig.resultParameterConfig.includeFaceId` | boolean | No | Include a face identifier (if available) in the response |
| `verificationConfig.resultParameterConfig.includeAge` | boolean | No | Include the estimated age |
| `verificationConfig.resultParameterConfig.includeGender` | boolean | No | Include the estimated gender |
| `verificationConfig.resultParameterConfig.includeVerificationResult` | boolean | No | Include the overall verification result status |
| `verificationConfig.resultParameterConfig.includeCustomInputParameters` | boolean | No | Include any custom input parameters that were part of the request |
| `verificationConfig.resultParameterConfig.includeLivenessCheckResult` | boolean | No | Include the liveness check result |
| `verificationConfig.resultParameterConfig.includeAgeVerificationResult` | boolean | No | Include the age verification result |
| `verificationConfig.resultParameterConfig.includeGenderVerificationResult` | boolean | No | Include the gender verification result |
| `verificationConfig.resultParameterConfig.includeFaceRecognitionResult` | boolean | No | Include the face recognition verification result |
| `verificationConfig.resultParameterConfig.includeCorrelationId` | boolean | No | Include the correlation identifier |
| `verificationConfig.resultParameterConfig.includeTotalFaceCount` | boolean | No | Include the total face count detected |
| `verificationConfig.resultParameterConfig.includeFailedReason` | boolean | No | Include the failed reason |
| `verificationConfig.securityConfig` | object | No | Security related options influencing verification request validation |
| `verificationConfig.securityConfig.forceSignedInput` | boolean | No | When true, input payloads must be signed |
| `verificationConfig.sessionOverrideConfig` | object | No | Settings that allow selectively overriding verification behavior per session |
| `verificationConfig.sessionOverrideConfig.allowVerifierOverrides` | boolean | No | Allows overriding which verifiers are enabled/disabled |
| `verificationConfig.sessionOverrideConfig.allowResultParameterOverrides` | boolean | No | Allows overriding which result parameters are returned |
| `verificationConfig.sessionOverrideConfig.allowRedirectOverrides` | boolean | No | Allows overriding redirect URLs |
**Response Example:**
```json
{
"verificationId": "39e00c96-6431-4b2e-8707-2fb80e561fbf",
"wasCreated": true
}
```
**Response Fields:**
| Name | Type | Description |
|------|------|-------------|
| `verificationId` | string | Identifier of the created or updated verification configuration |
| `wasCreated` | boolean | Indicates whether the verification configuration was newly created (true) or an existing one with the same name was updated (false) |
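Because the endpoint is idempotent, deployment scripts can branch on `wasCreated` to log what actually happened. A small sketch of interpreting the response body (the function name is hypothetical):

```python
import json

def describe_create_or_update(body: str) -> str:
    """Summarize a create-or-update response: wasCreated distinguishes a
    freshly created configuration from an update of an existing one."""
    data = json.loads(body)
    action = "created" if data["wasCreated"] else "updated"
    return f"verification {data['verificationId']} {action}"
```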
**Example Request:**
```bash
curl -X POST "https://verifeye-service-api-eu.realeyes.ai/v1/verification/create-or-update" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"name": "Test Verification",
"verificationConfig": {
"verifierConfigs": {
"liveness": {
"type": "Verification",
"challengeType": "Balanced"
},
"age": {
"type": "CalculationOnly"
},
"faceRecognition": {
"type": "DuplicateVerification"
},
"gender": {
"type": "CalculationOnly"
}
},
"redirectConfig": {
"passedUrl": "https://example.com?result=passed",
"failedUrl": "https://example.com?result=failed"
},
"resultParameterConfig": {
"includeSignature": true,
"includeSessionId": true,
"includeFaceId": true,
"includeAge": true,
"includeGender": true,
"includeVerificationResult": true,
"includeCustomInputParameters": true,
"includeLivenessCheckResult": true,
"includeAgeVerificationResult": true,
"includeGenderVerificationResult": false,
"includeFaceRecognitionResult": true,
"includeCorrelationId":true,
"includeTotalFaceCount":true,
"includeFailedReason":true
},
"securityConfig": {
"forceSignedInput": false
},
"sessionOverrideConfig": {
"allowVerifierOverrides": true,
"allowResultParameterOverrides": false,
"allowRedirectOverrides": false
}
}
}'
```
**Response Codes:**
- `200` - Success
- `400` - Bad Request - Invalid parameters
- `401` - Unauthorized - Missing or invalid authentication
---
### Update Verification
Update a verification configuration.
**Endpoint:** `PUT /v1/verification/update`
**Authentication:** API Key or Bearer Token
**Request Body:**
```json
{
"name":"Test Verification",
"verificationConfig":{
"verifierConfigs":{
"liveness":{
"type":"Verification",
"challengeType": "Balanced"
},
"age":{
"type":"CalculationOnly"
},
"faceRecognition":{
"type":"Disabled"
},
"gender":{
"type":"CalculationOnly"
}
},
"redirectConfig":{
"passedUrl":"https://example.com?result=passed",
"failedUrl":"https://example.com?result=failed"
},
"resultParameterConfig":{
"includeSignature":true,
"includeSessionId":true,
"includeFaceId":true,
"includeAge":true,
"includeGender":true,
"includeVerificationResult":true,
"includeCustomInputParameters":true,
"includeLivenessCheckResult":true,
"includeAgeVerificationResult":true,
"includeGenderVerificationResult":false,
"includeFaceRecognitionResult":true,
"includeCorrelationId":true,
"includeTotalFaceCount":true,
"includeFailedReason":true
},
"securityConfig":{
"forceSignedInput":false
},
"sessionOverrideConfig":{
"allowVerifierOverrides":true,
"allowResultParameterOverrides":false,
"allowRedirectOverrides":false
}
},
"verificationId":"39e00c96-6431-4b2e-8707-2fb80e561fbf"
}
```
**Request Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `verificationId` | string | Yes | The ID of the verification configuration to update |
| `name` | string | Yes | Human-readable name for the verification configuration |
| `verificationConfig` | object | Yes | Root container for all adjustable verification configuration sections |
| `verificationConfig.verifierConfigs` | object | No | Collection of verifier specific configuration sections |
| `verificationConfig.verifierConfigs.liveness` | object | No | Liveness verifier configuration model |
| `verificationConfig.verifierConfigs.liveness.type` | string | No | Supported liveness verification modes (Disabled, Verification) |
| `verificationConfig.verifierConfigs.liveness.challengeType` | string | No | Supported liveness verification challenges (Balanced, Advanced) |
| `verificationConfig.verifierConfigs.age` | object | No | Age verifier configuration model |
| `verificationConfig.verifierConfigs.age.type` | string | No | Supported age verification modes (Disabled, CalculationOnly, ThresholdVerification, RangeVerification) |
| `verificationConfig.verifierConfigs.age.thresholdConfig` | object | No | Configuration for threshold based age verification |
| `verificationConfig.verifierConfigs.age.thresholdConfig.threshold` | integer | No | Age threshold value used for verification |
| `verificationConfig.verifierConfigs.age.thresholdConfig.direction` | string | No | Direction used when evaluating age threshold comparisons (Above, Below) |
| `verificationConfig.verifierConfigs.age.rangeConfig` | object | No | Configuration for range based age verification |
| `verificationConfig.verifierConfigs.age.rangeConfig.minimum` | integer | No | Minimum allowed age |
| `verificationConfig.verifierConfigs.age.rangeConfig.maximum` | integer | No | Maximum allowed age |
| `verificationConfig.verifierConfigs.faceRecognition` | object | No | Face recognition verifier configuration model |
| `verificationConfig.verifierConfigs.faceRecognition.type` | string | No | Supported face recognition verification modes (Disabled, CalculationOnly, DuplicateVerification, UniqueMatchVerification, MatchVerification) |
| `verificationConfig.verifierConfigs.gender` | object | No | Gender verifier configuration model |
| `verificationConfig.verifierConfigs.gender.type` | string | No | Supported gender verification modes (Disabled, CalculationOnly) |
| `verificationConfig.verifierConfigs.commonSettings` | object | No | Common settings configuration model |
| `verificationConfig.verifierConfigs.commonSettings.failOnMultipleFacesDetected` | boolean | No | Fail the verification if multiple faces are detected in the input image |
| `verificationConfig.redirectConfig` | object | No | Redirect URLs used after different verification outcomes |
| `verificationConfig.redirectConfig.passedUrl` | string | No | URL to redirect to when verification succeeds |
| `verificationConfig.redirectConfig.failedUrl` | string | No | URL to redirect to when verification fails |
| `verificationConfig.resultParameterConfig` | object | No | Controls which result parameters are included in API responses |
| `verificationConfig.resultParameterConfig.includeSignature` | boolean | No | Include a signature value in the response |
| `verificationConfig.resultParameterConfig.includeSessionId` | boolean | No | Include the session identifier in the response |
| `verificationConfig.resultParameterConfig.includeFaceId` | boolean | No | Include a face identifier (if available) in the response |
| `verificationConfig.resultParameterConfig.includeAge` | boolean | No | Include the estimated age |
| `verificationConfig.resultParameterConfig.includeGender` | boolean | No | Include the estimated gender |
| `verificationConfig.resultParameterConfig.includeVerificationResult` | boolean | No | Include the overall verification result status |
| `verificationConfig.resultParameterConfig.includeCustomInputParameters` | boolean | No | Include any custom input parameters that were part of the request |
| `verificationConfig.resultParameterConfig.includeLivenessCheckResult` | boolean | No | Include the liveness check result |
| `verificationConfig.resultParameterConfig.includeAgeVerificationResult` | boolean | No | Include the age verification result |
| `verificationConfig.resultParameterConfig.includeGenderVerificationResult` | boolean | No | Include the gender verification result |
| `verificationConfig.resultParameterConfig.includeFaceRecognitionResult` | boolean | No | Include the face recognition verification result |
| `verificationConfig.resultParameterConfig.includeCorrelationId` | boolean | No | Include the correlation identifier |
| `verificationConfig.resultParameterConfig.includeTotalFaceCount` | boolean | No | Include the total face count detected |
| `verificationConfig.resultParameterConfig.includeFailedReason` | boolean | No | Include the failed reason |
| `verificationConfig.securityConfig` | object | No | Security related options influencing verification request validation |
| `verificationConfig.securityConfig.forceSignedInput` | boolean | No | When true, input payloads must be signed |
| `verificationConfig.sessionOverrideConfig` | object | No | Settings that allow selectively overriding verification behavior per session |
| `verificationConfig.sessionOverrideConfig.allowVerifierOverrides` | boolean | No | Allows overriding which verifiers are enabled/disabled |
| `verificationConfig.sessionOverrideConfig.allowResultParameterOverrides` | boolean | No | Allows overriding which result parameters are returned |
| `verificationConfig.sessionOverrideConfig.allowRedirectOverrides` | boolean | No | Allows overriding redirect URLs |
**Response Example:**
```json
{
"verificationId": "39e00c96-6431-4b2e-8707-2fb80e561fbf"
}
```
**Response Fields:**
| Name | Type | Description |
|------|------|-------------|
| `verificationId` | string | Identifier of the updated verification configuration |
**Example Request:**
```bash
curl -X PUT "https://verifeye-service-api-eu.realeyes.ai/v1/verification/update" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE" \
-H "Content-Type: application/json" \
-d '{
"name": "Test Verification",
"verificationConfig": {
"verifierConfigs": {
"liveness": {
"type": "Verification",
"challengeType": "Balanced"
},
"age": {
"type": "CalculationOnly"
},
"faceRecognition": {
"type": "Disabled"
},
"gender": {
"type": "CalculationOnly"
}
},
"redirectConfig": {
"passedUrl": "https://example.com?result=passed",
"failedUrl": "https://example.com?result=failed"
},
"resultParameterConfig": {
"includeSignature": true,
"includeSessionId": true,
"includeFaceId": true,
"includeAge": true,
"includeGender": true,
"includeVerificationResult": true,
"includeCustomInputParameters": true,
"includeLivenessCheckResult": true,
"includeAgeVerificationResult": true,
"includeGenderVerificationResult": false,
"includeFaceRecognitionResult": true,
"includeCorrelationId": true,
"includeTotalFaceCount": true,
"includeFailedReason": true
},
"securityConfig": {
"forceSignedInput": false
},
"sessionOverrideConfig": {
"allowVerifierOverrides": true,
"allowResultParameterOverrides": false,
"allowRedirectOverrides": false
}
},
"verificationId": "39e00c96-6431-4b2e-8707-2fb80e561fbf"
}'
```
**Response Codes:**
- `200` - Success
- `400` - Bad Request - Invalid parameters
- `401` - Unauthorized - Missing or invalid authentication
- `404` - Not Found - Verification configuration not found
- `409` - Conflict - A verification configuration with the same name already exists
---
### Delete Verification
Delete a verification configuration.
**Endpoint:** `DELETE /v1/verification/delete`
**Authentication:** API Key or Bearer Token
**Query Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `verificationId` | string | Yes | The ID of the verification configuration to delete |
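Since `verificationId` travels in the query string, it should be URL-encoded when the request URL is built programmatically. A minimal sketch (the helper name and base-URL constant are assumptions; the path comes from this page):

```python
from urllib.parse import urlencode

# EU regional endpoint from this page; swap for the US host as needed.
BASE_URL = "https://verifeye-service-api-eu.realeyes.ai"

def delete_verification_url(verification_id: str) -> str:
    """Build the DELETE URL; urlencode guards against reserved characters."""
    query = urlencode({"verificationId": verification_id})
    return f"{BASE_URL}/v1/verification/delete?{query}"
```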
**Example Request:**
```bash
curl -X DELETE "https://verifeye-service-api-eu.realeyes.ai/v1/verification/delete?verificationId=39e00c96-6431-4b2e-8707-2fb80e561fbf" \
-H "Authorization: ApiKey API-KEY-FROM-DEV-CONSOLE"
```
**Response Codes:**
- `200` - Success
- `400` - Bad Request - Invalid parameters
- `401` - Unauthorized - Missing or invalid authentication
- `404` - Not Found - Verification configuration not found
---
### Health Check
Check the API health status.
**Endpoint:** `GET /v1/healthz`
**Authentication:** None required
**Response Example:**
```
2026-02-08T11:34:54.3015450Z
```
**Response Fields:**
| Name | Type | Description |
|------|------|-------------|
| (response body) | string | The server UTC time in ISO 8601 format |
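The body is a bare timestamp with seven fractional-second digits, which Python's `datetime` (limited to six-digit microseconds on older interpreters) cannot always ingest directly. One way to parse it defensively (the helper name is illustrative):

```python
from datetime import datetime, timezone

def parse_healthz(body: str) -> datetime:
    """Parse the healthz body, trimming its 7-digit fractional seconds
    down to the 6 digits that datetime supports."""
    text = body.strip().rstrip("Z")
    whole, _, frac = text.partition(".")
    micro = (frac + "000000")[:6]  # pad or truncate to microseconds
    return datetime.fromisoformat(f"{whole}.{micro}").replace(tzinfo=timezone.utc)
```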
**Example Request:**
```bash
curl -X GET "https://verifeye-service-api-eu.realeyes.ai/v1/healthz"
```
**Response Codes:**
- `200` - API is healthy
---
## Common Response Codes
| Code | Description |
|------|-------------|
| `200` | Success |
| `400` | Bad Request - Invalid parameters |
| `401` | Unauthorized - Missing or invalid authentication |
| `403` | Forbidden - Valid authentication but account not found or insufficient permissions |
| `404` | Not Found - Resource not found |
| `500` | Internal Server Error |
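When wrapping these endpoints in a client, the table above suggests a simple retry policy: only `500` is plausibly transient, while the `4xx` codes indicate a request that must be fixed rather than retried. A hedged sketch (the policy is a suggestion, not part of the API contract):

```python
TRANSIENT = {500}                      # server-side; a retry may succeed
CLIENT_ERRORS = {400, 401, 403, 404}   # fix the request or credentials instead

def should_retry(status: int) -> bool:
    """Retry only transient server errors from the common response codes."""
    return status in TRANSIENT
```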
---
## Swagger Documentation
Interactive API documentation is available via Swagger UI:
- **EU**: [https://verifeye-service-api-eu.realeyes.ai/swagger/](https://verifeye-service-api-eu.realeyes.ai/swagger/)
- **US**: [https://verifeye-service-api-us.realeyes.ai/swagger/](https://verifeye-service-api-us.realeyes.ai/swagger/)
---
*Last updated: 2026-02-16*
Release Notes
https://verifeye-docs.realeyes.ai/rest-api/verifeye-service-api/release-notes/
# Verify API Release Notes
## Version History
### Version 1.0
**Release Date:** 2025-10-01
#### Features
- **Initial Release**: First public release of the Verify API
---
*For the latest updates, visit the [VerifEye Developer Console](https://verifeye-console.realeyes.ai/)*