# Java API Documentation (Experimental)

**Package:** com.realeyesit.nel

## Tracker interface

### interface Tracker

Main interface for frame-by-frame face and emotion tracking.

#### track(ImageHeader imageHeader, long timestamp)

```java
public TrackerResultFuture track(ImageHeader imageHeader, long timestamp)
```

Tracks the given frame asynchronously; the result is delivered through the returned TrackerResultFuture.

**Note:** This call is non-blocking, so it can be invoked again with the next frame without waiting for the previous result. Also see get_concurrent_calculations().

**Parameters:**
- `imageHeader` - image descriptor
- `timestamp` - timestamp of the image (in ms)
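
The non-blocking pattern can be sketched as follows, assuming a constructed Tracker and already-prepared `frames` / `timestampsMs` lists (hypothetical names, not part of the API):

```java
// Sketch: submit every frame without waiting, then collect the results.
// Tracker, ImageHeader, TrackerResultFuture, and ResultType are the
// types documented here; frame preparation is up to the caller.
java.util.List<TrackerResultFuture> futures = new java.util.ArrayList<>();
for (int i = 0; i < frames.size(); i++) {
    // track() returns immediately; the tracker works in the background
    futures.add(tracker.track(frames.get(i), timestampsMs.get(i)));
}
for (TrackerResultFuture future : futures) {
    ResultType result = future.get();  // blocks until this frame is done
    // ... consume result ...
}
```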

#### resetTracking()

```java
public void resetTracking()
```

Resets the internal tracking state. Should be called when a new video sequence starts.

#### getEmotionIDs()

```java
public java.util.List<EmotionID> getEmotionIDs()
```

Returns the emotion IDs provided by the loaded model. The order is the same as returned by ResultType.getEmotions().

**See also:** ResultType

#### getEmotionNames()

```java
public java.util.List<String> getEmotionNames()
```

Returns the emotion names provided by the loaded model. The order is the same as returned by ResultType.getEmotions().

**See also:** ResultType

#### getMinimumFaceRatio()

```java
public float getMinimumFaceRatio()
```

Gets the current minimum face ratio

**See also:** setMinimumFaceRatio

#### setMinimumFaceRatio(float minimumFaceRatio)

```java
public void setMinimumFaceRatio(float minimumFaceRatio)
```

Sets the minimum face ratio.

The minimum face ratio defines the smallest face size the algorithm looks for. The actual size is calculated from the smaller image dimension multiplied by the set ratio. The default value is 1/4.8; e.g., for VGA resolution input (640x480), the minimum face size is 100x100 pixels.

**Warning:** Shape alignment and classifier performance can degrade at low resolutions; tracking faces smaller than 75x75 pixels is ill-advised.

**Parameters:**
- `minimumFaceRatio` - new minimum face size, as a ratio of the smaller image dimension
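
The effective minimum face size in pixels can be computed independently of the SDK; `FaceRatioExample.minimumFaceSizePx` below is a hypothetical helper, not part of the API:

```java
// Hypothetical helper: effective minimum face size in pixels,
// per the rule above (smaller image dimension * ratio).
class FaceRatioExample {
    static int minimumFaceSizePx(int width, int height, float minimumFaceRatio) {
        return Math.round(Math.min(width, height) * minimumFaceRatio);
    }

    public static void main(String[] args) {
        // Default ratio 1/4.8 on VGA (640x480) -> 100 px, matching the
        // figure above; at 320x240 the same ratio gives only 50 px,
        // below the recommended 75x75 minimum.
        System.out.println(minimumFaceSizePx(640, 480, 1f / 4.8f));
    }
}
```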

#### isFaceTrackingEnabled()

```java
public boolean isFaceTrackingEnabled()
```

Returns whether the face tracker is enabled.

#### setFaceTrackingEnabled(boolean enable)

```java
public void setFaceTrackingEnabled(boolean enable)
```

Enables or disables the face tracker.

**Parameters:**
- `enable` - true to enable, false to disable

#### isEmotionEnabled(EmotionID emoID)

```java
public boolean isEmotionEnabled(EmotionID emoID)
```

Returns whether the specified emotion is enabled.

**Parameters:**
- `emoID` - emotion to query

#### setEmotionEnabled(EmotionID emoID, boolean enable)

```java
public void setEmotionEnabled(EmotionID emoID, boolean enable)
```

Enables or disables the specified emotion.

**Parameters:**
- `emoID` - emotion to set
- `enable` - true to enable, false to disable
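
For example, an application that needs only a few emotions could disable the rest (a sketch; `tracker` is assumed to be a constructed Tracker):

```java
// Sketch: disable everything, then re-enable only what is needed.
for (EmotionID id : tracker.getEmotionIDs()) {
    tracker.setEmotionEnabled(id, false);
}
tracker.setEmotionEnabled(EmotionID.HAPPY, true);
tracker.setEmotionEnabled(EmotionID.ATTENTION, true);
```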

#### getModelName()

```java
public String getModelName()
```

Returns the name (including version information) of the loaded model.

**Returns:** name of the model

#### getSdkVersion()

```java
public Version getSdkVersion()
```

Returns the version of the SDK (and not the model)

**Returns:** version of the SDK

#### getSdkVersionString()

```java
public String getSdkVersionString()
```

Returns the version string of the SDK (and not the model)

**Returns:** version string of the SDK

## NelTracker class

### class NelTracker

Implementation of the Tracker interface.

#### NelTracker(String modelFile, int max_concurrency)

```java
public NelTracker(String modelFile, int max_concurrency)
```

Constructs a tracker from the given model file.

**Parameters:**
- `modelFile` - path to the model file
- `max_concurrency` - maximum allowed concurrency; 0 means automatic (use all cores). Default: 0
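
A construction sketch, assuming NelTracker implements the Tracker interface above (the model path is a placeholder):

```java
// Sketch: create a tracker with automatic concurrency (0 = all cores).
Tracker tracker = new NelTracker("path/to/model.bin", 0);
System.out.println("SDK version: " + tracker.getSdkVersionString());
System.out.println("Model:       " + tracker.getModelName());
```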

## ImageHeader class

### class ImageHeader

Descriptor class for image data. The header does not own the pixel buffer; the caller must keep it valid while the header is in use.

#### ImageHeader()

```java
public ImageHeader()
```

Constructor

#### getData()

```java
public java.nio.ByteBuffer getData()
```

**Returns:** buffer containing the image bytes

#### setData(java.nio.ByteBuffer value)

```java
public void setData(java.nio.ByteBuffer value)
```

**Parameters:**
- `value` - buffer containing the image bytes

#### getFormat()

```java
public ImageFormat getFormat()
```

**Returns:** image format

#### setFormat(ImageFormat value)

```java
public void setFormat(ImageFormat value)
```

**Parameters:**
- `value` - image format

#### getHeight()

```java
public int getHeight()
```

**Returns:** height of the image in pixels

#### setHeight(int value)

```java
public void setHeight(int value)
```

**Parameters:**
- `value` - height of the image in pixels

#### getStride()

```java
public int getStride()
```

**Returns:** length of one row of pixels in bytes (e.g., 3*width + padding)

#### setStride(int value)

```java
public void setStride(int value)
```

**Parameters:**
- `value` - length of one row of pixels in bytes (e.g., 3*width + padding)

#### getWidth()

```java
public int getWidth()
```

**Returns:** width of the image in pixels

#### setWidth(int value)

```java
public void setWidth(int value)
```

**Parameters:**
- `value` - width of the image in pixels
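
Filling in a header for a tightly packed 24-bit RGB frame might look like this (a sketch; `pixels` is a hypothetical java.nio.ByteBuffer holding the frame data):

```java
// Sketch: describe a 640x480 RGB frame with no row padding.
ImageHeader header = new ImageHeader();
header.setData(pixels);            // ByteBuffer with the image bytes
header.setFormat(ImageFormat.RGB); // 24-bit RGB
header.setWidth(640);
header.setHeight(480);
header.setStride(3 * 640);         // 3 bytes per pixel, no padding
```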

### enum ImageFormat

**Values:**
- `BGR` - 24-bit BGR
- `BGRA` - 32-bit BGRA (or 32-bit BGR with an unused fourth byte)
- `Grayscale` - 8-bit grayscale
- `RGB` - 24-bit RGB
- `RGBA` - 32-bit RGBA (or 32-bit RGB with an unused fourth byte)

## Result classes

### enum EmotionID

IDs for the supported emotions/behaviours

**Values:**
- `ATTENTION`
- `CONFUSION`
- `CONTEMPT`
- `DISGUST`
- `EMPATHY`
- `FEAR`
- `HAPPY`
- `PRESENCE`
- `SURPRISE`
- `EYES_ON_SCREEN`
- `FACE_DETECTION`

### interface TrackerResultFuture

Simple wrapper around the underlying C++ future class.

#### get()

```java
ResultType get()
```

Blocks until the future is ready and returns the result.

### interface ResultType

Holds the tracking result for a single frame.

#### getEmotions()

```java
java.util.List<EmotionData> getEmotions()
```

Detected emotions.

#### getLandmarks()

```java
LandmarkData getLandmarks()
```

Tracked landmarks.

### interface LandmarkData

#### getScale()

```java
double getScale()
```

Scale of the face.

#### getRoll()

```java
double getRoll()
```

Roll pose angle.

#### getPitch()

```java
double getPitch()
```

Pitch pose angle.

#### getYaw()

```java
double getYaw()
```

Yaw pose angle.

#### getTranslate()

```java
Point2d getTranslate()
```

Position of the head center in image coordinates.

#### getLandmarks2d()

```java
java.util.List<Point2d> getLandmarks2d()
```

Positions of the 49 landmarks, in image coordinates.

#### getLandmarks3d()

```java
java.util.List<Point3d> getLandmarks3d()
```

Positions of the 49 landmarks, in an un-scaled face-centered 3D space.

#### getIsGood()

```java
boolean getIsGood()
```

Whether the tracking is good quality or not.
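
Putting the accessors together (a sketch; `result` is assumed to be a ResultType obtained from TrackerResultFuture.get()):

```java
// Sketch: use the pose only when tracking quality is good.
LandmarkData lm = result.getLandmarks();
if (lm.getIsGood()) {
    System.out.printf("yaw=%.1f pitch=%.1f roll=%.1f scale=%.2f%n",
            lm.getYaw(), lm.getPitch(), lm.getRoll(), lm.getScale());
}
```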

### interface Point2d

#### getX()

```java
double getX()
```

#### getY()

```java
double getY()
```

### interface Point3d

#### Point3d_GetInterfaceCPtr()

```java
long Point3d_GetInterfaceCPtr()
```

Internal accessor used by the native bindings; not intended for application use.

#### getX()

```java
double getX()
```

#### getY()

```java
double getY()
```

#### getZ()

```java
double getZ()
```

### interface EmotionData

#### getEmotionID()

```java
EmotionID getEmotionID()
```

ID of the emotion.

#### getIsActive()

```java
boolean getIsActive()
```

Whether the probability is higher than an internal threshold.

#### getIsDetectionSuccessful()

```java
boolean getIsDetectionSuccessful()
```

Whether the tracking quality was good enough to reliably detect this emotion.

#### getProbability()

```java
double getProbability()
```

Probability of the emotion.
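
A sketch of consuming these fields (`result` is assumed to be a ResultType obtained from TrackerResultFuture.get()):

```java
// Sketch: report each emotion whose detection was reliable this frame.
for (EmotionData e : result.getEmotions()) {
    if (!e.getIsDetectionSuccessful()) {
        continue;  // tracking quality too low for this emotion
    }
    System.out.printf("%s: probability=%.2f active=%b%n",
            e.getEmotionID(), e.getProbability(), e.getIsActive());
}
```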

### interface Version

Semantic version number for the SDK

#### getMajor()

```java
int getMajor()
```

#### getMinor()

```java
int getMinor()
```

#### getPatch()

```java
int getPatch()
```
