Getting Started with Python API

Prerequisites

  • Python >= 3.10
  • NumPy

Installation

pip install realeyes.face_verification

Quick Start Example

import realeyes.face_verification as fvl
import cv2

verifier = fvl.FaceVerifier('model/model.realZ')

image1 = cv2.imread('image1.jpg')[:, :, ::-1]  # OpenCV reads BGR, we need RGB
image2 = cv2.imread('image2.jpg')[:, :, ::-1]

faces1 = verifier.detect_faces(image1)
faces2 = verifier.detect_faces(image2)

embeddings1 = [verifier.embed_face(face) for face in faces1]
embeddings2 = [verifier.embed_face(face) for face in faces2]

for i, e1 in enumerate(embeddings1):
    for j, e2 in enumerate(embeddings2):
        if verifier.compare_faces(e1, e2).similarity > 0.3:
            print(f'Face {i} from image 1 and face {j} from image 2 are the same person!')

Common Patterns

Image Format

The Python API expects images as NumPy arrays (numpy.ndarray, dtype uint8) in RGB channel order with shape (height, width, 3). OpenCV loads images in BGR order, so convert with:

rgb_image = cv2.imread('photo.jpg')[:, :, ::-1]
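If you want to validate inputs before handing them to the library, a small helper can check the dtype and shape and reverse the channel order. This helper is a sketch, not part of the API; the name to_rgb is hypothetical:

```python
import numpy as np

def to_rgb(bgr_image: np.ndarray) -> np.ndarray:
    """Validate a BGR uint8 image and return a contiguous RGB copy."""
    if bgr_image.dtype != np.uint8:
        raise TypeError(f'expected uint8, got {bgr_image.dtype}')
    if bgr_image.ndim != 3 or bgr_image.shape[2] != 3:
        raise ValueError(f'expected (height, width, 3), got {bgr_image.shape}')
    # Reversing the last axis swaps the B and R channels;
    # ascontiguousarray copies the view into contiguous memory.
    return np.ascontiguousarray(bgr_image[:, :, ::-1])
```

The `[:, :, ::-1]` slice alone returns a negative-stride view; copying it into a contiguous array avoids surprises with libraries that expect contiguous buffers.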

Concurrent processing

You can safely use a single FaceVerifier instance from multiple threads to improve throughput and better utilize available CPU cores. The example below uses Python’s standard ThreadPoolExecutor.

from collections import deque
from concurrent.futures import Future, ThreadPoolExecutor
from pathlib import Path

import numpy as np
from PIL import Image
from realeyes.face_verification import FaceVerifier


def process_images(images: list[Path],
                   face_verifier: FaceVerifier,
                   concurrent_inferences: int) -> list[list[float] | None]:

    def process_image(image_path: Path) -> list[float] | None:
        img = Image.open(image_path).convert('RGB')  # ensure 3-channel RGB (handles grayscale/RGBA inputs)
        faces = face_verifier.detect_faces(np.array(img))
        if len(faces) == 0:
            return None
        return face_verifier.embed_face(faces[0])

    if concurrent_inferences == 1:
        return [process_image(image) for image in images]

    embeddings: list[list[float] | None] = [None] * len(images)
    with ThreadPoolExecutor(max_workers=concurrent_inferences) as executor:
        futures: deque[tuple[int, Future[list[float] | None]]] = deque()
        for idx, image in enumerate(images):
            # Manage futures queue to avoid memory buildup
            if len(futures) >= 2 * concurrent_inferences:
                processed_idx, future = futures.popleft()
                embeddings[processed_idx] = future.result()

            future = executor.submit(process_image, image)
            futures.append((idx, future))

        # Process remaining futures
        while futures:
            processed_idx, future = futures.popleft()
            embeddings[processed_idx] = future.result()

    return embeddings
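The bounded futures queue above is a general back-pressure pattern: it keeps at most 2 × concurrent_inferences results pending, so completed outputs don't pile up in memory while later tasks are still being submitted. A self-contained sketch of the same pattern with an arbitrary callable (no face-verification calls; the name map_bounded is illustrative):

```python
from collections import deque
from concurrent.futures import Future, ThreadPoolExecutor


def map_bounded(func, items: list, max_workers: int) -> list:
    """Apply func to each item on a thread pool, keeping at most
    2 * max_workers futures outstanding at any time. Results are
    returned in input order."""
    results = [None] * len(items)
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        pending: deque[tuple[int, Future]] = deque()
        for idx, item in enumerate(items):
            # Drain the oldest future before submitting more work
            # once the queue reaches its bound.
            if len(pending) >= 2 * max_workers:
                done_idx, future = pending.popleft()
                results[done_idx] = future.result()
            pending.append((idx, executor.submit(func, item)))
        # Collect whatever is still in flight.
        while pending:
            done_idx, future = pending.popleft()
            results[done_idx] = future.result()
    return results
```

For example, `map_bounded(lambda x: x * x, list(range(10)), max_workers=4)` returns the squares in input order.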

Third-Party Face Detector

You can create Face objects from a third-party face detector by supplying the image and the five facial landmarks:

face = fvl.Face(
    image=rgb_image,
    landmarks=[
        fvl.Point2d(x=100.0, y=120.0),  # left eye
        fvl.Point2d(x=160.0, y=120.0),  # right eye
        fvl.Point2d(x=130.0, y=155.0),  # nose tip
        fvl.Point2d(x=105.0, y=180.0),  # left mouth corner
        fvl.Point2d(x=155.0, y=180.0),  # right mouth corner
    ],
    bbox=fvl.BoundingBox(x=80, y=90, width=100, height=120),
    confidence=0.95
)
embedding = verifier.embed_face(face)

Detection Quality

Check the quality of a detection before embedding:

for face in faces:
    if face.detection_quality() == fvl.DetectionQuality.Good:
        embedding = verifier.embed_face(face)

Next Steps