CORAS SYSTEM

Corrections facilities nationwide are suffering from severe staffing shortages, making it difficult for them to fulfill their duties and meet government regulations. While our CORA system cannot replicate everything an officer can do, it can take over routine tasks that consume officers' time and energy, freeing them to use their time more efficiently.

CORA (Correctional Officer Robotics Assistance) is a new tool for correctional facilities. The Stokes-engineered robotic platform assists officers in their daily work. While the robot can help with certain jobs, such as inspections and head counts, it is not a replacement for officers. Recent polling indicates a roughly 50% shortfall in the number of corrections officers needed in the US. The robot is one tool in the correctional professional's toolbox, much as an officer might carry both a gun and a taser on their belt: different tools for different jobs. These robots can be monitored on site or remotely through a secure network. If a prisoner needs one-on-one monitoring, any staff member can provide it from anywhere.

Technical Overviews

Prison Head Counting System

The prison head counting system is a real-time, AI-powered application designed to automate inmate verification through facial recognition during robotic patrols. The system deploys a mobile robot equipped with a camera and audio system, which navigates through cells, cell blocks, and dormitories, capturing video frames for facial analysis. It replaces manual roll calls with consistent, automated biometric verification.

The core of the system utilizes InsightFace for deep learning–based face detection and 512-dimensional embedding generation. These embeddings are compared against a PostgreSQL database containing multiple stored embeddings per inmate, allowing for robust identity matching. Face similarity is calculated using a dot product–based thresholded similarity function. Detections are logged with confidence scores, embedding data, and optional image snapshots. The system differentiates between known inmates, unknown individuals, and mismatches (unexpected identities or absences).
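
The thresholded dot-product matching described above can be sketched as follows. This is a minimal illustration, not the production code: it assumes embeddings are L2-normalized 512-dimensional vectors (as InsightFace produces), so the dot product equals cosine similarity; the threshold value and function names are illustrative.

```python
import numpy as np

MATCH_THRESHOLD = 0.45  # assumed value; tuned per deployment in practice

def best_match(probe: np.ndarray, gallery: dict[str, list[np.ndarray]]):
    """Compare a probe embedding against all stored embeddings per inmate.

    Returns (inmate_id, score) for the best match, or (None, score) if no
    stored embedding clears the threshold.
    """
    best_id, best_score = None, -1.0
    for inmate_id, embeddings in gallery.items():
        # Each inmate may have several stored embeddings; take the best score.
        score = max(float(np.dot(probe, e)) for e in embeddings)
        if score > best_score:
            best_id, best_score = inmate_id, score
    if best_score < MATCH_THRESHOLD:
        return None, best_score  # unknown individual
    return best_id, best_score
```

Keeping multiple embeddings per inmate and taking the maximum score makes the match robust to pose and lighting variation between enrollment images.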

The backend is implemented in Python using Flask and SQLAlchemy. Patrol sessions, inmate assignments, detections, expected headcounts, and location metadata are normalized across relational tables. Each patrol can span multiple locations and supports multiple detection events per inmate. All data is persisted for auditability and retrospective analysis.
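
The normalized relational layout might look roughly like the following SQLAlchemy sketch. All class, table, and column names here are assumptions for illustration, not the actual schema; an in-memory SQLite engine stands in for the PostgreSQL database.

```python
from sqlalchemy import (Column, DateTime, Float, ForeignKey, Integer,
                        String, create_engine)
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Patrol(Base):
    __tablename__ = "patrols"
    id = Column(Integer, primary_key=True)
    started_at = Column(DateTime)
    # One patrol can span multiple locations and produce many detections.
    detections = relationship("Detection", back_populates="patrol")

class Inmate(Base):
    __tablename__ = "inmates"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Detection(Base):
    __tablename__ = "detections"
    id = Column(Integer, primary_key=True)
    patrol_id = Column(Integer, ForeignKey("patrols.id"))
    inmate_id = Column(Integer, ForeignKey("inmates.id"), nullable=True)  # NULL = unknown face
    confidence = Column(Float)
    detected_at = Column(DateTime)
    patrol = relationship("Patrol", back_populates="detections")

# In-memory engine for demonstration; production would point at PostgreSQL.
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
```

Persisting every detection row (rather than only the final headcount) is what makes retrospective analysis and auditing possible.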
The dashboard interface exposes a REST API and real-time video stream. It reports patrol progress, visual confirmation of arrivals, unknown detections, and discrepancies between expected and actual headcounts. Alerts are generated for missing or unidentified individuals. The system supports synchronization with external inmate management databases and future integration of camera/audio metadata and cross-facility tracking.
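
A dashboard endpoint reporting patrol progress might look like this minimal Flask sketch. The route shape, response fields, and sample values are assumptions about the API, not its actual definition.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for state that would come from the patrol database.
PATROL_STATE = {
    "patrol_id": 17,
    "locations_completed": 3,
    "locations_total": 5,
    "expected_headcount": 42,
    "confirmed_headcount": 40,
    "unknown_detections": 1,
}

@app.route("/api/patrols/<int:patrol_id>/progress")
def patrol_progress(patrol_id: int):
    # The expected-vs-confirmed discrepancy is what drives alerting.
    missing = (PATROL_STATE["expected_headcount"]
               - PATROL_STATE["confirmed_headcount"])
    return jsonify({**PATROL_STATE, "missing": missing})
```

A client polling this endpoint would see `missing` rise above zero as soon as an expected inmate fails to appear during the patrol.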

Engineered for scalability and modular deployment, this system enhances institutional accountability, reduces manual workload, and improves response times to security breaches or anomalies in inmate presence.


Two-Way Audio System

The CORAS two-way audio communication system facilitates secure and real-time voice interaction between correctional officers and inmates or remote operators. It integrates:
 
  • A high-fidelity, noise-canceling microphone array, capable of isolating speech from background noise.
  • A directional speaker system, optimized for clarity even in acoustically challenging environments like concrete-walled corridors.
  • Dynamic voice modulation, adjusting playback volume based on ambient noise levels.

The system supports:

  • Live voice relay via the CORAS control interface.
  • Automated alerts, where the AI assistant can convert detected events into spoken warnings (e.g., “Unauthorized object detected”).
  • Push-to-talk and always-on modes, configurable through the web dashboard.
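
The automated-alert path above, where a detected event becomes a spoken warning, can be sketched as a simple event-to-phrase mapping. The event names and phrasing are illustrative assumptions; the actual text-to-speech backend is not assumed here.

```python
# Map detected event types to the phrases the speaker system would play.
EVENT_PHRASES = {
    "unauthorized_object": "Unauthorized object detected.",
    "unknown_person": "Unidentified individual detected.",
    "door_open": "Door left open.",
}

def spoken_warning(event_type: str, location: str) -> str:
    """Build the warning sentence for a detected event at a location."""
    phrase = EVENT_PHRASES.get(event_type, "Attention: anomaly detected.")
    return f"{phrase} Location: {location}."

# Example: spoken_warning("unauthorized_object", "Block C")
# → "Unauthorized object detected. Location: Block C."
```

The fallback phrase ensures that even event types without a scripted warning still produce an audible alert.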

 

For security, all audio transmissions are encrypted over WebRTC channels using its built-in DTLS-SRTP media encryption, with TLS-secured signaling, preventing eavesdropping or unauthorized access.

This system enhances command efficiency and response coordination, allowing officers to communicate seamlessly with personnel or inmates without direct engagement.

Robotic Arm Inspection System

The robotic arm is an advanced inspection tool equipped with:
 
  • 6 degrees of freedom (DoF) articulation, enabling precise manipulation.
  • An integrated optical camera, delivering detailed close-up visual analysis.
  • A precision gripper, capable of handling small objects, doors, and secured enclosures.

Autonomous Navigation and Obstacle Avoidance

The autonomous navigation system enables the robot to patrol indoor and outdoor facilities without human intervention. The system is built on a SLAM (Simultaneous Localization and Mapping) framework, which combines:

  • Lidar-based and vision-based mapping to create an accurate environmental model.
  • Real-time obstacle detection using depth cameras and AI-assisted scene interpretation.
  • Dynamic path planning, allowing the robot to adjust its route in response to moving objects and unexpected obstacles.
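
The dynamic re-planning idea can be illustrated with a toy occupancy-grid planner: plan a route, then re-plan when a new obstacle appears on it. This is a deliberately simplified stand-in for the SLAM-based planner described above, not its implementation.

```python
from heapq import heappush, heappop

def plan(grid, start, goal):
    """A*-style search over a 4-connected grid; grid[r][c] == 1 is blocked."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start, [start])]
    seen = {start}
    while frontier:
        _, (r, c), path = heappop(frontier)
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and not grid[nr][nc] and (nr, nc) not in seen):
                seen.add((nr, nc))
                # Priority = path length so far + Manhattan-distance heuristic.
                cost = len(path) + abs(nr - goal[0]) + abs(nc - goal[1])
                heappush(frontier, (cost, (nr, nc), path + [(nr, nc)]))
    return None  # no route exists

grid = [[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]]
route = plan(grid, (0, 0), (2, 2))
grid[1][1] = 1                        # a new obstacle appears mid-patrol...
if route and (1, 1) in route:
    route = plan(grid, (0, 0), (2, 2))  # ...so the robot re-plans around it
```

The real system does this continuously against a live SLAM map rather than a static grid, but the plan / detect / re-plan loop is the same.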


The system operates in two primary modes:

  • Autonomous patrols, where the robot follows predefined routes and dynamically adjusts based on environmental feedback.
  • Manual control, allowing workers to override navigation via a web-based interface or handheld controller.

Positioning accuracy is enhanced through sensor fusion, integrating:

  • IMU (Inertial Measurement Unit) data for precise motion tracking.
  • Leg odometry corrections, improving stability on rough surfaces.
  • GPS localization (optional for open-yard operations).
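
A common way to fuse a fast-but-drifting IMU signal with slower odometry corrections is a complementary filter, sketched below for a single heading estimate. The blend factor and signal names are illustrative assumptions, not the robot's actual fusion pipeline.

```python
ALPHA = 0.98  # trust the IMU for short-term changes, odometry for drift

def fuse_heading(prev_heading, gyro_rate, dt, odom_heading):
    """Blend an integrated gyro heading with an odometry-derived heading."""
    imu_estimate = prev_heading + gyro_rate * dt  # fast, but drifts over time
    return ALPHA * imu_estimate + (1 - ALPHA) * odom_heading
```

Called once per control cycle, the small odometry weight continuously pulls the estimate back toward ground truth without losing the IMU's responsiveness.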

This autonomous system reduces the workload of staff, increases patrol coverage, and enhances safety enforcement with continuous monitoring.

AI Object Detection with YOLO

The object detection system is built on YOLO (You Only Look Once) deep learning models, trained specifically for industrial facility environments. It enables:
  • Real-time object detection, identifying fire extinguishers, open/closed doors or windows, etc.
  • Automated anomaly recognition, flagging anomalies and sending a report through the proper channels.

The detection pipeline is optimized using:

  • TensorRT acceleration for low-latency inference.
  • Model retraining with prison-specific datasets, ensuring high detection accuracy.
  • Threshold-based alerting, triggering officer notifications based on confidence scores.
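
The threshold-based alerting and audit logging can be sketched together as a post-processing step over YOLO-style detections. Field names and the threshold value are illustrative assumptions.

```python
from datetime import datetime, timezone

ALERT_THRESHOLD = 0.80  # assumed confidence cutoff for officer notification

def process_detections(detections, location, log):
    """Append every detection to the audit log; return those that alert."""
    alerts = []
    for det in detections:  # det: {"label": str, "confidence": float}
        record = {
            **det,
            "location": location,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        log.append(record)                 # everything is logged...
        if det["confidence"] >= ALERT_THRESHOLD:
            alerts.append(record)          # ...but only high-confidence
    return alerts                          # detections trigger notifications
```

Logging below-threshold detections while alerting only above it keeps the audit record complete without flooding officers with low-confidence noise.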

 

Detected objects are logged with timestamps, confidence levels, and associated patrol locations, creating an auditable security record.