AI Anomaly Detection On-Device Roadmap

This document outlines the current roadmap for developing an on-device AI anomaly detection system based on ONNX models. The system is designed to run entirely offline, with no cloud dependencies.

Project Overview

  • Goal: Enable embedded Linux devices to detect anomalies in real time using AI models.
  • Scope: On-device inference only; no cloud integration at this stage.
  • Model Format: ONNX (Open Neural Network Exchange) to allow flexible model updates and runtime integration.
  • Use Cases: Predictive maintenance, sensor anomaly detection, and general-purpose monitoring.

Current Status (In Progress)

Core Components

  1. ONNX Runtime Integration
    • Developing a C-based runtime system to load and execute ONNX models (see the load-and-run sketch after this list).
    • Support for CPU-only inference on embedded devices.
  2. Lua-Configurable Data Acquisition
    • Handlers implemented in Lua to acquire sensor or event data.
    • Configurable through the uMINK plugin system.
  3. Preprocessing and Feature Extraction
    • Transform raw sensor data into model-compatible input tensors (see the preprocessing sketch after this list).
    • Configurable pipeline in Lua for flexibility.
  4. Model Execution & Inference
    • Run ONNX models on-device and output anomaly scores or classifications.
    • Lightweight logging of model inference results.
  5. Signal Integration
    • uMINK signal handlers wrap AI model execution.
    • Allows chaining signals and reporting results internally.
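
As a concrete illustration of the preprocessing step, the sketch below turns a fixed-size window of raw sensor readings into a flat float tensor. It is written in C purely for illustration; in the project this pipeline is configured in Lua, and the window length (64 samples), the z-score normalization, and the single-feature layout are assumptions made for this example, not the final pipeline.

  /* Illustrative only: window length, z-score normalization and the
   * single-feature layout are example assumptions. */
  #include <math.h>
  #include <stddef.h>

  #define WINDOW_LEN 64  /* hypothetical window length */

  /* Normalize one window of raw readings into a flat float tensor
   * of shape [1, WINDOW_LEN]. */
  static void preprocess_window(const double *raw, float *tensor_out)
  {
      double mean = 0.0, var = 0.0;

      for (size_t i = 0; i < WINDOW_LEN; i++)
          mean += raw[i];
      mean /= WINDOW_LEN;

      for (size_t i = 0; i < WINDOW_LEN; i++) {
          double d = raw[i] - mean;
          var += d * d;
      }
      var /= WINDOW_LEN;

      double stddev = sqrt(var) + 1e-9;  /* guard against zero variance */

      for (size_t i = 0; i < WINDOW_LEN; i++)
          tensor_out[i] = (float)((raw[i] - mean) / stddev);
  }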
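
The following is a minimal sketch of the load-and-run path referenced in item 1, using the ONNX Runtime C API with CPU-only execution. The model file name (anomaly.onnx), the tensor names ("input", "score"), and the [1, 64] input shape are placeholders, and error handling is reduced to an abort-on-error macro; this is not the project's actual runtime code.

  /* Minimal sketch: load an ONNX model and run one CPU inference. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <onnxruntime_c_api.h>

  static const OrtApi *g_ort;

  /* Abort on any ONNX Runtime error (kept minimal for brevity). */
  #define ORT_CHECK(expr)                                        \
      do {                                                       \
          OrtStatus *st = (expr);                                \
          if (st != NULL) {                                      \
              fprintf(stderr, "ORT error: %s\n",                 \
                      g_ort->GetErrorMessage(st));               \
              g_ort->ReleaseStatus(st);                          \
              exit(1);                                           \
          }                                                      \
      } while (0)

  int main(void)
  {
      g_ort = OrtGetApiBase()->GetApi(ORT_API_VERSION);

      OrtEnv *env = NULL;
      ORT_CHECK(g_ort->CreateEnv(ORT_LOGGING_LEVEL_WARNING, "anomaly", &env));

      /* CPU-only session; no extra execution providers registered. */
      OrtSessionOptions *opts = NULL;
      ORT_CHECK(g_ort->CreateSessionOptions(&opts));
      ORT_CHECK(g_ort->SetIntraOpNumThreads(opts, 1));

      OrtSession *session = NULL;
      ORT_CHECK(g_ort->CreateSession(env, "anomaly.onnx", opts, &session));

      /* Wrap a preprocessed window (shape [1, 64]) as an input tensor. */
      float input_data[64] = {0};
      int64_t shape[2] = {1, 64};
      OrtMemoryInfo *mem = NULL;
      ORT_CHECK(g_ort->CreateCpuMemoryInfo(OrtArenaAllocator, OrtMemTypeDefault, &mem));

      OrtValue *input = NULL;
      ORT_CHECK(g_ort->CreateTensorWithDataAsOrtValue(
          mem, input_data, sizeof(input_data), shape, 2,
          ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT, &input));

      const char *input_names[]  = { "input" };   /* placeholder IO names */
      const char *output_names[] = { "score" };
      OrtValue *output = NULL;
      ORT_CHECK(g_ort->Run(session, NULL, input_names,
                           (const OrtValue *const *)&input, 1,
                           output_names, 1, &output));

      float *score = NULL;
      ORT_CHECK(g_ort->GetTensorMutableData(output, (void **)&score));
      printf("anomaly score: %f\n", score[0]);

      g_ort->ReleaseValue(output);
      g_ort->ReleaseValue(input);
      g_ort->ReleaseMemoryInfo(mem);
      g_ort->ReleaseSession(session);
      g_ort->ReleaseSessionOptions(opts);
      g_ort->ReleaseEnv(env);
      return 0;
  }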

Planned Enhancements

  • Persistent Model Storage: Support for updating models from a local source.
  • Performance Monitoring: Integration with M.perf_* functions to track inference latency and error rates (see the latency sketch after this list).
  • Modular Deployment: Package as a reusable uMINK plugin for multiple devices.
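
As a rough sketch of how per-inference latency could be measured before being reported through the planned M.perf_* counters, the example below times a single call with CLOCK_MONOTONIC. The run_inference() function is a hypothetical placeholder for the real model execution call, and the reporting path is intentionally left out.

  /* Sketch: measure one inference call in milliseconds. */
  #include <stdio.h>
  #include <time.h>

  /* Placeholder for the real ONNX inference call. */
  static void run_inference(void) { /* ... */ }

  static double timed_inference_ms(void)
  {
      struct timespec t0, t1;

      clock_gettime(CLOCK_MONOTONIC, &t0);
      run_inference();
      clock_gettime(CLOCK_MONOTONIC, &t1);

      return (t1.tv_sec - t0.tv_sec) * 1000.0 +
             (t1.tv_nsec - t0.tv_nsec) / 1e6;
  }

  int main(void)
  {
      printf("inference latency: %.3f ms\n", timed_inference_ms());
      return 0;
  }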

Development Milestones

Milestone                          Target Date   Status
ONNX Runtime Integration           Q3 2025       In Progress
Lua Data Acquisition Handlers      Q3 2025       In Progress
Feature Preprocessing Pipeline     Q3 2025       In Progress
Model Inference & Scoring          Q3 2025       In Progress
Integration with uMINK Signals     Q3 2025       Planned
Performance Monitoring & Metrics   Q4 2025       Planned
Model Update & Persistence         Q4 2025       Planned

Notes

  • No cloud dependencies: All AI computations and data storage occur locally on the device.
  • Extensibility: The system is designed to allow new anomaly detection models to be added easily via ONNX files.
  • Target Devices: Low-power embedded Linux devices with modest CPU and memory constraints.

This roadmap is actively evolving as development progresses, with a focus on lightweight, modular, and fully on-device AI anomaly detection.