# scadable add model

Generate an ONNX model wrapper in `models/`.

```shell
scadable add model <name>
```

## Example

Running `scadable add model anomaly-detector` creates `models/anomaly_detector.py`:
"""TODO: describe this model."""
from scadable import ONNXModel
class AnomalyDetector(ONNXModel):
id = "anomaly-detector"
name = "Anomaly Detector"
version = "0.1.0"
file = "models/anomaly-detector.onnx"
def preprocess(self, *args):
"""Transform raw sensor values into model input tensor."""
return list(args)
def inference(self, prediction):
"""Interpret model output into actionable result."""
return {"score": prediction[0]}What you fill in
Three things make this real:

- Drop the actual ONNX file into `models/anomaly-detector.onnx` (or wherever `file =` points).
- Implement `preprocess(*args)` to turn raw sensor values into the tensor the model expects.
- Implement `inference(prediction)` to convert model output into a dict your controllers can use.
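For orientation, a filled-in pair might look like the sketch below. The per-sensor normalization constants and the single-probability output shape are assumptions about a hypothetical model, not values from any real ONNX file.

```python
# Hypothetical preprocess/inference pair for a three-sensor anomaly model.
# The means/stds and the single-score output are illustrative only.

SENSOR_MEANS = [0.5, 12.0, 60.0]  # assumed vibration, current, temperature means
SENSOR_STDS = [0.2, 3.0, 15.0]    # assumed per-sensor standard deviations


def preprocess(*args):
    """Standardize raw readings into the flat vector the model expects."""
    return [(v - m) / s for v, m, s in zip(args, SENSOR_MEANS, SENSOR_STDS)]


def inference(prediction):
    """Assume the model emits one anomaly probability; wrap it in a dict."""
    return {"score": float(prediction[0])}
```

The same bodies would drop into the generated class methods, with `self` added back.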
## Calling from a controller

```python
from models.anomaly_detector import AnomalyDetector


class PredictiveMaintenance(Controller):
    detector = AnomalyDetector()

    @on.interval(1, MINUTES)
    def predict(self):
        result = self.detector.run(
            MotorDrive.vibration,
            MotorDrive.current,
            MotorDrive.temperature,
        )
        if result["score"] > 0.8:
            self.alert("warning", f"Anomaly score: {result['score']}")
```

The runtime ships the ONNX file inside the bundle, loads it on gateway boot, and runs inference locally. No cloud round-trips.
## Required methods

| Method | What it does | Required |
|---|---|---|
| `preprocess(*args)` | Raw values to model input tensor | Yes |
| `inference(prediction)` | Model output to dict | Yes |
| `run(*args)` | Chains preprocess to predict to inference | No (inherited) |
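The chaining that the inherited `run` performs can be sketched roughly as follows. This is an assumption about how scadable's `ONNXModel` behaves, not its actual source; `predict` here is a stand-in for the real ONNX runtime session call.

```python
class ONNXModelSketch:
    """Rough sketch of how the inherited run() likely chains the three steps.

    Not scadable's actual implementation: predict() stubs out what would be
    an onnxruntime call on the bundled model file.
    """

    def preprocess(self, *args):
        # Default pass-through, as in the generated wrapper.
        return list(args)

    def predict(self, tensor):
        # Stand-in for real ONNX inference on the bundled model.
        return [sum(tensor)]

    def inference(self, prediction):
        return {"score": prediction[0]}

    def run(self, *args):
        # preprocess -> predict -> inference, as the table describes.
        return self.inference(self.predict(self.preprocess(*args)))
```

Overriding `preprocess` and `inference` while leaving `run` alone keeps the pipeline order intact.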
## Exit codes
| Code | Meaning |
|---|---|
| 0 | File created |
| 1 | Model file already exists |
## Next steps

- `scadable compile`: bundle the model with your project
- Deploying to a Gateway: get the bundle running
