Using the RKNN toolkit for inference with non-image data
Posted: 2024-09-21 12:45
Hi,
Is it possible to use the rknn.api toolkit with non-image data? I have a preventative maintenance application: it measures input parameters such as current, motor RPM, vibration, and temperature, and then tries to predict failure.
I am converting a TFLite model that is already quantized. I have the following test code, which I am running on a PC to evaluate this first:
Code:
from rknn.api import RKNN
import numpy as np
import pandas as pd

# Function to show the output (adapt based on your specific model output)
def show_outputs(outputs):
    # Assuming the model returns a single output value for motor failure probability
    failure_prob = outputs[0]  # Extract the scalar value from the NumPy array
    if isinstance(failure_prob, np.ndarray):
        failure_prob = failure_prob.item()  # Convert NumPy array to scalar
    print(f"Motor Failure Prediction Probability: {failure_prob:.6f}")
    if failure_prob > 0.5:
        print("Motor Failure Detected")
    else:
        print("No Motor Failure Detected")
def dequantize(outputs, scale, zp):
    # Cast to float32 before applying (q - zp) * scale so an int8 output cannot overflow
    outputs[0] = (np.asarray(outputs[0], dtype=np.float32) - zp) * scale
    return outputs
if __name__ == '__main__':
    # Create RKNN object
    rknn = RKNN(verbose=True)

    # Pre-process config
    # Statistical Summary of Features Before Scaling and Balancing:
    # RPM              - Mean: 1603.866, Standard Deviation: 195.843
    # Temperature (°C) - Mean: 24.354,   Standard Deviation: 4.987
    # Vibration (g)    - Mean: 0.120,    Standard Deviation: 0.020
    # Current (A)      - Mean: 3.494,    Standard Deviation: 0.308
    # Mean and std values as NumPy arrays
    mean_values = np.array([1603.866, 24.354, 0.120, 3.494], dtype=np.float32)
    std_values = np.array([195.843, 4.987, 0.020, 0.308], dtype=np.float32)

    print('--> Configuring RKNN model')
    rknn.config(mean_values=mean_values.tolist(), std_values=std_values.tolist(), target_platform='rv1106')
    print('done')

    # Load the TFLite model
    print('--> Loading TFLite model')
    ret = rknn.load_tflite(model='models/preventive_forecast.tflite')
    if ret != 0:
        print('Load model failed!')
        exit(ret)
    print('done')

    # Build the RKNN model (no additional quantization since the model is already quantized in the tflite stage)
    print('--> Building RKNN model')
    ret = rknn.build(do_quantization=False)
    if ret != 0:
        print('Build model failed!')
        exit(ret)
    print('done')

    # Export RKNN model
    print('--> Exporting RKNN model')
    ret = rknn.export_rknn('./preventive_forecast.rknn')
    if ret != 0:
        print('Export rknn model failed!')
        exit(ret)
    print('done')

    # Prepare input data (RPM, Temperature, Vibration, Current)
    # Example input data - replace with real sensor data during actual inference
    input_data = np.array([[1700, 25.5, 0.135, 3.75]], dtype=np.float32)  # Example values for RPM, Temp, Vib, Current

    # Normalize input data using mean and std values
    normalized_input_data = (input_data - mean_values) / std_values

    # Init runtime environment
    print('--> Init runtime environment')
    ret = rknn.init_runtime()
    if ret != 0:
        print('Init runtime environment failed!')
        exit(ret)
    print('done')

    # Input Scale: 8.292830467224121, Input Zero Point: -128
    # Output Scale: 0.00390625, Output Zero Point: -128
    # Quantize the normalized input to int8: q = round(x / scale) + zero_point
    quantized_input_data = np.round(normalized_input_data / 8.292830467224121 + (-128)).astype(np.int8)

    # Inference (single call, using the quantized input prepared above)
    print('--> Running inference')
    outputs = rknn.inference(inputs=[quantized_input_data])
    # Check raw outputs
    print(f'Raw outputs from the model: {outputs}')

    # Dequantize if necessary (adjust scale and zero-point based on the quantization of your model)
    # Example scale and zp, change these based on your actual model's quantization details
    # For example, in my case: Output Tensor - Scale: 0.00390625, Zero Point: -128
    dequantized_outputs = dequantize(outputs, scale=0.00390625, zp=-128)  # Adjust scale and zero-point (zp)
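    # Note (my understanding of the affine scheme, added for clarity): with
    # scale = 0.00390625 (1/256) and zero_point = -128, the int8 range -128..127
    # maps to probabilities 0.0..0.99609375, so the 0.5 threshold in
    # show_outputs() corresponds to a raw quantized output of 0.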
    # Show results
    show_outputs(dequantized_outputs)
    print('Inference done')

    # Release the RKNN context
    rknn.release()
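As an aside: the scale and zero-point values in the comments above are the ones reported for the TFLite model's input and output tensors. In case it helps, they can be read back from the TFLite file with the standard tf.lite.Interpreter (a minimal sketch, assuming a single input and a single output tensor):

Code:
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='models/preventive_forecast.tflite')
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]   # assuming a single input tensor
out = interpreter.get_output_details()[0]  # assuming a single output tensor

print('input shape:', inp['shape'], '(scale, zero_point):', inp['quantization'])
print('output shape:', out['shape'], '(scale, zero_point):', out['quantization'])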
Will this approach work?
The function
outputs = rknn.inference(inputs=[quantized_input_data])
looks like it always expects image data.
In my case the input is just four values, from which I hope to predict motor failure.
Can someone tell me if my approach is correct?
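In case it matters, what I am assuming is that inference simply takes whatever NumPy array matches the model's input tensor shape, and that if the runtime really wants an image-like 4-D layout I can just pad the shape out without changing the values. A rough sketch of that assumption (not confirmed against the toolkit docs):

Code:
import numpy as np

# One sample with the four features: RPM, Temperature, Vibration, Current
features = np.array([[1700, 25.5, 0.135, 3.75]], dtype=np.float32)  # shape (1, 4)

# If the runtime insists on an image-like 4-D input, pad the shape without changing values
features_4d = features.reshape(1, 1, 1, 4)  # shape (1, 1, 1, 4)

# Then pass whichever shape matches the model's input tensor, e.g.:
# outputs = rknn.inference(inputs=[features_4d])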