How to use a custom TFLite model with 2 classes on a Raspberry Pi with a Coral?
Two days ago I created a custom model in TFLite from an image dataset. Its accuracy is 97.4% and it has only 2 classes (person, flower).
I converted the model to run it on the Google Coral TPU with my Raspberry Pi.
At the moment I am running into some problems; the Google Coral documentation is not working for me.
Language: Python 3
Libraries:
- Keras
- TensorFlow
- Pillow
- picamera
- NumPy
- edgetpu (Edge TPU Engine)
Project tree:
- model (subfolder)
  - model.tflite
  - labels.txt
- video_detection.py
Here is the Python code (it actually comes from the documentation):
```python
import argparse
import io
import time

import numpy as np
import picamera

import edgetpu.classification.engine


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--model', help='File path of Tflite model.', required=True)
    parser.add_argument(
        '--label', help='File path of label file.', required=True)
    args = parser.parse_args()

    with open(args.label, 'r', encoding="utf-8") as f:
        pairs = (l.strip().split(maxsplit=2) for l in f.readlines())
        labels = dict((int(k), v) for k, v in pairs)

    engine = edgetpu.classification.engine.ClassificationEngine(args.model)

    with picamera.PiCamera() as camera:
        camera.resolution = (640, 480)
        camera.framerate = 30
        _, width, height, channels = engine.get_input_tensor_shape()
        camera.start_preview()
        try:
            stream = io.BytesIO()
            for foo in camera.capture_continuous(stream,
                                                 format='rgb',
                                                 use_video_port=True,
                                                 resize=(width, height)):
                stream.truncate()
                stream.seek(0)
                input = np.frombuffer(stream.getvalue(), dtype=np.uint8)
                start_ms = time.time()
                results = engine.ClassifyWithInputTensor(input, top_k=1)
                elapsed_ms = time.time() - start_ms
                if results:
                    camera.annotate_text = "%s %.2f\n%.2fms" % (
                        labels[results[0][0]], results[0][1],
                        elapsed_ms * 1000.0)
        finally:
            camera.stop_preview()


if __name__ == '__main__':
    main()
```
How I run the script:

```shell
python3 video_detection.py --model model/model.tflite --label model/labels.txt
```
Error:

```
Traceback (most recent call last):
  File "video_detection.py", line 41, in <module>
    main()
  File "video_detection.py", line 16, in main
    labels = dict((int(k), v) for k, v in pairs)
  File "video_detection.py", line 16, in <genexpr>
    labels = dict((int(k), v) for k, v in pairs)
ValueError: not enough values to unpack (expected 2, got 1)
```
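The failing line can be reproduced in isolation: if a line of labels.txt contains only one token, `split()` yields a single-element list, and unpacking it into the two variables `k, v` inside the dict comprehension raises the ValueError above (a minimal sketch using the same parsing logic as the script; the one-token lines are an assumption about what the file looked like):

```python
# Simulated labels file whose lines hold only a label name, no numeric index.
lines = ["person", "flower"]

pairs = (l.strip().split(maxsplit=2) for l in lines)
try:
    labels = dict((int(k), v) for k, v in pairs)
except ValueError as e:
    print(e)  # not enough values to unpack (expected 2, got 1)
```

The exception comes from the `for k, v in pairs` unpacking, before `int()` is ever called.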
Right now it is hard for me to integrate a custom model and use it with the Coral.
Documentation:
- https://coral.withgoogle.com/docs/edgetpu/models-intro/
- https://coral.withgoogle.com/docs/edgetpu/api-intro/
- https://coral.withgoogle.com/docs/edgetpu/tflite-python/
- https://coral.googlesource.com/edgetpu/+/refs/heads/release-chef/edgetpu/
Thanks for reading, regards
E.
The error is in your labels.txt file:

```
labels = dict((int(k), v) for k, v in pairs)
ValueError: not enough values to unpack (expected 2, got 1)
```

It looks like some of its lines contain only one value instead of two.
- Thanks, the problem was indeed in the txt file. The labels must be `0 person` and `1 flower`, not just `person` and `flower`.
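With the corrected file format (a numeric index followed by the label name on each line), the same parsing code from the script produces the expected mapping. A minimal sketch, with the file contents written inline instead of read from disk:

```python
# Correct labels.txt contents: "<numeric index> <label name>" per line.
label_file = "0 person\n1 flower\n"

pairs = (l.strip().split(maxsplit=2) for l in label_file.splitlines())
labels = dict((int(k), v) for k, v in pairs)
print(labels)  # {0: 'person', 1: 'flower'}
```

The integer index is what `ClassifyWithInputTensor` returns as the class id, so the dict lets the script map it back to a human-readable name.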
Source: https://www.codenong.com/58392587/