
Issue in setting up high performance inference pipeline #71326

Open
Aryaman9999 opened this issue Feb 27, 2025 · 0 comments
Labels
status/new-issue (new) · type/build (build/installation issue)

Comments

@Aryaman9999

Issue Description

The new release notes mention that a serial number is no longer required to access the high-performance inference pipeline. I am trying to run it in the official PaddleX image (paddlex3.0.0rc0-paddlepaddle3.0.0rc0-gpu-cuda11.8-cudnn8.6-trt8.5), and ran paddlex --install hpi-gpu.
The installation completes successfully, but when I then run paddlex --serve --pipeline OCR --port 22 --use_hpip, I get this error: RuntimeError: The PaddleX HPI plugin is not properly installed, and the high-performance model inference features are not available.
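A small diagnostic sketch that may help narrow this down: if the serve command runs in a different Python interpreter than the one `paddlex --install hpi-gpu` installed into, the plugin will appear missing. The candidate module names below are assumptions on my part (not confirmed from PaddleX documentation); substitute whatever package the installer actually provides.

```python
import importlib.util
import sys

def plugin_available(module_name: str) -> bool:
    """Return True if the named top-level module is importable from this interpreter."""
    return importlib.util.find_spec(module_name) is not None

if __name__ == "__main__":
    # "paddlex_hpi" and "ultra_infer" are guesses at the plugin's module name;
    # replace them with the package that `paddlex --install hpi-gpu` reports installing.
    for candidate in ("paddlex_hpi", "ultra_infer"):
        status = "found" if plugin_available(candidate) else "MISSING"
        print(f"{sys.executable}: {candidate} -> {status}")
```

Running this inside the container with the same interpreter that launches `paddlex --serve` would show whether the plugin landed in a different environment.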

Version & Environment Information

Paddle version: 3.0.0-rc1
Paddle With CUDA: True

OS: ubuntu 20.04
GCC version: (GCC) 8.2.0
Clang version: N/A
CMake version: version 3.18.0
Libc version: glibc 2.31
Python version: 3.10.16

CUDA version: 11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
cuDNN version: N/A
NVIDIA driver version: 560.35.03
NVIDIA GPU list:
GPU 0: Tesla T4

@Aryaman9999 Aryaman9999 added the status/new-issue (new) and type/build (build/installation issue) labels on Feb 27, 2025