
Getting Predictions over HTTP/HTTPS Stream #735

Open
1 task done
amankumarchagti opened this issue Oct 9, 2024 · 12 comments
Labels
question Further information is requested

Comments

@amankumarchagti

Search before asking

  • I have searched the Inference issues and found no similar feature requests.

Question

Hi there, I am using the code below for inference on my RPi in headless mode.

```python
# import the InferencePipeline interface
from inference import InferencePipeline
# import a built-in sink called render_boxes (sinks are the logic that happens after inference)
from inference.core.interfaces.stream.sinks import render_boxes

# create an inference pipeline object
pipeline = InferencePipeline.init(
    model_id="cow-lie-stand-walk/2",  # set the model id to a yolov8x model with input size 1280
    video_reference="rtsp://192.168.1.100:5543/live/channel0",  # set the video reference (source of video); it can be a link/path to a video file, an RTSP stream URL, or an integer representing a device id (usually 0 for built-in webcams)
    on_prediction=render_boxes,  # tell the pipeline object what to do with each set of inference results by passing a function
    api_key="",  # provide your Roboflow API key for loading models from the Roboflow API
)

# start the pipeline
pipeline.start()

# wait for the pipeline to finish
pipeline.join()
```

How can I retrieve predictions over HTTP/HTTPS? Do I need to initiate a UDP sink?

Additional

NA

@amankumarchagti amankumarchagti added the question Further information is requested label Oct 9, 2024
@amankumarchagti amankumarchagti changed the title Getting Prediction over HTTP/HTTPS Getting Predictions over HTTP/HTTPS Stream Oct 9, 2024
@PawelPeczek-Roboflow
Collaborator

Hello -
The UDP sink lets you expose predictions via a UDP socket, so it is not 100% the exact thing you wish to achieve. For HTTP you have two options:

  • if the intent is that on each prediction InferencePipeline makes an HTTP call to the service collecting data, you should create a sink to make the request (you can take inspiration from the UDP sink implementation), but I recommend sending a request once per batch of predictions, otherwise for practical reasons the solution would be too slow (see the sketch after this list)
  • if it works for your use case, your service may poll the processing results from the InferencePipeline over HTTP. This is quite a new feature (still in the experimental phase) and works only for Workflows running inside an inference server; here you may find docs for the new feature I am talking about
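
For reference, a minimal sketch of such a batching sink. The collector URL, the /predictions path, and the batch size are hypothetical assumptions, not part of the inference API, and the payload shape must match whatever your service actually accepts:

```python
import requests

from inference import InferencePipeline
from inference.core.interfaces.camera.entities import VideoFrame

# hypothetical endpoint of the service collecting predictions
COLLECTOR_URL = "http://127.0.0.1:80/predictions"
BATCH_SIZE = 16  # illustrative batch size; tune for your latency needs

_buffer: list = []


def http_sink(predictions: dict, video_frame: VideoFrame) -> None:
    # buffer predictions and send one HTTP request per batch, so the
    # pipeline does not pay one network round-trip per frame
    _buffer.append(predictions)
    if len(_buffer) >= BATCH_SIZE:
        batch = list(_buffer)
        _buffer.clear()
        try:
            requests.post(COLLECTOR_URL, json=batch, timeout=2)
        except requests.RequestException:
            pass  # drop the batch rather than stall the pipeline


pipeline = InferencePipeline.init(
    model_id="cow-lie-stand-walk/2",
    video_reference="rtsp://192.168.1.100:5543/live/channel0",
    on_prediction=http_sink,  # custom sink in place of render_boxes
    api_key="",
)
pipeline.start()
pipeline.join()
```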

@amankumarchagti
Author

Hi @PawelPeczek-Roboflow, can you give some code reference?

@PawelPeczek-Roboflow
Collaborator

PawelPeczek-Roboflow commented Oct 10, 2024

Which particular scenario are you asking for?

@amankumarchagti
Author

for the 1st one:
> if the intent is that on each prediction InferencePipeline makes an HTTP call to the service collecting data, you should create a sink to make the request (you can take inspiration from the UDP sink implementation), but I recommend sending a request once per batch of predictions, otherwise for practical reasons the solution would be too slow

@PawelPeczek-Roboflow
Collaborator

Well, that would just be sending a request using the requests library, depending on the format your service accepts. Do you have any specification of the endpoint on the other end?
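
For example, a minimal sketch with the requests library, assuming a hypothetical JSON endpoint; the URL, path, and payload shape are placeholders for whatever your service actually accepts:

```python
import requests

# placeholder payload in the shape the hypothetical service accepts
payload = {"predictions": [{"class": "cow", "confidence": 0.92}]}

response = requests.post("http://127.0.0.1/predictions", json=payload, timeout=2)
response.raise_for_status()  # fail loudly if the service rejects the request
```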

@amankumarchagti
Author

yes, it will be on 127.0.0.1:80

@PawelPeczek-Roboflow
Collaborator

Ok, but this does not tell us anything about the endpoint that will accept the data. Could you share something like a Swagger spec?

@amankumarchagti
Author

I didn't understand the question. Can you give some reference?

@PawelPeczek-Roboflow
Collaborator

A REST API accepts requests formatted in a very specific way, dependent on the code of the service exposing the REST API.
Services may expose a meta-definition of their endpoints in Swagger format.
An example can be seen here.
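
To illustrate, a minimal hypothetical receiving service built with FastAPI publishes such a meta-definition automatically (Swagger UI at /docs, raw OpenAPI schema at /openapi.json). The path and payload model here are made up for the example:

```python
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class PredictionBatch(BaseModel):
    # hypothetical payload shape: a list of per-frame prediction dicts
    predictions: List[dict]


@app.post("/predictions")
def receive_predictions(batch: PredictionBatch) -> dict:
    # a real service would store or process the batch here
    return {"received": len(batch.predictions)}

# run with: uvicorn service:app --host 127.0.0.1 --port 80
# Swagger UI is then served at http://127.0.0.1/docs
```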

@PawelPeczek-Roboflow
Collaborator

Without knowing the spec of the endpoint, it is nearly impossible / very hard to create a client to call the API.

@amankumarchagti
Author

Can you please explain what an endpoint means here? @PawelPeczek-Roboflow

@PawelPeczek-Roboflow
Collaborator

I believe this is quite a good article that may help: https://beeceptor.com/docs/concepts/http-endpoints/
