I'm working on a project that involves running multi-turn model evaluations during training. Most of these evaluation scripts are based on the OpenAI API format, so it would be really helpful if I could start and stop the vllm OpenAI-compatible inference server programmatically instead of from the command line.
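I know I could wrap the existing command-line entrypoint in a subprocess, roughly like the sketch below (the model name, port, and health-check polling are just placeholder choices), but managing a child process from inside a training loop feels fragile:

```python
import subprocess
import sys
import time

import requests

# Launch the stock OpenAI-compatible entrypoint as a child process.
# The model name and port are placeholders for whatever the run needs.
server_proc = subprocess.Popen(
    [
        sys.executable, "-m", "vllm.entrypoints.openai.api_server",
        "--model", "facebook/opt-125m",
        "--port", "8000",
    ]
)

# Block until the server answers health checks.
while True:
    try:
        if requests.get("http://localhost:8000/health").status_code == 200:
            break
    except requests.ConnectionError:
        pass
    time.sleep(1)

# ... run the OpenAI-format evaluation scripts against http://localhost:8000/v1 ...

# Shut the server down between training steps.
server_proc.terminate()
server_proc.wait()
```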
I am new to vllm and am curious whether there is a built-in way to achieve this. If not, how challenging would it be to implement this functionality ourselves? Here is a code example illustrating the kind of interface I am aiming for (the `OpenAIServer` class is hypothetical, not an existing vllm API):
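```python
from openai import OpenAI

# Hypothetical interface -- `OpenAIServer` and its methods do not exist
# in vllm today; this is the kind of API I am hoping for.
from vllm.entrypoints.openai import OpenAIServer  # hypothetical import

server = OpenAIServer(model="facebook/opt-125m", port=8000)  # hypothetical
server.start()  # bring the OpenAI-compatible server up programmatically

# Evaluation scripts talk to it exactly as they would to the OpenAI API.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
reply = client.chat.completions.create(
    model="facebook/opt-125m",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(reply.choices[0].message.content)

server.stop()  # tear it down before resuming training
```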
Any guidance or suggestions would be greatly appreciated!
Replies: 1 comment

- actually