Releases: roboflow/inference

v0.9.0

06 Oct 15:28
380deaa

Summary

This release includes:

  • The new inference-cli, which makes starting the inference server easy and automated (see the sketch after this list)
  • A new inference-client, a helpful utility for interacting with the inference HTTP API
  • Updates and added features to the Device Manager (enterprise feature)
  • Unified model APIs so that all Roboflow models adhere to a consistent processing pipeline
  • Bug fixes, maintenance
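
As a quick illustration of the new tooling, a sketch of installing the CLI and managing a local server with it; the subcommand names are assumptions based on the package's purpose, so check the inference-cli README for the exact interface:

    # Install the new CLI and use it to manage a local inference server.
    # Subcommand names below are assumptions, not confirmed output.
    pip install inference-cli
    inference server start
    inference server stop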

Breaking Changes:

  • Some model APIs have been updated (see instance segmentation and classification)

v0.8.9

03 Oct 21:54
60b72a0

Summary

This release includes a new environment variable, DISABLE_INFERENCE_CACHE. When set to true, internal inference caching is disabled. Logging has also been updated to be less verbose by default; to increase verbosity, set LOG_LEVEL=DEBUG. Both variables can be passed to the server container, as shown below.
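
For example, assuming the CPU server image is named roboflow/roboflow-inference-server-cpu (the port and flags here are illustrative), both variables can be set when starting the container:

    docker run -it --rm -p 9001:9001 \
        -e DISABLE_INFERENCE_CACHE=true \
        -e LOG_LEVEL=DEBUG \
        roboflow/roboflow-inference-server-cpu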

v0.8.8

27 Sep 11:41
c23622a

Summary

Contains a fix in imread/imdecode logic. Also moves logic out of version.py to fix GitHub Actions.

v0.8.7

26 Sep 18:40
eb59d01

Summary

  • Abandons Pillow in favor of OpenCV for faster end-to-end processing
  • Fixes a bug in the new device management logic
  • Upgrades version-checking logic
  • Adds an environment variable to fix Jetson 5.1.1 images

v0.8.6

22 Sep 16:17
28af07f

Summary

This release includes logic to detect and log when a newer release is available. It also contains a new enterprise device manager.

v0.8.5

19 Sep 13:42
c534d34

Summary

Contains bug fixes for configurations that use the LICENSE_SERVER setting.

v0.8.4

15 Sep 16:30
2f8f773

Summary

  • Image loading is now multi-threaded for batch requests. This should increase total FPS, especially for batch requests that include large images (see the sketch after this list).
  • The regression test GitHub Action now runs on a GitHub Actions runner.
  • The extras_require configuration has been fixed for the various distribution packages.
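
A minimal sketch of the idea behind the multi-threaded loader, assuming images arrive as encoded bytes and are decoded with OpenCV; the function names are illustrative, not the server's internals:

    import cv2
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def decode_image(raw: bytes) -> np.ndarray:
        # cv2.imdecode releases the GIL, so threads decode in parallel
        buf = np.frombuffer(raw, dtype=np.uint8)
        return cv2.imdecode(buf, cv2.IMREAD_COLOR)

    def load_batch(raw_images: list) -> list:
        # Decode every image in the batch concurrently rather than sequentially
        with ThreadPoolExecutor() as pool:
            return list(pool.map(decode_image, raw_images))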

v0.8.2

07 Sep 20:13
7cc992e

Summary

Updated the Jetson images so that the default execution provider is CUDA. TensorRT is now an optional configuration via the environment variable ONNXRUNTIME_EXECUTION_PROVIDERS=TensorrtExecutionProvider (see the example after this list). The images have also been renamed to:

  • roboflow/roboflow-inference-server-jetson-4.5.0
  • roboflow/roboflow-inference-server-jetson-4.6.1
  • roboflow/roboflow-inference-server-jetson-5.1.1
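
For example, to opt back into TensorRT on a JetPack 5.1.1 device (the image tag is taken from the list above; the other flags are illustrative):

    docker run -it --rm --runtime nvidia -p 9001:9001 \
        -e ONNXRUNTIME_EXECUTION_PROVIDERS=TensorrtExecutionProvider \
        roboflow/roboflow-inference-server-jetson-5.1.1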

v0.8.1

05 Sep 18:41
1684822

Summary

  • Optional ByteTrack support for the UDP interface
  • Updated SAM and CLIP requirements and added README quickstarts
  • Bug fix for single-channel numpy strings

Breaking Changes

  • In the UDP JSON output, the key class_name has been renamed to class to match HTTP responses (illustrated below).
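
Schematically, a prediction that previously serialized like the first line below now serializes like the second (other fields elided for brevity):

    {"class_name": "car", "confidence": 0.91}   <- before v0.8.1 (UDP only)
    {"class": "car", "confidence": 0.91}        <- v0.8.1 and later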

v0.8.0

31 Aug 19:12
d823e39

Summary

This release includes a bit of an overhaul of the model APIs. Because this repository started as an internal tool for hosting inference logic, the APIs were tailored to an HTTP interface. With this release, using inference within your Python code is much smoother and easier, and imports are less verbose. See the README and docs for the new usage. Additionally, a new interface is provided for consuming video streams and broadcasting the results over UDP. This interface is tuned for low latency and is ideal for use cases that need the most up-to-date information possible from a video stream. See https://blog.roboflow.com/udp-inference/ for more details.

Breaking Changes

Model infer() functions are now defined to take keyword arguments instead of a single request argument. To continue inferring with request objects, a new infer_from_request() method is provided (see the sketch below).
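
A minimal sketch of the new call style, assuming an object detection model loaded through the new Python interface; the loader path, model ID, and keyword arguments below are assumptions, not confirmed signatures, so consult the README for the exact API:

    # Hypothetical sketch -- names and arguments are assumptions
    from inference.models.utils import get_roboflow_model

    model = get_roboflow_model(model_id="my-project/1", api_key="MY_API_KEY")

    # New style: pass inputs directly as keyword arguments
    predictions = model.infer(image="path/to/image.jpg")

    # Old style: build a request object and use the new method
    # predictions = model.infer_from_request(request)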