
Releases: roboflow/inference

v0.19.0

18 Sep 19:28
13332f8

What's Changed

Full Changelog: v0.18.1...v0.19.0

v0.18.1

06 Sep 17:18
3f4a264

🔨 Fixed

The new VLM as Classifier Workflows block had a bug - multi-label classification results were generated with a "class_name" field instead of "class" in the prediction details: #637

🌱 Changed

  • Increase timeout to 30 minutes for .github/workflows/test_package_install_inference_with_extras.yml by @grzegorz-roboflow in #635

Full Changelog: v0.18.0...v0.18.1

v0.18.0

06 Sep 13:09
297d420

🚀 Added

💪 New VLMs in Workflows

We've shipped blocks to integrate with Google Gemini and Anthropic Claude, but that's not everything - the OpenAI block got an update too. The new "VLM interface" of the block assumes that it can be prompted using pre-configured options and that the model output can be processed by a set of formatter blocks to achieve the desired end. It is now possible to:

  • use classification prompting in the VLM block and apply the VLM as Classifier block to turn the output string into a classification result, which can then be processed by other blocks from the ecosystem
  • achieve the same with object-detection prompting and the VLM as Detector block, which converts the text produced by the model into sv.Detections(...)

From now on, VLMs are much easier to integrate.

🧑‍🦱 USE CASE: PII protection when prompting VLM

Detect faces first, apply the blur prediction visualisation, and then ask the VLM what the person's eye colour is - it won't be able to tell 🙃

👨‍🎨 USE CASE: VLM as object detection model

👓 USE CASE: VLM as secondary classifier

Turn VLM output into classification results and process them using downstream blocks - here we ask Gemini to classify crops of dogs by breed, then extract the top class as a property.

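Conceptually, the VLM as Classifier conversion takes the raw text the model returns and parses it into a structured classification result. The sketch below is a hypothetical stand-alone illustration, not the block's actual implementation: the JSON shape and the `parse_vlm_classification` helper are assumptions, while the "class" field name matches the prediction details mentioned in these notes.

```python
import json

def parse_vlm_classification(raw: str) -> dict:
    """Hypothetical sketch: turn a VLM's JSON text reply into a
    classification-style result. Not the actual block implementation."""
    data = json.loads(raw)
    predictions = [
        # "class" is the field name used in prediction details (see the v0.18.1 fix)
        {"class": p["class"], "confidence": float(p.get("confidence", 1.0))}
        for p in data.get("predictions", [])
    ]
    # extract the top class as a property, as in the dog-breed example above
    top = max(predictions, key=lambda p: p["confidence"]) if predictions else None
    return {"predictions": predictions, "top_class": top["class"] if top else None}

raw_reply = '{"predictions": [{"class": "beagle", "confidence": 0.91}, {"class": "pug", "confidence": 0.07}]}'
result = parse_vlm_classification(raw_reply)
```

Downstream blocks can then consume the structured result the same way they would consume output from a dedicated classification model.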

🤯 Workflows previews in documentation 📖

Thanks to @joaomarcoscrs we can embed Workflows into documentation pages. Just take a look at how amazing it is ❗

🌱 Changed

BREAKING: Batch[X] kinds removed from Workflows

What was changed and why?

In the inference 0.18.0 release we decided to make a drastic move to heal the ecosystem of the problem of ambiguous kind names (Batch[X] vs X - see more here).

The change is breaking only for non-Roboflow Workflow plugins that depend on imports from the inference.core.workflows.execution_engine.entities.types module. To the best of our knowledge, no such plugin exists.

The change is not breaking in terms of running Workflows on the Roboflow platform or on-prem, provided that no external plugins were used.

Migration guide

Migration should be relatively easy - in the code of a Workflow block, all instances of

from inference.core.workflows.execution_engine.entities.types import BATCH_OF_{{KIND_NAME}}

should be replaced with

from inference.core.workflows.execution_engine.entities.types import {{KIND_NAME}}
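For plugins with many blocks, the rename can be scripted. A minimal sketch - the `migrate_kind_imports` helper is hypothetical and not part of inference:

```python
import re

# Rewrites BATCH_OF_{{KIND_NAME}} identifiers to the plain {{KIND_NAME}} form
# described in the migration guide. Hypothetical one-off helper, not part of
# the inference package.
_BATCH_KIND = re.compile(r"\bBATCH_OF_([A-Z0-9_]+)\b")

def migrate_kind_imports(source: str) -> str:
    return _BATCH_KIND.sub(r"\1", source)

before = (
    "from inference.core.workflows.execution_engine.entities.types "
    "import BATCH_OF_OBJECT_DETECTION_PREDICTION_KIND"
)
after = migrate_kind_imports(before)
# the import now references OBJECT_DETECTION_PREDICTION_KIND
```

Running something like this over a plugin's source tree covers both the import lines and any later usages of the renamed constants.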

PR with changes as reference: #618

Full Changelog: v0.17.1...v0.18.0

v0.17.1

03 Sep 09:20
03860f0

❗IMPORTANT ❗Security issue in opencv-python

This release provides a fix for the following security issue:

opencv-python versions before v4.8.1.78 bundled libwebp binaries in wheels that are vulnerable to GHSA-j7hp-h8jx-5ppr. opencv-python v4.8.1.78 upgrades the bundled libwebp binary to v1.3.2.

We advise all clients using inference to migrate, especially in production environments.

Full Changelog: v0.17.0...v0.17.1

v0.17.0

30 Aug 14:55
dc65d23

🚀 Added

💪 More Classical Computer Vision blocks in workflows

Good news for the fans of classical computer vision!
We heard you – and we’ve added a bunch of new blocks to enhance your workflows.

Basic operations on images


Camera focus check


🚀 Upgrade of CLIP Comparison and Roboflow Dataset Upload blocks

We’ve made it even more versatile. The new outputs allow seamless integration with many other blocks, enabling powerful workflows like:

detection → crop → CLIP classification (on crops) → detection class replacement

Get ready to streamline your processes with enhanced compatibility and new possibilities!


For Roboflow Dataset Upload @ v2 there is now the possibility to sample a percentage of the data to upload, and we changed the default size of saved images to be bigger.

Do not worry! All your old Workflows using the mentioned blocks are unaffected by this change, thanks to versioning 😄

💥 New version of 📖 Workflow docs 🔥

The Wait is Over – Our Workflows Documentation is Finally Here!

We’ve revamped and expanded the documentation to make your experience smoother. It’s now organized into three clear sections:

  • General Overview: Perfect for getting you up and running quickly.
  • Mid-Level User Guide: Gain a solid understanding of the ecosystem without diving too deep into the technical details.
  • Detailed Developer Guide: Designed for contributors, packed with everything you need to develop within the ecosystem.

Check it out and let us know what you think of the new docs!

🌱 Changed

🔨 Fixed

🏅 New Contributors

Full Changelog: v0.16.3...v0.17.0

v0.16.3

22 Aug 19:23
bbf64e1

🔨 Fixed

🚀 Added

SAM2 extension

When making inference with the SAM2 model, you may request the inference package and inference server to cache prompts and low-resolution masks from your inputs, to be re-used later upon request. You are given two parameters (both in the SAM2 request payload and in the SegmentAnything2.segment_image(...) method):

  • save_logits_to_cache
  • load_logits_from_cache

These decide how the functionality works. Saving logits masks to the cache makes it possible to re-use them for consecutive inferences against the same image. Enabling loading triggers a search through the cache for the most similar prompt cached for this specific image, retrieving its mask. The mechanism is useful when the same image is segmented multiple times with slightly different sets of prompts - injecting previous masks in that scenario may lead to better results.

Please note that this feature is different from the cache for image embeddings, which speeds up consecutive requests with the same image. If you don't wish this feature to be enabled, set DISABLE_SAM2_LOGITS_CACHE=True in your env.

🏅 @probicheaux and @tonylampada added the functionality in #582
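To illustrate, the two flags could appear in a SAM2 request payload along these lines. Only save_logits_to_cache and load_logits_from_cache come from this release; the surrounding payload fields and the helper are illustrative assumptions, not the exact request schema:

```python
# Sketch of composing a SAM2 request payload with the logits-cache flags.
# Only save_logits_to_cache / load_logits_from_cache are documented above;
# the other fields and this helper are assumptions for illustration.
def build_sam2_payload(image_b64: str, prompts: list, reuse_cached: bool) -> dict:
    return {
        "image": {"type": "base64", "value": image_b64},
        "prompts": prompts,
        "save_logits_to_cache": True,            # keep low-res masks for later requests
        "load_logits_from_cache": reuse_cached,  # search cache for the most similar prompt
    }

payload = build_sam2_payload("...", [{"x": 100, "y": 200, "positive": True}], reuse_cached=True)
```

On a second pass over the same image with a slightly different prompt set, enabling loading lets the server seed segmentation with the closest cached mask.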

Remaining changes

  • @EmilyGavrilenko added Workflow block search metadata to improve UI experience in #588
  • @grzegorz-roboflow added internal parameter for workflows request denoting preview in UI #595
  • @grzegorz-roboflow improved usage tracking extending it to models in #601 and #548
  • Workflows were equipped with a new batch-oriented input - VideoFrameMetadata - letting blocks process videos statefully; see #590 and #597, more docs will come soon

Full Changelog: v0.16.2...v0.16.3

v0.16.2

16 Aug 18:00
088eef6

🚀 Added

Segment Anything 2 in workflows 🥳

We prepared a great amount of changes to Workflows and could not really decide which update to start with, but in the end we found the onboarding of the SAM 2 model most exciting.

Thanks to @hansent's effort, we have introduced the SAM 2 workflow block.

You can use SAM2 standalone, or you can ground its predictions with other detection models, which is the true power of Workflows. Thanks to grounding, you can generate an instance segmentation mask for each bounding box predicted by your object detection model.

❗ We do not support SAM2 on the Roboflow Hosted Platform yet, but you can use the inference server start command to run a local server supporting the SAM2 model and connect it to the Workflows UI to run examples.

workflows 🤝 SAHI

We've added a set of blocks that let you apply the SAHI technique, based on utilities provided by supervision.

We are going to work on simplifying SAHI usage in the Workflows UI, but for now you need to use three blocks to apply the technique effectively.

Classical Computer Vision methods in workflows 🔥

We have not forgotten about good old friends - that's why we also added a bunch of blocks with classical Computer Vision algorithms.

🌱 Changed

🔨 Fixed

  • Fix security issues with landing page by @PawelPeczek-Roboflow in #584
  • Fixed a regression in the Custom Metadata Block that was introduced in v0.16.0 - PR with the fix by @chandlersupple (#573) - we kindly ask clients relying on the Custom Metadata Block and running their Workflows on-prem to update inference or the inference server
  • A bug in the Workflows Execution Engine made it impossible to feed the same block with two identical selectors (fixed in #581)

❗ In release 0.16.0 we introduced a bug impacting workflows and inference_sdk

The mistake was introduced in #565 and fixed in #585 (both by @PawelPeczek-Roboflow 😢) and caused issues with the order of results in specific Workflows blocks:

  • blocks with Roboflow models, whenever used with batch input (for instance when a workflow was run against multiple images, or Dynamic Crop was used), mismatched the order of predictions with respect to the order of images
  • the same was true for the OpenAI block and the GPT-4V block
  • the problem was also introduced into inference_sdk, so whenever the client was called with multiple images, results may have been mismatched

We advise all our clients to upgrade to the new release and abandon usage of inference==0.16.0.

🏅 New Contributors

Full Changelog: v0.16.0...v0.16.1

v0.16.0

09 Aug 16:48
0e2bc54

❗ In release 0.16.0 we introduced a bug impacting workflows and inference_sdk

The mistake was introduced in #565 and fixed in #585 (both by @PawelPeczek-Roboflow 😢) and caused issues with the order of results in specific Workflows blocks:

  • blocks with Roboflow models, whenever used with batch input (for instance when a workflow was run against multiple images, or Dynamic Crop was used), mismatched the order of predictions with respect to the order of images
  • the same was true for the OpenAI block and the GPT-4V block
  • the problem was also introduced into inference_sdk, so whenever the client was called with multiple images, results may have been mismatched

🚀 Added

Next bunch of updates for workflows 🥳

⚓ Versioning

From now on, both the Execution Engine and Workflows blocks are versioned to ensure greater stability across the changes we make to improve the ecosystem. Each workflow definition now declares a version, forcing the app to run against a specific version of the Execution Engine. If the denoted version is 1.1.0, the workflow requires Execution Engine >=1.1.0,<2.0.0, and we gain the ability to expose multiple major versions of the EE concurrently in the library (doing our best to ensure that within a major version we only add features and keep supporting everything released earlier within the same major). On top of that:

  • the block manifest metadata field name will now be understood as the name of a block family, with an additional tag called version that can be added; we propose the following naming convention for block names: namespace/family_name@v1. Thanks to these changes, anyone can maintain multiple versions of the same block (appending new implementations to their plugin), ensuring backwards compatibility across breaking changes
  • each block manifest class may optionally expose the class method get_execution_engine_compatibility(...), which is used while loading a model to ensure that the selected Execution Engine is capable of running the specific block
✋ Example block manifest
class BlockManifest(WorkflowBlockManifest):
    model_config = ConfigDict(
        json_schema_extra={
            "name": "My Block",
            "version": "v1",
            ...
        }
    )
    type: Literal["my_namespace/my_block@v1"]
    ...

    @classmethod
    def get_execution_engine_compatibility(cls) -> Optional[str]:
        return ">=1.0.0,<2.0.0"

🚨 ⚠️ BREAKING ⚠️ 🚨 Got rid of asyncio in Execution Engine

If you were tired of coroutines performing compute-heavy tasks in Workflows:

class MyBlock(WorkflowBlock):
    async def run(self):
        pass

we have great news: we've got rid of asyncio in favour of standard functions and methods, which are much more intuitive in our setup. This change is obviously breaking for custom blocks, but worry not. Here is an example of what needs to be changed - usually you just need to remove the async markers, but sometimes, unfortunately, pieces of asyncio code will need to be recreated.

class MyBlock(WorkflowBlock):
    def run(self):
        pass

Endpoint to expose workflow definition schema

Thanks to @EmilyGavrilenko (#550), the UI will now be able to verify syntax errors in workflow definitions automatically.

Roboflow Dedicated Deployment is closer and closer 😃

Thanks to @PacificDou, the inference server is getting ready to support new functionality nicknamed Dedicated Deployment. Stay tuned for more details - we can tell you this is something worth waiting for. You may find some hints in the PR.

🔨 Fixed

🚨 ⚠️ BREAKING ⚠️ 🚨 HTTP client of inference server changes default behaviour

The default value of the flag client_downsizing_disabled was changed from False to True in release 0.16.0! For clients running models with input sizes above 1024x1024 on the hosted platform, this should improve prediction quality (the previous default caused the input to be downsized and then artificially upsized on the server side, with worse image quality). Some clients may prefer to keep the previous setting to potentially improve speed (when the internet connection is a bottleneck and large images are submitted despite a small model input size).

If you liked the previous behaviour more - simply:

from inference_sdk import InferenceHTTPClient, InferenceConfiguration

client = InferenceHTTPClient(
    "https://detect.roboflow.com",
    api_key="XXX",
).configure(InferenceConfiguration(
    client_downsizing_disabled=False,
))

setuptools migrated to a version above 70.0.0 to mitigate a security issue

We've updated the rf-clip package to support setuptools>70.0.0 and bumped the version on the inference side.

🌱 Changed

🏅 New Contributors

Full Changelog: v0.15.2...v0.16.0

v0.15.2

03 Aug 00:03
ecdfe11

What's Changed

Full Changelog: v0.15.1...v0.15.2

v0.15.1

20 Jul 23:23
6e6d1d7

What's Changed

Full Changelog: v0.15.0...v0.15.1