from transformers import AutoModelForTokenClassification
model = AutoModelForTokenClassification.from_pretrained("segment-any-text/sat-1l-sm")
Expected behavior
Expected the model to load without any issue. However, I get the following error instead:
Traceback (most recent call last):
File "/Users/pim.jv/Documents/Code/wtpsplit/test_hf.py", line 4, in <module>
model = AutoModelForTokenClassification.from_pretrained("segment-any-text/sat-1l-sm", force_download=False)
File "/opt/homebrew/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 444, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "/opt/homebrew/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 940, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "/opt/homebrew/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 655, in __getitem__
raise KeyError(key)
KeyError: 'xlm-token'
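The failure mode can be illustrated in isolation: `AutoConfig` looks the config's `model_type` up in a registry of known architectures, and an unregistered key raises `KeyError`. A minimal sketch (the dict below is an illustrative stand-in, not the real `CONFIG_MAPPING`):

```python
# Stand-in for transformers' CONFIG_MAPPING: a plain mapping from
# model_type strings to config classes (entries here are illustrative).
CONFIG_MAPPING = {
    "xlm-roberta": "XLMRobertaConfig",
    "bert": "BertConfig",
}

# The hub checkpoint's config.json declares an unregistered model_type.
config_dict = {"model_type": "xlm-token", "base_model": "xlm-roberta-base"}

try:
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
except KeyError as err:
    # Reproduces the KeyError: 'xlm-token' seen in the traceback above
    print(f"KeyError: {err}")
```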
Hey, there is no config class registered for this model type. The type and its config need to be added to the mapping. @zucchini-nlp is that ok with you? I could try to fix it
Seems like you are trying to load XLMForTokenClassification. The model type should be corrected in the config.json file on the hub; XLM models can be loaded via the xlm-roberta model type. If the model belongs to you, you can update it yourself; otherwise, open a PR on the model page.
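One way to try that suggestion locally, as a sketch: download the checkpoint, patch `model_type` in its `config.json` to `xlm-roberta`, and load from the patched directory. The snippet below only shows the config patch using the standard library; the temp directory stands in for a real local snapshot of `segment-any-text/sat-1l-sm`, and the approach assumes the weights are xlm-roberta compatible.

```python
import json
import tempfile
from pathlib import Path

# Stand-in for a locally downloaded checkpoint directory; in practice this
# would be a snapshot of segment-any-text/sat-1l-sm.
checkpoint_dir = Path(tempfile.mkdtemp())
config_path = checkpoint_dir / "config.json"
config_path.write_text(json.dumps({"model_type": "xlm-token",
                                   "base_model": "xlm-roberta-base"}))

# Patch the unregistered model type to one transformers knows about,
# as suggested above (assumes the weights are xlm-roberta compatible).
cfg = json.loads(config_path.read_text())
cfg["model_type"] = "xlm-roberta"
config_path.write_text(json.dumps(cfg, indent=2))

print(json.loads(config_path.read_text())["model_type"])  # xlm-roberta
```

Loading `AutoModelForTokenClassification.from_pretrained(checkpoint_dir)` afterwards should then resolve a config class instead of raising `KeyError: 'xlm-token'`.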
@zucchini-nlp Just one thing seems weird to me, and I suppose it is why model loading fails: the config.json of this model has "base_model": "xlm-roberta-base". I am not sure that is correct; shouldn't base_model be a different one?
System Info

transformers version: 4.29.2

Who can help?

No response

Information

Tasks

examples folder (such as GLUE/SQuAD, ...)

Reproduction
Set up a Python virtual environment with the command
python -m venv .venv
Activate the virtual environment with
source .venv/bin/activate
Install the following Python dependencies:
transformers==4.29.2
accelerate==0.19.0
datasets
pysbd
wandb
h5py
nltk
spacy
ersatz
iso-639
scikit-learn==1.2.2
numpy==1.23.5
pydantic
torchinfo
conllu
pandarallel
cohere
replicate
onnx
onnxruntime
mosestokenizer
cached_property
tqdm
skops
pandas
protobuf==3.20
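For convenience, the dependency list above can be collected into a requirements.txt and installed in one step (a sketch; the pins are exactly those listed in this report):

```shell
# Write the reported dependency pins to requirements.txt
cat > requirements.txt <<'EOF'
transformers==4.29.2
accelerate==0.19.0
datasets
pysbd
wandb
h5py
nltk
spacy
ersatz
iso-639
scikit-learn==1.2.2
numpy==1.23.5
pydantic
torchinfo
conllu
pandarallel
cohere
replicate
onnx
onnxruntime
mosestokenizer
cached_property
tqdm
skops
pandas
protobuf==3.20
EOF

# Then install everything at once (requires network access):
# pip install -r requirements.txt
```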
Run the lines of Python code shown at the top of this report.