Alias duplication for llama3 in config template #295

Open
joshbainbridge opened this issue Jun 23, 2024 · 0 comments

In the default config template the 'llama3' alias is used twice: once for groq and again for ollama. The ollama entry appears to take priority and masks the groq one. Should these have unique identifiers?
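
For illustration, here's a minimal sketch of the kind of duplication I mean, assuming a YAML config with per-client model aliases (the client and field names here are hypothetical, not the actual template):

```yaml
clients:
  - type: groq
    models:
      - name: llama3-70b-8192
        alias: llama3        # first use of the 'llama3' alias
  - type: ollama
    models:
      - name: llama3:70b
        alias: llama3        # duplicate alias; this entry masks the groq one
```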

Also related to the ollama config: it currently targets the 'llama3:70b' model. I'd propose changing this to 'llama3', the default 8B model. Most users won't have the roughly 40GB of GPU memory needed to run the 70B model practically, and are more likely to have the default model already installed.
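
Continuing the same hypothetical sketch, the ollama entry could then target the default tag and use a distinct alias:

```yaml
  - type: ollama
    models:
      - name: llama3             # default 8B tag instead of llama3:70b
        alias: ollama-llama3     # example of a unique alias to avoid the clash
```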

Great project by the way, really appreciate all the hard work.
