7.23. Language model#

Added in version 14.1.0.

Note

This is an experimental feature and is not stable yet.

7.23.1. Summary#

Language models are useful for full text search too. Groonga can integrate with language models.

Groonga uses language models locally. We plan to provide a tool to manage local language models in the future, but it doesn't exist yet. For now, you need to download one or more language models manually.

This feature uses llama.cpp internally, so only GGUF formatted language models can be used. See also the “supported models” section in the llama.cpp README.

7.23.2. How to manage language models#

You need to put GGUF formatted language models in ${PREFIX}/share/groonga/language_models/.

For example: /usr/local/share/groonga/language_models/mistral-7b-v0.1.Q4_K_M.gguf
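Here is a minimal sketch of placing a downloaded model file under the default /usr/local prefix. The file name is just an example; adjust the prefix and file name to match your installation and the model you downloaded:

# Create the language model directory if it doesn't exist yet.
sudo mkdir -p /usr/local/share/groonga/language_models/
# Copy the downloaded GGUF formatted model into it.
sudo cp mistral-7b-v0.1.Q4_K_M.gguf /usr/local/share/groonga/language_models/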

You can download GGUF formatted language models from Hugging Face. Some official model repositories provide GGUF formatted versions too, but most of them don't.

You can convert existing language models on Hugging Face to GGUF format with GGUF-my-repo.
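For example, here is a sketch of downloading a single GGUF formatted model file with the huggingface-cli command from the huggingface_hub package. The repository and file names are only examples; choose the model you actually want to use:

# Download one GGUF file from a Hugging Face repository into the current directory.
huggingface-cli download TheBloke/Mistral-7B-v0.1-GGUF mistral-7b-v0.1.Q4_K_M.gguf --local-dir .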

7.23.3. Functions#