Optimizing Ollama Models for BrainSoup

For users looking to leverage the full capabilities of their agents in BrainSoup, especially when working with extensive documents or utilizing tools, it is recommended to use models with a larger context window. By default, Ollama models run with a context window of 2048 tokens; for more complex tasks, a context window of 8192 tokens or more is advisable.
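Before changing anything, you can check what a model currently reports. Depending on your Ollama version, the plain show command prints model details, which may include the context length:

ollama show mistral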

Below is a step-by-step guide on how to modify an Ollama model's context window:

Step 1: Retrieve the Model Configuration

First, export the configuration file of the model you wish to modify. For example, to modify mistral, use the following command:

ollama show mistral --modelfile > conf.txt

This command exports the current configuration of mistral into a file named conf.txt.
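The exported file is a standard Ollama Modelfile. Its exact contents vary by model and Ollama version, but it generally resembles the sketch below (the FROM path and TEMPLATE shown here are illustrative):

# Modelfile generated by "ollama show"
# To build a new Modelfile based on this, replace FROM with:
# FROM mistral:latest
FROM /usr/share/ollama/.ollama/models/blobs/sha256-...
TEMPLATE """[INST] {{ .Prompt }} [/INST]"""
PARAMETER stop "[INST]"
PARAMETER stop "[/INST]"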

Step 2: Modify the Configuration File

Open conf.txt in your preferred text editor and make the following changes:

  • Add a new line with the parameter for the context window size: PARAMETER num_ctx 8192.
  • Find the line that starts with "FROM" and replace it entirely with FROM mistral:latest. The exported FROM line points to a local blob file; replacing it with the model name ensures that the new model is based on mistral:latest.

After making these changes, save and close the file; the result should resemble the sketch below.
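With those two edits in place, the top of conf.txt should look roughly like this (any other TEMPLATE and PARAMETER lines from the export stay as they were):

FROM mistral:latest
PARAMETER num_ctx 8192
TEMPLATE """[INST] {{ .Prompt }} [/INST]"""
PARAMETER stop "[INST]"
PARAMETER stop "[/INST]"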

Step 3: Create a New Model with Updated Configuration

With your modified configuration file ready, create a new local model named mistral-8K. Run the following command:

ollama create mistral-8K -f conf.txt

This command creates a new model based on your updated configuration.
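To verify, list your local models and inspect the new one; the output should include the PARAMETER num_ctx 8192 line you added:

ollama list
ollama show mistral-8K --modelfile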

Conclusion

You have successfully created a new local model named mistral-8K with an increased context window. This model is now available in both Ollama and BrainSoup for enhanced performance on complex tasks.

Remember, enlarging the context window allows for more detailed interactions and processing, but it also increases memory usage and computational requirements. Always consider your specific needs and system capabilities when making such modifications.