Max Mode
Our feature for enabling the largest available context windows, at the cost of slower performance and higher usage
Normally, Tess uses a context window of 32k tokens (~24,000 words). Max Mode enables the largest available context window for all supported models. This is especially useful for long chats and for models that support extended context, such as 200k tokens on Claude models or 1M tokens on GPT-4.1 and Gemini 2.5 Pro.
Info: Max Mode pricing is calculated per token, at the model provider’s API price.
Note that using Max Mode may be slower and more expensive.
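To make the per-token pricing concrete, here is a minimal sketch of how a request's cost can be estimated. The function name and the dollar figures are illustrative placeholders, not actual provider rates:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_mtok: float,
                  output_price_per_mtok: float) -> float:
    """Estimate the USD cost of one request: tokens times the
    provider's per-token price (quoted per 1M tokens)."""
    return (input_tokens * input_price_per_mtok
            + output_tokens * output_price_per_mtok) / 1_000_000

# Example: a long chat filling most of a 200k-token window.
# The $3 / $15 per-million-token rates are made-up placeholders.
cost = estimate_cost(180_000, 4_000,
                     input_price_per_mtok=3.00,
                     output_price_per_mtok=15.00)
print(f"${cost:.2f}")  # → $0.60
```

Because cost scales with the tokens actually sent, a nearly full 200k or 1M window can cost many times more than a standard 32k request, which is why Max Mode is best reserved for conversations that genuinely need the extra context.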