Max mode
Max mode is designed for users who need a larger context window and more complex tool invocation. When Max mode is enabled, the AI model's context window is significantly expanded, allowing it to process much larger inputs.
Note: Max mode is only available in IDE mode.
Core advantages
- Ultra-large context window: The context window is expanded to a maximum of 1M tokens, allowing the AI model to understand and process your tasks more accurately.
- Large-scale tool invocation: A single task supports up to 200 rounds of tool invocation, accommodating multi-step, multi-dependency tasks.
- Long file reading: Up to 750 lines can be read at a time, reducing the need for segmented processing and improving the efficiency of code parsing and analysis.
Use cases
Rapid generation of initial drafts for large, complex projects
Supports importing dependencies, data structures, and configuration files all at once to directly generate a runnable global prototype. Suitable for quickly producing initial drafts of functional modules in large projects, including API calls, data structure definitions, and more.
Analysis and implementation of long docs or complex requirements
Able to comprehend and work with lengthy PRDs, design docs, or compliance agreements, moving seamlessly from understanding requirements to implementing code. For example, it can generate code directly from product or architecture documentation, or generate implementation and validation logic based on contract protocols.
Understanding and refactoring of cross-module or cross-file code
With a larger context window, AI can understand the dependencies between modules and perform testing, fixing, and rerunning during long-chain execution. Applicable to scenarios such as SDK or framework upgrades, global naming convention adjustments, cross-module API refactoring, and more.
Generation of automation scripts for complex interactions or multi-step processes
AI can continuously operate, detect errors, and make corrections during execution until the entire process is fully executed. Typical scenarios include CI/CD pipeline generation, cross-service call orchestration, automated test script writing, and more.
Context preservation in real-time interactive development
Leveraging the large context window and long-chain execution capabilities, the AI can continuously remember historical decisions made during the development process. Suitable for interactive development, continuous debugging, or large-scale refactoring.
Available models
The following table lists the models available in Max mode and the context window they support in a single turn of chat (measured in tokens).
| Premium Model Name | Context Window |
|---|---|
| Claude-4-sonnet | 200k - 1M. You can set a desired context window. |
| Claude-3.7-sonnet | 200k |
| Claude-3.5-sonnet | 200k |
Who can use Max mode
- Users who have subscribed to the Pro plan
- Free plan users who have purchased an extra package with remaining fast requests
- The 200 fast requests awarded for enabling SOLO mode can also be used in Max mode
Billing
Max mode consumes only fast requests and is billed by tokens. After each turn of chat, the bottom of the chat area in the IDE and the Usage page on TRAE's website display the consumption details for that turn, including the number of input and output tokens, the converted amount, the corresponding number of fast requests, and the context usage rate. Token pricing follows the official rates.
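As an illustration only, the token-to-fast-request conversion described above might be sketched as follows. All rates below are invented placeholders, not TRAE's actual pricing; the real conversion uses the official per-token rates.

```python
# Hypothetical sketch of token-based billing converted into fast requests.
# Every rate here is a placeholder assumption; TRAE's official rates apply.

HYPOTHETICAL_INPUT_RATE = 3.00 / 1_000_000    # $ per input token (placeholder)
HYPOTHETICAL_OUTPUT_RATE = 15.00 / 1_000_000  # $ per output token (placeholder)
HYPOTHETICAL_COST_PER_FAST_REQUEST = 0.04     # $ per fast request (placeholder)

def fast_requests_for_turn(input_tokens: int, output_tokens: int) -> float:
    """Convert one chat turn's token usage into a fast-request count."""
    cost = (input_tokens * HYPOTHETICAL_INPUT_RATE
            + output_tokens * HYPOTHETICAL_OUTPUT_RATE)
    return cost / HYPOTHETICAL_COST_PER_FAST_REQUEST

# Example: a turn with 120,000 input tokens and 2,000 output tokens
print(round(fast_requests_for_turn(120_000, 2_000), 2))  # → 9.75
```

The point of the sketch is only the shape of the calculation: token counts are priced per token, summed, and expressed as an equivalent number of fast requests.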
Enable Max mode
In the bottom-right corner of the AI chat input box, click the model name, then toggle the Max mode switch on.
Set a context window for Claude-4-Sonnet
Click the context window icon to the right of the Claude-4-Sonnet model, then select the desired context window from the list. Available options include 200k, 400k, 600k, 800k, and 1M.