Chat
Your presets
Enter user message...
Preset
Model
Response format
Functions: Function calling lets you describe custom functions to the assistant. This allows the assistant to intelligently call those functions by outputting a JSON object containing relevant arguments.
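As a rough illustration of that flow, the sketch below describes one hypothetical function (`get_current_weather`) with a JSON-schema-style spec and dispatches the JSON arguments the assistant would return. The function name, its parameters, and the simulated assistant output are all invented for the example; the exact request shape depends on the API you call.

```python
import json

# Hypothetical local function the assistant should be able to call.
def get_current_weather(city: str, unit: str = "celsius") -> dict:
    # Stub: a real implementation would query a weather service.
    return {"city": city, "temperature": 21, "unit": unit}

# JSON-schema-style description of the function. In a real request this spec
# would be sent alongside the messages so the assistant knows the function's
# name, purpose, and argument types.
weather_function_spec = {
    "name": "get_current_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. Paris"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

# Simulated assistant output: a JSON object naming the function to call and
# the arguments to call it with (this is what the tooltip above refers to).
assistant_call = {
    "name": "get_current_weather",
    "arguments": '{"city": "Paris", "unit": "celsius"}',
}

# Dispatch the call locally; the result would then be fed back into the chat.
available_functions = {"get_current_weather": get_current_weather}
args = json.loads(assistant_call["arguments"])
result = available_functions[assistant_call["name"]](**args)
print(result)  # {'city': 'Paris', 'temperature': 21, 'unit': 'celsius'}
```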
Model configuration
Temperature: Controls randomness. Lowering it results in less random completions; as the temperature approaches zero, the model becomes deterministic and repetitive.
1.00
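To make the "less random as it approaches zero" point concrete, here is a small sketch with made-up logit values showing how dividing logits by the temperature before the softmax sharpens the distribution.

```python
import math

def softmax_with_temperature(logits, temperature):
    # Scale logits by 1/temperature, then normalize with a softmax.
    scaled = [x / temperature for x in logits]
    m = max(scaled)                            # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                       # made-up next-token logits
for t in (1.0, 0.5, 0.1):
    probs = softmax_with_temperature(logits, t)
    print(t, [round(p, 3) for p in probs])
# As t shrinks, almost all probability mass moves to the highest-logit token,
# so sampling becomes nearly deterministic (and, turn after turn, repetitive).
```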
Max tokens: The maximum number of tokens to generate, shared between the prompt and the completion. The exact limit varies by model. (One token is roughly four characters of standard English text.)
2048
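Since one token is roughly four characters of standard English (per the tooltip above), a quick heuristic check that a prompt plus a requested completion stays inside the shared limit might look like the sketch below. The 2048 limit and the 4-characters-per-token ratio are the only numbers taken from this panel; the prompt is invented.

```python
# Rough heuristic only: ~4 characters per token for standard English text.
CHARS_PER_TOKEN = 4

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // CHARS_PER_TOKEN)

max_tokens = 2048                     # shared between prompt and completion
prompt = "Summarize the main differences between temperature and top-p sampling."
prompt_tokens = estimate_tokens(prompt)

# Whatever is left after the prompt is the most the completion can use.
completion_budget = max_tokens - prompt_tokens
print(f"~{prompt_tokens} prompt tokens, ~{completion_budget} left for the completion")
```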
Stop sequences: Up to four sequences at which the model will stop generating further tokens. The returned text will not contain the stop sequence.
Enter sequence and press Tab
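The behaviour described above (generation halts at the first stop sequence, and the sequence itself is not returned) can be mimicked on raw text with a small post-processing sketch; the example sequences are arbitrary.

```python
def truncate_at_stop(text: str, stop_sequences: list[str]) -> str:
    # Cut the text at the earliest occurrence of any stop sequence;
    # the stop sequence itself is not included in the returned text.
    cut = len(text)
    for seq in stop_sequences:
        idx = text.find(seq)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

generated = "Answer: 42\nUser: what about 43?"
print(truncate_at_stop(generated, ["\nUser:", "###"]))  # -> "Answer: 42"
```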
Top P: Controls diversity via nucleus sampling; 0.5 means half of all likelihood-weighted options are considered.
1.00
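A minimal nucleus-sampling sketch, using a made-up probability distribution: candidates are sorted by probability, the smallest prefix whose cumulative probability reaches `top_p` is kept, and the next token is sampled from that renormalized subset. With `top_p = 0.5`, only the most likely options covering half the probability mass survive, matching the tooltip's description.

```python
import random

def nucleus_sample(probs: dict[str, float], top_p: float) -> str:
    # Sort candidate tokens from most to least likely.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    # Keep the smallest set whose cumulative probability reaches top_p.
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    # Renormalize and sample from the kept subset only.
    total = sum(p for _, p in kept)
    tokens = [t for t, _ in kept]
    weights = [p / total for _, p in kept]
    return random.choices(tokens, weights=weights, k=1)[0]

made_up_probs = {"the": 0.4, "a": 0.3, "an": 0.2, "this": 0.1}
print(nucleus_sample(made_up_probs, top_p=0.5))  # samples from {"the", "a"} only
```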
Frequency penalty: How much to penalize new tokens based on their existing frequency in the text so far. Decreases the model's likelihood to repeat the same line verbatim (see the sketch after the presence penalty below).
0.00
Presence penalty: How much to penalize new tokens based on whether they appear in the text so far. Increases the model's likelihood to talk about new topics.
0.00
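The two penalties above are commonly described as per-token adjustments to the logits before sampling: the frequency penalty is scaled by how many times a token has already appeared, while the presence penalty applies once a token has appeared at all. The sketch below uses that common formulation with invented logits; the exact formula used by any particular API may differ.

```python
from collections import Counter

def apply_penalties(logits: dict[str, float], generated_tokens: list[str],
                    frequency_penalty: float, presence_penalty: float) -> dict[str, float]:
    counts = Counter(generated_tokens)
    adjusted = {}
    for token, logit in logits.items():
        count = counts.get(token, 0)
        # Frequency penalty scales with how often the token has appeared;
        # presence penalty is a flat cost once the token has appeared at all.
        adjusted[token] = (logit
                           - frequency_penalty * count
                           - presence_penalty * (1.0 if count > 0 else 0.0))
    return adjusted

logits = {"cat": 2.0, "dog": 1.8, "fish": 1.0}   # made-up next-token logits
history = ["cat", "cat", "dog"]                   # tokens generated so far
print(apply_penalties(logits, history, frequency_penalty=0.5, presence_penalty=0.3))
# "cat" is pushed down the most (it already appeared twice), so verbatim repetition
# becomes less likely, and unused tokens like "fish" become comparatively more attractive.
```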