Generate a Telegram completion
Asks for a completion and stores the prompt in the chat history.
Replica chat supports two response formats: streamed and JSON. To switch between them, set the 'Accept' header to either 'text/event-stream' for streaming or 'application/json' for JSON. The streamed response honours the Stream Protocol, so it can be consumed with a number of SDKs, including the Vercel AI SDK.
The streamed variant is not specified in the OpenAPI Schema because it is not an OpenAPI endpoint.
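As a minimal sketch (assuming a runtime with a global fetch, the placeholder replica UUID from the example below, and API_KEY / USER_ID environment variables, which are assumptions), the JSON variant can be requested from TypeScript like this; swap the Accept header to 'text/event-stream' to receive the streamed response instead:

// Sketch only: the replica UUID is the documentation placeholder and the
// environment variable names are assumptions.
const response = await fetch(
  "https://api.sensay.io/v1/replicas/03db5651-cb61-4bdf-9ef0-89561f7c9c53/chat/completions/telegram",
  {
    method: "POST",
    headers: {
      "X-ORGANIZATION-SECRET": process.env.API_KEY ?? "",
      "X-USER-ID": process.env.USER_ID ?? "",
      "Content-Type": "application/json",
      // Use "text/event-stream" here to receive the streamed variant.
      Accept: "application/json",
    },
    body: JSON.stringify({
      content: "How did you handle the immense pressure during the Civil War?",
      skip_chat_history: false,
    }),
  },
);

const completion = await response.json(); // { content: string, success: boolean }
console.log(completion.content);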
Body

- content (string): The prompt to generate completions for, encoded as a string. Minimum length is 1, maximum length is 100000.
- skip_chat_history (boolean): When set to true, historical messages are not used in the context and the message is not appended to the conversation history, so it is excluded from all future chat context. Default value is false.
- imageURL (string): The URL of the image to be used as context for the completion. Format should match the following pattern: https?:\/\/[-a-zA-Z0-9@:%._+~#=]{1,256}\.[a-zA-Z0-9()]+\b([-a-zA-Z0-9()@:%_+.~#?&\/=]*)
- telegram_data (object): Telegram information about the message. Additional properties are NOT allowed.
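Put together as a TypeScript type, the body looks roughly like this (a sketch based on the field names in the request example below; which fields are optional is an assumption, since the descriptions above do not say):

// Sketch of the request body; the optional markers are assumptions.
type TelegramChatCompletionRequest = {
  // 1 to 100000 characters.
  content: string;
  // Defaults to false when omitted.
  skip_chat_history?: boolean;
  // Must be an http(s) URL matching the pattern above.
  imageURL?: string;
  // Telegram information about the message; no additional properties allowed.
  telegram_data?: {
    chat_type: string;
    chat_id: number;
    user_id: number;
    username: string;
    message_id: number;
    message_thread_id: number;
  };
};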
curl \
--request POST 'https://api.sensay.io/v1/replicas/03db5651-cb61-4bdf-9ef0-89561f7c9c53/chat/completions/telegram' \
--header "X-ORGANIZATION-SECRET: $API_KEY" \
--header "X-USER-ID: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"content":"How did you handle the immense pressure during the Civil War?","skip_chat_history":false,"imageURL":"https://images.invalid/photo.jpeg","telegram_data":{"chat_type":"string","chat_id":42.0,"user_id":42.0,"username":"string","message_id":42.0,"message_thread_id":42.0}}'
{
  "content": "How did you handle the immense pressure during the Civil War?",
  "skip_chat_history": false,
  "imageURL": "https://images.invalid/photo.jpeg",
  "telegram_data": {
    "chat_type": "string",
    "chat_id": 42.0,
    "user_id": 42.0,
    "username": "string",
    "message_id": 42.0,
    "message_thread_id": 42.0
  }
}
{
  "content": "I handled the immense pressure during the Civil War by...",
  "success": true
}
{"content"=>"I handled the immense pressure during the Civil War by...", "success"=>true}
{
  "error": "A text representation of the error",
  "success": false,
  "request_id": "xyz1::reg1:reg1::ab3c4-1234567890123-0123456789ab",
  "fingerprint": "14fceadd84e74ec499afe9b0f7952d6b"
}
{
  "error": "A text representation of the error",
  "success": false,
  "request_id": "xyz1::reg1:reg1::ab3c4-1234567890123-0123456789ab"
}
{
  "error": "A text representation of the error",
  "success": false,
  "request_id": "xyz1::reg1:reg1::ab3c4-1234567890123-0123456789ab",
  "fingerprint": "14fceadd84e74ec499afe9b0f7952d6b",
  "inner_exception": {
    "name": "Server overheated",
    "cause": "Request too complicated",
    "stack": "Error: Server overheated due to an unexpected situation\n at Object.eval (eval at <anonymous>...",
    "message": "The server overheated due to an unexpected situation"
  }
}
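The error payloads above share a common shape. A hedged sketch of reading either outcome in TypeScript (field optionality is inferred from the examples above, not from a published schema):

// Error shape inferred from the examples above; the optional markers are assumptions.
type TelegramCompletionError = {
  error: string;
  success: false;
  request_id: string;
  fingerprint?: string;
  inner_exception?: {
    name: string;
    cause: string;
    stack: string;
    message: string;
  };
};

async function readTelegramCompletion(response: Response): Promise<string> {
  const body = await response.json();
  if (!body.success) {
    const err = body as TelegramCompletionError;
    // Surface request_id (and fingerprint, when present) so failures can be traced.
    throw new Error(`Request ${err.request_id} failed: ${err.error}`);
  }
  return body.content as string;
}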