Chat API

Introduction

This interface serves as the foundational dialogue API, generating AI assistant responses based on user input. Through a single interface, you can access all LinkAI capabilities:

  1. Support for binding applications or workflows to leverage their underlying knowledge bases and plugins
  2. One-click switching between all supported large language models
  3. Support for both streaming/non-streaming output, with OpenAI-compatible interface structure
  4. Support for multimodal input/output, allowing text and image inputs; text, image, video, and file outputs

API Definition

Endpoint

POST https://api.linkai.cloud/v1/chat/completions

Request Headers

| Parameter | Value | Description |
| --- | --- | --- |
| Authorization | Bearer YOUR_API_KEY | Create an API Key following the API Authentication guide |
| Content-Type | application/json | Indicates a JSON-formatted request |

Request Body

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| messages | list&lt;object&gt; | Yes | Message context list, where each element has the structure {"role": "user", "content": "Hello"}. The role field can be "system", "user", or "assistant"; content cannot be empty |
| app_code | string | No | Code of the application or workflow. If omitted, the request is sent directly to the model without binding to a specific application |
| model | string | No | Model code. If not provided, the application's default model is used. See Model List for all available models |
| temperature | float | No | Controls randomness. Range [0, 1]. Higher values produce more creative responses, lower values more deterministic ones |
| top_p | float | No | Controls the nucleus-sampling range; default is 1 |
| frequency_penalty | float | No | Discourages repetition. Range [-2, 2]; default is 0 |
| presence_penalty | float | No | Encourages diversity. Range [-2, 2]; default is 0 |
| stream | bool | No | Whether to stream the output; default is false |

Note:

  • When specifying an application via app_code, the system will use the application settings as system prompts, the default model configured in the application, and the application temperature as the temperature value
  • When specifying a workflow via app_code, the workflow will execute from the start node, and the output of the end node will be returned through the interface

Request example:

{
  "app_code": "G7z6vKwp",
  "messages": [
    { "role": "user", "content": "Hello" }
  ]
}

Note: Replace app_code with your own application code, a public application code from the application marketplace, or omit it to directly use the underlying model capabilities.

Response

Non-Streaming Response

By default, the interface returns all content at once after generation is complete:

{
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?"
      }
    }
  ],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 17,
    "total_tokens": 26
  }
}

Note:

  • choices.message.content contains the AI's response. The usage section shows prompt_tokens, completion_tokens, and total_tokens, representing the token count for the request, response, and total consumption respectively.

  • A conversation's token calculation includes both request and response tokens. The request includes application settings, conversation history, knowledge base content, and user questions. These token limits can be configured in Application Management.

Streaming Response

To enable streaming, set the stream parameter to true. This will return content in real-time as the model generates it, suitable for web pages, apps, and mini-programs:

data: {"choices": [{"index": 0, "delta": {"content": "Hello!"}, "finish_reason": null}], "session_id": null}

data: {"choices": [{"index": 0, "delta": {"content": " How"}, "finish_reason": null}], "session_id": null}

data: {"choices": [{"index": 0, "delta": {"content": " can"}, "finish_reason": null}], "session_id": null}

data: {"choices": [{"index": 0, "delta": {"content": " I"}, "finish_reason": null}], "session_id": null}

data: {"choices": [{"index": 0, "delta": {"content": " help"}, "finish_reason": null}], "session_id": null}

data: {"choices": [{"index": 0, "delta": {"content": " you?"}, "finish_reason": null}], "session_id": null}

data: {"choices": [{"index": 0, "delta": {}, "finish_reason": "stop", "usage": {"prompt_tokens": 9, "completion_tokens": 6, "total_tokens": 15}}], "session_id": null}

data: [DONE]

Note: The output "[DONE]" indicates the end of the stream.
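Each `data:` event can be parsed as it arrives and the `delta.content` fragments concatenated into the full reply. A minimal sketch of that loop (the helper name `read_stream` is ours, not part of any SDK):

```python
import json

def read_stream(lines):
    """Accumulate the assistant's text from SSE lines of the form
    'data: {...}', terminated by 'data: [DONE]'."""
    parts = []
    for raw in lines:
        if not raw.startswith("data: "):
            continue  # skip blank keep-alive lines
        payload = raw[len("data: "):]
        if payload == "[DONE]":
            break  # end of stream
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        parts.append(delta.get("content", ""))
    return "".join(parts)
```

With the `requests` library, the same loop can consume `res.iter_lines(decode_unicode=True)` from a POST sent with `stream=True` and `"stream": true` in the request body.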

Error Responses

When an exception occurs, the API returns the following structure:

{
  "error": {
    "message": "Invalid request: user message content is empty",
    "type": "invalid_request_error"
  }
}

Error types are determined by HTTP status codes and error messages:

| HTTP Status Code | Description |
| --- | --- |
| 400 | Request format error |
| 401 | Authentication failure; check that your API Key is correct |
| 402 | Application does not exist; check that the app_code parameter is correct |
| 403 | No access permission; for private applications, only the creator account can access |
| 406 | Insufficient account credits |
| 409 | Content moderation failed; questions, answers, or knowledge base may contain sensitive content |
| 503 | Interface call exception; contact customer service |

Model List

The complete list of supported models is available on the Model Management page:

| Model Code | Context Length | Description |
| --- | --- | --- |
| gpt-4.1 | 1000K | OpenAI GPT-4.1 model |
| gpt-4.1-mini | 1000K | OpenAI GPT-4.1 mini model |
| gpt-4.1-nano | 1000K | OpenAI GPT-4.1 nano model |
| gpt-3.5 | 16K | OpenAI GPT-3.5 model |
| gpt-4o-mini | 128K | OpenAI GPT-4o mini model |
| gpt-4o | 128K | OpenAI GPT-4o model |
| gpt-4-turbo | 128K | OpenAI GPT-4 Turbo model |
| gpt-4 | 8K | OpenAI GPT-4 model |
| o1-mini | 128K | Optimized for code, math, and reasoning scenarios |
| o1-preview | 128K | Optimized for complex reasoning tasks |
| claude-3-7-sonnet | 200K | Claude 3.7 Sonnet model |
| claude-3-5-sonnet | 200K | Claude 3.5 Sonnet model |
| claude-3-haiku | 200K | Claude 3 Haiku |
| claude-3-sonnet | 200K | Claude 3 Sonnet |
| claude-3-opus | 200K | Claude 3 Opus |
| gemini-2.5-pro | 1000K | Gemini 2.5 Pro |
| gemini-2.0-flash | 1000K | Gemini 2.0 Flash |
| gemini-1.5-flash | 1000K | Gemini 1.5 Flash |
| gemini-1.5-pro | 1000K | Gemini 1.5 Pro |
| deepseek-chat | 64K | DeepSeek-V3 conversation model |
| deepseek-reasoner | 64K | DeepSeek-R1 model, returns the thinking process |
| qwen3 | 128K | Qwen 3 |
| qwen-turbo | 8K | Qwen Turbo |
| qwen-plus | 32K | Qwen Plus |
| qwen-max | 8K | Qwen Max |

To use a specific model, pass its code in the model parameter. We recommend omitting the model parameter so that the default model configured in your application is used. For pricing, see Billing Rules.
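For example, a request that overrides the application's default model with one of the codes above might look like:

```json
{
  "app_code": "G7z6vKwp",
  "model": "gpt-4o",
  "messages": [
    { "role": "user", "content": "Hello" }
  ]
}
```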

Example Code

Text Dialogue

1. CURL Request

curl --request POST \
  --url https://api.linkai.cloud/v1/chat/completions \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --header 'Content-Type: application/json' \
  --data '{
    "app_code": "",
    "messages": [
      {
        "role": "user",
        "content": "Who are you?"
      }
    ]
  }'

Note: Replace YOUR_API_KEY with your own API Key and fill in your application code in app_code.

2. Python Request

import requests

url = "https://api.linkai.cloud/v1/chat/completions"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY"
}
body = {
    "app_code": "",
    "messages": [
        {
            "role": "user",
            "content": "Hello"
        }
    ]
}
res = requests.post(url, json=body, headers=headers)
if res.status_code == 200:
    reply_text = res.json()["choices"][0]["message"]["content"]
    print(reply_text)
else:
    error = res.json().get("error")
    print(f"Request error, status code={res.status_code}, error type={error.get('type')}, error message={error.get('message')}")

Note:

  • Replace YOUR_API_KEY with your own API Key and fill in your application code in app_code.
  • If you're using the OpenAI SDK, you can quickly integrate by modifying the api_base configuration. See OpenAI Compatibility for details.

Image Recognition

Users can upload images and ask questions about them. Prerequisites:

  • For application integration: The "Image Recognition" plugin must be enabled in the application
  • For workflow integration: The workflow must use the "Image Recognition" plugin

curl https://api.linkai.cloud/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "app_code": "default",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "What is shown in this image?"
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.linkai.cloud/docs/vision-model-config.jpg"
            }
          }
        ]
      }
    ]
  }'

Note:

  • Replace YOUR_API_KEY with your own API Key and replace the app_code value with your application or workflow code.
  • The image URL must be a publicly accessible image address.
  • Image editing calls work similarly to image recognition but require the GPT-Image-1 or AI Image Editing plugin. When you provide an image URL, the response will include the URL of the generated image.
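The multimodal message shown in the curl example can also be built programmatically. A minimal sketch (the helper name `image_question` is ours, not part of any SDK):

```python
def image_question(text, image_url):
    """Build a multimodal user message pairing a text question
    with a publicly accessible image URL."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

# POST this as {"app_code": "...", "messages": [image_question(...)]}
# exactly like the text-only Python example above.
```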

OpenAI Compatibility

This interface is fully compatible with OpenAI's input and output formats, so you can use the OpenAI SDK directly by simply setting the api_base and api_key:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.linkai.cloud/v1",
    api_key="YOUR_API_KEY"
)

If you need to specify an application while using the OpenAI SDK, you can append the app_code parameter to the api_key using a "-" separator, for example: Link_tOCJYmHxxm55eA1xs-Kv2fXJcH2.
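The key composition can be sketched as follows (the helper name `linkai_key` is ours, shown only to illustrate the "-" convention described above):

```python
def linkai_key(api_key: str, app_code: str = "") -> str:
    """Compose the api_key value for the OpenAI SDK: when an application
    should be bound, append its code with a "-" separator."""
    return f"{api_key}-{app_code}" if app_code else api_key

# e.g. linkai_key("Link_tOCJYmHxxm55eA1xs", "Kv2fXJcH2")
# returns "Link_tOCJYmHxxm55eA1xs-Kv2fXJcH2"
```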