Create
client.chat.completions.create(body: CompletionCreateParams, options?: RequestOptions): CreateChatCompletionResponse | Stream<CreateChatCompletionResponseStreamChunk>

POST /chat/completions
Generate a chat completion for the given messages using the specified model.
Parameters

body: CompletionCreateParams

Returns

CreateChatCompletionResponse, with fields completion_message (CompletionMessage), id (string), and metrics (array)
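The return shape can be sketched as a set of TypeScript interfaces. These are transcribed from the 200 example below, not from the SDK's generated type definitions, so field names and optionality are illustrative assumptions:

```typescript
// Illustrative sketch of the response shape, inferred from the 200 example.
// The SDK ships its own generated types; these names are assumptions.
interface ToolCall {
  id: string;
  function: {
    name: string;
    arguments: string; // JSON-encoded string, not a parsed object
  };
}

interface CompletionMessage {
  role: 'assistant';
  content: string;
  stop_reason: string;
  tool_calls?: ToolCall[];
}

interface Metric {
  metric: string;
  value: number;
  unit?: string;
}

interface CreateChatCompletionResponse {
  completion_message: CompletionMessage;
  id: string;
  metrics: Metric[];
}

// A sample value conforming to the sketched shape.
const sample: CreateChatCompletionResponse = {
  completion_message: { role: 'assistant', content: 'hi', stop_reason: 'stop' },
  id: 'resp_1',
  metrics: [{ metric: 'num_tokens', value: 2, unit: 'tokens' }],
};
```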
```typescript
import LlamaAPIClient from 'llama-api-client';

const client = new LlamaAPIClient({
  apiKey: 'My API Key',
});

const createChatCompletionResponse = await client.chat.completions.create({
  messages: [{ content: 'string', role: 'user' }],
  model: 'model',
});

console.log(createChatCompletionResponse.id);
```

200 Example
```json
{
  "completion_message": {
    "role": "assistant",
    "content": "string",
    "stop_reason": "stop",
    "tool_calls": [
      {
        "id": "id",
        "function": {
          "arguments": "arguments",
          "name": "name"
        }
      }
    ]
  },
  "id": "id",
  "metrics": [
    {
      "metric": "metric",
      "value": 0,
      "unit": "unit"
    }
  ]
}
```
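Note that each tool call's `function.arguments` field arrives as a JSON-encoded string, not a parsed object. A minimal sketch of decoding it, using a `ToolCall` type transcribed from the 200 example above (the helper name `parseToolCalls` is hypothetical, not part of the SDK):

```typescript
// Shape mirrors the 200 example above; `arguments` is a serialized JSON string.
type ToolCall = { id: string; function: { name: string; arguments: string } };

// Hypothetical helper: decode each tool call's arguments into an object.
// JSON.parse throws if the string is not valid JSON, so callers may want
// to wrap this in a try/catch for untrusted model output.
function parseToolCalls(toolCalls: ToolCall[]): Array<{ name: string; args: unknown }> {
  return toolCalls.map((tc) => ({
    name: tc.function.name,
    args: JSON.parse(tc.function.arguments),
  }));
}

const calls: ToolCall[] = [
  { id: 'call_1', function: { name: 'get_weather', arguments: '{"city":"Paris"}' } },
];

console.log(parseToolCalls(calls));
```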