Create
client.chat.completions.create(body: CompletionCreateParams, options?: RequestOptions): CreateChatCompletionResponse | Stream<CreateChatCompletionResponseStreamChunk>

POST /chat/completions

Generate a chat completion for the given messages using the specified model.

Parameters

CompletionCreateParams (alias)

  CompletionCreateParamsBase

    stream: boolean, optional, defaults to false

      If true, generate an SSE event stream of the response; a streaming sketch follows the usage example below.

  CompletionCreateParamsNonStreaming extends CompletionCreateParamsBase

    stream: false, optional

      For a non-streaming request, stream must be false or omitted.
Returns

CreateChatCompletionResponse

  completion_message: CompletionMessage
  id: string
  metrics: array
import LlamaAPIClient from 'llama-api-client';

const client = new LlamaAPIClient({
  apiKey: 'My API Key',
});

// stream defaults to false, so create() resolves to a CreateChatCompletionResponse
const createChatCompletionResponse = await client.chat.completions.create({
  messages: [{ content: 'string', role: 'user' }],
  model: 'model',
});

console.log(createChatCompletionResponse.id);
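
The example above issues a non-streaming request. Below is a minimal streaming sketch; it assumes the Stream returned when stream: true is async-iterable (as is typical for generated TypeScript clients), and the chunk field accessed is illustrative.

const stream = await client.chat.completions.create({
  messages: [{ content: 'string', role: 'user' }],
  model: 'model',
  stream: true,
});

for await (const chunk of stream) {
  // Each chunk is a CreateChatCompletionResponseStreamChunk event.
  console.log(chunk.id);
}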
200 Example
{
  "completion_message": {
    "role": "assistant",
    "content": "string",
    "stop_reason": "stop",
    "tool_calls": [
      {
        "id": "id",
        "function": {
          "arguments": "arguments",
          "name": "name"
        }
      }
    ]
  },
  "id": "id",
  "metrics": [
    {
      "metric": "metric",
      "value": 0,
      "unit": "unit"
    }
  ]
}
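
For reference, a hedged sketch of reading the fields shown in the 200 example above; the nullish coalescing guards responses without tool calls or metrics, and the field shapes are taken from the example rather than a full schema.

const res = await client.chat.completions.create({
  messages: [{ content: 'string', role: 'user' }],
  model: 'model',
});

console.log(res.completion_message.content);
console.log(res.completion_message.stop_reason);
for (const call of res.completion_message.tool_calls ?? []) {
  // Tool-call arguments arrive as a string, as in the example payload.
  console.log(call.function.name, call.function.arguments);
}
for (const m of res.metrics ?? []) {
  console.log(`${m.metric}: ${m.value} ${m.unit}`);
}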