
Create

client.Chat.Completions.New(ctx, body) (*CreateChatCompletionResponse, error)
post/chat/completions

Generate a chat completion for the given messages using the specified model.

Parameters
body ChatCompletionNewParams

Messages field
param.Field[[]MessageUnion]

List of messages in the conversation.

UserMessage struct

A message from the user in a chat conversation.

Content

The content of the user message, which can include text and other media.

Role UserMessageRole

Must be "user" to identify this as a user message.

UserMessageRoleUser const
"user"
SystemMessage struct

A system message providing instructions or context to the model.

Content

The content of the system message.

Role SystemMessageRole

Must be "system" to identify this as a system message.

SystemMessageRoleSystem const
"system"
ToolResponseMessage struct

A message representing the result of a tool invocation.

Content

The content of the tool response message, which can include text and other media.

Role ToolResponseMessageRole

Must be "tool" to identify this as a tool response.

ToolResponseMessageRoleTool const
"tool"
ToolCallID string

Unique identifier for the tool call this response is for.

CompletionMessage struct

A message containing the model's (assistant) response in a chat conversation.

Role CompletionMessageRole

Must be "assistant" to identify this as the model's response.

CompletionMessageRoleAssistant const
"assistant"
Content CompletionMessageContentUnion
optional

The content of the model's response.

string
MessageTextContentItem struct

A text content item.

Text string

Text content.

Type MessageTextContentItemType

Discriminator type of the content item. Always "text".

MessageTextContentItemTypeText const
"text"
StopReason CompletionMessageStopReason
optional

The reason why generation stopped. Options are:
- "stop": The model reached a natural stopping point.
- "tool_calls": The model finished generating and invoked a tool call.
- "length": The model reached the maximum number of tokens specified in the request.

CompletionMessageStopReasonStop const
"stop"
CompletionMessageStopReasonToolCalls const
"tool_calls"
CompletionMessageStopReasonLength const
"length"
ToolCalls array
optional

The tool calls generated by the model, such as function calls.

ID string

The ID of the tool call.

Function CompletionMessageToolCallFunction

The function that the model called.

Arguments string

The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.

Name string

The name of the function to call.

Model field
param.Field[string]

The identifier of the model to use.

MaxCompletionTokens field
optional
param.Field[int64]

The maximum number of tokens to generate.

minimum 1
RepetitionPenalty field
optional
param.Field[float64]

Controls the likelihood of generating repetitive responses.

minimum 1
maximum 2
ResponseFormat field
optional

An object specifying the format that the model must output. Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs, which ensures the model's output will match your supplied JSON schema. If not specified, the default is {"type": "text"}, and the model will return a free-form text response.

Temperature field
optional
param.Field[float64]

Controls randomness of the response by setting a temperature. Higher values lead to more creative responses; lower values make the response more focused and deterministic.

minimum 0
maximum 1
ToolChoice field
optional

Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.

none is the default when no tools are present. auto is the default if tools are present.

Tools field
optional

List of tool definitions available to the model.

Function ChatCompletionNewParamsToolFunction

Name string

The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

Description string
optional

A description of what the function does, used by the model to choose when and how to call the function.

Parameters map
optional
map[string]any

The parameters the function accepts, described as a JSON Schema object. Omitting parameters defines a function with an empty parameter list.

Strict bool
optional

Whether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is true. Learn more about Structured Outputs in the function calling guide.

Type string

The type of the tool. Currently, only function is supported.

ChatCompletionNewParamsToolTypeFunction const
"function"
TopK field
optional
param.Field[int64]

Only sample from the top K options for each subsequent token.

minimum 0
TopP field
optional
param.Field[float64]

Controls diversity of the response by setting a probability threshold when choosing the next token.

minimum 0
maximum 1
User field
optional
param.Field[string]

A unique identifier representing your application's end user, used for monitoring abuse.

Returns
CreateChatCompletionResponse
package main

import (
  "context"
  "fmt"

  llamaapi "github.com/stainless-sdks/-go"
  "github.com/stainless-sdks/-go/option"
)

func main() {
  client := llamaapi.NewClient(
    option.WithAPIKey("My API Key"),
  )
  createChatCompletionResponse, err := client.Chat.Completions.New(context.TODO(), llamaapi.ChatCompletionNewParams{
    Messages: []llamaapi.MessageUnionParam{{
      OfUser: &llamaapi.UserMessageParam{
        Content: llamaapi.UserMessageContentUnionParam{
          OfString: llamaapi.String("string"),
        },
        Role: llamaapi.UserMessageRoleUser,
      },
    }},
    Model: "model",
  })
  if err != nil {
    panic(err.Error())
  }
  fmt.Printf("%+v\n", createChatCompletionResponse.ID)
}
200 Example
{
  "completion_message": {
    "role": "assistant",
    "content": "string",
    "stop_reason": "stop",
    "tool_calls": [
      {
        "id": "id",
        "function": {
          "arguments": "arguments",
          "name": "name"
        }
      }
    ]
  },
  "id": "id",
  "metrics": [
    {
      "metric": "metric",
      "value": 0,
      "unit": "unit"
    }
  ]
}