# Completions

## Create

`client.Chat.Completions.New(ctx, body) (*CreateChatCompletionResponse, error)`

**post** `/chat/completions`

Generate a chat completion for the given messages using the specified model.

### Parameters

- **body:** `ChatCompletionNewParams`
  - **Messages:** `param.Field[[]MessageUnion]`

    List of messages in the conversation.
    - A message from the user in a chat conversation.
      - **Content:** `UserMessageContentUnion`

        The content of the user message, which can include text and other media.
        - `string`
        - `[]UserMessageContentArrayOfContentItemUnion`
      - **Role:** `UserMessageRole`

        Must be "user" to identify this as a user message.
    - A system message providing instructions or context to the model.
      - **Content:** `SystemMessageContentUnion`

        The content of the system message.
        - `string`
        - `[]MessageTextContentItem`
      - **Role:** `System`

        Must be "system" to identify this as a system message.
    - A message representing the result of a tool invocation.
      - **Content:** `ToolResponseMessageContentUnion`

        The content of the tool response message, which can include text and other media.
        - `string`
        - `[]MessageTextContentItem`
      - **Role:** `Tool`

        Must be "tool" to identify this as a tool response.
      - **ToolCallID:** `string`

        Unique identifier for the tool call this response is for.
    - A message containing the model's (assistant) response in a chat conversation.
      - **Role:** `Assistant`

        Must be "assistant" to identify this as the model's response.
      - **Content:** `CompletionMessageContentUnion`

        The content of the model's response.
        - `string`
        - A text content item.
          - **Text:** `string`

            Text content.
          - **Type:** `MessageTextContentItemType`

            Discriminator type of the content item. Always "text".
      - **StopReason:** `CompletionMessageStopReason`

        The reason why the model stopped. Options are:
        - `"stop"`: The model reached a natural stopping point.
        - `"tool_calls"`: The model finished generating and invoked a tool call.
        - `"length"`: The model reached the maximum number of tokens specified in the request.
      - **ToolCalls:** `[]CompletionMessageToolCall`

        The tool calls generated by the model, such as function calls.
        - **ID:** `string`

          The ID of the tool call.
        - **Function:** `CompletionMessageToolCallFunction`

          The function that the model called.
          - **Arguments:** `string`

            The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.
          - **Name:** `string`

            The name of the function to call.
  - **Model:** `param.Field[string]`

    The identifier of the model to use.
  - **MaxCompletionTokens:** `param.Field[int64]`

    The maximum number of tokens to generate.
  - **RepetitionPenalty:** `param.Field[float64]`

    Controls the likelihood of generating repetitive responses.
  - **ResponseFormat:** `param.Field[ChatCompletionNewParamsResponseFormatUnion]`

    An object specifying the format that the model must output. Setting this to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs, which ensures the model's output will match your supplied JSON schema. If not specified, the default is `{ "type": "text" }`, and the model will return a free-form text response.
    - `ChatCompletionNewParamsResponseFormatJsonSchema`
    - `ChatCompletionNewParamsResponseFormatText`
  - **Temperature:** `param.Field[float64]`

    Controls the randomness of the response. Higher values lead to more creative responses; lower values make the response more focused and deterministic.
  - **ToolChoice:** `param.Field[ChatCompletionNewParamsToolChoiceUnion]`

    Controls which (if any) tool is called by the model. `none` means the model will not call any tool and instead generates a message. `auto` means the model can pick between generating a message or calling one or more tools. `required` means the model must call one or more tools. Specifying a particular tool via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool. `none` is the default when no tools are present; `auto` is the default if tools are present.
    - `string`
    - `ChatCompletionNewParamsToolChoiceChatCompletionNamedToolChoice`
  - **Tools:** `param.Field[[]ChatCompletionNewParamsTool]`

    List of tool definitions available to the model.
    - **Function:** `ChatCompletionNewParamsToolFunction`
      - **Name:** `string`

        The name of the function to be called. Must contain only a-z, A-Z, 0-9, underscores, and dashes, with a maximum length of 64.
      - **Description:** `string`

        A description of what the function does, used by the model to choose when and how to call the function.
      - **Parameters:** `map[string]any`

        The parameters the function accepts, described as a JSON Schema object. Omitting `parameters` defines a function with an empty parameter list.
      - **Strict:** `bool`

        Whether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the `parameters` field. Only a subset of JSON Schema is supported when `strict` is `true`. Learn more about Structured Outputs in the [function calling guide](docs/guides/function-calling).
    - **Type:** `string`

      The type of the tool. Currently, only `function` is supported.
  - **TopK:** `param.Field[int64]`

    Only sample from the top K options for each subsequent token.
  - **TopP:** `param.Field[float64]`

    Controls diversity of the response by setting a probability threshold when choosing the next token.
  - **User:** `param.Field[string]`

    A unique identifier representing your application's end-user, for monitoring abuse.
### Returns

- `CreateChatCompletionResponse`

### Example

```go
package main

import (
	"context"
	"fmt"

	"github.com/stainless-sdks/-go"
	"github.com/stainless-sdks/-go/option"
)

func main() {
	client := llamaapi.NewClient(
		option.WithAPIKey("My API Key"),
	)
	createChatCompletionResponse, err := client.Chat.Completions.New(context.TODO(), llamaapi.ChatCompletionNewParams{
		Messages: []llamaapi.MessageUnionParam{{
			OfUser: &llamaapi.UserMessageParam{
				Content: llamaapi.UserMessageContentUnionParam{
					OfString: llamaapi.String("string"),
				},
				Role: llamaapi.UserMessageRoleUser,
			},
		}},
		Model: "model",
	})
	if err != nil {
		panic(err.Error())
	}
	fmt.Printf("%+v\n", createChatCompletionResponse.ID)
}
```