## Create

`chat.completions.create(**kwargs: CompletionCreateParams) -> CreateChatCompletionResponse`

**post** `/chat/completions`

Generate a chat completion for the given messages using the specified model.

### Parameters

- **messages:** `Iterable[MessageParam]` List of messages in the conversation.
  - `UserMessage`
  - `SystemMessage`
  - `ToolResponseMessage`
  - `CompletionMessage`
- **model:** `str` The identifier of the model to use.
- **max\_completion\_tokens:** `int` The maximum number of tokens to generate.
- **repetition\_penalty:** `float` Controls the likelihood of generating repetitive responses.
- **response\_format:** `ResponseFormat` An object specifying the format that the model must output. Setting this to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs, which ensures the model's output matches your supplied JSON schema. If not specified, the default is `{"type": "text"}`, and the model returns a free-form text response.
  - `class ResponseFormatJsonSchemaResponseFormat` Configuration for JSON schema-guided response generation.
    - **json\_schema:** `ResponseFormatJsonSchemaResponseFormatJsonSchema` The JSON schema the response should conform to.
      - **name:** `str` The name of the response format.
      - **schema:** `object` The JSON schema the response should conform to. In a Python SDK, this is often a `pydantic` model.
    - **type:** `Literal["json_schema"]` The type of response format being defined. Always `json_schema`.
      - `"json_schema"`
  - `class ResponseFormatTextResponseFormat` Configuration for text-guided response generation.
    - **type:** `Literal["text"]` The type of response format being defined. Always `text`.
      - `"text"`
- **stream:** `Literal[false]` If `true`, generate an SSE event stream of the response. Defaults to `false`.
  - `false`
- **temperature:** `float` Controls the randomness of the response. Higher values lead to more creative responses; lower values make the response more focused and deterministic.
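As a minimal sketch, a `response_format` payload for Structured Outputs might be assembled as plain data like this (the `city_info` name and schema below are hypothetical, not part of the API):

```python
import json

# Hypothetical JSON schema describing the desired structured response.
city_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "population": {"type": "integer"},
    },
    "required": ["name", "population"],
}

# The response_format shape described above: a json_schema response format
# wrapping a named schema object.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "city_info",
        "schema": city_schema,
    },
}

# This dict would be passed as the `response_format` argument to
# client.chat.completions.create(...); it is shown here as data only.
print(json.dumps(response_format, indent=2))
```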
- **tool\_choice:** `ToolChoice` Controls which (if any) tool is called by the model. `none` means the model will not call any tool and instead generates a message. `auto` means the model can pick between generating a message or calling one or more tools. `required` means the model must call one or more tools. Specifying a particular tool via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool. `none` is the default when no tools are present; `auto` is the default if tools are present.
  - **ToolChoiceUnionMember0:** `Literal["none", "auto", "required"]` `none` means the model will not call any tool and instead generates a message. `auto` means the model can pick between generating a message or calling one or more tools. `required` means the model must call one or more tools.
    - `"none"`
    - `"auto"`
    - `"required"`
  - `class ToolChoiceChatCompletionNamedToolChoice` Specifies a tool the model should use. Use this to force the model to call a specific function.
    - **function:** `ToolChoiceChatCompletionNamedToolChoiceFunction`
      - **name:** `str` The name of the function to call.
    - **type:** `Literal["function"]` The type of the tool. Currently, only `function` is supported.
      - `"function"`
- **tools:** `Iterable[Tool]` List of tool definitions available to the model.
  - **function:** `ToolFunction`
    - **name:** `str` The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
    - **description:** `str` A description of what the function does, used by the model to choose when and how to call the function.
    - **parameters:** `Dict[str, object]` The parameters the function accepts, described as a JSON Schema object. Omitting `parameters` defines a function with an empty parameter list.
    - **strict:** `bool` Whether to enable strict schema adherence when generating the function call. If set to `true`, the model will follow the exact schema defined in the `parameters` field.
      Only a subset of JSON Schema is supported when `strict` is `true`. Learn more about Structured Outputs in the [function calling guide](docs/guides/function-calling).
  - **type:** `Literal["function"]` The type of the tool. Currently, only `function` is supported.
    - `"function"`
- **top\_k:** `int` Only sample from the top K options for each subsequent token.
- **top\_p:** `float` Controls diversity of the response by setting a probability threshold when choosing the next token.
- **user:** `str` A unique identifier representing your application's end user, used for monitoring and abuse detection.

### Returns

- `CreateChatCompletionResponse`

### Example

```python
from llama_api_client import LlamaAPIClient

client = LlamaAPIClient(
    api_key="My API Key",
)
create_chat_completion_response = client.chat.completions.create(
    messages=[
        {
            "content": "string",
            "role": "user",
        }
    ],
    model="model",
)
print(create_chat_completion_response.id)
```
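For illustration, a tool definition and a named `tool_choice` following the shapes described above might look like this (the `get_weather` function and its schema are hypothetical, not part of the API):

```python
import re

# Hypothetical tool definition following the Tool shape above.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
        "strict": True,
    },
}

# Function names must be a-z, A-Z, 0-9, underscores, or dashes, max 64 chars.
NAME_PATTERN = re.compile(r"^[A-Za-z0-9_-]{1,64}$")
assert NAME_PATTERN.match(get_weather_tool["function"]["name"])

# Named tool choice forcing the model to call get_weather.
tool_choice = {"type": "function", "function": {"name": "get_weather"}}

# These would be passed as tools=[get_weather_tool], tool_choice=tool_choice
# to client.chat.completions.create(...); shown here as plain data only.
```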
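The sampling controls can likewise be sketched as a set of keyword arguments (all values below are illustrative defaults, not recommendations from this API):

```python
# Hypothetical sampling settings combining the parameters described above.
sampling_kwargs = {
    "temperature": 0.7,          # higher -> more creative responses
    "top_p": 0.9,                # probability threshold for next-token choice
    "top_k": 40,                 # sample only from the top 40 options
    "repetition_penalty": 1.1,   # discourages repetitive responses
    "max_completion_tokens": 256,
    "user": "end-user-123",      # hypothetical end-user id for abuse monitoring
}

# These would be splatted into the call alongside messages and model, e.g.
# client.chat.completions.create(messages=..., model=..., **sampling_kwargs)
```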