# Completions

## Create

`client.chat.completions.create(body: CompletionCreateParams, options?: RequestOptions): CreateChatCompletionResponse | Stream`

**post** `/chat/completions`

Generate a chat completion for the given messages using the specified model.

### Parameters

- **body:** `CompletionCreateParams = CompletionCreateParamsNonStreaming | CompletionCreateParamsStreaming`
  - `CompletionCreateParamsNonStreaming extends CompletionCreateParamsBase`
    - **stream:** `false` — If true, generate an SSE event stream of the response. Defaults to `false`.
  - `CompletionCreateParamsStreaming extends CompletionCreateParamsBase`
    - **stream:** `true` — If true, generate an SSE event stream of the response.

### Returns

- `CreateChatCompletionResponse`

### Example

```typescript
import LlamaAPIClient from 'llama-api-client';

const client = new LlamaAPIClient({
  apiKey: 'My API Key',
});

const createChatCompletionResponse = await client.chat.completions.create({
  messages: [{ content: 'string', role: 'user' }],
  model: 'model',
});

console.log(createChatCompletionResponse.id);
```
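When `stream: true` is passed, the call matches `CompletionCreateParamsStreaming` and returns a `Stream` of SSE events rather than a single response. A minimal sketch, assuming the same client setup as the example above; the shape of each streamed chunk is not specified here, so the sketch logs raw chunks instead of assuming field names:

```typescript
import LlamaAPIClient from 'llama-api-client';

const client = new LlamaAPIClient({
  apiKey: 'My API Key',
});

// `stream: true` selects the CompletionCreateParamsStreaming variant,
// so `create` resolves to an async-iterable Stream of SSE events.
const stream = await client.chat.completions.create({
  messages: [{ content: 'string', role: 'user' }],
  model: 'model',
  stream: true,
});

for await (const chunk of stream) {
  // Chunk field names depend on the API version; log the raw event.
  console.log(chunk);
}
```

Iterating the stream with `for await` consumes events as they arrive, which lets a caller render partial output before the full completion is finished.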