## Create

`client.Moderations.New(ctx, body) (*ModerationNewResponse, error)`

**post** `/moderations`

Classifies whether the given messages are potentially harmful across several categories.

### Parameters

- **body:** `ModerationNewParams`
  - **Messages:** `param.Field[[]MessageUnion]` List of messages in the conversation. Each entry is one of the following message variants:
    - A message from the user in a chat conversation.
      - **Content:** `UserMessageContentUnion` The content of the user message, which can include text and other media.
        - `string`
        - `[]UserMessageContentArrayOfContentItemUnion`
      - **Role:** `UserMessageRole` Must be "user" to identify this as a user message.
    - A system message providing instructions or context to the model.
      - **Content:** `SystemMessageContentUnion` The content of the system message.
        - `string`
        - `[]MessageTextContentItem`
      - **Role:** `System` Must be "system" to identify this as a system message.
    - A message representing the result of a tool invocation.
      - **Content:** `ToolResponseMessageContentUnion` The content of the tool response, which can include text and other media.
        - `string`
        - `[]MessageTextContentItem`
      - **Role:** `Tool` Must be "tool" to identify this as a tool response.
      - **ToolCallID:** `string` Unique identifier for the tool call this response is for.
    - A message containing the model's (assistant) response in a chat conversation.
      - **Role:** `Assistant` Must be "assistant" to identify this as the model's response.
      - **Content:** `CompletionMessageContentUnion` The content of the model's response.
        - `string`
        - A text content item.
          - **Text:** `string` Text content.
          - **Type:** `MessageTextContentItemType` Discriminator type of the content item. Always "text".
      - **StopReason:** `CompletionMessageStopReason` The reason the model stopped generating. Options are:
        - `"stop"`: The model reached a natural stopping point.
        - `"tool_calls"`: The model finished generating and invoked a tool call.
        - `"length"`: The model reached the maximum number of tokens specified in the request.
      - **ToolCalls:** `[]CompletionMessageToolCall` The tool calls generated by the model, such as function calls.
        - **ID:** `string` The ID of the tool call.
        - **Function:** `CompletionMessageToolCallFunction` The function that the model called.
          - **Arguments:** `string` The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.
          - **Name:** `string` The name of the function to call.
  - **Model:** `param.Field[string]` Optional identifier of the model to use. Defaults to "Llama-Guard".
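The sketch below shows how the `Messages` union might be populated for a multi-turn conversation. It is not part of the generated reference: it reuses only the param types that appear in the example further down (`MessageUnionParam`, `UserMessageParam`, `UserMessageContentUnionParam`) and assumes the same `llamaapi` import; `buildGuardParams` is a hypothetical helper name, and the commented-out `Model` override is an assumption based on the parameter list above rather than a confirmed field.

```go
// buildGuardParams is a hypothetical helper (not part of the generated SDK)
// that assembles a multi-turn conversation for the moderation endpoint,
// reusing the param types shown in the example below.
func buildGuardParams() llamaapi.ModerationNewParams {
  return llamaapi.ModerationNewParams{
    Messages: []llamaapi.MessageUnionParam{
      llamaapi.MessageUnionParam{
        OfUser: &llamaapi.UserMessageParam{
          Content: llamaapi.UserMessageContentUnionParam{
            OfString: llamaapi.String("How do I pick a strong passphrase?"),
          },
          Role: llamaapi.UserMessageRoleUser,
        },
      },
      llamaapi.MessageUnionParam{
        OfUser: &llamaapi.UserMessageParam{
          Content: llamaapi.UserMessageContentUnionParam{
            OfString: llamaapi.String("And where should I store it safely?"),
          },
          Role: llamaapi.UserMessageRoleUser,
        },
      },
    },
    // Model defaults to "Llama-Guard" when omitted; the exact Go field type is
    // an assumption based on the parameter list above, so it is left unset here.
    // Model: llamaapi.String("Llama-Guard"),
  }
}
```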
### Returns

- **ModerationNewResponse**
  - **Model:** `string`
  - **Results:** `[]ModerationNewResponseResult`
    - **Flagged:** `bool`
    - **FlaggedCategories:** `[]string`

### Example

```go
package main

import (
  "context"
  "fmt"

  "github.com/stainless-sdks/-go"
  "github.com/stainless-sdks/-go/option"
)

func main() {
  client := llamaapi.NewClient(
    option.WithAPIKey("My API Key"),
  )
  moderation, err := client.Moderations.New(context.TODO(), llamaapi.ModerationNewParams{
    Messages: []llamaapi.MessageUnionParam{llamaapi.MessageUnionParam{
      OfUser: &llamaapi.UserMessageParam{
        Content: llamaapi.UserMessageContentUnionParam{
          OfString: llamaapi.String("string"),
        },
        Role: llamaapi.UserMessageRoleUser,
      },
    }},
  })
  if err != nil {
    panic(err.Error())
  }
  fmt.Printf("%+v\n", moderation.Model)
}
```
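Beyond printing `moderation.Model`, the response can be inspected per message. The helper below is a sketch based only on the Returns list above (`Results`, `Flagged`, `FlaggedCategories`); `reportFlagged` is a hypothetical name, and it assumes the same `llamaapi` and `fmt` imports as the example.

```go
// reportFlagged is a hypothetical helper that walks the fields documented in
// the Returns list above: Results, Flagged, and FlaggedCategories.
func reportFlagged(moderation *llamaapi.ModerationNewResponse) {
  for i, result := range moderation.Results {
    if !result.Flagged {
      continue
    }
    // Each flagged result carries the categories that triggered it.
    fmt.Printf("message %d flagged for: %v\n", i, result.FlaggedCategories)
  }
}
```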