Create

client.moderations.create(body: ModerationCreateParams, options?: RequestOptions): Promise<ModerationCreateResponse>
POST /moderations

Classifies whether the given messages are potentially harmful across several categories.

Parameters
body: ModerationCreateParams

messages: Array<Message>

List of messages in the conversation. Each Message is one of:

- UserMessage — role: "user", content
- SystemMessage — role: "system", content
- ToolResponseMessage — role: "tool", tool_call_id: string, content
- CompletionMessage — role: "assistant", content, stop_reason, tool_calls: array
model: string (optional)

Optional identifier of the model to use. Defaults to "Llama-Guard".
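As a sketch of the request shape described above, the types are mirrored locally here for illustration only; the SDK's own `ModerationCreateParams` definition is the source of truth. A body carrying a system and a user message with an explicit model might look like:

```typescript
// Local mirror of the documented request shape (illustrative, not the SDK's types).
type Role = 'user' | 'system' | 'tool' | 'assistant';

interface Message {
  role: Role;
  content: string;
}

interface ModerationCreateParams {
  messages: Message[];
  model?: string; // optional; the service defaults to "Llama-Guard"
}

// Build a body with two messages and an explicit model override.
const body: ModerationCreateParams = {
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'How do I pick a strong password?' },
  ],
  model: 'Llama-Guard',
};

console.log(body.messages.length); // 2
```

In a real call, `body` would be passed directly to `client.moderations.create(body)`.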

Returns
ModerationCreateResponse

model: string
results: Array<Result>

Each Result contains:

- flagged: boolean
- flagged_categories: Array<string>
import LlamaAPIClient from 'llama-api-client';

const client = new LlamaAPIClient({
  apiKey: 'My API Key',
});

const moderation = await client.moderations.create({ messages: [{ content: 'string', role: 'user' }] });

console.log(moderation.model);
200 Example
{
  "model": "model",
  "results": [
    {
      "flagged": true,
      "flagged_categories": [
        "string"
      ]
    }
  ]
}
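A caller typically inspects each result's `flagged` flag and collects the offending categories. A minimal sketch, with the response shape mirrored locally and the 200 example above used as the payload:

```typescript
// Local mirror of the documented response shape (illustrative, not the SDK's types).
interface Result {
  flagged: boolean;
  flagged_categories: string[];
}

interface ModerationCreateResponse {
  model: string;
  results: Result[];
}

// The 200 example payload from the docs, as a local literal.
const response: ModerationCreateResponse = {
  model: 'model',
  results: [{ flagged: true, flagged_categories: ['string'] }],
};

// Collect every category that caused any result to be flagged.
const flaggedCategories = response.results
  .filter((r) => r.flagged)
  .flatMap((r) => r.flagged_categories);

console.log(flaggedCategories); // ["string"] for the example payload
```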