
Create
moderations.create(**kwargs: ModerationCreateParams) -> ModerationCreateResponse

POST /moderations

Classifies whether the given messages are potentially harmful across several categories.

Parameters

messages: iterable (required)

List of messages in the conversation. Each entry is one of:

  • UserMessage: content (union), role (literal)
  • SystemMessage: content (union), role (literal)
  • ToolResponseMessage: content (union), role (literal), tool_call_id (str)
  • CompletionMessage: role (literal), content (Content), stop_reason (literal), tool_calls (list)

model: str (optional)

Optional identifier of the model to use. Defaults to "Llama-Guard".
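The message shapes above can be built as plain dicts before being passed to moderations.create. The helpers below are illustrative only, not part of the client library; the field names follow the parameter list, and the "tool" role value for ToolResponseMessage is an assumption.

```python
# Illustrative helpers for building message dicts matching the shapes above.
# Field names are taken from the parameter list; the "tool" role literal for
# ToolResponseMessage is an assumption, not confirmed by this reference.

def user_message(text: str) -> dict:
    return {"role": "user", "content": text}

def system_message(text: str) -> dict:
    return {"role": "system", "content": text}

def tool_response_message(text: str, tool_call_id: str) -> dict:
    return {"role": "tool", "content": text, "tool_call_id": tool_call_id}

# A small conversation payload suitable for the messages parameter.
messages = [
    system_message("You are a helpful assistant."),
    user_message("How do I make a cake?"),
]
```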

Returns

ModerationCreateResponse (class)

  • model: str
  • results: List[Result], where each Result contains:
    • flagged: bool
    • flagged_categories: List[str]
from llama_api_client import LlamaAPIClient

client = LlamaAPIClient(
    api_key="My API Key",
)
moderation = client.moderations.create(
    messages=[{
        "content": "string",
        "role": "user",
    }],
)
print(moderation.model)
200 Example
{
  "model": "model",
  "results": [
    {
      "flagged": true,
      "flagged_categories": [
        "string"
      ]
    }
  ]
}
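Given a response shaped like the 200 example above, the flagged categories across all results can be collected as follows. This is a sketch over plain dicts mirroring the JSON shape, not the client's typed response models.

```python
def flagged_categories(response: dict) -> set[str]:
    """Collect the union of flagged categories across all results.

    Works on a plain dict shaped like the 200 example response.
    """
    cats: set[str] = set()
    for result in response.get("results", []):
        if result.get("flagged"):
            cats.update(result.get("flagged_categories", []))
    return cats

# A response dict shaped like the 200 example above.
response = {
    "model": "model",
    "results": [
        {"flagged": True, "flagged_categories": ["violence"]},
        {"flagged": False, "flagged_categories": []},
    ],
}
print(sorted(flagged_categories(response)))  # → ['violence']
```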