Create
moderations.create(**kwargs: ModerationCreateParams) -> ModerationCreateResponse
POST /moderations
Classifies whether the given messages are potentially harmful across several categories.
Parameters
model: str
optional
Optional identifier of the model to use. Defaults to "Llama-Guard".
Returns
ModerationCreateResponse (class)
model: str
results: list
from llama_api_client import LlamaAPIClient

client = LlamaAPIClient(
    api_key="My API Key",
)
moderation = client.moderations.create(
    messages=[
        {
            "content": "string",
            "role": "user",
        }
    ],
)
print(moderation.model)

200 Example
{
"model": "model",
"results": [
{
"flagged": true,
"flagged_categories": [
"string"
]
}
]
}
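The 200 response above can be post-processed with plain Python once parsed. A minimal sketch, assuming the response follows the shape shown (the helper name `collect_flagged_categories` is hypothetical, not part of the SDK):

```python
def collect_flagged_categories(response: dict) -> set:
    """Collect the union of flagged categories across all results.

    Assumes the 200 example shape above:
    {"model": ..., "results": [{"flagged": bool, "flagged_categories": [str]}]}
    """
    categories = set()
    for result in response.get("results", []):
        # Only flagged results carry categories worth reporting.
        if result.get("flagged"):
            categories.update(result.get("flagged_categories", []))
    return categories


# Mirrors the 200 example payload shown above.
response = {
    "model": "model",
    "results": [{"flagged": True, "flagged_categories": ["string"]}],
}
print(collect_flagged_categories(response))  # → {'string'}
```

The same traversal works on the SDK's typed response by reading `result.flagged` and `result.flagged_categories` attributes instead of dict keys.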