## Create

`moderations.create(**kwargs: ModerationCreateParams) -> ModerationCreateResponse`

**post** `/moderations`

Classifies whether the given messages are potentially harmful across several categories.

### Parameters

- **messages:** `Iterable[MessageParam]`

  List of messages in the conversation. Each message is one of:

  - `UserMessage`
  - `SystemMessage`
  - `ToolResponseMessage`
  - `CompletionMessage`

- **model:** `str`

  Optional identifier of the model to use. Defaults to `"Llama-Guard"`.

### Returns

- `class ModerationCreateResponse`

  - **model:** `str`
  - **results:** `List[Result]`
    - **flagged:** `bool`
    - **flagged_categories:** `List[str]`

### Example

```python
from llama_api_client import LlamaAPIClient

client = LlamaAPIClient(
    api_key="My API Key",
)
moderation = client.moderations.create(
    messages=[{
        "content": "string",
        "role": "user",
    }],
)
print(moderation.model)
```
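Continuing from the example above, a minimal sketch of inspecting the verdict might look like the following. It relies only on the `results`, `flagged`, and `flagged_categories` fields documented in the return schema:

```python
# Each Result carries a per-message verdict (`flagged`) plus the list of
# categories that triggered it (`flagged_categories`).
for result in moderation.results:
    if result.flagged:
        print("Flagged categories:", ", ".join(result.flagged_categories))
    else:
        print("Message passed moderation.")
```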
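Since `messages` accepts a whole conversation and `model` is optional, a call covering more than one message type might look like the sketch below. The `"system"` role string (assumed to correspond to `SystemMessage`) and the placeholder message contents are assumptions; `"Llama-Guard"` is simply the documented default passed explicitly:

```python
# Hypothetical multi-turn call: role strings are assumed to mirror the
# MessageParam variants listed above, and "Llama-Guard" is the documented
# default model, named here only for illustration.
moderation = client.moderations.create(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "How do I pick a lock?"},
    ],
    model="Llama-Guard",
)
print(moderation.results[0].flagged)
```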