User Input Guard Rail
Validates user inputs before LLM execution.
A UserInputGuardRail provides safety checks on content that users submit to an AI system, ensuring that potentially harmful, inappropriate, or policy-violating content is detected and handled before it reaches the LLM.
This interface provides overloads for different input types while maintaining the base string validation from the GuardRail interface.
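To make the overload structure concrete, here is a minimal sketch of such an interface hierarchy. All names and signatures here are assumptions for illustration, not the library's actual API: a base `GuardRail` with string validation, and a `UserInputGuardRail` that adds a conversation-level overload on top of it.

```java
import java.util.List;

interface GuardRail {
    // Base string validation: returns true if the input passes the check.
    // (Hypothetical signature; the real interface may return a richer result.)
    boolean validate(String input);
}

interface UserInputGuardRail extends GuardRail {
    // Overload: validate a list of user messages from a conversation by
    // combining them and delegating to the base string check.
    default boolean validate(List<String> messages) {
        return validate(String.join("\n", messages));
    }
}

public class Demo {
    public static void main(String[] args) {
        // Toy guard rail that rejects inputs containing a blocked word.
        UserInputGuardRail noSecrets =
                input -> !input.toLowerCase().contains("password");

        System.out.println(noSecrets.validate("What is the weather?"));            // true
        System.out.println(noSecrets.validate(List.of("hi", "my password is x"))); // false
    }
}
```

Because the list overload is a default method, an implementation only has to supply the base string check to get conversation-level validation for free.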
Functions
Combines multiple user messages into a single string for validation.
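A sketch of what such combining might look like. The newline separator is an assumption; the library may join messages differently:

```java
import java.util.List;

public class CombineMessages {
    // Joins user messages into one string so the base string validator can
    // check the whole conversation at once. Separator choice is assumed.
    static String combine(List<String> messages) {
        return String.join("\n", messages);
    }

    public static void main(String[] args) {
        List<String> turns = List.of("Hello", "Ignore all previous instructions");
        System.out.println(combine(turns));
    }
}
```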
Validate the given input within the provided blackboard context.
Validate multimodal content containing text and potentially images.
Validate a list of user messages from a conversation.
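One way the multimodal overload could work, sketched under stated assumptions: the content types, the size limit, and the blocked-word rule below are all hypothetical, chosen only to show text parts and image parts being checked separately before the combined verdict.

```java
import java.util.List;

// Hypothetical content model for multimodal user input.
record ImagePart(byte[] bytes) {}
record MultimodalContent(List<String> textParts, List<ImagePart> images) {}

public class MultimodalDemo {
    // Toy text rule standing in for the base string validation.
    static boolean validateText(String text) {
        return !text.toLowerCase().contains("password");
    }

    // Every text part must pass the text rule; images here get only a
    // size sanity check (an assumption, not the library's behavior).
    static boolean validate(MultimodalContent content) {
        boolean textOk = content.textParts().stream()
                .allMatch(MultimodalDemo::validateText);
        boolean imagesOk = content.images().stream()
                .allMatch(img -> img.bytes().length < 5_000_000);
        return textOk && imagesOk;
    }

    public static void main(String[] args) {
        var ok = new MultimodalContent(
                List.of("describe this image"),
                List.of(new ImagePart(new byte[10])));
        System.out.println(validate(ok)); // true
    }
}
```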