UserInputGuardRail

Validates user inputs before LLM execution.

A UserInputGuardRail provides safety checks on content that users submit to an AI system, detecting and handling potentially harmful, inappropriate, or policy-violating content before it is processed by the LLM.

This interface provides overloads for different input types while maintaining the base string validation from the GuardRail interface.
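For illustration, a minimal implementation could look like the sketch below. The `ValidationResult` and `Blackboard` types shown are simplified stand-ins for the real API types, and `BlockedTermsGuardRail` is a hypothetical validator, not part of the library.

```kotlin
// Simplified stand-in types, assumed for illustration only;
// the real GuardRail API defines richer versions of these.
sealed class ValidationResult {
    object Pass : ValidationResult()
    data class Fail(val reason: String) : ValidationResult()
}

class Blackboard // stand-in for the shared agent context

// Hypothetical validator that rejects inputs containing blocked terms.
class BlockedTermsGuardRail(private val blockedTerms: Set<String>) {
    val name = "blocked-terms"
    val description = "Rejects inputs containing configured blocked terms"

    fun validate(input: String, blackboard: Blackboard): ValidationResult {
        val hit = blockedTerms.firstOrNull { input.contains(it, ignoreCase = true) }
        return if (hit == null) ValidationResult.Pass
        else ValidationResult.Fail("input contains blocked term: $hit")
    }
}
```

A real implementation would typically also override the open `validate` overloads when it needs to inspect images or per-message structure rather than a single combined string.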

Properties

abstract val description: String

Description of what this validator checks. Used for documentation and debugging purposes.

abstract val name: String

Human-readable name for this validator. Used for logging, error reporting, and configuration.

Functions

open fun combineMessages(userMessages: List<UserMessage>): String

Combines multiple user messages into a single string for validation.
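The combination strategy is not spelled out here; a plausible sketch, assuming a simplified `UserMessage` that carries only text, joins the message contents with newlines so the conversation can be validated as a single string:

```kotlin
data class UserMessage(val content: String) // simplified stand-in

// Plausible default: concatenate message contents in order,
// newline-separated, so one string check covers the whole list.
fun combineMessages(userMessages: List<UserMessage>): String =
    userMessages.joinToString(separator = "\n") { it.content }
```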

abstract fun validate(input: String, blackboard: Blackboard): ValidationResult

Validate the given input within the provided blackboard context.

open fun validate(content: MultimodalContent, blackboard: Blackboard): ValidationResult

Validate multimodal content containing text and potentially images.

open fun validate(userMessages: List<UserMessage>, blackboard: Blackboard): ValidationResult

Validate a list of user messages from a conversation.
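Taken together, the open overloads can funnel richer input shapes into the single abstract string check. The sketch below shows one plausible delegation pattern using simplified stand-in types; the library's actual defaults may differ.

```kotlin
data class UserMessage(val content: String) // simplified stand-in
class Blackboard                            // stand-in for shared context

sealed class ValidationResult {
    object Pass : ValidationResult()
    data class Fail(val reason: String) : ValidationResult()
}

// Illustrative sketch of the interface shape, not the real declaration.
interface UserInputGuardRailSketch {
    // The one abstract check every guardrail must implement.
    fun validate(input: String, blackboard: Blackboard): ValidationResult

    // Flatten a conversation's user messages into one string.
    fun combineMessages(userMessages: List<UserMessage>): String =
        userMessages.joinToString("\n") { it.content }

    // The list overload delegates to the string check above.
    fun validate(userMessages: List<UserMessage>, blackboard: Blackboard): ValidationResult =
        validate(combineMessages(userMessages), blackboard)
}
```

With this shape, an implementor only has to supply the string check; the list overload comes for free.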