Rules and filters applied around an LLM to stop it from producing harmful, off-topic, or legally risky output. Usually implemented as a second model plus validators.
"Added guardrails so the bot won't quote stock prices."