I recently built this at a hackathon. Like website-to-chatbot products, it crawls a webpage to understand a business.
But instead of producing a chatbot, it generates a set of guardrails for a chatbot based on your webpage.
-
For example, if your website has information about a hotel, an LLM using RAG will attempt to answer most questions about hotels.
But by default it has no real-time information on things like weather or traffic conditions.
Rather than risk the chatbot hallucinating an answer, the guardrail model would detect a query likely to result in a hallucination and preemptively block it from reaching the underlying model.
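
A minimal sketch of how such a pre-filter might work, assuming an LLM-as-classifier approach. The `llm_classify` callback and the scope text are placeholders for illustration, not the actual project code; in practice the scope description would be generated from the crawled site content.

```python
# Sketch of a runtime guardrail check (hypothetical, not the hackathon code).
# `llm_classify` stands in for whatever model call you use; it takes a prompt
# string and returns the model's text response.

# Scope summary, assumed to be generated from the crawled website.
SCOPE = (
    "A hotel website: rooms, rates, amenities, check-in/check-out times, "
    "location, and booking policies. No real-time data such as weather, "
    "traffic, or current local events."
)

GUARDRAIL_PROMPT = """You are a guardrail for a customer-facing chatbot.
The chatbot only knows the following about the business:

{scope}

Can the question below be answered from that knowledge alone, without
real-time or external information? Reply with exactly ALLOW or BLOCK.

Question: {question}
"""

def guard(question: str, llm_classify) -> bool:
    """Return True if the query may be forwarded to the underlying chatbot."""
    verdict = llm_classify(GUARDRAIL_PROMPT.format(scope=SCOPE, question=question))
    return verdict.strip().upper().startswith("ALLOW")

# Usage: only forward queries the guardrail allows.
# if guard("What's the weather near the hotel this weekend?", llm_classify):
#     answer = chatbot(question)
# else:
#     answer = "Sorry, I can only answer questions about the hotel itself."
```

The key design choice is that the guardrail runs before the RAG pipeline, so an out-of-scope query never reaches the underlying model at all.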