Securing LLM Systems Against Prompt Injection – Nvidia Technical Blog
by yandie on 8/4/2023, 3:14:10 PM
Who executes LLM-generated code (or any unreviewed, non-PR'd code) against trusted environments/databases? I hope that's just a bad pattern introduced by langchain and not the norm...
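To make the risk concrete, here's a minimal hypothetical sketch (SQLite, all names invented, not from the article) of the pattern being criticized: model output executed verbatim against a trusted database, versus the same output run on a query-only connection.

```python
import sqlite3

# Hypothetical model output after a prompt injection: instead of the
# intended SELECT, the "generated query" is a destructive statement.
llm_generated_sql = "DROP TABLE users"

# Risky pattern: a fully trusted connection executes model output as-is.
trusted = sqlite3.connect(":memory:")
trusted.execute("CREATE TABLE users (id INTEGER)")
trusted.execute(llm_generated_sql)  # succeeds -- the table is gone

# One mitigation sketch: run model-generated SQL on a connection with
# PRAGMA query_only set, so writes and schema changes fail instead of
# silently executing.
guarded = sqlite3.connect(":memory:")
guarded.execute("CREATE TABLE users (id INTEGER)")
guarded.execute("PRAGMA query_only = ON")
try:
    guarded.execute(llm_generated_sql)
except sqlite3.OperationalError as err:
    print("blocked:", err)
```

This doesn't solve injection (a read-only connection can still leak data it can read), but it illustrates why code review or a sandbox boundary belongs between the model and anything trusted.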