by swax on 3/8/2024, 8:21:05 PM
NAISYS is an open-source command shell proxy for LLM agents that I released a few days ago.
It runs agents in a context-friendly wrapper around your actual shell, and it also includes a custom mail client and browser wrapper built to be agent- and context-friendly.
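Roughly, the wrapper idea looks like this (a minimal sketch, not the actual NAISYS code; the function name and output budget are illustrative):

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const exec = promisify(execFile);

// Rough character budget for output returned to the model (hypothetical value).
const MAX_OUTPUT_CHARS = 4000;

/** Run a shell command and return output trimmed to fit the LLM's context. */
async function runForAgent(command: string, args: string[]): Promise<string> {
  try {
    const { stdout, stderr } = await exec(command, args, { timeout: 30_000 });
    const combined = stdout + (stderr ? `\n[stderr]\n${stderr}` : "");
    // Truncate long output so a single `cat` of a huge file
    // can't blow the agent's context window.
    return combined.length > MAX_OUTPUT_CHARS
      ? combined.slice(0, MAX_OUTPUT_CHARS) + "\n[output truncated]"
      : combined;
  } catch (err) {
    // Surface failures as text the agent can read and react to.
    return `[command failed] ${(err as Error).message}`;
  }
}
```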
I have a demo video in which a Claude 3 agent and a GPT-4 agent build a website from scratch together on the command line: https://www.youtube.com/watch?v=Ttya3ixjumo

by brevitea on 3/8/2024, 10:04:34 PM
Thanks for the demo video; very cool. How do you prevent prompt injection attacks?
E.g., if you create a user account for the LLM models to run in, how do you prevent an attack where the LLMs are leveraged to escalate privileges as close to admin as possible? And how do you meet PII/PHI standards with functionality like this?
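For concreteness, the sort of lockdown I mean might look like this (a minimal sketch, assuming a Linux host and a dedicated low-privilege "agent" account; the allowlist and uid/gid values are hypothetical):

```typescript
import { execFileSync } from "node:child_process";

// Hypothetical allowlist: the only commands the agent account may run.
const ALLOWED = new Set(["ls", "cat", "git", "node"]);

function runAsAgent(command: string, args: string[]): string {
  if (!ALLOWED.has(command)) {
    return `[blocked] '${command}' is not on the allowlist`;
  }
  try {
    // Drop to a dedicated unprivileged user (uid/gid 1001 is an assumption);
    // the proxy itself must have permission to switch users for this to work.
    return execFileSync(command, args, {
      uid: 1001,
      gid: 1001,
      timeout: 30_000,
      encoding: "utf8",
    });
  } catch (err) {
    return `[command failed] ${(err as Error).message}`;
  }
}
```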