• by nbrad on 10/18/2023, 7:47:13 PM

    In general, just providing a schema and asking for the response in JSON with few-shot examples is extremely (99%+) reliable in my experience.

    I've found GPT-3.5 more than adequate at inferring schemas and filling them for conventional use cases like chat-based forms (as an alternative to Google Forms/TypeForm); my code and prompts are available at: https://github.com/nsbradford/talkformai - I've also used this to extract structured data from code for LLM coding agents (e.g. "return the names of every function in this file")

    In my opinion, more and more APIs are likely to become unstructured and be reduced to LLM agents chatting with each other; I wrote a brief blog about this here: https://nickbradford.substack.com/p/llm-agents-behind-every-...
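    The schema-plus-few-shot approach described above can be sketched as follows. This is a minimal illustration, not code from the linked repo: the schema, the example pair, and the helper names are all assumptions, and the actual model call is omitted so only the prompt assembly and response validation are shown.

    ```python
    import json

    # Assumed example schema; any flat key -> type description works here.
    SCHEMA = {"name": "string", "email": "string", "age": "number"}

    # One hypothetical few-shot pair (input text, expected JSON output).
    FEW_SHOT = [
        ("I'm Ada, ada@example.com, 36 years old.",
         '{"name": "Ada", "email": "ada@example.com", "age": 36}'),
    ]

    def build_prompt(user_text: str) -> str:
        """Assemble a prompt: schema, few-shot examples, then the new input."""
        lines = ["Reply ONLY with JSON matching this schema:", json.dumps(SCHEMA)]
        for inp, out in FEW_SHOT:
            lines += [f"Input: {inp}", f"Output: {out}"]
        lines += [f"Input: {user_text}", "Output:"]
        return "\n".join(lines)

    def parse_reply(reply: str) -> dict:
        """Parse the model's reply and check every schema key is present."""
        data = json.loads(reply)
        missing = set(SCHEMA) - set(data)
        if missing:
            raise ValueError(f"missing keys: {missing}")
        return data
    ```

    In practice `build_prompt(...)` would be sent to the model and the raw completion fed to `parse_reply`, retrying on a `ValueError` or `json.JSONDecodeError`.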

  • by AlwaysNewb23 on 10/18/2023, 3:57:17 PM

    I've tried doing things like this and found it's often not totally reliable. I've had a hard time getting consistent output and will randomly get variations I did not expect. I've found it useful for cleaning up data as a manual task, but not for automating a process.

  • by PaulHoule on 10/18/2023, 3:46:38 PM

    How successful was this effort? How did you know how successful it was?

  • by tmaly on 10/18/2023, 4:36:12 PM

    I have had some good results processing survey data.

    Having the LLM generalize responses, look for patterns, and rank them by frequency worked well.
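    That pipeline can be sketched roughly as below. The generalization step would be an LLM call in practice; here it is stubbed with a hypothetical lookup table (the themes and the `rank_themes` helper are illustrative assumptions) so the frequency-ranking part is runnable.

    ```python
    from collections import Counter

    # Stub standing in for the LLM's "generalize each answer to a theme" step.
    # A real pipeline would prompt the model per response (or per batch).
    GENERALIZE = {
        "too pricey": "price",
        "costs too much": "price",
        "slow app": "performance",
        "laggy": "performance",
    }

    def rank_themes(responses):
        """Map each free-text answer to a theme, then rank themes by frequency."""
        themes = [GENERALIZE.get(r.lower().strip(), "other") for r in responses]
        return Counter(themes).most_common()
    ```

    The output is a list of `(theme, count)` pairs sorted by frequency, which matches the "rank by frequency" step the comment describes.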