• by extasia on 5/6/2024, 5:40:04 PM

    They're poor at recalling the exact descriptions of the ~150,000 ICD codes, which is why these other approaches [1,2] give the known information (the codes) to the model in some form and let it do the task of _assigning_ them to discharge notes, which is the hard part of the task! (Rough sketch of that setup below the references.)

    (Disclaimer: I am an author of one of these papers)

    1. https://arxiv.org/abs/2311.13735

    2. https://arxiv.org/abs/2310.06552
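
    Roughly, the assignment setup looks like the Python sketch below. This is a toy illustration, not the pipeline from either paper: the OpenAI-style chat call, the prompt wording, and the three-code candidate list are all placeholders, and in practice the candidates would come from a retrieval step over the full catalogue.

        # Toy sketch: ask the model to *assign* from known codes rather than recall them.
        # Candidate codes, note text, and prompt wording are illustrative only.
        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        note = "Admitted with chest pain on exertion; longstanding type 2 diabetes."

        # In a real system these come from a retriever over all ~150k codes.
        candidates = {
            "I20.9": "Angina pectoris, unspecified",
            "E11.9": "Type 2 diabetes mellitus without complications",
            "J45.909": "Unspecified asthma, uncomplicated",
        }

        code_list = "\n".join(f"{c}: {d}" for c, d in candidates.items())
        prompt = (
            f"Discharge note:\n{note}\n\n"
            f"Candidate ICD-10 codes:\n{code_list}\n\n"
            "Return only the codes from the list that apply to this note."
        )

        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        print(resp.choices[0].message.content)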

  • by macksd on 5/6/2024, 5:11:20 PM

    >> Our study reaffirms the limitations of LLM tokenization

    Because they used data that needs to be tokenized differently, and didn't really tune the models for that data. That's not really a limitation of LLM tokenization per se.
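
    For a concrete feel of what that means, here's a quick Python check with tiktoken (illustrative only; the exact splits depend on the tokenizer and model):

        # How alphanumeric ICD-10 codes get split by a general-purpose BPE tokenizer.
        import tiktoken

        enc = tiktoken.get_encoding("cl100k_base")
        for code in ["E11.9", "S72.001A", "J45.909"]:
            pieces = [enc.decode([t]) for t in enc.encode(code)]
            print(code, "->", pieces)
        # A code like "S72.001A" typically ends up as several fragments, so the
        # model never sees it as a single atomic symbol.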

    >> We did not evaluate strategies known to improve LLM performance, including ... retrieval augmented generation

    Which is a shame, because this is exactly the kind of use case RAG is supposed to be good for, and the problems they observed are largely the ones it's supposed to help with.

    Looking at the authors, it seems to me they're all subject matter experts in medicine and digital medicine, but their conclusion is the one that favors medical professionals, and they don't really seem to have tried that hard to get good deep learning results.

    I've had nightmares every time I've seen a doctor in the US, frequently because of things not being coded correctly. So honestly I'd just love to see a rigorous study of how often the human staff is messing it up too.

  • by barfbagginus on 5/7/2024, 12:14:47 AM

    The limitations section mentions that the study omitted RAG and focused on base performance as a key bottleneck. But given the usefulness of RAG, and the weakness of base LLMs for this kind of task, base recall performance is not necessarily relevant, nor is it the key bottleneck preventing accurate coding.

    Adding even some slapdash RAG attempts would have produced a more realistic, and still disappointing, result, since assisted LLMs are still only around 75% accurate (see the RAG paper another author shares in their comment). I suppose the space of possible RAG solutions makes it hard to represent fairly, so it's reasonably left to further research.
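
    Even the slapdash version is mostly just a retrieval step bolted onto the prompt, something like this toy Python sketch (TF-IDF standing in for a real retriever, with a made-up four-code catalogue):

        # "Slapdash" RAG: retrieve the top-k ICD descriptions most similar to the
        # note, then hand only those candidates to the LLM instead of asking it
        # to recall codes from memory. TF-IDF is a stand-in for a tuned retriever.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        icd = {  # tiny illustrative slice of the ~150k-code catalogue
            "E11.9": "Type 2 diabetes mellitus without complications",
            "I20.9": "Angina pectoris, unspecified",
            "J45.909": "Unspecified asthma, uncomplicated",
            "N39.0": "Urinary tract infection, site not specified",
        }

        note = "Admitted with chest pain on exertion; longstanding type 2 diabetes."

        codes, descs = zip(*icd.items())
        vec = TfidfVectorizer().fit(list(descs) + [note])
        sims = cosine_similarity(vec.transform([note]), vec.transform(descs))[0]

        top_k = sorted(zip(codes, sims), key=lambda x: x[1], reverse=True)[:2]
        print(top_k)  # candidate codes to put in the prompt for the assignment step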

    I appreciate the testing of base performance, with a STRONG proviso that a relevant conclusion requires more work along the lines of RAG and other tools. I wish this were communicated more clearly in the intro and abstract, and I wonder whether the authors had some unstated reason for not being more explicit about it.

    The study does provide real value. Its benchmark is open source and extensive, and it should be easy to adapt and replicate in other systems. It could become a target benchmark for tool- and retrieval-enhanced medical coding LLMs.