• by binarymax on 1/2/2024, 8:04:20 PM

    Interesting, but this aspect makes me double-take: "We demonstrate that Mistral-7B, when fine-tuned solely on synthetic data, attains competitive performance on the BEIR [40] and MTEB [27] benchmarks".

    E5/BGE large are an order of magnitude smaller than Mistral-7B. So is this just "bigger model wins" in disguise?

    I need to read the whole paper carefully, but this jumped out at me.

  • by nalzok on 1/2/2024, 10:58:45 PM

    > Subjects: Computation and Language (cs.CL); Information Retrieval (cs.IR)

    I'm surprised they didn't put `Machine Learning (cs.LG)` and `Machine Learning (stat.ML)`.

  • by 3abiton on 1/3/2024, 12:25:17 AM

    I'm confused: don't LLMs already produce embeddings of text?
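    They do, but only per token: a decoder-only LLM's final layer yields one hidden-state vector for every token in the input, while an embedding model must emit a single fixed-size vector per text. The gap is closed by a pooling step (and typically contrastive fine-tuning so the pooled vector is useful for retrieval). A minimal numpy sketch of the two common pooling strategies, using a small random matrix as a stand-in for real hidden states:

```python
import numpy as np

# Stand-in for an LLM's final-layer output: one hidden-state vector
# per token, shape (seq_len, hidden_dim). Real models would produce
# this from a tokenized input; values here are random for illustration.
rng = np.random.default_rng(0)
hidden_states = rng.standard_normal((4, 8))  # 4 tokens, 8-dim states

# Strategy 1 - mean pooling: average the token vectors.
mean_embedding = hidden_states.mean(axis=0)

# Strategy 2 - last-token pooling (used by some decoder-only
# embedders): take the hidden state of the final token, e.g. an
# appended end-of-sequence token.
last_embedding = hidden_states[-1]

# L2-normalize so cosine similarity reduces to a dot product.
def normalize(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

mean_embedding = normalize(mean_embedding)
last_embedding = normalize(last_embedding)
```

    Either way the output is one unit-length vector per text; the paper's contribution is in how the model is fine-tuned so that such pooled vectors rank well on retrieval benchmarks, not in the pooling mechanics themselves.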