• by jbellis on 6/7/2025, 11:44:53 AM

    This is a useful analysis, but only as a first cut, and sometimes not even that -- Grok 3 mini and DeepSeek V3 are by far the least expensive coding models worth trying, for scenarios where you do and don't care about the vendor training on your requests, respectively. One of those is "open source" (by which he seems to mean "open weights") but far too large to run locally.

    [I guess that must be a useful market niche, though; apparently this is by a company selling batch compute on exactly those small open-weights models.]

    The problem is that the author is evaluating models by dividing their Artificial Analysis score by a blended cost per token, but most tasks have an intelligence "floor" below which it doesn't matter how cheap a model is: it will never succeed. And when you strip out the very high ratios from the super-cheap 4B OSS models, the rest are significantly outclassed by Flash 2.0 (not on his chart, but still worth considering) and 2.5, not to mention models that might be better at domain-specific tasks, like Grok 3 mini for code.
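
    A toy sketch of that floor effect, with entirely made-up scores and prices (none of these numbers come from Artificial Analysis or any real price sheet):

      # Toy illustration: ranking purely by intelligence-per-dollar favors
      # models that fall below the capability floor a task actually requires.
      # All scores, prices, and model names below are made up.

      models = [
          # (name, intelligence score, blended $ per 1M tokens)
          ("tiny-4b-oss",  28, 0.05),
          ("flash-2.5",    62, 0.60),
          ("grok-3-mini",  58, 0.45),
          ("big-frontier", 80, 8.00),
      ]

      def by_score_per_dollar(models):
          return sorted(models, key=lambda m: m[1] / m[2], reverse=True)

      def by_score_per_dollar_with_floor(models, floor):
          # Discard anything below the intelligence floor first; a model
          # that can't do the task at all is worth $0, however cheap it is.
          viable = [m for m in models if m[1] >= floor]
          return sorted(viable, key=lambda m: m[1] / m[2], reverse=True)

      print(by_score_per_dollar(models)[0][0])                 # tiny-4b-oss "wins"
      print(by_score_per_dollar_with_floor(models, 55)[0][0])  # grok-3-mini wins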

    (Nobody should be using Haiku in 2025. The OpenAI mini models are not as bad as Haiku on price/performance, and maybe there is a use case for preferring one over Flash, but if so I don't know what it is.)

  • by ramesh31 on 6/6/2025, 8:51:23 PM

    Flash is just so obscenely cheap at this point that it's hard to justify the headache of self-hosting, though. Really only applies to sensitive data IMO.

  • by delichon on 6/6/2025, 9:37:25 PM

    Pass the choices through, please. It's so context-dependent that I want a <dumber> and a <smarter> button, with units of $/M tokens. And another setting to send a particular prompt to "[x] batch" and email me the answer later. For most things I'll start dumb and fast, but switch to smart and slow when the going gets rough.
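
    Something like that dial is just a thin routing layer; a hypothetical sketch (tier names, prices, and models are all illustrative, not real pricing or a real API):

      # Hypothetical <dumber>/<smarter> dial over $/M-token tiers.
      # Tiers, prices, and model names are made up for illustration.

      TIERS = [
          # (blended $ per 1M tokens, model) -- cheapest first
          (0.10, "small-open-weights"),
          (0.60, "flash-class"),
          (8.00, "frontier-class"),
      ]

      def pick_model(level: int) -> tuple[float, str]:
          """level 0 = dumbest/cheapest; each <smarter> press bumps it up."""
          level = max(0, min(level, len(TIERS) - 1))
          return TIERS[level]

      level = 0                      # start dumb and fast
      price, model = pick_model(level)
      level += 1                     # the going gets rough: hit <smarter>
      price, model = pick_model(level)
      print(f"routing to {model} at ${price}/M tokens")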