I understand that LLaMA 70B isn't even at GPT-3's level yet.
If you have paying customers who expect GPT-4-level quality, is using LLaMA 70B to generate responses really better, reputationally, than just telling users that the OpenAI API is temporarily down?