In our ad hoc testing with `KurtOpenAI` and `KurtVertexAI`, we have seen problems like:
- Vertex AI sometimes failing to generate valid data, even when supposedly being forced to by the relevant API parameters (see the sketch below)
- Vertex AI sometimes returning 500 errors
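As a starting point, here's a rough sketch of what one such check could look like, assuming the `generateStructuredData` / `stream.result` usage shown in the Kurt README; the helper name, run count, and exact option shapes are only illustrative:

```ts
import { z } from "zod"
import type { Kurt } from "@formsort/kurt"

// Hypothetical capability check (names illustrative): ask for forced
// structured output repeatedly and count how often the result is not
// valid against the schema, or the provider errors out (e.g. a 500).
const schema = z.object({ say: z.string() })

export async function checkForcedStructuredData(kurt: Kurt, runs = 20) {
  let failures = 0
  for (let i = 0; i < runs; i++) {
    try {
      const { data } = await kurt.generateStructuredData({
        prompt: "Say hello!",
        schema,
      }).result
      schema.parse(data) // throws if the returned data doesn't match the schema
    } catch {
      failures++ // invalid data, or a provider error such as a 500
    }
  }
  return { runs, failures }
}
```

A check like this could run against a `Kurt` instance wrapping any adapter (`KurtOpenAI`, `KurtVertexAI`, etc.), so the failure counts are directly comparable across providers.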
We want to formalize this kind of testing for any LLM provider, so we can share empirically validated findings about the relative capabilities of different providers for the features that matter to Kurt users.
I envision:
- a script that can be used to generate an output report for a given LLM provider / model (a rough sketch follows this list)
- a way of storing those results in a repository (either this one, or maybe a separate one dedicated to this work)
- a nicely readable way (markdown? HTML?) of seeing the current summary of capabilities across LLM providers / models
- a blog post showing our findings across the "big three" LLMs (GPT, Gemini, Claude)
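For the report script, one possible shape (purely a sketch; the `CapabilityCheck` interface, function names, and report path are hypothetical) is a small runner that takes a list of named checks, runs them against a configured `Kurt` instance, and writes a markdown summary table:

```ts
import { writeFileSync } from "node:fs"
import type { Kurt } from "@formsort/kurt"

// Hypothetical shape for a single capability check; names are illustrative.
interface CapabilityCheck {
  name: string
  run: (kurt: Kurt) => Promise<{ runs: number; failures: number }>
}

export async function runEvalReport(
  providerName: string,
  model: string,
  kurt: Kurt,
  checks: CapabilityCheck[]
) {
  const rows: string[] = [
    `# Capability report: ${providerName} / ${model}`,
    "",
    "| Check | Runs | Failures |",
    "| --- | --- | --- |",
  ]
  for (const check of checks) {
    const { runs, failures } = await check.run(kurt)
    rows.push(`| ${check.name} | ${runs} | ${failures} |`)
  }
  // One markdown file per provider/model, committed to whichever repo holds results.
  writeFileSync(`reports/${providerName}-${model}.md`, rows.join("\n"))
}
```

Checks like the structured-data one above (and the apostrophe case below) would plug into the `checks` array, and the per-provider markdown files could live in whichever repository ends up holding this work.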
I've noticed some changes in VertexAI behavior today; I haven't tested extensively, but it seems more reliable than before.
This is the kind of situation where it would be helpful to have a capability eval suite I could run, to comprehensively re-test all the situations where we've found limitations before.
Another issue I found with VertexAI to add to the eval suite: it seems incapable of generating an apostrophe character inside a structured data string field, likely because they are using single-quoted strings under the hood and the model hasn't been trained to generate an escaped apostrophe character.
Currently, as soon as it encounters an apostrophe in the text it's trying to generate in such a field, Gemini will end the string instead of continuing to generate the rest of the text.
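This would make a nice self-contained eval case. A rough sketch, again assuming the `generateStructuredData` usage from the README (the check name, prompt, and field name are only illustrative):

```ts
import { z } from "zod"
import type { Kurt } from "@formsort/kurt"

// Hypothetical eval case: can the model emit an apostrophe inside a
// structured data string field? Per the issue above, Gemini currently
// ends the string at the apostrophe instead of escaping it.
const schema = z.object({ echoed: z.string() })

export async function checkApostropheInStringField(kurt: Kurt) {
  const input = "It's a beautiful day"
  const { data } = await kurt.generateStructuredData({
    prompt: `Repeat this sentence back verbatim in the "echoed" field: ${input}`,
    schema,
  }).result
  // With the failure described above, `data.echoed` comes back as just "It".
  return { passed: data.echoed === input }
}
```

Running this across providers would show whether the limitation is specific to Vertex AI's structured output handling or shows up elsewhere too.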