
ValueError: No valid completion model args passed in #42

Open
BeMain opened this issue Sep 21, 2023 · 16 comments

Comments

@BeMain

BeMain commented Sep 21, 2023

I'm trying to migrate a project from Fortran95 (yes, I know it's old, that's why it needs to be migrated... 😁) to Dart, but I get the following error:

Traceback (most recent call last):

  File "/var/home/agardh/Development/gpt-migrate/gpt_migrate/main.py", line 127, in <module>
    app()

  File "/var/home/agardh/Development/gpt-migrate/gpt_migrate/main.py", line 87, in main
    create_environment(globals)

  File "/var/home/agardh/Development/gpt-migrate/gpt_migrate/steps/setup.py", line 15, in create_environment
    llm_write_file(prompt,

  File "/var/home/agardh/Development/gpt-migrate/gpt_migrate/utils.py", line 52, in llm_write_file
    file_name,language,file_content = globals.ai.write_code(prompt)[0]
                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/var/home/agardh/Development/gpt-migrate/gpt_migrate/ai.py", line 23, in write_code
    response = completion(
               ^^^^^^^^^^^

  File "/var/home/agardh/.local/lib/python3.11/site-packages/litellm/utils.py", line 98, in wrapper
    raise e

  File "/var/home/agardh/.local/lib/python3.11/site-packages/litellm/utils.py", line 89, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/var/home/agardh/.local/lib/python3.11/site-packages/litellm/timeout.py", line 44, in wrapper
    result = future.result(timeout=local_timeout_duration)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/usr/lib64/python3.11/concurrent/futures/_base.py", line 456, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^

  File "/usr/lib64/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception

  File "/var/home/agardh/.local/lib/python3.11/site-packages/litellm/timeout.py", line 35, in async_func
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^

  File "/var/home/agardh/.local/lib/python3.11/site-packages/litellm/main.py", line 248, in completion
    raise exception_type(model=model, original_exception=e)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/var/home/agardh/.local/lib/python3.11/site-packages/litellm/utils.py", line 273, in exception_type
    raise original_exception # base case - return the original exception
    ^^^^^^^^^^^^^^^^^^^^^^^^

  File "/var/home/agardh/.local/lib/python3.11/site-packages/litellm/main.py", line 242, in completion
    raise ValueError(f"No valid completion model args passed in - {args}")

It also prints all the parameters passed to the model, if you want I can give you those too.

The command I run is:

python3 main.py --sourcedir "../my/source" --sourceentry md_3PG.f95 --sourcelang fortran95 --targetlang dart --targetdir "../my/target"

I couldn't find anyone else had encountered the same problem, so I thought I might open an Issue.

I have tried a few different models, but that doesn't seem to make a difference; all of them throw the same error. The exception is "gpt-3.5-turbo", which instead complains that the model's maximum context length has been exceeded.
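As an aside, the context-length complaint from gpt-3.5-turbo can be estimated before calling the API. A minimal sketch of such a pre-check, assuming the common rough heuristic of ~4 characters per token (the helper names and defaults are my own, not part of gpt-migrate; use a real tokenizer like tiktoken for exact counts):

```python
def rough_token_count(text):
    """Very rough token estimate: ~4 characters per token for English text.
    For exact counts, use a real tokenizer such as tiktoken."""
    return max(1, len(text) // 4)

def fits_context(prompt, context_window=8192, reserve_for_reply=1024):
    """Check whether a prompt plausibly fits the model's context window,
    leaving room for the completion."""
    return rough_token_count(prompt) + reserve_for_reply <= context_window
```

For example, a prompt of ~40,000 characters (~10,000 tokens) would fail this check against gpt-4's 8,192-token window, which matches the behavior described above.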

@mpmnath

mpmnath commented Sep 21, 2023

Same issue here. I think you need to upgrade your API plan to use the GPT-4 model. The README says gpt-4-32k is preferred.

@christian-ek

Getting the same problem. I only have access to gpt-4, not gpt-4-32k, and there's no way to upgrade from what I can see.

@mpmnath

mpmnath commented Sep 21, 2023

I was able to at least get this running now. But there is a rate limit error and the execution stops abruptly 😞

@BeMain
Author

BeMain commented Sep 21, 2023

Oh, so that's what the error message means: that the model isn't available? That might be it, since using gpt-3.5-turbo or gpt-4 doesn't cause this error (they complain about token length instead). Strange, though; I tried this a few weeks back and got an error when trying to use gpt-4-32k then too, only I remember the message being much clearer about the model not being available. Maybe things have changed recently.

I too only have access to gpt-4, so perhaps the only option is waiting for #2 then?
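Whether a given key can use a model can be checked up front against the list of models the account exposes. A hypothetical helper along these lines (`first_available` is my own name; the `available` list would come from the OpenAI models endpoint, e.g. `openai.Model.list()` on the old SDK or `client.models.list()` on the current one):

```python
def first_available(preferred, available):
    """Return the first model in `preferred` that the account can use.

    `preferred` is an ordered list of model names to try;
    `available` is the list of model ids the API key exposes.
    """
    available = set(available)
    for model in preferred:
        if model in available:
            return model
    raise ValueError(f"None of {list(preferred)} are available to this API key")
```

With this, gpt-migrate-style code could fall back from gpt-4-32k to gpt-4 instead of raising an opaque error.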

@BeMain
Author

BeMain commented Sep 21, 2023

> I was able to at least get this running now. But there is a rate limit error and the execution stops abruptly 😞

What did you do differently to get it to work?

@mpmnath

mpmnath commented Sep 22, 2023

> I was able to at least get this running now. But there is a rate limit error and the execution stops abruptly 😞

> What did you do differently to get it to work?

I used the gpt-4 model. Since gpt-4 has a rate limit, unlike gpt-4-32k, I wrapped the API calls in utils.py in try/except blocks to retry the request:

import time

import openai
import typer
from yaspin import yaspin

def llm_run(prompt, waiting_message, success_message, globals):
    output = ""
    with yaspin(text=waiting_message, spinner="dots") as spinner:
        try:
            output = globals.ai.run(prompt)
        except openai.error.RateLimitError as e:
            # Wait for the interval suggested by the API, or 30s as a fallback
            retry_time = e.retry_after if hasattr(e, 'retry_after') else 30
            time.sleep(retry_time)
            output = globals.ai.run(prompt)  # Retry the API request once
        spinner.ok("✅ ")

    if success_message:
        success_text = typer.style(success_message, fg=typer.colors.GREEN)
        typer.echo(success_text)

    return output

@krrishdholakia

Hey @BeMain @mpmnath @christian-ek

I'm the co-maintainer of litellm. We should have returned a better error message here. That error means that no valid model was found.

Can you show me the code to repro this problem? What was the model name that got passed in?

@BeMain
Author

BeMain commented Sep 27, 2023

The default model for gpt-migrate is gpt-4-32k. That was the model throwing the error, because I don't have access to it. If that's what you mean?

I'm not sure about the underlying code; I haven't checked the source myself. But this repo, in combination with the stack trace I provided, should be enough to figure out what exactly was throwing the error... I think? Let me know if there's anything more you need.

@krrishdholakia

Sounds good @BeMain - will follow up

@stridera

stridera commented Jan 29, 2024

Was there any update to this? It doesn't accept gpt-4 at all; gpt-4-turbo-preview seems to work, but the token limits are incorrect.

gpt-4
openai.error.InvalidRequestError: The model `gpt-4` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.

gpt-4-turbo-preview
ValueError: No valid completion model args passed in - {'model': 'gpt-4-turbo-preview', 'messages': [{'role': 'user', 'content': '...'}], 'functions': [], 'function_call': '', 'temperature': 0.0, 'top_p': 1, 'n': 1, 'stream': False, 'stop': None, 'max_tokens': 10000, 'presence_penalty': 0, 'frequency_penalty': 0, 'logit_bias': {}, 'user': '', 'force_timeout': 60, 'azure': False, 'logger_fn': None, 'verbose': False, 'optional_params': {'temperature': 0.0, 'max_tokens': 10000}}

I'll start updating args based on the openai API.
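The args dump above shows litellm-specific keys (`force_timeout`, `azure`, `logger_fn`, `verbose`, `optional_params`) and empty placeholders (`functions: []`, `function_call: ''`) mixed in with the OpenAI parameters. One way to sanitize such a kwargs dict before forwarding it is to whitelist known parameters and drop empty values. A sketch (the helper and the exact parameter set are my own; verify the set against the current OpenAI chat completions reference):

```python
# Parameters accepted by the OpenAI chat completions endpoint
# (subset, as of the gpt-4-turbo era; check the current API reference).
OPENAI_CHAT_PARAMS = {
    "model", "messages", "temperature", "top_p", "n", "stream", "stop",
    "max_tokens", "presence_penalty", "frequency_penalty", "logit_bias",
    "user", "functions", "function_call",
}

def filter_openai_args(kwargs):
    """Drop client-specific keys (force_timeout, azure, logger_fn, ...)
    and empty placeholder values before forwarding to the OpenAI API."""
    return {
        k: v for k, v in kwargs.items()
        if k in OPENAI_CHAT_PARAMS and v not in (None, "", [], {})
    }
```

Applied to the dump above, this would keep `model`, `messages`, `temperature`, `max_tokens`, etc., while stripping the keys that have no meaning to the OpenAI endpoint.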

@st9824-att

I just downloaded this last week and started setting it up. I'm using gpt-4-32k and am currently stuck on this error. How do I fix it?

@krrishdholakia

Hey - we should have this fixed on our end (i.e. raising the right error here). I also don't see this line ValueError: No valid completion model args passed in in litellm's codebase anymore - https://github.com/search?q=repo%3ABerriAI%2Flitellm+%22ValueError%3A+No+valid+completion+model+args+passed+in%22&type=code

@st9824-att

Interesting. I wonder if this is coming from one of the Python libraries. Since I just downloaded gpt-migrate last week, I should have the latest code.

  270             raise InvalidRequestError(f"CohereException - {error_str}", f"{model}")
  271         elif "CohereConnectionError" in exception_type:  # cohere seems to fire these err
  272             raise RateLimitError(f"CohereException - {original_exception.message}")
> 273     raise original_exception  # base case - return the original exception
  274     else:
  275         raise original_exception
  276

locals:
    error_str          = "No valid completion model args passed in - {'model': 'openrouter/openai/gpt-4-32"+3349
    exception_type     = 'ValueError'
    model              = 'openrouter/openai/gpt-4-32k'
    original_exception = ValueError('No valid completion model args passed in - {'model':

@krrishdholakia

krrishdholakia commented Apr 8, 2024

That looks like litellm, but I don't think it's our latest version.

Based on the requirements.txt I can see it's pinned to a pretty old version.

Can you try this snippet on our latest version (v1.34.35) and let me know if you still see it:

# pip install litellm==1.34.35

from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-openai-key"

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="gpt-4-32k", messages=messages)

print(response)

@hannesaasamets

Thank you @krrishdholakia! Upgrading litellm did the trick and it got past the "ValueError: No valid completion model args passed in" error.

I used your snippet for gpt-4o now and it worked (didn't work for gpt-4-32k).

@krrishdholakia

Hey @hannesaasamets, what was the issue you saw for gpt-4-32k? They follow the same codepath, so I'm curious what happened.
