
[Bug]: GraphRAG doesn't get generated - 400 Please use a valid role: user, model. #2300

Closed
marcfon opened this issue Sep 7, 2024 · 6 comments
Labels: bug (Something isn't working)

marcfon commented Sep 7, 2024

Is there an existing issue for the same bug?

  • I have checked the existing issues.

Branch name

main

Commit ID

--

Other environment information

* dev version

KB settings
* chunk method: knowledge graph
* embedding model: Gemini (tried others as well)

Actual behavior

```
Traceback (most recent call last):
  File "/ragflow/graphrag/graph_extractor.py", line 128, in __call__
    result, token_count = self._process_document(text, prompt_variables)
  File "/ragflow/graphrag/graph_extractor.py", line 177, in _process_document
    if response.find("**ERROR**") >=0: raise Exception(response)
Exception: **ERROR**: 400 Please use a valid role: user, model.
**ERROR**: contents must not be empty
```

Expected behavior

No response

Steps to reproduce

Add a plain text document to the KB and parse.

Additional information

No response

marcfon added the bug label on Sep 7, 2024
KevinHuSh (Collaborator) commented

Pull the dev version of docker. It's fixed.

marcfon commented Sep 9, 2024

> Pull the dev version of docker. It's fixed.

Thanks for the response, Kevin. The error still exists for me in the latest dev version.

I followed the upgrade instructions from here, with the difference that `docker compose up ragflow -d` doesn't work for me (`no configuration file provided: not found`); I need to `cd ~/ragflow/docker` and run `docker compose up -d`.

marcfon commented Sep 11, 2024

@KevinHuSh just pulled the latest dev version today and checked again. Still getting the same error when processing a file.

marcfon commented Oct 2, 2024

@KevinHuSh I think I have found the root cause, which still exists in the latest dev version.

Inside Model Providers > System Model Settings > Chat model, when I select gpt-4o-mini as the chat model everything works fine; selecting something else, for instance gemini-1.5-flash-latest, throws the `400 Please use a valid role: user, model.` error.
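
For context, Gemini's generateContent API only accepts the roles user and model (there is no system or assistant role), which is what the 400 error complains about. Below is a minimal sketch of the remapping a provider wrapper would need to do before calling Gemini; the function name and the handling of system messages are illustrative assumptions, not RAGFlow's actual code:

```python
# Hypothetical sketch, not RAGFlow's actual code: Gemini only accepts the
# roles "user" and "model", so OpenAI-style messages ("system"/"assistant")
# must be remapped before the request. Sending "system" or "assistant"
# produces "400 Please use a valid role: user, model."

def to_gemini_history(messages):
    """Convert OpenAI-style chat messages to Gemini-style contents."""
    history = []
    for msg in messages:
        text = msg.get("content") or ""
        if not text:
            # Skipping empty messages avoids the follow-up
            # "contents must not be empty" error seen in the traceback.
            continue
        if msg["role"] == "assistant":
            history.append({"role": "model", "parts": [text]})
        else:
            # "system" and "user" both map to "user"; Gemini has no
            # "system" role (newer SDKs take a separate system instruction).
            history.append({"role": "user", "parts": [text]})
    return history

print(to_gemini_history([
    {"role": "system", "content": "Extract entities and relations as a graph."},
    {"role": "user", "content": "<document chunk>"},
]))
# [{'role': 'user', 'parts': ['Extract entities and relations as a graph.']},
#  {'role': 'user', 'parts': ['<document chunk>']}]
```

This would explain why gpt-4o-mini works (OpenAI accepts system/assistant roles natively) while Gemini models fail on the same request.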

KevinHuSh (Collaborator) commented

Could you submit an issue for gemini-1.5-flash-latest?

marcfon commented Oct 5, 2024

@KevinHuSh #2720
