Despite the significant advances LLMs have brought to Natural Language Processing (NLP), the "garbage in, garbage out" status quo remains unchanged. In response, RAGFlow introduces two features that set it apart from other Retrieval-Augmented Generation (RAG) products.
- Fine-grained document parsing: Document parsing involves images and tables, with the flexibility for you to intervene as needed.
- Traceable answers with reduced hallucinations: You can trust RAGFlow's responses as you can view the citations and references supporting them.
English, Simplified Chinese, and Traditional Chinese for now.
We put painstaking effort into document pre-processing tasks like layout analysis, table structure recognition, and OCR (Optical Character Recognition) using our vision model. This contributes to the additional time required.
ARM64 and Ascend GPU are not supported.
These APIs are still in development. Contributions are welcome.
No, this feature is still in development. Contributions are welcome.
This feature and the related APIs are still in development. Contributions are welcome.
Do you support multiple rounds of dialogues, i.e., referencing previous dialogues as context for the current dialogue?
This feature and the related APIs are still in development. Contributions are welcome.
- Right-click the desired dialog to display the Chat Configuration window.
- Switch to the Model Setting tab and adjust the Max Tokens slider to get the desired length.
- Click OK to confirm your change.
If nothing is retrieved from your knowledge base, the system responds with what you specify in Empty response. If you leave Empty response blank, you let your LLM improvise, giving it a chance to hallucinate.
You can use Ollama to deploy a local LLM. See here for more information.
- If RAGFlow is deployed locally, ensure that RAGFlow and Ollama are on the same LAN.
- If you are using our online demo, ensure that the IP address of your Ollama server is public and accessible.
- Click the Knowledge Base tab in the middle top of the page.
- Right-click the desired knowledge base to display the Configuration dialogue.
- Choose Q&A as the chunk method and click Save to confirm your change.
You can safely ignore this warning and continue.
dependency failed to start: container ragflow-mysql is unhealthy
means that your MySQL container failed to start. If you are using a Mac with an M1/M2 chip, replace mysql:5.7.18 with mariadb:10.5.8 in docker-compose-base.yml.
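If you prefer to script the swap rather than edit the file by hand, a one-line sed substitution works. The sketch below runs on a throwaway sample file so you can see the effect; run the same sed against your real docker-compose-base.yml:

```shell
# Demonstrate the image swap on a scratch copy; the path and service
# layout here are illustrative, not the real compose file.
printf 'services:\n  mysql:\n    image: mysql:5.7.18\n' > /tmp/docker-compose-base.yml

# -i.bak edits the file in place and keeps a .bak backup of the original.
sed -i.bak 's|mysql:5.7.18|mariadb:10.5.8|' /tmp/docker-compose-base.yml

grep 'image:' /tmp/docker-compose-base.yml   # now shows mariadb:10.5.8
```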
You can safely ignore this warning and continue.
Parsing requests have to wait in queue due to limited server resources. We are currently enhancing our algorithms and increasing computing power.
If your RAGFlow is deployed locally, try the following:
- Check the log of your RAGFlow server to see if it is running properly:
docker logs -f ragflow-server
- Check if the task_executor.py process exists.
- Check if your RAGFlow server can access hf-mirror.com or huggingface.co.
An index failure usually indicates an unavailable Elasticsearch service.
tail -f path_to_ragflow/docker/ragflow-logs/rag/*.log
$ docker ps
The system displays the following if all your RAGFlow components are running properly:
5bc45806b680 infiniflow/ragflow:v0.2.0 "./entrypoint.sh" 11 hours ago Up 11 hours 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp, 0.0.0.0:9380->9380/tcp, :::9380->9380/tcp ragflow-server
91220e3285dd docker.elastic.co/elasticsearch/elasticsearch:8.11.3 "/bin/tini -- /usr/l…" 11 hours ago Up 11 hours (healthy) 9300/tcp, 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp ragflow-es-01
d8c86f06c56b mysql:5.7.18 "docker-entrypoint.s…" 7 days ago Up 16 seconds (healthy) 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp ragflow-mysql
cd29bcb254bc quay.io/minio/minio:RELEASE.2023-12-20T01-00-02Z "/usr/bin/docker-ent…" 2 weeks ago Up 11 hours 0.0.0.0:9001->9001/tcp, :::9001->9001/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp ragflow-minio
- Check the status of your Elasticsearch component:
$ docker ps
The status of a 'healthy' Elasticsearch component in your RAGFlow should look as follows:
91220e3285dd docker.elastic.co/elasticsearch/elasticsearch:8.11.3 "/bin/tini -- /usr/l…" 11 hours ago Up 11 hours (healthy) 9300/tcp, 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp ragflow-es-01
- If your container keeps restarting, ensure vm.max_map_count >= 262144 as per this README.
- If your issue persists, ensure that the ES host setting is correct:
- If you are running RAGFlow with Docker, it is in docker/service_conf.yml. Set it as follows:
es:
  hosts: 'http://es01:9200'
- If you run RAGFlow outside of Docker, verify the ES host setting in conf/service_conf.yml using:
curl http://<IP_OF_ES>:<PORT_OF_ES>
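The vm.max_map_count requirement mentioned above can be verified from a shell on the Docker host. A minimal sketch for Linux (on macOS or Windows, the setting lives inside the Docker Desktop VM, so run it there):

```shell
# Elasticsearch requires vm.max_map_count >= 262144.
current=$(cat /proc/sys/vm/max_map_count)
echo "vm.max_map_count = $current"

if [ "$current" -lt 262144 ]; then
  # sysctl -w takes effect immediately but not across reboots; to
  # persist, add "vm.max_map_count=262144" to /etc/sysctl.conf.
  echo "Too low. Fix with: sudo sysctl -w vm.max_map_count=262144"
fi
```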
Your IP address or port number may be incorrect. If you are using the default configurations, enter http://<IP_OF_YOUR_MACHINE> in your browser (NOT localhost, NOT 9380, and no port number required). This should work.
A correct Ollama IP address and port number are crucial when adding Ollama models:
- If you are on demo.ragflow.io, ensure that the server hosting Ollama has a publicly accessible IP address. 127.0.0.1 is not an accessible IP address.
- If you deploy RAGFlow locally, ensure that Ollama and RAGFlow are on the same LAN and can communicate with each other.
Yes, we do. See the Python files under the rag/app folder.
You probably forgot to update the MAX_CONTENT_LENGTH environment variable:
- Add environment variable MAX_CONTENT_LENGTH to ragflow/docker/.env:
MAX_CONTENT_LENGTH=100000000
- Update docker-compose.yml:
environment:
- MAX_CONTENT_LENGTH=${MAX_CONTENT_LENGTH}
- Restart the RAGFlow server:
docker compose up -d ragflow
You should now be able to upload files smaller than 100 MB.
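To avoid tripping the limit again, you can sanity-check a file's size before uploading. A minimal sketch using a throwaway sample file (substitute your own file path, and keep LIMIT in sync with the MAX_CONTENT_LENGTH value you configured):

```shell
LIMIT=100000000                                       # bytes; mirrors MAX_CONTENT_LENGTH
printf 'sample content' > /tmp/upload_candidate.bin   # stand-in for your file
size=$(stat -c%s /tmp/upload_candidate.bin)           # on macOS use: stat -f%z

if [ "$size" -le "$LIMIT" ]; then
  echo "OK to upload ($size bytes)"
else
  echo "Too large: $size bytes exceeds $LIMIT"
fi
```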