Adjust the structure of FAQ #479
Merged 5 commits on Apr 22, 2024
docs/faq.md: 202 changes (144 additions, 58 deletions)

## General

### 1. What sets RAGFlow apart from other RAG products?

The "garbage in garbage out" status quo remains unchanged despite the fact that LLMs have advanced Natural Language Processing (NLP) significantly. In response, RAGFlow introduces two unique features compared to other Retrieval-Augmented Generation (RAG) products.

- Fine-grained document parsing: Document parsing involves images and tables, with the flexibility for you to intervene as needed.
- Traceable answers with reduced hallucinations: You can trust RAGFlow's responses as you can view the citations and references supporting them.

### 2. Which languages does RAGFlow support?

English, simplified Chinese, traditional Chinese for now.

## Performance

### 1. Why does it take longer for RAGFlow to parse a document than LangChain?

We put painstaking effort into document pre-processing tasks like layout analysis, table structure recognition, and OCR (Optical Character Recognition) using our vision model. This contributes to the additional time required.

### 2. Why does RAGFlow require more resources than other projects?

RAGFlow has a number of built-in models for document structure parsing, which account for the additional computational resources.

## Feature

### 1. Which architectures or devices does RAGFlow support?

Currently, we only support x86 CPU and Nvidia GPU.

### 2. Do you offer an API for integration with third-party applications?

The corresponding APIs are now available. See the [Conversation API](./conversation_api.md) for more information.
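
A rough sketch of what an integration might look like (the endpoint paths, query parameters, and payload fields below are illustrative only; treat the [Conversation API](./conversation_api.md) document as the authoritative reference):

```bash
# Illustrative only: consult conversation_api.md for the real routes and fields.
# 1. Open a conversation, authenticating with an API key created in RAGFlow.
curl -H "Authorization: Bearer $RAGFLOW_API_KEY" \
     "http://<RAGFLOW_HOST>/v1/api/new_conversation?user_id=demo"

# 2. Ask a question within that conversation.
curl -X POST \
     -H "Authorization: Bearer $RAGFLOW_API_KEY" \
     -H "Content-Type: application/json" \
     -d '{"conversation_id": "<CONVERSATION_ID>", "messages": [{"role": "user", "content": "What is RAGFlow?"}]}' \
     "http://<RAGFLOW_HOST>/v1/api/completion"
```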

### 3. Do you support stream output?

No, this feature is still in development. Contributions are welcome.

### 4. Is it possible to share dialogue through URL?

Yes, this feature is now available.

### 5. Do you support multiple rounds of dialogues, i.e., referencing previous dialogues as context for the current dialogue?

This feature and the related APIs are still in development. Contributions are welcome.

## Troubleshooting

### 1. Issues with docker images

#### 1.1 Due to the fast iteration of RAGFlow updates, it is recommended to build the image from scratch.

```
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow
$ docker build -t infiniflow/ragflow:v0.3.0 .
$ cd docker
$ chmod +x ./entrypoint.sh
$ docker compose up -d
```

#### 1.2 `process "/bin/sh -c cd ./web && npm i && npm run build"` failed

1. Check your network from within Docker, for example:
```bash
curl https://hf-mirror.com
```

2. If your network works fine, the issue lies with the Docker network configuration. In that case, adjust your Docker build command accordingly:
```
# Original:
docker build -t infiniflow/ragflow:v0.3.0 .
# Modified:
docker build -t infiniflow/ragflow:v0.3.0 . --network host
```

### 2. Issues with Hugging Face models

#### 2.1 `MaxRetryError: HTTPSConnectionPool(host='hf-mirror.com', port=443)`

This error suggests that you do not have Internet access or are unable to connect to hf-mirror.com. Try the following:

1. Manually download the resource files from [huggingface.co/InfiniFlow/deepdoc](https://huggingface.co/InfiniFlow/deepdoc) to your local folder **~/deepdoc**.
2. Add a volume mapping to **docker-compose.yml**, for example (a sketch of where it goes follows below):
```
- ~/deepdoc:/ragflow/rag/res/deepdoc
```
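
For reference, a minimal sketch of where that mapping sits in **docker-compose.yml** (the service name `ragflow` is an assumption here; keep whatever volumes your file already declares):

```yaml
services:
  ragflow:
    volumes:
      # Mount the manually downloaded deepdoc resources into the container.
      - ~/deepdoc:/ragflow/rag/res/deepdoc
```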

#### 2.2 `FileNotFoundError: [Errno 2] No such file or directory: '/root/.cache/huggingface/hub/models--InfiniFlow--deepdoc/snapshots/be0c1e50eef6047b412d1800aa89aba4d275f997/ocr.res'`

1. Check your network from within Docker, for example:
```bash
curl https://hf-mirror.com
```
2. Run `ifconfig` to check the `mtu` value. If the server's `mtu` is `1450` while the NIC's `mtu` in the container is `1500`, this mismatch may cause network instability. Adjust the `mtu` policy as follows (a verification check follows the snippet):

```
vim docker-compose-base.yml
# Original configuration:
networks:
  ragflow:
    driver: bridge
# Modified configuration:
networks:
  ragflow:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1450
```
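
To verify the change, you can compare the MTU on the host with the MTU inside the container (the container name `ragflow-server` and interface name `eth0` are assumptions; adjust to your setup):

```bash
# MTU of the host's interface.
cat /sys/class/net/eth0/mtu
# MTU seen from inside the RAGFlow container.
docker exec ragflow-server cat /sys/class/net/eth0/mtu
```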

### 3. Issues with RAGFlow servers

#### 3.1 `WARNING: can't find /raglof/rag/res/borker.tm`

Ignore this warning and continue. All system warnings can be ignored.

#### 3.2 `network anomaly There is an abnormality in your network and you cannot connect to the server.`

![anomaly](https://github.com/infiniflow/ragflow/assets/93570324/beb7ad10-92e4-4a58-8886-bfb7cbd09e5d)

You cannot log in to RAGFlow until the server is fully initialized. Run `docker logs -f ragflow-server` to check its progress.

*The server has been successfully initialized if your system displays the following:*

```
____ ______ __
/ __ \ ____ _ ____ _ / ____// /____ _ __
/ /_/ // __ `// __ `// /_ / // __ \| | /| / /
/ _, _// /_/ // /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_| \__,_/ \__, //_/ /_/ \____/ |__/|__/
/____/

* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:9380
* Running on http://x.x.x.x:9380
INFO:werkzeug:Press CTRL+C to quit
```


### 4. Issues with RAGFlow backend services

#### 4.1 `dependency failed to start: container ragflow-mysql is unhealthy`

`dependency failed to start: container ragflow-mysql is unhealthy` means that your MySQL container failed to start. If it keeps failing, try replacing `mysql:5.7.18` with `mariadb:10.5.8` in **docker-compose-base.yml**.
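
A minimal sketch of that edit (assuming the MySQL service in **docker-compose-base.yml** is named `mysql`):

```yaml
services:
  mysql:
    # image: mysql:5.7.18   # original image
    image: mariadb:10.5.8    # drop-in replacement if MySQL keeps failing its health check
```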

#### 4.2 `Realtime synonym is disabled, since no redis connection`

Ignore this warning and continue. All system warnings can be ignored.

![](https://github.com/infiniflow/ragflow/assets/93570324/ef5a6194-084a-4fe3-bdd5-1c025b40865c)

#### 4.3 Why does it take so long to parse a 2MB document?

Parsing requests have to wait in queue due to limited server resources. We are currently enhancing our algorithms and increasing computing power.

#### 4.4 Why does my document parsing stall at under one percent?

![stall](https://github.com/infiniflow/ragflow/assets/93570324/3589cc25-c733-47d5-bbfc-fedb74a3da50)

1. Check the log of your RAGFlow server with `docker logs -f ragflow-server` to see whether it is running properly.
2. Check whether the **task_executor.py** process exists (a quick check is shown after this list).
3. Check if your RAGFlow server can access hf-mirror.com or huggingface.com.
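
A quick way to run the check in step 2 (assuming the default container name `ragflow-server`):

```bash
# Lists the processes running inside the container and filters for the task executor.
docker top ragflow-server | grep task_executor
```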

#### 4.5 `Index failure`

An index failure usually indicates an unavailable Elasticsearch service.
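
One quick probe, assuming the default port mapping and the `ELASTIC_PASSWORD` value from **docker/.env** (drop the `-u` option if security is disabled in your deployment):

```bash
# Reports the cluster status (green/yellow/red) if Elasticsearch is reachable.
curl -u elastic:$ELASTIC_PASSWORD "http://localhost:9200/_cluster/health?pretty"
```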

#### 4.6 How to check the log of RAGFlow?

```bash
tail -f path_to_ragflow/docker/ragflow-logs/rag/*.log
```

#### 4.7 How to check the status of each component in RAGFlow?

```bash
$ docker ps
d8c86f06c56b mysql:5.7.18 "docker-entrypoint.s…" 7 days ago Up
cd29bcb254bc quay.io/minio/minio:RELEASE.2023-12-20T01-00-02Z "/usr/bin/docker-ent…" 2 weeks ago Up 11 hours 0.0.0.0:9001->9001/tcp, :::9001->9001/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp ragflow-minio
```

#### 4.8 `Exception: Can't connect to ES cluster`

1. Check the status of your Elasticsearch component:

```bash
$ docker ps
91220e3285dd docker.elastic.co/elasticsearch/elasticsearch:8.11.3 "/bin/tini -- /usr/l…" 11 hours ago Up 11 hours (healthy) 9300/tcp, 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp ragflow-es-01
```

2. If your container keeps restarting, ensure `vm.max_map_count` >= 262144 as per [this README](https://github.com/infiniflow/ragflow?tab=readme-ov-file#-start-up-the-server). To make the change permanent, also update the `vm.max_map_count` value in **/etc/sysctl.conf**. Note that this configuration works only on Linux.
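
For example, on the Linux host:

```bash
# Check the current value.
sysctl vm.max_map_count
# Raise it for the running system (lost after a reboot).
sudo sysctl -w vm.max_map_count=262144
# Persist the setting across reboots.
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
```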


3. If your issue persists, ensure that the ES host setting is correct.


### `{"data":null,"retcode":100,"retmsg":"<NotFound '404: Not Found'>"}`
#### 4.9 `{"data":null,"retcode":100,"retmsg":"<NotFound '404: Not Found'>"}`

Your IP address or port number may be incorrect. If you are using the default configurations, enter `http://<IP_OF_YOUR_MACHINE>` (**NOT `localhost`, NOT 9380, AND NO PORT NUMBER REQUIRED!**) in your browser. This should work.
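
To sanity-check the address (this assumes the default setup, where the web UI is published on port 80):

```bash
# Should return the web UI; note that there is no :9380 and no other port here.
curl -I http://<IP_OF_YOUR_MACHINE>/
```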

#### 4.10 `Ollama - Mistral instance running at 127.0.0.1:11434 but cannot add Ollama as model in RagFlow`

A correct Ollama IP address and port is crucial to adding models to Ollama:

- If you are on demo.ragflow.io, ensure that the server hosting Ollama has a publicly accessible IP address. Note that 127.0.0.1 is not a publicly accessible IP address.
- If you deploy RAGFlow locally, ensure that Ollama and RAGFlow are in the same LAN and can communicate with each other.
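
As a quick reachability test from the machine (or container) running RAGFlow, you can query Ollama's model list (`/api/tags` is Ollama's endpoint for listing locally available models):

```bash
# Replace the placeholder with your Ollama server's LAN or public IP address;
# 127.0.0.1 only resolves to the machine issuing the request.
curl http://<OLLAMA_HOST_IP>:11434/api/tags
```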

#### 4.11 Do you offer examples of using deepdoc to parse PDF or other files?

Yes, we do. See the Python files under the **rag/app** folder.

#### 4.12 Why did I fail to upload a 10MB+ file to my locally deployed RAGFlow?

You probably forgot to update the **MAX_CONTENT_LENGTH** environment variable:

1. Add an environment variable `MAX_CONTENT_LENGTH` to **docker/.env**:
```
MAX_CONTENT_LENGTH=100000000
```
2. Update **docker-compose.yml**:
```
environment:
  - MAX_CONTENT_LENGTH=${MAX_CONTENT_LENGTH}
```
3. Restart the RAGFlow server:
```
docker compose up ragflow -d
```
*Now you should be able to upload files of sizes less than 100MB.*

#### 4.13 `Table 'rag_flow.document' doesn't exist`

This exception occurs when starting up the RAGFlow server. Try the following:

Restart the RAGFlow server:
```
docker compose up
```

#### 4.14 `hint : 102 Fail to access model Connection error`

![hint102](https://github.com/infiniflow/ragflow/assets/93570324/6633d892-b4f8-49b5-9a0a-37a0a8fba3d2)

1. Ensure that the RAGFlow server can access the base URL.
2. Do not forget to append **/v1/** to **http://IP:port**:
**http://IP:port/v1/**
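
If the model provider exposes an OpenAI-compatible interface, a quick check of the base URL might look like this (illustrative; adjust to your provider):

```bash
# An OpenAI-compatible endpoint typically answers /v1/models when the base URL is correct.
curl http://IP:port/v1/models
```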


## Usage

### 1. How to increase the length of RAGFlow responses?

1. Right click the desired dialog to display the **Chat Configuration** window.
2. Switch to the **Model Setting** tab and adjust the **Max Tokens** slider to get the desired length.
3. Click **OK** to confirm your change.


### 2. What does Empty response mean? How to set it?

If nothing is retrieved from your knowledge base, the system responds with exactly what you specify in **Empty response**. If you leave **Empty response** blank, you let your LLM improvise, giving it a chance to hallucinate.

### 3. Can I set the base URL for OpenAI somewhere?

![](https://github.com/infiniflow/ragflow/assets/93570324/8cfb6fa4-8a97-415d-b9fa-b6f405a055f3)


### 4. How to run RAGFlow with a locally deployed LLM?

You can use Ollama to deploy a local LLM. See [here](https://github.com/infiniflow/ragflow/blob/main/docs/ollama.md) for more information.
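
For example, a minimal local setup might look like this (the model name is only an example):

```bash
# Pull a model, then confirm the Ollama server is answering on its default port.
ollama pull mistral
curl http://localhost:11434/api/tags
```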

### 5. How to link up RAGFlow and Ollama servers?

- If RAGFlow is locally deployed, ensure that your RAGFlow and Ollama are in the same LAN.
- If you are using our online demo, ensure that the IP address of your Ollama server is public and accessible.

### 6. How to configure RAGFlow to respond with 100% matched results, rather than utilizing LLM?

1. Click the **Knowledge Base** tab in the middle top of the page.
2. Right click the desired knowledge base to display the **Configuration** dialogue.
3. Choose **Q&A** as the chunk method and click **Save** to confirm your change.