
Commit

[pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
pre-commit-ci[bot] authored and lianhao committed Aug 23, 2024
1 parent c291ffd commit 3f97188
Showing 5 changed files with 9 additions and 10 deletions.
12 changes: 6 additions & 6 deletions helm-charts/chatqna/README.md
@@ -112,9 +112,9 @@ Access `http://localhost:5174` to play with the ChatQnA workload through UI.

## Values

| Key | Type | Default | Description |
| -------------------------------------- | ------ | ----------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| image.repository | string | `"opea/chatqna"` | |
| service.port | string | `"8888"` | |
| tgi.LLM_MODEL_ID | string | `"Intel/neural-chat-7b-v3-3"` | Model id from https://huggingface.co/, or a pre-downloaded model directory |
| global.horizontalPodAutoscaler.enabled | bool | false | HPA autoscaling for the TGI and TEI service deployments based on metrics they provide. See #pre-conditions and #gotchas before enabling! (If one doesn't want one of them to be scaled, given service `maxReplicas` can be set to `1`) |
| Key | Type | Default | Description |
| -------------------------------------- | ------ | ----------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| image.repository | string | `"opea/chatqna"` | |
| service.port | string | `"8888"` | |
| tgi.LLM_MODEL_ID | string | `"Intel/neural-chat-7b-v3-3"` | Model id from https://huggingface.co/, or a pre-downloaded model directory |
| global.horizontalPodAutoscaler.enabled | bool | false | HPA autoscaling for the TGI and TEI service deployments based on metrics they provide. See #pre-conditions and #gotchas before enabling! (If one doesn't want one of them to be scaled, given service `maxReplicas` can be set to `1`) |
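
For context, enabling the `global.horizontalPodAutoscaler.enabled` toggle from this table at install time might look like the sketch below; the release name, namespace, and local chart path are assumptions for illustration, not part of this change.

```bash
# Sketch only: install the ChatQnA chart with HPA enabled for the TGI/TEI
# deployments. Release name, namespace and chart path are illustrative.
helm install chatqna ./helm-charts/chatqna \
  --namespace chatqna --create-namespace \
  --set tgi.LLM_MODEL_ID="Intel/neural-chat-7b-v3-3" \
  --set global.horizontalPodAutoscaler.enabled=true
```

Per the table note, a service that should not be scaled can have its `maxReplicas` set to `1`.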
2 changes: 1 addition & 1 deletion helm-charts/common/tei/README.md
@@ -34,7 +34,7 @@ If cluster does not run [Prometheus operator](https://github.com/prometheus-oper
yet, it SHOULD be installed before enabling HPA, e.g. by using:
https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack

`horizontalPodAutoscaler` should be enabled in the top level Helm chart depending on this component (e.g. `chatqna`),
`horizontalPodAutoscaler` should be enabled in the top level Helm chart depending on this component (e.g. `chatqna`),
so that relevant custom metric queries are configured for PrometheusAdapter.
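
For reference, the kube-prometheus-stack chart linked above is usually installed along these lines (a sketch; the release name and the `monitoring` namespace are assumptions, though `monitoring` matches the namespace used by the custom metrics ConfigMap later in this commit):

```bash
# Sketch only: install kube-prometheus-stack (bundles the Prometheus operator)
# before enabling HPA. Release name and namespace are illustrative.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace
```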

### Gotchas
2 changes: 1 addition & 1 deletion helm-charts/common/teirerank/README.md
@@ -34,7 +34,7 @@ If cluster does not run [Prometheus operator](https://github.com/prometheus-oper
yet, it SHOULD be installed before enabling HPA, e.g. by using:
https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack

`horizontalPodAutoscaler` should be enabled in the top level Helm chart depending on this component (e.g. `chatqna`),
`horizontalPodAutoscaler` should be enabled in the top level Helm chart depending on this component (e.g. `chatqna`),
so that relevant custom metric queries are configured for PrometheusAdapter.

### Gotchas
2 changes: 1 addition & 1 deletion helm-charts/common/tgi/README.md
@@ -37,7 +37,7 @@ If cluster does not run [Prometheus operator](https://github.com/prometheus-oper
yet, it SHOULD be installed before enabling HPA, e.g. by using:
https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack

`horizontalPodAutoscaler` should be enabled in the top level Helm chart depending on this component (e.g. `chatqna`),
`horizontalPodAutoscaler` should be enabled in the top level Helm chart depending on this component (e.g. `chatqna`),
so that relevant custom metric queries are configured for PrometheusAdapter.
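
Once autoscaling is enabled end to end, the resulting HPA objects can be checked with plain kubectl (a sketch; the `chatqna` namespace is an assumption matching the install example earlier in this page):

```bash
# Sketch only: confirm HPA objects exist for the scaled deployments and that
# their TARGETS column shows metric values rather than <unknown>.
kubectl get hpa -n chatqna
kubectl describe hpa -n chatqna
```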

### Gotchas
1 change: 0 additions & 1 deletion microservices-connector/config/HPA/customMetrics.yaml
@@ -48,4 +48,3 @@ kind: ConfigMap
metadata:
name: adapter-config
namespace: monitoring
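
The `adapter-config` ConfigMap above is what PrometheusAdapter reads its custom metric queries from; whether those metrics are actually being served can be verified roughly as follows (a sketch, assuming PrometheusAdapter runs in the `monitoring` namespace):

```bash
# Sketch only: check that the adapter config exists and that the custom
# metrics API is registered and serving data (jq is only for readability).
kubectl get configmap adapter-config -n monitoring -o yaml
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq .
```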
