
Support for Google Gemini LLMs and Embeddings #1965

Merged 5 commits into zylon-ai:main on Jul 8, 2024
Conversation

uw4 (Contributor) commented Jun 7, 2024

Initial support for Gemini: enables usage of Google LLMs and embedding models (see settings-gemini.yaml).

Install via
poetry install --extras "llms-gemini embeddings-gemini"

Notes:

  • had to bump llama-index-core to a later version that supports Gemini
  • poetry --no-update did not work: Gemini/llama_index seem to require more (transitive) dependency updates to make it work...
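For reference, a hypothetical sketch of what the settings-gemini.yaml profile might look like. The mode-switch layout follows privateGPT's existing settings convention; the exact field names, model identifiers, and defaults below are assumptions for illustration, not copied from the PR:

```yaml
# Hypothetical sketch of a Gemini settings profile.
# Field names and model names are assumed, not taken verbatim
# from the PR's settings-gemini.yaml.
llm:
  mode: gemini

embedding:
  mode: gemini

gemini:
  api_key: ${GOOGLE_API_KEY:}            # read from the environment
  model: models/gemini-pro               # assumed LLM default
  embedding_model: models/embedding-001  # assumed embedding default
```

With a profile like this in place, it would be activated via privateGPT's standard profile mechanism, e.g. `PGPT_PROFILES=gemini make run`.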

raphant commented Jun 18, 2024

Gemini's 1M-token context window for Pro and Flash should be a great aid for ingesting and analyzing large documents. Looking forward to seeing this implemented!

Review comment on pyproject.toml (resolved)
jaluma previously approved these changes on Jul 5, 2024
jaluma (Collaborator) commented Jul 8, 2024

Updated the PR with the latest changes from main

# Conflicts:
#	fern/docs/pages/manual/llms.mdx
@imartinez imartinez merged commit fc13368 into zylon-ai:main Jul 8, 2024
7 checks passed