AI setup wizard providing more info about models being installed #541

Merged · 6 commits · Dec 19, 2024
2 changes: 1 addition & 1 deletion ui/src/components/AI.tsx
@@ -22,7 +22,7 @@ const AI = () => {
// attach tasks to models
const modelsWithTasks = modelsInDB.map((model) => {
const modelWithTasks = { ...model } as any;
- if (model.id === defaultLLM.id) {
+ if (model.id === defaultLLM?.id) {
modelWithTasks.default = true;
// find tasks for default model
const matchingTasks = tasksInDB.filter(
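The AI.tsx change is a null-safety fix: `defaultLLM` can now be unset, since the wizard below lets users skip LLM setup entirely, and reading `.id` unguarded would throw. A minimal sketch of the pattern, with a hypothetical `Model` type standing in for the real AD4M types:

```typescript
// Hypothetical shape; the real model type comes from the AD4M client.
interface Model {
  id: string;
  name: string;
}

// With `defaultLLM.id` this throws a TypeError when no default LLM is set;
// with `defaultLLM?.id` the comparison is simply false and mapping proceeds.
function attachDefaultFlag(models: Model[], defaultLLM?: Model) {
  return models.map((model) => {
    const modelWithTasks = { ...model } as Model & { default?: boolean };
    if (model.id === defaultLLM?.id) modelWithTasks.default = true;
    return modelWithTasks;
  });
}

// No default configured yet: nothing is flagged, nothing throws.
console.log(attachDefaultFlag([{ id: "m1", name: "solar_10_7b_instruct" }], undefined));
```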
91 changes: 71 additions & 20 deletions ui/src/components/Login.tsx
@@ -4,6 +4,7 @@ import { useContext, useEffect, useState } from "react";
import { useNavigate } from "react-router-dom";
import { Ad4minContext } from "../context/Ad4minContext";
import { AgentContext } from "../context/AgentContext";
+ import { open } from "@tauri-apps/plugin-shell";
import "../index.css";
import Logo from "./Logo";

@@ -92,19 +93,19 @@ const Login = () => {
async function saveModels() {
if (await apiValid()) {
// add llm model
- const llm = { name: "LLM Model 1", modelType: "LLM" } as ModelInput;
- if (aiMode === "Local") {
-   llm.local = {
-     fileName: "solar_10_7b_instruct",
-     tokenizerSource: "",
-     modelParameters: "",
-   };
- } else {
-   llm.api = { baseUrl: apiUrl, apiKey, apiType: "OPEN_AI" };
- }
- client!.ai
-   .addModel(llm)
-   .then((modelId) => client!.ai.setDefaultModel("LLM", modelId));
+ if (aiMode !== "None") {
+   const llm = { name: "LLM Model 1", modelType: "LLM" } as ModelInput;
+   if (aiMode === "Local") {
+     llm.local = {
+       fileName: "solar_10_7b_instruct",
+       tokenizerSource: "",
+       modelParameters: "",
+     };
+   } else {
+     llm.api = { baseUrl: apiUrl, apiKey, apiType: "OPEN_AI" };
+   }
+   client!.ai.addModel(llm).then((modelId) => client!.ai.setDefaultModel("LLM", modelId));
+ }
// add embedding model
client!.ai.addModel({
name: "bert",
@@ -415,13 +416,33 @@ const Login = () => {
style={{
textAlign: "center",
width: "100%",
- maxWidth: 500,
+ maxWidth: 570,
marginBottom: 40,
}}
>
<j-text size="600" nomargin color="ui-900">
ADAM allows you to control the AI used for transcription, vector
embedding, and LLM tasks.

<j-text size="800" nomargin color="ui-900">
Is your computer capabale of running Large Language Models locally?
</j-text>
<j-text>
Regardless of your choice here, we will always download and use small AI models
(such as <a
onClick={() => open("https://huggingface.co/openai/whisper-small")}
style={{cursor: "pointer"}}
>Whisper small</a> and an <a
onClick={() => open("https://huggingface.co/Snowflake/snowflake-arctic-embed-xs")}
style={{cursor: "pointer"}}
>Embedding model</a>)
to handle basic tasks on all devices.
<br></br>
<br></br>
When it comes to LLMs, it depends on you having either an Apple Silicon mac (M1 or better)
or an nVidia GPU.
<br></br>
<br></br>
Alternatively, you can configure ADAM to out-source LLM tasks to a remote API.
If you unsure, you can select "None" now and add, remove or change model settings
later-on in the <b>AI tab</b>.
</j-text>
</j-flex>

@@ -440,8 +461,8 @@ const Login = () => {
Local
</j-text>
<j-text size="500" nomargin color="ui-800">
- Select <b>Local</b> if your device is capable or running large
- models locally.
+ Select Local if you have an <b>M1 Mac</b> (or better)
+ or an <b>NVIDIA GPU</b>.
</j-text>
</button>

@@ -459,7 +480,7 @@ const Login = () => {
Remote
</j-text>
<j-text size="500" nomargin color="ui-800">
- Select <b>Remote</b> to use an external API like OpenAI.
+ Select Remote to use an external API like <b>OpenAI</b> or your own <b>Ollama</b> server.
</j-text>
</button>

@@ -477,11 +498,27 @@ const Login = () => {
None
</j-text>
<j-text size="500" nomargin color="ui-800">
- Select <b>None</b> if you'd prefer not use AI.
+ Select None if you'd prefer <b>NOT to use LLMs</b> at all.
</j-text>
</button>
</j-flex>

{aiMode === "Local" && (
<j-flex
direction="column"
a="center"
gap="400"
style={{ marginTop: 30, maxWidth: 350 }}
>
<j-text>
This will download <a
onClick={() => open("https://huggingface.co/TheBloke/SOLAR-10.7B-Instruct-v1.0-GGUF")}
style={{cursor: "pointer"}}
>SOLAR 10.7b instruct</a>
</j-text>
</j-flex>
)}

{aiMode === "Remote" && (
<j-flex
direction="column"
@@ -529,6 +566,20 @@ const Login = () => {
</j-flex>
)}

{aiMode === "None" && (
<j-flex
direction="column"
a="center"
gap="400"
style={{ marginTop: 30, maxWidth: 350 }}
>
<j-text>
Selecting <b>None</b> here and not having any LLM configured
might result in new Synergy features not working in Flux...
</j-text>
</j-flex>
)}

<j-flex gap="500" j="center" wrap style={{ marginTop: 60 }}>
<j-button size="xl" onClick={() => setCurrentIndex(4)}>
Previous
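Taken together, the Login.tsx changes make LLM installation conditional on the wizard choice while the small models stay mandatory. A hedged sketch of the resulting saveModels flow, with a hypothetical `AiClient` interface standing in for `client!.ai` (the embedding-model call is abbreviated, since the diff truncates it):

```typescript
// Hypothetical client surface; the real calls are client!.ai.addModel and
// client!.ai.setDefaultModel from the AD4M client.
type AiMode = "Local" | "Remote" | "None";

interface ModelInput {
  name: string;
  modelType: string; // "LLM" in the diff; the embedding value below is assumed
  local?: { fileName: string; tokenizerSource: string; modelParameters: string };
  api?: { baseUrl: string; apiKey: string; apiType: "OPEN_AI" };
}

interface AiClient {
  addModel(model: ModelInput): Promise<string>;
  setDefaultModel(kind: "LLM", modelId: string): Promise<void>;
}

async function saveModels(ai: AiClient, aiMode: AiMode, apiUrl: string, apiKey: string) {
  // The LLM is only created when the user picked Local or Remote...
  if (aiMode !== "None") {
    const llm: ModelInput = { name: "LLM Model 1", modelType: "LLM" };
    if (aiMode === "Local") {
      // Local mode downloads SOLAR 10.7B Instruct (GGUF build).
      llm.local = { fileName: "solar_10_7b_instruct", tokenizerSource: "", modelParameters: "" };
    } else {
      llm.api = { baseUrl: apiUrl, apiKey, apiType: "OPEN_AI" };
    }
    // ...and is immediately promoted to the default LLM.
    const modelId = await ai.addModel(llm);
    await ai.setDefaultModel("LLM", modelId);
  }
  // The small embedding model is installed regardless of the choice
  // (fields abbreviated; the diff is truncated at this point).
  await ai.addModel({ name: "bert", modelType: "EMBEDDING" });
}
```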
12 changes: 7 additions & 5 deletions ui/src/components/ModelModal.tsx
@@ -27,10 +27,7 @@ const llmModels = [
const transcriptionModels = ["whisper"];
const embeddingModels = ["bert"];

- export default function ModelModal(props: {
-   close: () => void;
-   oldModel?: any;
- }) {
+ export default function ModelModal(props: { close: () => void; oldModel?: any }) {
const { close, oldModel } = props;
const {
state: { client },
@@ -108,7 +105,12 @@ export default function ModelModal(props: {
};
}
if (oldModel) client!.ai.updateModel(oldModel.id, model);
- else client!.ai.addModel(model);
+ else {
+   const newModelId = await client!.ai.addModel(model);
+   // if no default LLM set, mark new model as default
+   const defaultLLM = await client!.ai.getDefaultModel("LLM");
+   if (!defaultLLM) client!.ai.setDefaultModel("LLM", newModelId);
+ }
close();
}
}
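The ModelModal change mirrors the wizard: a model added later from the AI tab becomes the default LLM only when none exists, so users who chose "None" during setup still end up with a working default. A minimal sketch, assuming getDefaultModel resolves to undefined when no default is set:

```typescript
// Hypothetical client surface matching the calls in the diff.
interface AiClient {
  addModel(model: object): Promise<string>;
  getDefaultModel(kind: "LLM"): Promise<{ id: string } | undefined>;
  setDefaultModel(kind: "LLM", modelId: string): Promise<void>;
}

async function addModelWithFallbackDefault(ai: AiClient, model: object): Promise<string> {
  const newModelId = await ai.addModel(model);
  // Promote the new model only when no default LLM exists yet,
  // so an existing user choice is never silently overridden.
  const defaultLLM = await ai.getDefaultModel("LLM");
  if (!defaultLLM) await ai.setDefaultModel("LLM", newModelId);
  return newModelId;
}
```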