⭐️ Feat: Sync LM Studio models to gollama #68
Comments
I need this too. I have hundreds of GGUF files. I want to symlink them to %USERPROFILE%\.ollama\models\blobs with their appropriate templates.
I was excited for #13 (downloading models from huggingface), but this might be easier to implement. Maybe I'm mistaken, but I believe Ollama needs to import and manipulate GGUF files before using them, so it'll duplicate the files.
I just checked it. The GGUF file imported to Ollama is identical to the original file. I also tested symlinking by deleting the sha256-834... file and creating a symbolic link to the original file. It works too!
You're probably not mistaken though. A few months ago, I also read someone mentioning that Ollama did modify the GGUF file when importing. Perhaps they changed it recently so that it no longer manipulates the GGUF file. Anyway, it's going to be a pain to create a Modelfile for many GGUF files with their appropriate templates and parameters, then import each one with Ollama, figure out the actual GGUF file among the sha256-randomnumber blobs, delete it, and create a symlink back to each original file. I ain't gonna do all that just to use Ollama. Hopefully, gollama will make this job somewhat easier in the future.
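If anyone wants to try the same space-saving trick, here is a minimal sketch of the symlink step described above. It assumes a default Linux install where Ollama keeps its blobs under ~/.ollama/models/blobs and names the imported GGUF layer after the blob's own sha256 digest; the source path below is only an example.

#!/bin/bash
# Sketch only: replace an imported GGUF blob with a symlink to the original file.
# Assumes the imported blob is byte-identical to the source GGUF, as observed above.
GGUF="/ai_models/lm_studio/models/example/model.gguf"   # hypothetical source file
BLOBS="$HOME/.ollama/models/blobs"                       # default Linux blob location
DIGEST=$(sha256sum "$GGUF" | awk '{print $1}')
if [ -f "${BLOBS}/sha256-${DIGEST}" ]; then
    rm "${BLOBS}/sha256-${DIGEST}"
    ln -s "$GGUF" "${BLOBS}/sha256-${DIGEST}"
fi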
In the meantime, Claude created this script just for converting LM Studio models to Ollama ones. It's not perfect, but it works for the most part if you change this part:
BASE_PATH="/ai_models"
OLLAMA_DIR="${BASE_PATH}/ollama/models"
LMSTUDIO_DIR="${BASE_PATH}/lm_studio/models"
I ran this one through quite a few times just testing it.
I can test later with a Python script whether it's possible to create the layers manually, if this is true.
Script:
#!/bin/bash
set -o pipefail
# Configuration
BASE_PATH="/ai_models"
OLLAMA_DIR="${BASE_PATH}/ollama/models"
LMSTUDIO_DIR="${BASE_PATH}/lm_studio/models"
CONFIG_FILE="${BASE_PATH}/sync_config.json"
# Load configuration (fall back to the defaults above when a key is missing or empty;
# jq exits 0 even when '// empty' prints nothing, so test the value rather than the exit code)
if [[ -f "$CONFIG_FILE" ]]; then
    OLLAMA_DIR=$(jq -r '.ollama_dir // empty' "$CONFIG_FILE")
    OLLAMA_DIR="${OLLAMA_DIR:-${BASE_PATH}/ollama/models}"
    LMSTUDIO_DIR=$(jq -r '.lmstudio_dir // empty' "$CONFIG_FILE")
    LMSTUDIO_DIR="${LMSTUDIO_DIR:-${BASE_PATH}/lm_studio/models}"
fi
# Logging functions
log() { echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1" >&2; }
debug_log() { echo "[DEBUG] $(date +'%Y-%m-%d %H:%M:%S') $1" >&2; }
error_log() { echo "[ERROR] $(date +'%Y-%m-%d %H:%M:%S') $1" >&2; }
# Check if Ollama is running
check_ollama_running() {
    debug_log "Checking if Ollama is running..."
    if ! pgrep -x "ollama" > /dev/null; then
        error_log "Ollama is not running. Please start Ollama and try again."
        exit 1
    fi
    debug_log "Ollama is running."
}
# Create Modelfile
create_modelfile() {
    local model_dir=$1
    local gguf_file=$2
    local modelfile="${model_dir}/Modelfile"
    debug_log "Creating Modelfile for ${gguf_file} in ${model_dir}"
    cat <<EOL > "${modelfile}"
FROM ./${gguf_file}
PARAMETER num_ctx 4096
PARAMETER repeat_penalty 1.1
PARAMETER top_p 0.95
PARAMETER top_k 40
PARAMETER temperature 0.8
SYSTEM """
Execute the task to the best of your abilities.
"""
EOL
    debug_log "Modelfile created for ${gguf_file}"
}
# Check if model exists in Ollama
model_exists_in_ollama() {
    local model_name=$1
    ollama list | grep -q "${model_name}"
}
# Check available disk space
check_disk_space() {
    local required_space=$1 # in MB
    local available_space=$(df -BM --output=avail /var/lib/ollama | tail -n 1 | tr -d 'M')
    if [ $available_space -lt $required_space ]; then
        error_log "Not enough disk space. Required: ${required_space}MB, Available: ${available_space}MB"
        return 1
    fi
    return 0
}
# Process a single model
process_model() {
    local gguf_path=$1
    local dir_path=$(dirname "${gguf_path}")
    local gguf_file=$(basename "${gguf_path}")
    local model_name=$(basename "${dir_path}")
    local ollama_model_dir="${OLLAMA_DIR}/${model_name}"
    local ollama_gguf_path="${ollama_model_dir}/${gguf_file}"
    log "Processing: ${gguf_file}"
    # Check available disk space (adjust required space as needed)
    if ! check_disk_space 10000; then
        error_log "Skipping ${gguf_file} due to insufficient disk space"
        return 1
    fi
    local model_installed=false
    if ollama list | grep -qi "\b${model_name}\b"; then
        log " ${model_name} is present in Ollama"
        model_installed=true
    else
        log " ${model_name} is not found in Ollama"
    fi
    read -p " Do you want to proceed with ${gguf_file}? (Y/n): " -r
    REPLY=${REPLY:-Y}
    if [[ ! $REPLY =~ ^[Yy]$ ]]; then
        log " Skipping ${gguf_file}"
        return 2
    fi
    if [ "$model_installed" = true ]; then
        log " Updating ${model_name}"
        if ollama pull "${model_name}"; then
            log " Successfully updated ${model_name}"
        else
            log " Failed to update ${model_name}, attempting to recreate"
            model_installed=false
        fi
    fi
    if [ "$model_installed" = false ]; then
        # Remove existing Ollama model if it exists
        if ollama list | grep -qi "\b${model_name}\b"; then
            log " Removing existing ${model_name} from Ollama"
            ollama rm "${model_name}" || true
        fi
        log " Creating new Ollama model: ${model_name}"
        rm -rf "${ollama_model_dir}"
        mkdir -p "${ollama_model_dir}"
        if ! ln -sf "${gguf_path}" "${ollama_gguf_path}"; then
            error_log " Failed to link ${gguf_file}"
            return 1
        fi
        create_modelfile "${ollama_model_dir}" "${gguf_file}"
        if ! ollama create "${model_name}" -f "${ollama_model_dir}/Modelfile"; then
            error_log " Failed to create Ollama model for ${gguf_file}"
            return 1
        fi
    fi
    if ! ollama list | grep -qi "\b${model_name}\b"; then
        error_log " Failed to verify Ollama model installation for ${gguf_file}"
        return 1
    fi
    log " Successfully processed ${gguf_file}"
    # Clean up the entire Ollama model directory
    if [ -d "${ollama_model_dir}" ]; then
        log " Removing Ollama model directory"
        if sudo rm -rf "${ollama_model_dir}"; then
            log " Successfully removed Ollama model directory: ${ollama_model_dir}"
        else
            error_log " Failed to remove Ollama model directory: ${ollama_model_dir}"
        fi
    fi
    # Clean up LM Studio files
    cleanup_lmstudio_files "${gguf_path}"
    return 0
}
# Clean up LM Studio files
cleanup_lmstudio_files() {
    local full_path=$1
    local lmstudio_model_dir=$(dirname "${full_path}")
    local model_name=$(basename "${lmstudio_model_dir}")
    # Check if the directory exists
    if [ -d "${lmstudio_model_dir}" ]; then
        # Check if the directory contains any .gguf files
        if ls "${lmstudio_model_dir}"/*.gguf 1> /dev/null 2>&1; then
            read -p "Do you want to remove the LM Studio files for ${model_name}? (Y/n): " -r
            REPLY=${REPLY:-Y}
            if [[ $REPLY =~ ^[Yy]$ || -z $REPLY ]]; then
                log "Removing LM Studio files for ${model_name}"
                if sudo rm -rf "${lmstudio_model_dir}"/*.gguf; then
                    log "LM Studio files for ${model_name} have been removed"
                    # Check if the directory is now empty and remove it if so
                    if [ -z "$(ls -A "${lmstudio_model_dir}")" ]; then
                        log "Removing empty directory: ${lmstudio_model_dir}"
                        sudo rm -rf "${lmstudio_model_dir}"
                        # Check if parent directory is empty and remove it if so
                        local parent_dir=$(dirname "${lmstudio_model_dir}")
                        if [ -z "$(ls -A "${parent_dir}")" ]; then
                            log "Removing empty parent directory: ${parent_dir}"
                            sudo rm -rf "${parent_dir}"
                        fi
                    fi
                else
                    error_log "Failed to remove LM Studio files for ${model_name}"
                fi
            else
                log "Keeping LM Studio files for ${model_name}"
            fi
        else
            debug_log "No .gguf files found in ${lmstudio_model_dir}"
        fi
    else
        debug_log "Directory ${lmstudio_model_dir} does not exist"
    fi
}
# Process all models
process_models() {
    local processed=0
    local skipped=0
    local total_models=0
    debug_log "Starting to process models"
    # Read the file list with mapfile so paths containing spaces stay intact
    local model_files
    mapfile -t model_files < <(find "${LMSTUDIO_DIR}" -type f -name "*.gguf")
    debug_log "Found ${#model_files[@]} model files"
    for gguf_path in "${model_files[@]}"; do
        debug_log "Processing file: ${gguf_path}"
        ((total_models++))
        if process_model "${gguf_path}"; then
            ((processed++))
        elif [ $? -eq 2 ]; then
            ((skipped++))
        else
            error_log "Failed to process model: ${gguf_path}"
        fi
    done
    log "Summary:"
    log " Processed: ${processed}"
    log " Skipped: ${skipped}"
    log " Total models found: ${total_models}"
}
# Error handler
error_handler() {
    local error_code=$?
    error_log "An error occurred on line $1 with exit code $error_code"
    error_log "Stack trace:"
    local frame=0
    while caller $frame; do
        ((frame++))
    done
}
# Set up error handling
trap 'error_handler $LINENO' ERR
# Main function
main() {
    log "Starting LM Studio to Ollama model sync process..."
    check_ollama_running
    log "Models found in LM Studio:"
    find "${LMSTUDIO_DIR}" -type f -name "*.gguf" -printf "- %P\n" | sort
    log "Syncing from LM Studio to Ollama..."
    if ! process_models; then
        error_log "Error occurred during model processing"
        exit 1
    fi
    log "Sync process completed."
}
# Run the main function
main
exit 0
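For reference, the optional sync_config.json that the script looks for can override the two directories. A minimal example could be written as below; the values just mirror the script's defaults, so adjust them to your own paths, and either key may be omitted.

cat > /ai_models/sync_config.json <<'EOF'
{
  "ollama_dir": "/ai_models/ollama/models",
  "lmstudio_dir": "/ai_models/lm_studio/models"
}
EOF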
Howdy all 👋, this is something I had started to look at but de-prioritised a little. As @i486 mentioned, there are quite a few assumptions that would go into creating an Ollama Modelfile with the correct parameters, template format (don't even get me started on how annoying these are), etc. What I had drafted out in my mind was something like:
There are some limitations to this approach, one big one being that often people want to link all models - and this way won't scale.
Here are a few thoughts and suggestions.
LM Studio structure:
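(For readers who haven't used it: LM Studio typically nests downloads one folder per publisher and repository, roughly like the sketch below; the publisher, repository, and file names are only illustrative.)
lm_studio/models/
├── bartowski/
│   └── Meta-Llama-3-8B-Instruct-GGUF/
│       └── Meta-Llama-3-8B-Instruct-Q4_K_M.gguf
└── some-publisher/
    └── some-vision-model-GGUF/
        ├── some-vision-model-Q4_K_M.gguf
        └── mmproj-model-f16.gguf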
My GGUF structure, which has worked well with most LLM UIs except Ollama:
I actually used to store all my raw GGUF files in one folder but had to switch to a one-folder-per-model structure due to the increasing use of vision/mmproj models. Anyway, if you decide to work on this, please enable users to specify their own directories in addition to auto-importing from LM Studio. You could simply recurse through the specified folder and generate a list view.
I was going to suggest implementing whatever Ollama does in the
It seems Ollama is smart enough to automatically detect the correct template from the GGUF and create its template blob when the user only specifies
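As an illustration of that behaviour, a Modelfile containing nothing but the source line should be enough for a plain chat model (the file and model names below are made up):
FROM ./Meta-Llama-3-8B-Instruct-Q4_K_M.gguf
which would then be imported with:
ollama create llama3-8b-instruct -f Modelfile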
I forgot to mention this earlier. As a Windows user, I haven't tested gollama yet. However, I'm providing feedback in the hope that someone will add proper Windows support in the future. It shouldn't be too difficult considering Go's cross-platform nature and the fact that someone is already using it in Windows PowerShell; I haven't managed to do it myself yet. I understand that you don't currently plan to support Windows yourself but that you would accept pull requests, so it would be great if you could create a new open issue labeled "Feat: Windows support" with a "help wanted" tag. Alternatively, you could reopen any existing Windows-related issues. This could help attract contributors.
Description
Add reverse functionality for taking models in LM Studio and making them available in Ollama. Very similar to L, except it works in the reverse direction.