
Fine-Tuning a Pre-Trained YOLOv10 Model Using QLoRA and Converting to TensorFlow Lite (TFLite) #13494

Open
vinujaj opened this issue Jan 20, 2025 · 2 comments
Labels: exports (Model exports: ONNX, TensorRT, TFLite, etc.), question (Further information is requested)

Comments


vinujaj commented Jan 20, 2025

I have searched similar issues but couldn't find anything.

Question:

I want to fine-tune my pre-trained YOLOv10 model, specifically using QLoRA (Quantized Low-Rank Adaptation). I'm then planning to convert the fine-tuned model to TFLite. Could you provide the steps or a tutorial on how to fine-tune a pre-trained YOLOv10 model using QLoRA?
I've found this hard to do.

Thank You.

@UltralyticsAssistant added the labels exports (Model exports: ONNX, TensorRT, TFLite, etc.) and question (Further information is requested) on Jan 20, 2025
@UltralyticsAssistant (Member) commented:

👋 Hello @vinujaj, thank you for your interest in YOLOv5 🚀! It seems like you're diving into fine-tuning and quantization, very exciting! While YOLOv10 is not an official release from Ultralytics (our latest official releases are YOLOv5 🚀 and YOLOv8 🌟), you can still explore similar workflows built around YOLOv5 and YOLOv8 models.

If you are looking to perform custom fine-tuning and quantization tasks like the one you mentioned, please note the following suggestions and resources:

Notes 🛠️

  • QLoRA: At this time, direct QLoRA (Quantized LoRA) adaptation is not officially supported in YOLOv5 or YOLOv8 workflows. However, leveraging external libraries for LoRA-based fine-tuning on PyTorch models may be possible (see the unofficial sketch after this list). We encourage you to provide additional details or code examples of your pipeline for further guidance.
  • TFLite Conversion: YOLOv8 offers native TFLite export functionality, but for YOLOv5, you may need third-party solutions or custom scripts. Ensure your TFLite conversion steps account for any additional quantization changes from QLoRA.
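For reference, here is a minimal, unofficial sketch of what LoRA-style adapter injection could look like on a PyTorch model such as a YOLO backbone. This is not an Ultralytics API; the wrapper class, rank, and scaling values below are illustrative assumptions, and a full QLoRA setup would additionally quantize the frozen base weights (e.g., to 4-bit) before adding the adapters.

```python
# Unofficial sketch: LoRA-style adapters for the Conv2d layers of a PyTorch model.
# Class names, rank, and scaling are illustrative assumptions, not Ultralytics APIs.
import torch
import torch.nn as nn


class LoRAConv2d(nn.Module):
    """Frozen Conv2d plus a trainable low-rank update (down-projection to `rank` channels, then 1x1 up)."""

    def __init__(self, base: nn.Conv2d, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pre-trained weights frozen
        # The down conv mirrors the base kernel/stride/padding/dilation so the
        # low-rank branch produces the same spatial dimensions as the base output.
        self.down = nn.Conv2d(
            base.in_channels, rank,
            kernel_size=base.kernel_size, stride=base.stride,
            padding=base.padding, dilation=base.dilation, bias=False,
        )
        self.up = nn.Conv2d(rank, base.out_channels, kernel_size=1, bias=False)
        nn.init.zeros_(self.up.weight)  # adapters start as a zero update
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))


def inject_lora(module: nn.Module, rank: int = 4) -> nn.Module:
    """Recursively replace Conv2d children with LoRA-wrapped copies; only the adapters remain trainable."""
    for name, child in module.named_children():
        if isinstance(child, nn.Conv2d):
            setattr(module, name, LoRAConv2d(child, rank=rank))
        else:
            inject_lora(child, rank)
    return module
```

Training would then proceed with a standard PyTorch loop over only the parameters that still have `requires_grad=True`. Note that adapters inserted this way would need to be merged back into the base weights (or exported together with them) before any TFLite conversion.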

Help Us Help You 😊

To better assist you, please include the following, if applicable:

  1. Details about your pre-trained model (e.g., architecture, weights used, etc.)
  2. Steps or scripts you’ve tried for fine-tuning using QLoRA.
  3. Errors or issues you’re encountering with the TFLite conversion process.
  4. Logs or any code snippets that can help us understand the problem more clearly.

Requirements

Here is the minimal repository setup to replicate your issue or explore solutions:

git clone https://github.com/ultralytics/yolov5  # clone the repository
cd yolov5
pip install -r requirements.txt  # install dependencies

Environments

You can also explore YOLOv5 on the following platforms for faster experimentation:

  • Free GPUs: Run on Gradient, Google Colab, or Kaggle

Status

YOLOv5 CI: if this badge is green, all YOLOv5 GitHub Actions CI tests are passing. CI tests verify functionality for training, validation, inference, export, and benchmarking daily.


🚨 This is an automated response to help guide you. An Ultralytics engineer will review your question and provide additional assistance shortly.

@pderrenger (Member) commented:

Thank you for your question! Currently, Ultralytics YOLO does not officially support QLoRA (Quantized Low-Rank Adaptation) for fine-tuning. However, you can fine-tune YOLOv5 or newer YOLO models using standard methods and then convert the fine-tuned model to TFLite.

To fine-tune your model, refer to the YOLOv5 training documentation. Once fine-tuning is complete, you can follow the TFLite export guide here to convert the model for deployment.
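As a minimal sketch of the standard (non-QLoRA) route using the Ultralytics Python API, the flow could look like the following; the checkpoint name and dataset YAML are placeholders you would swap for your own fine-tuned weights and data.

```python
# Minimal sketch of standard fine-tuning + TFLite export with the Ultralytics package.
# "yolov8n.pt" and "coco8.yaml" are placeholders; substitute your own checkpoint and dataset.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                             # load a pre-trained checkpoint
model.train(data="coco8.yaml", epochs=50, imgsz=640)   # standard fine-tuning on your dataset
model.export(format="tflite")                          # writes a .tflite file for deployment
```

If INT8 quantization is needed for deployment, the export step can also be configured for it (e.g., `int8=True` with a calibration dataset), as described in the export documentation.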

If you require QLoRA-specific workflows, you may need to integrate external tools or frameworks. Let us know if you have further questions!
