I have searched similar issues but couldn't find anything.
Question:
I want to fine-tune my pre-trained YOLOv10 model. I specifically want to use Quantized Low-Rank Adaptation (QLoRA). Then I'm planning to convert the fine-tuned model to TFLite. Could you provide me with the steps and a tutorial on how to fine-tune the pre-trained YOLOv10 model using QLoRA?
I found this hard to do.
Thank You.
👋 Hello @vinujaj, thank you for your interest in YOLOv5 🚀! It seems like you're diving into fine-tuning and quantization, very exciting! While YOLOv10 is not an official release from Ultralytics (our latest official releases are YOLOv5 🚀 and YOLOv8 🌟), you can still explore similar workflows built around YOLOv5 and YOLOv8 models.
If you are looking to perform custom fine-tuning and quantization tasks like the one you mentioned, please note the following suggestions and resources:
YOLOv8 Docs: YOLOv8 is our latest model family with cutting-edge features for training and export. Check for similar TFLite export workflows in YOLOv8 Export Guide.
QLoRA: At this time, direct QLoRA (Quantized LoRA) adaptation is not officially supported in YOLOv5 or YOLOv8 workflows. However, leveraging external libraries for LoRA-based fine-tuning on PyTorch models may be possible. We encourage you to provide additional details or code examples of your pipeline for further guidance.
TFLite Conversion: YOLOv8 offers native TFLite export functionality, but for YOLOv5, you may need third-party solutions or custom scripts. Ensure your TFLite conversion steps account for any additional quantization changes from QLoRA.
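To make the QLoRA point above concrete: QLoRA quantizes the frozen base weights (typically to 4-bit) and trains only small low-rank adapter matrices on top. The low-rank-adapter arithmetic underneath can be sketched in plain NumPy; all shapes and names here are illustrative and not taken from any YOLO codebase:

```python
import numpy as np

# LoRA idea: freeze the pretrained weight W and learn a low-rank
# update B @ A, so only r * (d_in + d_out) parameters are trained.
rng = np.random.default_rng(0)
d_out, d_in, r = 64, 128, 4        # rank r << min(d_out, d_in)
alpha = 8                          # LoRA scaling hyperparameter

W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight
A = rng.standard_normal((r, d_in))      # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-init
                                        # so the adapter starts as a no-op

def lora_forward(x):
    # y = W x + (alpha / r) * B (A x)
    return W @ x + (alpha / r) * (B @ (A @ x))

# Before exporting (e.g. to TFLite), the adapter is usually merged
# back into a single dense weight so the deployed graph is unchanged:
W_merged = W + (alpha / r) * (B @ A)

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W_merged @ x)
```

This merge step is why the note above about the TFLite conversion matters: once the adapter is folded into the base weights, the export sees an ordinary dense model, and any quantization has to be reapplied at export time.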
Help Us Help You 😊
To better assist you, please include the following, if applicable:
Details about your pre-trained model (e.g., architecture, weights used, etc.)
Steps or scripts you’ve tried for fine-tuning using QLoRA.
Errors or issues you’re encountering with the TFLite conversion process.
Logs or any code snippets that can help us understand the problem more clearly.
Requirements
Here are the minimal repo requirements to replicate your issue or explore solutions:
git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt
Environments
You can also explore YOLOv5 on the following platforms for faster experimentation:
Status
If this badge is green, all YOLOv5 GitHub Actions CI tests are passing. CI tests verify functionality for training, validation, inference, export, and benchmarking daily.
🚨 This is an automated response to help guide you. An Ultralytics engineer will review your question and provide additional assistance shortly.
Thank you for your question! Currently, Ultralytics YOLO does not officially support QLoRA (Quantized Low-Rank Adaptation) for fine-tuning. However, you can fine-tune YOLOv5 or newer YOLO models using standard methods and then convert the fine-tuned model to TFLite.
To fine-tune your model, refer to the YOLOv5 training documentation. Once fine-tuning is complete, you can follow the TFLite export guide here to convert the model for deployment.
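As a rough sketch of that two-step workflow with the YOLOv5 repository's own scripts (the dataset YAML path and run directory below are placeholders for your setup):

```shell
# Standard fine-tuning from pretrained weights (no QLoRA involved)
python train.py --img 640 --batch 16 --epochs 50 \
    --data path/to/data.yaml --weights yolov5s.pt

# Export the resulting checkpoint to TFLite
python export.py --weights runs/train/exp/weights/best.pt --include tflite

# Optionally apply INT8 post-training quantization during export
python export.py --weights runs/train/exp/weights/best.pt --include tflite --int8
```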
If you require QLoRA-specific workflows, you may need to integrate external tools or frameworks. Let us know if you have further questions!