
Is there any stable version of python and pip to train common object detection model? #5080

Closed
ly15927086342 opened this issue Jan 17, 2024 · 7 comments
Assignees
Labels
os:windows MediaPipe issues on Windows platform:python MediaPipe Python issues type:build/install For Build and Installation issues type:modelmaker Issues related to creation of custom on-device ML solutions

Comments

@ly15927086342

Have I written custom code (as opposed to using a stock example script provided in MediaPipe)

None

OS Platform and Distribution

Windows 10

MediaPipe version

No response

Bazel version

No response

Solution

Object Detector

Programming Language and version

Python 3.11

Describe the actual behavior

Installing mediapipe-model-maker fails.

Describe the expected behaviour

Installing mediapipe-model-maker succeeds.

Standalone code/steps you may have used to try to get what you need

Recently, I tried to train my own object detection model following the example at https://developers.google.com/mediapipe/solutions/customization/object_detector. I tried it on an M1 Mac with Python 3.10 and Python 3.9, and neither succeeded; the mediapipe-model-maker installation failed each time. I then tried Windows 10 with Python 3.11, but it still failed. I searched the related issues in this repo, but none of them solved my problem.
I just want to know whether there is any officially recommended combination of Python, pip, and mediapipe-model-maker versions that has been proven to work. A working setup for Windows 10 or M1 would help me a lot.
Looking forward to your reply!

Other info / Complete Logs

No response

@ly15927086342 ly15927086342 added the type:support General questions label Jan 17, 2024
@kuaashish
Collaborator

Hi @ly15927086342,

Unfortunately, you won't be able to install the latest version of MediaPipe model maker (0.2.1.3) because tensorflow-text stopped supporting Windows, Aarch64, and Apple Macs since version 2.11 (refer to tensorflow/text#1140 and https://github.com/tensorflow/text#a-note-about-different-operating-system-packages).

To install the latest version on Windows, you can try building the tensorflow-text package from their GitHub repository. If you are not using the text_classifier task in Model Maker (the only task that requires tensorflow-text), you can attempt to install mediapipe-model-maker with pip's "--no-deps" flag and then install each remaining requirement independently, excluding tensorflow-text. Note that this is a workaround suggested by a user in tensorflow/text#1206.
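A minimal sketch of that workaround — the requirement names below are illustrative placeholders, not mediapipe-model-maker's actual pinned dependency list (read the real list from the package's `Requires-Dist` metadata):

```python
# Sketch: emit one pip command per dependency so tensorflow-text can
# be skipped (workaround from tensorflow/text#1206).
# NOTE: this requirements list is hypothetical; inspect the real
# Requires-Dist entries in the package METADATA instead.
requirements = [
    "mediapipe",
    "tensorflow",
    "tensorflow-text",   # no Windows/Apple Silicon wheels since 2.11
    "tf-models-official",
]

def pip_commands(reqs):
    """Return one `pip install` command per requirement, minus tensorflow-text."""
    kept = [r for r in reqs if not r.startswith("tensorflow-text")]
    return [f"pip install {r}" for r in kept]

# First install the package itself without pulling in any dependencies:
print("pip install --no-deps mediapipe-model-maker")
# Then install each remaining requirement by hand:
for cmd in pip_commands(requirements):
    print(cmd)
```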

Alternatively, you can consider using Windows Subsystem for Linux (WSL) or the Colab cloud environment to install these packages in a Linux environment.

However, based on our analysis, you can install mediapipe-model-maker 0.1.0.2 (`pip install mediapipe-model-maker==0.1.0.2`), which is based on an older version of MediaPipe (0.9.0.1), without any issues; this is not our recommended approach, though.

Thank you

@kuaashish kuaashish assigned kuaashish and unassigned ayushgdev Jan 18, 2024
@kuaashish kuaashish added type:modelmaker Issues related to creation of custom on-device ML solutions type:build/install For Build and Installation issues platform:python MediaPipe Python issues os:windows MediaPipe issues on Windows stat:awaiting response Waiting for user response and removed type:support General questions labels Jan 18, 2024
@ly15927086342
Author

I have trained my model in Colab; it works well, thank you!
I have another question. After I exported my model.tflite, the connection was closed. Is there any way for me to import the model and continue with the model quantization step?
Hoping for your reply!

@google-ml-butler google-ml-butler bot removed the stat:awaiting response Waiting for user response label Jan 19, 2024
@kuaashish
Collaborator

@ly15927086342,

Thank you for confirming. Kindly grant us some time to internally review the matter. We will inform you of the feasibility from our perspective.

@kuaashish
Collaborator

Hi @ly15927086342,

I attempted to replicate the scenario in Colab. Running the demo with a smaller dataset was successful, and after importing the customized TensorFlow Lite model, the quantization example code executed smoothly without runtime interruptions.
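For context on what that quantization step does, here is a pure-Python sketch of the per-tensor affine int8 mapping (real ≈ scale × (q − zero_point)) that 8-bit quantization applies; the helper below is illustrative only, not MediaPipe Model Maker's API:

```python
# Illustrative per-tensor affine quantization to int8:
# real_value ≈ scale * (quantized_value - zero_point).
def quantize(values, qmin=-128, qmax=127):
    """Map floats to int8 codes; the range is widened to include 0."""
    lo = min(min(values), 0.0)
    hi = max(max(values), 0.0)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

q, scale, zp = quantize([-1.0, 0.0, 0.5, 1.0])
# Dequantizing recovers the floats only approximately (within one step of scale):
approx = [scale * (code - zp) for code in q]
```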

In your situation, if you aim to train the model with a larger dataset without encountering timeouts, you may consider extending the runtime duration by implementing the following suggestions:

Run the following snippet in the browser console; it will keep the session from disconnecting. Press Ctrl + Shift + I to open the developer tools, switch to the Console tab, and run:

```javascript
function ClickConnect() {
    console.log("Working");
    document.querySelector("colab-toolbar-button#connect").click();
}
setInterval(ClickConnect, 60000); // click the Connect button every 60 seconds
```

Reference: [How can I prevent Google Colab from disconnecting?](https://stackoverflow.com/questions/57113226/how-can-i-prevent-google-colab-from-disconnecting).

Thank you!!

@kuaashish kuaashish added the stat:awaiting response Waiting for user response label Jan 25, 2024

github-actions bot commented Feb 2, 2024

This issue has been marked stale because it has had no recent activity in the past 7 days. It will be closed if no further activity occurs. Thank you.

@github-actions github-actions bot added the stale label Feb 2, 2024

This issue was closed due to lack of activity after being marked stale for past 7 days.


@kuaashish kuaashish removed stat:awaiting response Waiting for user response stale labels Feb 11, 2024

3 participants