Update readme.md with macOS installation instructions #129
Conversation
macOS does not have the `conda` command.
How many it/s can you get on the M1 Pro?
If you follow the Apple technical document in the procedure, it will install Miniconda3 from the Anaconda repo.
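For reference, a minimal sketch of what that Miniconda3 setup usually looks like on Apple Silicon (the installer URL and install prefix below are the conventional defaults and are assumptions on my part; check the current conda docs):

```bash
# Download and run the Miniconda3 installer for Apple Silicon (arm64).
curl -O https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh
sh Miniconda3-latest-MacOSX-arm64.sh

# Make `conda` available in new shells, then reload the shell configuration.
~/miniconda3/bin/conda init zsh
source ~/.zshrc
```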
Wow, 8 s/it, that is quite a bit of waiting.
Yes, it is, but it's similar to what I get with ComfyUI or Automatic1111 using SDXL; SD1.5 is faster, though. I don't think you can do better with M1 + SDXL (?). I don't know what optimizations you included in Fooocus, but the image quality is vastly superior to ComfyUI or Automatic1111. Thanks for giving us the chance to play with this project! 😄
Note that once Miniconda3 is installed and activated in the shell, the Linux instructions work perfectly on macOS.
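In other words, something along these lines (a sketch of the Linux steps from the Fooocus readme as I understand them; file names may differ between versions):

```bash
git clone https://github.com/lllyasviel/Fooocus.git
cd Fooocus

# Create and activate the conda environment shipped with the repo.
conda env create -f environment.yaml
conda activate fooocus

# Install the pinned Python dependencies, then launch the web UI.
pip install -r requirements_versions.txt
python entry_with_update.py
```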
After installing it locally with your steps, I got the same issue as described here: #286
You just need to restart your computer.
It looks like I'm running into a problem with the environment; how can I get past this? I didn't see instructions for this part in your readme.
I had to launch with …
Works like a charm! Is it normal that the program uses so much RAM? About 20 GB is used.
Please refer to the macOS installation guide in the …
Last login: Sat Oct 14 11:02:20 on ttys000. I ran `conda activate fooocus` and `python entry_with_update.py`; the console output ends with "To create a public link, set …". No matter how many times I download the model again, it's useless.
MetadataIncompleteBuffer means corrupted files.
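If you want to check whether the download is actually corrupted before re-downloading, comparing the file's checksum against the one published on the model page is a quick test (the file name below is just an example; use your own checkpoint path):

```bash
# Compare the output against the SHA-256 listed on the model's download page.
shasum -a 256 models/checkpoints/juggernautXL_version6Rundiffusion.safetensors
```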
122 s/it on a MacBook Pro M2... so slow.
[Fooocus Model Management] Moving model(s) has taken 63.73 seconds. It moves the model once before each generation; too slow.
@omioki23 I also have the same issue. Seems like a Mac thing.
Setup was a breeze, but as others have mentioned, generation is extremely slow. Unfortunate.
I did everything and got a URL. When I gave a prompt and tapped Generate, it completed, but I can't see any of the images.
When trying to generate an image on my MacBook Air M1, it gave the following error: RuntimeError: MPS backend out of memory (MPS allocated: 8.83 GB, other allocations: 231.34 MB, max allowed: 9.07 GB). Tried to allocate 25.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure). Clearly it is implying I do not have enough memory; has anyone figured out how to rectify this, please? Thanks.
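If you want to try the workaround the error message itself suggests, the environment variable can be set for a single launch; note that the "may cause system failure" warning applies, so treat this as a sketch rather than a recommendation:

```bash
# Disable the MPS memory high-watermark limit for this run only.
# This can exhaust system memory, so save your work first.
PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 python entry_with_update.py
```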
@Shuffls I tried this for a similar issue and it fixed my problem.
RuntimeError: MPS backend out of memory (MPS allocated: 6.34 GB, other allocations: 430.54 MB, max allowed: 6.77 GB). Tried to allocate 10.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
Sorry, bro, been there. Running on Intel is a waste of time. It runs, but it will remain slow.
I get around 10-15 s/it with …
@Zeeshan-2k1 did you solve your issue? I had the exact same error. In my case, I already had a newer version of Python (3.12) installed and linked, so whenever I ran "python" commands they used the newer version. However, when you follow the steps in the readme, it installs Python 3.11 for you, and you have to use that Python, as libraries such as pygit2 are installed into that environment as well (within conda). Hope this helps!
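A quick way to confirm which interpreter you are actually running (paths below assume a default Miniconda3 install; yours may differ):

```bash
# After `conda activate fooocus`, check that python resolves to the conda env,
# not to a system-wide install such as Python 3.12.
which python          # should point into .../miniconda3/envs/fooocus/bin/python
python --version      # should report the Python 3.11.x that the readme installs
```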
I was testing some combinations of all the parameters; long story short, the best for me (MacBook Pro, M3 Pro Apple Silicon) was: … Also, the newest version of Fooocus (web UI) lets you choose the Extreme Speed setting (when selecting "Advanced"), where only 8 iterations per image are needed. By selecting it, you might create the images even faster, of course with a slight quality decrease.
Just to clarify for anyone else reading this thread: I can confirm that there is a speed/memory issue at 16 GB on the Mac M-series. You need to force 16-bit floating point to reduce memory just enough to avoid a bottleneck. This more than doubles the speed: I went from over 120 s/it down to under 60 s/it. Still not fast, but it becomes usable. I usually run with …
Thanks, bro!
To optimize the execution of your command and potentially speed up the process, we can focus on the parameters that most affect performance. However, given that your command already contains many parameters for optimizing memory and computational resource usage, the main directions for improvement involve more efficient GPU usage and reducing the amount of data processed in each iteration. Here's a modified version of your command considering possible optimizations:
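The command block itself did not survive in this copy of the thread; a rough reconstruction, using only the flags named in the explanation below, would look something like this (an illustration, not the original command):

```bash
# Illustrative only: force fp16 for the U-Net, VAE and CLIP, and enable
# asynchronous CUDA allocation (the latter only matters on CUDA GPUs, not MPS).
python entry_with_update.py \
    --unet-in-fp16 --vae-in-fp16 --clip-in-fp16 \
    --async-cuda-allocation
```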
Explanation of changes:
- Removed unsupported parameters: parameters that caused an error because they are not in the list of supported script parameters (`--num-workers`, `--batch-size`, `--optimizer`, `--learning-rate`, `--precision-backend`, `--gradient-accumulation-steps`) have been removed.
- Clarified FP16 usage: explicit flags for using FP16 on different parts of the model (`--unet-in-fp16`, `--vae-in-fp16`, `--clip-in-fp16`) have been added. This assumes your model includes components like a U-Net, a VAE (variational autoencoder), and CLIP. Using FP16 can speed up computation and reduce memory consumption, although it may slightly affect the accuracy of the results.
- Asynchronous CUDA memory allocation: the `--async-cuda-allocation` parameter means the script will use asynchronous memory allocation, which can speed up data loading and the start of computation.

Additional tip: use profiling tools to analyze CPU and GPU usage and identify bottlenecks.
I try to use this and get the message: /anisotropic.py:132: UserWarning: The operator 'aten::std_mean.correction' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:13.)
I have 161.37 s/it. Can someone help me figure out why, and how I can make my Mac faster? It's a 2022 model, so it has the M1 chip. But why is it this slow?
Is it just quitting when trying to generate an image for anyone else? (M2 MacBook Air)
Did you guys `cd Fooocus` and `conda activate fooocus` before running `python entry_with_update.py`?
You're probably not using the optimization parameters mentioned right above your post.
Thank you, got it down to around 13-14 s/it on a 2020 M1 MacBook Air 16 GB. It starts at 10.5, though, and slows down after a couple of steps. Fooocus still runs a bit slower than A1111 (7-8 s/it), but IMO still usable. I think it could be faster if it used both CPU and GPU cores. For now, it sits at about 96% GPU with frequent dips to 80%, and only 10-17% CPU. Any way to change that? I want my whole machine to generate.
Great work @Deniffler, you clearly spent more time and effort than I have; I was just glad to get it running fully off the GPU. Very glad that I helped set you on the right path, as you've now got us all running as fast as possible. I'm much more productive now, many thanks.
Posting here, as that is what was described; it can be converted to an issue later if necessary. On my fork I found the rows for the prompt box were set to 1024 and … It would also seem that image prompting is not working at all for me. I check "Image Prompt", place in two images (a landscape and an animal), and click Generate. Fooocus then generates a random portrait image of a man/woman. However, if I put an image into the Describe tab and click Describe, it will indeed create a prompt from the image. So the tab/image handling seems to be working, at least? Anyone else having a similar problem?
I get this when I try to install the requirements: … I got two errors like that, and I don't know how to solve them, because then it doesn't run at all.
Follow this, it works: https://youtu.be/IebiL16lFyo?si=GSaczBlUuzjnP9TM
I've tested some of the commands above. Results: 🐱 🐆 🌟 My configuration: … ATTENTION: …
I installed it successfully. Do I need to use the terminal every time to run it, or is there a way to create an executable file?
Step 1: Create a … (see the sketch below).
Step 2 (Option 1): Follow this answer to convert it into a ….
Step 2 (Option 2): Select that file as one of the "Login Items" in your settings. Note that this way the server will always run in the background.
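As an illustration of the kind of file meant in Step 1 (the exact file type the poster used was not preserved here; a double-clickable `.command` script is one common choice, and the paths below are assumptions to adjust):

```bash
#!/bin/zsh
# run_fooocus.command - double-clickable launcher for the Fooocus web UI.
# Adjust the Miniconda and Fooocus paths to match your installation.

source "$HOME/miniconda3/etc/profile.d/conda.sh"   # makes `conda activate` usable here
conda activate fooocus
cd "$HOME/Fooocus"
python entry_with_update.py
```

Mark it executable once with `chmod +x run_fooocus.command`; double-clicking it in Finder then opens a Terminal window and starts the server.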
Getting the error below. Can someone please help? @lllyasviel @jorge-campo @huameiwei-vc
Set vram state to: SHARED
…
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
…
Your checkpoint is broken or in a format unknown to Fooocus. Re-download it or try another checkpoint.
The operator is not yet supported by Apple, that's all. You can (try to) tinker with it as much as you want.
With all the optimisation flags in place I'm still at 70 s/it on an M2 MacBook Pro, 1GB of RAM.
Adds specific Mac M1/M2 installation instructions for Fooocus.
The same instructions for Linux work on Mac. The only prerequisite is the PyTorch installation, as described in the procedure.
On my M1 Mac, the installation and first run completed without errors, and I could start generating images without any additional configuration.
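For context, the PyTorch prerequisite mentioned above is typically handled with something like the following, per Apple's accelerated-PyTorch-on-Mac guide (the channel and package names are assumptions and may have changed since this was written):

```bash
# Inside the activated conda environment, install PyTorch built with MPS support.
conda install pytorch torchvision torchaudio -c pytorch-nightly

# Sanity check: should print True on an Apple Silicon Mac with a recent macOS.
python -c "import torch; print(torch.backends.mps.is_available())"
```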