MinGW: clblast "variable length arrays are not supported", unicode issues in common.c #1581

Closed
asctime opened this issue May 24, 2023 · 8 comments

asctime commented May 24, 2023

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • [x] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • [x] I carefully followed the README.md.
  • [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • [x] I reviewed the Discussions, and have a new bug or useful enhancement to share.

Sorry, I am not sure how much of what I have to ask fits into your template. First, thank you for this project; many people are extremely excited about it, and I think several people from the MinGW community are looking at it (MinGW is just the gcc/clang toolchain on Windows). So, to my issue.
Essentially, I compiled llama.cpp commit 5ea4339 on Windows (MinGW64 gcc) against OpenCL headers 20200327. clinfo reports:

```
Number of platforms 2
Platform Name NVIDIA CUDA
Platform Vendor NVIDIA Corporation
Platform Version OpenCL 3.0 CUDA 11.6.134
...
```
I get a crash in main.exe:

```
Initializing CLBlast (First Run)...
Attempting to use: Platform=0, Device=0 (If invalid, program will crash)
Using Platform: NVIDIA CUDA Device: NVIDIA GeForce GTX 1650 Ti
:1:133: error: variable length arrays are not supported in OpenCL
```

I was hoping an experienced user could help me understand what "variable length arrays are not supported in OpenCL" implies for llama.cpp built with MinGW. Is it something that could theoretically be patched around?
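
For reference, the ":1:133:" prefix suggests the message comes from the OpenCL kernel compiler at run time (clBuildProgram), not from gcc. Below is a hypothetical sketch of the kind of construct it rejects and a common fixed-size alternative; this is not the actual ggml-opencl kernel source.

```c
// Hypothetical kernels only, not the actual ggml-opencl source.

// OpenCL C rejects arrays whose size is only known at run time:
__kernel void vla_example(__global const float *src, __global float *dst, const int n) {
    float tmp[n];                      // error: variable length arrays are not supported
    const size_t i = get_global_id(0);
    tmp[0] = src[i];
    dst[i] = tmp[0];
}

// The usual workaround is a compile-time constant (or __local memory whose
// size the host sets with clSetKernelArg(kernel, idx, bytes, NULL)):
#define BLOCK_SIZE 32
__kernel void fixed_example(__global const float *src, __global float *dst) {
    float tmp[BLOCK_SIZE];             // fixed size: accepted by the OpenCL compiler
    const size_t i = get_global_id(0);
    tmp[i % BLOCK_SIZE] = src[i];
    dst[i] = tmp[i % BLOCK_SIZE];
}
```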

Also, while I have anyone's ear, I thought it was worth mentioning that I am working on a MinGW patch. It is mainly focused on the project's existing, conflicting use of ASCII and Unicode (wchar) on Windows in the examples. I am sure mixing Unicode and ASCII all compiles on msvcrt, but gcc has a conniption. Basically, to support Unicode wide characters in MinGW you need to compile with -DUNICODE, link with -municode, and have a main called wmain, and everything has to be completely consistent (see the sketch below). I understand that most devs don't anticipate gcc on Windows, but it is actually a thing now and growing. I am still testing my patches; I just thought I should point it out.
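
For reference, a minimal sketch of the pattern I mean; the file name and build line are hypothetical examples, not the actual patch.

```c
/* demo.c -- hypothetical sketch of the consistent wide-character setup
   under MinGW.  Build with something like:
     gcc -DUNICODE -D_UNICODE -municode demo.c -o demo.exe */
#include <fcntl.h>
#include <io.h>
#include <stdio.h>
#include <wchar.h>

/* -municode makes the CRT look for wmain instead of main. */
int wmain(int argc, wchar_t *argv[]) {
    /* Put stdout into UTF-16 mode so wide strings outside the native
       code page render correctly in the console. */
    _setmode(_fileno(stdout), _O_U16TEXT);
    for (int i = 0; i < argc; ++i) {
        wprintf(L"arg %d: %ls\n", i, argv[i]);
    }
    return 0;
}
```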

asctime changed the title from "[User] Insert summary of your issue or enhancement.." to "MinGW: clblast "variable length arrays are not supported", unicode issues in common.c" May 24, 2023

SlyEcho commented May 24, 2023

Try a newer commit; the OpenCL code has been almost completely rewritten now.

> Basically, to support Unicode wide characters in MinGW you need to compile with -DUNICODE, link with -municode, and have a main called wmain.

Why is it necessary to use Unicode wide characters? The LLaMA tokenizer itself only works in UTF-8.

asctime commented May 24, 2023

Thanks, I will have a go with the new code.

Re UTF-8: OK, but there seem to be issues there. I'm still looking at it, but my experience is that anything beyond the native code page really should use wmain to render correctly in MinGW. It wasn't an issue in alpaca.cpp as long as I used the Windows 10 console just to interpret the color codes and terminal commands. Which reminds me: common.c actually excludes legacy console support? A dumb-terminal fallback for what is essentially a simple chat might be more what I am after, but that is a different topic.

SlyEcho commented May 24, 2023

I don't know how well this works, but Unicode support may be needed for reading non-ANSI file names?
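
One way that could work, as a hedged sketch only: keep paths as UTF-8 everywhere and widen them only at the CRT boundary. fopen_utf8 here is a hypothetical helper, not anything llama.cpp actually has, and the model path in the usage comment is just an example.

```c
/* Hypothetical helper, not llama.cpp code: open a UTF-8 path on Windows
   by converting it to UTF-16 only at the CRT boundary. */
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>

static FILE *fopen_utf8(const char *path_utf8, const wchar_t *mode) {
    /* Ask for the required UTF-16 buffer size (in wchar_t, incl. NUL). */
    int len = MultiByteToWideChar(CP_UTF8, 0, path_utf8, -1, NULL, 0);
    if (len <= 0) return NULL;
    wchar_t *wpath = (wchar_t *)malloc((size_t)len * sizeof(wchar_t));
    if (!wpath) return NULL;
    MultiByteToWideChar(CP_UTF8, 0, path_utf8, -1, wpath, len);
    FILE *f = _wfopen(wpath, mode);   /* handles non-ANSI file names */
    free(wpath);
    return f;
}

/* Usage sketch: FILE *f = fopen_utf8("models/ggml-model-q4_0.bin", L"rb"); */
```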

asctime commented May 31, 2023

I had a couple more hours free this week, so I tested the latest commit (as opposed to the latest release), as suggested. I can report that, in my experience, OpenCL support now works correctly, at least in the basic sense relevant to this issue. I realize the console issue is completely separate and will re-open once I have a reasonable patch to propose. Thank you. Good to close from my side.

SlyEcho commented May 31, 2023

OpenCL is going to have more updates; you can test them as well: #1653

asctime commented May 31, 2023

Great!! Thank you, I will do my best.

asctime commented Jun 16, 2023

Sorry I had another project to finish.
5dc35d3

I'll get to the rest now.

Update: @SlyEcho, I tested today's code and all is still well with CLBlast under MinGW64, broadly speaking, once #1897 is applied.

I am happy to keep testing but all good for now.

manyoso added a commit to nomic-ai/llama.cpp that referenced this issue Oct 28, 2023:
… with AMD radeon cards with lower workgroup count limit (Partially fixes ggerganov#1581)
cebtenzzre pushed a commit to cebtenzzre/llama.cpp that referenced this issue Nov 7, 2023:
… with AMD radeon cards with lower workgroup count limit (Partially fixes ggerganov#1581)
github-actions bot added the stale label Mar 25, 2024

github-actions bot commented Apr 9, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions bot closed this as completed Apr 9, 2024