
Fix for mingw compilers (including wx64devkit) - fixes #1423, fixes #1529 (#1462)

Merged: 1 commit merged into ggerganov:master on May 20, 2023

Conversation

@DannyDaemonic (Collaborator) commented May 15, 2023

It seems mingw's version of getwchar can get stuck in an infinite loop.

It's easy to reproduce with one small piece of code:
#include <windows.h>
#include <winnls.h>
#include <fcntl.h>
#include <wchar.h>
#include <stdio.h>
#include <io.h>

int main() {
    // Initialize for reading wchars and writing out UTF-8
    DWORD dwMode = 0;
    HANDLE hConsole = GetStdHandle(STD_OUTPUT_HANDLE);
    if (hConsole == INVALID_HANDLE_VALUE || !GetConsoleMode(hConsole, &dwMode)) {
        hConsole = GetStdHandle(STD_ERROR_HANDLE);
        if (hConsole != INVALID_HANDLE_VALUE && (!GetConsoleMode(hConsole, &dwMode))) {
            hConsole = NULL;
        }
    }
    if (hConsole) {
        SetConsoleMode(hConsole, dwMode | ENABLE_VIRTUAL_TERMINAL_PROCESSING);
        SetConsoleOutputCP(CP_UTF8);
    }
    HANDLE hConIn = GetStdHandle(STD_INPUT_HANDLE);
    if (hConIn != INVALID_HANDLE_VALUE && GetConsoleMode(hConIn, &dwMode)) {
        _setmode(_fileno(stdin), _O_WTEXT);
        dwMode &= ~(ENABLE_LINE_INPUT | ENABLE_ECHO_INPUT);
        SetConsoleMode(hConIn, dwMode);
    }

    // Echo input
    while (1) {
        // Read
        wchar_t wcs[2] = { getwchar(), L'\0' };
        if (wcs[0] == WEOF) break;
        if (wcs[0] >= 0xD800 && wcs[0] <= 0xDBFF) { // If we have a high surrogate...
            wcs[1] = getwchar(); // Read the low surrogate
            if (wcs[1] == WEOF) break;
        }
        // Write
        char utf8[5] = {0};
        int result = WideCharToMultiByte(CP_UTF8, 0, wcs, (wcs[1] == L'\0') ? 1 : 2, utf8, 4, NULL, NULL);
        if (result > 0) {
            printf("%s", utf8);
        }
    }
    return 0;
}
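For reference, the snippet can be saved as getwchar_repro.c (any name works) and built with w64devkit's gcc via gcc getwchar_repro.c -o getwchar_repro; the console functions it calls live in kernel32, which mingw links by default.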

The code works when compiled with Microsoft's compiler (and Intel's), but I guess getwchar is an obscure enough function that nobody uses it, so it goes untested on mingw. The underlying Windows console functions are much better tested and supported across compilers, so that's what I changed the code to use.
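For illustration, here is a minimal sketch of what reading input through the native console API looks like. It is not the exact diff in this PR; the helper name read_console_wchar is made up, and hConIn is assumed to be the STD_INPUT_HANDLE set up exactly as in the repro above (same headers, same console-mode setup):

// Read one UTF-16 code unit directly from the console with ReadConsoleW,
// bypassing the CRT's getwchar. Returns WEOF on end of input or error.
static wint_t read_console_wchar(HANDLE hConIn) {
    wchar_t wc = 0;
    DWORD nRead = 0;
    if (!ReadConsoleW(hConIn, &wc, 1, &nRead, NULL) || nRead == 0) {
        return WEOF;
    }
    return (wint_t) wc;
}

The surrogate-pair handling in the echo loop stays the same; only the getwchar calls would be swapped for a call like this.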

Edit: Confirmed to fix #1423 and #1529.

@DannyDaemonic changed the title from "Fix for mingw" to "Fix for mingw (fixes #1423)" on May 15, 2023
@DannyDaemonic (Collaborator, Author)

Verified: #1423 (comment)

@newTomas (Contributor)

Works for me

@DannyDaemonic changed the title from "Fix for mingw (fixes #1423)" to "Fix for mingw compilers (including wx64devkit) (fixes #1423)" on May 18, 2023
@Folko-Ven (Contributor)

Hi, everything was working before #1509, but the compilation now gives errors. Could you please request a review and get this merged?

@DannyDaemonic (Collaborator, Author)

Yeah, I've been sick, so most of my stuff has just been sitting.

You should open a new issue for the problem with #1509. I can't imagine it's related to this PR in any way other than involving the same compiler, mingw.

@Folko-Ven (Contributor)

Well, that's strange. I just tried git pull origin master again (at 175aff7a4e1735dc7d4b79f9b10b49b8faa32db0), compiled it, and it worked totally fine. I don't know why I got errors last time.

@DannyDaemonic (Collaborator, Author)

Maybe you just needed a make clean? Whenever I get a compile error after a pull, I do make clean && make.

@Folko-Ven (Contributor) commented May 18, 2023

Yes, that's possible. I usually update with a homemade PowerShell script that runs make clean and then make -B. I guess I forgot to do that when updating manually :)

@DannyDaemonic changed the title from "Fix for mingw compilers (including wx64devkit) (fixes #1423)" to "Fix for mingw compilers (including wx64devkit) - fixes #1423 fixes #1529" on May 19, 2023
@DannyDaemonic (Collaborator, Author)

I don't know who else to request a review from. Is anyone else familiar with the Windows API?

@Folko-Ven (Contributor) commented May 20, 2023

If there is no one else, maybe ask ggerganov? I have been using your fix for four days now and everything is working well.

@ggerganov (Owner) left a comment


I don't have Windows to test, but if people confirm it is working - please merge it

@DannyDaemonic DannyDaemonic merged commit d2c59b8 into ggerganov:master May 20, 2023
ggerganov pushed a commit to JohannesGaessler/llama.cpp that referenced this pull request May 20, 2023
ggerganov added a commit that referenced this pull request May 20, 2023
…oadcasting for ggml_mul (#1483)

* Broadcasting for ggml_mul

* CUDA kernel for ggml_mul, norms in VRAM

* GPU weights not in RAM, direct loading with cuFile

* fixup! GPU weights not in RAM, direct loading with cuFile

* fixup! GPU weights not in RAM, direct loading with cuFile

* define default model path once, sync path with readme (#1366)

* ~7% faster Q5_1 AVX2 code (#1477)

* convert.py: Support models which are stored in a single pytorch_model.bin (#1469)

* Support models in a single pytorch_model.bin

* Remove spurious line with typo

* benchmark-matmul: Print the average of the test results (#1490)

* Remove unused n_parts parameter (#1509)

* Fixes #1511 lambda issue for w64devkit (mingw) (#1513)

* Fix for w64devkit and mingw

* make kv_f16 the default for api users (#1517)

* minor : fix compile warnings

* readme : adds WizardLM to the list of supported models (#1485)

* main : make reverse prompt option act as a stop token in non-interactive mode (#1032)

* Make reverse prompt option act as a stop token in non-interactive scenarios

* Making requested review changes

* Update gpt_params_parse and fix a merge error

* Revert "Update gpt_params_parse and fix a merge error"

This reverts commit 2bb2ff1.

* Update gpt_params_parse and fix a merge error take 2

* examples : add persistent chat (#1495)

* examples : add persistent chat

* examples : fix whitespace

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* tests : add missing header

* ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508)

* ggml : use F16 instead of F32 in Q4_0, Q4_1 and Q8_0

* llama : bump LLAMA_FILE_VERSION to 3

* cuda : update Q4 and Q8 dequantize kernels

* ggml : fix AVX dot products

* readme : update performance table + hot topics

* ggml : fix scalar implementation of Q4_1 dot

* llama : fix compile warnings in llama_set_state_data()

* llama : fix name shadowing and C4146 (#1526)

* Fix name shadowing and C4146

* Fix if macros not using defined when required

* Update llama-util.h

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* Update llama-util.h

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* Code style

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Fix for mingw (#1462)

* llama : add llama_init_backend() API (close #1527)

* feature : add blis and other BLAS implementation support (#1502)

* feature: add blis support

* feature: allow all BLA_VENDOR to be assigned in cmake arguments. align with whisper.cpp pr 927

* fix: version detection for BLA_SIZEOF_INTEGER, recover min version of cmake

* Fix typo in INTEGER

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Revert "feature : add blis and other BLAS implementation support (#1502)"

This reverts commit 07e9ace.

* GPU weights not in RAM, direct loading with cuFile

* llama : code style fixes + progress print fix

* ggml : ggml_mul better broadcast support

* cmake : workarounds for cufile when CMake version < 3.25

* gg rebase fixup

* Loop in llama.cpp, fixed progress callback

* Attempt clang-tidy fix

* llama : fix vram size computation

* Add forgotten fclose()

---------

Co-authored-by: András Salamon <ott2@users.noreply.github.com>
Co-authored-by: Ilya Kurdyukov <59548320+ilyakurdyukov@users.noreply.github.com>
Co-authored-by: Tom Jobbins <784313+TheBloke@users.noreply.github.com>
Co-authored-by: rankaiyx <rankaiyx@rankaiyx.com>
Co-authored-by: Stephan Walter <stephan@walter.name>
Co-authored-by: DannyDaemonic <DannyDaemonic@gmail.com>
Co-authored-by: Erik Scholz <Green-Sky@users.noreply.github.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: David Kennedy <dakennedyd@gmail.com>
Co-authored-by: Jason McCartney <jmac@theroot.org>
Co-authored-by: Evan Jones <evan.q.jones@gmail.com>
Co-authored-by: Maxime <672982+maximegmd@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Zenix <zenixls2@gmail.com>