This repository has been archived by the owner on Jan 23, 2023. It is now read-only.

[release/2.1] Replace hex number representation in ASM files #26136

Merged: 1 commit merged into dotnet:release/2.1 on Sep 12, 2019

Conversation

@omajid (Member) commented Aug 12, 2019

Description

.NET Core 2.1 does not build with new LLVM versions (e.g. llvm 8 on Fedora 30).

Customer Impact

Anybody who wants to build .NET Core 2.1 from source on newer distros needs to carry and maintain a patch with a workaround. It would be beneficial for everybody to have this patch included in release/2.1 (see the comment from Red Hat below: #26136 (comment)).

Regression?

No.

Risk

Low. This is a trivial find-and-replace of the constant-literal format in asm code.


This commit fixes coreclr to build with newer versions of llvm (tested with llvm 8 on Fedora 30).

These recent versions of llvm (as well as GCC) do not accept values like 20h as valid integer literals:

src/debug/ee/amd64/dbghelpers.S:32:21: error: unknown token in expression
        add rsp, 20h
                    ^
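
The fix rewrites such literals in the 0x-prefix style that both LLVM and GCC accept; the line above, for example, becomes:

        add rsp, 0x20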

This was reported as a bug to llvm upstream and they explicitly rejected supporting these literals: https://reviews.llvm.org/D59810

This is a partial backport of cbd672e (PR #22810, which was about adding compatibility with GCC 5), with some modifications so that it compiles.

One open question: this is enough for the build to succeed on my machine, but there are several other uses of constants with the h suffix in various files. Should I replace them with the 0x-prefix style too?

@janvorli (Member)

"there are several other uses of constants with suffix h in various files"

Are these in any of the .S files or in files included from those?

@omajid (Member, Author) commented Aug 12, 2019

Yes. For example, there are still a 0Bh and a 0FFh left in src/vm/amd64/jithelpers_fast.S around line 180.

But I don't know how to find a comprehensive list. Does this look okay?

find -iname '*.S' | xargs egrep '\W[0-9a-fA-F]+h\W'
./src/debug/ee/amd64/dbghelpers.S:        // We used to do an "alloc_stack 0h" because the stack has been allocated for us
./src/vm/i386/jithelp.S:    #define arg1 [esp + 0Ch]
./src/vm/i386/jithelp.S:    or      ah, 0Ch                 // turn on OE and DE flags
./src/vm/i386/jithelp.S:    fld     QWORD PTR [esp + 0Ch]   // fetch arg
./src/vm/i386/jithelp.S:    cmp     BYTE PTR [edx + 0F0F0F0F0h], 0FFh
./src/vm/i386/jithelp.S:    mov     BYTE PTR [edx+0F0F0F0F0h], 0FFh
./src/vm/i386/jithelp.S:    cmp     BYTE PTR [edx + 0F0F0F0F0h], 0FFh
./src/vm/i386/jithelp.S:    mov     BYTE PTR [edx + 0F0F0F0F0h], 0FFh
./src/vm/i386/asmhelpers.S:    and     ax, 0f00h     // preserve precision and rounding control
./src/vm/i386/asmhelpers.S:    or      ax, 007fh     // mask all exceptions
./src/vm/i386/asmhelpers.S:    xor     ecx, 200000h  // Invert the ID bit
./src/vm/i386/asmhelpers.S:    // Note that some multi-procs have different stepping number for each proc
./src/vm/i386/asmhelpers.S:    mov     eax, 0400h   // report 486
./src/vm/i386/asmhelpers.S:    xor     ecx, 200000h // Invert the ID bit.
./src/vm/i386/asmhelpers.S:    // now we must push each field of the ArgumentRegister structure
./src/vm/arm64/asmhelpers.S:// and each entry can be written atomically
./src/vm/arm/asmhelpers.S://    -0Ch   gsCookie
./src/vm/arm/asmhelpers.S://    -08h   __VFN_table
./src/vm/arm/asmhelpers.S://    -04h   m_Next
./src/vm/arm/asmhelpers.S://    +00h   m_calleeSavedRgisters.r4
./src/vm/arm/asmhelpers.S://    +04h                        .r5
./src/vm/arm/asmhelpers.S://    +08h                        .r6
./src/vm/arm/asmhelpers.S://    +0Ch                        .r7
./src/vm/arm/asmhelpers.S://    +10h                        .r8
./src/vm/arm/asmhelpers.S://    +14h                        .r9
./src/vm/arm/asmhelpers.S://    +18h                        .r10
./src/vm/arm/asmhelpers.S://    +1Ch                        .r11
./src/vm/arm/asmhelpers.S://    +20h                        .r14 -or- m_ReturnAddress
./src/vm/arm/asmhelpers.S:    // There 4 versions of each write barriers. A 2x2 combination of multi-proc/single-proc and pre/post grow version
./src/vm/amd64/jithelpers_fast.S:        cmp     esi, dword ptr [rdi + OFFSETOF__PtrArray__m_NumComponents] // 8h -> array size offset
./src/vm/amd64/jithelpers_fast.S:        mov     rcx, [r10 + OFFSETOF__MethodTable__m_ElementType]   // 10h -> typehandle offset
find -iname '*.asm' -or -iname '*.S' -or -iname '*.inc' | xargs egrep '\W[0-9a-fA-F]+h\W'
./src/debug/ee/amd64/dbghelpers.asm:        ; We used to do an "alloc_stack 0h" because the stack has been allocated for us
./src/debug/ee/amd64/dbghelpers.asm:        mov     rax, [rsp + 20h]
./src/debug/ee/amd64/dbghelpers.asm:        mov     rax, [rsp + 28h]
./src/debug/ee/amd64/dbghelpers.asm:        mov     [rsp + 8h], rax
./src/debug/ee/amd64/dbghelpers.asm:        mov     rax, [rsp + 30h]
./src/debug/ee/amd64/dbghelpers.asm:        mov     [rsp + 10h], rax
./src/debug/ee/amd64/dbghelpers.asm:        mov     rax, [rsp + 38h]
./src/debug/ee/amd64/dbghelpers.asm:        mov     [rsp + 18h], rax
./src/debug/ee/amd64/dbghelpers.S:        // We used to do an "alloc_stack 0h" because the stack has been allocated for us
./src/vm/i386/jithelp.S:    #define arg1 [esp + 0Ch]
./src/vm/i386/jithelp.S:    or      ah, 0Ch                 // turn on OE and DE flags
./src/vm/i386/jithelp.S:    fld     QWORD PTR [esp + 0Ch]   // fetch arg
./src/vm/i386/jithelp.S:    cmp     BYTE PTR [edx + 0F0F0F0F0h], 0FFh
./src/vm/i386/jithelp.S:    mov     BYTE PTR [edx+0F0F0F0F0h], 0FFh
./src/vm/i386/jithelp.S:    cmp     BYTE PTR [edx + 0F0F0F0F0h], 0FFh
./src/vm/i386/jithelp.S:    mov     BYTE PTR [edx + 0F0F0F0F0h], 0FFh
./src/vm/i386/asmhelpers.S:    and     ax, 0f00h     // preserve precision and rounding control
./src/vm/i386/asmhelpers.S:    or      ax, 007fh     // mask all exceptions
./src/vm/i386/asmhelpers.S:    xor     ecx, 200000h  // Invert the ID bit
./src/vm/i386/asmhelpers.S:    // Note that some multi-procs have different stepping number for each proc
./src/vm/i386/asmhelpers.S:    mov     eax, 0400h   // report 486
./src/vm/i386/asmhelpers.S:    xor     ecx, 200000h // Invert the ID bit.
./src/vm/i386/asmhelpers.S:    // now we must push each field of the ArgumentRegister structure
./src/vm/i386/jithelp.asm:        add ecx,7fffffffh               ; if difference>0 then increment integer
./src/vm/i386/jithelp.asm:        add ecx,7fffffffh               ; if difference<0 then decrement integer
./src/vm/i386/jithelp.asm:arg1	equ	<[esp+0Ch]>
./src/vm/i386/jithelp.asm:    or	ah, 0Ch                     ; turn on OE and DE flags
./src/vm/i386/jithelp.asm:arg1	equ	<[esp+0Ch]>
./src/vm/i386/jithelp.asm:        cmp     byte ptr [edx+0F0F0F0F0h], 0FFh
./src/vm/i386/jithelp.asm:        mov     byte ptr [edx+0F0F0F0F0h], 0FFh
./src/vm/i386/jithelp.asm:        cmp     byte ptr [edx+0F0F0F0F0h], 0FFh
./src/vm/i386/jithelp.asm:        mov     byte ptr [edx+0F0F0F0F0h], 0FFh
./src/vm/i386/jithelp.asm:        db (48) DUP (0CCh)
./src/vm/i386/asmhelpers.asm:        and ctrlWord, 0f00h     ; preserve precision and rounding control
./src/vm/i386/asmhelpers.asm:        or  ctrlWord, 007fh     ; mask all exceptions
./src/vm/i386/asmhelpers.asm:        mov     [ecx-0Ch], eax
./src/vm/i386/asmhelpers.asm:        mov     [ecx-10h], eax
./src/vm/i386/asmhelpers.asm:        lea     esp, [ecx-10h]
./src/vm/i386/asmhelpers.asm:        xor     ecx, 200000h ; Invert the ID bit.
./src/vm/i386/asmhelpers.asm:        ; Note that some multi-procs have different stepping number for each proc
./src/vm/i386/asmhelpers.asm:        mov     eax, 0400h ; report 486
./src/vm/i386/asmhelpers.asm:        xor     ecx, 200000h ; Invert the ID bit.
./src/vm/i386/asmhelpers.asm:        ; now we must push each field of the ArgumentRegister structure
./src/vm/i386/asmhelpers.asm:    and     ax, 3800h    ; Check the top-of-fp-stack bits
./src/vm/i386/asmhelpers.asm:.errnz (StackImbalanceCookie__HAS_FP_RETURN_VALUE AND 00ffffffh), HAS_FP_RETURN_VALUE has changed - update asm code
./src/vm/i386/asmhelpers.asm:    offset_pInputStack          equ 0Ch 
./src/vm/i386/asmhelpers.asm:    offset_pOutputStackOffsets  equ 14h 
./src/vm/i386/asmhelpers.asm:    ; In each case, the data fits in 32 bits. Instead, we use the upper half of 
./src/vm/arm64/asmhelpers.S:// and each entry can be written atomically
./src/vm/arm/asmhelpers.S://    -0Ch   gsCookie
./src/vm/arm/asmhelpers.S://    -08h   __VFN_table
./src/vm/arm/asmhelpers.S://    -04h   m_Next
./src/vm/arm/asmhelpers.S://    +00h   m_calleeSavedRgisters.r4
./src/vm/arm/asmhelpers.S://    +04h                        .r5
./src/vm/arm/asmhelpers.S://    +08h                        .r6
./src/vm/arm/asmhelpers.S://    +0Ch                        .r7
./src/vm/arm/asmhelpers.S://    +10h                        .r8
./src/vm/arm/asmhelpers.S://    +14h                        .r9
./src/vm/arm/asmhelpers.S://    +18h                        .r10
./src/vm/arm/asmhelpers.S://    +1Ch                        .r11
./src/vm/arm/asmhelpers.S://    +20h                        .r14 -or- m_ReturnAddress
./src/vm/arm/asmhelpers.S:    // There 4 versions of each write barriers. A 2x2 combination of multi-proc/single-proc and pre/post grow version
./src/vm/arm/asmhelpers.asm:;    -0Ch   gsCookie
./src/vm/arm/asmhelpers.asm:;    -08h   __VFN_table
./src/vm/arm/asmhelpers.asm:;    -04h   m_Next
./src/vm/arm/asmhelpers.asm:;    +00h   m_calleeSavedRgisters.r4
./src/vm/arm/asmhelpers.asm:;    +04h                        .r5
./src/vm/arm/asmhelpers.asm:;    +08h                        .r6
./src/vm/arm/asmhelpers.asm:;    +0Ch                        .r7
./src/vm/arm/asmhelpers.asm:;    +10h                        .r8
./src/vm/arm/asmhelpers.asm:;    +14h                        .r9
./src/vm/arm/asmhelpers.asm:;    +18h                        .r10
./src/vm/arm/asmhelpers.asm:;    +1Ch                        .r11
./src/vm/arm/asmhelpers.asm:;    +20h                        .r14 -or- m_ReturnAddress
./src/vm/arm/asmhelpers.asm:    ; As we assemble each write barrier function we build a descriptor for the offsets within that function
./src/vm/arm/asmhelpers.asm:    ; each write barrier that need to be modified dynamically.
./src/vm/arm/asmhelpers.asm:        ; have this limitation purely because we only record one offset for each GC global).
./src/vm/arm/asmhelpers.asm:    ; There 4 versions of each write barriers. A 2x2 combination of multi-proc/single-proc and pre/post grow version
./src/vm/amd64/ExternalMethodFixupThunk.asm:        PROLOG_WITH_TRANSITION_BLOCK 0, 10h, r8, r9
./src/vm/amd64/ExternalMethodFixupThunk.asm:        PROLOG_WITH_TRANSITION_BLOCK 8h, 10h, r8, r9
./src/vm/amd64/CrtHelpers.asm:        mov     r9, 0101010101010101h   
./src/vm/amd64/CrtHelpers.asm:        and     r8, 7fh                 ; and r8 with 0111 1111
./src/vm/amd64/CrtHelpers.asm:        and     r8, 3fh                 ; and with 0011 1111 
./src/vm/amd64/CrtHelpers.asm:        and     r8, 3fh                 ; and with 0011 1111 
./src/vm/amd64/VirtualCallStubAMD64.asm:        mov     rax, [rax+18h]   ;; get the next entry in the chain (don't bother checking the first entry again)
./src/vm/amd64/VirtualCallStubAMD64.asm:        cmp    rdx, [rax+00h]    ;; compare our MT with the one in the ResolveCacheElem
./src/vm/amd64/VirtualCallStubAMD64.asm:        cmp    r10, [rax+08h]    ;; compare our DispatchToken with one in the ResolveCacheElem
./src/vm/amd64/VirtualCallStubAMD64.asm:        mov    rax, [rax+10h]    ;; get the ImplTarget
./src/vm/amd64/RedirectedHandledJITCase.asm:        alloc_stack     28h                     ; CONTEXT*, callee scratch area
./src/vm/amd64/RedirectedHandledJITCase.asm:        mov             [rbp+20h], rax
./src/vm/amd64/RedirectedHandledJITCase.asm:.errnz REDIRECTSTUB_RBP_OFFSET_CONTEXT - 20h, REDIRECTSTUB_RBP_OFFSET_CONTEXT has changed - update asm stubs
./src/vm/amd64/RedirectedHandledJITCase.asm:        mov             [rbp+30h], rax
./src/vm/amd64/RedirectedHandledJITCase.asm:        save_reg_postrsp    rcx, REDIRECT_FOR_THROW_CONTROL_FRAME_SIZE + 8h     ; FaultingExceptionFrame
./src/vm/amd64/RedirectedHandledJITCase.asm:        save_reg_postrsp    rdx, REDIRECT_FOR_THROW_CONTROL_FRAME_SIZE + 10h    ; Original RSP
./src/vm/amd64/RedirectedHandledJITCase.asm:        mov             rdx, [rsp + REDIRECT_FOR_THROW_CONTROL_FRAME_SIZE + 10h] ; Original RSP
./src/vm/amd64/RedirectedHandledJITCase.asm:        mov             rcx, [rsp + REDIRECT_FOR_THROW_CONTROL_FRAME_SIZE + 8h] ; FaultingExceptionFrame
./src/vm/amd64/JitHelpers_InlineGetThread.asm:        sub     ecx, 18h  ; sizeof(ObjHeader) + sizeof(Object) + last slot
./src/vm/amd64/UMThunkStub.asm:        mov             rcx, [rsp + TheUMEntryPrestub_STACK_FRAME_SIZE + 8h]
./src/vm/amd64/UMThunkStub.asm:        mov             rdx, [rsp + TheUMEntryPrestub_STACK_FRAME_SIZE + 10h]
./src/vm/amd64/UMThunkStub.asm:        mov             r8,  [rsp + TheUMEntryPrestub_STACK_FRAME_SIZE + 18h]
./src/vm/amd64/UMThunkStub.asm:        mov             r9,  [rsp + TheUMEntryPrestub_STACK_FRAME_SIZE + 20h]
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmm1, xmmword ptr [rsp + TheUMEntryPrestub_XMM_SAVE_OFFSET + 10h]
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmm2, xmmword ptr [rsp + TheUMEntryPrestub_XMM_SAVE_OFFSET + 20h]
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmm3, xmmword ptr [rsp + TheUMEntryPrestub_XMM_SAVE_OFFSET + 30h]
./src/vm/amd64/UMThunkStub.asm:        mov             [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET +  0h], rcx
./src/vm/amd64/UMThunkStub.asm:        mov             [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET +  8h], rdx
./src/vm/amd64/UMThunkStub.asm:        mov             [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET + 10h], r8
./src/vm/amd64/UMThunkStub.asm:        mov             [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET + 18h], r9
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmmword ptr[rbp + UMThunkStubAMD64_XMM_SAVE_OFFSET +  0h], xmm0
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmmword ptr[rbp + UMThunkStubAMD64_XMM_SAVE_OFFSET + 10h], xmm1
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmmword ptr[rbp + UMThunkStubAMD64_XMM_SAVE_OFFSET + 20h], xmm2
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmmword ptr[rbp + UMThunkStubAMD64_XMM_SAVE_OFFSET + 30h], xmm3
./src/vm/amd64/UMThunkStub.asm:        mov             rcx,  [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET +  0h] 
./src/vm/amd64/UMThunkStub.asm:        mov             rdx,  [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET +  8h] 
./src/vm/amd64/UMThunkStub.asm:        mov             r8,   [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET + 10h] 
./src/vm/amd64/UMThunkStub.asm:        mov             r9,   [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET + 18h]
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmm0, xmmword ptr [rbp + UMThunkStubAMD64_XMM_SAVE_OFFSET +  0h]
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmm1, xmmword ptr [rbp + UMThunkStubAMD64_XMM_SAVE_OFFSET + 10h]
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmm2, xmmword ptr [rbp + UMThunkStubAMD64_XMM_SAVE_OFFSET + 20h]
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmm3, xmmword ptr [rbp + UMThunkStubAMD64_XMM_SAVE_OFFSET + 30h]
./src/vm/amd64/UMThunkStub.asm:        mov             [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET +  0h], rcx
./src/vm/amd64/UMThunkStub.asm:        mov             [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET +  8h], rdx
./src/vm/amd64/UMThunkStub.asm:        mov             [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET + 10h], r8
./src/vm/amd64/UMThunkStub.asm:        mov             [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET + 18h], r9
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmmword ptr [rbp + UMThunkStubAMD64_XMM_SAVE_OFFSET +  0h], xmm0
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmmword ptr [rbp + UMThunkStubAMD64_XMM_SAVE_OFFSET + 10h], xmm1
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmmword ptr [rbp + UMThunkStubAMD64_XMM_SAVE_OFFSET + 20h], xmm2
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmmword ptr [rbp + UMThunkStubAMD64_XMM_SAVE_OFFSET + 30h], xmm3
./src/vm/amd64/UMThunkStub.asm:        mov             rcx,  [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET +  0h] 
./src/vm/amd64/UMThunkStub.asm:        mov             rdx,  [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET +  8h] 
./src/vm/amd64/UMThunkStub.asm:        mov             r8,   [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET + 10h] 
./src/vm/amd64/UMThunkStub.asm:        mov             r9,   [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET + 18h]
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmm0, xmmword ptr [rbp + UMThunkStubAMD64_XMM_SAVE_OFFSET +  0h]
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmm1, xmmword ptr [rbp + UMThunkStubAMD64_XMM_SAVE_OFFSET + 10h]
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmm2, xmmword ptr [rbp + UMThunkStubAMD64_XMM_SAVE_OFFSET + 20h]
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmm3, xmmword ptr [rbp + UMThunkStubAMD64_XMM_SAVE_OFFSET + 30h]
./src/vm/amd64/UMThunkStub.asm:        ; rax = cbStackArgs (with 20h for register args subtracted out already)
./src/vm/amd64/UMThunkStub.asm:        mov             [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET +  0h], rcx
./src/vm/amd64/UMThunkStub.asm:        mov             [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET +  8h], rdx
./src/vm/amd64/UMThunkStub.asm:        mov             [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET + 10h], r8
./src/vm/amd64/UMThunkStub.asm:        mov             rcx, [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET +  0h]
./src/vm/amd64/UMThunkStub.asm:        mov             rdx, [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET +  8h]
./src/vm/amd64/UMThunkStub.asm:        mov             r8, [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET + 10h]
./src/vm/amd64/UMThunkStub.asm:        mov             [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET +  0h], rcx
./src/vm/amd64/UMThunkStub.asm:        mov             [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET +  8h], rdx
./src/vm/amd64/UMThunkStub.asm:        mov             [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET + 10h], r8
./src/vm/amd64/UMThunkStub.asm:        mov             [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET + 18h], r9
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmmword ptr [rbp + UMThunkStubAMD64_XMM_SAVE_OFFSET +  0h], xmm0
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmmword ptr [rbp + UMThunkStubAMD64_XMM_SAVE_OFFSET + 10h], xmm1
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmmword ptr [rbp + UMThunkStubAMD64_XMM_SAVE_OFFSET + 20h], xmm2
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmmword ptr [rbp + UMThunkStubAMD64_XMM_SAVE_OFFSET + 30h], xmm3
./src/vm/amd64/UMThunkStub.asm:        mov             rax,  [rbp + UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET +  0h]
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmm0, xmmword ptr [rbp + UMThunkStubAMD64_XMM_SAVE_OFFSET +  0h]
./src/vm/amd64/UMThunkStub.asm:;       Thread *pThread);               ; [entry_sp + 28h]
./src/vm/amd64/UMThunkStub.asm:        mov             rcx, [rsi +  0h]
./src/vm/amd64/UMThunkStub.asm:        mov             rdx, [rsi +  8h]
./src/vm/amd64/UMThunkStub.asm:        mov             r8,  [rsi + 10h]
./src/vm/amd64/UMThunkStub.asm:        mov             r9,  [rsi + 18h]
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmm0, xmmword ptr [rsi + UMThunkStubAMD64_XMM_SAVE_OFFSET - UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET +  0h]
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmm1, xmmword ptr [rsi + UMThunkStubAMD64_XMM_SAVE_OFFSET - UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET + 10h]
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmm2, xmmword ptr [rsi + UMThunkStubAMD64_XMM_SAVE_OFFSET - UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET + 20h]
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmm3, xmmword ptr [rsi + UMThunkStubAMD64_XMM_SAVE_OFFSET - UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET + 30h]
./src/vm/amd64/UMThunkStub.asm:        mov             [rsi + 0h], rax
./src/vm/amd64/UMThunkStub.asm:        movdqa          xmmword ptr [rsi + UMThunkStubAMD64_XMM_SAVE_OFFSET - UMThunkStubAMD64_ARGUMENTS_STACK_HOME_OFFSET +  0h], xmm0
./src/vm/amd64/UMThunkStub.asm:        ; rdx = cbStackArgs (with 20h for register args subtracted out already)
./src/vm/amd64/JitHelpers_FastWriteBarriers.asm:        shr     rax, 0Ch ; SoftwareWriteWatch::AddressToTableByteIndexShift
./src/vm/amd64/JitHelpers_FastWriteBarriers.asm:        shr     rax, 0Ch ; SoftwareWriteWatch::AddressToTableByteIndexShift
./src/vm/amd64/JitHelpers_FastWriteBarriers.asm:        shr     rax, 0Ch ; SoftwareWriteWatch::AddressToTableByteIndexShift
./src/vm/amd64/AsmHelpers.asm:        add     rsp, 28h        ; pop callee scratch area
./src/vm/amd64/AsmHelpers.asm:        add     rsp, 28h        ; pop callee scratch area
./src/vm/amd64/AsmHelpers.asm:        mov             rcx, [rsp + 70h]
./src/vm/amd64/AsmHelpers.asm:        mov             rdx, [rsp + 78h]
./src/vm/amd64/AsmHelpers.asm:        mov             r8,  [rsp + 80h]
./src/vm/amd64/AsmHelpers.asm:        mov             r9,  [rsp + 88h]
./src/vm/amd64/AsmHelpers.asm:        mov             r11, [rsp + 60h]
./src/vm/amd64/AsmHelpers.asm:        movdqa          xmm0, [rsp + 20h]
./src/vm/amd64/AsmHelpers.asm:        movdqa          xmm1, [rsp + 30h]
./src/vm/amd64/AsmHelpers.asm:        movdqa          xmm2, [rsp + 40h]
./src/vm/amd64/AsmHelpers.asm:        movdqa          xmm3, [rsp + 50h]
./src/vm/amd64/AsmHelpers.asm:    movdqa      [rsp+20h], xmm0     ; Save xmm0  
./src/vm/amd64/AsmHelpers.asm:    mov         [rsp+30h], rax      ; Save rax  
./src/vm/amd64/AsmHelpers.asm:    movdqa      xmm0, [rsp+20h]     ; Restore xmm0	
./src/vm/amd64/AsmHelpers.asm:    mov         rax,  [rsp+30h]     ; Restore rax  
./src/vm/amd64/AsmHelpers.asm:        mov     [rsp+10h], rdx
./src/vm/amd64/AsmHelpers.asm:        movsd   xmm0, real8 ptr [rsp+10h]
./src/vm/amd64/AsmHelpers.asm:        mov     [rsp+10h], rdx
./src/vm/amd64/AsmHelpers.asm:        movss   xmm0, real4 ptr [rsp+10h]
./src/vm/amd64/AsmHelpers.asm:        alloc_stack     20h + SIZEOF__CONTEXT
./src/vm/amd64/AsmHelpers.asm:        alloc_stack         30h ; make extra room for xmm0
./src/vm/amd64/AsmHelpers.asm:        movdqa              xmm0, [rsp + 20h]
./src/vm/amd64/AsmHelpers.asm:SIZEOF_PROFILE_PLATFORM_SPECIFIC_DATA   equ 8h*11 + 4h*2    ; includes fudge to make FP_SPILL right
./src/vm/amd64/AsmHelpers.asm:SIZEOF_OUTGOING_ARGUMENT_HOMES          equ 8h*4
./src/vm/amd64/AsmHelpers.asm:SIZEOF_FP_ARG_SPILL                     equ 10h*1
./src/vm/amd64/AsmHelpers.asm:        lea                     rax, [rsp + 10h]    ; caller rsp
./src/vm/amd64/AsmHelpers.asm:        mov                     r10, [rax - 8h]     ; return address
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA +  0h], r8     ; r8 is null      -- struct functionId field
./src/vm/amd64/AsmHelpers.asm:        save_reg_postrsp        rbp, OFFSETOF_PLATFORM_SPECIFIC_DATA +    8h          ;                 -- struct rbp field
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 10h], rax    ; caller rsp      -- struct probeRsp field
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 18h], r10    ; return address  -- struct ip field
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 20h], rdx    ;                 -- struct profiledRsp field
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 28h], r8     ; r8 is null      -- struct rax field
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 30h], r8     ; r8 is null      -- struct hiddenArg field
./src/vm/amd64/AsmHelpers.asm:        movsd                   real8 ptr [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 38h], xmm0    ;      -- struct flt0 field
./src/vm/amd64/AsmHelpers.asm:        movsd                   real8 ptr [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 40h], xmm1    ;      -- struct flt1 field
./src/vm/amd64/AsmHelpers.asm:        movsd                   real8 ptr [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 48h], xmm2    ;      -- struct flt2 field
./src/vm/amd64/AsmHelpers.asm:        movsd                   real8 ptr [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 50h], xmm3    ;      -- struct flt3 field
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 58h], r10d   ; flags    ;      -- struct flags field
./src/vm/amd64/AsmHelpers.asm:        movdqa                  xmm0, [rsp + OFFSETOF_FP_ARG_SPILL +  0h]
./src/vm/amd64/AsmHelpers.asm:        lea                     r10, [rsp + 10h]    ; caller rsp
./src/vm/amd64/AsmHelpers.asm:        mov                     r11, [r10 - 8h]     ; return address
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA +  0h], r8     ; r8 is null      -- struct functionId field      
./src/vm/amd64/AsmHelpers.asm:        save_reg_postrsp        rbp, OFFSETOF_PLATFORM_SPECIFIC_DATA +    8h          ;                 -- struct rbp field
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 10h], r10    ; caller rsp      -- struct probeRsp field
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 18h], r11    ; return address  -- struct ip field
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 20h], rdx    ;                 -- struct profiledRsp field
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 28h], rax    ; return value    -- struct rax field
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 30h], r8     ; r8 is null      -- struct hiddenArg field
./src/vm/amd64/AsmHelpers.asm:        movsd                   real8 ptr [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 38h], xmm0    ;      -- struct flt0 field
./src/vm/amd64/AsmHelpers.asm:        movsd                   real8 ptr [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 40h], xmm1    ;      -- struct flt1 field
./src/vm/amd64/AsmHelpers.asm:        movsd                   real8 ptr [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 48h], xmm2    ;      -- struct flt2 field
./src/vm/amd64/AsmHelpers.asm:        movsd                   real8 ptr [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 50h], xmm3    ;      -- struct flt3 field
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 58h], r10d   ; flags           -- struct flags field
./src/vm/amd64/AsmHelpers.asm:        movdqa                  xmm0, [rsp + OFFSETOF_FP_ARG_SPILL +  0h]
./src/vm/amd64/AsmHelpers.asm:        lea                     rax, [rsp + 10h]    ; caller rsp
./src/vm/amd64/AsmHelpers.asm:        mov                     r11, [rax - 8h]     ; return address
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA +  0h], r8     ; r8 is null      -- struct functionId field
./src/vm/amd64/AsmHelpers.asm:        save_reg_postrsp        rbp, OFFSETOF_PLATFORM_SPECIFIC_DATA +    8h          ;                 -- struct rbp field
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 10h], rax    ; caller rsp      -- struct probeRsp field 
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 18h], r11    ; return address  -- struct ip field
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 20h], rdx    ;                 -- struct profiledRsp field
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 28h], r8     ; r8 is null      -- struct rax field
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 30h], r8     ; r8 is null      -- struct hiddenArg field 
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 38h], r8     ; r8 is null      -- struct flt0 field
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 40h], r8     ; r8 is null      -- struct flt1 field 
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 48h], r8     ; r8 is null      -- struct flt2 field
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 50h], r8     ; r8 is null      -- struct flt3 field
./src/vm/amd64/AsmHelpers.asm:        mov                     [rsp + OFFSETOF_PLATFORM_SPECIFIC_DATA + 58h], r10d   ; flags           -- struct flags field
./src/vm/amd64/AsmHelpers.asm:        movdqa                  xmm0, [rsp + OFFSETOF_FP_ARG_SPILL +  0h]
./src/vm/amd64/jithelpers_fast.S:        cmp     esi, dword ptr [rdi + OFFSETOF__PtrArray__m_NumComponents] // 8h -> array size offset
./src/vm/amd64/jithelpers_fast.S:        mov     rcx, [r10 + OFFSETOF__MethodTable__m_ElementType]   // 10h -> typehandle offset
./src/vm/amd64/InstantiatingStub.asm:                                        18h + 8h ; +8 for stack alignment padding
./src/vm/amd64/InstantiatingStub.asm:                                        SIZEOF_CalleeSavedRegisters + 8h ; +8 for return address
./src/vm/amd64/InstantiatingStub.asm:; + 8h  callee scratch
./src/vm/amd64/InstantiatingStub.asm:; +10h  callee scratch
./src/vm/amd64/InstantiatingStub.asm:; +18h  callee scratch
./src/vm/amd64/InstantiatingStub.asm:; + 8h      entrypoint of shared MethodDesc
./src/vm/amd64/InstantiatingStub.asm:; +10h      extra stack param
./src/vm/amd64/InstantiatingStub.asm:; +18h      padding
./src/vm/amd64/InstantiatingStub.asm:; +20h      gsCookie
./src/vm/amd64/InstantiatingStub.asm:; +28h      __VFN_table
./src/vm/amd64/InstantiatingStub.asm:; +30h      m_Next
./src/vm/amd64/InstantiatingStub.asm:; +38h      m_calleeSavedRegisters
./src/vm/amd64/InstantiatingStub.asm:; +98h      m_ReturnAddress
./src/vm/amd64/InstantiatingStub.asm:; +a0h  rcx home
./src/vm/amd64/InstantiatingStub.asm:; +a8h  rdx home
./src/vm/amd64/InstantiatingStub.asm:; +b0h  r8 home
./src/vm/amd64/InstantiatingStub.asm:; +b8h  r9 home
./src/vm/amd64/InstantiatingStub.asm:        .allocstack             SIZEOF_FIXED_FRAME - 8h     ; -8 for return address
./src/vm/amd64/InstantiatingStub.asm:        mov     rcx, [rbp + OFFSETOF_SECRET_PARAMS + 0h]        ; nStackSlots (includes padding for stack alignment)
./src/vm/amd64/InstantiatingStub.asm:        push    qword ptr [rbp+OFFSETOF_SECRET_PARAMS + 10h]    ; push extra stack arg
./src/vm/amd64/InstantiatingStub.asm:        mov     rcx, [rbp + SIZEOF_FIXED_FRAME + 00h]
./src/vm/amd64/InstantiatingStub.asm:        mov     rdx, [rbp + SIZEOF_FIXED_FRAME + 08h]
./src/vm/amd64/InstantiatingStub.asm:        mov     r8, [rbp + SIZEOF_FIXED_FRAME + 10h]
./src/vm/amd64/InstantiatingStub.asm:        mov     r9, [rbp + SIZEOF_FIXED_FRAME + 18h]
./src/vm/amd64/InstantiatingStub.asm:        call    qword ptr [rbp+OFFSETOF_SECRET_PARAMS + 8h]     ; call target
./src/vm/amd64/GenericComCallStubs.asm:        add     eax, 8h                 ; alignment padding
./src/vm/amd64/GenericComCallStubs.asm:        and     rax, 0FFFFFFFFFFFFFFf0h ; for proper stack alignment, v-liti remove partial register stall
./src/vm/amd64/GenericComCallStubs.asm:        movdqa  xmm0, [rdx + ComMethodFrame_XMM_SAVE_OFFSET + 00h]
./src/vm/amd64/GenericComCallStubs.asm:        movdqa  xmm1, [rdx + ComMethodFrame_XMM_SAVE_OFFSET + 10h]
./src/vm/amd64/GenericComCallStubs.asm:        movdqa  xmm2, [rdx + ComMethodFrame_XMM_SAVE_OFFSET + 20h]
./src/vm/amd64/GenericComCallStubs.asm:        movdqa  xmm3, [rdx + ComMethodFrame_XMM_SAVE_OFFSET + 30h]
./src/vm/amd64/GenericComCallStubs.asm:        mov     rcx, [rbp + 40h]        ; ignoring the COM IP at [rsp]
./src/vm/amd64/GenericComCallStubs.asm:        mov     rdx, [rsp + 08h]
./src/vm/amd64/GenericComCallStubs.asm:        mov     r8,  [rsp + 10h]
./src/vm/amd64/GenericComCallStubs.asm:        mov     r9,  [rsp + 18h]
./src/vm/amd64/GenericComCallStubs.asm:;                                 INT_PTR pDangerousThis  // rsp + 28h on entry
./src/vm/amd64/GenericComCallStubs.asm:        alloc_stack     28h     ; alloc scratch space + alignment,   pDangerousThis moves to [rsp+50]
./src/vm/amd64/GenericComCallStubs.asm:        movdqa  xmm0, [rdx + ComMethodFrame_XMM_SAVE_OFFSET + 00h]
./src/vm/amd64/GenericComCallStubs.asm:        movdqa  xmm1, [rdx + ComMethodFrame_XMM_SAVE_OFFSET + 10h]
./src/vm/amd64/GenericComCallStubs.asm:        movdqa  xmm2, [rdx + ComMethodFrame_XMM_SAVE_OFFSET + 20h]
./src/vm/amd64/GenericComCallStubs.asm:        movdqa  xmm3, [rdx + ComMethodFrame_XMM_SAVE_OFFSET + 30h]
./src/vm/amd64/GenericComCallStubs.asm:        mov     rcx, [rsp + 50h]        ; ignoring the COM IP at [r11 + 00h]
./src/vm/amd64/GenericComCallStubs.asm:        mov     rdx, [r11 + 08h]
./src/vm/amd64/GenericComCallStubs.asm:        mov     r8,  [r11 + 10h]
./src/vm/amd64/GenericComCallStubs.asm:        mov     r9,  [r11 + 18h]
./src/vm/amd64/JitHelpers_Slow.asm:        shr     r10, 0Ch ; SoftwareWriteWatch::AddressToTableByteIndexShift
./src/vm/amd64/JitHelpers_Slow.asm:        sub     ecx, 18h  ; sizeof(ObjHeader) + sizeof(Object) + last slot
./src/vm/amd64/JitHelpers_Fast.asm:        shr     rax, 0Ch ; SoftwareWriteWatch::AddressToTableByteIndexShift
./src/vm/amd64/JitHelpers_Fast.asm:        shr     rax, 0Ch ; SoftwareWriteWatch::AddressToTableByteIndexShift
./src/vm/amd64/JitHelpers_Fast.asm:        cmp     edx, dword ptr [rcx + OFFSETOF__PtrArray__m_NumComponents] ; 8h -> array size offset
./src/vm/amd64/JitHelpers_Fast.asm:        mov     r9, [r10 + OFFSETOF__MethodTable__m_ElementType]   ; 10h -> typehandle offset
./src/vm/amd64/JitHelpers_Fast.asm:        mov     rcx, [rsp + MIN_SIZE + 8h]
./src/vm/amd64/JitHelpers_Fast.asm:        mov     rdx, [rsp + MIN_SIZE + 10h]
./src/vm/amd64/JitHelpers_Fast.asm:        mov     r8,  [rsp + MIN_SIZE + 18h]
./src/vm/amd64/JitHelpers_Fast.asm:        lea     rcx, [rsp + MIN_SIZE + 18h]
./src/vm/amd64/JitHelpers_Fast.asm:        lea     rdx, [rsp + MIN_SIZE + 8h]
./src/vm/amd64/JitHelpers_Fast.asm:        mov     rcx, [rsp + MIN_SIZE + 8h]
./src/vm/amd64/JitHelpers_Fast.asm:        mov     rdx, [rsp + MIN_SIZE + 10h]
./src/vm/amd64/JitHelpers_Fast.asm:        mov     r8,  [rsp + MIN_SIZE + 18h]
./src/vm/amd64/JitHelpers_Fast.asm:; + 8h  callee scratch
./src/vm/amd64/JitHelpers_Fast.asm:; +10h  callee scratch
./src/vm/amd64/JitHelpers_Fast.asm:; +18h  callee scratch
./src/vm/amd64/JitHelpers_Fast.asm:; + 8h      __VFN_table
./src/vm/amd64/JitHelpers_Fast.asm:; +10h      m_Next
./src/vm/amd64/JitHelpers_Fast.asm:; +18h      m_pGCLayout
./src/vm/amd64/JitHelpers_Fast.asm:; +20h      m_padding
./src/vm/amd64/JitHelpers_Fast.asm:; +28h      m_rdi
./src/vm/amd64/JitHelpers_Fast.asm:; +30h      m_rsi
./src/vm/amd64/JitHelpers_Fast.asm:; +38h      m_rbx
./src/vm/amd64/JitHelpers_Fast.asm:; +40h      m_rbp
./src/vm/amd64/JitHelpers_Fast.asm:; +48h      m_r12
./src/vm/amd64/JitHelpers_Fast.asm:; +50h      m_r13
./src/vm/amd64/JitHelpers_Fast.asm:; +58h      m_r14
./src/vm/amd64/JitHelpers_Fast.asm:; +60h      m_r15
./src/vm/amd64/JitHelpers_Fast.asm:; +68h      m_ReturnAddress
./src/vm/amd64/JitHelpers_Fast.asm:        alloc_stack             48h     ; m_padding, m_pGCLayout, m_Next, __VFN_table, gsCookie, outgoing shadow area
./src/vm/amd64/JitHelpers_Fast.asm:        lea     rsp, [r13 + 28h]
./src/vm/amd64/CallDescrWorkerAMD64.asm:        alloc_stack     28h     ;; alloc callee scratch and align the stack
./src/vm/amd64/CallDescrWorkerAMD64.asm:        cmp     ah, ASM_ELEMENT_TYPE_R8 ;
./src/vm/amd64/CallDescrWorkerAMD64.asm:        mov     r8, 10h[rsp]            ;
./src/vm/amd64/CallDescrWorkerAMD64.asm:        movss   xmm2, real4 ptr 10h[rsp];
./src/vm/amd64/CallDescrWorkerAMD64.asm:        movsd   xmm2, real8 ptr 10h[rsp];
./src/vm/amd64/CallDescrWorkerAMD64.asm:        mov     r9, 18h[rsp]            ;
./src/vm/amd64/CallDescrWorkerAMD64.asm:        movss   xmm3, real4 ptr 18h[rsp];
./src/vm/amd64/CallDescrWorkerAMD64.asm:        cmp     ah, ASM_ELEMENT_TYPE_R8 ;
./src/vm/amd64/CallDescrWorkerAMD64.asm:        movsd   xmm3, real8 ptr 18h[rsp];
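
If we wanted to mechanize the remaining replacements, a rough sed pass along these lines could do it (just a sketch, untested; it would also rewrite h-suffixed offsets that appear only in comments, so the resulting diff would need manual review):

    # hypothetical: rewrite h-suffixed hex literals to the 0x-prefix form
    find src -iname '*.S' | xargs sed -i -E 's/\b([0-9][0-9a-fA-F]*)h\b/0x\1/g'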

@janvorli (Member)

The .asm files are Windows-only; on Unix, the .S ones are used. So it seems only the .S files in src/vm/i386 need to be fixed in addition to what you've already changed. The other .S files only have h-suffixed constants in comments, which is fine.
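
A narrower search that skips the comment-only matches would look something like this (a rough sketch; it also drops lines where a real literal shares the line with a // comment):

    # hypothetical: list h-suffixed literals in .S files, filtering out //-comment lines
    find src -iname '*.S' | xargs grep -nE '\b[0-9][0-9a-fA-F]*h\b' | grep -v '//'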

This commit fixes coreclr to build in newer versions of llvm (tested
with llvm 8 on Fedora 30).

These recent versions of llvm (as well as GCC) do not accept values like
"20h" as valid integer literals:

    src/debug/ee/amd64/dbghelpers.S:32:21: error: unknown token in expression
            add rsp, 20h
                        ^

This was reported as a bug to llvm upstream and they explicitly rejected
supporting these literals: https://reviews.llvm.org/D59810

This is a partial backport of cbd672e
(PR dotnet#22810, which was about adding compatibility with GCC 5), with
some modifications so that it compiles.
@omajid changed the title from "Replace hex number representation in ASM files" to "[release/2.1] Replace hex number representation in ASM files" on Aug 13, 2019
@omajid (Member, Author) commented Aug 13, 2019

@janvorli Can you take a look at this again?

@janvorli (Member) left a review

LGTM, thank you!

@janvorli (Member)

Oh, I've just realized you are trying to merge this change into release/2.1 - that is not good, it should go to master.

@janvorli added the * NO MERGE * label (The PR is not ready for merge yet; see discussion for detailed reasons) on Aug 13, 2019
@janvorli (Member)

I have added the "NO MERGE" label so that this is not accidentally merged into release/2.1.

@omajid (Member, Author) commented Aug 14, 2019

@janvorli Sorry, but I am a bit confused. The changes should already be in master as part of #22810. Is there anything you want me to do in master first?

Edit: just to confirm, I just rebuilt coreclr's master (commit 952ded9f0a) using llvm/clang 8.0 without any issues and without needing this patch.

@janvorli (Member)

Hmm, I had not realized that. So I am confused: why are you porting it to release/2.1?

@omajid (Member, Author) commented Aug 14, 2019

@janvorli I would like to be able to build .NET Core 2.1 (using source-build, which includes coreclr) on Fedora 30. If coreclr's release/2.1 branch supports llvm 8 (which it does, with this small patch), I don't have to carry any patches in source-build or in Fedora to work around this issue.

@omajid (Member, Author) commented Aug 21, 2019

Any thoughts, @janvorli?

@janvorli (Member)

I don't think this change would meet the bar for porting to an older release branch. @jkotas, do you agree?

@jkotas (Member) commented Aug 22, 2019

I agree. We do not patch release branches to work with newer compilers.

@omajid (Member, Author) commented Aug 22, 2019

Thanks. I am going to close this PR now.

However, I really hope this policy will be revisited: once .NET LTS versions are included in Linux distributions, carrying patches that affect how the code is compiled might break .NET if those patches can't be reviewed and merged into the LTS release branches.

@omajid closed this on Aug 22, 2019
@RheaAyase (Member)

A version of RHEL with clang 8 will be released within the lifetime of 2.1, and many other distributions are already carrying (or will be carrying) their own patches for this.

I believe this should really be patched here, because otherwise nobody will be able to build future updates of 2.1 (unless everyone patches it on their end).

@jkotas (Member) commented Sep 11, 2019

@RheaAyase Ok, let me try to run this through the servicing process.

@jkotas reopened this on Sep 11, 2019
@jkotas added the Servicing-consider label (Issue for next servicing release review) and removed the * NO MERGE * label on Sep 11, 2019
@jkotas added this to the 2.1.x milestone on Sep 11, 2019
@jkotas (Member) commented Sep 12, 2019

Approved offline by @Pilchie

@jkotas merged commit 0fd2dc6 into dotnet:release/2.1 on Sep 12, 2019
@danmoseley added the Servicing-approved label (Approved for servicing release) and removed the Servicing-consider label on Sep 17, 2019
@danmoseley removed this from the 2.1.x milestone on Sep 17, 2019
@danmoseley added this to the 2.1.14 milestone on Sep 17, 2019
@danmoseley (Member) commented Sep 17, 2019

@omajid @jkotas, does this need to ship in October, or can it wait until November? We may not have a payload to justify an October servicing release otherwise.

@jkotas (Member) commented Sep 17, 2019

This does not need to ship in October. It is fine to bundle with the next real change we need to ship.

@danmoseley (Member)

OK, thanks.
