[AVR] Force relocations for non-encodable jumps #121498
Conversation
cc @benshi001
@llvm/pr-subscribers-backend-loongarch @llvm/pr-subscribers-backend-x86

Author: Patryk Wychowaniec (Patryk27)

Changes

This commit changes the branch emission logic so that instead of throwing the "branch target out of range" error, we emit a relocation (when that's possible).

Motivation

When rustc builds an application, it compiles all crates (i.e. libraries) that belong to the project - those crates are compiled in their entirety, even if some of their functions end up never being called (dead-code elimination happens later). This poses a problem: if any library contains a function that we aren't able to emit (e.g. because it's too large), we crash, even if we ultimately don't care about that particular function - that's what @mbuesch stumbled upon in #118015 (comment). This wasn't a problem before #106722, but it now requires extra care within LLVM.

N.B. I think the approach presented here is a bit hacky; I'm open to other ideas.

Patch is 27.66 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/121498.diff

24 Files Affected:
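To make the range check concrete, here is a small standalone sketch of the decision the patch delegates to adjust::adjustRelativeBranch. This is illustrative only, not code from the patch: fitsSignedBits restates the isIntN test used in the AVR hunk, the example offsets are made up, and the FeatureWrappingRjmp wrap-around path is ignored for brevity.

#include <cstdint>
#include <iostream>

// Mirrors llvm::isIntN for the small widths used here: does Value fit in a
// signed integer of Width bits?
static bool fitsSignedBits(unsigned Width, int64_t Value) {
  const int64_t Min = -(int64_t(1) << (Width - 1));
  const int64_t Max = (int64_t(1) << (Width - 1)) - 1;
  return Value >= Min && Value <= Max;
}

// Decide whether a PC-relative branch fixup should stay unresolved (and thus
// become a relocation) rather than being rejected as out of range.
// TargetSize comes from getFixupKindInfo(...).TargetSize (e.g. 7 for
// AVR::fixup_7_pcrel); the +1 exists because the encoded value is
// right-shifted by one, and the -2 is the same adjustment adjustRelativeBranch
// applies before checking the range.
static bool shouldForceRelocationForBranch(unsigned TargetSize, int64_t Offset) {
  const int64_t Adjusted = Offset - 2;
  return !fitsSignedBits(TargetSize + 1, Adjusted);
}

int main() {
  std::cout << shouldForceRelocationForBranch(7, 60) << '\n';  // 0: encodable
  std::cout << shouldForceRelocationForBranch(7, 400) << '\n'; // 1: too far, force a relocation
}

In the patch itself the same helper both performs this check and reports the diagnostic, which is why shouldForceRelocation can call it with a null MCContext to probe encodability without emitting an error.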
diff --git a/llvm/include/llvm/MC/MCAsmBackend.h b/llvm/include/llvm/MC/MCAsmBackend.h
index b105a294d875c5..505bd1f59dd439 100644
--- a/llvm/include/llvm/MC/MCAsmBackend.h
+++ b/llvm/include/llvm/MC/MCAsmBackend.h
@@ -96,6 +96,7 @@ class MCAsmBackend {
virtual bool shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t Value,
const MCSubtargetInfo *STI) {
return false;
}
diff --git a/llvm/lib/MC/MCAssembler.cpp b/llvm/lib/MC/MCAssembler.cpp
index 3c18d0832efcc7..3e5e0151d265e2 100644
--- a/llvm/lib/MC/MCAssembler.cpp
+++ b/llvm/lib/MC/MCAssembler.cpp
@@ -222,7 +222,7 @@ bool MCAssembler::evaluateFixup(const MCFixup &Fixup, const MCFragment *DF,
// Let the backend force a relocation if needed.
if (IsResolved &&
- getBackend().shouldForceRelocation(*this, Fixup, Target, STI)) {
+ getBackend().shouldForceRelocation(*this, Fixup, Target, Value, STI)) {
IsResolved = false;
WasForced = true;
}
diff --git a/llvm/lib/Target/AArch64/MCTargetDesc/AArch64AsmBackend.cpp b/llvm/lib/Target/AArch64/MCTargetDesc/AArch64AsmBackend.cpp
index 3ba0f2a2682855..337b81d68c93a0 100644
--- a/llvm/lib/Target/AArch64/MCTargetDesc/AArch64AsmBackend.cpp
+++ b/llvm/lib/Target/AArch64/MCTargetDesc/AArch64AsmBackend.cpp
@@ -98,7 +98,7 @@ class AArch64AsmBackend : public MCAsmBackend {
unsigned getFixupKindContainereSizeInBytes(unsigned Kind) const;
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t Value,
const MCSubtargetInfo *STI) override;
};
@@ -520,6 +520,7 @@ bool AArch64AsmBackend::writeNopData(raw_ostream &OS, uint64_t Count,
bool AArch64AsmBackend::shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t,
const MCSubtargetInfo *STI) {
unsigned Kind = Fixup.getKind();
if (Kind >= FirstLiteralRelocationKind)
diff --git a/llvm/lib/Target/AMDGPU/MCTargetDesc/AMDGPUAsmBackend.cpp b/llvm/lib/Target/AMDGPU/MCTargetDesc/AMDGPUAsmBackend.cpp
index 3172a83e5a1fe0..988c9002efee80 100644
--- a/llvm/lib/Target/AMDGPU/MCTargetDesc/AMDGPUAsmBackend.cpp
+++ b/llvm/lib/Target/AMDGPU/MCTargetDesc/AMDGPUAsmBackend.cpp
@@ -53,7 +53,7 @@ class AMDGPUAsmBackend : public MCAsmBackend {
std::optional<MCFixupKind> getFixupKind(StringRef Name) const override;
const MCFixupKindInfo &getFixupKindInfo(MCFixupKind Kind) const override;
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, uint64_t Value,
const MCSubtargetInfo *STI) override;
};
@@ -197,6 +197,7 @@ const MCFixupKindInfo &AMDGPUAsmBackend::getFixupKindInfo(
bool AMDGPUAsmBackend::shouldForceRelocation(const MCAssembler &,
const MCFixup &Fixup,
const MCValue &,
+ const uint64_t,
const MCSubtargetInfo *STI) {
return Fixup.getKind() >= FirstLiteralRelocationKind;
}
diff --git a/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.cpp b/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.cpp
index d0e759d3356f10..d164f87ae97c98 100644
--- a/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.cpp
+++ b/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.cpp
@@ -956,6 +956,7 @@ unsigned ARMAsmBackend::adjustFixupValue(const MCAssembler &Asm,
bool ARMAsmBackend::shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t,
const MCSubtargetInfo *STI) {
const MCSymbolRefExpr *A = Target.getSymA();
const MCSymbol *Sym = A ? &A->getSymbol() : nullptr;
diff --git a/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.h b/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.h
index f33cd8b7c2425a..2932e68cd98e56 100644
--- a/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.h
+++ b/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.h
@@ -36,7 +36,7 @@ class ARMAsmBackend : public MCAsmBackend {
const MCFixupKindInfo &getFixupKindInfo(MCFixupKind Kind) const override;
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t Value,
const MCSubtargetInfo *STI) override;
unsigned adjustFixupValue(const MCAssembler &Asm, const MCFixup &Fixup,
diff --git a/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.cpp b/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.cpp
index fd35f8fcb8e7b8..8e6599ff420b7a 100644
--- a/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.cpp
+++ b/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.cpp
@@ -31,10 +31,14 @@ namespace adjust {
using namespace llvm;
-static void signed_width(unsigned Width, uint64_t Value,
- std::string Description, const MCFixup &Fixup,
- MCContext *Ctx) {
- if (!isIntN(Width, Value)) {
+static bool checkSignedWidth(unsigned Width, uint64_t Value,
+ std::string Description, const MCFixup &Fixup,
+ MCContext *Ctx) {
+ if (isIntN(Width, Value)) {
+ return true;
+ }
+
+ if (Ctx) {
std::string Diagnostic = "out of range " + Description;
int64_t Min = minIntN(Width);
@@ -45,12 +49,18 @@ static void signed_width(unsigned Width, uint64_t Value,
Ctx->reportError(Fixup.getLoc(), Diagnostic);
}
+
+ return false;
}
-static void unsigned_width(unsigned Width, uint64_t Value,
- std::string Description, const MCFixup &Fixup,
- MCContext *Ctx) {
- if (!isUIntN(Width, Value)) {
+static bool checkUnsignedWidth(unsigned Width, uint64_t Value,
+ std::string Description, const MCFixup &Fixup,
+ MCContext *Ctx) {
+ if (isUIntN(Width, Value)) {
+ return true;
+ }
+
+ if (Ctx) {
std::string Diagnostic = "out of range " + Description;
int64_t Max = maxUIntN(Width);
@@ -60,6 +70,8 @@ static void unsigned_width(unsigned Width, uint64_t Value,
Ctx->reportError(Fixup.getLoc(), Diagnostic);
}
+
+ return false;
}
/// Adjusts the value of a branch target before fixup application.
@@ -67,15 +79,16 @@ static void adjustBranch(unsigned Size, const MCFixup &Fixup, uint64_t &Value,
MCContext *Ctx) {
// We have one extra bit of precision because the value is rightshifted by
// one.
- unsigned_width(Size + 1, Value, std::string("branch target"), Fixup, Ctx);
+ checkUnsignedWidth(Size + 1, Value, std::string("branch target"), Fixup, Ctx);
// Rightshifts the value by one.
AVR::fixups::adjustBranchTarget(Value);
}
/// Adjusts the value of a relative branch target before fixup application.
-static void adjustRelativeBranch(unsigned Size, const MCFixup &Fixup,
- uint64_t &Value, MCContext *Ctx) {
+static bool adjustRelativeBranch(unsigned Size, const MCFixup &Fixup,
+ uint64_t &Value, MCContext *Ctx,
+ const MCSubtargetInfo *STI = nullptr) {
// Jumps are relative to the current instruction.
Value -= 2;
@@ -83,8 +96,11 @@ static void adjustRelativeBranch(unsigned Size, const MCFixup &Fixup,
// one.
Size += 1;
- if (!isIntN(Size, Value) &&
- Ctx->getSubtargetInfo()->hasFeature(AVR::FeatureWrappingRjmp)) {
+ if (!STI) {
+ STI = Ctx->getSubtargetInfo();
+ }
+
+ if (!isIntN(Size, Value) && STI->hasFeature(AVR::FeatureWrappingRjmp)) {
const int32_t FlashSize = 0x2000;
int32_t SignedValue = Value;
@@ -96,10 +112,15 @@ static void adjustRelativeBranch(unsigned Size, const MCFixup &Fixup,
}
}
- signed_width(Size, Value, std::string("branch target"), Fixup, Ctx);
+ if (!checkSignedWidth(Size, Value, std::string("branch target"), Fixup,
+ Ctx)) {
+ return false;
+ }
// Rightshifts the value by one.
AVR::fixups::adjustBranchTarget(Value);
+
+ return true;
}
/// 22-bit absolute fixup.
@@ -152,7 +173,7 @@ static void fixup_13_pcrel(unsigned Size, const MCFixup &Fixup, uint64_t &Value,
/// Resolves to:
/// 10q0 qq10 0000 1qqq
static void fixup_6(const MCFixup &Fixup, uint64_t &Value, MCContext *Ctx) {
- unsigned_width(6, Value, std::string("immediate"), Fixup, Ctx);
+ checkUnsignedWidth(6, Value, std::string("immediate"), Fixup, Ctx);
Value = ((Value & 0x20) << 8) | ((Value & 0x18) << 7) | (Value & 0x07);
}
@@ -164,7 +185,7 @@ static void fixup_6(const MCFixup &Fixup, uint64_t &Value, MCContext *Ctx) {
/// 0000 0000 kk00 kkkk
static void fixup_6_adiw(const MCFixup &Fixup, uint64_t &Value,
MCContext *Ctx) {
- unsigned_width(6, Value, std::string("immediate"), Fixup, Ctx);
+ checkUnsignedWidth(6, Value, std::string("immediate"), Fixup, Ctx);
Value = ((Value & 0x30) << 2) | (Value & 0x0f);
}
@@ -174,7 +195,7 @@ static void fixup_6_adiw(const MCFixup &Fixup, uint64_t &Value,
/// Resolves to:
/// 0000 0000 AAAA A000
static void fixup_port5(const MCFixup &Fixup, uint64_t &Value, MCContext *Ctx) {
- unsigned_width(5, Value, std::string("port number"), Fixup, Ctx);
+ checkUnsignedWidth(5, Value, std::string("port number"), Fixup, Ctx);
Value &= 0x1f;
@@ -186,7 +207,7 @@ static void fixup_port5(const MCFixup &Fixup, uint64_t &Value, MCContext *Ctx) {
/// Resolves to:
/// 1011 0AAd dddd AAAA
static void fixup_port6(const MCFixup &Fixup, uint64_t &Value, MCContext *Ctx) {
- unsigned_width(6, Value, std::string("port number"), Fixup, Ctx);
+ checkUnsignedWidth(6, Value, std::string("port number"), Fixup, Ctx);
Value = ((Value & 0x30) << 5) | (Value & 0x0f);
}
@@ -197,7 +218,7 @@ static void fixup_port6(const MCFixup &Fixup, uint64_t &Value, MCContext *Ctx) {
/// 1010 ikkk dddd kkkk
static void fixup_lds_sts_16(const MCFixup &Fixup, uint64_t &Value,
MCContext *Ctx) {
- unsigned_width(7, Value, std::string("immediate"), Fixup, Ctx);
+ checkUnsignedWidth(7, Value, std::string("immediate"), Fixup, Ctx);
Value = ((Value & 0x70) << 8) | (Value & 0x0f);
}
@@ -331,13 +352,15 @@ void AVRAsmBackend::adjustFixupValue(const MCFixup &Fixup,
adjust::ldi::ms8(Size, Fixup, Value, Ctx);
break;
case AVR::fixup_16:
- adjust::unsigned_width(16, Value, std::string("port number"), Fixup, Ctx);
+ adjust::checkUnsignedWidth(16, Value, std::string("port number"), Fixup,
+ Ctx);
Value &= 0xffff;
break;
case AVR::fixup_16_pm:
Value >>= 1; // Flash addresses are always shifted.
- adjust::unsigned_width(16, Value, std::string("port number"), Fixup, Ctx);
+ adjust::checkUnsignedWidth(16, Value, std::string("port number"), Fixup,
+ Ctx);
Value &= 0xffff;
break;
@@ -512,14 +535,25 @@ bool AVRAsmBackend::writeNopData(raw_ostream &OS, uint64_t Count,
bool AVRAsmBackend::shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t Value,
const MCSubtargetInfo *STI) {
switch ((unsigned)Fixup.getKind()) {
default:
return Fixup.getKind() >= FirstLiteralRelocationKind;
+
case AVR::fixup_7_pcrel:
- case AVR::fixup_13_pcrel:
- // Always resolve relocations for PC-relative branches
- return false;
+ case AVR::fixup_13_pcrel: {
+ uint64_t ValueEx = Value;
+ uint64_t Size = AVRAsmBackend::getFixupKindInfo(Fixup.getKind()).TargetSize;
+
+ // If the jump is too large to encode it, fall back to a relocation.
+ //
+ // Note that trying to actually link that relocation *would* fail, but the
+ // hopes are that the module we're currently compiling won't be actually
+ // linked to the final binary.
+ return !adjust::adjustRelativeBranch(Size, Fixup, ValueEx, nullptr, STI);
+ }
+
case AVR::fixup_call:
return true;
}
diff --git a/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.h b/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.h
index 2337319590324d..1a9ae94f2f49e9 100644
--- a/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.h
+++ b/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.h
@@ -53,7 +53,7 @@ class AVRAsmBackend : public MCAsmBackend {
const MCSubtargetInfo *STI) const override;
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t Value,
const MCSubtargetInfo *STI) override;
private:
diff --git a/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.cpp b/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.cpp
index dd06971e1cf976..ebe12fa6afd1fd 100644
--- a/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.cpp
+++ b/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.cpp
@@ -262,6 +262,7 @@ bool CSKYAsmBackend::mayNeedRelaxation(const MCInst &Inst,
bool CSKYAsmBackend::shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t /*Value*/,
const MCSubtargetInfo * /*STI*/) {
if (Fixup.getKind() >= FirstLiteralRelocationKind)
return true;
diff --git a/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.h b/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.h
index 4b659f401d253a..faa84a6ef71d58 100644
--- a/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.h
+++ b/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.h
@@ -52,7 +52,7 @@ class CSKYAsmBackend : public MCAsmBackend {
const MCSubtargetInfo *STI) const override;
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t Value,
const MCSubtargetInfo *STI) override;
std::unique_ptr<MCObjectTargetWriter>
diff --git a/llvm/lib/Target/Hexagon/MCTargetDesc/HexagonAsmBackend.cpp b/llvm/lib/Target/Hexagon/MCTargetDesc/HexagonAsmBackend.cpp
index 7864d45d594ac5..98b1dde8fa3fc2 100644
--- a/llvm/lib/Target/Hexagon/MCTargetDesc/HexagonAsmBackend.cpp
+++ b/llvm/lib/Target/Hexagon/MCTargetDesc/HexagonAsmBackend.cpp
@@ -201,7 +201,7 @@ class HexagonAsmBackend : public MCAsmBackend {
}
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t,
const MCSubtargetInfo *STI) override {
switch(Fixup.getTargetKind()) {
default:
diff --git a/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.cpp b/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.cpp
index 0c24008301d022..eb4f6edc117a4a 100644
--- a/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.cpp
+++ b/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.cpp
@@ -251,6 +251,7 @@ bool LoongArchAsmBackend::shouldInsertFixupForCodeAlign(MCAssembler &Asm,
bool LoongArchAsmBackend::shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t,
const MCSubtargetInfo *STI) {
if (Fixup.getKind() >= FirstLiteralRelocationKind)
return true;
diff --git a/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.h b/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.h
index 9df4ff22625c68..adbfd01410a4e6 100644
--- a/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.h
+++ b/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.h
@@ -57,7 +57,7 @@ class LoongArchAsmBackend : public MCAsmBackend {
MCAlignFragment &AF) override;
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t Value,
const MCSubtargetInfo *STI) override;
unsigned getNumFixupKinds() const override {
diff --git a/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.cpp b/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.cpp
index 6001d9d51d16ac..4af6768b13cc90 100644
--- a/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.cpp
+++ b/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.cpp
@@ -542,6 +542,7 @@ bool MipsAsmBackend::writeNopData(raw_ostream &OS, uint64_t Count,
bool MipsAsmBackend::shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t,
const MCSubtargetInfo *STI) {
if (Fixup.getKind() >= FirstLiteralRelocationKind)
return true;
diff --git a/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.h b/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.h
index 799dd569f1ad90..3a2c5e824a53b0 100644
--- a/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.h
+++ b/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.h
@@ -55,7 +55,7 @@ class MipsAsmBackend : public MCAsmBackend {
const MCSubtargetInfo *STI) const override;
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t Value,
const MCSubtargetInfo *STI) override;
bool isMicroMips(const MCSymbol *Sym) const override;
diff --git a/llvm/lib/Target/PowerPC/MCTargetDesc/PPCAsmBackend.cpp b/llvm/lib/Target/PowerPC/MCTargetDesc/PPCAsmBackend.cpp
index ef36af1a5e6660..9077aa7de5a373 100644
--- a/llvm/lib/Target/PowerPC/MCTargetDesc/PPCAsmBackend.cpp
+++ b/llvm/lib/Target/PowerPC/MCTargetDesc/PPCAsmBackend.cpp
@@ -161,7 +161,7 @@ class PPCAsmBackend : public MCAsmBackend {
}
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t,
const MCSubtargetInfo *STI) override {
MCFixupKind Kind = Fixup.getKind();
switch ((unsigned)Kind) {
diff --git a/llvm/lib/Target/RISCV/MCTargetDesc/RISCVAsmBackend.cpp b/llvm/lib/Target/RISCV/MCTargetDesc/RISCVAsmBackend.cpp
index eab4a5e77d96e5..270c4bb3b72450 100644
--- a/llvm/lib/Target/RISCV/MCTargetDesc/RISCVAsmBackend.cpp
+++ b/llvm/lib/Target/RISCV/MCTargetDesc/RISCVAsmBackend.cpp
@@ -112,6 +112,7 @@ RISCVAsmBackend::getFixupKindInfo(MCFixupKind Kind) const {
bool RISCVAsmBackend::shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t,
const MCSubtargetInfo *STI) {
if (Fixup.getKind() >= FirstLiteralRelocationKind)
return true;
@@ -567,7 +568,7 @@ ...
[truncated]
|
@llvm/pr-subscribers-backend-sparc Author: Patryk Wychowaniec (Patryk27) ChangesThis commit changes the branch emission logic so that instead of throwing the "branch target out of range" error, we emit a relocation instead (if that's possible). MotivationWhen rustc builds an application, it compiles all crates (i.e. libraries) that belong to the project - those crates are compiled in their entirety, even if some of their functions actually end up not being called (as dead-code elimination happens later). This poses a problem in that if any library contains a function that we won't be able to emit (e.g. it's too large), we'll crash, even if ultimately we don't care about that particular function - that's what @mbuesch stumbled upon over #118015 (comment). This used not to be a problem before #106722, but now requires extra care within LLVM. N.B. I think my approach presented here is a bit hacky, I'm open to other ideas. Patch is 27.66 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/121498.diff 24 Files Affected:
diff --git a/llvm/include/llvm/MC/MCAsmBackend.h b/llvm/include/llvm/MC/MCAsmBackend.h
index b105a294d875c5..505bd1f59dd439 100644
--- a/llvm/include/llvm/MC/MCAsmBackend.h
+++ b/llvm/include/llvm/MC/MCAsmBackend.h
@@ -96,6 +96,7 @@ class MCAsmBackend {
virtual bool shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t Value,
const MCSubtargetInfo *STI) {
return false;
}
diff --git a/llvm/lib/MC/MCAssembler.cpp b/llvm/lib/MC/MCAssembler.cpp
index 3c18d0832efcc7..3e5e0151d265e2 100644
--- a/llvm/lib/MC/MCAssembler.cpp
+++ b/llvm/lib/MC/MCAssembler.cpp
@@ -222,7 +222,7 @@ bool MCAssembler::evaluateFixup(const MCFixup &Fixup, const MCFragment *DF,
// Let the backend force a relocation if needed.
if (IsResolved &&
- getBackend().shouldForceRelocation(*this, Fixup, Target, STI)) {
+ getBackend().shouldForceRelocation(*this, Fixup, Target, Value, STI)) {
IsResolved = false;
WasForced = true;
}
diff --git a/llvm/lib/Target/AArch64/MCTargetDesc/AArch64AsmBackend.cpp b/llvm/lib/Target/AArch64/MCTargetDesc/AArch64AsmBackend.cpp
index 3ba0f2a2682855..337b81d68c93a0 100644
--- a/llvm/lib/Target/AArch64/MCTargetDesc/AArch64AsmBackend.cpp
+++ b/llvm/lib/Target/AArch64/MCTargetDesc/AArch64AsmBackend.cpp
@@ -98,7 +98,7 @@ class AArch64AsmBackend : public MCAsmBackend {
unsigned getFixupKindContainereSizeInBytes(unsigned Kind) const;
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t Value,
const MCSubtargetInfo *STI) override;
};
@@ -520,6 +520,7 @@ bool AArch64AsmBackend::writeNopData(raw_ostream &OS, uint64_t Count,
bool AArch64AsmBackend::shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t,
const MCSubtargetInfo *STI) {
unsigned Kind = Fixup.getKind();
if (Kind >= FirstLiteralRelocationKind)
diff --git a/llvm/lib/Target/AMDGPU/MCTargetDesc/AMDGPUAsmBackend.cpp b/llvm/lib/Target/AMDGPU/MCTargetDesc/AMDGPUAsmBackend.cpp
index 3172a83e5a1fe0..988c9002efee80 100644
--- a/llvm/lib/Target/AMDGPU/MCTargetDesc/AMDGPUAsmBackend.cpp
+++ b/llvm/lib/Target/AMDGPU/MCTargetDesc/AMDGPUAsmBackend.cpp
@@ -53,7 +53,7 @@ class AMDGPUAsmBackend : public MCAsmBackend {
std::optional<MCFixupKind> getFixupKind(StringRef Name) const override;
const MCFixupKindInfo &getFixupKindInfo(MCFixupKind Kind) const override;
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, uint64_t Value,
const MCSubtargetInfo *STI) override;
};
@@ -197,6 +197,7 @@ const MCFixupKindInfo &AMDGPUAsmBackend::getFixupKindInfo(
bool AMDGPUAsmBackend::shouldForceRelocation(const MCAssembler &,
const MCFixup &Fixup,
const MCValue &,
+ const uint64_t,
const MCSubtargetInfo *STI) {
return Fixup.getKind() >= FirstLiteralRelocationKind;
}
diff --git a/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.cpp b/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.cpp
index d0e759d3356f10..d164f87ae97c98 100644
--- a/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.cpp
+++ b/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.cpp
@@ -956,6 +956,7 @@ unsigned ARMAsmBackend::adjustFixupValue(const MCAssembler &Asm,
bool ARMAsmBackend::shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t,
const MCSubtargetInfo *STI) {
const MCSymbolRefExpr *A = Target.getSymA();
const MCSymbol *Sym = A ? &A->getSymbol() : nullptr;
diff --git a/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.h b/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.h
index f33cd8b7c2425a..2932e68cd98e56 100644
--- a/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.h
+++ b/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.h
@@ -36,7 +36,7 @@ class ARMAsmBackend : public MCAsmBackend {
const MCFixupKindInfo &getFixupKindInfo(MCFixupKind Kind) const override;
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t Value,
const MCSubtargetInfo *STI) override;
unsigned adjustFixupValue(const MCAssembler &Asm, const MCFixup &Fixup,
diff --git a/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.cpp b/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.cpp
index fd35f8fcb8e7b8..8e6599ff420b7a 100644
--- a/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.cpp
+++ b/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.cpp
@@ -31,10 +31,14 @@ namespace adjust {
using namespace llvm;
-static void signed_width(unsigned Width, uint64_t Value,
- std::string Description, const MCFixup &Fixup,
- MCContext *Ctx) {
- if (!isIntN(Width, Value)) {
+static bool checkSignedWidth(unsigned Width, uint64_t Value,
+ std::string Description, const MCFixup &Fixup,
+ MCContext *Ctx) {
+ if (isIntN(Width, Value)) {
+ return true;
+ }
+
+ if (Ctx) {
std::string Diagnostic = "out of range " + Description;
int64_t Min = minIntN(Width);
@@ -45,12 +49,18 @@ static void signed_width(unsigned Width, uint64_t Value,
Ctx->reportError(Fixup.getLoc(), Diagnostic);
}
+
+ return false;
}
-static void unsigned_width(unsigned Width, uint64_t Value,
- std::string Description, const MCFixup &Fixup,
- MCContext *Ctx) {
- if (!isUIntN(Width, Value)) {
+static bool checkUnsignedWidth(unsigned Width, uint64_t Value,
+ std::string Description, const MCFixup &Fixup,
+ MCContext *Ctx) {
+ if (isUIntN(Width, Value)) {
+ return true;
+ }
+
+ if (Ctx) {
std::string Diagnostic = "out of range " + Description;
int64_t Max = maxUIntN(Width);
@@ -60,6 +70,8 @@ static void unsigned_width(unsigned Width, uint64_t Value,
Ctx->reportError(Fixup.getLoc(), Diagnostic);
}
+
+ return false;
}
/// Adjusts the value of a branch target before fixup application.
@@ -67,15 +79,16 @@ static void adjustBranch(unsigned Size, const MCFixup &Fixup, uint64_t &Value,
MCContext *Ctx) {
// We have one extra bit of precision because the value is rightshifted by
// one.
- unsigned_width(Size + 1, Value, std::string("branch target"), Fixup, Ctx);
+ checkUnsignedWidth(Size + 1, Value, std::string("branch target"), Fixup, Ctx);
// Rightshifts the value by one.
AVR::fixups::adjustBranchTarget(Value);
}
/// Adjusts the value of a relative branch target before fixup application.
-static void adjustRelativeBranch(unsigned Size, const MCFixup &Fixup,
- uint64_t &Value, MCContext *Ctx) {
+static bool adjustRelativeBranch(unsigned Size, const MCFixup &Fixup,
+ uint64_t &Value, MCContext *Ctx,
+ const MCSubtargetInfo *STI = nullptr) {
// Jumps are relative to the current instruction.
Value -= 2;
@@ -83,8 +96,11 @@ static void adjustRelativeBranch(unsigned Size, const MCFixup &Fixup,
// one.
Size += 1;
- if (!isIntN(Size, Value) &&
- Ctx->getSubtargetInfo()->hasFeature(AVR::FeatureWrappingRjmp)) {
+ if (!STI) {
+ STI = Ctx->getSubtargetInfo();
+ }
+
+ if (!isIntN(Size, Value) && STI->hasFeature(AVR::FeatureWrappingRjmp)) {
const int32_t FlashSize = 0x2000;
int32_t SignedValue = Value;
@@ -96,10 +112,15 @@ static void adjustRelativeBranch(unsigned Size, const MCFixup &Fixup,
}
}
- signed_width(Size, Value, std::string("branch target"), Fixup, Ctx);
+ if (!checkSignedWidth(Size, Value, std::string("branch target"), Fixup,
+ Ctx)) {
+ return false;
+ }
// Rightshifts the value by one.
AVR::fixups::adjustBranchTarget(Value);
+
+ return true;
}
/// 22-bit absolute fixup.
@@ -152,7 +173,7 @@ static void fixup_13_pcrel(unsigned Size, const MCFixup &Fixup, uint64_t &Value,
/// Resolves to:
/// 10q0 qq10 0000 1qqq
static void fixup_6(const MCFixup &Fixup, uint64_t &Value, MCContext *Ctx) {
- unsigned_width(6, Value, std::string("immediate"), Fixup, Ctx);
+ checkUnsignedWidth(6, Value, std::string("immediate"), Fixup, Ctx);
Value = ((Value & 0x20) << 8) | ((Value & 0x18) << 7) | (Value & 0x07);
}
@@ -164,7 +185,7 @@ static void fixup_6(const MCFixup &Fixup, uint64_t &Value, MCContext *Ctx) {
/// 0000 0000 kk00 kkkk
static void fixup_6_adiw(const MCFixup &Fixup, uint64_t &Value,
MCContext *Ctx) {
- unsigned_width(6, Value, std::string("immediate"), Fixup, Ctx);
+ checkUnsignedWidth(6, Value, std::string("immediate"), Fixup, Ctx);
Value = ((Value & 0x30) << 2) | (Value & 0x0f);
}
@@ -174,7 +195,7 @@ static void fixup_6_adiw(const MCFixup &Fixup, uint64_t &Value,
/// Resolves to:
/// 0000 0000 AAAA A000
static void fixup_port5(const MCFixup &Fixup, uint64_t &Value, MCContext *Ctx) {
- unsigned_width(5, Value, std::string("port number"), Fixup, Ctx);
+ checkUnsignedWidth(5, Value, std::string("port number"), Fixup, Ctx);
Value &= 0x1f;
@@ -186,7 +207,7 @@ static void fixup_port5(const MCFixup &Fixup, uint64_t &Value, MCContext *Ctx) {
/// Resolves to:
/// 1011 0AAd dddd AAAA
static void fixup_port6(const MCFixup &Fixup, uint64_t &Value, MCContext *Ctx) {
- unsigned_width(6, Value, std::string("port number"), Fixup, Ctx);
+ checkUnsignedWidth(6, Value, std::string("port number"), Fixup, Ctx);
Value = ((Value & 0x30) << 5) | (Value & 0x0f);
}
@@ -197,7 +218,7 @@ static void fixup_port6(const MCFixup &Fixup, uint64_t &Value, MCContext *Ctx) {
/// 1010 ikkk dddd kkkk
static void fixup_lds_sts_16(const MCFixup &Fixup, uint64_t &Value,
MCContext *Ctx) {
- unsigned_width(7, Value, std::string("immediate"), Fixup, Ctx);
+ checkUnsignedWidth(7, Value, std::string("immediate"), Fixup, Ctx);
Value = ((Value & 0x70) << 8) | (Value & 0x0f);
}
@@ -331,13 +352,15 @@ void AVRAsmBackend::adjustFixupValue(const MCFixup &Fixup,
adjust::ldi::ms8(Size, Fixup, Value, Ctx);
break;
case AVR::fixup_16:
- adjust::unsigned_width(16, Value, std::string("port number"), Fixup, Ctx);
+ adjust::checkUnsignedWidth(16, Value, std::string("port number"), Fixup,
+ Ctx);
Value &= 0xffff;
break;
case AVR::fixup_16_pm:
Value >>= 1; // Flash addresses are always shifted.
- adjust::unsigned_width(16, Value, std::string("port number"), Fixup, Ctx);
+ adjust::checkUnsignedWidth(16, Value, std::string("port number"), Fixup,
+ Ctx);
Value &= 0xffff;
break;
@@ -512,14 +535,25 @@ bool AVRAsmBackend::writeNopData(raw_ostream &OS, uint64_t Count,
bool AVRAsmBackend::shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t Value,
const MCSubtargetInfo *STI) {
switch ((unsigned)Fixup.getKind()) {
default:
return Fixup.getKind() >= FirstLiteralRelocationKind;
+
case AVR::fixup_7_pcrel:
- case AVR::fixup_13_pcrel:
- // Always resolve relocations for PC-relative branches
- return false;
+ case AVR::fixup_13_pcrel: {
+ uint64_t ValueEx = Value;
+ uint64_t Size = AVRAsmBackend::getFixupKindInfo(Fixup.getKind()).TargetSize;
+
+ // If the jump is too large to encode it, fall back to a relocation.
+ //
+ // Note that trying to actually link that relocation *would* fail, but the
+ // hopes are that the module we're currently compiling won't be actually
+ // linked to the final binary.
+ return !adjust::adjustRelativeBranch(Size, Fixup, ValueEx, nullptr, STI);
+ }
+
case AVR::fixup_call:
return true;
}
diff --git a/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.h b/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.h
index 2337319590324d..1a9ae94f2f49e9 100644
--- a/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.h
+++ b/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.h
@@ -53,7 +53,7 @@ class AVRAsmBackend : public MCAsmBackend {
const MCSubtargetInfo *STI) const override;
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t Value,
const MCSubtargetInfo *STI) override;
private:
diff --git a/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.cpp b/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.cpp
index dd06971e1cf976..ebe12fa6afd1fd 100644
--- a/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.cpp
+++ b/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.cpp
@@ -262,6 +262,7 @@ bool CSKYAsmBackend::mayNeedRelaxation(const MCInst &Inst,
bool CSKYAsmBackend::shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t /*Value*/,
const MCSubtargetInfo * /*STI*/) {
if (Fixup.getKind() >= FirstLiteralRelocationKind)
return true;
diff --git a/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.h b/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.h
index 4b659f401d253a..faa84a6ef71d58 100644
--- a/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.h
+++ b/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.h
@@ -52,7 +52,7 @@ class CSKYAsmBackend : public MCAsmBackend {
const MCSubtargetInfo *STI) const override;
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t Value,
const MCSubtargetInfo *STI) override;
std::unique_ptr<MCObjectTargetWriter>
diff --git a/llvm/lib/Target/Hexagon/MCTargetDesc/HexagonAsmBackend.cpp b/llvm/lib/Target/Hexagon/MCTargetDesc/HexagonAsmBackend.cpp
index 7864d45d594ac5..98b1dde8fa3fc2 100644
--- a/llvm/lib/Target/Hexagon/MCTargetDesc/HexagonAsmBackend.cpp
+++ b/llvm/lib/Target/Hexagon/MCTargetDesc/HexagonAsmBackend.cpp
@@ -201,7 +201,7 @@ class HexagonAsmBackend : public MCAsmBackend {
}
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t,
const MCSubtargetInfo *STI) override {
switch(Fixup.getTargetKind()) {
default:
diff --git a/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.cpp b/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.cpp
index 0c24008301d022..eb4f6edc117a4a 100644
--- a/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.cpp
+++ b/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.cpp
@@ -251,6 +251,7 @@ bool LoongArchAsmBackend::shouldInsertFixupForCodeAlign(MCAssembler &Asm,
bool LoongArchAsmBackend::shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t,
const MCSubtargetInfo *STI) {
if (Fixup.getKind() >= FirstLiteralRelocationKind)
return true;
diff --git a/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.h b/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.h
index 9df4ff22625c68..adbfd01410a4e6 100644
--- a/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.h
+++ b/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.h
@@ -57,7 +57,7 @@ class LoongArchAsmBackend : public MCAsmBackend {
MCAlignFragment &AF) override;
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t Value,
const MCSubtargetInfo *STI) override;
unsigned getNumFixupKinds() const override {
diff --git a/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.cpp b/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.cpp
index 6001d9d51d16ac..4af6768b13cc90 100644
--- a/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.cpp
+++ b/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.cpp
@@ -542,6 +542,7 @@ bool MipsAsmBackend::writeNopData(raw_ostream &OS, uint64_t Count,
bool MipsAsmBackend::shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t,
const MCSubtargetInfo *STI) {
if (Fixup.getKind() >= FirstLiteralRelocationKind)
return true;
diff --git a/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.h b/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.h
index 799dd569f1ad90..3a2c5e824a53b0 100644
--- a/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.h
+++ b/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.h
@@ -55,7 +55,7 @@ class MipsAsmBackend : public MCAsmBackend {
const MCSubtargetInfo *STI) const override;
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t Value,
const MCSubtargetInfo *STI) override;
bool isMicroMips(const MCSymbol *Sym) const override;
diff --git a/llvm/lib/Target/PowerPC/MCTargetDesc/PPCAsmBackend.cpp b/llvm/lib/Target/PowerPC/MCTargetDesc/PPCAsmBackend.cpp
index ef36af1a5e6660..9077aa7de5a373 100644
--- a/llvm/lib/Target/PowerPC/MCTargetDesc/PPCAsmBackend.cpp
+++ b/llvm/lib/Target/PowerPC/MCTargetDesc/PPCAsmBackend.cpp
@@ -161,7 +161,7 @@ class PPCAsmBackend : public MCAsmBackend {
}
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t,
const MCSubtargetInfo *STI) override {
MCFixupKind Kind = Fixup.getKind();
switch ((unsigned)Kind) {
diff --git a/llvm/lib/Target/RISCV/MCTargetDesc/RISCVAsmBackend.cpp b/llvm/lib/Target/RISCV/MCTargetDesc/RISCVAsmBackend.cpp
index eab4a5e77d96e5..270c4bb3b72450 100644
--- a/llvm/lib/Target/RISCV/MCTargetDesc/RISCVAsmBackend.cpp
+++ b/llvm/lib/Target/RISCV/MCTargetDesc/RISCVAsmBackend.cpp
@@ -112,6 +112,7 @@ RISCVAsmBackend::getFixupKindInfo(MCFixupKind Kind) const {
bool RISCVAsmBackend::shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t,
const MCSubtargetInfo *STI) {
if (Fixup.getKind() >= FirstLiteralRelocationKind)
return true;
@@ -567,7 +568,7 @@ ...
[truncated]
|
@llvm/pr-subscribers-backend-risc-v Author: Patryk Wychowaniec (Patryk27) ChangesThis commit changes the branch emission logic so that instead of throwing the "branch target out of range" error, we emit a relocation instead (if that's possible). MotivationWhen rustc builds an application, it compiles all crates (i.e. libraries) that belong to the project - those crates are compiled in their entirety, even if some of their functions actually end up not being called (as dead-code elimination happens later). This poses a problem in that if any library contains a function that we won't be able to emit (e.g. it's too large), we'll crash, even if ultimately we don't care about that particular function - that's what @mbuesch stumbled upon over #118015 (comment). This used not to be a problem before #106722, but now requires extra care within LLVM. N.B. I think my approach presented here is a bit hacky, I'm open to other ideas. Patch is 27.66 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/121498.diff 24 Files Affected:
diff --git a/llvm/include/llvm/MC/MCAsmBackend.h b/llvm/include/llvm/MC/MCAsmBackend.h
index b105a294d875c5..505bd1f59dd439 100644
--- a/llvm/include/llvm/MC/MCAsmBackend.h
+++ b/llvm/include/llvm/MC/MCAsmBackend.h
@@ -96,6 +96,7 @@ class MCAsmBackend {
virtual bool shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t Value,
const MCSubtargetInfo *STI) {
return false;
}
diff --git a/llvm/lib/MC/MCAssembler.cpp b/llvm/lib/MC/MCAssembler.cpp
index 3c18d0832efcc7..3e5e0151d265e2 100644
--- a/llvm/lib/MC/MCAssembler.cpp
+++ b/llvm/lib/MC/MCAssembler.cpp
@@ -222,7 +222,7 @@ bool MCAssembler::evaluateFixup(const MCFixup &Fixup, const MCFragment *DF,
// Let the backend force a relocation if needed.
if (IsResolved &&
- getBackend().shouldForceRelocation(*this, Fixup, Target, STI)) {
+ getBackend().shouldForceRelocation(*this, Fixup, Target, Value, STI)) {
IsResolved = false;
WasForced = true;
}
diff --git a/llvm/lib/Target/AArch64/MCTargetDesc/AArch64AsmBackend.cpp b/llvm/lib/Target/AArch64/MCTargetDesc/AArch64AsmBackend.cpp
index 3ba0f2a2682855..337b81d68c93a0 100644
--- a/llvm/lib/Target/AArch64/MCTargetDesc/AArch64AsmBackend.cpp
+++ b/llvm/lib/Target/AArch64/MCTargetDesc/AArch64AsmBackend.cpp
@@ -98,7 +98,7 @@ class AArch64AsmBackend : public MCAsmBackend {
unsigned getFixupKindContainereSizeInBytes(unsigned Kind) const;
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t Value,
const MCSubtargetInfo *STI) override;
};
@@ -520,6 +520,7 @@ bool AArch64AsmBackend::writeNopData(raw_ostream &OS, uint64_t Count,
bool AArch64AsmBackend::shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t,
const MCSubtargetInfo *STI) {
unsigned Kind = Fixup.getKind();
if (Kind >= FirstLiteralRelocationKind)
diff --git a/llvm/lib/Target/AMDGPU/MCTargetDesc/AMDGPUAsmBackend.cpp b/llvm/lib/Target/AMDGPU/MCTargetDesc/AMDGPUAsmBackend.cpp
index 3172a83e5a1fe0..988c9002efee80 100644
--- a/llvm/lib/Target/AMDGPU/MCTargetDesc/AMDGPUAsmBackend.cpp
+++ b/llvm/lib/Target/AMDGPU/MCTargetDesc/AMDGPUAsmBackend.cpp
@@ -53,7 +53,7 @@ class AMDGPUAsmBackend : public MCAsmBackend {
std::optional<MCFixupKind> getFixupKind(StringRef Name) const override;
const MCFixupKindInfo &getFixupKindInfo(MCFixupKind Kind) const override;
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, uint64_t Value,
const MCSubtargetInfo *STI) override;
};
@@ -197,6 +197,7 @@ const MCFixupKindInfo &AMDGPUAsmBackend::getFixupKindInfo(
bool AMDGPUAsmBackend::shouldForceRelocation(const MCAssembler &,
const MCFixup &Fixup,
const MCValue &,
+ const uint64_t,
const MCSubtargetInfo *STI) {
return Fixup.getKind() >= FirstLiteralRelocationKind;
}
diff --git a/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.cpp b/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.cpp
index d0e759d3356f10..d164f87ae97c98 100644
--- a/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.cpp
+++ b/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.cpp
@@ -956,6 +956,7 @@ unsigned ARMAsmBackend::adjustFixupValue(const MCAssembler &Asm,
bool ARMAsmBackend::shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t,
const MCSubtargetInfo *STI) {
const MCSymbolRefExpr *A = Target.getSymA();
const MCSymbol *Sym = A ? &A->getSymbol() : nullptr;
diff --git a/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.h b/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.h
index f33cd8b7c2425a..2932e68cd98e56 100644
--- a/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.h
+++ b/llvm/lib/Target/ARM/MCTargetDesc/ARMAsmBackend.h
@@ -36,7 +36,7 @@ class ARMAsmBackend : public MCAsmBackend {
const MCFixupKindInfo &getFixupKindInfo(MCFixupKind Kind) const override;
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t Value,
const MCSubtargetInfo *STI) override;
unsigned adjustFixupValue(const MCAssembler &Asm, const MCFixup &Fixup,
diff --git a/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.cpp b/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.cpp
index fd35f8fcb8e7b8..8e6599ff420b7a 100644
--- a/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.cpp
+++ b/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.cpp
@@ -31,10 +31,14 @@ namespace adjust {
using namespace llvm;
-static void signed_width(unsigned Width, uint64_t Value,
- std::string Description, const MCFixup &Fixup,
- MCContext *Ctx) {
- if (!isIntN(Width, Value)) {
+static bool checkSignedWidth(unsigned Width, uint64_t Value,
+ std::string Description, const MCFixup &Fixup,
+ MCContext *Ctx) {
+ if (isIntN(Width, Value)) {
+ return true;
+ }
+
+ if (Ctx) {
std::string Diagnostic = "out of range " + Description;
int64_t Min = minIntN(Width);
@@ -45,12 +49,18 @@ static void signed_width(unsigned Width, uint64_t Value,
Ctx->reportError(Fixup.getLoc(), Diagnostic);
}
+
+ return false;
}
-static void unsigned_width(unsigned Width, uint64_t Value,
- std::string Description, const MCFixup &Fixup,
- MCContext *Ctx) {
- if (!isUIntN(Width, Value)) {
+static bool checkUnsignedWidth(unsigned Width, uint64_t Value,
+ std::string Description, const MCFixup &Fixup,
+ MCContext *Ctx) {
+ if (isUIntN(Width, Value)) {
+ return true;
+ }
+
+ if (Ctx) {
std::string Diagnostic = "out of range " + Description;
int64_t Max = maxUIntN(Width);
@@ -60,6 +70,8 @@ static void unsigned_width(unsigned Width, uint64_t Value,
Ctx->reportError(Fixup.getLoc(), Diagnostic);
}
+
+ return false;
}
/// Adjusts the value of a branch target before fixup application.
@@ -67,15 +79,16 @@ static void adjustBranch(unsigned Size, const MCFixup &Fixup, uint64_t &Value,
MCContext *Ctx) {
// We have one extra bit of precision because the value is rightshifted by
// one.
- unsigned_width(Size + 1, Value, std::string("branch target"), Fixup, Ctx);
+ checkUnsignedWidth(Size + 1, Value, std::string("branch target"), Fixup, Ctx);
// Rightshifts the value by one.
AVR::fixups::adjustBranchTarget(Value);
}
/// Adjusts the value of a relative branch target before fixup application.
-static void adjustRelativeBranch(unsigned Size, const MCFixup &Fixup,
- uint64_t &Value, MCContext *Ctx) {
+static bool adjustRelativeBranch(unsigned Size, const MCFixup &Fixup,
+ uint64_t &Value, MCContext *Ctx,
+ const MCSubtargetInfo *STI = nullptr) {
// Jumps are relative to the current instruction.
Value -= 2;
@@ -83,8 +96,11 @@ static void adjustRelativeBranch(unsigned Size, const MCFixup &Fixup,
// one.
Size += 1;
- if (!isIntN(Size, Value) &&
- Ctx->getSubtargetInfo()->hasFeature(AVR::FeatureWrappingRjmp)) {
+ if (!STI) {
+ STI = Ctx->getSubtargetInfo();
+ }
+
+ if (!isIntN(Size, Value) && STI->hasFeature(AVR::FeatureWrappingRjmp)) {
const int32_t FlashSize = 0x2000;
int32_t SignedValue = Value;
@@ -96,10 +112,15 @@ static void adjustRelativeBranch(unsigned Size, const MCFixup &Fixup,
}
}
- signed_width(Size, Value, std::string("branch target"), Fixup, Ctx);
+ if (!checkSignedWidth(Size, Value, std::string("branch target"), Fixup,
+ Ctx)) {
+ return false;
+ }
// Rightshifts the value by one.
AVR::fixups::adjustBranchTarget(Value);
+
+ return true;
}
/// 22-bit absolute fixup.
@@ -152,7 +173,7 @@ static void fixup_13_pcrel(unsigned Size, const MCFixup &Fixup, uint64_t &Value,
/// Resolves to:
/// 10q0 qq10 0000 1qqq
static void fixup_6(const MCFixup &Fixup, uint64_t &Value, MCContext *Ctx) {
- unsigned_width(6, Value, std::string("immediate"), Fixup, Ctx);
+ checkUnsignedWidth(6, Value, std::string("immediate"), Fixup, Ctx);
Value = ((Value & 0x20) << 8) | ((Value & 0x18) << 7) | (Value & 0x07);
}
@@ -164,7 +185,7 @@ static void fixup_6(const MCFixup &Fixup, uint64_t &Value, MCContext *Ctx) {
/// 0000 0000 kk00 kkkk
static void fixup_6_adiw(const MCFixup &Fixup, uint64_t &Value,
MCContext *Ctx) {
- unsigned_width(6, Value, std::string("immediate"), Fixup, Ctx);
+ checkUnsignedWidth(6, Value, std::string("immediate"), Fixup, Ctx);
Value = ((Value & 0x30) << 2) | (Value & 0x0f);
}
@@ -174,7 +195,7 @@ static void fixup_6_adiw(const MCFixup &Fixup, uint64_t &Value,
/// Resolves to:
/// 0000 0000 AAAA A000
static void fixup_port5(const MCFixup &Fixup, uint64_t &Value, MCContext *Ctx) {
- unsigned_width(5, Value, std::string("port number"), Fixup, Ctx);
+ checkUnsignedWidth(5, Value, std::string("port number"), Fixup, Ctx);
Value &= 0x1f;
@@ -186,7 +207,7 @@ static void fixup_port5(const MCFixup &Fixup, uint64_t &Value, MCContext *Ctx) {
/// Resolves to:
/// 1011 0AAd dddd AAAA
static void fixup_port6(const MCFixup &Fixup, uint64_t &Value, MCContext *Ctx) {
- unsigned_width(6, Value, std::string("port number"), Fixup, Ctx);
+ checkUnsignedWidth(6, Value, std::string("port number"), Fixup, Ctx);
Value = ((Value & 0x30) << 5) | (Value & 0x0f);
}
@@ -197,7 +218,7 @@ static void fixup_port6(const MCFixup &Fixup, uint64_t &Value, MCContext *Ctx) {
/// 1010 ikkk dddd kkkk
static void fixup_lds_sts_16(const MCFixup &Fixup, uint64_t &Value,
MCContext *Ctx) {
- unsigned_width(7, Value, std::string("immediate"), Fixup, Ctx);
+ checkUnsignedWidth(7, Value, std::string("immediate"), Fixup, Ctx);
Value = ((Value & 0x70) << 8) | (Value & 0x0f);
}
@@ -331,13 +352,15 @@ void AVRAsmBackend::adjustFixupValue(const MCFixup &Fixup,
adjust::ldi::ms8(Size, Fixup, Value, Ctx);
break;
case AVR::fixup_16:
- adjust::unsigned_width(16, Value, std::string("port number"), Fixup, Ctx);
+ adjust::checkUnsignedWidth(16, Value, std::string("port number"), Fixup,
+ Ctx);
Value &= 0xffff;
break;
case AVR::fixup_16_pm:
Value >>= 1; // Flash addresses are always shifted.
- adjust::unsigned_width(16, Value, std::string("port number"), Fixup, Ctx);
+ adjust::checkUnsignedWidth(16, Value, std::string("port number"), Fixup,
+ Ctx);
Value &= 0xffff;
break;
@@ -512,14 +535,25 @@ bool AVRAsmBackend::writeNopData(raw_ostream &OS, uint64_t Count,
bool AVRAsmBackend::shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t Value,
const MCSubtargetInfo *STI) {
switch ((unsigned)Fixup.getKind()) {
default:
return Fixup.getKind() >= FirstLiteralRelocationKind;
+
case AVR::fixup_7_pcrel:
- case AVR::fixup_13_pcrel:
- // Always resolve relocations for PC-relative branches
- return false;
+ case AVR::fixup_13_pcrel: {
+ uint64_t ValueEx = Value;
+ uint64_t Size = AVRAsmBackend::getFixupKindInfo(Fixup.getKind()).TargetSize;
+
+ // If the jump is too large to encode it, fall back to a relocation.
+ //
+ // Note that trying to actually link that relocation *would* fail, but the
+ // hopes are that the module we're currently compiling won't be actually
+ // linked to the final binary.
+ return !adjust::adjustRelativeBranch(Size, Fixup, ValueEx, nullptr, STI);
+ }
+
case AVR::fixup_call:
return true;
}
diff --git a/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.h b/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.h
index 2337319590324d..1a9ae94f2f49e9 100644
--- a/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.h
+++ b/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.h
@@ -53,7 +53,7 @@ class AVRAsmBackend : public MCAsmBackend {
const MCSubtargetInfo *STI) const override;
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t Value,
const MCSubtargetInfo *STI) override;
private:
diff --git a/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.cpp b/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.cpp
index dd06971e1cf976..ebe12fa6afd1fd 100644
--- a/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.cpp
+++ b/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.cpp
@@ -262,6 +262,7 @@ bool CSKYAsmBackend::mayNeedRelaxation(const MCInst &Inst,
bool CSKYAsmBackend::shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t /*Value*/,
const MCSubtargetInfo * /*STI*/) {
if (Fixup.getKind() >= FirstLiteralRelocationKind)
return true;
diff --git a/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.h b/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.h
index 4b659f401d253a..faa84a6ef71d58 100644
--- a/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.h
+++ b/llvm/lib/Target/CSKY/MCTargetDesc/CSKYAsmBackend.h
@@ -52,7 +52,7 @@ class CSKYAsmBackend : public MCAsmBackend {
const MCSubtargetInfo *STI) const override;
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t Value,
const MCSubtargetInfo *STI) override;
std::unique_ptr<MCObjectTargetWriter>
diff --git a/llvm/lib/Target/Hexagon/MCTargetDesc/HexagonAsmBackend.cpp b/llvm/lib/Target/Hexagon/MCTargetDesc/HexagonAsmBackend.cpp
index 7864d45d594ac5..98b1dde8fa3fc2 100644
--- a/llvm/lib/Target/Hexagon/MCTargetDesc/HexagonAsmBackend.cpp
+++ b/llvm/lib/Target/Hexagon/MCTargetDesc/HexagonAsmBackend.cpp
@@ -201,7 +201,7 @@ class HexagonAsmBackend : public MCAsmBackend {
}
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t,
const MCSubtargetInfo *STI) override {
switch(Fixup.getTargetKind()) {
default:
diff --git a/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.cpp b/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.cpp
index 0c24008301d022..eb4f6edc117a4a 100644
--- a/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.cpp
+++ b/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.cpp
@@ -251,6 +251,7 @@ bool LoongArchAsmBackend::shouldInsertFixupForCodeAlign(MCAssembler &Asm,
bool LoongArchAsmBackend::shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t,
const MCSubtargetInfo *STI) {
if (Fixup.getKind() >= FirstLiteralRelocationKind)
return true;
diff --git a/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.h b/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.h
index 9df4ff22625c68..adbfd01410a4e6 100644
--- a/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.h
+++ b/llvm/lib/Target/LoongArch/MCTargetDesc/LoongArchAsmBackend.h
@@ -57,7 +57,7 @@ class LoongArchAsmBackend : public MCAsmBackend {
MCAlignFragment &AF) override;
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t Value,
const MCSubtargetInfo *STI) override;
unsigned getNumFixupKinds() const override {
diff --git a/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.cpp b/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.cpp
index 6001d9d51d16ac..4af6768b13cc90 100644
--- a/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.cpp
+++ b/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.cpp
@@ -542,6 +542,7 @@ bool MipsAsmBackend::writeNopData(raw_ostream &OS, uint64_t Count,
bool MipsAsmBackend::shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t,
const MCSubtargetInfo *STI) {
if (Fixup.getKind() >= FirstLiteralRelocationKind)
return true;
diff --git a/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.h b/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.h
index 799dd569f1ad90..3a2c5e824a53b0 100644
--- a/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.h
+++ b/llvm/lib/Target/Mips/MCTargetDesc/MipsAsmBackend.h
@@ -55,7 +55,7 @@ class MipsAsmBackend : public MCAsmBackend {
const MCSubtargetInfo *STI) const override;
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t Value,
const MCSubtargetInfo *STI) override;
bool isMicroMips(const MCSymbol *Sym) const override;
diff --git a/llvm/lib/Target/PowerPC/MCTargetDesc/PPCAsmBackend.cpp b/llvm/lib/Target/PowerPC/MCTargetDesc/PPCAsmBackend.cpp
index ef36af1a5e6660..9077aa7de5a373 100644
--- a/llvm/lib/Target/PowerPC/MCTargetDesc/PPCAsmBackend.cpp
+++ b/llvm/lib/Target/PowerPC/MCTargetDesc/PPCAsmBackend.cpp
@@ -161,7 +161,7 @@ class PPCAsmBackend : public MCAsmBackend {
}
bool shouldForceRelocation(const MCAssembler &Asm, const MCFixup &Fixup,
- const MCValue &Target,
+ const MCValue &Target, const uint64_t,
const MCSubtargetInfo *STI) override {
MCFixupKind Kind = Fixup.getKind();
switch ((unsigned)Kind) {
diff --git a/llvm/lib/Target/RISCV/MCTargetDesc/RISCVAsmBackend.cpp b/llvm/lib/Target/RISCV/MCTargetDesc/RISCVAsmBackend.cpp
index eab4a5e77d96e5..270c4bb3b72450 100644
--- a/llvm/lib/Target/RISCV/MCTargetDesc/RISCVAsmBackend.cpp
+++ b/llvm/lib/Target/RISCV/MCTargetDesc/RISCVAsmBackend.cpp
@@ -112,6 +112,7 @@ RISCVAsmBackend::getFixupKindInfo(MCFixupKind Kind) const {
bool RISCVAsmBackend::shouldForceRelocation(const MCAssembler &Asm,
const MCFixup &Fixup,
const MCValue &Target,
+ const uint64_t,
const MCSubtargetInfo *STI) {
if (Fixup.getKind() >= FirstLiteralRelocationKind)
return true;
@@ -567,7 +568,7 @@ ...
[truncated]
@llvm/pr-subscribers-mc
@llvm/pr-subscribers-backend-powerpc
✅ With the latest revision this PR passed the C/C++ code formatter.
8b17ab4 to 0f27b8d
// Jumps are relative to the current instruction.
Value -= 2;

// We have one extra bit of precision because the value is rightshifted by
// one.
Size += 1;

if (!isIntN(Size, Value) &&
    Ctx->getSubtargetInfo()->hasFeature(AVR::FeatureWrappingRjmp)) {
if (!STI) {
In no context where there is a branch should the subtarget be uncertain?
There are two contexts this function is called from:

- from within `adjustFixupValue()`, where we've got `MCContext *Ctx`,
- from within `shouldForceRelocation()`, where we've got only `const MCSubtargetInfo *STI`.

This `if (!STI)` part is invoked when we're being called from `adjustFixupValue()` and it just fetches `STI` from the `Ctx`, since it's available there. Alternatively we could adjust the call-sites of this function to be `(..., Ctx, Ctx->getSubtargetInfo())`, but I thought automatic substitution would be less error-prone.
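For reference, the pattern under discussion, condensed from the AVR diff in this PR (only a sketch: the range check and the wrapping-rjmp handling are elided, and the example call in the trailing comment is illustrative rather than a line from the patch):

```cpp
#include "llvm/MC/MCContext.h"
#include "llvm/MC/MCFixup.h"
#include "llvm/MC/MCSubtargetInfo.h"

using namespace llvm;

// Condensed from adjustRelativeBranch: STI defaults to null and is recovered
// from the MCContext when the caller only has a context (adjustFixupValue);
// shouldForceRelocation instead passes its own STI together with a null Ctx.
static bool adjustRelativeBranch(unsigned Size, const MCFixup &Fixup,
                                 uint64_t &Value, MCContext *Ctx,
                                 const MCSubtargetInfo *STI = nullptr) {
  if (!STI)
    STI = Ctx->getSubtargetInfo(); // only reached when Ctx is non-null
  // ... range check and wrapping logic elided ...
  return true;
}

// The alternative raised in review would drop the default and make every
// call site pass the subtarget explicitly, e.g.:
//   adjustRelativeBranch(Size, Fixup, Value, Ctx, Ctx->getSubtargetInfo());
```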
The subtarget is a function-level concept; the global one shouldn't really exist. The subtarget should be passed in from the original call site.
> The subtarget is a function-level concept

I don't think so; you specify the subtarget when you run `llc`, and it's also explicitly mentioned in `MCAsmBackendCtorTy` etc., no?
This is a legacy issue; the subtarget should always come from the function.
I agree. It would be better to fix only the relocation issue in this PR and handle the other parts in separate PRs.
I have tested with avr-gcc, and it seems avr-gcc also leaves such tasks to the linker, so I agree with this change. Please address the other concerns of @arsenm.
0f27b8d to 5578970
Okie, code adjusted 🙂
5578970 to 22bc447
22bc447 to 249c888
}

/// Adjusts the value of a branch target before fixup application.
static void adjustBranch(unsigned Size, const MCFixup &Fixup, uint64_t &Value,
                         MCContext *Ctx) {
  // We have one extra bit of precision because the value is rightshifted by
  // one.
- unsigned_width(Size + 1, Value, std::string("branch target"), Fixup, Ctx);
+ checkUnsignedWidth(Size + 1, Value, std::string("branch target"), Fixup, Ctx);
No std::string
@@ -164,7 +160,7 @@ static void fixup_6(const MCFixup &Fixup, uint64_t &Value, MCContext *Ctx) {
/// 0000 0000 kk00 kkkk
static void fixup_6_adiw(const MCFixup &Fixup, uint64_t &Value,
                         MCContext *Ctx) {
- unsigned_width(6, Value, std::string("immediate"), Fixup, Ctx);
+ checkUnsignedWidth(6, Value, std::string("immediate"), Fixup, Ctx);
No std::string throughout
249c888 to b7dc691
Alright, I've minimized the pull request - now it only contains changes regarding the relative jumps; other things will be extracted into separate PRs.
LGTM!
What's your opinion? @arsenm
Thanks! Could you backport it, @benshi001? (pinging llvm-bot should do, it's just that I don't have permissions for that)
ping @llvmbot
I think you have satisfied the threshold to apply for some permissions; please send an email to Chris Lattner.