This repository has been archived by the owner on Aug 3, 2023. It is now read-only.

Segmentation fault after wrangler executes its command #1464

Closed
orium opened this issue Jul 27, 2020 · 25 comments
Labels
bug (Something isn't working)

Comments

@orium
Member

orium commented Jul 27, 2020

🐛 Bug Report

wrangler crashes with a segmentation fault after running the given command. This apparently only happens with release builds, not debug builds. Reproducible with wrangler 1.10.3 as well as master (as of b601101), and with both rustc 1.45.0 (5c1f21c3b 2020-07-13) and rustc 1.46.0-nightly (346aec9b0 2020-07-11).

Environment

  • operating system: Arch Linux: Linux 5.7.8-arch1-1 #1 SMP PREEMPT Thu, 09 Jul 2020 16:34:01 +0000 x86_64 GNU/Linux
  • output of rustc -V: I've tried multiple versions, including the current stable: rustc 1.45.0 (5c1f21c3b 2020-07-13)
  • output of node -v: not installed
  • output of wrangler -V: reproducible in 1.10.3 and master (as of b601101)
  • contents of wrangler.toml: empty

Steps to reproduce

You can either `cargo install wrangler` from crates.io, or build master (as of b601101):

$ cargo build --release

$ ./target/release/wrangler
The Wrangler Team <wrangler@cloudflare.com>

USAGE:
    wrangler [SUBCOMMAND]

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

SUBCOMMANDS:
    kv:namespace     Interact with your Workers KV Namespaces
    kv:key           Individually manage Workers KV key-value pairs
    kv:bulk          Interact with multiple Workers KV key-value pairs at once
    route            List or delete worker routes.
    secret           Generate a secret that can be referenced in the worker script
    generate         Generate a new worker project
    init             Create a wrangler.toml for an existing project
    build            Build your worker
    preview          Preview your code temporarily on cloudflareworkers.com
    dev              Start a local server for developing your worker
    publish          Publish your worker to the orange cloud
    config           Set up wrangler with your Cloudflare account
    subdomain        Configure your workers.dev subdomain
    whoami           Retrieve your user info and test your auth config
    tail             Aggregate logs from production worker
    help            Prints this message or the help of the given subcommand(s)
zsh: segmentation fault (core dumped)  ./target/release/wrangler

Running inside lldb:

$ lldb ./target/release/wrangler
(lldb) target create "./target/release/wrangler"
Current executable set to '/home/orium/programming/cloudflare/wrangler/target/release/wrangler' (x86_64).
(lldb) r
Process 550702 launched: '/home/orium/programming/cloudflare/wrangler/target/release/wrangler' (x86_64)
 wrangler 1.10.3
The Wrangler Team <wrangler@cloudflare.com>

USAGE:
    wrangler [SUBCOMMAND]

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

SUBCOMMANDS:
    kv:namespace     Interact with your Workers KV Namespaces
    kv:key           Individually manage Workers KV key-value pairs
    kv:bulk          Interact with multiple Workers KV key-value pairs at once
    route            List or delete worker routes.
    secret           Generate a secret that can be referenced in the worker script
    generate         Generate a new worker project
    init             Create a wrangler.toml for an existing project
    build            Build your worker
    preview          Preview your code temporarily on cloudflareworkers.com
    dev              Start a local server for developing your worker
    publish          Publish your worker to the orange cloud
    config           Set up wrangler with your Cloudflare account
    subdomain        Configure your workers.dev subdomain
    whoami           Retrieve your user info and test your auth config
    tail             Aggregate logs from production worker
    help            Prints this message or the help of the given subcommand(s)
Process 550702 stopped
* thread #3, name = 'reqwest-interna', stop reason = signal SIGSEGV: invalid address (fault address: 0x18)
    frame #0: 0x00007ffff79bb8e6 libpthread.so.0`__pthread_rwlock_wrlock + 22
libpthread.so.0`__pthread_rwlock_wrlock:
->  0x7ffff79bb8e6 <+22>: movl   0x18(%rdi), %edx
    0x7ffff79bb8e9 <+25>: movl   %fs:0x2d0, %eax
    0x7ffff79bb8f1 <+33>: cmpl   %eax, %edx
    0x7ffff79bb8f3 <+35>: je     0x7ffff79bb960            ; <+144>
(lldb) bt
* thread #3, name = 'reqwest-interna', stop reason = signal SIGSEGV: invalid address (fault address: 0x18)
  * frame #0: 0x00007ffff79bb8e6 libpthread.so.0`__pthread_rwlock_wrlock + 22
    frame #1: 0x00007ffff7bbd02a libcrypto.so.1.1`CRYPTO_THREAD_write_lock + 10
    frame #2: 0x00007ffff7b50ef0 libcrypto.so.1.1`OPENSSL_init_crypto + 800
    frame #3: 0x00007ffff7ce61f2 libssl.so.1.1`OPENSSL_init_ssl + 50
    frame #4: 0x0000555555d4ab8b wrangler`std::sync::once::Once::call_inner::hcff3709ae0293da4 at once.rs:416:21
    frame #5: 0x0000555555be6ac0 wrangler`openssl_sys::init::hb7d4ee155f3460e3 + 64
    frame #6: 0x0000555555be64a3 wrangler`openssl::ssl::connector::ctx::h1740157059bc9ac1 + 19
    frame #7: 0x0000555555be6564 wrangler`openssl::ssl::connector::SslConnector::builder::hbfe4bf78845fde53 + 20
    frame #8: 0x0000555555be241c wrangler`native_tls::imp::TlsConnector::new::h15c5ba45815bd72a + 60
    frame #9: 0x0000555555be2b65 wrangler`native_tls::TlsConnectorBuilder::build::ha2b64693ad06c741 + 21
    frame #10: 0x0000555555b091f4 wrangler`reqwest::connect::Connector::new_default_tls::h602c88545d6f5fda + 52
    frame #11: 0x0000555555ba6427 wrangler`reqwest::async_impl::client::ClientBuilder::build::hb12da7537dfe3e0a + 1223
    frame #12: 0x0000555555b3f008 wrangler`_$LT$core..future..from_generator..GenFuture$LT$T$GT$$u20$as$u20$core..future..future..Future$GT$::poll::hb8ed2143500ac64b + 88
    frame #13: 0x0000555555b5ea2f wrangler`tokio::macros::scoped_tls::ScopedKey$LT$T$GT$::set::ha65e6a297784a429 + 287
    frame #14: 0x0000555555b8622d wrangler`tokio::runtime::basic_scheduler::BasicScheduler$LT$P$GT$::block_on::hb3a58c78048f2d8b + 285
    frame #15: 0x0000555555b147ca wrangler`tokio::runtime::context::enter::h5e9f6f9d15b6ebb6 + 186
    frame #16: 0x0000555555b9276b wrangler`std::sys_common::backtrace::__rust_begin_short_backtrace::h5d994d9dea05c497 + 1371
    frame #17: 0x0000555555b02505 wrangler`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h4e89c2fa4cd2a1a3 + 101
    frame #18: 0x0000555555d5842a wrangler`std::sys::unix::thread::Thread::new::thread_start::h3b6d8a0cd87a87c6 [inlined] _$LT$alloc..boxed..Box$LT$F$GT$$u20$as$u20$core..ops..function..FnOnce$LT$A$GT$$GT$::call_once::hcf205bcf9b46c587 at boxed.rs:1076:9
    frame #19: 0x0000555555d58424 wrangler`std::sys::unix::thread::Thread::new::thread_start::h3b6d8a0cd87a87c6 [inlined] _$LT$alloc..boxed..Box$LT$F$GT$$u20$as$u20$core..ops..function..FnOnce$LT$A$GT$$GT$::call_once::h2d53e2246128f5d8 at boxed.rs:1076
    frame #20: 0x0000555555d5841b wrangler`std::sys::unix::thread::Thread::new::thread_start::h3b6d8a0cd87a87c6 at thread.rs:87
    frame #21: 0x00007ffff79b6422 libpthread.so.0`start_thread + 226
    frame #22: 0x00007ffff7e41bf3 libc.so.6`__clone + 67

Also worth noting that valgrind doesn't report any invalid memory accesses.

@kentonv
Member

kentonv commented Jul 27, 2020

I've noticed this too (on Linux). Seems to be common after error messages, but it doesn't always happen.

@EverlastingBugstopper
Contributor

EverlastingBugstopper commented Jul 27, 2020

are you able to reproduce this issue if you build with the command cargo build --release --features vendored-openssl or cargo install --features vendored-openssl?

@EverlastingBugstopper added the investigate and bug labels Jul 27, 2020
@orium
Member Author

orium commented Jul 27, 2020

> are you able to reproduce this issue if you build with the command cargo build --release --features vendored-openssl or cargo install --features vendored-openssl?

Still have a segmentation fault.

@orium
Member Author

orium commented Jul 27, 2020

It seems the issue goes away if I have a ~/.wrangler/version.toml...

@orium
Member Author

orium commented Jul 27, 2020

I managed to minimize the bug to this:

fn main() {
    std::thread::spawn(move || {
        reqwest::blocking::Client::new()
            .get("https://crates.io/api/v1/crates/wrangler")
            .send().unwrap();
    });

    // Just so we write some memory to trigger the bug.
    let mut v = Vec::new();
    for i in 0..100000 {
        v.push(i);
    }
}

Still get the segmentation fault almost all the time. It looks like it might be caused by sfackler/rust-openssl#1293.

Ref rustwasm/wasm-pack#823

@EverlastingBugstopper
Contributor

That does seem to be the issue, doesn't it? Nice job tracking it down! If your PR gets merged upstream and a release is cut, we'll be sure to update the dependency. Thanks a bunch @orium!

@nataliescottdavidson
Contributor

Hey @orium - I'm not able to reproduce this as is. Could you include your Cargo.toml for the minified example?

@orium
Member Author

orium commented Aug 26, 2020

I don't think I have it anymore, but IIRC it only had a dependency on reqwest and nothing else. It's normal for some systems not to reproduce this easily: it's a non-deterministic race condition that depends on thread scheduling and timing.

In any case, the issue is fixed upstream (sfackler/rust-openssl#1293); once we get a new release of rust-openssl (a transitive dependency of reqwest) we should be good.
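The timing window comes from main returning while the spawned thread is still initializing OpenSSL, so process teardown races the worker. A std-only sketch of the defensive pattern (joining the thread; the network call is replaced by a placeholder computation, so this does not itself reproduce the crash):

```rust
use std::thread;

fn main() {
    // In the minimized repro, this thread creates a reqwest client
    // (which initializes OpenSSL) while `main` returns and the process
    // begins teardown. Here the work is only a placeholder computation.
    let worker = thread::spawn(|| (0..=1000u64).sum::<u64>());

    // Joining before `main` returns means teardown can no longer race
    // the worker, which closes the timing window.
    let result = worker.join().expect("worker thread panicked");
    println!("{}", result); // prints 500500
}
```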

@codenoid

codenoid commented Oct 1, 2020

yes, reproduced with cargo install wrangler on Ubuntu 20.04.1 LTS:

> cargo --version
cargo 1.45.0 (744bd1fbb 2020-06-15)
> rustc --version
rustc 1.45.0 (5c1f21c3b 2020-07-13)

Wait: while writing this comment, I got a different error message just from running the wrangler command

 >  wrangler
👷 ✨  wrangler 1.11.0
The Wrangler Team <wrangler@cloudflare.com>

USAGE:
    wrangler [SUBCOMMAND]

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

SUBCOMMANDS:
    kv:namespace    🗂️  Interact with your Workers KV Namespaces
    kv:key          🔑  Individually manage Workers KV key-value pairs
    kv:bulk         💪  Interact with multiple Workers KV key-value pairs at once
    route           ➡️  List or delete worker routes.
    secret          🤫  Generate a secret that can be referenced in the worker script
    generate        👯  Generate a new worker project
    init            📥  Create a wrangler.toml for an existing project
    build           🦀  Build your worker
    preview         🔬  Preview your code temporarily on cloudflareworkers.com
    dev             👂  Start a local server for developing your worker
    publish         🆙  Publish your worker to the orange cloud
    config          🕵️  Authenticate Wrangler with a Cloudflare API Token or Global API Key
    subdomain       👷  Configure your workers.dev subdomain
    whoami          🕵️  Retrieve your user info and test your auth config
    tail            🦚  Aggregate logs from production worker
    login           🔓 Authenticate Wrangler with your Cloudflare username and password
    help            Prints this message or the help of the given subcommand(s)
thread '<unnamed>' panicked at 'Client::new(): reqwest::Error { kind: Builder, source: Normal(ErrorStack([])) }', /home/ken/.cargo/registry/src/github.com-1ecc6299db9ec823/reqwest-0.10.8/src/blocking/client.rs:575:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Update

okay, this is weird

[Screenshot from 2020-10-01 13-01-12]

@orium
Member Author

orium commented Oct 1, 2020

@codenoid I think the

thread '<unnamed>' panicked at 'Client::new(): reqwest::Error { kind: Builder, source: Normal(ErrorStack([])) }', /home/ken/.cargo/registry/src/github.com-1ecc6299db9ec823/reqwest-0.10.8/src/blocking/client.rs:575:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

is caused by the same issue, so the upstream fix in rust-openssl should fix that as well.

@orium
Member Author

orium commented Jan 18, 2021

This is now fixed since we are using openssl v0.10.32, which has a fix for this.

@xortive
Contributor

xortive commented Feb 24, 2021

This is still a problem on the latest version of wrangler (despite it being on openssl v0.10.32). It can manifest as either a segfault or the panic in Client::new().

@xortive
Contributor

xortive commented Feb 24, 2021

I tried this hack (sfackler/rust-openssl#1174 (comment)) and it was unsuccessful; I can still reproduce a segfault with a fully statically linked binary roughly 1 in 10 invocations.

EDIT: backtrace

#0  0x00007f0eb7328916 in pthread_rwlock_trywrlock ()
#1  0x00007f0eb73288a7 in pthread_rwlock_timedwrlock ()
#2  0x00007f0eb779fd40 in tlsv1_3_server_method_data ()
#3  0x0000555555776f60 in ?? ()
#4  0x00007f0eb70c7ea9 in CRYPTO_THREAD_write_lock ()
#5  0x00007f0eb709a4b2 in RAND_get_rand_method ()
#6  0x00007f0eb709a59f in RAND_priv_bytes ()
#7  0x00007f0eb6ffe8a1 in SSL_CTX_new ()
#8  0x00007f0eb6feb30c in openssl::ssl::connector::ctx ()
#9  0x00007f0eb6feb3d4 in openssl::ssl::connector::SslConnector::builder ()
#10 0x00007f0eb6fe50ec in native_tls::imp::TlsConnector::new ()
#11 0x00007f0eb6fe58a5 in native_tls::TlsConnectorBuilder::build ()
#12 0x00007f0eb6f1f136 in reqwest::connect::Connector::new_default_tls ()
#13 0x00007f0eb6efc237 in reqwest::async_impl::client::ClientBuilder::build ()
#14 0x00007f0eb6f4fd28 in <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll ()
#15 0x00007f0eb6f9092f in tokio::macros::scoped_tls::ScopedKey<T>::set ()
#16 0x00007f0eb6fbaf54 in tokio::runtime::basic_scheduler::BasicScheduler<P>::block_on ()
#17 0x00007f0eb6ef29e9 in tokio::runtime::context::enter ()
#18 0x00007f0eb6f0e37c in tokio::runtime::handle::Handle::enter ()
#19 0x00007f0eb6eeb754 in std::sys_common::backtrace::__rust_begin_short_backtrace ()
#20 0x00007f0eb6f146c5 in core::ops::function::FnOnce::call_once{{vtable-shim}} ()
#21 0x00007f0eb72f875a in alloc::boxed::{{impl}}::call_once<(),FnOnce<()>> () at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/alloc/src/boxed.rs:1042
#22 alloc::boxed::{{impl}}::call_once<(),alloc::boxed::Box<FnOnce<()>>> () at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/alloc/src/boxed.rs:1042
#23 std::sys::unix::thread::{{impl}}::new::thread_start () at library/std/src/sys/unix/thread.rs:87
#24 0x00007f0eb73233be in start ()
#25 0x00007f0eb64d2b20 in ?? ()
#26 0x0000000000000000 in ?? ()

@vberlier

vberlier commented Oct 1, 2021

I just tried running wrangler generate --help and it looks like I'm getting the same thing:

wrangler-generate 1.19.3
Generate a new worker project

USAGE:
    wrangler generate [FLAGS] [OPTIONS] [ARGS]

FLAGS:
    -h, --help       Prints help information
    -s, --site       Initializes a Workers Sites project. Overrides 'type' and 'template'
        --verbose    Toggle verbose output (when applicable)

OPTIONS:
    -c, --config <config>    Path to configuration file [default: wrangler.toml]
    -e, --env <env>          Environment to perform a command on
    -t, --type <type>        The type of project you want generated

ARGS:
    <name>        The name of your worker! [default: worker]
    <template>    A link to a GitHub template! Defaults to https://github.com/cloudflare/worker-template
[1]    18492 segmentation fault  wrangler generate --help

The segmentation fault isn't systematic for me either; it happens about 50% of the time.

@manen

manen commented Nov 1, 2021

I GDB'd the error and this is what I got; not sure what to make of it:

Reading symbols from /home/manen/.cargo/bin/wrangler...
(gdb) run generate server https://github.com/cloudflare/rustwasm-worker-template
Starting program: /home/manen/.cargo/bin/wrangler generate server https://github.com/cloudflare/rustwasm-worker-template
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
[New Thread 0x7ffff72ff640 (LWP 30366)]
[Thread 0x7ffff72ff640 (LWP 30366) exited]
⬇️   Installing cargo-generate v0.5.0...
[New Thread 0x7ffff70fe640 (LWP 30367)]
[Thread 0x7ffff70fe640 (LWP 30367) exited]

Thread 1 "wrangler" received signal SIGSEGV, Segmentation fault.
0x00007ffff7aa02c3 in SSL_get_peer_certificate () from /usr/lib/libssl.so.1.1

@nashley

nashley commented Nov 12, 2021

Here's a backtrace from gdb:

Starting program: /home/nick/.cargo/bin/wrangler generate --type=rust rate_limit_cloudflare_worker
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
[New Thread 0x7ffff72fa640 (LWP 1109341)]
[New Thread 0x7ffff70f9640 (LWP 1109342)]
⬇️   Installing cargo-generate v0.5.0...
[New Thread 0x7ffff6ef8640 (LWP 1109343)]
[Thread 0x7ffff6ef8640 (LWP 1109343) exited]
[New Thread 0x7ffff6ef8640 (LWP 1109344)]

Thread 1 "wrangler" received signal SIGSEGV, Segmentation fault.
0x00007ffff7a9b2c3 in SSL_get_peer_certificate () from /usr/lib/libssl.so.1.1
(gdb) bt full
#0  0x00007ffff7a9b2c3 in SSL_get_peer_certificate () from /usr/lib/libssl.so.1.1
No symbol table info available.
#1  0x00007ffff7f6fd66 in ?? () from /usr/lib/libcurl.so.4
No symbol table info available.
#2  0x00007ffff7f73f05 in ?? () from /usr/lib/libcurl.so.4
No symbol table info available.
#3  0x00007ffff7f74fd7 in ?? () from /usr/lib/libcurl.so.4
No symbol table info available.
#4  0x00007ffff7f2c149 in ?? () from /usr/lib/libcurl.so.4
No symbol table info available.
#5  0x00007ffff7f45508 in ?? () from /usr/lib/libcurl.so.4
No symbol table info available.
#6  0x00007ffff7f46b06 in curl_multi_perform () from /usr/lib/libcurl.so.4
No symbol table info available.
#7  0x00007ffff7f1ed0c in curl_easy_perform () from /usr/lib/libcurl.so.4
No symbol table info available.
#8  0x0000555555bac2aa in curl::easy::handler::Easy2<H>::perform ()
No symbol table info available.
#9  0x0000555555baadb2 in curl::easy::handle::Transfer::perform ()
No symbol table info available.
#10 0x0000555555ba3162 in binary_install::curl ()
No symbol table info available.
#11 0x0000555555b9f741 in binary_install::Cache::_download ()
No symbol table info available.
#12 0x0000555555b9f32a in binary_install::Cache::download_version ()
No symbol table info available.
#13 0x0000000000000001 in ?? ()
No symbol table info available.
#14 0x00005555565c2180 in ?? ()
No symbol table info available.
#15 0x0000000000000070 in ?? ()
No symbol table info available.
#16 0x00005555565bd970 in ?? ()
No symbol table info available.
#17 0x0000000000000005 in ?? ()
No symbol table info available.
#18 0x0000000000000005 in ?? ()
No symbol table info available.
#19 0x000000000000000e in ?? ()
No symbol table info available.
#20 0x0000555555ab7d3f in wrangler::install::install ()
No symbol table info available.
#21 0x0000555555ab71ab in wrangler::install::install_cargo_generate ()
No symbol table info available.
#22 0x00007fffffffd688 in ?? ()
No symbol table info available.
#23 0x0000000000000000 in ?? ()
No symbol table info available.
(gdb) 
> openssl version
OpenSSL 1.1.1l  24 Aug 2021
> cargo --version
cargo 1.56.0 (4ed5d137b 2021-10-04)
> wrangler --version
wrangler 1.19.5

I also confirmed the same behavior in a fresh Arch Linux install with minimal dependencies (vim, grub, dhcpcd, base-devel, rustup in addition to base, linux, linux-firmware):
[Screenshot: 2021-11-12-225653_516x130_scrot]

It also happens even if I build wrangler myself in debug mode:

> wrangler/target/debug/wrangler generate --type=rust rate_limit_cloudflare_worker
⬇️   Installing cargo-generate v0.5.0...
fish: Job 1, 'wrangler/target/debug/wrangler…' terminated by signal SIGSEGV (Address boundary error)

The server it talks to seems fine:

> curl -vvv https://workers.cloudflare.com/get-binary/ashleygwilliams/cargo-generate/v0.5.0/x86_64-unknown-linux-musl.tar.gz
*   Trying 104.16.132.9:443...
* Connected to workers.cloudflare.com (104.16.132.9) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
...

@nashley

nashley commented Nov 13, 2021

I don't understand why cargo isn't just used to pull in cargo-generate and wasm-pack. I'm not a fan of software downloading binaries from the internet and running them without even asking me.

I was able to modify src/install/mod.rs to return a static PathBuf pointing to my installed cargo-generate (v0.11.0) and wasm-pack (v0.10.1) binaries (change the paths as appropriate):

diff --git a/src/install/mod.rs b/src/install/mod.rs
index d7418ad..cc6f627 100644
--- a/src/install/mod.rs
+++ b/src/install/mod.rs
@@ -27,9 +27,7 @@ pub fn install_cargo_generate() -> Result<PathBuf> {
     let tool_author = "ashleygwilliams";
     let is_binary = true;
     let version = Version::parse(dependencies::GENERATE_VERSION)?;
-    install(tool_name, tool_author, is_binary, version)?
-        .binary(tool_name)
-        .map_err(|e| anyhow::Error::from(e.compat()))
+    Ok(PathBuf::from(r"/home/nick/.cargo/bin/cargo-generate"))
 }
 
 pub fn install_wasm_pack() -> Result<PathBuf> {
@@ -37,9 +35,7 @@ pub fn install_wasm_pack() -> Result<PathBuf> {
     let tool_author = "rustwasm";
     let is_binary = true;
     let version = Version::parse(dependencies::WASM_PACK_VERSION)?;
-    install(tool_name, tool_author, is_binary, version)?
-        .binary(tool_name)
-        .map_err(|e| anyhow!(e.compat()))
+    Ok(PathBuf::from(r"/home/nick/.cargo/bin/wasm-pack"))
 }
 
 pub fn install(

That let me run ~/wrangler/target/debug/wrangler generate --type=rust test and ~/wrangler/target/debug/wrangler config.

I then got a similar SIGSEGV when using ~/wrangler/target/debug/wrangler build. Once I downgraded wasm-pack to 0.10.0 and ran rustup target add wasm32-unknown-unknown, I was able to run RUST_STACKTRACE=1 ~/wrangler/target/debug/wrangler build. It still segfaults, but now it manages to continue.

~/wrangler/target/debug/wrangler publish fails during cargo install -q worker-build && worker-build --release, and I haven't yet found a way to run worker-build --release without it core dumping:

> ~/wrangler/target/debug/wrangler publish
🌀  Running cargo install -q worker-build && worker-build --release
[INFO]: Checking for the Wasm target...
[INFO]: Compiling to Wasm...
    Finished release [optimized] target(s) in 0.05s
[INFO]: Installing wasm-bindgen...
Error: wasm-pack exited with status signal: 11 (core dumped)
Error: Build failed! Status Code: 1

@nashley

nashley commented Nov 13, 2021

Downgrading to wasm-pack v0.9.1 fixes the wrangler build/wrangler publish issue, so the only outstanding problem is that wrangler downloads outdated binaries instead of using the already-installed versions (adding them as cargo dependencies is blocked on rust-lang/cargo#9096). My patch works, but it's ugly, and I'm not particularly inclined to submit a proper PR.
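A less hard-coded variant of the patch above could search $PATH for an existing binary before falling back to downloading one. This is a hypothetical std-only helper (find_in_path is not a wrangler function; a real fix would live in src/install/mod.rs and handle Windows executable suffixes):

```rust
use std::env;
use std::path::PathBuf;

// Hypothetical helper: look through each $PATH directory for an
// already-installed tool, much like `which cargo-generate` would,
// before downloading a copy. Unix-style; Windows would also need
// to check for an `.exe` suffix.
fn find_in_path(tool: &str) -> Option<PathBuf> {
    let path_var = env::var_os("PATH")?;
    env::split_paths(&path_var)
        .map(|dir| dir.join(tool))
        .find(|candidate| candidate.is_file())
}

fn main() {
    match find_in_path("cargo-generate") {
        Some(p) => println!("using installed binary at {}", p.display()),
        None => println!("not found; would fall back to download"),
    }
}
```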

@chand1012

Hi all, I'm having a similar issue on Arch Linux. I installed the latest (v1.19.5) version of wrangler via cargo, but when I try to generate a new Rust project, I get this error.

❯ wrangler generate wasm-test -t rust
⬇️   Installing cargo-generate v0.5.0...
[1]    3638 segmentation fault (core dumped)  wrangler generate wasm-test -t rust

I was able to generate a new project when I installed the node version, but then when I try to run a development server, I get the following error:

> wrangler dev
[INFO]: Installing wasm-bindgen...
Error: wasm-pack exited with status signal: 11 (core dumped)

Is there any solution to this? I know the second issue could be worked around by recompiling with wasm-pack v0.9.1, but that's not very elegant. And what about the first issue? Is there any fix for that?

@jyn514
Contributor

jyn514 commented Dec 7, 2021

@xortive here's a backtrace with debug symbols if it helps (it cuts off after the tokio internals because they're not interesting):

$ gdb target/debug/wrangler core.1000.1840657.1638908760 
Reading symbols from target/debug/wrangler...
[New LWP 1840660]
[New LWP 1840657]
[New LWP 1840659]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `target/debug/wrangler'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  __strcasecmp_l_avx () at ../sysdeps/x86_64/multiarch/strcmp-sse42.S:271
271	../sysdeps/x86_64/multiarch/strcmp-sse42.S: No such file or directory.
[Current thread is 1 (Thread 0x7f68deaad700 (LWP 1840660))]
warning: Missing auto-load script at offset 0 in section .debug_gdb_scripts
of file /home/jnelson/src/wrangler/target/debug/wrangler.
Use `info auto-load python-scripts [REGEXP]' to list them.
(gdb) where
#0  __strcasecmp_l_avx () at ../sysdeps/x86_64/multiarch/strcmp-sse42.S:271
#1  0x00005607f7bb3744 in getrn (lh=0x7f68d000e830, data=0x7f68deaa3b50, 
    rhash=<optimized out>) at crypto/lhash/lhash.c:328
#2  OPENSSL_LH_retrieve (lh=0x7f68d000e830, data=data@entry=0x7f68deaa3b50)
    at crypto/lhash/lhash.c:173
#3  0x00005607f7bb4c53 in lh_NAMENUM_ENTRY_retrieve (lh=<optimized out>, d=0x7f68deaa3b50)
    at crypto/core_namemap.c:26
#4  namemap_name2num_n (name=0x7f68d009b80c "der", name_len=3, namemap=<optimized out>)
    at crypto/core_namemap.c:188
#5  ossl_namemap_name2num_n (namemap=0x7f68d000e7c0, namemap@entry=0x7f68d009b80c, 
    name=0x7f68d009b80c "der", name@entry=0x7f68d004ab10 " D\001\320h\177", name_len=3)
    at crypto/core_namemap.c:208
#6  0x00005607f7bb4e2c in ossl_namemap_name2num (namemap=<optimized out>, 
    name=name@entry=0x7f68d009b80c "der") at crypto/core_namemap.c:219
#7  0x00005607f7b8e13d in OSSL_DECODER_is_a (decoder=decoder@entry=0x7f68d004ab10, 
    name=name@entry=0x7f68d009b80c "der") at crypto/encode_decode/decoder_meth.c:507
#8  0x00005607f7b8d5bb in collect_extra_decoder (decoder=0x7f68d004ab10, 
    arg=<optimized out>) at crypto/encode_decode/decoder_lib.c:372
#9  OSSL_DECODER_CTX_add_extra (ctx=ctx@entry=0x7f68d009bc00, libctx=libctx@entry=0x0, 
    propq=propq@entry=0x0) at crypto/encode_decode/decoder_lib.c:538
#10 0x00005607f7b8edac in OSSL_DECODER_CTX_new_for_pkey (pkey=pkey@entry=0x7f68d0098a00, 
    input_type=0x5607f82453fe "DER", input_structure=<optimized out>, 
    keytype=keytype@entry=0x7f68deaa3ca0 "rsaEncryption", selection=selection@entry=134, 
    libctx=0x0, propquery=0x0) at crypto/encode_decode/decoder_pkey.c:453
#11 0x00005607f7c09a13 in x509_pubkey_ex_d2i_ex (pval=<optimized out>, 
    pval@entry=0x7f68d009ad90, in=<optimized out>, len=<optimized out>, len@entry=1269, 
    it=<optimized out>, it@entry=0x5607f8a182c0 <X509_PUBKEY_it.local_it>, 
    tag=<optimized out>, aclass=0, opt=0 '\000', ctx=0x7f68deaa41c8, libctx=0x0, propq=0x0)
    at crypto/x509/x_pubkey.c:208
#12 0x00005607f7b4c7ab in asn1_item_embed_d2i (pval=pval@entry=0x7f68d009ad90, 
    in=0x312e343030, in@entry=0x7f68deaa3e58, len=len@entry=1269, it=0x1, tag=65535, 
    tag@entry=-1, aclass=aclass@entry=0, opt=0 '\000', ctx=0x7f68deaa41c8, 
    depth=<optimized out>, libctx=0x0, propq=0x0) at crypto/asn1/tasn_dec.c:262
#13 0x00005607f7b4de07 in asn1_template_noexp_d2i (val=val@entry=0x7f68d009ad90, 
    in=in@entry=0x7f68deaa3f78, len=<optimized out>, 
    tt=tt@entry=0x5607f8a18750 <X509_CINF_seq_tt+240>, opt=0 '\000', 
    ctx=ctx@entry=0x7f68deaa41c8, depth=2, libctx=0x0, propq=0x0)
    at crypto/asn1/tasn_dec.c:682
#14 0x00005607f7b4d0da in asn1_template_ex_d2i (val=0x7f68d009ad90, 
    in=in@entry=0x7f68deaa3f78, inlen=1921234732, 
    tt=tt@entry=0x5607f8a18750 <X509_CINF_seq_tt+240>, opt=<optimized out>, 
    ctx=0x7f68deaa41c8, depth=2, libctx=0x0, propq=0x0) at crypto/asn1/tasn_dec.c:558
#15 0x00005607f7b4cb74 in asn1_item_embed_d2i (pval=pval@entry=0x7f68deaa3ff8, 
    in=<optimized out>, in@entry=0x7f68deaa4028, len=65535, len@entry=2003, 
    it=0x5607f8a185b8 <X509_CINF_it.local_it>, tag=<optimized out>, tag@entry=-1, 
    aclass=<optimized out>, aclass@entry=0, opt=0 '\000', ctx=0x7f68deaa41c8, 
    depth=<optimized out>, libctx=0x0, propq=0x0) at crypto/asn1/tasn_dec.c:422
#16 0x00005607f7b4de07 in asn1_template_noexp_d2i (val=0x7f68deaa3ff8, 
    val@entry=0x7f68d009ad40, in=in@entry=0x7f68deaa4148, len=<optimized out>, tt=tt@entry=0x5607f8a187f0 <X509_seq_tt>, opt=0 '\000', ctx=ctx@entry=0x7f68deaa41c8, depth=1, libctx=0x0, propq=0x0) at crypto/asn1/tasn_dec.c:682
#17 0x00005607f7b4d0da in asn1_template_ex_d2i (val=0x7f68d009ad40, in=in@entry=0x7f68deaa4148, inlen=1921234732, tt=tt@entry=0x5607f8a187f0 <X509_seq_tt>, opt=<optimized out>, ctx=0x7f68deaa41c8, depth=1, libctx=0x0, propq=0x0) at crypto/asn1/tasn_dec.c:558
#18 0x00005607f7b4cb74 in asn1_item_embed_d2i (pval=pval@entry=0x7f68d0097080, in=<optimized out>, len=65535, it=0x5607f8a185f0 <X509_it.local_it>, tag=<optimized out>, tag@entry=-1, aclass=<optimized out>, aclass@entry=0, opt=0 '\000', ctx=0x7f68deaa41c8, depth=<optimized out>, libctx=0x0, propq=0x0) at crypto/asn1/tasn_dec.c:422
#19 0x00005607f7b4c4f2 in asn1_item_ex_d2i_intern (pval=0x7f68d0097080, in=<optimized out>, len=<optimized out>, it=0x5607f8a185f0 <X509_it.local_it>, tag=-1, aclass=0, opt=0 '\000', ctx=0x7f68deaa41c8, libctx=<optimized out>, propq=<optimized out>) at crypto/asn1/tasn_dec.c:118
#20 ASN1_item_d2i_ex (pval=0x7f68d0097080, in=0x312e343030, len=65535, it=0x5607f8a185f0 <X509_it.local_it>, libctx=0x0, propq=0x7283b72c <error: Cannot access memory at address 0x7283b72c>) at crypto/asn1/tasn_dec.c:144
#21 0x00005607f7bce629 in PEM_X509_INFO_read_bio_ex (bp=0x7f68d0098770, sk=<optimized out>, sk@entry=0x0, cb=<optimized out>, cb@entry=0x0, u=<optimized out>, libctx=libctx@entry=0x0, propq=<optimized out>, propq@entry=0x0) at crypto/pem/pem_info.c:168
#22 0x00005607f7bf355a in X509_load_cert_crl_file_ex (ctx=ctx@entry=0x7f68d0098530, file=<optimized out>, type=type@entry=1, libctx=libctx@entry=0x0, propq=propq@entry=0x0) at crypto/x509/by_file.c:231
#23 0x00005607f7bf373a in by_file_ctrl_ex (ctx=0x7f68d0098530, cmd=<optimized out>, argp=<optimized out>, argl=<optimized out>, ret=<optimized out>, libctx=0x0, propq=0x0) at crypto/x509/by_file.c:64
#24 0x00005607f7bfe7b2 in X509_STORE_set_default_paths_ex (ctx=0x7f68d000e0a0, libctx=0x0, propq=0x0) at crypto/x509/x509_d2.c:23
#25 0x00005607f7af1b46 in openssl::ssl::SslContextBuilder::set_default_verify_paths (self=0x7f68deaa43f8) at /home/jnelson/.local/lib/cargo/registry/src/github.com-1ecc6299db9ec823/openssl-0.10.38/src/ssl/mod.rs:894
#26 0x00005607f7b01d7b in openssl::ssl::connector::SslConnector::builder (method=...) at /home/jnelson/.local/lib/cargo/registry/src/github.com-1ecc6299db9ec823/openssl-0.10.38/src/ssl/connector.rs:69
#27 0x00005607f7ae59a0 in native_tls::imp::TlsConnector::new (builder=0x7f68deaa5770) at /home/jnelson/.local/lib/cargo/registry/src/github.com-1ecc6299db9ec823/native-tls-0.2.8/src/imp/openssl.rs:257
#28 0x00005607f7ae7f02 in native_tls::TlsConnectorBuilder::build (self=0x7f68deaa5770) at /home/jnelson/.local/lib/cargo/registry/src/github.com-1ecc6299db9ec823/native-tls-0.2.8/src/lib.rs:433
#29 0x00005607f77913bf in reqwest::connect::Connector::new_default_tls (http=..., tls=..., proxies=..., user_agent=<error reading variable: Cannot access memory at address 0xffff>, local_addr=<error reading variable: Cannot access memory at address 0x0>, nodelay=true) at /home/jnelson/.local/lib/cargo/registry/src/github.com-1ecc6299db9ec823/reqwest-0.11.6/src/connect.rs:220
#30 0x00005607f76ddecb in reqwest::async_impl::client::ClientBuilder::build (self=...) at /home/jnelson/.local/lib/cargo/registry/src/github.com-1ecc6299db9ec823/reqwest-0.11.6/src/async_impl/client.rs:294
#31 0x00005607f775db89 in reqwest::blocking::client::ClientHandle::new::{{closure}}::{{closure}} () at /home/jnelson/.local/lib/cargo/registry/src/github.com-1ecc6299db9ec823/reqwest-0.11.6/src/blocking/client.rs:950

@jyn514

jyn514 commented Dec 7, 2021

Hmm, it looks like OpenSSL has installed its own exit handlers? I wonder if they don't interact well with multithreading ... there's an open Rust issue about this: rust-lang/rust#83994

Thread 2 (Thread 0x7f68decafb40 (LWP 1840657)):
#0  getrn (lh=0x5607f9a5f2e0, data=0x7ffc7eaf3e70, rhash=<optimized out>) at crypto/lhash/lhash.c:316
#1  OPENSSL_LH_delete (lh=0x5607f9a5f2e0, data=data@entry=0x7ffc7eaf3e70) at crypto/lhash/lhash.c:144
#2  0x00005607f7bcb4f0 in lh_OBJ_NAME_delete (lh=<optimized out>, d=0x7ffc7eaf3e70) at crypto/objects/obj_local.h:12
#3  OBJ_NAME_remove (name=0x5607f825b2d2 "aria128", type=2) at crypto/objects/o_names.c:268
#4  0x00005607f7bb37da in doall_util_fn (lh=0x5607f9a5f2e0, use_arg=0, func=0x5607f7bcb660 <names_lh_free_doall>, func_arg=0x0, arg=0x0) at crypto/lhash/lhash.c:207
#5  OPENSSL_LH_doall (lh=0x5607f9a5f2e0, func=0x5607f7bcb660 <names_lh_free_doall>) at crypto/lhash/lhash.c:215
#6  0x00005607f7bcb5f8 in lh_OBJ_NAME_doall (lh=<optimized out>, doall=0x40) at crypto/objects/obj_local.h:12
#7  OBJ_NAME_cleanup (type=type@entry=2) at crypto/objects/o_names.c:390
#8  0x00005607f7ba8608 in evp_cleanup_int () at crypto/evp/names.c:156
#9  0x00005607f7bb67d0 in OPENSSL_cleanup () at crypto/init.c:431
#10 0x00007f68dee90a27 in __run_exit_handlers (status=1, listp=0x7f68df032718 <__exit_funcs>, run_list_atexit=run_list_atexit@entry=true, run_dtors=run_dtors@entry=true) at exit.c:108
#11 0x00007f68dee90be0 in __GI_exit (status=<optimized out>) at exit.c:139
#12 0x00005607f80b89e7 in std::sys::unix::os::exit () at library/std/src/sys/unix/os.rs:628
#13 0x00005607f80b074f in std::process::exit () at library/std/src/process.rs:1907
#14 0x00005607f74d8613 in clap::app::App::get_matches_from::{{closure}} (e=...) at /home/jnelson/.local/lib/cargo/registry/src/github.com-1ecc6299db9ec823/clap-2.33.3/src/app/mod.rs:1531

I also noticed that this is making network requests right at startup, which seems odd - it turns out wrangler is checking for updates in the background:

#4  0x00005607f777b1c3 in reqwest::blocking::wait::timeout (fut=..., timeout=...) at /home/jnelson/.local/lib/cargo/registry/src/github.com-1ecc6299db9ec823/reqwest-0.11.6/src/blocking/wait.rs:50
#5  0x00005607f775d6b2 in reqwest::blocking::client::ClientHandle::new (builder=...) at /home/jnelson/.local/lib/cargo/registry/src/github.com-1ecc6299db9ec823/reqwest-0.11.6/src/blocking/client.rs:983
#6  0x00005607f775ca5f in reqwest::blocking::client::ClientBuilder::build (self=...) at /home/jnelson/.local/lib/cargo/registry/src/github.com-1ecc6299db9ec823/reqwest-0.11.6/src/blocking/client.rs:102
#7  0x00005607f6875873 in wrangler::version::get_latest_version_from_api (installed_version=...) at src/version/mod.rs:138
#8  0x00005607f6875272 in wrangler::version::get_latest_version (installed_version=..., version_file=0x7f68d8000c30, current_time=...) at src/version/mod.rs:122
#9  0x00005607f6874c16 in wrangler::version::check_wrangler_versions () at src/version/mod.rs:92
#10 0x00005607f6a27a13 in wrangler::version::background_check_for_updates::{{closure}} () at src/version/mod.rs:20

Maybe we can wait to do that until the process exits / only do it if wrangler finishes successfully, so there are no other threads running at the same time?

@jyn514

jyn514 commented Dec 7, 2021

I guess another alternative is to kill the thread looking for background updates before exiting, but that seems unnecessarily complicated when all it gets us is slightly lower latency when exiting the process.

@jyn514

jyn514 commented Dec 7, 2021

Yet another alternative is to avoid constructing a reqwest::Client for the background update checker and instead use an async executor; that should avoid launching a thread until the executor is initialized, which happens after most of the process::exit calls. That seems like the cleanest solution, but it's also the most work; wrangler already has a bunch of executors internally, and it would be a pain to try to consolidate on one when we're already hoping to switch most users to Wrangler 2 sometime soon.

@Felixoid

Hello dear colleagues,
another happy Arch Linux user here.

I've faced core dumps too, with both the 1.19.5 and #2150 versions. Here is my backtrace and what I ran:

> wrangler dev            
⚠️  
    Your configuration file is missing compatibility_date, so a past date is assumed.
    To get the latest possibly-breaking bug fixes, add this line to your wrangler.toml:

        compatibility_date = "2021-12-10"

    For more information, see: https://developers.cloudflare.com/workers/platform/compatibility-dates
        
⬇️   Installing wranglerjs v1.19.5...
zsh: segmentation fault (core dumped)  wrangler dev
> coredumpctl debug 1013782
.....
Core was generated by `wrangler dev'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00007f29f78b42c3 in SSL_get_peer_certificate () from /usr/lib/libssl.so.1.1
(gdb) bt
#0  0x00007f29f78b42c3 in SSL_get_peer_certificate () from /usr/lib/libssl.so.1.1
#1  0x00007f29f7d88d66 in ?? () from /usr/lib/libcurl.so.4
#2  0x00007f29f7d8cf05 in ?? () from /usr/lib/libcurl.so.4
#3  0x00007f29f7d8dfd7 in ?? () from /usr/lib/libcurl.so.4
#4  0x00007f29f7d45149 in ?? () from /usr/lib/libcurl.so.4
#5  0x00007f29f7d5e508 in ?? () from /usr/lib/libcurl.so.4
#6  0x00007f29f7d5fb06 in curl_multi_perform () from /usr/lib/libcurl.so.4
#7  0x00007f29f7d37d0c in curl_easy_perform () from /usr/lib/libcurl.so.4
#8  0x00005575ba17b3ba in curl::easy::handler::Easy2<H>::perform ()
#9  0x00005575ba179e72 in curl::easy::handle::Transfer::perform ()
#10 0x00005575ba171f82 in binary_install::curl ()
#11 0x00005575ba170a62 in binary_install::Cache::_download_artifact ()
#12 0x00005575ba1708a6 in binary_install::Cache::download_artifact_version ()
#13 0x00005575bc603c60 in ?? ()
#14 0x0000000000000006 in ?? ()
#15 0x000000000000000a in ?? ()
#16 0x00005575ba086b22 in wrangler::install::install ()
#17 0x00005575b9d8ac94 in wrangler::wranglerjs::setup_build ()
#18 0x00005575b9d898e7 in wrangler::wranglerjs::run_build ()
#19 0x00005575b9d83f82 in wrangler::build::build_target ()
#20 0x00005575b9f433f8 in wrangler::commands::dev::dev ()
#21 0x00005575b9e8137b in wrangler::cli::dev::dev ()
#22 0x00005575b9cf3212 in wrangler::main ()
#23 0x00005575b9cef8e3 in std::sys_common::backtrace::__rust_begin_short_backtrace ()
#24 0x00005575b9cf01dd in std::rt::lang_start::_$u7b$$u7b$closure$u7d$$u7d$::hc936eb6f11018442 ()
#25 0x00005575ba75282b in core::ops::function::impls::{impl#2}::call_once<(), (dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe)> (self=..., 
    args=<optimized out>) at /rustc/1.57.0/library/core/src/ops/function.rs:259
#26 std::panicking::try::do_call<&(dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe), i32> (data=<optimized out>) at library/std/src/panicking.rs:403
#27 std::panicking::try<i32, &(dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe)> (f=...) at library/std/src/panicking.rs:367
#28 std::panic::catch_unwind<&(dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe), i32> (f=...) at library/std/src/panic.rs:133
#29 std::rt::lang_start_internal::{closure#2} () at library/std/src/rt.rs:128
#30 std::panicking::try::do_call<std::rt::lang_start_internal::{closure#2}, isize> (
    data=<optimized out>) at library/std/src/panicking.rs:403
#31 std::panicking::try<isize, std::rt::lang_start_internal::{closure#2}> (f=...)
    at library/std/src/panicking.rs:367
#32 std::panic::catch_unwind<std::rt::lang_start_internal::{closure#2}, isize> (f=...)
    at library/std/src/panic.rs:133
#33 std::rt::lang_start_internal (main=..., argc=<optimized out>, argv=<optimized out>)
    at library/std/src/rt.rs:128
#34 0x00005575b9cf46d2 in main ()

Sadly, I can't provide more info unless someone asks me for something specific.

@petebacondarwin

Closing, as there is a new version of Wrangler built on Node.js, which should not be affected by such segmentation faults. If you can reproduce this on the new Wrangler, please create a new issue at https://github.com/cloudflare/wrangler2/issues/new/choose.
