panic: runtime error while starting Influx #497

Closed
chendo opened this issue May 4, 2014 · 4 comments
chendo commented May 4, 2014

After less than a day of running Influx under low load (fewer than 30 data points an hour), it crashed. Trying to start it again produces the following error:

[05/04/14 19:28:07] [INFO] Loading configuration file /opt/influxdb/shared/config.toml
Running the actual command
panic: runtime error: index out of range

goroutine 21 [running]:
runtime.panic(0x889ee0, 0x100cad7)
    /home/vagrant/bin/go/src/pkg/runtime/panic.c:266 +0xb6
datastore.(*LevelDbShard).Write(0xc210167fc0, 0xc210115350, 0x8, 0xc21016e000, 0x0, ...)
    /home/vagrant/influxdb/src/datastore/leveldb_shard.go:86 +0x86a
datastore.(*LevelDbShardDatastore).Write(0xc2100bc310, 0xc21016b680, 0x0, 0x0)
    /home/vagrant/influxdb/src/datastore/leveldb_shard_datastore.go:167 +0x116
cluster.func·006(0xc21016b680, 0xc200000001, 0x7f68b475ad18, 0x1)
    /home/vagrant/influxdb/src/cluster/cluster_configuration.go:1010 +0x239
wal.(*WAL).RecoverServerFromRequestNumber(0xc210078e00, 0xc200000001, 0xc2100e6a40, 0x1, 0x1, ...)
    /home/vagrant/influxdb/src/wal/wal.go:196 +0x95a
wal.(*WAL).RecoverServerFromLastCommit(0xc210078e00, 0xc200000001, 0xc2100e6a40, 0x1, 0x1, ...)
    /home/vagrant/influxdb/src/wal/wal.go:132 +0x1be
cluster.(*ClusterConfiguration).recover(0xc210042000, 0x7f6800000001, 0x7f68b8052ee8, 0xc2100bc310, 0xc2100bc310, ...)
    /home/vagrant/influxdb/src/cluster/cluster_configuration.go:1016 +0x25b
cluster.func·004(0x1)
    /home/vagrant/influxdb/src/cluster/cluster_configuration.go:978 +0xce
created by cluster.(*ClusterConfiguration).RecoverFromWAL
    /home/vagrant/influxdb/src/cluster/cluster_configuration.go:981 +0x323

goroutine 1 [semacquire]:
sync.runtime_Semacquire(0xc2100e69e8)
    /tmp/makerelease886106415/go/src/pkg/runtime/sema.goc:199 +0x30
sync.(*WaitGroup).Wait(0xc210169ac0)
    /home/vagrant/bin/go/src/pkg/sync/waitgroup.go:127 +0x14b
cluster.(*ClusterConfiguration).RecoverFromWAL(0xc210042000, 0xc21016a310, 0x0)
    /home/vagrant/influxdb/src/cluster/cluster_configuration.go:996 +0x39b
server.(*Server).ListenAndServe(0xc2100bc460, 0xc2100bc460, 0x0)
    /home/vagrant/influxdb/src/server/server.go:111 +0x276
main.main()
    /home/vagrant/influxdb/src/daemon/influxd.go:155 +0xb97

goroutine 3 [syscall]:
os/signal.loop()
    /home/vagrant/bin/go/src/pkg/os/signal/signal_unix.go:21 +0x1e
created by os/signal.init·1
    /home/vagrant/bin/go/src/pkg/os/signal/signal_unix.go:27 +0x31

goroutine 4 [chan receive]:
code.google.com/p/log4go.ConsoleLogWriter.run(0xc21000b160, 0x7f68b804a110, 0xc210000008)
    /home/vagrant/influxdb/src/code.google.com/p/log4go/termlog.go:27 +0x60
created by code.google.com/p/log4go.NewConsoleLogWriter
    /home/vagrant/influxdb/src/code.google.com/p/log4go/termlog.go:19 +0x67

goroutine 5 [select]:
code.google.com/p/log4go.func·002()
    /home/vagrant/influxdb/src/code.google.com/p/log4go/filelog.go:84 +0x84c
created by code.google.com/p/log4go.NewFileLogWriter
    /home/vagrant/influxdb/src/code.google.com/p/log4go/filelog.go:116 +0x2d1

goroutine 6 [syscall]:
runtime.goexit()
    /home/vagrant/bin/go/src/pkg/runtime/proc.c:1394

goroutine 7 [chan receive]:
wal.(*WAL).processEntries(0xc210078e00)
    /home/vagrant/influxdb/src/wal/wal.go:242 +0x3f
created by wal.NewWAL
    /home/vagrant/influxdb/src/wal/wal.go:103 +0x9f3

goroutine 8 [sleep]:
time.Sleep(0x8bb2c97000)
    /tmp/makerelease886106415/go/src/pkg/runtime/time.goc:31 +0x31
cluster.func·001()
    /home/vagrant/influxdb/src/cluster/cluster_configuration.go:132 +0x35
created by cluster.(*ClusterConfiguration).CreateFutureShardsAutomaticallyBeforeTimeComes
    /home/vagrant/influxdb/src/cluster/cluster_configuration.go:137 +0x63

goroutine 9 [chan receive]:
main.waitForSignals(0x7f68b804b148, 0xc2100bc460)
    /home/vagrant/influxdb/src/daemon/null_profiler.go:23 +0x126
created by main.startProfiler
    /home/vagrant/influxdb/src/daemon/null_profiler.go:15 +0x38

goroutine 10 [IO wait]:
net.runtime_pollWait(0x7f68b804c168, 0x72, 0x0)
    /tmp/makerelease886106415/go/src/pkg/runtime/netpoll.goc:116 +0x6a
net.(*pollDesc).Wait(0xc2100bc530, 0x72, 0x7f68b8049f88, 0xb)
    /home/vagrant/bin/go/src/pkg/net/fd_poll_runtime.go:81 +0x34
net.(*pollDesc).WaitRead(0xc2100bc530, 0xb, 0x7f68b8049f88)
    /home/vagrant/bin/go/src/pkg/net/fd_poll_runtime.go:86 +0x30
net.(*netFD).accept(0xc2100bc4d0, 0x9fcb20, 0x0, 0x7f68b8049f88, 0xb)
    /home/vagrant/bin/go/src/pkg/net/fd_unix.go:382 +0x2c2
net.(*TCPListener).AcceptTCP(0xc2100aa298, 0x18, 0xc2100af010, 0x5cb0f3)
    /home/vagrant/bin/go/src/pkg/net/tcpsock_posix.go:233 +0x47
net.(*TCPListener).Accept(0xc2100aa298, 0x0, 0x0, 0x0, 0x0)
    /home/vagrant/bin/go/src/pkg/net/tcpsock_posix.go:243 +0x27
net/http.(*Server).Serve(0xc2100a4e10, 0x7f68b804b1c8, 0xc2100aa298, 0x0, 0x0)
    /home/vagrant/bin/go/src/pkg/net/http/server.go:1622 +0x91
coordinator.func·007()
    /home/vagrant/influxdb/src/coordinator/raft_server.go:530 +0x3a
created by coordinator.(*RaftServer).Serve
    /home/vagrant/influxdb/src/coordinator/raft_server.go:534 +0x4d9

goroutine 17 [select]:
coordinator.(*RaftServer).raftLeaderLoop(0xc210071790, 0xc210167a80)
    /home/vagrant/influxdb/src/coordinator/raft_server.go:430 +0x29c
created by coordinator.(*RaftServer).raftEventHandler
    /home/vagrant/influxdb/src/coordinator/raft_server.go:419 +0x1d0

goroutine 13 [select]:
github.com/goraft/raft.(*server).leaderLoop(0xc2100a5360)
    /home/vagrant/influxdb/src/github.com/goraft/raft/server.go:765 +0x5fe
github.com/goraft/raft.(*server).loop(0xc2100a5360)
    /home/vagrant/influxdb/src/github.com/goraft/raft/server.go:568 +0x33f
created by github.com/goraft/raft.(*server).Start
    /home/vagrant/influxdb/src/github.com/goraft/raft/server.go:472 +0x7af

goroutine 14 [select]:
coordinator.(*RaftServer).CompactLog(0xc210071790)
    /home/vagrant/influxdb/src/coordinator/raft_server.go:320 +0x2ef
created by coordinator.(*RaftServer).startRaft
    /home/vagrant/influxdb/src/coordinator/raft_server.go:374 +0x375

goroutine 16 [finalizer wait]:
runtime.park(0x451590, 0x10239f8, 0x100e428)
    /home/vagrant/bin/go/src/pkg/runtime/proc.c:1342 +0x66
runfinq()
    /home/vagrant/bin/go/src/pkg/runtime/mgc0.c:2279 +0x84
runtime.goexit()
    /home/vagrant/bin/go/src/pkg/runtime/proc.c:1394

goroutine 19 [IO wait]:
net.runtime_pollWait(0x7f68b804c0c0, 0x72, 0x0)
    /tmp/makerelease886106415/go/src/pkg/runtime/netpoll.goc:116 +0x6a
net.(*pollDesc).Wait(0xc2100d3060, 0x72, 0x7f68b8049f88, 0xb)
    /home/vagrant/bin/go/src/pkg/net/fd_poll_runtime.go:81 +0x34
net.(*pollDesc).WaitRead(0xc2100d3060, 0xb, 0x7f68b8049f88)
    /home/vagrant/bin/go/src/pkg/net/fd_poll_runtime.go:86 +0x30
net.(*netFD).accept(0xc2100d3000, 0x9fcb20, 0x0, 0x7f68b8049f88, 0xb)
    /home/vagrant/bin/go/src/pkg/net/fd_unix.go:382 +0x2c2
net.(*TCPListener).AcceptTCP(0xc2100e6a10, 0xc21016a3a0, 0x0, 0x7f68b804b198)
    /home/vagrant/bin/go/src/pkg/net/tcpsock_posix.go:233 +0x47
net.(*TCPListener).Accept(0xc2100e6a10, 0xc21016a3a0, 0x7f68b473ef38, 0x1, 0x1)
    /home/vagrant/bin/go/src/pkg/net/tcpsock_posix.go:243 +0x27
coordinator.(*ProtobufServer).ListenAndServe(0xc2100a7a40)
    /home/vagrant/influxdb/src/coordinator/protobuf_server.go:64 +0x1c7
created by server.(*Server).ListenAndServe
    /home/vagrant/influxdb/src/server/server.go:108 +0x215

goroutine 20 [select]:
cluster.(*WriteBuffer).handleWrites(0xc2100d3f50)
    /home/vagrant/influxdb/src/cluster/write_buffer.go:74 +0xca
created by cluster.NewWriteBuffer
    /home/vagrant/influxdb/src/cluster/write_buffer.go:43 +0x24f

Version: InfluxDB v0.6.0 (git: deaa95444be6567e33a58915681f8ecf090a6ad8) (leveldb: 1.15)

Let me know if I'm missing any details.

jvshahid (Contributor) commented May 5, 2014

@chendo I tried to reach you on IRC, but you weren't there. Can you zip up your data and send it to us, perhaps to the support email? I think I understand the nature of the bug, which is related to #501. In your case, though, the data seems to have made its way into the WAL, which suggests it was the output of a continuous query. Do you have any continuous queries running?
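
For context on where the trace points: the panic fires in LevelDbShard.Write (leveldb_shard.go:86) while RecoverFromWAL replays logged requests into the shard. The snippet below is a minimal sketch of that failure mode, not InfluxDB's actual code; the point and writePoint names are hypothetical. It shows how a WAL entry carrying fewer values than the series has fields produces "index out of range", and how a bounds check would turn it into a skippable error instead:

package main

import "fmt"

// point models a single replayed WAL entry's values; a hypothetical,
// simplified stand-in for the structures the real LevelDbShard.Write handles.
type point struct {
	values []string
}

// writePoint stores values positionally by field name. Without the bounds
// check below, a truncated entry would index past the end of p.values and
// panic with "index out of range", matching the trace above.
func writePoint(fieldNames []string, p point) error {
	for i, name := range fieldNames {
		if i >= len(p.values) {
			return fmt.Errorf("point has %d values but series has %d fields",
				len(p.values), len(fieldNames))
		}
		fmt.Printf("store %s=%s\n", name, p.values[i])
	}
	return nil
}

func main() {
	fields := []string{"time", "sequence_number", "value"}
	// A truncated entry, e.g. one written by a continuous query as
	// suspected above.
	bad := point{values: []string{"1399186087"}}
	if err := writePoint(fields, bad); err != nil {
		fmt.Println("skipping corrupt entry:", err)
	}
}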

chendo (Author) commented May 5, 2014

I've nuked the data already (it wasn't critical), but I'll try to reproduce it and send the data at the next opportunity. There shouldn't have been any continuous queries, as I haven't even looked at how they work yet. At most, I was using the built-in dashboard to run SELECT * FROM x to verify that things worked.
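
(For readers unfamiliar with them: in the 0.x line, a continuous query is an ordinary select with an into clause that InfluxDB keeps running against incoming data, writing the results into another series. The series and column names below are hypothetical; syntax as in the 0.x query language:)

select count(value) from events group by time(10m) into events.count_per_10m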

jvshahid added this to the 0.6.2 milestone May 7, 2014
jvshahid (Contributor) commented May 7, 2014

@chendo I'm going to go ahead and close this issue, since there's nothing actionable here. Feel free to reopen it if you're able to reproduce it and have the data available.

jvshahid closed this as completed May 7, 2014
chendo (Author) commented May 7, 2014

Sorry, been busy! Will do.
