I ran out of space in /var/lib/ (where I had influxdb).
Now I cannot start influxdb.
Expected behavior: influxdb should start
Actual behavior: cannot start influxdb
Additional info: influxdb.log states:
```
[tsm1] 2016/04/25 12:52:12 /d0/influxdb/data/_internal/monitor/23 database index loaded in 3.312297ms
[store] 2016/04/25 12:52:12 /d0/influxdb/data/_internal/monitor/23 opened in 15.019705488s
[tsm1] 2016/04/25 12:52:17 /d0/influxdb/data/regression/default/12 database index loaded in 20.03670118s
[store] 2016/04/25 12:52:17 /d0/influxdb/data/regression/default/12 opened in 20.087676684s
panic: runtime error: slice bounds out of range
```
It looks like you might have a truncated WAL segment in one of your shards. If you take a look at the /var/lib/wal dir, you might be able to identify the shard that is causing problems. It will likely be the last WAL segment in the problem shard that is causing the panic. If you can identify it, you can remove that segment to get the DB started again.
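The segment hunt described above can be sketched in a few lines. This assumes the default tsm1 on-disk layout (`<wal dir>/<db>/<rp>/<shard id>/_NNNNN.wal`); `newest_wal_segments` is a hypothetical helper for illustration, not an InfluxDB tool:

```python
import os
import re

def newest_wal_segments(wal_dir):
    """For each shard directory under wal_dir, return the highest-numbered
    WAL segment -- the one most likely to be truncated after a full disk.

    Assumes segment files are named _NNNNN.wal under
    wal_dir/<db>/<rp>/<shard id>/ (default tsm1 layout)."""
    seg_re = re.compile(r"^_(\d+)\.wal$")
    result = {}
    for root, _dirs, files in os.walk(wal_dir):
        # Sort segments numerically so _00010.wal ranks after _00002.wal.
        segs = sorted(
            (int(m.group(1)), f)
            for f in files
            if (m := seg_re.match(f))
        )
        if segs:
            result[root] = os.path.join(root, segs[-1][1])
    return result
```

Back up the candidate file before deleting it, in case the diagnosis is wrong.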
Thanks. I found the problem file (it was the last, and one of the biggest files) and removing it brought the database back. Did I lose data? If so, was it more recent or random? Thanks again.
If you deleted the segment file, then the writes in that segment are gone. You can see what time range that shard covers by running `SHOW SHARDS`. WAL segments usually contain recently written data.
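For scripting, the same time-range information can be pulled out of the HTTP `/query` endpoint's JSON. A sketch, assuming the 0.12-era response shape and `SHOW SHARDS` column names (`id`, `start_time`, `end_time`) -- the inline sample is fabricated for illustration, so verify the columns against your version:

```python
import json

# Sample shaped like InfluxDB's /query JSON output for SHOW SHARDS
# (columns assumed; check against your server's actual response).
sample = json.loads("""
{"results":[{"series":[{"name":"regression","columns":
["id","database","retention_policy","shard_group","start_time","end_time","expiry_time","owners"],
"values":[[12,"regression","default",12,
"2016-04-18T00:00:00Z","2016-04-25T00:00:00Z","2016-04-25T00:00:00Z",""]]}]}]}
""")

def shard_time_ranges(resp):
    """Map shard id -> (start_time, end_time) from a SHOW SHARDS response."""
    ranges = {}
    for result in resp["results"]:
        for series in result.get("series", []):
            cols = series["columns"]
            i = cols.index("id")
            s = cols.index("start_time")
            e = cols.index("end_time")
            for row in series["values"]:
                ranges[row[i]] = (row[s], row[e])
    return ranges
```

Here shard 12 (the one named in the panic log above) would map to its covered week, which bounds the window of writes that could have been lost.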
Bug report
**System info:** Linux 3.10.0-327.13.1.el7.x86_64, influxdb-0.12.2-1.x86_64
Steps to reproduce:
```
panic: runtime error: slice bounds out of range

goroutine 20 [running]:
github.com/influxdata/influxdb/tsdb/engine/tsm1.(*WriteWALEntry).UnmarshalBinary(0xc208139a68, 0xc20d8aa000, 0xb1, 0x100000, 0x0, 0x0)
	/root/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/wal.go:593 +0x68f
github.com/influxdata/influxdb/tsdb/engine/tsm1.(*WALSegmentReader).Next(0xc2093560c0, 0xc20c9e1c00)
	/root/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/wal.go:804 +0x7be
github.com/influxdata/influxdb/tsdb/engine/tsm1.func·001(0x0, 0x0)
	/root/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/cache.go:444 +0x3a1
github.com/influxdata/influxdb/tsdb/engine/tsm1.(*CacheLoader).Load(0xc208146320, 0xc208082e80, 0x0, 0x0)
	/root/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/cache.go:466 +0x10f
github.com/influxdata/influxdb/tsdb/engine/tsm1.(*Engine).reloadCache(0xc208176fd0, 0x0, 0x0)
	/root/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/engine.go:663 +0xe2
github.com/influxdata/influxdb/tsdb/engine/tsm1.(*Engine).Open(0xc208176fd0, 0x0, 0x0)
	/root/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/engine.go:159 +0x1dd
github.com/influxdata/influxdb/tsdb.func·004(0x0, 0x0)
	/root/go/src/github.com/influxdata/influxdb/tsdb/shard.go:149 +0x26a
github.com/influxdata/influxdb/tsdb.(*Shard).Open(0xc208050fc0, 0x0, 0x0)
	/root/go/src/github.com/influxdata/influxdb/tsdb/shard.go:159 +0x6f
github.com/influxdata/influxdb/tsdb.func·009(0xc208031c20, 0xc20801f952, 0xa, 0xc2080ecf1d, 0x7, 0xc2080ed165, 0x2)
	/root/go/src/github.com/influxdata/influxdb/tsdb/store.go:162 +0x65f
created by github.com/influxdata/influxdb/tsdb.(*Store).loadShards
	/root/go/src/github.com/influxdata/influxdb/tsdb/store.go:170 +0xac8
```
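The panic happens in `WALSegmentReader.Next` while replaying WAL entries, so a truncated segment can also be located by checking the entry framing directly instead of guessing from file names. A diagnostic sketch, assuming the 0.12-era tsm1 framing of one entry-type byte plus a 4-byte big-endian payload length before each (snappy-compressed) payload; `find_truncation` is a hypothetical helper, not InfluxDB code:

```python
import struct

def find_truncation(path):
    """Scan a tsm1 WAL segment and return the byte offset of the first
    incomplete entry, or None if every entry is whole.

    Assumed framing per entry: 1-byte type, 4-byte big-endian payload
    length, then the payload itself (verify against your version)."""
    with open(path, "rb") as f:
        data = f.read()
    off = 0
    while off < len(data):
        if off + 5 > len(data):
            return off  # the 5-byte entry header itself is cut short
        length = struct.unpack_from(">I", data, off + 1)[0]
        if off + 5 + length > len(data):
            return off  # payload runs past end of file: truncated entry
        off += 5 + length
    return None
```

A non-None result on exactly one segment in the shard from the log above would match the "last segment truncated by a full disk" diagnosis.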