[0.9.6-nightly] Distributed aggregative query ERR: unexpected end of JSON input #4937

Closed
li-ang opened this issue Dec 1, 2015 · 9 comments
li-ang commented Dec 1, 2015

A distributed aggregation query may return the error: ERR: unexpected end of JSON input

Steps to reproduce

(If you don't hit the query error, insert more points with different tag values.)

Visit https://enterprise.influxdata.com to register for updates, InfluxDB server management, and monitoring.
Connected to http://localhost:8086 version 0.9
InfluxDB shell 0.9
> show servers
id  cluster_addr    raft    raft-leader
1   127.0.0.1:8088  true    true
2   127.0.0.1:9099  true    false
3   127.0.0.1:10101 true    false

> create database foo
> create retention policy one_hour on foo duration 1h replication 1
> insert into foo.one_hour cpu,host=host_1,region=region_1 value=1
Using database foo
Using retention policy one_hour
> insert into foo.one_hour cpu,host=host_2,region=region_2 value=2
Using database foo
Using retention policy one_hour
> insert into foo.one_hour cpu,host=host_3,region=region_3 value=2
Using database foo
Using retention policy one_hour
> select mean(valeu) from one_hour.cpu
name: cpu
---------
time    mean
0   
0   

> select mean(value) from one_hour.cpu
name: cpu
---------
time    mean
0   2
0   1

> insert into foo.one_hour cpu,host=host_3,region=region_3 value=3
Using database foo
Using retention policy one_hour
> select mean(value) from one_hour.cpu
name: cpu
---------
time    mean
0   2.3333333333333335
0   1

> insert into foo.one_hour cpu,host=host_4,region=region_4 value=3
Using database foo
Using retention policy one_hour
> select mean(value) from one_hour.cpu
ERR: unexpected end of JSON input
> show shards
name: foo
---------
id  database    retention_policy    shard_group start_time      end_time        expiry_time     owners
1   foo     one_hour        1       2015-12-01T03:00:00Z    2015-12-01T04:00:00Z    2015-12-01T05:00:00Z    2
2   foo     one_hour        1       2015-12-01T03:00:00Z    2015-12-01T04:00:00Z    2015-12-01T05:00:00Z    3
3   foo     one_hour        1       2015-12-01T03:00:00Z    2015-12-01T04:00:00Z    2015-12-01T05:00:00Z    1

> show shard groups
name: shard groups
------------------
id  database    retention_policy    start_time      end_time        expiry_time
1   foo     one_hour        2015-12-01T03:00:00Z    2015-12-01T04:00:00Z    2015-12-01T05:00:00Z

> 
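For reference, "unexpected end of JSON input" is the exact error that Go's encoding/json package returns when it is asked to decode an empty byte slice, so one possible explanation (an assumption, not confirmed from the InfluxDB source) is that the coordinating node gets back an empty response body from a remote data node and fails while decoding it. A minimal sketch that reproduces only the error text:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Decoding an empty body yields the same error text seen in the query results above.
	var v map[string]interface{}
	err := json.Unmarshal([]byte{}, &v)
	fmt.Println(err) // unexpected end of JSON input
}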
li-ang commented Dec 1, 2015

Also, there is another issue:

> select mean(value) from one_hour.cpu
name: cpu
---------
time    mean
0   2.3333333333333335
0   1

I think we can discuss that one later. Please focus on ERR: unexpected end of JSON input in this issue.

li-ang commented Dec 1, 2015

BTW, the InfluxDB logs on all three nodes are normal; there are no panics or errors.

beckettsean (Contributor) commented

@li-ang I'm not entirely sure what the insert into behavior should be. I noticed the behavior in #3188 (comment), but I have never heard whether it is intended or a strange side effect.

If you insert the points using curl, does the problem still happen?

li-ang commented Dec 2, 2015

@beckettsean
The following steps use curl to insert the points.

Create an InfluxDB cluster with three nodes. Here is the cluster status:

> show servers
id  cluster_addr    raft    raft-leader
1   127.0.0.1:8088  true    true
2   127.0.0.1:9099  true    false
3   127.0.0.1:10101 true    false

Create a database named foo and a retention policy named one_hour with duration 1h and replication 1:

> create database foo
> create retention policy one_hour on foo duration 1h replication 1
> show databases
name: databases
---------------
name
foo

> show retention policies on foo
name        duration    replicaN    default
default     0       3       true
one_hour    1h0m0s      1       false

> 

Write several points to the one_hour retention policy of the foo database using curl:

➜  ~  curl -i -XPOST 'http://127.0.0.1:8086/write?db=foo&rp=one_hour' --data-binary 'cpu,host=host_1,region=region_1 value=1'
HTTP/1.1 204 No Content
Request-Id: 0632942a-98a4-11e5-8013-000000000000
X-Influxdb-Version: 0.9
Date: Wed, 02 Dec 2015 03:23:15 GMT
Connection: close

➜  ~  curl -i -XPOST 'http://127.0.0.1:8086/write?db=foo&rp=one_hour' --data-binary 'cpu,host=host_2,region=region_2 value=2'
HTTP/1.1 204 No Content
Request-Id: 0c490066-98a4-11e5-8014-000000000000
X-Influxdb-Version: 0.9
Date: Wed, 02 Dec 2015 03:23:25 GMT
Connection: close

➜  ~  curl -i -XPOST 'http://127.0.0.1:8086/write?db=foo&rp=one_hour' --data-binary 'cpu,host=host_3,region=region_3 value=3'
HTTP/1.1 204 No Content
Request-Id: 100443e8-98a4-11e5-8015-000000000000
X-Influxdb-Version: 0.9
Date: Wed, 02 Dec 2015 03:23:31 GMT
Connection: close

➜  ~  curl -i -XPOST 'http://127.0.0.1:8086/write?db=foo&rp=one_hour' --data-binary 'cpu,host=host_4,region=region_4 value=4'
HTTP/1.1 204 No Content
Request-Id: 1432f71e-98a4-11e5-8016-000000000000
X-Influxdb-Version: 0.9
Date: Wed, 02 Dec 2015 03:23:38 GMT
Connection: close

Now there are four points in the foo database:

➜  ~  curl -G 'http://127.0.0.1:8086/query?pretty=true' --data-urlencode "db=foo" --data-urlencode "q=SELECT * FROM one_hour.cpu"          
{
    "results": [
        {
            "series": [
                {
                    "name": "cpu",
                    "columns": [
                        "time",
                        "host",
                        "region",
                        "value"
                    ],
                    "values": [
                        [
                            "2015-12-02T03:23:15.070506552Z",
                            "host_1",
                            "region_1",
                            1
                        ],
                        [
                            "2015-12-02T03:23:25.283799043Z",
                            "host_2",
                            "region_2",
                            2
                        ],
                        [
                            "2015-12-02T03:23:31.544199738Z",
                            "host_3",
                            "region_3",
                            3
                        ],
                        [
                            "2015-12-02T03:23:38.561141279Z",
                            "host_4",
                            "region_4",
                            4
                        ]
                    ]
                }
            ]
        }
    ]
}

Use mean or other functions to execute an aggregation query:

 ➜  ~  curl -G 'http://127.0.0.1:8086/query?pretty=true' --data-urlencode "db=foo" --data-urlencode "q=SELECT mean(value) FROM one_hour.cpu"
{
    "results": [
        {
            "error": "unexpected end of JSON input"
        }
    ]
}
➜  ~  curl -G 'http://127.0.0.1:8086/query?pretty=true' --data-urlencode "db=foo" --data-urlencode "q=SELECT sum(value) FROM one_hour.cpu"
{
    "results": [
        {
            "error": "unexpected end of JSON input"
        }
    ]
}
➜  ~  curl -G 'http://127.0.0.1:8086/query?pretty=true' --data-urlencode "db=foo" --data-urlencode "q=SELECT min(value) FROM one_hour.cpu"
{
    "results": [
        {
            "error": "unexpected end of JSON input"
        }
    ]
}
➜  ~  curl -G 'http://127.0.0.1:8086/query?pretty=true' --data-urlencode "db=foo" --data-urlencode "q=SELECT max(value) FROM one_hour.cpu"
{
    "results": [
        {
            "error": "unexpected end of JSON input"
        }
    ]
}

li-ang commented Dec 2, 2015

If anybody wants to reproduce the issue, please create an InfluxDB cluster and set the replication of the retention policy to 1. Then write some points _with different tag sets_. Finally, execute an aggregation query that _covers all the points_ you wrote.
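For convenience, here is a minimal Go sketch of that reproduction against a single node's HTTP API. It assumes the foo database and the one_hour retention policy (replication 1) already exist, as created above, and that the node listens on 127.0.0.1:8086; adjust as needed:

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/url"
	"strings"
)

const base = "http://127.0.0.1:8086"

// writePoint POSTs one line-protocol point into foo/one_hour.
func writePoint(line string) error {
	resp, err := http.Post(base+"/write?db=foo&rp=one_hour", "text/plain", strings.NewReader(line))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusNoContent {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	return nil
}

func main() {
	// Points with four different tag sets, so they spread across the shard owners.
	points := []string{
		"cpu,host=host_1,region=region_1 value=1",
		"cpu,host=host_2,region=region_2 value=2",
		"cpu,host=host_3,region=region_3 value=3",
		"cpu,host=host_4,region=region_4 value=4",
	}
	for _, p := range points {
		if err := writePoint(p); err != nil {
			fmt.Println("write failed:", err)
			return
		}
	}

	// An aggregation query that covers all of the points written above.
	params := url.Values{}
	params.Set("db", "foo")
	params.Set("q", "SELECT mean(value) FROM one_hour.cpu")
	resp, err := http.Get(base + "/query?" + params.Encode())
	if err != nil {
		fmt.Println("query failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	// On an affected cluster this prints {"results":[{"error":"unexpected end of JSON input"}]}.
	fmt.Println(string(body))
}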

li-ang commented Dec 2, 2015

@benbjohnson Also, I think the labels should be category/clustering and category/functions.

seyoonhan commented

Has this issue been fixed?
I have the same problem on 0.9.6.1.

li-ang commented Dec 29, 2015

@seyoonhan Can you give me more details about your problem?

just2d commented Jan 5, 2016

@li-ang I got the same problem here: #5269
