
v3.0.1 of the application inspector is malfunctioning. #12048

Open

Ma-due opened this issue Feb 17, 2025 · 12 comments
Comments

@Ma-due

Ma-due commented Feb 17, 2025

Prerequisites

Please check the FAQ and search existing issues for similar questions before creating a new issue.

  • I have checked the FAQ and existing issues and found no answer.

What version of pinpoint are you using?

  • web/collector/agent v3.0.1
  • pinot 3.2.0
  • kafka 2.13-2.8.1
  • tomcat 11.0.2

Describe your problem

The Application Inspector is not working correctly.

What have you done?

I followed the installation guide.

Screenshots

(screenshot attached)

Other features, such as the Agent Inspector and the Server Map, are working correctly.
However, the Application Inspector shows all data as 0.

(screenshot attached)

It seems the Pinot tables are fine, and the Kafka topic (attached below) is receiving messages without issue.

Logs

kafka-topic-messages.txt
web.log
col.log

Additional context

@donghun-cho
Contributor

Here are some things to check:

  1. Check the data in Pinot.
    Run a query in the Pinot controller's Query Console and check the results.

This is an example query:

select * from inspectorStatApp 
where metricName = 'jvmGc'
order by roundedEventTime desc
limit 100

  2. Check the actual Pinpoint web queries.
    Look for INFO-level [QueryLogger] logs in the Pinot broker and check the actual queries.
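The log check in step 2 can be sketched as follows (a minimal sketch; the broker log path below is an assumption, so point `BROKER_LOG` at your deployment's actual broker log file):

```shell
#!/bin/bash
# Sketch: pull the [QueryLogger] entries out of the Pinot broker log so the
# actual SQL sent by pinpoint-web can be inspected.
# The default path is an assumption; override with BROKER_LOG=... as needed.
BROKER_LOG="${BROKER_LOG:-/opt/pinot/logs/pinotBroker.log}"

if [ -f "$BROKER_LOG" ]; then
  # show the 20 most recent query-log lines
  grep -F '[QueryLogger]' "$BROKER_LOG" | tail -n 20
else
  echo "broker log not found at $BROKER_LOG (set BROKER_LOG)" >&2
fi
```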

@Ma-due
Author

Ma-due commented Feb 19, 2025

Pinot_Data Explorer.xls

I am attaching the query results from the Pinot table. In fact, all the data returned by the table query was 0. It seems there may be an issue with how data from the Kafka topic is stored in Pinot.

brk_new.log

web_new.log

I am attaching the logs from the broker and the web. I could not find any logs related to 'INFO QueryLogger', and I am unable to pinpoint exactly where the issue lies.
There was an HBase-related error in the web logs, but it does not appear to be related to this issue.

Thank you for your help.

@luanalmeida-xipptech

luanalmeida-xipptech commented Feb 20, 2025

Take a look at https://github.com/pinpoint-apm/pinpoint-docker/blob/master/docker-compose-metric.yml

I think if you execute the pinot-init and kafka-init scripts, it should work.

@Ma-due
Author

Ma-due commented Feb 20, 2025

@luanalmeida-xipptech

Looking at the code in pinot-init and kafka-init, they seem to generate the related table schemas and topics.

The data related to the inspector stat app was added correctly, and the inspector stat agent is functioning properly.

Could you let us know the basis for your assessment that this is where the problem lies?

@Ma-due
Author

Ma-due commented Feb 24, 2025

@donghun-cho

Can you provide additional assistance? I have deleted all schemas and tables, and even recreated them using the script from luanalmeida-xipptech, but the same issue still occurs. I would really like to resolve this problem.

@donghun-cho
Contributor

donghun-cho commented Feb 24, 2025

I checked kafka-init, and it doesn't include the --partitions 64 parameter.
However, the Pinot table configuration assumes 64 partitions.

Altering the topic to 64 partitions, or changing the Pinot table config to 1, might work.
You need to restart Pinot's consumption for the change to take effect immediately.
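A sketch of checking and fixing the partition count (assumptions: the `KAFKA_HOME` path from this thread, a broker on `localhost:9092`, and the `parse_partition_count` helper, which is hypothetical):

```shell
#!/bin/bash
# Sketch: compare the topic's actual partition count against the 64
# partitions the default Pinot table config assumes, and alter it if needed.
# KAFKA_HOME and the broker address are assumptions taken from this thread.
KAFKA_HOME="${KAFKA_HOME:-/pinot-1.2.0/kafka_2.13-2.8.1}"
EXPECTED=64

# kafka-topics.sh --describe prints a summary line such as:
#   Topic: inspector-stat-app  PartitionCount: 1  ReplicationFactor: 1 ...
# This hypothetical helper extracts the PartitionCount value from it.
parse_partition_count() {
  sed -n 's/.*PartitionCount:[[:space:]]*\([0-9][0-9]*\).*/\1/p' <<< "$1"
}

if [ -x "$KAFKA_HOME/bin/kafka-topics.sh" ]; then
  summary=$("$KAFKA_HOME/bin/kafka-topics.sh" --describe \
    --topic inspector-stat-app --bootstrap-server localhost:9092 | head -n 1)
  actual=$(parse_partition_count "$summary")
  if [ "$actual" != "$EXPECTED" ]; then
    # note: Kafka can only increase a topic's partition count, never decrease it
    "$KAFKA_HOME/bin/kafka-topics.sh" --alter --topic inspector-stat-app \
      --partitions "$EXPECTED" --bootstrap-server localhost:9092
  fi
fi
```

The same check applies to the other topics the init script creates, since their Pinot table configs carry the same assumption.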

@Ma-due
Author

Ma-due commented Feb 24, 2025

@donghun-cho

I am sharing the JSON file I used. I deleted the table and schema, rebooted Pinot, and recreated the table. The issue remains the same.
I changed tableIndexConfig.segmentPartitionConfig.columnPartitionMap.sortKey.numPartitions from 64 to 1:

{
  "tableName": "inspectorStatApp",
  "tableType": "REALTIME",
  "query" : {
    "disableGroovy": false
  },
  "segmentsConfig": {
    "timeColumnName": "roundedEventTime",
    "timeType": "MILLISECONDS",
    "schemaName": "inspectorStatApp",
    "replicasPerPartition": "1",
    "retentionTimeUnit": "DAYS",
    "retentionTimeValue": "7"
  },
  "tenants": {},
  "tableIndexConfig": {
    "sortedColumn": ["sortKey"],
    "bloomFilterColumns": ["tenantId", "serviceName", "sortKey", "applicationName", "metricName", "fieldName", "version", "primaryTag"],
    "noDictionaryColumns": ["sumFieldValue", "minFieldValue", "maxFieldValue", "countFieldValue", "roundedEventTime"],
    "segmentPartitionConfig": {
      "columnPartitionMap": {
        "sortKey": {
          "functionName": "Murmur",
          "numPartitions": 1 # 64 > 1
        }
      }
    },
    "loadMode": "MMAP",
    "nullHandlingEnabled": true,
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.consumer.type": "lowlevel",
      "stream.kafka.topic.name": "inspector-stat-app",
      "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
      "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
      "stream.kafka.broker.list": "localhost:9092",
      "realtime.segment.flush.threshold.rows": "0",
      "realtime.segment.flush.threshold.time": "24h",
      "realtime.segment.flush.threshold.segment.size": "64M",
      "stream.kafka.consumer.prop.auto.offset.reset": "smallest"
    }
  },
  "ingestionConfig": {
    "transformConfigs": [
      {
        "columnName": "roundedEventTime",
        "transformFunction": "DATETIME_CONVERT(eventTime, '1:MILLISECONDS:EPOCH', '1:MILLISECONDS:EPOCH', '30:SECONDS')"
      }
    ],
    "aggregationConfigs": [
      {
        "columnName": "sumFieldValue",
        "aggregationFunction": "SUM(fieldValue)"
      },
      {
        "columnName": "minFieldValue",
        "aggregationFunction": "MIN(fieldValue)"
      },
      {
        "columnName": "maxFieldValue",
        "aggregationFunction": "MAX(fieldValue)"
      },
      {
        "columnName": "countFieldValue",
        "aggregationFunction": "COUNT(*)"
      }
    ]
  },
  "metadata": {
    "customConfigs": {}
  },
  "routing": {
    "segmentPrunerTypes": [
      "time",
      "partition"
    ]
  }
}

@donghun-cho
Contributor

donghun-cho commented Feb 24, 2025

Could you explain how you set up Kafka? I'll try it and check.
What command did you use to create the topic for inspector-app? Or did you just use docker-compose?

@Ma-due
Author

Ma-due commented Feb 24, 2025

I am currently using three VMs.
I am using Kafka version 2.13-2.8.1, and I have only set the ZooKeeper address separately in server.properties.

I will share the table registration script I used for Pinot.
It combines the kafka-init and pinot-init scripts from the link above,
and I changed tableIndexConfig.segmentPartitionConfig.columnPartitionMap.sortKey.numPartitions from 64 to 1 directly in the JSON.

#!/bin/bash

KAFKA_HOME=/pinot-1.2.0/kafka_2.13-2.8.1
PINOT_HOME=/pinot-1.2.0
PINPOINT_VERSION=v3.0.1

# Create Kafka topics
$KAFKA_HOME/bin/kafka-topics.sh --create --topic url-stat --bootstrap-server localhost:9092
$KAFKA_HOME/bin/kafka-topics.sh --create --topic system-metric-data-type --bootstrap-server localhost:9092
$KAFKA_HOME/bin/kafka-topics.sh --create --topic system-metric-tag --bootstrap-server localhost:9092
$KAFKA_HOME/bin/kafka-topics.sh --create --topic system-metric-double --bootstrap-server localhost:9092
$KAFKA_HOME/bin/kafka-topics.sh --create --topic exception-trace --bootstrap-server localhost:9092
$KAFKA_HOME/bin/kafka-topics.sh --create --topic inspector-stat-app --bootstrap-server localhost:9092


# Download JSON files
curl https://raw.githubusercontent.com/pinpoint-apm/pinpoint/${PINPOINT_VERSION}/uristat/uristat-common/src/main/pinot/pinot-uriStat-realtime-table.json > uriStatTableReal.json
curl https://raw.githubusercontent.com/pinpoint-apm/pinpoint/${PINPOINT_VERSION}/uristat/uristat-common/src/main/pinot/pinot-uriStat-offline-table.json > uriStatTableOFF.json
curl https://raw.githubusercontent.com/pinpoint-apm/pinpoint/${PINPOINT_VERSION}/uristat/uristat-common/src/main/pinot/pinot-uriStat-schema.json > uriStatSchema.json

curl https://raw.githubusercontent.com/pinpoint-apm/pinpoint/${PINPOINT_VERSION}/metric-module/metric/src/main/pinot/pinot-tag-realtime-table.json > tagTable.json
curl https://raw.githubusercontent.com/pinpoint-apm/pinpoint/${PINPOINT_VERSION}/metric-module/metric/src/main/pinot/pinot-tag-schema.json > tagSchema.json
curl https://raw.githubusercontent.com/pinpoint-apm/pinpoint/${PINPOINT_VERSION}/metric-module/metric/src/main/pinot/pinot-double-realtime-table.json > doubleTable.json
curl https://raw.githubusercontent.com/pinpoint-apm/pinpoint/${PINPOINT_VERSION}/metric-module/metric/src/main/pinot/pinot-double-schema.json > doubleSchema.json
curl https://raw.githubusercontent.com/pinpoint-apm/pinpoint/${PINPOINT_VERSION}/metric-module/metric/src/main/pinot/pinot-dataType-realtime-table.json > dataTypeTable.json
curl https://raw.githubusercontent.com/pinpoint-apm/pinpoint/${PINPOINT_VERSION}/metric-module/metric/src/main/pinot/pinot-dataType-schema.json > dataTypeSchema.json

curl https://raw.githubusercontent.com/pinpoint-apm/pinpoint/${PINPOINT_VERSION}/exceptiontrace/exceptiontrace-common/src/main/pinot/pinot-exceptionTrace-offline-table.json > exceptionTraceTable.json
curl https://raw.githubusercontent.com/pinpoint-apm/pinpoint/${PINPOINT_VERSION}/exceptiontrace/exceptiontrace-common/src/main/pinot/pinot-exceptionTrace-schema.json > exceptionTraceSchema.json

curl https://raw.githubusercontent.com/pinpoint-apm/pinpoint/${PINPOINT_VERSION}/inspector-module/inspector-collector/src/main/pinot/pinot-inspector-stat-agent-realtime-table.json > inspectorAgentTable.json
curl https://raw.githubusercontent.com/pinpoint-apm/pinpoint/${PINPOINT_VERSION}/inspector-module/inspector-collector/src/main/pinot/pinot-inspector-stat-agent-schema.json > inspectorAgentSchema.json
curl https://raw.githubusercontent.com/pinpoint-apm/pinpoint/${PINPOINT_VERSION}/inspector-module/inspector-collector/src/main/pinot/pinot-inspector-stat-application-realtime-table.json > inspectorApplicationTable.json
curl https://raw.githubusercontent.com/pinpoint-apm/pinpoint/${PINPOINT_VERSION}/inspector-module/inspector-collector/src/main/pinot/pinot-inspector-stat-application-schema.json > inspectorApplicationSchema.json

# Modify JSON files
sed -i 's/localhost:19092/localhost:9092/g' uriStatTableReal.json uriStatTableOFF.json tagTable.json doubleTable.json dataTypeTable.json exceptionTraceTable.json inspectorAgentTable.json inspectorApplicationTable.json
sed -i 's/.*replicasPerPartition.*/    \"replicasPerPartition\": \"1\",/g' uriStatTableReal.json uriStatTableOFF.json tagTable.json doubleTable.json dataTypeTable.json exceptionTraceTable.json inspectorAgentTable.json inspectorApplicationTable.json

$PINOT_HOME/bin/pinot-admin.sh AddTable -schemaFile uriStatSchema.json -realtimeTableConfigFile uriStatTableReal.json -offlineTableConfigFile uriStatTableOFF.json -controllerPort 9000 -exec

$PINOT_HOME/bin/pinot-admin.sh AddTable -schemaFile tagSchema.json -tableConfigFile tagTable.json -controllerPort 9000 -exec

$PINOT_HOME/bin/pinot-admin.sh AddTable -schemaFile doubleSchema.json -tableConfigFile doubleTable.json -controllerPort 9000 -exec

$PINOT_HOME/bin/pinot-admin.sh AddTable -schemaFile dataTypeSchema.json -tableConfigFile dataTypeTable.json -controllerPort 9000 -exec

$PINOT_HOME/bin/pinot-admin.sh AddTable -schemaFile exceptionTraceSchema.json -tableConfigFile exceptionTraceTable.json -controllerPort 9000 -exec

$PINOT_HOME/bin/pinot-admin.sh AddTable -schemaFile inspectorAgentSchema.json -tableConfigFile inspectorAgentTable.json -controllerPort 9000 -exec

$PINOT_HOME/bin/pinot-admin.sh AddTable -schemaFile inspectorApplicationSchema.json -tableConfigFile inspectorApplicationTable.json -controllerPort 9000 -exec
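Given the partition mismatch identified earlier in the thread, the topic-creation lines in this script would need --partitions 64 to match the default Pinot table config. A sketch of the corrected creation (the `topic_create_args` helper is illustrative, and --replication-factor 1 assumes a single-broker setup):

```shell
#!/bin/bash
# Sketch: create the inspector-stat-app topic with 64 partitions, matching
# the numPartitions in the default Pinot table config from this thread.
KAFKA_HOME="${KAFKA_HOME:-/pinot-1.2.0/kafka_2.13-2.8.1}"

# Illustrative helper: build the kafka-topics.sh argument list for one topic.
topic_create_args() {
  printf '%s' "--create --topic $1 --partitions 64 --replication-factor 1 --bootstrap-server localhost:9092"
}

if [ -x "$KAFKA_HOME/bin/kafka-topics.sh" ]; then
  # word splitting of the generated argument list is intended here
  "$KAFKA_HOME/bin/kafka-topics.sh" $(topic_create_args inspector-stat-app)
fi
```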

@donghun-cho
Contributor

Could you check or share the Pinot server logs?

The Kafka topic messages seem fine, but countFieldValue in the query result is 0.
It might be related to Ingestion Aggregation.

@Ma-due
Author

Ma-due commented Feb 26, 2025

The original log is too long, so I am sharing only the relevant portions.

Configuration-related logs:

The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.segment.size' was supplied but isn't a known config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.factory.class.name' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.

grep -i 'error' server.log

[Consumer clientId=systemMetricDouble_REALTIME-system-metric-double-0, groupId=null] Error while fetching metadata with correlation id 1 : {system-metric-double=LEADER_NOT_AVAILABLE}
[Consumer clientId=inspectorStatApp_REALTIME-inspector-stat-app-0, groupId=null] Error while fetching metadata with correlation id 2 : {inspector-stat-app=LEADER_NOT_AVAILABLE}
[Consumer clientId=systemMetricTag_REALTIME-system-metric-tag-0, groupId=null] Error while fetching metadata with correlation id 1 : {system-metric-tag=LEADER_NOT_AVAILABLE}
[Consumer clientId=systemMetricDataType_REALTIME-system-metric-data-type-0, groupId=null] Error while fetching metadata with correlation id 1 : {system-metric-data-type=LEADER_NOT_AVAILABLE}
[Consumer clientId=inspectorStatAgent00_REALTIME-inspector-stat-agent-00-0, groupId=null] Error while fetching metadata with correlation id 2 : {inspector-stat-agent-00=LEADER_NOT_AVAILABLE}
[Consumer clientId=uriStat_REALTIME-url-stat-0, groupId=null] Error while fetching metadata with correlation id 1 : {url-stat=LEADER_NOT_AVAILABLE}
Error registering AppInfo mbean
Error registering AppInfo mbean

grep -i 'inspectorstatapp' server.log

srv.txt

@Ma-due
Author

Ma-due commented Feb 27, 2025

@donghun-cho

I have resolved the issue.

It seems there was a problem when deleting the Kafka topics in ZooKeeper, so I initialized the ZooKeeper data and configured the Kafka topics to have 64 partitions upon creation.

After making these changes, I noticed the following log on the Pinot server, and accordingly removed roundedEventTime from noDictionaryColumns in the application table JSON:

Metrics aggregation cannot be turned ON in presence of no-dictionary datetime/time columns, eg: roundedEventTime

  "tableIndexConfig": {
    "sortedColumn": ["sortKey"],
    "bloomFilterColumns": ["tenantId", "serviceName", "sortKey", "applicationName", "metricName", "fieldName", "version", "primaryTag"],
    "noDictionaryColumns": ["sumFieldValue", "minFieldValue", "maxFieldValue", "countFieldValue"], # delete roundedEventTime
    "segmentPartitionConfig": {
      "columnPartitionMap": {
        "sortKey": {
          "functionName": "Murmur",
          "numPartitions": 64
        }
      }
    },

This differs from the table config in Pinpoint's master branch. Please review this difference.
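One way to confirm the fix end to end is to re-run the earlier diagnostic query through the broker's REST API and check that countFieldValue is no longer all zeros. A sketch, assuming the default broker query port 8099:

```shell
#!/bin/bash
# Sketch: query Pinot through the broker REST endpoint (/query/sql) and
# inspect countFieldValue for recent rows. The broker address is an assumption.
BROKER="${BROKER:-localhost:8099}"
SQL="select roundedEventTime, countFieldValue from inspectorStatApp where metricName = 'jvmGc' order by roundedEventTime desc limit 10"

if command -v curl >/dev/null 2>&1; then
  curl -s -H 'Content-Type: application/json' \
    -d "{\"sql\": \"$SQL\"}" \
    "http://$BROKER/query/sql" || echo "broker not reachable at $BROKER" >&2
fi
```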

(screenshot attached)
