
Postgres jdbc connector: Decimal Data-Type #280

Closed
amit-c1x opened this issue Sep 11, 2017 · 15 comments

@amit-c1x

Hi,

I have configured the JDBC connector on a Postgres database. One of the columns has the DECIMAL(15,3) type. However, in the Kafka topic I am seeing string values for it.

Could someone please explain what is going on and how to rectify this?

Example of relevant fields:

	"schema": {
		"type": "struct",
		"fields": [{
			"type": "bytes",
			"optional": false,
			"name": "org.apache.kafka.connect.data.Decimal",
			"version": 1,
			"parameters": {
				"scale": "3"
			},
			"field": "amount"
		}, {
			"type": "bytes",
			"optional": false,
			"name": "org.apache.kafka.connect.data.Decimal",
			"version": 1,
			"parameters": {
				"scale": "3"
			},
			"field": "ops-amount"
		}],
		"optional": false
	},
	"payload": {
		"amount": "adw=",
		"ops-amount": "AA=="
	}
}
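For context: these strings are Connect's standard encoding, not corruption. The JSON converter serializes the org.apache.kafka.connect.data.Decimal logical type as the base64-encoded, big-endian two's-complement bytes of the BigDecimal's unscaled value, with the scale carried in the schema parameters. A minimal Java sketch decoding the "amount" field above (the class name is illustrative):

import java.math.BigDecimal;
import java.math.BigInteger;
import java.util.Base64;

public class DecodeAmount {
    public static void main(String[] args) {
        // "adw=" decodes to the bytes 0x69 0xDC, i.e. the unscaled value 27100
        byte[] unscaled = Base64.getDecoder().decode("adw=");
        // apply the scale from the schema parameters ("scale": "3")
        BigDecimal amount = new BigDecimal(new BigInteger(unscaled), 3);
        System.out.println(amount); // prints 27.100
    }
}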
@strengthening

I have the same question.

@rhauch
Contributor

rhauch commented Jun 4, 2018

@strengthening, @amit-c1x: have you tried the latest 4.1.1 build? It includes a change (see #396) that adds a new behavior for mapping numerics with non-zero scales, but it is disabled by default to avoid unexpected behavioral changes upon upgrade. To enable the feature, set numeric.mapping=best_fit to map numeric columns with non-zero precision and scale to the most appropriate primitive types. Note that if columns have precision greater than what's allowed in a primitive type, the BigDecimal (aka Connect's DECIMAL) type is still used.
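For reference, a minimal sketch of a source connector properties file with this enabled (the connection details and table name are placeholders, not from this thread):

name=example-postgres-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:postgresql://localhost:5432/example
connection.user=XXXX
connection.password=XXXX
table.whitelist=example_table
mode=bulk
topic.prefix=example-
numeric.mapping=best_fit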

@strengthening

@rhauch Hi, I'm on the latest version 4.1.1 and my DB is MySQL. I tried setting numeric.mapping=best_fit in the source properties, but the result is still incorrect: for example, the decimal '0.00' is stored as the string 'AA=='. I'm trying the query config to work around it, but it doesn't seem elegant :(

@rhauch
Contributor

rhauch commented Jun 15, 2018

@strengthening sure, but can you provide more information to help replicate and diagnose? For example, what is the DDL for the table (or columns)?

@strengthening

@rhauch Oh, thx for your continuous help.
My source properties file is

name=connect-source-debug-v2-fund
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1

connection.url=jdbc:mysql://localhost:3306/jijindoubulk?zeroDateTimeBehavior=ROUND
connection.user=XXXX
connection.password=XXXX

mode=timestamp+incrementing
incrementing.column.name=id
timestamp.column.name=update_time
validate.non.null=false

table.whitelist=trade_deal
numeric.mapping=best_fit

topic.prefix=connect-source-debug-v2-fund

then the consumer receives data like this:

{"id":1526607,"apply_serial":"20110522000956565918","user_id":100004778,"fund_code":"260102","trade_account":"138797","trade_type":"022","share_type":"A","apply_amount":"AA==","apply_shares":"ARFw","confirm_date":1479686399000,"confirm_flag":"1","nav":"JxA=","confirm_amount":"ARGG","confirm_shares":"ARFw","poundage":"Aiw=","create_time":1479436213000,"cancel_time":-62135769600000,"portfolio_id":0,"explain":"","channel":"","aip_serial":"","batch_serial":0,"product_id":0,"target_code":"","org_apply_serial":"","aip_id":0,"update_time":1526947796000,"talicai_apply_serial":""}

the columns apply_amount, apply_shares, nav, confirm_amount, and poundage are of type decimal(20,2)

I downloaded the latest Confluent release; the jars are as below:

-rw-r--r-- 1 strengthen strengthen 20786 May 12 07:03 common-utils-4.1.1.jar
-rw-r--r-- 1 strengthen strengthen 87325 May 12 07:03 jline-0.9.94.jar
-rw-r--r-- 1 strengthen strengthen 147132 May 12 07:03 kafka-connect-jdbc-4.1.1.jar
-rw-r--r-- 1 strengthen strengthen 489884 May 12 07:03 log4j-1.2.17.jar
-rw-r--r-- 1 strengthen strengthen 1330394 May 12 07:03 netty-3.10.5.Final.jar
-rw-r--r-- 1 strengthen strengthen 658466 May 12 07:03 postgresql-9.4-1206-jdbc41.jar
-rw-r--r-- 1 strengthen strengthen 41203 May 12 07:03 slf4j-api-1.7.25.jar
-rw-r--r-- 1 strengthen strengthen 5575351 May 12 07:03 sqlite-jdbc-3.8.11.2.jar
-rw-r--r-- 1 strengthen strengthen 74798 May 12 07:03 zkclient-0.10.jar
-rw-r--r-- 1 strengthen strengthen 871369 May 12 07:03 zookeeper-3.4.10.jar

Is anything wrong with my config? Thx again~

@strengthening

@rhauch Is there any way to resolve the issue? Thx

@git-lyn

git-lyn commented Dec 13, 2018

@strengthening Did you solve this problem? I also encountered the same problem.

@git-lyn

git-lyn commented Dec 13, 2018

@strengthening Did you solve this problem? The value is Base64-encoded, and I haven't found a good way to decode it yet.

@git-lyn

git-lyn commented Dec 13, 2018

@rhauch How can this problem be solved? I encountered the same problem as @strengthening.

@justin-tomlinson

Also having the same issue, but with the AWS Aurora MySQL engine. Currently using the Confluent 5.0.1 Docker images in my testing env.

DB: AWS Aurora MySQL engine 5.6.10a
Database field types: DECIMAL(20,2)

Values appearing in the topic, read using the console consumer:
"points":"HTM=","invalid_points":"AA==","account_modified":"HTM=","account_before_transaction":"AMJg"

connector config:
{ "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector", "transforms": "createKey,extractInt", "transforms.createKey.type": "org.apache.kafka.connect.transforms.ValueToKey", "transforms.createKey.fields": "id", "transforms.extractInt.type": "org.apache.kafka.connect.transforms.ExtractField$Key", "transforms.extractInt.field": "id", "connection.url": "jdbc:mariadb://<host>:<port>", "connection.user": "<user>", "connection.password": "****", "table.whitelist": "users, transactions", "numeric.mapping": "best_fit", "mode": "timestamp+incrementing", "incrementing.column.name": "id", "timestamp.column.name": "modified", "validate.non.null": false, "table.types": "TABLE", "topic.prefix": "src_" }

@git-lyn

git-lyn commented Dec 21, 2018

@justin-tomlinson Yes. I couldn't find anything on the Confluent official website that deals with this, and what I tried didn't work. The value is Base64-encoded; you can decode it with BigDecimal bigDecimal = new BigDecimal(new BigInteger(Base64.getDecoder().decode("Fg==")), 2);
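A self-contained version of that decode, as a sketch (the scale of 2 matches the DECIMAL(20,2) columns above; the class and method names are made up):

import java.math.BigDecimal;
import java.math.BigInteger;
import java.util.Base64;

public class ConnectDecimalDecoder {
    // Rebuilds a BigDecimal from Connect's base64-encoded unscaled bytes
    static BigDecimal decode(String base64, int scale) {
        return new BigDecimal(new BigInteger(Base64.getDecoder().decode(base64)), scale);
    }

    public static void main(String[] args) {
        System.out.println(decode("Fg==", 2)); // 0x16 = 22 -> prints 0.22
        System.out.println(decode("AA==", 2)); // 0x00 = 0  -> prints 0.00
    }
}

Note that this only works if you know the scale out-of-band (or read it from the "scale" schema parameter); the byte payload alone does not carry it.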

@git-lyn

git-lyn commented Dec 21, 2018

@justin-tomlinson but it's not a very good solution

@rmoff

rmoff commented Jan 7, 2019

@strengthening @justin-tomlinson see #563: because MySQL treats NUMERIC as DECIMAL, the numeric.mapping support currently doesn't work for it; it only applies to NUMERIC columns.

@rmoff

rmoff commented Jan 7, 2019

I'm closing this issue as it was originally opened for Postgres, where numeric.mapping works as expected.

@rmoff rmoff closed this as completed Jan 7, 2019
@sunrongpu

Such a big company, yet this small problem still hasn't been solved after several years. Hard to imagine.
