client side lo_import fails with files larger than 8190 bytes on postgresql 13.6 #460
Comments
As an addition: this seems to have come in with pg 1.3.0. Reverting to 1.2.3 fixes the issue.
I can not reproduce your issue. Regardless of the file size or content, it can be imported without any error on my system. I'm running Ubuntu 22.04 with PostgreSQL 14. I tried Ruby 3.0.2 and 3.1.2 and pg versions 1.2.3 and 1.3.4. The connection to the DBMS was through a Unix socket. pg-1.3.x didn't change anything in this area. This kind of error message usually appears when a single connection is used by multiple threads, but your code doesn't look like that. Can you add these lines around the failing call and attach the output?

```
trace_io = File.open("test_trace.out", 'w', 0600)
conn.trace( trace_io )
trace_io.close
```
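For context, a minimal sketch of how such a trace can be captured around the failing call; the database name, file path, and the exact placement of the calls are assumptions, not something prescribed above:

```
require 'pg'

conn = PG.connect(dbname: "testdb")                # hypothetical database name
trace_io = File.open("test_trace.out", "w", 0600)  # file that receives the protocol trace
conn.trace(trace_io)                               # mirror client/server traffic into the file
begin
  conn.lo_import("large_file.txt")                 # the call that fails for the reporter
ensure
  conn.untrace                                     # stop tracing before closing the IO
  trace_io.close
end
```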
Thank you for the response. test_trace.txt attached.
I just noticed that the error is raised in the rescue path of
I ran it again; the error I'm getting is a PG::Error without any message or anything attached. Without having any real clue how to read that trace, it doesn't look erroneous to me per se?
Further digging through the gem, I noticed that lo_import seems to return 0, which in turn causes the empty PG::Error. Investigating further.
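A small, hedged sketch of how the underlying libpq message can be surfaced when the gem raises an empty PG::Error; `conn` and the file path are placeholders:

```
begin
  oid = conn.lo_import("large_file.txt")
  puts "imported as oid #{oid}"
rescue PG::Error => e
  # The exception carried no message here; PQerrorMessage may still hold details.
  warn "lo_import raised: #{e.message.inspect}"
  warn "libpq error_message: #{conn.error_message.inspect}"
end
```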
Testing the same thing on a freshly installed Linux system works fine. I have the feeling there is something specific to our PostgreSQL installation. Still investigating, but I'd appreciate any pointers if anything can be read from that trace that I cannot.
More updates; I'm at my wits' end. I wrote a very basic C program against libpq that runs just fine and succeeds in lo_import on our system, so there must be something specific causing this issue. What I have figured out is that lo_import always returns 0 when run from the context of the pg gem (I recompiled the gem to print the oid returned by lo_import). Any pointers?
Alright, just as I was getting to my wits' end as above, I discovered... something:

```
conn = PG.connect("localhost", 5432, nil, nil, "testdb", "user")  # <== when opening the connection like this, everything works
conn = PG.connect(dbname: "testdb")                               # <== when opening the connection like this, it does not... !?
```
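A small sketch to make the difference visible: the positional form forces a TCP connection to localhost, while the keyword form falls back to libpq's defaults (typically the unix domain socket). The database name and user are placeholders; `host`/`port` are used here only for inspection:

```
require 'pg'

tcp_conn  = PG.connect("localhost", 5432, nil, nil, "testdb", "user")  # TCP to localhost
sock_conn = PG.connect(dbname: "testdb")                               # libpq defaults, usually the unix socket

[tcp_conn, sock_conn].each do |c|
  # For a unix-socket connection, host reports the socket directory rather than a hostname.
  puts "host=#{c.host.inspect} port=#{c.port}"
  c.finish
end
```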
Further debugging reveals that connecting to the PostgreSQL unix socket apparently does not work... I am now relatively certain that this is entirely on my side. Sorry for all the noise; the fix should be relatively easy. This took way too long.
For more detail: this was run on an OpenBSD machine. Since OpenBSD 7.0, read/write packet frames on unix domain sockets are limited to 8192 bytes. On a Linux system the limit defaults to around 212k, which of course doesn't trigger the same issue. Whether this is something you want to or can address I'll leave up to you, but this may help someone for posterity.
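Given the report above that the TCP connection path works, a hedged workaround sketch for an affected OpenBSD host is simply to connect over TCP instead of the unix socket; host, port, database name, and file path are placeholders:

```
require 'pg'

# Connecting over TCP sidesteps the 8192-byte unix-socket frame limit described above.
conn = PG.connect(host: "localhost", port: 5432, dbname: "testdb")
oid  = conn.lo_import("large_file.txt")  # a file > 8190 bytes now imports without the empty PG::Error
puts "imported large object with oid #{oid}"
conn.finish
```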
libpq's "lo_*" calls fail when a bigger amount of data is transferred and the connection is set to nonblocking mode. According to some discussion here https://www.postgresql.org/message-id/20130606.082048.998330612674120762.t-ishii%40sraoss.co.jp the large object interface isn't intended to be used with nonblocking mode. It works to some extend with small data (< 8000 Bytes), but fails after lo_write of bigger data at the next exec call on the same connection like so: ``` message type 0x56 arrived from server while idle message type 0x5a arrived from server while idle message type 0x43 arrived from server while idle message type 0x5a arrived from server while idle PG::UnableToSend: unexpected response from server; first received character was "V" ``` I think the best solution is to disable nonblocking mode for the time of the lo_* call. Calling PQisnonblocking and PQsetnonblocking is very low overhead, since these functions do a few memory operations only. Fixes ged#460
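For illustration only, a Ruby-level approximation of the approach the commit takes inside the C extension: remember the nonblocking state, force blocking mode around the lo_* call, then restore it. On pg 1.3.x the Ruby-level setnonblocking may not map one-to-one onto libpq's flag, so this is a sketch of the idea rather than a guaranteed workaround; `conn` and the file path are placeholders:

```
# Remember the nonblocking state, force blocking mode for the call, then restore it.
def with_blocking_libpq(conn)
  was_nonblocking = conn.isnonblocking
  conn.setnonblocking(false)                     # conceptually PQsetnonblocking(conn, 0)
  yield
ensure
  conn.setnonblocking(true) if was_nonblocking   # put the connection back as it was
end

with_blocking_libpq(conn) do
  conn.lo_import("large_file.txt")
end
```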
Hello,
I ran into a rather odd bug with the pg gem running on ruby 3.0.2p107 connecting to postgresql 13.6 when using the lo_import function. When trying to lo_import a file with more than 8190 bytes (8191 bytes is exactly where it breaks for me), the following happens:
The server's PostgreSQL log mentions nothing at all for this connection. Variations of playing around with the issue have often shown that OIDs would not exist or that the OID is 0, but the most consistent error picture is the one above.
The code used to trigger the issue (using a 76k file containing random printable ASCII characters):
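A minimal sketch along these lines, assuming a default (unix socket) connection; the database name and file path are placeholders:

```
require 'pg'

# Connects via libpq defaults, i.e. the unix domain socket on this machine.
conn = PG.connect(dbname: "testdb")

# random.txt stands for the ~76k file of random printable ASCII;
# anything over 8190 bytes triggers the failure described above.
oid = conn.lo_import("random.txt")
puts "imported as oid #{oid}"
conn.finish
```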
The user in question has administrator privileges on the database (though that should not be relevant).
If any further data is required for this to be debugged, please let me know.