Discussion: Re: [JDBC] [HACKERS] pgjdbc logical replication client throwing exception
Re: [JDBC] [HACKERS] pgjdbc logical replication client throwing exception
From
Vladimir Sitnikov
Date:
++pgjdbc dev list.
>I am facing unusual connection breakdown problem. Here is the simple code that I am using to read WAL file:
Does it always fail?
Can you create a test case? For instance, if you file a pull request with the test, it will get automatically tested across various PG versions, so it would be easier to reason about.
Have you tried "withStatusInterval(20, TimeUnit.SECONDS)" instead of 20 millis? I don't think it matters much; however, 20 ms seems like overkill.
Vladimir
Fri, Sep 15, 2017 at 19:57, Dipesh Dangol <ddipeshdan@gmail.com>:
hi,

I am trying to implement the logical replication stream API of PostgreSQL.
I am facing an unusual connection breakdown problem. Here is the simple code
that I am using to read the WAL:

String url = "jdbc:postgresql://pcnode2:5432/benchmarksql";
Properties props = new Properties();
PGProperty.USER.set(props, "benchmarksql");
PGProperty.PASSWORD.set(props, "benchmarksql");
PGProperty.ASSUME_MIN_SERVER_VERSION.set(props, "9.4");
PGProperty.REPLICATION.set(props, "database");
PGProperty.PREFER_QUERY_MODE.set(props, "simple");

Connection conn = DriverManager.getConnection(url, props);
PGConnection replConnection = conn.unwrap(PGConnection.class);

PGReplicationStream stream = replConnection.getReplicationAPI()
        .replicationStream().logical()
        .withSlotName("replication_slot3")
        .withSlotOption("include-xids", true)
        .withSlotOption("include-timestamp", "on")
        .withSlotOption("skip-empty-xacts", true)
        .withStatusInterval(20, TimeUnit.MILLISECONDS)
        .start();

while (true) {
    ByteBuffer msg = stream.read();
    if (msg == null) {
        TimeUnit.MILLISECONDS.sleep(10L);
        continue;
    }

    int offset = msg.arrayOffset();
    byte[] source = msg.array();
    int length = source.length - offset;
    String data = new String(source, offset, length);
    System.out.println(data);

    stream.setAppliedLSN(stream.getLastReceiveLSN());
    stream.setFlushedLSN(stream.getLastReceiveLSN());
}

Even the slightest modification to the code, like commenting out
System.out.println(data); (which just prints the data to the console), causes
a connection breakdown with the following error message:

org.postgresql.util.PSQLException: Database connection failed when reading from copy
at org.postgresql.core.v3.QueryExecutorImpl.readFromCopy(QueryExecutorImpl.java:1028)
at org.postgresql.core.v3.CopyDualImpl.readFromCopy(CopyDualImpl.java:41)
at org.postgresql.core.v3.replication.V3PGReplicationStream.receiveNextData(V3PGReplicationStream.java:155)
at org.postgresql.core.v3.replication.V3PGReplicationStream.readInternal(V3PGReplicationStream.java:124)
at org.postgresql.core.v3.replication.V3PGReplicationStream.read(V3PGReplicationStream.java:70)
at Server.main(Server.java:52)
Caused by: java.net.SocketException: Socket closed
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:140)
at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:109)
at org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:191)
at org.postgresql.core.PGStream.receive(PGStream.java:495)
at org.postgresql.core.PGStream.receive(PGStream.java:479)
at org.postgresql.core.v3.QueryExecutorImpl.processCopyResults(QueryExecutorImpl.java:1161)
at org.postgresql.core.v3.QueryExecutorImpl.readFromCopy(QueryExecutorImpl.java:1026)
... 5 moreI am trying to implement some logic like filtering out the unrelated table after reading log.But due to this unusual behavior I couldn't implement properly.Can somebody give me some hint how to solve this problem.Thank you.
Hi Vladimir,

Yes, initially I was trying withStatusInterval(20, TimeUnit.SECONDS); that
didn't work, so only then did I switch to withStatusInterval(20,
TimeUnit.MILLISECONDS), but it is not working either. I am not aware of the
type of test cases you are pointing to. Could you please send me a link for
that?
For generating the load, I am using BenchmarkSQL, which generates around 9,000
transactions per second. I am trying to run the stream API at the same time
BenchmarkSQL is generating load. If I don't run BenchmarkSQL it works fine; I
mean, when there are only a few transactions to replicate at a time, it works
fine. But when I run it with BenchmarkSQL and try to add some logic, like some
conditions, then it breaks down in between, most of the time within a few
seconds.
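[Editor's note: one possible explanation, a guess not confirmed anywhere in this thread, is that at ~9,000 tx/s any extra per-message work in the read loop delays the next status update long enough for the server to drop the connection (see the server's wal_sender_timeout setting). A common mitigation is to keep the reading loop minimal and hand each message to a worker thread. A self-contained sketch of that handoff pattern, with the message strings and the "<EOF>" sentinel purely illustrative:]

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: decouple slow per-message processing from the replication read
// loop, so the reader can keep calling stream.read() and sending status
// updates promptly while a worker does the filtering.
public class HandoffSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(10_000);

        // Worker thread: does the slow filtering/processing work.
        Thread worker = new Thread(() -> {
            try {
                String msg;
                while (!(msg = queue.take()).equals("<EOF>")) {
                    if (msg.contains("public.customer")) { // stand-in for real logic
                        System.out.println("kept: " + msg);
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        // Reading-loop stand-in: in real code this would be stream.read()
        // followed by queue.put(data) and the setAppliedLSN/setFlushedLSN calls.
        queue.put("table public.customer: INSERT: id[integer]:1");
        queue.put("table public.history: INSERT: id[integer]:2");
        queue.put("<EOF>");
        worker.join();
    }
}
```

The bounded queue provides backpressure if the worker falls behind; whether that tradeoff is acceptable depends on the workload.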
Hi Andres,

I haven't checked the server log yet. Right now I don't have access to my
working environment; I will be able to check it only on Monday. If I find
anything suspicious in the log, I will let you know.

Thank you guys.
On Fri, Sep 15, 2017 at 10:05 PM, Andres Freund <andres@anarazel.de> wrote:
On 2017-09-15 20:00:34 +0000, Vladimir Sitnikov wrote:
> ++pgjdbc dev list.
>
> >I am facing unusual connection breakdown problem. Here is the simple code
> that I am using to read WAL file:
>
> Does it always fail?
> Can you create a test case? For instance, if you file a pull request with
> the test, it will get automatically tested across various PG versions, so
> it would be easier to reason about
>
> Have you tried "withStatusInterval(20, TimeUnit.SECONDS)" instead of 20
> millis? I don't think it matters much; however, 20 ms seems like overkill.
Also, have you checked the server log?
- Andres
Hi Andres,
I also checked the server log. Nothing unusual is recorded there.