Discussion: wide row insert via Postgres jdbc driver


wide row insert via Postgres jdbc driver

From:
Sameer Kumar
Date:

Hi,

I am working with a vendor and planning to deploy their application with PostgreSQL as the backend. They have cautioned the customer that PostgreSQL's jdbc driver v9.1 (build 900) has issues which cause deadlocks during "wide record inserts".

Is there any such known problem that anyone else has encountered in this regard? Have there been any improvements in later builds/releases of the PostgreSQL drivers/connectors in this respect?

I am probably missing a lot of info in this post. I am not sure what other info would be helpful here besides the version. The "cautionary advice" is irrespective of platform/hardware.

Regards
Sameer

PS: Sent from my mobile device. Please excuse typos and abbreviations.

Re: wide row insert via Postgres jdbc driver

From:
Bill Moran
Date:
On Tue, 23 Sep 2014 13:24:40 +0800
Sameer Kumar <sameer.kumar@ashnik.com> wrote:
>
> I am working with a vendor and planning to deploy their application on
> PostgreSQL as backend. They have cautioned the customer that PostgreSQL's
> jdbc driver v9.1 (build 900) has issues which causes deadlocks while "wide
> record inserts".

Where are they getting this information?  Sounds like FUD to me.

> Is there any such known problem which anyone else has encountered in this
> regards? Has there been any improvements in future builds/releases on this
> aspect of PostgreSQL drivers/connectors?

I'm not aware of any, and in my previous job we made extensive use of it.

--
Bill Moran
I need your help to succeed:
http://gamesbybill.com


Re: wide row insert via Postgres jdbc driver

From:
Thomas Kellerer
Date:
Sameer Kumar wrote on 23.09.2014 at 07:24:
> I am working with a vendor and planning to deploy their application
> on PostgreSQL as backend. They have cautioned the customer that
> PostgreSQL's jdbc driver v9.1 (build 900) has issues which causes
> deadlocks while "wide record inserts".

Can you be a bit more explicit?
I have never heard the term "wide record inserts" before.


> Is there any such known problem which anyone else has encountered in
> this regards? Has there been any improvements in future
> builds/releases on this aspect of PostgreSQL drivers/connectors?

I have never seen any deadlocks in Postgres that were caused by the driver or Postgres itself.

Deadlocks are almost always caused by sloppy programming.

My guess is that this vendor initially supported "some other" database
with less strict transaction handling, or even a DBMS where they couldn't
(or didn't want to) use transactions.
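
For illustration, the classic "sloppy programming" deadlock is two sessions locking the same rows in opposite order inside open transactions. Here is a minimal JDBC sketch; the table, connection URL and the artificial sleep are made up purely to make the collision easy to reproduce:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical setup: CREATE TABLE accounts (id int PRIMARY KEY, balance int);
// Two sessions update the same two rows in opposite order; PostgreSQL detects
// the lock cycle and aborts one transaction with a deadlock error.
public class DeadlockDemo {

    static void transfer(String url, int firstId, int secondId) throws Exception {
        try (Connection conn = DriverManager.getConnection(url)) {
            conn.setAutoCommit(false);   // explicit transaction, row locks held until commit
            try (Statement st = conn.createStatement()) {
                st.executeUpdate("UPDATE accounts SET balance = balance - 1 WHERE id = " + firstId);
                Thread.sleep(2000);      // widen the window, like a slow "wide" statement would
                st.executeUpdate("UPDATE accounts SET balance = balance + 1 WHERE id = " + secondId);
            }
            conn.commit();
        }
    }

    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/test?user=test&password=secret"; // made up
        Thread a = new Thread(() -> { try { transfer(url, 1, 2); } catch (Exception e) { e.printStackTrace(); } });
        Thread b = new Thread(() -> { try { transfer(url, 2, 1); } catch (Exception e) { e.printStackTrace(); } });
        a.start(); b.start();
        a.join(); b.join();
    }
}

The fix is on the application side: touch rows in a consistent order (e.g. lowest id first) and keep transactions short. No driver version changes that.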





Re: wide row insert via Postgres jdbc driver

From:
Bill Moran
Date:
On Tue, 23 Sep 2014 14:12:22 +0200
Thomas Kellerer <spam_eater@gmx.net> wrote:

> Sameer Kumar wrote on 23.09.2014 at 07:24:
> > I am working with a vendor and planning to deploy their application
> > on PostgreSQL as backend. They have cautioned the customer that
> > PostgreSQL's jdbc driver v9.1 (build 900) has issues which causes
> > deadlocks while "wide record inserts".
>
> Can you be a bit more explicit?
> I have never heard the term "wide record inserts" before

I've heard these terms before.  "Wide" generally means at least one of the
following:

* A large number of columns
* At least 1 column with a lot of data

Of course, both of those criteria are incredibly subjective.  How many columns
is a "large" number?  How much data is a "lot"?

It generally boils down to the fact that pushing a lot of data (whether many
columns or a single column with a lot of data) takes longer than pushing small
amounts of data (big surprise), and as a result the statistical chance that
the operation will collide with a conflicting operation (causing, in this case,
a deadlock) is increased.
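
Whichever side loses, PostgreSQL reports the detected deadlock as an error (SQLState 40P01), and the aborted transaction can simply be rolled back and retried; that handling belongs in the application, not the driver. A minimal sketch of such a retry loop, assuming a single parameterised statement (the SQL text and retry limit are placeholders):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class DeadlockRetry {

    private static final String DEADLOCK_DETECTED = "40P01"; // PostgreSQL SQLState for deadlock_detected

    static void executeWithRetry(Connection conn, String sql, int maxAttempts) throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try {
                conn.setAutoCommit(false);
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.executeUpdate();
                }
                conn.commit();
                return;                              // success
            } catch (SQLException e) {
                conn.rollback();                     // release whatever locks we still hold
                if (!DEADLOCK_DETECTED.equals(e.getSQLState()) || attempt >= maxAttempts) {
                    throw e;                         // not a deadlock, or retries exhausted
                }
                // we were chosen as the deadlock victim: loop and try again
            }
        }
    }
}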

As you mention, it's usually something that people with poorly written
applications complain about.  I.e. "our application works just fine in our test
environment, so your server must be too slow ... get a faster server"

Of course, the real problem is that the application was written with a large
number of braindead assumptions (things will always be fast; our tests never
encounter deadlocks, so they can't happen, etc.).  I've dealt directly with this
back in my consulting days: clients who insisted that the correct way to fix
their crashes was to buy faster hardware.  The annoying thing is that such an
approach _appears_ to fix the problem, because the faster hardware reduces the
chance of the problem occurring, and in the mind of people who don't
understand concurrent programming, that's "fixed".

There is an enormous amount of really badly written software out there.

--
Bill Moran
I need your help to succeed:
http://gamesbybill.com


Re: wide row insert via Postgres jdbc driver

From:
Sameer Kumar
Date:

On Tue, Sep 23, 2014 at 9:24 PM, Bill Moran <wmoran@potentialtech.com> wrote:
> On Tue, 23 Sep 2014 14:12:22 +0200
> Thomas Kellerer <spam_eater@gmx.net> wrote:
>
> > Sameer Kumar wrote on 23.09.2014 at 07:24:
> > > I am working with a vendor and planning to deploy their application
> > > on PostgreSQL as backend. They have cautioned the customer that
> > > PostgreSQL's jdbc driver v9.1 (build 900) has issues which causes
> > > deadlocks while "wide record inserts".
> >
> > Can you be a bit more explicit?
> > I have never heard the term "wide record inserts" before

> I've heard these terms before.  "Wide" generally means at least one of the
> following:
>
> * A large number of columns
> * At least 1 column with a lot of data

Sorry for using the generic term. Yes, the explanation is correct. When I said wide row, I meant "bytea" columns being part of the table.
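
For reference, a "wide" insert of that kind through the JDBC driver is just an ordinary parameterised INSERT with the bytea column bound as a byte[]. A minimal sketch follows (table, file and connection details are invented); the only deadlock-relevant point is to assemble the payload before opening the transaction and to commit promptly, so the timing window described above stays small:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Hypothetical table: CREATE TABLE documents (name text, payload bytea);
public class WideInsert {
    public static void main(String[] args) throws Exception {
        // Build the large payload *before* the transaction starts,
        // so locks are held for as short a time as possible.
        byte[] payload = Files.readAllBytes(Paths.get("/tmp/report.pdf")); // made-up file

        String url = "jdbc:postgresql://localhost:5432/test?user=test&password=secret"; // made up
        try (Connection conn = DriverManager.getConnection(url)) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO documents (name, payload) VALUES (?, ?)")) {
                ps.setString(1, "report.pdf");
                ps.setBytes(2, payload);     // bytea parameter bound as byte[]
                ps.executeUpdate();
            }
            conn.commit();                   // commit promptly instead of keeping the transaction open
        }
    }
}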

 
> Of course, both of those criteria are incredibly subjective.  How many columns
> is a "large" number?  How much data is a "lot"?
>
> It generally boils down to the fact that pushing a lot of data (whether many
> columns or a single column with a lot of data) takes longer than pushing small
> amounts of data (big surprise), and as a result the statistical chance that
> the operation will collide with a conflicting operation (causing, in this case,
> a deadlock) is increased.

I guess I understand the explanation here.
 
> As you mention, it's usually something that people with poorly written
> applications complain about.  I.e. "our application works just fine in our test
> environment, so your server must be too slow ... get a faster server"
>
> Of course, the real problem is that the application was written with a large
> number of braindead assumptions (things will always be fast; our tests never
> encounter deadlocks, so they can't happen, etc.).  I've dealt directly with this
> back in my consulting days: clients who insisted that the correct way to fix
> their crashes was to buy faster hardware.  The annoying thing is that such an
> approach _appears_ to fix the problem, because the faster hardware reduces the
> chance of the problem occurring, and in the mind of people who don't
> understand concurrent programming, that's "fixed".

> There is an enormous amount of really badly written software out there.

Cannot agree more... :)
Let me get back to the vendor with this. I am sure they are not going to like it (no one likes to admit they are wrong). :)

Best Regards,

Sameer Kumar | Database Consultant

ASHNIK PTE. LTD.

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533

M: +65 8110 0350  T: +65 6438 3504 | www.ashnik.com

This email may contain confidential, privileged or copyright material and is solely for the use of the intended recipient(s).
