Re: Help..Help...

From: Shridhar Daithankar
Subject: Re: Help..Help...
Date:
Msg-id: 3DD2A8BD.6022.56D203@localhost
In reply to: Help..Help...  (Murali Mohan Kasetty <kasetty@india.hp.com>)
List: pgsql-general
On 13 Nov 2002 at 19:14, Murali Mohan Kasetty wrote:

> We are running two processes accessing the same table using JDBC. Both
> the processes updates records in the same table. The same rows will not
> be updated by the processes at the same time.
>
> When the processes are run concurrently, the time taken is X seconds
> each. But, when we run the same processes together, we are seeing that
> the time taken is worse than 2X.

Updates generate dead tuples, which cause the slowdown. Run VACUUM ANALYZE
concurrently in the background so that the space held by dead tuples becomes
available for reuse.

>
> Is it possible that there is a contention that is occuring while the
> records are being written. Has anybody experienced a similar problem.
> What is the

I am sure that's not the case. Are you doing rapid updates? Practically, you
should run VACUUM ANALYZE after every 1000 updates or so to keep performance
at its best. Tune this figure to suit your needs.

> LOCK mechanism that is used by PostgreSQL.

Read up on MVCC. It's documented in the PostgreSQL manual.

HTH

Bye
 Shridhar

--
mixed emotions:    Watching a bus-load of lawyers plunge off a cliff.    With five
empty seats.

