Re: Performance with very large tables

From: Jan van der Weijde
Subject: Re: Performance with very large tables
Date:
Msg-id: 4B9C73D1EB78FE4A81475AE8A553B3C67DC54E@exch-lei1.attachmate.com
In reply to: Performance with very large tables  ("Jan van der Weijde" <Jan.van.der.Weijde@attachmate.com>)
List: pgsql-general
Hi Bruno,

Good to read that your advice matches the solution I was already
considering! Although I think this is something PostgreSQL should solve
internally, I prefer the WHERE clause over a long-lasting SERIALIZABLE
transaction.

Thanks,
Jan

-----Original Message-----
From: Bruno Wolff III [mailto:bruno@wolff.to]
Sent: Tuesday, January 16, 2007 19:12
To: Jan van der Weijde; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Performance with very large tables

On Tue, Jan 16, 2007 at 12:06:38 -0600,
  Bruno Wolff III <bruno@wolff.to> wrote:
>
> Depending on exactly what you want to happen, you may be able to
> continue where you left off using a condition on the primary key,
> using the last primary key value for a row that you have viewed,
> rather than OFFSET. This will still be fast and will not skip rows
> that are now visible to your transaction (or show duplicates when
> deleted rows are no longer visible to your transaction).

I should have mentioned that you also will need to use an ORDER BY
clause on the primary key when doing things this way.
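
For what it's worth, a minimal sketch of that keyset approach, assuming
a hypothetical table items with an integer primary key id and a batch
size of 50 (the table name and the $last_id placeholder are mine, not
from the thread):

    -- First batch: order by the primary key and cap the batch size.
    SELECT * FROM items
    ORDER BY id
    LIMIT 50;

    -- Later batches: resume after the last row already viewed,
    -- instead of using OFFSET. $last_id is a placeholder for the id
    -- of the final row of the previous batch.
    SELECT * FROM items
    WHERE id > $last_id
    ORDER BY id
    LIMIT 50;

Because the WHERE condition is on the indexed primary key, each batch
is a small index range scan, so the cost stays roughly constant rather
than growing with the offset as it does with OFFSET.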
