Re: [ADMIN] Fast Deletion For Large Tables

From Stephan Szabo
Subject Re: [ADMIN] Fast Deletion For Large Tables
Date
Msg-id 20021004083355.M36970-100000@megazone23.bigpanda.com
In reply to Fast Deletion For Large Tables  (Raymond Chui <raymond.chui@noaa.gov>)
List pgadmin-support
On Wed, 2 Oct 2002, Raymond Chui wrote:

> I have some tables with a huge amount of data.
> The tables have timestamp and float columns.
> I am trying to keep up to 6 days of data values.
> What I do is execute the SQL below from crontab (the UNIX
> command scheduler).
>
> BEGIN;
> DELETE FROM table_1 WHERE column_time < ('now'::timestamp - '6 days'::interval);
> .....
> DELETE FROM table_n WHERE column_time < ('now'::timestamp - '6 days'::interval);
> COMMIT;
>
>
> Everything is running fine, except that it takes a long time to finish,
> because some tables store from 50,000 to 100,000 rows
> and some deletions need to delete up to 45,000 rows.
>
> So I am thinking of just deleting the rows by their row number or row ID,
> like
>
> DELETE FROM a_table WHERE row_id < 45000;
>
> I know there is a row_id in Oracle.
> Is there a row_id for a table in Postgres?

Not really, at least not of that sort if I recall Oracle's row_id definition
correctly, although you could probably fake something similar with a sequence.
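For example, a minimal sketch of the sequence idea (the sequence and column
names below are only placeholders for illustration, not anything from the
original message):

    -- One sequence per table whose rows you want to number.
    CREATE SEQUENCE table_1_row_id_seq;

    -- Add an integer column and have the sequence fill it in on INSERT.
    ALTER TABLE table_1 ADD COLUMN row_id bigint;
    ALTER TABLE table_1 ALTER COLUMN row_id
        SET DEFAULT nextval('table_1_row_id_seq');

    -- Rows can then be dropped by number rather than by timestamp.
    DELETE FROM table_1 WHERE row_id < 45000;

Keep in mind that such a row_id is just an ordinary column assigned at insert
time, not a physical row address like Oracle's rowid, so you would still need
to work out which boundary value corresponds to "older than 6 days".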


