benchmarking update/insert and random record update

From: Ivan Sergio Borgonovo
Subject: benchmarking update/insert and random record update
Msg-id: 20080109004844.3b4a6741@webthatworks.it
List: pgsql-general
I have to sync two tables with a pk (serial/identity). The source
comes from MS SQL, the destination is pg (ODBC is not an option
currently).

One solution would be to truncate the destination and just copy the
new data into it, but I'd rather have a slower sync than a window
during which no data is available.

So I thought I'd delete the records not present in the source, then
do an update/insert:

update dest set ... where pk = ...;
if not found then
  insert into dest ...;
end if;
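Wrapped in a function, the per-row logic might look like this (a
sketch only; `dest` and the `id`/`payload` columns are placeholder
names, not from the original post):

```sql
-- Hypothetical per-row sync: try the update first, insert on miss.
-- Relies on plpgsql's FOUND variable, which UPDATE sets.
CREATE OR REPLACE FUNCTION sync_row(p_id int, p_payload text)
RETURNS void AS $$
BEGIN
  UPDATE dest SET payload = p_payload WHERE id = p_id;
  IF NOT FOUND THEN
    INSERT INTO dest (id, payload) VALUES (p_id, p_payload);
  END IF;
END;
$$ LANGUAGE plpgsql;
```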

Assuming this is the best way to keep the two tables in sync[1], I
was going to simulate such an update to get a rough idea of how long
it takes.

I have a 700K-record table and I expect a maximum of 10K updates,
20K inserts and 2K deletes.
The inserts will have higher, mostly consecutive pks.
Deletes and updates will hit random pks.
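A workload like the one above could be simulated with something along
these lines (an approximate sketch: `dest`, `id`, and `payload` are
placeholder names, and the random pk lists may contain duplicates, so
the counts are "up to", not exact):

```sql
-- ~2K deletes at random positions in the 1..700000 pk range
DELETE FROM dest
 WHERE id IN (SELECT (random() * 700000)::int
                FROM generate_series(1, 2000));

-- 20K inserts with higher, consecutive pks
INSERT INTO dest (id, payload)
SELECT 700000 + g, 'new row ' || g
  FROM generate_series(1, 20000) AS g;
```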

Considering pg internals, does it make any sense to also simulate the
"position" of the deletes/updates/inserts?

I know how to delete random records, and it's no problem to delete a
range of records whose pk falls in an interval, but...

How can I randomly update records?

I need to insert random values into some of the columns of randomly
picked records.
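One way to do this (a sketch; again `dest`, `id`, and `payload` are
hypothetical names) is to pick the target pks with random() in a
subquery and fill the columns with random values, e.g. an md5 of a
random number for a text column:

```sql
-- Update ~10K randomly picked rows with random column values.
-- random() is volatile, so it is re-evaluated for every row
-- produced by generate_series.
UPDATE dest
   SET payload = md5(random()::text)
 WHERE id IN (SELECT (random() * 700000)::int
                FROM generate_series(1, 10000));
```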


[1] Will renaming tables (dest -> old_dest, src -> dest) break pk/fk
relationships and function references to those objects?

thx

--
Ivan Sergio Borgonovo
http://www.webthatworks.it

