massive INSERT

From: Kurt Overberg
Subject: massive INSERT
Date:
Msg-id: 3E70C3BC.40904@hotdogrecords.com
Responses: Re: massive INSERT  (Greg Stark <gsstark@mit.edu>)
List: pgsql-sql
Hi everyone!

I'm trying to maintain a 'membership' table in my postgres database.
Membership is determined by a potentially costly algorithm that
basically returns a series of member ids, which I then insert into
my membership table (called groups).  Up to now, for speed purposes,
I've been deleting all users in a group, then re-building the group from
scratch.  The tables look roughly like this:

id            | integer | not null default nextval('"xrefmgrp_id_seq"'::text)
membergroupid | integer | not null default 0
memberid      | integer | not null default 0

There's a constraint on the table saying that (membergroupid,memberid) 
needs to be UNIQUE.
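
For reference, the full DDL presumably looks something like this (a
reconstructed sketch from the column listing above -- the sequence setup
and constraint placement are guesses):

-- hypothetical reconstruction of the table definition
create sequence xrefmgrp_id_seq;
create table xrefmembergroup (
    id            integer not null default nextval('"xrefmgrp_id_seq"'::text),
    membergroupid integer not null default 0,
    memberid      integer not null default 0,
    unique (membergroupid, memberid)
);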

...so before re-building a table, I do a:

delete from xrefmembergroup where membergroupid = 4 ;

...then blast the ids into the table:

insert into xrefmembergroup (membergroupid, memberid)
    select 4 as membergroupid, member.id as memberid
    from member
    where <* huge complex select*>
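
FWIW, the rebuild could presumably be wrapped in a transaction so other
sessions never see the group empty mid-rebuild; roughly (a sketch, reusing
the placeholder condition from above):

begin;
-- wipe the group, then repopulate it atomically
delete from xrefmembergroup where membergroupid = 4;
insert into xrefmembergroup (membergroupid, memberid)
    select 4 as membergroupid, member.id as memberid
    from member
    where <* huge complex select*>;
commit;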


...I've found this to be faster than running the query, figuring out who
needs to be removed from the group, who needs to be added, and so on.
The thing that I'm worried about is that this table is going to be
pretty big (potentially millions of rows), and every time I rebuild it
I lose (or at least invalidate) all my indexes.  Is that the case?  Is
constantly deleting then adding large numbers of rows really bad for
performance?  I'm worried this isn't going to scale well.  Anyone know
of a better way to do this?  Thoughts and comments would be appreciated.
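
For concreteness, the incremental approach I mention above would look
something like this (a sketch -- it assumes the big member query can be
embedded as a subselect):

-- drop members that no longer qualify
delete from xrefmembergroup
 where membergroupid = 4
   and memberid not in (select member.id from member
                         where <* huge complex select*>);

-- add members that qualify but aren't in the group yet
insert into xrefmembergroup (membergroupid, memberid)
select 4, member.id from member where <* huge complex select*>
except
select membergroupid, memberid from xrefmembergroup
 where membergroupid = 4;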

Thanks!

/kurt


