Re: Slow concurrent update of same row in a given table
From | Jim C. Nasby
Subject | Re: Slow concurrent update of same row in a given table
Date |
Msg-id | 20051004185513.GS40138@pervasive.com
In reply to | Re: Slow concurrent update of same row in a given table (Rajesh Kumar Mallah <mallah.rajesh@gmail.com>)
List | pgsql-performance
On Thu, Sep 29, 2005 at 07:59:34AM +0530, Rajesh Kumar Mallah wrote:
> > I see. These problems regularly come up in database design. The best
> > thing you can do is modify your database design/application such that
> > instead of incrementing a count in a single row, you insert a row into
> > a table, recording the 'dispatch_id'. Counting the number of rows for
> > a given dispatch_id will give you your count.
>
> Sorry, I will be accumulating a huge number of rows in a separate table
> with no extra info when I really want just the count. Do you have
> a better database design in mind?
>
> Also, I encounter the same problem in implementing read counts of
> articles on sites and in counting banner impressions, where the same
> row gets updated frequently by multiple processes.

Databases like to work on *sets* of data, not individual rows. Something
like this would probably perform much better than what you've got now,
and would prevent having a huge table laying around:

INSERT INTO holding_table ... -- Done for every incoming connection/what-have-you

CREATE OR REPLACE FUNCTION summarize() RETURNS void AS $$
DECLARE
    v_rows int;
BEGIN
    DELETE FROM holding_table;
    GET DIAGNOSTICS v_rows = ROW_COUNT;
    UPDATE count_table
        SET count = count + v_rows
    ;
END;
$$ LANGUAGE plpgsql;

Periodically (say, once a minute):

SELECT summarize();
VACUUM holding_table;
VACUUM count_table;

--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com      work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
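[Editor's note: for readers following along, here is a minimal sketch of the schema the summarize() pattern above assumes. The table and column definitions are illustrative guesses, not part of the original message.]

```sql
-- Hypothetical schema for the pattern above (names assumed, not from the post).
-- 'holding_table' receives one cheap, append-only INSERT per event;
-- 'count_table' holds the single running total that summarize() updates.
CREATE TABLE holding_table (
    dispatch_id integer NOT NULL,
    created_at  timestamptz NOT NULL DEFAULT now()
);

CREATE TABLE count_table (
    count bigint NOT NULL DEFAULT 0
);

-- Seed the single counter row that summarize() will increment.
INSERT INTO count_table (count) VALUES (0);

-- Per-event hot path: an INSERT never contends on a shared row,
-- unlike "UPDATE counts SET n = n + 1 WHERE id = ...".
INSERT INTO holding_table (dispatch_id) VALUES (42);
```

The design choice this illustrates: concurrent writers each append their own row, so they never block one another; all row-level contention is deferred to the single periodic summarize() call, and the holding table stays small because summarize() drains it on every run.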