Re: schema/db design wrt performance
From: Andrew Sullivan
Subject: Re: schema/db design wrt performance
Date:
Msg-id: 20030116092005.C22344@mail.libertyrms.com
In reply to: schema/db design wrt performance (CaptainX0r <captainx0r@yahoo.com>)
Responses: Re: schema/db design wrt performance (Ron Johnson <ron.l.johnson@cox.net>)
List: pgsql-performance
On Thu, Jan 16, 2003 at 05:51:40AM -0800, CaptainX0r wrote:

> All,
>
> I just noted in another thread that use of foreign keys in postgres
> significantly hinders performance.  I'm wondering what other

Since I think I'm the one responsible for this, I'd better say
something clearer for the record.

The foreign keys implementation in PostgreSQL essentially uses
SELECT . . . FOR UPDATE to ensure that referenced data doesn't go
away while a referencing datum is being inserted or altered.  The
problem with this is that frequently-referenced data are therefore
effectively locked during the operation.  Other writers will block on
the locked data until the first writer finishes.

So, for instance, consider two artificial-example tables:

    create table account (acct_id serial primary key);

    create table acct_activity (
        acct_id int references account(acct_id),
        trans_on timestamp,
        val numeric(12,2)
    );

If a user has multiple connections and charges things to the same
account in more than one connection at the same time, the
transactions will have to be processed, effectively, in series: each
one will have to wait for another to commit in order to complete.

This is just a performance bottleneck.  But it gets worse.  Suppose
the account table instead references a contact table (contact has to
be created first, so that account's reference has a target):

    create table contact (
        con_id serial primary key,
        name text,
        address1 text
        [. . .]
    );

    create table account (
        acct_id serial primary key,
        con_id int references contact(con_id)
    );

Now, if another transaction is busy trying to delete a contact at the
same time the account table is being updated to reflect, say, a new
contact, you run the risk of deadlock.

The FK support in PostgreSQL is therefore mostly useful for
low-volume applications.  It can be made to work under heavier load
if you use it very carefully and program your application for it.
But I suggest avoiding it for heavy-duty use if you really can.

> take into consideration in the design of our database.
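[Editor's note: the serialization described above is easy to reproduce
with two concurrent sessions against the example tables.  This is a
sketch; in the 7.x releases the FK check effectively takes a FOR
UPDATE row lock on the referenced account row.]

    -- Session 1
    begin;
    insert into acct_activity values (1, now(), 10.00);
    -- the FK check runs, roughly:
    --   select 1 from account where acct_id = 1 for update;
    -- and now holds a row lock on account row 1

    -- Session 2
    begin;
    insert into acct_activity values (1, now(), 25.00);
    -- blocks here, waiting on the same account row,
    -- until session 1 commits or rolls back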
> We're coming from Sybase and trying to design a more encompassing,
> modular, generic database that won't take a serious performance hit
> under postgres.

Avoid NOT IN.  This is difficult, because the workaround in Postgres
(NOT EXISTS) is frequently lousy on other systems.  Apparently there
is some fix for this contemplated for 7.4, but I've been really busy
lately, so I haven't been following -hackers.  Someone else can
probably say something more useful about it.

A

--
----
Andrew Sullivan                         204-4141 Yonge Street
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M2P 2A8
                                         +1 416 646 3304 x110
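[Editor's note: a sketch of the NOT IN workaround mentioned above,
using hypothetical table and column names.  A query such as]

    select con_id, name
    from contact
    where con_id not in (select con_id from account);

can be rewritten in the NOT EXISTS form, which pre-7.4 planners
handle far better:

    select c.con_id, c.name
    from contact c
    where not exists (
        select 1 from account a where a.con_id = c.con_id
    );

The two forms give the same rows only when the subquery column
contains no NULLs; NOT IN returns no rows at all if any con_id in
account is NULL, which is another reason to prefer NOT EXISTS.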