Re: Inserts in 'big' table slowing down the database
From: Greg Williamson
Subject: Re: Inserts in 'big' table slowing down the database
Msg-id: 1349138158.88943.YahooMailNeo@web125905.mail.ne1.yahoo.com
In reply to: Re: Inserts in 'big' table slowing down the database (Stefan Keller <sfkeller@gmail.com>)
List: pgsql-performance
Stefan --

----- Original Message -----
> From: Stefan Keller <sfkeller@gmail.com>
> To: Ivan Voras <ivoras@freebsd.org>
> Cc: pgsql-performance@postgresql.org
> Sent: Monday, October 1, 2012 5:15 PM
> Subject: Re: [PERFORM] Inserts in 'big' table slowing down the database
>
> Sorry for the delay. I had to sort out the problem (among other things).
>
> It's mainly about swapping.
>
> The table nodes contains about 2^31 entries and occupies about 80GB of
> disk space plus index.
> If one stored the geom values in a big array (with id as the array
> index), it would only take about 16GB, which means the ids are dense
> (with few deletes).
> Updates then come in every hour as bulk insert statements, with the
> entries' ids in sorted order.
> Now PG becomes slower and slower!
> CLUSTER could help -- but obviously that operation needs a table lock.
> And if it takes longer than an hour, it delays the next update.
>
> Any ideas? Partitioning?

pg_reorg, if you have the space, might be useful for doing a CLUSTER-like action: <http://reorg.projects.postgresql.org/>

I haven't followed the thread, so I hope this isn't redundant.

Partitioning might work if you can create partitions that each cover more than one hour's worth of inserts -- too many partitions doesn't help.

Greg Williamson
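As a rough illustration of the partitioning idea: at the time of this thread, PostgreSQL partitioning meant inheritance-based child tables with CHECK constraints and constraint exclusion. A minimal sketch, assuming a `nodes(id, geom)` table and illustrative id boundaries (the column types and range cutoffs here are placeholders, not from the thread):

```sql
-- Hypothetical sketch: inheritance-based partitioning of the nodes table
-- by id range. Since the hourly bulk inserts arrive with sorted, dense ids,
-- each batch should land in only one (or a few) child tables, which can
-- then be CLUSTERed or reorganized individually without locking the rest.
CREATE TABLE nodes (
    id   bigint PRIMARY KEY,
    geom bytea  -- placeholder for the actual geometry type
);

CREATE TABLE nodes_p0 (
    CHECK (id >= 0 AND id < 500000000)
) INHERITS (nodes);

CREATE TABLE nodes_p1 (
    CHECK (id >= 500000000 AND id < 1000000000)
) INHERITS (nodes);

-- With constraint exclusion enabled, queries that filter on id skip
-- child tables whose CHECK constraint cannot match.
SET constraint_exclusion = partition;
```

Note that with inheritance partitioning, inserts must be routed to the correct child table, either by the loading script or by a trigger on the parent; that routing overhead is one reason too many partitions hurts.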