Thread: Index bloat of 4x
We just did a bunch of maintenance on one of our production databases that
involved a lot of alter tables and moving records about and the like.

Afterwards, I did a vacuum full and analyze to get the database back on
track -- autovac maintains it under normal operations.

Today I decided to run reindex during a slow period, and was shocked to
find the database size drop from 165M to 30M.  Keep in mind that the
165M is after vacuum full.  So, apparently, there was 135M of index bloat?
That seems a little excessive to me, especially when the docs claim that
reindexing is usually not necessary.

This is PostgreSQL 8.1.4.  We've got upgrades to 8.2 planned, but it's
going to be a few months before we can squeak that into a maintenance
window.  Additionally, I thought all the big index improvements were
added in 7.4.

I guess my question is whether or not this is expected.  It's obviously
not a good thing -- I've noticed that shared buffer usage has dropped
dramatically as well (from 28,000 to 7000).  I hadn't expected index
bloat of this magnitude, and I'm concerned about when the database hits
2 or 3 G in size and has 12G just in indexes that take hours to rebuild.

-- 
Bill Moran
Collaborative Fusion Inc.
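[For readers following along: to see where the space is actually going, per-relation sizes can be pulled from the system catalogs with the size functions available since 8.1. A sketch, not from the original post; schema filtering and the LIMIT are illustrative choices:]

```sql
-- Largest relations in the current database, tables and indexes alike,
-- so table bloat and index bloat can be tracked separately over time.
SELECT c.relname,
       c.relkind,                                   -- 'r' = table, 'i' = index
       pg_size_pretty(pg_relation_size(c.oid)) AS size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_relation_size(c.oid) DESC
LIMIT 20;
```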
Bill Moran wrote:
> 
> We just did a bunch of maintenance on one of our production databases that
> involved a lot of alter tables and moving records about and the like.
> 
> Afterwards, I did a vacuum full and analyze to get the database back on
> track -- autovac maintains it under normal operations.
> 
> Today I decided to run reindex during a slow period, and was shocked to
> find the database size drop from 165M to 30M.  Keep in mind that the
> 165M is after vacuum full.  So, apparently, there was 135M of index bloat?
> That seems a little excessive to me, especially when the docs claim that
> reindexing is usually not necessary.

It's been said that vacuum full does not fix index bloat -- in fact,
it's a problem it worsens.  However, I very much doubt that it would be
this serious.  I guess the question is, how large was the index *before*
all the alter tables?

I'd expect that it was the ALTER TABLEs that caused this much index
growth, which VACUUM FULL was subsequently unable to fix.

I don't expect you kept a log of index sizes throughout the operation,
however :-(

-- 
Alvaro Herrera                         http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
In response to Alvaro Herrera <alvherre@commandprompt.com>:

> Bill Moran wrote:
> > [snip]
> 
> It's been said that vacuum full does not fix index bloat -- in fact,
> it's a problem it worsens.  However, I very much doubt that it would be
> this serious.  I guess the question is, how large was the index *before*
> all the alter tables?

I don't have details on the various indexes.  I do keep an mrtg graph of
pg_database_size(), so I can track the overall size of the database and
correlate it to events.  I'm not tracking individual relations, indexes,
etc. though.

The entire database was around 28M prior to the upgrades, etc.  Immediately
after the upgrades, it was ~270M.  Following a vacuum full, it dropped to
165M.  Following a database-wide reindex, it dropped to 30M.

> I'd expect that it was the ALTER TABLEs that caused this much index
> growth, which VACUUM FULL was subsequently unable to fix.
> 
> I don't expect you kept a log of index sizes throughout the operation
> however :-(

Not index size, specifically, no.  I can probably reproduce the issue,
however.  I have access to the scripts that were run to update the
database, and I can pull a pre-upgrade version from backup.

I guess my question is whether or not this behaviour is strange enough to
warrant me taking the time to do so.  Just because I've never seen it
before doesn't mean that it's unheard of. ;)

Is this level of index bloat known?  Would it be worthwhile for me to
investigate it and report any details on what's going on, or is this a
known factor that folks don't need any additional details on?

-- 
Bill Moran
Collaborative Fusion Inc.
> Today I decided to run reindex during a slow period, and was shocked to
> find the database size drop from 165M to 30M.  Keep in mind that the
> 165M is after vacuum full.  So, apparently, there was 135M of index bloat?
> That seems a little excessive to me, especially when the docs claim that
> reindexing is usually not necessary.

For what it's worth, I've seen far worse.

> I guess my question is whether or not this is expected.  It's obviously
> not a good thing -- I've noticed that shared buffer usage has dropped
> dramatically as well (from 28,000 to 7000).  I hadn't expected index
> bloat of this magnitude, and I'm concerned about when the database hits
> 2 or 3 G in size and has 12G just in indexes that take hours to rebuild.

Regular reindexing is so inexpensive compared to vacuum, I recommend
adding it in as part of your regular maintenance.  At the very least,
it's unlikely to ever be that severe again unless you don't reindex for
an equally long period of time.

Come to think of it, an auto-reindex option might be nice in core someday.
TODO item?

-- 
Greg Sabino Mullane greg@turnstep.com
PGP Key: 0x14964AC8 200701171129
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
[snip]
> Come to think of it, an auto-reindex option might be nice in core someday.
> TODO item?

Marry it with autovacuum + online index build, and it will be cool ;-)

BTW, having a privileged background thread doing the reindex could be a
solution to most of the objections regarding online reindex: the thread
would already be privileged enough to be able to back out if something
fails (part of the objections), and it could stay around long enough to
only lock tentatively in a loop in order to avoid deadlocks (another part
of the objections).  If it were also marked, like the vacuum threads, so
as not to block other vacuums, then it would also not be a problem that
it runs for 2 days...

Cheers,
Csaba.
Bill Moran <wmoran@collaborativefusion.com> writes:
> The entire database was around 28M prior to the upgrades, etc.  Immediately
> after the upgrades, it was ~270M.  Following a vacuum full, it dropped to
> 165M.  Following a database-wide reindex, it dropped to 30M.

As Alvaro said, vacuum full doesn't shrink indexes but in fact bloats them.
(Worst case, they could double in size, if the vacuum moves every row;
there's an intermediate state where there have to be index entries for
both old and new copies of each moved row, to ensure things are
consistent if the vacuum crashes right there.)

So the above doesn't sound too unlikely.  Perhaps we should recommend
vac full + reindex as standard cleanup procedure.  Longer term, maybe
teach vac full to do an automatic reindex if it's moved more than X% of
the rows.  Or forget the current vac full implementation entirely, and
go over to something acting more like CLUSTER ...

			regards, tom lane
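[The "vac full + reindex" procedure Tom suggests can be sketched as follows; this is an illustration of the discussion, not official guidance, and `mytable` is a placeholder:]

```sql
-- Reclaim table space first; on these releases VACUUM FULL moves rows
-- and in doing so *adds* index entries, bloating the indexes further.
VACUUM FULL mytable;

-- Then rebuild the indexes that VACUUM FULL just bloated.
REINDEX TABLE mytable;

-- Refresh planner statistics afterwards.
ANALYZE mytable;
```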
Bill Moran wrote:
> The entire database was around 28M prior to the upgrades, etc.  Immediately
> after the upgrades, it was ~270M.  Following a vacuum full, it dropped to
> 165M.  Following a database-wide reindex, it dropped to 30M.

Oh, so it was clearly the upgrade procedure that caused the bloat ...
Reindexing seems the expected course.

-- 
Alvaro Herrera                         http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
In response to Alvaro Herrera <alvherre@commandprompt.com>:

> Bill Moran wrote:
> > The entire database was around 28M prior to the upgrades, etc.  Immediately
> > after the upgrades, it was ~270M.  Following a vacuum full, it dropped to
> > 165M.  Following a database-wide reindex, it dropped to 30M.
> 
> Oh, so it was clearly the upgrade procedure that caused the bloat ...
> Reindexing seems the expected course.

Right.  Sorry if I didn't explain that properly.

It wasn't the fact that it bloated that surprised me.  It was the
_magnitude_ of bloat that I wasn't expecting, as well as the fact that
it was _all_ _index_ bloat.

-- 
Bill Moran
Collaborative Fusion Inc.
In response to Ben <bench@silentmedia.com>:

> Hey Bill.  How do you monitor your shared buffer usage?  My understanding
> was that there wasn't a good way to see what was used vs. allocated.

echo "select count(*) from pg_buffercache where reldatabase is not null;" |
  $PSQL_BIN -P tuples_only -U pgsql postgres | head -1

Of course, you have to install the pg_buffercache contrib module first.

> On Wed, 17 Jan 2007, Bill Moran wrote:
> > [snip]

-- 
Bill Moran
Collaborative Fusion Inc.
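[A slightly richer variant of that check, breaking buffer usage down per relation rather than just counting. Not from the original mail; it assumes the pg_buffercache contrib module is installed, and the join follows that view's documented columns:]

```sql
-- Which relations in the current database occupy the most shared buffers.
-- Filtering on reldatabase avoids relfilenode collisions across databases.
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON c.relfilenode = b.relfilenode
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
```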
Tom Lane wrote:
> Bill Moran <wmoran@collaborativefusion.com> writes:
>> [snip]
> 
> As Alvaro said, vacuum full doesn't shrink indexes but in fact bloats them.
> (Worst case, they could double in size, if the vacuum moves every row;
> there's an intermediate state where there have to be index entries for
> both old and new copies of each moved row, to ensure things are
> consistent if the vacuum crashes right there.)
> 
> So the above doesn't sound too unlikely.  Perhaps we should recommend
> vac full + reindex as standard cleanup procedure.  Longer term, maybe
> teach vac full to do an automatic reindex if it's moved more than X% of
> the rows.  Or forget the current vac full implementation entirely, and
> go over to something acting more like CLUSTER ...

We have to recommend the CLUSTER "way" to fix overly bloated databases
quite often to people on IRC, because vacuum full is unreasonably slow on
highly fragmented databases.  Doing something like that internally for
vacuum full sounds like a reasonable idea, except for the additional disk
usage during the process, which might cause issues for people ...

Stefan
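[The CLUSTER route mentioned above rewrites the table and rebuilds all of its indexes in one pass. A hedged sketch; object names are placeholders, CLUSTER takes an exclusive lock for the duration, and it needs roughly the table's size in free disk while it runs:]

```sql
-- Rewrite mytable in the order of its primary-key index; this also
-- rebuilds every index on the table, removing both table and index bloat.
-- (Syntax on these releases is CLUSTER indexname ON tablename.)
CLUSTER mytable_pkey ON mytable;

-- CLUSTER does not gather statistics, so re-analyze afterwards.
ANALYZE mytable;
```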
On Jan 17, 2007, at 11:56 AM, Tom Lane wrote:
> So the above doesn't sound too unlikely.  Perhaps we should recommend
> vac full + reindex as standard cleanup procedure.  Longer term, maybe
> teach vac full to do an automatic reindex if it's moved more than X%
> of the rows.

A vac full + reindex is a waste of effort.  Whenever I need a big cleanup,
I drop indexes (other than the PK), vac full, re-create indexes.

However, usually a reindex does a sufficient job if vacuum has been run
with any sort of regularity.
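[The big-cleanup procedure described above might look like this. All object names are placeholders; in practice the index definitions to re-create would be captured first from pg_indexes or pg_get_indexdef():]

```sql
-- Heavy cleanup: drop secondary indexes first so VACUUM FULL does not
-- have to maintain (and bloat) them while it moves rows.
DROP INDEX mytable_col_idx;          -- leave the primary key in place

VACUUM FULL mytable;

-- Re-create the secondary indexes from scratch, compact and fresh.
CREATE INDEX mytable_col_idx ON mytable (col);

ANALYZE mytable;
```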
Bill Moran <wmoran@collaborativefusion.com> writes:
> It wasn't the fact that it bloated that surprised me.  It was the
> _magnitude_ of bloat that I wasn't expecting, as well as the fact that
> it was _all_ _index_ bloat.

Um, no, you had plenty of table *and* index bloat before.  The problem
here is that VACUUM FULL fixed all the table bloat whilst making the
index situation worse :-(

			regards, tom lane
In response to Tom Lane <tgl@sss.pgh.pa.us>:

> Bill Moran <wmoran@collaborativefusion.com> writes:
> > It wasn't the fact that it bloated that surprised me.  It was the
> > _magnitude_ of bloat that I wasn't expecting, as well as the fact that
> > it was _all_ _index_ bloat.
> 
> Um, no, you had plenty of table *and* index bloat before.  The problem
> here is that VACUUM FULL fixed all the table bloat whilst making the
> index situation worse :-(

Right.  It doesn't _look_ that way from the graph, but that's because I
only graph total DB size.  I expect if I graphed data and index size
separately, it would be evident.

At this point, I'm going to assume that the answer to my question, "Is
this 4x bloat strange enough to warrant further investigation?", is "no".
It seems like this amount of bloat isn't terribly unusual, and the people
working on improving this sort of thing already have enough examples of it.

Thanks to everyone for the replies.

-- 
Bill Moran
Collaborative Fusion Inc.
On Thursday January 18 2007 6:07 am, Bill Moran wrote:
> Right.  It doesn't _look_ that way from the graph, but that's
> because I only graph total DB size.  I expect if I graphed
> data and index size separately, it would be evident.

pg_total_relation_size() might give you what you want there.

> At this point, I'm going to assume that my question of, "Is
> this 4x bloat strange enough to warrant further investigation"
> is "no".  It seems like this amount of bloat isn't terribly
> unusual, and that the people working on improving this sort of
> thing already have enough examples of it.

I'm afraid I don't see how any of the answers discussed here fit a
24x7 operation.  Reindex, drop index, vacuum full, ... they all block
production queries of one sort or another for significant periods of
time (minutes) on large (multi/tens of GB) tables, and thus are
infeasible for true 24x7 operations.  What it seems we really need is
something to remove the bloat without blocking production DML queries,
while under significant query load, with very large tables.  This bloat
issue is by far our biggest headache on the DB side.

Ed
[snip]
> I'm afraid I don't see how any of the answers discussed here fit a
> 24x7 operation.  Reindex, drop index, vacuum full, ... they all
> block production queries of one sort or another for significant
> periods of time (minutes) on large (multi/tens of GB) tables,
> and thus are infeasible for true 24x7 operations.
[snip]

This is not completely true: as of 8.2 there is online index build, and
if that could be used in a background thread to rebuild the index and
replace the bloated one once it's finished, that would be a non-blocking
operation which could be done in 24x7 situations.  There are some issues
with using the online index build for replacing an existing index, but
if those could be solved it would be a viable solution, I think...

Cheers,
Csaba.
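[As of 8.2, the non-blocking rebuild Csaba describes can be approximated by hand: build a compact replacement index concurrently, then swap it in. A sketch with placeholder names; note that CREATE INDEX CONCURRENTLY cannot run inside a transaction block, and this manual swap does not work for indexes backing constraints such as a primary key:]

```sql
-- Build a fresh, compact copy of the bloated index without taking a
-- lock that blocks writes (8.2+ only).
CREATE INDEX CONCURRENTLY mytable_col_idx_new ON mytable (col);

-- The swap is the only step that takes an exclusive lock, and it is brief.
BEGIN;
DROP INDEX mytable_col_idx;
ALTER INDEX mytable_col_idx_new RENAME TO mytable_col_idx;
COMMIT;
```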
On Friday January 19 2007 2:11 am, Csaba Nagy wrote:
> > I'm afraid I don't see how any of the answers discussed here
> > fit a 24x7 operation.  Reindex, drop index, vacuum full, ...
> > they all block production queries of one sort or another for
> > significant periods of time (minutes) on large (multi/tens
> > of GB) tables, and thus are infeasible for true 24x7
> > operations.
> [snip]
> 
> This is not completely true: as of 8.2 there is online
> index build, and if that could be used in a background thread
> to rebuild the index and replace the bloated one once it's
> finished, that would be a non-blocking operation which could
> be done in 24x7 situations.

Online index creation definitely helps us toward 24x7.  But wouldn't we
still have to drop the old index, thus blocking production queries?

Ed
"Ed L." <pgsql@bluepolka.net> writes:
> Online index creation definitely helps us toward 24x7.  But
> wouldn't we still have to drop the old index, thus blocking
> production queries?

Yes, but only for a very short period.

			regards, tom lane
Is it feasible to add a "reindex concurrently" that doesn't lock the
table for the rebuild, then locks the table when doing a second pass to
pick up rows that were changed after the first pass?  Or something like
that....

On Fri, 19 Jan 2007 12:45:03 -0500, "Tom Lane" <tgl@sss.pgh.pa.us> said:
> "Ed L." <pgsql@bluepolka.net> writes:
> > Online index creation definitely helps us toward 24x7.  But
> > wouldn't we still have to drop the old index, thus blocking
> > production queries?
> 
> Yes, but only for a very short period.
> 
> 			regards, tom lane
On Fri, 2007-01-19 at 20:03, Jeremy Haile wrote:
> Is it feasible to add a "reindex concurrently" that doesn't lock the
> table for the rebuild, then locks the table when doing a second pass to
> pick up rows that were changed after the first pass?  Or something like
> that....

IIRC, the objection was the deadlock potential of any lock upgrade, and
the problem that cleanup on failure becomes impossible if something
changed the permissions of the executing user in the meantime.  That's
why I think it would make sense if it could be done by a privileged
background thread like the autovacuum ones: the lock upgrade can be tried
without blocking (as it can take quite some time till it succeeds), and
the cleanup is possible due to the privileged nature of the executor.

If there were such a facility, it would also need some policies to
control time windows and priorities, just as for autovacuum; that's why I
connect it in my usage-focused mind to autovacuum.

Cheers,
Csaba.
We have a large number (50+) of pre-8.2 clusters. How can I best/most easily identify those indices most bloated and in need of reindex/rebuilding? Ed
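[There is no exact bloat counter in the pre-8.2 catalogs, but a rough screen is to compare each index's size against its table's and flag the suspiciously large ones. A heuristic sketch only; the one-half ratio is an arbitrary illustrative threshold, and a legitimately wide or expression-heavy index can exceed it without being bloated:]

```sql
-- Rough bloat screen: indexes larger than half their table are worth
-- a closer look on a typical schema.
SELECT n.nspname,
       t.relname AS table_name,
       i.relname AS index_name,
       pg_size_pretty(pg_relation_size(i.oid)) AS index_size,
       pg_size_pretty(pg_relation_size(t.oid)) AS table_size
FROM pg_index x
JOIN pg_class i ON i.oid = x.indexrelid
JOIN pg_class t ON t.oid = x.indrelid
JOIN pg_namespace n ON n.oid = t.relnamespace
WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND pg_relation_size(i.oid) > pg_relation_size(t.oid) / 2
ORDER BY pg_relation_size(i.oid) DESC;
```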
Added to TODO:

	* Add REINDEX CONCURRENTLY, like CREATE INDEX CONCURRENTLY

	  This is difficult because you must upgrade to an exclusive table lock
	  to replace the existing index file.  CREATE INDEX CONCURRENTLY does
	  not have this complication.  This would allow index compaction
	  without downtime.

I understand the problems, but the need for this seems pretty clear.

---------------------------------------------------------------------------

Csaba Nagy wrote:
> On Fri, 2007-01-19 at 20:03, Jeremy Haile wrote:
> > Is it feasible to add a "reindex concurrently" that doesn't lock the
> > table for the rebuild, then locks the table when doing a second pass to
> > pick up rows that were changed after the first pass?  Or something like
> > that....
> 
> IIRC, the objection was the deadlock potential of any lock upgrade, and
> the problems of impossible cleanup on failure if something changed the
> permissions of the executing user in the meantime.  That's why I think it
> would make sense if it could be done by a privileged background thread
> like the autovacuum ones, so the lock upgrade can be tried without
> blocking, as it can take quite some time till it succeeds, and the
> cleanup is possible due to the privileged nature of the executor.
> 
> If there were such a facility, it would also need some policies to
> control time windows and priorities, just as for autovacuum; that's why I
> connect it in my usage-focused mind to autovacuum.
> 
> Cheers,
> Csaba.

-- 
Bruce Momjian   bruce@momjian.us
EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +
I found this thread quite depressing, because I had forgotten that VACUUM
FULL only reclaims totally empty index pages.  I have applied the
following documentation patch to recommend periodic REINDEX, and
backpatched it to the 8.2.X docs.  I also added some TODO items, so
hopefully at least we will keep track of this limitation.

---------------------------------------------------------------------------

Ed L. wrote:
> On Thursday January 18 2007 6:07 am, Bill Moran wrote:
> > Right.  It doesn't _look_ that way from the graph, but that's
> > because I only graph total DB size.  I expect if I graphed
> > data and index size separately, it would be evident.
> 
> pg_total_relation_size() might give you what you want there.
> 
> > At this point, I'm going to assume that my question of, "Is
> > this 4x bloat strange enough to warrant further investigation"
> > is "no".  It seems like this amount of bloat isn't terribly
> > unusual, and that the people working on improving this sort of
> > thing already have enough examples of it.
> 
> I'm afraid I don't see how any of the answers discussed here fit a
> 24x7 operation.  Reindex, drop index, vacuum full, ... they all
> block production queries of one sort or another for significant
> periods of time (minutes) on large (multi/tens of GB) tables,
> and thus are infeasible for true 24x7 operations.  What it seems
> we really need is something to remove the bloat without blocking
> production DML queries, while under significant query load, with
> very large tables.  This bloat issue is by far our biggest
> headache on the DB side.
> 
> Ed

-- 
Bruce Momjian   bruce@momjian.us
EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +
Index: doc/src/sgml/maintenance.sgml
===================================================================
RCS file: /cvsroot/pgsql/doc/src/sgml/maintenance.sgml,v
retrieving revision 1.66
diff -c -c -r1.66 maintenance.sgml
*** doc/src/sgml/maintenance.sgml	16 Jan 2007 18:26:02 -0000	1.66
--- doc/src/sgml/maintenance.sgml	31 Jan 2007 04:12:07 -0000
***************
*** 615,623 ****
  for inefficient use of space: if all but a few index keys on a page have
  been deleted, the page remains allocated.  So a usage pattern in which all
  but a few keys in each range are eventually deleted will see poor use of
! space.  The potential for bloat is not indefinite — at worst there
! will be one key per page — but it may still be worthwhile to schedule
! periodic reindexing for indexes that have such usage patterns.
  </para>
  
  <para>
--- 615,621 ----
  for inefficient use of space: if all but a few index keys on a page have
  been deleted, the page remains allocated.  So a usage pattern in which all
  but a few keys in each range are eventually deleted will see poor use of
! space.  For such usage patterns, periodic reindexing is recommended.
  </para>
  
  <para>
Index: doc/src/sgml/ref/vacuum.sgml
===================================================================
RCS file: /cvsroot/pgsql/doc/src/sgml/ref/vacuum.sgml,v
retrieving revision 1.45
diff -c -c -r1.45 vacuum.sgml
*** doc/src/sgml/ref/vacuum.sgml	27 Dec 2006 14:55:17 -0000	1.45
--- doc/src/sgml/ref/vacuum.sgml	31 Jan 2007 04:12:07 -0000
***************
*** 167,172 ****
--- 167,175 ----
    most of the rows in a table and would like the table to physically shrink
    to occupy less disk space.  <command>VACUUM FULL</command> will usually
    shrink the table more than a plain <command>VACUUM</command> would.
+   The <option>FULL</option> option does not shrink indexes; a periodic
+   <command>REINDEX</> is still recommended.  In fact, it is often faster
+   to drop all indexes, <command>VACUUM FULL</>, and recreate the indexes.
  </para>
  
  <para>
Tom Lane wrote:
> Bill Moran <wmoran@collaborativefusion.com> writes:
> > The entire database was around 28M prior to the upgrades, etc.  Immediately
> > after the upgrades, it was ~270M.  Following a vacuum full, it dropped to
> > 165M.  Following a database-wide reindex, it dropped to 30M.
> 
> As Alvaro said, vacuum full doesn't shrink indexes but in fact bloats them.
> (Worst case, they could double in size, if the vacuum moves every row;
> there's an intermediate state where there have to be index entries for
> both old and new copies of each moved row, to ensure things are
> consistent if the vacuum crashes right there.)
> 
> So the above doesn't sound too unlikely.  Perhaps we should recommend
> vac full + reindex as standard cleanup procedure.  Longer term, maybe
> teach vac full to do an automatic reindex if it's moved more than X% of
> the rows.  Or forget the current vac full implementation entirely, and
> go over to something acting more like CLUSTER ...

TODO already has:

	* Improve speed with indexes

	  For large table adjustments during VACUUM FULL, it is faster to
	  reindex rather than update the index.  Also, index updates can bloat
	  the index.

-- 
Bruce Momjian   bruce@momjian.us
EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +
Is there no way to change the index code to allow for moving index
tuples from one page to another?  If we could do that, then presumably
we could free up substantially more pages.

On Jan 30, 2007, at 10:18 PM, Bruce Momjian wrote:
> Added to TODO:
> 
> 	* Add REINDEX CONCURRENTLY, like CREATE INDEX CONCURRENTLY
> 
> 	  This is difficult because you must upgrade to an exclusive table lock
> 	  to replace the existing index file.  CREATE INDEX CONCURRENTLY does
> 	  not have this complication.  This would allow index compaction
> 	  without downtime.
> 
> I understand the problems, but the need for this seems pretty clear.
> 
> [snip]

-- 
Jim Nasby                    jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)
Jim Nasby wrote:
> Is there no way to change the index code to allow for moving index
> tuples from one page to another?  If we could do that then presumably
> we could free up substantially more pages.

This paper

@inproceedings{DBLP:conf/sigmod/ZouS96,
  author    = {C. Zou and B. Salzberg},
  editor    = {H. V. Jagadish and Inderpal Singh Mumick},
  title     = {On-line Reorganization of Sparsely-populated B+trees},
  booktitle = {Proceedings of the 1996 ACM SIGMOD International Conference
               on Management of Data, Montreal, Quebec, Canada, June 4-6, 1996},
  publisher = {ACM Press},
  year      = {1996},
  pages     = {115-124},
  bibsource = {DBLP, \url{http://dblp.uni-trier.de}}
}

may be of some use here.

http://citeseer.ist.psu.edu/zou96line.html

-- 
Alvaro Herrera                         http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.