Hi,
Hiroshi Inoue:
>
> Maybe shared buffer isn't so large as to keep all the (4.1M) pg_index pages.
That seems to be the case.
> So it would read pages from disk every time. Unfortunately, pg_index
> has no index for scanning the index entries of a relation now.
>
Well, it's reasonable that you can't keep an index on the table which
states what the indices are. ;-)
... on the other hand, Apple's HFS file system stores all the information
about the on-disk locations of its files in, you guessed it, a B-Tree,
which is itself saved on disk as an (invisible) file.
Thus, the structure that says where every file's sectors live must also
record where its own sectors live, inside itself.
To escape this catch-22, the locations of the first three
extents (which is usually all it takes anyway) are stored elsewhere,
in the volume header.
Possibly, something like this would work with postgres too.
> However why is pg_index so large ?
>
Creating ten thousand tables will do that to you.
Is there an option I can set to increase the appropriate cache, so that
the backend can keep the data in memory?
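For what it's worth, a sketch of what I mean: with a PostgreSQL of this
vintage, the postmaster's -B flag sets the number of shared buffer pages
(8 kB each), so keeping roughly 4 MB of pg_index resident should take on
the order of 512 buffers plus headroom. The buffer count and data
directory below are illustrative assumptions, not tested values:

```shell
# Restart the postmaster with a larger shared buffer pool.
# -B 1024 = 1024 * 8 kB = 8 MB of shared buffers (assumed sufficient
# to cache pg_index alongside the usual working set).
# The data directory path is just an example.
postmaster -B 1024 -D /usr/local/pgsql/data
```

(The kernel's SHMMAX limit may need raising to match, of course.)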
--
Matthias Urlichs | noris network GmbH | smurf@noris.de | ICQ: 20193661
The quote was selected randomly. Really. | http://smurf.noris.de/
--
Famous last words: They'd never (be stupid enough to) make him a manager.