Re: Re: Speed of locating tables

From: carl garland
Subject: Re: Re: Speed of locating tables
Date:
Msg-id: 20000605212923.58070.qmail@hotmail.com
Replies: Re: Re: Re: Speed of locating tables
List: pgsql-general
>About the size of the datamodel I say this: I think that you'll need a
>mighty long time and an enormous number of analysts to reach a datamodel of
>1M+ tables, or else it is based upon a large number of simple tables, in
>which case it could be reduced in size.

Sorry, but just little old me came up with the design that gave birth to
this model ;)

>I'm sorry, but my feeling is that 1M+ tables for a datamodel is
>preposterous.

Perhaps I can shed a little more light on the issue by giving some
background. Over the last year I have developed a web site service built on
a completely open source architecture (Linux, AOLServer, Postgres, etc.).
The site supports group collaboration, allowing each user to set up
information pooling, and the type and amount of this info can vary for each
group. The reason I am asking about 1 million plus tables is that the model
dynamically builds a large number of tables, and they come and go
periodically. If / when the site becomes popular I want to be sure of
scalability. The tables are dynamic in both size and type, i.e. one table
may have 6 columns all of one type and another may have 20 of various types
in random order. I *could* change the data model to use fewer tables, but I
*want* to push the technology and keep the model as simple as possible
given its dynamic nature. 95% of these tables will probably fit in the one
8k block that Postgres uses, but the other 5% could grow to 20 MB or more.
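To give a concrete feel for what the model does, here is a rough sketch of
the kind of dynamic DDL involved (illustrative only: the psycopg2 driver,
the pool_* naming, and the column layouts are stand-ins for the example,
not my actual code):

    import psycopg2

    def create_group_table(conn, group_id, columns):
        """Create a per-group table whose shape is chosen at runtime.

        columns is a list of (name, sql_type) pairs, e.g.
        [("note", "text"), ("score", "integer")].  Real code would
        have to validate and quote these identifiers, of course.
        """
        cols = ", ".join("%s %s" % (name, sqltype)
                         for name, sqltype in columns)
        cur = conn.cursor()
        cur.execute("CREATE TABLE pool_%d (%s)" % (group_id, cols))
        conn.commit()

    def drop_group_table(conn, group_id):
        """Tables come and go as groups appear and disappear."""
        cur = conn.cursor()
        cur.execute("DROP TABLE pool_%d" % group_id)
        conn.commit()

    conn = psycopg2.connect("dbname=collab")
    # One group wants 6 columns of a single type, another 20 mixed.
    create_group_table(conn, 42, [("c%d" % i, "text") for i in range(6)])
    create_group_table(conn, 43,
                       [("c%d" % i, "integer" if i % 2 else "text")
                        for i in range(20)])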

>Suppose you have 1M+ tables, and you have 1TB of space. This makes up for
>about 1 MB per table. Suppose you have a FS which works with 4k blocks, then
>you need 269 blocks per table. Since the original figure is not a round
>one, this gives a waste of 0.2 %, which amounts to 2.1 GB. Since i-nodes
>have multiple pointers to manage blocks (amounts to 12 datablocks under
>Linux (I think)), this means you need 23 inodes per file, this is
>23,000,000 inodes.
>
>This might not be quite unmanageable, but there is also the fragmentation
>on all these tables which bogs down your seek times.
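Those figures do check out, for what it's worth, if 1 TB is taken as 2^40
bytes spread evenly over exactly one million tables; a quick
back-of-the-envelope in Python:

    import math

    TOTAL = 2 ** 40          # 1 TB, taken as 2^40 bytes
    TABLES = 1000000
    BLOCK = 4 * 1024         # 4k filesystem blocks

    per_table = TOTAL / float(TABLES)           # ~1.05 MB per table
    blocks = int(math.ceil(per_table / BLOCK))  # blocks per table
    waste = blocks * BLOCK - per_table          # slack in the last block

    print(blocks)                                  # 269
    print(round(100 * waste * TABLES / TOTAL, 2)) # ~0.21 % wasted overall
    print(round(waste * TABLES / 2.0 ** 30, 1))   # ~2.2 GB of slack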

All of this points to the OS and not PostgreSQL, although with the model I
have described, 95% of the time you wouldn't worry about fragmentation or
I/O. I am mainly concerned with how pg deals with a large number of tables,
and no one has concretely answered this issue. When and where would a data
model like this run into problems in pg? With pg trying to become
enterprise ready, pushing the limits like this would be beneficial. Are
there hurdles to overcome on the pg side of things, or is performance more
of an OS issue?
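If anyone wants to probe exactly this, something along the following lines
would exercise the catalog side of it; the psycopg2 driver and the probe_*
tables here are just illustrative, and the interesting part is watching the
timings as N climbs toward 1M:

    import time
    import psycopg2

    N = 10000   # scale this up toward 1M+ to find the knee

    conn = psycopg2.connect("dbname=scratch")
    cur = conn.cursor()

    # Create N small tables; the shape doesn't matter much here.
    for i in range(N):
        cur.execute("CREATE TABLE probe_%d (a integer, b text)" % i)
    conn.commit()

    # Time the first touch of a table pg hasn't opened in this session.
    start = time.time()
    cur.execute("SELECT * FROM probe_%d LIMIT 1" % (N - 1))
    cur.fetchall()
    print("first lookup took %.4f s" % (time.time() - start))

    # And see how big the system catalog itself has grown.
    cur.execute("SELECT count(*) FROM pg_class")
    print("pg_class rows:", cur.fetchone()[0])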

Carl Garland
________________________________________________________________________
Get Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com

