Discussion: Re: Re: Speed of locating tables

Re: Re: Speed of locating tables

From:
"carl garland"
Date:
>About the size of the datamodel I say this: I think that you'll need a
>mighty long time and an enormous number of analysts to reach a datamodel
>of 1M+ tables, or else it is based upon a large number of simple tables,
>in which case it could be reduced in size.

Sorry, but just little old me came up with the design that gave birth to
this model ;)

>I'm sorry, but my feeling is that 1M+ tables for a datamodel is
>preposterous.

Perhaps I can shed a little more light on the issue by giving some
background.  Over the last year I have developed a web site service built
on a completely open source stack (Linux, AOLServer, Postgres, etc.).  The
site supports group collaboration: each user can potentially set up
information pooling, and the type and amount of that information can vary
for each group.  The reason I am asking about one million plus tables is
that the model dynamically builds a large number of tables, and they come
and go periodically.  If and when the site becomes popular, I want to be
sure of scalability.  The tables are dynamic in both size and type, i.e.,
one table may have 6 columns all of one type and another may have 20 of
various types in random order.  I *could* change the data model to use
fewer tables, but I *want* to push the technology and keep the model as
simple as possible given its dynamic nature.  95% of these tables will
probably fit in the one 8k block that Postgres uses, but the other 5%
could grow to 20 MB or more.
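
To make that concrete, here is a minimal sketch of the kind of dynamic
table creation I mean, assuming a psycopg2 connection; the
"group_<id>_pool" naming scheme and the example columns are hypothetical,
not the actual site's schema:

    import psycopg2

    def create_pool_table(conn, group_id, columns):
        """Create a per-group table whose column list the group chose.

        `columns` is a list of (name, sql_type) pairs, e.g.
        [("title", "text"), ("score", "integer")].  Names must be
        validated/whitelisted elsewhere, since identifiers can't be
        passed as query parameters.
        """
        table = "group_%d_pool" % group_id
        col_defs = ", ".join("%s %s" % (n, t) for n, t in columns)
        with conn.cursor() as cur:
            cur.execute("CREATE TABLE %s (id serial PRIMARY KEY, %s)"
                        % (table, col_defs))
        conn.commit()

    conn = psycopg2.connect("dbname=collab")
    create_pool_table(conn, 42, [("title", "text"), ("score", "integer")])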

>Suppose you have 1M+ tables, and you have 1TB of space. This works out to
>about 1 MB per table. Suppose you have a FS which works with 4k blocks;
>then you need 269 blocks per table. Since the original figure is not a
>round one, this gives a waste of 0.2%, which amounts to 2.1 GB. Since
>inodes have multiple pointers to manage blocks (amounting to 12 data
>blocks under Linux (I think)), this means you need 23 inodes per file,
>that is 23,000,000 inodes.
>
>This might not be quite unmanageable, but there is also the fragmentation
>on all these tables, which bogs down your seek times.
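
For what it's worth, those block figures do check out if you read 1 TB as
2^40 bytes and round the last block up; a quick recheck in Python (the
binary units are my assumption):

    import math

    space = 2 ** 40            # 1 TB read as a binary terabyte, in bytes
    tables = 1_000_000
    fs_block = 4096            # 4k filesystem blocks

    per_table = space / tables                 # ~1.05 MB per table
    blocks = math.ceil(per_table / fs_block)   # last block rounded up
    waste = blocks * fs_block - per_table      # slack in the final block
    print(blocks)                              # 269
    print(100 * waste * tables / space)        # ~0.21% wasted, ~2.3 GB total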

All of this points to the OS and not PostgreSQL, although with the model I
have described, 95% of the time you wouldn't need to worry about
fragmentation or I/O.  I am mainly concerned with how pg deals with a
large number of tables, and no one has concretely answered this issue.
When and where would a data model like this run into problems in pg?  With
pg trying to become enterprise ready, pushing the limits like this would
be beneficial.  Are there hurdles to overcome on the pg side of things, or
is performance more of an OS issue?

Carl Garland


Re: Re: Re: Speed of locating tables

From:
Tom Lane
Date:
"carl garland" <carlhgarland@hotmail.com> writes:
>> I'm sorry, but my feeling is that 1M+ tables for a datamodel is
>> preposterous.

> Perhaps I can shed a little more light on the issue by giving some
> background.  Over the last year I have developed a web site service built
> on a completely open source stack (Linux, AOLServer, Postgres, etc.).  The
> site supports group collaboration: each user can potentially set up
> information pooling, and the type and amount of that information can vary
> for each group.  The reason I am asking about one million plus tables is
> that the model dynamically builds a large number of tables, and they come
> and go periodically.  If and when the site becomes popular, I want to be
> sure of scalability.

In that case you haven't got *one* datamodel, you've got N usergroups
each with their own model.  Seems to me that you'd want to give each
usergroup a separate database so as to avoid problems with namespace
collisions, security considerations, etc.  The previous responses were
all assuming that you wanted 1M tables in a single database, but
I don't think you really do.
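
A minimal sketch of that provisioning step, assuming psycopg2; the
"group_<id>" database naming is just for illustration:

    import psycopg2

    def create_group_database(admin_dsn, group_id):
        """Provision an isolated database for a new usergroup."""
        conn = psycopg2.connect(admin_dsn)
        conn.autocommit = True   # CREATE DATABASE can't run in a transaction
        with conn.cursor() as cur:
            cur.execute("CREATE DATABASE group_%d" % group_id)
        conn.close()

    create_group_database("dbname=postgres user=postgres", 42)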

My guess is that you are focusing on the wrong worry --- there are other
issues you are probably going to hit before you run into any limits
related to the sheer number of tables.  I'd be more worried about how
many simultaneous users you expect to support, and how big a piece of
server iron you expect to be able to do it with.  (However, if you are
dividing the
users into separate databases, you could finesse that issue by setting
up separate servers for collections of databases/usergroups.  It's only
really a concern if you insist on serving them all with one Postgres
installation.)
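
And if you do split the usergroups across machines, the routing layer can
stay trivial; a sketch, with a hypothetical host list and a static
placement rule:

    import psycopg2

    HOSTS = ["db1.example.com", "db2.example.com", "db3.example.com"]

    def connect_for_group(group_id):
        # Static placement: a group's database always lives on the same host.
        host = HOSTS[group_id % len(HOSTS)]
        return psycopg2.connect(host=host, dbname="group_%d" % group_id)

    conn = connect_for_group(42)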

> I am mainly concerned with how pg deals with a large number of tables,
> and no one has concretely answered this issue.

We told you not to do it; isn't that concrete enough ;-) ?

Seriously, I think that the real answer is that we know the current
implementation will not behave well with more than a few thousand tables
per database, but in a year or two the situation might be completely
different.  There is ongoing discussion about restructuring the way PG's
tables are stored in the Unix filesystem, and one of the side-effects
of whatever is done there might well be to eliminate this issue.  That
would be especially likely if some of the contributors to the work are
concerned about making the world safe for lots of tables.  Now that
you've told us your concern is long-term rather than short-term, I'd
say the question is not so much what the system does now as whether you
are willing to work on it some to help make it do what you want.

Postgres is not a static object ... it is a work in progress.

            regards, tom lane