Discussion: Scalability with large numbers of tables


Scalability with large numbers of tables

From: Phil Endecott
Date:
Dear Postgresql experts,

I have a single database with one schema per user.  Each user has a
handful of tables, but there are lots of users, so in total the database
has thousands of tables.

I'm a bit concerned about scalability as this continues to grow.  For
example I find that tab-completion in psql is now unusably slow; if
there is anything more important where the algorithmic complexity is the
same then it will be causing a problem.  There are 42,000 files in the
database directory.  This is enough that, with a "traditional" unix
filesystem like ext2/3, kernel operations on directories take a
significant time.  (In other applications I've generally used a guide of
100-1000 files per directory before adding extra layers, but I don't
know how valid this is.)

I'm interested to know if anyone has any experiences to share with
similar large numbers of tables.  Should I worry about it?  I don't want
to wait until something breaks badly if I need architectural changes.
Presumably tablespaces could be used to avoid the
too-many-files-per-directory issue, though I've not moved to 8.0 yet.
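
(From a skim of the 8.0 docs, I imagine the incantation would be
something along these lines - the path and names here are just
hypothetical placeholders:)

    CREATE TABLESPACE user_area LOCATION '/srv/pgdata/user_area';
    CREATE TABLE some_schema.some_table (id integer)
        TABLESPACE user_area;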

Thanks

Phil.


Re: Scalability with large numbers of tables

From: Lincoln Yeoh
Date:
I'm not an expert.

Turn off tab completion? It's probably scanning through all the possible
table names and the algorithm used is probably not designed for that
number. And with 42000 tables, tab completion may not be that helpful.
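
(If you'd rather keep psql usable and just lose completion, I believe
readline itself can be told to skip it for psql alone via ~/.inputrc -
this is plain readline configuration, nothing psql-specific, assuming
psql registers itself with readline under the name "psql":)

    $if psql
    set disable-completion on
    $endif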

Don't use ext2/ext3? There are other filesystems on Linux which perform
decently with thousands of files in a directory. AFAIK ext2 and ext3
limit the size of a single file anyway - and I'm not sure whether
postgresql BLOBs would hit those filesystem limits, or whether
postgresql splits BLOBs or hits its own limits first - I'd just store
multi-GB stuff out of the DB.

At 01:24 PM 2/20/2005 +0000, Phil Endecott wrote:

>Dear Postgresql experts,
>
>I have a single database with one schema per user.  Each user has a
>handful of tables, but there are lots of users, so in total the database
>has thousands of tables.
>
>I'm a bit concerned about scalability as this continues to grow.  For
>example I find that tab-completion in psql is now unusably slow; if there
>is anything more important where the algorithmic complexity is the same
>then it will be causing a problem.  There are 42,000 files in the database
>directory.  This is enough that, with a "traditional" unix filesystem like
>ext2/3, kernel operations on directories take a significant time.  (In
>other applications I've generally used a guide of 100-1000 files per
>directory before adding extra layers, but I don't know how valid this is.)



Re: Scalability with large numbers of tables

From: Larry Rosenman
Date:
Lincoln Yeoh wrote:
> I'm not an expert.
>
> Turn off tab completion? It's probably scanning through all the
> possible table names and the algorithm used is probably not designed
> for that number. And with 42000 tables, tab completion may not be
> that helpful.
>
> Don't use ext2/ext3? There are other filesystems on Linux which
> perform decently with thousands of files in a directory. AFAIK ext2
> and ext3 limit the size of a single file anyway - and I'm not sure
> whether postgresql BLOBs would hit those filesystem limits, or
> whether postgresql splits BLOBs or hits its own limits first - I'd
> just store multi-GB stuff out of the DB.
>
> At 01:24 PM 2/20/2005 +0000, Phil Endecott wrote:
>
>> Dear Postgresql experts,
>>
>> I have a single database with one schema per user.  Each user has a
>> handful of tables, but there are lots of users, so in total the
>> database has thousands of tables.
>>
>> I'm a bit concerned about scalability as this continues to grow.  For
>> example I find that tab-completion in psql is now unusably slow; if
>> there is anything more important where the algorithmic complexity is
>> the same then it will be causing a problem.  There are 42,000 files
>> in the database directory.  This is enough that, with a
>> "traditional" unix filesystem like ext2/3, kernel operations on
>> directories take a significant time.  (In other applications I've
>> generally used a guide of 100-1000 files per directory before adding
>> extra layers, but I don't know how valid this is.)

PostgreSQL breaks tables down into 1GB segments, and oversized
attributes get stored into TOAST tables, compressed.
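
(You can see both pieces from the catalog - the table name below is
hypothetical. The heap file lives at $PGDATA/base/<database oid>/<relfilenode>,
with .1, .2, ... suffixes for each additional 1GB segment, and
reltoastrelid points at the TOAST table:)

    SELECT relname, relfilenode, reltoastrelid
    FROM pg_class
    WHERE relname = 'mytable';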

I don't know if this helps in your case, however.

--
Larry Rosenman                     http://www.lerctr.org/~ler
Phone: +1 972-414-9812                 E-Mail: ler@lerctr.org
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749


Re: Scalability with large numbers of tables

From: Scott Marlowe
Date:
On Sun, 2005-02-20 at 10:45, Lincoln Yeoh wrote:
> I'm not an expert.
>
> Turn off tab completion? It's probably scanning through all the possible
> table names and the algorithm used is probably not designed for that
> number. And with 42000 tables, tab completion may not be that helpful.
>
> Don't use ext2/ext3? There are other filesystems on Linux which perform
> decently with thousands of files in a directory. AFAIK ext2 and ext3
> limit the size of a single file anyway - and I'm not sure whether
> postgresql BLOBs would hit those filesystem limits, or whether
> postgresql splits BLOBs or hits its own limits first - I'd just store
> multi-GB stuff out of the DB.

Actually, while ext2/3 used to be pretty pokey with a large number of
files, the latest version seems quite comfortable handling tens of
thousands with a fairly good response time.
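
(I believe that's the dir_index / htree feature. If your filesystem
predates it, something like the following should enable it for newly
created directories, and e2fsck -D can rebuild existing ones - device
name hypothetical, and run it on an unmounted filesystem:)

    tune2fs -O dir_index /dev/sda1
    e2fsck -fD /dev/sda1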

Re: Scalability with large numbers of tables

From: Marco Colombo
Date:
On Sun, 20 Feb 2005, Phil Endecott wrote:

> Dear Postgresql experts,
>
> I have a single database with one schema per user.  Each user has a handful
> of tables, but there are lots of users, so in total the database has
> thousands of tables.
>
> I'm a bit concerned about scalability as this continues to grow.  For example
> I find that tab-completion in psql is now unusably slow; if there is anything
> more important where the algorithmic complexity is the same then it will be
> causing a problem.  There are 42,000 files in the database directory.  This
> is enough that, with a "traditional" unix filesystem like ext2/3, kernel
> operations on directories take a significant time.  (In other applications
> I've generally used a guide of 100-1000 files per directory before adding
> extra layers, but I don't know how valid this is.)

Please read the following linux-kernel thread (October 2002):

http://www.ussg.iu.edu/hypermail/linux/kernel/0210.0/0281.html

they are comparing Ext3 and ReiserFS directory operations, with 300,000
files. They are head to head. I doubt you can measure any difference
with 42,000 files, provided that the kernel you're running isn't too
old. Please note that as far as PostgreSQL is concerned, CPU usage
is more important than raw speed in tests, IMHO. And I bet both the
filesystems have improved since then.

Now I wonder, is tab-completion faster in Oracle? B-)

.TM.
--
       ____/  ____/   /
      /      /       /            Marco Colombo
     ___/  ___  /   /              Technical Manager
    /          /   /             ESI s.r.l.
  _____/ _____/  _/               Colombo@ESI.it

Re: Scalability with large numbers of tables

From: Christopher Browne
Date:
Oops! spam_from_postgresql_general@chezphil.org (Phil Endecott) was seen spray-painting on a wall:
> Dear Postgresql experts,
>
> I have a single database with one schema per user.  Each user has a
> handful of tables, but there are lots of users, so in total the
> database has thousands of tables.
>
> I'm a bit concerned about scalability as this continues to grow.
> For example I find that tab-completion in psql is now unusably slow;
> if there is anything more important where the algorithmic complexity
> is the same then it will be causing a problem.  There are 42,000
> files in the database directory.  This is enough that, with a
> "traditional" unix filesystem like ext2/3, kernel operations on
> directories take a significant time.  (In other applications I've
> generally used a guide of 100-1000 files per directory before adding
> extra layers, but I don't know how valid this is.)

I'm pretty sure that slowness of tab completion has little to do with
the performance of the filesystems.

If you've got tens of thousands of relations, the tab completion code
has to draw the whole list of relations from pg_class into memory and
"marshal" it into a form usable by GNU Readline.  THAT is what you're
seeing slow down.  As the number of tables, n, grows, the cost of that
grows with order of complexity O(n).

Actual queries on actual tables won't be slow; they will look up
relation names directly in pg_class, and presumably go from there to
get the file name(s) on the filesystem, which each represent
operations of complexity of order O(n log n).  Which remains fast even
if there are millions of tables.
--
(format nil "~S@~S" "cbbrowne" "gmail.com")
http://linuxdatabases.info/info/slony.html
It isn't that  physicists enjoy physics  more than they enjoy sex, its
that they enjoy sex more when they are thinking of physics.

Re: Scalability with large numbers of tables

From: Florian G. Pflug
Date:
Christopher Browne wrote:
> Oops! spam_from_postgresql_general@chezphil.org (Phil Endecott) was seen spray-painting on a wall:
>>I have a single database with one schema per user.  Each user has a
>>handful of tables, but there are lots of users, so in total the
>>database has thousands of tables.

> If you've got tens of thousands of relations, the tab completion code
> has to draw the whole list of relations from pg_class into memory and
> "marshal" it into a form usable by GNU Readline.  THAT is what you're
> seeing slow down.  As the number of tables, n, grows, the cost of that
> grows with order of complexity O(n).
>
> Actual queries on actual tables won't be slow; they will look up
> relation names directly in pg_class, and presumably go from there to
> get the file name(s) on the filesystem, which each represent
> operations of complexity of order O(n log n).  Which remains fast even
> if there are millions of tables.

I guess you mean O(log n) in the second paragraph (which would imply
that there is an index on relname for pg_class). If the complexity were
O(n log n), it would be more complex than an O(n) algorithm, and would
therefore be slower (or, at least, scale worse) than the tab-completion
code ;-)
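
(There is indeed such an index - pg_class_relname_nsp_index on
(relname, relnamespace). A quick sanity check, with a hypothetical
table name, 2200 being the usual OID of the public schema:)

    EXPLAIN SELECT oid FROM pg_class
    WHERE relname = 'mytable' AND relnamespace = 2200;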

greetings, Florian Pflug


Re: Scalability with large numbers of tables

From: Greg Sabino Mullane
Date:

Phil Endecott wrote:
> For example I find that tab-completion in psql is now unusably slow
...

Christopher Browne replied:
> If you've got tens of thousands of relations, the tab completion code
> has to draw the whole list of relations from pg_class into memory and
> "marshal" it into a form usable by GNU Readline.

Well, it's actually not quite that bad. The tab-completion code has a
hard-coded limit (literally) of 1000 relations in the SQL it sends to
the backend, so over 1000 any slowdown is simply a limitation on how fast
Postgres can execute the query and serve it up to psql.

--
Greg Sabino Mullane greg@turnstep.com
PGP Key: 0x14964AC8 200502211216
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8




Re: Scalability with large numbers of tables

From: Phil Endecott
Date:
Thanks to all who've responded to my concern about scalability with
large numbers of tables.  I am reasonably reassured.

I don't really care about the tab-completion speed - it's just that its
slowness was my "heads-up" to the fact that there could be something
related to the number of tables to worry about.  Interestingly,
however, bash tab completion on the files in the database directory is
still instantaneous.

The quoted linux-kernel thread about filesystem performance concerns
creating and deleting files (the typical "untar" and "mail server"
benchmarks), which is not typical of what postgres does - it simply
opens them.  I was unaware that ext3 had a tree-structured directory
index - I assumed that its backward compatibility with ext2 made that
impossible.  So that's another thing I don't need to worry about.

Regards,

--Phil.


Re: Scalability with large numbers of tables

From: Tom Lane
Date:
"Greg Sabino Mullane" <greg@turnstep.com> writes:
> Christopher Browne replied:
>> If you've got tens of thousands of relations, the tab completion code
>> has to draw the whole list of relations from pg_class into memory and
>> "marshal" it into a form usable by GNU Readline.

> Well, it's actually not quite that bad. The tab-completion code has a
> hard-coded limit (literally) of 1000 relations in the SQL it sends to
> the backend, so over 1000 any slowdown is simply a limitation on how fast
> Postgres can execute the query and serve it up to psql.

Actually it is that bad :-( because the query that is generated is

    SELECT bad-nasty-select
    UNION
    SELECT another-nasty-select
    UNION
    SELECT still-another-one
    LIMIT 1000

and since UNION eliminates duplicates, the backend has to do the whole
thing before the LIMIT kicks in.

I'm not sure if we could use UNION ALL --- does readline care if there
are duplicates? --- but if not it'd help to put LIMITs on the individual
UNION arms.

    (SELECT nasty-select LIMIT 1000)
    UNION
    (SELECT nasty-select LIMIT 1000)
    UNION
    (SELECT nasty-select LIMIT 1000)
    LIMIT 1000

The individual selects themselves aren't exactly optimized, either ---
for instance the WHERE clauses aren't designed to allow indexscans
to be used.  Maybe we could use LIKEs instead of those ugly substring
tests.
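
That is, roughly the difference between (illustration only):

    -- can't use the index on relname
    SELECT relname FROM pg_class
    WHERE substring(relname, 1, 3) = 'foo';

    -- can use pg_class_relname_nsp_index, at least in C locale
    SELECT relname FROM pg_class
    WHERE relname LIKE 'foo%';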

In short, this is mostly the fault of the tab completion code and not
the backend.

            regards, tom lane

Re: Scalability with large numbers of tables

From: Thomas F. O'Connell
Date:
To me, the issues of scalability come from a management perspective.
I'm dealing with a schema that has tens of thousands of tables, but
they are inherited from a relatively small set of superclass tables.

I imagine managing thousands of schemas raises much the same issues as
managing thousands of subclass tables. I have to deal with indexes for
each of the thousands of tables individually, because indexes and
constraints are not inherited. So any time I discover a new column that
should be indexed, I have to make sure to propagate the index across
all subclasses.
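
(A sketch of what that looks like, with made-up names - the parent's
indexes and its primary key simply don't exist on the children:)

    CREATE TABLE events (id integer PRIMARY KEY, created timestamptz);
    CREATE TABLE events_user1 () INHERITS (events);
    -- neither the pkey nor any other index carries over, so:
    CREATE INDEX events_user1_created_idx ON events_user1 (created);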

Where this hits performance is peculiar to inheritance: even a primary
key lookup in a superclass table becomes slow, because it has to be
broken down into a lookup in every child table.

-tfo

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On Feb 20, 2005, at 7:24 AM, Phil Endecott wrote:

> Dear Postgresql experts,
>
> I have a single database with one schema per user.  Each user has a
> handful of tables, but there are lots of users, so in total the
> database has thousands of tables.
>
> I'm a bit concerned about scalability as this continues to grow.  For
> example I find that tab-completion in psql is now unusably slow; if
> there is anything more important where the algorithmic complexity is
> the same then it will be causing a problem.  There are 42,000 files in
> the database directory.  This is enough that, with a "traditional"
> unix filesystem like ext2/3, kernel operations on directories take a
> significant time.  (In other applications I've generally used a guide
> of 100-1000 files per directory before adding extra layers, but I
> don't know how valid this is.)
>
> I'm interested to know if anyone has any experiences to share with
> similar large numbers of tables.  Should I worry about it?  I don't
> want to wait until something breaks badly if I need architectural
> changes. Presumably tablespaces could be used to avoid the
> too-many-files-per-directory issue, though I've not moved to 8.0 yet.
>
> Thanks
>
> Phil.