Re: performance problem - 10.000 databases
| From | Tom Lane |
|---|---|
| Subject | Re: performance problem - 10.000 databases |
| Date | |
| Msg-id | 3453.1068055189@sss.pgh.pa.us |
| In reply to | Re: performance problem - 10.000 databases (Marek Florianczyk <franki@tpi.pl>) |
| Responses | Re: performance problem - 10.000 databases |
| List | pgsql-admin |
Marek Florianczyk <franki@tpi.pl> writes:
> Each client was doing:
> 10 x connect,"select * from table[rand(1-4)] where
> number=[rand(1-1000)]",disconnect--(fetch one row)
Seems like this is testing the cost of connect and disconnect to the
exclusion of nearly all else. PG is not designed to process just one
query per connection --- backend startup is too expensive for that.
Consider using a connection-pooling module if your application wants
short-lived connections.
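As a minimal sketch of that pooled alternative (not from the original post), the loop below reuses a small set of long-lived backends instead of paying backend startup for every query. It uses psycopg2's built-in SimpleConnectionPool; the DSN is a hypothetical placeholder, while the table/column names follow the benchmark description above:

```python
# Sketch: reuse pooled connections instead of connect/query/disconnect.
# Assumptions: psycopg2 installed; DSN, table names are illustrative.
import random
from psycopg2.pool import SimpleConnectionPool

DSN = "dbname=test user=franki"  # hypothetical connection string

# A handful of already-started backends shared by all queries.
pool = SimpleConnectionPool(minconn=1, maxconn=5, dsn=DSN)

def fetch_one_row():
    conn = pool.getconn()              # borrow a live backend
    try:
        with conn.cursor() as cur:
            table = "table%d" % random.randint(1, 4)
            cur.execute("SELECT * FROM %s WHERE number = %%s" % table,
                        (random.randint(1, 1000),))
            return cur.fetchone()
    finally:
        pool.putconn(conn)             # return it to the pool, don't close

for _ in range(10):
    fetch_one_row()

pool.closeall()
```

With this pattern the per-query cost is dominated by the query itself rather than by forking and initializing a new backend each time.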
> I noticed that queries like: "\d table1" "\di" "\dp" are extremely slow,
I thought maybe you'd uncovered a performance issue with lots of
schemas, but I can't reproduce it here. I made 10000 schemas each
containing a table "mytab", which is about the worst case for an
unqualified "\d mytab", but it doesn't seem excessively slow --- maybe
about a quarter second to return the one mytab that's actually in my
search path. In realistic conditions where the users aren't all using
the exact same table names, I don't think there's an issue.
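For reference, the worst-case setup described above can be reproduced with a short script along these lines (a sketch, assuming a scratch database you are free to fill with 10000 schemas; psycopg2 again, and the schema/database names are illustrative):

```python
# Sketch: build 10000 schemas each containing a table "mytab", so an
# unqualified "\d mytab" in psql has the maximum number of candidates.
import psycopg2

conn = psycopg2.connect("dbname=scratch")  # hypothetical scratch database
conn.autocommit = True                     # DDL committed as we go
with conn.cursor() as cur:
    for i in range(10000):
        cur.execute("CREATE SCHEMA s%d" % i)
        cur.execute("CREATE TABLE s%d.mytab (number int)" % i)
conn.close()
```

Running "\d mytab" in psql against such a database exercises the catalog lookup across all 10000 schemas, of which only the ones in the current search_path are returned.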
regards, tom lane