Re: performance problem - 10.000 databases

From: Marek Florianczyk
Subject: Re: performance problem - 10.000 databases
Date:
Msg-id: 1068056378.28827.172.camel@franki-laptop.tpi.pl
In reply to: Re: performance problem - 10.000 databases  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: performance problem - 10.000 databases  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-admin
On Wed, 05-11-2003 at 18:59, Tom Lane wrote:
> Marek Florianczyk <franki@tpi.pl> writes:
> > Each client was doing:
>
> > 10 x connect,"select * from table[rand(1-4)] where
> > number=[rand(1-1000)]",disconnect--(fetch one row)
>
> Seems like this is testing the cost of connect and disconnect to the
> exclusion of nearly all else.  PG is not designed to process just one
> query per connection --- backend startup is too expensive for that.
> Consider using a connection-pooling module if your application wants
> short-lived connections.

You're right, a typical PHP page will probably run more queries "per
view". But how well does a connection-pooling module work when the
connection for each virtual site is unique? Each site has a different
user and password, and different schemas and permissions, so the
pooling module would have to switch between users without reconnecting
to the database. Is that impossible?
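
Or maybe the pool could switch identities with SET SESSION
AUTHORIZATION? That command (and search_path) really exists in
PostgreSQL; everything else below -- the pool class, the DSN, the names
-- is made up, a sketch of the idea rather than something I have tested:

    # Sketch: reuse superuser connections, switching the effective
    # user per query instead of reconnecting. All names are invented.
    import psycopg2

    class SwitchingPool:
        def __init__(self, dsn, size=10):
            # Physical connections are opened once, as a superuser,
            # so SET SESSION AUTHORIZATION is permitted on them.
            self.conns = [psycopg2.connect(dsn) for _ in range(size)]

        def run_as(self, site_user, query, params=()):
            conn = self.conns.pop()
            try:
                cur = conn.cursor()
                # Impersonate the site's role and point search_path at
                # its schema, instead of paying for backend startup.
                cur.execute("SET SESSION AUTHORIZATION %s", (site_user,))
                cur.execute("SET search_path TO %s", (site_user,))
                cur.execute(query, params)
                rows = cur.fetchall()
                # Return to superuser before the connection is reused.
                cur.execute("RESET SESSION AUTHORIZATION")
                conn.commit()
                return rows
            finally:
                self.conns.append(conn)

    # e.g. fetch one row for site test42, as in the benchmark:
    pool = SwitchingPool("dbname=hosting user=postgres")
    print(pool.run_as("test42",
                      "SELECT * FROM table1 WHERE number = %s", (17,)))

The obvious caveat is that every pooled connection is then a superuser
connection, so the pool itself becomes the security boundary.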

>
> > I noticed that queries like: "\d table1" "\di" "\dp" are extremely slow,
>
> I thought maybe you'd uncovered a performance issue with lots of
> schemas, but I can't reproduce it here.  I made 10000 schemas each
> containing a table "mytab", which is about the worst case for an
> unqualified "\d mytab", but it doesn't seem excessively slow --- maybe
> about a quarter second to return the one mytab that's actually in my
> search path.  In realistic conditions where the users aren't all using
> the exact same table names, I don't think there's an issue.

But did you try that under database load, e.g. with 100 clients
connected, as in my example? When I run these "\d" queries with no
clients connected and after ANALYZE, they are fast, but 100 connected
clients is enough to stretch the query time to 30 seconds. :(

I have 3000 schemas named test[1-3000] and 3000 users named
test[1-3000]. Each schema contains four tables (table1, table2, table3,
table4); each table has 3 columns (int, text, int), and some of the
tables also have indexes.
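
The whole layout can be regenerated with a short script along these
lines (a sketch: the column names are placeholders since only the types
matter, and which tables get indexes is arbitrary):

    # Emit SQL that rebuilds the test layout: 3000 users and schemas
    # named test1..test3000, four 3-column tables per schema.
    NUM_SITES = 3000

    for i in range(1, NUM_SITES + 1):
        user = f"test{i}"
        print(f"CREATE USER {user} PASSWORD '{user}';")
        print(f"CREATE SCHEMA {user} AUTHORIZATION {user};")
        for t in range(1, 5):
            print(f"CREATE TABLE {user}.table{t} "
                  f"(number int, body text, flags int);")
            # "some of the tables also have indexes" --
            # here, arbitrarily, the first two per schema.
            if t <= 2:
                print(f"CREATE INDEX table{t}_number_idx "
                      f"ON {user}.table{t} (number);")

Piping the output into psql as a superuser builds everything in one
pass.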

If you want, I can send the Perl script that forks 100 processes and
performs my queries.
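
In outline it does this (rewritten here as a Python sketch rather than
the actual Perl; the host and database names are invented):

    # Fork 100 workers; each does the connect / one-row select /
    # disconnect cycle from the benchmark as a random test user.
    import os, random, time
    import psycopg2

    WORKERS, ITERATIONS = 100, 10

    def worker():
        for _ in range(ITERATIONS):
            n = random.randint(1, 3000)
            # A fresh connection per query -- deliberately, since
            # that is exactly the cost being measured.
            conn = psycopg2.connect(host="localhost", dbname="hosting",
                                    user=f"test{n}", password=f"test{n}")
            cur = conn.cursor()
            # search_path defaults to "$user", so table1..table4
            # resolve to the connected user's own schema.
            cur.execute(f"SELECT * FROM table{random.randint(1, 4)}"
                        " WHERE number = %s", (random.randint(1, 1000),))
            cur.fetchall()
            conn.close()

    start = time.time()
    pids = []
    for _ in range(WORKERS):
        pid = os.fork()
        if pid == 0:        # child: run the loop, then exit
            worker()
            os._exit(0)
        pids.append(pid)    # parent: keep forking
    for _ in pids:
        os.wait()
    print(f"elapsed: {time.time() - start:.1f}s")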

greetings
Marek

>
>             regards, tom lane

