Re: PLEASE GOD HELP US!

From: William Yu
Subject: Re: PLEASE GOD HELP US!
Date:
Msg-id: cjkh7a$1s73$1@news.hub.org
In reply to: Re: PLEASE GOD HELP US!  ("Shane | SkinnyCorp" <shanew@skinnycorp.com>)
Responses: Re: PLEASE GOD HELP US!  (William Yu <wyu@talisys.com>)
           Re: PLEASE GOD HELP US!  (Geoffrey <esoteric@3times25.net>)
List: pgsql-admin
Shane | SkinnyCorp wrote:

> I would like to also add that I've been a sysadmin for quite some time, and
> I've been designing databases for quite some time as well. I'm no idiot, I
> just can't find the bottleneck here (if one does in fact exist). So in light
> of this, please send me some suggestions I can work with.

My 2 cents. When I see a message titled "PLEASE GOD HELP US", my first
thought is not "experienced db designer". Fewer bombastic statements and
more data -- VACUUM VERBOSE output, EXPLAIN ANALYZE plans, db stats, cpu
stats, i/o stats -- would have (1) cut down on the obvious advice you say
you already know and (2) let the experts home in on the problem. It's not
like we can guess what your system is doing.
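
Something along these lines is the kind of output people can actually
work with (table and column names below are placeholders, not your real
schema):

   VACUUM VERBOSE ANALYZE threads;
   EXPLAIN ANALYZE SELECT * FROM threads ORDER BY created DESC LIMIT 25;

Plus a few minutes of vmstat/iostat captured while the site is actually
slow.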



Here are a few things that may or may not mean anything. Just random
guesses off the top of my head, given the lack of detailed info.

 > Nothing makes much of a difference.  I even attempted to use
 > persistent connections to the database to get around the connection
 > overhead... and THAT just seemed to eat up all the system's memory
 > while giving little or no gain in performance.

At buffers of 8192, that's just 64MB used up for shared memory + a few
MB per connection. At the max 60 users clicking on submit at exactly the
same time, that's about 300MB of memory compared to your total of 1.5GB.
If something is "eating up all the system memory", it isn't Postgres and
it certainly wouldn't be a connection pooling mechanism.
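
Rough back-of-the-envelope math (assuming 8KB pages and something close
to the default sort_mem -- adjust for your actual settings):

   shared_buffers:  8192 buffers * 8KB       = 64MB
   per backend:     ~1MB sort_mem + overhead = call it ~4MB
   60 backends:     60 * ~4MB                = ~240MB
   total:                                      ~300MB of your 1.5GB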


 > max_connections = 50

A bit low for a production website. Fine for development, but I
personally max my DBs out at the full 1024. There's no harm in setting
it really high if you can allocate the shared memory. It could also be
causing delays if you don't have enough connections for the number of
concurrent users. Dunno.
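
For example (the numbers below are only an illustration -- size them to
your own box):

   # postgresql.conf
   max_connections = 256
   shared_buffers = 8192    # 64MB of shared memory at 8KB per page

If the postmaster then refuses to start for lack of shared memory, raise
the kernel limit first (on Linux, something like
"sysctl -w kernel.shmmax=134217728") and try again.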



 > enable_seqscan = false

There are cases where a seqscan will be faster than an indexscan. For
example, your query to retrieve the latest 25 threads -- always faster
using a seqscan. If it was using an indexscan, that would explain the 9
seconds to run, because the HD heads would have to jump back & forth
from index to table for every record. (And as an experienced sysadmin,
you should know that HD latency is way more expensive than HD
throughput.) On my dev system (P3/750, 1GB RAM, 1 IDE HD), I get the
following results on a 36K-row table:

   SELECT * FROM 16K_table ORDER BY un_indexed_fld DESC LIMIT 25
   --> 5s -- 1st run
   --> .013s -- 2nd run

Doing the same selects on an indexed field, my results start at < 100
msec (1st run) and drop to < 1 msec once the I/O is cached.
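
If you want to see which plan that 9-second query is actually getting,
the quickest check is to toggle the setting for one session and compare
(again, table and column names are placeholders):

   SET enable_seqscan = true;
   EXPLAIN ANALYZE SELECT * FROM threads ORDER BY created DESC LIMIT 25;
   SET enable_seqscan = false;
   EXPLAIN ANALYZE SELECT * FROM threads ORDER BY created DESC LIMIT 25;

The actual times in the two plans will tell you far more than guessing.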
