identifying performance hits: how to ???

Hello All,

Anyone know if read performance on a Postgres database decreases at an
increasing rate as the number of stored records increases?

This is a TCL app, which makes entries into a single table and from time
to time repopulates a grid control.  It must rebuild the data in the grid
control because other clients have since written to the same table.
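
For illustration, the refresh amounts to a full re-read of the table on
every repaint; the real table and column names differ, but it is roughly:

    SELECT id, description, status
    FROM orders
    ORDER BY id;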

It seems as if I'm missing something fundamental... maybe I am... is some
kind of database cleanup necessary?  With fewer than ten records, the grid
populates very quickly.  Beyond that, performance slows to a crawl, until
it _seems_ that every new record doubles the time needed to retrieve the
records.  My quick fix was to cache the data locally in TCL and only
retrieve changed data from the database.  But now, as client demand
increases and more clients make changes to the table, I'm hitting the
bottleneck again.
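
Is that the sort of cleanup people mean?  I'm imagining something like the
following: a periodic vacuum, plus a timestamp column so each refresh only
pulls rows changed since the last one (the column name and the literal
below are made up):

    -- reclaim space from old rows and refresh the planner's statistics
    VACUUM ANALYZE orders;

    -- incremental refresh: fetch only rows changed since the last poll
    SELECT id, description, status, last_changed
    FROM orders
    WHERE last_changed > '1999-06-01 12:34:56';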

The client asked me yesterday to start evaluating "more mainstream"
databases, which means that they're pissed off.  Postgres is fun to work
with, but it's hard to learn about, and hard to justify to clients.

By the way, I have experimented with populating the exact same grid control
on Windows NT, using MS Access (TCL runs just about anywhere).  The grid
seemed to populate just about instantaneously.  So, is the bottleneck in
Unix or in Postgres, and does anybody know how to make it faster?
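
If it helps, is EXPLAIN the right way to see where the time goes, i.e.
whether a query walks the whole table or can use an index?  The index and
WHERE clause below are only examples:

    -- shows whether the planner picks a sequential scan or an index scan
    EXPLAIN SELECT id, description, status FROM orders WHERE id = 42;

    -- if it reports a seq scan on a selective column, an index may help
    CREATE INDEX orders_id_idx ON orders (id);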

Cheers,
Rob


