Re: Configuration Advice

From: Merlin Moncure
Subject: Re: Configuration Advice
Date:
Msg-id: b42b73150701191116x21996c5dp2af72e97fc3951a6@mail.gmail.com
In reply to: Configuration Advice (Steve <cheetah@tanabi.org>)
List: pgsql-performance
On 1/17/07, Steve <cheetah@tanabi.org> wrote:
> Hey there;
> I've been lurking on this list awhile, and I've been working with postgres
> for a number of years so I'm not exactly new to this.  But I'm still
> having trouble getting a good balance of settings and I'd like to see what
> other people think.  We may also be willing to hire a contractor to help
> tackle this problem if anyone is interested.

I happen to be something of a COBOL->SQL expert; if you are interested
in some advice you can contact me off-list.  I converted an enterprise
COBOL (AcuCOBOL) app to PostgreSQL by plugging PG into the COBOL
system via a custom C++ ISAM driver.

> I've got an application here that runs large (in terms of length -- the
> queries have a lot of conditions in them) queries that can potentially
> return millions of rows but on average probably return tens of thousands
> of rows.  It's read only for most of the day, and pretty much all the
> queries except one are really fast.

If it's just one query, I think I'd focus on optimizing that query, not
.conf settings.  In my opinion, .conf tuning (a few gotchas aside)
doesn't really get you all that much.

> However, each night we load data from a legacy cobol system into the SQL
> system and then we summarize that data to make the reports faster.  This
> load process is intensely insert/update driven but also has a hefty
> amount of selects as well.  This load process is taking ever longer to
> complete.
>
>
> SO ... our goal here is to make this load process take less time.  It
> seems the big part is building the big summary table; this big summary
> table is currently 9 million rows big.  Every night, we drop the table,
> re-create it, build the 9 million rows of data (we use COPY to put the
> data in when it's prepared, not INSERT), and then build the indexes on it
> -- of which there are many.  Unfortunately this table gets queried
> in a lot of different ways and needs these indexes; also unfortunately, we
> have operator class indexes to support both ASC and DESC sorting on

I have some very specific advice here: check out the row-wise
comparison feature introduced in 8.2.
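For reference, a sketch of what row-wise comparison looks like (table
and column names here are hypothetical, not from Steve's schema): in
8.2, a row constructor comparison evaluates the tuple as a unit with
true per-spec semantics, so a query can resume a large ordered scan
with a single range scan on one multicolumn index instead of a nest of
ANDed/ORed per-column conditions.

```
-- Hypothetical example: resume an ordered scan from the last row seen,
-- using one (last_name, first_name, id) index.  The row comparison
-- below is equivalent to the usual three-branch OR expansion, but the
-- planner can turn it into a single index range scan.
SELECT last_name, first_name, id
  FROM summary_table               -- hypothetical table
 WHERE (last_name, first_name, id) > ('Jones', 'Alice', 1042)
 ORDER BY last_name, first_name, id
 LIMIT 10000;
```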

> columns so these are for all intents and purposes duplicate but required
> under Postgres 8.1 (we've recently upgraded to Postgres 8.2, is this still
> a requirement?)

> Building these indexes takes forever!  It's a long grind through inserts
> and then building the indexes takes a hefty amount of time too.  (about 9
> hours).  Now, the application is likely part at fault, and we're working
> to make it more efficient, but it has nothing to do with the index
> building time.  I'm wondering what we can do to make this better if
> anything; would it be better to leave the indexes on?  It doesn't seem to
> be.  Would it be better to use INSERTs instead of copies?  Doesn't seem to

No.

Any optimization strategy would probably focus on reducing the
amount of data you have to load.
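A hedged sketch of what "load less" could look like (all table and
column names hypothetical): stage the nightly COBOL extract in a
scratch table, then merge only the rows that actually changed, instead
of dropping and rebuilding all 9 million rows plus their indexes every
night.

```
-- Hypothetical: merge only changed rows rather than rebuilding everything.
COPY staging_table FROM '/path/to/extract.dat';   -- nightly extract

-- Update existing rows whose payload changed
UPDATE summary_table s
   SET payload = st.payload
  FROM staging_table st
 WHERE s.id = st.id
   AND s.payload IS DISTINCT FROM st.payload;

-- Insert rows that are new
INSERT INTO summary_table (id, payload)
SELECT st.id, st.payload
  FROM staging_table st
 WHERE NOT EXISTS (SELECT 1 FROM summary_table s WHERE s.id = st.id);
```

The win is that the indexes are maintained incrementally for a small
delta instead of being rebuilt from scratch over the whole table.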

merlin
