Discussion:


From: "Doron Baranes"
Date:

Hi

I am new to Postgres and I'm having performance issues.

I am running Postgres 7.4.6 on a PineApp appliance with 512 MB RAM.

I did a database vacuum analyze and rebuilt my indexes.

When I run queries on tables of 2M-10M rows, they take several minutes,
and sar and top show that the CPU and memory are heavily used.

I would be glad for guidance on server parameters or other
configurations that would help.

10x.

Doron.

Re:

From: Luckys
Date:
It'd be helpful if you posted the EXPLAIN output of the slow-running queries on the respective tables.
 
cool.
L.

 
On 4/9/06, Doron Baranes <doron.baranes@dbnet.co.il> wrote:

> Hi
>
> I am new to Postgres and I'm having performance issues.
>
> I am running Postgres 7.4.6 on a PineApp appliance with 512 MB RAM.
>
> I did a database vacuum analyze and rebuilt my indexes.
>
> When I run queries on tables of 2M-10M rows, they take several minutes,
> and sar and top show that the CPU and memory are heavily used.
>
> I would be glad for guidance on server parameters or other
> configurations that would help.
>
> 10x.
>
> Doron.


Re:

From: Ragnar
Date:
On Sun, 2006-04-09 at 12:47 +0200, Doron Baranes wrote:
> Hi
>
> I am running Postgres 7.4.6 on a PineApp appliance with 512 MB RAM.
>
> I did a database vacuum analyze and rebuilt my indexes.

If you have previously done a lot of deletes or updates
without regular vacuums, you may have to do a
  VACUUM FULL ANALYZE
once to get the tables back into a normal state.

After this, a regular VACUUM ANALYZE should be
enough.
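
For example, database-wide (note that VACUUM FULL takes an
exclusive lock on each table while it rewrites it, so run it
during a quiet period):

  -- one-time recovery from bloat left by unvacuumed deletes/updates
  VACUUM FULL ANALYZE;
  -- routine maintenance afterwards; no exclusive locks needed
  VACUUM ANALYZE;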

> When I run queries on tables of 2M-10M rows, they take several
> minutes

We would need to see the output of EXPLAIN ANALYZE
for your query, along with some information about
the schema of the tables involved, such as what indexes
have been created.

Also, let us know about any non-default configuration.
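
For example, in psql (the table and column names here are just
placeholders):

  -- EXPLAIN ANALYZE runs the query and reports actual times and row counts
  EXPLAIN ANALYZE
  SELECT count(*) FROM mytable WHERE entry_time > '2006-01-25';

  -- \d shows a table's columns and its indexes
  \d mytable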

gnari



Re:

From: "Doron Baranes"
Date:
Hi,

I've attached a file with details about the tables, the queries, and
the EXPLAIN ANALYZE plans.
Hope this helps in analyzing my problem.

10x
Doron

-----Original Message-----
From: Ragnar [mailto:gnari@hive.is]
Sent: Sunday, April 09, 2006 2:37 PM
To: Doron Baranes
Subject: RE: [PERFORM]

On Sun, 2006-04-09 at 14:11 +0200, Doron Baranes wrote:

Please reply to the list, not to me directly. That way
others can help you too.

> I did a database vacuum analyze a few days ago.

Yes, I saw that in your original post. I suggested
VACUUM FULL ANALYZE, not just VACUUM ANALYZE.

> I've attached a few explain plans.

[explain plans deleted]

These are useless: you need to show us the output of
EXPLAIN ANALYZE; these are the output of plain EXPLAIN.
Also, a plan is not much use without seeing the query itself.

You still have not answered the question about
which indexes you have.
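
If you don't have psql's \d handy, something like this lists them
(the table name is a placeholder):

  -- index definitions for a given table (works on 7.4)
  SELECT indexname, indexdef
  FROM pg_indexes
  WHERE tablename = 'mytable';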

gnari



Attachments

Re:

From: Ragnar
Date:
On Mon, 2006-04-10 at 10:30 +0200, Doron Baranes wrote:

> I've attached a file with details about the tables, the queries, and
> the EXPLAIN ANALYZE plans.
> Hope this helps in analyzing my problem.

First query:

> explain analyze SELECT date_trunc('hour'::text,
> i.entry_time) AS datetime,
> COUNT(fr.grp_fate_id) ,
> SUM(i.size)
> FROM log.msg_info as i,log.msg_fate as f,
> log.msg_fate_recipients as fr
> WHERE i.origin = 1
> AND i.msgid=f.msgid
> AND i.entry_time > '2006-01-25'
> AND f.grp_fate_id=fr.grp_fate_id
> GROUP BY datetime
> order by datetime;

If i.origin has high selectivity (that is, if very
few rows in msg_info have origin = 1 in this
case), an index on msg_info(origin) can help.
Unfortunately, as you are using 7.4 and this
is a smallint column, you would have to change
the query slightly to make use of it:
  WHERE i.origin = 1::smallint
If more than a few % of the rows have this value,
then this will not help.

The index on msg_info(entry_time) is unlikely
to be used, because a simple '>' comparison
has little selectivity. Try adding an upper limit
to the query, to make it easier for the planner
to see that few rows would be returned (if that is
the case), for example:
  AND i.entry_time BETWEEN '2006-01-25'
                   AND '2006-05-01'
This might also improve the estimated number
of groups on datetime (notice: estimated rows=1485233,
real=623), although I am not sure whether that will help you.

I do not know how well the planner copes
with date_trunc('hour'::text, i.entry_time),
so possibly you could get some improvement with
an indexed entry_hour column, populated by a trigger
or by your application, and changing your query to:

explain analyze SELECT i.entry_hour,
COUNT(fr.grp_fate_id) ,
SUM(i.size)
FROM log.msg_info as i,log.msg_fate as f, log.msg_fate_recipients as fr
WHERE i.origin = 1
AND i.msgid=f.msgid
AND i.entry_hour BETWEEN '2006-01-25 00:00'
                 AND '2006-05-01 00:00'
AND f.grp_fate_id=fr.grp_fate_id
GROUP BY entry_hour
order by entry_hour;

(adjust the upper limit to your reality)
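
If you try the trigger route, here is a rough, untested sketch
(it assumes entry_time is a timestamp column, that plpgsql is
installed in the database, and all the names are my invention):

  ALTER TABLE log.msg_info ADD COLUMN entry_hour timestamp;
  UPDATE log.msg_info SET entry_hour = date_trunc('hour', entry_time);
  CREATE INDEX msg_info_entry_hour_idx ON log.msg_info (entry_hour);

  -- keep entry_hour in sync on every insert or update
  CREATE OR REPLACE FUNCTION set_entry_hour() RETURNS trigger AS '
  BEGIN
      NEW.entry_hour := date_trunc(''hour'', NEW.entry_time);
      RETURN NEW;
  END;
  ' LANGUAGE plpgsql;

  CREATE TRIGGER msg_info_set_entry_hour
      BEFORE INSERT OR UPDATE ON log.msg_info
      FOR EACH ROW EXECUTE PROCEDURE set_entry_hour();

Remember to ANALYZE the table afterwards so the planner has
statistics for the new column.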

Do these suggestions help at all?

gnari