Discussion: slow result


slow result

From: Laurent Manchon
Date:
Hi,

I am getting a slow response from my PostgreSQL 7.4 database for the query below
on a table with 800000 rows:

select count(*) from tbl;

PostgreSQL returns the result in 28 sec every time,
whereas MS-SQL returns the result in 0.02 sec every time.

My server is a DELL PowerEdge 2600 with two Xeon processors at 3.2 GHz
and 3 GB of RAM.


My PostgreSQL configuration is:
*********************
log_connections = yes
syslog = 2
effective_cache_size = 50000
sort_mem = 10000
max_connections = 200
shared_buffers = 3000
vacuum_mem = 32000
wal_buffers = 8
max_fsm_pages = 2000
max_fsm_relations = 100

Can you tell me if there is a way to enhance performance?

Thank you




+-----------------------------------------------------+
| Laurent Manchon                                     |
| Email: lmanchon@univ-montp2.fr                     |
+-----------------------------------------------------+

Re: slow result

From: "Heiko W.Rupp"
Date:
On 23.01.2007 at 11:34, Laurent Manchon wrote:

> Hi,
>
> I am getting a slow response from my PostgreSQL 7.4 database for the query below
> on a table with 800000 rows:
>
> select count(*) from tbl;

count(*) does a full table scan over all your 800000 rows. This is
a well-known "feature" of Postgres :-/
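
A minimal illustration, assuming the table really is named tbl: the plan is
a sequential scan under an aggregate, because PostgreSQL indexes carry no
row-visibility information and every row has to be checked.

EXPLAIN SELECT count(*) FROM tbl;
-- Expect a plan of roughly this shape (costs omitted):
--   Aggregate
--     ->  Seq Scan on tbl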

So enhancing the performance is currently only possible by having
faster disk drives.
--
Heiko W.Rupp
                    heiko.rupp@redhat.com, http://www.dpunkt.de/buch/3-89864-429-4.html




Re: slow result

From: "A. Kretschmer"
Date:
On Tue, 23.01.2007 at 11:34:52 +0100, Laurent Manchon wrote the following:
> Hi,
>
> I am getting a slow response from my PostgreSQL 7.4 database for the query below
> on a table with 800000 rows:
>
> select count(*) from tbl;

If I remember correctly, I saw this question yesterday on another
list...


Answer:

Because PG forces a sequential scan. You can read a lot about this in the
archives. Here are some links to explanations:

http://www.pervasive-postgres.com/instantkb13/article.aspx?id=10117&cNode=0T1L6L
http://sql-info.de/postgresql/postgres-gotchas.html#1_7
http://www.varlena.com/GeneralBits/49.php
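
A workaround often mentioned in those archives is to keep an exact row count
in a small side table maintained by triggers. A minimal sketch, assuming the
table is named tbl and using made-up helper names (plpgsql must be installed,
e.g. with createlang):

CREATE TABLE tbl_rowcount (n bigint NOT NULL);
INSERT INTO tbl_rowcount SELECT count(*) FROM tbl;

-- 7.4 has no dollar quoting, so the function body is an ordinary
-- single-quoted string with the inner quotes doubled.
CREATE FUNCTION tbl_rowcount_trig() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        UPDATE tbl_rowcount SET n = n + 1;
    ELSIF TG_OP = ''DELETE'' THEN
        UPDATE tbl_rowcount SET n = n - 1;
    END IF;
    RETURN NULL;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER tbl_rowcount_update AFTER INSERT OR DELETE ON tbl
    FOR EACH ROW EXECUTE PROCEDURE tbl_rowcount_trig();

-- SELECT n FROM tbl_rowcount;  then returns instantly.

The price is that every insert and delete updates the same counter row, so
heavily concurrent writers will serialize on it.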


Hope that helps, Andreas
--
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

Re: slow result

From: "Steinar H. Gunderson"
Date:
On Tue, Jan 23, 2007 at 11:34:52AM +0100, Laurent Manchon wrote:
> I am getting a slow response from my PostgreSQL 7.4 database for the query below
> on a table with 800000 rows:
>
> select count(*) from tbl;

Contrary to your expectations, this is _not_ a query you'd expect to be fast
in Postgres. Try real queries from your application instead -- most likely,
you'll find them to be much master. (If not, come back with the query, the
schema and the EXPLAIN ANALYZE output of your query, and you'll usually get
help nailing down the issues. :-) )
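
For example, a hypothetical slow application query would be posted along
these lines (column name and value made up here):

EXPLAIN ANALYZE SELECT * FROM tbl WHERE some_column = 42;
-- Post the complete output: it shows the plan the planner chose together
-- with the actual row counts and timings of every step.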

/* Steinar */
--
Homepage: http://www.sesse.net/

Re: slow result

From: "Steinar H. Gunderson"
Date:
On Tue, Jan 23, 2007 at 11:55:41AM +0100, Steinar H. Gunderson wrote:
> you'll find them to be much master.

s/master/faster/

/* Steinar */
--
Homepage: http://www.sesse.net/

Re: slow result

From: Bill Moran
Date:
In response to Laurent Manchon <lmanchon@univ-montp2.fr>:
>
> I am getting a slow response from my PostgreSQL 7.4 database for the query below
> on a table with 800000 rows:
>
> select count(*) from tbl;
>
> PostgreSQL returns the result in 28 sec every time,
> whereas MS-SQL returns the result in 0.02 sec every time.
>
> My server is a DELL PowerEdge 2600 with two Xeon processors at 3.2 GHz
> and 3 GB of RAM.

While there's truth in everything that's been said by others, the query
should not take _that_ long.  I just tried a count(*) on a table with
460,000 rows, and it took less than a second.  count(*) in PostgreSQL
is not likely to compare well with most other RDBMSes, for the reasons
others have stated, but counting 800,000 rows shouldn't take 28 seconds.

The standard question applies: have you vacuumed recently?
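
If not, a plain VACUUM ANALYZE (table name assumed) reclaims dead tuples for
reuse and refreshes the planner statistics:

VACUUM ANALYZE tbl;
-- or for the whole database, from the shell (database name made up):
-- vacuumdb --analyze mydb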

> My PostgreSQL configuration is:
> *********************
> log_connections = yes
> syslog = 2
> effective_cache_size = 50000
> sort_mem = 10000
> max_connections = 200
> shared_buffers = 3000
> vacuum_mem = 32000
> wal_buffers = 8
> max_fsm_pages = 2000
> max_fsm_relations = 100
>
> Can you tell me if there is a way to enhance performance?

On our 4G machines, we use shared_buffers=240000 (which equates to about
2G).  The only reason I don't set it higher is that FreeBSD has a limit on
shared memory of 2G.
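
For comparison, with the default 8 kB block size that works out roughly as
follows (the numbers are just the arithmetic, not a recommendation for the
3 GB machine above):

shared_buffers = 240000    # 240000 * 8 kB = about 1.9 GB of shared buffers
# The original posting's shared_buffers = 3000 is only about 24 MB.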

The caveat here is that I'm running a mix of 8.1 and 8.2.  There have been
significant improvements in both the usage of shared memory and the
optimization of count(*) since 7.4, so my first suggestion is to
upgrade your installation.

--
Bill Moran
Collaborative Fusion Inc.

Re: slow result

From: Bruno Wolff III
Date:
On Tue, Jan 23, 2007 at 11:34:52 +0100,
  Laurent Manchon <lmanchon@univ-montp2.fr> wrote:
> Hi,
>
> I am getting a slow response from my PostgreSQL 7.4 database for the query below
> on a table with 800000 rows:
>
> select count(*) from tbl;
>
> PostgreSQL returns the result in 28 sec every time,
> whereas MS-SQL returns the result in 0.02 sec every time.

Besides the other advice mentioned in this thread, check that you don't
have a lot of dead tuples in that table. 28 seconds seems a bit high
for even a sequential scan of 800000 tuples unless they are pretty large.
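
One way to see whether that is the case on 7.4 is to let VACUUM report it
(table name assumed):

VACUUM VERBOSE tbl;
-- The output lists removable (dead) and nonremovable row versions per
-- table; if dead rows vastly outnumber the 800000 live ones, a one-off
-- VACUUM FULL tbl; (which takes an exclusive lock) will shrink the table.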

Re: slow result

From: "Shoaib Mir"
Date:
You can also try this:

ANALYZE tablename;
select reltuples from pg_class where relname = 'tablename';

That will also give almost the same result, I guess, since reltuples is the
row-count estimate that ANALYZE stores for the planner...

-------------
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 1/23/07, Bruno Wolff III <bruno@wolff.to> wrote:
On Tue, Jan 23, 2007 at 11:34:52 +0100,
  Laurent Manchon <lmanchon@univ-montp2.fr> wrote:
> Hi,
>
> I am getting a slow response from my PostgreSQL 7.4 database for the query below
> on a table with 800000 rows:
>
> select count(*) from tbl;
>
> PostgreSQL returns the result in 28 sec every time,
> whereas MS-SQL returns the result in 0.02 sec every time.

Besides the other advice mentioned in this thread, check that you don't
have a lot of dead tuples in that table. 28 seconds seems a bit high
for even a sequential scan of 800000 tuples unless they are pretty large.
