Re: Performance with very large tables

From: Gregory S. Williamson
Subject: Re: Performance with very large tables
Date:
Msg-id: 71E37EF6B7DCC1499CEA0316A256832802B3EBEC@loki.wc.globexplorer.net
In reply to: Re: Performance with very large tables  ("Jan van der Weijde" <Jan.van.der.Weijde@attachmate.com>)
List: pgsql-general
LIMIT is somewhat magical, at least to a degree. I'm not sure about cursors since I am not currently using them.
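For what it's worth, though, a server-side cursor should also hand back the first rows quickly without any LIMIT, since rows are produced as you FETCH rather than all up front. This is just an untested sketch against my table (described below), not something I've timed:

BEGIN;
-- Rows are computed as they are fetched, not all at DECLARE time.
DECLARE bill_cur CURSOR FOR
    SELECT * FROM bill_rpt_work
    ORDER BY report_id, client_id, contract_id, rate,
             appid, userid, collection_id;
FETCH 1000 FROM bill_cur;   -- first batch should come back quickly
FETCH 1000 FROM bill_cur;   -- then the next, and so on
CLOSE bill_cur;
COMMIT;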

billing=# select count(*) from bill_rpt_work;
  count
---------
 2317451
(1 row)

Time: 1709.829 ms

billing=# \d bill_rpt_work
          Table "reporting.bill_rpt_work"
    Column     |         Type          | Modifiers
---------------+-----------------------+-----------
 report_id     | integer               |
 client_id     | character varying(10) |
 contract_id   | integer               | not null
 rate          | numeric               | not null
 appid         | character varying(10) | not null
 userid        | text                  | not null
 collection_id | integer               | not null
 client_name   | character varying(60) |
 use_sius      | integer               | not null
 is_subscribed | integer               | not null
 hits          | numeric               | not null
 sius          | numeric               | not null
 total_amnt    | numeric               | not null
 royalty_total | numeric               |
Indexes:
    "billrptw_ndx" UNIQUE, btree (report_id, client_id, contract_id, rate, appid, userid, collection_id)
    "billrpt_cntrct_ndx" btree (report_id, contract_id, client_id)
    "billrpt_collid_ndx" btree (report_id, collection_id, client_id, contract_id)
Foreign-key constraints:
    "$1" FOREIGN KEY (report_id) REFERENCES billing_reports(report_id)
    "$2" FOREIGN KEY (client_id) REFERENCES "work".clients(client_id)

billing=# select * from bill_rpt_work order by report_id, client_id, contract_id, rate, appid, userid, collection_id;
Cancel request sent

(I canceled the query after more than 10 seconds.)

THEN:

billing=# select * from bill_rpt_work order by report_id, client_id, contract_id, rate, appid, userid, collection_id limit 1000;
 report_id | client_id | contract_id | rate | appid | userid | collection_id | client_name | use_sius | is_subscribed | hits | sius | total_amnt | royalty_total
-----------+-----------+-------------+------+-------+--------+---------------+-------------+----------+---------------+------+------+------------+--------------
<...deleted details...>
Time: 52.745 ms

THEN:

billing=# select * from bill_rpt_work order by report_id, client_id, contract_id, rate, appid, userid, collection_id limit 1000000;
 report_id | client_id | contract_id | rate | appid | userid | collection_id | client_name | use_sius | is_subscribed | hits | sius | total_amnt | royalty_total
-----------+-----------+-------------+------+-------+--------+---------------+-------------+----------+---------------+------+------+------------+--------------
<...deleted details...>
Time: 1043.582 ms

Noticeably longer, but not bad ...

But with no limit it takes quite a while:
select * from bill_rpt_work order by report_id, client_id, contract_id, rate, appid, userid, collection_id;
<...>
Time: 132033.171 ms

but with a limit, even a fairly large one (a little less than half the table), it was markedly faster, presumably because the ORDER BY matches the billrptw_ndx index, so the planner can read from the index and stop early. Maybe try more than half, see if there's a limit to what you can do ...
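If you want to see what the planner is doing in each case, EXPLAIN ANALYZE on both forms should show it; my guess (I haven't run these) is a fast-start index scan on billrptw_ndx under the Limit node, versus a sort of the whole table without it. Careful: EXPLAIN ANALYZE actually executes the query, so the unlimited one will take the full couple of minutes:

-- With a LIMIT the planner can pick a plan that stops after N index tuples.
EXPLAIN ANALYZE
SELECT * FROM bill_rpt_work
ORDER BY report_id, client_id, contract_id, rate, appid, userid, collection_id
LIMIT 1000;

-- Without it, every row has to be produced (and possibly sorted) first.
EXPLAIN ANALYZE
SELECT * FROM bill_rpt_work
ORDER BY report_id, client_id, contract_id, rate, appid, userid, collection_id;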

YMMV, HTH, yadda yadda ...

Greg Williamson
DBA
GlobeXplorer LLC (part of Digital Globe Inc.)

-----Original Message-----
From:    pgsql-general-owner@postgresql.org on behalf of Jan van der Weijde
Sent:    Mon 1/15/2007 4:44 AM
To:    Alban Hertroys
Cc:    pgsql-general@postgresql.org
Subject:    Re: [GENERAL] Performance with very large tables

That is exactly the problem, I think. However, I do not deliberately
retrieve the entire table. I use the default settings of the PostgreSQL
installation and just execute a simple SELECT * FROM table.
I am using a separate client and server (both XP in the test
environment), but that should not make much difference.
I would expect the default behavior of PostgreSQL to be that, even
without LIMIT, a SELECT starts returning records immediately.

Thank you,
Jan

-----Original Message-----
From: Alban Hertroys [mailto:alban@magproductions.nl]
Sent: Monday, January 15, 2007 12:49
To: Jan van der Weijde
Cc: Richard Huxton; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Performance with very large tables

Jan van der Weijde wrote:
> Thank you.
> It is true he wants to have the first few records quickly and then
> continue with the next records. However, without LIMIT it already takes
> a very long time before the first record is returned.
> I reproduced this with a table with 1.1 million records on an XP
> machine, and in my case it took about 25 seconds before the select
> returned the first record. I tried it both interactively with pgAdmin
> and with a C-application using a cursor (with hold). Both took about
> the same time.

Are you sure you don't retrieve the entire result set first, and only
start iterating it after that? Notably the fact that LIMIT changes this
behaviour seems to point in that direction.
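If you are testing from psql, one way to rule that out (on 8.2 or later, if I remember right) is the FETCH_COUNT variable, which makes psql retrieve the result through a cursor in batches instead of buffering the whole thing; the table name below is just a placeholder:

\timing
\set FETCH_COUNT 1000
-- psql now fetches 1000 rows at a time via a cursor, so the first
-- rows should appear almost immediately even without a LIMIT.
SELECT * FROM your_large_table;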

A quick calculation shows that (provided my assumption holds true)
fetching each record takes about 23 usec on average (25 s / 1.1 M
records). A quick test on our dev-db averages about 5 usec per record
over ~40k records, so that looks reasonable to me (apples and oranges,
I know).

--
Alban Hertroys
alban@magproductions.nl

magproductions b.v.

T: +31(0)534346874
F: +31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---------------------------(end of broadcast)---------------------------
TIP 9: In versions below 8.0, the planner will ignore your desire to
       choose an index scan if your joining column's datatypes do not
       match


In the pgsql-general list, by date of posting:

Previous
From: Richard Huxton
Date:
Message: Re: Performance with very large tables
Next
From: "Shoaib Mir"
Date:
Message: Re: Performance with very large tables