Performance issues when the number of records is around 10 million

Hi all,
      In my database application, I have a table that can grow to around 10 million records, and insertions can arrive at up to 100 per second at peak times. I have configured Postgres to run autovacuum hourly. A CGI-based GUI front end displays the data from this table. When I try to fetch the last twenty records, the operation takes around 10-15 minutes. This is the query being used:

SELECT e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,
       e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,
       e.wifi_addr_2, e.view_status, bssid
FROM event e, signature s
WHERE s.sig_id = e.signature
  AND e.timestamp >= '1270449180'
  AND e.timestamp <  '1273473180'
ORDER BY e.cid DESC
LIMIT 21 OFFSET 10539780;
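One idea I have been considering is keyset pagination, so that fetching a page does not force Postgres to generate and then discard the roughly 10.5 million rows that the OFFSET skips. A minimal sketch, assuming cid is a monotonically increasing key on event; the index name and the :last_seen_cid placeholder are mine, with :last_seen_cid standing for the lowest cid shown on the previous page:

-- Plain btree index on cid; Postgres can scan it backwards
-- to satisfy ORDER BY e.cid DESC without a separate sort.
CREATE INDEX event_cid_idx ON event (cid);

-- Page relative to the last row already displayed instead of
-- recounting millions of rows with OFFSET.
SELECT e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,
       e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,
       e.wifi_addr_2, e.view_status, bssid
FROM event e, signature s
WHERE s.sig_id = e.signature
  AND e.timestamp >= '1270449180'
  AND e.timestamp <  '1273473180'
  AND e.cid < :last_seen_cid   -- keyset cursor; omit for the first page
ORDER BY e.cid DESC
LIMIT 21;

For the first page the same query would simply run without the e.cid < :last_seen_cid predicate, which should return the newest 21 rows via the index rather than after a long scan.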

Can anyone suggest a better way to improve the performance?
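In case it is useful for diagnosing this, I can run the query under EXPLAIN ANALYZE (standard Postgres syntax) and post the output; the actual-row counts should show where the time goes, presumably in producing the millions of rows that the OFFSET then discards:

EXPLAIN ANALYZE
SELECT e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,
       e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,
       e.wifi_addr_2, e.view_status, bssid
FROM event e, signature s
WHERE s.sig_id = e.signature
  AND e.timestamp >= '1270449180'
  AND e.timestamp <  '1273473180'
ORDER BY e.cid DESC
LIMIT 21 OFFSET 10539780;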

Please let me know if you have any further questions.


Thank you,
Venu
