Large Table Performance

From Edoceo Lists
Subject Large Table Performance
Date
Msg-id 435986F2.9040002@edoceo.com
Responses Re: Large Table Performance  (Alex Stapleton <alexs@advfn.com>)
Re: Large Table Performance  (Michael Fuhr <mike@fuhr.org>)
List pgsql-general
List,
   I've got a problem where I need to build a table that grows by an average of 230,000 records per day. There are only 20 columns in the table, mostly char and integer. It's FK'd in two places to another table for import/export transaction ids, and I have a serial primary key and an index on a date column for when I need to search (every search is done inside a date range). I thought it would be OK, but after a few weeks of operation I have more than five million records in there. Some queries take more than five minutes to complete, and I'm sad about that. How can I make this faster? I could munge dates into integers if they're faster; I'm OK with that. What can I tweak in the configuration file to speed things up? What about some drastic schema change that more experience would have shown me? I cannot show the full schema, but it's like this:

-- My import/export data information table
ie_data (id serial primary key, date date, [12 other columns here])

big_transaction_table(id serial primary key, import_id int w/FK, export_id int w/FK, date date, [20 other necessary transaction detail columns])

So when I say

select x,y,z from big_transaction_table where date>='10/2/2005' and date<='10/4/2005' and transaction_status in (1,2,3) order by date;

it takes five+ minutes.
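For reference, the kind of change a reply might suggest for this query shape is a composite index that matches both the date range and the status filter, then checking the plan with EXPLAIN ANALYZE. This is only a sketch using the column names from the query above; the real column list and table definition aren't shown:

```sql
-- Sketch: a multicolumn index covering the range scan on date plus the
-- status filter. Column names are taken from the example query; adjust
-- to the real schema.
CREATE INDEX big_transaction_date_status_idx
    ON big_transaction_table (date, transaction_status);

ANALYZE big_transaction_table;

-- Verify the planner actually uses the index for the slow query:
EXPLAIN ANALYZE
SELECT x, y, z
FROM big_transaction_table
WHERE date >= '2005-10-02'
  AND date <= '2005-10-04'
  AND transaction_status IN (1, 2, 3)
ORDER BY date;
```

Because the index is ordered on date first, it can also satisfy the ORDER BY date clause without a separate sort step.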

TIA for any suggestions.

/djb
