Optimization recommendations request

From: Joe Conway
Subject: Optimization recommendations request
Date:
Msg-id: 018601c06d02$d007ace0$0705a8c0@jecw2k1
In reply to: Re: Invoice number  (Stephan Szabo <sszabo@megazone23.bigpanda.com>)
Responses: Re: Optimization recommendations request  (Stephan Szabo <sszabo@megazone23.bigpanda.com>)
           Re: Optimization recommendations request  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-sql
Hello,

I'm working on an application where I need to design for one table to grow
to an extremely large size. I'm already planning to partition the data into
multiple tables, and even possibly multiple servers, but even so each table
may need to grow to the 10 - 15 million tuple range. This table will be used
for a keyed lookup and it is very important that the query return in well
under a second. I've done a small test using a dual ppro 200 server with 512
MB RAM, running RedHat Linux 6.1, and a late November 7.1 pre-beta snapshot
of PostgreSQL (PGOPTS="-D/opt/pgsql/data -i  -B 25000"). I used a test table
with about 5 million tuples.

Details:

CREATE TABLE foo (
    guid varchar(20) not null,
    ks varchar(20) not null
);

-- insert 5,000,000 rows; both guid and ks are 20 byte sha-1 hashes
-- tried this first
-- create index foo_idx1 on foo(guid);
-- then tried
create index foo_idx1 on foo using HASH (guid);

SELECT ks FROM foo WHERE guid = 'f9ee1930f4010ba34cc5ca6efd27d988eb4f921d';

The query currently takes in excess of 40 seconds. I would appreciate any
suggestions for optimizations that would bring this down substantially.
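For context, the usual first diagnostic for a slow keyed lookup like this is to refresh the planner's statistics after the bulk load and then inspect the plan with EXPLAIN; a sketch against the table above (the plan-node names in the comments are what PostgreSQL typically reports, not output from this specific system):

```sql
-- Refresh planner statistics after the 5M-row bulk insert so the
-- optimizer knows the table's current size and value distribution.
VACUUM ANALYZE foo;

-- Inspect the plan: an "Index Scan using foo_idx1" node means the
-- index is being used; a "Seq Scan on foo" node means every tuple
-- is read, which would explain a 40+ second runtime.
EXPLAIN
SELECT ks FROM foo WHERE guid = 'f9ee1930f4010ba34cc5ca6efd27d988eb4f921d';
```

If the plan shows a sequential scan despite the index, the statistics or the index definition (hash vs. b-tree) are the first things to re-examine.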

Thanks in advance,

Joe Conway


