Extrapolating performance expectation

From: Rob Sargent
Subject: Extrapolating performance expectation
Date:
Msg-id: 5c4ddc540905172104t1a91b829tc2848798a31df886@mail.gmail.com
List: pgsql-sql
Can one extrapolate future performance expectations for ever-growing tables from a given (non-trivial) data set, and if so, with what curve? Corollary: what curve would one expect query execution time to approximate as a function of the number of data rows (hardware and load staying constant)?

I have user and group information on system usage. I would like to be able to do year-to-date counts per user given a single group id, but on the data for one business quarter the query is taking between 10 and 60+ seconds, depending both on the size of the group and on the group's total usage. Groups typically have 10-100 users and consume 20K - 80K records in a 9M record data set. The group id column is indexed, but it is not the primary index. (Sad note: two pseudo groups account for 50 percent of the total records, IIRC, and will never be used for the usage-by-group query below.)

This is a single table query:

select user_id, element_type, count(*)
from dataset
where group_id = N
group by user_id, element_type
order by user_id, element_type

Is this the sort of situation which might benefit from increasing the number of histogram bins (alter table alter column statistics (N>10))?

Any and all pointers appreciated,
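For reference, a minimal sketch of the two things mentioned above: raising the per-column statistics target (PostgreSQL's actual syntax is ALTER TABLE ... ALTER COLUMN ... SET STATISTICS) and a composite index matching the query's WHERE/GROUP BY shape. The target value of 100, the index name, and the literal group id are illustrative assumptions, not values from the original post.

-- Raise the statistics target for group_id (100 is an illustrative value),
-- then re-analyze so the planner sees the new histogram.
ALTER TABLE dataset ALTER COLUMN group_id SET STATISTICS 100;
ANALYZE dataset;

-- A composite index covering the filter and grouping columns
-- (index name is made up for illustration).
CREATE INDEX dataset_group_user_elem_idx
    ON dataset (group_id, user_id, element_type);

-- Compare the plan and actual timings before and after each change.
EXPLAIN ANALYZE
SELECT user_id, element_type, count(*)
FROM dataset
WHERE group_id = 42          -- substitute a real group id
GROUP BY user_id, element_type
ORDER BY user_id, element_type;

Running the EXPLAIN ANALYZE at a few different table sizes would also give concrete data points for the extrapolation question, rather than guessing at the curve.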
