Add free-behind capability for large sequential scans

From: Amit Kumar Khare
Subject: Add free-behind capability for large sequential scans
Date:
Msg-id: 20020212195814.37928.qmail@web10102.mail.yahoo.com
Responses: Re: Add free-behind capability for large sequential scans (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
Hi All,

(1) I am Amit Kumar Khare. I am doing an MCS from UIUC, USA, off-campus from India.

(2) We have been asked to enhance PostgreSQL in one of our assignments, so I have chosen to pick "Add free-behind capability for large sequential scans" from the TODO list. Many thanks to Mr. Bruce Momjian, who helped me out and suggested that I make a patch for this problem.

(3) As Mr. Bruce explained to me, the problem is that if, say, the cache size is 1 MB and a sequential scan is done through a 2 MB file over and over again, the cache becomes useless, because by the time the second read of the table happens the first 1 MB has already been forced out of the cache. Thus the idea is not to cache very large sequential scans, but to cache index scans and small sequential scans.
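
To convince myself of this, I wrote a toy simulation (plain C, nothing to do with the real buffer manager; the page counts just mirror the 1 MB cache / 2 MB table example above, assuming 8 kB pages). With a strict LRU cache smaller than the relation, repeated scans get a 0% hit rate because the page we need next has always just been evicted:

#include <stdio.h>
#include <string.h>

#define CACHE_PAGES 128     /* 1 MB of 8 kB pages */
#define REL_PAGES   256     /* 2 MB table */
#define PASSES      5

int main(void)
{
    int  cache[CACHE_PAGES];   /* cache[0] = LRU end, cache[CACHE_PAGES-1] = MRU end */
    long hits = 0, misses = 0;

    memset(cache, -1, sizeof(cache));   /* -1 means empty slot */

    for (int pass = 0; pass < PASSES; pass++)
    {
        for (int page = 0; page < REL_PAGES; page++)
        {
            int found = -1;

            for (int i = 0; i < CACHE_PAGES; i++)
            {
                if (cache[i] == page)
                {
                    found = i;
                    break;
                }
            }

            if (found >= 0)
            {
                /* hit: move the page to the MRU end */
                int tmp = cache[found];

                memmove(&cache[found], &cache[found + 1],
                        (CACHE_PAGES - found - 1) * sizeof(int));
                cache[CACHE_PAGES - 1] = tmp;
                hits++;
            }
            else
            {
                /* miss: evict the LRU slot, insert at the MRU end */
                memmove(&cache[0], &cache[1],
                        (CACHE_PAGES - 1) * sizeof(int));
                cache[CACHE_PAGES - 1] = page;
                misses++;
            }
        }
    }

    printf("hits = %ld, misses = %ld (hit rate %.1f%%)\n",
           hits, misses, 100.0 * hits / (hits + misses));
    return 0;
}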

(4) I think the problem arises because of the default LRU page-replacement policy, so we may have to make use of an MRU or LRU-K page-replacement policy instead.
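
For instance, one MRU-style variation I have in mind would be to put buffers filled by a large sequential scan at the head of the free list (next to be evicted) instead of at the tail, so that such a scan only ever recycles a handful of buffers instead of flushing the whole cache. The sketch below is only an illustration of that idea; the BufferDesc fields and the function name are made up and are not the actual freelist.c structures:

#include <stdbool.h>
#include <stddef.h>

typedef struct BufferDesc
{
    int                buf_id;
    bool               from_large_scan;   /* set when the scan that filled it was "large" */
    struct BufferDesc *free_next;
} BufferDesc;

static BufferDesc *FreeListHead;   /* next eviction victim */
static BufferDesc *FreeListTail;   /* evicted last */

static void
StrategyFreeBuffer(BufferDesc *buf)
{
    if (buf->from_large_scan)
    {
        /* MRU-style treatment: this buffer becomes the first eviction victim */
        buf->free_next = FreeListHead;
        FreeListHead = buf;
        if (FreeListTail == NULL)
            FreeListTail = buf;
    }
    else
    {
        /* normal LRU treatment: append to the tail, evicted last */
        buf->free_next = NULL;
        if (FreeListTail != NULL)
            FreeListTail->free_next = buf;
        else
            FreeListHead = buf;
        FreeListTail = buf;
    }
}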

(5) But I am not sure, and I would like more input on the problem description from you all. I have started reading the buffer manager code, and I found that freelist.c may need to be modified, and maybe some other files too, since we have to identify the large sequential scans.
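
For identifying such scans, one heuristic I am considering is to compare the relation's size in blocks against the size of the buffer pool (NBuffers) when the scan starts. Something like the sketch below, where the function name and the one-quarter threshold are arbitrary and purely for illustration:

#include <stdbool.h>

extern int NBuffers;    /* total number of shared buffers */

/*
 * Decide whether a sequential scan of a relation with rel_nblocks blocks
 * should be treated as "large" and have its pages freed behind it.
 */
static bool
scan_is_large(unsigned int rel_nblocks)
{
    return rel_nblocks > (unsigned int) (NBuffers / 4);
}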

Please help me out.

Regards
Amit Kumar Khare



