On Mon, 2005-03-28 at 11:32, Yudie Gunawan wrote:
> I have a table with more than 4 million records, and when I do a select
> query it gives me an "out of memory" error.
> Does postgres have a feature like table partitioning to handle tables
> with very many records?
> Just wondering, what do you guys do to deal with very large tables?
Is this a straight "select * from table", or is there more being done to
the data?

If it's a straight select, you are likely running out of client-side
memory trying to hold the entire result set at once, and need to look at
using a cursor to fetch the results in pieces.
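
As a minimal sketch (table and cursor names are made up), the cursor
approach in psql looks something like this:

```sql
BEGIN;

-- Declare a cursor over the query; no rows are materialized yet.
DECLARE bigtable_cur CURSOR FOR SELECT * FROM bigtable;

-- Fetch the result in batches; repeat this FETCH until it
-- returns zero rows.
FETCH FORWARD 10000 FROM bigtable_cur;

CLOSE bigtable_cur;
COMMIT;
```

Note that a plain cursor only lives inside a transaction block, so the
BEGIN/COMMIT around it is required (unless you declare it WITH HOLD).
Most client libraries expose the same idea, e.g. server-side cursors,
so you can do the batching from application code as well.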