Large selects handled inefficiently?

From: Jules Bean
Subject: Large selects handled inefficiently?
Date:
Msg-id: 20000830123039.A15150@grommit.office.vi.net
Replies: RE: Large selects handled inefficiently?  ("Andrew Snow" <als@fl.net.au>)
Re: Large selects handled inefficiently?  (andrew@ugh.net.au)
List: pgsql-general
Hiya,

I am running a very large SELECT - it selects every row from a
~10,000,000-row table. I'm running this in Perl using DBD::Pg, with the
general sequence:

# $dbh, $fields, $from, $file and @fields are set up earlier in the program
$sth = $dbh->prepare("SELECT $fields FROM $from") || return 0;
$fh = new FileHandle(">$file") || die "Can't open $file : $!";
$sth->execute()  || return 0;
# bind each output column to the matching element of @fields
$sth->bind_columns(undef, \(@fields));
while ($sth->fetch) {
      print $fh join("\t", @fields), "\n";
}
if ($sth->err) { return 0; }
close $fh;

...which works fine in most circumstances, but on this really big
select:

* the client process's memory gradually climbs to around 1 gigabyte
(ouch!)
* eventually the stdout of the process shows over and over again:

Backend sent D message without prior T

It seems as if the client is fetching the /whole/ query result, or
trying to, all at once.  I expected it to only actually fetch result
rows as I called fetch().

Is this:

* A deficiency in DBD::Pg?
* A deficiency in the postgresql client libraries?
* A silly bug in my code?

I believe I can work around this problem using cursors (although I
don't know how well DBD::Pg copes with cursors).  However, that
doesn't seem right -- cursors shouldn't be needed just to fetch a large
query without having it all in memory at once...
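
Roughly what I have in mind for that workaround, as a minimal sketch
against the same $dbh, $fields, $from and $file as above (the cursor
name csr and the FETCH batch size of 10000 are arbitrary):

# Sketch only: DECLARE a cursor and FETCH it in batches, so the client
# only ever buffers one batch of rows at a time.
$dbh->{AutoCommit} = 0;      # cursors only live inside a transaction
$dbh->do("DECLARE csr CURSOR FOR SELECT $fields FROM $from") || return 0;
$fh = new FileHandle(">$file") || die "Can't open $file : $!";
while (1) {
      $sth = $dbh->prepare("FETCH 10000 FROM csr") || return 0;
      $sth->execute() || return 0;
      last if $sth->rows == 0;      # cursor exhausted
      while (my @row = $sth->fetchrow_array) {
            print $fh join("\t", @row), "\n";
      }
}
$dbh->do("CLOSE csr");
$dbh->commit;
close $fh;

The whole file still gets written, but the client memory use is bounded
by one FETCH batch rather than the full result set.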

Jules
