Re: stored proc and inserting hundreds of thousands of rows

From: Joel Reymont
Subject: Re: stored proc and inserting hundreds of thousands of rows
Date:
Msg-id: 4881277622113933088@unknownmsgid
In reply to: Re: stored proc and inserting hundreds of thousands of rows  ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
Responses: Re: stored proc and inserting hundreds of thousands of rows  ("Pierre C" <lists@peufeu.com>)
  Re: stored proc and inserting hundreds of thousands of rows  ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
List: pgsql-performance
Calculating distance involves giving an array of 150 float8 to a pgsql
function, then calling a C function 2 million times (at the moment),
giving it two arrays of 150 float8.
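The thread does not say which distance metric the C function computes; as a minimal illustration, here is a Python stand-in assuming plain Euclidean distance over two 150-element float8 vectors (the function name and metric are assumptions, not taken from the post):

```python
import math

DIM = 150  # each document/ad is represented by 150 float8 values


def distance(a, b):
    # Hypothetical stand-in for the C function discussed in the thread:
    # Euclidean distance between two 150-element vectors. The actual
    # metric used by the poster is not stated.
    assert len(a) == len(b) == DIM
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```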

Just calculating distance for 2 million rows and extracting the
distance takes less than a second. I think that includes sorting by
distance and sending 100 rows to the client.

Are you suggesting eliminating the physical linking and calculating
matching documents on the fly?

Is there a way to speed up my C function by giving it all the float
arrays, calling it once and having it return a set of matches? Would
this be faster than calling it from a select, once for each array?
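For comparison, a batched (set-returning) variant could accept all document vectors at once, score them against the query in a single call, and hand back only the nearest matches already sorted, rather than paying per-row call overhead 2 million times. A rough Python sketch of that shape (names, metric, and the `limit=100` default are illustrative assumptions):

```python
import heapq
import math


def distance(a, b):
    # Hypothetical Euclidean metric as a stand-in for the C function.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def top_matches(query, documents, limit=100):
    # One batched call: score every (doc_id, vector) pair against the
    # query, keep only the `limit` nearest, and return them sorted by
    # distance -- roughly what a set-returning function would do
    # server-side instead of being invoked once per row by a SELECT.
    scored = ((distance(query, vec), doc_id) for doc_id, vec in documents)
    return heapq.nsmallest(limit, scored)
```

Whether this actually wins depends on where the time goes: a per-row call pays function-call and array-conversion overhead for every row, while a batched set-returning function amortizes it, but the distance arithmetic itself costs the same either way.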

Sent from my comfortable recliner

On 30/04/2011, at 18:28, Kevin Grittner <Kevin.Grittner@wicourts.gov> wrote:

> Joel Reymont <joelr1@gmail.com> wrote:
>
>> We have 2 million documents now and linking an ad to all of them
>> takes 5 minutes on my top-of-the-line SSD MacBook Pro.
>
> How long does it take to run just the SELECT part of the INSERT by
> itself?
>
> -Kevin
