Re: BUG #16148: Query on Large table hangs in ETL flows and gives out of memory when run in pgAdmin4

From: Scott Volkers
Subject: Re: BUG #16148: Query on Large table hangs in ETL flows and gives out of memory when run in pgAdmin4
Date:
Msg-id: CAFOAN5pTT6hWtmsbo9ni2yEvRH7QXOwB5G+CSpDvSdUwi=rodg@mail.gmail.com
In response to: Re: BUG #16148: Query on Large table hangs in ETL flows and gives out of memory when run in pgAdmin4  (Jeff Janes <jeff.janes@gmail.com>)
Responses: Re: BUG #16148: Query on Large table hangs in ETL flows and gives out of memory when run in pgAdmin4  (Jeff Janes <jeff.janes@gmail.com>)
List: pgsql-bugs
Hi Jeff,

I may not have explained this well.

The long and short of it is that this WHERE clause

FROM "elliedb"."documentlog" WHERE dcmodifiedutc > (extract(epoch FROM TIMESTAMP '2019-11-15 11:30:51') * 1000)

causes an out of memory error in pgAdmin4; the query will not run. I am testing it there because the same query will not run in an Informatica ETL task flow either: it hangs our processes, and no error is returned on the Informatica side.

My reference to aggregation was a presumption about what the PostgreSQL engine is doing to produce the result set.
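
For reference, the conversion itself can be computed once and reused as a plain constant. A minimal sketch, assuming dcmodifiedutc stores epoch milliseconds in a bigint column (the column type is not stated in this thread):

-- Evaluate the timestamp-to-milliseconds conversion once;
-- extract(epoch FROM ...) returns double precision, so cast to
-- bigint to get an integer millisecond value.
SELECT (extract(epoch FROM TIMESTAMP '2019-11-15 11:30:51') * 1000)::bigint;
-- => 1573817451000 (a timestamp without time zone is treated as UTC here)

-- The filter can then compare against the plain literal:
-- WHERE dcmodifiedutc > 1573817451000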

On Wed, Dec 4, 2019 at 5:05 PM Jeff Janes <jeff.janes@gmail.com> wrote:
On Wed, Dec 4, 2019 at 9:20 AM PG Bug reporting form <noreply@postgresql.org> wrote:
The following bug has been logged on the website:

Bug reference:      16148
Logged by:          Scott Volkers
Email address:      scottvolkers@gmail.com
PostgreSQL version: 9.5.15
Operating system:   PostgreSQL 9.5.15 on x86_64-pc-linux-gnu, compiled
Description:       

We have a large table and the error occurs with this where clause:

FROM "elliedb"."documentlog" WHERE dcmodifiedutc > (extract(epoch FROM TIMESTAMP '2019-11-15 11:30:51') * 1000)

When we reduce the scope to the current time minus 4 hours, the query completes within 44 seconds:

where dcmodifiedutc > '1575282651000'

Is this expected? Is this a version issue, given we are only on 9.5?
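
As an aside, a "now minus 4 hours" cutoff in epoch milliseconds can be computed the same way; a small sketch, not part of the original report:

SELECT (extract(epoch FROM now() - interval '4 hours') * 1000)::bigint;
-- yields the cutoff as a bigint millisecond value, like the literal above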

From "Now minus 4" hours to now covers 100 fold less time than from   2019-11-15 11:30:51 until now does.  Assuming your data is evenly distributed over the past and doesn't have data from the future, then I think that yes, selecting 100 time more data is expected to take more time and more memory.  pgAdmin4 is not well suited to loading giant data sets into memory.  You can extract large data sets directly into files.  This will not depend on the version.

 
It seems the timestamp conversion would be done once and applied to the filter, but it seems to be ballooning the query result being aggregated for the where clause?


Is aggregation being used?  You haven't shown any aggregation.
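
(For reference, one way to check is to inspect the plan without running the query; a minimal sketch using the filter from the report. Plain EXPLAIN only plans the query, whereas EXPLAIN ANALYZE would execute it and could hang again:)

EXPLAIN
SELECT * FROM "elliedb"."documentlog"
WHERE dcmodifiedutc > (extract(epoch FROM TIMESTAMP '2019-11-15 11:30:51') * 1000);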

Cheers,

Jeff


--
Thanks,

Scott Volkers
