Discussion: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects


PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

From:
Sergey Klochkov
Date:
Hello All,

While trying to back up a database of relatively modest size (160 Gb), I
ran into the following issue:

When I run
$ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb

File /path/to/mydb.dmp does not appear (yes, I've checked permissions
and so on). pg_dump just begins to consume memory until it eats up all
available RAM (96 Gb total on server, >64 Gb available) and is killed by
the OOM killer.

According to pg_stat_activity, pg_dump runs the following query

SELECT oid, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid =
lomowner) AS rolname, lomacl FROM pg_largeobject_metadata

until it is killed.

strace shows that pg_dump is constantly reading a large amount of data
from a UNIX socket. I suspect that it is the result of the above query.

There are >300000000 large objects in the database. Please don't ask me why.
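
For reference, the number of rows pg_dump will have to enumerate can be
confirmed with a single query; a minimal sketch, using the database name
from the example above:

$ psql -d mydb -At -c "SELECT count(*) FROM pg_largeobject_metadata;"

pg_dump keeps an in-memory entry for every row returned here (see the
discussion further down this thread), which is consistent with the client,
not the server, being the process that runs out of memory.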

I tried googling this, and found mentions of pg_dump being killed by the
OOM killer, but I failed to find anything related to the huge number of
large objects.

Is there any method of working around this issue?

Thanks in advance.

OS: CentOS 6
PostgreSQL version: 9.2.1
96 Gb RAM

PostgreSQL configuration:

listen_addresses = '*'          # what IP address(es) to listen on;
port = 5432                             # (change requires restart)
max_connections = 500                   # (change requires restart)
shared_buffers = 16GB                  # min 128kB
temp_buffers = 64MB                     # min 800kB
work_mem = 512MB                        # min 64kB
maintenance_work_mem = 30000MB          # min 1MB
checkpoint_segments = 70                # in logfile segments, min 1, 16MB each
effective_cache_size = 50000MB
logging_collector = on                  # Enable capturing of stderr and csvlog
log_directory = 'pg_log'                # directory where log files are written,
log_filename = 'postgresql-%a.log'      # log file name pattern,
log_truncate_on_rotation = on           # If on, an existing log file of the
log_rotation_age = 1d                   # Automatic rotation of logfiles will
log_rotation_size = 0                   # Automatic rotation of logfiles will
log_min_duration_statement = 5000
log_line_prefix = '%t'                  # special values:
autovacuum = on                         # Enable autovacuum subprocess?  'on'
log_autovacuum_min_duration = 0         # -1 disables, 0 logs all actions and
autovacuum_max_workers = 5              # max number of autovacuum subprocesses
autovacuum_naptime =    5s              # time between autovacuum runs
autovacuum_vacuum_threshold = 25        # min number of row updates before
autovacuum_vacuum_scale_factor = 0.1    # fraction of table size before vacuum
autovacuum_vacuum_cost_delay = 7ms      # default vacuum cost delay for
autovacuum_vacuum_cost_limit = 1500     # default vacuum cost limit for
datestyle = 'iso, dmy'
lc_monetary = 'ru_RU.UTF-8'                     # locale for monetary formatting
lc_numeric = 'ru_RU.UTF-8'                      # locale for number formatting
lc_time = 'ru_RU.UTF-8'                         # locale for time formatting
default_text_search_config = 'pg_catalog.russian'

--
Sergey Klochkov
klochkov@iqbuzz.ru


Try updating to the latest release. I see there is a bug fix for pg_dump running out of memory in 9.2.2; from the release notes http://www.postgresql.org/docs/devel/static/release-9-2-2.html:
  • Work around unportable behavior of malloc(0) and realloc(NULL, 0) (Tom Lane)

    On platforms where these calls return NULL, some code mistakenly thought that meant out-of-memory. This is known to have broken pg_dump for databases containing no user-defined aggregates. There might be other cases as well.
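
A quick way to double-check which versions are actually in play before and
after upgrading (a minimal sketch; mydb is the database from the original
post):

$ pg_dump --version
$ psql -d mydb -At -c "SELECT version();"

Since the workaround described above affects pg_dump itself, the version
reported by the pg_dump binary is the relevant one for that particular fix.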




2013/10/1 Sergey Klochkov <klochkov@iqbuzz.ru>
Hello All,

While trying to backup a database of relatively modest size (160 Gb) I ran into the following issue:

When I run
$ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb

File /path/to/mydb.dmp does not appear (yes, I've checked permissions and so on). pg_dump just begins to consume memory until it eats up all avaliable RAM (96 Gb total on server, >64 Gb available) and is killed by the oom killer.

According to pg_stat_activity, pg_dump runs the following query

SELECT oid, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = lomowner) AS rolname, lomacl FROM pg_largeobject_metadata

until it is killed.

strace shows that pg_dump is constantly reading a large amount of data from a UNIX socket. I suspect that it is the result of the above query.

There are >300000000 large objects in the database. Please don't ask me why.

I tried googling on this, and found mentions of pg_dump being killed by oom killer, but I failed to find anything related to the huge large objects number.

Is there any method of working around this issue?

Thanks in advance.

OS: CentOS 6
PostgreSQL version: 9.2.1
96 Gb RAM

PostgreSQL configuration:

listen_addresses = '*'          # what IP address(es) to listen on;
port = 5432                             # (change requires restart)
max_connections = 500                   # (change requires restart)
shared_buffers = 16GB                  # min 128kB
temp_buffers = 64MB                     # min 800kB
work_mem = 512MB                        # min 64kB
maintenance_work_mem = 30000MB          # min 1MB
checkpoint_segments = 70                # in logfile segments, min 1, 16MB each
effective_cache_size = 50000MB
logging_collector = on                  # Enable capturing of stderr and csvlog
log_directory = 'pg_log'                # directory where log files are written,
log_filename = 'postgresql-%a.log'      # log file name pattern,
log_truncate_on_rotation = on           # If on, an existing log file of the
log_rotation_age = 1d                   # Automatic rotation of logfiles will
log_rotation_size = 0                   # Automatic rotation of logfiles will
log_min_duration_statement = 5000
log_line_prefix = '%t'                  # special values:
autovacuum = on                         # Enable autovacuum subprocess?  'on'
log_autovacuum_min_duration = 0         # -1 disables, 0 logs all actions and
autovacuum_max_workers = 5              # max number of autovacuum subprocesses
autovacuum_naptime =    5s              # time between autovacuum runs
autovacuum_vacuum_threshold = 25        # min number of row updates before
autovacuum_vacuum_scale_factor = 0.1    # fraction of table size before vacuum
autovacuum_vacuum_cost_delay = 7ms      # default vacuum cost delay for
autovacuum_vacuum_cost_limit = 1500     # default vacuum cost limit for
datestyle = 'iso, dmy'
lc_monetary = 'ru_RU.UTF-8'                     # locale for monetary formatting
lc_numeric = 'ru_RU.UTF-8'                      # locale for number formatting
lc_time = 'ru_RU.UTF-8'                         # locale for time formatting
default_text_search_config = 'pg_catalog.russian'

--
Sergey Klochkov
klochkov@iqbuzz.ru



Re: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

From:
Sergey Klochkov
Date:
I've upgraded to 9.2.4. The problem still persists. It consumed 10 Gb of
RAM in 5 minutes and is still growing. The dump file did not appear.

On 01.10.2013 14:04, Jov wrote:
> Try update to the latest release,I see there is a bug fix about pg_dump
> out of memroy in 9.2.2,from the release note
> http://www.postgresql.org/docs/devel/static/release-9-2-2.html:
>
>   *
>
>     Work around unportable behavior of malloc(0) and realloc(NULL,
>     0) (Tom Lane)
>
>     On platforms where these calls return NULL, some code mistakenly
>     thought that meant out-of-memory. This is known to have broken
>     pg_dump for databases containing no user-defined aggregates. There
>     might be other cases as well.
>
>
> Jov
> blog: http:amutu.com/blog <http://amutu.com/blog>
>
>
> 2013/10/1 Sergey Klochkov <klochkov@iqbuzz.ru <mailto:klochkov@iqbuzz.ru>>
>
>     Hello All,
>
>     While trying to backup a database of relatively modest size (160 Gb)
>     I ran into the following issue:
>
>     When I run
>     $ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb
>
>     File /path/to/mydb.dmp does not appear (yes, I've checked
>     permissions and so on). pg_dump just begins to consume memory until
>     it eats up all avaliable RAM (96 Gb total on server, >64 Gb
>     available) and is killed by the oom killer.
>
>     According to pg_stat_activity, pg_dump runs the following query
>
>     SELECT oid, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid =
>     lomowner) AS rolname, lomacl FROM pg_largeobject_metadata
>
>     until it is killed.
>
>     strace shows that pg_dump is constantly reading a large amount of
>     data from a UNIX socket. I suspect that it is the result of the
>     above query.
>
>     There are >300000000 large objects in the database. Please don't ask
>     me why.
>
>     I tried googling on this, and found mentions of pg_dump being killed
>     by oom killer, but I failed to find anything related to the huge
>     large objects number.
>
>     Is there any method of working around this issue?
>
>     Thanks in advance.
>
>     OS: CentOS 6
>     PostgreSQL version: 9.2.1
>     96 Gb RAM
>
>     PostgreSQL configuration:
>
>     listen_addresses = '*'          # what IP address(es) to listen on;
>     port = 5432                             # (change requires restart)
>     max_connections = 500                   # (change requires restart)
>     shared_buffers = 16GB                  # min 128kB
>     temp_buffers = 64MB                     # min 800kB
>     work_mem = 512MB                        # min 64kB
>     maintenance_work_mem = 30000MB          # min 1MB
>     checkpoint_segments = 70                # in logfile segments, min
>     1, 16MB each
>     effective_cache_size = 50000MB
>     logging_collector = on                  # Enable capturing of stderr
>     and csvlog
>     log_directory = 'pg_log'                # directory where log files
>     are written,
>     log_filename = 'postgresql-%a.log'      # log file name pattern,
>     log_truncate_on_rotation = on           # If on, an existing log
>     file of the
>     log_rotation_age = 1d                   # Automatic rotation of
>     logfiles will
>     log_rotation_size = 0                   # Automatic rotation of
>     logfiles will
>     log_min_duration_statement = 5000
>     log_line_prefix = '%t'                  # special values:
>     autovacuum = on                         # Enable autovacuum
>     subprocess?  'on'
>     log_autovacuum_min_duration = 0         # -1 disables, 0 logs all
>     actions and
>     autovacuum_max_workers = 5              # max number of autovacuum
>     subprocesses
>     autovacuum_naptime =    5s              # time between autovacuum runs
>     autovacuum_vacuum_threshold = 25        # min number of row updates
>     before
>     autovacuum_vacuum_scale_factor = 0.1    # fraction of table size
>     before vacuum
>     autovacuum_vacuum_cost_delay = 7ms      # default vacuum cost delay for
>     autovacuum_vacuum_cost_limit = 1500     # default vacuum cost limit for
>     datestyle = 'iso, dmy'
>     lc_monetary = 'ru_RU.UTF-8'                     # locale for
>     monetary formatting
>     lc_numeric = 'ru_RU.UTF-8'                      # locale for number
>     formatting
>     lc_time = 'ru_RU.UTF-8'                         # locale for time
>     formatting
>     default_text_search_config = 'pg_catalog.russian'
>
>     --
>     Sergey Klochkov
>     klochkov@iqbuzz.ru <mailto:klochkov@iqbuzz.ru>
>
>
>     --
>     Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org
>     <mailto:pgsql-admin@postgresql.org>)
>     To make changes to your subscription:
>     http://www.postgresql.org/__mailpref/pgsql-admin
>     <http://www.postgresql.org/mailpref/pgsql-admin>
>
>

--
Sergey Klochkov
klochkov@iqbuzz.ru


Re: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

From:
Sergey Klochkov
Date:
Stack trace:

Thread 1 (Thread 0x7ff72c4c97c0 (LWP 13086)):
#0  removeHeapElement (objs=0x1a0c90630, numObjs=<value optimized out>, preBoundaryId=<value optimized out>, postBoundaryId=<value optimized out>) at pg_dump_sort.c:502
#1  TopoSort (objs=0x1a0c90630, numObjs=<value optimized out>, preBoundaryId=<value optimized out>, postBoundaryId=<value optimized out>) at pg_dump_sort.c:415
#2  sortDumpableObjects (objs=0x1a0c90630, numObjs=<value optimized out>, preBoundaryId=<value optimized out>, postBoundaryId=<value optimized out>) at pg_dump_sort.c:280
#3  0x000000000041acd1 in main (argc=<value optimized out>, argv=<value optimized out>) at pg_dump.c:747
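
For anyone who wants to collect the same kind of trace, it can be captured
from the running pg_dump process with gdb; a minimal sketch, assuming gdb
and the PostgreSQL debuginfo package are installed:

$ gdb --batch -ex bt -p "$(pgrep -n pg_dump)"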



On 01.10.2013 14:23, Sergey Klochkov wrote:
> I've upgraded to 9.2.4. The problem still persists. It consumed 10 Gb of
> RAM in 5 minutes and still grows. The dump file did not appear.
>
> On 01.10.2013 14:04, Jov wrote:
>> Try update to the latest release,I see there is a bug fix about pg_dump
>> out of memroy in 9.2.2,from the release note
>> http://www.postgresql.org/docs/devel/static/release-9-2-2.html:
>>
>>   *
>>
>>     Work around unportable behavior of malloc(0) and realloc(NULL,
>>     0) (Tom Lane)
>>
>>     On platforms where these calls return NULL, some code mistakenly
>>     thought that meant out-of-memory. This is known to have broken
>>     pg_dump for databases containing no user-defined aggregates. There
>>     might be other cases as well.
>>
>>
>> Jov
>> blog: http:amutu.com/blog <http://amutu.com/blog>
>>
>>
>> 2013/10/1 Sergey Klochkov <klochkov@iqbuzz.ru
>> <mailto:klochkov@iqbuzz.ru>>
>>
>>     Hello All,
>>
>>     While trying to backup a database of relatively modest size (160 Gb)
>>     I ran into the following issue:
>>
>>     When I run
>>     $ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb
>>
>>     File /path/to/mydb.dmp does not appear (yes, I've checked
>>     permissions and so on). pg_dump just begins to consume memory until
>>     it eats up all avaliable RAM (96 Gb total on server, >64 Gb
>>     available) and is killed by the oom killer.
>>
>>     According to pg_stat_activity, pg_dump runs the following query
>>
>>     SELECT oid, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid =
>>     lomowner) AS rolname, lomacl FROM pg_largeobject_metadata
>>
>>     until it is killed.
>>
>>     strace shows that pg_dump is constantly reading a large amount of
>>     data from a UNIX socket. I suspect that it is the result of the
>>     above query.
>>
>>     There are >300000000 large objects in the database. Please don't ask
>>     me why.
>>
>>     I tried googling on this, and found mentions of pg_dump being killed
>>     by oom killer, but I failed to find anything related to the huge
>>     large objects number.
>>
>>     Is there any method of working around this issue?
>>
>>     Thanks in advance.
>>
>>     OS: CentOS 6
>>     PostgreSQL version: 9.2.1
>>     96 Gb RAM
>>
>>     PostgreSQL configuration:
>>
>>     listen_addresses = '*'          # what IP address(es) to listen on;
>>     port = 5432                             # (change requires restart)
>>     max_connections = 500                   # (change requires restart)
>>     shared_buffers = 16GB                  # min 128kB
>>     temp_buffers = 64MB                     # min 800kB
>>     work_mem = 512MB                        # min 64kB
>>     maintenance_work_mem = 30000MB          # min 1MB
>>     checkpoint_segments = 70                # in logfile segments, min
>>     1, 16MB each
>>     effective_cache_size = 50000MB
>>     logging_collector = on                  # Enable capturing of stderr
>>     and csvlog
>>     log_directory = 'pg_log'                # directory where log files
>>     are written,
>>     log_filename = 'postgresql-%a.log'      # log file name pattern,
>>     log_truncate_on_rotation = on           # If on, an existing log
>>     file of the
>>     log_rotation_age = 1d                   # Automatic rotation of
>>     logfiles will
>>     log_rotation_size = 0                   # Automatic rotation of
>>     logfiles will
>>     log_min_duration_statement = 5000
>>     log_line_prefix = '%t'                  # special values:
>>     autovacuum = on                         # Enable autovacuum
>>     subprocess?  'on'
>>     log_autovacuum_min_duration = 0         # -1 disables, 0 logs all
>>     actions and
>>     autovacuum_max_workers = 5              # max number of autovacuum
>>     subprocesses
>>     autovacuum_naptime =    5s              # time between autovacuum
>> runs
>>     autovacuum_vacuum_threshold = 25        # min number of row updates
>>     before
>>     autovacuum_vacuum_scale_factor = 0.1    # fraction of table size
>>     before vacuum
>>     autovacuum_vacuum_cost_delay = 7ms      # default vacuum cost
>> delay for
>>     autovacuum_vacuum_cost_limit = 1500     # default vacuum cost
>> limit for
>>     datestyle = 'iso, dmy'
>>     lc_monetary = 'ru_RU.UTF-8'                     # locale for
>>     monetary formatting
>>     lc_numeric = 'ru_RU.UTF-8'                      # locale for number
>>     formatting
>>     lc_time = 'ru_RU.UTF-8'                         # locale for time
>>     formatting
>>     default_text_search_config = 'pg_catalog.russian'
>>
>>     --
>>     Sergey Klochkov
>>     klochkov@iqbuzz.ru <mailto:klochkov@iqbuzz.ru>
>>
>>
>>     --
>>     Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org
>>     <mailto:pgsql-admin@postgresql.org>)
>>     To make changes to your subscription:
>>     http://www.postgresql.org/__mailpref/pgsql-admin
>>     <http://www.postgresql.org/mailpref/pgsql-admin>
>>
>>
>

--
Sergey Klochkov
klochkov@iqbuzz.ru


Re: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

From:
Giuseppe Broccolo
Date:
Maybe you can improve your database's behaviour by setting some parameters properly:
>
> PostgreSQL configuration:
>
> listen_addresses = '*'          # what IP address(es) to listen on;
> port = 5432                             # (change requires restart)
> max_connections = 500                   # (change requires restart)
Set it to 100, the highest value supported by PostgreSQL
> shared_buffers = 16GB                  # min 128kB
This value should not be higher than 8GB
> temp_buffers = 64MB                     # min 800kB
> work_mem = 512MB                        # min 64kB
> maintenance_work_mem = 30000MB          # min 1MB
Given RAM 96GB, you could set it up to 4800MB
> checkpoint_segments = 70                # in logfile segments, min 1,
> 16MB each
> effective_cache_size = 50000MB
Given RAM 96GB, you could set it up to 80GB
>

Hope it can help.

Giuseppe.

--
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.broccolo@2ndQuadrant.it | www.2ndQuadrant.it



Re: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

From:
Sergey Klochkov
Date:
No, it did not make any difference. And after looking through pg_dump.c
and pg_dump_sort.c, I cannot tell how it possibly could. See the
stack trace that I've sent to the list.

Thanks.

On 01.10.2013 15:01, Giuseppe Broccolo wrote:
> Maybe you can performe your database changing some parameters properly:
>>
>> PostgreSQL configuration:
>>
>> listen_addresses = '*'          # what IP address(es) to listen on;
>> port = 5432                             # (change requires restart)
>> max_connections = 500                   # (change requires restart)
> Set it to 100, the highest value supported by PostgreSQL
>> shared_buffers = 16GB                  # min 128kB
> This value should not be higher than 8GB
>> temp_buffers = 64MB                     # min 800kB
>> work_mem = 512MB                        # min 64kB
>> maintenance_work_mem = 30000MB          # min 1MB
> Given RAM 96GB, you could set it up to 4800MB
>> checkpoint_segments = 70                # in logfile segments, min 1,
>> 16MB each
>> effective_cache_size = 50000MB
> Given RAM 96GB, you could set it up to 80GB
>>
>
> Hope it can help.
>
> Giuseppe.
>

--
Sergey Klochkov
klochkov@iqbuzz.ru


Re: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

From:
bricklen
Date:

On Tue, Oct 1, 2013 at 4:01 AM, Giuseppe Broccolo <giuseppe.broccolo@2ndquadrant.it> wrote:
Maybe you can performe your database changing some parameters properly:

max_connections = 500                   # (change requires restart)
Set it to 100, the highest value supported by PostgreSQL

Surely you mean that max_connections = 100 is the *default*?

Re: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

From:
Magnus Hagander
Date:
On Tue, Oct 1, 2013 at 11:07 AM, Sergey Klochkov <klochkov@iqbuzz.ru> wrote:
> Hello All,
>
> While trying to backup a database of relatively modest size (160 Gb) I ran
> into the following issue:
>
> When I run
> $ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb
>
> File /path/to/mydb.dmp does not appear (yes, I've checked permissions and so
> on). pg_dump just begins to consume memory until it eats up all avaliable
> RAM (96 Gb total on server, >64 Gb available) and is killed by the oom
> killer.
>
> According to pg_stat_activity, pg_dump runs the following query
>
> SELECT oid, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = lomowner)
> AS rolname, lomacl FROM pg_largeobject_metadata
>
> until it is killed.
>
> strace shows that pg_dump is constantly reading a large amount of data from
> a UNIX socket. I suspect that it is the result of the above query.
>
> There are >300000000 large objects in the database. Please don't ask me why.
>
> I tried googling on this, and found mentions of pg_dump being killed by oom
> killer, but I failed to find anything related to the huge large objects
> number.
>
> Is there any method of working around this issue?

I think this problem comes from the fact that pg_dump treats each
large object as its own item. See getBlobs(), which allocates a
BlobInfo struct for each LO (and a DumpableObject if there are any,
but that's just one).

I assume the query (from that file):
SELECT oid, lomacl FROM pg_largeobject_metadata

returns 300000000 rows, which are then looped over?

I ran into a similar issue a few years ago with a client using a
32-bit version of pg_dump, and worked around it by moving to
64-bit. Unfortunately, I did not have time to look at the underlying
issue.
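
A quick way to rule that out here is to check how the installed pg_dump
binary was built; a minimal sketch, assuming the standard file utility is
available:

$ file "$(which pg_dump)"

A 32-bit binary is limited to a few GB of address space, so with hundreds
of millions of per-object entries it would fail far sooner than a 64-bit
build.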


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

From:
Alejandro Brust
Date:
Did you perform any vacuumdb / reindexdb before the pg_dump?


On 01/10/2013 09:49, Magnus Hagander wrote:
> On Tue, Oct 1, 2013 at 11:07 AM, Sergey Klochkov <klochkov@iqbuzz.ru> wrote:
>> Hello All,
>>
>> While trying to backup a database of relatively modest size (160 Gb) I ran
>> into the following issue:
>>
>> When I run
>> $ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb
>>
>> File /path/to/mydb.dmp does not appear (yes, I've checked permissions and so
>> on). pg_dump just begins to consume memory until it eats up all avaliable
>> RAM (96 Gb total on server, >64 Gb available) and is killed by the oom
>> killer.
>>
>> According to pg_stat_activity, pg_dump runs the following query
>>
>> SELECT oid, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = lomowner)
>> AS rolname, lomacl FROM pg_largeobject_metadata
>>
>> until it is killed.
>>
>> strace shows that pg_dump is constantly reading a large amount of data from
>> a UNIX socket. I suspect that it is the result of the above query.
>>
>> There are >300000000 large objects in the database. Please don't ask me why.
>>
>> I tried googling on this, and found mentions of pg_dump being killed by oom
>> killer, but I failed to find anything related to the huge large objects
>> number.
>>
>> Is there any method of working around this issue?
> I think this problem comes from the fact that pg_dump treats each
> large object as it's own item. See getBlobs() which allocates a
> BlobInfo struct for each LO (and a DumpableObject if there are any,
> but that's just one).
>
> I assume the query (from that file):
> SELECT oid, lomacl FROM pg_largeobject_metadata
>
> returns 300000000 rows, which are then looped over?
>
> I ran into a similar issue a few years ago with a client using a
> 32-bit version of pg_dump, and got it worked around by moving to
> 64-bit. Did unfortunately not have time to look at the underlying
> issue.
>
>



DB link from postgres to Oracle; how to query Dbname.tablename?

From:
Bhanu Murthy
Date:
Hi all, greetings!
 
Using Oracle Heterogeneous Services (Oracle HS) I have configured/created a DB link from Postgres 9.3 database into Oracle 11gR3 database (with postgres DB user credentials).
 
SQL>  create public database link pg_link connect to "postgres" identified by "blahblah" using 'postgresql';
Since Postgres does not support public synonyms across databases in a cluster, how do I connect to a specific database and query a specific table in this Postgres cluster using the HS DB link?
 
Let's say I have 2 Postgres databases named pgdb01 and pgdb02 in the Postgres cluster; using this DB link that I have created in Oracle, how can I query a specific table called table01 from the pgdb01 database?
 
Even though the table user_account exists in pgdb01 database, I cannot select from it using the DB link.
 
SQL> select count(*) from "user_account"@pg_link;
select count(*) from "user_account"@pg_link;
                     *
ERROR at line 1:
ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
ERROR:  relation "user_account" does not exist at character 21;
No query has been executed with that handle {HY000,NativeErr = 1}
ORA-02063: preceding 3 lines from PG_LINK;
 
I tried the dbname.tablename syntax, but it didn't work! BTW, all my tables belong to the public schema.

Can anyone with DB link expertise answer my question?
 
Thanks,
Bhanu M. Gandikota
Mobile: (415) 420-7740

From: Alejandro Brust <alejandrob@pasteleros.org.ar>
To: pgsql-admin@postgresql.org
Sent: Tuesday, October 1, 2013 12:30 PM
Subject: Re: [ADMIN] PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

Did U perform  any vacuumdb / reindexdb before the Pg_dump?


On 01/10/2013 09:49, Magnus Hagander wrote:
> On Tue, Oct 1, 2013 at 11:07 AM, Sergey Klochkov <klochkov@iqbuzz.ru> wrote:
>> Hello All,
>>
>> While trying to backup a database of relatively modest size (160 Gb) I ran
>> into the following issue:
>>
>> When I run
>> $ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb
>>
>> File /path/to/mydb.dmp does not appear (yes, I've checked permissions and so
>> on). pg_dump just begins to consume memory until it eats up all avaliable
>> RAM (96 Gb total on server, >64 Gb available) and is killed by the oom
>> killer.
>>
>> According to pg_stat_activity, pg_dump runs the following query
>>
>> SELECT oid, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = lomowner)
>> AS rolname, lomacl FROM pg_largeobject_metadata
>>
>> until it is killed.
>>
>> strace shows that pg_dump is constantly reading a large amount of data from
>> a UNIX socket. I suspect that it is the result of the above query.
>>
>> There are >300000000 large objects in the database. Please don't ask me why.
>>
>> I tried googling on this, and found mentions of pg_dump being killed by oom
>> killer, but I failed to find anything related to the huge large objects
>> number.
>>
>> Is there any method of working around this issue?
> I think this problem comes from the fact that pg_dump treats each
> large object as it's own item. See getBlobs() which allocates a
> BlobInfo struct for each LO (and a DumpableObject if there are any,
> but that's just one).
>
> I assume the query (from that file):
> SELECT oid, lomacl FROM pg_largeobject_metadata
>
> returns 300000000 rows, which are then looped over?
>
> I ran into a similar issue a few years ago with a client using a
> 32-bit version of pg_dump, and got it worked around by moving to
> 64-bit. Did unfortunately not have time to look at the underlying
> issue.
>
>





Re: [SQL] DB link from postgres to Oracle; how to query Dbname.tablename?

From:
Jayadevan M
Date:
In PostgreSQL, you always connect to a 'database' and then query its tables. So if you are connecting to the 'wrong' database, you will get the error you mentioned. You can troubleshoot this in many ways -
one way would be to enable logging on the PostgreSQL side, check the log, and see which database you are connecting to. Another way would be to execute the PostgreSQL function current_database()
instead of the query you are using right now and verify that you are connected to the correct database.
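
A minimal sketch of the logging approach, assuming direct access to
postgresql.conf and the server log directory (the $PGDATA path and log file
names are illustrative):

$ echo "log_connections = on" >> $PGDATA/postgresql.conf
$ pg_ctl reload -D $PGDATA
$ tail -n 50 $PGDATA/pg_log/postgresql-*.log
# look for lines like: connection authorized: user=postgres database=pgdb01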



On Wed, Oct 2, 2013 at 5:11 AM, Bhanu Murthy <bhanu_murthy@yahoo.com> wrote:
Hi all, greetings!
 
Using Oracle Heterogeneous Services (Oracle HS) I have configured/created a DB link from Postgres 9.3 database into Oracle 11gR3 database (with postgres DB user credentials).
 
SQL>  create public database link pg_link connect to "postgres" identified by "blahblah" using 'postgresql';
Since Postgres does not support public synonyms across databases in a cluster, how do I connect to a specific database and query a specific table in this Postgres cluster using the HS DB link?
 
Let's say, if I have 2 Postgres databases named pgdb01 and pgdb02 in the Postgres cluster, using this DB link that I have created in Oracle, how can I query a specific table called table01 from pgdb01 database?
 
Even though the table user_account exists in pgdb01 database, I cannot select from it using the DB link.
 
SQL> select count(*) from "user_account"@pg_link;
select count(*) from "user_account"@pg_link;
                     *
ERROR at line 1:
ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
ERROR:  relation "user_account" does not exist at character 21;
No query has been executed with that handle {HY000,NativeErr = 1}
ORA-02063: preceding 3 lines from PG_LINK;
 
I tried dbname.tablename syntax, but it didn't work!  BTW, all my tables belong to public schema.
 
Does anyone with DB link expertise try to answer my question?
 
Thanks,
Bhanu M. Gandikota
Mobile: (415) 420-7740

From: Alejandro Brust <alejandrob@pasteleros.org.ar>
To: pgsql-admin@postgresql.org
Sent: Tuesday, October 1, 2013 12:30 PM
Subject: Re: [ADMIN] PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

Did U perform  any vacuumdb / reindexdb before the Pg_dump?


On 01/10/2013 09:49, Magnus Hagander wrote:
> On Tue, Oct 1, 2013 at 11:07 AM, Sergey Klochkov <klochkov@iqbuzz.ru> wrote:
>> Hello All,
>>
>> While trying to backup a database of relatively modest size (160 Gb) I ran
>> into the following issue:
>>
>> When I run
>> $ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb
>>
>> File /path/to/mydb.dmp does not appear (yes, I've checked permissions and so
>> on). pg_dump just begins to consume memory until it eats up all avaliable
>> RAM (96 Gb total on server, >64 Gb available) and is killed by the oom
>> killer.
>>
>> According to pg_stat_activity, pg_dump runs the following query
>>
>> SELECT oid, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = lomowner)
>> AS rolname, lomacl FROM pg_largeobject_metadata
>>
>> until it is killed.
>>
>> strace shows that pg_dump is constantly reading a large amount of data from
>> a UNIX socket. I suspect that it is the result of the above query.
>>
>> There are >300000000 large objects in the database. Please don't ask me why.
>>
>> I tried googling on this, and found mentions of pg_dump being killed by oom
>> killer, but I failed to find anything related to the huge large objects
>> number.
>>
>> Is there any method of working around this issue?
> I think this problem comes from the fact that pg_dump treats each
> large object as it's own item. See getBlobs() which allocates a
> BlobInfo struct for each LO (and a DumpableObject if there are any,
> but that's just one).
>
> I assume the query (from that file):
> SELECT oid, lomacl FROM pg_largeobject_metadata
>
> returns 300000000 rows, which are then looped over?
>
> I ran into a similar issue a few years ago with a client using a
> 32-bit version of pg_dump, and got it worked around by moving to
> 64-bit. Did unfortunately not have time to look at the underlying
> issue.
>
>






Re: DB link from postgres to Oracle; how to query Dbname.tablename?

From:
Albe Laurenz
Date:
Bhanu Murthy wrote:
> Using Oracle Heterogeneous Services (Oracle HS) I have configured/created a DB link from Postgres 9.3
> database into Oracle 11gR3 database (with postgres DB user credentials).
> 
> SQL>  create public database link pg_link connect to "postgres" identified by "blahblah" using
> 'postgresql';
> Since Postgres does not support public synonyms across databases in a cluster, how do I connect to a
> specific database and query a specific table in this Postgres cluster using the HS DB link?
> 
> Let's say, if I have 2 Postgres databases named pgdb01 and pgdb02 in the Postgres cluster, using this
> DB link that I have created in Oracle, how can I query a specific table called table01 from pgdb01
> database?
> 
> Even though the table user_account exists in pgdb01 database, I cannot select from it using the DB
> link.

You have to connect to a database to query its data.

Rather than defining a database link to database "postgres",
define two links, one for "pgdb01" and one for "pgdb02".
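
A minimal sketch of what that could look like on the Oracle side, assuming
two Heterogeneous Services/ODBC configurations whose DSNs point at pgdb01
and pgdb02 respectively (the tnsnames aliases and link names below are
illustrative):

SQL> create public database link pgdb01_link connect to "postgres" identified by "blahblah" using 'pgdb01';
SQL> create public database link pgdb02_link connect to "postgres" identified by "blahblah" using 'pgdb02';
SQL> select count(*) from "user_account"@pgdb01_link;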

Yours,
Laurenz Albe

Re: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

From:
Sergey Klochkov
Date:
I tried it out. It did not make any difference.

On 01.10.2013 23:30, Alejandro Brust wrote:
> Did U perform  any vacuumdb / reindexdb before the Pg_dump?
>
>
> On 01/10/2013 09:49, Magnus Hagander wrote:
>> On Tue, Oct 1, 2013 at 11:07 AM, Sergey Klochkov <klochkov@iqbuzz.ru> wrote:
>>> Hello All,
>>>
>>> While trying to backup a database of relatively modest size (160 Gb) I ran
>>> into the following issue:
>>>
>>> When I run
>>> $ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb
>>>
>>> File /path/to/mydb.dmp does not appear (yes, I've checked permissions and so
>>> on). pg_dump just begins to consume memory until it eats up all avaliable
>>> RAM (96 Gb total on server, >64 Gb available) and is killed by the oom
>>> killer.
>>>
>>> According to pg_stat_activity, pg_dump runs the following query
>>>
>>> SELECT oid, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = lomowner)
>>> AS rolname, lomacl FROM pg_largeobject_metadata
>>>
>>> until it is killed.
>>>
>>> strace shows that pg_dump is constantly reading a large amount of data from
>>> a UNIX socket. I suspect that it is the result of the above query.
>>>
>>> There are >300000000 large objects in the database. Please don't ask me why.
>>>
>>> I tried googling on this, and found mentions of pg_dump being killed by oom
>>> killer, but I failed to find anything related to the huge large objects
>>> number.
>>>
>>> Is there any method of working around this issue?
>> I think this problem comes from the fact that pg_dump treats each
>> large object as it's own item. See getBlobs() which allocates a
>> BlobInfo struct for each LO (and a DumpableObject if there are any,
>> but that's just one).
>>
>> I assume the query (from that file):
>> SELECT oid, lomacl FROM pg_largeobject_metadata
>>
>> returns 300000000 rows, which are then looped over?
>>
>> I ran into a similar issue a few years ago with a client using a
>> 32-bit version of pg_dump, and got it worked around by moving to
>> 64-bit. Did unfortunately not have time to look at the underlying
>> issue.
>>
>>
>
>

--
Sergey Klochkov
klochkov@iqbuzz.ru



Re: DB link from postgres to Oracle; how to query Dbname.tablename?

From:
Chris Twombly
Date:
Do keep in mind that querying across databases generally results in really poor performance, and can bring
your application to its knees with astonishing speed.

________________________________________
From: pgsql-admin-owner@postgresql.org [pgsql-admin-owner@postgresql.org] on behalf of Albe Laurenz
[laurenz.albe@wien.gv.at]
Sent: Wednesday, October 02, 2013 3:58 AM
To: Bhanu Murthy; pgsql-admin@postgresql.org; pgsql-sql@postgresql.org
Subject: Re: [ADMIN] DB link from postgres to Oracle; how to query Dbname.tablename?

Bhanu Murthy wrote:
> Using Oracle Heterogeneous Services (Oracle HS) I have configured/created a DB link from Postgres 9.3
> database into Oracle 11gR3 database (with postgres DB user credentials).
>
> SQL>  create public database link pg_link connect to "postgres" identified by "blahblah" using
> 'postgresql';
> Since Postgres does not support public synonyms across databases in a cluster, how do I connect to a
> specific database and query a specific table in this Postgres cluster using the HS DB link?
>
> Let's say, if I have 2 Postgres databases named pgdb01 and pgdb02 in the Postgres cluster, using this
> DB link that I have created in Oracle, how can I query a specific table called table01 from pgdb01
> database?
>
> Even though the table user_account exists in pgdb01 database, I cannot select from it using the DB
> link.

You have to connect to a database to query its data.

Rather than defining a database link to database "postgres",
define two links, one for "pgdb01" and one for "pgdb02".

Yours,
Laurenz Albe


Re: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

From:
luckyjackgao
Date:
Hello

I have encountered some issues where PG crashes when dealing with too much
data.
It seems that PG tries to do its task as quickly as it can and will use as
much resource as it can.

Later I tried cgroups to limit resource usage, to keep PG from consuming too
much memory etc. too quickly.
And PG works fine.

I edited the following files:

/etc/cgconfig.conf

mount {
    cpuset    = /cgroup/cpuset;
    cpu    = /cgroup/cpu;
    cpuacct    = /cgroup/cpuacct;
    memory    = /cgroup/memory;
    devices    = /cgroup/devices;
    freezer    = /cgroup/freezer;
    net_cls    = /cgroup/net_cls;
    blkio    = /cgroup/blkio;
}

group test1 {
    perm {
          task{
              uid=postgres;
              gid=postgres;
          }

          admin{
             uid=root;
             gid=root;
          }

    } memory {
       memory.limit_in_bytes=300M;
    }
}

/etc/cgrules.conf
# End of file
 postgres      memory           test1/
#
Then enable the services and restart, then log in as postgres:
chkconfig cgconfig  on

chkconfig cgred on

And I can confirm that PG works under the 300M memory limit.
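
To verify that the limit is really in effect, the memory controller files
for the group can be inspected; a minimal sketch, matching the mount point
and group name test1 from the configuration above:

$ cat /cgroup/memory/test1/memory.limit_in_bytes
$ cat /cgroup/memory/test1/memory.usage_in_bytes
$ cat /proc/$(pgrep -n -u postgres postgres)/cgroup    # confirm the backend is in test1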

Best Regards
jian gao



