Discussion: pg_dump -s dumps data?!
hi,
I have a weird situation.
pg 9.1.2, compiled by our own script from source, on 10+ machines.
on four machines, pg_dump -s <database_name> dumps with data!:

postgres@machine:~$ pg_dump --verbose --schema-only dbname > q
...
pg_dump: creating TABLE x1
pg_dump: restoring data for table "x2"
pg_dump: dumping contents of table x2
pg_dump: restoring data for table "x3"
pg_dump: dumping contents of table x3
...

What could be wrong?

The same pg_dump call on the same host, but for a different database, dumps just the schema!?

Best regards,
depesz
--
The best thing about modern society is how easy it is to avoid contact with it. http://depesz.com/
On 01/27/2012 02:19 PM, hubert depesz lubaczewski wrote:
> hi,
> I have a weird situation.
> pg 9.1.2, compiled by our own script from source, on 10+ machines.
> on four machines, pg_dump -s <database_name> dumps with data!:

Are those 4 machines different from the other 6+? What does the script do? I am guessing you have not seen this in previous versions of postgres?

> postgres@machine:~$ pg_dump --verbose --schema-only dbname > q
> ...
> pg_dump: creating TABLE x1
> pg_dump: restoring data for table "x2"
> pg_dump: dumping contents of table x2
> pg_dump: restoring data for table "x3"
> pg_dump: dumping contents of table x3
> ...
>
> What could be wrong?
>
> The same pg_dump call on the same host, but for a different database, dumps just the schema!?

--
Adrian Klaver
adrian.klaver@gmail.com
On Fri, Jan 27, 2012 at 03:00:24PM -0800, Adrian Klaver wrote:
> Are those 4 machines different from the other 6+?

no idea. same os, same installation of pg.

> What does the script do?

the compilation? just runs ./configure with some options, make and make install.

> I am guessing you have not seen this in previous versions of postgres?

that's the first time I saw this. and we never had older pg on these machines.

there is some suggestion that it might be related to extensions ... but I am not sure what/how to check.

Best regards,
depesz
--
The best thing about modern society is how easy it is to avoid contact with it. http://depesz.com/
On 01/27/2012 03:05 PM, hubert depesz lubaczewski wrote:
>> Are those 4 machines different from the other 6+?
>
> no idea. same os, same installation of pg.

So much for that idea :)

>> What does the script do?
>
> the compilation? just runs ./configure with some options, make and make install.

Not sure that it makes a difference, but on the chance it does, what are the options, and are they the same for all machines?

>> I am guessing you have not seen this in previous versions of postgres?
>
> that's the first time I saw this. and we never had older pg on these machines.
>
> there is some suggestion that it might be related to extensions ... but I am not sure what/how to check.

I am not going to be much help here, as I am still learning the extension mechanism. For the sake of others that might have a clue, what are the extensions involved? Also, are all the tables having their data dumped, or only those that relate to extensions?

--
Adrian Klaver
adrian.klaver@gmail.com
hubert depesz lubaczewski <depesz@depesz.com> writes:
> I have a weird situation.
> pg 9.1.2, compiled by our own script from source, on 10+ machines.
> on four machines, pg_dump -s <database_name> dumps with data!:
> postgres@machine:~$ pg_dump --verbose --schema-only dbname > q
> ...
> pg_dump: creating TABLE x1
> pg_dump: restoring data for table "x2"
> pg_dump: dumping contents of table x2
> pg_dump: restoring data for table "x3"
> pg_dump: dumping contents of table x3
> ...
> What could be wrong?

Do the command lines actually look exactly like that? Some platforms are forgiving about violation of the switch-then-argument order (ie, putting switches after the database name) and some are not. I seem to recall that Solaris is particularly strange about this, so what platform(s) are we talking about, anyway?

			regards, tom lane
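[Editor's note: to illustrate the switch-ordering point above, a hedged sketch of the two invocation styles; `dbname` and `schema.sql` are placeholder names, and which lenient behavior you get for the second form is platform-dependent, as Tom describes:]

```shell
# Portable: all switches before the database name.
pg_dump --schema-only dbname > schema.sql

# Non-portable: a switch after the database name. GNU-style getopt
# (e.g. on Linux) reorders and accepts this; stricter POSIX getopt
# implementations may treat --schema-only as a stray argument instead.
pg_dump dbname --schema-only > schema.sql
```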
On Fri, Jan 27, 2012 at 03:11:32PM -0800, Adrian Klaver wrote:
> Not sure that it makes a difference, but on the chance it does, what are the options, and are they the same for all machines?

$ pg_config --configure
'--prefix=/opt/pgbrew/9.1.2' '--with-pgport=5910' '--enable-debug' '--with-perl' '--enable-integer-datetimes' '--enable-thread-safety' '--without-krb5' '--without-pam' '--without-bonjour' '--with-openssl' '--with-readline' '--with-libxml' '--with-zlib'

and yes - the same on all machines.

but the databases are not the same - so i'm more inclined to think that it's something wrong (or mistaken) on the db level.

> For the sake of others that might have a clue, what are the extensions involved?
> Also, are all the tables having their data dumped, or only those that relate to extensions?

this is a custom extension - basically it's an application database loaded as an extension.

aside from the normal pg_catalog and information_schema, this database has two schemata: o and f (names changed to protect the guilty). the o schema has 1 table. the f schema has 7 tables.

pg_dump -s dumps *data* for 6 tables from the f schema. not all of them?!

\dx+ fextension shows all 7 tables from the f schema.
*but*
select * from pg_extension;
lists 6 oids in the extconfig column - and these are the oids of the tables that got dumped with data?!

is it by design that tables listed there (in the extconfig column of pg_extension) will be dumped with data, even for pg_dump --schema-only? (i didn't make the extension, and up to yesterday I wasn't aware that they used extensions to load the schema into these databases).

Best regards,
depesz
--
The best thing about modern society is how easy it is to avoid contact with it.
http://depesz.com/

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
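[Editor's note: for anyone hitting the same symptom, a diagnostic sketch of how to resolve the extconfig OIDs that depesz mentions back to table names; `fextension` matches the anonymized extension name used in the thread, and your extension name will differ:]

```sql
-- List each extension's configuration tables by resolving the
-- pg_extension.extconfig OID array to regclass (table) names,
-- alongside the per-table WHERE filters from extcondition.
SELECT e.extname,
       unnest(e.extconfig)::regclass AS config_table,
       unnest(e.extcondition)        AS dump_filter
FROM pg_extension e
WHERE e.extconfig IS NOT NULL;
```

Tables listed here are exactly the ones whose contents pg_dump will emit even with -s, which explains why 6 of the 7 tables got data dumped.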
On Fri, Jan 27, 2012 at 08:03:06PM -0500, Tom Lane wrote:
> Do the command lines actually look exactly like that?

Yes. Exactly. This is ubuntu, and --schema-only is parsed - more details in the mail I sent a minute ago as a reply to Adrian's mail.

Best regards,
depesz
--
The best thing about modern society is how easy it is to avoid contact with it.
http://depesz.com/
hubert depesz lubaczewski <depesz@depesz.com> writes:
> is it by design that tables listed there (in the extconfig column of pg_extension) will be dumped with data, even for pg_dump --schema-only?

Um, yes. Read the manual.
http://www.postgresql.org/docs/9.1/static/extend-extensions.html#AEN51966

			regards, tom lane
On Saturday, January 28, 2012 1:29:22 am hubert depesz lubaczewski wrote:
> pg_dump -s dumps *data* for 6 tables from the f schema. not all of them?!
>
> \dx+ fextension shows all 7 tables from the f schema.
> *but*
> select * from pg_extension;
> lists 6 oids in the extconfig column - and these are the oids of the tables that got dumped with data?!
>
> is it by design that tables listed there (in the extconfig column of pg_extension) will be dumped with data, even for pg_dump --schema-only?
> (i didn't make the extension, and up to yesterday I wasn't aware that they used extensions to load the schema into these databases).

If I am following correctly, then yes.
I am getting on thin ice here, as I am still learning this, but from here:
http://www.postgresql.org/docs/9.1/interactive/extend-extensions.html

"35.15.3. Extension Configuration Tables

Some extensions include configuration tables, which contain data that might be added or changed by the user after installation of the extension. Ordinarily, if a table is part of an extension, neither the table's definition nor its content will be dumped by pg_dump. But that behavior is undesirable for a configuration table; any data changes made by the user need to be included in dumps, or the extension will behave differently after a dump and reload.

To solve this problem, an extension's script file can mark a table it has created as a configuration table, which will cause pg_dump to include the table's contents (not its definition) in dumps. To do that, call the function pg_extension_config_dump(regclass, text) after creating the table, for example

CREATE TABLE my_config (key text, value text);
SELECT pg_catalog.pg_extension_config_dump('my_config', '');

Any number of tables can be marked this way. When the second argument of pg_extension_config_dump is an empty string, the entire contents of the table are dumped by pg_dump. This is usually only correct if the table is initially empty as created by the extension script. If there is a mixture of initial data and user-provided data in the table, the second argument of pg_extension_config_dump provides a WHERE condition that selects the data to be dumped. For example, you might do

CREATE TABLE my_config (key text, value text, standard_entry boolean);
SELECT pg_catalog.pg_extension_config_dump('my_config', 'WHERE NOT standard_entry');

and then make sure that standard_entry is true only in the rows created by the extension's script.
More complicated situations, such as initially-provided rows that might be modified by users, can be handled by creating triggers on the configuration table to ensure that modified rows are marked correctly."

and from here:
http://www.postgresql.org/docs/9.1/interactive/catalog-pg-extension.html

"extconfig | oid[] | pg_class.oid | Array of regclass OIDs for the extension's configuration table(s), or NULL if none"

What is not explicitly stated is whether any of the above is supposed to respect the -s switch. From the evidence it is not, and I can understand that behavior. If an extension has config tables and needs that info to load, then it should be carried along.

--
Adrian Klaver
adrian.klaver@gmail.com
On Sat, Jan 28, 2012 at 01:10:39PM -0500, Tom Lane wrote:
> Um, yes. Read the manual.
> http://www.postgresql.org/docs/9.1/static/extend-extensions.html#AEN51966

Yes, but:

If you create a table using an extension, and then you insert data into it - this data will *not* be dumped using pg_dump.
If you mark the table with pg_extension_config_dump() - the data from the table will be dumped *always* - even for a -s dump of another table. I.e. pg_dump -s -t a will dump data of table b!

So, long story short - with currently released versions it is either:
- you will never get data for a given table in dumps
or
- you will always get data for this table in dumps, even for schema-only dumps of unrelated tables.

Now. I have since made some tests, and it looks like the dumping thing is fixed in 9.2devel from git HEAD, which seems to suggest that it will work sanely in 9.1.3 - so the point is kind of moot. But the fact remains that the newest released Pg provides a mechanism that completely breaks pg_dump functionality.

Best regards,
depesz
--
The best thing about modern society is how easy it is to avoid contact with it.
http://depesz.com/
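[Editor's note: for readers stuck on an affected 9.1.x release, one possible stopgap is to clear the configuration-table marking in the catalog. This is a hedged sketch only - it is direct catalog surgery, requires superuser, permanently changes what future data-including dumps emit, and uses the thread's anonymized extension name `fextension`; test on a copy first:]

```sql
-- WARNING: edits pg_catalog directly; take a backup before trying this.
-- Clearing extconfig/extcondition makes pg_dump stop treating the
-- extension's tables as configuration tables, so pg_dump -s no longer
-- emits their data (and full dumps will no longer include it either).
UPDATE pg_extension
SET    extconfig    = NULL,
       extcondition = NULL
WHERE  extname = 'fextension';
```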
On Saturday, January 28, 2012 1:29:22 am hubert depesz lubaczewski wrote: > On Fri, Jan 27, 2012 at 03:11:32PM -0800, Adrian Klaver wrote: > > Not sure that it makes a difference, but on the chance it does, what > > are the options and are they the same for all machines? > > $ pg_config --configure > '--prefix=/opt/pgbrew/9.1.2' '--with-pgport=5910' '--enable-debug' > '--with-perl' '--enable-integer-datetimes' '--enable-thread-safety' > '--without-krb5' '--without-pam' '--without-bonjour' '--with-openssl' > '--with-readline' '--with-libxml' '--with-zlib' > > and yes - the same on all machines. > > but the databases are not the same - so i'm more inclined to think that > it's something wrong (or mistaken) on the db level. > > > I am not going to much help here, as I am still learning the > > extension mechanism. For the sake of others that might have a clue, > > what are the extensions involved? > > Also, are all the tables having their data dumped or only those that > > relate to extensions? > > this is custom extension - basically it's application database loaded as > extension. > > aside from normal pg_catalog and information_schema, this database has > two schemata: > o > and > f > (names changed to protect the guilty). > o schema has 1 table. > f schema has 7 tables. > > pg_dump -s dumps *data* for 6 tables from f schema. not all of them?! > > \dx+ fextension > shows all 7 tables from f schema. > *but* > select * from pg_extension ; > in extconfig column lists 6 oids. > > and these are oids of tables that got dumped with data?! > > is it by design that tables listed there (in extconfig column of > pg_extension) will be dumped with data, even for pg_dump --schema? > (i didn't make the extension, and up to yesterday I wasn't aware that > they used extensions to load schema to these databases). If I am following correctly then yes. 
I am getting on thin ice here as I am still learning this but from here: http://www.postgresql.org/docs/9.1/interactive/extend-extensions.html " 35.15.3. Extension Configuration Tables Some extensions include configuration tables, which contain data that might be added or changed by the user after installation of the extension. Ordinarily, if a table is part of an extension, neither the table's definition nor its content will be dumped by pg_dump. But that behavior is undesirable for a configuration table; any data changes made by the user need to be included in dumps, or the extension will behave differently after a dump and reload. To solve this problem, an extension's script file can mark a table it has created as a configuration table, which will cause pg_dump to include the table's contents (not its definition) in dumps. To do that, call the function pg_extension_config_dump(regclass, text) after creating the table, for example CREATE TABLE my_config (key text, value text); SELECT pg_catalog.pg_extension_config_dump('my_config', ''); Any number of tables can be marked this way. When the second argument of pg_extension_config_dump is an empty string, the entire contents of the table are dumped by pg_dump. This is usually only correct if the table is initially empty as created by the extension script. If there is a mixture of initial data and user-provided data in the table, the second argument of pg_extension_config_dump provides a WHERE condition that selects the data to be dumped. For example, you might do CREATE TABLE my_config (key text, value text, standard_entry boolean); SELECT pg_catalog.pg_extension_config_dump('my_config', 'WHERE NOT standard_entry'); and then make sure that standard_entry is true only in the rows created by the extension's script. 
More complicated situations, such as initially-provided rows that might be
modified by users, can be handled by creating triggers on the configuration
table to ensure that modified rows are marked correctly."

and from here:
http://www.postgresql.org/docs/9.1/interactive/catalog-pg-extension.html

"extconfig | oid[] | pg_class.oid | Array of regclass OIDs for the
extension's configuration table(s), or NULL if none"

What is not explicitly stated is whether any of the above is supposed to
respect the -s switch. From the evidence it is not, and I can understand
that behavior. If an extension has config tables and needs that info to
load, then it should be carried along.

> Best regards,
> depesz

--
Adrian Klaver
adrian.klaver@gmail.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
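Putting the two documentation snippets above together: the tables whose
data always ends up in dumps are exactly those whose OIDs appear in
pg_extension.extconfig. A sketch of a catalog query (written against the
9.1 catalogs, untested here) that lists them for a given database:

```sql
-- List each extension's configuration tables: the tables whose contents
-- pg_dump will include even though their definitions come from the
-- extension script. The parallel extcondition array holds any WHERE
-- filter passed to pg_extension_config_dump ('' means "dump all rows").
SELECT extname,
       unnest(extconfig)::regclass AS config_table
FROM pg_extension
WHERE extconfig IS NOT NULL;
```

Running this in the problem database should show the 6 tables from the
f schema that were getting their data dumped.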
On Monday, January 30, 2012 5:08:41 am hubert depesz lubaczewski wrote:
> On Sat, Jan 28, 2012 at 01:10:39PM -0500, Tom Lane wrote:
> > hubert depesz lubaczewski <depesz@depesz.com> writes:
> > > is it by design that tables listed there (in extconfig column of
> > > pg_extension) will be dumped with data, even for pg_dump --schema?
> >
> > Um, yes. Read the manual.
> > http://www.postgresql.org/docs/9.1/static/extend-extensions.html#AEN51966
>
> Yes, but:
>
> If you create a table using an extension, and then you insert data into it
> - this data will *not* be dumped using pg_dump.
> If you mark the table with pg_extension_config_dump() - the data from the
> table will be dumped *always* - even for a -s dump of another table.
> I.e. pg_dump -s -t a will dump data of table b!

Well, this is a different problem report from your original post :) In your
original report you were asking why data was dumped in a complete-database
pg_dump -s.

> So long story short - with currently released versions it is either:
> - you will never get data for a given table in dumps
> or
> - you will get data for this table in dumps, always, even for schema-only
> dumps of unrelated tables.

Well, that is spelled out as such in the docs, except for the part about
dumping data from an unrelated table. Short version: it is up to the
extension to take care of table creation. Long version:

"Some extensions include configuration tables, which contain data that
might be added or changed by the user after installation of the extension.
Ordinarily, if a table is part of an extension, neither the table's
definition nor its content will be dumped by pg_dump. But that behavior is
undesirable for a configuration table; any data changes made by the user
need to be included in dumps, or the extension will behave differently
after a dump and reload.
To solve this problem, an extension's script file can mark a table it has
created as a configuration table, which will cause pg_dump to include the
table's contents (not its definition) in dumps. To do that, call the
function pg_extension_config_dump(regclass, text) after creating the
table..."

> Now.
>
> I have since made some tests, and it looks like the dumping thing is
> fixed in 9.2devel from git HEAD, which seems to suggest that it will
> work sanely in 9.1.3 - so the point is kind of moot.
>
> But the fact is that the newest released Pg provides a mechanism that
> completely breaks pg_dump functionality.

Breaks certain cases when using pg_dump -s. Some of what you highlight
above is designed behavior. What is happening is covered by my second rule
of life: 'Easy is difficult'. In this case it is the desire for a built-in
'packaging' system that makes extending Postgres easier for the end user.
Getting that leads to more complexity in the backend and a new learning
curve for those that have to deal with it.

> Best regards,
> depesz

--
Adrian Klaver
adrian.klaver@gmail.com
On Mon, Jan 30, 2012 at 07:34:49AM -0800, Adrian Klaver wrote:
> Breaks certain cases when using pg_dump -s. Some of what you highlight
> above is designed behavior. What is happening is covered by my second
> rule of life 'Easy is difficult'. In this case it is the desire for a
> built-in 'packaging' system that makes extending Postgres easier for the
> end user. Getting that leads to more complexity in the backend and a new
> learning curve for those that have to deal with it.

not sure what "learning curve" you have in mind, since it simply cripples
the functionality of pg_dump.
how steep is the learning curve, to learn that you can no longer make a
sensible "pg_dump -s"?

Best regards,
depesz

--
The best thing about modern society is how easy it is to avoid contact
with it.
http://depesz.com/
hubert depesz lubaczewski <depesz@depesz.com> writes:
> On Sat, Jan 28, 2012 at 01:10:39PM -0500, Tom Lane wrote:
> > Um, yes. Read the manual.
> > http://www.postgresql.org/docs/9.1/static/extend-extensions.html#AEN51966
>
> Yes, but:
> If you create a table using an extension, and then you insert data into it
> - this data will *not* be dumped using pg_dump.
> If you mark the table with pg_extension_config_dump() - the data from the
> table will be dumped *always* - even for a -s dump of another table.

Yes, that's the intended behavior. The purpose of the configuration-table
feature is to dump data that is needed for an extension to operate properly
--- for instance postgis has some auxiliary tables that are best treated as
part of the schema. If you think the contents of a table are not
effectively schema information, then you shouldn't mark it with
pg_extension_config_dump.

> I have since made some tests, and it looks like the dumping thing is
> fixed in 9.2devel from git HEAD, which seems to suggest that it will
> work sanely in 9.1.3 - so the point is kind of moot.

What tests were those exactly? I'm not aware of any agreed changes in this
area.

			regards, tom lane
On Monday, January 30, 2012 7:39:13 am hubert depesz lubaczewski wrote:
> On Mon, Jan 30, 2012 at 07:34:49AM -0800, Adrian Klaver wrote:
> > Breaks certain cases when using pg_dump -s. Some of what you highlight
> > above is designed behavior. What is happening is covered by my second
> > rule of life 'Easy is difficult'. In this case it is the desire for a
> > built-in 'packaging' system that makes extending Postgres easier for
> > the end user. To get that leads to more complexity in the backend and
> > a new learning curve for those that have to deal with it.
>
> not sure what "learning curve" you have in mind, since it simply
> cripples the functionality of pg_dump.
> how steep is the learning curve, to learn that you can no longer make
> a sensible "pg_dump -s"?

I am not sure I understand crippled. There is a bug that you acknowledge
has been dealt with. The rest is documented behavior having to do with
extension packaging. Extensions exist as packages and are put into the
database and pulled from the database as such, by the extension mechanism.
Whether data is included in that process is up to the discretion of the
extension creator. So on that particular point you probably need to talk to
the folks that created the extension. The learning curve exists because a
db admin now has to understand that the extension mechanism exists and the
ways it interacts with the rest of the database.

> Best regards,
> depesz

--
Adrian Klaver
adrian.klaver@gmail.com
On Mon, Jan 30, 2012 at 11:12:09AM -0500, Tom Lane wrote:
> Yes, that's the intended behavior. The purpose of the
> configuration-table feature is to dump data that is needed for an
> extension to operate properly --- for instance postgis has some
> auxiliary tables that are best treated as part of the schema. If you
> think the contents of a table are not effectively schema information,
> then you shouldn't mark it with pg_extension_config_dump.

but then it is *NOT POSSIBLE* to dump the data of the table with pg_dump.
AT ALL. regardless of options.

> > I have since made some tests, and it looks like the dumping thing is
> > fixed in 9.2devel from git HEAD, which seems to suggest that it will
> > work sanely in 9.1.3 - so the point is kind of moot.
>
> What tests were those exactly? I'm not aware of any agreed changes in
> this area.

1. create a table with an extension
2. try to dump its content with pg_dump
3. try to dump the schema of the database
4. try to dump the schema of another table (not from the extension)
5. drop the extension, load another extension, this time marking the table
   with pg_extension_config_dump
repeat steps 2, 3, 4.

the way pg works in git HEAD is much more sensible.

Best regards,
depesz

--
The best thing about modern society is how easy it is to avoid contact
with it.
http://depesz.com/
On Mon, Jan 30, 2012 at 08:20:15AM -0800, Adrian Klaver wrote:
> I am not sure I understand crippled. There is a bug that you acknowledge
> has been dealt with. The rest is documented behavior having to do with
> extension packaging. Extensions exist as packages and are put into the
> database and pulled from the database as such, by the extension
> mechanism. Whether data is included in that process is up to the
> discretion of the extension creator. So on that particular point you
> probably need to talk to the folks that created the extension. The
> learning curve exists because now a db admin has to understand that the
> extension mechanism exists and the ways it interacts with the rest of
> the database.

crippled in this way:
it is not possible to have a table from an extension that will have its
data dumped in a normal pg_dump (or pg_dump -a), and will *not* have its
data dumped when doing pg_dump -s or pg_dump -s -t other_table.

you either have data for extension-based tables in *all* dumps
or
you don't have data for extension-based tables in *any* dumps.

Best regards,
depesz

--
The best thing about modern society is how easy it is to avoid contact
with it.
http://depesz.com/
hubert depesz lubaczewski <depesz@depesz.com> writes:
> On Mon, Jan 30, 2012 at 11:12:09AM -0500, Tom Lane wrote:
> > What tests were those exactly? I'm not aware of any agreed changes in
> > this area.
>
> 1. create a table with an extension
> 2. try to dump its content with pg_dump
> 3. try to dump the schema of the database
> 4. try to dump the schema of another table (not from the extension)
> 5. drop the extension, load another extension, this time marking the
>    table with pg_extension_config_dump
> repeat steps 2, 3, 4.

That is way too vague for my taste, as you have not shown the pg_dump
options you're using, for example.

> the way pg works in git HEAD is much more sensible.

I'm not aware that the way it works has been changed intentionally, so if
you're seeing a difference here it's more likely that a bug has been added
than removed. Please provide an exact test case.

			regards, tom lane
On Monday, January 30, 2012 8:25:54 am hubert depesz lubaczewski wrote:
> On Mon, Jan 30, 2012 at 08:20:15AM -0800, Adrian Klaver wrote:
> > I am not sure I understand crippled. There is a bug that you
> > acknowledge has been dealt with. The rest is documented behavior
> > having to do with extension packaging. Extensions exist as packages
> > and are put into the database and pulled from the database as such,
> > by the extension mechanism. Whether data is included in that process
> > is up to the discretion of the extension creator. So on that
> > particular point you probably need to talk to the folks that created
> > the extension. The learning curve exists because now a db admin has
> > to understand that the extension mechanism exists and the ways it
> > interacts with the rest of the database.
>
> crippled in this way:
> it is not possible to have a table from an extension that will have its
> data dumped in a normal pg_dump (or pg_dump -a), and will *not* have its
> data dumped when doing pg_dump -s or pg_dump -s -t other_table.

Well, the only part I see as broken is if you do pg_dump -s -t some_table
and that causes data to be dumped from extension table(s) without asking.
This in the case where some_table has nothing to do with the extension.

If you do pg_dump -s for an entire database, I would expect the data to be
dumped if it was configured by pg_catalog.pg_extension_config_dump(). When
that function is run by the extension creator, they are essentially saying
that the data is needed to set up the extension and therefore it is schema.
Otherwise you would not be able to replicate an extension's state in the
dump/restore cycle in those cases where that is important. At least that is
how I see it.

> you either have data for extension-based tables in *all* dumps
> or
> you don't have data for extension-based tables in *any* dumps.
>
> Best regards,
> depesz

--
Adrian Klaver
adrian.klaver@gmail.com
On Mon, Jan 30, 2012 at 11:30:51AM -0500, Tom Lane wrote:
> That is way too vague for my taste, as you have not shown the pg_dump
> options you're using, for example.

OK. i tried to explain that the options don't matter, but here we go. full
example:

I have two different databases: 9.1.2 and 9.2devel, built TODAY from
TODAY's git HEAD (pulled ~ 90 minutes ago).

On both systems, in the correct places, I create 2 files:

depesz--1.0.sql:

-- complain if script is sourced in psql, rather than via CREATE EXTENSION
\echo Use "CREATE EXTENSION depesz" to load this file. \quit

CREATE TABLE users (
    username TEXT PRIMARY KEY,
    password TEXT
);

depesz.control:

comment = 'test extension'
default_version = '1.0'
relocatable = true

on both machines, I create an empty test database (template1 is empty, can
I skip proving this?):

=$ createdb test

afterwards, on both systems, I do:

psql -d test
create extension depesz;
create table z (i int4);
insert into users (username) values ('anything');
insert into z (i) values (1);

Results expected:
1. two tables exist (z, users).
2. table z has one row with i == 1
3. table users contains 1 row with username == anything.

Results on both 9.1.2 and 9.2 are as expected (I hope I can skip proving
this, or will this be a problem?)

Now. Let's try some dumps.

First - let's get a schema dump of the whole database.
Command to be used: pg_dump -s test
expected:
1. create extension depesz
2. create table z
3. no data for either the "users" or "z" tables
results: both 9.1.2 and 9.2 pass.

Now let's get a dump of table "users", just schema.
expected: no data for this table, and create table as sql or,
alternatively - a create extension statement.
command used: pg_dump -s -t users test
result: both 9.1.2 and 9.2 fail - there is neither a create table nor a
create extension statement.

now let's try the same with table "z" - command: pg_dump -s -t z test
results: as expected - a normal create table exists in the dump.

Now, let's try data dumps. first - database-wide: pg_dump -a test.
expected results: data for the users table and data for the z table.
result: both 9.1.2 and 9.2 *do not* show the data for the users table.
data for the "z" table is dumped without problem.

Now, let's try to dump data specifically for the users table:
pg_dump -a -t users test
expected result: data for the users table.
result: no data dumped.

table z data dump, with: pg_dump -a -t z test
expected result: data for table z.
result: data for table z dumped, and nothing else.

So, as I showed above, if the table is *not* marked with
pg_catalog.pg_extension_config_dump, but the table structure comes from an
extension, it is not possible, with any options, to get its data into a
dump. It is also not possible to get the table structure as "create table",
or even "create extension", with pg_dump.

Now let's see what changes when I do use pg_catalog.pg_extension_config_dump.

I dropped the test database, and changed depesz--1.0.sql to contain:

-- complain if script is sourced in psql, rather than via CREATE EXTENSION
\echo Use "CREATE EXTENSION depesz" to load this file. \quit

CREATE TABLE users (
    username TEXT PRIMARY KEY,
    password TEXT
);

SELECT pg_catalog.pg_extension_config_dump('users', '');

And recreated test, loaded the extension, created the z table, and
inserted rows.
State before tests:

$ \d
        List of relations
 Schema │ Name  │ Type  │ Owner
────────┼───────┼───────┼────────
 public │ users │ table │ depesz
 public │ z     │ table │ depesz
(2 rows)

(depesz@localhost:5910) 18:16:06 [test]
$ select * from users;
 username │ password
──────────┼──────────
 anything │ [null]
(1 row)

(depesz@localhost:5910) 18:16:12 [test]
$ select * from z;
 i
───
 1
(1 row)

(depesz@localhost:5910) 18:16:14 [test]
$ \dx
                 List of installed extensions
  Name   │ Version │   Schema   │         Description
─────────┼─────────┼────────────┼──────────────────────────────
 depesz  │ 1.0     │ public     │ test extension
 plpgsql │ 1.0     │ pg_catalog │ PL/pgSQL procedural language
(2 rows)

(depesz@localhost:5910) 18:16:15 [test]
$ select * from pg_extension ;
 extname │ extowner │ extnamespace │ extrelocatable │ extversion │ extconfig │ extcondition
─────────┼──────────┼──────────────┼────────────────┼────────────┼───────────┼──────────────
 plpgsql │       10 │           11 │ f              │ 1.0        │ [null]    │ [null]
 depesz  │    16387 │         2200 │ t              │ 1.0        │ {162414}  │ {""}
(2 rows)

(depesz@localhost:5910) 18:16:20 [test]
$ select 162414::regclass;
 regclass
──────────
 users
(1 row)

the oid in 9.2 is different, but I hope it will not make the mail useless.

Now, let's try again with the dumps.

1. pg_dump test
expected: create extension depesz; create table z; data for users; data
for z;
result: passed. all as expected.

2. pg_dump -s test
expected: create extension depesz; create table z; data for users;
result: all as expected.

3. pg_dump -s -t z test
expected: create table z;
result:
on 9.2: create table z;
on 9.1.2: create table z + data for the users table

4. pg_dump -a -t z test
expected: data for table z
result:
on 9.2: data for table z
on 9.1.2: data for both table z and table users

I hope that this time I got my point through, and frankly - if not, I just
give up.

It is *not* possible to have a table come from an extension, and have its
data dumped in only *some* of the dumps.
It either shows up in *none* of the dumps (in the case of tables without
pg_extension_config_dump()), or in *all* dumps - including dumps of other
tables, and schema-only dumps.

I think I explained it in previous mails, and if not - sorry, but I
clearly can't explain it well enough - the point is that with the way
extensions now work, they are useless for providing a way to create tables
that will store data, in case you would ever want a dump without this data.

Best regards,
depesz

--
The best thing about modern society is how easy it is to avoid contact
with it.
http://depesz.com/
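For completeness: the docs quoted earlier in the thread do let an extension
mark only a *subset* of a config table's rows for dumping, via the second
argument of pg_extension_config_dump. A hypothetical variant of the test
extension's script using that feature might look like this (the is_default
column is an invented illustration, not part of the original example); note
that it does not address the all-or-nothing complaint - the selected rows
still appear in every dump, including -s - it only narrows which rows are
selected:

```sql
-- hypothetical depesz--1.0.sql variant: dump only user-added rows,
-- assuming the extension script marks its own rows with is_default = true
CREATE TABLE users (
    username   TEXT PRIMARY KEY,
    password   TEXT,
    is_default BOOLEAN NOT NULL DEFAULT false
);
SELECT pg_catalog.pg_extension_config_dump('users', 'WHERE NOT is_default');
```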
On 01/30/2012 09:23 AM, hubert depesz lubaczewski wrote:
> I think I explained it in previous mails, and if not - sorry, but I
> clearly can't explain it well enough - the point is that with the way
> extensions now work, they are useless for providing a way to create
> tables that will store data, in case you would ever want a dump without
> this data.

So in summary: if an extension creates a user table, you want access to
that table (schema and data) via pg_dump, outside the extension mechanism,
without resorting to marking it as a configuration table. Is that correct?

> Best regards,
> depesz

--
Adrian Klaver
adrian.klaver@gmail.com
On Mon, Jan 30, 2012 at 09:43:46AM -0800, Adrian Klaver wrote:
> On 01/30/2012 09:23 AM, hubert depesz lubaczewski wrote:
> > I think I explained it in previous mails, and if not - sorry, but I
> > clearly can't explain it well enough - the point is that with the way
> > extensions now work, they are useless for providing a way to create
> > tables that will store data, in case you would ever want a dump
> > without this data.
>
> So in summary: if an extension creates a user table, you want access
> to that table (schema and data) via pg_dump, outside the extension
> mechanism, without resorting to marking it as a configuration table.
> Is that correct?

no.
I want to be able to do both:
1. dump the data for the table
2. dump the structure of other tables
but not in the same file.

Best regards,
depesz

--
The best thing about modern society is how easy it is to avoid contact
with it.
http://depesz.com/
On 01/30/2012 09:45 AM, hubert depesz lubaczewski wrote:
> On Mon, Jan 30, 2012 at 09:43:46AM -0800, Adrian Klaver wrote:
> > So in summary: if an extension creates a user table, you want access
> > to that table (schema and data) via pg_dump, outside the extension
> > mechanism, without resorting to marking it as a configuration table.
> > Is that correct?
>
> no.
> I want to be able to do both:
> 1. dump the data for the table
> 2. dump the structure of other tables
> but not in the same file.

Actually that was what I was saying :)

"..via pg_dump, outside the extension mechanism.."

"..without resorting to marking it as a configuration table.."

Currently the extension mechanism is getting in the way of 1 & 2 above.
What you want is for pg_dump to ignore the extension dependency process
when you explicitly name a table and the operation on it.

> Best regards,
> depesz

--
Adrian Klaver
adrian.klaver@gmail.com
On Mon, Jan 30, 2012 at 09:51:58AM -0800, Adrian Klaver wrote:
> On 01/30/2012 09:45 AM, hubert depesz lubaczewski wrote:
> > no.
> > I want to be able to do both:
> > 1. dump the data for the table
> > 2. dump the structure of other tables
> > but not in the same file.
>
> Actually that was what I was saying :)
>
> "..via pg_dump, outside the extension mechanism.."
>
> "..without resorting to marking it as a configuration table.."
>
> Currently the extension mechanism is getting in the way of 1 & 2 above.
> What you want is for pg_dump to ignore the extension dependency process
> when you explicitly name a table and the operation on it.

no.
marking is irrelevant. marking lets you do one thing, but breaks the other.

Best regards,
depesz

--
The best thing about modern society is how easy it is to avoid contact
with it.
http://depesz.com/
On 01/30/2012 09:53 AM, hubert depesz lubaczewski wrote:
> On Mon, Jan 30, 2012 at 09:51:58AM -0800, Adrian Klaver wrote:
> > Currently the extension mechanism is getting in the way of 1 & 2
> > above. What you want is for pg_dump to ignore the extension
> > dependency process when you explicitly name a table and the
> > operation on it.
>
> no.
> marking is irrelevant. marking lets you do one thing, but breaks the
> other.

I am going to have to go Cool Hand Luke(1) here :)
"What we have here is a failure to communicate"

I think we are actually saying the same thing, just coming from different
directions. Bear with me as I try to organize what I know of extensions and
how it relates to the issue at hand. The doc snippets below come from:
http://www.postgresql.org/docs/9.1/interactive/extend-extensions.html

1) An Extension is basically a set of templates with dependency tracking.
Said templates and the files that specify the dependencies are stored on
disk, outside the database storage. This means a pg_dump only finds the
Extensions in pg_extension and does the minimum necessary to push the
information into the dump file, i.e. do CREATE EXTENSION some_extension.
Or, to quote the docs:

"Even more useful, pg_dump knows that it should not dump the individual
member objects of the extension — it will just include a CREATE EXTENSION
command in dumps, instead. This vastly simplifies migration to a new
version of the extension that might contain more or different objects than
the old version. Note however that you must have the extension's control,
script, and other files available when loading such a dump into a new
database."

So by default pg_dump does not care about the actual objects referred to by
the Extension. As far as it is concerned they are invisible.

2) The exception that breaks the rule. It is possible to designate tables
as configuration tables, which as such are essential to the working of the
Extension. In this case the table's data (or some subset of it) is dumped,
though not the table schema, as that is still the responsibility of the
Extension.

3) You have tables created by the Extension that either:
a) do not fall into that exception and for which you want to recover the
data and/or schema by using pg_dump.
or
b) do, but you want to grab the schema without going to the schema
information stored outside the database.

4) Problem 3a is, as far as I can see, not handled by the Extension system.
Problem 3b is also not handled, though the schema information is available
in the files stored outside the database storage.

5) The above is currently complicated by what looks to be a bug that dumps
the data for tables designated as configuration tables even when you are
specifically doing a pg_dump -s(a) -t some_table on a table that is not
part of an Extension.
So along with the schema/data for the table you desire, you get the data
for those Extension tables designated as configuration tables.

6) Given the above, and from what you have said, I am going to go out on a
limb and say what you want is as I stated before: for pg_dump not to ignore
the objects referred to by an Extension. Indeed, what you want is for
pg_dump to see them as 'ordinary' objects and dump them 'normally'
according to the switches you provide to pg_dump.

> Best regards,
> depesz

(1) http://www.imdb.com/title/tt0061512/

--
Adrian Klaver
adrian.klaver@gmail.com
[ Note: please follow-up to pgsql-hackers not pgsql-general; I think this
discussion needs to move there ]

hubert depesz lubaczewski <depesz@depesz.com> writes:
> On Mon, Jan 30, 2012 at 11:30:51AM -0500, Tom Lane wrote:
> > That is way too vague for my taste, as you have not shown the pg_dump
> > options you're using, for example.
>
> i tried to explain that the options don't matter, but here we go. full
> example:
> [ example showing pg_dump's odd behavior for extension config tables ]

[ traces through that with gdb... ]

As I suspected, the behavioral change from 9.1 to HEAD is not intentional.
It is an artifact of commit 7b070e896ca835318c90b02c830a5c4844413b64, which
is almost, but not quite, entirely broken. I won't enumerate its
shortcomings here, because they're not really relevant, but it does seem
appropriate to discuss exactly what we think *should* happen for tables
created inside extensions.

The original design intention for non-config tables was, per the manual:

    Ordinarily, if a table is part of an extension, neither the table's
    definition nor its content will be dumped by pg_dump.

the assumption being that both the definition and the content would be
re-loaded by executing the extension's SQL script. The purpose of
pg_extension_config_dump() is to allow part or all of the data in the
table to be treated as user data and thus dumped; this is assumed to be
data not supplied by the extension script but by subsequent user
insertions.

I don't recall that we thought very hard about what should happen when
pg_dump switches are used to produce a selective dump, but ISTM reasonable
that if it's "user data" then it should be dumped only if data in a
regular user table would be. So I agree it's pretty broken that
"pg_dump -t foo" will dump data belonging to a config table not selected
by the -t switch. I think this should be changed in both HEAD and 9.1
(note that HEAD will presumably return to the 9.1 behavior once that
--exclude-table-data patch gets fixed).
What's not apparent to me is whether there's an argument for doing more
than that. It strikes me that the current design is not very friendly
towards the idea of an extension that creates a table that's meant solely
to hold user data --- you'd have to mark it as "config", which seems a bit
unfortunate terminology for that case. Is it important to do something
about that, and if so what?

			regards, tom lane
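Background for readers following along: pg_dump decides that an object
belongs to an extension via pg_depend entries with deptype = 'e'
(extension membership), which is why member tables are invisible to it by
default. A sketch of a catalog query (9.1-era catalogs, untested here)
that lists a database's extension-member tables:

```sql
-- Tables that are members of an extension (deptype 'e' records
-- extension membership); pg_dump skips these by default unless they
-- are also flagged as configuration tables in pg_extension.extconfig.
SELECT c.oid::regclass AS member_table,
       e.extname
FROM pg_depend   AS d
JOIN pg_class     AS c ON c.oid = d.objid
JOIN pg_extension AS e ON e.oid = d.refobjid
WHERE d.classid    = 'pg_class'::regclass
  AND d.refclassid = 'pg_extension'::regclass
  AND d.deptype    = 'e'
  AND c.relkind    = 'r';
```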
On Mon, Jan 30, 2012 at 11:18 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I don't recall that we thought very hard about what should happen when
> pg_dump switches are used to produce a selective dump, but ISTM
> reasonable that if it's "user data" then it should be dumped only if
> data in a regular user table would be.

Yep.

> What's not apparent to me is whether there's an argument for doing more
> than that. It strikes me that the current design is not very friendly
> towards the idea of an extension that creates a table that's meant
> solely to hold user data --- you'd have to mark it as "config" which
> seems a bit unfortunate terminology for that case. Is it important to
> do something about that, and if so what?

Is this anything more than a naming problem?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Mon, Jan 30, 2012 at 11:18:31PM -0500, Tom Lane wrote:
> I don't recall that we thought very hard about what should happen when
> pg_dump switches are used to produce a selective dump, but ISTM
> reasonable that if it's "user data" then it should be dumped only if
> data in a regular user table would be. So I agree it's pretty broken
> that "pg_dump -t foo" will dump data belonging to a config table not
> selected by the -t switch. I think this should be changed in both HEAD
> and 9.1 (note that HEAD will presumably return to the 9.1 behavior once
> that --exclude-table-data patch gets fixed).

Perhaps a better way of dealing with this is providing a way of dumping
extensions explicitly. Then you could say:

pg_dump --extension=postgis -s

to get the data. And you can use all the normal pg_dump options for
controlling the output. The flag currently used to separate the table
schema from the table content could then interact logically.

Another way perhaps:

pg_dump --extension-postgis=data-only
pg_dump --extension-postgis=schema
pg_dump --extension-postgis=all
pg_dump --extension-postgis=none

The last being the default. Just throwing out some completely different
ideas.

Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> He who writes carelessly confesses thereby at the very outset that he
> does not attach much importance to his own thoughts.
  -- Arthur Schopenhauer
On 01/31/2012 04:36 AM, Robert Haas wrote:
> On Mon, Jan 30, 2012 at 11:18 PM, Tom Lane<tgl@sss.pgh.pa.us> wrote:
>> I don't recall that we thought very hard about what should happen when
>> pg_dump switches are used to produce a selective dump, but ISTM
>> reasonable that if it's "user data" then it should be dumped only if
>> data in a regular user table would be.
>
> Yep.
>
>> What's not apparent to me is whether there's an argument for doing more
>> than that.  It strikes me that the current design is not very friendly
>> towards the idea of an extension that creates a table that's meant
>> solely to hold user data --- you'd have to mark it as "config" which
>> seems a bit unfortunate terminology for that case.  Is it important to
>> do something about that, and if so what?
>
> Is this anything more than a naming problem?

Seems to me that would be dependent on what the future plans are for the
extension mechanism. There is also the issue of backward compatibility
for those people that are using configuration tables in their extensions
and would like to maintain that separation. I could see adding another
function that is similar and would be used to identify strictly user
data tables.

-- 
Adrian Klaver
adrian.klaver@gmail.com
Adrian Klaver <adrian.klaver@gmail.com> writes:
> On 01/31/2012 04:36 AM, Robert Haas wrote:
>> On Mon, Jan 30, 2012 at 11:18 PM, Tom Lane<tgl@sss.pgh.pa.us> wrote:
>>> What's not apparent to me is whether there's an argument for doing more
>>> than that.  It strikes me that the current design is not very friendly
>>> towards the idea of an extension that creates a table that's meant
>>> solely to hold user data --- you'd have to mark it as "config" which
>>> seems a bit unfortunate terminology for that case.  Is it important to
>>> do something about that, and if so what?

>> Is this anything more than a naming problem?

> Seems to me that would be dependent on what the future plans are for the
> extension mechanism.

My thought exactly --- maybe it's only a minor cosmetic issue that will
affect few people, or maybe this will someday be a major use-case.
I don't know.  I was hoping Dimitri had an opinion.

			regards, tom lane
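[For readers following the thread: the "config" marking under discussion is done with `pg_catalog.pg_extension_config_dump()`, called from the extension's install script. A minimal sketch — the table and extension names are made up, but the function and its two-argument signature (table, filter condition) are the real 9.1 API:]

```sql
-- In a hypothetical extension script, e.g. my_ext--1.0.sql:
CREATE TABLE my_ext_user_data (
    id      serial PRIMARY KEY,
    payload text
);

-- Tell pg_dump to include this table's contents even though the table
-- itself is owned by the extension.  The second argument is a WHERE
-- condition selecting which rows to dump; '' means "all rows".
SELECT pg_catalog.pg_extension_config_dump('my_ext_user_data', '');
```

[Tom's point is that this works for a pure user-data table, but calling it "config" is a misnomer for that case.]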
Hi,

Sorry to be late in the thread, I'm too busy right now. Cédric called it
to my immediate attention though.

Martijn van Oosterhout <kleptog@svana.org> writes:
> Perhaps a better way of dealing with this is providing a way of dumping
> extensions explicitly. Then you could say:
>
>   pg_dump --extension=postgis -s

That's something I'm working on in this commit fest under the “inline
extensions” topic, and we should have that facility in 9.2 barring major
obstacles (once consensus is reached).

Tom Lane <tgl@sss.pgh.pa.us> writes:
>>> On Mon, Jan 30, 2012 at 11:18 PM, Tom Lane<tgl@sss.pgh.pa.us> wrote:
>>>> What's not apparent to me is whether there's an argument for doing more
>>>> than that.  It strikes me that the current design is not very friendly
>>>> towards the idea of an extension that creates a table that's meant
>>>> solely to hold user data --- you'd have to mark it as "config" which
>>>> seems a bit unfortunate terminology for that case.  Is it important to
>>>> do something about that, and if so what?
>
> My thought exactly --- maybe it's only a minor cosmetic issue that will
> affect few people, or maybe this will someday be a major use-case.
> I don't know.  I was hoping Dimitri had an opinion.

So, being able to stuff data into an extension has been made possible to
address two use cases:

 - postgis
 - (sql only) data extensions

The former is very specific and as we didn't hear back from them I guess
we addressed it well enough; the latter is still WIP. It's about being
able to ship data as an extension (think timezone updates, geo ip, bank
card databases, exchange rates, etc). You need to be able to easily ship
those (CSV isn't the best we can do here, as generally it doesn't
include the schema nor the COPY recipe, which can be non-trivial) and to
easily update those.
The case for a table that is partly user data and partly extension data
is very thin; I think that if I had this need I would use inheritance
and a CHECK(user_data is true/false) constraint to filter the data.

So I sure would appreciate being able to call that data rather than
config, and to mark any table at once. If that doesn't need any pg_dump
stretching I think providing that in 9.2 would be great.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr     PostgreSQL : Expertise, Formation et Support
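[A sketch of the inheritance arrangement Dimitri describes — all table names here are hypothetical, chosen to match his geo-ip example:]

```sql
-- Parent table: queries against it see both extension-shipped and
-- user-added rows.
CREATE TABLE geoip (
    network   cidr,
    country   text,
    user_data boolean NOT NULL
);

-- Child holding rows shipped by the extension.
CREATE TABLE geoip_shipped (
    CHECK (user_data IS FALSE)
) INHERITS (geoip);

-- Child holding rows added locally by the user.
CREATE TABLE geoip_user (
    CHECK (user_data IS TRUE)
) INHERITS (geoip);
```

[With the data physically separated per child table, dump policy can treat each child differently instead of having to track row provenance inside a single table.]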
On Wed, Feb 01, 2012 at 10:02:14PM +0100, Dimitri Fontaine wrote:
> The case for a table that is partly user data and partly extension data
> is very thin, I think that if I had this need I would use inheritance
> and a CHECK(user_data is true/false) constraint to filter the data.

definitely agree. i.e. i don't really see a case where we'd have data
from both the extension and normal usage in the same table. and the
overhead of tracking the source of data seems to be excessive.

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact
with it.
                                               http://depesz.com/