Discussion: No pg_dumplo on 8.2.4
Hi again, a different question. On a previous upgrade (to 7.4, I think) I used the pg_dumplo contrib utility to add the large objects back to the databases I had restored via pg_dumpall (not all of them use large objects). I liked it because it was easy: I didn't have to drop the databases with large objects and re-import them from pg_dump dumps, which seemed like more work (and more room for problems).

Now I see that there is no pg_dumplo in the contrib directory for 8.2.4 (or at least I didn't find it). Which is the best method to import my large objects in this case?

1. Import everything via pg_dumpall + psql, drop the databases with large objects, then re-import those databases with pg_dump + psql.
2. Import everything via pg_dumpall + psql, then import the large-object databases with pg_dump + psql (without dropping them first).
3. Something else?

Thanks in advance

--
********************************************************
Daniel Rubio Rodríguez
OASI (Organisme Autònom Per la Societat de la Informació)
c/ Assalt, 12
43003 - Tarragona
Tef.: 977.244.007 - Fax: 977.224.517
e-mail: drubio a oasi.org
********************************************************
Daniel Rubio <drubior@tinet.org> writes:
> Now I see that there is no pg_dumplo in the contrib directory for 8.2.4

That's because it's obsolete --- regular pg_dump can handle large objects now.

regards, tom lane
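A minimal sketch of the per-database path this suggests, assuming a hypothetical database named "mydb". The commands are echoed rather than executed (a dry run); clear the RUN variable to run them for real:

```shell
#!/bin/sh
# Dry-run sketch (hypothetical database name "mydb"): commands are
# printed, not executed. Set RUN= (empty) to execute them for real.
RUN=echo

# In 8.2, pg_dump includes large objects in whole-database dumps;
# the custom format (-Fc) is the most convenient way to carry them:
$RUN pg_dump -Fc -f mydb.dump mydb

# -b/--blobs forces large objects into selective dumps (e.g. with -t):
$RUN pg_dump -Fc -b -t sometable -f mydb_partial.dump mydb

# Restore a custom-format dump into the new cluster with pg_restore:
$RUN pg_restore -d mydb mydb.dump
```

With this, option 2 from the question (pg_dumpall for globals and plain databases, pg_dump/pg_restore for the large-object databases) needs no separate large-object pass at all.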
Hello All,

We have a postgres setup on Solaris 10 with Sun Cluster for HA purposes. Two nodes are configured in the cluster in active-passive mode, with pg_data stored on external storage. Everything is working as expected; however, when we either switch the resource group from one node to the other, or restart the resource group on the primary node, the app fails with "An I/O error occurred while sending to the backend." and doesn't recover from the db failover. All queries to the db give the above error after the resource group restart. Our app uses Resin container db pooling, with the following HA parameters set. With the same settings, the app recovers just fine with the database configured in non-cluster mode, i.e. no Sun Cluster setup.

<database>
  <jndi-name>jdbc/nbbsDB</jndi-name>
  <driver type="org.postgresql.Driver">
    <url>jdbc:postgresql://db-vip:5432/appdbname</url>
    <user>appusr</user>
    <password>apppass</password>
  </driver>
  <max-connections>100</max-connections>
  <max-idle-time>5m</max-idle-time>
  <max-active-time>6h</max-active-time>
  <max-pool-time>24h</max-pool-time>
  <connection-wait-time>30s</connection-wait-time>
  <max-overflow-connections>0</max-overflow-connections>
  <ping-table>pingtable</ping-table>
  <ping>true</ping>
  <ping-interval>60s</ping-interval>
  <prepared-statement-cache-size>10</prepared-statement-cache-size>
  <spy>false</spy>
</database>

Any pointers to debug this further are greatly appreciated. We are running postgres 8.2.4.

The other thing I noticed in pg_log/server.logs is that whenever I restart postgres I get the error below, even when there is no other postgres running on 5432:

"LOG: could not bind IPv6 socket: Cannot assign requested address
HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry."

Thanks, Stalin
Hi,

Subbiah Stalin-XCGF84 wrote:
> Any pointers to debug this further are greatly appreciated.

I'm not quite sure how Sun Cluster works, but to me it sounds like a shared-disk failover solution (see [1] for more details). As such, only one node should run a postgres instance at any time. Does Sun Cluster take care of that?

> "LOG: could not bind IPv6 socket: Cannot assign requested address
> HINT: Is another postmaster already running on port 5432? If not, wait
> a few seconds and retry."

That's talking about IPv6. Are you using IPv6? Is Sun Cluster doing something magic WRT your network?

(BTW, please don't cross-post (removed -performance). And don't reply to emails when you intend to start a new thread, thanks.)

Regards

Markus

[1]: Postgres Documentation, High Availability and Load Balancing: http://www.postgresql.org/docs/8.2/static/high-availability.html
Yes, it's a shared-disk failover solution as described in [1], and only one node will run the pg instance. We got it fixed by setting ping-interval in Resin from 60s to 0s, which effectively validates every connection before handing it to the app.

> That's talking about IPv6. Are you using IPv6? Is Sun Cluster doing
> something magic WRT your network?

Well, we see this error on a stand-alone Solaris 10 box with no cluster. How do I check whether it's set up to use IPv6 or not?

Thanks, Stalin

-----Original Message-----
From: Markus Schiltknecht [mailto:markus@bluegap.ch]
Sent: Friday, September 07, 2007 1:24 AM
To: Subbiah Stalin-XCGF84
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Postgres with Sun Cluster HA/Solaris 10
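On the IPv6 question: that bind failure typically just means the host resolves "localhost" to an IPv6 address (::1) that the postmaster then can't bind, and it is harmless as long as the IPv4 socket comes up. A hedged sketch of things to check; the netstat address-family flags are the Solaris 10 variants (an assumption about your box), while the getent line is portable:

```shell
#!/bin/sh
# What does this host think "localhost" is? An IPv6 entry (::1) here is
# what makes the postmaster attempt an IPv6 bind at startup.
getent hosts localhost

# On Solaris 10, check each address family for a listener on 5432.
# Commented out here because they need a running postmaster:
#   netstat -an -f inet6 | grep 5432    # IPv6 listeners
#   netstat -an -f inet  | grep 5432    # IPv4 listeners

# Also worth a look: what postgres is told to listen on
# (path is an assumption; adjust PGDATA for your install):
#   grep '^listen_addresses' "$PGDATA/postgresql.conf"
```

If only the IPv4 listener shows up and clients connect over IPv4, the log line can be ignored.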