Thread: UUID-OSSP Contrib Module Compilation Issue

UUID-OSSP Contrib Module Compilation Issue

From:
Bruce McAlister
Date:
Hi All,

I am trying to build the uuid-ossp contrib module for PostgreSQL 8.3.4.
I am building on Solaris x86 with Sun Studio 12.

I built the ossp-uuid version 1.6.2 libraries and installed them,
however, whenever I attempt to build the contrib module I always end up
with the following error:

----------------------
+ cd contrib
+ cd uuid-ossp
+ make all
sed 's,MODULE_PATHNAME,$libdir/uuid-ossp,g' uuid-ossp.sql.in >uuid-ossp.sql
/usr/bin/cc -Xa -I/usr/sfw/include -KPIC -I. -I../../src/include
-I/usr/sfw/include   -c -o uuid-ossp.o uuid-ossp.c
"uuid-ossp.c", line 29: #error: OSSP uuid.h not found
cc: acomp failed for uuid-ossp.c
make: *** [uuid-ossp.o] Error 2
----------------------

I have the ossp uuid libraries and headers in the standard locations
(/usr/include, /usr/lib), but the checks within the contrib module don't
appear to find the ossp uuid headers I have installed.

Am I missing something here, or could the #ifdefs have something to do
with it not picking up the newer ossp uuid definitions?

Any suggestions would be greatly appreciated.

Thanks
Bruce

Re: UUID-OSSP Contrib Module Compilation Issue

From:
Tom Lane
Date:
Bruce McAlister <bruce.mcalister@blueface.ie> writes:
> I am trying to build the uuid-ossp contrib module for PostgreSQL 8.3.4.
> I am building on Solaris x86 with Sun Studio 12.

> I built the ossp-uuid version 1.6.2 libraries and installed them,
> however, whenever I attempt to build the contrib module I always end up
> with the following error:
> "uuid-ossp.c", line 29: #error: OSSP uuid.h not found

Um ... did you run PG's configure script with --with-ossp-uuid?
It looks like either you didn't do that, or configure doesn't know
to look in the place where you put the ossp-uuid header files.

            regards, tom lane
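
If configure cannot see the OSSP headers or libraries where they were installed, the usual way to point it at them is with --with-includes and --with-libraries. A minimal sketch, with placeholder paths standing in for wherever uuid.h and the uuid library actually live:

    ./configure --with-ossp-uuid \
                --with-includes=/path/to/ossp/include \
                --with-libraries=/path/to/ossp/lib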

Re: UUID-OSSP Contrib Module Compilation Issue

From:
Bruce McAlister
Date:
>
> Um ... did you run PG's configure script with --with-ossp-uuid?
> It looks like either you didn't do that, or configure doesn't know
> to look in the place where you put the ossp-uuid header files.
>

Doh, I missed that. However, I have now included that option, but it
still does not find the libraries that I have installed.

My configure options are:

./configure --prefix=/opt/postgresql-v8.3.4 \
            --with-openssl \
            --without-readline \
            --with-perl \
            --enable-integer-datetimes \
            --enable-thread-safety \
            --enable-dtrace \
            --with-ossp-uuid

When I run configure with the above options, I end up with the following
configure error:

checking for uuid_export in -lossp-uuid... no
checking for uuid_export in -luuid... no
configure: error: library 'ossp-uuid' or 'uuid' is required for OSSP-UUID

The uuid library that I built was obtained from the following URL, as
mentioned in the documentation:

http://www.ossp.org/pkg/lib/uuid/

I've built and installed version 1.6.2; the libraries/headers are
installed in /usr/lib and /usr/include, and the CLI tool is in /usr/bin.

ll /usr/lib/*uuid* | grep 'Oct 28'
-rw-r--r--   1 root     bin        81584 Oct 28 15:33 /usr/lib/libuuid_dce.a
-rw-r--r--   1 root     bin          947 Oct 28 15:33 /usr/lib/libuuid_dce.la
lrwxrwxrwx   1 root     root          22 Oct 28 15:34 /usr/lib/libuuid_dce.so -> libuuid_dce.so.16.0.22
lrwxrwxrwx   1 root     root          22 Oct 28 15:34 /usr/lib/libuuid_dce.so.16 -> libuuid_dce.so.16.0.22
-rwxr-xr-x   1 root     bin        80200 Oct 28 15:33 /usr/lib/libuuid_dce.so.16.0.22
-rw-r--r--   1 root     bin        77252 Oct 28 15:33 /usr/lib/libuuid.a
-rw-r--r--   1 root     bin          919 Oct 28 15:33 /usr/lib/libuuid.la
lrwxrwxrwx   1 root     root          18 Oct 28 15:34 /usr/lib/libuuid.so -> libuuid.so.16.0.22
lrwxrwxrwx   1 root     root          18 Oct 28 15:34 /usr/lib/libuuid.so.16 -> libuuid.so.16.0.22
-rwxr-xr-x   1 root     bin        76784 Oct 28 15:33 /usr/lib/libuuid.so.16.0.22

Do I need to use a specific version of the ossp-uuid libraries for this
module?

Thanks
Bruce

Re: UUID-OSSP Contrib Module Compilation Issue

From:
"Hiroshi Saito"
Date:
Hi.

Um, you need to reconfigure PostgreSQL then.  It is necessary to specify --with-ossp-uuid.

Regards,
Hiroshi Saito

----- Original Message -----
From: "Bruce McAlister" <bruce.mcalister@blueface.ie>
To: "pgsql" <pgsql-general@postgresql.org>
Sent: Wednesday, October 29, 2008 8:01 AM
Subject: [GENERAL] UUID-OSSP Contrib Module Compilation Issue


> Hi All,
>
> I am trying to build the uuid-ossp contrib module for PostgreSQL 8.3.4.
> I am building on Solaris x86 with Sun Studio 12.
>
> I built the ossp-uuid version 1.6.2 libraries and installed them,
> however, whenever I attempt to build the contrib module I always end up
> with the following error:
>
> ----------------------
> + cd contrib
> + cd uuid-ossp
> + make all
> sed 's,MODULE_PATHNAME,$libdir/uuid-ossp,g' uuid-ossp.sql.in >uuid-ossp.sql
> /usr/bin/cc -Xa -I/usr/sfw/include -KPIC -I. -I../../src/include
> -I/usr/sfw/include   -c -o uuid-ossp.o uuid-ossp.c
> "uuid-ossp.c", line 29: #error: OSSP uuid.h not found
> cc: acomp failed for uuid-ossp.c
> make: *** [uuid-ossp.o] Error 2
> ----------------------
>
> I have the ossp uuid libraries and headers in the standard locations
> (/usr/include, /usr/lib), but the checks within the contrib module don't
> appear to find the ossp uuid headers I have installed.
>
> Am I missing something here, or could the #ifdefs have something to do
> with it not picking up the newer ossp uuid definitions?
>
> Any suggestions would be greatly appreciated.
>
> Thanks
> Bruce
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general

Re: UUID-OSSP Contrib Module Compilation Issue

From:
Tom Lane
Date:
Bruce McAlister <bruce.mcalister@blueface.ie> writes:
> When I run configure with the above options, I end up with the following
> configure error:

> checking for uuid_export in -lossp-uuid... no
> checking for uuid_export in -luuid... no
> configure: error: library 'ossp-uuid' or 'uuid' is required for OSSP-UUID

Huh.  Nothing obvious in your info about why it wouldn't work.  I think
you'll need to dig through the config.log output to see why these link
tests are failing.  (They'll be a few hundred lines above the end of the
log, because the last part of the log is always a dump of configure's
internal variables.)

            regards, tom lane

Re: UUID-OSSP Contrib Module Compilation Issue

From:
"Hiroshi Saito"
Date:
> Do I need to use a specific version of the ossp-uuid libraries for this
> module?

The 1.6.2 stable version which you use is right.

Regards,
Hiroshi Saito

Re: UUID-OSSP Contrib Module Compilation Issue

From:
Bruce McAlister
Date:
>
> Huh.  Nothing obvious in your info about why it wouldn't work.  I think
> you'll need to dig through the config.log output to see why these link
> tests are failing.  (They'll be a few hundred lines above the end of the
> log, because the last part of the log is always a dump of configure's
> internal variables.)
>

In addition to the missing configure option, it turned out that an
LDFLAGS parameter was missing. I just added -L/usr/lib to LDFLAGS, and
it all builds successfully now.

Thanks for the pointers :)
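
For anyone hitting the same thing, the working invocation ended up looking roughly like the sketch below; the option list is the one quoted earlier in the thread, and only the LDFLAGS setting is the new part:

    LDFLAGS="-L/usr/lib" ./configure --prefix=/opt/postgresql-v8.3.4 \
                --with-openssl \
                --without-readline \
                --with-perl \
                --enable-integer-datetimes \
                --enable-thread-safety \
                --enable-dtrace \
                --with-ossp-uuid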

Re: UUID-OSSP Contrib Module Compilation Issue

From:
Bruce McAlister
Date:
>
> The 1.6.2 stable version which you use is right.
>

Thanks, we managed to get it working now. Thanks for the pointers.

Re: UUID-OSSP Contrib Module Compilation Issue

From:
Tom Lane
Date:
Bruce McAlister <bruce.mcalister@blueface.ie> writes:
> In addition to the missing configure option, it turned out to be missing
> LDFLAGS parameters, I just added -L/usr/lib to LDFLAGS and it all built
> successfully now.

Bizarre ... I've never heard of a Unix system that didn't consider that
a default place to look.  Unless this is a 64-bit machine and uuid
should have installed itself in /usr/lib64?

            regards, tom lane

Decreasing WAL size effects

From:
Jason Long
Date:
I am planning on setting up PITR for my application. 

It does not see much traffic and it looks like the 16 MB log files switch about every 4 hours or so during business hours.
I am also about to roll out functionality to store documents in a bytea column.  This should make the logs roll faster.

I also have to ship them off site using a T1 so setting the time to automatically switch files will just waste bandwidth if they are still going to be 16 MB anyway.

1.  What is the effect of recompiling and reducing the default size of the WAL files?
2.  What is the minimum suggested size?
3.  If I reduce the size how will this work if I try to save a document that is larger than the WAL size?

Any other suggestions would be most welcome.
Thank you for your time,

Jason Long
CEO and Chief Software Engineer
BS Physics, MS Chemical Engineering
http://www.octgsoftware.com
HJBug Founder and President
http://www.hjbug.com  

Re: Decreasing WAL size effects

From:
"Joshua D. Drake"
Date:
Jason Long wrote:
> I am planning on setting up PITR for my application.

> I also have to ship them off site using a T1 so setting the time to
> automatically switch files will just waste bandwidth if they are still
> going to be 16 MB anyway.
>
> 1.  What is the effect of recompiling and reducing the default size of
> the WAL files?

Increased I/O

> 2.  What is the minimum suggested size?

16 megs, the default.

> 3.  If I reduce the size how will this work if I try to save a document
> that is larger than the WAL size?

You will create more segments.

Joshua D. Drake

Re: UUID-OSSP Contrib Module Compilation Issue

From:
Bruce McAlister
Date:
>
> Bizarre ... I've never heard of a Unix system that didn't consider that
> a default place to look.  Unless this is a 64-bit machine and uuid
> should have installed itself in /usr/lib64?
>

It is a rather peculiar issue; I also assumed that it would check the
standard locations, but I thought I would try it anyway and see what
happens.

The box is indeed a 64-bit system, but the packages being built are all
32-bit, and therefore all the libraries being built are in the standard
locations.

Re: UUID-OSSP Contrib Module Compilation Issue

From:
Tom Lane
Date:
Bruce McAlister <bruce.mcalister@blueface.ie> writes:
>> Bizarre ... I've never heard of a Unix system that didn't consider that
>> a default place to look.  Unless this is a 64-bit machine and uuid
>> should have installed itself in /usr/lib64?

> It is a rather peculiar issue, I also assumed that it would check the
> standard locations, but I thought I would try it anyway and see what
> happens.

> The box is indeed a 64-bit system, but the packages being built are all
> 32-bit, and therefore all the libraries being built are in the standard
> locations.

Hmm ... it sounds like some part of the compile toolchain didn't get the
word about wanting to build 32-bit.  Perhaps the switch you really need
is along the lines of CFLAGS=-m32.

            regards, tom lane
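
If the real culprit were a 32-bit/64-bit mismatch, the suggestion above might be applied roughly like this (a sketch only; it assumes Sun Studio 12's cc accepts -m32, and the remaining options are the ones shown earlier in the thread):

    CFLAGS="-m32" LDFLAGS="-L/usr/lib" ./configure --prefix=/opt/postgresql-v8.3.4 --with-ossp-uuid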

Re: Decreasing WAL size effects

From:
Greg Smith
Date:
On Tue, 28 Oct 2008, Jason Long wrote:

> I also have to ship them off site using a T1 so setting the time to
> automatically switch files will just waste bandwidth if they are still going
> to be 16 MB anyway.

The best way to handle this is to clear the unused portion of the WAL file
and then compress it before sending over the link.  There is a utility
named pg_clearxlogtail available at
http://www.2ndquadrant.com/replication.htm that handles the first part of
that, which you may find useful here.
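
A rough sketch of how that typically gets wired into archive_command, assuming pg_clearxlogtail is used as a stdin/stdout filter (it reads a segment on standard input and writes the cleared copy to standard output); the archive path below is only a placeholder:

    # postgresql.conf (8.3): compress cleared segments, switching at most every 5 minutes
    archive_mode = on
    archive_timeout = 300
    archive_command = 'pg_clearxlogtail < %p | gzip > /wal-archive/%f.gz'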

This reminds me yet again that pg_clearxlogtail should probably get added
to the next commitfest for inclusion into 8.4; it's really essential for a
WAN-based PITR setup and it would be nice to include it with the
distribution.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

Re: Decreasing WAL size effects

From:
"Joshua D. Drake"
Date:
On Wed, 2008-10-29 at 09:05 -0400, Greg Smith wrote:
> On Tue, 28 Oct 2008, Jason Long wrote:
>
> > I also have to ship them off site using a T1 so setting the time to
> > automatically switch files will just waste bandwidth if they are still going
> > to be 16 MB anyway.
>
> The best way to handle this is to clear the unused portion of the WAL file
> and then compress it before sending over the link.  There is a utility
> named pg_clearxlogtail available at
> http://www.2ndquadrant.com/replication.htm that handles the first part of
> that, which you may find useful here.
>
> This reminds me yet again that pg_clearxlogtail should probably get added
> to the next commitfest for inclusion into 8.4; it's really essential for a
> WAN-based PITR setup and it would be nice to include it with the
> distribution.

What is to be gained over just using rsync with -z?

Joshua D. Drake

>
> --
> * Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
>
--


Re: Decreasing WAL size effects

From:
Greg Smith
Date:
On Thu, 30 Oct 2008, Joshua D. Drake wrote:

>> This reminds me yet again that pg_clearxlogtail should probably get added
>> to the next commitfest for inclusion into 8.4; it's really essential for a
>> WAN-based PITR setup and it would be nice to include it with the
>> distribution.
>
> What is to be gained over just using rsync with -z?

When a new XLOG segment is created, it gets zeroed out first, so that
there's no chance it can accidentally look like a valid segment.  But when
an existing segment is recycled, it gets a new header and that's it--the
rest of the 16MB is still left behind from whatever was in that segment
before.  That means that even if you only write, say, 1MB of new data to a
recycled segment before a timeout that causes you to ship it somewhere
else, there will still be a full 15MB worth of junk from its previous life
which may or may not be easy to compress.

I just noticed that recently this project has been pushed into pgfoundry,
it's at
http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/clearxlogtail/clearxlogtail/

What clearxlogtail does is look inside the WAL segment, and it clears the
"tail" behind the portion of that is really used.  So our example file
would end up with just the 1MB of useful data, followed by 15MB of zeros
that will compress massively.  Since it needs to know how XLogPageHeader
is formatted and if it makes a mistake your archive history will be
silently corrupted, it's kind of a scary utility to just download and use.
That's why I'd like to see it turn into a more official contrib module, so
that it will never lose sync with the page header format and be available
to anyone using PITR.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

Re: Decreasing WAL size effects

From:
Jason Long
Date:
Greg Smith wrote:
> On Thu, 30 Oct 2008, Joshua D. Drake wrote:
>
>>> This reminds me yet again that pg_clearxlogtail should probably get
>>> added
>>> to the next commitfest for inclusion into 8.4; it's really essential
>>> for a
>>> WAN-based PITR setup and it would be nice to include it with the
>>> distribution.
>>
>> What is to be gained over just using rsync with -z?
>
> When a new XLOG segment is created, it gets zeroed out first, so that
> there's no chance it can accidentally look like a valid segment.  But
> when an existing segment is recycled, it gets a new header and that's
> it--the rest of the 16MB is still left behind from whatever was in
> that segment before.  That means that even if you only write, say, 1MB
> of new data to a recycled segment before a timeout that causes you to
> ship it somewhere else, there will still be a full 15MB worth of junk
> from its previous life which may or may not be easy to compress.
>
> I just noticed that recently this project has been pushed into
> pgfoundry, it's at
> http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/clearxlogtail/clearxlogtail/
>
> What clearxlogtail does is look inside the WAL segment, and it clears
> the "tail" behind the portion of that is really used.  So our example
> file would end up with just the 1MB of useful data, followed by 15MB
> of zeros that will compress massively.  Since it needs to know how
> XLogPageHeader is formatted and if it makes a mistake your archive
> history will be silently corrupted, it's kind of a scary utility to
> just download and use.
I would really like to add something like this to my application.
1.  Should I be scared or is it just scary in general?
2.  Is this safe to use with 8.3.4?
3.  Any pointers on how to install and configure this?
> That's why I'd like to see it turn into a more official contrib
> module, so that it will never lose sync with the page header format
> and be available to anyone using PITR.
>
> --
> * Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD


Re: Decreasing WAL size effects

From:
Kyle Cordes
Date:
Greg Smith wrote:

> there's no chance it can accidentally look like a valid segment.  But
> when an existing segment is recycled, it gets a new header and that's
> it--the rest of the 16MB is still left behind from whatever was in that
> segment before.  That means that even if you only write, say, 1MB of new

[...]

> What clearxlogtail does is look inside the WAL segment, and it clears
> the "tail" behind the portion of that is really used.  So our example
> file would end up with just the 1MB of useful data, followed by 15MB of


It sure would be nice if there was a way for PG itself to zero the
unused portion of logs as they are completed, perhaps this will make it
in as part of the ideas discussed on this list a while back to make a
more "out of the box" log-ship mechanism?


--
Kyle Cordes
http://kylecordes.com

Re: Decreasing WAL size effects

From:
Jason Long
Date:
Kyle Cordes wrote:
> Greg Smith wrote:
>> there's no chance it can accidentally look like a valid segment.  But
>> when an existing segment is recycled, it gets a new header and that's
>> it--the rest of the 16MB is still left behind from whatever was in that
>> segment before.  That means that even if you only write, say, 1MB of new
>
> [...]
>
>> What clearxlogtail does is look inside the WAL segment, and it clears
>> the "tail" behind the portion that is really used.  So our example
>> file would end up with just the 1MB of useful data, followed by 15MB of
>
> It sure would be nice if there was a way for PG itself to zero the
> unused portion of logs as they are completed, perhaps this will make it
> in as part of the ideas discussed on this list a while back to make a
> more "out of the box" log-ship mechanism?

I agree totally.  I looked at the code for clearxlogtail, and it seems
short and not very complex.  Hopefully something like this will at least
be a trivial-to-set-up option in 8.4.



Re: Decreasing WAL size effects

From:
Greg Smith
Date:
On Thu, 30 Oct 2008, Kyle Cordes wrote:

> It sure would be nice if there was a way for PG itself to zero the unused
> portion of logs as they are completed, perhaps this will make it in as part
> of the ideas discussed on this list a while back to make a more "out of the
> box" log-ship mechanism?

The overhead of clearing out the whole thing is just large enough that it
can be disruptive on systems generating lots of WAL traffic, so you don't
want the main database processes bothering with that.  A related fact is
that there is a noticeable slowdown to clients that need a segment switch
on a newly initialized and fast system that has to create all its WAL
segments, compared to one that has been active long enough to only be
recycling them.  That's why this sort of thing has been getting pushed
into the archive_command path; nothing performance-sensitive that can slow
down clients is happening there, so long as your server is powerful enough
to handle that in parallel with everything else going on.

Now, it would be possible to have that less sensitive archive code path
zero things out, but you'd need to introduce a way to note when it's been
done (so you don't do it for a segment twice) and a way to turn it off so
everybody doesn't go through that overhead (which probably means another
GUC).  That's a bit much trouble to go through just for a feature with a
fairly limited use-case that can easily live outside of the engine
altogether.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

Re: Decreasing WAL size effects

From:
Kyle Cordes
Date:
Greg Smith wrote:
> On Thu, 30 Oct 2008, Kyle Cordes wrote:
>
>> It sure would be nice if there was a way for PG itself to zero the
>> unused portion of logs as they are completed, perhaps this will make


> The overhead of clearing out the whole thing is just large enough that
> it can be disruptive on systems generating lots of WAL traffic, so you

Hmm.  My understanding is that it wouldn't need to clear out the whole
thing, just the unused portion at the end.  This wouldn't add any
initialization effort at startup / segment creation at all, right?  The
unused portions at the end only happen when a WAL segment needs to be
finished "early" for some reason.  I'd expect that in a heavily loaded
system, PG would be filling each segment, not ending them early.

However, there could easily be some reason that I am not familiar with,
that would cause a busy PG to nonetheless end a lot of segments early.

--
Kyle Cordes
http://kylecordes.com

Re: Decreasing WAL size effects

From:
Gregory Stark
Date:
Greg Smith <gsmith@gregsmith.com> writes:

> Now, it would be possible to have that less sensitive archive code path zero
> things out, but you'd need to introduce a way to note when it's been done (so
> you don't do it for a segment twice) and a way to turn it off so everybody
> doesn't go through that overhead (which probably means another GUC).  That's a
> bit much trouble to go through just for a feature with a fairly limited
> use-case that can easily live outside of the engine altogether.

Wouldn't it be just as good to indicate to the archive command the amount of
real data in the wal file and have it only bother copying up to that point?

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's Slony Replication support!

Re: Decreasing WAL size effects

From:
Christophe
Date:
On Oct 30, 2008, at 2:54 PM, Gregory Stark wrote:
> Wouldn't it be just as good to indicate to the archive command the
> amount of
> real data in the wal file and have it only bother copying up to
> that point?

Hm!  Interesting question: Can the WAL files be truncated, rather
than zeroed, safely?

Re: Decreasing WAL size effects

From:
Kyle Cordes
Date:
Gregory Stark wrote:
> Greg Smith <gsmith@gregsmith.com> writes:
>
> Wouldn't it be just as good to indicate to the archive command the amount of
> real data in the wal file and have it only bother copying up to that point?

That sounds like a great solution to me; ideally it would be done in a
way that is always on (i.e. no setting, etc.).

On the log-recovery side, PG would need to be willing to accept
shorter-than-usual segments, if it's not already willing.


--
Kyle Cordes
http://kylecordes.com

Re: Decreasing WAL size effects

From:
Greg Smith
Date:
On Thu, 30 Oct 2008, Gregory Stark wrote:

> Wouldn't it be just as good to indicate to the archive command the amount of
> real data in the wal file and have it only bother copying up to that point?

That pushes the problem of writing a little chunk of code that reads only
the right amount of data and doesn't bother compressing the rest onto the
person writing the archive command.  Seems to me that leads back towards
wanting to bundle a contrib module with a good implementation of that with
the software.  The whole tail clearing bit is in the same situation
pg_standby was circa 8.2:  the software is available, and it works, but it
seems kind of sketchy to those not familiar with the source of the code.
Bundling it into the software as a contrib module just makes that problem
go away for end-users.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

Re: Decreasing WAL size effects

From:
Tom Lane
Date:
Greg Smith <gsmith@gregsmith.com> writes:
> That pushes the problem of writing a little chunk of code that reads only
> the right amount of data and doesn't bother compressing the rest onto the
> person writing the archive command.  Seems to me that leads back towards
> wanting to bundle a contrib module with a good implementation of that with
> the software.  The whole tail clearing bit is in the same situation
> pg_standby was circa 8.2:  the software is available, and it works, but it
> seems kind of sketchy to those not familiar with the source of the code.
> Bundling it into the software as a contrib module just makes that problem
> go away for end-users.

The real reason not to put that functionality into core (or even
contrib) is that it's a stopgap kluge.  What the people who want this
functionality *really* want is continuous (streaming) log-shipping, not
WAL-segment-at-a-time shipping.  Putting functionality like that into
core is infinitely more interesting than putting band-aids on a
segmented approach.

            regards, tom lane

Re: Decreasing WAL size effects

From:
Greg Smith
Date:
On Thu, 30 Oct 2008, Tom Lane wrote:

> The real reason not to put that functionality into core (or even
> contrib) is that it's a stopgap kluge.  What the people who want this
> functionality *really* want is continuous (streaming) log-shipping, not
> WAL-segment-at-a-time shipping.

Sure, and that's why I didn't care when this got kicked out of the March
CommitFest; was hoping a better one would show up.  But if 8.4 isn't going
out the door with the feature people really want, it would be nice to at
least make the stopgap kludge more easily available.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

Re: Decreasing WAL size effects

From:
Jason Long
Date:
Greg Smith wrote:
> On Thu, 30 Oct 2008, Tom Lane wrote:
>
>> The real reason not to put that functionality into core (or even
>> contrib) is that it's a stopgap kluge.  What the people who want this
>> functionality *really* want is continuous (streaming) log-shipping, not
>> WAL-segment-at-a-time shipping.
>
> Sure, and that's why I didn't care when this got kicked out of the
> March CommitFest; was hoping a better one would show up.  But if 8.4
> isn't going out the door with the feature people really want, it would
> be nice to at least make the stopgap kludge more easily available.
+1
Sure, I would rather have synchronous WAL shipping, but if that is going
to be a while, or if synchronous shipping would slow down my application, I can get
comfortably close enough for my purposes with some highly compressible WALs.
>
> --
> * Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
>


Re: Decreasing WAL size effects

From:
Kyle Cordes
Date:
Jason Long wrote:

> Sure, I would rather have synchronous WAL shipping, but if that is going
> to be a while, or if synchronous shipping would slow down my application,
> I can get comfortably close enough for my purposes with some highly
> compressible WALs.

I'm way out here on the outskirts (just a user with a small pile of
servers running PG)... I would also find any improvements in WAL
shipping helpful, between now and when continuous streaming is ready.


--
Kyle Cordes
http://kylecordes.com

Re: Decreasing WAL size effects

From:
Craig Ringer
Date:
Jason Long wrote:
> Greg Smith wrote:
>> On Thu, 30 Oct 2008, Tom Lane wrote:
>>
>>> The real reason not to put that functionality into core (or even
>>> contrib) is that it's a stopgap kluge.  What the people who want this
>>> functionality *really* want is continuous (streaming) log-shipping, not
>>> WAL-segment-at-a-time shipping.
>>
>> Sure, and that's why I didn't care when this got kicked out of the
>> March CommitFest; was hoping a better one would show up.  But if 8.4
>> isn't going out the door with the feature people really want, it would
>> be nice to at least make the stopgap kludge more easily available.
> +1
> Sure, I would rather have synchronous WAL shipping, but if that is going
> to be a while, or if synchronous shipping would slow down my application,
> I can get comfortably close enough for my purposes with some highly
> compressible WALs.

I also tend to agree; it'd be really handy. pg_clearxlogtail (which I
use) makes me nervous despite the restore tests I've done.

If Pg truncated the WAL files before calling archive_command, and would
accept truncated WAL files on restore, that'd be really useful. Failing
that, packaging pg_clearxlogtail so it was kept in sync with the main Pg
code would be a big step.

--
Craig Ringer

Re: Decreasing WAL size effects

From:
Craig Ringer
Date:
> If Pg truncated the WAL files before calling archive_command, and would
> accept truncated WAL files on restore, that'd be really useful.

On second thought - that'd prevent reuse of WAL files, or at least force
the filesystem to potentially allocate new storage for the part that was
truncated.

Is it practical or sane to pass another argument to the archive_command:
a byte offset within the WAL file that is the last byte that must be
copied? That way, the archive_command could just avoid reading any
garbage in the first place, and write a truncated WAL file to the
archive, but Pg wouldn't have to do anything to the original files.
There'd be no need for a tool like pg_clearxlogtail, as the core server
would just report what it already knows about the WAL file.

Sound practical / sane?
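
To make the proposal concrete, a purely hypothetical archive script for such an interface might look like the sketch below; the extra "used bytes" argument does not exist in any released PostgreSQL (it is only what is being proposed here), and the archive path is a placeholder:

    #!/bin/sh
    # Hypothetical: archive_command = '/usr/local/bin/archive_wal %p %f <used-bytes>'
    WALPATH="$1"; WALNAME="$2"; USED_BYTES="$3"
    # Copy only the meaningful part of the segment, compress it, and store it atomically.
    head -c "$USED_BYTES" "$WALPATH" | gzip > "/wal-archive/$WALNAME.gz.tmp" &&
        mv "/wal-archive/$WALNAME.gz.tmp" "/wal-archive/$WALNAME.gz"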

--
Craig Ringer

Re: Decreasing WAL size effects

From:
Magnus Hagander
Date:
On 31 okt 2008, at 02.18, Greg Smith <gsmith@gregsmith.com> wrote:

> On Thu, 30 Oct 2008, Tom Lane wrote:
>
>> The real reason not to put that functionality into core (or even
>> contrib) is that it's a stopgap kluge.  What the people who want this
>> functionality *really* want is continuous (streaming) log-shipping,
>> not
>> WAL-segment-at-a-time shipping.
>
> Sure, and that's why I didn't care when this got kicked out of the
> March CommitFest; was hoping a better one would show up.  But if 8.4
> isn't going out the door with the feature people really want, it
> would be nice to at least make the stopgap kludge more easily
> available.
>

+1.

It's not like we haven't had kludges in contrib before. We just need
to be careful to label it as temporary and say it will go away. As
long as it can be safe, that is. To me it sounds like passing the size
as a param and shipping a tool in contrib that makes use of it would be a
reasonable compromise, but I'm not deeply familiar with the code so I
could be wrong.

/Magnus


Re: Decreasing WAL size effects

From:
Aidan Van Dyk
Date:
* Greg Smith <gsmith@gregsmith.com> [081001 00:00]:

> The overhead of clearing out the whole thing is just large enough that it
> can be disruptive on systems generating lots of WAL traffic, so you don't
> want the main database processes bothering with that.  A related fact is
> that there is a noticeable slowdown to clients that need a segment switch
> on a newly initialized and fast system that has to create all its WAL
> segments, compared to one that has been active long enough to only be
> recycling them.  That's why this sort of thing has been getting pushed
> into the archive_command path; nothing performance-sensitive that can
> slow down clients is happening there, so long as your server is powerful
> enough to handle that in parallel with everything else going on.

> Now, it would be possible to have that less sensitive archive code path
> zero things out, but you'd need to introduce a way to note when it's been
> done (so you don't do it for a segment twice) and a way to turn it off so
> everybody doesn't go through that overhead (which probably means another
> GUC).  That's a bit much trouble to go through just for a feature with a
> fairly limited use-case that can easily live outside of the engine
> altogether.

Remember that the place where this benefit is big is on a generally idle
server. Is it possible to make the "time-based WAL switch" zero the tail?  You
don't even need to fsync it for durability (although you may want to, hopefully
preventing a larger fsync delay on the next commit).

<timid experience=none>
How about something like the attached?  It's been spun quickly, passed
regression tests, and some simple hand tests on REL8_3_STABLE.  It seems like
HEAD can't initdb on my machine (quad opteron with SW raid1); I tried a few
revisions in the last few days, and initdb dies on them all...

I'm no expert in the PG code; I just grepped around what looked like reasonable
functions in xlog.c until I (hopefully) figured out the basic flow of switching
to new xlog segments.    I *think* I'm using openLogFile and openLogOff
correctly.
 </timid>

With archiving set up, an archive_timeout of 30s, and a few manual
pg_start_backup/pg_stop_backup calls, you can see it *really* does make things
really compressible...

Its output looks like this:
    Archiving 000000010000000000000002
    Archiving 000000010000000000000003
    Archiving 000000010000000000000004
    Archiving 000000010000000000000005
    Archiving 000000010000000000000006
    LOG:  checkpoints are occurring too frequently (10 seconds apart)
    HINT:  Consider increasing the configuration parameter "checkpoint_segments".
    Archiving 000000010000000000000007
    Archiving 000000010000000000000008
    Archiving 000000010000000000000009
    LOG:  checkpoints are occurring too frequently (7 seconds apart)
    HINT:  Consider increasing the configuration parameter "checkpoint_segments".
    Archiving 00000001000000000000000A
    Archiving 00000001000000000000000B
    Archiving 00000001000000000000000C
    LOG:  checkpoints are occurring too frequently (6 seconds apart)
    HINT:  Consider increasing the configuration parameter "checkpoint_segments".
    Archiving 00000001000000000000000D
    LOG:  ZEROING xlog file 0 segment 14 from 12615680 - 16777216 [4161536 bytes]
    STATEMENT:  SELECT pg_stop_backup();
    Archiving 00000001000000000000000E
    Archiving 00000001000000000000000E.00C07098.backup
    LOG:  ZEROING xlog file 0 segment 15 from 8192 - 16777216 [16769024 bytes]
    STATEMENT:  SELECT pg_stop_backup();
    Archiving 00000001000000000000000F
    Archiving 00000001000000000000000F.00000C60.backup
    LOG:  ZEROING xlog file 0 segment 16 from 8192 - 16777216 [16769024 bytes]
    STATEMENT:  SELECT pg_stop_backup();
    Archiving 000000010000000000000010.00000F58.backup
    Archiving 000000010000000000000010
    LOG:  ZEROING xlog file 0 segment 17 from 8192 - 16777216 [16769024 bytes]
    STATEMENT:  SELECT pg_stop_backup();
    Archiving 000000010000000000000011
    Archiving 000000010000000000000011.00000020.backup
    LOG:  ZEROING xlog file 0 segment 18 from 6815744 - 16777216 [9961472 bytes]
    Archiving 000000010000000000000012
    LOG:  ZEROING xlog file 0 segment 19 from 8192 - 16777216 [16769024 bytes]
    Archiving 000000010000000000000013
    LOG:  ZEROING xlog file 0 segment 20 from 16384 - 16777216 [16760832 bytes]
    Archiving 000000010000000000000014
    LOG:  ZEROING xlog file 0 segment 23 from 8192 - 16777216 [16769024 bytes]
    STATEMENT:  SELECT pg_switch_xlog();
    Archiving 000000010000000000000017
    LOG:  ZEROING xlog file 0 segment 24 from 8192 - 16777216 [16769024 bytes]
    Archiving 000000010000000000000018
    LOG:  ZEROING xlog file 0 segment 25 from 8192 - 16777216 [16769024 bytes]
    Archiving 000000010000000000000019

You can see that when DB activity was heavy enough to fill an xlog segment
before the timeout (or interactive forced switch), it didn't zero anything.  It
only zeroed on a timeout switch, or a forced switch (pg_switch_xlog/pg_stop_backup).

And compressed xlog segments:
    -rw-r--r-- 1 mountie mountie   18477 2008-10-31 14:44 000000010000000000000010.gz
    -rw-r--r-- 1 mountie mountie   16394 2008-10-31 14:44 000000010000000000000011.gz
    -rw-r--r-- 1 mountie mountie 2721615 2008-10-31 14:52 000000010000000000000012.gz
    -rw-r--r-- 1 mountie mountie   16588 2008-10-31 14:52 000000010000000000000013.gz
    -rw-r--r-- 1 mountie mountie   19230 2008-10-31 14:52 000000010000000000000014.gz
    -rw-r--r-- 1 mountie mountie 4920063 2008-10-31 14:52 000000010000000000000015.gz
    -rw-r--r-- 1 mountie mountie 5024705 2008-10-31 14:52 000000010000000000000016.gz
    -rw-r--r-- 1 mountie mountie   18082 2008-10-31 14:52 000000010000000000000017.gz
    -rw-r--r-- 1 mountie mountie   18477 2008-10-31 14:52 000000010000000000000018.gz
    -rw-r--r-- 1 mountie mountie   16394 2008-10-31 14:52 000000010000000000000019.gz
    -rw-r--r-- 1 mountie mountie 2721615 2008-10-31 15:02 00000001000000000000001A.gz
    -rw-r--r-- 1 mountie mountie   16588 2008-10-31 15:02 00000001000000000000001B.gz
    -rw-r--r-- 1 mountie mountie   19230 2008-10-31 15:02 00000001000000000000001C.gz

And yes, even the non-zeroed segments compress well here, because
my test load is pretty simple:
    CREATE TABLE TEST
    (
     a numeric,
     b numeric,
     c numeric,
     i bigint not null
    );


    INSERT INTO test (a,b,c,i)
      SELECT random(),random(),random(),s FROM generate_series(1,1000000) s;


a.


--
Aidan Van Dyk                                             Create like a god,
aidan@highrise.ca                                       command like a king,
http://www.highrise.ca/                                   work like a slave.

Attachments

Re: Decreasing WAL size effects

From:
Aidan Van Dyk
Date:
* Aidan Van Dyk <aidan@highrise.ca> [081031 15:11]:
>     Archiving 000000010000000000000012
>     Archiving 000000010000000000000013
>     Archiving 000000010000000000000014

>     Archiving 000000010000000000000017
>     Archiving 000000010000000000000018
>     Archiving 000000010000000000000019

Just in case anybody noticed the skip in the above sequence: the missing few
got caught up in me actually using the terminal there, which made copy-pasting a
mess...  I just didn't try to copy/paste them...


--
Aidan Van Dyk                                             Create like a god,
aidan@highrise.ca                                       command like a king,
http://www.highrise.ca/                                   work like a slave.

Attachments

Re: Decreasing WAL size effects

From:
Aidan Van Dyk
Date:
* Aidan Van Dyk <aidan@highrise.ca> [081031 15:11]:
> How about something like the attached?  It's been spun quickly, passed
> regression tests, and some simple hand tests on REL8_3_STABLE.  It seems like
> HEAD can't initdb on my machine (quad opteron with SW raid1); I tried a few
> revisions in the last few days, and initdb dies on them all...

OK, HEAD does work; I don't know what was going on previously... Attached is my
patch against HEAD.

I'll try and pull out some machines on Monday to really thrash/crash this but
I'm running out of time today to set that up.

But in running HEAD, I've come across this:
    regression=# SELECT pg_stop_backup();
    WARNING:  pg_stop_backup still waiting for archive to complete (60 seconds elapsed)
    WARNING:  pg_stop_backup still waiting for archive to complete (120 seconds elapsed)
    WARNING:  pg_stop_backup still waiting for archive to complete (240 seconds elapsed)

My archive script is *not* running, it ran and exited:
    mountie@pumpkin:~/projects/postgresql/PostgreSQL/src/test/regress$ ps -ewf | grep post
    mountie   2904     1  0 16:31 pts/14   00:00:00
/home/mountie/projects/postgresql/PostgreSQL/src/test/regress/tmp_check/install/usr/local/pgsql
    mountie   2906  2904  0 16:31 ?        00:00:01 postgres: writer process
    mountie   2907  2904  0 16:31 ?        00:00:00 postgres: wal writer process
    mountie   2908  2904  0 16:31 ?        00:00:00 postgres: archiver process   last was 00000001000000000000001F
    mountie   2909  2904  0 16:31 ?        00:00:01 postgres: stats collector process
    mountie   2921  2904  1 16:31 ?        00:00:18 postgres: mountie regression 127.0.0.1(56455) idle

Those all match up:
    mountie@pumpkin:~/projects/postgresql/PostgreSQL/src/test/regress$ pstree -acp 2904
    postgres,2904 -D/home/mountie/projects/postgres
      ├─postgres,2906
      ├─postgres,2907
      ├─postgres,2908
      ├─postgres,2909
      └─postgres,2921

strace on the "archiver process" postgres:
    select(0, NULL, NULL, NULL, {1, 0})     = 0 (Timeout)
    getppid()                               = 2904
    select(0, NULL, NULL, NULL, {1, 0})     = 0 (Timeout)
    getppid()                               = 2904
    select(0, NULL, NULL, NULL, {1, 0})     = 0 (Timeout)
    getppid()                               = 2904
    select(0, NULL, NULL, NULL, {1, 0})     = 0 (Timeout)
    getppid()                               = 2904
    select(0, NULL, NULL, NULL, {1, 0})     = 0 (Timeout)
    getppid()                               = 2904

It *does* finally finish; the postmaster log looks like this ("Archiving ..." is what my
archive script prints; bytes is the gzip'ed size):
    Archiving 000000010000000000000016 [16397 bytes]
    Archiving 000000010000000000000017 [4405457 bytes]
    Archiving 000000010000000000000018 [3349243 bytes]
    Archiving 000000010000000000000019 [3349505 bytes]
    LOG:  ZEROING xlog file 0 segment 27 from 7954432 - 16777216 [8822784 bytes]
    Archiving 00000001000000000000001A [3349590 bytes]
    Archiving 00000001000000000000001B [1596676 bytes]
    LOG:  ZEROING xlog file 0 segment 28 from 8192 - 16777216 [16769024 bytes]
    Archiving 00000001000000000000001C [16398 bytes]
    LOG:  ZEROING xlog file 0 segment 29 from 8192 - 16777216 [16769024 bytes]
    Archiving 00000001000000000000001D [16397 bytes]
    LOG:  ZEROING xlog file 0 segment 30 from 8192 - 16777216 [16769024 bytes]
    Archiving 00000001000000000000001E [16393 bytes]
    Archiving 00000001000000000000001E.00000020.backup [146 bytes]
    WARNING:  pg_stop_backup still waiting for archive to complete (60 seconds elapsed)
    WARNING:  pg_stop_backup still waiting for archive to complete (120 seconds elapsed)
    WARNING:  pg_stop_backup still waiting for archive to complete (240 seconds elapsed)
    LOG:  ZEROING xlog file 0 segment 31 from 8192 - 16777216 [16769024 bytes]
    Archiving 00000001000000000000001F [16395 bytes]


So what's this "pg_stop_backup still waiting for archive to complete" state
that lasts for 5 minutes?  I've not seen that before (running 8.2 and 8.3).

a.
--
Aidan Van Dyk                                             Create like a god,
aidan@highrise.ca                                       command like a king,
http://www.highrise.ca/                                   work like a slave.

Attachments

Re: Decreasing WAL size effects

From:
Bruce Momjian
Date:
Tom Lane wrote:
> Greg Smith <gsmith@gregsmith.com> writes:
> > That pushes the problem of writing a little chunk of code that reads only
> > the right amount of data and doesn't bother compressing the rest onto the
> > person writing the archive command.  Seems to me that leads back towards
> > wanting to bundle a contrib module with a good implementation of that with
> > the software.  The whole tail clearing bit is in the same situation
> > pg_standby was circa 8.2:  the software is available, and it works, but it
> > seems kind of sketchy to those not familiar with the source of the code.
> > Bundling it into the software as a contrib module just makes that problem
> > go away for end-users.
>
> The real reason not to put that functionality into core (or even
> contrib) is that it's a stopgap kluge.  What the people who want this
> functionality *really* want is continuous (streaming) log-shipping, not
> WAL-segment-at-a-time shipping.  Putting functionality like that into
> core is infinitely more interesting than putting band-aids on a
> segmented approach.

Well, I realize we want streaming archive logs, but there are still
going to be people who are archiving for point-in-time recovery, and I
assume a good number of them are going to compress their WAL files to
save space, because they have to store a lot of them.  Wouldn't zeroing
out the trailing bytes of WAL still help those people?

--
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +