Discussion: unlogged tables

unlogged tables

From
"Anibal David Acosta"
Date:

Hello, just for clarification.

Unlogged tables are not memory tables, right?

If we stop the postgres server (a normal stop) and start it again, does all the information in unlogged tables still remain?

So, can I expect data loss only in case of a crash, power failure, or OS crash, right?

In case of a crash, is it possible that data corruption happens in an unlogged table?

For performance purposes, can I use async commit and unlogged tables?

Thanks!

Re: unlogged tables

From
Stephen Frost
Date:
* Anibal David Acosta (aa@devshock.com) wrote:
> Unlogged tables are not memory tables, right?

Unlogged tables are not memory tables.

> If we stop the postgres server (a normal stop) and start it again, does
> all the information in unlogged tables still remain?

Yes.

> So, can I expect data loss only in case of a crash, power failure, or OS
> crash, right?

Yes.

> In case of a crash, is it possible that data corruption happens in an
> unlogged table?

In a crash, unlogged tables are automatically truncated.

> For performance purposes, can I use async commit and unlogged tables?

I'm not aware of any issues (beyond those already documented for async
commit) with having async commit and unlogged tables.

    Thanks,

        Stephen
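
For illustration, here is what the two features discussed above look like in SQL; the table and column names are hypothetical, not from the thread:

-- An unlogged table skips WAL entirely: writes are faster and the table
-- survives a clean shutdown, but it is truncated after a crash.
CREATE UNLOGGED TABLE counters (
    id   bigint PRIMARY KEY,
    hits bigint NOT NULL DEFAULT 0
);

-- Asynchronous commit: COMMIT returns before the WAL flush reaches disk.
-- A crash can lose the last few transactions, but it cannot corrupt data.
SET synchronous_commit = off;

UPDATE counters SET hits = hits + 1 WHERE id = 1;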


Re: unlogged tables

From
Sergey Konoplev
Date:
Hi,

On 12 November 2011 00:18, Stephen Frost <sfrost@snowman.net> wrote:
> In a crash, unlogged tables are automatically truncated.

BTW, I wonder why they are truncated in the crash case?

--
Sergey Konoplev

Blog: http://gray-hemp.blogspot.com
LinkedIn: http://ru.linkedin.com/in/grayhemp
JID/GTalk: gray.ru@gmail.com Skype: gray-hemp

Re: unlogged tables

From
Richard Huxton
Date:
On 14/11/11 08:10, Sergey Konoplev wrote:
> Hi,
>
> On 12 November 2011 00:18, Stephen Frost<sfrost@snowman.net>  wrote:
>> In a crash, unlogged tables are automatically truncated.
>
> BTW, I wonder why they are truncated in the crash case?

Because they bypass the transaction-log (WAL), hence unlogged.

There's no way to know whether there were partial updates applied when
the system restarts.

--
   Richard Huxton
   Archonet Ltd

Re: unlogged tables

From
Sergey Konoplev
Date:
On 14 November 2011 12:58, Richard Huxton <dev@archonet.com> wrote:
> Because they bypass the transaction-log (WAL), hence unlogged.
> There's no way to know whether there were partial updates applied when the
> system restarts.

I probably did not understand the meaning of "truncate" correctly. Does it
truncate all the records of the table, or only the most recent records?

--
Sergey Konoplev

Blog: http://gray-hemp.blogspot.com
LinkedIn: http://ru.linkedin.com/in/grayhemp
JID/GTalk: gray.ru@gmail.com Skype: gray-hemp

Re: unlogged tables

From
Richard Huxton
Date:
On 14/11/11 10:08, Sergey Konoplev wrote:
> On 14 November 2011 12:58, Richard Huxton<dev@archonet.com>  wrote:
>> Because they bypass the transaction-log (WAL), hence unlogged.
>> There's no way to know whether there were partial updates applied when the
>> system restarts.
>
> I probably did not understand the meaning of "truncate" correctly. Does it
> truncate all the records of the table, or only the most recent records?

All.

Let's say you were doing something like "UPDATE unlogged_table SET x=1
WHERE y=2". If a crash occurs during this command, there's no guarantee
that the affected disk pages were all updated. Worse, a single page
might be partially updated or even have rubbish in it (depending on the
nature of the crash).

Without the WAL there's no way to check whether the table is good or
not, or even to know what the last updates were. So - the only safe
thing to do is truncate the unlogged tables.

In the event of a normal shutdown, we can flush all the writes to disk
so we know all the data has been written, so there is no need to truncate.

--
   Richard Huxton
   Archonet Ltd

Re: unlogged tables

From
Sergey Konoplev
Date:
On 14 November 2011 14:17, Richard Huxton <dev@archonet.com> wrote:
> On 14/11/11 10:08, Sergey Konoplev wrote:
>>
>> On 14 November 2011 12:58, Richard Huxton<dev@archonet.com>  wrote:
> Let's say you were doing something like "UPDATE unlogged_table SET x=1 WHERE
> y=2". If a crash occurs during this command, there's no guarantee that the
> affected disk pages were all updated. Worse, a single page might be
> partially updated or even have rubbish in it (depending on the nature of the
> crash).
>
> Without the WAL there's no way to check whether the table is good or not, or
> even to know what the last updates were. So - the only safe thing to do is
> truncate the unlogged tables.
>
> In the event of a normal shutdown, we can flush all the writes to disk so we
> know all the data has been written, so there is no need to truncate.

Thank you for the explanation. Now I understand it.

--
Sergey Konoplev

Blog: http://gray-hemp.blogspot.com
LinkedIn: http://ru.linkedin.com/in/grayhemp
JID/GTalk: gray.ru@gmail.com Skype: gray-hemp

Re: unlogged tables

From
"Anibal David Acosta"
Date:
Maybe an option like "recover from file" would be useful.
So, for example, daily some process does a COPY of the entire table to a file.

In case of a crash, postgres recovers the content from the file.

:)

Re: unlogged tables

From
"Kevin Grittner"
Date:
"Anibal David Acosta" <aa@devshock.com> wrote:

> Maybe an option like "recover from file" would be useful.
> So, for example, daily some process does a COPY of the entire table
> to a file.
>
> In case of a crash, postgres recovers the content from the file.

If you need to recover file contents on a crash, then an unlogged
table is probably not the right choice.  There is always
asynchronous commit.

-Kevin

Re: unlogged tables

From
"Anibal David Acosta"
Date:
I am doing asynchronous commit, but sometimes I think that there are too many
"things" in an insert/update transaction for a table whose information is not
that important.

My table is a statistics counters table, so I can live with a partial data
loss, but not with a full data loss, because many counters are weekly and
monthly.

An unlogged table can increase speed, and this table gets about 1.6 million
updates per hour, but unlogged with a chance of losing all the information on
a crash is not a good idea for this.

Anyway, thanks Kevin!

Re: unlogged tables

From
"Kevin Grittner"
Date:
"Anibal David Acosta" <aa@devshock.com> wrote:

> I am doing asynchronous commit, but sometimes I think that there
> are too many "things" in an insert/update transaction for a table
> whose information is not that important.
>
> My table is a statistics counters table, so I can live with a
> partial data loss, but not with a full data loss, because many
> counters are weekly and monthly.
>
> An unlogged table can increase speed, and this table gets about
> 1.6 million updates per hour, but unlogged with a chance of losing
> all the information on a crash is not a good idea for this.

pg_dump -t 'tablename' from a cron job?  (Make sure to rotate dump
file names, maybe with day of week or some such.)

-Kevin
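
A sketch of the cron idea above; the schedule, table, database name, and paths are all hypothetical:

# Hourly pg_dump of one table, rotating the dump file by day of week (0-6)
# so a week of copies is kept before being overwritten.
0 * * * *  pg_dump -t counters -f /backups/counters.$(date +\%w).dump mydb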

Re: unlogged tables

From
Josh Berkus
Date:
>> An unlogged table can increase speed, and this table gets about 1.6
>> million updates per hour, but unlogged with a chance of losing all the
>> information on a crash is not a good idea for this.
>
> pg_dump -t 'tablename' from a cron job?  (Make sure to rotate dump
> file names, maybe with day of week or some such.)

Or just use "CREATE TABLE AS" to copy the table every hour to a second,
backup table.  Then it would be much easier to script an automated restore
of the data.

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
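
A minimal sketch of that approach, with hypothetical names:

-- Snapshot the unlogged table into an ordinary (logged) table; run this
-- from a scheduler.  DROP and CREATE TABLE AS are transactional here.
BEGIN;
DROP TABLE IF EXISTS counters_backup;
CREATE TABLE counters_backup AS SELECT * FROM counters;
COMMIT;

-- After a crash has truncated the unlogged table, restore the snapshot:
INSERT INTO counters SELECT * FROM counters_backup;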

Re: unlogged tables

From
"Pierre C"
Date:
> My table is a statistics counters table, so I can live with a partial
> data loss, but not with a full data loss, because many counters are
> weekly and monthly.
>
> An unlogged table can increase speed, and this table gets about 1.6
> million updates per hour, but unlogged with a chance of losing all the
> information on a crash is not a good idea for this.

You could use an unlogged table for hourly updates, and periodically
accumulate those counters into a (logged) daily/weekly table...

The hourly table could be rebuilt by examining only 1 hour's worth of
data, so it isn't too much of a problem if it's lost. The other tables
would get far fewer updates.
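
A sketch of that split, again with hypothetical names:

-- Fast, expendable hourly counters (unlogged) feeding a durable weekly
-- table (logged).
CREATE UNLOGGED TABLE counters_hourly (id bigint NOT NULL, hits bigint NOT NULL);
CREATE TABLE counters_weekly (id bigint NOT NULL, week date NOT NULL, hits bigint NOT NULL);

-- Periodic roll-up: fold the hourly totals into the logged table, then
-- reset.  (With concurrent writers you would rotate between two hourly
-- tables rather than TRUNCATE the live one.)
BEGIN;
INSERT INTO counters_weekly (id, week, hits)
    SELECT id, date_trunc('week', now())::date, sum(hits)
    FROM counters_hourly
    GROUP BY id;
TRUNCATE counters_hourly;
COMMIT;

Readers then sum counters_weekly per (id, week), which is also the "INSERT deltas, accumulate periodically" pattern suggested later in this thread.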

Re: unlogged tables

From
dgabriel
Date:
"In the event of a normal shutdown, we can flush all the writes to disk
so we know all the data has been written, so there is no need to truncate."

Isn't it possible to periodically flush data to disk, and in case of a crash
have postgres load only the data that existed at the last flush? The periodic
flush could be configurable, for example every 30 minutes or after x rows
updated/inserted.





Re: unlogged tables

From
Matheus de Oliveira
Date:

On Mon, Apr 13, 2015 at 4:31 PM, dgabriel <gabriel.dodan@gmail.com> wrote:
"In the event of a normal shutdown, we can flush all the writes to disk
so we know all the data has been written, so there is no need to truncate."

Isn't it possible to periodically flush data to disk, and in case of a crash
have postgres load only the data that existed at the last flush? The periodic
flush could be configurable, for example every 30 minutes or after x rows
updated/inserted.

There is no such facility implemented for UNLOGGED TABLEs. That could be a feature request though.

Best regards,
--
Matheus de Oliveira
Database Analyst
Dextra Sistemas - MPS.Br nível F!
www.dextra.com.br/postgres

Re: unlogged tables

From
dgabriel
Date:
That would be a very useful feature. I don't care if I lose 1-2 hours of
data. I know we could have some cron jobs to dump the table periodically,
but the table could be big, so this operation could be expensive. Also, I
would have to detect when postgres crashes, and I have no idea how to
detect that. Then I would somehow have to attach a script at postgres
start to restore the dumps... the dump solution is very complicated and
unreliable. A periodic flush feature would be amazing!

What is the procedure for feature requests on postgres, github?

Thanks!





Re: unlogged tables

From
Yves Dorfsman
Date:
On 2015-04-13 14:16, dgabriel wrote:
> That would be a very useful feature.

I agree, unlogged tables would be a lot more useful if they didn't "disappear"
on restart.


> could be expensive. Also, I would have to detect when postgres crashes,
> and I have no idea how to detect that. Then I would somehow have to
> attach a script at postgres start to restore the dumps... the dump
> solution is very complicated and unreliable. A periodic flush feature
> would be amazing!

In my experience postgres is very aggressive in getting rid of unlogged
tables; it gets rid of them even after shutdowns that seem perfectly fine
(no crash). A lot of people are surprised by this.

--
http://yves.zioup.com
gpg: 4096R/32B0F416



Re: unlogged tables

From
Matheus de Oliveira
Date:

On Mon, Apr 13, 2015 at 5:30 PM, Yves Dorfsman <yves@zioup.com> wrote:

In my experience postgres is very aggressive in getting rid of unlogged
tables; it gets rid of them even after shutdowns that seem perfectly fine
(no crash). A lot of people are surprised by this.

Shutdowns in "fast" or "smart" mode do not get rid of unlogged tables. But if you do "immediate", then it does, and I don't see why people are surprised by it, as you probably shouldn't be using "immediate" mode in normal circumstances.

Best regards,
--
Matheus de Oliveira
Database Analyst
Dextra Sistemas - MPS.Br nível F!
www.dextra.com.br/postgres
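
For reference, the shutdown modes under discussion are selected with pg_ctl's -m option:

pg_ctl stop -m smart      # wait for clients to disconnect, checkpoint, stop
pg_ctl stop -m fast       # abort active transactions, checkpoint, stop cleanly
pg_ctl stop -m immediate  # stop without a shutdown checkpoint; crash recovery
                          # runs at the next start and unlogged tables are truncated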

Re: unlogged tables

From
"David G. Johnston"
Date:
On Monday, April 13, 2015, Matheus de Oliveira <matioli.matheus@gmail.com> wrote:

On Mon, Apr 13, 2015 at 4:31 PM, dgabriel <gabriel.dodan@gmail.com> wrote:
"In the event of a normal shutdown, we can flush all the writes to disk
so we know all the data has been written, so there is no need to truncate."

Isn't it possible to periodically flush data to disk, and in case of a crash
have postgres load only the data that existed at the last flush? The periodic
flush could be configurable, for example every 30 minutes or after x rows
updated/inserted.

There is no such facility implemented for UNLOGGED TABLEs. That could be a feature request though.


Well, that is half right anyway.  UNLOGGED tables obey checkpoints just like any other table.  The missing feature is an option to leave the relation restored to the last checkpoint.  Instead, not knowing whether there were changes since the last checkpoint, the system truncates the relation.

What use case is there for a behavior where the last checkpoint's data is left in the relation upon restarting, without knowing whether other data could have been written subsequently?

David J.

Re: unlogged tables

From
Jim Nasby
Date:
On 4/13/15 3:49 PM, David G. Johnston wrote:
> On Monday, April 13, 2015, Matheus de Oliveira
> <matioli.matheus@gmail.com> wrote:
>     On Mon, Apr 13, 2015 at 4:31 PM, dgabriel
>     <gabriel.dodan@gmail.com> wrote:
>
>         "In the event of a normal shutdown, we can flush all the writes
>         to disk
>         so we know all the data has been written, so there is no need to
>         truncate."
>
>         Isn't it possible to periodically flush data to disk, and in
>         case of a crash have postgres load only the data that existed
>         at the last flush? The periodic flush could be configurable,
>         for example every 30 minutes or after x rows updated/inserted.
>
>     There is no such facility implemented for UNLOGGED TABLEs. That
>     could be a feature request though.
>
> Well, that is half right anyway.  UNLOGGED tables obey checkpoints just
> like any other table.  The missing feature is an option to leave the
> relation restored to the last checkpoint.  Instead, not knowing whether
> there were changes since the last checkpoint, the system truncates the
> relation.
>
> What use case is there for a behavior where the last checkpoint's data
> is left in the relation upon restarting, without knowing whether other
> data could have been written subsequently?

Yeah, this is not something that would be very easy to accomplish,
because a buffer can get evicted and written to disk at any point. It
wouldn't be too hard to read every unlogged table during recovery and
see if there are any pages that were written after the last checkpoint,
but that obviously won't be very fast.

Actually, I suppose we could dedicate a fork for unlogged tables and use
that to record the newest LSN of any page that's been written out. But
if you have much of any write activity on the table that's probably
going to be completely useless.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


Re: unlogged tables

From
Alvaro Herrera
Date:
Jim Nasby wrote:

> Yeah, this is not something that would be very easy to accomplish, because a
> buffer can get evicted and written to disk at any point. It wouldn't be too
> hard to read every unlogged table during recovery and see if there are any
> pages that were written after the last checkpoint, but that obviously won't
> be very fast.

If you consider only tables, then yeah perhaps this is easy to
accomplish (not really convinced myself).  But if you consider indexes,
things are not so easy anymore.


In the thread from 2011 (which this started as a reply to) the OP was
doing frequent UPDATEs to keep track of counts of something.  I think
that would be better served by using INSERTs of deltas and periodic
accumulation of grouped values, as suggested in
http://www.postgresql.org/message-id/20150305211601.GW3291@alvh.no-ip.org
This has actually been suggested many times over the years.

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: unlogged tables

From
Jim Nasby
Date:
On 4/13/15 4:13 PM, Alvaro Herrera wrote:
> Jim Nasby wrote:
>
>> Yeah, this is not something that would be very easy to accomplish, because a
>> buffer can get evicted and written to disk at any point. It wouldn't be too
>> hard to read every unlogged table during recovery and see if there are any
>> pages that were written after the last checkpoint, but that obviously won't
>> be very fast.
>
> If you consider only tables, then yeah perhaps this is easy to
> accomplish (not really convinced myself).  But if you consider indexes,
> things are not so easy anymore.

Are indexes not guaranteed to have LSNs? I thought they basically
followed the same write rules as heap pages in regard to WAL first.

Though, if you have an index that doesn't support logging (like hash)
you're still hosed...

> In the thread from 2011 (which this started as a reply to) the OP was

I don't keep PGSQL emails from that far back... ;)

> doing frequent UPDATEs to keep track of counts of something.  I think
> that would be better served by using INSERTs of deltas and periodic
> accumulation of grouped values, as suggested in
> http://www.postgresql.org/message-id/20150305211601.GW3291@alvh.no-ip.org
> This has actually been suggested many times over the years.

What I was suggesting certainly wouldn't help you if you were getting
any serious amount of changes to the count.

I am wondering though what the bottleneck in HEAD is with doing an
UPDATE instead of an INSERT, at least where unlogged would help
significantly. I didn't think we logged all that much more for an
UPDATE. Heck, with HOT you might even be able to log less.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


Re: unlogged tables

From
Jeff Janes
Date:
On Mon, Apr 13, 2015 at 1:49 PM, David G. Johnston <david.g.johnston@gmail.com> wrote:
On Monday, April 13, 2015, Matheus de Oliveira <matioli.matheus@gmail.com> wrote:

On Mon, Apr 13, 2015 at 4:31 PM, dgabriel <gabriel.dodan@gmail.com> wrote:
"In the event of a normal shutdown, we can flush all the writes to disk
so we know all the data has been written, so there is no need to truncate."

Isn't it possible to periodically flush data to disk, and in case of a crash
have postgres load only the data that existed at the last flush? The periodic
flush could be configurable, for example every 30 minutes or after x rows
updated/inserted.

There is no such facility implemented for UNLOGGED TABLEs. That could be a feature request though.

One way would be to lock dirty buffers from unlogged relations into shared_buffers (which hardly seems like a good thing) until the start of a "super-checkpoint" and then write them all out as fast as possible (which kind of defeats checkpoint_completion_target).  And then if the crash happened during a super-checkpoint, the data would still be inconsistent and need to be truncated.
 


Well, that is half right anyway.  UNLOGGED tables obey checkpoints just like any other table. 

Do they?  I thought they only obeyed shutdown checkpoints, not online checkpoints.  I do remember some changes around this area, but none that completely reverted that logic.

 
The missing feature is an option to leave the relation restored to the last checkpoint.  Instead, not knowing whether there were changes since the last checkpoint, the system truncates the relation.

What use case is there for a behavior where the last checkpoint's data is left in the relation upon restarting, without knowing whether other data could have been written subsequently?

I would like a way to have unlogged tables be available on a replica provided that no changes were made to them between the pg_basebackup and the recovery point.

My use case is that I mark certain read-only-after-bulk-loading tables as unlogged solely to avoid blowing out the log archive during the loading phase and refresh phase.  This is stuff like vendor catalogs, NCBI datasets, ChEMBL datasets, etc, which can simply be re-derived from the reference.  It would be nice if these were still available (without having to repeat the ETL) after crashes provided they were not written to since a checkpoint, and available on cloned test servers without having to repeat the ETL on those as well.
 
As for "maybe it's corrupt, maybe it isn't, but let's keep them anyway", yeah, I have little use for that.

Cheers,

Jeff

Re: unlogged tables

From
"David G. Johnston"
Date:
On Mon, Apr 13, 2015 at 4:49 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
On Mon, Apr 13, 2015 at 1:49 PM, David G. Johnston <david.g.johnston@gmail.com> wrote:
On Monday, April 13, 2015, Matheus de Oliveira <matioli.matheus@gmail.com> wrote:

On Mon, Apr 13, 2015 at 4:31 PM, dgabriel <gabriel.dodan@gmail.com> wrote:
"In the event of a normal shutdown, we can flush all the writes to disk
so we know all the data has been written, so there is no need to truncate."

Isn't it possible to periodically flush data to disk, and in case of a crash
have postgres load only the data that existed at the last flush? The periodic
flush could be configurable, for example every 30 minutes or after x rows
updated/inserted.

There is no such facility implemented for UNLOGGED TABLEs. That could be a feature request though.

One way would be to lock dirty buffers from unlogged relations into shared_buffers (which hardly seems like a good thing) until the start of a "super-checkpoint" and then write them all out as fast as possible (which kind of defeats checkpoint_completion_target).  And then if the crash happened during a super-checkpoint, the data would still be inconsistent and need to be truncated.
 


Well, that is half right anyway.  UNLOGGED tables obey checkpoints just like any other table. 

Do they?  I thought they only obeyed shutdown checkpoints, not online checkpoints.  I do remember some changes around this area, but none that completely reverted that logic.


I vaguely recall that conversation now... I'm not positive on the exact mechanics here and, as it pertains to the OP, the difference you describe is immaterial, since in either case the status quo mandates an "all or nothing" approach to an unlogged table's contents.

 
 
The missing feature is an option to leave the relation restored to the last checkpoint.  Instead, not knowing whether there were changes since the last checkpoint, the system truncates the relation.

What use case is there for a behavior where the last checkpoint's data is left in the relation upon restarting, without knowing whether other data could have been written subsequently?

I would like a way to have unlogged tables be available on a replica provided that no changes were made to them between the pg_basebackup and the recovery point.

My use case is that I mark certain read-only-after-bulk-loading tables as unlogged solely to avoid blowing out the log archive during the loading phase and refresh phase.  This is stuff like vendor catalogs, NCBI datasets, ChEMBL datasets, etc, which can simply be re-derived from the reference.  It would be nice if these were still available (without having to repeat the ETL) after crashes provided they were not written to since a checkpoint, and available on cloned test servers without having to repeat the ETL on those as well.
 

My gut reaction is that those should be in their own clusters and accessed via postgres_fdw...

That particular use-case would probably best be served with a separate replication channel which pushes data files from the primary to the slaves and allows for the slave to basically "rewrite" its existing table by pointing to the newly supplied version.  Some kind of "CREATE STATIC TABLE" and "PUSH STATIC TABLE TO {all | replica name}" command combo...though ideally with less manual intervention...

David J.
 

Re: unlogged tables

From
Jim Nasby
Date:
On 4/13/15 7:32 PM, David G. Johnston wrote:
>         The missing feature is an option to leave the relation restored
>         to the last checkpoint.  Instead, not knowing whether there were
>         changes since the last checkpoint, the system truncates the
>         relation.
>
>         What use case is there for a behavior where the last checkpoint's
>         data is left in the relation upon restarting, without knowing
>         whether other data could have been written subsequently?
>
>
>     I would like a way to have unlogged tables be available on a replica
>     provided that no changes were made to them between the pg_basebackup
>     and the recovery point.
>
>
>     My use case is that I mark certain read-only-after-bulk-loading
>     tables as unlogged solely to avoid blowing out the log archive
>     during the loading phase and refresh phase.  This is stuff like
>     vendor catalogs, NCBI datasets, ChEMBL datasets, etc, which can
>     simply be re-derived from the reference.  It would be nice if these
>     were still available (without having to repeat the ETL) after
>     crashes provided they were not written to since a checkpoint, and
>     available on cloned test servers without having to repeat the ETL on
>     those as well.
>
>
> My gut reaction is that those should be in their own clusters and
> accessed via postgres_fdw...

Likely to produce really crappy plans if the tables are of any real size...

> That particular use-case would probably best be served with a separate
> replication channel which pushes data files from the primary to the
> slaves and allows for the slave to basically "rewrite" its existing
> table by pointing to the newly supplied version.  Some kind of "CREATE
> STATIC TABLE" and "PUSH STATIC TABLE TO {all | replica name}" command
> combo...though ideally with less manual intervention...

You still have the same problem of knowing if someone has scribbled on
the data since the last checkpoint.

There's been recent discussion of adding support for read-only tables.
If we had those, we might be able to support something like...

INSERT INTO unlogged;
ALTER TABLE unlogged SET READ ONLY;
CHECKPOINT;
/* take backup */

This should be safe as long as we WAL log changes to read-only status
(which presumably we would).

How much work that would entail though, I don't know.

Ultimately you still have to get the data over to the other machine
anyway. ISTM it'd be a LOT more useful to look at ways to make the WAL
logging of bulk inserts (and especially COPY into a known empty table) a
lot more efficient.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
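
One existing optimization along those lines, for what it's worth: with wal_level = minimal (so no WAL archiving or streaming replication), a COPY into a table created or truncated in the same transaction can skip WAL for the data entirely. A sketch with hypothetical names:

-- Requires wal_level = minimal; the loaded pages are written and fsynced
-- directly instead of being sent through WAL first.
BEGIN;
CREATE TABLE vendor_catalog (id bigint, name text);
COPY vendor_catalog FROM '/data/vendor_catalog.csv' WITH (FORMAT csv);
COMMIT;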


Re: unlogged tables

From
"David G. Johnston"
Date:
On Mon, Apr 13, 2015 at 7:45 PM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:
On 4/13/15 7:32 PM, David G. Johnston wrote:

That particular use-case would probably best be served with a separate
replication channel which pushes data files from the primary to the
slaves and allows for the slave to basically "rewrite" its existing
table by pointing to the newly supplied version.  Some kind of "CREATE
STATIC TABLE" and "PUSH STATIC TABLE TO {all | replica name}" command
combo...though ideally with less manual intervention...

You still have the same problem of knowing if someone has scribbled on the data since the last checkpoint.

That seems like an automation concern though... the more limited idea was to simply have a means for a table to exist on the master and allow the user to cause an exact copy of that table to appear on a replica via direct data transfer (i.e., without needing to create a backup/dump).  If the table already exists on the replica, the existing version remains as-is until the new table is fully pushed, and then a filenode pointer update happens.

I imagine this same idea could be handled external to the database, though I don't know enough to comment on the specific technical merits of each.


There's been recent discussion of adding support for read-only tables. If we had those, we might be able to support something like...

INSERT INTO unlogged;
ALTER TABLE unlogged SET READ ONLY;
CHECKPOINT;
/* take backup */

This should be safe as long as we WAL log changes to read-only status (which presumably we would).

How much work that would entail though, I don't know.

Ultimately you still have to get the data over to the other machine anyway. ISTM it'd be a LOT more useful to look at ways to make the WAL logging of bulk inserts (and especially COPY into a known empty table) a lot more efficient.


Jeff Janes makes a comment about wanting "...to avoid blowing out the log archive...", which I also don't quite follow...

WAL does seem to be designed to solve a different problem than what is described here: lots of small changes versus few large changes.  Improving WAL to move the size at which small becomes large is a win, but another channel designed for few large changes may be less complex to implement.  The current work in logical replication likely has merit here as well, but my familiarity with that technology is fairly limited.

David J.

Re: unlogged tables

From
Yves Dorfsman
Date:
On 2015-04-13 17:49, Jeff Janes wrote:
>
> One way would be to lock dirty buffers from unlogged relations into
> shared_buffers (which hardly seems like a good thing) until the start of a
> "super-checkpoint" and then write them all out as fast as possible (which kind
> of defeats checkpoint_completion_target).  And then if the crash happened
> during a super-checkpoint, the data would still be inconsistent and need to be
> truncated.
>

What do you call a "super-checkpoint"?

--
http://yves.zioup.com
gpg: 4096R/32B0F416



Re: unlogged tables

From
dgabriel
Date:
David G Johnston wrote
> Well, that is half right anyway.  UNLOGGED tables obey checkpoints just
> like any other table.  The missing feature is an option to leave the
> relation restored to the last checkpoint.  Instead, not knowing whether
> there were changes since the last checkpoint, the system truncates the
> relation.
>
> What use case is there for a behavior where the last checkpoint's data
> is left in the relation upon restarting, without knowing whether other
> data could have been written subsequently?

If it were possible to restore the table to its state as of the last
checkpoint, that would be more than enough. I don't care about the changes
since the last checkpoint; I am willing to lose those changes.  There are
use cases where it is acceptable to lose some data; for example, in a cache
system it is not a big issue if we lose some cached data.





Re: unlogged tables

From
Jeff Janes
Date:
On Tue, Apr 14, 2015 at 8:41 AM, Yves Dorfsman <yves@zioup.com> wrote:
On 2015-04-13 17:49, Jeff Janes wrote:
>
> One way would be to lock dirty buffers from unlogged relations into
> shared_buffers (which hardly seems like a good thing) until the start of a
> "super-checkpoint" and then write them all out as fast as possible (which kind
> of defeats checkpoint_completion_target).  And then if the crash happened
> during a super-checkpoint, the data would still be inconsistent and need to be
> truncated.
>

What do you call a "super-checkpoint"?

A hypothetical checkpoint which includes writing and flushing pages of unlogged tables.

Presumably you wouldn't want every checkpoint to do this, because if done the way I described the super-checkpoint is a vulnerable period.  Crashes that happen during it would result in truncation of the unlogged relation.  Since that is the very thing we want to avoid, you would want to make these vulnerable periods rare.

Cheers,

Jeff

Re: unlogged tables

From
Jeff Janes
Date:
On Mon, Apr 13, 2015 at 8:28 PM, David G. Johnston <david.g.johnston@gmail.com> wrote:
On Mon, Apr 13, 2015 at 7:45 PM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:


There's been recent discussion of adding support for read-only tables. If we had those, we might be able to support something like...

INSERT INTO unlogged;
ALTER TABLE unlogged SET READ ONLY;
CHECKPOINT;
/* take backup */

This should be safe as long as we WAL log changes to read-only status (which presumably we would).

How much work that would entail though, I don't know.

Right.  I've been keeping an eye on that discussion with the same intention.  The big question is how, during recovery, does it know what state the table was in without being able to read from the system catalogs?  Perhaps it would be the checkpointer's duty at the end of the checkpoint to remove the init fork for unlogged relations which were turned to read only before that checkpoint started.
 

Ultimately you still have to get the data over to the other machine anyway. ISTM it'd be a LOT more useful to look at ways to make the WAL logging of bulk inserts (and especially COPY into a known empty table) a lot more efficient.


Jeff Janes makes a comment about wanting "...to avoid blowing out the log archive...", which I also don't quite follow...

I think the WAL logging of bulk COPY is pretty space-efficient already, provided it is not indexed at the time of the COPY.  But no amount of efficiency improvement is going to make them small enough for me want to keep the WAL logs around beyond the next base backup.

What I would really want is a way to make two separate WAL streams; changes to this set of tables goes to the "keep forever, for PITR" stream, and changes to this other set of tables go to the "keep until pg_basebackup is next run" stream.  Of course you couldn't have fk constraints between the two different sets of tables.

Having to get the data over to the other machine doesn't bother me, it is just a question of how to do it without permanently intermingling it with WAL logs which I want to keep forever.

The FDW would be a good option, except the overhead (both execution overhead and the overhead of poor plans) seems to be too large.  I haven't explored it as much as I would like.

Cheers,

Jeff

Re: unlogged tables

From
Jim Nasby
Date:
On 4/14/15 10:56 AM, dgabriel wrote:
> David G Johnston wrote
>> Well, that is half right anyway.  UNLOGGED tables obey checkpoints just
>> like any other table.  The missing feature is an option to leave the
>> relation restored to the last checkpoint.  Instead, not knowing whether
>> there were changes since the last checkpoint, the system truncates the
>> relation.
>>
>> What use case is there for a behavior where the last checkpoint's data
>> is left in the relation upon restarting, without knowing whether other
>> data could have been written subsequently?
>
> If it were possible to restore the table to its state as of the last
> checkpoint, that would be more than enough. I don't care about the
> changes since the last checkpoint; I am willing to lose those changes.
> There are use cases where it is acceptable to lose some data; for
> example, in a cache system it is not a big issue if we lose some cached
> data.

It is not. Unless you ensure that data is written to WAL (on disk)
BEFORE it is written to the data pages, you will probably have
corruption after a crash, and have no way to prevent or possibly even
detect the corruption.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com