> One question: would you please expand your answer and explain how this would adversely affect async replication?
Is this a question or a hint (or both) :-)? Of course, almost all non-durability settings [1] will delay replication.
I should add that pure speed of a read-mostly database is the main scenario I have in mind.
Durability, high availability, and scaling out are perhaps additional or separate scenarios.
So, to come back to my question: I think that Postgres could be faster by orders of magnitude if the assumption of writing to slow secondary storage (like disks) were removed (or replaced).
On Sun, Nov 17, 2013 at 8:25 PM, Stefan Keller <sfkeller@gmail.com> wrote:
How can Postgres be used and configured as an In-Memory Database?
Does anybody know of thoughts or presentations about this "NoSQL feature" - beyond e.g. "Perspectives on NoSQL" from Gavin Roy at PGCon 2010)?
Given, say, 128 GB of memory or more, and (read-mostly) data that fits into it, what are the hints for optimizing Postgres (postgresql.conf etc.)?
In this case, as you are trading system safety (the system will not be crash-safe) for performance, the following parameters would be suited:

- Improve performance by reducing the amount of data flushed:
  fsync = off
  synchronous_commit = off
- Reduce the size of WALs:
  full_page_writes = off
- Disable the background writer:
  bgwriter_lru_maxpages = 0

Regards,
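Taken together, the suggestions above might look like this as a postgresql.conf fragment (just a sketch for a disposable, crash-unsafe instance; the parameter names are standard PostgreSQL settings, the values are the ones proposed in this thread):

```
# WARNING: these settings trade crash safety for speed.
# Use only for data that can be rebuilt from scratch after a crash.

fsync = off                  # do not force WAL writes to stable storage
synchronous_commit = off     # report commit before the WAL is flushed
full_page_writes = off       # smaller WAL, but torn pages are unrecoverable
bgwriter_lru_maxpages = 0    # disable the background writer
```

After editing the file, a restart (or a reload, for the parameters that allow it) is needed for the settings to take effect.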