On Wed, 18 Nov 2015 20:10:00 -0500
Jonathan Vanasco <postgres@2xlp.com> wrote:
> As a temporary fix I need to write some uploaded image files to PostgreSQL until a task server can
> read/process/delete them.
>
> The problem I've run into (via server load tests that model our production environment) is that these
> reads/writes end up pushing the indexes used by other queries out of memory -- causing them to be
> re-read from disk. These files can be anywhere from 200k to 5MB.
>
> Has anyone dealt with situations like this before, and have any suggestions? I could use a dedicated
> db connection if that would introduce any options.
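
For reference, I'm assuming you're staging the files in a bytea
column, something along these lines (table and column names are
just placeholders):

    CREATE TABLE image_staging (            -- hypothetical name
        id         bigserial   PRIMARY KEY,
        filename   text        NOT NULL,
        data       bytea       NOT NULL,    -- 200k-5MB per row
        created_at timestamptz NOT NULL DEFAULT now()
    );

At those sizes the bytea data gets TOASTed out of line, but reading
and writing it still goes through shared_buffers, which is why it
competes with your indexes.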
PostgreSQL doesn't have any provision for pinning some relations
in memory in preference to others.
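You can at least watch what's occupying shared_buffers with the
pg_buffercache contrib module, e.g.:

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    -- Top 10 relations by pages currently held in shared_buffers
    SELECT c.relname, count(*) AS buffers
    FROM pg_buffercache b
    JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
    WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                                WHERE datname = current_database()))
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 10;

That won't let you control the eviction, but it will show you how
badly the file traffic is crowding out your indexes.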
The easiest thing I can think of would be to add memory to the
machine (or configure Postgres to use more of it) so that those
files aren't pushing enough other pages out of memory to have a
problematic impact.
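For example, in postgresql.conf (numbers are only illustrative;
shared_buffers needs a restart to take effect):

    shared_buffers = 8GB          # actual cache allocation
    effective_cache_size = 24GB   # planner hint only, no allocation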
Another idea would be to put the image database on a different
physical server, or run 2 instances of Postgres on a single
server: the files in one instance configured with a low
shared_buffers value, and the rest of the data in the other
instance configured with a higher shared_buffers.
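A rough sketch of the two-instance setup (paths and port are
placeholders):

    # Create and start a second cluster just for the image staging
    initdb -D /var/lib/pgsql/images_cluster

    # In /var/lib/pgsql/images_cluster/postgresql.conf:
    #   port = 5433
    #   shared_buffers = 128MB    # keep this one deliberately small

    pg_ctl -D /var/lib/pgsql/images_cluster start

The small shared_buffers on the image instance caps how much
dedicated cache the file traffic can claim, so your main
instance's indexes stay put.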
I know these probably aren't the kind of answers you're looking
for, but I don't have anything better to suggest; and the rest
of the mailing list seems to be devoid of ideas as well.
--
Bill Moran