Discussion: Loading the entire DB into RAM


Loading the entire DB into RAM

From: "Charles A. Landemaine"
Date:
I have a web server with PostgreSQL and RHEL. It hosts a search
engine, and each time someone makes a query, it hits the HDD RAID
array. The DB is not very big; it is less than a GB. I plan to add
more RAM anyway.

What I'd like to do is find out how to keep the whole DB in RAM so
that each time someone runs a query, it doesn't touch the HDD. Is it
possible, and if so, how?
Thanks,

Charles.

Re: Loading the entire DB into RAM

From: "Charles A. Landemaine"
Date:
On 4/7/06, Matt Davies | Postgresql List <matt-postgresql@mattdavies.net> wrote:
> Out of curiosity, what are you using as the search engine?

Thank you. We designed the search engine ourselves (we didn't use an
off-the-shelf solution).

--
Charles A. Landemaine.

Re: Loading the entire DB into RAM

From: Matt Davies | Postgresql List
Date:
If memory serves me correctly, I have seen several posts about this in
the past.

I'll try to recall highlights.

1. Create an md device in Linux large enough to hold the data set you
want to store.
2. Create an HD-based copy somewhere as your permanent storage mechanism.
3. Start up your PostgreSQL instance with the md device as the data store.
4. Load your data into the md instance (a rough sketch of steps 1-4 is below).
5. Figure out how you will change indexes _and_ ensure that your disk
storage is consistent with your md instance.
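
A very rough, untested sketch of steps 1-4, assuming a kernel ramdisk
(/dev/ram0) as the md device; all paths, device names, and the database
name here are invented for illustration:

    # 1. memory-backed device; its size comes from the kernel ramdisk
    #    driver (e.g. the ramdisk_size boot parameter), assumed >1 GB here
    mkfs.ext2 /dev/ram0
    mount /dev/ram0 /var/lib/pgsql/data-ram

    # 2. permanent, disk-based copy of the same data directory
    mkdir -p /var/lib/pgsql/data-disk

    # 3. and 4. initialize the cluster on the ramdisk and load the data
    initdb -D /var/lib/pgsql/data-ram
    pg_ctl -D /var/lib/pgsql/data-ram start
    createdb searchdb
    psql -d searchdb -f dump.sql

    # sync the ramdisk copy back to permanent storage (cluster stopped)
    pg_ctl -D /var/lib/pgsql/data-ram stop
    rsync -a /var/lib/pgsql/data-ram/ /var/lib/pgsql/data-disk/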

I haven't done so, but it would be interesting to have a secondary
database somewhere that is your primary storage. It needn't be
especially powerful, or even highly available. It serves as the place to
generate your indexing data. You could then use SLONY to propagate the
data to the md production system.

Of course, if you are updating the system that resides in RAM, you
should be thinking the other way around: have SLONY replicate changes to
the other, permanent-storage system.

Either way you do it, I can't think of an out-of-the-box method for doing
it. Somehow one has to transfer data from permanent storage to the md
instance and, likewise, back to permanent storage.

Out of curiosity, what are you using as the search engine?


Charles A. Landemaine wrote:
> I have a web server with PostgreSQL and RHEL. It hosts a search
> engine, and each time someone makes a query, it hits the HDD RAID
> array. The DB is not very big; it is less than a GB. I plan to add
> more RAM anyway.
>
> What I'd like to do is find out how to keep the whole DB in RAM so
> that each time someone runs a query, it doesn't touch the HDD. Is it
> possible, and if so, how?
> Thanks,
>
> Charles.
>


Re: Loading the entire DB into RAM

From: Tom Lane
Date:
"Charles A. Landemaine" <landemaine@gmail.com> writes:
> What I'd like to do is find out how to keep the whole DB in RAM so
> that each time someone runs a query, it doesn't touch the HDD. Is it
> possible, and if so, how?

That should happen essentially for free, if the kernel doesn't have any
better use for the memory --- anything read from disk once will stay in
kernel disk cache.  Perhaps you need to take a closer look at your
kernel VM parameters.  Or maybe you don't have enough RAM yet for both
the DB contents and the processes you need to run.
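
For checking whether the data set is actually staying in the page cache,
and for poking at those VM parameters, something along these lines may
help (the specific sysctl value shown is illustrative, not a recommendation):

    # how much memory the kernel currently uses as buffer/page cache
    free -m

    # VM tunables that influence cache eviction and writeback
    sysctl vm.swappiness vm.dirty_ratio vm.dirty_background_ratio

    # example: make the kernel less eager to swap out process memory
    sysctl -w vm.swappiness=10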

            regards, tom lane

Re: Loading the entire DB into RAM

From: "Merlin Moncure"
Date:
On 4/7/06, Charles A. Landemaine <landemaine@gmail.com> wrote:
> I have a web server with PostgreSQL and RHEL. It hosts a search
> engine, and each time someone makes a query, it hits the HDD RAID
> array. The DB is not very big; it is less than a GB. I plan to add
> more RAM anyway.
>
> What I'd like to do is find out how to keep the whole DB in RAM so
> that each time someone runs a query, it doesn't touch the HDD. Is it
> possible, and if so, how?

don't bother.

If your database is smaller than the RAM on the box, the operating
system will cache it quite effectively.  All you should be worrying
about is whether to set fsync=on (you care about your data) or off (you
don't).  If your data is truly static, you might get better performance
out of an in-process data store, like SQLite for example.
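
The setting in question lives in postgresql.conf; an illustrative
snippet (the values are examples only, and fsync = off trades crash
safety for speed, so leave it on unless the data can be rebuilt):

    # postgresql.conf
    fsync = on             # flush WAL to disk at commit; keep on for crash safety
    #fsync = off           # faster, but a crash can corrupt the cluster
    shared_buffers = 20000 # PostgreSQL's own cache (in 8 kB pages on 8.1-era releases)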

Merlin

Re: Loading the entire DB into RAM

From: Scott Marlowe
Date:
On Fri, 2006-04-07 at 09:54, Matt Davies | Postgresql List wrote:
> If memory serves me correctly, I have seen several posts about this in
> the past.
>
> I'll try to recall highlights.
>
> 1. Create an md device in Linux large enough to hold the data set you
> want to store.
> 2. Create an HD-based copy somewhere as your permanent storage mechanism.
> 3. Start up your PostgreSQL instance with the md device as the data store.
> 4. Load your data into the md instance.
> 5. Figure out how you will change indexes _and_ ensure that your disk
> storage is consistent with your md instance.

SNIP

> Either way you do it, I can't think of an out-of-the-box method for doing
> it. Somehow one has to transfer data from permanent storage to the md
> instance and, likewise, back to permanent storage.

dd could do that.  Just have a third partition that holds the drive
image.  Start up the mirror set, dd the filesystem into place on the md
device.  When you're ready to shut the machine down or back it up, shut
down the postmaster, sync the md drive, and dd the filesystem back off to
the image backup drive.
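
A hedged sketch of that dd round trip; the device names, mount point,
and image path are invented for illustration, with /dev/ram0 standing in
for the md device:

    # restore the saved image onto the memory device and start PostgreSQL on it
    dd if=/backup/pgdata.img of=/dev/ram0 bs=1M
    mount /dev/ram0 /var/lib/pgsql/data-ram
    pg_ctl -D /var/lib/pgsql/data-ram start

    # at shutdown or backup time: stop the postmaster, flush, copy the image back
    pg_ctl -D /var/lib/pgsql/data-ram stop
    umount /var/lib/pgsql/data-ram
    sync
    dd if=/dev/ram0 of=/backup/pgdata.img bs=1M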

But I'd really just recommend getting a LOT of RAM and letting the
kernel do all the caching.  If you've got a 2 gig database and 4 gigs of
RAM, you should be golden.

Re: Loading the entire DB into RAM

From: PFC
Date:
** This has not been tested.

    Create a ramdisk of the required size.
    Create a Linux software RAID mirror between the ramdisk and a partition
of the same size.
    Mark the physical disk as write-mostly (reads will go to the ramdisk).
    Format it and load the data...

    On reboot you'll get a RAID1 mirror with one failed drive (because the
ramdisk is gone, of course). Just recreate the ramdisk and resync.
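
An mdadm sketch of that idea (untested, as the post says); the device
names are illustrative, and the ramdisk must be at least as large as the
partition it mirrors:

    # RAID1 of the ramdisk and a disk partition, disk marked write-mostly
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          /dev/ram0 --write-mostly /dev/sda5
    mkfs.ext3 /dev/md0
    mount /dev/md0 /var/lib/pgsql/data-ram

    # after a reboot the ramdisk member is gone: start the array degraded,
    # then re-add the freshly re-created ramdisk and let it resync
    mdadm --assemble --run /dev/md0 /dev/sda5
    mdadm /dev/md0 --add /dev/ram0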