Discussion: Using high speed swap to improve performance?

Using high speed swap to improve performance?

From: Christiaan Willemsen
Date:

Hi there,

About a year ago we set up a machine with sixteen 15k disk spindles on Solaris using ZFS. Now that Oracle has taken over Sun and is closing up Solaris, we want to move away (we are more familiar with Linux anyway).

So the plan is to move to Linux and put the data on a SAN using iSCSI (two or four network interfaces). This, however, leaves us with 16 very nice disks doing nothing. Sounds like a waste. If we were to use Solaris, ZFS would have a solution: use it as L2ARC. But there is no Linux filesystem with those features (ZFS on FUSE is not really an option).

So I was thinking: why not make a big fat array out of 14 disks (RAID 1, 10 or 5) and use it as a big, fast swap disk? Latency will be lower than the SAN can provide, throughput will also be better, and it will relieve the SAN of a lot of read IOPS.
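
To make that concrete, the setup I have in mind would be roughly along these lines (device names, RAID level and options are placeholders, not a tested recipe):

  # Build a RAID-10 array out of 14 of the local disks and turn it into swap
  # (all device names here are hypothetical):
  mdadm --create /dev/md0 --level=10 --raid-devices=14 /dev/sd[b-o]
  mkswap /dev/md0              # format the array as swap space
  swapon -p 10 /dev/md0        # enable it with a high priority
  swapon -s                    # confirm it is active and how large it is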

So I could create a 1TB swap disk and add it to the OS next to the 64GB of memory. Then I can configure Postgres to use more than the RAM size so it will start swapping. It would appear to Postgres that the complete database fits into memory. The question is: will this do any good? And if so, what will happen?
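
On the Postgres side that would mean something like the following (paths and values are purely illustrative, not a recommendation):

  # Hypothetical settings appended to postgresql.conf:
  echo "shared_buffers = 512GB"        >> "$PGDATA/postgresql.conf"   # well beyond the 64GB of RAM
  echo "effective_cache_size = 1000GB" >> "$PGDATA/postgresql.conf"   # planner hint that everything "fits"
  pg_ctl -D "$PGDATA" restart          # shared_buffers only takes effect after a restart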

Kind regards,

Christiaan

Re: Using high speed swap to improve performance?

From: Arjen van der Meijden
Date:
What about FreeBSD with ZFS? I have no idea which features they support
and which they don't, but it is at least a bit more free than Solaris and still
offers that very nice file system.

Best regards,

Arjen

On 2-4-2010 21:15 Christiaan Willemsen wrote:
> Hi there,
>
> About a year ago we setup a machine with sixteen 15k disk spindles on
> Solaris using ZFS. Now that Oracle has taken Sun, and is closing up
> Solaris, we want to move away (we are more familiar with Linux anyway).
>
> So the plan is to move to Linux and put the data on a SAN using iSCSI
> (two or four network interfaces). This however leaves us with 16
> very nice disks doing nothing. Sounds like a waste. If we were to
> use Solaris, ZFS would have a solution: use it as L2ARC. But there is no
> Linux filesystem with those features (ZFS on FUSE is not really an option).
>
> So I was thinking: Why not make a big fat array using 14 disks (raid 1,
> 10 or 5), and make this a big and fast swap disk. Latency will be lower
> than the SAN can provide, and throughput will also be better, and it
> will relieve the SAN of a lot of read IOPS.
>
> So I could create a 1TB swap disk, and put it onto the OS next to the
> 64GB of memory. Then I can set Postgres to use more than the RAM size so
> it will start swapping. It would appear to postgres that the complete
> database will fit into memory. The question is: will this do any good?
> And if so: what will happen?
>
> Kind regards,
>
> Christiaan
>

Re: Using high speed swap to improve performance?

From: Robert Haas
Date:
On Fri, Apr 2, 2010 at 3:15 PM, Christiaan Willemsen
<cwillemsen@technocon.com> wrote:
> About a year ago we setup a machine with sixteen 15k disk spindles on
> Solaris using ZFS. Now that Oracle has taken Sun, and is closing up Solaris,
> we want to move away (we are more familiar with Linux anyway).
>
> So the plan is to move to Linux and put the data on a SAN using iSCSI (two
> or four network interfaces). This however leaves us with 16 very nice
> disks doing nothing. Sounds like a waste. If we were to use Solaris,
> ZFS would have a solution: use it as L2ARC. But there is no Linux filesystem
> with those features (ZFS on FUSE is not really an option).
>
> So I was thinking: Why not make a big fat array using 14 disks (raid 1, 10
> or 5), and make this a big and fast swap disk. Latency will be lower than
> the SAN can provide, and throughput will also be better, and it will relieve
> the SAN of a lot of read IOPS.
>
> So I could create a 1TB swap disk, and put it onto the OS next to the 64GB
> of memory. Then I can set Postgres to use more than the RAM size so it will
> start swapping. It would appear to postgres that the complete database will
> fit into memory. The question is: will this do any good? And if so: what
> will happen?

I suspect it will result in lousy performance because neither PG nor
the OS will understand that some of that "memory" is actually disk.
But if you end up testing it, post the results back here for
posterity...

...Robert

Re: Using high speed swap to improve performance?

From: Robert Haas
Date:
On Sun, Apr 4, 2010 at 4:52 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Fri, Apr 2, 2010 at 3:15 PM, Christiaan Willemsen
> <cwillemsen@technocon.com> wrote:
>> About a year ago we setup a machine with sixteen 15k disk spindles on
>> Solaris using ZFS. Now that Oracle has taken Sun, and is closing up Solaris,
>> we want to move away (we are more familiar with Linux anyway).
>>
>> So the plan is to move to Linux and put the data on a SAN using iSCSI (two
>> or four network interfaces). This however leaves us with 16 very nice
>> disks doing nothing. Sounds like a waste. If we were to use Solaris,
>> ZFS would have a solution: use it as L2ARC. But there is no Linux filesystem
>> with those features (ZFS on FUSE is not really an option).
>>
>> So I was thinking: Why not make a big fat array using 14 disks (raid 1, 10
>> or 5), and make this a big and fast swap disk. Latency will be lower than
>> the SAN can provide, and throughput will also be better, and it will relieve
>> the SAN of a lot of read IOPS.
>>
>> So I could create a 1TB swap disk, and put it onto the OS next to the 64GB
>> of memory. Then I can set Postgres to use more than the RAM size so it will
>> start swapping. It would appear to postgres that the complete database will
>> fit into memory. The question is: will this do any good? And if so: what
>> will happen?
>
> I suspect it will result in lousy performance because neither PG nor
> the OS will understand that some of that "memory" is actually disk.
> But if you end up testing it, post the results back here for
> posterity...

Err, the OS will understand it, but PG will not.

...Robert

Re: Using high speed swap to improve performance?

From: Scott Marlowe
Date:
On Fri, Apr 2, 2010 at 1:15 PM, Christiaan Willemsen
<cwillemsen@technocon.com> wrote:
> Hi there,
>
> About a year ago we setup a machine with sixteen 15k disk spindles on
> Solaris using ZFS. Now that Oracle has taken Sun, and is closing up Solaris,
> we want to move away (we are more familiar with Linux anyway).
>
> So the plan is to move to Linux and put the data on a SAN using iSCSI (two
> or four network interfaces). This however leaves us with 16 very nice
> disks doing nothing. Sounds like a waste. If we were to use Solaris,
> ZFS would have a solution: use it as L2ARC. But there is no Linux filesystem
> with those features (ZFS on FUSE is not really an option).
>
> So I was thinking: Why not make a big fat array using 14 disks (raid 1, 10
> or 5), and make this a big and fast swap disk. Latency will be lower than
> the SAN can provide, and throughput will also be better, and it will relieve
> the SAN of a lot of read IOPS.
>
> So I could create a 1TB swap disk, and put it onto the OS next to the 64GB
> of memory. Then I can set Postgres to use more than the RAM size so it will
> start swapping. It would appear to postgres that the complete database will
> fit into memory. The question is: will this do any good? And if so: what
> will happen?

I'd make a couple of RAID-10s out of it and use them for highly used
tables and / or indexes etc...
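
Something like this, tablespace-wise (the mount point and the table and index
names are made up purely for illustration):

  psql -c "CREATE TABLESPACE fast_local LOCATION '/mnt/local_raid10'"
  psql -c "ALTER TABLE orders SET TABLESPACE fast_local"        # hypothetical hot table
  psql -c "ALTER INDEX orders_pkey SET TABLESPACE fast_local"   # and its primary key index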

Re: Using high speed swap to improve performance?

From: Greg Smith
Date:
Christiaan Willemsen wrote:
>
> So I was thinking: Why not make a big fat array using 14 disks (raid
> 1, 10 or 5), and make this a big and fast swap disk. Latency will be
> lower than the SAN can provide, and throughput will also be better,
> and it will relieve the SAN of a lot of read IOPS.
>

Presuming that swap will give predictable performance as things go into
and out of there doesn't sound like a great idea to me.  Have you
considered adding that space as a tablespace and setting
temp_tablespaces to point to it?  That's the best thing I can think of
to use a faster local disk with lower integrity guarantees for.
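
A minimal sketch of that idea, assuming the local array is mounted at a
hypothetical /mnt/local_raid10 (the tablespace and database names are
placeholders too):

  psql -c "CREATE TABLESPACE fast_temp LOCATION '/mnt/local_raid10/pgtemp'"
  psql -c "GRANT CREATE ON TABLESPACE fast_temp TO PUBLIC"           # allow temp files there
  psql -c "ALTER DATABASE mydb SET temp_tablespaces = 'fast_temp'"   # or set it in postgresql.conf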

--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com   www.2ndQuadrant.us


Re: Using high speed swap to improve performance?

From: Christiaan Willemsen
Date:

Hi Scott,

That sounds like a useful thing to do, but the big advantage of the SAN is that if the physical machine goes down, I can quickly start up a virtual machine using the same database files to act as a fallback. It will have less memory and fewer CPUs, but it will do fine for some time.

So if I put the heavily used tables on local storage, I lose those tables when the machine goes down.

Putting indexes on there, however, might be interesting. What will PostgreSQL do when it is started on the backup machine and finds out the index files are missing? Will it recreate those files, will it panic and not start at all, or can we just manually reindex?
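
By manually reindexing I mean something along these lines (database and table names are placeholders):

  psql -d mydb -c "REINDEX DATABASE mydb"    # rebuild every index in the database
  psql -d mydb -c "REINDEX TABLE orders"     # or rebuild indexes table by table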

Kind regards,

Christiaan

-----Original message-----
From: Scott Marlowe <scott.marlowe@gmail.com>
Sent: Sun 04-04-2010 23:08
To: Christiaan Willemsen <cwillemsen@technocon.com>;
CC: pgsql-performance@postgresql.org;
Subject: Re: [PERFORM] Using high speed swap to improve performance?

On Fri, Apr 2, 2010 at 1:15 PM, Christiaan Willemsen
<cwillemsen@technocon.com> wrote:
> Hi there,
>
> About a year ago we setup a machine with sixteen 15k disk spindles on
> Solaris using ZFS. Now that Oracle has taken Sun, and is closing up Solaris,
> we want to move away (we are more familiar with Linux anyway).
>
> So the plan is to move to Linux and put the data on a SAN using iSCSI (two
> or four network interfaces). This however leaves us with 16 very nice
> disks doing nothing. Sounds like a waste. If we were to use Solaris,
> ZFS would have a solution: use it as L2ARC. But there is no Linux filesystem
> with those features (ZFS on FUSE is not really an option).
>
> So I was thinking: Why not make a big fat array using 14 disks (raid 1, 10
> or 5), and make this a big and fast swap disk. Latency will be lower than
> the SAN can provide, and throughput will also be better, and it will relieve
> the SAN of a lot of read IOPS.
>
> So I could create a 1TB swap disk, and put it onto the OS next to the 64GB
> of memory. Then I can set Postgres to use more than the RAM size so it will
> start swapping. It would appear to postgres that the complete database will
> fit into memory. The question is: will this do any good? And if so: what
> will happen?

I'd make a couple of RAID-10s out of it and use them for highly used
tables and / or indexes etc...

Re: Using high speed swap to improve performance?

From: Lew
Date:
Christiaan Willemsen wrote:
> About a year ago we setup a machine with sixteen 15k disk spindles on
> Solaris using ZFS. Now that Oracle has taken Sun, and is closing up
> Solaris, we want to move away (we are more familiar with Linux anyway).

What evidence do you have that Oracle is "closing up" Solaris?
<http://www.oracle.com/us/products/servers-storage/solaris/index.html>
and its links, particularly
<http://www.sun.com/software/solaris/10/index.jsp>
seem to indicate otherwise.

Industry analysis seems to support the continuance of Solaris, too:
<http://jeremy.linuxquestions.org/2010/02/03/oracle-sun-merger-closes/>
"... it would certainly appear that Oracle is committed to the Solaris
platform indefinitely."

More recently, less than a week ago as I write this, there was the article
<http://news.yahoo.com/s/nf/20100330/tc_nf/72477>
which reports that Oracle may move away from open-sourcing Solaris, but
indicates that Oracle remains committed to Solaris as a for-pay product, and
also assesses a rosy future for Java.

--
Lew

Re: Using high speed swap to improve performance?

From: Scott Marlowe
Date:
On Sun, Apr 4, 2010 at 3:17 PM, Lew <noone@lwsc.ehost-services.com> wrote:
> Christiaan Willemsen wrote:
>>
>> About a year ago we setup a machine with sixteen 15k disk spindles on
>> Solaris using ZFS. Now that Oracle has taken Sun, and is closing up Solaris,
>> we want to move away (we are more familiar with Linux anyway).
>
> What evidence do you have that Oracle is "closing up" Solaris?

I don't think the other poster meant shutting down Solaris; that would
be insane.  I think he meant closing it, as in taking it closed
source, for which there is ample evidence.

Re: Using high speed swap to improve performance?

From: Lew
Date:
Christiaan Willemsen wrote:
>>> About a year ago we setup a machine with sixteen 15k disk spindles on
>>> Solaris using ZFS. Now that Oracle has taken Sun, and is closing up Solaris,
>>> we want to move away (we are more familiar with Linux anyway).

Lew <noone@lwsc.ehost-services.com> wrote:
>> What evidence do you have that Oracle is "closing up" Solaris?

Scott Marlowe wrote:
> I don't think the other poster mean shutting down solaris, that would
> be insane.  I think he meant closing it, as in taking it closed
> source, which there is ample evidence for.

Oh, that makes sense.  Yes, it does seem that they're doing that.

Some press hints that Oracle might keep OpenSolaris going, forked from the
for-pay product.  If that really is true, I speculate that Oracle might be
emulating the strategy in such things as Apache Geronimo - turn the
open-source side loose on the world under a license that lets you dip into it
for code in the closed-source product.  Innovation flows to the closed-source
product rather than from it.  This empowers products like WebSphere
Application Server, which includes a lot of reworked Apache code in the
persistence layer, the web-services stack, the app-server engine and elsewhere.

I don't know Oracle's plans, but that sure would be a good move for them.

As for me, I am quite satisfied with Linux.  I don't really know what the value
proposition of Solaris is anyway.

--
Lew