Discussion: Workaround for working_mem max value in windows?


Workaround for working_mem max value in windows?

From:
Nick Eubank
Date:
Hi all,

A few years ago someone said Postgres on Windows can't set work_mem above about 2 GB (www.postgresql.org/message-id/17895.1315869622@sss.pgh.pa.us -- the same seems to apply to maintenance_work_mem). I'm finding the limit is still present.

I'm doing single-user, single-connection, data-intensive queries and would like to set a higher value on Windows to make better use of 16 GB of built-in RAM (I don't control the platform, so I can't jump to Linux).

Has anyone found a workaround?

Thanks!

Nick

Re: Workaround for working_mem max value in windows?

From:
Martin French
Date:
> Hi all,
>
> A few years ago someone said Postgres on Windows can't set work_mem
> above about 2 GB (www.postgresql.org/message-id/17895.1315869622@sss.pgh.pa.us
> -- the same seems to apply to maintenance_work_mem). I'm finding the
> limit is still present.


Setting work_mem higher than 2GB on a 16GB machine could easily run the server out of memory.

work_mem is allocated per client and per sort, so setting it to 2GB could exhaust the available RAM very quickly on a complex query with multiple sorts (or with more than 8 clients -- and although you mention that you're a single user, that doesn't mean there is only one connection to the database).

The same rule applies to maintenance_work_mem: if more than one autovacuum worker is running, each can use up to maintenance_work_mem, again exhausting the server's memory very quickly.
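
As a rough worst case, the multiplication works out like this (a sketch with hypothetical figures, not measurements):

-- work_mem = 2GB with a plan containing 4 sort/hash nodes can claim
-- up to 4 x 2GB = 8GB for one query, and three concurrent autovacuum
-- workers at maintenance_work_mem = 2GB add another 6GB: past 14GB
-- on a 16GB box before counting shared_buffers. Relevant settings:
SHOW work_mem;
SHOW maintenance_work_mem;
SHOW autovacuum_max_workers;  -- 3 by default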

>
> I'm doing single-user, single-connection, data-intensive queries and
> would like to set a higher value on Windows to make better use of 16 GB
> of built-in RAM (I don't control the platform, so I can't jump to Linux).
>
> Has anyone found a workaround?

PostgreSQL on Windows is maintained by EnterpriseDB IIRC, so maybe someone on their forums has ideas on this; I doubt very much that the extra work in the PostgreSQL core would be undertaken, given Tom's comment in the thread you posted.

http://forums.enterprisedb.com/forums/list.page

Cheers

Re: Workaround for working_mem max value in windows?

From:
amulsul
Date:
> Has anyone found a workaround?

Wouldn't it help to set it in your session?

set work_mem='2000MB';
set maintenance_work_mem='2000MB';

then run the rest of your SQL afterwards.

Regards,
Amul Sul





Re: Workaround for working_mem max value in windows?

From:
Nick Eubank
Date:



On Wed, Apr 16, 2014 at 1:29 AM, amulsul <sul_amul@yahoo.co.in> wrote:
> > Has anyone found a workaround?
>
> Wouldn't it help to set it in your session?
>
> set work_mem='2000MB';
> set maintenance_work_mem='2000MB';
>
> then run the rest of your SQL afterwards.
>
> Regards,
> Amul Sul




Thanks all!

Sorry Martin, I should have been clearer about my usage plans: I'm only interested in optimizing for single-connection, sequential, high-demand queries, so I think I'm safe bumping up memory usage, even if that's usually a disastrous idea for most users. I'll definitely check with the EnterpriseDB folks!

Amul: thanks for the follow-up! Unfortunately, setting it locally runs into the same limitation as setting it in the config file -- I get "ERROR: 3072000 is outside the valid range for parameter "work_mem" (64 .. 2097151), SQL state: 22023" if I set it above ~1.9 GB. :(
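
From what I can tell, the numbers in that error are in work_mem's base unit of kB, which is why the cutoff looks odd. The boundary seems to sit here:

SET work_mem = '3000MB';     -- rejected: 3000MB = 3072000 kB, above the cap
SET work_mem = '2097151kB';  -- accepted: the maximum, just under 2 GB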

Re: Workaround for working_mem max value in windows?

From:
"Martin French"
Date:


On 16 Apr 2014, at 17:35, "Nick Eubank" <nickeubank@gmail.com> wrote:

> Thanks all!
>
> Sorry Martin, I should have been clearer about my usage plans: I'm only
> interested in optimizing for single-connection, sequential, high-demand
> queries, so I think I'm safe bumping up memory usage, even if that's
> usually a disastrous idea for most users. I'll definitely check with the
> EnterpriseDB folks!
>
> Amul: thanks for the follow-up! Unfortunately, setting it locally runs
> into the same limitation as setting it in the config file -- I get
> "ERROR: 3072000 is outside the valid range for parameter "work_mem"
> (64 .. 2097151), SQL state: 22023" if I set it above ~1.9 GB. :(


Nick, the issue would still remain: if you set work_mem to 2GB and joined 8 tables together, it could consume far too much memory.

So if you DO decide to proceed with this, I would recommend erring on the side of caution and being a little more concerned with tuning your SQL statements.
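
One cautious habit along those lines (a sketch; the tables here are hypothetical): check the plan first and count the nodes that can each claim up to work_mem.

-- Every Sort, Hash and HashAggregate node in the plan may use up to
-- work_mem, so an 8-table join can multiply the setting many times:
EXPLAIN
SELECT *
FROM t1
JOIN t2 USING (id)
JOIN t3 USING (id);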

Good luck. 



Re: Workaround for working_mem max value in windows?

From:
Jeff Janes
Date:
On Tue, Apr 15, 2014 at 6:36 PM, Nick Eubank <nickeubank@gmail.com> wrote:
> Hi all,
>
> A few years ago someone said Postgres on Windows can't set work_mem
> above about 2 GB (www.postgresql.org/message-id/17895.1315869622@sss.pgh.pa.us
> -- the same seems to apply to maintenance_work_mem). I'm finding the
> limit is still present.
>
> I'm doing single-user, single-connection, data-intensive queries and
> would like to set a higher value on Windows to make better use of 16 GB
> of built-in RAM (I don't control the platform, so I can't jump to Linux).
>
> Has anyone found a workaround?

Before worrying much about that, I'd just give it a try at the highest value it will let you set and see what happens.

If you want to do something like a hashed aggregate that would have been predicted to fit in 6GB but not in 1.999GB, then you will lose out on the hash agg by not being able to set the memory higher.  On the other hand, if your queries want to use sorts that will spill to disk anyway, the exact value of work_mem usually doesn't matter much as long as it's not absurdly small (1MB is absurdly small for analytics; 64MB is probably not).  In fact, very large work_mem can be worse in those cases, because large priority-queue heaps are unfriendly to the CPU cache.  (Based on Linux experience, but I don't see why that would not carry over to Windows.)
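
One hedged way to see which side of that line a given query falls on (table and column names here are hypothetical): run it under EXPLAIN ANALYZE at the highest work_mem you can set and look at the aggregate node.

SET work_mem = '2047MB';
EXPLAIN ANALYZE
SELECT customer_id, count(*)
FROM big_table
GROUP BY customer_id;
-- "HashAggregate" means it fit in memory; a "GroupAggregate" fed by a
-- Sort node reporting "Sort Method: external merge" means it spilled.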

Frankly, I think you've bitten off more than you can chew.  600GB of csv is going to expand to probably 3TB of PostgreSQL data once loaded.  If you can't control the platform, I'm guessing your disk array options are no better than your OS options.

ACID compliance is expensive, both in storage overhead and in processing time, and I don't think you can afford that and probably don't need it.  Any chance you could give up on databases and get what you need just using pipelines of sort, cut, uniq, awk, perl, etc. (or whatever their Windows equivalents are)?
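
If you do stay in PostgreSQL, some of that overhead can be traded away per table. A sketch, assuming you can live without crash safety (fine for throwaway analytics; the table definition is hypothetical):

-- UNLOGGED tables skip WAL entirely, and asynchronous commit stops
-- waiting for the WAL flush at commit time; both lose data on a crash:
CREATE UNLOGGED TABLE staging (id bigint, payload text);
SET synchronous_commit = off;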

Cheers,

Jeff

Re: Workaround for working_mem max value in windows?

From:
Nick Eubank
Date:


On Wednesday, April 16, 2014, Jeff Janes <jeff.janes@gmail.com> wrote:
> [...]
>
> Frankly, I think you've bitten off more than you can chew.  600GB of csv
> is going to expand to probably 3TB of PostgreSQL data once loaded.  If
> you can't control the platform, I'm guessing your disk array options are
> no better than your OS options.
>
> ACID compliance is expensive, both in storage overhead and in processing
> time, and I don't think you can afford that and probably don't need it.
> Any chance you could give up on databases and get what you need just
> using pipelines of sort, cut, uniq, awk, perl, etc. (or whatever their
> Windows equivalents are)?
>
> Cheers,
>
> Jeff

Thanks Jeff -- you're clearly correct that SQL is not the optimal tool for this, as I'm increasingly leaning that way myself. I just can't find anything MADE for single-user big-data transformations. :/ I may resort to that kind of pipeline approach; I just have so many transformations to do that I was hoping I could use a declarative language for some of it.

But your point about hash map size is excellent. No idea how big an index for this would be...

Re: Workaround for working_mem max value in windows?

From:
amul sul
Date:
On Wednesday, 16 April 2014 10:05 PM, Nick Eubank <nickeubank@gmail.com> wrote:



> Amul: thanks for the follow-up! Unfortunately, setting it locally runs
> into the same limitation as setting it in the config file -- I get
> "ERROR: 3072000 is outside the valid range for parameter "work_mem"
> (64 .. 2097151), SQL state: 22023" if I set it above ~1.9 GB. :(


Yes, you can set work_mem up to 2047MB: that's 2096128 kB, just under the 2097151 kB cap in the error message.
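
For example, in a session (a quick sketch of the boundary):

SET work_mem = '2047MB';              -- accepted: 2047MB = 2096128 kB
SET maintenance_work_mem = '2047MB';  -- accepted: same cap applies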


Regards,
Amul Sul