It seems impossible to use hundreds of gigabytes of temp space, but I’ve done it. Adding temp space beyond 2TB is not likely to be the right solution here; instead, you want to avoid needing that much temp space in the first place. Your PostgreSQL server is not a large Spark cluster.
Posting the text of the query here would help … as would the output of EXPLAIN [query].
Yes, tuning your postgresql.conf for analytics-type workloads is also very helpful, but that is not the core problem here.
Here is a good place to start on that … https://pgtune.leopard.in.ua/#/
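In the meantime, if you just want to see whether more memory keeps the sort/hash in RAM, you can raise work_mem for a single session rather than server-wide. A sketch (the value is illustrative, not a recommendation):

```sql
-- Session-local only; does not touch postgresql.conf or affect other backends.
SET work_mem = '256MB';

-- Then look at where the time and temp I/O actually go:
EXPLAIN (ANALYZE, BUFFERS)
SELECT ... ;   -- your problem query here
```

In the EXPLAIN ANALYZE output, look for "Sort Method: external merge" or batched hash joins; those are the operations spilling to temp files.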
Hi team,
I am hitting this Postgres error: PSQLException: ERROR: could not write to hash-join temporary file: No space left on device
After investigation, I can see that neither work_mem nor temp_tablespaces is set in the postgresql.conf file,
but when I queried the database, work_mem showed as 15728kB.
$ cat postgresql.conf | grep temp_tablespaces
#temp_tablespaces = '' # a list of tablespace names, '' uses
$ cat postgresql.conf | grep work_mem
#work_mem = 4MB # min 64kB
#maintenance_work_mem = 64MB # min 1MB
#autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem
-- The recommended value I have seen is 64MB
postgres=# show work_mem;
work_mem
----------
15728kB
(1 row)
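Since the line in postgresql.conf is commented out but the server reports 15728kB, the value must be coming from somewhere else (postgresql.auto.conf via ALTER SYSTEM, an ALTER DATABASE/ROLE setting, or a startup option). A query like this should show the source (a sketch):

```sql
-- Where is work_mem actually being set?
SELECT name, setting, unit, source, sourcefile, sourceline
FROM pg_settings
WHERE name = 'work_mem';
```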
I can also see that a large number of temp files has been created.
postgres=# SELECT datname, temp_files,temp_bytes from pg_catalog.pg_stat_database;
datname | temp_files | temp_bytes
------------+------------+---------------
postgres | 24 | 25013533
user_db | 200797 | 2774922398171
template1 | 0 | 0
template0 | 0 | 0
(4 rows)
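For reference, 2,774,922,398,171 bytes over 200,797 files works out to roughly 13–14 MB per temp file on average (note these counters are cumulative since the last stats reset, not a point-in-time figure). A sketch of a query to show this per database:

```sql
-- Average size of temp files written, per database (cumulative since stats reset).
SELECT datname,
       temp_files,
       pg_size_pretty(temp_bytes) AS temp_total,
       pg_size_pretty(temp_bytes / NULLIF(temp_files, 0)) AS avg_temp_file
FROM pg_stat_database
WHERE temp_files > 0;
```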
$ free -h
total used free shared buffers cached
Mem: 31G 18G 12G 5.4G 408M 17G
-/+ buffers/cache: 1.4G 29G
Swap: 4.0G 226M 3.8G
In my opinion, work_mem needs to be raised, since the temp files created are quite large.
Can someone please advise a recommended value for work_mem (is 64MB appropriate)?
And is setting temp_tablespaces also recommended in this case?
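A sketch of what I have in mind, assuming superuser access (the values and the '/mnt/pgtemp' path are illustrative, not a recommendation):

```sql
-- Raise work_mem cluster-wide; written to postgresql.auto.conf.
ALTER SYSTEM SET work_mem = '64MB';
SELECT pg_reload_conf();   -- work_mem is reloadable, no restart needed

-- Optionally put temp files on their own volume so they cannot fill the data disk.
-- The directory must already exist and be owned by the postgres OS user.
CREATE TABLESPACE temp_space LOCATION '/mnt/pgtemp';
ALTER SYSTEM SET temp_tablespaces = 'temp_space';
SELECT pg_reload_conf();
```

Would this be a reasonable approach, or does moving temp files just move the problem?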
Thanks,
Calvin