When the parallel degree is set to a very high value, say 70000, the parallel code hits a segmentation fault because a type cast is missing.
Take a look at the test case below:
create table abd(n int) with (parallel_degree=70000);
insert into abd values (generate_series(1,1000000));
analyze abd;
vacuum abd;
set max_parallel_degree=70000;
explain analyze verbose select * from abd where n<=1;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: LOG: server process (PID 41906) was terminated by signal 11: Segmentation fault
DETAIL: Failed process was running: explain analyze verbose select * from abd where n<=1;
This crashes because of the following loop in the ExecParallelSetupTupleQueues function:

for (i = 0; i < pcxt->nworkers; ++i)
{
    ...
    mq = shm_mq_create(tqueuespace + i * PARALLEL_TUPLE_QUEUE_SIZE,
                       (Size) PARALLEL_TUPLE_QUEUE_SIZE);
    ...
}

Here i is an int, but the offset into tqueuespace needs to be computed as a Size. Once the worker index goes beyond 32767, the product i * PARALLEL_TUPLE_QUEUE_SIZE (i * 65536) exceeds INT_MAX, the multiplication overflows the int range, and the resulting pointer lands in illegal memory, so the backend crashes or corrupts memory. Casting the index before the multiplication fixes it:

i * PARALLEL_TUPLE_QUEUE_SIZE --> (Size) i * PARALLEL_TUPLE_QUEUE_SIZE
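
To see the wraparound in isolation, here is a minimal standalone sketch (plain C, not PostgreSQL code; it assumes PARALLEL_TUPLE_QUEUE_SIZE is 65536, which is what the 32767 boundary above implies):

#include <stdio.h>
#include <stddef.h>

#define PARALLEL_TUPLE_QUEUE_SIZE 65536    /* assumed 64kB per-worker queue */

int
main(void)
{
    int         i = 40000;  /* a worker index beyond 32767 */

    /* The true offset, computed in 64 bits. */
    long long   product = (long long) i * PARALLEL_TUPLE_QUEUE_SIZE;

    /* What 32-bit int arithmetic hands to the pointer addition. */
    int         wrapped = (int) product;

    /* The correct offset: widen to size_t before multiplying. */
    size_t      correct = (size_t) i * PARALLEL_TUPLE_QUEUE_SIZE;

    printf("true offset:     %lld\n", product);  /* 2621440000 */
    printf("after int wrap:  %d\n", wrapped);    /* -1673527296 */
    printf("widened offset:  %zu\n", correct);   /* 2621440000 on 64-bit */
    return 0;
}

With the wrapped value, the queue address ends up roughly 1.6GB before tqueuespace, which is why the backend dies with SIGSEGV.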
The attached patch fixes this issue. Apart from this spot, I have added type casts at the other places where they are needed;
those casts also fix the other issue (ERROR: requested shared memory size overflows size_t) described in this mail thread:
http://www.postgresql.org/message-id/570BACFC.6020305@enterprisedb.com
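
For reference, the casts elsewhere follow the same pattern: widen one operand to Size so the multiplication is not performed in int. A sketch of the kind of change involved, using the tuple queue space estimation as an example (illustrative only, not the exact patch hunks):

/* before: PARALLEL_TUPLE_QUEUE_SIZE * pcxt->nworkers is computed in int */
shm_toc_estimate_chunk(&pcxt->estimator,
                       PARALLEL_TUPLE_QUEUE_SIZE * pcxt->nworkers);

/* after: the product is computed in Size */
shm_toc_estimate_chunk(&pcxt->estimator,
                       (Size) PARALLEL_TUPLE_QUEUE_SIZE * pcxt->nworkers);

Another option for the size-estimation spots is mul_size(), which raises a clean "requested shared memory size overflows size_t" error instead of silently wrapping.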