Hi,

I have a table (sanact) with 23,125,525 rows (and a hundred columns). I am doing a select that did not finish after some 15 hours... The select is as follows:

select * from sanact where sanact___rfovsnide = 'MYVERSION' order by sanactcsu;

There is an index on sanact___rfovsnide and EXPLAIN shows it is used.
The resulting dataset should be about 1,626,000 rows.
iostat shows 99.5% idle disks, almost no activity.
top shows almost no CPU usage.

Where should I be looking for a problem?

Thanks in advance,
Franck
Franck Routier <franck.routier@axege.com> writes:
> I am doing a select, that did not finish after some 15 hours... Select
> is as follows:
> select * from sanact where sanact___rfovsnide = 'MYVERSION' order by
> sanactcsu;
> There is an index on sanact___rfovsnide and doing EXPLAIN shows it is used.
> Resulting dataset should be 1626000 rows.
> iostat shows 99.5% idle disks, almost no activity.
> top shows almost no cpu usage.
> Where should I be looking for a problem ?
pg_locks, probably.
regards, tom lane
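A minimal sketch of the kind of pg_locks check meant here, assuming PostgreSQL 9.2 or later (older releases expose procpid and current_query in pg_stat_activity instead of pid and query):

    -- List lock requests that have not been granted, together with the
    -- query of each waiting backend.
    SELECT l.pid, l.locktype, l.relation::regclass AS relation, l.mode, a.query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.pid = l.pid
    WHERE NOT l.granted;

An empty result means no backend is currently blocked waiting on a heavyweight lock.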
On 20/03/2014 14:56, Tom Lane wrote:
> pg_locks, probably.
>
> regards, tom lane

select * from pg_stat_activity shows 'f' (false) in the waiting column for the query. Can I rely on that, or should I be investigating further for subtler types of locks?
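For reference, a sketch of that pg_stat_activity check (column names assume 9.2 through 9.5; in 9.6+ the waiting flag was replaced by wait_event_type/wait_event, and it only ever reflects heavyweight-lock waits, not other kinds of blocking):

    -- Show the state of the backend running the big select; the LIKE
    -- filter is just an illustration and depends on the exact query text.
    SELECT pid, state, waiting, now() - query_start AS runtime, query
    FROM pg_stat_activity
    WHERE query LIKE 'select * from sanact%';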
Franck Routier <franck.routier@axege.com> writes:
> On 20/03/2014 14:56, Tom Lane wrote:
>> pg_locks, probably.
> select * from pg_stat_activity shows 'f' (false) in the waiting column
> for the query.

Hm. The next most likely theory is that it's waiting on network I/O, but it's hard to tell that from the outside. Can you attach to the stuck backend with gdb and get a stack trace?

http://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend

regards, tom lane
On 20/03/2014 15:15, Tom Lane wrote:
> Hm. The next most likely theory is that it's waiting on network I/O,
> but it's hard to tell that from the outside. Can you attach to the
> stuck backend with gdb and get a stack trace?
> http://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend
>
> regards, tom lane

I found the problem; it was not related to PostgreSQL after all. The client (CloverETL, using JDBC) was stuck and not "consuming" the records. I killed and restarted the ETL and all is fine now.

Thanks a lot for your time and help,
Franck
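For anyone hitting the same symptom: a backend stalled because its client stops consuming rows typically shows up as a long-running 'active' statement that is not waiting on any lock. A rough sketch of a query to spot that pattern, assuming 9.2-9.5 column names (on 9.6 and later, wait_event = 'ClientWrite' identifies it directly); the 10-minute threshold is arbitrary:

    -- Long-running active statements that are not blocked on a lock,
    -- excluding the session running this check itself.
    SELECT pid, now() - query_start AS runtime, state, waiting, query
    FROM pg_stat_activity
    WHERE state = 'active'
      AND NOT waiting
      AND pid <> pg_backend_pid()
      AND now() - query_start > interval '10 minutes';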