Re: Questions about connection clean-up and "invalid page header"

From Scott Marlowe
Subject Re: Questions about connection clean-up and "invalid page header"
Date
Msg-id dcc563d11001250015v12fac7a7l7f93082baadfc913@mail.gmail.com
In reply to Questions about connection clean-up and "invalid page header"  (Herouth Maoz <herouth@unicell.co.il>)
Responses Re: Questions about connection clean-up and "invalid page header"  (Herouth Maoz <herouth@unicell.co.il>)
Re: Questions about connection clean-up and "invalid page header"  (Greg Stark <gsstark@mit.edu>)
List pgsql-general
On Sun, Jan 24, 2010 at 3:17 AM, Herouth Maoz <herouth@unicell.co.il> wrote:
> Hi Everybody.
>
> I have two questions.
>
> 1. We have a system that is accessed by Crystal Reports, which is in turn
> controlled by another (3rd-party) system. Now, when a report takes too long or
> the user cancels it, that system doesn't send a cancel request to Postgres. It
> just kills the Crystal process that works on it.
>
> As a result, the query is left alive on the Postgres backend. Eventually I get
> the message "Unexpected End of file" and the query is cancelled. But this
> doesn't happen soon enough for me - these are usually very heavy queries, and
> I'd like them to be cleaned up as soon as possible if the client connection
> has ended.

The real solution is to fix the application.  But I understand
sometimes you can't do that.

> Is there a parameter to set in the configuration or some other means to
> shorten the time before an abandoned backend's query is cancelled?

You can shorten the tcp_keepalives_* settings so that dead connections
get detected faster.
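
Something along these lines in postgresql.conf, for example (the values
are only illustrative, and they assume a platform where per-connection
keepalive settings are supported; a value of 0 just leaves the OS
default in place):

    tcp_keepalives_idle = 60        # seconds of idle before the first keepalive probe
    tcp_keepalives_interval = 10    # seconds between probes
    tcp_keepalives_count = 5        # unanswered probes before the connection is dropped

With values like those a vanished client gets noticed after roughly
idle + interval * count seconds, instead of the couple of hours that is
a typical kernel default.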

> 2. I get the following message in my development database:
>
> vacuumdb: vacuuming of database "reports" failed: ERROR:  invalid page header
> in block 6200 of relation "rb"
>
> I had this already a couple of months ago. Looking around the web, I saw this
> error is supposed to indicate a hardware error. I informed my sysadmin, but
> since this is just the dev system and the data was not important, I did a
> TRUNCATE TABLE on the "rb" relation, and the errors stopped...
>
> But now the error is back, and I'm a bit suspicious. If this is a hardware
> issue, it's rather suspicious that it returned in the exact same relation
> after I did a "truncate table". I have many other relations in the system,
> ones that fill up a lot faster. So I suspect this might be a PostgreSQL issue
> after all. What can I do about this?

Might be, but it's not very likely.  I and many others run pgsql in
production environments where it handles thousands of updates /
inserts per minute, on server-class hardware with ECC memory and
large RAID arrays, with no corruption.

Have you run something as simple as memtest86+ on your machine to see
if it's got bad memory?
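
If you want to look at the damaged page itself before wiping it again,
here's a rough sketch of how to find it (assuming "rb" is an ordinary
table in the default tablespace and you're on the standard 8kB block
size):

    SHOW block_size;                                         -- confirm the page size
    SELECT oid FROM pg_database WHERE datname = 'reports';   -- database directory
    SELECT relfilenode FROM pg_class WHERE relname = 'rb';   -- file backing the table

The table's file is then base/<database oid>/<relfilenode> under the
data directory, and block 6200 starts at byte offset 6200 * block_size.
Dumping that one page with dd/od and eyeballing it can hint at whether
you're looking at a torn write, random bit flips, or something else.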

> We are currently using PostgreSQL v. 8.3.1 on the server side.

You should really update to the latest 8.3.x version (around 8.3.8 or
so).  It's simple and easy, and it's possible you've hit a bug in an
older version of 8.3.
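
For what it's worth, a minor upgrade within the 8.3 series is just a
matter of installing the new binaries and restarting; no dump/reload is
needed.  Afterwards you can confirm what you're on with:

    SELECT version();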
