Problem retrieving large records (bytea) data from a table

From: jtkells@verizon.net
Subject: Problem retrieving large records (bytea) data from a table
Date:
Msg-id: bnf617dql1hq15cd8vlqdnngdkejtechhg@4ax.com
Replies: Re: Problem retrieving large records (bytea) data from a table  (pasman pasmański <pasman.p@gmail.com>)
         Re: Problem retrieving large records (bytea) data from a table  (Thomas Kellerer <spam_eater@gmx.net>)
List: pgsql-admin
I am having a hang condition every time I try to retrieve large record
(bytea) data from a table. The OS is Solaris (5.11 snv_134 i86pc i386
i86pc) with 4 GB of memory, running PostgreSQL 8.4.3 with a standard
postgresql.conf file (nothing has been changed).
I have the following table called doc_table:

    Column    |              Type              |               Modifiers                | Storage
--------------+--------------------------------+----------------------------------------+----------
 id           | numeric                        | not null                               | main
 file_n       | character varying(4000)        |                                        | extended
 create_date  | timestamp(6) without time zone | not null default (clock_timestamp())   | plain
              |                                |   ::timestamp(0) without time zone     |
 desc         | character varying(4000)        |                                        | extended
 doc_cc       | character varying(120)         | not null                               | extended
 by           | numeric                        | not null                               | main
 doc_data     | bytea                          |                                        | extended
 mime_type_id | character varying(16)          | not null                               | extended
 doc_src      | text                           |                                        | extended
 doc_stat     | character varying(512)         | not null                               | extended
              |                                |   default 'ACTIVE'::character varying  |
Indexes:
    "documents_pk" PRIMARY KEY, btree (document_id)


A while ago some developers inserted several records with a document
(stored in doc_data) of around 400 - 450 MB each. Now when you do a
select * (all) from this table, the query hangs and the system becomes
unresponsive. Prior to these inserts, a select * (all, no where clause)
worked. I'm also told a select * from doc_table where id = xxx still
works. I haven't seen any error messages in the postgresql log files.

So I'm not sure how to find these bad records or why I am getting a
hang. Since this postgresql is running with the default config files,
could I be running out of a resource? If so, I'm not sure how, or how
much, to add to these resources to fix this problem, since I have very
little memory on this system. Does anyone have any ideas why I am
getting a hang? Thanks
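One way to narrow down the oversized rows without fetching the data itself is to select only the sizes (a sketch, not from the thread; it assumes the doc_table column names above, and the id value 123 is hypothetical):

```sql
-- pg_column_size() reports the on-disk (possibly compressed/TOASTed) size
-- without decompressing; octet_length() would return the logical byte count
-- but forces decompression of each value.
SELECT id, file_n, pg_column_size(doc_data) AS stored_bytes
FROM doc_table
ORDER BY stored_bytes DESC
LIMIT 20;

-- Once the large ids are known, a value can be fetched piecewise with
-- substring() instead of pulling the whole 400+ MB field into client memory:
SELECT substring(doc_data FROM 1 FOR 1048576)   -- first 1 MB chunk
FROM doc_table
WHERE id = 123;                                 -- hypothetical id
```

Looping over successive substring() offsets retrieves the full document in bounded memory, which sidesteps the client-side allocation that a plain select * would need for each huge row.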
