Re: PG Admin 4.28 is crashing while viewing 10M row data

From: Holger Jakobs
Subject: Re: PG Admin 4.28 is crashing while viewing 10M row data
Date:
Msg-id: 3DBC1D09-28AE-4B02-9052-8C31ED434647@jakobs.com
In reply to: Re: PG Admin 4.28 is crashing while viewing 10M row data  (Sachin Kumar <sachinkumaras@gmail.com>)
Responses: Re: PG Admin 4.28 is crashing while viewing 10M row data
List: pgsql-admin
Why not check why the server didn't start? That certainly isn't pgAdmin-related.

Have a look at your system log. It will show where the typo in the config file lies; the file name and line number can be found there.

Don't search in PostgreSQL's log, but in the system log!
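Once the server is back up, one way to catch a bad edit before the next restart is to query pg_file_settings, which reports a parse error per config line (a minimal sketch, assuming PostgreSQL 9.5 or later):

SELECT sourcefile, sourceline, name, error
FROM pg_file_settings
WHERE error IS NOT NULL;   -- shows file name, line number and the offending setting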


On 26 November 2020 at 06:30:57 CET, Sachin Kumar <sachinkumaras@gmail.com> wrote:
Hi Team,

After making all these changes, the DB service stopped and would not start when I tried to start it manually. I had to roll back all these changes and reinstall pgAdmin, which wasted a lot of the team's time fixing these issues.

Error statement: could not connect to server: Connection refused (0x0000274D/10061) Is the server running on host "localhost" (127.0.0.1) and accepting TCP/IP connections on port 5432?

Kindly share a tried and tested solution.



On Wed, Nov 25, 2020 at 7:22 AM dbatoCloud Solution <dbatocloud17@gmail.com> wrote:
Hi Sachin - try the below steps and let me know

Hi Sachin,
Step 1: Configure the parameters below before the bulk insert, then restart the server (a sketch of applying them follows the list).

shared_buffers = 2048MB (default: 32MB)               -  25% of physical memory is 2GB (your system has 8GB of physical memory)
effective_cache_size = 4096MB (default: 128MB)        -  I recommend a higher value so that index scans are used instead of sequential scans.
checkpoint_segments = 32 (default: 3)                 -  only applies to versions before 9.5; in version 12 this parameter no longer exists and max_wal_size is used instead.
max_wal_size       = 32 (default: 1GB)                -  increase the value for heavy load, but note that it will increase recovery time if the system crashes.
checkpoint_completion_target = 0.9 (default: 0.5)     -  fraction of the total time between checkpoints over which a checkpoint is spread; increase it to 0.9.
default_statistics_target = 1000 (default: 100)       -  level of detail of statistics collected by ANALYZE; higher values can give the planner better estimates.
work_mem = 200MB (default: 1MB)
maintenance_work_mem = 1GB (default: 16MB)            -  used for VACUUM and index creation; since your table is an old table with an index, I would recommend setting this parameter higher.
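A sketch of applying these with ALTER SYSTEM instead of editing postgresql.conf by hand (the unit for max_wal_size is an assumption; shared_buffers only takes effect after a restart):

ALTER SYSTEM SET shared_buffers = '2048MB';
ALTER SYSTEM SET effective_cache_size = '4096MB';
ALTER SYSTEM SET max_wal_size = '32GB';              -- assuming the "32" above means 32GB
ALTER SYSTEM SET checkpoint_completion_target = 0.9;
ALTER SYSTEM SET default_statistics_target = 1000;
ALTER SYSTEM SET work_mem = '200MB';
ALTER SYSTEM SET maintenance_work_mem = '1GB';
-- then restart the PostgreSQL service so that shared_buffers takes effect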

Step 2:

select name, setting from pg_settings where name = 'autovacuum' ;

If the output says "on", there is no need to vacuum your table; otherwise do the below:

VACUUM FULL hk_card_master_test;
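A sketch of checking whether autovacuum has actually processed this table before resorting to VACUUM FULL (which takes an exclusive lock and rewrites the table):

SELECT relname, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'hk_card_master_test';   -- dead tuples and last (auto)vacuum timestamps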

 

 

Thanks & Best Wishes,

Ashok

 

---------------------------------------------------------------------------------------------------------------------

Ashokkumar Mani  (OCP12c/11g/10g/9i, AWS SAA, M103)

Dubai , UAE | BLR , INDIA

M: +971 54 723 0075 | +91 90086 70302 | WApp : +91 81975 99922

W: https://dbatocloudsolution.blogspot.in/ | E: dbatocloud17@gmail.com



On Tue, Nov 24, 2020 at 4:08 PM Sachin Kumar <sachinkumaras@gmail.com> wrote:
Hi Ashok,

It's an old table on which an index is already created.

PostgreSQL database versions 12 and 13, one on each server.

All settings are on the default configuration.

Server configuration:

Operating system: Windows (Windows Server 2016 Datacenter)
Size: 2 vCPUs, 8 GiB memory

On Mon, Nov 23, 2020 at 8:30 PM dbatoCloud Solution <dbatocloud17@gmail.com> wrote:
Hi Sachin, 
Sorry for the delayed response.

Why don't you run it batch by batch?

Could you please confirm the values of the parameters below? Since you are doing a bulk load, I need these details before I can recommend whether a vacuum is needed (a query to pull them all at once follows the list).
hk_card_master_test              > was it newly created, or is it an old table on which an index already existed? Please confirm.
PostgreSQL database version                       ?
shared_buffers =                          ? (default: 32MB)
effective_cache_size = ? (default: 128MB)
checkpoint_segment = ? (default: 3)
checkpoint_completion_target = ? (default: 0.5)
default_statistics_target = ? (default: 100)
work_mem = ? (default: 1MB)
maintenance_work_mem = ? (default: 16MB)
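A sketch of pulling these in one query (checkpoint_segments no longer exists in version 12, so it is left out here):

SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('shared_buffers', 'effective_cache_size', 'max_wal_size',
               'checkpoint_completion_target', 'default_statistics_target',
               'work_mem', 'maintenance_work_mem');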


 

Thanks & Best Wishes,

Ashok

 

---------------------------------------------------------------------------------------------------------------------

Ashokkumar Mani  (OCP12c/11g/10g/9i, AWS SAA, M103)

Dubai , UAE | BLR , INDIA

M: +971 54 723 0075 | +91 90086 70302 | WApp : +91 81975 99922

W: https://dbatocloudsolution.blogspot.in/ | E: dbatocloud17@gmail.com



On Mon, Nov 23, 2020 at 10:36 AM Sachin Kumar <sachinkumaras@gmail.com> wrote:
HI Ashok,

Please find the command I am using to load 10 million rows into a table.

COPY command used to load the records:
copy hk_card_master_test from 'C:/inetpub/wwwroot/cards/media/static/rawsheet_001-000.csv' with delimiter E'\t' csv header encoding 'ISO_8859_5'

I am not using the VACUUM command during the update. No, I am not vacuuming this large table.
I am using the analyze command for other queries.

I am not aware of the VACUUM command's functionality. Please help me understand the VACUUM command.
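In short, plain VACUUM reclaims space from dead row versions left behind by UPDATE and DELETE without blocking reads or writes, and ANALYZE refreshes the planner statistics. A minimal illustration on this table:

VACUUM (VERBOSE, ANALYZE) hk_card_master_test;   -- reports what was cleaned and updates statistics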


On Mon, Nov 23, 2020 at 10:23 AM dbatoCloud Solution <dbatocloud17@gmail.com> wrote:
Hi Sachin,

Let me know: how did you issue commit statements during the update? Did you try using the VACUUM and ANALYZE commands during the update? Are you frequently vacuuming this large table?

If PostgreSQL is running on Linux, then run it batch by batch with intermediate commits (a sketch follows).
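A rough sketch of the batch idea, assuming the CSV has been split into parts (the file names below are hypothetical):

COPY hk_card_master_test FROM 'C:/inetpub/wwwroot/cards/media/static/rawsheet_part01.csv'
  WITH (FORMAT csv, DELIMITER E'\t', HEADER, ENCODING 'ISO_8859_5');
-- each COPY run as its own statement commits on its own, so loading one part at a time
-- gives intermediate commits; repeat for rawsheet_part02.csv, rawsheet_part03.csv, ...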

 

Thanks & Best Wishes,

Ashok

 

 

 

Thanks & Best Wishes,

Ashok

 

---------------------------------------------------------------------------------------------------------------------

Ashokkumar Mani  (OCP12c/11g/10g/9i, AWS SAA, M103)

Dubai , UAE | BLR , INDIA

M: +971 54 723 0075 | +91 90086 70302 | WApp : +91 81975 99922

W: https://dbatocloudsolution.blogspot.in/ | E: dbatocloud17@gmail.com



On Sun, Nov 22, 2020 at 10:25 PM David G. Johnston <david.g.johnston@gmail.com> wrote:

On Sunday, November 22, 2020, Sachin Kumar <sachinkumaras@gmail.com> wrote:
Hi Expert,
I am updating 10 million records in a table using the COPY command. The table is updated in a minute, but when I try to view all rows, pgAdmin 4.28, which I am using in Chrome, crashes. I tried it 4 times and it crashed 3 times. After viewing these 10M rows, the other queries run very slowly.
Is there any way to clear the buffer in pgAdmin, or a way to make queries in pgAdmin run at their normal speed again?
Please guide me on how to fix this issue.
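A lighter way to check the load than pulling all 10M rows into the browser (a sketch):

SELECT count(*) FROM hk_card_master_test;          -- confirm the row count
SELECT * FROM hk_card_master_test LIMIT 1000;      -- spot-check a sample instead of viewing everything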


Stop and restart the program?

What do you hope to do with 10M full records displayed on screen (even if pagination is involved...)?

David J.
 


--

Best Regards,
Sachin Kumar


--

Best Regards,
Sachin Kumar


--

Best Regards,
Sachin Kumar

--
Holger Jakobs, Bergisch Gladbach
+49 178 9759012
- sent from mobile, therefore short -
