Discussion: Fwd: postgresql performance question

Fwd: postgresql performance question

From: 許耀彰
Date:
Dear Support Team, 
I created a table with the command listed below:
CREATE TABLE public.log2
(
    d3 text COLLATE pg_catalog."default"
)
WITH (
    OIDS = FALSE
)
TABLESPACE pg_default;

I then used the command listed below to import data into the log2 table:

COPY log2 FROM '/home/anderson/0107.csv' CSV HEADER;

My purpose is to import log information into the log2 table for analysis, but I ran into a problem: for example, I have 166856 records in the log2 table, and when I use a select command to list the data, it takes a long time. Can this be improved?
select * from log2 
Thank you for your kind assistance.
Additional information: the attachment shows the log format.
Best Regards, Anderson Hsu


Attachments

Re: Fwd: postgresql performance question

From: Pavan Teja
Date:
Hi,

You can filter using an error_severity condition in the WHERE clause, like:
Select * from log2 where error_severity = 'error';
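
If the filtered column exists, an index on it usually lets such a query avoid
a full sequential scan. A minimal sketch, assuming log2 actually has an
error_severity column (the CREATE TABLE above defines only a single d3 text
column, so the column name here is an assumption):

    -- Hypothetical: assumes log2 has an error_severity column.
    CREATE INDEX IF NOT EXISTS log2_error_severity_idx
        ON log2 (error_severity);

    -- With the index in place, a selective filter can use an index scan
    -- instead of reading the whole table:
    SELECT * FROM log2 WHERE error_severity = 'error';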


Regards,
Pavan

On Feb 11, 2018 9:31 PM, "許耀彰" <kpm906@gmail.com> wrote:
Dear Support Team, 
I created a table with the command listed below:
CREATE TABLE public.log2
(
    d3 text COLLATE pg_catalog."default"
)
WITH (
    OIDS = FALSE
)
TABLESPACE pg_default;

I then used the command listed below to import data into the log2 table:

COPY log2 FROM '/home/anderson/0107.csv' CSV HEADER;

My purpose is to import log information into the log2 table for analysis, but I ran into a problem: for example, I have 166856 records in the log2 table, and when I use a select command to list the data, it takes a long time. Can this be improved?
select * from log2 
Thank you for your kind assistance.
Additional information: the attachment shows the log format.
Best Regards, Anderson Hsu


Re: Fwd: postgresql performance question

From: Tomas Vondra
Date:

On 02/11/2018 04:15 PM, 許耀彰 wrote:
> Dear Support Team,
> I created a table with the command listed below:
> CREATE TABLE public.log2
> (
>     d3 text COLLATE pg_catalog."default"
> )
> WITH (
>     OIDS = FALSE
> )
> TABLESPACE pg_default;
> 
> I then used the command listed below to import data into the log2 table:
> 
> COPY log2 FROM '/home/anderson/0107.csv' CSV HEADER;
> 
> My purpose is to import log information into the log2 table for
> analysis, but I ran into a problem: for example, I have 166856 records
> in the log2 table, and when I use a select command to list the data,
> it takes a long time. Can this be improved?
> select * from log2
> Thank you for your kind assistance.

Sorry for being annoying, but this mailing list is for bug reports, and
your post is clearly not one. Please send it to pgsql-performance; more
people are watching that list and you're more likely to get help.

Furthermore, I strongly recommend reading this:

    https://wiki.postgresql.org/wiki/Slow_Query_Questions

It may actually have the answer to your question, and if it does not, it
lists the things you need to include in your post. For example, the query
plan or information about the hardware/system would be very helpful.
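
As a minimal sketch of what to capture, EXPLAIN (ANALYZE, BUFFERS) runs the
query and reports the chosen plan together with actual row counts, timing,
and buffer I/O:

    -- Note: ANALYZE actually executes the query while measuring it.
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM log2;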

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services