Discussion: Problem with 11 M records table


Problem with 11 M records table

From: idc danny
Date:
Hi everybody,

I'm fairly new to PostgreSQL and I have a problem with
a query:

SELECT * FROM "LockerEvents" LIMIT 10000 OFFSET 10990000

The table LockerEvents has 11 million records in it
and this query takes about 60 seconds to complete.
Moreover, even after creating an index on every column
in the table, EXPLAIN still shows a sequential scan
instead of an index scan.

The EXPLAIN is:
"Limit  (cost=100245579.54..100245803.00 rows=10000
width=60) (actual time=58414.753..58482.661 rows=10000
loops=1)"
"  ->  Seq Scan on "LockerEvents"
(cost=100000000.00..100245803.00 rows=11000000
width=60) (actual time=12.620..45463.222 rows=11000000
loops=1)"
"Total runtime: 58493.648 ms"

The table is:

CREATE TABLE "LockerEvents"
(
  "ID" serial NOT NULL,
  "IDMoneySymbol" integer NOT NULL,
  "IDLocker" integer NOT NULL,
  "IDUser" integer NOT NULL,
  "IDEventType" integer NOT NULL,
  "TimeBegin" timestamp(0) without time zone NOT NULL,
  "Notes" character varying(200),
  "Income" double precision NOT NULL DEFAULT 0,
  "IncomeWithRate" double precision NOT NULL DEFAULT 0,
  CONSTRAINT pk_lockerevents_id PRIMARY KEY ("ID"),
  CONSTRAINT fk_lockerevents_ideventtype_eventtypes_id FOREIGN KEY ("IDEventType")
      REFERENCES "EventTypes" ("ID") MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT fk_lockerevents_idlocker_lockers_id FOREIGN KEY ("IDLocker")
      REFERENCES "Lockers" ("ID") MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT fk_lockerevents_idmoneysymbol_moneysymbols_id FOREIGN KEY ("IDMoneySymbol")
      REFERENCES "MoneySymbols" ("ID") MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT fk_lockerevents_iduser_users_id FOREIGN KEY ("IDUser")
      REFERENCES "Users" ("ID") MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH (OIDS=FALSE);


CREATE INDEX idx_col_lockerevents_income
  ON "LockerEvents"
  USING btree
  ("Income");

CREATE INDEX idx_col_lockerevents_incomewithrate
  ON "LockerEvents"
  USING btree
  ("IncomeWithRate");

CREATE INDEX idx_col_lockerevents_notes
  ON "LockerEvents"
  USING btree
  ("Notes");

CREATE INDEX idx_col_lockerevents_timebegin
  ON "LockerEvents"
  USING btree
  ("TimeBegin");

CREATE INDEX idx_fk_lockerevents_ideventtype_eventtypes_id
  ON "LockerEvents"
  USING btree
  ("IDEventType");

CREATE INDEX idx_fk_lockerevents_idlocker_lockers_id
  ON "LockerEvents"
  USING btree
  ("IDLocker");

CREATE INDEX idx_fk_lockerevents_idmoneysymbol_moneysymbols_id
  ON "LockerEvents"
  USING btree
  ("IDMoneySymbol");

CREATE INDEX idx_fk_lockerevents_iduser_users_id
  ON "LockerEvents"
  USING btree
  ("IDUser");

CREATE UNIQUE INDEX idx_pk_lockerevents_id
  ON "LockerEvents"
  USING btree
  ("ID");


If I run the query:
SELECT * FROM "LockerEvents" LIMIT 10000 OFFSET 0
it completes in under a second - I believe this is
because the sequential scan starts from the beginning
of the table.

I need the query to complete in under 10 seconds and I
do not know how to achieve that.
Please help me!

Thank you,
Danny




Re: Problem with 11 M records table

From: Bill Moran
Date:
In response to idc danny <idcdanny@yahoo.com>:

> Hi everybody,
>
> I'm fairly new to PostgreSQL and I have a problem with
> a query:
>
> SELECT * FROM "LockerEvents" LIMIT 10000 OFFSET
> 10990000

This query makes no sense, and I can't blame PostgreSQL for using a
seq scan, since you've given it no reason to do otherwise.  Without an
ORDER BY, this query returns an essentially random sampling of rows,
and the LIMIT/OFFSET are simply junk that confuses the query planner.
If a random sampling really is what you want, you should construct
your query to say so explicitly.

I suspect that you don't really want a random sampling of rows, although
I can't imagine what you think you're going to get from that query.
Have you tried putting an ORDER BY clause in?
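
For example (a sketch: ordering by the indexed primary key at least makes
the result deterministic, though a large OFFSET still has to read and
discard all the preceding rows):

  SELECT * FROM "LockerEvents" ORDER BY "ID" LIMIT 10000 OFFSET 10990000;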

> [rest of original message snipped]


--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

wmoran@collaborativefusion.com
Phone: 412-422-3463x4023


Re: Problem with 11 M records table

From: salman
Date:

idc danny wrote:
> Hi everybody,
>
> I'm fairly new to PostgreSQL and I have a problem with
> a query:
>
> SELECT * FROM "LockerEvents" LIMIT 10000 OFFSET
> 10990000
>
> The table LockerEvents has 11 million records in it
> and this query takes about 60 seconds to complete.
> Moreover, even after creating an index on every column
> in the table, EXPLAIN still shows a sequential scan
> instead of an index scan.
>
> [rest of original message snipped]

I recall it being mentioned on one of these lists that with OFFSET, all
the rows in between still have to be read. So, you may get better
results if you replace the OFFSET with a WHERE clause on the indexed
id, e.g. WHERE "ID" > 10990000.
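
A sketch of that keyset approach (assuming "ID" values are ascending and
reasonably dense, which a serial column only approximates after deletes):

  SELECT * FROM "LockerEvents"
  WHERE "ID" > 10990000
  ORDER BY "ID"
  LIMIT 10000;

This can use the unique index on "ID" and touches only the rows it returns.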

-salman


Re: Problem with 11 M records table

From: Craig James
Date:
idc danny wrote:
> Hi everybody,
>
> I'm fairly new to PostgreSQL and I have a problem with
> a query:
>
> SELECT * FROM "LockerEvents" LIMIT 10000 OFFSET
> 10990000
>
> The table LockerEvents has 11 million records in it
> and this query takes about 60 seconds to complete.

The OFFSET clause is almost always inefficient for anything but very small tables or small offsets.  In order for a
relational database (not just Postgres) to figure out which row is the 11000000th row, it has to actually retrieve
the first 10999999 rows and discard them.  There is no magical way to go directly to the 11-millionth row.  Even on a
trivial query such as yours with no WHERE clause, the only way to determine which row is the 11-millionth is to scan
the previous 10999999.

There are better (faster) ways to achieve this, but it depends on why you are doing this query.  That is, do you just
want this one block of data, or are you scanning the whole database in 10,000-row blocks?

Craig

Re: Problem with 11 M records table

From: Craig James
Date:
idc danny wrote:
> Hi James,
>
> Thank you for your response.
>
> What I want to achieve is to give the application
> user 10k rows where the records are one after another
> in the table, and the application has a paginating GUI
> ("First page", "Previous page", "Next page", "Last
> page" - all links & "Jump to page" combobox) where
> this particular query gets run if the user clicks
> on the "Last page" link.
> The application receives the first 10k rows in under a
> second when the user clicks on the "First page" link
> and receives the last 10k rows in about 60 seconds
> when he clicks on the "Last page" link.

You need a sequence that automatically assigns an ascending "my_rownum" to each row as it is added to the table, and
an index on that my_rownum column.  Then you select your page by (for example)

  select * from my_table where my_rownum >= 100 and my_rownum < 110;

That will do what you want, with instant performance that is uniform across your whole table.

If your table will have deletions, then you have to update the row numbering a lot, which will cause you terrible
performance problems due to the nature of the UPDATE operation in Postgres.  If this is the case, then you should keep
a separate table just for numbering the rows, which is joined to your main table when you want to retrieve a "page" of
data.  When you delete data (which should be batched, since this will be expensive), then you truncate your rownum
table, reset the sequence that generates your row numbers, then regenerate your row numbers with something like
"insert into my_rownum_table (select id, nextval('my_rownum_seq') from my_big_table)".  To retrieve a page, just do
"select ... from my_table join my_rownum_table on (...)", which will be really fast since you'll have indexes on both
tables.

Note that this method requires that you have a primary key, or at least a unique column, on your main table, so that
you have something to join with your row-number table.
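
A sketch of that side table (all names here are illustrative; the subselect
forces the numbering to follow id order):

  CREATE SEQUENCE my_rownum_seq;
  CREATE TABLE my_rownum_table (id integer PRIMARY KEY, rownum bigint NOT NULL);
  CREATE UNIQUE INDEX idx_my_rownum ON my_rownum_table (rownum);

  -- rebuild the numbering after a batch of deletes
  TRUNCATE my_rownum_table;
  ALTER SEQUENCE my_rownum_seq RESTART WITH 1;
  INSERT INTO my_rownum_table
    SELECT id, nextval('my_rownum_seq')
    FROM (SELECT id FROM my_big_table ORDER BY id) AS ordered;

  -- fetch one 10,000-row "page"
  SELECT t.*
  FROM my_big_table t
  JOIN my_rownum_table r ON r.id = t.id
  WHERE r.rownum > 10990000 AND r.rownum <= 11000000;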

Craig

Re: Problem with 11 M records table

From: "Scott Marlowe"
Date:
On Tue, May 13, 2008 at 10:57 AM, idc danny <idcdanny@yahoo.com> wrote:
> Hi everybody,
>
> I'm fairly new to PostgreSQL and I have a problem with
> a query:
>
> SELECT * FROM "LockerEvents" LIMIT 10000 OFFSET
> 10990000
>
> The table LockerEvents has 11 million records in it
> and this query takes about 60 seconds to complete.
> Moreover, even after creating an index on every column
> in the table, EXPLAIN still shows a sequential scan
> instead of an index scan.

Yep.  The way OFFSET/LIMIT works is that it first materializes the data
needed for OFFSET+LIMIT rows, then throws away the first OFFSET rows'
worth.  So, it has to do a lot of retrieving.

Better off to use something like:

select * from table where indexfield between
10000000 and 10001000 order by indexfield;

which can use an index on indexfield, as long as the amount of data is
small enough, etc...

can I move sort to first outer join ?

From: fernando castano
Date:
Hi all,

This sql is taking too long for the size of my tiny db.  Any tips from
this alias?  I tried moving the sort to the first left outer join
(between the projects and features tables) using a nested subquery, but
postgres tells me only one column can be returned from a subquery.

TIA,

fdo

SELECT projects."id" AS t0_r0, projects."name" AS t0_r1, projects."display_name"
 AS t0_r2, projects."description" AS t0_r3, projects."community_id" AS t0_r4, projects."parent_id" AS t0_r5,
projects."visible" AS t0_r6, projects."created_at" AS t0_r7, projects."updated_at" AS t0_r8, projects."image_path"
 AS t0_r9, projects."with_navigation" AS t0_r10, projects."static_home" AS t0_r11, projects."active" AS t0_r12,
projects."image_id" AS t0_r13, projects."request_message" AS t0_r14, projects."response_message" AS t0_r15,
projects."approval_status" AS t0_r16, projects."approved_by_id" AS t0_r17, projects."owner_id" AS t0_r18,
 project_tags."id" AS t1_r0, project_tags."project_id" AS t1_r1, project_tags."name" AS t1_r2,
project_tags."created_at" AS t1_r3, project_tags."updated_at" AS t1_r4, person_roles."id" AS t2_r0,
person_roles."project_id" AS t2_r1, person_roles."person_id" AS t2_r2, person_roles."role_id" AS t2_r3,
person_roles."authorized" AS t2_r4, person_roles."created_at" AS t2_r5, person_roles."updated_at" AS t2_r6,
person_roles."request_message" AS t2_r7, person_roles."response_message" AS t2_r8, features."id" AS t3_r0,
features."project_id" AS t3_r1, features."name" AS t3_r2, features."display_name" AS t3_r3,
 features."feature_uri" AS t3_r4, features."provisioned" AS t3_r5, features."service_name" AS t3_r6,
 features."created_at" AS t3_r7, features."updated_at" AS t3_r8, features."active" AS t3_r9,
features."description" AS t3_r10, features."type" AS t3_r11, features."forum_topic_count" AS t3_r12,
 features."forum_post_count" AS t3_r13, features."forum_last_post_at" AS t3_r14,
features."forum_last_post_by_id" AS t3_r15, features."wiki_default_page_id" AS t3_r16,
features."wiki_default_page_name" AS t3_r17, features."wiki_format" AS t3_r18,
 features."service_id" AS t3_r19, features."service_type_id" AS t3_r20 FROM projects
 LEFT OUTER JOIN project_tags ON project_tags.project_id = projects.id
LEFT OUTER JOIN person_roles ON person_roles.project_id = projects.id
LEFT OUTER JOIN features ON features.project_id = projects.id
WHERE (projects."visible" = 't') AND projects.id IN (3, 4, 5, 6, 10, 7, 8, 9, 13, 11)
ORDER BY projects.name asc;

                                                              QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=57.17..57.20 rows=12 width=4925) (actual time=147.880..148.325 rows=846 loops=1)
   Sort Key: projects.name
   ->  Hash Left Join  (cost=45.53..56.95 rows=12 width=4925) (actual time=1.374..6.694 rows=846 loops=1)
         Hash Cond: (projects.id = project_tags.project_id)
         ->  Hash Left Join  (cost=22.48..33.48 rows=4 width=4819) (actual time=1.243..3.018 rows=222 loops=1)
               Hash Cond: (projects.id = person_roles.project_id)
               ->  Hash Left Join  (cost=10.90..21.86 rows=4 width=3754) (actual time=1.121..1.702 rows=78 loops=1)
                     Hash Cond: (projects.id = features.project_id)
                     ->  Seq Scan on projects  (cost=0.00..10.90 rows=4 width=1884) (actual time=0.039..0.109 rows=10 loops=1)
                           Filter: (visible AND (id = ANY ('{3,4,5,6,10,7,8,9,13,11}'::integer[])))
                     ->  Hash  (cost=10.40..10.40 rows=40 width=1870) (actual time=1.048..1.048 rows=101 loops=1)
                           ->  Seq Scan on features  (cost=0.00..10.40 rows=40 width=1870) (actual time=0.026..0.464 rows=101 loops=1)
               ->  Hash  (cost=10.70..10.70 rows=70 width=1065) (actual time=0.098..0.098 rows=29 loops=1)
                     ->  Seq Scan on person_roles  (cost=0.00..10.70 rows=70 width=1065) (actual time=0.014..0.037 rows=29 loops=1)
         ->  Hash  (cost=15.80..15.80 rows=580 width=106) (actual time=0.105..0.105 rows=32 loops=1)
               ->  Seq Scan on project_tags  (cost=0.00..15.80 rows=580 width=106) (actual time=0.013..0.036 rows=32 loops=1)
 Total runtime: 149.622 ms
(17 rows)




Re: Problem with 11 M records table

From: "Ramasubramanian G"
Date:
Hi,

Set this parameter in postgresql.conf: enable_seqscan = off
And try:
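
(A session-scoped sketch; turning enable_seqscan off in postgresql.conf
affects every query on the server, so testing it per session is safer:)

  SET enable_seqscan = off;
  EXPLAIN ANALYZE SELECT * FROM "LockerEvents" LIMIT 10000 OFFSET 10990000;
  RESET enable_seqscan;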

-----Original Message-----
From: pgsql-performance-owner@postgresql.org
[mailto:pgsql-performance-owner@postgresql.org] On Behalf Of Scott
Marlowe
Sent: Tuesday, May 13, 2008 11:32 PM
To: idc danny
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Problem with 11 M records table

[forwarded message body snipped - it repeats Scott Marlowe's reply above]

Re: can I move sort to first outer join ?

From: PFC
Date:
On Wed, 14 May 2008 06:40:40 +0200, fernando castano
<Fernando.Castano@Sun.COM> wrote:

>
> Hi all,
>
> This sql is taking too long for the size of my tiny db.  Any tips from
> this alias?  I tried moving the sort to the first left outer join
> (between the projects and features tables) using a nested subquery, but
> postgres tells me only one column can be returned from a subquery.

    Instead of :

    SELECT * FROM a LEFT JOIN b LEFT JOIN c WHERE c.column=... ORDER BY c.x
LIMIT N

    You could write :

    SELECT * FROM a LEFT JOIN b LEFT JOIN (SELECT * FROM c WHERE c.column=...
ORDER BY c.x LIMIT N) AS cc ORDER BY cc.x LIMIT N

    This is only interesting if you use a LIMIT, since it allows you to reduce
the number of rows sorted/joined.

    However in your case this is not the right thing to do, since you do not
use LIMIT, and sorting your 846 rows will only take a very small time.
Your problem is those seq scans; you need to optimize that query so it
can use indexes.

>                      ->  Seq Scan on projects  (cost=0.00..10.90 rows=4
> width=1884) (actual time=0.039..0.109 rows=10 loops=1)
>                            Filter: (visible AND (id = ANY
> ('{3,4,5,6,10,7,8,9,13,11}'::integer[])))
>                      ->  Hash  (cost=10.40..10.40 rows=40 width=1870)
> (actual time=1.048..1.048 rows=101 loops=1)
>                            ->  Seq Scan on features  (cost=0.00..10.40
> rows=40 width=1870) (actual time=0.026..0.464 rows=101 loops=1)
>                ->  Hash  (cost=10.70..10.70 rows=70 width=1065) (actual
> time=0.098..0.098 rows=29 loops=1)
>                      ->  Seq Scan on person_roles  (cost=0.00..10.70
> rows=70 width=1065) (actual time=0.014..0.037 rows=29 loops=1)
>          ->  Hash  (cost=15.80..15.80 rows=580 width=106) (actual
> time=0.105..0.105 rows=32 loops=1)
>                ->  Seq Scan on project_tags  (cost=0.00..15.80 rows=580
> width=106) (actual time=0.013..0.036 rows=32 loops=1)
>  Total runtime: 149.622 ms

    All those seq scans !!!

    Please post, for each of those tables :

    - The total number of rows (SELECT count(*) is fine)
    - The table definitions with indexes (\d table)

    EXPLAIN ANALYZE tells you the number of rows it picked out of a seq scan
(that's the "rows=") but not the number of rows scanned... this is
important, because a seq scan on a small table isn't a problem, but on a
big one, it is.

Define all IP's in the world in pg_hba.conf

From: idc danny
Date:
Hi everybody,

I know that this group deals with performance, but it is the only one I'm subscribed to, so my apologies in advance
for the question.

I want to allow everybody in the world, all IPs, to connect to my server. How do I accomplish that? Definitely, it's
not a good solution to enter them all manually in pg_hba.conf :).

Currently, if the above cannot be answered, I want to allow the IPs of the Hamachi network, which are all
of the form 5.*.*.* - but in the future it may expand to all IPs.

Thank you,
Danny







Re: Define all IP's in the world in pg_hba.conf

From: Russell Smith
Date:
idc danny wrote:
> Hi everybody,
>
> I know that this group deals with performance, but it is the only one I'm subscribed to, so my apologies in advance
> for the question.
>
> I want to allow everybody in the world, all IPs, to connect to my server. How do I accomplish that? Definitely, it's
> not a good solution to enter them all manually in pg_hba.conf :).
>
what's wrong with 0.0.0.0/0 ?
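
For example, in pg_hba.conf (a sketch; the md5 auth method is an assumption,
pick whatever authentication you actually use):

  # allow any IPv4 address, password-authenticated
  host    all    all    0.0.0.0/0    md5
  # or just the Hamachi 5.*.*.* range
  host    all    all    5.0.0.0/8    md5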
> Currently, if the above cannot be answered, I want to allow the IPs of the Hamachi network, which are all
> of the form 5.*.*.* - but in the future it may expand to all IPs.
>
> Thank you,
> Danny
>