Discussion: RE: [HACKERS] MAX Query length


RE: [HACKERS] MAX Query length

From:
"Ansley, Michael"
Date:
Once I have recompiled with a new block size, how do I update the databases
that I already have?  If I understand right, once the block size has been
updated, my current dbs will not work.  Do I just pg_dump before make
install and then recreate the dbs and load the dumps afterwards?

>> As of 6.5, it's just a matter of adjusting BLCKSZ in 
>> include/config.h,
>> rebuilding, and re-initdb-ing.  The workable sizes are 8k 
>> 16k and 32k;
>> bigger than 32k fails for reasons I don't recall exactly (offsets
>> stored in signed shorts somewhere, no doubt).


MikeA


Re: [HACKERS] MAX Query length

From:
Tom Lane
Date:
"Ansley, Michael" <Michael.Ansley@intec.co.za> writes:
> Once I have recompiled with a new block size, how do I update the databases
> that I already have?  If I understand right, once the block size has been
> updated, my current dbs will not work.  Do I just pg_dump before make
> install and then recreate the dbs and load the dumps afterwards?

Right, the real sequence when you are changing disk layout details is:
	pg_dumpall with old pg_dump and backend
	stop postmaster
	rm -rf installation
	make install
	initdb
	start postmaster
	psql <pgdumpscript
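
Spelled out as concrete commands, that sequence might look roughly like the
following (a sketch only; the install prefix, data directory, and dump file
name are illustrative, not taken from the message):

	pg_dumpall > db.out                     # with the OLD pg_dump, old postmaster still running
	# stop the old postmaster (kill its process)
	rm -rf /usr/local/pgsql                 # blow away the old installation
	make install                            # install the rebuilt binaries
	initdb                                  # recreate the cluster (with PGDATA pointing at the new data dir)
	postmaster -D /usr/local/pgsql/data &   # start the new postmaster
	psql template1 < db.out                 # reload everything from the dump script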
 

You may want to do your development work in a "playpen" installation
instead of risking breaking your "production" installation with these
sorts of shenanigans.  I do that all the time here; for one thing I
don't have to bother saving and restoring any data when I blow away
a playpen installation.

The easiest kind of playpen setup is a separate server machine, but if
you only have one machine available then you do something like this to
build a playpen:
configure --with-pgport=5440 --prefix=/users/postgres/testversion

(Adjust playpen's port and install location to taste; make more than one
if you want...)  BTW, if you are messing with the backend then your
playpen should also be built with --enable-cassert.
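
A sketch of using such a playpen once it's built (the port and paths follow
the example configure line above; adjust to your own setup):

	export PATH=/users/postgres/testversion/bin:$PATH
	export PGDATA=/users/postgres/testversion/data
	initdb                     # create the playpen cluster
	postmaster -i -p 5440 &    # port matches --with-pgport above
	psql -p 5440 template1     # connect to the playpen, not production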

As I commented a moment ago, it's probably not really necessary for you
to change BLCKSZ for your testing, but the above tips are worth
repeating every so often for the benefit of new hackers.
        regards, tom lane


Interesting index/LIKE/join slowness problems

From:
Ole Gjerde
Date:
Hi,
I've posted 3 messages to pgsql-general about a weird index problem I'm
having.  I've found a very simple case that exhibits this problem.
This time I'm using a different database and a different table than in the
first 3 messages (it's the same pg install, however).

The index called mcrl1_partnumber_index is an index on the 'reference'
field.  The table was just vacuumed (with and without analyze).
The pg install is from CVS last night around 7pm Central time.

The problem seems to be rooted in 'OR' combined with 'LIKE'.  If I remove
the % in the string, explain shows the same (high) cost.  If I also remove
the 'LIKE', the cost basically goes to nothing.  The cost is indeed
correct: either of the first 2 cases takes ~5 minutes, while the last one
(no LIKE) finishes instantly.

The weird thing is: why is the cost being calculated as that high when
it's actually using the index on that field, and is there a reason why
explain shows the index name twice?

I ran the exact same query on an MS SQL server with the same data, and
in comparison that took about 2 seconds to finish.
Both Postgres and MS SQL are on Pentium 100 servers (yes, very pathetic),
running Linux 2.2.6 and NT 4.0 respectively.

Thanks,
Ole Gjerde

Here's the SQL: 
---------------------
select * from mcrl1 where reference = 'AN914' OR reference LIKE 'AN914-%';

Here's the explain: 
-----------------
mcrl=> explain select * from mcrl1 where reference = 'AN914' OR reference
LIKE 'AN914-%';
NOTICE:  QUERY PLAN:

Index Scan using mcrl1_reference_index, mcrl1_reference_index on mcrl1
(cost=418431.81 rows=1 width=120)

EXPLAIN

Here's the table layout: 
------------
Table    = mcrl1
+----------------------------------+----------------------------------+-------+
|              Field               |              Type                |Length |
+----------------------------------+----------------------------------+-------+
| reference                        | varchar()                        |    32 |
| cage_num                         | char()                           |     5 |
| fsc                              | char()                           |     4 |
| niin                             | char()                           |     9 |
| isc                              | char()                           |     1 |
| rnvc                             | char()                           |     1 |
| rncc                             | char()                           |     1 |
| sadc                             | char()                           |     1 |
| da                               | char()                           |     1 |
| description                      | varchar()                        |    32 |
+----------------------------------+----------------------------------+-------+
Index:    mcrl1_partnumber_index








Re: [HACKERS] Interesting index/LIKE/join slowness problems

From:
Tom Lane
Date:
Ole Gjerde <gjerde@icebox.org> writes:
> The pg install is from CVS last night around 7pm Central time.

Do you have USE_LOCALE defined?

> The problem seems to be rooted in 'OR' combined with 'LIKE'.  If I remove
> the % in the string, explain shows the same (high) cost.  If I also remove
> the 'LIKE', the cost basically goes to nothing.  The cost is indeed
> correct: either of the first 2 cases takes ~5 minutes, while the last one
> (no LIKE) finishes instantly.

When you have just "where reference = 'AN914'", the system knows it can
use the index to scan just the tuples with keys between AN914 and AN914
(duh).  Very few tuples actually get fetched.

As soon as you use LIKE with a %, more tuples have to be scanned.  It's
particularly bad if you have USE_LOCALE; with the current code, that
basically means that LIKE 'AN914-%' will cause all tuples beginning with
key AN914- and running to the end of the table to be scanned.

See the extensive thread on this topic from about a month or two back
in the pgsql-hackers mail list archives; I don't feel like repeating the
info now.

When you throw in the OR, the indexqual logic basically breaks down
completely; I think you end up scanning the entire table.  (This could
be made smarter, perhaps, but right now I don't believe the system is
able to figure out the union of indexqual conditions.)  I would say it
is an optimizer bug that it is not reverting to sequential scan here
... that would be a good bit faster, I bet.
        regards, tom lane


Re: [HACKERS] Interesting index/LIKE/join slowness problems

From:
Ole Gjerde
Date:
On Thu, 15 Jul 1999, Tom Lane wrote:
> Do you have USE_LOCALE defined?

Nope..  Not unless it defaults to on...  I did a 
./configure --prefix=/home/postgres ; make ; make install as usual

> As soon as you use LIKE with a %, more tuples have to be scanned.  It's
> particularly bad if you have USE_LOCALE; with the current code, that
> basically means that LIKE 'AN914-%' will cause all tuples beginning with
> key AN914- and running to the end of the table to be scanned.

Ok.. I get that..  But why does LIKE 'AN914' have the same problem?  The %
doesn't have to be there as long as it's either a LIKE or ~* (or ~, etc.)
query.  And that still doesn't explain why it happens with USE_LOCALE
off..
Also, since the ='s work using OR, why wouldn't LIKE also?  Both methods
would use the indexes, and the LIKE doesn't take that much longer to run..
Doesn't make sense, especially considering what you mention below..

> See the extensive thread on this topic from about a month or two back
> in the pgsql-hackers mail list archives; I don't feel like repeating the
> info now.

I haven't been able to find a discussion on this topic from the last few
months; I found a discussion about something similar in March, but that
didn't explain it very well..  I'll just have to look some more :)

> When you throw in the OR, the indexqual logic basically breaks down
> completely; I think you end up scanning the entire table.  (This could
> be made smarter, perhaps, but right now I don't believe the system is
> able to figure out the union of indexqual conditions.)  I would say it
> is an optimizer bug that it is not reverting to sequential scan here
> ... that would be a good bit faster, I bet.

Ok.. I can believe that..  This is a pretty nasty problem though.. I don't
believe using OR with LIKE is all that rare..  Maybe it's rare on a
17-million-row table, but still..
What would be the outlook on fixing the problem and not the symptom? :)

As far as a sequential scan being faster goes.. unfortunately, this table
has about 17 million rows, so any kind of seq scan is going to be really slow.

Thanks,
Ole Gjerde





Re: [HACKERS] Interesting index/LIKE/join slowness problems

From:
Tom Lane
Date:
Ole Gjerde <gjerde@icebox.org> writes:
> Ok.. I get that..  But why does LIKE 'AN914' have the same problem?  The %
> doesn't have to be there as long as it's either a LIKE or ~* (or ~, etc.)
> query.

A pure "where field LIKE constant" doesn't have the problem; it's the
OR that does it.  More specifically it's an OR of ANDs that doesn't work
very well.

By the time the parser gets done with it, your query looks like

select * from mcrl1 where
    reference = 'AN914' OR
    (reference LIKE 'AN914-%' AND
     reference >= 'AN914-' AND
     reference <= 'AN914-\377');

(ugly, ain't it?)  Those comparison clauses are what need to be pulled
out and fed to the indexscan mechanism, so that only part of the table
gets scanned, not the whole table.  Indexscan doesn't know anything
about LIKE, but it does grok >= and <=.

Unfortunately the current optimizer doesn't do it right.  I looked into
a very similar bug report from Hiroshi Inoue (see his message of 3/19/99
and my response of 4/3 in the hackers archives), and what I found was
that the cause is a fairly fundamental optimizer design choice.  The
ANDed conditions get split into separate top-level clauses and there's
no easy way to put them back together.  The optimizer ends up passing
only one of them to the indexscan executor.  That's better than nothing,
but on average you still end up scanning half the table rather than
just a small range of it.
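
For illustration only, and not something proposed in this thread: one way to
work around the current behavior is to split the OR into a UNION, so that
each arm can, at least in principle, be planned as its own index scan
(whether the planner actually does so depends on the statistics, so treat
this as a sketch, not a guarantee):

select * from mcrl1 where reference = 'AN914'
union
select * from mcrl1
 where reference >= 'AN914-' and reference <= 'AN914-\377'
   and reference like 'AN914-%';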

> I haven't been able to find a discussion on this topic from the last few
> months; I found a discussion about something similar in March, but that
> didn't explain it very well..  I'll just have to look some more :)

I was referring to the discussion around 4/15/99 about why LIKE needs a
smarter way to generate the upper comparison clause.  That's not
directly your problem, but it is causing the same kind of slowdown for
everyone who does use LOCALE...
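
One idea along those lines (a sketch, not necessarily the exact scheme from
that thread) is to derive the upper bound from the pattern's fixed prefix by
bumping its last character rather than appending '\377'; the tricky part,
which is what that discussion was about, is choosing the "next" character
correctly, especially under locale:

select * from mcrl1
 where reference >= 'AN914-'      -- fixed prefix of the pattern
   and reference <  'AN914.'      -- '.' is the ASCII character right after '-'
   and reference like 'AN914-%';  -- LIKE still applied as the real filter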

>> When you throw in the OR, the indexqual logic basically breaks down
>> completely; I think you end up scanning the entire table.  (This could
>> be made smarter, perhaps, but right now I don't believe the system is
>> able to figure out the union of indexqual conditions.)

I was wrong about that --- the executor *does* handle OR'd indexqual
conditions, basically by performing a new indexscan for each OR'd
condition.  (That's why EXPLAIN is listing the index multiple times.)
The trouble with OR-of-ANDs is entirely the optimizer's fault; the
executor would do them fine if the optimizer would only hand them over
in that form.

> What would be the outlook on fixing the problem and not the symptom? :)

I plan to look into fixing this for 6.6, but don't hold your breath
waiting...
        regards, tom lane


Re: [HACKERS] Interesting index/LIKE/join slowness problems

From:
Ole Gjerde
Date:
On Fri, 16 Jul 1999, Tom Lane wrote:
> I was wrong about that --- the executor *does* handle OR'd indexqual
> conditions, basically by performing a new indexscan for each OR'd
> condition.  (That's why EXPLAIN is listing the index multiple times.)
> The trouble with OR-of-ANDs is entirely the optimizer's fault; the
> executor would do them fine if the optimizer would only hand them over
> in that form.
> > What would be the outlook on fixing the problem and not the symptom? :)
> I plan to look into fixing this for 6.6, but don't hold your breath
> waiting...

Thanks for giving the very detailed explanation!

Since we really need to have this work, or else go with a different
database, we would be willing to pay someone to fix this problem.  Would
anybody be interested in doing this, and if so, how soon and for how much?
It would be preferable for this to be a patch that gets accepted back into
Postgres for 6.6.

Thanks,
Ole Gjerde
Avsupport Inc.




Re: [HACKERS] Interesting index/LIKE/join slowness problems

From:
Tom Lane
Date:
Ole Gjerde <gjerde@icebox.org> writes:
> On Fri, 16 Jul 1999, Tom Lane wrote:
>> The trouble with OR-of-ANDs is entirely the optimizer's fault; the
>> executor would do them fine if the optimizer would only hand them over
>> in that form.

> Since we really need to have this work, or else go with a different
> database, we would be willing to pay someone to fix this problem.  Would
> anybody be interested in doing this, and if so, how soon and for how much?
> It would be preferable for this to be a patch that gets accepted back into
> Postgres for 6.6.

Fixing the general OR-of-ANDs problem is going to be quite ticklish,
I think, because it would be easy to make other cases worse if we're
not careful about how we rewrite the qual condition.

However, I had an idea yesterday about a narrow, localized fix for LIKE
(and the other ops processed by makeIndexable), which I think would meet
your needs if the particular cases you are concerned about are just ORs
of LIKEs and simple comparisons.

It goes like this: while we want LIKE to generate indexable comparisons
if possible, having the parser insert them into the parsetree is a
really crude hack.  The extra clauses are just a waste of cycles under
many scenarios (no index on the field being looked at, LIKE not in the
WHERE clause or buried too deeply to be an indexqual, etc etc).
What's worse, the parser doesn't know for sure that what it's
manipulating really *is* a LIKE --- it's making an unwarranted
assumption on the basis of the operator name, before the actual operator
has been looked up!  So I've wanted to replace that method of optimizing
LIKE since the moment I saw it ;-)

What would be better would be to teach the indexqual extractor in the
optimizer that it can make indexqual conditions from a LIKE operator.
Then, the LIKE just passes through the cnfify() step without getting
rewritten, so we don't have the OR-of-ANDs problem.  Plus we don't pay
any overhead if the LIKE can't be used as an indexqual condition for any
reason.  And by the time the optimizer is acting, we really know whether
we have a LIKE or not, because type resolution and operator lookup have
been done.

I don't know how soon the general OR-of-ANDs problem can be solved,
but I am planning to try to make this LIKE fix for 6.6.  If you want
to send some $$ my way, all the better...
        regards, tom lane


Index not used on simple select

From:
Ole Gjerde
Date:
Hey,
I've been having this bizarre problem with some index on this one table.
The table has in the past had more than 9 indexes, but today I redid the
table and it still has the same problem.
I just did a dump of the schema, COPY'd the data out.  Deleted all
postgres files, and installed 6.5.1.
The table has 3,969,935 rows in it.

Any ideas?

Here are the explain reports after both vacuum and vacuum analyze on the
table:
---------------------------------------------
parts=> explain select * from av_parts where partnumber = '123456';
NOTICE:  QUERY PLAN:

Index Scan using av_parts_partnumber_index on av_parts  (cost=3.55 rows=32
width=124)

EXPLAIN
parts=> explain select * from av_parts where nsn = '123456';
NOTICE:  QUERY PLAN:

Seq Scan on av_parts  (cost=194841.86 rows=3206927 width=124)

EXPLAIN
-------------------------------------------------

This is how I create the 2 indexes:
-------------------------------------------------
CREATE  INDEX "av_parts_partnumber_index" on "av_parts" using btree ( "partnumber" "varchar_ops" );
CREATE  INDEX "av_parts_nsn_index" on "av_parts" using btree ( "nsn" "varchar_ops" );
-------------------------------------------------

Table    = av_parts
+----------------------------------+----------------------------------+-------+
|              Field               |              Type                |Length |
+----------------------------------+----------------------------------+-------+
| itemid                           | int4 not null default nextval (  |     4 |
| vendorid                         | int4                             |     4 |
| partnumber                       | varchar()                        |    25 |
| alternatepartnumber              | varchar()                        |    25 |
| nsn                              | varchar()                        |    15 |
| description                      | varchar()                        |    50 |
| condition                        | varchar()                        |    10 |
| quantity                         | int4                             |     4 |
| rawpartnumber                    | varchar()                        |    25 |
| rawalternatenumber               | varchar()                        |    25 |
| rawnsnnumber                     | varchar()                        |    15 |
| date                             | int4                             |     4 |
| cagecode                         | varchar()                        |    10 |
+----------------------------------+----------------------------------+-------+
Indices:  av_parts_itemid_key
          av_parts_nsn_index
          av_parts_partnumber_index

Thanks,
Ole Gjerde



Re: [HACKERS] Index not used on simple select

From:
Tom Lane
Date:
Ole Gjerde <gjerde@icebox.org> writes:
> parts=> explain select * from av_parts where nsn = '123456';
> Seq Scan on av_parts  (cost=194841.86 rows=3206927 width=124)
> [ why isn't it using the index on nsn? ]

That is darn peculiar.  You're probably managing to trigger some nitty
little bug in the optimizer, but I haven't the foggiest what it might
be.

> Indices:  av_parts_itemid_key
>           av_parts_nsn_index
>           av_parts_partnumber_index

One bit of info you didn't provide is how that third index is defined.

Shipping your 4-million-row database around is obviously out of the
question, but I think a reproducible test case is needed; it's going to
take burrowing into the code with a debugger to find this one.  Can
you make a small test case that behaves the same way?  (One thing
to try right away is loading the same table and index definitions into
an empty database, but *not* loading any data and not doing vacuum.
If that setup doesn't show the bug, try adding a couple thousand
representative rows from your real data, vacuum analyzing, and then
seeing if it happens.)
        regards, tom lane
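
A sketch of the kind of cut-down test case being asked for (the test table
name, the column subset, and the sample-file path are illustrative, not from
the messages above):

create table av_parts_test (
	itemid		int4,
	partnumber	varchar(25),
	nsn		varchar(15),
	description	varchar(50)
);
create index av_parts_test_nsn_index on av_parts_test
	using btree ( "nsn" "varchar_ops" );

-- First try the query against the empty table; if that doesn't show the
-- problem, load a couple thousand representative rows and re-test:
-- copy av_parts_test from '/tmp/av_parts_sample';
vacuum analyze av_parts_test;
explain select * from av_parts_test where nsn = '123456';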