fulltext search and hunspell

From: Jens Sauer
Subject: fulltext search and hunspell
Date:
Msg-id: AANLkTik4UhMyDM7s7ovWhFWWDwsQ6fN67y8s=6H3xF9H@mail.gmail.com
Responses: Re: fulltext search and hunspell  (Oleg Bartunov <oleg@sai.msu.su>)
List: pgsql-general
Hey,

I want to use hunspell as a dictionary for full-text search by

* using PostgreSQL 8.4.7
* installing hunspell-de-de, hunspell-de-med
* creating a dictionary:

CREATE TEXT SEARCH DICTIONARY german_hunspell (
    TEMPLATE = ispell,
    DictFile = de_de,
    AffFile = de_de,
    StopWords = german
);
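
(Just for reference, a quick way to double-check that the dictionary was created with the intended options; a minimal sketch against the pg_ts_dict catalog, equivalent to \dFd+ german_hunspell in psql:)

-- Sanity check: confirm the dictionary exists and show the
-- DictFile/AffFile/StopWords options it was created with.
SELECT dictname, dictinitoption
FROM pg_ts_dict
WHERE dictname = 'german_hunspell';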

* changing the config

ALTER TEXT SEARCH CONFIGURATION german
    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
                      word, hword, hword_part
    WITH german_hunspell, german_stem;
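
(To verify the mapping, something like the following should show which dictionary in the 'german' configuration ends up handling a given token; a rough sketch, using the correctly spelled 'Schokoladenfabrik' only as an example word:)

-- For each token, show the list of dictionaries from the mapping and
-- the lexemes produced by the first dictionary that accepts the token.
SELECT alias, token, dictionaries, lexemes
FROM ts_debug('german', 'Schokoladenfabrik');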

* now testing the lexizer:

SELECT ts_lexize('german_hunspell', 'Schokaladenfarik');
 ts_lexize
-----------

(1 Zeile)

Shouldn't it be something like this:
SELECT ts_lexize('norwegian_ispell', 'sjokoladefabrikk');
   {sjokoladefabrikk,sjokolade,fabrikk}
(from the PostgreSQL 8.4 documentation)
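
As an additional data point, comparing against the stock Snowball stemmer might help separate "the hunspell files are not being loaded" from "the word is simply not in the dictionary". A quick, untested sketch, again with the fully spelled-out word:

-- german_stem (Snowball) recognizes any non-stopword and returns a stemmed
-- lexeme, so only the hunspell dictionary should ever come back empty here.
SELECT ts_lexize('german_stem', 'Schokoladenfabrik');
SELECT ts_lexize('german_hunspell', 'Schokoladenfabrik');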


The dict and affix files in the tsearch_data directory were
automatically generated by pg_updatedicts.

Is this a problem with the compound-word splitting functionality? Should
I use ispell instead of hunspell?

Thanks
