Re: Best practices for moving UTF8 databases

From: Albe Laurenz
Subject: Re: Best practices for moving UTF8 databases
Date:
Msg-id: D960CB61B694CF459DCFB4B0128514C203937E4F@exadv11.host.magwien.gv.at
In response to: Re: Best practices for moving UTF8 databases  (Phoenix Kiula <phoenix.kiula@gmail.com>)
List: pgsql-general
Phoenix Kiula wrote:
> > I wonder: why do you spend so much time complaining instead of
> > simply locating the buggy data and fixing them?
>
>
> I'd love to fix them. But if I do a search for
>
> SELECT * FROM xyz WHERE col like '%0x80%'
>
> it doesn't work. How should I search for these characters?

I would get GNU libiconv:
http://www.gnu.org/software/libiconv/

Then take a plain text pg_dump of the database in UTF-8.
Let's call it "db.orig.dmp".
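For example (the database name "mydb" below is just a placeholder):
pg_dump -E UTF8 -f db.orig.dmp mydb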

Strip all problem characters:
iconv -f UTF-8 -t UTF-8 -c <db.orig.dmp >db.stripped.sql

Compare both files:
diff -u db.orig.dmp db.stripped.sql

That output will show all lines containing a problem character.
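With a hypothetical table "xyz" and a stray 0x80 byte, the output might
look roughly like this (the invalid byte is rendered here as <80>;
columns in the COPY data are tab-separated):

--- db.orig.dmp
+++ db.stripped.sql
@@ -105,2 +105,2 @@
 COPY public.xyz (id, col) FROM stdin;
-42	price: <80> 42
+42	price:  42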

Now the tedious part:
Use "db.orig.dmp" to find out which tables they belong to,
locate the records in the database by primary key and fix them.
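If, say, the diff pointed at row 42 of table "xyz" and the 0x80 byte was
meant to be a Euro sign, the fix could be as simple as (all names and
values here are made up):
psql -d mydb -c "UPDATE xyz SET col = 'price: EUR 42' WHERE id = 42;"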

Alternatively, you can use iconv's byte substitution to auto-repair the
dump, if you know which bytes are giving you trouble.
For example, if you know that the trouble stems only from 0x80 bytes
that should be Euro symbols, you could:
iconv -f UTF-8 -t UTF-8 --byte-subst="<0x%x>" <db.orig.dmp | sed -e 's/<0x80>/EUR/g' >db.fixed.sql
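You can try the substitution on a single line first; with bash's printf
generating the stray byte, something like
printf 'price: \x80 42\n' | iconv -f UTF-8 -t UTF-8 --byte-subst="<0x%x>" | sed -e 's/<0x80>/EUR/g'
should print "price: EUR 42".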

The resulting "db.fixed.sql" could then be loaded into the new database.
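For example, if the new database "newdb" was created with UTF8 encoding
(createdb -E UTF8 newdb), loading it would be:
psql -d newdb -f db.fixed.sql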

Yours,
Laurenz Albe
