is there a deep unyielding reason to limit U&'' literals to ASCII?

From: Chapman Flack
Subject: is there a deep unyielding reason to limit U&'' literals to ASCII?
Date:
Msg-id: 56A4529B.4050408@anastigmatix.net
Responses: Re: is there a deep unyielding reason to limit U&'' literals to ASCII?  (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
I see in the documentation (and confirm in practice) that a
Unicode character string literal U&'...' is only allowed to have
<Unicode escape value>s representing Unicode characters if the
server encoding is, exactly and only, UTF8.

Otherwise, it can still have <Unicode escape value>s, but they can only
be in the range \+000001 to \+00007f and can only represent ASCII characters
... and this isn't just for an ASCII server encoding but for _any server
encoding other than UTF8_.
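
For example, here's a sketch of what I see (assuming a LATIN1 server
encoding and standard_conforming_strings on; error wording approximate):

    -- accepted: the escape value is in the ASCII range
    SELECT U&'\0041';     -- 'A'

    -- rejected with something like "Unicode escape values cannot be used
    -- for code point values above 007F when the server encoding is not
    -- UTF8", even though U+00E9 ('é') is representable in LATIN1
    SELECT U&'\00E9';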

I'm a newcomer here, so maybe there was an existing long conversation
where that was determined to be necessary for some deep reason, and I
just need to be pointed to it.

What I would have expected is for <Unicode escape value>s to be allowed
for any Unicode codepoint that's representable in the server encoding,
whatever encoding that is. Indeed, that's how I read the SQL standard
(or my scrounged 2006 draft of it, anyway). The standard even lets
you precede U& with _charsetname and have the escapes be allowed to
be any character representable in the specified charset. *That*, I assume,
would be tough to implement in PostgreSQL, since strings don't walk
around with their own personal charsets attached. But what's the reason
for not being able to mention characters available in the server encoding?
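
To illustrate, under a LATIN1 server encoding I would have expected
something like this to be accepted, since U+00E9 exists in LATIN1
(hypothetical; this is exactly what gets rejected today):

    SELECT U&'caf\00E9';  -- 'café'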

-Chap


