Thread: Questionable description about character sets


Questionable description about character sets

From
Tatsuo Ishii
Date:
"23.3.1. Supported Character Sets
Table 23.3 shows the character sets available for use in PostgreSQL."

https://www.postgresql.org/docs/current/multibyte.html#MULTIBYTE-CHARSET-SUPPORTED

But the table actually shows encodings (more precisely, "character
encoding scheme") (BIG5...EUC_JP... UTF8). I think we need one more
column for "character sets" (more precisely, "coded character sets").

Encoding   Character set        ...
BIG5       Big5-2003
:
EUC_JP     ASCII, JIS X 0208, JIS X 0212, JIS X 0201
:
UTF8       Unicode      
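As a quick illustration of the encoding-vs-character-set distinction (an editorial sketch, not part of the original mail), any independent implementation of these codecs shows EUC_JP carrying characters from more than one coded character set; Python's standard codecs are used here purely as a convenient reference:

```python
# Illustration: EUC_JP is a character encoding scheme that multiplexes
# several coded character sets.  Python's codecs are used here only as
# a convenient reference implementation.

# Code set 1: JIS X 0208 (two bytes, both in 0xA1-0xFE)
assert b'\xa4\xa2'.decode('euc_jp') == '\u3042'    # HIRAGANA LETTER A

# Code set 2: JIS X 0201 kana (SS2 = 0x8E, then one byte in 0xA1-0xDF)
assert b'\x8e\xb1'.decode('euc_jp') == '\uff71'    # HALFWIDTH KATAKANA A

# Code set 3: JIS X 0212 characters appear as three-byte sequences
# introduced by SS3 = 0x8F -- the supplementary set whose documentation
# is being discussed in this thread.
```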

Best regards,
--
Tatsuo Ishii
SRA OSS K.K.
English: http://www.sraoss.co.jp/index_en/
Japanese: http://www.sraoss.co.jp



Re: Questionable description about character sets

From
Andreas Karlsson
Date:
On 2/11/26 10:58 AM, Tatsuo Ishii wrote:
> "23.3.1. Supported Character Sets
> Table 23.3 shows the character sets available for use in PostgreSQL."
> 
> https://www.postgresql.org/docs/current/multibyte.html#MULTIBYTE-CHARSET-SUPPORTED
> 
> But the table actually shows encodings (more precisely, "character
> encoding scheme") (BIG5...EUC_JP... UTF8). I think we need one more
> column for "character sets" (more precisely, "coded character sets").
> 
> Encoding   Character set        ...
> BIG5       Big5-2003
> :
> EUC_JP     ASCII, JIS X 0208, JIS X 0212, JIS X 0201
> :
> UTF8       Unicode    

Wouldn't that make the table very wide? And for, e.g., European character
encodings I am not sure it is that useful, since most or maybe even all
of them are subsets of Unicode; it mostly gets interesting for encodings
which support characters not in Unicode, right?

Andreas




Re: Questionable description about character sets

From
Tatsuo Ishii
Date:
> Wouldn't that make the table very wide?

I don't think it would make the table very wide, just a little bit
wider. Still, I think adding the character set information to the
"Description" column is better. Some of the encodings already have the
info. See attached patch.

> And for e.g. European
> character encodings I am not sure it is that useful since most or
> maybe even all of them are subsets of unicode, it mostly gets
> interesting for encodings which support characters not in unicode,
> right?

Choosing UTF8 or not is just one of the use cases.

I am thinking about the use case in which a user wants to continue to
use another encoding (e.g. wants to avoid conversion to UTF8).
Example: suppose the user has a legacy system in which EUC_JP is
used. The data in the system includes JIS X 0201, JIS X 0208 and JIS X
0212, and he wants to make sure that PostgreSQL supports all those
character sets in EUC_JP, because some tools do not support JIS X
0212. Only JIS X 0212 and JIS X 0208 are supported. Currently the info
(whether JIS X 0212 is supported or not) does not exist anywhere in
our docs; it is only in the source code. I think it is better to have
the info in our docs so that the user does not need to look into the
source code.

Best regards,
--
Tatsuo Ishii
SRA OSS K.K.
English: http://www.sraoss.co.jp/index_en/
Japanese: http://www.sraoss.co.jp

From 98c97f670ce647003ce467a84f81cec0cb463c18 Mon Sep 17 00:00:00 2001
From: Tatsuo Ishii <ishii@postgresql.org>
Date: Sat, 14 Feb 2026 16:26:01 +0900
Subject: [PATCH v1] doc: Enhance "PostgreSQL Character Sets" table.

Previously, some encodings lacked a description of the coded character
sets used in the encoding. For most European encodings this is
obvious, because there is only one, or a few, character sets per
encoding, but that is not true for some Asian encodings. For example,
the EUC_JP encoding corresponds to multiple character sets: namely,
JIS X 0201, JIS X 0208 and JIS X 0212. This commit adds the
information to the "Description" column.

Discussion: https://postgr.es/m/20260211.185847.1679085676298121526.ishii%40postgresql.org
---
 doc/src/sgml/charset.sgml | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/doc/src/sgml/charset.sgml b/doc/src/sgml/charset.sgml
index 3aabc798012..32c6280489b 100644
--- a/doc/src/sgml/charset.sgml
+++ b/doc/src/sgml/charset.sgml
@@ -1831,7 +1831,7 @@ ORDER BY c COLLATE ebcdic;
         </row>
         <row>
          <entry><literal>EUC_CN</literal></entry>
-         <entry>Extended UNIX Code-CN</entry>
+         <entry>Extended UNIX Code-CN, GB 2312</entry>
          <entry>Simplified Chinese</entry>
          <entry>Yes</entry>
          <entry>Yes</entry>
@@ -1840,7 +1840,7 @@ ORDER BY c COLLATE ebcdic;
         </row>
         <row>
          <entry><literal>EUC_JP</literal></entry>
-         <entry>Extended UNIX Code-JP</entry>
+         <entry>Extended UNIX Code-JP, JIS X 0201, JIS X 0208, JIS X 0212</entry>
          <entry>Japanese</entry>
          <entry>Yes</entry>
          <entry>Yes</entry>
@@ -1849,7 +1849,7 @@ ORDER BY c COLLATE ebcdic;
         </row>
         <row>
          <entry><literal>EUC_JIS_2004</literal></entry>
-         <entry>Extended UNIX Code-JP, JIS X 0213</entry>
+         <entry>Extended UNIX Code-JP, JIS X 0201, JIS X 0213</entry>
          <entry>Japanese</entry>
          <entry>Yes</entry>
          <entry>No</entry>
@@ -1858,7 +1858,7 @@ ORDER BY c COLLATE ebcdic;
         </row>
         <row>
          <entry><literal>EUC_KR</literal></entry>
-         <entry>Extended UNIX Code-KR</entry>
+         <entry>Extended UNIX Code-KR, KS X 1001</entry>
          <entry>Korean</entry>
          <entry>Yes</entry>
          <entry>Yes</entry>
@@ -1867,7 +1867,7 @@ ORDER BY c COLLATE ebcdic;
         </row>
         <row>
          <entry><literal>EUC_TW</literal></entry>
-         <entry>Extended UNIX Code-TW</entry>
+         <entry>Extended UNIX Code-TW, CNS 11643</entry>
          <entry>Traditional Chinese, Taiwanese</entry>
          <entry>Yes</entry>
          <entry>Yes</entry>
@@ -2056,7 +2056,7 @@ ORDER BY c COLLATE ebcdic;
         </row>
         <row>
          <entry><literal>SJIS</literal></entry>
-         <entry>Shift JIS</entry>
+         <entry>Shift JIS, JIS X 0201, JIS X 0208</entry>
          <entry>Japanese</entry>
          <entry>No</entry>
          <entry>No</entry>
-- 
2.43.0


Re: Questionable description about character sets

From
Thomas Munro
Date:
On Sat, Feb 14, 2026 at 11:20 PM Tatsuo Ishii <ishii@postgresql.org> wrote:
> > Wouldn't that make the table very wide?
>
> I don't think it would make the table very wide but a little bit
> wider. So I think adding the character sets information to
> "Description" column is better. Some of encodings already have the
> info. See attached patch.

When I point my browser at
file:///home/tmunro/projects/postgresql/build/doc/src/sgml/html/multibyte.html
I see these longer descriptions flowing onto multiple lines making the
table cells higher, while the published documentation[1] does only a
small amount of that, and then the font instead becomes smaller as I
make the window narrower.  Is there an easy way to see the final
website form in a local build?

We'd have more free space in the affected rows if we did s/Extended
UNIX Code-JP/EUC-JP/.  Why is that acronym expanded, while ISO, ECMA,
JIS and CP are not?

It might be confusing that the style "ISO 8859-1, ECMA 94" is used to
list alternative encoding standards that are aligned or equivalent,
while here you're listing the encoding and then the underlying
character sets in the same way.  Would it be better to put them in
parentheses?

With those two changes we'd have:

EUC_JP       | EUC-JP (JIS X 0201, JIS X 0208, JIS X 0212)
EUC_JIS_2004 | EUC-JP (JIS X 0201, JIS X 0213)

If we really wanted to save horizontal space, I suppose we could drop
the Alias column and either list aliases in a new table, or give them
their own rows with a description "Alias for ...", but that seems a
bit over the top.

While wondering if some other rows could be more specific, I noticed
that for GBK we have "Extended National Standard".  I don't understand
these things, but from a quick look at Wikipedia[2], I got the idea
that if convert_to('€', 'GBK') = '\x80'::bytea (yes) then what we have
might actually be the yet-further-extended standard known as "GBK
1.0".  Do I have that right?

As for BIG5, it seems to be an underspecified mess defying description
other than "good luck" :-)  Thankfully we won't have to list all the
standards that MULE_INTERNAL indirectly covers, as it looks like we've
agreed to drop it.  And IIRC there was a thread somewhere proposing to
drop JOHAB...

> > And for e.g. European
> > character encodings I am not sure it is that useful since most or
> > maybe even all of them are subsets of unicode, it mostly gets
> > interesting for encodings which support characters not in unicode,
> > right?
>
> Choosing UTF8 or not is just one of the use cases.
>
> I am thinking about the use case in which user wants to continue to
> use other encodings (e.g. wants to avoid conversion to UTF8).
> Example: suppose the user has a legacy system in which EUC_JP is
> used. The data in the system includes JIS X 0201, JIS X 0208 and JIS X
> 0212, and he wants to make sure that PostgreSQL supports all those
> character sets in EUC_JP, because some tools does not support JIS X
> 0212. Only JIS X 0212 and JIS X 0208 are supported. Currently the info
> (whether JIS X 0212 is supported or not) does not exist anywhere in
> our docs. It's only in the source code. I think it's better to have
> the info in our docs so that user does not need to look into the
> source code.

Makes sense to me.  The underlying character sets must be very
important to understand, especially if implementations vary on these
points.  We should give the information.

. o O ( I wonder if anyone has ever tried to make an "XTF-8-JA"
encoding just like UTF-8 but with ~1900 high-frequency Japanese
codepoints swapped into the 2-byte range U+0080-07ff where Greek,
Hebrew, Arabic and others won the encoding lottery.  UTF-16 is
apparently sometimes preferred to save space in other RDBMSs that can
do it, but I suppose you could achieve the same size most of the time
with a scheme like that.  The other encodings have the desired size,
but non-universal character sets.  A similar thought for the languages
of India, but with the frequency fuzziness factor removed: you could
surely map a dozen tiny non-ideographic scripts into that range to
save a byte per character... Hindi, Tamil etc didn't get a very good
deal with UTF-8.  Don't worry, I'm not suggesting that PostgreSQL has
any business inventing its own hare-brained encodings, I'm just
wondering out loud if that is a kind of thing that exists somewhere
out there... )

[1] https://www.postgresql.org/docs/current/multibyte.html
[2] https://en.wikipedia.org/wiki/GBK_(character_encoding)



Re: Questionable description about character sets

From
Nico Williams
Date:
On Mon, Feb 16, 2026 at 05:35:41PM +1300, Thomas Munro wrote:
>                                              [...].  UTF-16 is
> apparently sometimes preferred to save space in other RDBMSs that can
> do it, but I suppose you could achieve the same size most of the time
> with a scheme like that.  [...]

[Off-topic] I think UTF-16 yielding smaller encodings is a truism.  It
really depends on what language the text is mostly written in, but
mostly it's a truism that's not true.  Anyways, UTF-16 has to go away,
and the sooner the better.

Nico
-- 



Re: Questionable description about character sets

From
Tatsuo Ishii
Date:
> When I point my browser at
> file:///home/tmunro/projects/postgresql/build/doc/src/sgml/html/multibyte.html
> I see these longer descriptions flowing onto multiple lines making the
> table cells higher, while the published documentation[1] does only a
> small amount of that, and then the font instead becomes smaller as I
> make the window narrower.  Is there an easy way to see the final
> website form in a local build?

Same here. It would be nice to know how to see the website form in a local build.

> We'd have more free space in the affected rows if we did s/Extended
> UNIX Code-JP/EUC-JP/.  Why is that acronym expanded, while ISO, ECMA,
> JIS and CP are not?

Fair point.

> It might be confusing that the style "ISO 8859-1, ECMA 94" is used to
> list alternative encoding standards that are aligned or equivalent,
> while here you're listing the encoding and then the underlying
> character sets in the same way.  Would it be better to put them in
> parentheses?
>
> With those two changes we'd have:
>
> EUC_JP       | EUC-JP (JIS X 0201, JIS X 0208, JIS X 0212)
> EUC_JIS_2004 | EUC-JP (JIS X 0201, JIS X 0213)

Looks good to me.

> While wondering if some other rows could be more specific, I noticed
> that for GBK we have "Extended National Standard".  I don't understand
> these things,

Me neither. Probably "Extended National Standard" comes from the fact
that GB means "national standard" and "K" means "extension".  However,
GBK is actually not an "official standard" that is mandatory for
Chinese industry to follow [3]; it is more of a strongly recommended
standard. Probably we can just write "De facto standard (CP936)".

> but from a quick look at Wikipedia[2], I got the idea
> that if convert_to('€', 'GBK') = '\x80'::bytea (yes) then what we have
> might actually be the yet-further-extended standard known as "GBK
> 1.0".  Do I have that right?

I don't think so. [2] states that "Microsoft later added the euro sign
to Code page 936 and assigned the code 0x80 to it. This is not a valid
code point in GBK 1.0."  So what we have seems to be CP936. Indeed
UCS_to_most.pl, which is used to generate gbk_to_utf8.map, has the
line:
    'GBK' => 'CP936.TXT');

> As for BIG5, it seems to be an underspecified mess defying description
> other than "good luck" :-)

Yeah, ours is BIG5 (Unicode 1.1) + CP950.

> Thankfully we won't have to list all the
> standards that MULE_INTERNAL indirectly covers, as it looks like we've
> agreed to drop it.  And IIRC there was a thread somewhere proposing to
> drop JOHAB...

Apparently JOHAB has not been well tested...

> Makes sense to me.  The underlying character sets must be very
> important to understand, especially if implementations vary on these
> points.  We should give the information.

Yes.

> . o O ( I wonder if anyone has ever tried to make an "XTF-8-JA"
> encoding just like UTF-8 but with ~1900 high-frequency Japanese
> codepoints swapped into the 2-byte range U+0080-07ff where Greek,
> Hebrew, Arabic and others won the encoding lottery.  UTF-16 is
> apparently sometimes preferred to save space in other RDBMSs that can
> do it, but I suppose you could achieve the same size most of the time
> with a scheme like that.  The other encodings have the desired size,
> but non-universal character sets.  A similar thought for the languages
> of India, but with the frequency fuzziness factor removed: you could
> surely map a dozen tiny non-ideographic scripts into that range to
> save a byte per character... Hindi, Tamil etc didn't get a very good
> deal with UTF-8.  Don't worry, I'm not suggesting that PostgreSQL has
> any business inventings its own hair-brained encodings, I'm just
> wondering out loud if that is a kind of thing that exists somewhere
> out there... )

Well, I think inventing an internal-use-only encoding is not a bad
thing in general.  We already have a number of internal-only data
structures; internal encodings would just be one of them. (I am not
saying I want to implement "XTF-8-JA", though.)

> [1] https://www.postgresql.org/docs/current/multibyte.html
> [2] https://en.wikipedia.org/wiki/GBK_(character_encoding)
>

[3] https://ja.wikipedia.org/wiki/GBK

Best regards,
--
Tatsuo Ishii
SRA OSS K.K.
English: http://www.sraoss.co.jp/index_en/
Japanese: http://www.sraoss.co.jp



Re: Questionable description about character sets

From
Robert Treat
Date:
On Mon, Feb 16, 2026 at 4:48 AM Tatsuo Ishii <ishii@postgresql.org> wrote:
>
> > When I point my browser at
> > file:///home/tmunro/projects/postgresql/build/doc/src/sgml/html/multibyte.html
> > I see these longer descriptions flowing onto multiple lines making the
> > table cells higher, while the published documentation[1] does only a
> > small amount of that, and then the font instead becomes smaller as I
> > make the window narrower.  Is there an easy way to see the final
> > website form in a local build?
>
> Same here. It would be nice to know website form in a local build.
>

Are you folks building with "make STYLE=website html"?  That usually
gives me a pretty good representation of the web (although beware if
you use any browser-specific settings to display websites in different
fonts; for example, on my desktop at home I run postgresql.org at
133% size, which doesn't carry over when looking at locally built HTML
pages).

In any case, there is some additional info at
https://www.postgresql.org/docs/devel/docguide-build.html#DOCGUIDE-BUILD-HTML


Robert Treat
https://xzilla.net



Re: Questionable description about character sets

From
Tatsuo Ishii
Date:
>> Same here. It would be nice to know website form in a local build.
>>
> 
> Are you folks building with "make STYLE=website html" ?  That usually
> gives me a pretty good representation of the web (although beware if
> you use any browser specific settings to display websites in different
> fonts. For example, on my desktop at home I run with postgresql.org at
> 133% size, which doesn't carry over when looking at locally built html
> pages.
> 
> In any case, there is some additional info at
> https://www.postgresql.org/docs/devel/docguide-build.html#DOCGUIDE-BUILD-HTML

Thanks for letting me know. I had not noticed it.

Best regards,
--
Tatsuo Ishii
SRA OSS K.K.
English: http://www.sraoss.co.jp/index_en/
Japanese: http://www.sraoss.co.jp



Re: Questionable description about character sets

From
Thomas Munro
Date:
On Mon, Feb 16, 2026 at 6:07 PM Nico Williams <nico@cryptonector.com> wrote:
> On Mon, Feb 16, 2026 at 05:35:41PM +1300, Thomas Munro wrote:
> >                                              [...].  UTF-16 is
> > apparently sometimes preferred to save space in other RDBMSs that can
> > do it, but I suppose you could achieve the same size most of the time
> > with a scheme like that.  [...]
>
> [Off-topic] I think UTF-16 yielding smaller encodings is a truism.  It
> really depends on what language the text is mostly written in, but
> mostly it's a truism that's not true.  Anyways, UTF-16 has to go away,
> and the sooner the better.

But when it's true for your language and that's what your database
holds, then it's true all the time, and it's not just outliers, we're
talking about nearly all of Asia's languages.  That's ... a lot of
NAND gates being wasted due to arbitrary choices made probably before
UTF-8 even existed.

I do agree with you that UTF-16 has turned out to be an odd beast,
though, not big enough but also too big.  Maybe it's only just right
for CJK (or CJ?).  I don't see much chance at all of anyone
retro-fitting UTF-16 into PostgreSQL anyway, so I wouldn't worry about
that.  I could more easily see us figuring out how to drop the
requirement for high bits in multi-byte sequence tails so that GB18030
could be used to store two-byte Chinese (while also retaining full
access to all of Unicode as it does), and I was basically wondering
out loud if Japan might be hiding something like that somewhere and
imagining what it might look like.



Re: Questionable description about character sets

From
Thomas Munro
Date:
On Mon, Feb 16, 2026 at 5:35 PM Thomas Munro <thomas.munro@gmail.com> wrote:
> On Sat, Feb 14, 2026 at 11:20 PM Tatsuo Ishii <ishii@postgresql.org> wrote:
> > > Wouldn't that make the table very wide?
> >
> > I don't think it would make the table very wide but a little bit
> > wider. So I think adding the character sets information to
> > "Description" column is better. Some of encodings already have the
> > info. See attached patch.

If we wanted to follow the SQL standard's terminology, I think we'd
call this the "character repertoire".  In the standard, a "character
set" is the database object representing a repertoire and an encoding
of it, or its identifier.  But if we put it in the description column,
we wouldn't have to name it.

Researching the standard led me to
src/backend/catalog/information_schema.sql[1].  It currently reports
the encoding name as the character set and the repertoire, except
s/UTF8/UCS/ for the repertoire.  That's the same information as you
want to document here.  For the character set (in the SQL standard
sense), the current view definition seems reasonable given that we
don't support CREATE CHARACTER SET or CHARACTER SET generally, and for
the character repertoire, the s/UTF8/UCS/ translation makes sense, but
you chose to call it "Unicode".  Shouldn't those agree?

If GB18030 were a valid server encoding, it would surely have to
report UCS, like UTF8, since it is also a "Unicode transformation
format"[2] (its purpose is to be backwards compatible with legacy
2-byte-per-common-Chinese-character formats while also covering all of
Unicode 100% systematically, ie booting stuff they don't often encode
into the 3- and 4-byte zone to make room for efficient encoding of
stuff they do often encode).  So I think that means your new
documentation should say UCS (or UNICODE) for that one too.  I don't
know how other encodings should spell their repertoire though...
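The GB18030 design described above can be sketched with any conforming implementation (editorial illustration, assuming Python's stock gb18030 codec): common Chinese keeps GBK's two-byte codes, while the rest of Unicode is reached through systematic four-byte sequences, so the repertoire is all of Unicode:

```python
# Sketch of GB18030's design: legacy two-byte codes for common Chinese,
# four-byte sequences for everything else in Unicode.
assert '\u4e2d'.encode('gb18030') == b'\xd6\xd0'     # 中: 2 bytes, as in GBK
assert len('\U0001f600'.encode('gb18030')) == 4      # non-BMP: 4-byte sequence
roundtrip = '\U0001f600'.encode('gb18030').decode('gb18030')
assert roundtrip == '\U0001f600'                     # lossless round trip
```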

(CC Henson Choi who might be interested in this topic especially WRT Korean.)

[1] https://www.postgresql.org/docs/current/infoschema-character-sets.html
[2] https://en.wikipedia.org/wiki/GB_18030



Re: Questionable description about character sets

From
Tatsuo Ishii
Date:
> If we wanted to follow the SQL standard's terminology, I think we'd
> call this the "character repertoire".

Calling it "character repertoire" works for me. Fortunately the
meaning of "character repertoire" in the SQL standard and in the other
standards (ISO/IEC 2022 or 10646) looks the same.

> In the standard, a "character
> set" is the database object representing a repertoire and an encoding
> of it, or its identifier.

Yes. Unlike ISO/IEC 2022 or 10646, the SQL standard has no clear
distinction between character set (in the sense of ISO/IEC 10646) and
encoding. (To me this is quite confusing.)

> But if we put it in the description column,
> we wouldn't have to name it.

Why?

> Researching the standard led me to
> src/backend/catalog/information_schema.sql[1].  It currently reports
> the encoding name as the character set and the repertoire, except
> s/UTF8/UCS/ for the repertoire.  That's the same information as you
> want to document here.  For the character set (in the SQL standard
> sense), the current view definition seems reasonable given that we
> don't support CREATE CHARACTER SET or CHARACTER SET generally,

Why? For example, shouldn't EUC_JP have JIS X 0201, JIS X 0208 and JIS
X 0212 as its character repertoire?

> and for
> the character repertoire, the s/UTF8/UCS/ translation makes sense, but
> you chose to call it "Unicode".  Shouldn't those agree?

I think "UCS" is not a repertoire, but a coded character set.
"Unicode" or "Unicode repertoire" [1] is more appropriate, I think.

[1] https://www.unicode.org/reports/tr17/tr17-3.html

> If GB18030 were a valid server encoding, it would surely have to
> report UCS, like UTF8, since it is also a "Unicode transformation
> format"[2] (its purpose is to be backwards compatible with legacy
> 2-byte-per-common-Chinese-character formats while also covering all of
> Unicode 100% systematically, ie booting stuff they don't often encode
> into the 3- and 4-byte zone to make room for efficient encoding of
> stuff they do often encode).  So I think that means your new
> documentation should say UCS (or UNICODE) for that one too.

Not sure. I heard that the latest GB18030 (GB18030-2022, at this
point) does not contain some newer Unicode characters.

> I don't
> know how other encodings should spell their repertoire though...

That needs research on my part, too.

Regards,
--
Tatsuo Ishii
SRA OSS K.K.
English: http://www.sraoss.co.jp/index_en/
Japanese: http://www.sraoss.co.jp



Re: Questionable description about character sets

From
Henson Choi
Date:
Thanks Thomas for looping me in, and thanks Tatsuo-san for driving
this.  Before getting to the Korean Description-column wording
itself, the main thing I want to surface from my audit is two
Bytes/Char corrections on this very table -- they turn out to be
the most concrete thing I can offer.

  * JOHAB row Bytes/Char = 1-3.  This is wrong.  I posted a
    separate patch for bug #19354 [1] that rewrites
    pg_johab_mblen() / pg_johab_verifychar() to follow
    KS X 1001:2004 Annex 3 Table 1 directly, instead of borrowing
    from pg_euc_mblen() / IS_EUC_RANGE_VALID().  (JOHAB's Hangul
    lead-byte range 0x84-0xD3 spans 0x8E and 0x8F, which EUC
    reserves as SS2/SS3, so it was never an EUC profile to begin
    with.)  That patch also corrects pg_wchar_table's maxmblen for
    JOHAB from 3 to 2 and the Bytes/Char column of this same
    Table 23.3 from "1-3" to "1-2".

  * EUC_KR row Bytes/Char = 1-3.  Overstated in the same way, but
    with a twist: the validator is already correct.  EUC-KR per
    KS X 2901 / RFC 1557 designates only G0 (ASCII) and G1
    (KS X 1001), so the maximum valid sequence length is 2.
    pg_euckr_verifychar() already rejects 0x8E and 0x8F via
    IS_EUC_RANGE_VALID (0xA1-0xFE), so no 3-byte sequence is ever
    accepted in practice.  The stale "3" only survives in
    pg_wchar_table[PG_EUC_KR].maxmblen and in this docs cell, as a
    leftover from pg_euckr_mblen() delegating to the shared
    pg_euc_mblen().  Correcting both to 2 is a pure cleanup with
    no behavior change and no backward-compatibility impact.
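The corrected "1-2" Bytes/Char claim for both rows can be cross-checked against any independent implementation (an editorial sketch using Python's euc_kr and johab codecs, purely for illustration):

```python
# Cross-check of the "1-2 Bytes/Char" claim: EUC-KR designates only
# G0 (ASCII, 1 byte) and G1 (KS X 1001, 2 bytes), and Johab packs
# every Hangul syllable into 2 bytes as well.
assert len('A'.encode('euc_kr')) == 1              # ASCII: single byte
assert '\ud55c'.encode('euc_kr') == b'\xc7\xd1'    # 한 (KS X 1001): 2 bytes
assert len('\ud55c'.encode('johab')) == 2          # same repertoire, 2 bytes
```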

If the JOHAB fix lands first, that row's Bytes/Char can inherit
the corrected value.  For EUC_KR, I could go either way and would
rather let you pick the direction: fold the maxmblen/docs cleanup
into v1 (since the change is behavior-free), or keep it out and
let me post it as its own small patch in a separate thread (since
it touches src/common/wchar.c as well as the docs, while your v1
is docs-only).  I'm happy to prepare it either way.

As for the Korean Description-column wording itself, I'd rather
offer input than a finished proposal -- I'm honestly not confident
about the right naming convention, especially for UHC.  For what
it's worth:

  * EUC_KR's coded character set is just KS X 1001 (plus ASCII);
    there is no KS equivalent of JIS X 0212.

  * JOHAB shares the same character repertoire as EUC_KR --
    KS X 1001 + ASCII -- and simply arranges those characters into
    bytes via the combinational code in Annex 3.  So if the column
    is about coded character sets rather than encodings, JOHAB's
    entry would arguably read identically to EUC_KR's.  That's
    actually a clean illustration of the encoding-vs-character-set
    distinction you raised in the original post.

  * UHC / CP949 is the Microsoft superset of EUC-KR that adds the
    precomposed Hangul syllables beyond KS X 1001 (completing the
    full set of 11172), but those extra syllables aren't
    standardized as a separately-named coded character set as far
    as I know -- "CP949" tends to refer to the encoding.  I don't
    have a confident answer for the wording; if you have a
    preferred convention I'll defer to it.

    (Structural note in passing: despite the "superset of EUC-KR"
    framing, UHC is not itself an EUC profile.  To fit the extra
    syllables, it extends the lead-byte range down to 0x81, which
    necessarily swallows 0x8E and 0x8F -- the bytes EUC reserves
    as SS2 and SS3.  So by extending EUC-KR, CP949 steps outside
    the EUC family.  Mentioning this only because it mirrors the
    JOHAB situation.)

One more observation, and apologies in advance for wandering a bit
beyond the scope of this thread: while auditing those code paths I
noticed that pg_uhc_verifychar() appears quite loose on trail
bytes (it only rejects \0), while CP949's actual trail-byte range
is somewhat narrower.  Tightening this would be a real behavior
change -- existing databases may contain byte sequences that are
currently accepted but would be rejected under a stricter verifier
-- so it needs its own discussion.  I'll raise that in its own
separate thread regardless of how the EUC_KR question above is
resolved.  (UHC's 1-2 / maxmblen = 2 are already correct, so this
is purely a verifier-strictness question, not a table-cell
question.)

So in summary: the UHC verifier question will go to its own
separate thread from my side (behavior change, needs consensus),
and the EUC_KR cleanup will go to either v1 or a separate thread
depending on your call above.  Neither should block your v1 patch;
the only pieces that touch the same table cells are the two
Bytes/Char corrections, both handled either via [1] or via the
EUC_KR cleanup, wherever it ends up.

[1] https://postgr.es/m/19354-eefe6d8b3e84f9f2@postgresql.org

Regards,
Henson Choi