Copyright © 2001 W3C® (MIT, INRIA, Keio), All Rights Reserved. W3C liability, trademark, document use, and software licensing rules apply.
This section describes the status of this document at the time of its publication. Other documents may supersede this document. The latest status of this series of documents is maintained at the W3C.
This is a W3C Working Draft published between the first Last Call Working Draft of 26 January 2001 and a planned second Last Call. This interim publication is used to document the further progress made on addressing the comments received during the first Last Call. A list of last call comments with their status can be found in the disposition of comments (Members only).
Work is still ongoing on addressing the comments received during the first Last Call. We do not encourage comments on this Working Draft; instead we ask reviewers to wait for the second Last Call. We will announce the second Last Call on the W3C Internationalization public mailing list (www-international@w3.org; subscribe). Comments from the public and from organizations outside the W3C may be sent to www-i18n-comments@w3.org (archive). Comments from W3C Working Groups may be sent directly to the Internationalization Interest Group (w3c-i18n-ig@w3.org), with cross-posting to the originating Group, to facilitate discussion and resolution.
Due to its architectural nature, this document affects not only a large number of W3C Working Groups but also software developers, content developers, and writers and users of specifications outside the W3C that have to interface with W3C specifications.
This document is published as part of the W3C Internationalization Activity by the Internationalization Working Group (Members only), with the help of the Internationalization Interest Group. The Internationalization Working Group will not allow early implementation to constrain its ability to make changes to this specification prior to final release. Publication as a Working Draft does not imply endorsement by the W3C Membership. It is inappropriate to use W3C Working Drafts as reference material or to cite them as other than "work in progress". A list of current W3C Recommendations and other technical documents can be found at http://www.w3.org/TR.
For information about the requirements that informed the development of important parts of this specification, see Requirements for String Identity Matching and String Indexing [CharReq].
1 Introduction
1.1 Goals and Scope
1.2 Background
1.3 Terminology and Notation
2 Conformance
3 Characters
3.1 Perceptions of Characters
3.1.1 Introduction
3.1.2 Units of Aural Rendering
3.1.3 Units of Visual Rendering
3.1.4 Units of Input
3.1.5 Units of Collation
3.1.6 Units of Storage
3.1.7 Summary
3.2 Digital Encoding of Characters
3.3 Transcoding
3.4 Strings
3.5 Reference Processing Model
3.6 Choice and Identification of Character Encodings
3.6.1 Mandating a unique character encoding
3.6.2 Character Encoding Identification
3.6.3 Private Use Code Points
3.7 Character Escaping
4 Early Uniform Normalization
4.1 Motivation
4.2 Definitions for W3C Text Normalization
4.2.1 Unicode-normalized Text
4.2.2 Fully Normalized Text
4.2.3 Examples
4.3 Responsibility for Normalization
5 Compatibility and Formatting Characters
6 String Identity Matching
7 String Indexing
8 Character Encoding in URI References
9 Referencing the Unicode Standard and ISO/IEC 10646
A Examples of Characters, Keystrokes and Glyphs
B Acknowledgements
C References
C.1 Normative References
C.2 Other References
D Change Log (Non-Normative)
D.1 Changes since http://www.w3.org/TR/2001/WD-charmod-20010928
D.2 Changes since http://www.w3.org/TR/2001/WD-charmod-20010126
The goal of this document is to facilitate use of the Web by all people, regardless of their language, script, writing system, and cultural conventions, in accordance with the W3C goal of universal access. One basic prerequisite to achieve this goal is to be able to transmit and process the characters used around the world in a well-defined and well-understood way.
The main target audience of this document is W3C specification developers. This document defines conformance requirements for other W3C specifications. This document and parts of it can also be referenced from other W3C specifications.
Other audiences of this document include software developers, content developers, and authors of specifications outside the W3C. Software developers and content developers implement and use W3C specifications. This document defines some conformance requirements for software developers and content developers that implement and use W3C specifications. It also helps software developers and content developers to understand the character-related provisions in other W3C specifications.
The character model described in this document provides authors of specifications, software developers, and content developers with a common reference for consistent, interoperable text manipulation on the World Wide Web. Working together, these three groups can build a more international Web.
Topics addressed include encoding identification, early uniform normalization, string identity matching, string indexing, and URI conventions. Some introductory material on characters and character encodings is also provided.
Topics not addressed or barely touched include collation (sorting), fuzzy matching and language tagging. Some of these topics may be addressed in a future version of this specification.
At the core of the model is the Universal Character Set (UCS), defined jointly by The Unicode Standard [Unicode] and ISO/IEC 10646 [ISO/IEC 10646]. In this document, Unicode is used as a synonym for the Universal Character Set. The model will allow Web documents authored in the world's scripts (and on different platforms) to be exchanged, read, and searched by Web users around the world.
All W3C specifications must conform to this document (see section 2 Conformance). Authors of other specifications (for example, IETF specifications) are strongly encouraged to take guidance from it.
Since other W3C specifications will be based on some of the provisions of this document, without repeating them, software developers implementing W3C specifications must conform to these provisions.
This section provides some historical background on the topics addressed in this document.
Starting with Internationalization of the Hypertext Markup Language [RFC 2070], the Web community has recognized the need for a character model for the World Wide Web. The first step towards building this model was the adoption of Unicode as the document character set for HTML.
The choice of Unicode was motivated by the fact that Unicode:
is the only universal character repertoire available,
covers the widest possible range,
provides a way of referencing characters independent of the encoding of a resource,
is being updated/completed carefully,
is widely accepted and implemented by industry.
W3C adopted Unicode as the document character set for HTML in [HTML 4.0]. The same approach was later used for specifications such as XML 1.0 [XML 1.0] and CSS2 [CSS2]. Unicode now serves as a common reference for W3C specifications and applications.
The IETF has adopted some policies on the use of character sets on the Internet (see [RFC 2277]).
As long as data transfer on the Web remained mostly unidirectional (from server to browser), and the main purpose was to render documents, the use of Unicode without specifying additional details was sufficient. However, the Web has grown:
Data transfers among servers, proxies, and clients, in all directions, have increased.
Non-ASCII characters [MIME] are being used in more and more places.
Data transfers between different protocol/format elements (such as element/attribute names, URI components, and textual content) have increased.
More and more APIs are defined, not just protocols and formats.
In short, the Web may be seen as a single, very large application (see [Nicol]), rather than as a collection of small independent applications.
While these developments strengthen the requirement that Unicode be the basis of a character model for the Web, they also create the need for additional specifications on the application of Unicode to the Web. Some aspects of Unicode that require additional specification for the Web include:
Choice of encoding forms (UTF-8, UTF-16, UTF-32).
Counting characters (measuring string length in the presence of variable-length encodings and combining characters).
Duplicate encodings (e.g. precomposed vs decomposed).
Use of control codes for various purposes (e.g. bidirectionality control, symmetric swapping, etc.).
It should be noted that such properties also exist in legacy encodings (where legacy encoding is taken to mean any character encoding not based on Unicode), and in many cases have been inherited by Unicode in one way or another from such legacy encodings.
The remainder of this document presents additional specifications and requirements to ensure an interoperable character model for the Web, taking into account earlier work (from W3C, ISO and IETF).
For the purpose of this specification, the producer of text data is the sender of the data in the case of protocols, and the tool that produces the data in the case of formats. The recipient of text data is the software module that receives the data.
NOTE: A software module may be both a recipient and a producer.
Unicode code points are denoted as U+hhhh, where "hhhh" is a sequence of at least four and at most six hexadecimal digits.
In this document, requirements are expressed using the key words "MUST", "MUST NOT", "REQUIRED", "SHALL" and "SHALL NOT". Recommendations are expressed using the key words "SHOULD", "SHOULD NOT" and "RECOMMENDED". "MAY" and "OPTIONAL" are used to indicate optional features or behaviour. These keywords are used in accordance with RFC 2119 [RFC 2119].
This specification places conformance requirements on specifications, on software and on Web content. To aid the reader, all requirements are preceded by '[X]' where 'X' is one of 'S' for specifications, 'I' for software implementations, and 'C' for Web content. These markers indicate the relevance of the requirement and allow the reader to quickly locate relevant requirements using the browser's search function. [S] [I] [C] In order to conform to this document, specifications MUST NOT violate any requirements preceded by [S], software MUST NOT violate any requirements preceded by [I], and content MUST NOT violate any requirements preceded by [C].
[S] Every W3C specification MUST:
conform to the requirements applicable to specifications,
specify that implementations MUST conform to the requirements applicable to software, and
specify that content created according to that specification MUST conform to the requirements applicable to content.
[S] If an existing W3C specification does not conform to the requirements in this document, then the next version of that specification MUST be modified in order to conform.
[I] Where this specification contains a procedural description, it MUST be understood as a way to specify the desired external behavior. Implementations MAY use other ways of achieving the same results, as long as observable behavior is not affected.
The glossary entry for 'character' in [Unicode 3.0] gives:
"Character. (1) The smallest component of written language that has semantic value; refers to the abstract meaning and/or shape ..."
The word 'character' is used in many contexts, with different meanings. Human cultures have radically differing writing systems, leading to radically differing concepts of a character. Such wide variation in end user experience can, and often does, result in misunderstanding. This variation is sometimes mistakenly seen as the consequence of imperfect technology. Instead, it derives from the great flexibility and creativity of the human mind and the long tradition of writing as an important part of the human cultural heritage. The alphabetic approach used by scripts such as Latin, Cyrillic and Greek is only one of several possibilities.
EXAMPLE: Japanese hiragana and katakana are syllabaries. A character in these scripts corresponds to a syllable (usually a combination of consonant plus vowel).
EXAMPLE: Korean Hangul is a featural syllabary that combines symbols for individual sounds of the language into square syllabic blocks. Depending on the user and the application, either the individual symbols or the syllabic clusters can be considered to be characters.
EXAMPLE: Indic scripts are abugidas. Each consonant letter carries an inherent vowel, which is eliminated or replaced by combining consonants and vowels into clusters in semi-regular or irregular ways. Depending on the user and the application, either the individual consonants and vowels, or the consonant or consonant-vowel clusters, can be perceived as characters.
EXAMPLE: Arabic script is an example of an abjad. Short vowel sounds are typically not written at all. When they are written they are indicated by the use of combining marks placed above and below the consonantal letters.
The developers of W3C specifications, and the developers of software based on those specifications, are likely to be more familiar with usages they have experienced and less familiar with the wide variety of usages in an international context. Furthermore, within a computing context, characters are often confused with related concepts, resulting in incomplete or inappropriate specifications and software.
This section examines some of these contexts, meanings and confusions.
In some scripts, characters have a close relationship to phonemes (a phoneme is a minimally distinct sound in the context of a particular spoken language), while in others they are closely related to meanings. Even when characters (loosely) correspond to phonemes, this relationship may not be simple, and there is rarely a one-to-one correspondence between character and phoneme.
EXAMPLE: In the English sentence, "They were too close to the door to close it." the same character 's' is used to represent both /s/ and /z/ phonemes.
EXAMPLE: In many scripts a single character may represent a sequence of phonemes, such as the syllabic characters of Japanese hiragana.
EXAMPLE: In many writing systems a sequence of characters may represent a single phoneme, for example 'wr' and 'ng' in "writing".
[S] [I] Specifications and software MUST NOT assume that there is a one-to-one correspondence between characters and the sounds of a language.
Visual rendering introduces the notion of a glyph. Glyphs are defined by ISO/IEC 9541-1 [ISO/IEC 9541-1] as "a recognizable abstract graphic symbol which is independent of a specific design". There is not a one-to-one correspondence between characters and glyphs:
A single character can be represented by multiple glyphs (each glyph is then part of the representation of that character). These glyphs may be physically separated from one another.
A single glyph may represent a sequence of characters (this is the case with ligatures, among others).
A character may be rendered with very different glyphs depending on the context.
A single glyph may represent different characters (e.g. capital Latin A, capital Greek A and capital Cyrillic A).
Each glyph can be represented by a number of different glyph images; a set of glyph images makes up a font. Glyphs can be construed as the basic units of organization of the visual rendering of text, just as characters are the basic units of organization of encoded text.
[S] [I] Specifications and software MUST NOT assume a one-to-one mapping between character codes and units of displayed text.
See A Examples of Characters, Keystrokes and Glyphs for examples of the complexities of character to glyph mapping.
Some scripts, in particular Arabic and Hebrew, are written from right to left. Text including characters from these scripts can run in both directions and is therefore called bidirectional text (see example A.6 in Appendix A). The Unicode Standard [Unicode] requires that characters be stored and interchanged in logical order. [S] Protocols, data formats and APIs MUST store, interchange or process text data in logical order.
In the presence of bidi text, two possible selection modes must be considered. The first is logical selection mode, which selects all the characters logically located between the end-points of the user's mouse gesture. Here the user selects from between the first and second letters of the second word to the middle of the number. Logical selection looks like this:
[Figure: logical selection. The selected text is a single contiguous range in memory but appears as two separate highlighted runs on screen.]
It is a consequence of the bidirectionality of the text that a single, continuous logical selection in memory results in a discontinuous selection appearing on the screen. This discontinuity, as well as the somewhat unintuitive behavior of the cursor, makes some users prefer a visual selection mode, which selects all the characters visually located between the end-points of the user's mouse gesture. With the same mouse gesture as before, we now obtain:
[Figure: visual selection. The highlighted text appears as a single contiguous run on screen but corresponds to two separate ranges in memory.]
In this mode, a single visual selection range results in two logical ranges, which have to be accommodated by protocols, APIs and implementations.
[S] Specifications of protocols and APIs that involve selection of ranges SHOULD provide for discontiguous selections, at least to the extent necessary to support implementation of visual selection on screen on top of those protocols and APIs.
In keyboard input, it is not always the case that keystrokes and input characters correspond one-to-one. A limited number of keys can fit on a keyboard. Some keyboards will generate multiple characters from a single keypress. In other cases ('dead keys') a key will generate no characters, but affect the results of subsequent keypresses. Many writing systems have far too many characters to fit on a keyboard and must rely on more complex input methods, which transform keystroke sequences into character sequences. Other languages may make it necessary to input some characters with special modifier keys. See A Examples of Characters, Keystrokes and Glyphs for examples of non-trivial input.
[S] [I] Specifications and software MUST NOT assume that a single keystroke results in a single character, nor that a single character can be input with a single keystroke (even with modifiers), nor that keyboards are the same all over the world.
String comparison as used in sorting and searching is based on units which do not in general have a one-to-one relationship to encoded characters. Such string comparison can aggregate a character sequence into a single collation unit with its own position in the sorting order, can separate a single character into multiple collation units, and can distinguish various aspects of a character (case, presence of diacritics, etc.) to be sorted separately (multi-level sorting).
In addition, a certain amount of pre-processing may also be required, and in some scripts (such as Japanese and Arabic) sort order is governed by higher order factors such as phonetics or word roots. Collation methods may also vary by application (e.g. dictionaries may be sorted differently than telephone books).
EXAMPLE: In traditional Spanish sorting, the letter sequences 'ch' and 'll' are treated as atomic collation units. Although Spanish sorting, and to some extent Spanish everyday use, treat 'ch' as a single unit, current digital encodings treat it as two letters, and keyboards do the same (the user types 'c', then 'h').
EXAMPLE: In most languages, the letter 'æ' is sorted as two consecutive collation units: 'a' and 'e'.
EXAMPLE: The sorting of text written in a bicameral script (i.e. a script which has distinct upper and lower case letters) is usually required to ignore case differences in a first pass; case is then used to break ties in a later pass.
EXAMPLE: Treatment of accented letters in sorting is dependent on the script or language in question. The letter 'ö' is treated as a modified 'o' in French, but as a letter completely independent from 'o' (and sorting after 'z') in Swedish. In German certain applications treat the letter 'o' as if it were the sequence 'oe'.
EXAMPLE: In Thai the sequence U+0E44 U+0E01 must be sorted as if it was written U+0E01 U+0E44. Reordering is typically done during an initial pre-processing stage.
[S] [I] Software that sorts or searches text for users MUST do so on the basis of appropriate collation units and ordering rules for the relevant language and/or application.
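As a non-normative illustration, locale-aware collation facilities in general-purpose languages show how collation units and ordering rules vary by language. A minimal Python sketch, assuming a host system that provides a Swedish locale under the glibc-style name 'sv_SE.UTF-8':

    import locale

    words = ['ordning', 'öl', 'zon']

    # Swedish treats 'ö' as a distinct letter that sorts after 'z'.
    # Locale names are platform-dependent; this one is an assumption.
    locale.setlocale(locale.LC_COLLATE, 'sv_SE.UTF-8')
    print(sorted(words, key=locale.strxfrm))   # ['ordning', 'zon', 'öl']

Under a French locale the same call would instead treat 'ö' as a modified 'o', placing 'öl' before 'zon'.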
Computer storage and communication rely on units of physical storage and information interchange, such as bits and bytes (also known as octets, since the word 'byte' is nowadays generally understood to mean an 8-bit unit). A frequent error in specifications and implementations is the equating of characters with units of physical storage. The mapping between characters and such units of storage is actually quite complex, and is discussed in the next section, 3.2 Digital Encoding of Characters.
[S] [I] Specifications and software MUST NOT assume a one-to-one relationship between characters and units of physical storage.
The term character is used differently in a variety of contexts and often leads to confusion when used outside of these contexts. In the context of the digital representations of text, a character can be defined informally as a small logical unit of text. Text is then defined as sequences of characters. While such an informal definition is sufficient to create or capture a common understanding in many cases, it is also sufficiently open to create misunderstandings as soon as details start to matter. In order to write effective specifications, protocol implementations, and software for end users, it is very important to understand that these misunderstandings can occur.
[S] When specifications use the term 'character' it MUST be clear which of the possible meanings they intend. [S] Specifications SHOULD avoid the use of the term 'character' if a more specific term is available.
To be of any use in computers, in computer communications and in particular on the World Wide Web, characters must be encoded. In fact, much of the information processed by computers over the last few decades has been encoded text, exceptions being images, audio, video and numeric data. To achieve text encoding, a large variety of encoding schemes have been devised, which can loosely be defined as mappings between the character sequences that users manipulate and the sequences of bits that computers manipulate.
Given the complexity of text encoding and the large variety of schemes for character encoding invented throughout the computer age, a more formal description of the encoding process is useful. The process of defining a text encoding can be described as follows (see [UTR #17] for a more detailed description):
A set of characters to be encoded is identified. The characters are pragmatically chosen to express text and to efficiently allow various text processes in one or more target languages. They may not correspond precisely to what users perceive as letters and other characters. The set of characters is called a repertoire.
Each character in the repertoire is then associated with a (mathematical, abstract) non-negative integer, the code point (also known as a character number or code position). The result, a mapping from the repertoire to the set of non-negative integers, is called a coded character set (CCS).
To enable use in computers, a suitable base datatype is identified (such as a byte, a 16-bit unit of storage or other) and a character encoding form (CEF) is used, which encodes the abstract integers of a CCS into sequences of the code units of the base datatype. The encoding form can be extremely simple (for instance, one which encodes the integers of the CCS into the natural representation of integers of the chosen datatype of the computing platform) or arbitrarily complex (a variable number of code units, where the value of each unit is a non-trivial function of the encoded integer).
To enable transmission or storage using byte-oriented devices, a serialization scheme or character encoding scheme (CES) is next used. A CES is a mapping of the code units of a CEF into well-defined sequences of bytes, taking into account the necessary specification of byte-order for multi-byte base datatypes and including in some cases switching schemes between the code units of multiple CESes (an example is ISO 2022). A CES, together with the CCSes it is used with, is identified by an IANA charset identifier. Given a sequence of bytes representing text and a charset identifier, one can in principle unambiguously recover the sequence of characters of the text.
NOTE: The term 'character encoding' is somewhat ambiguous, as it is sometimes used to describe the actual process of encoding characters and sometimes to denote a particular way to perform that process (as in "this file is in the X character encoding"). Context normally allows the distinction of those uses, once one is aware of the ambiguity.
NOTE: Unfortunately, there are some important cases of charset identifiers that denote a range of slight variants of an encoding scheme, where the differences may be crucial (e.g. the well-known yen/backslash case) and may vary over time. In those cases, recovery of the character sequence from a byte sequence is not totally unambiguous. See the [XML Japanese Profile] for examples of such ambiguous charsets.
In very simple cases, the whole encoding process can be collapsed to a single step, a trivial one-to-one mapping from characters to bytes; this is the case, for instance, for US-ASCII [MIME] and ISO-8859-1.
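As a non-normative sketch, the layers above can be observed in Python, whose str type models a sequence of characters:

    # CCS: the coded character set maps a repertoire character to a code point.
    ch = 'é'
    code_point = ord(ch)                   # 0xE9, i.e. U+00E9

    # CEF: UTF-16 represents this code point as one 16-bit code unit;
    # code points above U+FFFF would require two units (a surrogate pair).
    # CES: the encoding scheme serializes code units into bytes and fixes
    # the byte order, as the two serializations below show.
    assert ch.encode('utf-16-be') == b'\x00\xe9'
    assert ch.encode('utf-16-le') == b'\xe9\x00'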
Text data is said to be in a Unicode encoding form if it is encoded in UTF-8, UTF-16 or UTF-32.
Transcoding is the process of converting text data from one character encoding form to another. Transcoders work only at the level of character encoding and do not parse the text; consequently, they do not deal with character escapes such as numeric character references (see 3.7 Character Escaping) and do not adjust embedded character encoding information (for instance in an XML declaration or in an HTML meta element).
NOTE: Transcoding may involve one-to-one, many-to-one, one-to-many or many-to-many mappings. In addition, the storage order of characters varies between encodings: some, such as Unicode, prescribe logical ordering while others use visual ordering; among encodings that have separate diacritics, some prescribe that they be placed before the base character, some after. Because of these differences in sequencing characters, transcoding may involve reordering: thus the sequence 'XYZ' may map to 'YXZ'.
A normalizing transcoder is a transcoder that converts from a legacy encoding to a Unicode encoding form and ensures that the result is in Unicode Normalization Form C (see 4.2.1 Unicode-normalized Text). For most legacy encodings, it is possible to construct a normalizing transcoder; it is not possible to do so if the encoding's repertoire contains characters not in Unicode.
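A normalizing transcoder can be sketched in a few lines of Python; this is a non-normative illustration, and the legacy encoding name is only an example:

    import unicodedata

    def normalizing_transcode(data, legacy_encoding):
        # Decode from the legacy encoding, then ensure the result is in
        # Unicode Normalization Form C.
        return unicodedata.normalize('NFC', data.decode(legacy_encoding))

    # 'suçon' in ISO 8859-1; the result is guaranteed to be in NFC.
    assert normalizing_transcode(b'su\xe7on', 'iso-8859-1') == 'su\xe7on'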
Various specifications use the notion of a 'string', sometimes without defining precisely what is meant and sometimes defining it differently from other specifications. The reason for this variability is that there are in fact multiple reasonable definitions for a string, depending on one's intended use of the notion; the term 'string' is used for all these different notions because these are actually just different views of the same reality: a piece of text stored inside a computer. This section provides specific definitions for different notions of 'string' which may be reused elsewhere.
Byte string: A string viewed as a sequence of bytes representing characters in a particular encoding. This corresponds to a CES. As a definition for a string, this definition is most often useless, except when the textual nature is unimportant and the string is considered only as a piece of opaque data with a length in bytes. [S] Specifications in general SHOULD NOT define a string as a 'byte string'.
Code unit string: A string viewed as a sequence of code units representing characters in a particular encoding. This corresponds to a CEF. This definition is useful in APIs that expose a physical representation of string data. Example: For the DOM [DOM Level 1], UTF-16 was chosen based on widespread implementation practice.
Character string: A string viewed as a sequence of characters, each represented by a code point in Unicode [Unicode]. This is usually what programmers consider to be a string, although it may not match exactly what most users perceive as characters. This is the highest layer of abstraction that ensures interoperability with very low implementation effort. [S] The 'character string' definition of a string is generally the most useful and SHOULD be used by most specifications, following the examples of Production [2] of XML 1.0 [XML 1.0], the SGML declaration of HTML 4.01 [HTML 4.01], and the character model of RFC 2070 [RFC 2070].
EXAMPLE: Consider the string comprising the characters U+10333 GOTHIC LETTER DAGS, U+2260 NOT EQUAL TO and U+0030 DIGIT ZERO, encoded in UTF-16 in big-endian byte order. The rows of the following table show the string viewed as a character string, code unit string and byte string, respectively:
Character string | U+10333           | U+2260  | U+0030  |
Code unit string | D800    | DF33    | 2260    | 0030    |
Byte string      | D8 | 00 | DF | 33 | 22 | 60 | 00 | 30 |
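The three views can be reproduced with a short, non-normative Python sketch (Python's str type behaves as a character string):

    import struct

    s = '\U00010333\u2260\u0030'            # U+10333, U+2260, U+0030

    b = s.encode('utf-16-be')               # the byte string: 8 bytes
    units = struct.unpack('>4H', b)         # the code unit string: 4 units

    print(len(s))                           # 3 characters
    print(['%04X' % u for u in units])      # ['D800', 'DF33', '2260', '0030']
    print(b.hex(' '))                       # d8 00 df 33 22 60 00 30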
NOTE: It is also possible to view a string as a sequence of graphemes. In this case the string is divided into text units that correspond to the user's perception of where character boundaries occur in a visually rendered text. However, there is no standard rule for the segmentation of text in this way, and the segmentation will vary from language to language and even from user to user. Examples of possible approaches can be found in sections 5.12 and 5.15 of the Unicode Standard [Unicode 3.0].
Many Internet protocols and data formats, most notably the very important Web formats HTML, CSS and XML, are based on text. In those formats, everything is text but the relevant specifications impose a structure on the text, giving meaning to certain constructs so as to obtain functionality in addition to that provided by plain text. HTML and XML are markup languages, defining entities entirely composed of text but with conventions allowing the separation of this text into markup and character data. Citing from the XML 1.0 specification [XML 1.0], section 2.4:
"Text consists of intermingled character data and markup. [...] All text that is not markup constitutes the character data of the document."
For the purposes of this section, the important aspect is that everything is text, that is, a sequence of characters.
Since its early days, the Web has seen the development of a Reference Processing Model, first described for HTML in RFC 2070 [RFC 2070]. This model was later embraced by XML and CSS. It is applicable to any data format or protocol that is text-based as described above. The essence of the Reference Processing Model is the use of Unicode as a common reference. Use of the Reference Processing Model by a specification does not, however, require that implementations actually use Unicode. The requirement is only that the implementations behave as if the processing took place as described by the Model.
A specification conforms to the Reference Processing Model if all of the following apply:
[S] Specifications MUST be defined in terms of Unicode characters, not bytes or glyphs.
[S] Specifications SHOULD allow the use of the full range of Unicode code points from 0 to 0x10FFFF inclusive; any exceptions SHOULD be listed and justified; code points above 0x10FFFF SHOULD NOT be used.
[S] Specifications MAY allow use of any character encoding which can be transcoded to Unicode for its text entities.
[S] Specifications MAY choose to disallow or deprecate some encodings and to make others mandatory. Independent of the actual encoding, the specified behavior MUST be the same as if the processing happened as follows:
The encoding of any text entity received by the application implementing the specification MUST be determined and the text entity MUST be interpreted as a sequence of Unicode characters - this MUST be equivalent to transcoding the entity to some Unicode encoding form, adjusting any character encoding label if necessary, and receiving it in that Unicode encoding form.
All processing MUST take place on this sequence of Unicode characters.
If text is output by the application, the sequence of Unicode characters MUST be encoded using an encoding chosen among those allowed by the specification.
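A minimal sketch of this 'as if' behavior, using Python (whose str type is a sequence of Unicode code points); all names are illustrative, not part of any specification:

    def receive_text_entity(data, declared_encoding):
        # Determine the encoding and interpret the entity as a sequence of
        # Unicode characters (equivalent to transcoding it to a Unicode
        # encoding form).
        return data.decode(declared_encoding)

    def process(text):
        # All processing takes place on the sequence of Unicode characters.
        return text.strip()

    def emit(text, output_encoding='utf-8'):
        # Output is encoded using an encoding allowed by the specification.
        return text.encode(output_encoding)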
[S] If a specification is such that multiple text entities are involved (such as an XML document referring to external parsed entities), it MAY choose to allow these entities to be in different character encodings. In all cases, the Reference Processing Model MUST be applied to all entities.
[S] All specifications that involve text MUST specify processing according to the Reference Processing Model.
NOTE: All specifications that derive from the XML 1.0 specification [XML 1.0] automatically inherit this Reference Processing Model. XML is entirely defined in terms of Unicode characters and mandates the UTF-8 and UTF-16 encodings while allowing any other encoding for parsed entities.
NOTE: When specifications choose to allow encodings other than Unicode encodings, implementers should be aware that the correspondence between the characters of a legacy encoding and Unicode characters may in practice depend on the software used for transcoding. See the Japanese XML Profile [XML Japanese Profile] for examples of such inconsistencies.
Because encoded text cannot be interpreted and processed without knowing the encoding, it is vitally important that the character encoding scheme (see 3.2 Digital Encoding of Characters) is known at all times and places where text is exchanged or processed. [S] Specifications MUST either specify a unique encoding, or provide character encoding identification mechanisms such that the encoding of text can always be reliably identified. [S] When designing a new protocol, format or API, specifications SHOULD mandate a unique character encoding.
Mandating a unique character encoding is simple, efficient, and robust. There is no need for specifying, producing, transmitting, and interpreting encoding tags. At the receiver, the encoding will always be understood. There is also no ambiguity if data is transferred non-electronically and later has to be converted back to a digital representation. Even when there is a need for compatibility with existing data, systems, protocols and applications, multiple encodings can often be dealt with at the boundaries or outside a protocol, format, or API. The DOM [DOM Level 1] is an example of where this was done. The advantages of choosing a unique encoding become more important the smaller the pieces of text used are and the closer to actual processing the specification is.
[S] When a unique encoding is mandated, the encoding MUST be UTF-8, UTF-16 or UTF-32. [S] If a unique encoding is mandated and compatibility with US-ASCII is desired, UTF-8 (see [RFC 2279]) is RECOMMENDED. In other situations, such as for APIs, UTF-16 or UTF-32 may be more appropriate. Possible reasons for choosing one of these include efficiency of internal processing and interoperability with other processes.
NOTE: The IETF Charset Policy [RFC 2277] specifies that on the Internet "Protocols MUST be able to use the UTF-8 charset".
NOTE: The XML 1.0 specification [XML 1.0] requires all conforming XML processors to accept both UTF-16 and UTF-8.
The MIME Internet specification [MIME] provides a good example of a mechanism for character encoding identification. The MIME charset parameter definition is intended to supply sufficient information to uniquely decode the sequence of bytes of the received data into a sequence of characters. The values are drawn from the IANA charset registry [IANA].
NOTE: In practice there is wide variation among implementations, so uniqueness cannot be depended upon. See the end of 3.5 Reference Processing Model for more information.
NOTE: The term 'charset' derives from 'character set', an expression with a long and tortured history (see [Connolly] for a discussion).
[S] Specifications SHOULD avoid using the expression 'character set', as well as the term 'charset' to refer to a character encoding scheme, except when the latter is used to refer to the MIME charset parameter or its IANA-registered values. The terms 'character encoding' or 'character encoding scheme' are RECOMMENDED.
NOTE: In XML, the XML declaration or the text declaration contains a pseudo-attribute called encoding which identifies the character encoding using an IANA charset name.
The IANA charset registry is the official list of names and aliases for character encodings on the Internet.
[S] If the unique encoding approach is not taken, specifications SHOULD mandate the use of the IANA charset registry names, and in particular the names identified in the registry as 'MIME preferred names', to designate character encodings in protocols, data formats and APIs.
[S] The 'x-' convention for unregistered character encoding names SHOULD NOT be used, having led to abuse in the past ('x-' names continued to be used for widely deployed character encodings even long after an official registration existed).
[I] [C] Content and software that label textual data MUST use one of the names mandated by the appropriate specification (e.g. the XML specification when editing XML text) and SHOULD use the MIME preferred name of an encoding to label data in that encoding.
[I] [C] An IANA-registered charset name MUST NOT be used to label textual data in an encoding other than the one identified in the IANA registration of that name.
[S] If the unique encoding approach is not chosen, specifications MUST designate at least one of the UTF-8 and UTF-16 encoding forms of Unicode as admissible encodings and SHOULD choose at least one of UTF-8 or UTF-16 as mandated encoding forms (encoding forms that MUST be supported by implementations of the specification). [S] Specifications MAY define either UTF-8 or UTF-16 as a default encoding form (or both if they define suitable means of distinguishing them), but they MUST NOT use any other character encoding as a default. [S] Specifications MUST NOT use heuristics to determine the encoding of data.
[I] Receiving software MUST determine the encoding of data from available information according to appropriate specifications.
[I] When an IANA-registered charset name is recognized, receiving software MUST interpret the received data according to the encoding associated with the name in the IANA registry. [I] When no charset is provided, receiving software MUST adhere to the default encoding(s) specified in the specification.
[I] Receiving software MAY recognize as many encodings (names and aliases) as appropriate. A field-upgradeable mechanism may be appropriate for this purpose. Certain encodings are more or less associated with certain languages (e.g. Shift-JIS with Japanese); trying to support a given language or set of customers may mean that certain encodings have to be supported. The encodings that need to be supported may change over time. This document does not give any advice on which encoding may be appropriate or necessary for the support of any given language.
[I] Software MUST completely implement the mechanisms for character encoding identification and SHOULD implement them in such a way that they are easy to use (for instance in HTTP servers). [I] On interfaces to other protocols, software SHOULD support conversion between Unicode encoding forms as well as any other necessary conversions.
[C] Content MUST make use of available facilities for character encoding identification by always indicating character encoding; where the facilities offered for character encoding identification include defaults (e.g. in XML 1.0 [XML 1.0]), relying on such defaults is sufficient to satisfy this identification requirement.
Because of the layered Web architecture (e.g. formats used over protocols), there may be multiple and at times conflicting information about character encoding. [S] Specifications MUST define conflict-resolution mechanisms (e.g. priorities) for cases where there is multiple or conflicting information about character encoding. [I] [C] Software and content MUST carefully follow conflict-resolution mechanisms where there is multiple or conflicting information about character encoding.
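As a non-normative sketch of such a conflict-resolution mechanism, modeled on the rule that protocol-level information outranks in-document declarations (as RFC 3023 specifies for text/xml over HTTP); the function and the default are illustrative:

    def effective_encoding(http_charset=None, xml_decl_encoding=None):
        # Hypothetical priority rule: the protocol-level charset outranks
        # the in-document declaration, which outranks the spec default.
        return http_charset or xml_decl_encoding or 'utf-8'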
Unicode designates certain ranges of code points for private use: the Private Use Area (U+E000-F8FF) and planes 15 and 16 (U+F0000-FFFFD and U+100000-10FFFD). These code points are guaranteed to never be allocated to standard characters, and are available for use by private agreement between a producer and a recipient. However, their use is strongly discouraged, since private agreements do not scale on the Web. Code points from different private agreements may collide, and a private agreement and therefore the meaning of the code points can quickly get lost.
[S] Specifications MUST NOT define any assignments of private use code points. [S] Conformance to a specification MUST NOT require the use of private use area characters. [S] Specifications SHOULD NOT provide mechanisms for agreement on private use code points between parties and MUST NOT require the use of such mechanisms. [S] [I] Specifications and implementations SHOULD be designed in such a way as to not disallow the use of private use code points by private arrangement. As an example, XML does not disallow the use of private use code points.
[S] Specifications MAY define markup to allow the transmission of symbols not in Unicode or to identify specific variants of Unicode characters.
EXAMPLE: MathML (see [MathML2] section 3.2.9) defines an element mglyph for mathematical symbols not in Unicode.
EXAMPLE: SVG (see [SVG] section 10.14) defines an element altGlyph which allows the identification of specific display variants of Unicode characters.
In text-based protocols or formats where characters can be part of either character data or markup (see 3.5 Reference Processing Model), certain characters are often designated as having specific protocol/format functions in certain contexts (e.g. '<' and '&' serve as markup delimiters in HTML and XML). These syntax-significant characters cannot be used to represent themselves in text data in the way all other characters do. In addition, formats are often represented in an encoding that does not allow all characters to be represented directly.
To express syntax-significant or unrepresentable characters, a technique called escaping is used. This works by creating an additional syntactic construct, defining additional characters or defining character sequences that have special meaning. Escaping a character means expressing it using such a construct, appropriate to the format or protocol in which the character appears; expanding an escape (or unescaping) means replacing it with the character that it represents.
Certain guidelines apply to the way specifications define character escapes. [S] The guidelines in this document relating to the definition of character escapes MUST be followed when designing new W3C protocols and formats and SHOULD be followed as much as possible when revising existing protocols and formats.
[S] Specifications MUST NOT invent a new escaping mechanism if an appropriate one already exists.
[S] The number of different ways to escape a character SHOULD be minimized (ideally to one). [A well-known counter-example is that for historical reasons, both HTML and XML have redundant decimal (&#ddddd;) and hexadecimal (&#xhhhh;) escapes.]
[S] Explicit end delimiters MUST be provided. Escapes such as \uABCD, where the end delimiter is a space or any character other than [0-9A-F], SHOULD be avoided. These escapes are not clear visually, and can cause an editor to insert spurious line-breaks when word-wrapping on spaces. Forms like SPREAD's &UABCD; [SPREAD] or XML's &#xhhhh;, where the escape is explicitly terminated by a semicolon, are much better.
[S] Whenever specifications define escapes that allow the representation of characters using a number the number SHOULD be in hexadecimal notation.
[S] Escaped characters SHOULD be acceptable wherever unescaped characters are; this does not preclude that a syntax-significant character, when escaped, loses its significance in the syntax. In particular, escaped characters SHOULD be acceptable in identifiers and comments.
Certain guidelines apply to content developers, as well as to software that generates content:
[I] [C] Escapes SHOULD be avoided when the characters to be expressed are representable in the character encoding of the document.
[I] [C] Since character set standards usually list character numbers as hexadecimal, content SHOULD use the hexadecimal form of escapes when there is one.
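A sketch of an escaping producer following these guidelines, using Python's codec error hooks; the handler name is made up for this example:

    import codecs

    def hex_ncr(exc):
        # Replace only unencodable characters with hexadecimal numeric
        # character references (&#xhhhh;), keeping all others unescaped.
        if not isinstance(exc, UnicodeEncodeError):
            raise exc
        refs = ''.join('&#x%X;' % ord(c) for c in exc.object[exc.start:exc.end])
        return refs, exc.end

    codecs.register_error('hex-ncr', hex_ncr)

    print('suçon'.encode('us-ascii', errors='hex-ncr'))   # b'su&#xE7;on'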
This chapter discusses character data normalization for the Web. 4.1 Motivation discusses the need for normalization, and in particular early uniform normalization. 4.2 Definitions for W3C Text Normalization defines full normalization and gives examples. 4.3 Responsibility for Normalization assigns responsibilities to various components and situations.
As explained at length in Requirements for String Identity Matching and String Indexing [CharReq], the existence, in many character encoding schemes, of multiple representations for what users perceive as the same string makes it necessary to define character data normalization. Without a precise specification, it is not possible to determine reliably whether or not two strings are identical. Such a specification must take into account character encoding, the way normalization is to be performed and where or when (by sender or recipient) to perform it.
String identity is central to the correct functioning of much software, and in particular of large parts of the Web infrastructure (protocols, formats, etc.). Incorrect string matching can have far reaching consequences, including the creation of security holes. Consider a contract, encoded in XML, for buying goods: each item sold is described in an artículo element; unfortunately, "artículo" is subject to different representations in the character encoding of the contract. Suppose that the contract is viewed and signed by means of a user agent that looks for artículo elements, extracts them (matching on the element name), presents them to the user and adds up their prices. If different instances of the artículo element happen to be represented differently in a particular contract, then the buyer and seller may see (and sign) different contracts if their respective user agents perform string identity matching differently, which is fairly likely in the absence of a well-defined specification. The absence of a well-defined specification also means that there is no way to resolve the ensuing contractual dispute.
The Unicode Consortium provides four standard normalization forms (see Unicode Normalization Forms [UTR #15]). For use on the Web, this document defines W3C Text Normalization by picking the most appropriate of these (NFC) and additionally addressing the issues of legacy encodings and of character escapes (which can denormalize text when unescaped).
Roughly speaking, NFC is defined such that combining character sequences (a base character followed by one or more combining characters) are replaced, as far as possible, by canonically equivalent precomposed characters. Text in a Unicode encoding form is said to be in NFC if it doesn't contain any combining sequence that could be replaced and if any remaining combining sequence is in canonical order.
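As a non-normative illustration, in Python (3.8 or later) NFC normalization and the corresponding check are available directly:

    import unicodedata

    decomposed = 'suc\u0327on'               # 'c' + U+0327 COMBINING CEDILLA
    assert not unicodedata.is_normalized('NFC', decomposed)

    precomposed = unicodedata.normalize('NFC', decomposed)
    assert precomposed == 'su\xe7on'         # precomposed U+00E7 'ç'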
This document also specifies that normalization is to be performed early (by the sender) as opposed to late (by the recipient). The reasons for that choice are manifold:
Almost all legacy data as well as data created by current software is normalized (using NFC).
The number of Web components that generate or transform text is considerably smaller than the number of components that receive text and need to perform matching or other processes requiring normalized text.
Current receiving components (browsers, XML parsers, etc.) implicitly assume early normalization by not performing normalization themselves. This is a vast legacy.
Web components that generate and process text are in a much better position to do normalization than other components; in particular, they may be aware that they deal with a restricted repertoire only.
Not all components of the Web that implement functions such as string matching can reasonably be expected to do normalization. This, in particular, applies to very small components and components in the lower layers of the architecture.
Forward-compatibility issues can be dealt with more easily: less software needs to be updated, namely only the software that generates newly introduced characters.
It improves matching in cases where the character encoding is partly undefined, such as URIs [RFC 2396] in which non-ASCII bytes have no defined meaning.
It is a prerequisite for comparison of encrypted strings (see [CharReq], section 2.7).
Text data is, for the purposes of this specification, Unicode-normalized if it is in a Unicode encoding form and is in Unicode Normalization Form C (according to version 3.1.0 of [UTR #15]).
Text data is fully normalized if:
the data is Unicode-normalized and does not contain any character escapes whose unescaping would cause the data to become no longer Unicode-normalized; or
the data is in a legacy encoding and, if it were transcoded to a Unicode encoding form by a normalizing transcoder, the resulting data would satisfy clause 1 above.
In the remainder of this specification, normalized is used to mean 'fully normalized', unless otherwise indicated.
NOTE: A consequence of this definition is that legacy text (i.e. text in a legacy encoding) is always normalized unless i) a normalizing transcoder cannot exist for that encoding (e.g. because the repertoire contains characters not in Unicode) or ii) the text contains escapes which, once expanded, result in un-normalized text.
NOTE: Full normalization is specified against the context of a markup language (or the absence thereof), which specifies the form of escapes. For plain text (no escapes) in a Unicode encoding form, full normalization and Unicode-normalization are equivalent.
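A rough sketch of a full-normalization check, for text whose only escapes are XML/HTML-style numeric character references; a real implementation would also need to handle named entities and whatever other escape syntaxes the governing format defines:

    import re
    import unicodedata

    NCR = re.compile(r'&#(?:x([0-9A-Fa-f]+)|([0-9]+));')

    def expand_ncrs(text):
        # Expand numeric character references such as &#xE7; or &#231;.
        return NCR.sub(lambda m: chr(int(m.group(1), 16) if m.group(1)
                                     else int(m.group(2))), text)

    def is_fully_normalized(text):
        # Clause 1 of the definition: the text is Unicode-normalized and
        # remains so once all character escapes are expanded.
        return (unicodedata.is_normalized('NFC', text) and
                unicodedata.is_normalized('NFC', expand_ncrs(text)))

Run against the examples below, this returns True for 'suçon' and 'su&#xE7;on' but False for the decomposed form 'suc' U+0327 'on' and for 'suc&#x327;on'.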
The string 'suçon', expressed as the sequence of five characters U+0073 U+0075 U+00E7 U+006F U+006E and encoded in a Unicode encoding form, is both Unicode-normalized and fully normalized. The same string encoded in a legacy encoding for which there exists a normalizing transcoder would be fully normalized but not Unicode-normalized (since not in a Unicode encoding form).
In an XML or HTML context, the string 'su&#xE7;on' is also both fully normalized and, if encoded in a Unicode encoding form, Unicode-normalized. Expanding the escape &#xE7; yields 'suçon' as above, which contains no replaceable combining sequence.
The string 'suçon', expressed as the sequence of six characters U+0073 U+0075 U+0063 U+0327 U+006F U+006E (U+0327 is the COMBINING CEDILLA) and encoded in a Unicode encoding form, is neither Unicode-normalized (since the combining sequence U+0063 U+0327 is replaceable by the precomposed U+00E7 'ç') nor fully normalized (since in a Unicode encoding form but not Unicode-normalized).
In an XML or HTML context, the string 'suc&#x327;on' is not fully normalized, regardless of encoding form, because expanding &#x327; yields the sequence 'suc' U+0327 'on', which is not Unicode-normalized ('c' followed by U+0327 is replaceable by 'ç'). Unicode-normalization, however, is defined only for plain text, does not know that &#x327; represents a character in XML or HTML, and considers it just a sequence of characters. Therefore, the string 'suc&#x327;on' in a Unicode encoding form is Unicode-normalized, since it contains no replaceable combining sequence. (The latter example does not imply that Unicode-normalization is sufficient to meet the normalization requirements of the Web; it just illustrates a case where Unicode-normalization and full normalization differ.)
The string '<elem>/ foobar</elem>', where the '/' immediately after <elem> stands for the character U+0338 COMBINING LONG SOLIDUS OVERLAY, is neither Unicode-normalized nor fully normalized, since the U+0338 '/' combines with the '>' (yielding U+226F NOT GREATER-THAN).
NOTE: From this example, it follows that it is impossible to produce a normalized XML or HTML document containing the character U+0338 COMBINING LONG SOLIDUS OVERLAY immediately following an element tag, comment, CDATA section or processing instruction. It is noteworthy that U+0338 COMBINING LONG SOLIDUS OVERLAY also combines with '<', yielding U+226E NOT LESS-THAN. Consequently, U+0338 COMBINING LONG SOLIDUS OVERLAY should remain excluded from XML identifiers.
This section defines the responsibilities for normalization for various components and situations, based on the goal of early uniform normalization.
[C] All content on the Web MUST be fully normalized.
[I] Producers MUST produce text data in normalized form, unless they are willing to accept the consequences (loss of integrity and security, high probability of rejection by recipients) of un-normalized data. For the purpose of W3C specifications and their implementations, the producer of text data is the sender of the data in the case of protocols and the tool that produces the data in the case of formats.
NOTE: As an optimization, it is perfectly acceptable for a system to define the producer to be the actual producer (e.g. a small device) together with a remote component (e.g. a server serving as a kind of proxy) to which normalization is delegated. In such a case, the communications channel between the device and proxy server is considered to be internal to the system, not part of the Web. Only data normalized by the proxy server is to be exposed to the Web at large, as shown in the illustration below:
NOTE: Normalization is the responsibility of the producer as a whole. This specification does not assign responsibility for normalization to any particular component of the producer (for instance a DOM implementation).
NOTE: Implementers of producer software are encouraged to delegate normalization to their respective data sources wherever possible. Examples of data sources are operating systems, libraries, and keyboard drivers. One way of ensuring that user input results in normalized data is to not provide any way of creating denormalized data.
[I] The recipients of text data MUST verify the normalization of data they receive and reject un-normalized data, unless they are willing to accept the consequences (loss of integrity and security) of un-normalized data. [I] Recipients MUST NOT normalize the data that they receive. [I] Recipients which transcode text data from a legacy encoding to a Unicode encoding form MUST use a normalizing transcoder.
NOTE: The prohibition of normalization by recipients is necessary to avoid the security issues mentioned in section 4.1 Motivation.
[I] When a recipient returns un-normalized text to a sender (e.g. to indicate an error or fault), that recipient MAY return the text without normalizing it.
[I] If a software module functions as both a producer and a recipient of text data (e.g. a browser/editor), normalization MUST be applied in the producer part but MUST NOT be applied in the recipient part.
[I] Intermediate (recipient/producer) components whose role involves modification of text data MUST ensure that their modifications do not result in denormalization of any data exposed (sent on the network, saved to disk, returned in an API call, etc.).
NOTE: Consequently, an intermediate component, in a system that packages a payload in some control information, may modify the control information without having to renormalize the payload.
[I] Software MUST behave as if normalization took place after each modification, so that any subsequent matching, indexing or other normalization-sensitive operations always behave as if they were dealing with normalized data.
EXAMPLE: If the 'z' is deleted from the (normalized) string 'cz¸' (where '¸' represents a combining cedilla, U+0327), normalization is necessary to turn the denormalized result 'c¸' into the properly normalized 'ç'. Analogous cases exist for insertion and concatenation. If the software that deletes the 'z' later uses the string in a normalization-sensitive operation, it needs to normalize the string before this operation to ensure correctness; otherwise, normalization may be deferred until the data is exposed.
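The following Python sketch reproduces this example, with NFC standing in for full normalization:

```python
import unicodedata

# The (normalized) string 'c', 'z', U+0327: the cedilla applies to
# 'z', and since no precomposed z-with-cedilla exists, this is NFC.
s = "cz\u0327"
assert unicodedata.is_normalized("NFC", s)

# Deleting the 'z' strands the combining cedilla on the 'c'; the
# result 'c' + U+0327 is no longer in NFC.
t = s.replace("z", "")
assert not unicodedata.is_normalized("NFC", t)

# Re-normalizing turns it into the precomposed 'ç' (U+00E7).
assert unicodedata.normalize("NFC", t) == "\u00E7"
```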
[I] Intermediate components whose role does not involve modification of the data (e.g. caching proxies) MUST NOT reject un-normalized data and MUST NOT perform normalization.
[S] In specifications of markup languages, syntax-significant characters MUST be chosen that do not combine with any other characters in NFC. This is to avoid problems such as U+0338 COMBINING LONG SOLIDUS OVERLAY combining with the '<' and '>' delimiters in XML (see the last example in section 4.2.3 Examples above).
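This hazard is easy to demonstrate; in Python, for instance:

```python
import unicodedata

# '<' followed by U+0338 COMBINING LONG SOLIDUS OVERLAY composes
# under NFC into U+226E NOT LESS-THAN, destroying the delimiter.
assert unicodedata.normalize("NFC", "<\u0338") == "\u226E"
```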
This specification does not address the suitability of particular characters for use in markup languages, in particular formatting characters and compatibility equivalents. For detailed recommendations about the use of compatibility and formatting characters, see Unicode in XML and other Markup Languages [UXML].
[S] Specifications SHOULD exclude compatibility characters in the syntactic elements (markup, delimiters, identifiers) of the formats they define.
One important operation that depends on early normalization is string identity matching [CharReq], which is a subset of the more general problem of string matching. There are various degrees of specificity for string matching, from approximate matching such as regular expressions or phonetic matching, to more specific matches such as case-insensitive or accent-insensitive matching and finally to identity matching. In the Web environment, where multiple encodings are used to represent strings, including some encodings which allow multiple representations for the same thing, identity is defined to occur if and only if the compared strings contain no user-identifiable distinctions. This definition is such that strings do not match when they differ in case or accentuation, but do match when they differ only in non-semantically significant ways such as encoding, use of escapes (of potentially different kinds), or use of precomposed vs. decomposed character sequences.
To avoid unnecessary conversions and, more importantly, to ensure predictability and correctness, it is necessary for all components of the Web to use the same identity testing mechanism. Conformance to the rule that follows meets this requirement and supports the above definition of identity. [S] [I] String identity matching MUST be performed as if the following steps were followed:
1. Early uniform normalization to fully normalized form, as defined in 4.2.2 Fully Normalized Text. In accordance with section 4 Early Uniform Normalization, this step MUST be performed by the producers of the strings to be compared.
2. Conversion to a common encoding of UCS, if necessary.
3. Expansion of all escapes.
4. Testing for bit-by-bit identity.
Step 1 ensures 1) that the identity matching process can produce correct results using the next three steps and 2) that a minimum of effort is spent on solving the problem.
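A minimal Python sketch of these steps might look as follows; it assumes Python's str as the common UCS encoding (step 2) and XML/HTML character references as the escape syntax in force (step 3), both illustrative choices:

```python
import html
import unicodedata

def identity_match(a: str, b: str) -> bool:
    for s in (a, b):
        # Step 1 is the producers' duty; the consumer relies on it
        # (here: checks it, with NFC standing in for full normalization).
        if not unicodedata.is_normalized("NFC", s):
            raise ValueError("producer sent un-normalized text")
    # Step 3: expand all escapes.
    a, b = html.unescape(a), html.unescape(b)
    # Step 4: bit-for-bit (code point by code point) comparison.
    return a == b

# 'suçon' with a precomposed ç matches the same string written
# with a character reference for the ç:
assert identity_match("su\u00E7on", "su&#xE7;on")
```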
[S] [I] Forms of string matching other than identity SHOULD be based on the steps specified in this document for string identity matching. Taking into account normalization and escapes is necessary so that, for example, a case-insensitive match of 'suçon' against 'suc&#x327;on' or against 'SUC&#x327;ON' returns TRUE.
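A sketch of such a derived match, using Python's casefold() for the case-insensitive step and omitting escape expansion for brevity:

```python
import unicodedata

def caseless_match(a: str, b: str) -> bool:
    # Identity-matching steps plus case folding; normalizing again
    # after casefold() guards against folds that denormalize.
    fold = lambda s: unicodedata.normalize(
        "NFC", unicodedata.normalize("NFC", s).casefold())
    return fold(a) == fold(b)

# 'suçon' vs. the decomposed upper-case form 'SUC' + U+0327 + 'ON':
assert caseless_match("su\u00E7on", "SUC\u0327ON")
```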
NOTE: The expansion of escapes (step 3 above) is dependent on context, i.e. on which markup or programming language is considered to apply when the string matching operation is performed. Consider a search for the string 'suçon' in an XML document containing 'suc&#x327;on' but not 'suçon'. If the search is performed in a plain text editor, the context is plain text (no markup or programming language applies); the '&#x327;' escape is not recognized, hence not expanded, and the search fails. If the search is performed in an XML browser, the context is XML, the escape (defined by XML) is expanded, and the search succeeds.
An intermediate case would be an XML editor that purposefully provides a view of an XML document with entity references left unexpanded. In that case, a search over that pseudo-XML view will deliberately not expand entities: in that particular context, entity references are not considered escapes and need not be expanded.
There are many situations where a software process needs to access a substring or to point within a string and does so by the use of indices, i.e. numeric "positions" within a string. Where such indices are exchanged between components of the Web, there is a need for an agreed-upon definition of string indexing in order to ensure consistent behavior. The requirements for string indexing are discussed in Requirements for String Identity Matching [CharReq], section 4. The two main questions that arise are: "What is the unit of counting?" and "Do we start counting at 0 or 1?".
Depending on the particular requirements of a process, the unit of counting may correspond to any of the definitions of a string provided in section 3.4 Strings. In particular:
[S] [I] The character string is RECOMMENDED as a basis for string indexing. (Example: the XML Path Language [XPath]).
[S] [I] A code unit string MAY be used as a basis for string indexing if this results in a significant improvement in the efficiency of internal operations when compared to the use of character string. (Example: the use of UTF-16 in [DOM Level 1]).
Counting graphemes will become a good option where user interaction is the primary concern, once a suitable definition is widely accepted.
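The difference between the two counting units above is easy to observe; for example, in Python, whose len() counts code points:

```python
# One supplementary-plane character is one code point but two
# UTF-16 code units (a surrogate pair).
s = "a\U00010400b"                           # U+10400 DESERET CAPITAL LONG I
assert len(s) == 3                           # character-string count
assert len(s.encode("utf-16-le")) // 2 == 4  # code-unit-string count
```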
It is noteworthy that there exist other, non-numeric ways of identifying substrings which have favorable properties. For instance, substrings based on string matching are quite robust against small edits; substrings based on document structure (in structured formats such as XML) are even more robust against edits and even against translation of a document from one human language to another. [S] Specifications that need a way to identify substrings or point within a string SHOULD provide ways other than string indexing to perform this operation. [I] [C] Users of specifications (software developers, content developers) SHOULD whenever possible prefer ways other than string indexing to identify substrings or point within a string.
Experience shows that more general, flexible and robust specifications result when individual characters are understood and processed as substrings, identified by a position before and a position after the substring. Understanding indices as boundary positions between the counting units also makes it easier to relate the indices resulting from the different string definitions. [S] Specifications SHOULD understand and process single characters as substrings, and treat indices as boundary positions between counting units, regardless of the choice of counting units.
[S] Specifications of APIs SHOULD NOT specify single-character or single-code-unit arguments or return types.
EXAMPLE: uppercase('ß') cannot return the proper result (the two-character string 'SS') if the return type of the uppercase function is defined to be a single character.
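Languages whose case-mapping functions return strings avoid this trap; in Python, for instance:

```python
# upper() is typed str -> str, so the one-to-two case mapping
# of 'ß' (U+00DF) is representable:
assert "\u00DF".upper() == "SS"
```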
The issue of index origin, i.e. whether we count from 0 or 1, actually arises only after a decision has been made on whether it is the units themselves that are counted or the positions between the units. [S] When the positions between the units are counted for string indexing, starting with an index of 0 for the position at the start of the string is the RECOMMENDED solution, with the last index then being equal to the number of counting units in the string.
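For illustration, Python's slicing follows exactly this boundary-position convention, counting code points:

```python
# The substring between boundaries i and j is s[i:j]; a single
# character is the substring between boundaries i and i+1, and
# boundary 0 precedes the first character.
s = "su\u00E7on"           # 'suçon', 5 counting units
assert s[0:1] == "s"       # between boundaries 0 and 1
assert s[2:3] == "\u00E7"  # 'ç', between boundaries 2 and 3
assert len(s) == 5         # the last boundary index
```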
According to the definition in RFC 2396 [RFC 2396], URI references are restricted to a subset of US-ASCII, with an escaping mechanism to encode arbitrary byte values using the %HH convention. However, the %HH convention by itself is of limited use because there is no definitive mapping from characters to bytes. Also, non-ASCII characters cannot be used directly. Internationalized Resource Identifiers (IRI) [I-D URI-I18N] solves both problems with a uniform approach that conforms to the Reference Processing Model.
[S] W3C specifications that define protocol or format elements (e.g. HTTP headers, XML attributes, etc.) which are to be interpreted as URI references (or specific subsets of URI references, such as absolute URI references, URIs, etc.) MUST use Internationalized Resource Identifiers (IRI) [I-D URI-I18N] (or an appropriate subset thereof). [S] W3C specifications MUST define when the conversion from IRI references to URI references (or subsets thereof) takes place, in accordance with Internationalized Resource Identifiers (IRI) [I-D URI-I18N].
NOTE: Many current W3C specifications already contain provisions in accordance with Internationalized Resource Identifiers (IRI) [I-D URI-I18N]. For XML 1.0 [XML 1.0], see Section 4.2.2, External Entities, and Erratum E26. XML Schema Part 2: Datatypes [XML Schema-2] provides the anyURI datatype (see Section 3.2.17). The XML Linking Language (XLink) [XLink] provides the href attribute (see Section 5.4, Locator Attribute). Further information and links can be found at Internationalization: URIs and other identifiers [Info URI-I18N].
[S] W3C specifications that define new syntax for URIs, such as a new URI scheme or a new kind of fragment identifier, MUST specify that characters outside the US-ASCII repertoire are encoded using UTF-8 and %HH-escaping, in accordance with Guidelines for new URL Schemes [RFC 2718], Section 2.2.5. This will make sure that these schemes or fragment identifiers can be used in IRIs in the natural way.
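For illustration, the following Python fragment performs this UTF-8-plus-%HH conversion on a hypothetical path (urllib's quote encodes non-ASCII characters as UTF-8 by default and escapes each byte):

```python
from urllib.parse import quote

# IRI-to-URI conversion in miniature: 'ü' (U+00FC) becomes the
# UTF-8 byte sequence 0xC3 0xBC, escaped as %C3%BC.
assert quote("/d\u00FCrst") == "/d%C3%BCrst"
```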
Specifications often need to make references to the Unicode standard or International Standard ISO/IEC 10646. Such references must be made with care, especially when normative. The questions to be considered are:
Which standard should be referenced?
How to reference a particular version?
When to use versioned vs unversioned references?
ISO/IEC 10646 is developed and published jointly by ISO (the International Organization for Standardization) and the IEC (the International Electrotechnical Commission). The Unicode Standard is developed and published by the Unicode Consortium, an organization of major computer corporations, software producers, database vendors, national governments, research institutions, international agencies, various user groups, and interested individuals. The Unicode Standard is comparable in standing to W3C Recommendations.
ISO/IEC 10646 and Unicode define exactly the same CCS (same repertoire, same code points) and encoding forms. They are actively maintained in synchrony by liaisons and overlapping membership between the respective technical committees. In addition to the jointly defined CCS and encoding forms, the Unicode Standard adds normative and informative lists of character properties, normative character equivalence and normalization specifications, a normative algorithm for bidirectional text and a large amount of useful implementation information. In short, Unicode adds semantics to the characters that ISO/IEC 10646 merely enumerates. Conformance to Unicode implies conformance to ISO/IEC 10646, see [Unicode 3.0] Appendix C.
[S] Since specifications in general need both a definition for their characters and the semantics associated with these characters, specifications SHOULD include a reference to the Unicode Standard, whether or not they also reference ISO/IEC 10646. By referencing the Unicode Standard, a specification lets implementers benefit from the wealth of information provided in the standard and on the Unicode Consortium Web site.
The fact that both ISO/IEC 10646 and Unicode are evolving (in synchrony) raises the issue of versioning: should a specification refer to a specific version of the standard, or should it make a generic reference, so that the normative reference is to the version current at the time of reading the specification? In general the answer is both. [S] A generic reference to the Unicode Standard MUST be made if it is desired that characters allocated after a specification is published are usable with that specification. A specific reference to the Unicode Standard MAY be included to ensure that functionality depending on a particular version is available and will not change over time (an example would be the set of characters acceptable as Name characters in XML 1.0 [XML 1.0], which is an enumerated list that parsers must implement to validate names).
NOTE: See http://www.unicode.org/unicode/standard/versions/#Citations for guidance on referring to specific versions of Unicode.
A generic reference can be formulated in two ways:
By explicitly including a generic entry in the bibliography section of a specification and simply referring to that entry in the body of the specification. Such a generic entry contains text such as "... as it may from time to time be revised or amended".
By including a specific entry in the bibliography and adding text such as "... as it may from time to time be revised or amended" at the point of reference in the body of the specification.
It is an editorial matter, best left to each specification, which of these two formulations is used. Examples of the first formulation can be found in the bibliography of this specification (see the entries for [ISO/IEC 10646] and [Unicode]). Examples of the latter, as well as a discussion of the versioning issue with respect to MIME charset parameters for UCS encodings, can be found in [RFC 2279] and [RFC 2781].
[S] All generic references to Unicode MUST refer to Unicode 3.0 [Unicode 3.0] or later. [S] Generic references to ISO/IEC 10646 MUST be written such that they make allowance for the future publication of additional parts of the standard. They MUST refer to ISO/IEC 10646-1:2000 [ISO/IEC 10646-1:2000] or later, including any amendments.
A few examples will help make sense of all this complexity of text in computers (which is mostly a reflection of the complexity of human writing systems). Let us start with a very simple example: a user, equipped with a US-English keyboard, types "Foo", which the computer encodes as 16-bit values (the UTF-16 encoding of Unicode) and displays on the screen.
| Keystrokes | Shift-f | o | o |
| --- | --- | --- | --- |
| Input characters | F | o | o |
| Encoded characters (byte values in hex) | 0046 | 006F | 006F |
| Display | Foo | | |
The only complexity here is the use of a modifier (Shift) to input the capital 'F'.
A slightly more complex example is a user typing 'çé' on a traditional French-Canadian keyboard, which the computer again encodes in UTF-16 and displays. We assume that this particular computer uses a fully composed form of UTF-16.
| Keystrokes | ¸ | c | é |
| --- | --- | --- | --- |
| Input characters | | ç | é |
| Encoded characters (byte values in hex) | | 00E7 | 00E9 |
| Display | çé | | |
A few interesting things are happening here: when the user types the cedilla ('¸'), nothing happens except for a change of state of the keyboard driver; the cedilla is a dead key. When the driver gets the c keystroke, it provides a complete 'ç' character to the system, which represents it as a single 16-bit code unit and displays a 'ç' glyph. The user then presses the dedicated 'é' key, which results in, again, a character represented by two bytes. Most systems will display this as one glyph, but it is also possible to combine two glyphs (the base letter and the accent) to obtain the same rendering.
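The encoded form in the table is easy to check in Python, assuming the fully composed representation described above:

```python
# Two characters, each a single 16-bit code unit (shown here
# big-endian, without a byte order mark).
assert "\u00E7\u00E9".encode("utf-16-be").hex() == "00e700e9"
```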
On to a Japanese example: our user employs a romaji input method to type "日本語" (nihongo, "the Japanese language"), which the computer encodes in UTF-16 and displays.
| Keystrokes | n i h o n g o <space> <return> | | |
| --- | --- | --- | --- |
| Input characters | 日 | 本 | 語 |
| Encoded characters (byte values in hex) | 65E5 | 672C | 8A9E |
| Display | 日本語 | | |
The interesting aspect here is input: the user types Latin characters, which are converted on the fly to kana (not shown here), and then to kanji when the user requests conversion by pressing <space>; the kanji characters are finally sent to the application when the user presses <return>. The user has to type a total of nine keystrokes before the three characters are produced, which are then encoded and displayed rather trivially.
An Arabic example will show different phenomena:
| Keystrokes | ل | ا | لا (lam-alef key) | | ع | ع |
| --- | --- | --- | --- | --- | --- | --- |
| Input characters | ل | ا | ل | ا | ع | ع |
| Encoded characters (byte values in hex) | 0644 | 0627 | 0644 | 0627 | 0639 | 0639 |
| Display | لالاعع | | | | | |
Here the first two keystrokes each produce an input character and an encoded character, but the pair is displayed as a single glyph ('', a lam-alef ligature). The next keystroke is a lam-alef, which some Arabic keyboards have; it produces the same two characters which are displayed similarly, but this second lam-alef is placed to the left of the first one when displayed. The last two keystrokes produce two identical characters which are rendered by two different glyphs (a medial form followed to its left by a final form). We thus have 5 keystrokes producing 6 characters and 4 glyphs laid out right-to-left.
A final example in Tamil, typed with an ISCII keyboard, will illustrate some additional phenomena:
| Keystrokes | ட | ா | ங | ் | க | ோ |
| --- | --- | --- | --- | --- | --- | --- |
| Input characters | ட | ா | ங | ் | க | ோ |
| Encoded characters (byte values in hex) | 0B9F | 0BBE | 0B99 | 0BCD | 0B95 | 0BCB |
| Display | டாங்கோ | | | | | |
Here input is straightforward, but note that contrary to the preceding accented Latin example, the diacritic '்' (virama, vowel killer) is entered after the 'ங' to which it applies. Rendering is interesting for the last two characters. The last one ('ோ') clearly consists of two glyphs which surround the glyph of the next to last character ('க').
A number of operations routinely performed on text can be impacted by the complexities of the world's writing systems. An example is the operation of selecting text on screen by a pointing device in a bidirectional (bidi) context (see 3.1.3 Units of Visual Rendering). Let us look at some bidi text, in this case Arabic letters (written right-to-left) mixed with Arabic-Indic digits (written left-to-right):
(Illustration omitted: a two-row table contrasting the characters in memory order with their reordered arrangement on screen.)
Special thanks go to Ian Jacobs for ample help with editing. Tim Berners-Lee and James Clark provided important details in the section on URIs. The W3C I18N WG and IG, as well as others, provided many comments and suggestions.
Replaced much of chapter 8 content with references to [I-D URI-I18N].
Made numerous further changes listed in Character Model for the World Wide Web 1.0 Last Call Comments (Members only).
Converted to XHTML with UTF-8 encoding.
Normalization: changed from "recipients MUST NOT normalize" to "recipients MUST check and reject un-normalized data".
Clarified conformance model, in particular introduced [S][I][C] specifiers for requirements.
Made numerous other changes listed in Character Model for the World Wide Web 1.0 Last Call Comments (Members only).
Fixed countless typos and unclear/ambiguous sentences.
Updated references.