In this section [[RFC2119]] keywords in uppercase italics have their usual meaning. We differentiate best practices, which should be adopted by all specifications, from recommendations, which require additional standardization or which are speculative prior to adoption.
Best practices appear with a different background color and decoration like this.
Gaps or recommendations for future work are listed as issues or displayed like this.
The main issue is how to establish a common serialization agreement between producers and consumers of data values so that each knows how to encode, find, and interpret the language and base direction of each data field. The use of metadata for supplying both the language and base direction of natural language string fields ensures that the necessary information is present, can be supplied and extracted with the minimal amount of processing, and does not require producers or consumers to scan or alter the data.
This document describes a number of approaches for identifying language and direction information for strings. These include the following:
fields that set a default language and direction for all strings in that resource
string-specific fields or string datatypes to specify language and direction
first-strong heuristics
first-strong heuristics augmented by directional markers at the start of the string
string-internal markup
inference of direction from special applications of language data.
The use of some of these approaches precludes the use of others, and in some cases several approaches may need to be specified together to cater for fallback situations.
General best practices
Specifications MUST NOT treat syntactic content values as "displayable".
While the value of a syntactic content item or user-supplied value will often be meaningful, implementers should be reminded that in most instances these values must be wrapped with localizable display strings for presentation to the user, particularly in cases where the values are enumerated in advance.
Error messages are not syntactic content. They consist of localizable content and should be treated as such.
Specifications SHOULD NOT use the Unicode "language tag" characters (code points U+E0000 to U+E007F) for language identification.
[[Unicode]] says that the ... use of tag characters to convey language tags is strongly discouraged and that the use of the character U+E0001 LANGUAGE TAG is strongly discouraged . The only current use of characters in this block of Unicode is to form various flag emoji.
Resource-wide defaults
Many resources use only a single language and have a consistent base text direction. For efficiency, the following are best practices:
Define a rule or a field to provide the default language and base direction for all strings in a given resource.
Specifications MUST NOT assume that a document-level default is sufficient.
Document level defaults, when combined with per-field metadata, can reduce the overall complexity of a given document instance, since the language and direction values don't have to be repeated across many fields. However, they do not solve all language or directionality problems, and so it must be possible to override the default on a string-by-string basis, where necessary.
Specify that, in the absence of other information, the default direction and default language are unknown.
Explicit metadata, if available, trumps the need for heuristics to be applied. This is logical, since the heuristic method cannot reliably deduce the necessary direction on its own, and if metadata has been explicitly provided there is an indication that it is intended to be authoritative.
It is essential for a consumer to know that language and direction are unknown quantities in order for them to know when to apply fallback strategies to the data (this could include language-detection, or first-strong heuristics for direction). In particular, the default direction should not be set to LTR, since that would override the need for first-strong detection, which is more appropriate for strings written in a RTL script.
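As an informal illustration of the fallback behaviour just described, here is a minimal TypeScript sketch (all names, such as LocalizableString and resolveDirection, are hypothetical and not defined by any specification): it returns explicit string-level metadata when present, then any resource-wide default, and only then applies a heuristic, reporting "unknown" when nothing applies.

type Direction = "ltr" | "rtl";

// Hypothetical shape of a string carrying optional direction metadata.
interface LocalizableString {
  value: string;
  dir?: Direction;        // string-specific metadata, if supplied
}

// Very rough first-strong fallback: only a couple of RTL scripts are
// checked here; a real implementation would use the Unicode Bidi_Class
// property and the skipping rules described later in this document.
function firstStrong(text: string): Direction | undefined {
  for (const ch of text) {
    if (/[\p{Script=Arabic}\p{Script=Hebrew}]/u.test(ch)) return "rtl";
    if (/\p{Letter}/u.test(ch)) return "ltr";
  }
  return undefined;
}

// Resolution order sketched from the practices above: explicit
// string-level metadata, then a resource-wide default, then heuristics,
// otherwise "unknown".
function resolveDirection(
  s: LocalizableString,
  resourceDefault?: Direction
): Direction | "unknown" {
  if (s.dir) return s.dir;
  if (resourceDefault) return resourceDefault;
  return firstStrong(s.value) ?? "unknown";
}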
Use of [[JSON-LD]] @context and the built-in @language attribute is RECOMMENDED as a document level default.
For document formats that use it, [[JSON-LD]] includes some data structures that are helpful in assigning language (but not base direction) metadata to collections of strings (including entire resources). Notably, it defines what it calls “string internationalization” in the form of a context-scoped @language value which can be associated with blocks of JSON or with individual objects. There is no definition for base direction, so the @context mechanism does not currently address all concerns raised by this document.
String-specific language information
Low-level support for natural language string metadata is widespread because the use of metadata for storage and interchange of the language of data values is long-established and widely supported in the basic infrastructure of the Web. This includes language attributes in [[XML]] and [[HTML]]; string types in schema languages (e.g. [[xmlschema11-2]]) or the various RDF specifications including [[JSON-LD]]; or protocol- or document format-specific provisions for language.
Use of [[JSON-LD]] plain string literals is RECOMMENDED as a way to provide string-specific language information.
Some datatypes, such as [[RDF-PLAIN-LITERAL]], already exist that allow for language metadata to be serialized as part of a string value. Examples include:
"title": "تصميم و إنشاء مواقع الويب@ar",
"tags": [ "HTML@en", "CSS@en", "تصميم المواقع@ar" ]
"id": "978-111887164-5@und"
String-specific directional information
First-strong heuristics are ineffective when a default direction has been set for all strings, since metadata overrides (intentionally) the value of the first-strong character. Therefore it is necessary to use explicitly provided field data to override the default. Even if an RLM character has been prepended to a string, the default metadata overrides it.
The use of metadata for indicating base direction is also preferred because it avoids requiring the consumer to infer the direction using methods such as first-strong detection, or to use methods that require modification of the data itself (such as the insertion of RLM/LRM markers or bidirectional controls).
Schema languages, such as the RDF suite of specifications, have no in-built mechanism for associating base direction metadata with natural language string values.
There is no built-in attribute for base direction in [[JSON-LD]]. There needs to be a corresponding built-in attribute (e.g. a dir attribute) or de facto convention for indicating document-level base direction.
For the case where the resource-wide setting is not available, specify that consumers should use first-strong heuristics to identify the base direction of strings.
For the case where the resource-wide setting is available but not used, specify that consumers should fall back to first-strong heuristics to identify the base direction of strings.
If metadata is not available, consumers of strings should use heuristics, preferably based on the Unicode Standard's first-strong detection algorithm, to detect the base direction of a string.
The first-strong algorithm looks for the first strongly-directional character in a string (skipping certain preliminary substrings), and assumes that it represents the base direction for the string as a whole. However, the first strong directional character doesn't always coincide with the required base direction for the string as a whole, so it should be possible to provide metadata, where needed, to address this problem.
If relying on first-strong heuristics, encourage content developers to use RLM/LRM at the beginning of a string where it is necessary to force a particular base direction, but do not prepend one of these characters to existing strings.
Do not rely on the availability of RLM/LRM formatting characters in most cases.
If string data is being provided by users or content developers in web forms or other simple environments, users may not be able to enter these formatting characters. In fact, most users will probably be unaware that such characters exist, or how to use them. Moreover, if a web form sets the base direction for the input field (which it should), these characters are not needed to make the data display correctly in the form itself.
Not all resources make use of the available metadata mechanisms. The script subtag of a language tag (or the "likely" script subtag based on [[BCP47]] and [[LDML]]) can sometimes be used to provide a base direction when other data is not available. Note that using language information is a "last resort" and specifications SHOULD NOT use it as the primary way of indicating direction: make the effort to provide for metadata.
Other approaches
Combining both language and direction metadata in a single structure, if consistently adopted, makes interchange between different formats easier. Consistency between different specifications and document formats allows for the easy interchange of string data. By naming field attributes in the same way and adopting the same semantics, different specifications can more easily extract values from or add values into resources from other data sources.
Another way to say this is: do not require implementations to modify data passing through them . Unicode bidi control characters might be found in a particular piece of string content, where the producer or data source has used them to make the text display properly. That is, they might already be part of the data. Implementations should not disturb any controls that they find—but they shouldn't be required to produce additional controls on their own.
Specifications SHOULD recommend the use of language indexing when Localizable strings can be supplied in multiple languages for the same value.
Producers sometimes need to supply multiple language values (see Localization Considerations) for the same content item or data record. One use for this is language negotiation by the consumer.
[[JSON-LD]] language indexing should be modified to support the use of Localizable values in language indexing .
Here is the record used in the original example with a record-level default language and base direction added. It also shows the use of a Localizable string to override the document-level defaults for the author field. Note that this "worked example" is not valid.
{
"@context": {
"@language": "ar",
"@dir": "rtl"
},
"id": {"978-111887164-5"},
"title": "HTML و CSS: تصميم و إنشاء مواقع الويب ",
"authors": [ {"value": "Jon Duckett", "lang": "en", "dir": "ltr"} ],
"pubDate": "2008-01-01",
"publisher": "مكتبة",
"coverImage": "https://example.com/images/html_and_css_cover.jpg",
// etc.
},
Defining Bidirectional Keywords in Specifications
A specification for a document format or protocol that includes natural language text will need to define a data field or attribute to store the direction of that natural language content. These definitions need to be consistent across the Web in order to ensure interoperability, as consumers of one document format will need to map the base direction to fields in documents that they produce or control the base direction in text fields for display. This section describes how to provide such a definition along with the specific content to use.
There are two common use cases for defining content direction:
A field direction value is a data field stored or exchanged with a natural language string giving its base direction.
A display direction attribute is a field or value, usually represented by an attribute in markup languages, that controls the base direction of a span of content.
Example of a field direction value. In this JSON fragment, the title structure contains a direction field which gives the field direction of the value field.
"title": {
"value": "HTML و CSS: تصميم و إنشاء مواقع الويب",
"direction": "rtl",
"language": "ar"
}
Example of a display direction attribute. If the above JSON were received by a process that was assembling a Web page for display, it might be filling in a template similar to the top line in this example to produce markup like the second line. Here the dir attribute from [[HTML]] is an example of a display direction attribute.
<p dir={$title.direction}>{$title.value}</p>
<p dir="rtl">HTML و CSS: تصميم و إنشاء مواقع الويب</p>
The name direction is preferred for data values. The name dir is an acceptable alternative.
The name dir is preferred for an attribute, such as in markup languages. Using direction for an attribute is not recommended, since it is long and relatively uncommon for this use case. Note that both [[HTML]] and [[XML10]] have a built-in dir attribute. A dir attribute should have scope within a document and should be defined to provide bidi isolation.
Define the values of a field direction to include and be limited to ltr and rtl.
The value auto SHOULD NOT be used as a field direction value: omitting the direction is preferred when the content direction is not known.
The value ltr indicates a base direction of left-to-right, in exactly the same manner indicated by CSS writing modes [[CSS-WRITING-MODES-4]].
The value rtl indicates a base direction of right-to-left, in exactly the same manner indicated by CSS writing modes [[CSS-WRITING-MODES-4]].
The value auto indicates that the user agent uses the first strong character of the content to determine the base direction, using the algorithm for auto found in [[HTML]].
The heuristic used by auto just looks at the first character with a strong directionality, in a manner analogous to the Paragraph Level determination in the bidirectional algorithm [[UAX9]]. Authors are urged to only use this value as a last resort when the direction of the text is truly unknown and no better server-side heuristic can be applied.
Requirements and Use Cases
For a detailed set of example use cases, please read the article Use cases for bidi and language metadata on the Web . This section summarises some key points related to the need for language and direction metadata.
Why is this important?
Information about the language of content is important when processing and presenting localizable content for a variety of reasons. When language information is not present, the resulting degradation in appearance or functionality can frustrate users, render the content unintelligible, or disable important features. Some of the affected processes include:
Selection of fonts and configuration of rendering options to enable the proper display of different languages. This includes prevention of problems such as:
"ransom noting" (showing text using multiple different fonts)
language-specific glyph selection, especially the selection of the correct Chinese/Japanese/Korean font, due to important presentational variations for the same characters in these languages
displaying blanks, spaces, question marks, or making characters disappear altogether, due to the lack of glyphs in the selected font
Spell checking and other content processing (such as abuse detection, hyphenation, line-breaking, case conversion, etc.)
Indexing, search, and other natural language text operations
Filtering according to intended audience and language negotiation
Selection of a text-to-speech voice and processor, such as used for accessibility or in a voice-based interface
Similarly, direction metadata is important to the Web. When a string
contains text in a script that runs right-to-left (RTL), it must be
possible to eventually display that string correctly when it reaches an
end user. For that to happen, it is necessary to establish what base
direction needs to be applied to the string as a whole. The
appropriate base direction cannot always be deduced by simply looking
at the string; even where it is possible, the producer and consumer of
the string need to use the same heuristics to interpret the
direction.
Static content, such as the body of a Web page or the contents of an
e-book, often has language or direction information provided by the document format
or as part of the content metadata. Data formats found on the Web
generally do not supply this metadata. Base specifications such as
Microformats, WebIDL, JSON, and more, have tended to store natural
language text in string objects, without additional metadata.
This places a burden on application authors and data format
designers to provide the metadata on their own initiative. When
standardized formats do not address the resulting issues, the result
can be that, while the data arrives intact, its processing or
presentation cannot be wholly recovered.
In a distributed Web, any consumer can also be a producer for some other process or system. Thus, a given consumer might need to pass language and direction metadata from one document format (and using one agreement ) to another consumer using a different document format. Lack of consistency in representing language and direction metadata in serialization agreements poses a threat to interoperability and a barrier to consistent implementation.
An example
Suppose that you are building a Web page to show a
customer's library of e-books. The e-books exist in a catalog of data
and consist of the usual data values. A JSON file for a single entry
might look something like:
{
"id": "978-111887164-5",
"title": "HTML و CSS: تصميم و إنشاء مواقع الويب ",
"authors": [ "Jon Duckett" ],
"language": "ar",
"pubDate": "2008-01-01",
"publisher": "مكتبة",
"coverImage": "https://example.com/images/html_and_css_cover.jpg",
// etc.
},
Each of the above is a data field in a database somewhere. There is even information about what language the book is in ("language": "ar").
A well-internationalized catalog would include additional metadata to what is shown above. That is, for each of the fields containing localizable content, such as the title and authors fields, there should be language and base direction information stored as metadata. (There may be other values as well, such as pronunciation metadata for sorting East Asian language information.) These metadata values are used by consumers of the data to influence the processing and enable the display of the items in a variety of ways. As the JSON data structure provides no place to store or exchange these values, it is more difficult to construct internationalized applications.
One work-around might be to encode the values using a mix of HTML and Unicode bidi controls, so that a data value might look like one of the following:
// following examples are NOT recommended
// contains HTML markup
"title": "<span lang='ar' dir='rtl'>HTML و CSS: تصميم و إنشاء مواقع الويب </span>",
// contains LRM as first character
"authors": [ "\u200eJon Duckett" ],
But JSON is a data interchange format: the content might not end up with the title field being displayed in an HTML context. The JSON above might very well be used to populate, say, a local data store which uses native controls to show the title and these controls will treat the HTML as string contents. Producers and consumers of the data might not expect to introspect the data in order to supply or remove the extra data or to expose it as metadata. Most JSON libraries don't know anything about the structure of the content that they are serializing. Producers want to generate the JSON file directly from a local data store, such as a database. Consumers want to store or retrieve the value for use without additional consideration of the content of each string. In addition, either producers or consumers can have other considerations, such as field length restrictions, that are affected by the insertion of additional controls or markup. Each of these considerations places special burden on implementers to create arbitrary means of serializing, deserializing, managing, and exchanging the necessary metadata, with interoperability as a casualty along the way.
(As an aside, note that the markup shown in the above example is actually needed to make the title as well as the inserted markup display correctly in the browser.)
Isn't Unicode enough?
[[Unicode]] and its character encodings (such as UTF-8) are key elements of the Web and its formats. They provide the ability to encode and exchange text in any language consistently throughout the Internet. However, Unicode by itself does not guarantee perfect presentation and processing of natural language text, even though it does guarantee perfect interchange.
Several features of Unicode are sometimes suggested as part of the solution to providing language and direction metadata. Specifically, Unicode bidi controls are suggested for handling direction metadata. In addition, there are "tag" characters in the U+E0000 block of Unicode originally intended for use as language tags (although this use is now deprecated).
There are a variety of reasons why the addition of characters to
data in an interchange format is not a good idea. These include:
Most of the data sources used to assemble the documents on the Web will not contain
these characters; producers, in the process of assembling or serializing the data,
will need to introspect and insert the characters as needed—changing the data from the original source. Consumers must then deserialize and introspect the information using an identical agreement . The consumer has no way of knowing if the characters found in the data were inserted by the producer (and should be removed) or if the characters were part of the source data. Overzealous producers might introduce additional and unnecessary characters, for example adding an additional layer of bidi control codes to a string that would not otherwise require it. Equally, an overzealous consumer might remove characters that are needed by or intended for downstream processes.
Another challenge is that many applications that use these data formats have limitations on
content, such as length limits or character set restrictions. Inserting additional characters into
the data may violate these externally applied requirements, and interfere
with processing. In the worst case, portions (or all of) the data value itself might be rejected, corrupted,
or lost as a result.
Inserting additional characters changes the identity of the string. This may have important consequences in certain contexts.
Inserting and removing characters from the string is not a common operation for most data serialization libraries. Any processing that adds language or direction controls would need to introspect the string to see if these are already present or might need to do other processing to insert or modify the contents of the string as part of serializing the data.
This last consideration is important to call out: document formats are often built and serialized using several layers of code. Libraries, such as general purpose JSON libraries, are expected to store and retrieve faithfully the data that they are passed. Higher-level implementations also generally concern themselves with faithful serialization and de-serialization of the values that they are passed. Any process that alters the data itself introduces variability that is undesirable. For example, consider an application's unit test that checks if the string returned from the document is identical to the one in the data catalog used to generate the document. If bidi controls, HTML markup, or Unicode language tags have been inserted, removed, or changed, the strings might not compare as equal, even though they would be expected to be the same.
What consumers need to do to support direction
Given the use cases for bidirectional text, it will be clear that a consumer cannot simply insert a string into a target location without some additional work or preparation taking place, first to establish the appropriate base direction for the string being inserted, and secondly to apply bidi isolation around the string.
This requires the presence of markup or Unicode formatting controls around the string. If the string's base direction is opposite that into which it is being inserted, the markup or control codes need to tightly wrap the string. Strings that are inserted adjacent to each other all need to be individually wrapped in order to avoid the spillover issues we saw in the previous section.
[[HTML5]] provides base direction controls and isolation for any inline element when the dir attribute is used, or when the bdi element is used. When inserting strings into plain text environments, isolating Unicode formatting characters need to be used. (Unfortunately, support for the isolating characters, which the Unicode Standard recommends as the default for plain text/non-markup applications, is still not universal.)
The trick is to ensure that the direction information provided by the markup or control characters reflects the base direction of the string.
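For example, a consumer inserting such a string into an HTML template might wrap it roughly as follows (a minimal TypeScript sketch; the helper names are illustrative and the use of a span with the dir attribute is one possible choice, not a requirement of this document):

type Direction = "ltr" | "rtl";

// Minimal HTML escaping so that the string is treated as text content
// rather than markup.
function escapeHtml(text: string): string {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// Wrap a string for insertion into an HTML context. The dir attribute
// both sets the base direction and provides bidi isolation; when no
// direction metadata is available, dir="auto" falls back to the user
// agent's first-strong detection.
function wrapForHtml(value: string, dir?: Direction): string {
  return `<span dir="${dir ?? "auto"}">${escapeHtml(value)}</span>`;
}

// wrapForHtml("HTML و CSS: ...", "rtl")
//   -> '<span dir="rtl">HTML و CSS: ...</span>'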
Approaches Considered for Identifying the Base Direction
The fundamental problem for bidirectional text values is how a consumer of a string will know what base direction should be used for that string when it is eventually displayed to a user. Note that some of these approaches for identifying or estimating the base direction have utility in specific applications and are in use in different specifications such as [[HTML5]]. The issue here is which are appropriate to adopt generally and specify for use as a best practice in document formats.
First-strong property detection
This approach is NOT recommended when used alone, but IS recommended as a fallback in combination with other approaches.
How it works
A producer doesn't need to do anything.
The string is stored as it is.
Consumers must look for the first character in the string with a strong Unicode directional property, and set the base direction to match it. They then take appropriate action to ensure that the string will be displayed as needed. This is not quite so simple as it may appear, for the following reasons:
Characters at the start of a string without a strong direction (eg. punctuation, numbers, etc) and isolated sequences (ie. sequences of characters surrounded by RLI/LRI/FSI...PDI formatting characters) within a string must be skipped in order to find the first strong character.
The detection algorithm needs to be able to handle markup at the start of the string. It needs to be able to tell whether the markup is just string text, or whether the markup needs to be parsed in the target location – in which case it must understand the markup, and understand any direction-related information that is carried in the markup.
First-strong detection is only needed where the required base direction is not already known. If direction is indicated for a string by metadata, either string-specific or via a resource-wide declaration, then first-strong heuristics should not be invoked. For example, first-strong heuristics would produce the wrong result for a string such as "HTML و CSS: تصميم و إنشاء مواقع الويب ". This can be corrected using metadata, the use of which signifies informed intention, and you would not need or want to apply heuristics that would then make the result incorrect.
However, if there is no mechanism for the application of metadata, or if there is such a mechanism but the content developer omitted to use it, then first-strong heuristics can be helpful to establish base direction in many, though not all, cases. The application of strongly-directional formatting characters can help produce correct results for plain text strings such as the example just quoted, but it is not always possible to apply those (see [[[#rlm]]]).
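A simplified sketch of such a first-strong check is shown below. It only skips characters without strong direction and bidi-isolated runs; it does not attempt to handle markup, and the set of RTL scripts tested is deliberately incomplete, so treat it as an illustration rather than a conforming implementation of [[UAX9]].

type Direction = "ltr" | "rtl";

// Illustrative first-strong detection: find the first character with a
// strong directional property, skipping isolated runs (LRI/RLI/FSI ... PDI).
function firstStrongDirection(text: string): Direction | undefined {
  let isolateDepth = 0;
  for (const ch of text) {
    if (ch === "\u2066" || ch === "\u2067" || ch === "\u2068") {
      isolateDepth++;                  // LRI, RLI or FSI opens an isolated run
      continue;
    }
    if (ch === "\u2069") {             // PDI closes an isolated run
      if (isolateDepth > 0) isolateDepth--;
      continue;
    }
    if (isolateDepth > 0) continue;    // ignore isolated content entirely
    // Incomplete list of RTL scripts, for illustration only; a real
    // implementation should consult the Unicode Bidi_Class property.
    if (/[\p{Script=Arabic}\p{Script=Hebrew}\p{Script=Syriac}\p{Script=Thaana}]/u.test(ch)) {
      return "rtl";
    }
    if (/\p{Letter}/u.test(ch)) {
      return "ltr";
    }
    // punctuation, digits, spaces, etc. are not strong and are skipped
  }
  return undefined;                    // no strong character found
}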
Advantages
Where it is reliable, information about direction can be obtained without any changes to the string, and without the agreements and structures that would be needed to support out-of-band metadata.
Issues
The main problem with this approach is that it produces the wrong result for
strings that begin with a strong character with a different directionality than that needed for the string overall (eg. an Arabic tweet that starts with a hashtag)
strings that don't have a strong directional character (such as a telephone number), which are likely to be displayed incorrectly in a RTL context.
strings that begin with markup, such as span, since the first strong character is always going to be LTR.
In cases where the entire string starts and ends with RLI/LRI/FSI...PDI formatting characters, it is not possible to detect the first strong character by following the Unicode Bidirectional Algorithm. This is because the algorithm requires that bidi-isolated text be excluded from the detection.
If no strong directional character is found in the string, the direction should probably be assumed to be LTR, and the consumer should act on that basis. This has not been tested fully, however.
If a string contains markup that will be parsed by the consumer as markup, there are additional problems. Any such markup at the start of the string must also be skipped when searching for the first strong directional character.
If parseable markup in the string contains information about the intended direction of the string (for example, a dir attribute with the value rtl in HTML), that information should be used rather than relying on first-strong heuristics. This is problematic in a couple of ways: (a) it assumes that the consumer of the string understands the semantics of the markup, which may be ok if there is an agreement between all parties to use, say, HTML markup only, but would be problematic, for example, when dealing with random XML vocabularies, and (b) the consumer must be able to recognise and handle a situation where only the initial part of the string has markup, ie. the markup applies to an inline span of text rather than the string as a whole.
It's not clear where the example with the broken link in the following paragraph is or used to be.
If, however, there is angle bracket content that is intended to be an example of markup, rather than actual markup, the markup must not be skipped – trying to display markup source code in a RTL context yields very confusing results ! It isn't clear, however, how a consumer of the string would always know the difference between examples and parseable strings.
Additional notes
Although first-strong detection is outlined in the Unicode Bidirectional Algorithm (UBA) [[UAX9]], it is not the only possible higher-level protocol mentioned for estimating string direction. For example, Twitter and Facebook currently use different default heuristics for guessing the base direction of text – neither uses just simple first-strong detection, and one uses a completely different method.
Augmenting first-strong by inserting RLM/LRM markers
This approach is NOT workable for all situations.
How it works
A producer ascertains the base direction of the string and adds a marker character (either U+200F RIGHT-TO-LEFT MARK (RLM) or U+200E LEFT-TO-RIGHT MARK (LRM)) to the beginning of the string. The marker is not functional, ie. it will not automatically apply a base direction to the string that can be used by the consumer; it is simply a marker.
There are a number of possible approaches:
Add a marker to every string (not recommended).
Rely on the consumer to do first-strong detection , and add a marker to only those strings which would produce the wrong result (eg. a RTL string that starts with LTR strong characters).
Assume a default of LTR (no marker), and apply only RLM markers.
Consumers apply first-strong heuristics to detect the base direction for the string. The RLM and LRM characters have strong directional properties and should therefore indicate the appropriate base direction.
As described in [[[#firststrong]]], this approach is not relevant if directional information is provided via metadata.
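The producer-side decision might be sketched like this (illustrative only; the detector is passed in as a parameter and could be a first-strong routine like the one sketched earlier). A marker is prepended only when the heuristic alone would give the wrong answer, which keeps the number of modified strings small but still changes the value of those strings.

type Direction = "ltr" | "rtl";

const RLM = "\u200F";   // RIGHT-TO-LEFT MARK
const LRM = "\u200E";   // LEFT-TO-RIGHT MARK

// Prepend RLM/LRM only when first-strong detection would otherwise
// disagree with the base direction the producer knows to be correct.
// Note that this changes the value (and length) of the string.
function augmentWithMarker(
  value: string,
  knownDir: Direction,
  detect: (text: string) => Direction | undefined
): string {
  if (detect(value) === knownDir) return value;   // heuristic already correct
  return (knownDir === "rtl" ? RLM : LRM) + value;
}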
Advantages
It provides a reliable way of indicating base direction, as long as the producer can reliably apply markers.
In theory, it should be easier to spot the first-strong character in strings that begin with markup, as long as the correct RLM/LRM is prepended to the string.
Issues
If the producer is a human, they could theoretically apply one of
these characters when creating a string in order to signal the
directionality.
A significant problem with this, especially on mobile devices, is the
availability or inconvenience of inputting an RLM/LRM character. The keyboards of mobile devices generally do not provide keys for RLM/LRM characters. Perhaps more important, because the characters are invisible and because Unicode
bidi is complicated, it can be difficult for the user to know how to use the character effectively. In fact, a large percentage of users don't actually know what these characters are or what they do.
Furthermore, if a person types information into, say, an HTML form in a RTL page or uses shortcut keys to set the direction for the form field, strings will look correct without the need to add
RLM/LRM. However,
used outside of that context the string would look incorrect unless it is associated with information about the required base direction. Similarly, strings
scraped from a web page that has dir=rtl
set in the html
element would not normally have or need an
RLM/LRM character at the start of the string in HTML.
It may be possible for the steps used by a producer to include an examination of the original context of the string for directional information (for example, by testing the computed direction of an HTML form field), followed by automatic insertion of an RLM/LRM mark into the beginning of the string where necessary. An issue with this approach is that it changes the string value and identity. This may also create problems for working with string length
or pointer positions, especially if some producers add markers and others don't.
If directional information is contained in markup that will be
parsed as such by the consumer (for example, dir=rtl
in HTML), the producer of the string
needs to understand that markup in order to set or not set an RLM/LRM
character as appropriate. If the producer always adds RLM/LRM to the
start of such strings, the consumer is expected to know that. If the
producer relies instead on the markup being understood, the consumer
is expected to understand the markup.
The producer of a string should not automatically apply RLM or LRM
to the start of the string, but should test whether it is needed.
For example, if there's already an RLM in the text, there is no need to add another.
If the context is correctly conveyed by first-strong heuristics, there is no
need to add additional characters either. Note, however, that testing
whether supplementary directional information of this kind is needed
is only possible if the producer has access, and knows that it has
access, to the original context of the string. Many document formats are generated from data stored away from the original context. For example, the catalog of books in the original example above is disconnected from the user inputting the bidirectional text.
Paired formatting characters
This approach is NOT recommended.
How it works
A producer ascertains the base direction of the string and adds a directional formatting character (one of U+2066 LEFT-TO-RIGHT ISOLATE (LRI), U+2067 RIGHT-TO-LEFT ISOLATE (RLI), U+2068 FIRST STRONG ISOLATE (FSI), U+202A LEFT-TO-RIGHT EMBEDDING (LRE), or U+202B RIGHT-TO-LEFT EMBEDDING (RLE)) to the beginning of the string, and U+2069 POP DIRECTIONAL ISOLATE (PDI) or U+202C POP DIRECTIONAL FORMATTING (PDF) to the end.
There are a number of possible approaches:
Add the formatting codes to every string.
Rely on the consumer to do first-strong detection , and add a marker to only those strings which would produce the wrong result (eg. a RTL string that starts with LTR strong characters).
Consumers would theoretically just insert the string in the place it will be displayed, and rely on the formatting codes to apply the base direction. However, things are not quite so simple (see below).
There are two types of paired formatting characters. The original set of controls provide the ability to add an additional level of bidirectional "embedding" to the Unicode bidirectional Algorithm. More recently, Unicode added a complementary set of "isolating" controls. Isolating controls are used to surround a string. The inside of the string is treated as its own bidirectional sequence, and the string is protected against spill-over effects related to any surrounding text. The enclosing string treats the entire surrounded string as a single unit that is ignored for bidi reordering. This issue is described here .
Embedding controls (Code Point, Abbreviation, Description):
U+202A, LRE, Left to Right Embedding
U+202B, RLE, Right to Left Embedding
U+202C, PDF, Pop Directional Formatting (ending an embedding)
Isolating controls (Code Point, Abbreviation, Description):
U+2066, LRI, Left to Right Isolate
U+2067, RLI, Right to Left Isolate
U+2068, FSI, First Strong Isolate
U+2069, PDI, Pop Directional Isolate (ending an isolate)
If paired formatting characters are used, they should be isolating, ie. starting with RLI, LRI, FSI, and not with RLE or LRE.
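To make the mechanics concrete, here is a rough sketch of the producer and consumer sides (illustrative only; as noted above, this approach is not recommended). The consumer-side function also shows why the scheme is fragile: it cannot tell whether the pair it strips was added by the producer or was part of the original data.

type Direction = "ltr" | "rtl";

const LRI = "\u2066";   // LEFT-TO-RIGHT ISOLATE
const RLI = "\u2067";   // RIGHT-TO-LEFT ISOLATE
const FSI = "\u2068";   // FIRST STRONG ISOLATE
const PDI = "\u2069";   // POP DIRECTIONAL ISOLATE

// Producer side: wrap the whole string in an isolating pair.
// This changes the string's value, length, and identity, and hides the
// first strong character from consumers relying on [[UAX9]] alone.
function wrapWithIsolates(value: string, dir?: Direction): string {
  const open = dir === "rtl" ? RLI : dir === "ltr" ? LRI : FSI;
  return open + value + PDI;
}

// Consumer side: strip an outer isolating pair, if one is present.
function stripOuterIsolates(value: string): string {
  const first = value[0];
  const last = value[value.length - 1];
  if (value.length >= 2 && (first === LRI || first === RLI || first === FSI) && last === PDI) {
    return value.slice(1, -1);
  }
  return value;
}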
Advantages
There are no real advantages to using this approach.
Issues
This approach is only appropriate if it is acceptable to change the value of the string. In addition to possible issues such as changed string length or pointer positions, this approach runs a real and serious risk of one of the paired characters getting lost, either through handling errors, or through text truncation, etc.
A producer and a consumer of a string would need to recognise and handle a situation where a string begins with a paired formatting character but doesn't end with it because the formatting characters only describe a part of the string.
Unicode specifies a limit to the number of embeddings that are effective, and embeddings could build up over time to exceed that limit.
Consuming applications would need to recognise and appropriately handle the isolating formatting characters. At the moment such support for RLI/LRI/FSI is far from pervasive.
This approach would disqualify the string from being amenable to UBA first-strong heuristics if used by a non-aware consumer, because the Unicode bidi algorithm is unable to ascertain the base direction for a string that starts with RLI/LRI/FSI and ends with PDI. This is because the algorithm skips over isolated sequences and treats them as a neutral character. A consumer of the string would have to take special steps, in this case, to uncover the first-strong character.
Script subtags
This approach is only recommended as a workaround for situations that prevent the use of metadata.
How it works
A producer supplies language metadata for strings, specifying, where necessary, the script in use.
There are a number of possible approaches:
Label every string for language, including a script subtag as needed. Consumers may need to compute the script subtag when the producer does not provide one.
It might be reasonable to assume a default of LTR for all strings unless marked with a language tag whose script subtag (either present or implied) indicates RTL.
Alternatively, limit the use of script subtag metadata to situations where first-strong heuristics are expected to fail — provided that such cases can be identified, and appropriate action taken by the producer (not always reliable). Consumers would then need to use first-strong heuristics in the absence of a script subtag in order to identify the appropriate base direction. The use of script subtags should not, however, be restricted to strings that need to indicate direction; it is perfectly valid to associate a script subtag with any string.
Set a default language for a set of strings at a higher level, but provide a mechanism to override that default for a given string where needed.
Consumers extract the script subtag from the language tag associated with each string, computing the string's base direction as necessary. Script subtags associated with RTL scripts are used to assign a base direction of RTL to their associated strings.
Language information MUST use [[BCP47]] language tags. The portion of the language tag that carries the information is the script subtag, not the primary language subtag. For example, Azeri may be written LTR (with the Latin or Cyrillic scripts) or RTL (with the Arabic script). Therefore, the subtag az is insufficient to clarify intended base direction. A language tag such as az-Arab (Azeri as written in the Arabic script), however, can generally be relied upon to indicate that the overall base direction should be RTL.
Script subtags should only be used in language tags when the language's script is not implied by other information in the language tag. Implementations and specifications SHOULD NOT require the addition or generation of script subtags not already present in a language tag. The IANA Language Subtag Registry, defined by [[BCP47]], contains a Suppress-Script field for a few languages, indicating the script where it is missing. Additionally, the [[LDML]] specification defines a "likely subtag" mechanism that can often be used to supply a missing script subtag. For example, language tags such as ar (Arabic) or ar-EG (Arabic as used in Egypt) imply the Arab (Arabic) script subtag, since nearly all Arabic is written in this script.
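A rough sketch of this derivation is shown below. The script list and the "likely script" table are illustrative and incomplete; a real implementation would use the registry data and the CLDR likely-subtags data referred to above.

type Direction = "ltr" | "rtl";

// Scripts written right to left (illustrative, incomplete list).
const RTL_SCRIPTS = new Set(["Arab", "Hebr", "Syrc", "Thaa", "Nkoo", "Adlm"]);

// Illustrative stand-in for the "likely subtag" data from [[LDML]].
const LIKELY_SCRIPT: Record<string, string> = {
  ar: "Arab", fa: "Arab", ur: "Arab", he: "Hebr", en: "Latn",
};

// Extract an explicit script subtag (a 4-letter subtag in the second
// position of a [[BCP47]] language tag), if present.
function explicitScript(languageTag: string): string | undefined {
  const subtags = languageTag.split("-");
  if (subtags.length > 1 && /^[A-Za-z]{4}$/.test(subtags[1])) {
    return subtags[1][0].toUpperCase() + subtags[1].slice(1).toLowerCase();
  }
  return undefined;
}

// Derive a base direction from a language tag, as a last resort when
// direction metadata is unavailable. Returns undefined when the script
// cannot be determined, e.g. for "az" on its own.
function directionFromLanguageTag(languageTag: string): Direction | undefined {
  const primary = languageTag.split("-")[0].toLowerCase();
  const script = explicitScript(languageTag) ?? LIKELY_SCRIPT[primary];
  if (!script) return undefined;
  return RTL_SCRIPTS.has(script) ? "rtl" : "ltr";
}

// directionFromLanguageTag("az-Arab") -> "rtl"
// directionFromLanguageTag("az-Latn") -> "ltr"
// directionFromLanguageTag("az")      -> undefined (ambiguous)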
Advantages
There is no need to inspect or change the string itself.
This approach avoids the issues associated with first-strong detection when the first-strong character is not indicative of the necessary base direction for the string, and avoids issues relating to the interpretation of markup.
Note that a string that begins with markup that sets a language for the string text content (eg. <cite lang="zh-Hans">) is not problematic here, since that language declaration is not expected to play into the setting of the base direction.
Issues
The use of metadata as outlined above is a much better approach, if it is available. This script-related approach is only for use where that approach is unavailable, for legacy reasons.
There are many strings which are not language-specific but which absolutely need to be associated with a particular base direction for correct consumption. For example, MAC addresses inserted into a RTL context need to be displayed with a LTR overall base direction and isolation from the surrounding text. It's not clear how to distinguish these cases from others (in a way that would be feasible when using direction metadata). Special language tags, such as zxx (Non-Linguistic), exist for identifying this type of content, but usually data fields of this type omit language information altogether, since it is not applicable.
The list of script subtags may be added to in future. In that case, any subtags that indicate a default RTL direction need to be added to the lists used by the consumers of the strings.
There are some rare situations where the appropriate base direction cannot be identified from the script subtag, but these are really limited to archaic usage of text. For example, Japanese and Chinese text prior to World War 2 was often written RTL, rather than LTR. Languages written using Egyptian Hieroglyphs or the Tifinagh Berber script could formerly be written either LTR or RTL; however, the default for scholastic research tends to be LTR.
Other comments
The approach outlined here is only appropriate when declaring information about the overall base direction to be associated with a string. We do not recommend use of language data to indicate text direction within strings, since the usage patterns are not interchangeable.
Require bidi markup for content
This approach is NOT recommended, except under agreements that expect to exclusively interchange HTML or XML markup data.
How it works
The producer ensures that all strings begin and end with markup which indicates the appropriate base direction for that string. This requires the producer to examine the string. If the string is not bounded by markup with directional information, the producer must wrap the string with elements that have the dir or its:direction [[ITS20]] attributes, or other markup appropriate to a given XML application. If the string is bounded by markup, but it is something such as an HTML h1 element, the producer needs to introduce directional information into the existing markup, rather than simply surround the string with a span.
This example uses HTML markup. (Simply to make the example easier to read, it shows the text content of the string as it should be displayed, rather than in the order in which the characters are stored.)
The consumer then relies on the markup to set the base direction around the text content of the string when it is displayed. (Note that, unless additional metadata is provided, the consumer cannot remove the markup before integrating the string in the target location, because it cannot tell what markup has been added by the producer and what was already there. In general, however, such added markup is harmless.)
Advantages
The benefit for content that already uses markup is clear. The content will already provide complete markup necessary for the display and processing of the text or it can be extracted from the source page context. HTML and XML processors already know how to deal with this markup and provide ready validation.
For HTML, the dir attribute bidirectionally isolates the content from the surrounding text, which removes spillover conflicts. This reduces the work of the consumer.
Markup can also be used for string-internal directional information, something base direction on its own cannot solve.
Issues
Effectively, all levels of the implementation stack have to participate in understanding the markup (or ensure that they do no harm).
If the system uses HTML, end to end, then appropriate markup is available and its semantics are understood (ie. the dir attribute, and the bdi and bdo elements). For XML applications, however, there is no standard markup for bidi support. Such markup would need to first be defined, and then understood by both the producer and consumer.
A key downside of this approach is that many data values are just strings. As with adding Unicode tags or Unicode bidi controls, the addition of markup to strings alters the original string content. Altering the length of the content can cause problems with processes that enforce arbitrary limits or with processes that "sanitize" content by escaping HTML/XML unsafe characters such as angle brackets.
Another issue is the work and sophistication required for producers to examine strings and add markup as needed.
There are limits to the number of embeddings allowed by the Unicode bidirectional algorithm. Consumers would need to ensure that this limit is not passed when embedding strings into a wider context.
The addition of markup also requires consumers to guard against the usual problems with markup insertion, such as XSS attacks.
Create a new bidi datatype
This approach is not currently available.
How it works
This is similar to the idea of sending metadata with a string as discussed previously; however, the metadata is not stored in a completely separate field, nor inserted into the string itself, but is associated with the string as part of the string format itself.
Some datatypes, such as [[RDF-PLAIN-LITERAL]], already exist that allow for language metadata to be serialized as part of a string value. However, these do not include a consideration for base direction. This might be addressed by defining a new datatype (or extending an existing one) that document formats could then use to serialize natural language strings (localizable content ) that includes both language and direction metadata.
Using RDF plain string literals as a model, here is what a serialization might look like. (The RTL text is shown in the order in which characters are stored in memory, rather than the display order.)
myLocalizedString: "Hello World!@en^ltr" // language and direction
myLocalizedString_ar: "مرحبا بالعالم !@ar-EG^rtl" // right-to-left example
myLocalizedString_fr: "Bonjour monde !@fr" // language only
myLocalizedString_und: "שלום עולם !^rtl" // direction information only
myDataString: "978-0-123-4567-X^ltr" // language-neutral string
Note that the last string does not include language information because it is an internal data value, but does include direction information because strings of this kind must be presented in the LTR order.
Producers would need to attach the direction information to a string.
Again, it would be sensible to establish rules that expect the consumer to use first-strong heuristics for those strings that are amenable to that approach, and for the producer to only add directional information if the first-strong approach would otherwise produce the wrong result. This would greatly simplify the management of strings and the amount of data to be transmitted, because the number of strings requiring metadata is relatively small.
The consumer would look to see whether the string has metadata associated with it, in which case it would set the indicated base direction. Otherwise, it would use first-strong heuristics to determine the base direction of the string.
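A consumer-side parser for the hypothetical "value@lang^dir" serialization shown above might look like the following sketch. The suffix syntax is purely illustrative (modeled loosely on RDF plain literals) and, as written, would misparse values that themselves contain "@" or "^".

type Direction = "ltr" | "rtl";

interface Localizable {
  value: string;
  lang?: string;
  dir?: Direction;
}

// Parse the hypothetical "value@lang^dir" form, where both suffixes
// are optional.
function parseLocalizable(serialized: string): Localizable {
  let rest = serialized;
  let dir: Direction | undefined;
  let lang: string | undefined;

  const caret = rest.lastIndexOf("^");
  if (caret !== -1 && (rest.slice(caret + 1) === "ltr" || rest.slice(caret + 1) === "rtl")) {
    dir = rest.slice(caret + 1) as Direction;
    rest = rest.slice(0, caret);
  }

  const at = rest.lastIndexOf("@");
  if (at !== -1 && /^[A-Za-z0-9-]+$/.test(rest.slice(at + 1))) {
    lang = rest.slice(at + 1);
    rest = rest.slice(0, at);
  }

  return { value: rest, lang, dir };
}

// parseLocalizable("Hello World!@en^ltr")
//   -> { value: "Hello World!", lang: "en", dir: "ltr" }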
Advantages
If a new datatype were added to JSON to support natural language strings, then specifications could easily specify that type for use in document formats. Since the format is standardized, producers and consumers would not need to guess about direction or language information when it is encoded.
Issues
Apart from the fact that this currently doesn't work, the downside of adding a datatype is that JSON is a widely implemented format, including many ad-hoc implementations. Any new serialization form would likely break or cause interoperability problems with these existing implementations. JSON is not designed to be a "versioned" format. Any serialization form used would need to be transparent to existing JSON processors and thus could introduce unwanted data or data corruption to existing strings and formats.
Approaches Considered for Identifying the Language of Content
This section deals with different means of determining or conveying the language of string values.
Use a language metadata field
This approach is recommended.
How it works
A producer ascertains the language of the string (generally from metadata supplied upstream) and includes this information in a metadata field that accompanies the string when it is stored or transmitted.
When storing or transmitting a set of strings at a time, it helps to have a field for the resource as a whole that sets a language which can be inherited by all strings in the resource. Note that in addition to a global field, you still need the possibility of attaching string-specific metadata fields in cases where a string's language is not that of the default. The language set on an individual string must override any resource-level value.
A consumer needs to understand how to read the metadata associated with a string and apply it to the display, processing, or data structures that it generates. Note that this might include the need to apply a resource-level default language when serializing or exchanging an individual value.
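A minimal sketch of this resolution (the type and function names are illustrative, not defined by this document) is:

// Hypothetical shapes for a string value and its containing resource.
interface LocalizableString {
  value: string;
  lang?: string;            // string-specific language, if supplied
}

interface Resource {
  language?: string;        // resource-wide default language
  strings: LocalizableString[];
}

// The language set on an individual string overrides the resource-level
// default; if neither is present, the language is simply unknown.
function effectiveLanguage(
  s: LocalizableString,
  resource: Resource
): string | undefined {
  return s.lang ?? resource.language;
}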
Advantages
Using a consistent and well-defined data structure makes it more likely that different standards are composable and will work together seamlessly.
Metadata can be supplied without affecting the content itself.
Where metadata is unavailable, it can be omitted.
Consumers and producers do not have to introspect the data outside of their normal processing.
Issues
Serialized files utilizing the dictionary and its data values will contain additional fields and can be more difficult to read as a result.
For existing document formats, it represents a change to the values being exchanged.
Require markup for content
This approach is NOT recommended except in special cases where the content being exchanged is expected to consist of and is restricted to literal values in a given markup language.
How it works
When a document is expected to consist of HTML or XML fragments and will be processed and displayed strictly in a markup context, the producer can use markup to convey the language of the content by wrapping strings with elements that have the lang or xml:lang attributes.
Advantages
This approach, and thus the advantages, are effectively the same as in this section.
Issues
See above.
Use Unicode language tag characters
This approach is NOT recommended.
As noted in this best practice, Unicode tag characters in the U+E0000 block SHOULD NOT be used to encode language tags. This section mainly exists to provide guidance to specifications against adopting these as a potential solution.
How it works
Producers insert Unicode tag characters into the data to tag strings with a language.
Consumers process the Unicode tag characters and use them to assign the language.
Unicode defines special characters that can be used as language tags. These characters are "default ignorable" and should have no visual appearance. Here is how Unicode tags are supposed to work:
Each tag is a character sequence. The sequence begins with a tag identification character. The only one currently defined is U+E0001, which identifies [[BCP47]] language tags. Other types of tags are possible, via private agreement. The remainder of the Unicode block for forming tags mirrors the printable ASCII characters. That is, U+E0020 is space (mirroring U+0020), U+E0041 is capital A (mirroring U+0041), and so forth. Following the tag identification character, producers use each tag character to spell out a [[BCP47]] language tag using the upper/lowercase letters, digits, and the hyphen character. A given source language tag, which is composed from ASCII letters, digits and hyphens, can be transmogrified into tags by adding 0xE0000 to each character's code point. Additional structure, such as a language priority list (see [[RFC4647]]) might be constructed using other characters such as comma or semi-colon, although Unicode does not define or even necessarily permit this.
The end of a tag's scope is signalled by the end of the string, or can be signalled explicitly using the cancel tag character U+E007F, either alone (to cancel all tags) or preceded by the language tag identification character U+E0001 (i.e. the sequence <U+E0001,U+E007F> to end only language tags).
Tags therefore have a minimum of three characters, and can easily be 12 or more. Furthermore, these characters are supplementary characters. That is, they are encoded using 4-bytes per character in UTF-8 and they are encoded as a surrogate pair (two 16-bit code units) in UTF-16. Surrogate pairs are needed to encode these characters in string types for languages such as Java and JavaScript that use UTF-16 internally. The use of surrogates makes the strings somewhat opaque. For example, U+E0020 is encoded in UTF-16 as 0xDB40.DC20 and in UTF-8 as the byte sequence 0xF3.A0.80.A0.
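Purely to illustrate the mechanics described above (and not as a recommendation; the approach SHOULD NOT be adopted), the encoding could be sketched as:

// Encode a [[BCP47]] language tag using the deprecated Unicode tag
// characters: U+E0001 LANGUAGE TAG, then each ASCII character of the
// tag shifted by 0xE0000, then U+E007F CANCEL TAG.
function toUnicodeTagCharacters(languageTag: string): string {
  const shifted = [...languageTag]
    .map((ch) => String.fromCodePoint(ch.codePointAt(0)! + 0xe0000))
    .join("");
  return String.fromCodePoint(0xe0001) + shifted + String.fromCodePoint(0xe007f);
}

// toUnicodeTagCharacters("en") yields the code points
// U+E0001, U+E0065, U+E006E, U+E007F; each is a surrogate pair in
// UTF-16 and four bytes in UTF-8.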
Advantages
These language tag characters could be used as part of normal Unicode text without modification to the structure of the document format.
Issues
Unicode tag characters are strongly deprecated by the Unicode Consortium. These tag characters were intended for use in language tagging within plain text contexts and are often suggested as an alternate means of providing in-band non-markup language tagging. We are unaware of any implementations that use them as language tags.
Applications that treat the characters as unknown Unicode characters will display them as tofu (hollow box replacement characters) and may count them towards length limits, etc. So they are only useful when applications or interchange mechanisms are fully aware of them and can remove them or disregard them appropriately. Although the characters are not supposed to be displayed or have any effect on text processing, in practice they can interfere with normal text processes such as truncation, line wrapping, hyphenation, spell-checking and so forth.
By design, [[BCP47]] language tags are intended to be ASCII case-insensitive. Applications handling Unicode tag characters would have to apply similar case-insensitivity to ensure correct identification of the language. (The Unicode data doesn't specify case conversion pairings for these characters; this complicates the processing and matching of language tag values encoded using the tag characters.)
Moreover, language tags need to be formed from valid subtags to conform to [[BCP47]]. Valid subtags are kept in an IANA registry and new subtags are added regularly, so applications dealing with this kind of tagging would need to always check each subtag against the latest version of the registry.
The language tag characters do not allow nesting of language tags. For example, if a string contains two languages, such as a quote in French inside an English sentence, Unicode tag characters can only indicate where one language starts. To indicate nested languages, tags would need to be embedded into the text not just prefixed to the front.
Although never implemented, other types of tags could be embedded into a string or document using Unicode tag characters. It is possible for these tags to overlap sections of text tagged with a language tag.
Finally, Unicode has recently "recycled" these characters for use in forming sub-regional flags, such as the flag of Scotland (🏴), which is made of the sequence:
🏴 [U+1F3F4 WAVING BLACK FLAG ]
[U+E0067 TAG LATIN SMALL LETTER G ]
[U+E0062 TAG LATIN SMALL LETTER B ]
[U+E0073 TAG LATIN SMALL LETTER S ]
[U+E0063 TAG LATIN SMALL LETTER C ]
[U+E0074 TAG LATIN SMALL LETTER T ]
[U+E007F CANCEL TAG ]
The above is a new emoji feature added in Unicode 10.0 (version 5.0 of UTR#51) in June 2017; proper display depends on whether your system supports this version.
Use a language detection heuristic
This approach is NOT recommended.
How it works
Producers do nothing.
Consumers run a language detection algorithm to determine the language of the text. These are usually statistically based heuristics, such as using n-gram frequency in a language, possibly coupled with other data.
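To give a sense of the general shape of such heuristics, the following sketch (hypothetical names; real detectors are trained on large corpora, use many more features, and support many languages) scores a string against per-language trigram profiles and picks the closest match:

// A minimal sketch of n-gram-based language detection (hypothetical profiles;
// not a production-quality detector).
type Profile = Map<string, number>;

// Count trigrams in a lowercased sample of text.
function trigrams(text: string): Profile {
  const counts: Profile = new Map();
  const normalized = text.toLowerCase();
  for (let i = 0; i + 3 <= normalized.length; i++) {
    const gram = normalized.slice(i, i + 3);
    counts.set(gram, (counts.get(gram) ?? 0) + 1);
  }
  return counts;
}

// Pick the language whose profile shares the most trigrams with the sample.
function detectLanguage(text: string, profiles: Map<string, Profile>): string | undefined {
  const sample = trigrams(text);
  let best: string | undefined;
  let bestScore = 0;
  for (const [lang, profile] of profiles) {
    let score = 0;
    for (const gram of sample.keys()) {
      if (profile.has(gram)) score++;
    }
    if (score > bestScore) {
      bestScore = score;
      best = lang;
    }
  }
  return best; // undefined when nothing matches, e.g. for very short strings
}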
Advantages
There are no fundamental advantages to this approach.
Issues
Heuristics are more accurate when the text being scanned is longer and more representative of the language. Short strings often cannot be detected reliably.
Language detection is limited to the languages for which one has a detector.
Inclusions, such as personal or brand names in another language or script, can throw off the detection.
Language detection tends to be slow and can be memory intensive. Simple consumers probably can't afford the complexity needed to determine the language.
Localization Considerations
Many specifications need to allow multiple different language values to be returned for a given field. This might be to support runtime localization or because the producer has multiple different language values and cannot select or distinguish them appropriately. There are several ways that multiple language values could be organized. For speed and ease of access, the use of language indexing is a useful strategy.
In language indexing, a given field's value is a map of key-value pairs. The keys in the map are language tags. The value associated with each language tag is a string or, ideally, a Localizable object. Here's an example of what a language-indexed field title might look like:
"title": { "en":      { "value": "Learning Web Design", "lang": "en" },
           "ar":      { "value": "التعلم على شبكة الإنترنت التصميم", "lang": "ar", "dir": "rtl" },
           "ja":      { "value": "Webデザインを学ぶ", "lang": "ja" },
           "zh-Hans": { "value": "学习网页设计", "lang": "zh-Hans", "dir": "ltr" } }
Using the language tag as a key into the map allows for rapid selection of the correct value for a given request. Notice that, if the value of the language tag is a Localizable , the language might be repeated in the data structure.
For example, if the language requested were U.S. English (en-US ), this format makes it easier to match and extract the best fitting title object {"value": "Learning Web Design", "lang": "en"} . An additional potential advantage is that the indexed language tag can indicate the intended audience of the value separately from the language tag of the actual data value. An example of this might be the use of language ranges from [[RFC4647]], as in the following example, where a more specific language value might be wrapped with a less-specific language tag. In this example, the content has been labeled with a specific language tag (de-DE
), but is available and applicable to users who speak other variants of German, such as de-CH
or de-AT
:
"title": [ {
"de": {"value": "HTML und CSS verstehen", "language": "de-DE" },
...
],
A less common example would be when a system supplies a specific value in a different ("wrong") language from the indexing language tag, perhaps because the actual translated value is missing:
"title": [ {
"de": {"value": "Understanding HTML and CSS", "language": "en-US" }, // German not available
...
],
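Returning to the lookup described above, a consumer might select the best-fitting entry from a language-indexed field like this (a sketch with hypothetical names; [[RFC4647]] lookup fallback is approximated by truncating the requested tag from the right):

// A minimal sketch (not a normative algorithm): select the best entry from a
// language-indexed map by progressively truncating the requested tag,
// e.g. "en-US" -> "en", "zh-Hans-CN" -> "zh-Hans" -> "zh".
interface Localizable {
  value: string;
  lang?: string;
  dir?: "auto" | "ltr" | "rtl";
}

function lookupTitle(
  index: Record<string, Localizable>,
  requested: string
): Localizable | undefined {
  let range = requested.toLowerCase();
  while (range) {
    // BCP47 tags are ASCII case-insensitive, so compare keys case-insensitively.
    const match = Object.keys(index).find((k) => k.toLowerCase() === range);
    if (match) return index[match];
    const cut = range.lastIndexOf("-");
    if (cut < 0) break;
    range = range.slice(0, cut);
  }
  return undefined;
}

// lookupTitle(titleIndex, "en-US") returns the entry indexed under "en".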
The primary issue with this approach is the need to extract the indexing language tag from the content in order to generate the index. Producers might also need to have a serialization agreement with consumers about whether the indexing language tag will be in any way canonicalized. For example, the language tag cel-gaulish
is one of the [[BCP47]] grandfathered language tags. Some implementations, such as those following the rules in [[CLDR]], would prefer that this tag be replaced with a modern equivalent (xtg-x-cel-gaulish
in this case) for the purposes of language negotiation.
[[JSON-LD]] defines a specific implementation of language indexing, which depends on the use of the @context
structure. This structure does not support the use of Localizable values (only strings or arrays of strings are supported), so changes would be needed to allow some of the above capabilities in [[JSON-LD]] documents.
{
  "@context": {
    "example": "http://example.com/example/",
    "title": {
      "@id": "example:title",
      "@container": "@language"
    }
  },
  "@id": "http://example.com/Learning%20Web%20Design",
  "title": {
    "en":      "Learning Web Design",
    "ar":      "التعلم على شبكة الإنترنت التصميم",
    "ja":      "Webデザインを学ぶ",
    "zh-Hans": "学习网页设计"
  }
}
The Localizable WebIDL Dictionary
This section contains a WebIDL definition for a Localizable dictionary.
To be effective, specification authors should consistently use the same formats and data structures so that the majority of data formats are interoperable (in other words, so that data can be copied between many formats without having to apply additional processing). We recommend adoption of the Localizable WebIDL "dictionary" as the best available format for JSON-derived formats to do that.
By defining the language and direction in a WebIDL dictionary form, specifications can incorporate language and direction metadata for a given String value succinctly. Implementations can reuse the dictionary definition straightforwardly.
Localizable dictionary

dictionary Localizable {
  DOMString     value;
  DOMString     lang;
  TextDirection dir = "auto";
};
value member
The string containing the data value of this field.
lang member
A [[BCP47]] language tag that specifies the primary language for the values of the human-readable members of the inheriting dictionary.
dir member
Specifies the base direction for the human-readable members of an inheriting dictionary.
TextDirection enum

enum TextDirection {
  "auto",
  "ltr",
  "rtl"
};
The text direction values have the following meanings, describing the default base direction of the human-readable members:
auto
Directionality is determined by the Unicode Bidirectional Algorithm [[UAX9]].
ltr
Left-to-right text.
rtl
Right-to-left text.
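For example, a consumer displaying a Localizable value in HTML might map the dictionary members onto the lang and dir attributes of an element (a sketch assuming a browser DOM; not required by this document):

// A minimal sketch (assumes a browser DOM): render a Localizable value by
// copying its members onto the element's lang and dir attributes, so that
// "auto" defers to the user agent's first-strong direction detection.
function renderLocalizable(
  target: HTMLElement,
  loc: { value: string; lang?: string; dir?: "auto" | "ltr" | "rtl" }
): void {
  target.textContent = loc.value;
  if (loc.lang) target.setAttribute("lang", loc.lang);
  target.setAttribute("dir", loc.dir ?? "auto");
}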
Acknowledgements
The Internationalization (I18N) Working Group would like to thank
the following contributors to this document:
Mati Allouche,
David Baron,
Ivan Herman,
Tobie Langel,
Sangwhan Moon,
Felix Sasaki,
Najib Tounsi,
and many others.
The following pages formed the initial basis of this document: