This document describes the best practices for identifying language and base direction for strings used on the Web.

We welcome comments on this document, but to make it easier to track them, please raise separate issues for each comment, and point to the section you are commenting on using a URL.

Introduction

This document was developed as a result of observations by the Internationalization Working Group over a series of specification reviews related to formats based on JSON, WebIDL, and other non-markup data languages. Unlike markup formats, such as XML, these data languages generally do not provide extensible attributes and were not conceived with built-in language or direction metadata.

The concepts in this document are applicable any time strings are used on the Web, whether as part of a formalised data structure or where they simply originate from JavaScript scripting or any stored list of strings.

Natural language information on the Web depends on and benefits from the presence of language and direction metadata. Along with support for Unicode, mechanisms for including and specifying the base direction and the natural language of spans of text are one of the key internationalization considerations when developing new formats and technologies for the Web.

Markup formats, such as HTML and XML, as well as related styling languages, such as CSS and XSL, are reasonably mature and provide support for the interchange and presentation of the world's languages via built-in features. Strings and string-based data formats need similar mechanisms in order to ensure complete and consistent support for the world's languages and cultures.

Terminology

This section defines terminology necessary to understand the contents of this document. Most of the terms defined here are specific to this document. Terminology borrowed from other Internationalization documents has a link to the original definition.

A producer is any process where natural language string data is created for later storage, processing, or interchange.

A consumer is any process that receives natural language strings, either for display or processing.

A serialization agreement (or "agreement" for short) is the common understanding between a producer and consumer about the serialization of string metadata: how it is to be understood, serialized, read, transmitted, removed, etc.

Language negotiation is any process which selects or filters content based on language. Usually this implies selecting content in a single language (or falling back to some meaningful default language that is available) by finding the best matching values when several languages or locales [[LTLI]] are present in the content. Some common language negotiation algorithms include the Lookup algorithm in [[BCP47]] or the BestFitMatcher in [[ECMA-402]].
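As an illustration, the core of the [[BCP47]] Lookup algorithm can be sketched as follows. This is a simplified sketch: it handles basic subtag truncation but not wildcard ranges, and the function and parameter names are illustrative.

```javascript
// Simplified sketch of BCP 47 "Lookup": progressively truncate the
// requested language range until it matches an available tag.
function lookup(requested, available, fallback) {
  const avail = new Map(available.map(t => [t.toLowerCase(), t]));
  let range = requested.toLowerCase();
  while (range) {
    if (avail.has(range)) return avail.get(range);
    const cut = range.lastIndexOf("-");
    if (cut === -1) break;
    range = range.slice(0, cut);
    // also drop a dangling single-character subtag (e.g. "-x")
    if (range.length > 2 && range[range.length - 2] === "-") {
      range = range.slice(0, range.length - 2);
    }
  }
  return fallback; // the default language for the negotiation
}
```

For example, a request for `zh-Hant-TW` against available content in `zh` and `zh-Hant` selects `zh-Hant`.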

LTR stands for "left-to-right" and refers to the inline base direction of left-to-right [[UAX9]]. This is the base text direction used by languages whose starting character progression begins on the left side of the page in horizontal text. It's used for scripts such as Latin, Cyrillic, Devanagari, and many others.

RTL stands for "right-to-left" and refers to the inline base direction of right-to-left [[UAX9]]. This is the base text direction used by languages whose starting character progression begins on the right side of the page in horizontal text. It's used for a variety of scripts which include Arabic, Hebrew, N'Ko, Adlam, Thaana, and Syriac among others.

Bidi isolation often needs to be applied to a range of text in order to prevent the automatic rules of the Unicode Bidirectional Algorithm incorrectly ordering that content in relation to the surrounding text. For example, numbers following right-to-left text in memory are automatically positioned to the left of that text by the Bidi Algorithm, but sometimes need to appear to the right. Another example occurs when lists of RTL items occur in a LTR sentence: the Bidi Algorithm will automatically assume that the order of items in the list should be "3 ,2 ,1", but actually what's needed is "1, 2, 3". In HTML, bidi isolation can be applied to a range of text by enclosing it in an element with a dir attribute. In plain text there are Unicode formatting characters that can do the job. These mechanisms remove unwanted 'spillover effects'.
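In plain text, the Unicode isolating controls can be applied programmatically. The following sketch (illustrative only) wraps a string in the appropriate isolate pair:

```javascript
// Wrap a string in Unicode isolating controls to prevent spill-over
// effects on surrounding text. U+2066 LRI, U+2067 RLI, and U+2068 FSI
// open an isolate; U+2069 PDI closes it.
function isolate(text, dir) {
  const open =
    dir === "ltr" ? "\u2066" :   // LEFT-TO-RIGHT ISOLATE
    dir === "rtl" ? "\u2067" :   // RIGHT-TO-LEFT ISOLATE
    "\u2068";                    // FIRST STRONG ISOLATE (direction unknown)
  return open + text + "\u2069"; // POP DIRECTIONAL ISOLATE
}
```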

First-strong detection is an algorithm that looks for the first strongly-directional character in a string, and then uses that to guess at the appropriate base direction for the string as a whole. Unicode code points are associated with properties relating to text direction: generally, letters in right-to-left scripts such as Arabic and Hebrew have a strong RTL direction, whereas Latin and Han characters have a strong LTR direction. Other characters, such as punctuation, only have a weak intrinsic directionality, and the actual directionality is determined according to the context in which they are found.
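A minimal sketch of first-strong detection might look like the following. This is illustrative only: it tests a small subset of the strongly RTL scripts rather than the Bidi_Class property, and it ignores the isolate-skipping rules of the full algorithm.

```javascript
// Guess a string's base direction from its first strongly-directional
// character. NOTE: the RTL script list is a small illustrative subset;
// a real implementation should consult the Unicode Bidi_Class property.
const RTL_CHAR = /[\p{Script=Arabic}\p{Script=Hebrew}\p{Script=Syriac}\p{Script=Thaana}\p{Script=Nko}\p{Script=Adlam}]/u;
const STRONG_CHAR = /\p{L}/u; // letters; digits and punctuation are skipped

function firstStrongDirection(s) {
  for (const ch of s) {
    if (RTL_CHAR.test(ch)) return "rtl";    // test RTL first: these are letters too
    if (STRONG_CHAR.test(ch)) return "ltr";
  }
  return null; // no strong character found: direction unknown
}
```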

Base direction determines the general arrangement and progression of content when bidirectional text is displayed. The Unicode Bidirectional Algorithm or UBA [[UAX9]] is primarily focused on arranging adjacent characters, based on character properties. Base direction works at a higher level, and dictates (a) the visual order and direction in which runs of strongly-typed LTR and RTL characters are displayed, and (b) where there are weakly-typed characters such as punctuation, the placement of those items relative to the other content.

Metadata is information about data defined in terms of function, form and scope. In this document, the function of metadata is to express information about direction and language. The form for direction metadata is described in [[[#bidi-approaches]]], the form for language metadata is described in [[[#language-approaches]]]. In this document, the scope for both types of metadata is a string or a set of strings. In the absence of direction or language metadata, defaults apply, see [[[#resource_wide_defaults]]].

If you are unfamiliar with bidirectional or right-to-left text, there is a basic introduction here. This will give you a basic grasp of how the Unicode Bidirectional Algorithm works and the interplay between it and the base direction, which will stand you in good stead for reading this document. Additional materials can be found in the Internationalization Working Group's Techniques Index.

Natural Language The spoken, written, or signed communications used by human beings. [[LTLI]]

Syntactic Content Any text in a document format or protocol that belongs to the structure of the format or protocol. [[CHARMOD-NORM]]

User-Supplied Value Unreserved syntactic content in a vocabulary that is assigned by users, as distinct from reserved keywords in a given format or protocol. [[CHARMOD-NORM]]

Localizable Content Document contents intended as human-readable text, as distinct from any of the surrounding or embedded syntactic content that forms part of the document structure. Note that syntactic content can have localizable content embedded in it, such as when an [[HTML]] img element has an alt attribute containing a description of the image. [[CHARMOD-NORM]]

In this document, the term natural language is usually used to refer to the portions of a document or protocol intended for human consumption. The term localizable content is used to refer to the natural language content of formal languages, protocol syntaxes and the like, as distinct from syntactic content or user-supplied values.

The String Lifecycle

It's not possible to consider alternatives for handling string metadata in a vacuum: we need to establish a framework for talking about string handling and data formats.

Producers

A string can be created in a number of ways, including a content author typing strings into a plain text editor, text message, or editing tool; or a script scraping text from web pages; or acquisition of an existing set of strings from another application or repository. In the data formats under consideration in this document, many strings come from back end data repositories or databases of various kinds. Sources of strings may provide an interface, API, or metadata that includes information about the base direction and language of the data. Some also provide a suitable default for when the direction or language is not provided or specified. In this document, the producer of a string is the source, be it a human or a mechanism, that creates or provides a string for storage or transmission.

When a string is created, it's necessary to (a) detect or capture the appropriate language and base direction to be associated with the string, and (b) take steps, where needed, to set the string up in a way that stores and communicates the language and base direction.

For example, in the case of a string that is extracted from an HTML form, the base direction can be detected from the computed value of the form's field. Such a value could be inherited from an earlier element, such as the html element, or set using markup or styling on the input element itself. The user could also set the direction of the text by using keyboard shortcut keys to change the direction of the form field. The dirname attribute provides a way of automatically communicating that value with a form submission.

Similarly, language information in an HTML form would typically be inherited from the lang attribute on the html tag, or an ancestor element in the tree with a lang attribute.
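For example, a form that captures both values might look like the following sketch (the form action and field names are hypothetical):

```html
<!-- Sketch: the page-level lang/dir values are inherited by the form
     field; dirname asks the browser to also submit the field's computed
     direction, producing e.g. "comment=...&comment.dir=rtl" -->
<html lang="ar" dir="rtl">
  <body>
    <form method="post" action="/submit">
      <input type="text" name="comment" dirname="comment.dir">
    </form>
  </body>
</html>
```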

If the producer of the string is receiving the string from a location where it was stored by another producer, and where the base direction and language have already been established, the producer needs to recognize that these values are already set, and understand how to convert or encode that information for its consumers.

Consumers

A consumer is an application or process that receives a string for processing and possibly places it into a context where it will be exposed to a user. For display purposes, it must ensure that the base direction and language of the string is correctly applied to the string in that context. For processing purposes, it must at least persist the language and direction and may need to use the language and direction data in order to perform language-specific operations.

Displaying the string usually involves applying the base direction and language by constructing additional markup, adding control codes, or setting display properties. This indicates to rendering software the base direction or language that should be applied to the string in this display context to get the string to appear correctly. For both language and direction, it must make clear the boundaries of the range of text to which the metadata applies. For text direction, it must also isolate embedded strings from the surrounding text to avoid spill-over effects of the bidi algorithm [[UAX9]].
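For instance, a consumer targeting HTML output might apply the metadata as sketched below. The element and attribute choices shown are one option among several; the function name is illustrative.

```javascript
// Sketch: apply language and direction metadata when inserting a string
// into HTML output. The bdi element provides bidi isolation; dir="auto"
// falls back to first-strong heuristics when no direction is known.
function toHtml(value, lang, dir) {
  const esc = s => s.replace(/[&<>"]/g,
    c => ({ "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;" }[c]));
  const langAttr = lang ? ` lang="${esc(lang)}"` : "";
  return `<bdi${langAttr} dir="${dir || "auto"}">${esc(value)}</bdi>`;
}
```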

Note that a consumer of one document format might be a producer of another document format.

Serialization Agreements

Between any producer and consumer, there needs to be an agreement about what the document format contains and what the data in each field or attribute means. Any time a producer of a string takes special steps to collect and communicate information about the base direction or language of that string, it must do so with the expectation that the consumer of the string will understand how the producer encoded this information.

If no action is taken by the producer, the consumer must still decide what rules to follow in order to decide on the appropriate base direction and language, even if it is only to provide some form of default value.

In some systems or document formats, the necessary behaviour of the producers and consumers of a string are fully specified. In others, such agreements are not available; it is up to users to provide an agreement for how to encode, transmit, and later decode the necessary language or direction information. Low level specifications, such as JSON, do not provide a string metadata structure by default, so any document formats based on these need to provide the "agreement" themselves.

Strings that are not localizable content

The Web uses strings and character sequences to encode most data. Leaving aside different data types (such as numbers, time values, or binary data serializations such as base64), there are still values that are defined as using a string data type but which are not intended for use as natural language data values. For example, the syntactic content defined by a specification, such as the reserved keywords in CSS or the names of the various definitions in a WebIDL document, are not part of the localizable content of their respective document formats or protocols.

Many specifications also allow users to provide user-supplied values inside of a given namespace or document format. For example, SSIDs on a Wi-Fi network are user-defined. So too are class names in a CSS stylesheet. Most specifications allow (and are encouraged to allow) a wide range of Unicode characters in these names. Most users choose values that are recognizable as words in one or another natural language, as doing so makes the values easier to work with. However, even though these strings consist of words in a natural language, these types of strings are not considered localizable content and do not need to be encumbered with additional metadata related to language or base direction. Usually they are merely identifiers that enable a computer to match the values.

A sometimes-useful test is that if replacing the identifier with an arbitrary string such as tK0001.37B would still be allowed, functional, and "normal", then it's not localizable content.

For example, in the base example below, all of the keys in the JSON document (id, title, authors, language, publisher, and so on) are syntactic content. The data values, such as the ISBN, the language tag, and the publication date are also syntactic content. Only the actual book title, the author's name, and the publisher's name are natural language data values and thus localizable content.

Best Practices, Recommendations, and Gaps

This section consists of the Internationalization (I18N) Working Group's set of best practices for identifying language and base direction in data formats on the Web. In some cases, there are gaps in existing standards, where the recommendations of the I18N WG require additional standardization or there might be barriers to full adoption.

The main issue is how to establish a common serialization agreement between producers and consumers of data values so that each knows how to encode, find, and interpret the language and base direction of each data field. The use of metadata for supplying both the language and base direction of natural language string fields ensures that the necessary information is present, can be supplied and extracted with the minimal amount of processing, and does not require producers or consumers to scan or alter the data.

This document describes a number of approaches for identifying language and direction information for strings. These include the following:

- resource-wide defaults for language and base direction
- field-based metadata or string datatypes for individual strings
- heuristics, such as first-strong detection, applied by the consumer
- RLM/LRM characters at the start of a string
- interpolation of the base direction from language metadata

The use of some of the above precludes the use of others, and in some cases some of the above approaches may need to be specified together to cater for fallback situations.

General best practices

Specifications SHOULD be careful to distinguish syntactic content, including user-supplied values, from localizable content.

Specifications MUST NOT treat syntactic content values as "displayable".

While the value of a syntactic content item or user-supplied value will often be meaningful, implementers should be reminded that in most instances these values need to be mapped to localizable display strings for presentation to the user, particularly in cases where the values are enumerated in advance.

Specifications SHOULD NOT use the Unicode "language tag" characters (code points U+E0000 to U+E007F) for language identification.

[[Unicode]] says that the ... use of tag characters to convey language tags is strongly discouraged and that the use of the character U+E0001 LANGUAGE TAG is strongly discouraged. The only current use of characters in this block of Unicode is to form various flag emoji.

Resource-wide defaults

Many resources use only a single language and have a consistent base text direction. For efficiency, the following are best practices:

Define a rule or a field to provide the default language and base direction for all strings in a given resource.

Specifications MUST NOT assume that a document-level default is sufficient.

Document level defaults, when combined with per-field metadata, can reduce the overall complexity of a given document instance, since the language and direction values don't have to be repeated across many fields. However, they do not solve all language or directionality problems, and so it must be possible to override the default on a string-by-string basis, where necessary.

Specify that, in the absence of other information, the default direction and default language are unknown.

Explicit metadata, if available, trumps the need for heuristics to be applied. This is logical, since the heuristic method cannot reliably deduce the necessary direction on its own, and if metadata has been explicitly provided there is an indication that it is intended to be authoritative.

It is essential for a consumer to know that language and direction are unknown quantities in order for them to know when to apply fallback strategies to the data (this could include language-detection, or first-strong heuristics for direction). In particular, the default direction should not be set to LTR, since that would override the need for first-strong detection, which is more appropriate for strings written in a RTL script.

Use of [[JSON-LD]] @context and the built-in @language attribute is RECOMMENDED as a document level default.

For document formats that use it, [[JSON-LD]] includes some data structures that are helpful in assigning language (but not base direction) metadata to collections of strings (including entire resources). Notably, it defines what it calls “string internationalization” in the form of a context-scoped @language value which can be associated with blocks of JSON or within individual objects. There is no definition for base direction, so the @context mechanism does not currently address all concerns raised by this document.
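For example, a context-scoped default language might look like the following sketch (the data values are hypothetical):

```json
{
  "@context": {
    "@language": "ar"
  },
  "title": "تصميم و إنشاء مواقع الويب",
  "publisher": "مكتبة"
}
```

Here both string values inherit the language `ar` from the context, without per-field metadata.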

String-specific language information

Use field-based metadata or string datatypes to indicate the language and the base direction for individual localizable content values.

Low-level support for natural language string metadata is widespread because the use of metadata for storage and interchange of the language of data values is long-established and widely supported in the basic infrastructure of the Web. This includes language attributes in [[XML]] and [[HTML]]; string types in schema languages (e.g. [[xmlschema11-2]]) or the various RDF specifications including [[JSON-LD]]; or protocol- or document format-specific provisions for language.

Use of [[JSON-LD]] plain string literals is RECOMMENDED as a way to provide string-specific language information.

Some existing datatypes, such as [[RDF-PLAIN-LITERAL]], allow language metadata to be serialized as part of a string value. Examples include:

"title": "تصميم و إنشاء مواقع الويب@ar",

"tags": [ "HTML@en", "CSS@en", "تصميم المواقع@ar" ]

"id": "978-111887164-5@und"

String-specific directional information

If a resource-wide setting is available, specify field-based metadata to override the default.

First-strong heuristics are ineffective when a default direction has been set for all strings, since metadata overrides (intentionally) the value of the first-strong character. Therefore it is necessary to use explicitly provided field data to override the default. Even if an RLM character has been prepended to a string, the default metadata overrides it.

The use of metadata for indicating base direction is also preferred, because it avoids requiring the consumer to interpolate the direction using methods such as first strong or use of methods which require modification of the data itself (such as the insertion of RLM/LRM markers or bidirectional controls).

Schema languages, such as the RDF suite of specifications, have no in-built mechanism for associating base direction metadata with natural language string values.

There is no built-in attribute for base direction in [[JSON-LD]]. There needs to be a corresponding built-in attribute (e.g. a dir) or de facto convention for indicating document-level base direction.

For the case where the resource-wide setting is not available, specify that consumers should use first-strong heuristics to identify the base direction of strings.

For the case where the resource-wide setting is available but not used, specify that consumers should fall back to first-strong heuristics to identify the base direction of strings.

If metadata is not available, consumers of strings should use heuristics, preferably based on the Unicode Standard's first-strong detection algorithm, to detect the base direction of a string.

The first-strong algorithm looks for the first strongly-directional character in a string (skipping certain preliminary substrings), and assumes that it represents the base direction for the string as a whole. However, the first strong directional character doesn't always coincide with the required base direction for the string as a whole, so it should be possible to provide metadata, where needed, to address this problem.

If relying on first-strong heuristics, encourage content developers to use RLM/LRM at the beginning of a string where it is necessary to force a particular base direction, but do not prepend one of these characters to existing strings.

Do not rely on the availability of RLM/LRM formatting characters in most cases.

If string data is being provided by users or content developers in web forms or other simple environments, users may not be able to enter these formatting characters. In fact, most users will probably be unaware that such characters exist, or how to use them. A web form that sets the base direction of its input fields (which it should) also makes these characters unnecessary for the text to display correctly in the form itself.

Specifications SHOULD NOT allow a base direction to be interpolated from available language metadata unless direction metadata is not available and cannot otherwise be provided.

Not all resources make use of the available metadata mechanisms. The script subtag of a language tag (or the "likely" script subtag based on [[BCP47]] and [[LDML]]) can sometimes be used to provide a base direction when other data is not available. Note that using language information is a "last resort" and specifications SHOULD NOT use it as the primary way of indicating direction: make the effort to provide for metadata.
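As an illustration of this last-resort technique, the likely script for a language tag can be obtained via [[ECMA-402]]'s Intl.Locale. The RTL script list below is a partial, illustrative subset, and the function name is hypothetical:

```javascript
// Last-resort sketch: derive a base direction from a language tag by
// maximizing it to find the likely script subtag ([[LDML]] likelySubtags).
const RTL_SCRIPTS = new Set(["Arab", "Hebr", "Thaa", "Nkoo", "Adlm", "Syrc"]);

function directionFromLanguage(tag) {
  try {
    const script = new Intl.Locale(tag).maximize().script;
    if (!script) return null;              // no likely script known
    return RTL_SCRIPTS.has(script) ? "rtl" : "ltr";
  } catch {
    return null;                           // not a well-formed tag
  }
}
```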

Other approaches

For [[WebIDL]]-defined data structures, define each localizable content (natural language text) field as a Localizable.

This combines both language and direction metadata and, if consistently adopted, makes interchange between different formats easier: by naming field attributes in the same way and adopting the same semantics, different specifications can more easily extract values from or add values into resources from other data sources.
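The shape of such a definition can be sketched in [[WebIDL]] roughly as follows; consult the I18N WG's Localizable definition for the normative version:

```webidl
dictionary Localizable {
  DOMString value;  // the natural language string itself
  DOMString lang;   // a [[BCP47]] language tag, e.g. "ar"
  DOMString dir;    // the base direction: "ltr" or "rtl"
};
```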

Specifications MUST NOT require the production or use of paired bidi controls.

Another way to say this is: do not require implementations to modify data passing through them. Unicode bidi control characters might be found in a particular piece of string content, where the producer or data source has used them to make the text display properly. That is, they might already be part of the data. Implementations should not disturb any controls that they find—but they shouldn't be required to produce additional controls on their own.

Specifications SHOULD recommend the use of language indexing when Localizable strings can be supplied in multiple languages for the same value.

Producers sometimes need to supply multiple language values (see Localization Considerations) for the same content item or data record. One use for this is language negotiation by the consumer.

[[JSON-LD]] language indexing should be modified to support the use of Localizable values in language indexing.

Defining Bidirectional Keywords in Specifications

A specification for a document format or protocol that includes natural language text will need to define a data field or attribute to store the direction of that natural language content. These definitions need to be consistent across the Web in order to ensure interoperability, as consumers of one document format will need to map the base direction to fields in documents that they produce or control the base direction in text fields for display. This section describes how to provide such a definition along with the specific content to use.

There are two common use cases for defining content direction:

A field direction value is a data field stored or exchanged with a natural language string giving its base direction.

A display direction attribute is a field or value, usually represented by an attribute in markup languages, that controls the base direction of a span of content.

Use the field name direction when defining a field direction value.

The name direction is preferred for data values. The name dir is an acceptable alternative.

Use the field name dir when defining a display direction attribute.

The name dir is preferred for an attribute, such as in markup languages. Using direction for an attribute is not recommended, since it is long and relatively uncommon for this use case. Note that [[HTML]] has a built-in dir attribute. A dir attribute should have scope within a document and should be defined to provide bidi isolation.

Define the values of a field direction to include and be limited to ltr and rtl.

Define the values of any display direction attribute to include and be limited to the values ltr, rtl, and auto.

The value auto SHOULD NOT be used as field direction value: omitting the direction is preferred when the content direction is not known.

The value ltr indicates a base direction of left-to-right, in exactly the same manner indicated by CSS writing modes [[CSS-WRITING-MODES-4]].

The value rtl indicates a base direction of right-to-left, in exactly the same manner indicated by CSS writing modes [[CSS-WRITING-MODES-4]].

The value auto indicates that the user agent uses the first strong character of the content to determine the base direction using the algorithm for auto found in [[HTML]].

The heuristic used by auto just looks at the first character with a strong directionality, in a manner analogous to the Paragraph Level determination in the bidirectional algorithm [[UAX9]]. Authors are urged to only use this value as a last resort when the direction of the text is truly unknown and no better server-side heuristic can be applied.

Requirements and Use Cases

For a detailed set of example use cases, please read the article Use cases for bidi and language metadata on the Web. This section summarises some key points related to the need for language and direction metadata.

Why is this important?

Information about the language of content is important when processing and presenting localizable content for a variety of reasons. When language information is not present, the resulting degradation in appearance or functionality can frustrate users, render the content unintelligible, or disable important features. Some of the affected processes include:

- selecting appropriate fonts and rendering the text correctly
- spell checking, hyphenation, and other language-sensitive text processing
- voice selection for text-to-speech
- sorting, searching, and matching of string values

Similarly, direction metadata is important to the Web. When a string contains text in a script that runs right-to-left (RTL), it must be possible to eventually display that string correctly when it reaches an end user. For that to happen, it is necessary to establish what base direction needs to be applied to the string as a whole. The appropriate base direction cannot always be deduced by simply looking at the string; even where it is possible, the producer and consumer of the string need to use the same heuristics to interpret the direction.

Static content, such as the body of a Web page or the contents of an e-book, often has language or direction information provided by the document format or as part of the content metadata. Data formats found on the Web generally do not supply this metadata. Base specifications such as Microformats, WebIDL, JSON, and more, have tended to store natural language text in string objects, without additional metadata.

This places a burden on application authors and data format designers to provide the metadata on their own initiative. When standardized formats do not address the resulting issues, the result can be that, while the data arrives intact, its processing or presentation cannot be wholly recovered.

In a distributed Web, any consumer can also be a producer for some other process or system. Thus, a given consumer might need to pass language and direction metadata from one document format (and using one agreement) to another consumer using a different document format. Lack of consistency in representing language and direction metadata in serialization agreements poses a threat to interoperability and a barrier to consistent implementation.

An example

Suppose that you are building a Web page to show a customer's library of e-books. The e-books exist in a catalog of data and consist of the usual data values. A JSON file for a single entry might look something like:

{
    "id": "978-111887164-5",
    "title": "HTML و CSS: تصميم و إنشاء مواقع الويب",
    "authors": [ "Jon Duckett" ],
    "language": "ar",
    "pubDate": "2008-01-01",
    "publisher": "مكتبة",
    "coverImage": "https://example.com/images/html_and_css_cover.jpg",
    // etc.
},

Each of the above is a data field in a database somewhere. There is even information about what language the book is in: ("language": "ar").

A well-internationalized catalog would include additional metadata to what is shown above. That is, for each of the fields containing localizable content, such as the title and authors fields, there should be language and base direction information stored as metadata. (There may be other values as well, such as pronunciation metadata for sorting East Asian language information.) These metadata values are used by consumers of the data to influence the processing and enable the display of the items in a variety of ways. As the JSON data structure provides no place to store or exchange these values, it is more difficult to construct internationalized applications.
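A sketch of what such metadata might look like for the title and publisher fields follows. The field names value, lang, and dir are one possible shape (following the Localizable pattern); others are possible:

```json
{
  "id": "978-111887164-5",
  "title": {
    "value": "HTML و CSS: تصميم و إنشاء مواقع الويب",
    "lang": "ar",
    "dir": "rtl"
  },
  "publisher": {
    "value": "مكتبة",
    "lang": "ar",
    "dir": "rtl"
  }
}
```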

One work-around might be to encode the values using a mix of HTML and Unicode bidi controls, so that a data value might look like one of the following:

// following examples are NOT recommended
// contains HTML markup
"title": "<span lang='ar' dir='rtl'>HTML و CSS: تصميم و إنشاء مواقع الويب</span>",
// contains LRM as first character
"authors": [ "\u200eJon Duckett" ], 

But JSON is a data interchange format: the content might not end up with the title field being displayed in an HTML context. The JSON above might very well be used to populate, say, a local data store which uses native controls to show the title, and these controls will treat the HTML as string contents.

Producers and consumers of the data might not expect to introspect the data in order to supply or remove the extra data or to expose it as metadata. Most JSON libraries don't know anything about the structure of the content that they are serializing. Producers want to generate the JSON file directly from a local data store, such as a database. Consumers want to store or retrieve the value for use without additional consideration of the content of each string. In addition, either producers or consumers can have other considerations, such as field length restrictions, that are affected by the insertion of additional controls or markup. Each of these considerations places a special burden on implementers to create arbitrary means of serializing, deserializing, managing, and exchanging the necessary metadata, with interoperability as a casualty along the way.

(As an aside, note that markup such as that shown in the example above is actually needed to make the title, as well as the inserted markup itself, display correctly in the browser.)

Isn't Unicode enough?

[[Unicode]] and its character encodings (such as UTF-8) are key elements of the Web and its formats. They provide the ability to encode and exchange text in any language consistently throughout the Internet. However, Unicode by itself does not guarantee perfect presentation and processing of natural language text, even though it does guarantee perfect interchange.

Several features of Unicode are sometimes suggested as part of the solution to providing language and direction metadata. Specifically, Unicode bidi controls are suggested for handling direction metadata. In addition, there are "tag" characters in the U+E0000 block of Unicode originally intended for use as language tags (although this use is now deprecated).

There are a variety of reasons why the addition of characters to data in an interchange format is not a good idea. These include: the characters alter the identity and length of the string; the characters can be damaged or lost through truncation or handling errors; support for interpreting the characters is far from universal; and every layer of the implementation stack has to handle the characters consistently.

This last consideration is important to call out: document formats are often built and serialized using several layers of code. Libraries, such as general purpose JSON libraries, are expected to store and retrieve faithfully the data that they are passed. Higher-level implementations also generally concern themselves with faithful serialization and de-serialization of the values that they are passed. Any process that alters the data itself introduces variability that is undesirable. For example, consider an application's unit test that checks if the string returned from the document is identical to the one in the data catalog used to generate the document. If bidi controls, HTML markup, or Unicode language tags have been inserted, removed, or changed, the strings might not compare as equal, even though they would be expected to be the same.
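The identity problem can be illustrated in Python (a hypothetical check; the RLM insertion stands in for any control-character augmentation):

```python
title = "HTML و CSS: تصميم و إنشاء مواقع الويب"
stored = "\u200F" + title            # a producer silently prepends an RLM marker

# A round-trip identity check of the kind a unit test might perform now fails,
# even though the two strings are "the same" to a human reader:
print(stored == title)               # False
print(len(stored) - len(title))      # 1: the length has changed as well
```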

What consumers need to do to support direction

Given the use cases for bidirectional text, it will be clear that a consumer cannot simply insert a string into a target location without some additional work or preparation taking place, first to establish the appropriate base direction for the string being inserted, and secondly to apply bidi isolation around the string.

This requires the presence of markup or Unicode formatting controls around the string. If the string's base direction is opposite that into which it is being inserted, the markup or control codes need to tightly wrap the string. Strings that are inserted adjacent to each other all need to be individually wrapped in order to avoid the spillover issues we saw in the previous section.

[[HTML5]] provides base direction controls and isolation for any inline element when the dir attribute is used, or when the bdi element is used. When inserting strings into plain text environments, isolating Unicode formatting characters need to be used. (Unfortunately, support for the isolating characters, which the Unicode Standard recommends as the default for plain text/non-markup applications, is still not universal.)
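The wrapping step for plain-text contexts can be sketched as follows (a minimal illustration; the isolate helper is ours, not part of any standard API):

```python
# Unicode isolating formatting characters
LRI, RLI, FSI, PDI = "\u2066", "\u2067", "\u2068", "\u2069"

def isolate(s, direction=None):
    """Wrap a string in isolating controls before inserting it into plain text.

    When the base direction is known, LRI or RLI is used; otherwise FSI asks
    the display engine to apply first-strong heuristics to the isolated run.
    """
    opener = {"ltr": LRI, "rtl": RLI}.get(direction, FSI)
    return opener + s + PDI

# Adjacent strings are wrapped individually to prevent spillover between them:
line = ", ".join([isolate("كتاب جديد", "rtl"), isolate("Jon Duckett", "ltr")])
```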

The trick is to ensure that the direction information provided by the markup or control characters reflects the base direction of the string.

Approaches Considered for Identifying the Base Direction

The fundamental problem for bidirectional text values is how a consumer of a string will know what base direction should be used for that string when it is eventually displayed to a user. Note that some of these approaches for identifying or estimating the base direction have utility in specific applications and are in use in different specifications such as [[HTML5]]. The issue here is which are appropriate to adopt generally and specify for use as a best practice in document formats.

First-strong property detection

This approach is NOT recommended when used alone, but IS recommended as a fallback in combination with other approaches.

How it works

A producer doesn't need to do anything.

The string is stored as it is.

Consumers must look for the first character in the string with a strong Unicode directional property, and set the base direction to match it. They then take appropriate action to ensure that the string will be displayed as needed. This is not quite so simple as it may appear, for the following reasons:

  1. Characters at the start of a string without a strong direction (eg. punctuation, numbers, etc) and isolated sequences (ie. sequences of characters surrounded by RLI/LRI/FSI...PDI formatting characters) within a string must be skipped in order to find the first strong character.
  2. The detection algorithm needs to be able to handle markup at the start of the string. It needs to be able to tell whether the markup is just string text, or whether the markup needs to be parsed in the target location – in which case it must understand the markup, and understand any direction-related information that is carried in the markup.
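For plain-text strings, the detection step might be sketched as follows (a simplified illustration: it skips characters without strong direction and skips isolated sequences, but does not attempt to handle markup):

```python
import unicodedata

def first_strong_direction(s, default="ltr"):
    """Estimate base direction from the first strong directional character."""
    isolate_depth = 0
    for ch in s:
        if ch in "\u2066\u2067\u2068":        # LRI, RLI, FSI open an isolate
            isolate_depth += 1
            continue
        if ch == "\u2069":                     # PDI closes an isolate
            isolate_depth = max(0, isolate_depth - 1)
            continue
        if isolate_depth:                      # isolated sequences are skipped
            continue
        bidi = unicodedata.bidirectional(ch)
        if bidi == "L":                        # strong LTR
            return "ltr"
        if bidi in ("R", "AL"):                # strong RTL (incl. Arabic letters)
            return "rtl"
    return default                             # no strong character found

print(first_strong_direction("مرحبا world"))   # rtl
print(first_strong_direction("#مرحبا"))        # '#' is neutral, so rtl
print(first_strong_direction("123-456"))       # no strong character: ltr
```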

First-strong detection is only needed where the required base direction is not already known. If direction is indicated for a string by metadata, either string-specific or via a resource-wide declaration, then first-strong heuristics should not be invoked. For example, first-strong heuristics would produce the wrong result for a string such as "HTML و CSS: تصميم و إنشاء مواقع الويب". This can be corrected using metadata, the use of which signifies informed intention, and you would not need or want to apply heuristics that would then make the result incorrect.

However, if there is no mechanism for the application of metadata, or if there is such a mechanism but the content developer omitted to use it, then first-strong heuristics can be helpful to establish base direction in many, though not all, cases. The application of strongly-directional formatting characters can help produce correct results for plain text strings such as the example just quoted, but it is not always possible to apply those (see [[[#rlm]]]).

Advantages

Where it is reliable, information about direction can be obtained without any changes to the string, and without the agreements and structures that would be needed to support out-of-band metadata.

Issues

The main problem with this approach is that it produces the wrong result for

  1. strings that begin with a strong character with a different directionality than that needed for the string overall (eg. an Arabic tweet that starts with a hashtag)
  2. strings that don't have a strong directional character (such as a telephone number), which are likely to be displayed incorrectly in a RTL context.
  3. strings that begin with markup, such as span, since the first strong character is always going to be LTR.

In cases where the entire string starts and ends with RLI/LRI/FSI...PDI formatting characters, it is not possible to detect the first strong character by following the Unicode Bidirectional Algorithm. This is because the algorithm requires that bidi-isolated text be excluded from the detection.

If no strong directional character is found in the string, the direction should probably be assumed to be LTR, and the consumer should act on that basis. This has not been tested fully, however.

If a string contains markup that will be parsed by the consumer as markup, there are additional problems. Any such markup at the start of the string must also be skipped when searching for the first strong directional character.

If parseable markup in the string contains information about the intended direction of the string (for example, a dir attribute with the value rtl in HTML), that information should be used rather than relying on first-strong heuristics. This is problematic in a couple of ways: (a) it assumes that the consumer of the string understands the semantics of the markup, which may be ok if there is an agreement between all parties to use, say, HTML markup only, but would be problematic, for example, when dealing with random XML vocabularies, and (b) the consumer must be able to recognise and handle a situation where only the initial part of the string has markup, ie. the markup applies to an inline span of text rather than the string as a whole.


If, however, there is angle bracket content that is intended to be an example of markup, rather than actual markup, the markup must not be skipped – trying to display markup source code in a RTL context yields very confusing results! It isn't clear, however, how a consumer of the string would always know the difference between examples and parseable strings.

Additional notes

Although first-strong detection is outlined in the Unicode Bidirectional Algorithm (UBA) [[UAX9]], it is not the only possible higher-level protocol mentioned for estimating string direction. For example, Twitter and Facebook currently use different default heuristics for guessing the base direction of text – neither use just simple first-strong detection, and one uses a completely different method.

Metadata

This approach is recommended.

By 'metadata' we mean field-based information associated with a specific string or a set of strings in a data format, or information built into a string datatype (see also [[[#dir-approach-new-datatype]]]).

An example would be:

{
    "title": "HTML و CSS: تصميم و إنشاء مواقع الويب",
    "direction": "rtl",
    "language": "ar"
}

Metadata indicating the default direction for all the strings in a resource could also be set using an appropriate field.

How it works

A producer ascertains the base direction of the string and adds that to a metadata field that accompanies the string when it is stored or transmitted.

There are a couple of possible approaches:

  1. Label every string for base direction.
  2. Rely on the consumer to do first-strong detection, and label only those strings which would produce the wrong result (ie. a RTL string that starts with LTR strong characters).

If storing or transmitting a set of strings at a time, it helps to have a field for the resource as a whole that sets a global, default base direction which can be inherited by all strings in the resource. Note that in addition to a global field, you still need the possibility of attaching string-specific metadata fields in cases where a string's base direction is not that of the default. The base direction set on an individual string must override the default.
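For example, a resource-wide default with a per-string override might look like the following sketch (the field names are purely illustrative, not drawn from any particular standard):

```json
{
    "direction": "rtl",
    "books": [
        { "title": "HTML و CSS: تصميم و إنشاء مواقع الويب" },
        { "title": "Learning Web Design", "direction": "ltr" }
    ]
}
```

The first title inherits the resource-level rtl default; the second overrides it.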

Consumers would need to understand how to read the metadata sent with a string, and would need to apply first-strong heuristics in the absence of metadata.

The use of the Localizable dictionary structure is RECOMMENDED for individual values in JSON-based document formats, as it combines both language and direction metadata and, if consistently adopted, makes interchange between different formats easier.

[[JSON-LD]] includes some data structures that are helpful in assigning language (but not base direction) metadata to collections of strings (including entire resources). These gaps in support for pre-built metadata at the resource or item level are one of the key reasons for this document's development.

Advantages

Passing metadata as separate data value from the string provides a simple, effective and efficient method of communicating the intended base direction without affecting the actual content of the string.

If every string is labelled for direction, or the direction for all strings can be ascertained by applying the global setting and any string-specific deviations, it avoids the need to inspect and run heuristics on the string to determine its base direction.

Issues

Out-of-band information needs to be associated with and kept with strings. This may be problematic for some sets of string data which are not part of a defined framework.

In particular, JSON-LD doesn't allow direction to be associated with individual strings in the same way as it works for language.

Augmenting first-strong by inserting RLM/LRM markers

This approach is NOT workable for all situations.

How it works

A producer ascertains the base direction of the string and adds a marker character (either U+200F RIGHT-TO-LEFT MARK (RLM) or U+200E LEFT-TO-RIGHT MARK (LRM)) to the beginning of the string. The marker is not functional, ie. it will not automatically apply a base direction to the string that can be used by the consumer; it is simply a marker.

There are a number of possible approaches:

  1. Add a marker to every string (not recommended).
  2. Rely on the consumer to do first-strong detection, and add a marker to only those strings which would produce the wrong result (eg. a RTL string that starts with LTR strong characters).
  3. Assume a default of LTR (no marker), and apply only RLM markers.
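The second approach above might be sketched like this (an illustration only; the function names are ours, and the producer is assumed to already know the intended direction):

```python
import unicodedata

RLM, LRM = "\u200F", "\u200E"

def first_strong(s):
    """Return 'ltr' or 'rtl' from the first strong character, or None."""
    for ch in s:
        bidi = unicodedata.bidirectional(ch)
        if bidi == "L":
            return "ltr"
        if bidi in ("R", "AL"):
            return "rtl"
    return None

def augment(s, intended):
    """Prepend RLM/LRM only when first-strong detection would mislead."""
    detected = first_strong(s) or "ltr"       # assume LTR when nothing is strong
    if detected == intended:
        return s                              # heuristics already give the answer
    return (RLM if intended == "rtl" else LRM) + s

# An RTL title beginning with Latin characters gains an RLM prefix;
# a purely RTL string is left untouched:
print(augment("HTML و CSS: تصميم مواقع الويب", "rtl").startswith(RLM))  # True
print(augment("تصميم مواقع الويب", "rtl").startswith(RLM))              # False
```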

Consumers apply first-strong heuristics to detect the base direction for the string. The RLM and LRM characters are strongly typed, directionally, and should therefore indicate the appropriate base direction.

As described in [[[#firststrong]]], this approach is not relevant if directional information is provided via metadata.

Advantages

It provides a reliable way of indicating base direction, as long as the producer can reliably apply markers.

In theory, it should be easier to spot the first-strong character in strings that begin with markup, as long as the correct RLM/LRM is prepended to the string.

Issues

If the producer is a human, they could theoretically apply one of these characters when creating a string in order to signal the directionality.

A significant problem with this, especially on mobile devices, is the difficulty of inputting an RLM/LRM character: the keyboards of mobile devices generally do not provide keys for these characters. Perhaps more importantly, because the characters are invisible and because Unicode bidi is complicated, it can be difficult for the user to know how to use the characters effectively. In fact, a large percentage of users don't actually know what these characters are or what they do.

Furthermore, if a person types information into, say, an HTML form in a RTL page or uses shortcut keys to set the direction for the form field, strings will look correct without the need to add RLM/LRM. However, used outside of that context the string would look incorrect unless it is associated with information about the required base direction. Similarly, strings scraped from a web page that has dir=rtl set in the html element would not normally have or need an RLM/LRM character at the start of the string in HTML.

It may be possible for the steps used by a producer to include an examination of the original context of the string for directional information (for example, by testing the computed direction of an HTML form field), followed by automatic insertion of an RLM/LRM mark into the beginning of the string where necessary. An issue with this approach is that it changes the string value and identity. This may also create problems for working with string length or pointer positions, especially if some producers add markers and others don't.

If directional information is contained in markup that will be parsed as such by the consumer (for example, dir=rtl in HTML), the producer of the string needs to understand that markup in order to set or not set an RLM/LRM character as appropriate. If the producer always adds RLM/LRM to the start of such strings, the consumer is expected to know that. If the producer relies instead on the markup being understood, the consumer is expected to understand the markup.

The producer of a string should not automatically apply RLM or LRM to the start of the string, but should test whether it is needed. For example, if there's already an RLM in the text, there is no need to add another. If the context is correctly conveyed by first-strong heuristics, there is no need to add additional characters either. Note, however, that testing whether supplementary directional information of this kind is needed is only possible if the producer has access, and knows that it has access, to the original context of the string. Many document formats are generated from data stored away from the original context. For example, the catalog of books in the original example above is disconnected from the user inputting the bidirectional text.

Paired formatting characters

This approach is NOT recommended.

How it works

A producer ascertains the base direction of the string and adds a directional formatting character (one of U+2066 LEFT-TO-RIGHT ISOLATE (LRI), U+2067 RIGHT-TO-LEFT ISOLATE (RLI), U+2068 FIRST STRONG ISOLATE (FSI), U+202A LEFT-TO-RIGHT EMBEDDING (LRE), or U+202B RIGHT-TO-LEFT EMBEDDING (RLE)) to the beginning of the string, and U+2069 POP DIRECTIONAL ISOLATE (PDI) or U+202C POP DIRECTIONAL FORMATTING (PDF) to the end.

There are a number of possible approaches:

  1. Add the formatting codes to every string.
  2. Rely on the consumer to do first-strong detection, and add a marker to only those strings which would produce the wrong result (eg. a RTL string that starts with LTR strong characters).

Consumers would theoretically just insert the string in the place it will be displayed, and rely on the formatting codes to apply the base direction. However, things are not quite so simple (see below).

There are two types of paired formatting characters. The original set of controls provides the ability to add an additional level of bidirectional "embedding" to the Unicode Bidirectional Algorithm. More recently, Unicode added a complementary set of "isolating" controls. Isolating controls are used to surround a string: the inside of the string is treated as its own bidirectional sequence, and the string is protected against spill-over effects related to any surrounding text. The enclosing context treats the entire surrounded string as a single neutral unit that is ignored for bidi reordering. (Spill-over effects were discussed earlier in this document.)

Embedding controls:

  U+202A LRE LEFT-TO-RIGHT EMBEDDING
  U+202B RLE RIGHT-TO-LEFT EMBEDDING
  U+202C PDF POP DIRECTIONAL FORMATTING (ends an embedding)

Isolating controls:

  U+2066 LRI LEFT-TO-RIGHT ISOLATE
  U+2067 RLI RIGHT-TO-LEFT ISOLATE
  U+2068 FSI FIRST STRONG ISOLATE
  U+2069 PDI POP DIRECTIONAL ISOLATE (ends an isolate)

If paired formatting characters are used, they should be isolating, ie. starting with RLI, LRI, FSI, and not with RLE or LRE.

Advantages

There are no real advantages to using this approach.

Issues

This approach is only appropriate if it is acceptable to change the value of the string. In addition to possible issues such as changed string length or pointer positions, this approach runs a real and serious risk of one of the paired characters getting lost, either through handling errors, or through text truncation, etc.

A producer and a consumer of a string would need to recognise and handle a situation where a string begins with a paired formatting character but doesn't end with it because the formatting characters only describe a part of the string.

Unicode specifies a limit to the number of embeddings that are effective, and embeddings could build up over time to exceed that limit.

Consuming applications would need to recognise and appropriately handle the isolating formatting characters. At the moment such support for RLI/LRI/FSI is far from pervasive.

This approach would disqualify the string from being amenable to UBA first-strong heuristics if used by a non-aware consumer, because the Unicode bidi algorithm is unable to ascertain the base direction for a string that starts with RLI/LRI/FSI and ends with PDI. This is because the algorithm skips over isolated sequences and treats them as a neutral character. A consumer of the string would have to take special steps, in this case, to uncover the first-strong character.

Script subtags

This approach is only recommended as a workaround for situations that prevent the use of metadata.

How it works

A producer supplies language metadata for strings, specifying, where necessary, the script in use.

There are a number of possible approaches:

  1. Label every string for language, including a script subtag as needed. Consumers may need to compute the script subtag when the producer does not provide one.
  2. It might be reasonable to assume a default of LTR for all strings unless marked with a language tag whose script subtag (either present or implied) indicates RTL.
  3. Alternatively, limit the use of script subtag metadata to situations where first-strong heuristics are expected to fail — provided that such cases can be identified, and appropriate action taken by the producer (not always reliable). Consumers would then need to use first-strong heuristics in the absence of a script subtag in order to identify the appropriate base direction. The use of script subtags should not, however, be restricted to strings that need to indicate direction; it is perfectly valid to associate a script subtag with any string.
  4. Set a default language for a set of strings at a higher level, but provide a mechanism to override that default for a given string where needed.

Consumers extract the script subtag from the language tag associated with each string, computing the string's base direction as necessary. Script subtags associated with RTL scripts are used to assign a base direction of RTL to their associated strings.

Language information MUST use [[BCP47]] language tags. The portion of the language tag that carries the information is the script subtag, not the primary language subtag. For example, Azeri may be written LTR (with the Latin or Cyrillic scripts) or RTL (with the Arabic script). Therefore, the subtag az is insufficient to clarify intended base direction. A language tag such as az-Arab (Azeri as written in the Arabic script), however, can generally be relied upon to indicate that the overall base direction should be RTL.
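A consumer's mapping from script subtag to base direction might be sketched as follows (the set of RTL scripts shown is illustrative rather than exhaustive, and the helper name is ours):

```python
# Script subtags associated with right-to-left scripts (illustrative subset)
RTL_SCRIPTS = {"Arab", "Hebr", "Thaa", "Syrc", "Nkoo", "Adlm", "Rohg"}

def direction_from_tag(tag):
    """Derive a base direction from the script subtag of a BCP47 tag, if any."""
    for subtag in tag.split("-")[1:]:
        if len(subtag) == 4 and subtag.isalpha():   # script subtags are 4 letters
            return "rtl" if subtag.title() in RTL_SCRIPTS else "ltr"
    return None  # no script subtag: the consumer must compute or guess one

print(direction_from_tag("az-Arab"))   # rtl
print(direction_from_tag("az-Latn"))   # ltr
print(direction_from_tag("az"))        # None
```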

Advantages

There is no need to inspect or change the string itself.

This approach avoids the issues associated with first-strong detection when the first-strong character is not indicative of the necessary base direction for the string, and avoids issues relating to the interpretation of markup.

Note that a string that begins with markup that sets a language for the string text content (eg. <cite lang="zh-Hans">) is not problematic here, since that language declaration is not expected to play into the setting of the base direction.

Issues

The use of metadata as outlined above is a much better approach, if it is available. This script-related approach is only for use where that approach is unavailable, for legacy reasons.

There are many strings which are not language-specific but which absolutely need to be associated with a particular base direction for correct consumption. For example, MAC addresses inserted into a RTL context need to be displayed with a LTR overall base direction and isolation from the surrounding text. It's not clear how to distinguish these cases from others (in a way that would be feasible when using direction metadata). Special language tags, such as zxx (Non-Linguistic), exist for identifying this type of content, but usually data fields of this type omit language information altogether, since it is not applicable.

The list of script subtags may be added to in future. In that case, any subtags that indicate a default RTL direction need to be added to the lists used by the consumers of the strings.

There are some rare situations where the appropriate base direction cannot be identified from the script subtag, but these are really limited to archaic usage. For example, Japanese and Chinese text prior to World War II was often written RTL rather than LTR. Texts written in Egyptian Hieroglyphs, or in the Tifinagh script used for Berber languages, could formerly be written either LTR or RTL, although the default in scholarly practice tends to be LTR.

Other comments

The approach outlined here is only appropriate when declaring information about the overall base direction to be associated with a string. We do not recommend use of language data to indicate text direction within strings, since the usage patterns are not interchangeable.

Require bidi markup for content

This approach is NOT recommended, except under agreements that expect to exclusively interchange HTML or XML markup data.

How it works

The producer ensures that all strings begin and end with markup which indicates the appropriate base direction for that string. This requires the producer to examine the string. If the string is not bounded by markup with directional information, the producer must wrap the string with elements that have the dir or its:direction [[ITS20]] attributes, or other markup appropriate to a given XML application. If the string is bounded by markup, but it is something such as an HTML h1 element, the producer needs to introduce directional information into the existing markup, rather than simply surround the string with a span.

(To make examples of this kind easier to read, the text content of a string is shown as it should be displayed, rather than in the order in which the characters are stored.)
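A string prepared this way might look like the following sketch (HTML inside a JSON string value, modelled on the example in the introduction):

```json
"title": "<span dir='rtl'>HTML و CSS: تصميم و إنشاء مواقع الويب</span>",
```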

The consumer then relies on the markup to set the base direction around the text content of the string when it is displayed. (Note that, unless additional metadata is provided, the consumer cannot remove the markup before integrating the string in the target location, because it cannot tell what markup has been added by the producer and what was already there. In general, however, such added markup is harmless.)

Advantages

The benefit for content that already uses markup is clear. The content will already provide complete markup necessary for the display and processing of the text or it can be extracted from the source page context. HTML and XML processors already know how to deal with this markup and provide ready validation.

For HTML, the dir attribute bidirectionally isolates the content from the surrounding text, which removes spillover conflicts. This reduces the work of the consumer.

Markup can also be used for string-internal directional information, something base direction on its own cannot solve.

Issues

Effectively, all levels of the implementation stack have to participate in understanding the markup (or ensure that they do no harm).

If the system uses HTML, end to end, then appropriate markup is available and its semantics are understood (ie. the dir attribute, and the bdi and bdo elements). For XML applications, however, there is no standard markup for bidi support. Such markup would need to first be defined, and then understood by both the producer and consumer.

A key downside of this approach is that many data values are just strings. As with adding Unicode tags or Unicode bidi controls, the addition of markup to strings alters the original string content. Altering the length of the content can cause problems with processes that enforce arbitrary limits or with processes that "sanitize" content by escaping HTML/XML unsafe characters such as angle brackets.

Another issue is the work and sophistication required for producers to examine strings and add markup as needed.

There are limits to the number of embeddings allowed by the Unicode bidirectional algorithm. Consumers would need to ensure that this limit is not passed when embedding strings into a wider context.

The addition of markup also requires consumers to guard against the usual problems with markup insertion, such as XSS attacks.

Create a new bidi datatype

This approach is not currently available.

How it works

This is similar to the idea of sending metadata with a string as discussed previously; however, the metadata is not stored in a completely separate field (as in the metadata approach above), or inserted into the string itself (as with RLM/LRM markers or paired formatting characters), but is associated with the string as part of the string format itself.

Some datatypes, such as [[RDF-PLAIN-LITERAL]], already exist that allow for language metadata to be serialized as part of a string value. However, these do not include a consideration for base direction. This might be addressed by defining a new datatype (or extending an existing one) that document formats could then use to serialize natural language strings (localizable content) that includes both language and direction metadata.
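Serialized with a structure of that kind, values might look like the following sketch (the field names follow the Localizable dictionary mentioned earlier; the id value is hypothetical):

```json
{
    "title": {
        "value": "HTML و CSS: تصميم و إنشاء مواقع الويب",
        "lang": "ar",
        "dir": "rtl"
    },
    "id": {
        "value": "book-0042",
        "dir": "ltr"
    }
}
```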

Note that a string holding a non-linguistic internal data value would not include language information, but would still include direction information, because strings of this kind must be presented in the LTR order.

Producers would need to attach the direction information to a string.

Again, it would be sensible to establish rules that expect the consumer to use first-strong heuristics for those strings that are amenable to that approach, and for the producer to only add directional information if the first-strong approach would otherwise produce the wrong result. This would greatly simplify the management of strings and the amount of data to be transmitted, because the number of strings requiring metadata is relatively small.

The consumer would look to see whether the string has metadata associated with it, in which case it would set the indicated base direction. Otherwise, it would use first-strong heuristics to determine the base direction of the string.

Advantages

If a new datatype were added to JSON to support natural language strings, then specifications could easily specify that type for use in document formats. Since the format is standardized, producers and consumers would not need to guess about direction or language information when it is encoded.

Issues

Apart from the fact that this currently doesn't work, the downside of adding a datatype is that JSON is a widely implemented format, including many ad-hoc implementations. Any new serialization form would likely break or cause interoperability problems with these existing implementations. JSON is not designed to be a "versioned" format. Any serialization form used would need to be transparent to existing JSON processors and thus could introduce unwanted data or data corruption to existing strings and formats.

Approaches Considered for Identifying the Language of Content

This section deals with different means of determining or conveying the language of string values.

Metadata

This approach is recommended.

How it works

A producer ascertains the language of the string (generally from metadata supplied upstream) and includes this information in a metadata field that accompanies the string when it is stored or transmitted.

When storing or transmitting a set of strings at a time, it helps to have a field for the resource as a whole that sets a language which can be inherited by all strings in the resource. Note that in addition to a global field, you still need the possibility of attaching string-specific metadata fields in cases where a string's language is not that of the default. The language set on an individual string must override any resource-level value.

A consumer needs to understand how to read the metadata associated with a string and apply it to the display, processing, or data structures that it generates. Note that this might include the need to apply a resource-level default language when serializing or exchanging an individual value.
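A resource-level default with a per-string override might look like the following sketch (the field names are purely illustrative):

```json
{
    "language": "en",
    "books": [
        { "title": "Learning Web Design" },
        { "title": "HTML و CSS: تصميم و إنشاء مواقع الويب", "language": "ar" }
    ]
}
```

The first title inherits the resource-level "en" default; the second overrides it.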

Advantages

Using a consistent and well-defined data structure makes it more likely that different standards are composable and will work together seamlessly.

Metadata can be supplied without affecting the content itself.

Where metadata is unavailable, it can be omitted.

Consumers and producers do not have to introspect the data outside of their normal processing.

Issues

Serialized files utilizing the Localizable dictionary and its data values will contain additional fields and can be more difficult to read as a result.

For existing document formats, it represents a change to the values being exchanged.

Require markup for content

This approach is NOT recommended except in special cases where the content being exchanged is expected to consist of and is restricted to literal values in a given markup language.

How it works

When a document is expected to consist of HTML or XML fragments and will be processed and displayed strictly in a markup context, the producer can use markup to convey the language of the content by wrapping strings with elements that have the lang or xml:lang attributes.
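For example, a data value restricted to HTML fragments might carry its language information like this (the strings themselves are illustrative):

```html
<!-- The lang attribute on the wrapping element identifies the
     language of each fragment. -->
<span lang="fr">Bonjour le monde</span>
<span lang="ja">教科書を買いました</span>
```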

Advantages

This approach, and thus its advantages, are effectively the same as for the corresponding markup-based approach to identifying base direction, described earlier.

Issues

See above.

Use Unicode language tag characters

This approach is NOT recommended.

How it works

Producers insert Unicode tag characters into the data to tag strings with a language.

Consumers process the Unicode tag characters and use them to assign the language.

Unicode defines special characters that can be used as language tags. These characters are "default ignorable" and should have no visual appearance. Here is how Unicode tags are supposed to work:

Each tag is a character sequence. The sequence begins with a tag identification character. The only one currently defined is U+E0001, which identifies [[BCP47]] language tags. Other types of tags are possible via private agreement. The remainder of the Unicode block for forming tags mirrors the printable ASCII characters: U+E0020 is space (mirroring U+0020), U+E0041 is capital A (mirroring U+0041), and so forth. Following the tag identification character, producers use the tag characters to spell out a [[BCP47]] language tag using the upper- and lowercase letters, digits, and the hyphen. A given source language tag, which is composed of ASCII letters, digits, and hyphens, can thus be converted to tag characters by adding 0xE0000 to each character's code point. Additional structure, such as a language priority list (see [[RFC4647]]), might be constructed using other characters such as comma or semicolon, although Unicode does not define or even necessarily permit this.

The end of a tag's scope is signalled by the end of the string, or can be signalled explicitly using the cancel tag character U+E007F, either alone (to cancel all tags) or preceded by the language tag identification character U+E0001 (i.e. the sequence <U+E0001,U+E007F> to end only language tags).
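The conversion described above can be sketched in a few lines. This is for illustration only; these characters are deprecated for language tagging:

```javascript
// Encode a BCP47 language tag as a Unicode tag-character sequence:
// the LANGUAGE TAG identifier U+E0001 followed by each ASCII
// character shifted up by 0xE0000.
function toUnicodeLanguageTag(bcp47) {
  let result = String.fromCodePoint(0xe0001); // tag identification character
  for (const ch of bcp47) {
    result += String.fromCodePoint(0xe0000 + ch.codePointAt(0));
  }
  return result;
}

const tagged = toUnicodeLanguageTag("en-US");
// "en-US" (5 characters) becomes 6 tag characters; since each is a
// supplementary character, the result is 12 UTF-16 code units long.
```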

Tags therefore have a minimum of three characters, and can easily be 12 or more. Furthermore, these characters are supplementary characters. That is, they are encoded using four bytes per character in UTF-8 and as a surrogate pair (two 16-bit code units) in UTF-16. Surrogate pairs are needed to encode these characters in string types for languages such as Java and JavaScript that use UTF-16 internally. The use of surrogates makes the strings somewhat opaque. For example, U+E0020 is encoded in UTF-16 as 0xDB40.DC20 and in UTF-8 as the byte sequence 0xF3.A0.80.A0.

Advantages

These language tag characters could be used as part of normal Unicode text without modification to the structure of the document format.

Issues

Unicode tag characters are strongly deprecated by the Unicode Consortium. These tag characters were intended for use in language tagging within plain text contexts and are often suggested as an alternate means of providing in-band non-markup language tagging. We are unaware of any implementations that use them as language tags.

Applications that treat the characters as unknown Unicode characters will display them as tofu (hollow box replacement characters) and may count them towards length limits, etc. So they are only useful when applications or interchange mechanisms are fully aware of them and can remove them or disregard them appropriately. Although the characters are not supposed to be displayed or have any effect on text processing, in practice they can interfere with normal text processes such as truncation, line wrapping, hyphenation, spell-checking, and so forth.

By design, [[BCP47]] language tags are intended to be ASCII case-insensitive. Applications handling Unicode tag characters would have to apply similar case-insensitivity to ensure correct identification of the language. (The Unicode data doesn't specify case conversion pairings for these characters; this complicates the processing and matching of language tag values encoded using the tag characters.)

Moreover, language tags need to be formed from valid subtags to conform to [[BCP47]]. Valid subtags are kept in an IANA registry and new subtags are added regularly, so applications dealing with this kind of tagging would need to always check each subtag against the latest version of the registry.

The language tag characters do not allow nesting of language tags. For example, if a string contains two languages, such as a quote in French inside an English sentence, Unicode tag characters can only indicate where one language starts. To indicate nested languages, tags would need to be embedded into the text, not just prefixed to the front.

Although never implemented, other types of tags could be embedded into a string or document using Unicode tag characters. It is possible for these tags to overlap sections of text tagged with a language tag.

Finally, Unicode has recently "recycled" these characters for use in forming sub-regional flags, such as the flag of Scotland (🏴󠁧󠁢󠁳󠁣󠁴󠁿), which is made of the sequence:

  • 🏴 [U+1F3F4 WAVING BLACK FLAG]
  • 󠁧 [U+E0067 TAG LATIN SMALL LETTER G]
  • 󠁢 [U+E0062 TAG LATIN SMALL LETTER B]
  • 󠁳 [U+E0073 TAG LATIN SMALL LETTER S]
  • 󠁣 [U+E0063 TAG LATIN SMALL LETTER C]
  • 󠁴 [U+E0074 TAG LATIN SMALL LETTER T]
  • 󠁿 [U+E007F CANCEL TAG]

The above is a new feature of emoji added in Unicode 10.0 (version 5.0 of UTR#51) in June 2017. Proper display depends on your system's adoption of this version.

Use a language detection heuristic

This approach is NOT recommended.

How it works

Producers do nothing.

Consumers run a language detection algorithm to determine the language of the text. These are usually statistically based heuristics, such as using n-gram frequency in a language, possibly coupled with other data.
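A toy sketch of the n-gram approach follows. Real detectors use large trained profiles over many languages; the two tiny hand-written trigram profiles here are invented purely for illustration:

```javascript
// Minimal statistical language detection: score a string against
// character-trigram profiles and pick the best match. Illustrative
// only; these profiles are far too small for real use.
const profiles = {
  en: ["the", "ing", "and", "ion", "ent"],
  de: ["der", "ein", "sch", "ich", "und"],
};

function trigrams(text) {
  const t = text.toLowerCase();
  const grams = [];
  for (let i = 0; i + 3 <= t.length; i++) grams.push(t.slice(i, i + 3));
  return grams;
}

function detectLanguage(text) {
  const grams = trigrams(text);
  let best = null;
  let bestScore = -1;
  for (const [lang, profile] of Object.entries(profiles)) {
    const score = grams.filter((g) => profile.includes(g)).length;
    if (score > bestScore) {
      bestScore = score;
      best = lang;
    }
  }
  return best;
}
```

Even this sketch illustrates the failure modes listed below: a string shorter than three characters yields no trigrams at all, and only the languages with profiles can ever be returned.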

Advantages

There are no fundamental advantages to this approach.

Issues

Heuristics are more accurate the longer and more representative the text being scanned is. The language of short strings might not be detected reliably.

Language detection is limited to the languages for which one has a detector.

Inclusions, such as personal or brand names in another language or script, can throw off the detection.

Language detection tends to be slow and can be memory intensive. Simple consumers probably can't afford the complexity needed to determine the language.

Localization Considerations

Many specifications need to allow multiple different language values to be returned for a given field. This might be to support runtime localization or because the producer has multiple different language values and cannot select or distinguish them appropriately. There are several ways that multiple language values could be organized. For speed and ease of access, the use of language indexing is a useful strategy.

In language indexing, a given field's value is an array of key-value pairs. The keys in the array are language tags. The values of each language tag are strings or, ideally, Localizable objects. Here's an example of what a language-indexed title field might look like:
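One illustrative shape for such a structure is sketched here; the field name title and the French value are invented for illustration:

```json
{
  "title": {
    "en": { "value": "Learning Web Design", "lang": "en" },
    "fr": { "value": "Apprendre la conception Web", "lang": "fr" }
  }
}
```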

Using the language tag as a key to the value array allows for rapid selection of the correct value for a given request. Notice that, if the value of the language tag is a Localizable, the language might be repeated in the data structure.

For example, if the language requested were U.S. English (en-US), this format makes it easier to match and extract the best fitting title object {"value": "Learning Web Design", "lang": "en"}. An additional potential advantage is that the indexed language tag can indicate the intended audience of the value separately from the language tag of the actual data value. An example of this might be the use of language ranges from [[RFC4647]], as in the following example, where a more specific language value might be wrapped with a less-specific language tag. In this example, the content has been labeled with a specific language tag (de-DE), but is available and applicable to users who speak other variants of German, such as de-CH or de-AT:
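A sketch of this wrapping, with an invented German title: the value carries the specific tag de-DE but is indexed under the broader range de:

```json
{
  "title": {
    "de": { "value": "Webdesign lernen", "lang": "de-DE" }
  }
}
```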

A less common example would be when a system supplies a specific value in a different ("wrong") language from the indexing language tag, perhaps because the actual translated value is missing:
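In an illustrative sketch of this case, the fr index entry carries an English value because no French translation is available:

```json
{
  "title": {
    "fr": { "value": "Learning Web Design", "lang": "en" }
  }
}
```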

The primary issue with this approach is the need to extract the indexing language tag from the content in order to generate the index. Producers might also need to have a serialization agreement with consumers about whether the indexing language tag will be in any way canonicalized. For example, the language tag cel-gaulish is one of the [[BCP47]] grandfathered language tags. Some implementations, such as those following the rules in [[CLDR]], would prefer that this tag be replaced with a modern equivalent (xtg-x-cel-gaulish in this case) for the purposes of language negotiation.

[[JSON-LD]] defines a specific implementation of language indexing, which depends on the use of the @context structure. This structure does not support the use of Localizable values (only strings or arrays of strings are supported), so changes would be needed to allow some of the above capabilities in [[JSON-LD]] documents.

The Localizable WebIDL Dictionary

This section contains a WebIDL definition for a Localizable dictionary.

To be effective, specification authors should consistently use the same formats and data structures so that the majority of data formats are interoperable (in other words, so that data can be copied between many formats without having to apply additional processing). We recommend adoption of the Localizable WebIDL "dictionary" as the best available format for JSON-derived formats to do that.

By defining the language and direction in a WebIDL dictionary form, specifications can incorporate language and direction metadata for a given String value succinctly. Implementations can recycle the dictionary implementation straightforwardly.
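A minimal sketch of such a dictionary pairs the string value with its language and base direction metadata; consult the normative definition for the exact member names and defaults:

```webidl
dictionary Localizable {
  DOMString value;  // the natural language string itself
  DOMString lang;   // a BCP47 language tag, e.g. "en-US"
  DOMString dir;    // base direction: "ltr", "rtl", or "auto"
};
```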

Acknowledgements

The Internationalization (I18N) Working Group would like to thank the following contributors to this document: Mati Allouche, David Baron, Ivan Herman, Tobie Langel, Sangwhan Moon, Felix Sasaki, Najib Tounsi, and many others.

The following pages formed the initial basis of this document: