Apple is currently seeking patent protection for technology that is designed to improve speech recognition through location awareness. A Thursday filing
titled "Automatic input signal recognition using location based language modeling" describes a system that automatically associates voice input with location-specific language models and geographic information, such as nearby businesses and street names.
"As the number and type of possible input signals has broadened, providing accurate results has remained a challenge," the filing reads. "This is particularly true for recognition systems that rely on a global language model for all input signals. In such cases, input signals that are unique to a particular geographic region are often improperly recognized."
The authors acknowledge that in some cases such technology could incorrectly assume that a person is using a local dialect or talking about a nearby landmark. To help avoid confusion, the filing describes assigning weights to global word sequences and local word sequences in an attempt to determine whether the user's voice input should be interpreted as specific to their geographic location.
"Additionally, such a solution only considers one geographic region, which can still produce inaccurate results if the location is close to the border of the geographic region and the input signal corresponds to a word sequence that is unique in the neighboring geographic region," the filing adds.
As an example, the authors point out that the word sequence "goat hill" has a low probability as a common word sequence, leading the system to consider whether the user is actually saying a more common sequence like "good will" or talking about a nearby cafe that actually carries the name "goat hill."
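The weighting approach described above can be illustrated with a minimal sketch. This is not Apple's implementation; the model names, probabilities, and the `local_weight` parameter are all hypothetical, chosen only to show how interpolating a location-specific language model with a global one could tip recognition toward "goat hill" when the user is near a business of that name.

```python
# Hypothetical sketch of location-weighted language model scoring.
# All probabilities and the local_weight parameter are illustrative,
# not values from the patent filing.

# Global language model: "good will" is a far more common phrase.
GLOBAL_MODEL = {
    "good will": 0.008,
    "goat hill": 0.0001,
}

# Local model built from nearby businesses and street names,
# e.g. a cafe named "Goat Hill" close to the user's location.
LOCAL_MODEL = {
    "goat hill": 0.02,
    "good will": 0.001,
}

def score(phrase, local_weight=0.7):
    """Interpolate local and global probabilities for a candidate phrase.

    local_weight is a hypothetical tuning knob for how strongly the
    user's location should influence recognition.
    """
    g = GLOBAL_MODEL.get(phrase, 1e-6)
    l = LOCAL_MODEL.get(phrase, 1e-6)
    return local_weight * l + (1 - local_weight) * g

def best_candidate(candidates, local_weight=0.7):
    """Pick the highest-scoring candidate transcription."""
    return max(candidates, key=lambda p: score(p, local_weight))

# Near the cafe, the local model outweighs the global prior:
print(best_candidate(["good will", "goat hill"]))  # goat hill
# With location ignored, the global prior wins:
print(best_candidate(["good will", "goat hill"], local_weight=0.0))  # good will
```

Setting `local_weight` to zero reduces the system to a purely global model, reproducing the misrecognition problem the filing describes.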
Many voice-input systems, including Apple's own Siri technology, already offer choices among languages and regional dialects. It is unclear whether Apple is already using some form of the language interpretation technology described in the recent filing.