doc_id | appl_id | flag_patent | claim_one |
---|---|---|---|
8473430 | 12696051 | 1 | 1. In a computing environment, a computer-implemented method performed on at least one processor, comprising, processing an input signal with a conditional random field model having a final layer and one or more lower layers, by the at least one processor, including processing data corresponding to the input signal at each layer, outputting probability information from each lower layer to a next higher layer above that lower layer and outputting probability information from the final layer that corresponds to a classification of the input signal. |
20160162804 | 14564138 | 0 | 1. A computer-implemented method for automatically analyzing a conversational sequence between a plurality of users, the method comprising: receiving, using a data collection module on a computer comprising a processor and memory, signals corresponding to a training dataset including a plurality of data sequences related to the conversational sequence; extracting, using a feature extraction module on the computer, at least one feature from the received training dataset based on predefined feature categories; formulating, using a learning module on the computer, a plurality of tasks for being learned from the training dataset based on the extracted at least one feature, wherein each of the plurality of tasks is related to at least one predefined label; providing, using the learning module on the computer, a model for each of the plurality of formulated tasks, wherein the model includes one or more parameters having a set of parameters common to the plurality of formulated tasks, wherein the set of parameters includes at least one explicit parameter being explicitly shared with each of the plurality of formulated tasks; optimizing, using the learning module on the computer, values for the one or more parameters and the at least one explicit parameter to create an optimized model; creating, using the learning module on the computer, a trained model for each of the plurality of formulated tasks using an optimized value of the at least one explicit parameter and corresponding values of the one or more parameters; assigning, using a classification module on the computer, the at least one predefined label for each of the plurality of formulated tasks on to a live dataset based on the corresponding created trained model; and outputting, using the computer, signals corresponding to the live dataset assigned with the at least one predefined label for each of the plurality of formulated tasks. |
8121269 | 11395110 | 1 | 1. A participant managing system comprising: a means for storing records of associations of persons at least to one or more keywords; a means for inputting keywords; a means for selecting from the storing means persons associated with the input keywords; a means for outputting the selection; and a monitor for monitoring which persons are actively linked into an active conference and for maintaining a list of those actively linked into the active conference session; wherein the means for selecting is further operable to select from persons not in the list of active participants during an active conference session in accordance with further keywords. |
20110184720 | 13056563 | 0 | 1. A computer-assisted language generation system comprising: sentence retrieval functionality, operative on the basis of an input text containing words, to retrieve from an internet corpus a plurality of sentences containing words which correspond to said words in the input text; and sentence generation functionality operative using a plurality of sentences retrieved by the sentence retrieval functionality from the internet corpus to generate at least one correct sentence giving expression to the input text. |
7529666 | 09699894 | 1 | 1. A method of providing pattern recognition, said method comprising the steps of: inputting a speech pattern into a pattern recognition apparatus; providing minimum Bayes error feature selection via transforming the input pattern to provide a set of features for a classifier which classifies into classes, wherein there is only one feature space transformation for all classes; and providing final features to the classifier, wherein the classifier provides a final output classification result; said transforming step comprising the step of directly minimizing the probability of subsequent misclassification in a projected space of at least one feature; said direct minimizing step comprising: performing a full-covariance gaussian clustering of input records for every class; developing an objective function by way of means, covariances and priors, wherein said objective function either: maximizes an average pairwise divergence and relates it to Bayes error; or directly minimizes an upper bound on Bayes error; optimizing the objective function through gradient descent, wherein all dimensions of a matrix are optimized via optimizing the objective function; wherein the optimizing is carried out over all possible matrices; and wherein the objective function is initialized with an LDA matrix (linear discriminant analysis); upon convergence of the optimization, transforming all the records x into y=θx to produce the at least one final feature where θ is the LDA matrix; wherein said pattern recognition is speech recognition. |
20150228275 | 14176457 | 0 | 1. A method for processing a voice command using a statistical dialog model, comprising: determining, in response to receiving the voice command, a belief state as a probability distribution over states organized in a hierarchy with a parent-child relationship of nodes representing the states, such that the belief state includes the hierarchy of state variables defining probabilities of each state to correspond to the voice command, wherein a probability of a state of a child node in the hierarchy is conditioned on a probability of a state of a corresponding parent node; and selecting a system action based on the belief state, wherein steps of the method are performed by a processor. |
20050128311 | 10937343 | 0 | 1. An image capture device comprising: a first receiver operable to receive a user input for initiating a remote image capture operation of the image capture device; a second receiver operable to receive a speech signal representative of an utterance of the user; a processor operable to process the received speech signal to detect a keyword in the user's utterance; an image capture operable to capture an image; and a controller responsive to said first receiver and said processor and operable to control said image capture; wherein said controller is operable, in response to said user input, to set the image capture device into a standby state for a period of time in which said image capture device is unresponsive to utterances of the user and, after said period of time, is operable to set the image capture device into a listening state in which the controller is operable to cause said image capture to capture an image when said processor detects said spoken keyword. |
8401836 | 13528426 | 1 | 1. A computer-implemented method comprising: determining, at a computing device including a processor, a plurality of error surfaces corresponding to a plurality of feature functions in a translation hypergraph, for each of one or more candidate translations represented in the translation hypergraph, including: generating, at the computing device, a factor graph from the translation hypergraph, wherein the factor graph is an isomorphic transformation of the translation hypergraph, and generating, at the computing device, the plurality of error surfaces from the factor graph; combining, at the computing device, the plurality of error surfaces to produce a combined error surface; traversing, at the computing device, the combined error surface to select weights for the feature functions that minimize error counts for traversing the combined error surface; and applying, at the computing device, the selected weights to reduce an error count in a decoder that converts a sample of text from a first language to a second language. |
20120170730 | 12984460 | 0 | 1. A communication system for producing and transmitting speech messages during voice calls over communication networks, said communication system comprising: (a) a calling device for initiating a voice call; (b) a first network, operatively connected to said calling device; (c) a destination device, operatively connected to said first network; (d) an advertisement server, operatively connected to said first network; and (e) an advertisement data bank, wherein said advertisement data bank is programmed to generate advertisements using a caller's own voice and integrate said advertisements into a voice call between said calling device and said destination device. |
20130110509 | 13283624 | 0 | 1. A method comprising: receiving, at a representational state transfer endpoint device, a first user input related to a first speech to text conversion performed by a speech to text transcription service; receiving, at the representational state transfer endpoint device, a second user input related to a second speech to text conversion performed by the speech to text transcription service; and processing the first user input and the second user input at the representational state transfer endpoint device to generate speech to text adjustment information. |
8290134 | 11828677 | 1 | 1. A method of managing conference calls among a plurality of participants and a moderator by a system, each of said participants being associated with a line, which method comprises: said system placing said plurality of participants into a conference call; said system muting the line associated with each of said plurality of participants on said conference call; receiving a request to speak from a first participant of said plurality of participants; placing said first participant at a position in a talk queue, said talk queue having at least a top position and a bottom position; determining if said first participant has entered a priority code, wherein when entered by a participant, said priority code assigns a higher priority in said talk queue to said participant; in response to determining said first participant has entered said priority code, placing said first participant at said top position in said talk queue; in response to determining said first participant has not entered said priority code, placing said first participant at said bottom position in said talk queue; and selecting, from said talk queue, a participant at said top position of said talk queue; and in response to selecting said participant at said top position of said talk queue: deleting said participant at said top position of said talk queue; setting said participant at said top position of said talk queue as a speaker; and unmuting said line associated with said speaker, thereby giving said speaker a turn to speak and to be heard by said plurality of participants on the conference call; wherein a line for each of said plurality of participants who is not said speaker remains muted. |
9685158 | 14634111 | 1 | 1. A method, comprising: collecting, by at least one processor, first data associated with prior communications sent to a user, the first data comprising a first profile for a caller that creates a voice message for the user, and the prior communications including at least one prior communication received from the caller; sending, by the at least one processor, to a speech recognition system, the voice message for transcribing using at least a portion of the first data to provide a transcribed message; receiving, from the speech recognition system, the transcribed message; and causing a presentation on a display, the presentation comprising the transcribed message for viewing by the user, and the presentation further comprising a display of options for selection of a corrected word by the user to correct a misspelled word in the transcribed message. |
20020085690 | 09749598 | 0 | 1. A method for providing textual content along with voice messages, the method comprising: placing a call, by a caller on a calling station, to a callee, represented by a receiving station, through a connection, the receiving station including a callee's phone linked to a callee's computer; recording the call, at the calling station, to generate voice data; transcribing the call based on the voice data to generate the textual content of the call; sending the textual content by the calling station to a server via the connection; and transferring, by the server, the textual content and the voice data, received by the server, as electronic mail to an electronic incoming mailbox of the callee on the callee's computer. |
10042842 | 15052197 | 1 | 1. A system to detect an event, the system comprising: a memory device configured to store candidate lexical graphs; and a processor configured to: obtain a set of social media data streams associated with a region and a time frame of interest; apply a lexical graph generation algorithm to the set of social media data streams to obtain lexical graphs; perform similarity analysis on the lexical graphs based on the candidate lexical graphs related to the event to generate matching data; and provide information for investigation of the event based on the matching data. |
9032510 | 13609510 | 1 | 1. A device comprising: at least one processor; at least one computer memory with instructions accessible to the processor to configure the processor for: receiving a first input mode value from a first input device; receiving a second input mode value; determining whether the first and second values match user-defined values, and only responsive to a match being found, executing a computer operation, wherein at least the first input mode value is established by an image of a gesture, and the second input mode value is established by face recognition plus IR sensing satisfying a threshold to ensure a live person is being imaged for authentication, and/or the second input mode value being established by face recognition plus a particular facial expression; and allowing a user to select the computer operation, wherein the computer operation is unlocking the computer for full use thereof. |
20120143591 | 12957394 | 0 | 1. A computer-implemented training system for speech translation, comprising: a discriminative training component derived from a decision rule of a speech translation component, the speech translation component recognizes speech input using a recognition process and translates a recognition output to multiple hypotheses as translation output using a machine translation process, the discriminative training component includes an objective function that integrates the recognition process and the machine translation process; and a processor that executes computer-executable instructions associated with at least the training component. |
9469247 | 14086828 | 1 | 1. A vehicular audio system comprising: an audio detector configured to sense ambient sounds external to a vehicle; and an audio processing module configured to: receive one or more signals representing the ambient sounds from the audio detector, process the ambient sounds to identify an event external to the vehicle and corresponding to at least one sound within the ambient sounds, wherein the at least one sound is generated by the event, and based on determining that a current speed of the vehicle is above a threshold speed, output the at least one sound via a speaker configured to output sound into an interior of the vehicle, or based on determining that the current speed is below the threshold speed, output a spoken description of the event via the speaker. |
8254686 | 12276284 | 1 | 1. An on-line identifying method of hand-written Arabic letters, comprising steps of: (a) collecting chirography coordinates of hand-written Arabic letters that are inputted in a terminal unit, and storing said chirography coordinates into a predefined structural array in real time; (b) preprocessing chirography coordinates and chirography coordinates lattice in a matrix format transformed from said chirography coordinates stored in a structural array; (c) by using multilayer coarse classification algorithm based on local characteristic of Arabic letter, according to shape characteristic of Arabic letters, classifying all standard Arabic letters into four categories, obtaining a first candidate letter aggregation matching with inputted hand-written Arabic letter according to stroke number of inputted hand-written Arabic letter, and obtaining a second candidate letter aggregation matching with inputted hand-written Arabic letter according to local characteristic of inputted hand-written Arabic letter and first candidate letter aggregation; and (d) extracting a freeman chain code of inputted hand-written Arabic letter, calculating a matching probability between said freeman chain code of inputted hand-written Arabic letter and an optimal Hidden Markov Models of each standard letter stored in a predetermined letter library and corresponding to each letter stored in the second candidate letter aggregation, obtaining an optimized matching probability from each matching probability, and determining that standard letter stored in a predetermined letter library and corresponding to an optimized matching probability as final identifying result of inputted hand-written Arabic letter. |
20040059790 | 10460639 | 0 | 1. A notification system that provides an electronic notification to an intended recipient, the system comprising: a communication module configured to communicate an electronic notification; a lifespan module configured to determine a lifespan related to a duration of relevance of the electronic notification; and a delivery module configured to provide the electronic notification to an intended recipient when the lifespan has not expired. |
7991608 | 11468853 | 1 | 1. A method comprising: receiving a base word selected by a user using a user interface; receiving one or more parts of speech of the base word selected by the user using the user interface; determining a word ontology of the base word in a source language, the word ontology comprising words associated with a part of speech selected by the user if one part of speech is selected by the user, the word ontology comprising words associated with more than one part of speech if more than one part of speech is selected by the user, the word ontology of the base word comprising at least two of a synonym, a homonym, a hypernym and a hyponym of the base word; generating, in the source language using a processor, a first set of words comprising the word ontology and a definition of each word in the word ontology; receiving a subset of the first set of words selected by the user using the user interface; translating the subset of the first set of words selected by the user from the source language into a second set of words in a target language; translating the second set of words selected by the user from the target language into a third set of words in the source language; receiving a subset of the second set of words in the target language selected by the user using the user interface after the user observes the third set of words in the source language; and querying, using the processor, the subset of the second set of words. |
9009046 | 11235742 | 1 | 1. A method comprising: receiving, via an interactive voice recognition system, a user utterance and converting the user utterance to text; generating multiple intents based on the text; establishing, via the interactive voice recognition system, a confidence score for each intent in the multiple intents, wherein the confidence score for each intent is based on how much training data corresponding to the each intent was used to train a spoken language understanding module, where more training data corresponds to a higher confidence; when only a single intent in the multiple intents has a confidence score above a threshold: identifying a plurality of call types associated with the multiple intents; and applying predefined precedence rules to respond to only a single call type in the plurality of call types, the single call type associated with the single intent; and when multiple intents have confidence scores above the threshold: identifying a first intent and a second intent based on the confidence scores for the multiple intents, wherein the first intent and the second intent have a highest two confidence scores in the multiple intents; and disambiguating the first intent and the second intent by presenting a disambiguation sub-dialog, via the interactive voice recognition system, wherein a user is offered a choice of which intent to process first, wherein the user is first presented with one of the first intent and the second intent having a lowest confidence score between the first intent and the second intent. |
9298766 | 14049445 | 1 | 1. A method comprising: receiving an inquiry from a user into a computerized device; automatically performing an initial analysis of said inquiry, using said computerized device, said initial analysis producing potential answers to said inquiry from evidence sources; automatically analyzing words used in said inquiry, said potential answers, and data maintained by said evidence sources, using said computerized device, and determining a sensitivity level associated with said inquiry, said sensitivity level associated with said inquiry representing an emotional and cognitive state of said user as automatically determined by said computerized device; automatically generating at least one response to said inquiry, said response being one of relatively more empathetic and relatively less empathetic based on said sensitivity level associated with said inquiry; and outputting said emotional and cognitive state of said user as automatically determined by said computerized device to individuals other than said user. |
7593846 | 10933046 | 1 | 1. A method of providing semantic information related to the meaning of input text, the method comprising with a processor: receiving input text; processing at least portions of the input text to identify self-describing fragments of the input text based on a hierarchical schema, the hierarchical schema defining a domain with at least one top-level node and child nodes, wherein each identified self-describing fragment includes hierarchical context with respect to the hierarchical schema of each corresponding portion of the input text and positional information of words forming the corresponding portion in the input text; and providing semantic information related to the meaning of at least some portion of the input text based on the identified self-describing fragments. |
20140365880 | 14298720 | 0 | 1. A method of providing cross-domain semantic ranking of complete input phrases for a digital assistant, comprising: receiving a training corpus comprising a collection of complete input phrases that span a plurality of semantically distinct domains; for each of a plurality of distinct words present in the collection of complete input phrases, calculating a respective word indexing power across the plurality of domains based on a respective normalized entropy for said word, wherein the respective normalized entropy is based on a total number of domains in which said word appears and how representative said word is for each of the plurality of domains; for each complete input phrase in the collection of complete input phrases, calculating a respective phrase indexing power across the plurality of domains based on an aggregation of the respective word indexing powers of all constituent words of said complete input phrase; obtaining respective domain-specific usage frequencies of the complete input phrases in the training corpus; and generating a cross-domain ranking of the collection of complete input phrases based at least on the respective phrase indexing powers of the complete input phrases and the respective domain-specific usage frequencies of the complete input phrases. |
9767825 | 15374455 | 1 | 1. A method comprising: receiving, from a content server, input media data with an input normal playback speed, the input media data comprising a plurality of input media data portions each having the same input normal playback speed; determining one or more user identities identified based at least in part on biometric data collected from one or more users who correspond to the one or more user identities and to whom audio utterance derived from the input media data is to be played; determining a preferred rate of audio utterance based at least in part on the one or more user identities; receiving, from the content server, a plurality of rates of audio utterance for the plurality of input media data portions; based at least in part on the preferred rate of audio utterance and the plurality of rates of audio utterance, generating audio output media data comprising a plurality of output media data portions having at least two different output normal playback speeds but the same preferred rate of audio utterance; wherein the method is performed by one or more computing devices. |
8340971 | 11029320 | 1 | 1. A method of analyzing dialogs, the method comprising: receiving, via a processor, call-logs associated with a plurality of dialogs between a dialog system and users and external information about at least one user; extracting a first portion of turn-by-turn details of dialogs from the call-logs comprising at least a time stamp associated with a turn in the plurality of dialogs; inferring a second portion of the turn-by-turn details unavailable in the call-logs based on the first portion of the turn-by-turn details using a call-flow specification as a guide, the second portion of the turn-by-turn details comprising an interleaved sequence of at least two attributes that characterize a system state and a user response; and generating, from the first portion of the turn-by-turn details, the external information about the user, and the second portion of the turn-by-turn details, an empirical call-flow representation of the dialog. |
7912700 | 11704381 | 1 | 1. A computer-implemented method for providing context-based word prediction, comprising: parsing documents associated with different applications and obtaining words contained in the documents; creating context-based data sources for the words including creating a context-based data source for each different application, each context-based data source comprising an instance of an application defined candidate provider and an application defined data store for storing the words for an associated application; receiving a text input in a document having an associated context-based data source; retrieving one or more words associated with the text input received in the document from the associated context-based data source by utilizing words stored in the associated context-based data source before utilizing words from an existing text prediction data source as part of prediction of candidates using the text input; and displaying the one or more words retrieved from the associated context-based data source in the document and allowing selection of one of the one or more words displayed in the document for automatically completing the text input received in the document. |
20140019140 | 13926253 | 0 | 1. A method for controlling an external input of a broadcast receiving apparatus, the method comprising: setting a call word of an external input apparatus connected to an external input terminal of the broadcast receiving apparatus; associating the call word with the external input terminal and storing the call word and the external input terminal in association with each other; in response to a voice of a user being input, recognizing the voice to determine whether the voice includes the call word; and in response to determining the voice includes the call word, enabling the external input terminal corresponding to the call word to communicate with the external input apparatus using the external input terminal corresponding to the call word. |
20070118357 | 11285090 | 0 | 1. A method of reducing ambiguities present in electronically stored words, the method comprising: receiving a plurality of characters in electronic form, the received plurality of characters corresponding to a sequence of words and including an ambiguous word that has one or more characters whose value is substantially uncertain; comparing at least some of the words in the sequence to an ontology, the ontology defining a plurality of nodes, each node being associated with a word, and each node being connected to at least one other node by a link, each link being associated with a concept that relates the words associated with the nodes connected by the link in a predetermined context; and identifying nodes in the ontology that correspond to the ambiguous word based on the comparison. |
20140337357 | 13891610 | 0 | 1. A method of organizing a collection of documents, the method comprising: storing entries in multiple dictionaries, wherein the multiple dictionaries are each associated with a different subject, wherein the entries contain descriptive terms and corresponding subject-determining-power scores, wherein an individual subject-determining-power score indicates the relative strength or weakness of the corresponding descriptive term with respect to the subject associated with a particular dictionary containing the entry, and wherein at least some of the descriptive terms are present in two or more of the multiple dictionaries; and accessing the collection of documents by associating particular ones of the descriptive terms contained in the collection of documents with one or more associated subjects of one or more of the multiple dictionaries containing the particular descriptive terms. |
20020041659 | 09160448 | 0 | 1. An arrangement comprising: a processor; at least one user device configured to provide user inputs to the processor and outputs from the processor to a user; a contacts database comprising a first field having at least one substantially correctly spelled contact identifier, and a corresponding second field having at least one phonetically spelled contact identifier; and a voice recognizer configured at least partially within the processor to directly access the first and second fields in the contacts database to interpret the user inputs. |
20150163358 | 14103144 | 0 | 1. A method, comprising: accessing a message by a contact center from a sender; formulating a substantive portion of a response to the message; accessing a user context of the sender; selecting an embellishment in accord with the user context; embellishing the response with banter from the embellishment; and sending the response to the sender. |
9201865 | 13841217 | 1 | 1. A method for providing automated assistance for a user using a computing device selected from a group consisting of: a telephone; a wireless communicator; a tablet computer; a laptop computer; a personal digital assistant; a desktop computer; a processor with memory; a kiosk; a consumer electronic device; a consumer entertainment device; a music player; a camera; a television; an electronic gaming unit; and a set-top box, the method comprising: receiving a user request for assistance spoken in a first language; translating the user request spoken in the first language to a second language; determining semantics of the user request and identifying at least one domain, at least one task, and at least one parameter for the user request; searching a semantic database on the Internet for the at least one matching domain, task, and parameter; compensating for translation errors based on user history; generating a response in the second language; and translating the response to the first language and rendering the response to the user and providing information from one of: music, audiobooks, news, weather, traffic, sports, and processing the user request by an assistant software on a computing cloud to purchase, reserve, or order products or services, wherein the assistant software automatically accesses semantic data and services having one or more triples including subject, predicate, and object available over the Internet to find one or more of: movies, events, performances, exhibits, shows, attractions, travel destinations, hotels, restaurants, bars, pubs, entertainment sites, landmarks, summer camps, resorts, places. |
8364613 | 13246596 | 1 | 1. A computer-implemented method, comprising: storing a first predictive model in computer-readable memory, the first predictive model having been defined based on a first training dataset provided by an owner of the first predictive model and being operable to generate an output based on a query; enabling access for a user to the first predictive model based on permissions defined by the owner, while inhibiting access for the user to the first training dataset such that the first training dataset is inaccessible to the user; receiving a second training dataset from the user, the second training dataset being distinct from the first training dataset; modifying the first predictive model based on the second training dataset to provide a second predictive model; storing the second predictive model in computer-readable memory; and enabling access for the user to the second predictive model. |
20040261023 | 10751955 | 0 | 1. A method for automatically converting documents expressed in a markup language to structured shared writeable editable documents, comprising: inputting a first document having a first document coding expressed in the markup language; converting the first document into a second document having a second document coding expressed in the markup language; constructing a template describing how each of at least one editable item in the second document will be formatted and displayed, and which user interface interactors will be provided to edit fields of that editable item; and outputting the second document, containing content similar in appearance to the first document, wherein a region of content of the first document is identified as the at least one editable item, and at least one sub-region of the at least one region is identified as an editable field within that editable item. |
20170193323 | 15451781 | 0 | 1. A method for assigning text to a record from an image of the record, comprising: obtaining a scanned image of a record; determining at an optical character recognition system that at least some words in the scanned image are unidentified; evaluating the record image in order to locate each of multiple word images corresponding to the unidentified words; for each located word image, identifying multiple word features of that word image; assigning each of the multiple word images that have similar word features to one of a plurality of word clusters; selecting a representative word image in each of the word clusters as a centroid; reviewing, by an analyst, the centroid in each of the word clusters, and entering data representing text for the centroid; and assigning the representing text for the centroid to all other word images in the same word cluster as the centroid. |
9613637 | 14429423 | 1 | 1. A driving support device providing audio information useful for driving to a driver of a vehicle according to speech guidance by using voice and a warning sound, wherein the driving support device provides the audio information useful for driving to a left ear and a right ear of the driver by using a right speaker and a left speaker to generate the speech guidance and the warning sound; wherein the right speaker is an ultrasound generation device for a right ear parametric speaker that generates a reproduced sound at the right ear by irradiating ultrasound toward a right ear position of the driver sitting on a driver seat of the vehicle; wherein the left speaker is an ultrasound generation device for a left ear parametric speaker that generates the reproduced sound at the left ear by irradiating ultrasound toward a left ear position of the driver sitting on the driver seat; and wherein the speech guidance among the audio information useful for driving is provided directly to the right ear of the driver exclusively from the right speaker. |
20180236954 | 15961514 | 0 | 1. A vehicle comprising: an input unit configured to receive an execution command for speech recognition; a navigation module configured to transfer information about an obstacle existing on a road on which the vehicle travels to a speech recognition controller; and the speech recognition controller configured to compare a time in which the vehicle is expected to arrive at the obstacle based on the information transferred from the navigation module to a time in which a voice command is input to determine whether to perform dynamic noise removal pre-processing. |
8498857 | 12782326 | 1 | 1. A system for porting a speech recognition solution in a source language to recognize a target language, said speech recognition solution consisting of a speech recognition engine, pronunciation lexicon in the source language, speech grammar file for the source language, prompts in the source language, said system comprising: lexicon conversion means adapted to convert the pronunciation lexicon of the source language to an equivalent lexicon to be used in the target language, said lexicon conversion means having: i. a first database for storing lexicon of words in the source language corresponding to analogous words in the target language; ii. translation means co-operating with said first database adapted to receive each word in the source language and provide its corresponding word in the target language; iii. transliteration means adapted to receive and map said translated word into source language graphemes; iv. grapheme to phoneme conversion means adapted to generate a source language phoneme sequence for each of said source language graphemes to obtain the phonetic pronunciation of the target language word in the source language; v. lookup table creation means adapted to receive a transliterated target language word in the source language and also receive equivalent phonetic pronunciation of said word from said grapheme to phoneme conversion means and prepare a lookup table; grammar conversion means adapted to modify the speech grammar file of the source language to handle free speech based speech recognition solutions in the target language, said grammar conversion means having: i. translation means adapted to receive the speech grammar file of the source language and translate the grammar file to the target language; ii. 
transliteration means adapted to receive said translated grammar file and transliterate said translated grammar file to the source language and provide a transliterated grammar file for the target language in the source language; prompt generation means adapted to convert voice prompts in the source language to the target language, said prompt generation means having: i. translation means adapted to convert words containable in the voice prompts of the source language into words in the target language; ii. identification means adapted to use the words in said lookup table of said lexicon conversion means to identify the phonetic pronunciation corresponding to the words containable in the prompts in the target language and provide a phoneme sequence of words containable in the prompts in the source language; iii. text to speech conversion means adapted to receive said phoneme sequence of words and generate a string of words containable in the prompts in the source language; iv. grammar adjustment means adapted to receive the converted string of words containable in the prompts in the source language and arrange the words in accordance with said transliterated grammar file for the target language and provide a grammatically modified text based prompt; v. 
speech generation means adapted to receive said grammatically modified text based prompt and generate speech based output for said grammatically modified text based prompt; prompting means co-operating with said prompt generation means adapted to output said generated speech corresponding to a prompt in the target language to guide users to submit their query; receiving means adapted to receive a speech based query from users in the target language, said query adapted to be received and converted into source language text and source language representation using said lookup table and said transliterated grammar file for the target language by the speech recognition engine; processing means adapted to process said source language text and perform pre-determined operation and further adapted to provide a processed output; compiling means co-operating with said lexicon conversion means, said speech grammar conversion means, said prompt generation means and said processing means and adapted to compile the final output in the target language; and playback means adapted to play said final output in the target language. |
20050142529 | 10974530 | 0 | 1. A method of grading an essay using an automated essay scoring system, the essay being a response to a test prompt, comprising: deriving a set of predetermined features from the essay, wherein the predetermined features comprise one or more features that are independent from the test prompt; evaluating the feature set with a scoring equation; generating a raw score for the essay from the scoring equation; and processing the raw score for the essay, wherein an adaptive cutoff algorithm is utilized to assign the raw score to a score category. |
7809722 | 11246741 | 1 | 1. A computer-implemented method for enabling retrieval of a collection of images, the method being performed by a combination of hardware components that include one or more processors and memory resources, the method comprising: programmatically analyzing each image of the images in at least a portion of the collection by (i) detecting presence of one or more objects in individual images that comprise the collection; (ii) determining a type of each detected object; and (iii) performing an image recognition process for said each detected object based on the determined type of said each detected object in order to determine image recognition information for said each detected object, the image recognition process being performed by computational steps that incorporate a set of parameters that are based on the type of the detected object, the image recognition process performed for one of the types of detected objects being different than the image recognition process performed for another one of the types of detected objects; wherein the one or more detected objects in at least some of the analyzed images correspond to an item of clothing or apparel; storing the image recognition information about the one or more detected objects that are detected from said programmatically analyzing each image, said image recognition information including a recognition signature for individually detected objects that correspond to the item of clothing or apparel, and said recognition information; determining a criteria from an image input provided by a user; wherein the determining a criteria from an image input includes determining a recognition signature for at least a portion of an image that is used in the image input; and wherein the determining the recognition signature is performed for the item of clothing or apparel that appears in the portion of the image in the image input; comparing the criteria to the stored image recognition information in 
order to identify one or more images in the collection that satisfy the criteria, wherein comparing the criteria to the stored image recognition information includes enabling the image recognition information for individual items of clothing or apparel to form at least a partial basis for comparison against the criteria of the image input in order to determine whether the image that contains the item of clothing or apparel satisfy the criteria of the image input; wherein the comparing the criteria to the stored image recognition information further comprises computing a similarity between (i) features of the item of clothing or apparel that appears in the portion of the image in the image input and (ii) features of individual objects that appear in at least a portion of the images that comprise the collection; and presenting the one or more images that satisfy the criteria to the user. |
9153234 | 13847001 | 1 | 1. A speech recognition apparatus comprising: a recognition device that recognizes a content of a voice made by a user and generates a speech character string as a character string indicative of a recognition result; a display device that displays the speech character string generated by the recognition device; a reception device that receives an input of a correction character string, which is used for correction of the speech character string displayed in the display device, through an operation portion operated by the user under a condition that a part of a body of the user contacts the operation portion; a correction device that corrects the speech character string with using the correction character string, the input of which is received by the reception device; and a detection device that detects a misrecognition character string, which is a character string different from the content of the voice made by the user, in a predetermined correction range of the speech character string, wherein: the correction device executes correction by replacing the misrecognition character string detected by the detection device with the correction character string, the input of which is received by the reception device, the reception device receives the input of the correction range via the operation portion, the detection device detects the misrecognition character string in the correction range of the speech character string, the operation portion includes an operation surface having a plate shape, the operation portion is attached to a steering wheel of a vehicle, and the reception device receives the input of the correction character string by detecting a letter input by a swiping operation of the operation surface of the operation portion. |
10117038 | 15944796 | 1 | 1. A method executed by a handheld portable electronic device (HPED) to generate a sound localization point (SLP) in empty space where binaural sound externally localizes to a person during a telephone call, the method comprising: determining, with the HPED, a location of the HPED with respect to a head of the person while HPED is within 1.5 meters from the head of the person; generating, with the HPED, the SLP in empty space where the binaural sound will externally localize during the telephone call to the person in the empty space at the location where the HPED was located; and convolving, with a processor in the HPED and during the telephone call between the person and another person and after the HPED is removed from the location where the SLP was generated, a voice of the another person so the voice of the another person externally localizes as the binaural sound to the person in the empty space at the location where the HPED was located to generate the SLP. |
20040040621 | 10435135 | 0 | 1. A method for recovering target speech based on split spectra using sound sources' locational information, said method comprising: a first step of receiving target speech from a target speech source and noise from a noise source and forming mixed signals of the target speech and the noise at a first microphone and at a second microphone, said microphones being provided at different locations; a second step of performing the Fourier transform of the mixed signals from a time domain to a frequency domain, decomposing the mixed signals into two separated signals U A and U B by use of the Independent Component Analysis, and, based on transmission path characteristics of the four different paths from the target speech source and the noise source to the first and second microphones, generating from the separated signal U A a pair of split spectra v A1 and v A2, which were received at the first and second microphones respectively, and from the separated signal U B another pair of split spectra v B1 and v B2, which were received at the first and second microphones respectively; and a third step of extracting a recovered spectrum of the target speech, wherein the split spectra are analyzed by applying criteria based on sound transmission characteristics that depend on the four different distances between the first and second microphones and the target speech and noise sources, and performing the inverse Fourier transform of the recovered spectrum from the frequency domain to the time domain to recover the target speech. |
9189064 | 13604462 | 1 | 1. A method comprising: at a computer system with a display and a gaze detection device: preparing to display, on the display, a display event associated with a target region of the display; obtaining, from the gaze detection device, user gaze information including a gaze region on the display; comparing the gaze region and the target region; after comparing the gaze region and the target region: in accordance with a determination that the gaze region and the target region do not overlap, delaying display of the display event on the display; in accordance with a determination that the gaze region and the target region do overlap and that the size of the overlapping area is less than the threshold size, delaying display of the display event on the display; and in accordance with a determination that the gaze region and the target region do overlap and that the size of the overlapping area is greater than the threshold size, executing the display event on the display without the delay. |
20140304257 | 14311441 | 0 | 1. A system for providing an automated response to a user natural language query (NLQ) made in regard to a subject, said system comprising: a computing platform including communication circuitry, processing circuitry and computer executable code adapted to cause the computing platform to: (a) receive digital data representing the user NLQ; (b) assign a weight value to each of some or all of the words in the NLQ, wherein a weight value assigned to a given word of the NLQ is inversely related to a rate of occurrence of the given word in at least one knowledgebase; (c) calculate a query significance value for each of some or all of the words in the NLQ, wherein the query significance value for a given word is proportional to the weight value of the given word relative to a sum of weight values of a set of words in the NLQ; (d) search the at least one knowledgebase for one or more candidate matches, which candidate matches include words corresponding to words in the user NLQ; and (e) score matches between the NLQ and the one or more match candidates by performing a mathematical operation using the query significance value of words in the NLQ and the query significance value of corresponding words in the one or more match candidates; (f) compare a context in which the NLQ was submitted to contexts associated with one or more of the one or more match candidates. |
20150186351 | 14145168 | 0 | 1. An electronic device, comprising: a display for presenting paginated digital content to a user; and a user interface including an annotation mode, the annotation mode including multiple note types for paginated digital content, the note types comprising: i) a sticky note that can be created in response to a tap or mouse click or selection made on the paginated digital content when the annotation mode is invoked, wherein the sticky note is represented by a movable graphic and selection of the graphic causes contents of the sticky note to be presented; and ii) a margin note that can be created by converting a previously created sticky note to a margin note, wherein contents of the margin note are always presented and the margin note is configured to be placed anywhere on the paginated digital content. |
20050004883 | 10842237 | 0 | 1. An artificial adaptive agent having a plurality of input nodes for receiving input signals and a plurality of output nodes for generating output signals wherein the output signals are trained responses to the input signals, the agent comprising: a primary value register for storing one or more primary values; means for adjusting the primary values based upon the responses each time the responses change; and sensors coupled to sense the primary values and having sensor outputs coupled to some of the input nodes of the agent. |
20070167857 | 10587461 | 0 | 1. A method for evoking and measuring response signals in a human patient, comprising: providing a plurality of discrete stimulus signals to the human patient in a predetermined encoded sequence, each of said discrete stimulus signals selected to evoke at least one desired response signal in the human patient; acquiring unfiltered signals from the human patient, said acquired unfiltered response signals including signal noise; and utilizing said predetermined encoded sequence to extract said desired response signals from said acquired unfiltered response signals. |
9087519 | 13424643 | 1 | 1. A computer-implemented method of scoring speech, comprising: receiving a speech sample, wherein the speech sample is based upon speaking from a script; aligning, using a processing system, the speech sample with the script; extracting, using the processing system, an event recognition metric of the speech sample; detecting, using the processing system, locations of prosodic events in the speech sample based on the event recognition metric; comparing, using the processing system, the locations of the detected prosodic events with locations of model prosodic events, wherein the locations of model prosodic events identify expected locations of prosodic events of a fluent, native speaker speaking the script, and wherein the comparing comprises comparing a first data structure for the model prosodic events and a second data structure for the detected prosodic events, the first data structure and the second data structure including binary data per syllable representing whether or not a syllable exhibits a stress and whether or not the syllable exhibits a tone change, said comparing including comparing per syllable the binary data representing stress and the binary data representing tone change for the model prosodic events and the detected prosodic events; calculating, using the processing system, a prosodic event metric based on the comparison; and scoring, using the processing system, the speech sample using a scoring model based upon the prosodic event metric. |
20100305947 | 12792384 | 0 | 1. A computer-implemented speech recognition method for selecting a combination of list elements via a speech input, wherein a first list element of the combination is part of a first set of list elements and a second list element of the combination is part of a second set of list elements, the method comprising: receiving at a processor the speech input; comparing within the processor each list element of the first set of list elements with the speech input to obtain a first candidate list of best matching list elements; processing the second set of list elements using the first candidate list to obtain a subset of the second set of list elements; comparing each list element of the subset of the second set of list elements with the speech input to obtain a second candidate list of best matching list elements; and selecting a combination of list elements using the first and the second candidate lists wherein selecting a combination of list elements comprises: determining combinations of a list element of the first candidate list with a related list element of the second candidate list; scoring each determined combination by combining the score of the list element of the first candidate list and the score of the related list element of the second candidate list; and determining a result list wherein the result list comprises best matching combinations of a list element of the first set of list elements and a list element of the second set of list elements. |
7561673 | 10954676 | 1 | 1. A computer readable storage medium having instructions for controlling a telephone infrastructure service model based telephony device, the instructions comprising: an object oriented application including a device object adapted for storing information pertaining to a physical or logical device, and a call object adapted for storing information pertaining to a call between at least two devices; and a converter module configured to convert operations performed by the object oriented application to service model based operations for the service model based telephony device. |
9105267 | 13727250 | 1 | 1. A speech recognition apparatus comprising: a local database storing a plurality of local text data; a text data obtaining unit that obtains a plurality of subject text data from an external device; a text data transmission unit that transmits the plurality of subject text data to a server; a first recognition dictionary that stores a plurality of first phoneme strings, which are respectively converted from the plurality of subject text data; a recognition dictionary preparation unit that prepares the first recognition dictionary by: respectively converting the plurality of subject text data to the plurality of first phoneme strings when each of the plurality of subject text data is equal to one of the plurality of local text data stored in the local database, and storing the plurality of first phoneme strings in the first recognition dictionary; a speech input unit that inputs a speech made by a user; a speech recognition unit that recognizes the speech by referring to the first recognition dictionary and outputs a first recognition result; a speech transmission unit that transmits the speech to the server, which includes a second recognition dictionary that stores a plurality of second phoneme strings respectively converted from the plurality of subject text data, the server recognizing the speech by referring to the second recognition dictionary and outputting a second recognition result; a recognition result receipt unit that receives the second recognition result from the server; and a control unit that determines a likelihood level of a selected candidate obtained based on the first recognition result, and controls an output unit to output at least one of the first recognition result or the second recognition result based on a determination result of the likelihood level of the selected candidate, wherein the server includes a server database storing a plurality of server text data, an amount of the server text data stored in the 
server database being larger than an amount of the local text data stored in the local database; wherein the server: converts each of the subject text data received from the text data transmission unit to one of the plurality of second phoneme strings when each of the subject text data is equal to one of the plurality of server text data stored in the server database, and stores the plurality of second phoneme strings in the second recognition dictionary; and wherein, when the likelihood level of the selected candidate is equal to or higher than a threshold level, the control unit controls the output unit to output the first recognition result irrespective of whether the recognition result receipt unit receives the second recognition result from the server. |
8639678 | 13479363 | 1 | 1. A system for generating medical knowledge base information, comprising: a non-transitory computer readable medium for storing computer readable instructions; a search processor device operative with the computer readable instructions to search at least one repository of medical information to identify sentences including a received medical term; and a data processor device operative with the computer readable instructions to perform steps including, searching the identified sentences to identify sentences including a medical term different to the received term in response to a predetermined repository of medical terms, excluding sentences without a term different to the received term, to provide remaining multiple term sentences, grouping different terms of individual sentences of said multiple term sentences to provide grouped terms, determining whether a medically valid relationship occurs between different terms of an individual group of terms of said grouped terms by using predetermined sentence structure and syntax rules, and outputting data representing grouped terms having a medically valid relationship. |
20160078697 | 14615378 | 0 | 1. A wearable device, comprising: a fingerprint recognition apparatus; a pulse sensor, for detecting pulse information of a user; and a processor, for determining whether the user is wearing the wearable device according to the pulse information detected by the pulse sensor, wherein when the processor determines that the user is wearing the wearable device, the processor performs a first time of fingerprint recognition and a pulse recognition on the user according to a fingerprint image detected by the fingerprint recognition apparatus and the pulse information detected by the pulse sensor, respectively, wherein when the processor determines that the first time of fingerprint recognition and the pulse recognition are approved, the processor controls the wearable device to enter a first working state from a locked state. |
8704948 | 13353160 | 1 | 1. A method of presenting text identified in a presented video image of a media content event, the method comprising: receiving a complete video frame that is associated with a presented video image of a captured scene of a video content event, wherein the presented video image includes text disposed on an object that has been captured in the scene; finding the text on the object that is part of the captured scene in the complete video frame; using an optical character recognition (OCR) algorithm to translate the found text on the object into translated text; and presenting the translated text associated with the text on the object that is part of the captured scene. |
20080154597 | 11961580 | 0 | 1. A voice processing apparatus, comprising: a storage unit that stores registration information containing a characteristic parameter of a given voice; a judgment unit that judges whether an input voice is appropriate or not for creating or updating the registration information; a management unit that creates or updates the registration information based on a characteristic parameter of the input voice when the judgment unit judges that the input voice is appropriate; and a notification unit that notifies a speaker of the input voice when the judgment unit judges that the input voice is inappropriate. |
8379994 | 12904138 | 1 | 1. A computer-implemented feature extraction method for classifying pixels of a digitized image, the method to be performed by a system comprising at least one processor and at least one memory, the method comprising: generating a plurality of characterized pixels from a digitized image; generating a plurality of classification models by associating features of the plurality of characterized pixels with labels of a plurality of ground truths, wherein each of the plurality of ground truths is associated with a respective one of a plurality of image classifications; determining by the system a plurality of confidence maps based on the plurality of classification models, wherein the confidence maps contain information representing a likelihood of each pixel in the digitized image belonging to one of the plurality of image classifications; and iteratively improving the plurality of classification models by (a) extracting a plurality of contextual image feature vectors from the most-recently generated plurality of confidence maps, (b) updating the plurality of classification models based on the most-recently extracted plurality of contextual image feature vectors, and performing steps a and b for a threshold number of iterations; and outputting by the system a plurality of final classification models that classify part or all of the pixels of the digitized image. |
8952985 | 13655562 | 1 | 1. A digital comic editor, comprising: a data acquisition device configured to acquire a piece of master data of a digital comic, the master data including: an image file corresponding to each page of the comic, the image file having a high resolution image of the entire page; and an information file corresponding to each page or all pages of the comic, the information file having described therein a piece of speech bubble information including a piece of speech bubble region information representing regions of speech bubbles for containing dialogs of characters in the image; a display control device configured to control a display device to display an image thereon based on the image file in the master data acquired by the data acquisition device and to display an image representing speech bubble regions based on the speech bubble region information included in the information file in the master data while superimposing the image representing the speech bubble regions on the image based on the image file; an indication device configured to indicate a position on the image displayed on the display device; a speech bubble region addition device configured to add a new piece of speech bubble region information to the position indicated by the indication device; a speech bubble region deletion device configured to delete the speech bubble region information from the position indicated by the indication device; an editing device configured to update the speech bubble region information included in the information file based on the speech bubble region information added by the speech bubble region addition device and the speech bubble region information deleted by the speech bubble region deletion device; a speech bubble region detection device configured to detect a piece of closed region information for enclosing a periphery of the position indicated by the indication device as a piece of speech bubble region information, wherein the display control device controls to display an image representing the speech bubble region based on the speech bubble region information detected by the speech bubble region detection device while superimposing the image representing the speech bubble region on the image based on the image file, and the speech bubble region addition device adds the speech bubble region information detected by the speech bubble region detection device; an image acquisition device configured to acquire an image file having a high resolution image of the entire page; a speech bubble region extraction device configured to analyze the image of the entire page acquired by the image acquisition device and automatically extracts the speech bubble regions in the image; an information file creation device configured to create an information file having described therein the speech bubble information which includes a piece of speech bubble region information representing a speech bubble region extracted by the speech bubble region extraction device; and a master data creation device configured to create a piece of master data of the digital comic, the master data including: an image file acquired by the image acquisition device for each page of the comic; and an information file corresponding to each page or all pages of the comic, which is created by the information file creation device, wherein the data acquisition device acquires a piece of master data created by the master data creation device. |
20170289486 | 15088633 | 0 | 1. A method, comprising: monitoring an audio signal comprising a device audio component from a device presenting content and an ambient audio component; determining the ambient audio component of the audio signal; determining whether the ambient audio component satisfies an ambient noise threshold; activating an audio transcription feature of the device in response to the ambient audio component satisfying the ambient noise threshold; and deactivating the activated audio transcription feature of the device in response to determining that the ambient audio component no longer satisfies the ambient noise threshold. |
8898159 | 13240108 | 1 | 1. A system for generating answers to questions, comprising: a computer device comprising at least one distinct software module, each distinct software module being embodied on a tangible computer-readable medium; a memory; and at least one processor coupled to the memory and operative for: receiving an input query; formulating a plurality of different subqueries to answer the input query, wherein each of the subqueries has an associated answer to the each subquery, and the answers to the subqueries are used to determine an answer to the input query; conducting a search in one or more data sources to identify at least one candidate answer to each of the subqueries; for each of the candidate answers for each of the subqueries, applying a candidate ranking function to determine a ranking for said each of the candidate answers; for each of the subqueries, selecting one of the candidate answers to the subquery based on the ranking of said one of the candidate answers; and applying a logical synthesis component to synthesize a candidate answer for the input query from the selected ones of the candidate answers to the subqueries. |
20160042296 | 14456985 | 0 | 1. A method, implemented by one or more computing devices, for generating a model, comprising: sampling user-behavioral data from a repository of user-behavioral data, the user-behavioral data identifying linguistic items submitted by users, together with selections made by the users in response to the linguistic items; sampling knowledge data from one or more structured knowledge resources, the knowledge data representing relationships among linguistic items expressed by said one or more structured knowledge resources; and generating a model on the basis of the user-behavioral data and the knowledge data, using a machine-learning training process, the model providing logic for assessing relevance of linguistic items, said sampling of user-behavioral data, said sampling of knowledge data, and said generating of the model being performed using at least one processing device associated with said one or more computing devices. |
8682811 | 12650285 | 1 | 1. A method of building an index of web pages, the method comprising: accessing a set of URLs collected by crawling the Internet; accessing a list of URLs collected from one or more sources that collect clicks of URLs by users; for each URL in the set of URLs, for a given URL: computing a measure of likelihood that the given URL will be searched by a user in the future based on whether the URL has been clicked by a user, storing the given URL and its measure, selecting a subset of the URLs based on their respective stored measures, where some of the URLs in the set of URLs are omitted from the subset based on the measures of the omitted URLs; generating a first index of the web pages pointed to by the URLs in the subset of URLs, the first index not including the omitted URLs, the first index comprising a mapping between contents of the web pages and the URLs of the web pages, and generating a second index of the web pages pointed to by the URLs omitted from the subset, the second index not including the URLs in the first index; and using the index by a search engine to search for search results for arbitrary search queries submitted by users, wherein when the first index does not satisfy a given query, using the second index to attempt to satisfy the given query. |
20100159968 | 12716060 | 0 | 1. A communication device for sending a text message, the communication device comprising: a memory to store voice characteristic information associated with a user; a processor to monitor speech of the user and adjust the voice characteristic information stored in memory; and a transmitter to transmit the text message and adjusted voice characteristic information to a recipient device to audibly output the text message utilizing the adjusted voice characteristic information. |
7698127 | 10956270 | 1 | 1. A method comprising: receiving a series of user input selections by a computer configured to auto complete and suggest data entries; for a most recently received input selection: checking, by the computer, whether the most recently received input selection is for addition of an input character or deletion of an input character; if the most recently received input selection is for addition of an input character, then: waiting by the computer, in response to the most recently received input selection, until a first amount of time since the most recently received input selection was received has passed, and presenting by the computer, in response to the first amount of time having passed, an option to automatically enter an additional input selection; and if the most recently received input selection is for deletion of an input character, then: waiting by the computer, in response to the most recently received input selection, until a second amount of time since the most recently received input selection was received has passed, and presenting by the computer, in response to the second amount of time having passed, the option to automatically enter an additional input selection, the second amount of time being longer than the first amount of time. |
10157180 | 14994099 | 1 | 1. A computer-implemented method for facilitating information in multiple languages based on an optical code, comprising: scanning, by a computing device, an optical code accompanying a text phrase; retrieving a location of a conversion server and an optical code identifier embedded in the optical code, wherein the optical code identifier corresponds to a set of target phrases in the conversion server, wherein a respective target phrase is a translation of the text phrase in a target language; determining, by the computing device from the optical code, a set of target languages in which the text phrase is available, wherein the optical code indicates the set of target languages; selecting, by the computing device, at least one target languages from the set of target languages for obtaining the text phrase in the selected target language; and sending a query message to the conversion server based on the retrieved location, wherein the query message comprises the optical code identifier, a list of the selected target language, and information indicating whether the selected target language is supported by the computing device; and displaying the target phrase in the selected target language supported by the computing device, which is received from the conversion server. |
8523673 | 12967269 | 1 | 1. A vocally interactive video gaming mechanism comprising: a) a microphone configured to connect to a video game system to allow a player in a physical game space to communicate by voice of said player within a video game by means of voice recognition software; b) a video recording device configured to scan the physical game space and record physical characteristics of said player, the physical characteristics including tracked movements of said player within said physical game space, wherein the recorded physical characteristics of said player in said physical game space are synchronously displayed, via a display device, with said voice of said player in a virtual gaming world of said video game in association with a virtual avatar; c) a holographic projection device configured to holographically project one or more holographic images corresponding to said player into said physical game space in which said player is engaged in said video game, wherein the one or more holographic images mimic the recorded physical characteristics and said voice of said player in real-time; and d) an audio speaker configured to generate sounds to accompany said one or more holographic images and said virtual avatar. |
20040044952 | 10399587 | 0 | 1. A method of generating index terms for documents, including: parsing the documents to identify key terms of each document based on sentence structure; determining an importance score for terms of the documents based on distribution of said terms in said documents; and retaining said key terms having an importance score above a predetermined threshold as said index terms. |
7642922 | 11521384 | 1 | 1. A drive support apparatus for a movable body, comprising: a driving state detecting portion for acquiring driving state information comprising at least either information about a traveling situation of the movable body or information about a driving operation of a driver of the movable body, from a sensor provided in the movable body during a drive of the movable body by the driver; a driver information storing portion for storing awareness information that represents a digitized awareness of the driver indicating how much the driver is aware of a regulated or recommended driving condition for safe driving of the movable body; an interaction controlling portion configured to, decide whether to start an interaction with the driver based on whether the driving state information and the awareness information satisfy a predetermined interaction-start condition; generate a question to check the driver's awareness that indicates how much the driver is aware of a regulated or recommended driving condition for safe driving in the case of a decision to start the interaction; output the generated question as a synthesized voice; recognize an answer content from a voice of the driver to the question; generate advice for safe driving in response to the answer; and output the generated advice as a synthesized voice; an awareness information generating portion for updating the awareness information in accordance with the answer content recognized by the interaction controlling portion; and a safe-driving information storing portion for storing safe driving information comprising information that represents a condition for safe driving, wherein the awareness information generating portion decides whether the answer content recognized by the interaction controlling portion is correct or not, on the basis of the answer content and the safe driving information; when the answer content is decided as correct, awareness information generating portion decides whether or not to update the awareness information stored in the driver information storing portion depending on the answer content and the driving state information satisfy a predetermined awareness information updating condition, and in a case of a decision that the awareness information should be updated, the awareness information generating portion updates the awareness information to a value representing a state of a higher awareness, and when the answer content is decided as incorrect, the awareness information generating portion updates the awareness information stored in the driver information storing portion to a value representing a state of a lower awareness. |
20020160342 | 10127995 | 0 | 1. The method of teaching the alphabet comprising the steps of: providing an electronic teaching device having a keyboard with keys for the letters of the alphabet, a touch screen area for displaying each letter in response to actuation of the key for that letter, a speaker for delivering selected recorded audio messages with the display of a selected letters, and a second screen area for displaying a selected picture of an object with the delivery of a selected audio message; causing a letter to be displayed on said touch screen area in response to the actuation of the key for that letter, and concurrently causing an introductory audio message to be delivered that identifies the selected letter; causing the selected letter to disappear from the touch screen and causing a series of outline elements to appear sequentially representing the sequence of strokes to be used for writing the letter, and concurrently causing intermediate audio messages to be delivered that instruct the user to follow the series of outline elements; delivering a concluding audio message through the speaker identifying the selected letter, instructing in its phonetic pronunciation and providing an exemplary word identifying an object constituting an example of the use of the letter; and simultaneously displaying on said second screen area a picture of the object that is identified by the word provided in the concluding audio message. |
8897151 | 13183571 | 1 | 1. A computer-implemented system for application protocol field extraction, comprising: an extraction specification that specifies data elements to be extracted from data packets and is expressed in terms of a context-free grammar, where the grammar defines grammatical structures of data packets transmitted in accordance with an application protocol and is defined as a tuple having nonterminals, terminals, counters, production rules and a start nonterminal, such that the counters are variables with an integer value used to chronicle parsing history of the production rules, and at least one of the production rules includes an action association with a terminal or nonterminal comprising body of the production rule and the action is an expression for updating a value of a counter defined by the grammar; an automata generator configured to receive the extraction specification and generate a counting automaton; and a field extractor configured to receive a data flow comprised of a plurality of data packets traversing through a network and extract data elements from the data packets in accordance with the counting automaton, where the field extractor is implemented by an integrated circuit. |
20100185444 | 12356814 | 0 | 1. A method comprising: receiving a speech signal corresponding to a particular speaker; selecting, via a processor, a cluster model including both a speaker independent portion and a speaker dependent portion based at least in part on a characteristic of speech of the particular speaker; and processing the speech signal using the selected cluster model. |
9251142 | 14326283 | 1 | 1. A communication device comprising: a language input device configured to detect a first language signal associated with a first language; and a recognition and interpretation engine coupled with the language input device and configured to: obtain the first language signal from the language input device; generate a first recognition result set from the first language signal according to at least one of a grammar and statistical language model of the first language, said language model comprising a mobile interference model; generate an improved recognition result set from the first recognition result set by rescoring the first recognition result set according to a domain-specific language model; generate at least one interpretation result from the improved recognition results set; map the at least one interpretation result to a second language representation of a second language; and cause an output device to present an output interpretation according to the second language derived from the second language representation. |
20080076104 | 11926305 | 0 | 1. A learning system for auditory learners, comprising: a broadcast server means configured for: i) capturing, digitizing, storing, and indexing voice statements; ii) allowing an auditory learner to search said indexing for desirable voice statements; iii) playing on demand and broadcasting said desired voice statements to said auditory learner; iv) creating an auditory live session for said learner including joining a live discussion with human interaction at a multi-sensory level; and v) mining said voice statements for emerging subject matter and to create new voice statements for playing on demand to said learner; and a learner workstation for receiving said desired voice statements. |
20040093209 | 10690028 | 0 | 1. A data input device for inputting numeric data by voice, the data input device comprising: a holder operable to hold numeric data input in the past; a calculator operable to calculate a prediction range of a value expected to be input on the basis of the numeric data held in the holder; a speech recognizer operable to perform speech recognition of input speech representing a value; a determiner operable to determine whether or not a value represented by a recognition result obtained by the speech recognizer is within the prediction range calculated by the calculator; and a presenter operable to present details corresponding to the determined result. |
20090185075 | 12408204 | 0 | 1. A storage medium, comprising: audio-visual data; and text-based subtitle data to provide subtitles of the audio-visual data, wherein the text-based subtitle data comprises a plurality of dialog presentation units and a dialog style unit defining a set of output styles to be applied to the dialog presentation units, and each dialog presentation unit comprises dialog text information, time information indicating a time for the dialog text information to be output, palette information defining colors to be applied to the dialog text information, and a color update flag indicating whether only the palette information has changed as compared with a graphical composition of a previous dialog presentation unit. |
20080056283 | 11469601 | 0 | 1. A system for converting instant messaging (IM) text to telephone speech and back, comprising: a first electronic device configured to transmit and receive IM text; a translation module housed inside the first electronic device, the translation module configured to (i) translate the outgoing IM text into speech, and (ii) translate incoming speech into IM text; and a second electronic device communicatively coupled by way of a network to the first electronic device such that the second electronic device may receive the IM text as translated speech from the first electronic device via a call. |
7739215 | 12417959 | 1 | 1. A normalization system, comprising: a processor; a memory communicatively coupled to the processor, the memory having stored therein computer-executable instructions to implement the system, including: an interface component that processes questions posed by users, the questions corresponding to a heterogeneous knowledge base; a dialog component that requests users to reformulate questions based upon a cost-benefit analysis; a normalization component that applies a utility model that predicts accuracy results to provide a regularized understanding of the knowledge base based on at least one of the processed questions or the reformulated questions; and an answer composer component that employs the predicted accuracy of results to generate answers to the processed questions. |
7826945 | 11173736 | 1 | 1. A method for providing a voice enabled user interface comprising: storing general configuration information for the interface; storing user-specific configuration information for the interface; processing a speech input from a user using the general configuration information and the user-specific configuration information; and selectively updating the user-specific configuration information based on a result of processing the speech input wherein the updating is performed upon correct recognition of the speech input when a first score associated with the speech input indicates that an incorrect recognition hypothesis has a second score within a predetermined threshold. |
8091023 | 11864013 | 1 | 1. A method of identifying a proposed spelling correction for a word that has been determined to at least potentially be misspelled, the method comprising: receiving from a data source a series of characters of a candidate spelling correction; making a determination for each character of at least a portion of the series that at least one of: the character validly corresponds with a predetermined portion of a canonical version of the word, and the character is, according to at least one spell check algorithm from among a number of spell check algorithms, within a predetermined edit distance from a predetermined portion of the canonical version of the word; and outputting at least a portion of the candidate spelling correction as a proposed spelling correction, wherein the canonical version of the word includes a character set having both diacritical and non-diacritical forms of at least one character. |
20120310651 | 13485303 | 0 | 1. An apparatus for synthesizing a voice signal using a plurality of phonetic piece data each indicating a phonetic piece which contains at least two phoneme sections corresponding to different phonemes, the apparatus comprising: a phonetic piece adjustment part that forms a target section from a first phonetic piece and a second phonetic piece so as to connect the first phonetic piece and the second phonetic piece to each other such that the target section is formed of a rear phoneme section of the first phonetic piece corresponding to a consonant phoneme and a front phoneme section of the second phonetic piece corresponding to the consonant phoneme, and that carries out an expansion process for expanding the target section by a target time length to form an adjustment section such that a central part of the target section is expanded at an expansion rate higher than that of a front part and a rear part of the target section, to thereby create synthesized phonetic piece data of the adjustment section having the target time length and corresponding to the consonant phoneme; and a voice synthesis part that creates a voice signal from the synthesized phonetic piece data created by the phonetic piece adjustment part. |
9569551 | 15202734 | 1 | 1. A method of dynamically modeling geospatial words, comprising: receiving GPS annotated text data generated by a GPS-enabled device containing latitude and longitude coordinates, the GPS annotated text data comprising live text stream harvested from social media; generating a word set based on frequencies of words occurring in the GPS annotated text data; partitioning locations by mapping GPS coordinates in the GPS annotated text data to a set of discrete non-overlapped locations; segmenting a text stream contained in the GPS annotated text data into time windows; generating footprints of locations in time windows; determining localness of words based on the footprints, the localness of words incrementally updated over time; and dynamically integrating in geotagging by extracting words in a text message and determining scores associated with the set of discrete non-overlapped locations. |
10102655 | 13339650 | 1 | 1. A method for outputting images, the method comprising the steps of: receiving a base image and a plurality of personalized messages, wherein a personalized message is formed from elements consisting of at least two pixels found in the base image; converting the base image only once into control codes of a printer, the control codes used for printing; converting each of the plurality of the personalized messages into control codes of the printer; sending the control codes for the base image and the personalized messages to the printer for printing; for each personalized message: using the control codes for the base image, causing the base image to be printed on a page; and using the control codes for the personalized message, causing the personalized message to be printed as overlaid onto the base image. |
20150254057 | 14600884 | 0 | 1. On a computing system, a method for suggesting voice commands to control user interaction with the computing system, the method comprising: identifying a user identity of a user interacting with the computing system; selecting a voice command from a set of voice commands based on the user identity; identifying a voice-command suggestion corresponding to the voice command; and presenting, via a display, a graphical user interface including the voice-command suggestion. |
9094775 | 14267918 | 1 | 1. A communication device comprising: a microphone; a speaker; an input device; a display; an antenna; a voice communication implementer, wherein voice communication is implemented by sending and receiving audio data via said antenna; a stereo audio data output implementer, wherein a stereo audio data is processed to be output in a stereo fashion; and a location dependent program executing implementer, wherein when said communication device is identified to be located at a first location, a first software program is executed, and when said communication device is identified to be located at a second location, a second software program is executed; wherein said first software program is a software program operable to be executed independently from said second software program; wherein said second software program is a software program operable to be executed independently from said first software program; wherein said first software program is the software program selected by the user to be executed when said communication device is identified to be located at said first location; and wherein said second software program is the software program selected by the user to be executed when said communication device is identified to be located at said second location. |
10062373 | 15801045 | 1 | 1. A wireless earpiece comprising: a wireless earpiece housing; a processor disposed within the wireless earpiece housing; at least one air microphone operatively connected to the processor; at least one bone microphone operatively connected to the processor; at least one speaker operatively connected to the processor; wherein the processor is configured to receive audio from the at least one air microphone, the at least one bone microphone, perform processing of the audio to provide processed audio, and output the processed audio to the at least one speaker; wherein the processing of the audio comprises identifying body generated sounds generated by a body of a user of the wireless earpiece and removing the body generated sounds; and wherein the identifying body generated sounds is performed by comparing a first audio signal from the at least one air microphone with the second audio signal from the at least one bone microphone. |
20070276806 | 11636857 | 0 | 1. A method for searching stock documents containing non-word-based data, comprising: (a) collecting a group of stock documents to form a collection of stock documents; (b) dividing each document in said group of collected documents into a series of elements; (c) defining a plurality of non-word-based token patterns; (d) tokenizing said documents by matching said series of elements against said plurality of defined non-word-based token patterns to generate a collection of tokens for each of said documents, and providing a name for each of said tokens; (e) combining the collections of tokens for said documents into a master collection of tokens; (f) searching for stock documents in said collection of documents that have the same token names as a query or a combination of queries, by searching said query or queries in said master collection of tokens, to provide a plurality of matching documents with respective scores; and (g) displaying matching documents in the order of their matching scores; whereby said method will be able to search stock documents efficiently and systematically. |
4528687 | 06432379 | 1 | 1. A spoken-instruction controlled system for an automotive vehicle which can operate at least one vehicle device in accordance with a plurality of spoken instructions inputted through a microphone when a recognition switch is kept turned on which comprises: (a) a speech recognizer for outputting a plurality of recognition command signals independently whenever one of a plurality of predetermined spoken instructions is recognized to be similar to one of recorded reference spoken instruction pattern data; (b) at least one vehicle device actuator connected between said speech recognizer and the vehicle device for actuating the vehicle device in response to the recognition command signals; and (c) means for outputting a stop command signal to said vehicle device actuator for a predetermined time period T.sub.2 when the recognition switch is turned on again within a predetermined time period T.sub.1 after said speech recognizer has outputted a recognition command signal to said vehicle device actuator, said stop command signal outputting means being connected to said speech recognizer and the recognition switch. |
8135189 | 12243327 | 1 | 1. A method for segmenting organs in digitized medical images, the method comprising: using a computer to perform steps comprising: providing a set of training images of an object of interest, wherein said object of interest has been segmented; computing a surface mesh having a plurality of mesh cells that approximates a border of said object of interest; extracting positive examples of all mesh cells and extracting negative examples in the neighborhood of each mesh cell which do not belong to the object surface; training from said positive examples and negative examples a plurality of classifiers for outputting a probability of a point being a center of a particular mesh cell; computing an active shape model using a robust subset of center points in said mesh cells; wherein said robust subset of said mesh cells is determined by maximizing arg max_w ( α Σ_{i=1}^{n} Σ_{j=1}^{m} w_i g_{ij} + β Σ_{i=1}^{n} w_i C̄_i ), subject to g_{ij} < D, wherein g_{ij} is a geodesic distance between centers of mesh cells i and j, D is a maximum distance between mesh cells, C̄_i is an average classification accuracy of said classifier for mesh cell i, α and β are adjustable parameters that can trade-off between sparsity and classification accuracy, and w_i is a resulting weight of mesh cell i; generating a new shape by iteratively deforming said active shape model to fit a test image; and using said classifiers to calculate a probability of each center point of said new shape being a center of a particular mesh cell which said classifier was trained to recognize.
9037471 | 13745233 | 1 | 1. An image processing apparatus comprising: an image processor configured to process a broadcasting signal to display a broadcasting channel program; a communicator configured to communicate with a server; a voice receiver configured to receive a speech from a user; a voice processor configured to process a performance of an operation corresponding to the speech; and a controller configured to control the speech to be processed by one of the voice processor and the server, wherein if the speech comprises a keyword relating to a desired call sign of a broadcasting channel, the controller controls to select a representative call sign corresponding to the keyword, from a database which stores a plurality of representative call signs and a plurality of call signs groups, each comprising at least one call sign relating to the respective representative call sign, controls to display a list which comprises a call sign group relating to the selected representative call sign such that one call sign from the call sign group in the displayed list is selected, and performs the operation according to the speech with respect to the broadcasting channel of the selected call sign. |
20080140388 | 11987641 | 0 | 1. An information processing apparatus, comprising: a registration unit configured to register a property of document data; a document language identification unit configured to identify a language used in the document data; a property language identification unit configured to identify a language used in a property value entered with respect to the document data; and a translation unit configured to translate the property value entered with respect to the document data from the language used in the property value into the language used in the document data when the language used in the property value is different from the language used in the document data. |
20020097915 | 09949872 | 0 | 1. A word recognition device recognizing a word image, comprising: a capacity reducing unit reducing a capacity of a character feature dictionary used for synthesizing a word feature; a synthesizing unit synthesizing a word feature for a comparison based on a word list to be recognized from column or row features within a feature dictionary a capacity of which is reduced by said capacity reducing unit; a feature extracting unit extracting a feature of an input word; and a comparing unit making a comparison between the feature of the input word, which is extracted by said feature extracting unit, and a synthesized word feature. |
9195654 | 13834371 | 1 | 1. A method comprising: receiving, by one or more processors of a device, a translation query, the translation query requesting a translation of one or more terms from a source language to a target language; determining, by the one or more processors, one or more translation features associated with the translation query; assigning, by the one or more processors, a feature value to each of the one or more translation features to form one or more feature values; applying, by the one or more processors, a feature weight to each of the one or more feature values, in a linear or non-linear manner, to generate a final value; determining, by the one or more processors, whether to provide a dialog translation user interface or a non-dialog translation user interface based on whether the final value satisfies a threshold, the dialog translation user interface facilitating translation of a conversation, the non-dialog translation user interface providing: one or more translation search results, and one or more links to one or more documents that provide a translation from the source language to the target language, and the non-dialog translation user interface being different than the dialog translation user interface; and providing, by the one or more processors, the dialog translation user interface for display when the final value satisfies the threshold. |
20050065782 | 10769501 | 0 | 1. A method of encoding speech, comprising: (a) encoding speech frames of a first mode with waveform coefficients; and (b) encoding speech frames of a second mode with magnitudes of waveform coefficient plus an alignment phase; (c) wherein said encoding of step (b) includes a second alignment phase for a frame of said second mode immediately following a frame of said first mode.
8442828 | 11378710 | 1 | 1. A natural language understanding system, comprising: a decoder; a conditional random field model accessible by the decoder, the conditional random field model using a computer to assign a conditional probability of a state sequence given an observed vector of features to statistically model alignments between textual characters in an observed natural language input and semantic frames of a semantic structure, the semantic frames corresponding to states in the state sequence, wherein the conditional random field model models prior knowledge of relationships between elements of the semantic frames by assigning the conditional probability based on whether the observed vector of features includes command prior features, the command prior features being independent of the observed natural language input and indicative of a prior likelihood of commands defined by the semantic frames; and a computer processing unit, being a functional hardware component of the system and activated by the decoder and conditional model, facilitating modeling alignments. |
20050159950 | 11005567 | 0 | 1. A method of speech recognition comprising: receiving an original utterance of one or more words; performing an original speech recognition upon the original utterance; producing a user perceivable output representing one or more sequences of one or more words selected by the recognition as most likely corresponding to the utterance; providing a user interface that allows a user to select to perform a re-utterance recognition upon a part of the original utterance corresponding to all or a selected part of the user perceivable output; and responding to a user selection to perform a re-utterance recognition upon all or a part of the original utterance by: treating a second utterance received in association with the selection as a re-utterance of the selected portion of the original utterance; and performing speech recognition upon the re-utterance to select one or more sequences of one or more words considered to most likely match the re-utterance based on the scoring of the one or more words against both the re-utterance and the selected portion of the original utterance. |
20040215451 | 10423730 | 0 | 1. A method for conducting voice communications, comprising operations of: a human operator receiving telephone calls from a variety of remote callers over time; for each particular call from a caller, performing operations comprising: selecting an output voice that exhibits prescribed speech characteristics appropriate to that particular call; the human operator listening to voice utterances of the caller; instead of the operator speaking to the caller, performing operations comprising: the operator selecting target speech content to be conveyed to the caller; the operator directing a representation of the selected target speech content to a speech processing facility; the speech processing facility producing a signal representing an enunciation of the target speech content utilizing the selected output voice; transmitting the signal to the caller. |
20150269431 | 14443918 | 0 | 1. A method for the spotting of keywords in a handwritten document, comprising the steps of: inputting an image of the handwritten document; performing word segmentation on the image to obtain segmented words; performing word matching, consisting in the sub-steps of: performing character segmentation on the segmented words; performing character recognition on the segmented characters; performing distance computations on the recognized characters using a Generalized Hidden Markov Model with ergodic topology to identify words based on character models; performing non-keyword rejection using a classifier based on a combination of Gaussian Mixture Models, Hidden Markov Models and Support Vector Machines; outputting the spotted keywords. |
20110248924 | 13140803 | 0 | 1. A touch-typable device for accepting desired textual input from a user comprising: a movable keymask comprising of a plurality of cells mapped with one or more keys of text input keypad of said touch-typable device, wherein each of said plurality of cells of said movable keymask superimposes over textual input choices enabling said user to select said desired textual input from said plurality of cells of said movable keymask using said text input keypad of said touch-typable device. |
20060277048 | 11422175 | 0 | 1. A handheld analysis instrument for assaying a medically significant sample, the handheld analysis instrument comprising: a measuring device for measuring the concentration of an analyte in the sample; and an output device for outputting measurement results which were determined by the measuring device, wherein the output device comprises: an acoustic signal output device for operation in an acoustic mode for outputting the measurement results through nonverbal acoustic signals; and a wireless interface for communicating with an external speech output unit, such that the output device operates in a speech output mode, by which the measurement results may be outputted verbally via the interface; such that the output device transitions from the acoustic mode into the speech output mode by receiving a signal transmitted by the speech output unit. |