Lines Matching full:word
119 batch->word = awtoken->word;
245 q2->word = list1->word;
255 the number of possible word ends. Any tokens beyond that length will get
671 /* this function allocates a new word token and remembers it in a list in the srec structure (to be used for backtrace later on) */
683 dump_core("create_word_token: cannot allocate word token - we need"
684 " to figure out a word pruning strategy when this happens!\n");
692 /* gets rid of fsmnodes which trace back to this word since
693 the word is not going to make it onto the word lattice */
710 /* processing a word boundary,
718 wordID word,
739 /*make new word token*/
746 re-pruning word tokens too deep in the search update */
752 wtoken->word = word;
765 /* the word was truly added to the priority q, so we must
766 get the new worst word on that list */
775 /* ok, the word won't be maintained, so there's no point to
780 /* we killed the fsmnode token associated with the word being removed.
781 But, we didn't kill its word backtrace, so there may be word tokens
814 /*handles epsilon transitions (used for word boundaries). Epsilons come from active
819 epsilon, create a word token, put it in the path, and remember it in a
820 list of all word tokens*/
901 word_with_wtw = current_ftoken->word;
910 /*if word boundary, see if it crosses the word end threshold*/
911 /* no room on the word priority_q, so not worth pursuing */
958 awtoken->word,
992 new_ftoken->word = WORD_EPSILON_LABEL;
1004 new_ftoken->word = WORD_EPSILON_LABEL;
1038 new_ftoken->word = word_with_wtw;
1050 new_ftoken->word = word_with_wtw;
1082 just been killed on the basis of no space for word propagation */
1195 current_token->word[internal_state] = current_token->word[internal_state-1];
1197 if (current_token->word[internal_state-1] != MAXwordID)
1293 || (current_token->word[end_state] != MAXwordID));
1335 ftoken->word = current_token->word[end_state];
1336 if (end_model_index == SILENCE_MODEL_INDEX && ftoken->word != rec->context->beg_silence_word)
1344 if (ftoken->word != MAXwordID)
1369 ASSERT( ((current_token->word[end_state] == MAXwordID) && (ftoken->word == MAXwordID))
1370 || ((current_token->word[end_state] != MAXwordID) && (ftoken->word != MAXwordID)) );
1373 when scores are equal, used to prefer longer pau2 word */
1394 if (ftoken->word != MAXwordID)
1402 awtoken->word = ftoken->word;
1438 ftoken->word = current_token->word[end_state];
1439 if (end_model_index == SILENCE_MODEL_INDEX && ftoken->word != rec->context->beg_silence_word)
1455 if (ftoken->word != MAXwordID)
1462 awtoken->word = current_token->word[end_state];
1510 ASSERT(ftoken->word != MAXwordID || ftoken->aword_backtrace == AWTNULL);
1636 token->word[0] = olabel;
1641 token->word[0] = fsmnode_token->word;
1643 ASSERT(token->word[0] != MAXwordID
1772 token->word = MAXwordID;
1805 wtoken->word = end_word;
1819 creates a word linked list even though there is no WORD_BOUNDARY ilabel */
1847 /*remove all word paths from the priority_q which do not end at end_node
1872 ftoken->word,
1886 awtoken->word,
1944 /* release all word tokens */
1988 6. update epsilons, including word boundary arcs (which put words onto the word lattice).
2245 /* it's nice to do word token pruning here because we only need to trace back
2278 done here before epsilons - that way we don't need to update the word
2309 6. update epsilons, including word boundary arcs (which put words onto the word lattice).
2326 add costs to epsilon arcs (at word boundaries for example), add another
2375 if (wtoken->word == rec->context->beg_silence_word)
2379 if (wtoken->word == rec->context->hack_silence_word)
2384 if (next_wtoken->word == rec->context->beg_silence_word)
2392 if (wtoken->word == rec->context->hack_silence_word
2407 last_word = wtoken->word;
2430 minimized graphs and word merging
2432 When propagating an fsmarc_token, we need to remember the word.id when it
2433 is observed. Let's continue to use fsmarc_token->word[] to remember those.
2440 need to keep all words; a max of 10 is fine because that's the most we'll need