<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="speech_v1.html">Google Cloud Speech API</a> . <a href="speech_v1.speech.html">speech</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="#longrunningrecognize">longrunningrecognize(body, x__xgafv=None)</a></code></p>
<p class="firstline">Performs asynchronous speech recognition: receive results via the google.longrunning.Operations interface.</p>
<p class="toc_element">
  <code><a href="#recognize">recognize(body, x__xgafv=None)</a></code></p>
<p class="firstline">Performs synchronous speech recognition: receive results after all audio has been sent and processed.</p>
<h3>Method Details</h3>
<div class="method">
    <code class="details" id="longrunningrecognize">longrunningrecognize(body, x__xgafv=None)</code>
  <pre>Performs asynchronous speech recognition: receive results via the
google.longrunning.Operations interface. Returns either an
`Operation.error` or an `Operation.response` which contains
a `LongRunningRecognizeResponse` message.

Args:
  body: object, The request body. (required)
    The object takes the form of:

{ # The top-level message sent by the client for the `LongRunningRecognize`
      # method.
    "audio": { # *Required* The audio data to be recognized. Contains audio data in the
        # encoding specified in the `RecognitionConfig`.
        # Either `content` or `uri` must be supplied. Supplying both or neither
        # returns google.rpc.Code.INVALID_ARGUMENT. See
        # [audio limits](https://cloud.google.com/speech/limits#content).
      "content": "A String", # The audio data bytes encoded as specified in
          # `RecognitionConfig`. Note: as with all bytes fields, protocol buffers use a
          # pure binary representation, whereas JSON representations use base64.
      "uri": "A String", # URI that points to a file that contains audio data bytes as specified in
          # `RecognitionConfig`. Currently, only Google Cloud Storage URIs are
          # supported, which must be specified in the following format:
          # `gs://bucket_name/object_name` (other URI formats return
          # google.rpc.Code.INVALID_ARGUMENT). For more information, see
          # [Request URIs](https://cloud.google.com/storage/docs/reference-uris).
    },
    "config": { # *Required* Provides information to the recognizer that specifies how to
        # process the request.
      "languageCode": "A String", # *Required* The language of the supplied audio as a
          # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag.
          # Example: "en-US".
          # See [Language Support](https://cloud.google.com/speech/docs/languages)
          # for a list of the currently supported language codes.
      "encoding": "A String", # *Required* Encoding of audio data sent in all `RecognitionAudio` messages.
      "maxAlternatives": 42, # *Optional* Maximum number of recognition hypotheses to be returned.
          # Specifically, the maximum number of `SpeechRecognitionAlternative` messages
          # within each `SpeechRecognitionResult`.
          # The server may return fewer than `max_alternatives`.
          # Valid values are `0`-`30`. A value of `0` or `1` will return a maximum of
          # one. If omitted, a maximum of one is returned.
      "sampleRateHertz": 42, # *Required* Sample rate in Hertz of the audio data sent in all
          # `RecognitionAudio` messages. Valid values are: 8000-48000.
          # 16000 is optimal. For best results, set the sampling rate of the audio
          # source to 16000 Hz. If that's not possible, use the native sample rate of
          # the audio source (instead of re-sampling).
      "profanityFilter": True or False, # *Optional* If set to `true`, the server will attempt to filter out
          # profanities, replacing all but the initial character in each filtered word
          # with asterisks, e.g. "f***". If set to `false` or omitted, profanities
          # won't be filtered out.
      "speechContexts": [ # *Optional* A means to provide context to assist the speech recognition.
        { # Provides "hints" to the speech recognizer to favor specific words and phrases
            # in the results.
          "phrases": [ # *Optional* A list of strings containing word and phrase "hints" so that
              # the speech recognition is more likely to recognize them. This can be used
              # to improve the accuracy for specific words and phrases, for example, if
              # specific commands are typically spoken by the user. This can also be used
              # to add additional words to the vocabulary of the recognizer. See
              # [usage limits](https://cloud.google.com/speech/limits#content).
            "A String",
          ],
        },
      ],
    },
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # This resource represents a long-running operation that is the result of a
      # network API call.
    "metadata": { # Service-specific metadata associated with the operation.  It typically
        # contains progress information and common metadata such as create time.
        # Some services might not provide such metadata.  Any method that returns a
        # long-running operation should document the metadata type, if any.
      "a_key": "", # Properties of the object. Contains field @type with type URL.
    },
    "done": True or False, # If the value is `false`, it means the operation is still in progress.
        # If `true`, the operation is completed, and either `error` or `response` is
        # available.
    "response": { # The normal response of the operation in case of success.  If the original
        # method returns no data on success, such as `Delete`, the response is
        # `google.protobuf.Empty`.  If the original method is standard
        # `Get`/`Create`/`Update`, the response should be the resource.  For other
        # methods, the response should have the type `XxxResponse`, where `Xxx`
        # is the original method name.  For example, if the original method name
        # is `TakeSnapshot()`, the inferred response type is
        # `TakeSnapshotResponse`.
      "a_key": "", # Properties of the object. Contains field @type with type URL.
    },
    "name": "A String", # The server-assigned name, which is only unique within the same service that
        # originally returns it. If you use the default HTTP mapping, the
        # `name` should have the format of `operations/some/unique/name`.
    "error": { # The error result of the operation in case of failure or cancellation.
        # The `Status` type defines a logical error model that is suitable for different
        # programming environments, including REST APIs and RPC APIs. It is used by
        # [gRPC](https://github.com/grpc). The error model is designed to be:
        #
        # - Simple to use and understand for most users
        # - Flexible enough to meet unexpected needs
        #
        # # Overview
        #
        # The `Status` message contains three pieces of data: error code, error message,
        # and error details. The error code should be an enum value of
        # google.rpc.Code, but it may accept additional error codes if needed.  The
        # error message should be a developer-facing English message that helps
        # developers *understand* and *resolve* the error. If a localized user-facing
        # error message is needed, put the localized message in the error details or
        # localize it in the client. The optional error details may contain arbitrary
        # information about the error. There is a predefined set of error detail types
        # in the package `google.rpc` that can be used for common error conditions.
        #
        # # Language mapping
        #
        # The `Status` message is the logical representation of the error model, but it
        # is not necessarily the actual wire format. When the `Status` message is
        # exposed in different client libraries and different wire protocols, it can be
        # mapped differently. For example, it will likely be mapped to some exceptions
        # in Java, but more likely mapped to some error codes in C.
        #
        # # Other uses
        #
        # The error model and the `Status` message can be used in a variety of
        # environments, either with or without APIs, to provide a
        # consistent developer experience across different environments.
        #
        # Example uses of this error model include:
        #
        # - Partial errors. If a service needs to return partial errors to the client,
        #     it may embed the `Status` in the normal response to indicate the partial
        #     errors.
        #
        # - Workflow errors. A typical workflow has multiple steps. Each step may
        #     have a `Status` message for error reporting.
        #
        # - Batch operations. If a client uses batch requests and batch responses, the
        #     `Status` message should be used directly inside the batch response, one
        #     for each error sub-response.
        #
        # - Asynchronous operations. If an API call embeds asynchronous operation
        #     results in its response, the status of those operations should be
        #     represented directly using the `Status` message.
        #
        # - Logging. If some API errors are stored in logs, the message `Status` could
        #     be used directly after any stripping needed for security/privacy reasons.
      "message": "A String", # A developer-facing error message, which should be in English. Any
          # user-facing error message should be localized and sent in the
          # google.rpc.Status.details field, or localized by the client.
      "code": 42, # The status code, which should be an enum value of google.rpc.Code.
      "details": [ # A list of messages that carry the error details.  There will be a
          # common set of message types for APIs to use.
        {
          "a_key": "", # Properties of the object. Contains field @type with type URL.
        },
      ],
    },
  }</pre>
</div>
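<p>The reference above only describes the request and response shapes. As a usage illustration (not part of the generated reference), here is a minimal sketch of starting a long-running recognition job and polling it to completion with the discovery-based Python client, assuming <code>google-api-python-client</code> with application default credentials; the <code>gs://</code> path is hypothetical, and polling via <code>operations().get()</code> assumes this API's companion Operations resource:</p>
<pre>
# Minimal sketch, assuming google-api-python-client and application
# default credentials are configured. The gs:// object is hypothetical.
import time

from googleapiclient.discovery import build

service = build('speech', 'v1')

body = {
    'config': {
        'encoding': 'LINEAR16',      # raw 16-bit little-endian PCM
        'sampleRateHertz': 16000,    # 16000 Hz is optimal (see above)
        'languageCode': 'en-US',     # BCP-47 language tag
    },
    'audio': {
        # Exactly one of `content` or `uri` may be supplied.
        'uri': 'gs://my-bucket/my-audio.raw',  # hypothetical object
    },
}

# Kick off the asynchronous job; the response is a google.longrunning
# Operation, not the transcription itself.
operation = service.speech().longrunningrecognize(body=body).execute()

# Poll until `done` is true, then read either `error` or `response`.
name = operation['name']
while not operation.get('done'):
    time.sleep(5)
    operation = service.operations().get(name=name).execute()

if 'error' in operation:
    raise RuntimeError(operation['error'])
print(operation['response'])  # a LongRunningRecognizeResponse message
</pre>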

<div class="method">
    <code class="details" id="recognize">recognize(body, x__xgafv=None)</code>
  <pre>Performs synchronous speech recognition: receive results after all audio
has been sent and processed.

Args:
  body: object, The request body. (required)
    The object takes the form of:

{ # The top-level message sent by the client for the `Recognize` method.
    "audio": { # *Required* The audio data to be recognized. Contains audio data in the
        # encoding specified in the `RecognitionConfig`.
        # Either `content` or `uri` must be supplied. Supplying both or neither
        # returns google.rpc.Code.INVALID_ARGUMENT. See
        # [audio limits](https://cloud.google.com/speech/limits#content).
      "content": "A String", # The audio data bytes encoded as specified in
          # `RecognitionConfig`. Note: as with all bytes fields, protocol buffers use a
          # pure binary representation, whereas JSON representations use base64.
      "uri": "A String", # URI that points to a file that contains audio data bytes as specified in
          # `RecognitionConfig`. Currently, only Google Cloud Storage URIs are
          # supported, which must be specified in the following format:
          # `gs://bucket_name/object_name` (other URI formats return
          # google.rpc.Code.INVALID_ARGUMENT). For more information, see
          # [Request URIs](https://cloud.google.com/storage/docs/reference-uris).
    },
    "config": { # *Required* Provides information to the recognizer that specifies how to
        # process the request.
      "languageCode": "A String", # *Required* The language of the supplied audio as a
          # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag.
          # Example: "en-US".
          # See [Language Support](https://cloud.google.com/speech/docs/languages)
          # for a list of the currently supported language codes.
      "encoding": "A String", # *Required* Encoding of audio data sent in all `RecognitionAudio` messages.
      "maxAlternatives": 42, # *Optional* Maximum number of recognition hypotheses to be returned.
          # Specifically, the maximum number of `SpeechRecognitionAlternative` messages
          # within each `SpeechRecognitionResult`.
          # The server may return fewer than `max_alternatives`.
          # Valid values are `0`-`30`. A value of `0` or `1` will return a maximum of
          # one. If omitted, a maximum of one is returned.
      "sampleRateHertz": 42, # *Required* Sample rate in Hertz of the audio data sent in all
          # `RecognitionAudio` messages. Valid values are: 8000-48000.
          # 16000 is optimal. For best results, set the sampling rate of the audio
          # source to 16000 Hz. If that's not possible, use the native sample rate of
          # the audio source (instead of re-sampling).
      "profanityFilter": True or False, # *Optional* If set to `true`, the server will attempt to filter out
          # profanities, replacing all but the initial character in each filtered word
          # with asterisks, e.g. "f***". If set to `false` or omitted, profanities
          # won't be filtered out.
      "speechContexts": [ # *Optional* A means to provide context to assist the speech recognition.
        { # Provides "hints" to the speech recognizer to favor specific words and phrases
            # in the results.
          "phrases": [ # *Optional* A list of strings containing word and phrase "hints" so that
              # the speech recognition is more likely to recognize them. This can be used
              # to improve the accuracy for specific words and phrases, for example, if
              # specific commands are typically spoken by the user. This can also be used
              # to add additional words to the vocabulary of the recognizer. See
              # [usage limits](https://cloud.google.com/speech/limits#content).
            "A String",
          ],
        },
      ],
    },
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The only message returned to the client by the `Recognize` method. It
      # contains the result as zero or more sequential `SpeechRecognitionResult`
      # messages.
    "results": [ # *Output-only* Sequential list of transcription results corresponding to
        # sequential portions of audio.
      { # A speech recognition result corresponding to a portion of the audio.
        "alternatives": [ # *Output-only* May contain one or more recognition hypotheses (up to the
            # maximum specified in `max_alternatives`).
            # These alternatives are ordered in terms of accuracy, with the first/top
            # alternative being the most probable, as ranked by the recognizer.
          { # Alternative hypotheses (a.k.a. n-best list).
            "confidence": 3.14, # *Output-only* The confidence estimate between 0.0 and 1.0. A higher number
                # indicates an estimated greater likelihood that the recognized words are
                # correct. This field is typically provided only for the top hypothesis, and
                # only for `is_final=true` results. Clients should not rely on the
                # `confidence` field as it is not guaranteed to be accurate, or even set, in
                # any of the results.
                # The default of 0.0 is a sentinel value indicating `confidence` was not set.
            "transcript": "A String", # *Output-only* Transcript text representing the words that the user spoke.
          },
        ],
      },
    ],
  }</pre>
</div>
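<p>As a usage illustration (again, not part of the generated reference), here is a minimal sketch of a synchronous call, assuming <code>google-api-python-client</code> with application default credentials; the local filename is hypothetical. Since JSON representations of bytes fields use base64 (see the <code>content</code> field above), the audio must be encoded before it is placed in the request body:</p>
<pre>
# Minimal sketch: audio.raw is a hypothetical 16 kHz LINEAR16 recording.
import base64

from googleapiclient.discovery import build

service = build('speech', 'v1')

with open('audio.raw', 'rb') as f:
    # bytes fields travel as base64 in the JSON representation.
    audio_content = base64.b64encode(f.read()).decode('utf-8')

body = {
    'config': {
        'encoding': 'LINEAR16',
        'sampleRateHertz': 16000,
        'languageCode': 'en-US',
        'maxAlternatives': 1,        # top hypothesis only
    },
    'audio': {'content': audio_content},
}

# Blocks until all audio has been processed, then returns a
# RecognizeResponse message.
response = service.speech().recognize(body=body).execute()

# Each result covers a sequential portion of the audio; alternatives
# are ordered best-first, and `confidence` may be absent.
for result in response.get('results', []):
    top = result['alternatives'][0]
    print(top.get('confidence'), top['transcript'])
</pre>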

</body></html>