<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method  {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="language_v1.html">Google Cloud Natural Language API</a> . <a href="language_v1.documents.html">documents</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="#analyzeEntities">analyzeEntities(body, x__xgafv=None)</a></code></p>
<p class="firstline">Finds named entities (currently proper names and common nouns) in the text, along with entity types, salience, mentions for each entity, and other properties.</p>
<p class="toc_element">
  <code><a href="#analyzeSentiment">analyzeSentiment(body, x__xgafv=None)</a></code></p>
<p class="firstline">Analyzes the sentiment of the provided text.</p>
<p class="toc_element">
  <code><a href="#analyzeSyntax">analyzeSyntax(body, x__xgafv=None)</a></code></p>
<p class="firstline">Analyzes the syntax of the text and provides sentence boundaries and tokenization along with part of speech tags, dependency trees, and other properties.</p>
<p class="toc_element">
  <code><a href="#annotateText">annotateText(body, x__xgafv=None)</a></code></p>
<p class="firstline">A convenience method that provides all the features that analyzeSentiment, analyzeEntities, and analyzeSyntax provide in one call.</p>
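<p>The snippet below is a minimal sketch of how the <code>documents</code> resource is typically obtained with the google-api-python-client library before calling any of the methods above; it assumes the library is installed and that suitable credentials (for example, Application Default Credentials) are available in the environment.</p>
<pre># Sketch: build the Natural Language v1 service and get the documents resource.
# Assumes google-api-python-client is installed and credentials are configured.
from googleapiclient import discovery

service = discovery.build('language', 'v1')
documents = service.documents()
</pre>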
<h3>Method Details</h3>
<div class="method">
    <code class="details" id="analyzeEntities">analyzeEntities(body, x__xgafv=None)</code>
  <pre>Finds named entities (currently proper names and common nouns) in the text
along with entity types, salience, mentions for each entity, and
other properties.

Args:
  body: object, The request body. (required)
    The object takes the form of:

{ # The entity analysis request message.
    "encodingType": "A String", # The encoding type used by the API to calculate offsets.
    "document": { # Input document.
        #
        # Represents the input to API methods.
      "content": "A String", # The content of the input in string format.
      "type": "A String", # Required. If the type is not set or is `TYPE_UNSPECIFIED`,
          # returns an `INVALID_ARGUMENT` error.
      "language": "A String", # The language of the document (if not specified, the language is
          # automatically detected). Both ISO and BCP-47 language codes are
          # accepted.<br>
          # [Language Support](/natural-language/docs/languages)
          # lists currently supported languages for each API method.
          # If the language (either specified by the caller or automatically detected)
          # is not supported by the called API method, an `INVALID_ARGUMENT` error
          # is returned.
      "gcsContentUri": "A String", # The Google Cloud Storage URI where the file content is located.
          # This URI must be of the form: gs://bucket_name/object_name. For more
          # details, see https://cloud.google.com/storage/docs/reference-uris.
          # NOTE: Cloud Storage object versioning is not supported.
    },
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The entity analysis response message.
    "entities": [ # The recognized entities in the input document.
      { # Represents a phrase in the text that is a known entity, such as
          # a person, an organization, or location. The API associates information, such
          # as salience and mentions, with entities.
        "mentions": [ # The mentions of this entity in the input document. The API currently
            # supports proper noun mentions.
          { # Represents a mention for an entity in the text. Currently, proper noun
              # mentions are supported.
            "text": { # Represents an output piece of text. # The mention text.
              "content": "A String", # The content of the output text.
              "beginOffset": 42, # The API calculates the beginning offset of the content in the original
                  # document according to the EncodingType specified in the API request.
            },
            "type": "A String", # The type of the entity mention.
          },
        ],
        "salience": 3.14, # The salience score associated with the entity in the [0, 1.0] range.
            #
            # The salience score for an entity provides information about the
            # importance or centrality of that entity to the entire document text.
            # Scores closer to 0 are less salient, while scores closer to 1.0 are highly
            # salient.
        "type": "A String", # The entity type.
        "name": "A String", # The representative name for the entity.
        "metadata": { # Metadata associated with the entity.
            #
            # Currently, Wikipedia URLs and Knowledge Graph MIDs are provided, if
            # available. The associated keys are "wikipedia_url" and "mid", respectively.
          "a_key": "A String",
        },
      },
    ],
    "language": "A String", # The language of the text, which will be the same as the language specified
        # in the request or, if not specified, the automatically-detected language.
        # See Document.language field for more details.
  }</pre>
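  <p>As an illustration of the request shape documented above, the following sketch calls <code>analyzeEntities</code> on an inline plain-text document; the <code>service</code> object, sample text, and chosen enum values are assumptions for the example, not output from the API.</p>
  <pre># Sketch: analyze entities in an inline plain-text document.
# `service` is assumed to have been built with discovery.build('language', 'v1').
body = {
  'document': {
    'type': 'PLAIN_TEXT',   # PLAIN_TEXT or HTML
    'content': 'Ada Lovelace worked with Charles Babbage in London.',
  },
  'encodingType': 'UTF8',   # offsets in the response are computed for UTF-8
}
response = service.documents().analyzeEntities(body=body).execute()
for entity in response.get('entities', []):
  print(entity['name'], entity['type'], entity['salience'])
</pre>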
</div>

<div class="method">
    <code class="details" id="analyzeSentiment">analyzeSentiment(body, x__xgafv=None)</code>
  <pre>Analyzes the sentiment of the provided text.

Args:
  body: object, The request body. (required)
    The object takes the form of:

{ # The sentiment analysis request message.
    "encodingType": "A String", # The encoding type used by the API to calculate sentence offsets.
    "document": { # Input document.
        #
        # Represents the input to API methods.
      "content": "A String", # The content of the input in string format.
      "type": "A String", # Required. If the type is not set or is `TYPE_UNSPECIFIED`,
          # returns an `INVALID_ARGUMENT` error.
      "language": "A String", # The language of the document (if not specified, the language is
          # automatically detected). Both ISO and BCP-47 language codes are
          # accepted.<br>
          # [Language Support](/natural-language/docs/languages)
          # lists currently supported languages for each API method.
          # If the language (either specified by the caller or automatically detected)
          # is not supported by the called API method, an `INVALID_ARGUMENT` error
          # is returned.
      "gcsContentUri": "A String", # The Google Cloud Storage URI where the file content is located.
          # This URI must be of the form: gs://bucket_name/object_name. For more
          # details, see https://cloud.google.com/storage/docs/reference-uris.
          # NOTE: Cloud Storage object versioning is not supported.
    },
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The sentiment analysis response message.
    "documentSentiment": { # The overall sentiment of the input document.
        # Represents the feeling associated with the entire text or entities in
        # the text.
      "score": 3.14, # Sentiment score between -1.0 (negative sentiment) and 1.0
          # (positive sentiment).
      "magnitude": 3.14, # A non-negative number in the [0, +inf) range, which represents
          # the absolute magnitude of sentiment regardless of score (positive or
          # negative).
    },
    "language": "A String", # The language of the text, which will be the same as the language specified
        # in the request or, if not specified, the automatically-detected language.
        # See Document.language field for more details.
    "sentences": [ # The sentiment for all the sentences in the document.
      { # Represents a sentence in the input document.
        "text": { # Represents an output piece of text. # The sentence text.
          "content": "A String", # The content of the output text.
          "beginOffset": 42, # The API calculates the beginning offset of the content in the original
              # document according to the EncodingType specified in the API request.
        },
        "sentiment": { # For calls to AnalyzeSentiment or if
            # AnnotateTextRequest.Features.extract_document_sentiment is set to
            # true, this field will contain the sentiment for the sentence.
            # Represents the feeling associated with the entire text or entities in
            # the text.
          "score": 3.14, # Sentiment score between -1.0 (negative sentiment) and 1.0
              # (positive sentiment).
          "magnitude": 3.14, # A non-negative number in the [0, +inf) range, which represents
              # the absolute magnitude of sentiment regardless of score (positive or
              # negative).
        },
      },
    ],
  }</pre>
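  <p>A brief sketch of a sentiment call follows; the <code>service</code> handle and the sample text are illustrative assumptions.</p>
  <pre># Sketch: document-level and per-sentence sentiment for inline text.
body = {
  'document': {
    'type': 'PLAIN_TEXT',
    'content': 'The film was wonderful. The ending, however, felt rushed.',
  },
  'encodingType': 'UTF8',
}
response = service.documents().analyzeSentiment(body=body).execute()
doc_sentiment = response['documentSentiment']
print('document score:', doc_sentiment['score'], 'magnitude:', doc_sentiment['magnitude'])
for sentence in response.get('sentences', []):
  print(sentence['text']['content'], sentence['sentiment']['score'])
</pre>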
</div>

<div class="method">
    <code class="details" id="analyzeSyntax">analyzeSyntax(body, x__xgafv=None)</code>
  <pre>Analyzes the syntax of the text and provides sentence boundaries and
tokenization along with part of speech tags, dependency trees, and other
properties.

Args:
  body: object, The request body. (required)
    The object takes the form of:

{ # The syntax analysis request message.
    "encodingType": "A String", # The encoding type used by the API to calculate offsets.
    "document": { # Input document.
        #
        # Represents the input to API methods.
      "content": "A String", # The content of the input in string format.
      "type": "A String", # Required. If the type is not set or is `TYPE_UNSPECIFIED`,
          # returns an `INVALID_ARGUMENT` error.
      "language": "A String", # The language of the document (if not specified, the language is
          # automatically detected). Both ISO and BCP-47 language codes are
          # accepted.<br>
          # [Language Support](/natural-language/docs/languages)
          # lists currently supported languages for each API method.
          # If the language (either specified by the caller or automatically detected)
          # is not supported by the called API method, an `INVALID_ARGUMENT` error
          # is returned.
      "gcsContentUri": "A String", # The Google Cloud Storage URI where the file content is located.
          # This URI must be of the form: gs://bucket_name/object_name. For more
          # details, see https://cloud.google.com/storage/docs/reference-uris.
          # NOTE: Cloud Storage object versioning is not supported.
    },
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The syntax analysis response message.
    "tokens": [ # Tokens, along with their syntactic information, in the input document.
      { # Represents the smallest syntactic building block of the text.
        "text": { # Represents an output piece of text. # The token text.
          "content": "A String", # The content of the output text.
          "beginOffset": 42, # The API calculates the beginning offset of the content in the original
              # document according to the EncodingType specified in the API request.
        },
        "dependencyEdge": { # Dependency tree parse for this token.
            # Represents dependency parse tree information for a token. (For more
            # information on dependency labels, see
            # http://www.aclweb.org/anthology/P13-2017).
          "headTokenIndex": 42, # Represents the head of this token in the dependency tree.
              # This is the index of the token which has an arc going to this token.
              # The index is the position of the token in the array of tokens returned
              # by the API method. If this token is a root token, then the
              # `head_token_index` is its own index.
          "label": "A String", # The parse label for the token.
        },
        "partOfSpeech": { # Parts of speech tag for this token.
            # Represents part of speech information for a token. Parts of speech
            # are as defined in
            # http://www.lrec-conf.org/proceedings/lrec2012/pdf/274_Paper.pdf
          "case": "A String", # The grammatical case.
          "reciprocity": "A String", # The grammatical reciprocity.
          "mood": "A String", # The grammatical mood.
          "form": "A String", # The grammatical form.
          "gender": "A String", # The grammatical gender.
          "number": "A String", # The grammatical number.
          "person": "A String", # The grammatical person.
          "tag": "A String", # The part of speech tag.
          "tense": "A String", # The grammatical tense.
          "aspect": "A String", # The grammatical aspect.
          "proper": "A String", # The grammatical properness.
          "voice": "A String", # The grammatical voice.
        },
        "lemma": "A String", # [Lemma](https://en.wikipedia.org/wiki/Lemma_%28morphology%29) of the token.
      },
    ],
    "language": "A String", # The language of the text, which will be the same as the language specified
        # in the request or, if not specified, the automatically-detected language.
        # See Document.language field for more details.
    "sentences": [ # Sentences in the input document.
      { # Represents a sentence in the input document.
        "text": { # Represents an output piece of text. # The sentence text.
          "content": "A String", # The content of the output text.
          "beginOffset": 42, # The API calculates the beginning offset of the content in the original
              # document according to the EncodingType specified in the API request.
        },
        "sentiment": { # For calls to AnalyzeSentiment or if
            # AnnotateTextRequest.Features.extract_document_sentiment is set to
            # true, this field will contain the sentiment for the sentence.
            # Represents the feeling associated with the entire text or entities in
            # the text.
          "score": 3.14, # Sentiment score between -1.0 (negative sentiment) and 1.0
              # (positive sentiment).
          "magnitude": 3.14, # A non-negative number in the [0, +inf) range, which represents
              # the absolute magnitude of sentiment regardless of score (positive or
              # negative).
        },
      },
    ],
  }</pre>
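  <p>The sketch below shows one plausible way to read tokens and dependency edges out of an <code>analyzeSyntax</code> response; it assumes a document stored in Cloud Storage, and the bucket and object names are placeholders.</p>
  <pre># Sketch: syntax analysis of a document stored in Cloud Storage.
# The gs:// URI is a placeholder; substitute a real bucket and object.
body = {
  'document': {
    'type': 'PLAIN_TEXT',
    'gcsContentUri': 'gs://my-bucket/my-object.txt',
  },
  'encodingType': 'UTF8',
}
response = service.documents().analyzeSyntax(body=body).execute()
tokens = response.get('tokens', [])
for token in tokens:
  head = token['dependencyEdge']['headTokenIndex']   # index into the tokens list
  print(token['text']['content'],
        token['partOfSpeech']['tag'],
        token['dependencyEdge']['label'],
        'head:', tokens[head]['text']['content'])
</pre>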
</div>

<div class="method">
    <code class="details" id="annotateText">annotateText(body, x__xgafv=None)</code>
  <pre>A convenience method that provides all the features that analyzeSentiment,
analyzeEntities, and analyzeSyntax provide in one call.

Args:
  body: object, The request body. (required)
    The object takes the form of:

{ # The request message for the text annotation API, which can perform multiple
      # analysis types (sentiment, entities, and syntax) in one call.
    "encodingType": "A String", # The encoding type used by the API to calculate offsets.
    "document": { # Input document.
        #
        # Represents the input to API methods.
      "content": "A String", # The content of the input in string format.
      "type": "A String", # Required. If the type is not set or is `TYPE_UNSPECIFIED`,
          # returns an `INVALID_ARGUMENT` error.
      "language": "A String", # The language of the document (if not specified, the language is
          # automatically detected). Both ISO and BCP-47 language codes are
          # accepted.<br>
          # [Language Support](/natural-language/docs/languages)
          # lists currently supported languages for each API method.
          # If the language (either specified by the caller or automatically detected)
          # is not supported by the called API method, an `INVALID_ARGUMENT` error
          # is returned.
      "gcsContentUri": "A String", # The Google Cloud Storage URI where the file content is located.
          # This URI must be of the form: gs://bucket_name/object_name. For more
          # details, see https://cloud.google.com/storage/docs/reference-uris.
          # NOTE: Cloud Storage object versioning is not supported.
    },
    "features": { # All available features for sentiment, syntax, and semantic analysis. # The enabled features.
        # Setting each one to true will enable that specific analysis for the input.
      "extractDocumentSentiment": True or False, # Extract document-level sentiment.
      "extractEntities": True or False, # Extract entities.
      "extractSyntax": True or False, # Extract syntax information.
    },
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The text annotations response message.
    "tokens": [ # Tokens, along with their syntactic information, in the input document.
        # Populated if the user enables
        # AnnotateTextRequest.Features.extract_syntax.
      { # Represents the smallest syntactic building block of the text.
        "text": { # Represents an output piece of text. # The token text.
          "content": "A String", # The content of the output text.
          "beginOffset": 42, # The API calculates the beginning offset of the content in the original
              # document according to the EncodingType specified in the API request.
        },
        "dependencyEdge": { # Dependency tree parse for this token.
            # Represents dependency parse tree information for a token. (For more
            # information on dependency labels, see
            # http://www.aclweb.org/anthology/P13-2017).
          "headTokenIndex": 42, # Represents the head of this token in the dependency tree.
              # This is the index of the token which has an arc going to this token.
              # The index is the position of the token in the array of tokens returned
              # by the API method. If this token is a root token, then the
              # `head_token_index` is its own index.
          "label": "A String", # The parse label for the token.
        },
        "partOfSpeech": { # Parts of speech tag for this token.
            # Represents part of speech information for a token. Parts of speech
            # are as defined in
            # http://www.lrec-conf.org/proceedings/lrec2012/pdf/274_Paper.pdf
          "case": "A String", # The grammatical case.
          "reciprocity": "A String", # The grammatical reciprocity.
          "mood": "A String", # The grammatical mood.
          "form": "A String", # The grammatical form.
          "gender": "A String", # The grammatical gender.
          "number": "A String", # The grammatical number.
          "person": "A String", # The grammatical person.
          "tag": "A String", # The part of speech tag.
          "tense": "A String", # The grammatical tense.
          "aspect": "A String", # The grammatical aspect.
          "proper": "A String", # The grammatical properness.
          "voice": "A String", # The grammatical voice.
        },
        "lemma": "A String", # [Lemma](https://en.wikipedia.org/wiki/Lemma_%28morphology%29) of the token.
      },
    ],
    "entities": [ # Entities, along with their semantic information, in the input document.
        # Populated if the user enables
        # AnnotateTextRequest.Features.extract_entities.
      { # Represents a phrase in the text that is a known entity, such as
          # a person, an organization, or location. The API associates information, such
          # as salience and mentions, with entities.
        "mentions": [ # The mentions of this entity in the input document. The API currently
            # supports proper noun mentions.
          { # Represents a mention for an entity in the text. Currently, proper noun
              # mentions are supported.
            "text": { # Represents an output piece of text. # The mention text.
              "content": "A String", # The content of the output text.
              "beginOffset": 42, # The API calculates the beginning offset of the content in the original
                  # document according to the EncodingType specified in the API request.
            },
            "type": "A String", # The type of the entity mention.
          },
        ],
        "salience": 3.14, # The salience score associated with the entity in the [0, 1.0] range.
            #
            # The salience score for an entity provides information about the
            # importance or centrality of that entity to the entire document text.
            # Scores closer to 0 are less salient, while scores closer to 1.0 are highly
            # salient.
        "type": "A String", # The entity type.
        "name": "A String", # The representative name for the entity.
        "metadata": { # Metadata associated with the entity.
            #
            # Currently, Wikipedia URLs and Knowledge Graph MIDs are provided, if
            # available. The associated keys are "wikipedia_url" and "mid", respectively.
          "a_key": "A String",
        },
      },
    ],
    "documentSentiment": { # The overall sentiment for the document. Populated if the user enables
        # AnnotateTextRequest.Features.extract_document_sentiment.
        # Represents the feeling associated with the entire text or entities in
        # the text.
      "score": 3.14, # Sentiment score between -1.0 (negative sentiment) and 1.0
          # (positive sentiment).
      "magnitude": 3.14, # A non-negative number in the [0, +inf) range, which represents
          # the absolute magnitude of sentiment regardless of score (positive or
          # negative).
    },
    "language": "A String", # The language of the text, which will be the same as the language specified
        # in the request or, if not specified, the automatically-detected language.
        # See Document.language field for more details.
    "sentences": [ # Sentences in the input document. Populated if the user enables
        # AnnotateTextRequest.Features.extract_syntax.
      { # Represents a sentence in the input document.
        "text": { # Represents an output piece of text. # The sentence text.
          "content": "A String", # The content of the output text.
          "beginOffset": 42, # The API calculates the beginning offset of the content in the original
              # document according to the EncodingType specified in the API request.
        },
        "sentiment": { # For calls to AnalyzeSentiment or if
            # AnnotateTextRequest.Features.extract_document_sentiment is set to
            # true, this field will contain the sentiment for the sentence.
            # Represents the feeling associated with the entire text or entities in
            # the text.
          "score": 3.14, # Sentiment score between -1.0 (negative sentiment) and 1.0
              # (positive sentiment).
          "magnitude": 3.14, # A non-negative number in the [0, +inf) range, which represents
              # the absolute magnitude of sentiment regardless of score (positive or
              # negative).
        },
      },
    ],
  }</pre>
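  <p>Finally, a sketch of a combined <code>annotateText</code> call with all three features enabled; the feature flags mirror the request fields documented above, and the sample text is an assumption.</p>
  <pre># Sketch: run sentiment, entity, and syntax analysis in a single request.
body = {
  'document': {
    'type': 'PLAIN_TEXT',
    'content': 'Google Cloud offers a Natural Language API.',
  },
  'features': {
    'extractDocumentSentiment': True,
    'extractEntities': True,
    'extractSyntax': True,
  },
  'encodingType': 'UTF8',
}
response = service.documents().annotateText(body=body).execute()
print('language:', response['language'])
print('sentences:', len(response.get('sentences', [])))
print('entities:', [e['name'] for e in response.get('entities', [])])
print('document score:', response['documentSentiment']['score'])
</pre>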
</div>

</body></html>