<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="ml_v1.html">Google Cloud Machine Learning Engine</a> . <a href="ml_v1.projects.html">projects</a> .
<a href="ml_v1.projects.models.html">models</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="ml_v1.projects.models.versions.html">versions()</a></code>
</p>
<p class="firstline">Returns the versions Resource.</p>

<p class="toc_element">
  <code><a href="#create">create(parent, body, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a model which will later contain one or more versions.</p>
<p class="toc_element">
  <code><a href="#delete">delete(name, x__xgafv=None)</a></code></p>
<p class="firstline">Deletes a model.</p>
<p class="toc_element">
  <code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Gets information about a model, including its name, description, and default version.</p>
<p class="toc_element">
  <code><a href="#list">list(parent, pageToken=None, x__xgafv=None, pageSize=None)</a></code></p>
<p class="firstline">Lists the models in a project.</p>
<p class="toc_element">
  <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
<p class="firstline">Retrieves the next page of results.</p>
<h3>Method Details</h3>
<div class="method">
    <code class="details" id="create">create(parent, body, x__xgafv=None)</code>
  <pre>Creates a model which will later contain one or more versions.

You must add at least one version before you can request predictions from
the model. Add versions by calling
[projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create).

Args:
  parent: string, Required. The project name.

Authorization: requires `Editor` role on the specified project. (required)
  body: object, The request body. (required)
    The object takes the form of:

{ # Represents a machine learning solution.
    #
    # A model can have multiple versions, each of which is a deployed, trained
    # model ready to receive prediction requests.
    # The model itself is just a container.
  "regions": [ # Optional. The list of regions where the model is going to be deployed.
      # Currently only one region per model is supported.
      # Defaults to 'us-central1' if nothing is set.
      # Note:
      # * No matter where a model is deployed, it can always be accessed by
      #   users from anywhere, both for online and batch prediction.
      # * The region for a batch prediction job is set by the region field when
      #   submitting the batch prediction job and does not take its value from
      #   this field.
    "A String",
  ],
  "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
      # handle prediction requests that do not specify a version.
      #
      # You can change the default version by calling
      # [projects.models.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
      #
      # Each version is a trained model deployed in the cloud, ready to handle
      # prediction requests. A model can have multiple versions. You can get
      # information about all of the versions of a given model by calling
      # [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
    "description": "A String", # Optional. The description specified for the version when it was created.
    "runtimeVersion": "A String", # Optional. The Google Cloud ML runtime version to use for this deployment.
        # If not set, Google Cloud ML will choose a version.
    "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
        # model. You should generally use `automatic_scaling` with an appropriate
        # `min_nodes` instead, but this option is available if you want more
        # predictable billing.
        # Beware that latency and error rates will increase
        # if the traffic exceeds the capability of the system to serve it based
        # on the selected number of nodes.
      "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
          # starting from the time the model is deployed, so the cost of operating
          # this model will be proportional to `nodes` * number of hours since
          # last billing cycle plus the cost for each prediction performed.
    },
    "deploymentUri": "A String", # Required. The Google Cloud Storage location of the trained model used to
        # create the version. See the
        # [overview of model
        # deployment](/ml-engine/docs/concepts/deployment-overview) for more
        # information.
        #
        # When passing Version to
        # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
        # the model service uses the specified location as the source of the model.
        # Once deployed, the model version is hosted by the prediction service, so
        # this location is useful only as a historical record.
        # The total number of model files can't exceed 1000.
    "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
    "automaticScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
        # response to increases and decreases in traffic. Care should be
        # taken to ramp up traffic according to the model's ability to scale
        # or you will start seeing increases in latency and 429 response codes.
      "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model.
          # These nodes are always up, starting from the time the model is
          # deployed, so the cost of operating this model will be at least
          # `rate` * `min_nodes` * number of hours since last billing cycle,
          # where `rate` is the cost per node-hour as documented in
          # [pricing](https://cloud.google.com/ml-engine/pricing#prediction_pricing),
          # even if no predictions are performed. There is additional cost for each
          # prediction performed.
          #
          # Unlike manual scaling, if the load gets too heavy for the nodes
          # that are up, the service will automatically add nodes to handle the
          # increased load as well as scale back as traffic drops, always maintaining
          # at least `min_nodes`. You will be charged for the time in which additional
          # nodes are used.
          #
          # If not specified, `min_nodes` defaults to 0, in which case, when traffic
          # to a model stops (and after a cool-down period), nodes will be shut down
          # and no charges will be incurred until traffic to the model resumes.
    },
    "createTime": "A String", # Output only. The time the version was created.
    "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
        # requests that do not specify a version.
        #
        # You can change the default version by calling
        # [projects.models.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
    "name": "A String", # Required. The name specified for the version when it was created.
        #
        # The version name must be unique within the model it is created in.
  },
  "name": "A String", # Required. The name specified for the model when it was created.
      #
      # The model name must be unique within the project it is created in.
  "onlinePredictionLogging": True or False, # Optional. If true, enables Stackdriver Logging for online prediction.
      # Default is false.
  "description": "A String", # Optional.
      # The description specified for the model when it was created.
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Represents a machine learning solution.
      #
      # A model can have multiple versions, each of which is a deployed, trained
      # model ready to receive prediction requests. The model itself is just a
      # container.
      "regions": [ # Optional. The list of regions where the model is going to be deployed.
          # Currently only one region per model is supported.
          # Defaults to 'us-central1' if nothing is set.
          # Note:
          # * No matter where a model is deployed, it can always be accessed by
          #   users from anywhere, both for online and batch prediction.
          # * The region for a batch prediction job is set by the region field when
          #   submitting the batch prediction job and does not take its value from
          #   this field.
        "A String",
      ],
      "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
          # handle prediction requests that do not specify a version.
          #
          # You can change the default version by calling
          # [projects.models.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
          #
          # Each version is a trained model deployed in the cloud, ready to handle
          # prediction requests. A model can have multiple versions. You can get
          # information about all of the versions of a given model by calling
          # [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
        "description": "A String", # Optional. The description specified for the version when it was created.
        "runtimeVersion": "A String", # Optional. The Google Cloud ML runtime version to use for this deployment.
            # If not set, Google Cloud ML will choose a version.
        "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
            # model. You should generally use `automatic_scaling` with an appropriate
            # `min_nodes` instead, but this option is available if you want more
            # predictable billing. Beware that latency and error rates will increase
            # if the traffic exceeds the capability of the system to serve it based
            # on the selected number of nodes.
          "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
              # starting from the time the model is deployed, so the cost of operating
              # this model will be proportional to `nodes` * number of hours since
              # last billing cycle plus the cost for each prediction performed.
        },
        "deploymentUri": "A String", # Required. The Google Cloud Storage location of the trained model used to
            # create the version. See the
            # [overview of model
            # deployment](/ml-engine/docs/concepts/deployment-overview) for more
            # information.
            #
            # When passing Version to
            # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
            # the model service uses the specified location as the source of the model.
            # Once deployed, the model version is hosted by the prediction service, so
            # this location is useful only as a historical record.
            # The total number of model files can't exceed 1000.
        "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
        "automaticScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
            # response to increases and decreases in traffic. Care should be
            # taken to ramp up traffic according to the model's ability to scale
            # or you will start seeing increases in latency and 429 response codes.
          "minNodes": 42, # Optional.
              # The minimum number of nodes to allocate for this model. These
              # nodes are always up, starting from the time the model is deployed, so the
              # cost of operating this model will be at least
              # `rate` * `min_nodes` * number of hours since last billing cycle,
              # where `rate` is the cost per node-hour as documented in
              # [pricing](https://cloud.google.com/ml-engine/pricing#prediction_pricing),
              # even if no predictions are performed. There is additional cost for each
              # prediction performed.
              #
              # Unlike manual scaling, if the load gets too heavy for the nodes
              # that are up, the service will automatically add nodes to handle the
              # increased load as well as scale back as traffic drops, always maintaining
              # at least `min_nodes`. You will be charged for the time in which additional
              # nodes are used.
              #
              # If not specified, `min_nodes` defaults to 0, in which case, when traffic
              # to a model stops (and after a cool-down period), nodes will be shut down
              # and no charges will be incurred until traffic to the model resumes.
        },
        "createTime": "A String", # Output only. The time the version was created.
        "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
            # requests that do not specify a version.
            #
            # You can change the default version by calling
            # [projects.models.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
        "name": "A String", # Required. The name specified for the version when it was created.
            #
            # The version name must be unique within the model it is created in.
      },
      "name": "A String", # Required. The name specified for the model when it was created.
          #
          # The model name must be unique within the project it is created in.
      "onlinePredictionLogging": True or False, # Optional. If true, enables Stackdriver Logging for online prediction.
          # Default is false.
      "description": "A String", # Optional. The description specified for the model when it was created.
    }</pre>
</div>

<div class="method">
    <code class="details" id="delete">delete(name, x__xgafv=None)</code>
  <pre>Deletes a model.

You can only delete a model if there are no versions in it. You can delete
versions by calling
[projects.models.versions.delete](/ml-engine/reference/rest/v1/projects.models.versions/delete).

Args:
  name: string, Required. The name of the model.

Authorization: requires `Editor` role on the parent project. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # This resource represents a long-running operation that is the result of a
      # network API call.
      "metadata": { # Service-specific metadata associated with the operation. It typically
          # contains progress information and common metadata such as create time.
          # Some services might not provide such metadata. Any method that returns a
          # long-running operation should document the metadata type, if any.
        "a_key": "", # Properties of the object. Contains field @type with type URL.
      },
      "error": { # The `Status` type defines a logical error model that is suitable for different # The error result of the operation in case of failure or cancellation.
          # programming environments, including REST APIs and RPC APIs. It is used by
          # [gRPC](https://github.com/grpc). The error model is designed to be:
          #
          # - Simple to use and understand for most users
          # - Flexible enough to meet unexpected needs
          #
          # # Overview
          #
          # The `Status` message contains three pieces of data: error code, error message,
          # and error details. The error code should be an enum value of
          # google.rpc.Code, but it may accept additional error codes if needed.
          # The error message should be a developer-facing English message that helps
          # developers *understand* and *resolve* the error. If a localized user-facing
          # error message is needed, put the localized message in the error details or
          # localize it in the client. The optional error details may contain arbitrary
          # information about the error. There is a predefined set of error detail types
          # in the package `google.rpc` that can be used for common error conditions.
          #
          # # Language mapping
          #
          # The `Status` message is the logical representation of the error model, but it
          # is not necessarily the actual wire format. When the `Status` message is
          # exposed in different client libraries and different wire protocols, it can be
          # mapped differently. For example, it will likely be mapped to some exceptions
          # in Java, but more likely mapped to some error codes in C.
          #
          # # Other uses
          #
          # The error model and the `Status` message can be used in a variety of
          # environments, either with or without APIs, to provide a
          # consistent developer experience across different environments.
          #
          # Example uses of this error model include:
          #
          # - Partial errors. If a service needs to return partial errors to the client,
          #   it may embed the `Status` in the normal response to indicate the partial
          #   errors.
          #
          # - Workflow errors. A typical workflow has multiple steps. Each step may
          #   have a `Status` message for error reporting.
          #
          # - Batch operations. If a client uses batch request and batch response, the
          #   `Status` message should be used directly inside batch response, one for
          #   each error sub-response.
          #
          # - Asynchronous operations. If an API call embeds asynchronous operation
          #   results in its response, the status of those operations should be
          #   represented directly using the `Status` message.
          #
          # - Logging.
          #   If some API errors are stored in logs, the message `Status` could
          #   be used directly after any stripping needed for security/privacy reasons.
        "message": "A String", # A developer-facing error message, which should be in English. Any
            # user-facing error message should be localized and sent in the
            # google.rpc.Status.details field, or localized by the client.
        "code": 42, # The status code, which should be an enum value of google.rpc.Code.
        "details": [ # A list of messages that carry the error details. There will be a
            # common set of message types for APIs to use.
          {
            "a_key": "", # Properties of the object. Contains field @type with type URL.
          },
        ],
      },
      "done": True or False, # If the value is `false`, it means the operation is still in progress.
          # If true, the operation is completed, and either `error` or `response` is
          # available.
      "response": { # The normal response of the operation in case of success. If the original
          # method returns no data on success, such as `Delete`, the response is
          # `google.protobuf.Empty`. If the original method is standard
          # `Get`/`Create`/`Update`, the response should be the resource. For other
          # methods, the response should have the type `XxxResponse`, where `Xxx`
          # is the original method name. For example, if the original method name
          # is `TakeSnapshot()`, the inferred response type is
          # `TakeSnapshotResponse`.
        "a_key": "", # Properties of the object. Contains field @type with type URL.
      },
      "name": "A String", # The server-assigned name, which is only unique within the same service that
          # originally returns it. If you use the default HTTP mapping, the
          # `name` should have the format of `operations/some/unique/name`.
    }</pre>
</div>

<div class="method">
    <code class="details" id="get">get(name, x__xgafv=None)</code>
  <pre>Gets information about a model, including its name, the description (if
set), and the default version (if at least one version of the model has
been deployed).

Args:
  name: string, Required. The name of the model.

Authorization: requires `Viewer` role on the parent project. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Represents a machine learning solution.
      #
      # A model can have multiple versions, each of which is a deployed, trained
      # model ready to receive prediction requests. The model itself is just a
      # container.
      "regions": [ # Optional. The list of regions where the model is going to be deployed.
          # Currently only one region per model is supported.
          # Defaults to 'us-central1' if nothing is set.
          # Note:
          # * No matter where a model is deployed, it can always be accessed by
          #   users from anywhere, both for online and batch prediction.
          # * The region for a batch prediction job is set by the region field when
          #   submitting the batch prediction job and does not take its value from
          #   this field.
        "A String",
      ],
      "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
          # handle prediction requests that do not specify a version.
          #
          # You can change the default version by calling
          # [projects.models.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
          #
          # Each version is a trained model deployed in the cloud, ready to handle
          # prediction requests. A model can have multiple versions.
          # You can get information about all of the versions of a given model
          # by calling
          # [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
        "description": "A String", # Optional. The description specified for the version when it was created.
        "runtimeVersion": "A String", # Optional. The Google Cloud ML runtime version to use for this deployment.
            # If not set, Google Cloud ML will choose a version.
        "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
            # model. You should generally use `automatic_scaling` with an appropriate
            # `min_nodes` instead, but this option is available if you want more
            # predictable billing. Beware that latency and error rates will increase
            # if the traffic exceeds the capability of the system to serve it based
            # on the selected number of nodes.
          "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
              # starting from the time the model is deployed, so the cost of operating
              # this model will be proportional to `nodes` * number of hours since
              # last billing cycle plus the cost for each prediction performed.
        },
        "deploymentUri": "A String", # Required. The Google Cloud Storage location of the trained model used to
            # create the version. See the
            # [overview of model
            # deployment](/ml-engine/docs/concepts/deployment-overview) for more
            # information.
            #
            # When passing Version to
            # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
            # the model service uses the specified location as the source of the model.
            # Once deployed, the model version is hosted by the prediction service, so
            # this location is useful only as a historical record.
            # The total number of model files can't exceed 1000.
        "lastUseTime": "A String", # Output only.
            # The time the version was last used for prediction.
        "automaticScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
            # response to increases and decreases in traffic. Care should be
            # taken to ramp up traffic according to the model's ability to scale
            # or you will start seeing increases in latency and 429 response codes.
          "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
              # nodes are always up, starting from the time the model is deployed, so the
              # cost of operating this model will be at least
              # `rate` * `min_nodes` * number of hours since last billing cycle,
              # where `rate` is the cost per node-hour as documented in
              # [pricing](https://cloud.google.com/ml-engine/pricing#prediction_pricing),
              # even if no predictions are performed. There is additional cost for each
              # prediction performed.
              #
              # Unlike manual scaling, if the load gets too heavy for the nodes
              # that are up, the service will automatically add nodes to handle the
              # increased load as well as scale back as traffic drops, always maintaining
              # at least `min_nodes`. You will be charged for the time in which additional
              # nodes are used.
              #
              # If not specified, `min_nodes` defaults to 0, in which case, when traffic
              # to a model stops (and after a cool-down period), nodes will be shut down
              # and no charges will be incurred until traffic to the model resumes.
        },
        "createTime": "A String", # Output only. The time the version was created.
        "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
            # requests that do not specify a version.
            #
            # You can change the default version by calling
            # [projects.models.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
        "name": "A String", # Required. The name specified for the version when it was created.
            #
            # The version name must be unique within the model it is created in.
      },
      "name": "A String", # Required. The name specified for the model when it was created.
          #
          # The model name must be unique within the project it is created in.
      "onlinePredictionLogging": True or False, # Optional. If true, enables Stackdriver Logging for online prediction.
          # Default is false.
      "description": "A String", # Optional. The description specified for the model when it was created.
    }</pre>
</div>

<div class="method">
    <code class="details" id="list">list(parent, pageToken=None, x__xgafv=None, pageSize=None)</code>
  <pre>Lists the models in a project.

Each project can contain multiple models, and each model can have multiple
versions.

Args:
  parent: string, Required. The name of the project whose models are to be listed.

Authorization: requires `Viewer` role on the specified project. (required)
  pageToken: string, Optional. A page token to request the next page of results.

You get the token from the `next_page_token` field of the response from
the previous call.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format
  pageSize: integer, Optional. The number of models to retrieve per "page" of results. If there
are more remaining results than this number, the response message will
contain a valid value in the `next_page_token` field.

The default value is 20, and the maximum page size is 100.

Returns:
  An object of the form:

    { # Response message for the ListModels method.
      "models": [ # The list of models.
        { # Represents a machine learning solution.
          #
          # A model can have multiple versions, each of which is a deployed, trained
          # model ready to receive prediction requests.
          # The model itself is just a container.
          "regions": [ # Optional. The list of regions where the model is going to be deployed.
              # Currently only one region per model is supported.
              # Defaults to 'us-central1' if nothing is set.
              # Note:
              # * No matter where a model is deployed, it can always be accessed by
              #   users from anywhere, both for online and batch prediction.
              # * The region for a batch prediction job is set by the region field when
              #   submitting the batch prediction job and does not take its value from
              #   this field.
            "A String",
          ],
          "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
              # handle prediction requests that do not specify a version.
              #
              # You can change the default version by calling
              # [projects.models.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
              #
              # Each version is a trained model deployed in the cloud, ready to handle
              # prediction requests. A model can have multiple versions. You can get
              # information about all of the versions of a given model by calling
              # [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
            "description": "A String", # Optional. The description specified for the version when it was created.
            "runtimeVersion": "A String", # Optional. The Google Cloud ML runtime version to use for this deployment.
                # If not set, Google Cloud ML will choose a version.
            "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
                # model. You should generally use `automatic_scaling` with an appropriate
                # `min_nodes` instead, but this option is available if you want more
                # predictable billing.
                # Beware that latency and error rates will increase
                # if the traffic exceeds the capability of the system to serve it based
                # on the selected number of nodes.
              "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
                  # starting from the time the model is deployed, so the cost of operating
                  # this model will be proportional to `nodes` * number of hours since
                  # last billing cycle plus the cost for each prediction performed.
            },
            "deploymentUri": "A String", # Required. The Google Cloud Storage location of the trained model used to
                # create the version. See the
                # [overview of model
                # deployment](/ml-engine/docs/concepts/deployment-overview) for more
                # information.
                #
                # When passing Version to
                # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
                # the model service uses the specified location as the source of the model.
                # Once deployed, the model version is hosted by the prediction service, so
                # this location is useful only as a historical record.
                # The total number of model files can't exceed 1000.
            "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
            "automaticScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
                # response to increases and decreases in traffic. Care should be
                # taken to ramp up traffic according to the model's ability to scale
                # or you will start seeing increases in latency and 429 response codes.
              "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model.
                  # These nodes are always up, starting from the time the model is
                  # deployed, so the cost of operating this model will be at least
                  # `rate` * `min_nodes` * number of hours since last billing cycle,
                  # where `rate` is the cost per node-hour as documented in
                  # [pricing](https://cloud.google.com/ml-engine/pricing#prediction_pricing),
                  # even if no predictions are performed. There is additional cost for each
                  # prediction performed.
                  #
                  # Unlike manual scaling, if the load gets too heavy for the nodes
                  # that are up, the service will automatically add nodes to handle the
                  # increased load as well as scale back as traffic drops, always maintaining
                  # at least `min_nodes`. You will be charged for the time in which additional
                  # nodes are used.
                  #
                  # If not specified, `min_nodes` defaults to 0, in which case, when traffic
                  # to a model stops (and after a cool-down period), nodes will be shut down
                  # and no charges will be incurred until traffic to the model resumes.
            },
            "createTime": "A String", # Output only. The time the version was created.
            "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
                # requests that do not specify a version.
                #
                # You can change the default version by calling
                # [projects.models.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
            "name": "A String", # Required. The name specified for the version when it was created.
                #
                # The version name must be unique within the model it is created in.
          },
          "name": "A String", # Required. The name specified for the model when it was created.
              #
              # The model name must be unique within the project it is created in.
          "onlinePredictionLogging": True or False, # Optional. If true, enables Stackdriver Logging for online prediction.
              # Default is false.
          "description": "A String", # Optional.
              # The description specified for the model when it was created.
        },
      ],
      "nextPageToken": "A String", # Optional. Pass this token as the `page_token` field of the request for a
          # subsequent call.
    }</pre>
</div>

<div class="method">
    <code class="details" id="list_next">list_next(previous_request, previous_response)</code>
  <pre>Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call 'execute()' on to request the next
  page. Returns None if there are no more items in the collection.
</pre>
</div>

</body></html>
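These methods are reached through the Google API Python client, typically via `googleapiclient.discovery.build('ml', 'v1')`. The sketch below shows how the `create` request body and the `list`/`list_next` pagination loop described above fit together. It is illustrative only: the helper names (`make_model_body`, `create_model`, `list_models`) and the project ID are hypothetical, and per the `create` documentation only `name` is required in the body.

```python
def make_model_body(name, description=None, regions=None,
                    online_prediction_logging=False):
    """Build a request body for projects.models.create.

    Only `name` is required; it must be unique within the project.
    """
    body = {'name': name}
    if description is not None:
        body['description'] = description
    if regions is not None:
        # Currently only one region per model is supported.
        body['regions'] = list(regions)
    if online_prediction_logging:
        body['onlinePredictionLogging'] = True
    return body


def create_model(ml, project_id, body):
    """Call projects.models.create and return the created Model resource.

    `ml` is a client from googleapiclient.discovery.build('ml', 'v1').
    """
    parent = 'projects/{}'.format(project_id)
    return ml.projects().models().create(parent=parent, body=body).execute()


def list_models(ml, project_id, page_size=20):
    """Yield every Model in the project, following list / list_next paging."""
    request = ml.projects().models().list(
        parent='projects/{}'.format(project_id), pageSize=page_size)
    while request is not None:
        response = request.execute()
        for model in response.get('models', []):
            yield model
        # list_next returns None when there are no more pages.
        request = ml.projects().models().list_next(request, response)
```

The pagination loop mirrors the `list_next` contract above: keep calling `execute()` on each returned request object until `list_next` returns `None`.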