Adding generated Bigtable classes. #22
Changes from 2 commits
@@ -0,0 +1,74 @@
"""Simple Bigtable client demonstrating auto-generated veneer.
"""

import argparse
import logging

from google.bigtable.v1 import bigtable_service_api
from google.bigtable.admin.cluster.v1 import bigtable_cluster_service_api
from google.bigtable.admin.table.v1 import bigtable_table_service_api

from google.bigtable.v1 import bigtable_data_pb2 as data
from google.bigtable.admin.cluster.v1 import bigtable_cluster_data_pb2 as cluster_data
from google.bigtable.admin.table.v1 import bigtable_table_data_pb2 as table_data


def run(project_id):
  with bigtable_service_api.BigtableServiceApi() as bigtable_api, \
      bigtable_cluster_service_api.BigtableClusterServiceApi() as cluster_api, \
      bigtable_table_service_api.BigtableTableServiceApi() as table_api:
Owner: Why do we want users to have to do this? It is insanity to have to type that much.

Reply: If this is needed routinely for common scenarios, it could be a problem with the API. If the scenarios are fairly affinitized with the API, I would change the sample to do the with statements one at a time.
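For what it's worth, one way to keep a sample readable without the backslash-continued with line (a generic standard-library sketch, not something this PR uses; `Resource` is a stand-in for the generated clients, and `contextlib.ExitStack` requires Python 3.3+ or the contextlib2 backport of the era):

```python
import contextlib

class Resource(object):
    """Toy closable resource standing in for the generated API clients."""
    def __init__(self, name):
        self.name = name
        self.closed = False
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        self.closed = True

with contextlib.ExitStack() as stack:
    # Each resource is entered individually; all are closed on exit,
    # in reverse order, even if a later __enter__ call raises.
    apis = [stack.enter_context(Resource(n))
            for n in ('bigtable', 'cluster', 'table')]
    print([a.name for a in apis])  # ['bigtable', 'cluster', 'table']
```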
    disp_name = 'my-cluster'
    zone_name = 'projects/{0}/zones/{1}'.format(project_id, 'us-central1-c')
    employee_id = 'employee1'

    try:
      print 'Creating a cluster.'
      cluster = cluster_data.Cluster(display_name=disp_name, serve_nodes=3)
      cluster_name = cluster_api.create_cluster(
          name=zone_name, cluster_id=disp_name, cluster=cluster).name
      print 'Successfully created a cluster named {0}'.format(cluster_name)

      print 'Creating a bigtable.'
Owner: What if I wanted to create a small table? 😉

Reply: This seems fair game to me since the bigtable brand is already in the API. Is there a specific change recommended here?

Owner: Ha, I was just joking. I would've expected the printed statement to say …
      table_name = table_api.create_table(
          table=table_data.Table(), name=cluster_name, table_id='my-table').name
      name_column_family = table_api.create_column_family(
          name=table_name, column_family_id='Name',
          column_family=table_data.ColumnFamily())
      bday_column_family = table_api.create_column_family(
          name=table_name, column_family_id='Birthday',
          column_family=table_data.ColumnFamily())
      print 'Successfully created a table named {0}'.format(table_name)

      print 'Writing some data to the table.'
      rule1 = data.ReadModifyWriteRule(
          family_name='Name', column_qualifier='First Name',
          append_value='Jane')
      rule2 = data.ReadModifyWriteRule(
          family_name='Name', column_qualifier='Last Name', append_value='Doe')
      rule3 = data.ReadModifyWriteRule(
          family_name='Birthday', column_qualifier='date',
          append_value='Feb. 29')
      bigtable_api.read_modify_write_row(
          table_name=table_name, row_key=employee_id,
          rules=[rule1, rule2, rule3])

      print 'Reading the data we wrote to the table.'
      for response in bigtable_api.read_rows(
          table_name=table_name, row_key=employee_id):
        print response

      print 'Deleting the table and cluster.'
      table_api.delete_table(name=table_name)
      cluster_api.delete_cluster(name=cluster_name)

    except Exception as exception:
      logging.exception(exception)
      print 'failed with {0}:{1}'.format(exception.code, exception.details)

if __name__ == '__main__':
  parser = argparse.ArgumentParser()
  parser.add_argument(
      '--project_id', help='The numerical id of the project to create bigtable in.',
      required=True)
  args = parser.parse_args()
  run(args.project_id)
@@ -0,0 +1,262 @@
# Copyright 2015 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# EDITING INSTRUCTIONS
# This file was generated from google/bigtable/admin/cluster/v1/bigtable_cluster_service.proto,
# and updates to that file get reflected here through a regular refresh
# process. However, manual and updates to that file get reflected here
Author: Done.
# through a regular refresh additions are allowed because the refresh
# process performs a 3-way merge in order to preserve those manual additions.
# In order not to break the refresh process, only certain types of
Owner: Is this a goal to make the generated files easier to visually diff for source control?

Reply: Yes. To minimize costs, we want code-gen updates to merge cleanly most of the time, so we would discourage manual editing except in specific defined sections that won't conflict with the code generation.

Owner: It's an admirable goal, though I'm not sure if your customers (i.e. open source devs) value it enough for it to be worth the effort required.

Reply: FWIW, I'm pretty heavily opposed to ever editing any generated module, unless the generation is intended only as a starting point (never to be regenerated). Application code should just import and use the generated code.

Reply: Regeneration is intended for non-breaking change updates only. Agree that this pattern is a red flag by default. However, we have verified that these merge cleanly as the editing is confined to explicit sections and breaking API changes are not allowed.
# modifications are allowed.
#
# Allowed modifications:
# 1. New methods (these should be added to the end of the class)
# 2. "Notes specific to this wrapper method" sections in the method
#    documentation
#
# Happy editing!

from google.bigtable.admin.cluster.v1 import bigtable_cluster_data_pb2
from google.bigtable.admin.cluster.v1 import bigtable_cluster_service_messages_pb2
from google.bigtable.admin.cluster.v1 import bigtable_cluster_service_pb2
from google.gax import api_callable
from google.gax import api_utils
from google.gax import page_descriptor
from google.longrunning import operations_pb2
from google.protobuf import timestamp_pb2


class BigtableClusterServiceApi(object):
  """Service for managing zonal Cloud Bigtable resources."""
Author: Should be conforming to PEP8 - so 4 space indents...

Reply: I think we'll actually want to conform to Google style: https://google.github.io/styleguide/pyguide.html But yes, we should use 4 space indents.

Owner: Google style doesn't make much sense for an open source project.
  # The default address of the logging service.
  _SERVICE_ADDRESS = "bigtableclusteradmin.googleapis.com"
Owner: Where does this live? I don't see it in any of the protos released with the Java client.

Reply: This is from the service .yaml metadata. It is visible from the discovery API.
  # The default port of the logging service.
  _DEFAULT_SERVICE_PORT = 443
Owner: Seems crazy that gRPC doesn't just handle this. I also hard-coded …
  # The scopes needed to make gRPC calls to all of the methods defined in
  # this service
  _ALL_SCOPES = [
      'https://www.googleapis.com/auth/bigtable.admin',
      'https://www.googleapis.com/auth/bigtable.admin.cluster',
      'https://www.googleapis.com/auth/cloud-bigtable.admin',
      'https://www.googleapis.com/auth/cloud-bigtable.admin.cluster',
      'https://www.googleapis.com/auth/cloud-platform'
  ]
Owner: These are not all needed all at once. I suppose it's a moot point for a service account, but I made a distinction between methods that need the admin scopes and those that do not. Using … Also, this summer when I got the scope list, they were as in the source:

    ADMIN_SCOPE = 'https://www.googleapis.com/auth/cloud-bigtable.admin'
    DATA_SCOPE = 'https://www.googleapis.com/auth/cloud-bigtable.data'
    READ_ONLY_SCOPE = ('https://www.googleapis.com/auth/'
                       'cloud-bigtable.data.readonly')

Reply: Scopes are a pretty blunt instrument for permissions that are being phased out. I think for code-gen, having the full set is fine. Someone can still get more fine grained if they choose by passing in explicit credentials.

Owner: I don't know how you mean "permissions that are being phased out". I'd say that it's not clear to an external user that scopes are being phased out.
  def __init__(
      self, service_path=_SERVICE_ADDRESS, port=_DEFAULT_SERVICE_PORT,
      channel=None, ssl_creds=None, scopes=_ALL_SCOPES,
      is_idempotent_retrying=True, max_attempts=3, timeout=30):
Owner: I think it'd be more idiomatic to have the code generator produce:

    def __init__(self,
                 service_path=_SERVICE_ADDRESS,
                 port=_DEFAULT_SERVICE_PORT,
                 channel=None,
                 ssl_creds=None,
                 scopes=_ALL_SCOPES,
                 ...):

Author: Done - our solution right now is just to use new lines everywhere... but we have an issue tracking implementing a better formatting solution.
    self.defaults = api_callable.ApiCallableDefaults(
        timeout=timeout, max_attempts=max_attempts,
        is_idempotent_retrying=is_idempotent_retrying)

    self.stub = api_utils.create_stub(
        bigtable_cluster_service_pb2.beta_create_BigtableClusterService_stub, service_path, port,
        ssl_creds=ssl_creds, channel=channel, scopes=scopes)

  def __enter__(self):
    return self
Owner: If …

Author: We made an issue to track this... it's not solvable immediately - I believe we're waiting for something in gRPC.
  def __exit__(self, type, value, traceback):
    self.close()

  def close(self):
    del self.stub

  # Properties
  @property
  def channel(self):
    return self.channel
Owner: This is circular.

    >>> import sys
    >>> sys.setrecursionlimit(4)
    >>> class A(object):
    ...     @property
    ...     def b(self):
    ...         return self.b
    ...
    >>> a = A()
    >>> a.b
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "<stdin>", line 4, in b
    RuntimeError: maximum recursion depth exceeded

Author: Removed.
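For readers following along, the usual shape of the fix (a sketch only; the `_channel` attribute name is hypothetical, not the project's actual patch) is to back the public property with a differently named private attribute, so the property no longer calls itself:

```python
class Api(object):
    """Minimal sketch: a property backed by a private attribute."""

    def __init__(self, channel=None):
        # A different attribute name breaks the self-reference.
        self._channel = channel

    @property
    def channel(self):
        return self._channel

api = Api(channel='fake-channel')
print(api.channel)  # fake-channel
```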
  # Page descriptors

  # Service calls
  def list_zones(self, name="", **kwargs):
    """Lists the supported zones for the given project."""
Author: We made an issue to track the documentation... I think @geigerj can explain.

Reply: The API owner will eventually be able to configure, of the fields in the request proto, which should be represented as positional arguments, and which as named arguments in the generated Python code. Those proto fields that are represented by neither positional nor named arguments can be set using the kwargs. So currently it's just a placeholder, but expect that to change in the future.
    list_zones_request = bigtable_cluster_service_messages_pb2.ListZonesRequest(
        name=name, **kwargs)
    return self.list_zones_callable()(list_zones_request)
Owner: As mentioned below, this could just be

    return api_callable.idempotent_callable(
        self.stub.ListZones,
        is_retrying=None,
        max_attempts=None,
        defaults=self.defaults)

and the generator would have no problem doing the mapping list_zones --> self.stub.ListZones
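To make the two-layer design discussed here concrete, below is a toy, dependency-free sketch of the idea (the names `make_callable` and `flaky` are invented for illustration; the real `api_callable.idempotent_callable` handles gRPC status codes and more settings): a factory binds retry defaults once and returns a function of the request.

```python
def make_callable(stub_method, max_attempts=None, defaults=None):
    """Toy *_callable factory: bind retry settings, return a callable."""
    attempts = max_attempts if max_attempts is not None else defaults['max_attempts']

    def call(request):
        last_error = None
        for _ in range(attempts):
            try:
                return stub_method(request)
            except IOError as error:  # stand-in for a retryable gRPC error
                last_error = error
        raise last_error
    return call

# A fake stub method that fails twice before succeeding.
state = {'n': 0}
def flaky(request):
    state['n'] += 1
    if state['n'] < 3:
        raise IOError('transient')
    return 'ok:' + request

list_zones = make_callable(flaky, defaults={'max_attempts': 3})
print(list_zones('zones'))  # ok:zones
```

The plain wrapper method then reduces to building the request proto and invoking the factory's result, which is the mapping the comment describes.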

  def list_zones_callable(
Owner: This seems like overkill to make a method, especially if you aren't going to allow users to over-ride …

Reply: We provide two Python methods for each API method; this one is intended to be non-configurable, using the defaults set by the API owner for, e.g., retrying. If a client has an unusual use-case, they can use the "_callable" version or fall back to regular gRPC.
      self, is_retrying=None, max_attempts=None):
    return api_callable.idempotent_callable(
        self.stub.ListZones,
        is_retrying=is_retrying,
        max_attempts=max_attempts,
        defaults=self.defaults)

  def get_cluster(self, name="", **kwargs):
    """Gets information about a particular cluster."""
    get_cluster_request = bigtable_cluster_service_messages_pb2.GetClusterRequest(
        name=name, **kwargs)
    return self.get_cluster_callable()(get_cluster_request)

  def get_cluster_callable(
      self, is_retrying=None, max_attempts=None):
    return api_callable.idempotent_callable(
        self.stub.GetCluster,
        is_retrying=is_retrying,
        max_attempts=max_attempts,
        defaults=self.defaults)

  def list_clusters(self, name="", **kwargs):
    """
    Lists all clusters in the given project, along with any zones for which
    cluster information could not be retrieved.
Owner: "Breaks" PEP257 since there is no real short-description of the method. I'm unsure what to suggest an auto-generator should do in these situations.

Reply: We are trying to re-use the language-independent proto doc. We can address some of these with changes to the guidelines of those docs. So requiring that the first sentence of doc is a concise description is possible, but we would want to understand the consequences of breaking PEP257, since changes to these doc standards are costly to enforce.

Owner: Totally agree. I think generated files can have relaxed requirements, but just wanted to point it out if your goal includes making readable / idiomatic code.
    """
    list_clusters_request = bigtable_cluster_service_messages_pb2.ListClustersRequest(
        name=name, **kwargs)
    return self.list_clusters_callable()(list_clusters_request)

  def list_clusters_callable(
      self, is_retrying=None, max_attempts=None):
    return api_callable.idempotent_callable(
        self.stub.ListClusters,
        is_retrying=is_retrying,
        max_attempts=max_attempts,
        defaults=self.defaults)

  def create_cluster(self, name="", cluster_id="", cluster=bigtable_cluster_data_pb2.Cluster(), **kwargs):
Owner: Ditto here about mutable defaults. Big-time bad idea.
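To illustrate the reviewer's point in isolation (a generic sketch, unrelated to the generated signatures): Python evaluates default values once, at function definition time, so a mutable default is shared across all calls; the conventional fix is a `None` sentinel.

```python
def append_bad(item, bucket=[]):
    # BUG: the same list object is reused on every call.
    bucket.append(item)
    return bucket

def append_good(item, bucket=None):
    # A fresh list is created per call when none is supplied.
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

print(append_bad('a'))   # ['a']
print(append_bad('b'))   # ['a', 'b'] -- state leaked between calls
print(append_good('a'))  # ['a']
print(append_good('b'))  # ['b']
```

The same reasoning applies to defaults like `cluster=bigtable_cluster_data_pb2.Cluster()`: one message instance is constructed at import time and shared by every call that omits the argument.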
    """
    Creates a cluster and begins preparing it to begin serving. The returned
Owner: When I wrote docstrings by hand I mostly just copied the docs from the … However, for some things like …

Reply: In Java we also pulled out the message information. This is just incomplete work in Python. We have an internal bug, but we should create a new one on GitHub and link the two.

Owner: Good deal. Sounds like there needs to be an external project first.
    cluster embeds as its "current_operation" a long-running operation which
    can be used to track the progress of turning up the new cluster.
    Immediately upon completion of this request:
    * The cluster will be readable via the API, with all requested attributes
      but no allocated resources.
    Until completion of the embedded operation:
    * Cancelling the operation will render the cluster immediately unreadable
      via the API.
    * All other attempts to modify or delete the cluster will be rejected.
    Upon completion of the embedded operation:
    * Billing for all successfully-allocated resources will begin (some types
      may have lower than the requested levels).
    * New tables can be created in the cluster.
    * The cluster's allocated resource levels will be readable via the API.
    The embedded operation's "metadata" field type is
    [CreateClusterMetadata][google.bigtable.admin.cluster.v1.CreateClusterMetadata] The embedded operation's "response" field type is
Comment: Could we strip out the link headers so that this is "... field type is google.bigtable.admin.cluster.v1.CreateClusterMetadata. The ..."?

Owner: 👍 It may be a pain unless you have something that can easily parse it from the comments in the … If you can parse it, you should reformat in Sphinx / RST format to link within the generated classes.

Author: We're working on improving the docs as we can right now, and have an issue tracking it.
    [Cluster][google.bigtable.admin.cluster.v1.Cluster], if successful.
    """
    create_cluster_request = bigtable_cluster_service_messages_pb2.CreateClusterRequest(
        name=name, cluster_id=cluster_id, cluster=cluster, **kwargs)
    return self.create_cluster_callable()(create_cluster_request)

  def create_cluster_callable(
      self, is_retrying=None, max_attempts=None):
    return api_callable.idempotent_callable(
        self.stub.CreateCluster,
        is_retrying=is_retrying,
        max_attempts=max_attempts,
        defaults=self.defaults)

  def update_cluster(self, name="", delete_time=timestamp_pb2.Timestamp(), current_operation=operations_pb2.Operation(), display_name="", serve_nodes=0, default_storage_type=bigtable_cluster_data_pb2.STORAGE_UNSPECIFIED, **kwargs):
    """
    Updates a cluster, and begins allocating or releasing resources as
    requested. The returned cluster embeds as its "current_operation" a
    long-running operation which can be used to track the progress of updating
    the cluster.
    Immediately upon completion of this request:
    * For resource types where a decrease in the cluster's allocation has been
      requested, billing will be based on the newly-requested level.
    Until completion of the embedded operation:
    * Cancelling the operation will set its metadata's "cancelled_at_time",
      and begin restoring resources to their pre-request values. The operation
      is guaranteed to succeed at undoing all resource changes, after which
      point it will terminate with a CANCELLED status.
    * All other attempts to modify or delete the cluster will be rejected.
    * Reading the cluster via the API will continue to give the pre-request
      resource levels.
    Upon completion of the embedded operation:
    * Billing will begin for all successfully-allocated resources (some types
      may have lower than the requested levels).
    * All newly-reserved resources will be available for serving the cluster's
      tables.
    * The cluster's new resource levels will be readable via the API.
    [UpdateClusterMetadata][google.bigtable.admin.cluster.v1.UpdateClusterMetadata] The embedded operation's "response" field type is
    [Cluster][google.bigtable.admin.cluster.v1.Cluster], if successful.
    """
    cluster = bigtable_cluster_data_pb2.Cluster(
        name=name, delete_time=delete_time, current_operation=current_operation, display_name=display_name, serve_nodes=serve_nodes, default_storage_type=default_storage_type, **kwargs)
    return self.update_cluster_callable()(cluster)

  def update_cluster_callable(
      self, is_retrying=None, max_attempts=None):
    return api_callable.idempotent_callable(
        self.stub.UpdateCluster,
        is_retrying=is_retrying,
        max_attempts=max_attempts,
        defaults=self.defaults)

  def delete_cluster(self, name="", **kwargs):
    """
    Marks a cluster and all of its tables for permanent deletion in 7 days.
    Immediately upon completion of the request:
    * Billing will cease for all of the cluster's reserved resources.
    * The cluster's "delete_time" field will be set 7 days in the future.
    Soon afterward:
    * All tables within the cluster will become unavailable.
    Prior to the cluster's "delete_time":
    * The cluster can be recovered with a call to UndeleteCluster.
    * All other attempts to modify or delete the cluster will be rejected.
    At the cluster's "delete_time":
    * The cluster and *all of its tables* will immediately and irrevocably
      disappear from the API, and their data will be permanently deleted.
    """
    delete_cluster_request = bigtable_cluster_service_messages_pb2.DeleteClusterRequest(
        name=name, **kwargs)
    return self.delete_cluster_callable()(delete_cluster_request)

  def delete_cluster_callable(
      self, is_retrying=None, max_attempts=None):
    return api_callable.idempotent_callable(
        self.stub.DeleteCluster,
        is_retrying=is_retrying,
        max_attempts=max_attempts,
        defaults=self.defaults)

  def undelete_cluster(self, name="", **kwargs):
    """
    Cancels the scheduled deletion of a cluster and begins preparing it to
    resume serving. The returned operation will also be embedded as the
    cluster's "current_operation".
    Immediately upon completion of this request:
    * The cluster's "delete_time" field will be unset, protecting it from
      automatic deletion.
    Until completion of the returned operation:
    * The operation cannot be cancelled.
    Upon completion of the returned operation:
    * Billing for the cluster's resources will resume.
    * All tables within the cluster will be available.
    [UndeleteClusterMetadata][google.bigtable.admin.cluster.v1.UndeleteClusterMetadata] The embedded operation's "response" field type is
    [Cluster][google.bigtable.admin.cluster.v1.Cluster], if successful.
    """
    undelete_cluster_request = bigtable_cluster_service_messages_pb2.UndeleteClusterRequest(
        name=name, **kwargs)
    return self.undelete_cluster_callable()(undelete_cluster_request)

  def undelete_cluster_callable(
      self, is_retrying=None, max_attempts=None):
    return api_callable.idempotent_callable(
        self.stub.UndeleteCluster,
        is_retrying=is_retrying,
        max_attempts=max_attempts,
        defaults=self.defaults)

  # ========
  # Manually-added methods: add custom (non-generated) methods after this point.
  # ========
Comment: Do you want users to run an entire application within a context manager?

Reply: We were trying to capture the Python idiom for a closable resource, like IDisposable in C#. However, the pattern in our examples in other languages was to call a single function, so you don't have this huge block.

Reply: Discussed with Sai that we should refactor the example to have a single function inside the with clause. @geigerj.
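A rough sketch of the proposed refactor (the helper name `do_bigtable_ops` and the `FakeApi` stand-in are hypothetical; the real sample would pass the three generated clients): keep the with statement small and move all of the work into a single function.

```python
class FakeApi(object):
    """Stand-in for the generated API clients, which manage a gRPC stub."""
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        pass  # The real client would release its stub here.

def do_bigtable_ops(bigtable_api, cluster_api, table_api):
    # All cluster/table/row operations from the sample would live here,
    # so the with block stays a single call.
    return 'done'

def run():
    with FakeApi() as bigtable_api, \
        FakeApi() as cluster_api, \
        FakeApi() as table_api:
        return do_bigtable_ops(bigtable_api, cluster_api, table_api)

print(run())  # done
```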