GCS: Can we have different Storage Class objects inside a bucket?


I am aware of a similar concept in AWS, where a bucket can hold objects of multiple storage classes, such as Standard and Coldline objects.
I tried searching for the same in GCP, since the objects I will store won't be accessed frequently and therefore need to be of different storage classes.

CodePudding user response:

Yes, a GCS bucket can hold objects of multiple storage classes. Refer to these documents, DOC1 and DOC2, for detailed steps and an explanation of how to change the storage class of an individual object within a bucket.

Moreover, there are multiple storage classes available in GCP:

  • Standard - A general-purpose storage class for frequently accessed data.

  • Nearline - Recommended for data that needs to be accessed on average once every 30 days or less often.

  • Coldline - Suited to infrequently accessed data that needs to be read on average about once per quarter (90 days).

  • Archive - The best option for data that needs to be accessed about once per year (365 days).

Note: Pricing differs between the storage classes, so it depends on the class you choose.

For more detailed information, refer to these documents: DOC1, DOC2.
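
To see for yourself that one bucket can hold a mix of classes, here is a minimal Python sketch using the google-cloud-storage client library; the bucket name my-mixed-bucket is just a placeholder, and credentials are assumed to be configured in your environment:

    from google.cloud import storage

    client = storage.Client()

    # List every object in the bucket and print its storage class.
    # A single bucket can contain objects in different classes side by side.
    for blob in client.list_blobs("my-mixed-bucket"):
        # blob.storage_class is e.g. "STANDARD", "NEARLINE", "COLDLINE" or "ARCHIVE"
        print(f"{blob.name}: {blob.storage_class}")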

CodePudding user response:

Yes. You can set the storage classes in a number of ways:

First, when you upload an object, you can specify its storage class. It's a property of most of the client libraries' "write" or "upload" methods. If you're using the JSON API directly, check the storageClass property on the objects.insert call. If you're using the XML API, use the x-goog-storage-class header.
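
For example, with the Python client library you can set the class on the blob before uploading. This is only a rough sketch; the bucket, object, and local file names are placeholders:

    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("my-mixed-bucket")         # placeholder bucket name

    # Set the storage class before uploading; the object is then created
    # directly in that class.
    blob = bucket.blob("reports/2023-archive.csv")    # placeholder object name
    blob.storage_class = "COLDLINE"
    blob.upload_from_filename("local-archive.csv")    # placeholder local file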

Second, you can also set the "default storage class" on the bucket, which will be used for all object uploads that do not specify a class.
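
As a sketch of that (again with the Python library and placeholder bucket names), the default class can be set when creating a bucket or patched onto an existing one:

    from google.cloud import storage

    client = storage.Client()

    # Create a bucket whose default storage class is NEARLINE; uploads that
    # don't specify a class will use it.
    bucket = client.bucket("my-nearline-bucket")      # placeholder name
    bucket.storage_class = "NEARLINE"
    client.create_bucket(bucket, location="us-central1")

    # Or change the default on an existing bucket.
    existing = client.get_bucket("my-mixed-bucket")   # placeholder name
    existing.storage_class = "COLDLINE"
    existing.patch()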

Third, you can change an object's storage class using the objects.rewrite call. If you're using a client library such as the Python one, you can use a function like blob.update_storage_class(new_storage_class) to change the storage class (note that this counts as an object write).
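
A minimal sketch of that Python call, with placeholder bucket and object names (remember that the underlying rewrite counts as a new object write and is billed accordingly):

    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("my-mixed-bucket")        # placeholder name
    blob = bucket.get_blob("reports/2023-archive.csv")   # placeholder name

    # Rewrites the object in place under the new storage class.
    blob.update_storage_class("ARCHIVE")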

Finally, you can put "lifecycle policies" on your bucket that will automatically transition storage classes for individual objects over time or in response to some change. For example, you could have a rule like "downgrade an object's storage class to coldline 60 days after its creation." See https://cloud.google.com/storage/docs/lifecycle for more.
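
If you manage the bucket from Python, the client library also exposes helpers for lifecycle rules. The sketch below assumes the add_lifecycle_set_storage_class_rule helper available in recent versions of google-cloud-storage and uses a placeholder bucket name:

    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("my-mixed-bucket")    # placeholder name

    # Rule: 60 days after creation, transition objects to COLDLINE.
    bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=60)
    bucket.patch()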

Full documentation of storage classes can be found at https://cloud.google.com/storage/docs/storage-classes
