I have a fairly simple Bicep script that creates a Cosmos DB database as well as a container within it:
resource cosmos_db_live 'Microsoft.DocumentDB/databaseAccounts/sqlDatabases@2022-05-15' = {
  parent: cosmos_account
  name: 'live'
  properties: {
    resource: {
      id: 'live'
    }
    options: {
      throughput: 600
    }
  }
}

resource cosmos_container 'Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers@2022-05-15' = {
  parent: cosmos_db_live
  name: 'container_name'
  properties: {
    resource: {
      id: 'container_name'
      partitionKey: {
        paths: ['/partition']
      }
      conflictResolutionPolicy: {
        mode: 'LastWriterWins'
        conflictResolutionPath: '/_ts'
      }
      indexingPolicy: {
        indexingMode: 'consistent'
        automatic: true
        includedPaths: [{path: '/*'}]
        excludedPaths: [{path: '/"_etag"/?'}]
      }
    }
  }
}
This works great. However, I now want to create multiple containers, all with the same structure, so I am attempting to template the container definition out into a module:
param name string
param partition string

resource cosmos_container 'Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers@2022-05-15' = {
  name: name
  properties: {
    resource: {
      id: name
      partitionKey: {
        paths: ['/${partition}']
      }
      conflictResolutionPolicy: {
        mode: 'LastWriterWins'
        conflictResolutionPath: '/_ts'
      }
      indexingPolicy: {
        indexingMode: 'consistent'
        automatic: true
        includedPaths: [{path: '/*'}]
        excludedPaths: [{path: '/"_etag"/?'}]
      }
    }
  }
}
I now have no idea how to link the module back to the parent. I can't use parent: inside the module, because I can't find a way to pass the database resource into the module from the top-level file. I can't use parent: on the module call itself, because that is not a valid property there. And I can't call the module from within the parent resource, because that is not valid syntax.
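For reference, this is roughly the module call I am attempting from the top-level file (container.bicep is just a placeholder name for the module file above); nothing in this declaration gives me a way to express the parent/child relationship to cosmos_db_live:

module live_container 'container.bicep' = {
  name: 'deploy_live_container'
  params: {
    name: 'container_name'
    partition: 'partition'
  }
}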
How can I call the above module from my parent file and have the dependencies resolved automatically, as if it were all in one file? Is this not supported? There should be a very basic way to do this (unless I am missing something).
CodePudding user response:
This is supported in Bicep. Here's a sample that sets the name and partition key, as well as a different throughput value, for each container.
More examples are in the Bicep Quickstart Loops samples.
param accountName string
param location string
param primaryRegion string
param databaseName string
param containersConfig object

var locations = [
  {
    locationName: primaryRegion
    failoverPriority: 0
    isZoneRedundant: false
  }
]

resource account 'Microsoft.DocumentDB/databaseAccounts@2022-05-15' = {
  name: toLower(accountName)
  location: location
  kind: 'GlobalDocumentDB'
  properties: {
    consistencyPolicy: { defaultConsistencyLevel: 'Session' }
    locations: locations
    databaseAccountOfferType: 'Standard'
  }
}

resource database 'Microsoft.DocumentDB/databaseAccounts/sqlDatabases@2022-05-15' = {
  parent: account
  name: databaseName
  properties: {
    resource: {
      id: databaseName
    }
  }
}

resource containers 'Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers@2022-05-15' = [for containerConfig in items(containersConfig): {
  parent: database
  name: containerConfig.value.name
  properties: {
    resource: {
      id: containerConfig.value.name
      partitionKey: {
        paths: [
          containerConfig.value.partitionKey
        ]
        kind: 'Hash'
      }
      indexingPolicy: {
        indexingMode: 'consistent'
        includedPaths: [
          {
            path: '/*'
          }
        ]
        excludedPaths: [
          {
            path: '/_etag/?'
          }
        ]
      }
    }
    options: {
      throughput: containerConfig.value.throughput
    }
  }
}]
Then the parameter file:
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "accountName": {
      "value": "my-cosmos-account"
    },
    "location": {
      "value": "West US"
    },
    "primaryRegion": {
      "value": "West US"
    },
    "databaseName": {
      "value": "myDatabase"
    },
    "containersConfig": {
      "value": {
        "container1": {
          "name": "myContainer1",
          "partitionKey": "/myPartitionKey1",
          "throughput": 400
        },
        "container2": {
          "name": "myContainer2",
          "partitionKey": "/myPartitionKey2",
          "throughput": 500
        },
        "container3": {
          "name": "myContainer3",
          "partitionKey": "/myPartitionKey3",
          "throughput": 600
        }
      }
    }
  }
}
PS: I noticed you are using database-level throughput. I advise against sharing database throughput across containers if they have highly asymmetric request and storage needs. Shared throughput works well when the containers are all roughly equal; for those that are not, give them their own dedicated throughput.
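For illustration only, a minimal sketch of the two options (the 600 RU/s figure is just a placeholder): shared throughput is set once on the database, while dedicated throughput is set per container via options.throughput, as the loop sample above already does.

// Shared (database-level) throughput: every container in the database draws
// from this single pool of RU/s.
resource database 'Microsoft.DocumentDB/databaseAccounts/sqlDatabases@2022-05-15' = {
  parent: account
  name: databaseName
  properties: {
    resource: {
      id: databaseName
    }
    options: {
      throughput: 600 // placeholder value, shared by all containers
    }
  }
}

// Dedicated throughput: omit options here and instead set options.throughput
// on each container, as in the containers loop above.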