I'm building a simple app that adds a hat on top of the user's face. I've seen examples of two different approaches:
- Adding the object as a scene to Experience.rcproject
- Reading the object from the bundle directly as a .usdz file
Approach #1
struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.automaticallyConfigureSession = false
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {
        let arConfiguration = ARFaceTrackingConfiguration()
        uiView.session.run(arConfiguration,
                           options: [.resetTracking, .removeExistingAnchors])
        let arAnchor = try! Experience.loadHat()
        uiView.scene.anchors.append(arAnchor)
    }
}
Approach #2
struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.automaticallyConfigureSession = false
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {
        let arConfiguration = ARFaceTrackingConfiguration()
        uiView.session.run(arConfiguration,
                           options: [.resetTracking, .removeExistingAnchors])
        let fileName = "hat.usdz"
        let modelEntity = try! ModelEntity.loadModel(named: fileName)
        // here we need to position the object in code
        modelEntity.position = SIMD3(0, -4.9, 11.8)
        modelEntity.orientation = simd_quatf(angle: 0, axis: SIMD3(-90, 0, 0))
        modelEntity.scale = SIMD3(0.93, 0.93, 0.93)
        let arAnchor = AnchorEntity(.face)
        arAnchor.addChild(modelEntity)
        uiView.scene.anchors.append(arAnchor)
    }
}
What is the main difference between these approaches? Approach #1 works, but approach #2 doesn't work for me at all - the object simply doesn't load into the scene. Could anyone explain a bit?
Thanks!
CodePudding user response:
For approach #2, try removing the position you set on the modelEntity. You provided a position of (0, -4.9, 11.8), and those values are in meters, so the hat ends up roughly 5 meters below and almost 12 meters away from the face anchor. Remove the position (or use face-scale values) and see if the model appears.
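For a face-anchored hat, the offset only needs to be a few centimeters, i.e. a fraction of a meter. A minimal sketch (the number below is an illustrative guess, not measured for any particular hat model):

// Face-anchor offsets are in meters, so centimeter-scale values are appropriate.
modelEntity.position = SIMD3<Float>(0, 0.12, 0)   // roughly 12 cm above the face anchor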
CodePudding user response:
The difference between .rcproject and .usdz is quite obvious: the Reality Composer file already has an anchor for the model (and it's at the top of the hierarchy). When you prototype in Reality Composer, you have the ability to visually control the scale of your models. .usdz models very often have a huge scale, which you need to reduce by 100 times.
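For instance, a quick way to compensate for an oversized .usdz is a uniform scale reduction - a sketch, assuming a 100x reduction happens to be what your particular asset needs:

let hat = try! ModelEntity.loadModel(named: "hat.usdz")
hat.scale = SIMD3<Float>(repeating: 0.01)   // shrink to 1/100 of the authored size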
As a rule, a .usdz model doesn't have a floor, while an .rcproject scene has a floor by default, and this floor acts as a shadow catcher. Also, note that the .rcproject file is larger than the .usdz file.
let scene = try! Experience.loadHat()
arView.scene.anchors.append(scene)
print(scene)
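If you need to tweak the model in code after loading the Reality Composer scene, you can look it up by the name it has in the Reality Composer hierarchy - the name "hat" below is an assumption, check what print(scene) reports:

if let hatEntity = scene.findEntity(named: "hat") {
    hatEntity.scale *= 1.2   // for example, make the hat 20% bigger
}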
When loading a .usdz into a scene, you have to programmatically create an anchor (either swiftly or pythonically). It also makes sense to use .reality files, as they are optimized for faster loading.
let model = try! ModelEntity.load(named: "hat.usdz")
let anchor = AnchorEntity(.face)
anchor.addChild(model)
arView.scene.anchors.append(anchor)
print(model)
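If you export the same content as a .reality file, loading it can look like the following sketch (assuming an asset named hat.reality in the app bundle); Entity.loadAnchor(named:) already returns an AnchorEntity, so no manual anchor is needed:

let realityAnchor = try! Entity.loadAnchor(named: "hat")   // assumed resource: hat.reality
arView.scene.anchors.append(realityAnchor)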
Also, put a face tracking config inside the makeUIView method:
import SwiftUI
import RealityKit
import ARKit

func makeUIView(context: Context) -> ARView {
    let arView = ARView(frame: .zero)
    let model = try! ModelEntity.load(named: "hat.usdz")
    arView.session.run(ARFaceTrackingConfiguration())
    let anchor = AnchorEntity(.face)
    anchor.position.y = 0.25
    anchor.addChild(model)
    arView.scene.addAnchor(anchor)
    return arView
}
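For completeness, here is a sketch of the rest of the SwiftUI wrapper around that makeUIView (the ContentView name and the empty updateUIView are assumptions for illustration, not part of the answer above):

struct ARViewContainer: UIViewRepresentable {
    // makeUIView(context:) from above goes here

    // Nothing to update on SwiftUI state changes; the scene is set up once in makeUIView.
    func updateUIView(_ uiView: ARView, context: Context) { }
}

struct ContentView: View {
    var body: some View {
        // Full-screen camera feed with the face-anchored hat.
        ARViewContainer().ignoresSafeArea()
    }
}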