I am comparing images and have used BFMatcher to perform feature matching. My current code is:
def get_similarity_from_desc(approach, query_desc, corp_desc):
    if approach == 'sift':
        # BFMatcher with euclidean distance
        bf = cv.BFMatcher()
    else:
        # BFMatcher with hamming distance
        bf = cv.BFMatcher(cv.NORM_HAMMING)
    matches = bf.knnMatch(query_desc, corp_desc, k=2)
    # Apply ratio test
    good = []
    for m, n in matches:
        if m.distance < 0.75 * n.distance:
            good.append([m])
    similarity = ??
    return similarity
I am wondering whether it is possible to compute a similarity measure given the list of good matches good and the descriptors of the two images, query_desc and corp_desc. So far I have thought of:

similarity = len(good) / len(matches)

But I don't think this is a correct way of determining the similarity between two images. Do you know a better approach for computing this measure?
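For reference on the ratio idea above: knnMatch returns one candidate pair per query descriptor, so len(good) / len(matches) only measures the fraction of query features that survive the ratio test. A common variant normalizes by the smaller of the two descriptor counts so the score is comparable across image pairs of different sizes. A minimal sketch (the helper name good_match_ratio is hypothetical, not from any library):

```python
def good_match_ratio(n_good, n_query_desc, n_corp_desc):
    # Normalize the ratio-test survivors by the smaller descriptor set,
    # since that is the maximum number of matches that could exist.
    smaller = min(n_query_desc, n_corp_desc)
    if smaller == 0:
        return 0.0  # no features in one image -> no basis for similarity
    return n_good / smaller

# e.g. 40 good matches out of at most 100 possible:
print(good_match_ratio(40, 100, 250))  # 0.4
```

This keeps the score in [0, 1] regardless of how many keypoints each detector found.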
CodePudding user response:
I have finally done this, which seems to work well:
def get_similarity_from_desc(approach, search_desc, idx_desc):
    if approach == 'sift' or approach == 'orb_sift':
        # BFMatcher with euclidean distance
        bf = cv.BFMatcher()
    else:
        # BFMatcher with hamming distance
        bf = cv.BFMatcher(cv.NORM_HAMMING)
    matches = bf.match(search_desc, idx_desc)
    # Distances between search and index features that match
    distances = [m.distance for m in matches]
    # Mean distance between search and index images
    distance = sum(distances) / len(distances)
    # If distance == 0 -> similarity = 1
    similarity = 1 / (1 + distance)
    return similarity
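The distance-to-similarity mapping at the end can be illustrated on its own, without OpenCV (assumption: the match distances have already been extracted, e.g. from BFMatcher.match; the helper name similarity_from_distances is hypothetical):

```python
def similarity_from_distances(distances):
    # Map a list of match distances to a similarity in (0, 1]:
    # identical descriptors give mean distance 0 -> similarity 1,
    # and larger mean distances shrink the score toward 0.
    if not distances:
        return 0.0  # no matches at all: treat the images as dissimilar
    mean_dist = sum(distances) / len(distances)
    return 1 / (1 + mean_dist)

print(similarity_from_distances([0.0, 0.0]))     # 1.0 (perfect match)
print(similarity_from_distances([120.0, 80.0]))  # 1/101, about 0.0099
```

Note that raw SIFT (L2) and ORB (Hamming) distances live on different scales, so scores produced with different matchers are not directly comparable to each other.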