Find an ith row and ith column such that all numbers in the row are zeros and all numbers in the column are ones


The question asks to find an index i such that the ith row contains only zeros and the ith column contains only ones (the value at their intersection doesn't matter), in linear time. If no i satisfies this condition, just return -1. Here is the best I could come up with:

import numpy

def zero_one(X):
    X = numpy.array(X)
    N = len(X)
    for i in range(N):
        original_value = X[i][i]
        X[i][i] = 0
        r = binaryToDecimal(X[i, :])  # assumed helper: converts a 0/1 vector to an integer in linear time
        X[i][i] = 1
        c = binaryToDecimal(X[:, i])
        X[i][i] = original_value
        if r == 0 and c == 2**N - 1:  # an all-ones column of length N reads as 2**N - 1
            return i
    return -1  # in case there's no i that satisfies the condition

Assuming that binaryToDecimal() runs in linear time and that column-wise slicing is available as in NumPy, this could be linear, but I doubt it.
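
For concreteness, a small example (my own, not from the original post): the matrix below should give i = 1, since row 1 is all zeros and column 1 is all ones off the diagonal, while a matrix with no such index should give -1.

example = [
    [1, 1, 0],
    [0, 0, 0],
    [0, 1, 1],
]
# expected result: 1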

CodePudding user response:

I believe it's doable in O(n) with a "tournament" strategy: first we eliminate all candidates but one, then we check whether the last candidate is a valid answer. I will give you a pseudo-code answer, because I don't know Python, but it should be easy to implement.

  1. Make a variable "champion" containing the first index. (This variable will store the current winning index of the tournament.)
  2. For each other index j, match it against the champion index: if X[champion][j] is 0, the champion stays, else the champion becomes j. (If X[i][j] is a 1, then the ith row contains a 1 and i is not valid; if it is a 0, then the jth column contains a 0 and j is not valid. Each match thus eliminates one possible candidate; see the short trace after this list.)
  3. Check whether the champion satisfies the conditions: if so, return the champion, else return -1. (All other indexes have been eliminated during step 2.)
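
For example, with the 3×3 matrix [[1, 1, 0], [0, 0, 0], [0, 1, 1]] (my own example): the champion starts at index 0; X[0][1] is 1, so the champion becomes 1; X[1][2] is 0, so the champion stays 1; the final check then confirms that row 1 is all zeros and column 1 is all ones off the diagonal, so 1 is returned.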

EDIT: Tried to add Python code.

def zero_one(m):
    n = len(m)
    # Tournament: each comparison eliminates one candidate.
    c = 0
    for j in range(1, n):
        if m[c][j] == 1:
            c = j
    # Validate the surviving candidate: row c must be all zeros and
    # column c all ones, ignoring the diagonal entry m[c][c].
    for i in range(0, c):
        if m[c][i] == 1 or m[i][c] == 0:
            return -1
    for i in range(c + 1, n):
        if m[c][i] == 1 or m[i][c] == 0:
            return -1
    return c
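
A quick sanity check (my own example, assuming the zero_one above):

print(zero_one([[1, 1, 0],
                [0, 0, 0],
                [0, 1, 1]]))  # prints 1: row 1 is all zeros, column 1 is all ones off the diagonal
print(zero_one([[0, 1],
                [1, 0]]))     # prints -1: no index satisfies both conditions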

The complexity is O(n): the tournament checks (n - 1) elements of the matrix, and validating the last candidate checks 2(n - 1) more.

If you have any questions, feel free to ask; I'll update accordingly.

CodePudding user response:

I believe it is simpler than it seems. Consider that if row X holds all zeros (apart from the intersection with column X), then no other column Y != X can hold all ones, because the entry at row X, column Y is a zero.

Hence we only need to check the rows: if we find a row X that is all zeros (apart from the intersection), we check column X and return X or -1 depending on whether that column holds all ones (apart from the intersection). If no row satisfies the criterion, we return -1 as well.

So in the worst case we have to check N rows and 1 column, i.e., O(N²) element reads, which is linear in the size of the input matrix.

def zero_one(mx):
    n = len(mx)
    for x in range(n):
        # Row x is all zeros, ignoring the diagonal entry mx[x][x].
        if sum(mx[x]) - mx[x][x] == 0:
            # Column x must sum to n - 1, ignoring the diagonal entry.
            return x if sum(list(zip(*mx))[x]) - mx[x][x] == n - 1 else -1
    return -1
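
The same quick check works here (my own example, assuming the zero_one above):

print(zero_one([[1, 1, 0],
                [0, 0, 0],
                [0, 1, 1]]))  # prints 1
print(zero_one([[0, 0],
                [0, 0]]))     # prints -1: row 0 is all zeros, but column 0 is not all ones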