cmd.executescalar() works but throws ORA-25191 Exception

My code works; the function returns the correct SELECT COUNT(*) value, but it still throws an ORA-25191 exception ("Cannot reference overflow table of an index-organized table") at retVal = Convert.ToInt32(cmd.ExecuteScalar());

Since I use the function very often, the exceptions slow down my program tremendously.

    private int getSelectCountQueryOracle(string Sqlquery)
    {
        try
        {
            int retVal = 0;
            using (DataTable dataCount = new DataTable())
            {
                using (OracleCommand cmd = new OracleCommand(Sqlquery))
                {
                    cmd.CommandType = CommandType.Text;
                    cmd.Connection = oraCon;

                    using (OracleDataAdapter dataAdapter = new OracleDataAdapter())
                    {
                        // ORA-25191 is thrown here, even though the count is returned
                        retVal = Convert.ToInt32(cmd.ExecuteScalar());
                    }
                }
            }
            return retVal;
        }
        catch (Exception ex)
        {
            exceptionProtocol("Count Function", ex.ToString());
            return 1;
        }
    }

This function is called in a foreach loop

    // function call in a foreach loop which goes through the table names
    foreach (DataRow row in dataTbl.Rows)
    {
        ...
        tableNameFromRow = row["TABLE_NAME"].ToString();
        tableRows = getSelectCountQueryOracle("select count(*) as 'count' from " + tableNameFromRow);
        tableColumns = getSelectCountQueryOracle("SELECT COUNT(*) as 'count' FROM INFORMATION_SCHEMA.COLUMNS WHERE table_name='" + tableNameFromRow + "'");
        ...
    }

dataTbl.Rows in this outer loop, in turn, comes from the query:

SELECT * FROM USER_TABLES ORDER BY TABLE_NAME

CodePudding user response:

This article helped me to solve my problem.

I've changed my query to this:

SELECT * FROM user_tables 
WHERE iot_type IS NULL OR iot_type != 'IOT_OVERFLOW' 
ORDER BY TABLE_NAME
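
For reference, a minimal sketch of how the adjusted query might be plugged back into the original C# flow, assuming oraCon is the already-open OracleConnection from the question:

// Sketch only: fill the table list with the filtered query,
// assuming oraCon is the open OracleConnection from the question.
string tableListSql =
    "SELECT * FROM user_tables " +
    "WHERE iot_type IS NULL OR iot_type != 'IOT_OVERFLOW' " +
    "ORDER BY TABLE_NAME";

DataTable dataTbl = new DataTable();
using (OracleCommand cmd = new OracleCommand(tableListSql, oraCon))
using (OracleDataAdapter adapter = new OracleDataAdapter(cmd))
{
    adapter.Fill(dataTbl);
}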

CodePudding user response:

If you're using a database-agnostic API like ADO.NET, you would almost always want to use the API's framework to fetch metadata rather than writing custom queries against each database's metadata tables. The various ADO.NET providers are much more likely to write data dictionary queries that handle all the corner cases, and those queries are much more likely to be optimized than the ones you'd write yourself. So rather than writing your own query to populate the dataTbl data table, you'd want to use the GetSchema method:

DataTable dataTbl = connection.GetSchema("Tables");  
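
As a rough sketch (not from the original answer), the resulting DataTable can drive the same per-table loop as before; note that the exact column names in the schema collection depend on the provider, and TABLE_NAME is assumed here:

// Sketch: iterate the provider-supplied table list.
// The TABLE_NAME column name is an assumption; check your provider's "Tables" schema collection.
foreach (DataRow row in dataTbl.Rows)
{
    string tableNameFromRow = row["TABLE_NAME"].ToString();
    // ... same per-table counting as before ...
}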

If you want to keep your custom-coded data dictionary query for some reason, you'd need to filter out the IOT overflow tables since you can't query those directly.

select *
  from user_tables
 where iot_type IS NULL
    or iot_type != 'IOT_OVERFLOW'

Be aware, however, that there are likely to be other tables that you don't want to try to get a count from. For example, the dropped column indicates whether a table has been dropped; presumably you don't want to count the rows in an object sitting in the recycle bin, so you'd want a dropped = 'NO' predicate as well. And you can't do a count(*) on a nested table, so you'd also want a nested = 'NO' predicate if your schema happens to contain nested tables. There are probably other corner cases, depending on the exact set of features your particular schema uses, that the provider's developers have already handled and that you'd otherwise have to deal with yourself.

So I'd start with

select *
  from user_tables
 where (   iot_type IS NULL
        or iot_type != 'IOT_OVERFLOW')
   and dropped = 'NO'
   and nested = 'NO'

but know that you'll probably need or want to add some additional filters depending on the specific features your users make use of. I'd certainly much rather let the fine folks who develop the ADO.NET provider worry about all those corner cases than track them all down myself.

Taking a step back, though, I'd question why you're regularly doing a count(*) on every table in a schema and why you need an exact answer. In most cases where you're doing counts, you're either doing a one-off where you don't much care how long it takes (e.g. a validation step after a migration), or approximate counts would be sufficient (e.g. getting a list of the biggest tables in the system in order to triage some effort, or tracking growth over time for projections). In the latter case you could just use the counts that are already stored in the data dictionary (user_tables.num_rows) from the last time statistics were gathered.
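
If approximate counts are good enough, a sketch of that approach (num_rows is NULL for tables whose statistics have never been gathered, so treat missing values accordingly):

select table_name, num_rows
  from user_tables
 where (   iot_type IS NULL
        or iot_type != 'IOT_OVERFLOW')
   and dropped = 'NO'
   and nested = 'NO'
 order by table_name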
