Sybase bulk insert performance with large data volumes: how to improve it?

Time:10-09

My current task is to import CSV files generated by a backend process into a Sybase database. Because each record has to be examined to determine its target table, rows can only be inserted after being read one by one, so I assemble the batch-insert SQL myself after that judgment, for example:

    insert into WG_float_202007 (ObjectId, AttributeId, Seconds, Nanoseconds, Type, ErrorValue, Value)
    values (527433753, 630, 1592477964, 74700000, 'r', 0, 953.000000)
    union all (select 527433753, 630, 1592477924, 984800000, 'r', 0, 940.000000)
    union all (select 527433753, 630, 1592477864, 861200000, 'r', 0, 920.000000)
    union all (select 527433753, 630, 1592477937, 17500000, 'r', 0, 944.000000)
    union all (select 527433753, 630, 1592477894, 922500000, 'r', 0, 930.000000)
    union all (select 527433753, 630, 1592477855, 842200000, 'r', 0, 917.000000)

This is the only batch-insert technique I could find, but it is slow (at most about 2,000 rows per second), and inserting too many rows at once raises the error "There is not enough procedure cache to run this procedure, trigger, or SQL batch. Retry later, or ask your SA to reconfigure ASE with more procedure cache."

I am using jconn4 against Sybase ASE 15.7, in a Java project. With millions of rows to insert, what should I do? How should the SQL be written, and how can it be optimized?

CodePudding user response:

Very simple: after your per-row judgment, use PreparedStatement.addBatch(), and at the end call PreparedStatement.executeBatch() to do the bulk insert.
At the same time, make sure ENABLE_BULK_LOAD is turned on for the JDBC connection, and keep each batch to no more than 5,000 rows; with jConnect 7.07 or 16.0 and later you can raise that to 20,000 for the best effect.
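
Roughly like this, as a minimal sketch: the host, port, database, credentials, and the ENABLE_BULK_LOAD value below are placeholders, and the accepted values for that property differ across jConnect versions, so check the driver documentation for yours.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.List;
    import java.util.Properties;

    public class SybaseBatchInsert {

        // Minimal row holder matching the WG_float_202007 columns from the question.
        static class Row {
            long objectId; int attributeId; long seconds; long nanoseconds;
            String type; int errorValue; double value;
        }

        static void insertRows(List<Row> rows) throws Exception {
            Properties props = new Properties();
            props.put("user", "sa");            // placeholder credentials
            props.put("password", "secret");
            // ENABLE_BULK_LOAD is a jConnect connection property; the value
            // here is illustrative, since accepted values differ between
            // driver versions. Properties can also be appended to the URL
            // after a "?" instead of being passed in this Properties object.
            props.put("ENABLE_BULK_LOAD", "ARRAYINSERT");

            String url = "jdbc:sybase:Tds:dbhost:5000/mydb"; // adjust host/port/db

            String sql = "insert into WG_float_202007 "
                    + "(ObjectId, AttributeId, Seconds, Nanoseconds, Type, ErrorValue, Value) "
                    + "values (?, ?, ?, ?, ?, ?, ?)";

            try (Connection conn = DriverManager.getConnection(url, props);
                 PreparedStatement ps = conn.prepareStatement(sql)) {
                conn.setAutoCommit(false);      // commit per batch, not per row
                int count = 0;
                for (Row r : rows) {
                    ps.setLong(1, r.objectId);
                    ps.setInt(2, r.attributeId);
                    ps.setLong(3, r.seconds);
                    ps.setLong(4, r.nanoseconds);
                    ps.setString(5, r.type);
                    ps.setInt(6, r.errorValue);
                    ps.setDouble(7, r.value);
                    ps.addBatch();
                    if (++count % 5000 == 0) {  // flush every 5,000 rows, per the advice above
                        ps.executeBatch();
                        conn.commit();
                    }
                }
                ps.executeBatch();              // flush the remainder
                conn.commit();
            }
        }
    }

Each "union all" batch in the question is a new ad-hoc statement that ASE has to compile, which is likely why the procedure cache fills up; a single reused PreparedStatement with parameter markers avoids that.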

CodePudding user response:

Wrong tool for the job: this doesn't need JDBC at all.
It's even simpler to just use the bcp utility; use -t to specify the column separator, and that's it.

CodePudding user response:

Yes, I am using JDBC, with the jconn4 jar I found online. Specifically, how do I turn on ENABLE_BULK_LOAD? It would be best if you could add me as a contact.

CodePudding user response:

Use bcp to import the CSV file directly into the Sybase database; the -t parameter specifies the comma separator.
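
For example (server name, database, login, password, and file path below are placeholders):

    bcp mydb..WG_float_202007 in data.csv -c -t, -U myuser -P mypass -S MYSERVER

Here -c selects character (text) mode and -t, sets the field terminator to a comma; bcp runs outside the server, so the Sybase client utilities must be installed on the machine holding the CSV files.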