I checked a few places in the manual and found two problems.

1. With the range partitions p0=20, p1=40, p2=60, p3=80, p4=maxvalue:
After I deleted all the rows with values >= 80, the data file for p4 was gone.
I then inserted rows with values above 80 again, but the p4 file did not reappear. Checking the table definition, it had changed to
p0=20, p1=40, p2=60, p3=maxvalue
Is this normal?
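For reference, a setup matching the layout described above might look like the following sketch (the table and column names are assumptions, not from the original post):

```sql
-- Hypothetical table with the partition layout described above.
CREATE TABLE t1 (
  id  INT NOT NULL,
  val INT NOT NULL,
  PRIMARY KEY (id, val)   -- the partition column must be part of every unique key
)
PARTITION BY RANGE (val) (
  PARTITION p0 VALUES LESS THAN (20),
  PARTITION p1 VALUES LESS THAN (40),
  PARTITION p2 VALUES LESS THAN (60),
  PARTITION p3 VALUES LESS THAN (80),
  PARTITION p4 VALUES LESS THAN MAXVALUE
);
```

Note that `ALTER TABLE t1 DROP PARTITION p4;` removes both the rows and the partition definition itself, which is one way a partition's data file can disappear along with its entry in the table definition.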
2. To redistribute the data, I adjusted the partitioning, for example from
p0=20, p1=40, p2=60, p3=80, p4=maxvalue
to
p0=20, p1=40, p2=60, p3=80, p4=100, p5=maxvalue
I found that the same number of
#sql-…#P#px
temporary files were generated, and then the old
tablename#P#px
files were deleted one by one, after which the temporary files were gone as well.
Does this mean that any repartitioning rewrites every data record?
This question is killing me. (I had planned to partition a little at a time, precisely to spread the cost of partition allocation across different times!)
Experts, please help me out.
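When only the MAXVALUE catch-all needs splitting, REORGANIZE PARTITION can be restricted to that one partition, so only its rows are copied rather than the whole table. A sketch, using the partition names from the example above (table name t1 is an assumption):

```sql
-- Split only the catch-all partition; rows in p0..p3 are not rewritten.
ALTER TABLE t1
REORGANIZE PARTITION p4 INTO (
  PARTITION p4 VALUES LESS THAN (100),
  PARTITION p5 VALUES LESS THAN MAXVALUE
);
```

By contrast, a full `ALTER TABLE … PARTITION BY RANGE (…)` statement rebuilds the entire table, which matches the temporary-file behavior described above.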
CodePudding user response:
Strange. Let me add a third question:
3. I have a table with data, a primary key, and an index; the partition column (a smallint) has already been added to the primary key. Running

alter table t1 partition by range (year) (
    partition p0 values less than (2018),
    partition p1 values less than maxvalue
);

fails with error 1217 (a foreign key constraint fails)...
I ran
set foreign_key_checks=0;
beforehand, but it still fails.
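MySQL does not support foreign keys on partitioned tables, and `foreign_key_checks=0` only skips validation of the constraints; it does not remove them. The foreign keys referencing (or defined on) the table have to be dropped before partitioning it. A sketch, where the referencing table t2 and constraint name fk_t2_t1 are assumptions:

```sql
-- Drop the foreign key first; SET foreign_key_checks=0 is not enough.
ALTER TABLE t2 DROP FOREIGN KEY fk_t2_t1;   -- hypothetical referencing table

ALTER TABLE t1 PARTITION BY RANGE (year) (
  PARTITION p0 VALUES LESS THAN (2018),
  PARTITION p1 VALUES LESS THAN MAXVALUE
);
```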
CodePudding user response:
Found that cannot use foreign key constraints after partition4. That the amount of data is very big, and foreign key constraints, step by step a multiple tables for efficiency have to cancel the foreign keys, so how to ensure complete data correlation?
Is implemented in the business logic, or use the trigger?
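As a sketch of the trigger approach (the table and column names here are assumptions for illustration), a BEFORE INSERT trigger can reject child rows that have no matching parent:

```sql
-- Hypothetical child table 'orders' referencing parent table 'customers'.
DELIMITER //
CREATE TRIGGER orders_check_customer
BEFORE INSERT ON orders
FOR EACH ROW
BEGIN
  IF NOT EXISTS (SELECT 1 FROM customers WHERE id = NEW.customer_id) THEN
    SIGNAL SQLSTATE '45000'
      SET MESSAGE_TEXT = 'customer_id does not exist';
  END IF;
END //
DELIMITER ;
```

Note that triggers only cover the paths you write them for: to fully mimic a foreign key you would also need UPDATE triggers on the child and DELETE/UPDATE triggers on the parent, and even then concurrent transactions can race in ways a real constraint would prevent.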
CodePudding user response:
In production, foreign key constraints are basically never used; referential integrity is handled by the application. Another important reason to avoid foreign keys is that it keeps online DDL possible.
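For example, without foreign keys in the way you can ask MySQL to perform a schema change in place and without blocking writes, and have it fail fast if that is not possible (table and column names here are assumptions):

```sql
-- Request an online, in-place schema change; MySQL returns an error
-- instead of silently falling back to a table copy if it cannot
-- satisfy ALGORITHM=INPLACE / LOCK=NONE.
ALTER TABLE t1
  ADD COLUMN note VARCHAR(64) NULL,
  ALGORITHM=INPLACE, LOCK=NONE;
```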