How should this interface program be designed?

Time: 11-23

I am developing an interface program in Java that is used to receive incremental data. How should the interface be designed, and what should the input and output parameters be?

How do I distinguish incremental data from existing (stock) data: by primary key ID, or by timestamp?

The interface is external: customers will call it at irregular intervals to synchronize their incremental data into our database. Every newly inserted record must arrive exactly once; nothing may be duplicated and nothing may be missed.

Sample code would be much appreciated, thank you!

CodePudding user response:

It depends on the actual situation. If the data is insert-only, like a log, the primary key ID is enough. If records can also be modified or deleted, I suggest adding a version field: increment the version number on every insert, and record a new version number on every modify or (soft) delete. That way the client can query incremental changes by version number.
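
For illustration, here is a minimal JDBC sketch of that version-number approach; the table name t_order, the deleted flag, and the column names are assumptions made up for this example, not anything from the original answer:

import java.sql.*;
import java.util.*;

public class VersionedFetch {

    /**
     * Fetches every row changed since the last version the caller saw.
     * Assumes a table t_order with a monotonically increasing version column
     * that is bumped on every insert/update and on (soft) delete.
     */
    public static List<Map<String, Object>> fetchSince(Connection conn, long lastVersion)
            throws SQLException {
        String sql = "SELECT id, payload, deleted, version FROM t_order"
                + " WHERE version > ? ORDER BY version";
        List<Map<String, Object>> rows = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, lastVersion);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    Map<String, Object> row = new HashMap<>();
                    row.put("id", rs.getLong("id"));
                    row.put("payload", rs.getString("payload"));
                    row.put("deleted", rs.getBoolean("deleted")); // soft-deleted rows still sync
                    row.put("version", rs.getLong("version"));
                    rows.add(row);
                }
            }
        }
        return rows; // the caller remembers max(version) for the next call
    }
}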

CodePudding user response:

 
/**
 * Merges data (insert and update); the primary key may consist of multiple columns.
 * Note: the generic type arguments were stripped by the forum software, so Map rows
 * and String column names are assumed below.
 *
 * @param dbset        database configuration
 * @param table        table name
 * @param list         the data
 * @param targetMain   primary key columns of the target table
 * @param targetColumn all columns of the target table (including the primary key)
 * @param sourceMain   primary key columns of the data source
 * @param sourceColumn all columns of the data source (including the primary key)
 * @throws Exception   on failure
 */
public static void merge(DatabaseSet dbset, String table, List<Map<String, Object>> list,
        List<String> targetMain, List<String> targetColumn,
        List<String> sourceMain, List<String> sourceColumn) throws Exception {
    if (Database.saphana.equals(dbset.database())) {
        // SAP HANA supports UPSERT directly: rewrite the generated INSERT
        // and append the primary key clause
        String sql = SqlUtil.insert(dbset, table, targetColumn)
                .replaceAll("insert into", "upsert") + SqlUtil.joinMain(dbset, targetMain);
        sourceColumn.addAll(sourceMain);
        BatchUtil.manage(dbset.connect(), sql, list, sourceColumn);
        return;
    }
    // Non-key columns first, key columns last, matching UPDATE ... SET col = ? WHERE key = ?
    List<String> sourceUpdateColumn = new ArrayList<>();
    for (String s : sourceColumn) {
        if (!sourceMain.contains(s)) {
            sourceUpdateColumn.add(s);
        }
    }
    sourceUpdateColumn.addAll(sourceMain);
    String insert = SqlUtil.insert(dbset, table, targetColumn);
    String update = SqlUtil.update(dbset, table, targetColumn, targetMain);
    // Split incoming rows: keys already present in the table become updates, the rest inserts
    List<Map<String, Object>> inserts = new ArrayList<>();
    List<Map<String, Object>> updates = split(dbset, list, inserts, table, targetMain, sourceMain);
    Object[] values = {inserts.size(), updates.size()};
    info(EntityUtil.toJson(EntityUtil.create(Map.class, new String[]{"insert", "update"}, values, false)));
    if (updates.size() > 0) {
        BatchUtil.manage(dbset.connect(), update, updates, sourceUpdateColumn);
    }
    if (inserts.size() > 0) {
        BatchUtil.manage(dbset.connect(), insert, inserts, sourceColumn);
    }
}

CodePudding user response:

It depends on what means you use to synchronize the data (mainly something to coordinate with the external customer).
I have just finished an on-premise-to-cloud migration where, because the data volume could be quite large, we synchronized incremental data through files: the customer extracts the incremental data at irregular intervals (based on a timestamp; of course, if your primary key ID is auto-incremented, the primary key ID works too) into a file and uploads it to our server. A batch job watching the server detects the new file and loads its data into the database.
As for guaranteeing that newly inserted records arrive exactly once, with no duplicates and nothing missed, it mainly depends on your approach. On my side, the client extracts with a timestamp window that starts slightly before the end timestamp of the previous extraction (so each data file deliberately contains a small amount of repeated data), and the batch job filters out the duplicates before inserting. A sketch of both sides follows.
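
Here is a minimal JDBC sketch of that overlap-and-filter idea; the table names, column names, and the one-minute overlap are illustrative assumptions, not the actual project code:

import java.sql.*;
import java.util.*;

public class OverlapExtract {
    // Start each extraction window slightly before the previous end timestamp,
    // so rows committed late are not missed; duplicates are filtered on load.
    private static final long OVERLAP_MS = 60_000;

    /** Client side: extract rows changed since (lastEndTs - overlap). */
    public static List<Map<String, Object>> extract(Connection conn, Timestamp lastEndTs)
            throws SQLException {
        Timestamp from = new Timestamp(lastEndTs.getTime() - OVERLAP_MS);
        List<Map<String, Object>> rows = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT id, payload, updated_at FROM t_source WHERE updated_at >= ?")) {
            ps.setTimestamp(1, from);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    Map<String, Object> row = new HashMap<>();
                    row.put("id", rs.getLong("id"));
                    row.put("payload", rs.getString("payload"));
                    row.put("updated_at", rs.getTimestamp("updated_at"));
                    rows.add(row);
                }
            }
        }
        return rows;
    }

    /** Server side: skip rows whose primary key has already been imported. */
    public static boolean alreadyImported(Connection conn, long id) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT 1 FROM t_target WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}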

CodePudding user response:

If you did not misstate it and I understood correctly, the situation is:
1. You develop an interface for others to call.
2. The data lives on the customer's side; they regularly call the interface to push incremental data to your platform.
3. You save the incremental data.

For this requirement, I recommend:
1. Expose an HTTP interface and have the other side send the data to you over HTTP, in JSON form; of course, also consider issues such as authentication.
2. The data should carry a primary key. If the customer is unwilling to provide one, there should at least be one or more fields that uniquely identify a record (a candidate key), so you can decide whether an incoming record should be updated or inserted.
3. After receiving the data, it is best to respond to the customer with adequate information: for example, if they sent 100 records, return the processing result of each one (insert, update, failure) so they can resend the failures; if that is inconvenient, at least return an overall result for the 100 records. (A sketch of points 1 and 3 follows this list.)
4. Plan for the unexpected: the customer may send data just as your server crashes, a router dies, or the machine room loses power, so the data is never received. Require the other side to have a retransmission mechanism for failures.
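
As a sketch of points 1 and 3 (not a production implementation), here is a minimal endpoint using the JDK's built-in com.sun.net.httpserver plus Jackson for JSON; the /sync path, the X-Auth-Token header, and the per-record result fields are illustrative assumptions:

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.*;

public class SyncEndpoint {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/sync", exchange -> {
            // Token check (hypothetical header name); reject unauthenticated callers
            String token = exchange.getRequestHeaders().getFirst("X-Auth-Token");
            if (!"expected-token".equals(token)) {
                exchange.sendResponseHeaders(401, -1);
                return;
            }
            List<Map<String, Object>> records = MAPPER.readValue(
                    exchange.getRequestBody(), new TypeReference<List<Map<String, Object>>>() {});
            // Build a per-record result so the caller knows exactly what to resend
            List<Map<String, Object>> results = new ArrayList<>();
            for (Map<String, Object> r : records) {
                Map<String, Object> res = new HashMap<>();
                res.put("id", r.get("id"));
                res.put("status", "insert"); // insert / update / failure after real processing
                results.add(res);
            }
            byte[] body = MAPPER.writeValueAsBytes(results);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}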

CodePudding user response:

An even simpler method: agree on a data file format, stand up an FTP server, have the other side upload data files to it, and scan the directory regularly (or watch it) for new files.
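
A minimal sketch of the scanning side using the JDK's WatchService; the inbox directory is an illustrative assumption, and the actual import logic is left as a stub:

import java.nio.file.*;

public class InboxWatcher {
    public static void main(String[] args) throws Exception {
        Path inbox = Paths.get("/data/ftp/inbox"); // directory the customer uploads to
        WatchService watcher = FileSystems.getDefault().newWatchService();
        inbox.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);
        while (true) {
            WatchKey key = watcher.take(); // blocks until a new file appears
            for (WatchEvent<?> event : key.pollEvents()) {
                Path file = inbox.resolve((Path) event.context());
                System.out.println("Importing " + file);
                // parse the agreed file format and load it into the database here
            }
            key.reset();
        }
    }
}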

CodePudding user response:

Generally a few tables are created to store the data:
1. An interface request table, recording every call: client ID, call time, amount of data transferred (number of records), a transmission batch number (an auto-incremented primary key or a custom value), the type of the transmitted data (so the server can process different types in different ways, i.e. polymorphism), and so on.
2. An interface data table, storing the records from each transmission together with their processing state. Bulk inserts carry the batch number from table 1, which makes the data easy to trace, and each record's state is updated once it has been processed. This table grows over time, so plan for data migration or purging later on. (A sketch of both tables follows.)
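
As a sketch only, the two bookkeeping tables might look like this; every table and column name is an illustrative assumption, and the SQL types should be adjusted to your database:

import java.sql.*;

public class SyncSchema {
    /** Creates the two bookkeeping tables described above. */
    public static void create(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            // Table 1: one row per interface call
            st.execute("CREATE TABLE if_request ("
                    + "batch_no BIGINT PRIMARY KEY, "   // auto-increment or custom batch number
                    + "client_id VARCHAR(64), "
                    + "call_time TIMESTAMP, "
                    + "record_count INT, "
                    + "data_type VARCHAR(32))");        // drives type-specific processing
            // Table 2: one row per transmitted record, linked by batch_no
            st.execute("CREATE TABLE if_data ("
                    + "id BIGINT PRIMARY KEY, "
                    + "batch_no BIGINT, "               // references if_request for tracing
                    + "payload CLOB, "
                    + "state VARCHAR(16))");            // e.g. RECEIVED / PROCESSED / FAILED
        }
    }
}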

Interface design:
1. Decide whether one request transmits a single record or multiple records (if multiple, set a maximum number of records per transmission).
2. The request data needs a field that uniquely identifies a record (possibly a combination of several fields, or an ID value assigned by the business logic), used to detect repeated transmission of the same data.
3. Decide whether requests need authentication, to prevent outsiders from submitting data for illegitimate purposes; this is usually done with a token.
4. Decide whether the request data needs verification, to prevent data from being intercepted in transit, altered, and then passed on to the server. Typically the client puts an MD5 digest (digital signature) of the data plus the client ID into the request, and the server verifies it with the same algorithm (see the sketch after this list).
5. Decide whether the request data needs encryption, to prevent it from being inspected in transit by a third party or program and leaking.
6. This design uses push, with the client pushing at irregular intervals. So require the client to have a retransmission mechanism for failed pushes so that data is not lost, and make the server able to recognize retransmitted data when it receives it.
7. Receiving pushed data raises the problem of server overload: if too much data arrives at once and the server cannot keep up, the client's push may time out. If possible, the server should judge its own load and, when overloaded, skip processing and tell the client directly to retry later.
8. The response content usually contains an error code and a text explanation. The interface protocol document may include an appendix with an error code table stating which error code corresponds to which error scenario, so the client can improve its handling based on the error code.
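
A sketch of the digital signature in point 4: the client sends MD5(clientId + body + secret) alongside the request and the server recomputes it with the same inputs. The concatenation order and the shared secret are illustrative assumptions:

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class SignUtil {
    /**
     * Server-side verification: recompute the digest over the same inputs the
     * client used and compare it with the signature sent in the request.
     */
    public static boolean verify(String clientId, String body, String sharedSecret,
            String signature) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] digest = md.digest(
                (clientId + body + sharedSecret).getBytes(StandardCharsets.UTF_8));
        String expected = String.format("%032x", new BigInteger(1, digest));
        return expected.equalsIgnoreCase(signature);
    }
}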

The above is the push style of data transfer that the original poster described.
You can also pull instead: poll the data source server at regular intervals; if it has incremental data it transmits it over, and if not it simply replies with a "no data" code.

CodePudding user response:

This should definitely be distinguished by ID. If you go by time, data from a few days back may still trickle in late and be missed.

CodePudding user response:

I feel the original poster's biggest problem is how to define the increment, not how to define the interface, the database, JSON, HTTP, and so on. Defining the increment is the prerequisite: what counts as incremental data has to be defined according to the actual business situation.

CodePudding user response:

Since others call your interface and push data as the increment, define a String parameter that receives a JSONArray-formatted String, convert that String into a List, and with that as the incremental data call the merge method above to write it into the database, as sketched below.

A full load works the same way, it is just more data; if the data set is especially large, have the other side call the interface a few more times in batches.
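
A minimal sketch of that receiving method, here using Jackson to parse the JSONArray-formatted String (the original poster did not say which JSON library they used); it assumes it lives in the same class as the merge method shown earlier, whose DatabaseSet and column-list parameters are passed through:

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.*;

public class PushReceiver {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    /** Turns the pushed JSON-array String into a List and merges it into the database. */
    public static void receive(String jsonArray, DatabaseSet dbset, String table,
            List<String> targetMain, List<String> targetColumn,
            List<String> sourceMain, List<String> sourceColumn) throws Exception {
        List<Map<String, Object>> list = MAPPER.readValue(
                jsonArray, new TypeReference<List<Map<String, Object>>>() {});
        merge(dbset, table, list, targetMain, targetColumn, sourceMain, sourceColumn);
    }
}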
