Is DBLink the best way for one system to frequently exchange data with multiple systems?

Time:09-15

Status: We currently have a student management system that holds student information, department information, major information, class information, and so on. Other school systems also exist (procurement, research, etc.); they never need to write this data, only read it for related statistics or sorted queries (example: system B stores only student IDs and needs to query or sort students by major and class). At present we handle this in two ways:
1. Through DBLink, but in the long run DBLink becomes too chaotic to manage.
2. A third party imports a backup of the database into its own system; the timeliness of this solution is a big problem, and backup and restore consume too much manpower.

Requirements:
1. A more reasonable solution to the 1-to-N data-sharing problem, one that can keep supporting additional systems as they are added in the future.
2. What is the best way to provide data to third-party systems? Considering read efficiency, concurrency, and security, DBLink cannot be ruled out if those are the criteria.
3. Is there a solution that provides the data through a WebAPI? We considered it, but could not come up with a suitable approach for large joined queries and sorting.

I'm quite stuck, so any ideas would be much appreciated. I only have a few points to offer, but please don't pass this by.

CodePudding user response:

There are really only two approaches: access the other database's data through a dblink, or import the other system's data into the local database and read it there. There is no other particularly good way.

You can also turn the data import into a scheduled script task, or use a third-party ETL tool to import the data, but it generally comes down to the two approaches above.
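The scheduled-script idea above can be sketched as a minimal one-shot import, here using SQLite in place of Oracle. The table and column names (`students`, `students_copy`, `student_id`, `major`, `class_name`) are hypothetical; a real job would be triggered by cron or a scheduler and would likely load incrementally rather than doing a full refresh.

```python
import sqlite3

def sync_students(source: sqlite3.Connection, target: sqlite3.Connection) -> int:
    """One ETL run: copy only the columns the downstream system needs.

    Simple full refresh for illustration; a production job would load
    incrementally and run on a schedule.
    """
    rows = source.execute(
        "SELECT student_id, major, class_name FROM students"
    ).fetchall()
    target.execute("DELETE FROM students_copy")  # wipe the old snapshot
    target.executemany(
        "INSERT INTO students_copy (student_id, major, class_name) VALUES (?, ?, ?)",
        rows,
    )
    target.commit()
    return len(rows)
```

Note the SELECT pulls only the columns system B needs; the source table's other columns never leave the source database.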

CodePudding user response:

Would providing the data through a WebAPI work to support this kind of case?

CodePudding user response:

This is a cross-database query problem; our company solves it with OGG (Oracle GoldenGate).

1. Use OGG to replicate the relevant tables of the student management system in real time to the other databases that need the data.
Advantages: real-time data, and only the needed tables are synchronized; works well.
Disadvantage: expensive.

2. Use an ETL tool to push the data on a schedule to the other databases that need it.
Advantages: free, easy to maintain.
Disadvantage: the data is not real-time.

3. API: write a stored procedure that receives parameters and returns its result set through a cursor; the consuming system then queries its own database, and the two result sets are joined in application code via set operations. The stored procedure takes the place of an API here.
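The third approach above — pull one result set from the remote procedure's cursor, query the local database, then combine the two in code — can be sketched like this. The row shapes and student IDs are made up for illustration; in practice each input would come from a database cursor.

```python
def join_in_app(remote_rows, local_ids):
    """Join two result sets in application code via set/dict operations.

    remote_rows: (student_id, major) pairs returned by the remote
                 stored procedure's cursor.
    local_ids:   the set of student IDs the local system holds.
    """
    major_by_id = {sid: major for sid, major in remote_rows}
    matched = local_ids & major_by_id.keys()  # set intersection
    return sorted((sid, major_by_id[sid]) for sid in matched)
```

As the thread notes later, this works but scales poorly: both sides must be materialized in the application, so large joins and sorts get slow.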

CodePudding user response:

Either a web interface; or real-time synchronization that pulls the data to local persistent storage and queries it there; or, if you only want to query the data, keep it in memory. It depends on the situation, but the workload will be large.

CodePudding user response:

Quoting the 3rd-floor reply:
this is a cross-database query problem; our company solves it with OGG (Oracle GoldenGate).

1. Use OGG to replicate the relevant tables of the student management system in real time to the other databases that need the data.
Advantages: real-time data, and only the needed tables are synchronized; works well.
Disadvantage: expensive.

2. Use an ETL tool to push the data on a schedule to the other databases that need it.
Advantages: free, easy to maintain.
Disadvantage: the data is not real-time.

3. API: write a stored procedure that receives parameters and returns its result set through a cursor; the consuming system then queries its own database, and the two result sets are joined in application code via set operations. The stored procedure takes the place of an API here.


Party A previously transmitted and synchronized data through ODI; I have not worked with OGG or ETL. The main problem is that once the data is synchronized out of Party A, it can no longer be managed.
We thought about the API approach you describe; a previous project did it that way, but it was far too slow.

CodePudding user response:

Quoting Nayi_224's 4th-floor reply:
either a web interface; or real-time synchronization that pulls the data to local persistent storage and queries it there; or, if you only want to query the data, keep it in memory. It depends on the situation, but the workload will be large.


We don't want to use a synchronization scheme right now because of Party A's confidentiality requirements, and the consuming systems belong to various third-party companies; synchronization is the last resort. As for in-memory queries, some complex queries simply cannot be satisfied that way.

CodePudding user response:

Quoting hongliangc5dn's 6th-floor reply:
Quoting the 3rd-floor reply:
this is a cross-database query problem; our company solves it with OGG (Oracle GoldenGate).

1. Use OGG to replicate the relevant tables of the student management system in real time to the other databases that need the data.
Advantages: real-time data, and only the needed tables are synchronized; works well.
Disadvantage: expensive.

2. Use an ETL tool to push the data on a schedule to the other databases that need it.
Advantages: free, easy to maintain.
Disadvantage: the data is not real-time.

3. API: write a stored procedure that receives parameters and returns its result set through a cursor; the consuming system then queries its own database, and the two result sets are joined in application code via set operations. The stored procedure takes the place of an API here.


Party A previously transmitted and synchronized data through ODI; I have not worked with OGG or ETL. The main problem is that once the data is synchronized out of Party A, it can no longer be managed.
We thought about the API approach you describe; a previous project did it that way, but it was far too slow.



So your situation is that they want to use the data, but you don't want them to see all of it, and you worry about the related security problems.

Since they are allowed to take this part of the data, you can control it through auditing, because permissions alone cannot control it at that point.

An ETL tool can synchronize only selected fields. For example, if your student information table has 100 columns and you only want to give them 10, the sensitive fields are simply not synchronized, or are MD5-hashed; just synchronize those 10 columns specifically and you're done.

Then put the data in a dedicated schema in the target library, grant the business schema only read-only privileges on it, disallow changes, and audit every read in Oracle.

The ODI you mentioned is one kind of ETL tool.
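The MD5 masking mentioned above can be sketched as a per-row transform applied during the sync. The column positions and values are hypothetical; note that MD5 is a one-way hash, which hides the value while keeping equal values comparable, not reversible encryption.

```python
import hashlib

def mask_row(row, sensitive_indices):
    """Replace sensitive columns with their MD5 hex digest before syncing.

    row:               one tuple of column values from the source table.
    sensitive_indices: positions of the columns that must not leave
                       the source system in plain text.
    """
    return tuple(
        hashlib.md5(str(value).encode("utf-8")).hexdigest()
        if index in sensitive_indices else value
        for index, value in enumerate(row)
    )
```

The downstream system can still group or join on the hashed column (equal inputs hash equally) without ever seeing the original value.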

CodePudding user response:

Also, about the third option, the API approach, which you said you tried and found very slow:

Consider the applications that moved from Oracle to MySQL with vertical and horizontal sharding; to obtain a complete data set they have to touch N databases. Developers there use middleware, or other means, to gather all the data and then compute and render it.

The application's backend servers can be clustered, and you can add CPU, memory, caching, and so on. There is always some solution.

CodePudding user response:

Two solutions:
1. Use an ETL tool to pull the data into a single unified database. The drawback is that the data cannot be real-time, but most reporting has low real-time requirements, so this is the mainstream approach.
2. Use a reporting tool that connects to multiple databases at the same time and joins them on the front-end page. The advantage is real-time data; the disadvantage is possible performance problems with complex statements, which in serious cases can bring the business system's database down.

CodePudding user response:

Hard to believe that in 2020 anyone still wants to produce cross-system reports with dblink or raw API interfaces; that is too amateur.