GBase8s cm configuration method


GBase8s same-city disaster recovery high-availability cluster configuration method
Tips:
In a GBase8s same-city disaster recovery high-availability cluster there can be at most one disaster recovery node. Replication between the nodes is based on the logical logs, so the databases must run in logging mode.
Building the cluster requires the following conditions to be met:
• The database server version is the same on every node
• The hardware and operating system versions of the nodes are essentially identical
• All replicated databases must have logging enabled
• The instance installation paths are consistent
Suggestion: the hardware platform and operating system of every node server should be exactly the same.
2.3.1. Database parameter configuration
1) Modify the sqlhosts files so that the sqlhosts file on both the primary and the secondary contains the connection information of both instances
[Primary:]
[root@redhat25 hac_54]# cat /etc/sqlhosts.ol_hac_pri
ol_hac onsoctcp 192.168.152.26 23697
ol_hac_pri onsoctcp 192.168.152.25 15723
dr_hac_pri drsoctcp redhat25 dr_hac_pri
lo_hac_pri onsoctcp 127.0.0.1 lo_hac_pri
[Secondary:]
[root@redhat26 hac_54]# cat /etc/sqlhosts.ol_hac
ol_hac_pri onsoctcp 192.168.152.25 15723
ol_hac onsoctcp 192.168.152.26 23697
dr_hac drsoctcp redhat26 dr_hac
lo_hac onsoctcp 127.0.0.1 lo_hac
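For reference, a minimal sketch of the environment on the primary so that the instance picks up the sqlhosts file above; the installation path and onconfig file name are assumptions, adjust them to the actual environment:
export GBASEDBTDIR=/home/hac_54                     # assumed installation directory
export GBASEDBTSERVER=ol_hac_pri
export GBASEDBTSQLHOSTS=/etc/sqlhosts.ol_hac_pri
export ONCONFIG=onconfig.ol_hac_pri                 # assumed onconfig file name
export PATH=$GBASEDBTDIR/bin:$PATH
On the secondary the same variables point to ol_hac and /etc/sqlhosts.ol_hac.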
2) The ROOT dbspace parameters must be the same on both servers
ROOTNAME rootdbs
ROOTPATH /home/hac_54/storage/rootdbs
ROOTOFFSET 0
ROOTSIZE 1024000
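Once an instance is up, the root dbspace layout can be checked on each node with onstat (chunk sizes in onstat -d are reported in pages), for example:
onstat -d | grep rootdbs    # dbspace and chunk lines for rootdbs, including path, offset and size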
3) The physical/logical log configuration parameters must be the same
PHYSFILE 189440
PLOG_OVERFLOW_PATH $GBASEDBTDIR/tmp
PHYSBUFF 512

LOGFILES 18
LOGSIZE 6144
DYNAMIC_LOGS 2
LOGBUFF 256
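After initialization the resulting log layout can be verified on each node, for example:
onstat -l    # shows physical-log usage and the list of logical-log files (LOGFILES files of LOGSIZE KB each)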
4) The HDR-related parameters must be the same
DRAUTO 3 // 3 means failover is handled by CM (Connection Manager) arbitration
// 0: on an HDR failure the server type is not switched automatically
// 1: on an HDR failure the secondary automatically switches to the standard type; when the HDR link recovers, the original secondary automatically switches back to the secondary type
// 2: on an HDR failure the secondary automatically switches to primary; when the HDR link recovers, the original secondary remains the primary and the original primary switches to the secondary type
DRINTERVAL -1 // -1 means synchronous updates
DRTIMEOUT 30 // how long the ping thread of each of the two database servers in the HDR pair waits for a TCP/IP response from the other; the maximum waiting time before both sides conclude that the network has failed and HDR has failed is WAIT_TIME = DRTIMEOUT * 4
UPDATABLE_SECONDARY 1 // the same-city disaster recovery (secondary) node server can accept writes
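These values can be checked against the running instance on each node, for example:
onstat -c | egrep 'DRAUTO|DRINTERVAL|DRTIMEOUT|UPDATABLE_SECONDARY'    # onstat -c prints the active onconfig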
5) Parameters that differ between the two nodes
[Primary:]
SERVERNUM 100
DBSERVERNAME ol_hac_pri
[Secondary:]
SERVERNUM 171
DBSERVERNAME ol_hac
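To double-check which parameters actually differ, the two onconfig files can be compared directly; a sketch, assuming the onconfig files live under /home/hac_54/etc/ on both machines and that it is run on the primary:
ssh root@192.168.152.26 cat /home/hac_54/etc/onconfig.ol_hac | diff /home/hac_54/etc/onconfig.ol_hac_pri -
Only SERVERNUM, DBSERVERNAME and the host-specific entries should differ.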
2. HDR configuration
1) With the primary node in the On-Line state, execute: onmode -d primary ol_hac
This makes the node the HDR primary. After the command succeeds, check that the node's current status is On-Line.
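The status can be confirmed from the onstat header, for example:
onstat -    # the header line shows the server status; it should read On-Line, and On-Line (Prim) once the secondary has connected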
2) Take a level-0 backup on the primary node with ontape -s -L 0, then transfer the backup folder under the backup path to the backup path of the disaster recovery node; the folder name is HOSTNAME_SERVERNUM_L0 (redhat25_100_L0).
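A sketch of this step, assuming the backup path on both nodes is /home/hac_54/backups (adjust to the actual TAPEDEV setting):
ontape -s -L 0    # level-0 archive on the primary, written as redhat25_100_L0
scp -r /home/hac_54/backups/redhat25_100_L0 root@192.168.152.26:/home/hac_54/backups/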
3) In the backup path of the disaster recovery node, rename the folder to the local hostname and instance SERVERNUM:
[root@redhat26 hac_54]# mv backups/redhat25_100_L0 backups/redhat26_171_L0
[root@redhat26 hac_54]# chown gbasedbt:gbasedbt backups/redhat26_171_L0
[root@redhat26 hac_54]# chmod 660 backups/redhat26_171_L0
4) Shut down the disaster recovery node service: onmode -ky
5) Perform a physical restore with ontape -p; after the restore, the disaster recovery node's status is Fast Recovery
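A sketch of the restore on the disaster recovery node, run as the instance owner (only the physical restore is performed; logical logs are not restored here):
su - gbasedbt
ontape -p    # physical restore from the renamed level-0 archive; do not restore logical logs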
6) On the disaster recovery node, execute: onmode -d secondary ol_hac_pri
The secondary's status changes to Fast Recovery (Sec); after a short wait it changes to Updatable (Sec).
Note: if the configuration parameter UPDATABLE_SECONDARY is 1, the disaster recovery node's status is Updatable (Sec); if UPDATABLE_SECONDARY is 0, the node's status is Read-Only (Sec).
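The state change can be watched on the disaster recovery node, for example:
onstat -    # header shows Fast Recovery (Sec), then Updatable (Sec) or Read-Only (Sec) depending on UPDATABLE_SECONDARY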

7) On the primary node, run onstat -g dri to check that its status is On-Line; the server information also shows the disaster recovery node's information.
3. Testing and monitoring
1) Testing:
On the HDR primary node, create a database with logging enabled, create a table hac_1 and insert data; then check on the disaster recovery node that the table and its data are visible.
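A minimal test sketch; the database name testdb and the column definitions are illustrative assumptions:
# on the primary node
dbaccess - - <<'EOF'
CREATE DATABASE testdb WITH LOG;
CREATE TABLE hac_1 (id INT, name VARCHAR(20));
INSERT INTO hac_1 VALUES (1, 'hdr_test');
EOF
# on the disaster recovery node
dbaccess testdb - <<'EOF'
SELECT * FROM hac_1;
EOF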
2) Monitoring: on both the primary and the secondary nodes, run onstat -g hdr verbose to monitor the running state
