Resizing redo logs in an Oracle 11g RAC + Data Guard environment

Contents

  • Resizing redo logs in an Oracle 11g RAC + Data Guard environment
    • I. How the problem arose
        • 1. Check log file and log group information
        • 2. Problems caused by rapid log switching
    • II. Confirm that DG synchronization is healthy
        • 1. Check database roles
        • 2. Check process information
        • 3. Check that archiving is consistent
        • 4. Check for an archive GAP
    • III. Rebuild the redo log files on the primary
        • 1. Check redo log groups and sizes
        • 2. Drop the standby logs on the primary
        • 3. Replace the redo logs on the primary
        • 4. Add standby logs on the primary
    • IV. Rebuild the standby logs on the standby
        • Step 1: Check redo log groups and sizes
        • Step 2: Cancel log apply on the standby
        • Step 3: Switch standby file management to manual
        • Step 4: Drop the standby logs on the standby
        • Step 5: Create new standby logs on the standby
        • Step 6: Switch logs on the primary so the temporary standby logfiles return to UNASSIGNED, then drop them
        • Step 7: Re-enable real-time apply and automatic standby file management
        • Step 8: Restart the standby and verify synchronization

I. How the problem arose

The customer's production database is a two-node Oracle 11g RAC cluster with Data Guard configured. The business recently experienced stalls, and investigation pointed to the log configuration: the system had 6 online redo logs (3 groups each on thread 1 and thread 2, each log file 100 MB) and 8 standby logs (4 groups each on thread 1 and thread 2).

1. Check log file and log group information

(1) Log file information

SQL> select * from v$logfile;

    GROUP# STATUS  TYPE    MEMBER                                         IS_
---------- ------- ------- ---------------------------------------------- ---
         1         ONLINE  +DATA/hisdb/onlinelog/redo01.log               NO
         2         ONLINE  +DATA/hisdb/onlinelog/redo02.log               NO
         4         ONLINE  +DATA/hisdb/onlinelog/redo04.log               NO
         5         ONLINE  +DATA/hisdb/onlinelog/redo05.log               NO
         6         ONLINE  +DATA/hisdb/onlinelog/redo06.log               NO
         3         ONLINE  +DATA/hisdb/onlinelog/redo03.log               NO
         7         STANDBY +DATA/hisdb/onlinelog/group_7.446.1121009477   NO
         8         STANDBY +DATA/hisdb/onlinelog/group_8.447.1121009483   NO
         9         STANDBY +DATA/hisdb/onlinelog/group_9.448.1121009489   NO
        10         STANDBY +DATA/hisdb/onlinelog/group_10.449.1121009493  NO
        11         STANDBY +DATA/hisdb/onlinelog/group_11.450.1121009499  NO
        12         STANDBY +DATA/hisdb/onlinelog/group_12.451.1121009507  NO
        13         STANDBY +DATA/hisdb/onlinelog/group_13.452.1121009507  NO
        14         STANDBY +DATA/hisdb/onlinelog/group_14.453.1121009507  NO
14 rows selected.

(2) Log group information

SQL> select group#,thread#,sequence#,round(bytes/1024/1024,2) size_mb,members,archived,status,first_change#,first_time from v$log;

    GROUP#    THREAD#  SEQUENCE#    SIZE_MB    MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- -------- ------------- ----------
         1          2     114460        100          1 YES ACTIVE      9057709636 03-JAN-25
         2          2     114461        100          1 YES ACTIVE      9057751045 03-JAN-25
         3          2     114462        100          1 NO  CURRENT     9057800557 03-JAN-25
         4          1     324121        100          1 YES ACTIVE      9057794541 03-JAN-25
         5          1     324122        100          1 NO  CURRENT     9057807196 03-JAN-25
         6          1     324120        100          1 YES ACTIVE      9057790272 03-JAN-25
6 rows selected.

(3) Standby log information

SQL> select group#,thread#,round(bytes/1024/1024,2) size_mb,status from v$standby_log;

    GROUP#    THREAD#    SIZE_MB STATUS
---------- ---------- ---------- ----------
         7          1        100 UNASSIGNED
         8          1        100 UNASSIGNED
         9          1        100 UNASSIGNED
        10          1        100 UNASSIGNED
        11          2        100 UNASSIGNED
        12          2        100 UNASSIGNED
        13          2        100 UNASSIGNED
        14          2        100 UNASSIGNED
8 rows selected.
2. Problems caused by rapid log switching

Frequent log switches increase CPU and I/O load: every switch updates the control file and data dictionary and produces a new archived log.

Once the log groups have all been written in a circle, LGWR must overwrite an earlier log file. If that file has not yet been archived, the switch cannot complete and sessions wait; the database effectively stalls until the log file to be overwritten finishes archiving.
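The stall mechanism described above can be illustrated with a toy model (a hypothetical sketch, not Oracle internals): redo groups form a ring, each switch fills one group, and a switch must wait whenever every group is still waiting to be archived.

```python
def stalled_switches(groups: int, archive_per_switch: float, switches: int) -> int:
    """Toy model of circular redo logging: each switch fills one group, the
    archiver clears `archive_per_switch` groups per switch, and LGWR stalls
    whenever every group is still waiting to be archived."""
    pending = 0.0   # filled groups not yet archived
    stalls = 0
    for _ in range(switches):
        if pending >= groups:        # next group to overwrite is unarchived
            stalls += 1
            pending = groups - 1     # wait until one group finishes archiving
        pending += 1                 # the current group fills up
        pending = max(0.0, pending - archive_per_switch)
    return stalls

# Archiver keeps pace (1 group archived per switch): no stalls.
print(stalled_switches(3, 1.0, 100))   # -> 0
# Archiver at half speed with only 3 small groups: repeated stalls.
print(stalled_switches(3, 0.5, 100))
```

Larger (or more) redo groups give the archiver more slack before LGWR wraps around, which is one motivation for the resize performed in this article.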

In a production environment, choosing a reasonably sized redo log is important: it improves overall database performance, reduces system I/O load, and keeps recovery time reasonable.

Ideally the database averages 2-4 log switches per hour, i.e. one switch every 15-30 minutes. The following script reports the number of log switches in each hour:

set linesize 120
set pagesize 100
column  day     format a15              heading 'Day'
column  d_0     format a3               heading '00'
column  d_1     format a3               heading '01'
column  d_2     format a3               heading '02'
column  d_3     format a3               heading '03'
column  d_4     format a3               heading '04'
column  d_5     format a3               heading '05'
column  d_6     format a3               heading '06'
column  d_7     format a3               heading '07'
column  d_8     format a3               heading '08'
column  d_9     format a3               heading '09'
column  d_10    format a3               heading '10'
column  d_11    format a3               heading '11'
column  d_12    format a3               heading '12'
column  d_13    format a3               heading '13'
column  d_14    format a3               heading '14'
column  d_15    format a3               heading '15'
column  d_16    format a3               heading '16'
column  d_17    format a3               heading '17'
column  d_18    format a3               heading '18'
column  d_19    format a3               heading '19'
column  d_20    format a3               heading '20'
column  d_21    format a3               heading '21'
column  d_22    format a3               heading '22'
column  d_23    format a3               heading '23'
select substr(to_char(FIRST_TIME,'YYYY/MM/DD,DY'),1,15) day,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'00',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'00',1,0))) d_0,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'01',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'01',1,0))) d_1,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'02',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'02',1,0))) d_2,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'03',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'03',1,0))) d_3,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'04',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'04',1,0))) d_4,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'05',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'05',1,0))) d_5,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'06',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'06',1,0))) d_6,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'07',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'07',1,0))) d_7,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'08',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'08',1,0))) d_8,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'09',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'09',1,0))) d_9,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'10',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'10',1,0))) d_10,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'11',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'11',1,0))) d_11,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'12',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'12',1,0))) d_12,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'13',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'13',1,0))) d_13,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'14',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'14',1,0))) d_14,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'15',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'15',1,0))) d_15,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'16',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'16',1,0))) d_16,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'17',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'17',1,0))) d_17,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'18',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'18',1,0))) d_18,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'19',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'19',1,0))) d_19,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'20',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'20',1,0))) d_20,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'21',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'21',1,0))) d_21,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'22',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'22',1,0))) d_22,
  decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'23',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'23',1,0))) d_23
from gv$log_history
where first_time > sysdate-60
group by substr(to_char(FIRST_TIME,'YYYY/MM/DD,DY'),1,15)
order by substr(to_char(FIRST_TIME,'YYYY/MM/DD,DY'),1,15) desc;

-- Sample output:
Day	00  01	02  03	04  05	06  07	08  09	10  11	12  13	14  15	16  17	18  19	20  21	22  23
------- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
2025/01/02,THU	-   -	-   -	-   -	1   -	-   -	-   -	-   -	5   -	-   -	-   -	-   -	1   -
2025/01/01,WED	-   -	-   -	1   -	-   -	-   -	-   -	-   -	-   -	-   2	-   1	1   1	-   -
2024/12/31,TUE	-   -	-   -	-   -	-   -	-   -	-   -	-   -	-   -	1   -	-   -	-   -	-   -
2024/12/30,MON	-   -	-   -	-   -	-   -	-   -	-   -	-   -	-   -	-   -	-   -	-   4	2   -

After analyzing the switch frequency, we decided to enlarge each log file to 1024 MB. The procedure follows.
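The 1024 MB target can be cross-checked with a back-of-the-envelope calculation (a sketch; the peak redo rate used here is a hypothetical input you would take from AWR or `v$sysstat`, not a value given in the text):

```python
def recommended_redo_size_mb(redo_mb_per_hour: float,
                             target_switches_per_hour: float = 3.0) -> float:
    """Pick a redo log size that fills up about `target_switches_per_hour`
    times per hour at the observed redo generation rate."""
    return redo_mb_per_hour / target_switches_per_hour

# Hypothetical busy hour: 5 switches of the old 100 MB logs ~= 500 MB of redo.
print(round(recommended_redo_size_mb(500)))  # -> 167 (MB), the minimum size
```

Rounding well above this minimum (here to 1024 MB) leaves headroom for workload growth and keeps switches within the 2-4 per hour target even at higher peaks.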

II. Confirm that DG synchronization is healthy

1. Check database roles

-- Primary
SQL> select db_unique_name, open_mode, switchover_status, database_role from v$database;

DB_UNIQUE_NAME    OPEN_MODE            SWITCHOVER_STATUS  DATABASE_ROLE
----------------- -------------------- ------------------ ----------------
HISDB             READ WRITE           SESSIONS ACTIVE    PRIMARY

-- Standby
SQL> select db_unique_name, open_mode, switchover_status, database_role from v$database;

DB_UNIQUE_NAME    OPEN_MODE            SWITCHOVER_STATUS  DATABASE_ROLE
----------------- -------------------- ------------------ ----------------
DGHISDB           READ ONLY WITH APPLY NOT ALLOWED        PHYSICAL STANDBY
2. Check process information

(1) On the primary, check the LNS process, which ships the primary database's redo log entries to the standby.

SQL> select process, status, sequence# from v$managed_standby;

PROCESS   STATUS        SEQUENCE#
--------- ------------ ----------
ARCH	  CLOSING	   324114
ARCH	  CLOSING	   317879
ARCH	  CLOSING	   324115
ARCH	  CLOSING	   324116
LNS	  WRITING	   324117

(2) On the standby, check the MRP0 process, which applies the received archived logs to the standby database to keep it synchronized with the primary. The MRP process is a key Data Guard component: it ensures the standby's data stays consistent with the primary.

SQL> select process, status, sequence# from v$managed_standby;

PROCESS   STATUS        SEQUENCE#
--------- ------------ ----------
ARCH	  CLOSING	   324117
ARCH	  CLOSING	   114458
ARCH	  CONNECTED		0
ARCH	  CLOSING	   324116
RFS	  IDLE		   324118
RFS	  IDLE			0
RFS	  IDLE			0
RFS	  IDLE			0
MRP0	  APPLYING_LOG	   324118
RFS	  IDLE			0
RFS	  IDLE			0
RFS	  IDLE		   114459
RFS       IDLE                  0

13 rows selected.
3. Check that archiving is consistent

-- Primary
SQL> select max(sequence#),thread# from v$archived_log where RESETLOGS_CHANGE# = (SELECT RESETLOGS_CHANGE# FROM V$DATABASE_INCARNATION WHERE STATUS = 'CURRENT') GROUP BY THREAD#;

MAX(SEQUENCE#)    THREAD#
-------------- ----------
        324117          1
        114458          2

-- Standby
SQL> select max(sequence#),thread# from v$archived_log where RESETLOGS_CHANGE# = (SELECT RESETLOGS_CHANGE# FROM V$DATABASE_INCARNATION WHERE STATUS = 'CURRENT') GROUP BY THREAD#;

MAX(SEQUENCE#)    THREAD#
-------------- ----------
        324117          1
        114458          2
4. Check for an archive GAP

A GAP typically arises when the standby has been out of sync with the primary for a long time; by the time this is noticed, the needed archived logs have already been deleted on the primary, so the standby can no longer catch up on its own.

-- Primary
SQL> select * from v$archive_gap;

no rows selected

SQL> select STATUS, GAP_STATUS from V$ARCHIVE_DEST_STATUS where DEST_ID = 2;

STATUS    GAP_STATUS
--------- ------------------------
VALID     NO GAP

-- Standby
SQL> select * from v$archive_gap;

no rows selected

SQL> select STATUS, GAP_STATUS from V$ARCHIVE_DEST_STATUS where DEST_ID = 2;

STATUS    GAP_STATUS
--------- ------------------------
VALID     RESOLVABLE GAP

III. Rebuild the redo log files on the primary

1. Check redo log groups and sizes

(1) Check the log files

SQL> select * from v$logfile;

    GROUP# STATUS  TYPE    MEMBER                                         IS_
---------- ------- ------- ---------------------------------------------- ---
         1         ONLINE  +DATA/hisdb/onlinelog/redo01.log               NO
         2         ONLINE  +DATA/hisdb/onlinelog/redo02.log               NO
         4         ONLINE  +DATA/hisdb/onlinelog/redo04.log               NO
         5         ONLINE  +DATA/hisdb/onlinelog/redo05.log               NO
         6         ONLINE  +DATA/hisdb/onlinelog/redo06.log               NO
         3         ONLINE  +DATA/hisdb/onlinelog/redo03.log               NO
         7         STANDBY +DATA/hisdb/onlinelog/group_7.446.1121009477   NO
         8         STANDBY +DATA/hisdb/onlinelog/group_8.447.1121009483   NO
         9         STANDBY +DATA/hisdb/onlinelog/group_9.448.1121009489   NO
        10         STANDBY +DATA/hisdb/onlinelog/group_10.449.1121009493  NO
        11         STANDBY +DATA/hisdb/onlinelog/group_11.450.1121009499  NO
        12         STANDBY +DATA/hisdb/onlinelog/group_12.451.1121009507  NO
        13         STANDBY +DATA/hisdb/onlinelog/group_13.452.1121009507  NO
        14         STANDBY +DATA/hisdb/onlinelog/group_14.453.1121009507  NO
14 rows selected.

(2) Check redo log information

SQL> select group#,thread#,sequence#,round(bytes/1024/1024,2) size_mb,members,archived,status,first_change#,first_time from v$log;

    GROUP#    THREAD#  SEQUENCE#    SIZE_MB    MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- -------- ------------- ----------
         1          2     114460        100          1 YES ACTIVE      9057709636 03-JAN-25
         2          2     114461        100          1 YES ACTIVE      9057751045 03-JAN-25
         3          2     114462        100          1 NO  CURRENT     9057800557 03-JAN-25
         4          1     324121        100          1 YES ACTIVE      9057794541 03-JAN-25
         5          1     324122        100          1 NO  CURRENT     9057807196 03-JAN-25
         6          1     324120        100          1 YES ACTIVE      9057790272 03-JAN-25
6 rows selected.

(3) Check standby log information

SQL> select group#,thread#,round(bytes/1024/1024,2) size_mb,status from v$standby_log;

    GROUP#    THREAD#    SIZE_MB STATUS
---------- ---------- ---------- ----------
         7          1        100 UNASSIGNED
         8          1        100 UNASSIGNED
         9          1        100 UNASSIGNED
        10          1        100 UNASSIGNED
        11          2        100 UNASSIGNED
        12          2        100 UNASSIGNED
        13          2        100 UNASSIGNED
        14          2        100 UNASSIGNED
8 rows selected.
2. Drop the standby logs on the primary

Drop the old standby logs (groups 7-14):

alter database drop logfile group 7;
alter database drop logfile group 8;
alter database drop logfile group 9;
alter database drop logfile group 10;
alter database drop logfile group 11;
alter database drop logfile group 12;
alter database drop logfile group 13;
alter database drop logfile group 14;
3. Replace the redo logs on the primary

Step 1: add two temporary log groups on each thread:

-- Node 1
alter database add logfile thread 1 group 14 '+DATA' size 1024M;
alter database add logfile thread 1 group 15 '+DATA' size 1024M;

-- Node 2
alter database add logfile thread 2 group 16 '+DATA' size 1024M;
alter database add logfile thread 2 group 17 '+DATA' size 1024M;

SQL> select group#,thread#,sequence#,round(bytes/1024/1024,2) size_mb,members,archived,status,first_change#,first_time from v$log;

    GROUP#    THREAD#  SEQUENCE#    SIZE_MB    MEMBERS ARC STATUS     FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- ---------- ------------- ----------
         1          2     114484        100          1 YES ACTIVE        9058578050 03-JAN-25
         2          2     114485        100          1 YES ACTIVE        9058584300 03-JAN-25
         3          2     114483        100          1 YES ACTIVE        9058554867 03-JAN-25
         4          1     324142        100          1 YES ACTIVE        9058579106 03-JAN-25
         5          1     324140        100          1 YES INACTIVE      9058515833 03-JAN-25
         6          1     324141        100          1 YES ACTIVE        9058538032 03-JAN-25
        14          1     324143       1024          1 NO  CURRENT       9058623959 03-JAN-25
        15          1          0       1024          1 YES UNUSED                 0
        16          2     114486       1024          1 NO  CURRENT       9058621526 03-JAN-25
        17          2          0       1024          1 YES UNUSED                 0

10 rows selected.

Step 2: switch logs until the old log groups (1-6) become INACTIVE:

alter system switch logfile;
alter system checkpoint;

SQL> select group#,thread#,sequence#,round(bytes/1024/1024,2) size_mb,members,archived,status,first_change#,first_time from v$log;

    GROUP#    THREAD#  SEQUENCE#    SIZE_MB    MEMBERS ARC STATUS     FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- ---------- ------------- ----------
         1          2     114484        100          1 YES INACTIVE      9058578050 03-JAN-25
         2          2     114485        100          1 YES INACTIVE      9058584300 03-JAN-25
         3          2     114483        100          1 YES INACTIVE      9058554867 03-JAN-25
         4          1     324142        100          1 YES INACTIVE      9058579106 03-JAN-25
         5          1     324140        100          1 YES INACTIVE      9058515833 03-JAN-25
         6          1     324141        100          1 YES INACTIVE      9058538032 03-JAN-25
        14          1     324143       1024          1 YES INACTIVE      9058623959 03-JAN-25
        15          1     324144       1024          1 NO  CURRENT       9058634615 03-JAN-25
        16          2     114486       1024          1 NO  CURRENT       9058621526 03-JAN-25
        17          2          0       1024          1 YES UNUSED                 0

10 rows selected.

Step 3: drop log groups 1-6:

alter database drop logfile group 1;
alter database drop logfile group 2;
alter database drop logfile group 3;
alter database drop logfile group 4;
alter database drop logfile group 5;
alter database drop logfile group 6;

SQL> select group#,thread#,sequence#,round(bytes/1024/1024,2) size_mb,members,archived,status,first_change#,first_time from v$log;

    GROUP#    THREAD#  SEQUENCE#    SIZE_MB    MEMBERS ARC STATUS     FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- ---------- ------------- ----------
        14          1     324143       1024          1 YES INACTIVE      9058623959 03-JAN-25
        15          1     324144       1024          1 NO  CURRENT       9058634615 03-JAN-25
        16          2     114486       1024          1 NO  CURRENT       9058621526 03-JAN-25
        17          2          0       1024          1 YES UNUSED                 0

Step 4: add the new log files:

/* The old file names can be taken from the v$logfile listing above. */
alter database add logfile thread 2 group 1 '+DATA/hisdb/onlinelog/redo1.log' size 1024M;
alter database add logfile thread 2 group 2 '+DATA/hisdb/onlinelog/redo2.log' size 1024M;
alter database add logfile thread 2 group 3 '+DATA/hisdb/onlinelog/redo3.log' size 1024M;
alter database add logfile thread 1 group 4 '+DATA/hisdb/onlinelog/redo4.log' size 1024M;
alter database add logfile thread 1 group 5 '+DATA/hisdb/onlinelog/redo5.log' size 1024M;
alter database add logfile thread 1 group 6 '+DATA/hisdb/onlinelog/redo6.log' size 1024M;

SQL> select group#,thread#,sequence#,round(bytes/1024/1024,2) size_mb,members,archived,status,first_change#,first_time from v$log;

    GROUP#    THREAD#  SEQUENCE#    SIZE_MB    MEMBERS ARC STATUS     FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- ---------- ------------- ----------
         1          2          0       1024          1 YES UNUSED                 0
         2          2          0       1024          1 YES UNUSED                 0
         3          2          0       1024          1 YES UNUSED                 0
         4          1          0       1024          1 YES UNUSED                 0
         5          1          0       1024          1 YES UNUSED                 0
         6          1          0       1024          1 YES UNUSED                 0
        14          1     324143       1024          1 YES INACTIVE      9058623959 03-JAN-25
        15          1     324144       1024          1 NO  CURRENT       9058634615 03-JAN-25
        16          2     114486       1024          1 NO  CURRENT       9058621526 03-JAN-25
        17          2          0       1024          1 YES UNUSED                 0

10 rows selected.

Step 5: switch logs, then drop the temporarily added log groups:

alter system switch logfile;
alter system checkpoint;

SQL> select group#,thread#,sequence#,round(bytes/1024/1024,2) size_mb,members,archived,status,first_change#,first_time from v$log;

    GROUP#    THREAD#  SEQUENCE#    SIZE_MB    MEMBERS ARC STATUS     FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- ---------- ------------- ----------
         1          2     114487       1024          1 NO  CURRENT       9058754188 03-JAN-25
         2          2          0       1024          1 YES UNUSED                 0
         3          2          0       1024          1 YES UNUSED                 0
         4          1     324145       1024          1 YES INACTIVE      9058753823 03-JAN-25
         5          1     324146       1024          1 NO  CURRENT       9058759093 03-JAN-25
         6          1          0       1024          1 YES UNUSED                 0
        14          1     324143       1024          1 YES INACTIVE      9058623959 03-JAN-25
        15          1     324144       1024          1 YES INACTIVE      9058634615 03-JAN-25
        16          2     114486       1024          1 YES INACTIVE      9058621526 03-JAN-25
        17          2          0       1024          1 YES UNUSED                 0

10 rows selected.

alter database drop logfile group 14;
alter database drop logfile group 15;
alter database drop logfile group 16;
alter database drop logfile group 17;

SQL> select group#,thread#,sequence#,round(bytes/1024/1024,2) size_mb,members,archived,status,first_change#,first_time from v$log;

    GROUP#    THREAD#  SEQUENCE#    SIZE_MB    MEMBERS ARC STATUS     FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- ---------- ------------- ----------
         1          2     114487       1024          1 NO  CURRENT       9058754188 03-JAN-25
         2          2          0       1024          1 YES UNUSED                 0
         3          2          0       1024          1 YES UNUSED                 0
         4          1     324145       1024          1 YES INACTIVE      9058753823 03-JAN-25
         5          1     324146       1024          1 NO  CURRENT       9058759093 03-JAN-25
         6          1          0       1024          1 YES UNUSED                 0

6 rows selected.
4. Add standby logs on the primary

alter database add standby logfile thread 1 group 7 '+DATA/hisdb/onlinelog/standby07.log' size 1024m;
alter database add standby logfile thread 1 group 8 '+DATA/hisdb/onlinelog/standby08.log' size 1024m;
alter database add standby logfile thread 1 group 9 '+DATA/hisdb/onlinelog/standby09.log' size 1024m;
alter database add standby logfile thread 1 group 10 '+DATA/hisdb/onlinelog/standby10.log' size 1024m;
alter database add standby logfile thread 2 group 11 '+DATA/hisdb/onlinelog/standby11.log' size 1024m;
alter database add standby logfile thread 2 group 12 '+DATA/hisdb/onlinelog/standby12.log' size 1024m;
alter database add standby logfile thread 2 group 13 '+DATA/hisdb/onlinelog/standby13.log' size 1024m;
alter database add standby logfile thread 2 group 14 '+DATA/hisdb/onlinelog/standby14.log' size 1024m;
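The eight standby log groups match the commonly cited Data Guard rule of thumb, (online log groups per thread + 1) × number of threads. A minimal sketch of that arithmetic:

```python
def recommended_standby_groups(online_groups_per_thread: int, threads: int) -> int:
    """Data Guard rule of thumb for standby redo logs:
    (online redo log groups per thread + 1) * number of threads."""
    return (online_groups_per_thread + 1) * threads

# 3 online groups per thread, 2 RAC threads -> 8 standby log groups.
print(recommended_standby_groups(3, 2))  # -> 8
```

The extra group per thread gives RFS somewhere to write while the group just filled is still being archived or applied.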

IV. Rebuild the standby logs on the standby

Step 1: Check redo log groups and sizes

Check the log file information:

SQL> select * from v$logfile;

    GROUP# STATUS  TYPE    MEMBER                                              IS_
---------- ------- ------- --------------------------------------------------- ---
         1         ONLINE  /oradata/dghisdb/onlinelog/redo01.log               NO
         2         ONLINE  /oradata/dghisdb/onlinelog/redo02.log               NO
         4         ONLINE  /oradata/dghisdb/onlinelog/redo04.log               NO
         5         ONLINE  /oradata/dghisdb/onlinelog/redo05.log               NO
         6         ONLINE  /oradata/dghisdb/onlinelog/redo06.log               NO
         3         ONLINE  /oradata/dghisdb/onlinelog/redo03.log               NO
         7         STANDBY /oradata/dghisdb/onlinelog/group_7.446.1121009477   NO
         8         STANDBY /oradata/dghisdb/onlinelog/group_8.447.1121009483   NO
         9         STANDBY /oradata/dghisdb/onlinelog/group_9.448.1121009489   NO
        10         STANDBY /oradata/dghisdb/onlinelog/group_10.449.1121009493  NO
        11         STANDBY /oradata/dghisdb/onlinelog/group_11.450.1121009499  NO
        12         STANDBY /oradata/dghisdb/onlinelog/group_12.451.1121009507  NO
        13         STANDBY /oradata/dghisdb/onlinelog/group_13.452.1121009507  NO
        14         STANDBY /oradata/dghisdb/onlinelog/group_14.453.1121009507  NO

14 rows selected.

SQL> select group#,thread#,round(bytes/1024/1024,2) size_mb,status from v$standby_log;

    GROUP#    THREAD#    SIZE_MB STATUS
---------- ---------- ---------- ----------
         7          1        100 UNASSIGNED
         8          1        100 UNASSIGNED
         9          1        100 UNASSIGNED
        10          1        100 UNASSIGNED
        11          2        100 UNASSIGNED
        12          2        100 UNASSIGNED
        13          2        100 UNASSIGNED
        14          2        100 UNASSIGNED

8 rows selected.
Step 2: Cancel log apply on the standby
alter database recover managed standby database cancel;
Step 3: Switch standby file management to manual

SQL> alter system set standby_file_management='manual';

SQL> show parameter standby_file_management

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      manual
Step 4: Drop the standby logs on the standby

(1) Add two temporary standby log files:

alter database add standby logfile group 15 '/oradata/dghisdb/onlinelog/stlog15.log' size 1024m reuse;
alter database add standby logfile group 16 '/oradata/dghisdb/onlinelog/stlog16.log' size 1024m reuse;

(2) Switch logs on the primary so the ACTIVE standby log moves to a temporary file and the STATUS of all the old standby logs returns to UNASSIGNED:

SQL> alter system switch logfile;
-- Check on the standby
SQL> select group#,thread#,round(bytes/1024/1024,2) size_mb,status from v$standby_log;

(3) Drop the old standby log files:

alter database drop logfile group 7;
alter database drop logfile group 8;
alter database drop logfile group 9;
alter database drop logfile group 10;
alter database drop logfile group 11;
alter database drop logfile group 12;
alter database drop logfile group 13;
alter database drop logfile group 14;
Step 5: Create new standby logs on the standby

Re-add the standby logs at 1024 MB (groups 7-14):

alter database add standby logfile group 7 '/oradata/dghisdb/onlinelog/standby07.log' size 1024m reuse;
alter database add standby logfile group 8 '/oradata/dghisdb/onlinelog/standby08.log' size 1024m reuse;
alter database add standby logfile group 9 '/oradata/dghisdb/onlinelog/standby09.log' size 1024m reuse;
alter database add standby logfile group 10 '/oradata/dghisdb/onlinelog/standby10.log' size 1024m reuse;
alter database add standby logfile group 11 '/oradata/dghisdb/onlinelog/standby11.log' size 1024m reuse;
alter database add standby logfile group 12 '/oradata/dghisdb/onlinelog/standby12.log' size 1024m reuse;
alter database add standby logfile group 13 '/oradata/dghisdb/onlinelog/standby13.log' size 1024m reuse;
alter database add standby logfile group 14 '/oradata/dghisdb/onlinelog/standby14.log' size 1024m reuse;
Step 6: Switch logs on the primary so the temporary standby logfiles return to UNASSIGNED, then drop them:
alter database drop logfile group 15;
alter database drop logfile group 16;  

Note: handling the standby's online redo logs

Normally the standby is open read-only and makes no changes to the database, so its online redo log files are never active.

Because the DG standby stays in this read-only role, no action is taken on the standby's online redo logs.

Step 7: Re-enable real-time apply and automatic standby file management

With the steps above complete, the redo logfiles and standby logfiles of the DG environment have been rebuilt; the only thing left is to resume redo apply.

alter database recover managed standby database using current logfile disconnect;
alter system set standby_file_management='AUTO';
Step 8: Restart the standby and verify synchronization
SQL> select process, status, sequence# from v$managed_standby;

PROCESS   STATUS        SEQUENCE#
--------- ------------ ----------
ARCH	  CLOSING	   324152
ARCH	  CONNECTED		0
ARCH	  CONNECTED		0
ARCH	  CLOSING	   114489
RFS	  IDLE			0
RFS	  IDLE			0
RFS	  IDLE		   114490
RFS	  IDLE			0
RFS	  IDLE		   324153
RFS	  IDLE			0
MRP0      APPLYING_LOG     324153

11 rows selected.

SQL> select group#,thread#,sequence#,archived,status from v$standby_log;

    GROUP#    THREAD#  SEQUENCE# ARC STATUS
---------- ---------- ---------- --- ----------
         7          2     114490 YES ACTIVE
         8          2          0 NO  UNASSIGNED
         9          1     324153 YES ACTIVE
        10          1          0 NO  UNASSIGNED
        11          0          0 YES UNASSIGNED
        12          0          0 YES UNASSIGNED
        13          0          0 YES UNASSIGNED
        14          0          0 YES UNASSIGNED

8 rows selected.
