Recently, while deduplicating a business table holding a few million rows, the delete operation hung outright. Here is a quick walkthrough of what happened.

1. Check the row count of the business table: just over 2.55 million rows in total.

SQL> select count(*) from tb_bj_banker_etl;

2552381

2. Count the duplicate rows that should be removed: about 830,000.

SQL> select count(*)
       from tb_bj_banker_etl
      where id in (select id from tb_bj_banker_etl group by id having count(*) > 1)
        and rowid not in (select max(rowid) from tb_bj_banker_etl group by id having count(*) > 1);

830099
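As a side note, the same count can be obtained in one pass with an analytic function instead of two GROUP BY subqueries. A minimal sketch, assuming id is the sole deduplication key as in the statements above (ordering by rowid desc keeps the max rowid per id, matching the original logic):

SQL> select count(*)
       from (select row_number() over (partition by id order by rowid desc) rn
               from tb_bj_banker_etl)
      where rn > 1;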

3. So, before leaving for the day, I ran the following statement to remove the duplicates:

SQL> delete from tb_bj_banker_etl
      where id in (select id from tb_bj_banker_etl group by id having count(*) > 1)
        and rowid not in (select max(rowid) from tb_bj_banker_etl group by id having count(*) > 1);

SQL> commit;
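In hindsight, checking the optimizer's plan before launching such a long-running delete might have flagged the risk. A sketch using the standard EXPLAIN PLAN and DBMS_XPLAN facilities (this diagnostic was not part of the original session):

SQL> explain plan for
     delete from tb_bj_banker_etl
      where id in (select id from tb_bj_banker_etl group by id having count(*) > 1)
        and rowid not in (select max(rowid) from tb_bj_banker_etl group by id having count(*) > 1);

SQL> select * from table(dbms_xplan.display);

A plausible suspect is the rowid not in (...) predicate, which the optimizer may execute as a per-row FILTER against the grouped subquery; the plan output would confirm or rule that out.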

4. The next day, back on site, I found the statement from the previous evening still running in PL/SQL Developer.

My first observation: the delete of 800,000-odd duplicates had been running all night without finishing. Could another session be holding a lock on the table? I checked with the following query:

SELECT
  A.OWNER,             -- owner of the locked object
  A.OBJECT_NAME,       -- object name
  B.XIDUSN,
  B.XIDSLOT,
  B.XIDSQN,
  B.SESSION_ID,        -- session holding the lock
  B.ORACLE_USERNAME,   -- Oracle user holding the lock
  B.OS_USER_NAME,      -- OS login of the locking user
  B.PROCESS,
  B.LOCKED_MODE,
  C.MACHINE,           -- machine name of the locking user
  C.STATUS,            -- session status
  C.SERVER,
  C.SID,
  C.SERIAL#,
  C.PROGRAM            -- client tool used by the locking user
FROM
  ALL_OBJECTS A,
  V$LOCKED_OBJECT B,
  SYS.GV_$SESSION C
WHERE
  A.OBJECT_ID = B.OBJECT_ID
  AND B.PROCESS = C.PROCESS
ORDER BY 1, 2;

The result showed that the only session locking the table was the one that had issued the delete itself; no other user was blocking it, which meant the statement was simply still executing. The way out was to kill the hung session and restart the deduplication in smaller pieces.
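Terminating the session uses ALTER SYSTEM KILL SESSION with the SID and SERIAL# returned by the lock query above (the values here are placeholders, not the actual ones from that day):

SQL> alter system kill session '<sid>,<serial#>';

With the session gone, the delete was re-issued in batches of 100,000 rows at a time: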

Batch 1:
delete from tb_bj_banker_etl
 where id in (select id from tb_bj_banker_etl group by id having count(*) > 1)
   and rowid not in (select max(rowid) from tb_bj_banker_etl group by id having count(*) > 1)
   and rownum <= 100000;
commit;

Batch 2:
delete from tb_bj_banker_etl
 where id in (select id from tb_bj_banker_etl group by id having count(*) > 1)
   and rowid not in (select max(rowid) from tb_bj_banker_etl group by id having count(*) > 1)
   and rownum <= 100000;
commit;

(batches 3 through 7 repeat the same statement)

Batch 8 (the remainder, so no rownum cap is needed):
delete from tb_bj_banker_etl
 where id in (select id from tb_bj_banker_etl group by id having count(*) > 1)
   and rowid not in (select max(rowid) from tb_bj_banker_etl group by id having count(*) > 1);
commit;

Result: splitting the 800,000-plus duplicate rows into 100,000-row batches completed the deduplication in a little over 140 seconds. Why the single monolithic delete appeared to hang remains to be analyzed.
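For the record, the eight manual runs can be folded into a single PL/SQL loop. A minimal sketch, assuming the same 100,000-row batch size (this automation is an illustration, not what was run that day):

begin
  loop
    delete from tb_bj_banker_etl
     where id in (select id from tb_bj_banker_etl group by id having count(*) > 1)
       and rowid not in (select max(rowid) from tb_bj_banker_etl group by id having count(*) > 1)
       and rownum <= 100000;        -- cap each batch, as in the manual runs
    exit when sql%rowcount = 0;     -- stop once no duplicates remain
    commit;                         -- commit per batch to keep undo and locks small
  end loop;
  commit;
end;
/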

That is the full story of working around the hang while deduplicating 800,000 rows. I hope it helps.
