In real business systems there are often hot tables where the volume of inserts and deletes is very large. When that happens you may notice that some queries show unusually high logical reads, which is often Oracle performing consistent reads while building consistent-read (CR) copies of blocks. The test below demonstrates this.

Step 1: prepare the data.

create table test(col1 varchar2(12), col2 number, ext varchar2(4000));
create index test_ind on test(col1, col2);
create sequence seq_test cache 200;

Assume this table sees frequent insert and delete activity (a sketch of one possible churn workload appears at the end of this post). Now measure the logical reads of a select against it. Open two sessions.

(1) In session 1, create a table that saves a snapshot of session 2's statistics (replace &sid2 with session 2's SID):

create table prefix_stats tablespace IW_ACCOUNT_LOG_01 as
select * from v$sesstat where sid = &sid2;

(2) In session 2, run the query:

select *
  from (select * from test t where col1 = 'xpchild001' order by col2)
 where rownum <= 200;

(3) Back in session 1, diff session 2's statistics against the snapshot:

select *
  from (select t.name,
               pre.value as pre,
               suf.value as suf,
               (suf.value - pre.value) as diff
          from prefix_stats pre, v$sesstat suf, v$statname t
         where pre.sid = suf.sid
           and pre.STATISTIC# = suf.STATISTIC#
           and pre.STATISTIC# = t.STATISTIC#) tmp
 where tmp.diff > 0
 order by tmp.diff desc;

NAME                                                          PRE        SUF       DIFF
------------------------------------------------------ ---------- ---------- ----------
session pga memory max                                     957208    1153816     196608
session pga memory                                         957208    1153816     196608
bytes sent via SQL*Net to client                             6692      37013      30321
redo size                                                       0       8256       8256
session logical reads                                          52       1508       1456
consistent gets from cache                                     52       1508       1456
consistent gets                                                52       1508       1456
bytes received via SQL*Net from client                       4385       5639       1254
consistent gets - examination                                  21       1253       1232
data blocks consistent reads - undo records applied             0        920        920
consistent changes                                              0        920        920
buffer is not pinned count                                     17        222        205
table fetch by rowid                                            6        206        200
buffer is pinned count                                          0        197        197
CR blocks created                                               0        160        160
calls to kcmgas                                                 0        160        160
db block changes                                                0        120        120
redo entries                                                    0        120        120
cleanout - number of ktugct calls                               0        120        120
cleanouts and rollbacks - consistent read gets                  0        120        120
immediate (CR) block cleanout applications                      0        120        120
no work - consistent read gets                                 19         83         64
heap block compress                                             0         51         51
rollbacks only - consistent read gets                           0         40         40
shared hash latch upgrades - no wait                            0          5          5
user calls                                                     28         33          5
execute count                                                  21         23          2
DB time                                                         0          2          2
parse count (total)                                            22         24          2
session cursor cache count                                     16         17          1
CPU used when call started                                      0          1          1
recursive calls                                                92         93          1
parse count (hard)                                              0          1          1
session cursor cache hits                                       4          5          1
CPU used by this session                                        0          1          1

The query returned 200 rows: table fetch by rowid = 200.

(1) Logical reads: session logical reads = consistent gets (consistent reads) + db block gets (current-mode reads). This SQL performed only consistent reads: session logical reads = consistent gets = 1456.

(2) Undo records applied to build consistent reads: data blocks consistent reads - undo records applied = 920, equal to consistent changes.

(3) Consistent reads that required a rollback or a block cleanout (there are no rollbacks here, so only block cleanouts are possible): cleanouts and rollbacks - consistent read gets = 120, which matches db block changes = 120; the cleanouts are what changed the data blocks.

(4) Number of CR clone blocks built: CR blocks created = 160.

(5) Redo generated by block cleanout: redo size = 8256.

This confirms the initial guess: the query spends a large amount of work constructing consistent reads.

For hot tables like this there are several manual tuning options, but the core idea is always the same: spread the hot spot and reduce contention (see the second sketch at the end of this post).

- Hash-partition the table to spread the hot spot.
- Increase PCTFREE so that each block holds fewer rows, which reduces the consistent-read construction per block.

To be continued...

Original post: http://www.cnblogs.com/xpchild/p/3694987.html
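
The original post does not show the insert/delete churn assumed in step 1. Below is a minimal sketch of one workload that could reproduce the effect; the row counts, the 'xpchild001' key, and the padding size are all made-up values, not from the original post. Run it in another session, then run the test query immediately afterwards so the select has to perform delayed block cleanout:

begin
    -- insert a burst of rows under one hot key
    for i in 1 .. 1000 loop
        insert into test
        values ('xpchild001', seq_test.nextval, rpad('x', 1000, 'x'));
    end loop;
    -- delete most of them again, keeping roughly the newest 200
    delete from test where col2 < (select max(col2) - 200 from test);
    -- commit, but the touched blocks are left awaiting cleanout: the next
    -- select against them does the cleanout, producing the CR statistics,
    -- db block changes, and redo seen above
    commit;
end;
/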
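
Likewise, here is a sketch of the two tuning options listed above; the table names, partition count, and PCTFREE value are illustrative assumptions, not recommendations from the original post:

-- Option 1: hash-partition the table so that concurrent inserts and
-- deletes land in different partitions, and therefore different blocks.
-- Partitioning on the sequence-driven column spreads rows that share
-- the same business key across partitions.
create table test_hashed (
    col1 varchar2(12),
    col2 number,
    ext  varchar2(4000)
)
partition by hash (col2) partitions 16;

-- Option 2: leave more free space per block so each block holds fewer
-- rows, shrinking the amount of data covered by any one CR clone
-- (the default PCTFREE is 10).
create table test_sparse (
    col1 varchar2(12),
    col2 number,
    ext  varchar2(4000)
) pctfree 40;

-- PCTFREE can also be raised on an existing table; it only affects
-- blocks formatted after the change.
alter table test pctfree 40;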