A friend and I had a close encounter with a nosy squirrel today. We sat on a bench in Hyde Park eating our muffins and drinking our Starbucks lattes when this squirrel caught the smell and started hassling us for a bite. It even climbed up on the bench and snatched my paper bag. I managed to take a few snaps with my camera phone. I think it was Carrie who called them "rats with furry tails", but I must say that they are way smarter than rats (and cuter).
More snaps
Saturday, July 30
Friday, July 29
Sysadmin day
Happy System Administrator Appreciation Day!
If you're a sysadmin I hope your users show their appreciation (read: buy you [beer|cake]) today, and if you are a "user", please buy your sysadmin [beer|cake].
I got our office a box of Krispy Kremes.
http://www.sysadminday.com
I've been doing some benchmarks on huge PostgreSQL tables over the past few days. I loaded a table with 150 million rows and created 10 partial indexes of 15 million rows each. The response times are quite amazing: selects around 1 ms and updates under 20 ms.
I'll consider posting some more results if anyone is interested.
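For the curious, each of those partial indexes boils down to something like this (table, column and range are made up for illustration):

CREATE INDEX bigtable_p01_idx ON bigtable (id)
    WHERE id >= 0 AND id < 15000000;
ANALYZE bigtable;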
The London terror wave continues; today police raided a council estate a few miles from where I live. I thought that area was mainly inhabited by Rastafarians, but apparently some Muslims live there as well.
BBC News has the scoop.
Another music recommendation:
Heard "Hurricane" on TV and had to dig up the old classic Bob Dylan - Desire album.
Wednesday, July 27
Gadget recession
- "I don't have a MP3 player, I don't have a Blackberry, I don't have a PDA, I don't have a smart-phone, I don't have a GPS navigator, I don't own one single piece of Bluetooth hardware, I don't even have a USB memory stick."
What is happening with the world? Most of my "geek friends" don't have loads of gadgets any more.
Are gadgets becoming fashion accessories? At least the iPod has, followed by the Blackberry (which isn't even black any more but comes in all sorts of cute colors).
On the streets of London I see people with gadgets all the time, but not the people you used to see with gadgets. Now we see cute girls with iPods and flashy mobile handsets, bankers with their Blackberrys and just normal people using IM on the move.
I even saw a nice looking girl the other day with a Zaurus Linux PDA. She probably didn't even know how hot it looked.
What is left of the "legacy gadget freaks"? Have the geeks "been there, done that" and realized that gadgets don't change our lives that much, or are they just rejecting gadgets now that they are "socially acceptable"?
Is blogging the new gadget!? :)
Need to stock up on Gadgets?
Gadget Stuff
Dr. Gadget
Think Geek
mp3players.co.uk
Gadget Review
Monday, July 25
Function based indexes in Postgres
I actually found something that PostgreSQL does better than Oracle (or at least the PostgreSQL folks take a different approach that proves quite good in our use case).
Let's say we have a table with ticket numbers, and most of the queries run against the table determine whether a ticket is valid; the conditions are that VALID='Y' and that REDEEM_DATE is not set. Simple: let's create an index for just those conditions. I'm not claiming this can't be done in Oracle, but to be honest, the approach in Postgres is so much simpler for such a basic scenario.
Example DDL
CREATE TABLE tickets (
ID serial,
TICKET_CODE char(10),
ISSUE_DATE timestamp,
VALID char(1),
REDEEM_DATE timestamp,
CAMPAIGN_ID int
);
[ insert data (90.000 rows in this case) ]
CREATE INDEX tickets_valid_idx ON tickets (redeem_date,valid)
WHERE valid='Y' AND redeem_date is null;
ANALYZE TICKETS;
Ok, let's see the execution plans (hail GEQO).
Compare the cost figures for the first (heavily used) query with those for the second query (the negation, which is never run except in this test case).
testdb=> explain select count(*) from tickets where
VALID='Y' and redeem_date is null;
QUERY PLAN
-------------------------------------------------------
Aggregate (cost=192.91..192.91 rows=1 width=0)
-> Index Scan using tickets_valid_idx on tickets
(cost=0.00..192.79 rows=48 width=0)
Filter: (("valid" = 'Y'::bpchar)
AND (redeem_date IS NULL))
(3 rows)
testdb=> explain select count(*) from tickets where
VALID='N' and redeem_date is null;
QUERY PLAN
-------------------------------------------------------
Aggregate (cost=2715.14..2715.14 rows=1 width=0)
-> Seq Scan on tickets
(cost=0.00..2714.12 rows=403 width=0)
Filter: (("valid" = 'N'::bpchar)
AND (redeem_date IS NULL))
(3 rows)

The downside is of course, as with all indexes, that updates and inserts to the table will be a bit slower. We do bulk inserts, so that's not a huge problem. One concern though is that when a ticket is used we set VALID to 'U' and REDEEM_DATE to now(), so the row in question must be removed from the index; this could lead to quite fragmented indexes. Might be worth keeping an eye on it for the first few months.
Oracle data transportation made easy
I got to spend some time with the Oracle Data Pump this morning. We were sent an expdp file for analysis, quite large and using something like 15 tablespaces, and of course they had also renamed our main application schema. The dump was even made with 10gR2 and my target system was 10gR1.
So what, not a problem?! :-)
A few lines of impdp hacking and the dump loaded like a charm, into our two default tablespaces and our default schema name.
Example (cut down version):
USERID="/ as sysdba"Read the Oracle Data Pump Overview docs.
LOGFILE=import.log
DIRECTORY=tempdir
DUMPFILE=dp050720.dmp
REMAP_SCHEMA=appschema1:app
REMAP_TABLESPACE=users:data,data1:data,data2:data,index1:indx

Read the Oracle Data Pump Overview docs.
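For completeness, with the parameters saved to a file the whole import is a one-liner (the file name is just an example):

impdp parfile=import.par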
And a quick CD recommendation:
We've all heard them before; we just need to be reminded about their greatness once in a while. Their live performances are something extra, and there is a reason they were the first band to play at Live Aid 20 years ago.
Dire Straits - Alchemy - Live
Thursday, July 21
How UNIX are you?
Sometimes people are just wasting time when it comes to computers, or rather the computers are wasting people's time. That's why I like UNIX and applications like Oracle. If you know what you are doing, you waste very little time fucking around. Things are where they are supposed to be and they just work as intended (well, at least most of the time).
I wrote a small check list to see how UNIX-like you are :)
Do any of these statements match you?
If more than three or four of these are true for you then I think you are more UNIX based than you think :-)
Congrats.
Tuesday, July 19
Oracle oratab location bug [update]
Update on my bug discovery.
Oracle of course knew about it already and had a closed bug report.
Strange that it hasn't been fixed.
In addition to editing dbhome Oracle suggested to edit the dbshut and dbstart scripts as well (I always write my own init.d scripts).
Bug ID 4108039
Case closed
Monday, July 18
Oracle oratab location bug
Ok, so I finally got around to confirming my suspicion of a tiny bug in Oracle on Linux.
Tested and confirmed on the default 10.1.0.3 installation and I'm pretty sure I've seen it on at least 9.2.0.4.
Problem description
If the directory /var/opt/oracle exists during the Oracle installation, the dbhome script will have the parameter ORATAB set to "/var/opt/oracle/oratab" instead of "/etc/oratab". The normal DBCA process during a default installation will still add the instance entry to /etc/oratab.
This causes the oraenv script to fail to find the ORACLE_HOME, so it prompts the user. oraenv calls dbhome to determine the ORACLE_HOME for the selected instance.
Workaround
Modify the dbhome script(s) in your "common" path (usually /usr/local/bin) and in $ORACLE_HOME/bin.
Change line 47 from
ORATAB=/var/opt/oracle/oratab
to
ORATAB=/etc/oratab
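To patch both copies in one go, something like this should work (just a sketch, assuming GNU sed and the default locations):

sed -i 's|^ORATAB=/var/opt/oracle/oratab|ORATAB=/etc/oratab|' \
    /usr/local/bin/dbhome $ORACLE_HOME/bin/dbhome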
I'll submit the TAR to Oracle and see what they say about it. Tiny bug, I'm quite sure that not many people create /var/opt/oracle on Linux.
Extracting Oracle object DDL
It is often useful to be able to get the DDL for existing schema objects. One easy method is to do an export of the schema in question and import it to an indexfile; the other method is to use the dbms_metadata package. The latter method is probably faster and the preferred way.
Example: how to get the scott.emp DDL
set heading off
set pages 0
set long 1000
select
dbms_metadata.get_ddl('TABLE','EMP','SCOTT')
from
dual
/

I wrote a small script to export a full schema; just connect as the schema owner and run the script. [ View ].
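The script isn't reproduced here, but the core of it boils down to running get_ddl over user_tables; a minimal sketch (the LONG size is arbitrary):

set heading off
set pages 0
set long 100000
select
 dbms_metadata.get_ddl('TABLE',table_name)
from
 user_tables
/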
Read Using DBMS_METADATA documentation.
Saturday, July 16
Summer time is maintenance time
Upgrades
People are on vacation and system usage is quite low, perfect for some upgrades!
I took the opportunity to upgrade our last 8i databases, so we are officially (almost) 8i free. There is one old Sun box running an 8.1.7 system in India that no one really cares about; it holds a quite big database of historical data and sees perhaps 2 logins per month.
The lowest production release we have now is 9.2.0.4 which I plan to upgrade to 9.2.0.6 sometime in the coming weeks.
A few old RedHat 2.1/9 systems also got upgraded to RHEL4 or CentOS 4 (yes, sick and tired of paying RedHat the big dosh for terrible "support").
The older Solaris systems keep ticking with old releases that are still fully supported by Sun, without any hassle or weird comments from support. We should use Solaris more.
Oracle CPU licensing policies
I browsed through the Oracle price list and noticed that multi-core CPUs are charged more than single-core ones. My position is that one CPU is one socket, and I think most companies agree with me, but not Oracle :(
Fortunately for us, single-CPU systems with multiple cores are treated as 1 CPU when running SE/SE One, but on larger systems you have to multiply the number of cores by 0.75 to get the number of CPU licenses to purchase. That takes some of the glory from the upcoming v80z from Sun: a four-CPU system with dual-core chips has eight cores, and 8 x 0.75 = 6, so it will be charged as a six-CPU box, meaning about $80k in extra cost!
This is not cool, Oracle needs to re-think this!
Thursday, July 14
10gR2 features part#1
Spent some time playing with the 10gR2 instance today; some things are quite nice. Sort operations are definitely faster, several of our reporting queries are 10-20% faster, and the new functionality of rman is truly amazing: tape library support etc.
Example: how to use restore points.
Ok, to be honest this could "almost" be done in 10gR1 as well; we did have flashbacks to SCNs and timestamps, but this is easier to manage.
You must have the database in a media recovery state (archivelog enabled).
[ enable database flashbacks (if you haven't done so already) ]
SQL> SHUTDOWN TRANSACTIONAL;
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=720 scope=spfile;
SQL> ALTER DATABASE FLASHBACK ON;
SQL> ALTER DATABASE OPEN;
[ create a database restore point ]
SQL> CREATE RESTORE POINT TEST_POINT1 GUARANTEE FLASHBACK DATABASE;
SQL> SELECT NAME, TIME FROM V$RESTORE_POINT;
[ do stuff ]
[ damn, broke things ]
[ let's flashback the database to its previous state ]
SQL> SHUTDOWN TRANSACTIONAL;
SQL> STARTUP MOUNT;
SQL> FLASHBACK DATABASE TO RESTORE POINT TEST_POINT1;
SQL> ALTER DATABASE OPEN RESETLOGS;
SQL> DROP RESTORE POINT TEST_POINT1;

Oracle documentation for restore points.
Monday, July 11
Oracle 10g Release 2
So we finally have the second release of Oracle 10g available for download from the Oracle website. It's actually been available since June 6th, but I've been tied up in other things so I haven't had time to look at it until now. Anyway.
I installed it on one of our test boxes and the installation hit a few problems. First gcc/ld failed with a missing libstdc++.so.5 error; I had to install another compat library, and retrying the link worked fine.
Secondly dbca failed with a boring "TNS: lost connection" error. A quick look in the log files showed that sqlplus died with a missing libaio error (yes, the pre-reqs should have checked for this and warned if the RPM was missing). After installing the RPM I ran dbca again and it finished without problems.
10gR2 pretty much feels like normal 10g on the surface, the cool stuff is deeper down. I'm going to try out some of the new features later today or tomorrow, and no, I'm not going to write any stored procedures in VB. ;-)
One thing I noticed is that 10gR2 feels a bit "lighter" than 10g: it uses a tad more memory, but it feels faster on my quite slow 800MHz test box than 10g does.
View the 10g Release 2 documentation library.
Command to check for the two missing RPMs (required on my FC4 system):
rpm -q compat-libstdc++-33 libaio
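If either is missing, something like this should pull them in (assuming a yum-based setup like my FC4 box):

yum install compat-libstdc++-33 libaio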
Saturday, July 9
Wasting CPU cycles
A friend asked me to have a look at one of his Linux servers that has had some performance problems over the last few weeks. The machine collects and does simple data processing, mostly in SQL, but there are a few shell scripts launched from cron every few minutes. When looking at the ps output there were a lot of those shell scripts running, continuously forking processes over and over, which of course slowed down the machine. The shell scripts themselves were quite simple: open one file and output to another. All data processing was done with the classic shell utils (cut, sed, wc, tr etc.). What people seem to forget is that bash has quite a nice array of built-in features that require way fewer resources and don't fork any child processes. I simply changed perhaps 15-20 lines in a few shell scripts and the load avg. went down quite a bit.
The main features that can save CPU cycles are for variable manipulation. Things like the classic cut -d, -f1 can be replaced by ${VARNAME//,*} and wc -c can be replaced with ${#VARNAME}. It is highly recommended to study the bash man-page if you frequently use shell-scripts on production systems.
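As a quick illustration (variable names made up):

CSV="alpha,beta,gamma"

# the classic way: each assignment forks a subshell plus an external command
FIRST=$(echo "$CSV" | cut -d, -f1)
LEN=$(echo -n "$CSV" | wc -c)

# bash built-ins: same results, no forks at all
FIRST=${CSV//,*}
LEN=${#CSV}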
I wrote a small script to demonstrate two different ways to get fields from a CSV value (view).
Here are the results for 100.000 processed values (2x50.000).
[hali@halidell rep]$ ./fields.sh
Running 50000 tests...
Method one took 170 seconds
Method two took 5 seconds
[hali@halidell rep]$
Friday, July 8
Busy busy
I've been quite busy the last few days; I've been trying to sort out a few things at our India site, and then our head Java developer here in the UK left, so it's been a couple of hectic days.
Then the terrible London bombing occurred yesterday. I was on a Circle line train at Paddington when the blast at Edgware Road happened, and we were all asked to leave the station in a rush. I wasn't aware of the magnitude of the incident until I got to work (walked) and read about it online.
The Indian news channel Aaj Tak found out that I had been near one of the explosions (we are in the same building as them in India) so they wanted to interview me on live television, only 34 million viewers(!!!).
Site recommendation
Cool article about Oracle time and date usage.
http://forums.dbasupport.com/oracle/ora9i/TimeZone.shtml
Music recommendation
Some soft blues from Taj Mahal
Especially the song "Ain't gwine to whistle dixi" is so soft and well performed, I've been listening to it in my Grado headphones all morning (the recording is very good as well).
Monday, July 4
Get the machine type in Linux
Found a new useful command in Linux, similar to the prtconf command in Solaris.
I've been looking for some way to find out the machine type in Linux, and after much seeking I found dmidecode. It's not uncommon for me to get asked to "check out" a server in India or elsewhere in the world, and it's always frustrating not to know what brand/model it is.
Example from a Dell PowerEdge 650 pizzabox.
[root@monitor root]# dmidecode |grep Prod
Product Name: PowerEdge 650
Product Name: 0W1373
[root@monitor root]#
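The same trick works for other fields; for instance, grab the serial number / service tag like this (output varies per machine, obviously):

dmidecode | grep -i 'Serial Number'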
Saturday, July 2
Pink Floyd @ Live8
Sometimes it's good to live in central London, sometimes it sucks.
Today, it's extremely good!
500 meters to Live8 and the Pink Floyd concert at midnight. Coolness!
Too bad I didn't score any tickets (stupid contest!).
Friday, July 1
Oracle I/O tuning (or not to)
When an Oracle table is updated and new inserts happen, it is possible for a row to end up on more than one block, and this will cause more disk I/O than is really necessary. It's not a huge problem in a smaller installation but may become an issue if you have very large tables and lots of fetches.
Here is a small script that a schema owner can use to find chained rows in his or her tables.
column AVG format 99.99
select
table_name,
avg_row_len,
num_rows,
chain_cnt,
chain_cnt/num_rows*100 AVG
from
user_tables
where
chain_cnt >0
/
TABLE_NAME AVG_ROW_LEN NUM_ROWS CHAIN_CNT AVG
-------------- ----------- ---------- ---------- ------
INVENTORY 87 3066 1377 .45
Ok, so we've got a table with quite a high percentage of chained rows.
The first thought in most DBAs' heads would be to resolve the "problem" as soon as possible and start rebuilding things. Why do DBAs always want to rebuild things? Indexes, tables and god knows what. I'd say 75% of all object rebuilds are unnecessary or even damaging (e.g. losing index statistics).
But does it really matter, do we really fetch these rows? Let's check.
select
name,
value
from
v$sysstat
where
name like 'table fetch cont%'
/
NAME VALUE
------------------------------- ----------
table fetch continued row 4000
So there seem to be some problems with chained rows. We now need to either rebuild the table or simply delete and re-insert the chained rows. The latter solution is of course preferred but may not be possible due to constraints that cannot be dropped.
Oracle provides nice tools to find chained rows and re-insert them.
A quick guide on how to do that is provided here: Eliminating Migrated or Chained Rows in a Table
To create the CHAINED_ROWS table that is mentioned in the above article, issue this command at your SQL*Plus prompt.
@?/rdbms/admin/utlchain
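The gist of the approach is something like this sketch, using the INVENTORY table from the example above (assuming no constraints or triggers get in the way; test it somewhere safe first):

ANALYZE TABLE inventory LIST CHAINED ROWS INTO chained_rows;

-- copy the chained rows aside, delete the originals, re-insert
CREATE TABLE inventory_tmp AS
  SELECT * FROM inventory
  WHERE rowid IN (SELECT head_rowid FROM chained_rows
                  WHERE table_name = 'INVENTORY');

DELETE FROM inventory
  WHERE rowid IN (SELECT head_rowid FROM chained_rows
                  WHERE table_name = 'INVENTORY');

INSERT INTO inventory SELECT * FROM inventory_tmp;
DROP TABLE inventory_tmp;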
There is also a quite nice thread about this over at Ask Tom.
Update:
To see all your basic row fetch stats, use this query.
select
name,
value
from
v$sysstat
where
name like 'table fetch%'
or name like 'table scan%gotten'
/