

More snaps
CREATE TABLE tickets (
ID serial,
TICKET_CODE char(10),
ISSUE_DATE timestamp,
VALID char(1),
REDEEM_DATE timestamp,
CAMPAIGN_ID int
);
[ insert data (90,000 rows in this case) ]
CREATE INDEX tickets_valid_idx ON tickets (redeem_date,valid)
WHERE valid='Y' AND redeem_date is null;
ANALYZE TICKETS;
The downside is of course, as with all indexes, that updates and inserts to the table will be a bit slower. We do bulk inserts so that's not a huge problem. One concern, though, is that when a ticket is used we set VALID to 'U' and REDEEM_DATE to now(), so the row in question must be removed from the index; this could lead to quite fragmented indexes. Might be worth keeping an eye on it for the first few months.
testdb=> explain select count(*) from tickets where
VALID='Y' and redeem_date is null;
QUERY PLAN
-------------------------------------------------------
Aggregate (cost=192.91..192.91 rows=1 width=0)
-> Index Scan using tickets_valid_idx on tickets
(cost=0.00..192.79 rows=48 width=0)
Filter: (("valid" = 'Y'::bpchar)
AND (redeem_date IS NULL))
(3 rows)
testdb=> explain select count(*) from tickets where
VALID='N' and redeem_date is null;
QUERY PLAN
-------------------------------------------------------
Aggregate (cost=2715.14..2715.14 rows=1 width=0)
-> Seq Scan on tickets
(cost=0.00..2714.12 rows=403 width=0)
Filter: (("valid" = 'N'::bpchar)
AND (redeem_date IS NULL))
(3 rows)
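If index fragmentation from redeemed tickets does become a problem, you can watch the index size and rebuild it when needed. A minimal sketch, using the names from the example above (pg_size_pretty/pg_relation_size need PostgreSQL 8.1 or later):

```sql
-- Check how big the partial index has grown.
SELECT pg_size_pretty(pg_relation_size('tickets_valid_idx'));

-- If it keeps growing while the number of valid tickets shrinks,
-- rebuild it (this locks the table while it runs).
REINDEX INDEX tickets_valid_idx;
```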
Read the Oracle Data Pump Overview docs.
USERID="/ as sysdba"
LOGFILE=import.log
DIRECTORY=tempdir
DUMPFILE=dp050720.dmp
REMAP_SCHEMA=appschema1:app
REMAP_TABLESPACE=users:data,data1:data,data2:data,index1:indx
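Assuming the parameters above are saved to a parfile (the filename import.par here is just an example), the import would be started with something like:

```shell
impdp parfile=import.par
```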
Change line 47 from
ORATAB=/var/opt/oracle/oratab
to
ORATAB=/etc/oratab
I wrote a small script to export a full schema; just connect as the schema owner and run the script. [ View ]
set heading off
set pages 0
set long 1000
select
dbms_metadata.get_ddl('TABLE','EMP','SCOTT')
from
dual
/
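The linked script isn't reproduced here, but a minimal sketch along the same lines, run as the schema owner, would feed get_ddl from user_tables instead of naming each table:

```sql
set heading off
set pages 0
set long 100000
select
  dbms_metadata.get_ddl('TABLE', table_name)
from
  user_tables
/
```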
Oracle documentation for restore points.
[ enable database flashbacks (if you haven't done so already) ]
SQL> SHUTDOWN TRANSACTIONAL;
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=720 scope=spfile;
SQL> ALTER DATABASE FLASHBACK ON;
SQL> ALTER DATABASE OPEN;
[ create a database restore point ]
SQL> CREATE RESTORE POINT TEST_POINT1 GUARANTEE FLASHBACK DATABASE;
SQL> SELECT NAME, TIME FROM V$RESTORE_POINT;
[ do stuff ]
[ damn, broke things ]
[ let's flash the database back to its previous state ]
SQL> SHUTDOWN TRANSACTIONAL;
SQL> STARTUP MOUNT;
SQL> FLASHBACK DATABASE TO RESTORE POINT TEST_POINT1;
SQL> ALTER DATABASE OPEN RESETLOGS;
SQL> DROP RESTORE POINT TEST_POINT1;
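To confirm that flashback is actually on, and see how far back you could go, these v$ views are handy:

```sql
SQL> SELECT FLASHBACK_ON FROM V$DATABASE;
SQL> SELECT OLDEST_FLASHBACK_SCN, OLDEST_FLASHBACK_TIME
     FROM V$FLASHBACK_DATABASE_LOG;
```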
rpm -q compat-libstdc++-33 libaio
[hali@halidell rep]$ ./fields.sh
Running 50000 tests...
Method one took 170 seconds
Method two took 5 seconds
[hali@halidell rep]$
[root@monitor root]# dmidecode |grep Prod
Product Name: PowerEdge 650
Product Name: 0W1373
[root@monitor root]#
column AVG format 99.99
select
table_name,
avg_row_len,
num_rows,
chain_cnt,
chain_cnt/num_rows*100 AVG
from
user_tables
where
chain_cnt >0
/
TABLE_NAME AVG_ROW_LEN NUM_ROWS CHAIN_CNT AVG
-------------- ----------- ---------- ---------- ------
INVENTORY               87       3066       1377  44.91
select
name,
value
from
v$sysstat
where
name like 'table fetch cont%'
/
NAME VALUE
------------------------------- ----------
table fetch continued row 4000
@?/rdbms/admin/utlchain
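The utlchain script creates the CHAINED_ROWS table; with that in place you can list the actual chained rows per table, for example:

```sql
ANALYZE TABLE inventory LIST CHAINED ROWS INTO chained_rows;
select count(*) from chained_rows where table_name = 'INVENTORY';
```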
select
name,
value
from
v$sysstat
where
name like 'table fetch%'
or name like 'table scan%gotten'
/