I probably get an email every couple of weeks about this one, not sure why.
E-mails about something in Solaris not working on Dell PowerEdge 9th generation servers.
For any device driver related issues with Solaris on x86, the first resource to check if you run into trouble is the Sun HCL, the 'hardware compatibility list'.
Solaris 10 works fine on most Dell boxes, but some need NIC or HBA drivers.
For a Dell 2950 it tells us that Solaris works from release 11/06 upwards; it also tells us to download the Ethernet driver here and the MegaRAID SAS driver here.
NB: the Sun HCL does not mention the new 'III' series 9th gen servers yet. I would guess they are working on it, as Sun and Dell recently partnered to provide better Solaris support. The PERC6 card may need some new drivers.
Thursday, December 20
Wednesday, November 28
Converting MySQL "on update current_timestamp" to Oracle
Another short, simple SQL for everyone out there in the process of converting old MySQL schemas to Oracle.
MySQL has a built-in feature to automatically update a column to the current timestamp whenever the row is updated, just by using the default clause. The "on update current_timestamp" feature can be quite handy if you have lazy developers who can't be bothered writing full insert statements. :)
The MySQL create table statement would be something like this:
create table p (
id int,
a varchar(10),
d timestamp DEFAULT CURRENT_TIMESTAMP
ON UPDATE CURRENT_TIMESTAMP,
constraint p_pk primary key (id)
);
Not difficult to do in Oracle either, but we need a trigger to assist.
Example:
SQL> alter session set nls_Date_format='HH24:MI:SS';
Session altered.
SQL> create table p (id number, a varchar2(10), d date default sysdate,
constraint p_pk primary key (id));
Table created.
SQL> insert into p(id,a) values(1,'test');
1 row created.
SQL> host cat p_test.sql
CREATE OR REPLACE TRIGGER p_d_trig
BEFORE UPDATE ON p
FOR EACH ROW
BEGIN
select sysdate into :new.d from dual;
END p_d_trig;
/
SQL> @p_test
Trigger created.
SQL> select * from p;
ID A D
---------- ---------- --------
1 test 21:15:05
SQL> update p set a='foo' where id=1;
1 row updated.
SQL> select * from p;
ID A D
---------- ---------- --------
1 foo 21:16:44
SQL>
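A side note on the trigger: the SELECT INTO from dual is just old habit; a plain assignment in the trigger body does the same job without the extra query:

BEGIN
  :new.d := sysdate;
END p_d_trig;
/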
Thursday, November 1
Oracle SQL to return an alphabetical subset
Yes, I know, I've been waaay bad at blogging lately. Been busy working on new projects at work, upgrading applications to Java 1.5 containers (WebSphere 6.1, WebLogic 9.2 etc.). On the fun side we've got an Azul Java-acceleration box that really needs a couple of blog entries!
Anyway, got asked if there was a way to return a resultset of data based on the leading character; the use case was to ignore all strings starting with a to d and return e to z.
Fairly straightforward but a good SQL to have.
I simply grab the first character in the varchar and compare its ASCII value to the ASCII value of D.
SQL> select * from t;
A
----------
Atest
Btest
Etest
Htest
Wtest
Dtest
dtest
SQL> with data as (
2 select ascii(upper(substr(a,1,1))) a_val,a from t
3 )
4 select * from data where a_val not between ascii('A') and ascii('D') order by a
5 /
A_VAL A
---------- ----------
69 Etest
72 Htest
87 Wtest
SQL>
Tuesday, August 21
Oracle 11g: Archive log alert log output
Starting with Oracle 11g, archive log activity is no longer logged to the alert log by default. For good and bad in my opinion; nice to get rid of the log entries on smaller databases, but it can be quite useful to see the details on larger systems with multiple log destinations.
The log level is simply a database parameter and can easily be changed; look at this page to figure out what level you want. I figured 79 would be a good starting point (64+8+4+2+1). Databases running Data Guard should probably use 207 (add 128) to include FAL service details.
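For reference, the levels are additive bitmask values; from memory (the docs page above has the authoritative list):
1 - track archival of redo log files
2 - track archival status per archivelog destination
4 - track archival operational phase
8 - track archivelog destination activity
16 - track detailed archivelog destination activity
32 - track archivelog destination parameter modifications
64 - track ARCn process state activity
128 - track FAL (fetch archived log) server activity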
How to set the trace level:
[oracle@vm-rac1 ~]$ rsqlplus "/ as sysdba"
SQL*Plus: Release 11.1.0.6.0 - Production on Sun Aug 19 02:35:23 2007
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> show parameter log_archive_trace
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
log_archive_trace integer 0
SQL> alter system set log_archive_trace=79;
System altered.
SQL> alter system switch logfile;
System altered.
SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@vm-rac1 ~]$ talert # my shell alias to tail the current alert log
Current log# 1 seq# 7 mem# 0: /u01/app/oracle/oradata/test11g/redo01.log
Sun Aug 19 02:03:14 2007
ARC3: Evaluating archive log 3 thread 1 sequence 6
ARC3: Beginning to archive thread 1 sequence 6 (627362-628594) (test11g)
ARC3: Creating local archive destination LOG_ARCHIVE_DEST_10:
'/u01/app/oracle/flash_recovery_area/TEST11G/archivelog/2007_08_19/o1_mf_1_6_%u_.arc' (thread 1 sequence 6)
(test11g)
ARC3: Creating local archive destination LOG_ARCHIVE_DEST_1:
'/u01/app/oracle/11.1.0/db_1/dbs/arch1_6_630935902.dbf' (thread 1 sequence 6)
(test11g)
ARC3: Closing local archive destination LOG_ARCHIVE_DEST_10:
'/u01/app/oracle/flash_recovery_area/TEST11G/archivelog/2007_08_19/o1_mf_1_6_3dh5pld1_.arc'
(test11g)
Committing creation of archivelog
'/u01/app/oracle/flash_recovery_area/TEST11G/archivelog/2007_08_19/o1_mf_1_6_3dh5pld1_.arc'
ARC3: Closing local archive destination LOG_ARCHIVE_DEST_1:
'/u01/app/oracle/11.1.0/db_1/dbs/arch1_6_630935902.dbf'
(test11g)
Committing creation of archivelog '/u01/app/oracle/11.1.0/db_1/dbs/arch1_6_630935902.dbf'
ARC3: Completed archiving thread 1 sequence 6 (627362-628594) (test11g)
Read details about the log trace level here and general alert log related stuff here.
Sunday, August 19
Oracle 11g: basic rundown
Ok, so 11g has been out some time but I've been too busy to really dig into it until now. 11g is probably the first release that doesn't really bring many new features; no, before you send me angry e-mails, hear me out :)
Take a look at the new features list. What do most of these features do? Easy, they just make the most out of the fairly old but totally outstanding REDO and UNDO management in Oracle. Why not use what we've already got? Ok, some things are totally new features, like the improved partitioning and new data types for the medical and life-sciences industry.
Partitioning and compression
The coolest thing in 11g is probably the VLDB features for partitioning and compression, which can really reduce storage costs (at the same time as storage prices are plummeting with the competition from iSCSI). It is now possible to create "automatic" partitions, i.e. instead of adding a partition every month with DDL before doing a bulk load, Oracle can automatically partition the table as per predefined rules. Pretty cool and saves a lot of time.
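A rough sketch of what the new interval partitioning looks like (table and column names are made up for illustration):

-- Oracle creates a new monthly partition automatically the first
-- time a row arrives for a month that has no partition yet.
create table sales (
  sale_id   number,
  sale_date date,
  amount    number
)
partition by range (sale_date)
interval (numtoyminterval(1, 'MONTH'))
(
  -- only the first partition needs to be defined up front
  partition p_first values less than (date '2007-01-01')
);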
Direct NFS
Another pretty cool feature, at least for me as I'm using a lot of NetApp NFS storage, is the Direct NFS client in Oracle (Linux). Switch to the Direct NFS lib and Oracle will do NFS ops outside the Linux kernel, which removes the NFS caching layer and kernel block device mapping etc. Gives quite a massive performance increase on some workloads. This is what I'm playing with most at the moment.
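If you want to try it, the setup is roughly this on 11gR1/Linux (double-check paths against your own install; the filer name and addresses below are made up). Describe the mounts in oranfstab and swap in the dNFS ODM library:

# $ORACLE_HOME/dbs/oranfstab - one entry per filer
server: netapp1
path: 192.168.0.10
export: /vol/oradata mount: /u02/oradata

# Replace the default ODM library with the Direct NFS one
cd $ORACLE_HOME/lib
cp libodm11.so libodm11.so_stub
ln -sf libnfsodm11.so libodm11.so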
Data Guard
Lots of new cool stuff for Data Guard. Most significant is probably the fact that you can have an open standby, available for real-time queries such as reporting applications and other read-only operations. In fact, you can even open a Data Guard instance in read-write mode, mess about with it, and later just flashback to an SCN where the db was in sync with the Data Guard master db and roll forward to the current SCN using the normal FAL services. How sweet is that for DR testing and schema change testing? DR testing during live operations; that's a couple of late nights saved.
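The moving parts, roughly, in 11g syntax (run on the standby; a sketch rather than a full runbook):

-- Real-time query: open the standby read only, with redo apply running
alter database recover managed standby database cancel;
alter database open read only;
alter database recover managed standby database using current logfile disconnect;

-- DR/schema-change testing: make the standby read-write, then roll it back
alter database convert to snapshot standby;  -- implicit guaranteed restore point
-- ... test away ...
alter database convert to physical standby;  -- flashback and resync via normal redo shipping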
Other new data guard features include redo shipping compression, the normal improvements to the DG broker, sqlplus manageability etc.
Workload replay
Another thing that is really useful for deployment testing and change control is the database replay feature: one can "record" all transactions over a period of time and replay them after a change has happened to ensure performance and functionality have not degraded.
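The rough shape of it, assuming a directory object called CAPTURE_DIR (name made up):

-- On the source system
SQL> exec dbms_workload_capture.start_capture(name => 'pre_change', dir => 'CAPTURE_DIR');
-- ... let the workload run ...
SQL> exec dbms_workload_capture.finish_capture;

-- On the test system, after copying and preprocessing the capture files
SQL> exec dbms_workload_replay.process_capture('CAPTURE_DIR');
SQL> exec dbms_workload_replay.initialize_replay('pre_change', 'CAPTURE_DIR');
SQL> exec dbms_workload_replay.prepare_replay;
-- the external wrc clients then drive the actual replay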
Improved ASM features
You can now bring online a replaced drive and new writes will go directly to the drive while Oracle syncs data from a redundant mirror in the background. Reduces resync time by quite a fair bit.
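Something like this (diskgroup and disk names made up):

-- How long a disk may stay offline before ASM drops it from the group
alter diskgroup data set attribute 'disk_repair_time' = '3.6h';

-- After the drive is repaired or replaced: only changed extents
-- are copied back from the surviving mirror
alter diskgroup data online disk data_0003;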
PL/SQL
A couple of improvements to PL/SQL (quick sketch below);
Use of sequences in declares.
Real compound triggers.
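Both in one quick sketch (sequence, table and trigger names made up):

-- 11g: sequences work directly in PL/SQL expressions, declares included
declare
  v_id number := my_seq.nextval;  -- no more select my_seq.nextval into ... from dual
begin
  dbms_output.put_line(v_id);
end;
/

-- A compound trigger: all timing points share one body and one state
create or replace trigger t_comp
for insert on t
compound trigger
  v_rows pls_integer := 0;  -- shared between the sections below
  after each row is
  begin
    v_rows := v_rows + 1;
  end after each row;
  after statement is
  begin
    dbms_output.put_line(v_rows || ' rows inserted');
  end after statement;
end t_comp;
/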
New data types
Not really my thing but probably useful for a lot of people.
Spatial and multimedia support for medical imaging, life-sciences and other fairly odd things. :) Docs here.
A couple of minor things have changed as well: the alert log lives in a new place (with its buddies in the new diag system), and the logging levels for the alert log can now be adjusted as well (I've got a blog entry about that coming in a couple of days).
Lots of fun things to play with and to blog about.
Ah, almost forgot. Larry pulled a version hack on us again. As some of you may know, there was never an Oracle version 1 release; the first version was called 2 since Larry Ellison didn't think anyone would buy version 1. The first (public) 11g release is 11.1.0.6.
Saturday, August 11
Oracle Database 11g available for download
Well, it's about time. Oracle finally made 11g available for download. Only 32-bit Linux so far though and I have a feeling we'll have to wait a while for most other platforms (possibly a 64-bit Linux download soon).
Download it here.
Lots of new cool stuff to blog about, I'm away on holiday for a week but my recently upgraded lab machines at home are sitting there waiting. Fun times when I get home.
Monday, July 30
Limit user sessions in Oracle
Developers sometimes enjoy configuring their JDBC datasources to their own liking, tackling a performance issue by having a gazillion connections to the database "just in case we need them".
Well, in my opinion, if you need what is obviously too many connections, then there is a coding issue.
So to make people think twice before flooding a service with connections I've started limiting the number of sessions they are allowed to have.
Easy as always to do in Oracle. First we allow resource limits by setting resource_limit to true, then we simply create a profile and assign it to the user. The profile can set the limit on how many sessions a user is allowed to have.
A little demonstration;
oracle@htpc:~$ rsqlplus "/ as sysdba"
SQL*Plus: Release 10.2.0.1.0 - Production on Mon Jul 30 21:56:12 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
SQL> show parameter resource_limit
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
resource_limit boolean FALSE
SQL> alter system set RESOURCE_LIMIT=true scope=both;
System altered.
SQL> create profile sesslimit limit sessions_per_user 2;
Profile created.
SQL> create user fish identified by fish123
2 default tablespace data profile sesslimit;
User created.
SQL> grant create session to fish;
Grant succeeded.
SQL> connect fish/fish123
Connected.
SQL> disconnect
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
-- I'll start two sessions in another terminal.
SQL> connect fish/fish123
ERROR:
ORA-02391: exceeded simultaneous SESSIONS_PER_USER limit
SQL>
Sunday, July 8
Using SERVICE_NAMES in Oracle
The use of "SERVICE_NAMES" in Oracle is quite an old and probably well-known feature, but perhaps not everyone is familiar with it yet.
Got asked today about a recovery scenario: the administrator had a failed instance (broken data files, no logs, no backups, just a nightly exp) and a new database had been created with 'dbca', but with a new name, to test importing the exp file.
All worked fine, but there was a problem with the database name. The application had the service name set in a number of config files and there were also a number of ETL scripts with service names hardcoded. The thinking at the time was to delete the old instance, remove all traces of it (oratab etc.) and then create it *again* with the same name.
Now hold on here, we have tested the imp in a new database, all is fine and all we want to do is allow connections to the old database instance name?
That's pretty much a one-liner, not a new database.
We can simply add the new name we want to "listen on" to the SERVICE_NAMES parameter.
Easy peasy.
Ok, here is what we should do. Quite a long example for something simple.
But hey, just want to make it clear.
oracle@htpc:admin$ lsnrctl status
-- What's the current db_name and service_names?
LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 08-JUL-2007 18:31:22
Copyright (c) 1991, 2005, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date 08-JUL-2007 18:23:39
Uptime 0 days 0 hr. 7 min. 43 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/oracle/10g/network/admin/listener.ora
Listener Log File /u01/oracle/10g/network/log/listener.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=htpc)(PORT=1521)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "peggy" has 2 instance(s).
Instance "peggy", status UNKNOWN, has 1 handler(s) for this service...
Instance "peggy", status READY, has 1 handler(s) for this service...
Service "peggyXDB" has 1 instance(s).
Instance "peggy", status READY, has 1 handler(s) for this service...
Service "peggy_XPT" has 1 instance(s).
Instance "peggy", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle@htpc:admin$ rsqlplus hlinden/hlinden as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jul 8 18:31:24 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
SQL> show parameter db_name
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_name string peggy
SQL> show parameter service_names
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
service_names string peggy
SQL>
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
-- What can we connect to?
oracle@htpc:admin$ rsqlplus hlinden/hlinden@//htpc/peggy
SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jul 8 18:31:53 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
SQL>
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
oracle@htpc:admin$ rsqlplus hlinden/hlinden@//htpc/dog
SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jul 8 18:31:58 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor
Enter user-name:
-- Ouch, that's the one we want!
oracle@htpc:admin$ rsqlplus hlinden/hlinden as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jul 8 18:32:01 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
-- Here is the 'one-liner'
SQL> alter system set service_names='peggy,dog' scope=spfile;
System altered.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 599785472 bytes
Fixed Size 2022600 bytes
Variable Size 167772984 bytes
Database Buffers 423624704 bytes
Redo Buffers 6365184 bytes
Database mounted.
Database opened.
SQL>
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
-- Let's see what changed. What can we connect to now?
oracle@htpc:admin$ lsnrctl status
LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 08-JUL-2007 18:33:57
Copyright (c) 1991, 2005, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date 08-JUL-2007 18:23:39
Uptime 0 days 0 hr. 10 min. 18 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/oracle/10g/network/admin/listener.ora
Listener Log File /u01/oracle/10g/network/log/listener.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=htpc)(PORT=1521)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "dog" has 1 instance(s).
Instance "peggy", status READY, has 1 handler(s) for this service...
Service "peggy" has 2 instance(s).
Instance "peggy", status UNKNOWN, has 1 handler(s) for this service...
Instance "peggy", status READY, has 1 handler(s) for this service...
Service "peggyXDB" has 1 instance(s).
Instance "peggy", status READY, has 1 handler(s) for this service...
Service "peggy_XPT" has 1 instance(s).
Instance "peggy", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle@htpc:admin$ rsqlplus hlinden/hlinden@//htpc/dog
SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jul 8 18:34:18 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
-- It works, but where are we?
SQL> select sys_context('userenv','SERVICE_NAME') from dual;
SYS_CONTEXT('USERENV','SERVICE_NAME')
------------------------------------------------------------------------------------------------------------------------
dog
SQL> select sys_context('userenv','DB_NAME') from dual;
SYS_CONTEXT('USERENV','DB_NAME')
------------------------------------------------------------------------------------------------------------------------
peggy
Sunday, July 1
The new top 500 supercomputer list is out!
Finally had some time to look at the latest Top500 supercomputer list, released at ISC '07 last week.
Lots of changes compared to the old list, and two quite interesting new things. First of all, Intel multi-core chips; secondly we have blades, loads and loads of blades.
A significant number of entries on the list are using Intel Xeon 5160 3.0GHz dual-core CPUs and there are also a number of quad-core systems.
Another interesting point is that the crown in terms of number of systems on the list has been passed from IBM (with 192 systems) to HP (with 201 systems).
The HP Cluster Platform BL460c holds quite a few spots on the list. Can't say I've heard much about HP's HPC offering so far, they're not exactly beating the drum on that one (compared to the IBM guy who's calling me every 2 months).
SGI holds a few new slots on the list as well, which is extra fun to see considering the hard year SGI has had. One Altix system even made it to number 10 on the list (with Columbia at number 13).
Thursday, June 28
Dell teaming with Oracle Enterprise Linux?
Just noticed that Dell changed the "Red Hat Enterprise Linux" label on their support website to just "Enterprise Linux".
Is this the first step for Dell embracing Oracle Enterprise Linux?
Sunday, June 24
Sun 2002 flashback
We've been doing some shuffling around of hardware at work and I've recommissioned some old Sun hardware for a new QA environment.
Oh the memories I have of old Sun kit :)
The most exciting thing I've installed is a Sun v480 connected to a T3 brick. That combo was pretty much the industry standard Oracle solution when I had my first job in "real" systems administration, and it was probably one of the first real mid-range installs I did (not counting ageing 420r/450's).
To be honest, it's a match made in heaven. A quad CPU v480 with a T3 brick (or two) kicked out some real performance numbers in its day. The v480 is probably one of the nicest boxes to work with (considering age and all), the boards are easy to fit, the RSC is awesome; it's simply rock solid. Running Solaris 8 of course (we still have *lots* of clients on Solaris 8, with Solaris 10 gaining ground).
The T3 array is pretty cool, but it brings back the frustrations everyone had with early low-end SANs: the limitation to two-disk RAID volumes with very limited zoning and slicing. But hey, it's pretty old and is still pretty fast.
Now if I only could figure out how to get rid of that darn Sun E250 we still have running Sybase 12.5.4; it just works too well so far.
Thursday, June 14
Reclaiming LOB space in Oracle
Reclaiming space in Oracle can sometimes be a bit of a "problem"; not really a problem, it just works in a funny way. It's quite a common question I get, and users are usually happy with a manual alter table table_name shrink space compact;, but what do we do for LOBs?
We need a manual reclaim for the lob column.
A little demo:
(spinner1)oracle@spinner[~/lob_test]$ rsqlplus hlinden/hlinden
SQL*Plus: Release 10.2.0.2.0 - Production on Thu Jun 14 12:19:02 2007
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
-- Create a table and sequence to play with
HLINDEN@spinner1> create table lob_test (id number, data blob);
Table created.
HLINDEN@spinner1> create sequence lob_test_seq;
Sequence created.
-- Load 50 rows with 1.5Mb blobs (see code below)
HLINDEN@spinner1> @lobload
PL/SQL procedure successfully completed.
-- Find out what our lob segment is called
HLINDEN@spinner1> select object_name,object_type from user_objects where
2 created>sysdate-interval '5' minute and object_type='LOB';
OBJECT_NAME OBJECT_TYPE
------------------------------ -------------------
SYS_LOB0000199575C00002$$ LOB
-- Display the current size of the lob segment
HLINDEN@spinner1> select round(sum(bytes)/1024/1024) Mb from user_segments where
segment_name='SYS_LOB0000199575C00002$$';
MB
----------
75
-- Ok, let's delete those blobs and see what the size is after
HLINDEN@spinner1> delete from lob_test;
50 rows deleted.
HLINDEN@spinner1> select round(sum(bytes)/1024/1024) Mb from user_segments where
segment_name='SYS_LOB0000199575C00002$$';
MB
----------
75
-- Still 75Mb, hm, perhaps it recycled if we insert more data?
HLINDEN@spinner1> @lobload
PL/SQL procedure successfully completed.
HLINDEN@spinner1> select round(sum(bytes)/1024/1024) Mb from user_segments where
segment_name='SYS_LOB0000199575C00002$$';
MB
----------
150
-- Nope, not recycled. We need to issue a shrink command to free up the
-- space immediately
HLINDEN@spinner1> delete from lob_test;
50 rows deleted.
HLINDEN@spinner1> alter table lob_test modify lob (data) (shrink space);
Table altered.
HLINDEN@spinner1> select round(sum(bytes)/1024/1024) Mb from user_segments where
segment_name='SYS_LOB0000199575C00002$$';
MB
----------
0
-- All gone!

My simple blob loading code:

DECLARE
src_file BFILE := bfilename('TMP', 'data.dat');
dst_file BLOB;
lgh_file BINARY_INTEGER;
cur_id NUMBER(10);
BEGIN
FOR i IN 1..50
LOOP
INSERT INTO lob_test(id,data) VALUES(lob_test_seq.nextval,empty_blob())
RETURNING id into cur_id;
-- lock record
SELECT data INTO dst_file FROM lob_test WHERE id=cur_id FOR UPDATE;
dbms_lob.fileopen(src_file, dbms_lob.file_readonly);
lgh_file := dbms_lob.getlength(src_file);
dbms_lob.loadfromfile(dst_file, src_file, lgh_file);
dbms_lob.fileclose(src_file);
END LOOP;
END;
/
Tuesday, June 12
Oracle 11g - one month to go
Sunday, June 10
New blade enclosure from Sun
Ever since Andy Bechtolsheim returned to Sun, pretty much everything in the Sun x64 server line has met all expectations and then some. Sure, there has been the slight Ethernet chipset problem, but generally the AMD kit has really kicked ass.
Though some people were surprised to see the first (new) blade offering from Sun, the 8000 series: a 4-socket blade offering when most of the market had moved away from larger blade enclosures and focused on 2-socket systems. The 8000 still made good sense for larger customers like financial institutions and telcos. Server consolidation on a large level.
Now the other day Sun released a second (third actually, the 8000 comes in two models) blade offering, the 6000 series. It's a fairly basic 10U enclosure, 10 blade slots and the classic Ethernet modules. Not very exciting at first glance, I have to admit. Sounds like the PowerEdge 1955 box but without the same high density (Dell can do 10 blades in 7U).
The cool thing about the 6000 enclosure is that it is not a blade enclosure in the classic sense; it's not "stripped down computers in a box". It's actually 10 high capacity servers. The enclosure offers the same expandability as normal 1U servers.
All blade modules are dual socket, can take up to 16 DIMMS, four 2.5" SAS drives and even two PCI-e slots. That's right, normal PCI-e slots, no more proprietary on board FC-AL or Infiniband modules. Just slot in a couple of standard c-PCIe cards and off you go. No more opening up the blade to install modules.
Another new feature is a hardware RAID controller on-board (for good and bad indeed).
The whole enclosure is based around industry standards and open solutions.
In addition to all this cool stuff the real winner here is the selection of blades.
Sun offers no less than 3 blade types.
The t6300 UltraSPARC T1 blade,
the x6220 AMD blade and
the x6250 Intel Xeon blade
It's the first Intel Xeon (including quad-core procs) offering from Sun, with many more to come.
Watch Andy talk about the new box here.
Wednesday, May 30
Run system commands from Oracle with PL/SQL
A friend of mine asked if it was possible to show the exact Linux kernel version on an Oracle server without actually having shell access to the server.
He had full access to Oracle with sysdba/dba roles etc, but not SSH.
I've seen some versions of executing system commands from Java but never really liked the idea of invoking Java for something simple like that.
One way I thought of would be to use dbms_scheduler to execute a job with an executable job_type. The first problem was to find a way to actually return the standard output from the execution to Oracle.
Ok, so my 'hack' here is a stored procedure (entirely in PL/SQL) that creates a job with dbms_scheduler; calling /bin/sh as the executable and hands it a temporary script to execute. In the script I have a simple redirect to a temporary spool file and then the procedure simply reads and outputs the content of the file. It's a bit of a hack but at least it gets the job done and doesn't use Java.
I haven't drilled down on what kind of permissions you need to actually use the procedure but I suspect it's quite a lot.
The temporary spool file handling in my example is quite poor, but works. :)
A word of warning as usual when using PL/SQL: this code example is a proof of concept. It needs *loads* of error catching etc. in order to be production ready, use with caution.
Example of use:
oracle@htpc:~$ rsqlplus hlinden/password as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Wed May 30 21:55:06 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
SQL> set serveroutput on
SQL> @system_run
Procedure created.
SQL> exec system_run('ls -l /home/oracle/bin');
total 12
-rwxr-xr-x 1 oracle dba 797 Nov 5 2006 backup_controlfile.sh
PL/SQL procedure successfully completed.
SQL> exec system_run('uname -a');
Linux htpc 2.6.20-15-generic #2 SMP Sun Apr 15 06:17:24 UTC 2007 x86_64 GNU/Linux
PL/SQL procedure successfully completed.
SQL>

And here is the procedure code:
CREATE OR REPLACE PROCEDURE system_run(cmd IN varchar2)
IS
script_file varchar2(40) := 'my-temp-script.sh';
script_data varchar2(4000);
MyFile utl_file.file_type;
d varchar2(4000);
dump_file varchar2(40) := 'my-temp-file.dat'; -- lives in /tmp, read back via directory TMP
dump_type utl_file.file_type;
BEGIN
-- Open file
MyFile := utl_file.fopen('TMP',script_file,'w');
-- Write data to file
script_data := '#!/bin/bash' || chr(10) || cmd || ' > /tmp/' || dump_file;
utl_file.put_line(MyFile, script_data, FALSE);
-- Close file
utl_file.fflush(MyFile);
utl_file.fclose(MyFile);
-- Purge old logs, no fun anyway
dbms_scheduler.purge_log(JOB_NAME=>'TEST');
-- Execute script
-- The job is created as disabled as
-- we execute it manually and will
-- drop itself once executed.
dbms_scheduler.create_job(
job_name => 'TEST',
job_type => 'EXECUTABLE',
job_action => '/bin/bash',
number_of_arguments => 1,
start_date => SYSTIMESTAMP,
enabled => FALSE);
dbms_scheduler.set_job_argument_value('TEST', 1, '/tmp/'||script_file);
dbms_scheduler.enable('TEST');
-- Wait for the job to be executed
-- usually done within 1 second but
-- I set it to 2 just in case.
dbms_lock.sleep(2);
-- Open the output file and
-- print the result.
dump_type := utl_file.fopen('TMP',dump_file,'r');
loop
begin
utl_file.get_line(dump_type,d);
dbms_output.put_line(d);
exception
when others then
exit;
end;
end loop;
utl_file.fclose(dump_type);
-- Clean up our temp files
utl_file.fremove('TMP', script_file);
utl_file.fremove('TMP', dump_file);
END;
/
Monday, May 28
Cyrus IMAP file system tuning
Been busy, not enough blogging, bla bla bla. I know.
Just a lot of stuff going on at work at the moment, mergers and integrations.
We've had some problems with one of our IMAP servers at work running Postfix and Cyrus IMAPd. A machine quite well specced for the job: dual Xeons and 3x146GB disks in RAID5 (4x146 in RAID10 would have been nicer). Anyway, the machine has got 50 or so IMAP users and perhaps 100GB of spool data on a ReiserFS partition.
The machine's avg. load has peaked at over 6.00 with about 75% in iowait on a bad day. I suspect that modern fancy e-mail search tools are to blame for the problems, applications building search indexes and such (Apple Mail anyone?).
Monitoring of the server showed quite a lot of inode update activity, even though there isn't *that* much new email coming in.
Must be our old (not so) dear friend atime that's making a little mess; I've used the noatime and nodiratime mount options in the past with great success. Seen performance improvements of a couple of percent.
The mount-options noatime and nodiratime simply disables the feature to update the access timestamp of a file (and directory). I.e. when someone clicks and reads an email in their mail application the inode atime timestamp is updated. When is this atime timestamp used? Never.
Did a quick online remount of the spool fs with noatime and nodiratime.
The result?
Avg. load hasn't touched 1.00 since. Wow! I was expecting an improvement, but not that big.
Great and easy way to improve performance on IMAP spools.
mount -o remount,noatime,nodiratime /var/spool/imap
And don't forget to update /etc/fstab with the same mount options.
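For the curious, the matching fstab entry would look something like this (device name assumed):

/dev/sda3  /var/spool/imap  reiserfs  noatime,nodiratime  0  2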
Cool.
Wednesday, May 9
Insider for Oracle - funky GUI bits
We've been doing a lot of application benchmarking and tuning at work in the last couple of months as we prepare for a new major release. I've been involved in the Oracle side of things and have spent a lot of time tracking down expensive (or plain weird) SQL statements and trying to optimize the Oracle databases we use.
After spending about 6-7 hours a day running my normal sqlplus scripts over and over trying different things and monitoring benchmark runs, I got a bit bored of text output and had a quick google for some sort of basic v$whats_up GUI tool, and found this cool little app called Insider for Oracle. It's a great app for doing spot-on monitoring, sort of like Spotlight from Quest, but without the hassle. Insider tries to be a more proactive tool with round the clock monitoring, but for that, it's no good.
The thing I find it useful for is when we start a benchmark I can trail what's going on with the system. See new SQL statements, wait events, SGA usage, PGA usage, hot segments etc. Nothing I can't do from sqlplus but it makes my tasks a bit easier.
Price wise it's quite a bargain for what I use it for; the license is per database in the application "workspace" (i.e. how many databases you want to look at at once). And for what I use it for, just looking at one DB at a time, that works out pretty cheap, 599 USD or so. Cheap as chips.
I've found a couple of bugs (and submitted bug reports accordingly) and I've even submitted a couple of enhancement suggestions.
Check out their company blog.
Monday, May 7
OpenSolaris SXDE upgrade time
It's been a while since I last played with Solaris Express, so long that it actually changed names. There are now two versions, one for "random people" and one for developers. SXDE is not released as often as SXCE but is more tested and stable; SXDE also comes with a lot more goodies, like Sun Studio 11 and NetBeans.
Anyway, the install of SXDE b63 under VMWare workstation 6 RC was as easy as ever.
However, at first boot I almost thought the VM was frozen for a minute or so, after grub it just sits there with a cursor in the corner of the screen for about 2 minutes before kicking off the kernel. All good after that though.
One thing I wanted to test was the new simplified resource capping in Solaris Zones. It was very straightforward and easy to use.
A couple of extra commands in zonecfg and the zone is capped to one core and 512Megs of memory. No more messing around with projects, except if you want really fine grained capping that is.
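The capping itself boils down to something like this in zonecfg (zone name made up; needs a build with the new resource types):

zonecfg -z qa1
zonecfg:qa1> add dedicated-cpu
zonecfg:qa1:dedicated-cpu> set ncpus=1
zonecfg:qa1:dedicated-cpu> end
zonecfg:qa1> add capped-memory
zonecfg:qa1:capped-memory> set physical=512m
zonecfg:qa1:capped-memory> end
zonecfg:qa1> commit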
Cool!
Took a screenshot:
Monday, April 30
How to do an "insert ignore" in Oracle
Ok, I should start off with a disclaimer. This is not a good idea to do.
Using the "insert ignore" statement is a way to let MySQL insert a dataset which may contain rows that collide with constraints on existing data, and simply skip the duplicate rows.
Sounds like a great way to screw up data doesn't it? Should not be used unless you really know what is going on.
Ok, in MySQL we can do this.
mysql> insert ignore into a select * from b;
Query OK, 1 row affected (0.00 sec)
Records: 2 Duplicates: 1 Warnings: 0
But can this be done in Oracle without too much fuss? After a discussion on IRC a guy asked: why can't we simply use the merge statement?
Well, we can, and it is probably quite a good suggestion if you *really* want to ditch those dupe rows.
-- So what do we got?
SQL> select * from a;
A B
---------- ----------
1 2
2 3
SQL> select * from b;
A B
---------- ----------
1 3
3 4
SQL> merge into a using b on (a.a=b.a)
when not matched then insert values (b.a,b.b);
1 row merged.
SQL> select * from a;
A B
---------- ----------
1 2
2 3
3 4
SQL>

-- Looks ok. Let's rollback that and have a look at the execution plan.
SQL> rollback;
Rollback complete.
SQL> set autotrace on
SQL> merge into a using b on (a.a=b.a)
when not matched then insert values (b.a,b.b);
1 row merged.
Execution Plan
----------------------------------------------------------
Plan hash value: 1973318225
-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------
| 0 | MERGE STATEMENT | | 2 | 64 | 7 (15)| 00:00:01 |
| 1 | MERGE | A | | | | |
| 2 | VIEW | | | | | |
|* 3 | HASH JOIN OUTER | | 2 | 64 | 7 (15)| 00:00:01 |
| 4 | TABLE ACCESS FULL| B | 2 | 52 | 3 (0)| 00:00:01 |
| 5 | TABLE ACCESS FULL| A | 2 | 12 | 3 (0)| 00:00:01 |
-----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("A"."A"(+)="B"."A")
Note
-----
- dynamic sampling used for this statement
Statistics
----------------------------------------------------------
0 recursive calls
5 db block gets
15 consistent gets
0 physical reads
0 redo size
819 bytes sent via SQL*Net to client
778 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed
SQL>
Sunday, April 15
Oracle under CentOS 5
I'm a bit late on this one, but ...
CentOS version 5 was released the other day. For those of you unfamiliar with CentOS it is RedHat Enterprise Linux recompiled and supported by the open source community.
Pretty much RHEL for free.
I can't say there is anything *really* exciting in the new version of RedHat, sure there is Xen support and the general updated versions of the applications included.
New version of Open Office, PostgreSQL etc.
One big difference is that everything now has to be NPTL threaded. NPTL is the first really *good* threading implementation in Linux; the old LinuxThreads model is way old. Although NPTL has been available (and the default) in RHEL since version 3, it was not the only supported threading model until now.
-So what does this mean for Oracle?
No more LD_ASSUME_KERNEL hacks.
10g is fine with this, no problem at all. However, 9i will have some issues: the old way to get Oracle running was simply to set LD_ASSUME_KERNEL to 2.4.19 and Linux would use the LinuxThreads model. No go in version 5.
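The old workaround, for reference (an environment variable set before starting anything Oracle):

export LD_ASSUME_KERNEL=2.4.19

On RHEL5/CentOS 5 this no longer helps, as the LinuxThreads libraries it used to select are simply gone.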
Big question: will Oracle support 9i under RHEL5? I'm honestly not too bothered; by the time RHEL5 is mature and tested I really hope that people have switched to 10gR2.
9iR2 has been around for a long time now and will be for some time. But a new system design implemented on 9iR2?
I've done a quick test install of CentOS 5 and 10gR2 (10.2.0.1 and patchset 10.2.0.2) and the install was very painless, pretty similar to CentOS 4 with a few changes in the RPMs needed.
RedHat did add a few SHM parameters to sysctl.conf so that section needs reviewing though.
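For reference, the usual 10gR2 kernel parameter section in /etc/sysctl.conf looks something like this (values from the install guide; the shm values should be tuned to your RAM):

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000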
Wednesday, April 11
New dual socket AMD system from Dell
Dell just introduced a new two-socket AMD Opteron system, the PowerEdge 2970, a system designed to compete with the Sun x4200 and the HP DL385 G2.
Fairly similar to the Intel-based PowerEdge 2950 but with AMD procs. Same redundancy with dual PSUs, two onboard NICs and the onboard DRAC5 remote management adapter.
Comes default with the 8 slot 2.5" SAS backplane, which is nice to see. I'm all in favour of 2.5" drives over 3.5" in small servers. Up to 32GB RAM, with expensive 4GB DIMMs that is; 32GB comes in at about $35,000 :)
Perfect little J2EE application server running Linux.
It's interesting to see the latest trends in that area. A number of our clients, major investment banks, are currently in the middle of Linux migration projects.
One thing I think has slowed Linux adoption in the past is the "32-bit barrier" and the limit it puts on memory allocation. A J2EE app with a maximum heap size of about 1.5GB isn't very popular. 8 or even 16GB is not unusual to see on the Solaris/SPARC side.
Now with AMD Opteron processors (and Intel EM64T) and a mature 64-bit Linux environment that problem has gone away. Websphere 6.0.2 runs perfectly on my test machine with 6Gb heap, performance is great.
Actually, all but one of our "Linux clients" are moving to AMD machines over Intel Xeons; HP and Sun AMD-based hardware only so far.
One good example is a project currently being migrated from a six-processor Sun v890 to an HP DL585; they even got a small performance increase.
Sad to see the SPARCs go but fun to see new platforms coming: Linux on one side and Solaris 10 on the other.
But the SPARCs will be back. The new "Rock" SPARC processor just hit silicon, and with its massive 256TB coherent memory space and the expected core count and performance, it will kick some serious ass.
Friday, March 23
Understanding Oracle pseudo-columns
All the cool people I meet at Oracle events keep referring to "function based indexes" as pseudo-columns. I never really understood the whole thing about it; sure, I figured one could consider it a pseudo-column, as it is sort of a calculated value stored in the database. Never thought much about investigating it either.
However, when playing around with some composite indexes including functions today I finally got it. They are indeed pseudo-columns, or calculated columns as I think Oracle wants to call them in 11g.
As always, an example:
SQL> create table d as select * from all_objects;
Table created.
SQL> insert into d select * from d;
66242 rows created.
SQL> update d set object_id=rownum;
132484 rows updated.
-- Ok, got a test table. Let's index it with some pseudo stuff.
SQL> create index d_1 on d(upper(status));
Index created.
SQL> select index_name,column_name from user_ind_columns where table_name='D';
INDEX_NAME COLUMN_NAME
------------------------------ ----------------------------------------
D_1 SYS_NC00014$
SQL> select index_name,column_expression from user_ind_expressions
2 where table_name='D';
INDEX_NAME COLUMN_EXPRESSION
------------------------------ ------------------------------
D_1 UPPER("STATUS")
SQL> create index d_2 on d(object_id,upper(status),timestamp);
Index created.
SQL> select index_name,column_name from user_ind_columns where table_name='D';
INDEX_NAME COLUMN_NAME
------------------------------ ------------------------------
D_1 SYS_NC00014$
D_2 OBJECT_ID
D_2 SYS_NC00014$
D_2 TIMESTAMP
SQL> select index_name,column_expression from user_ind_expressions
2 where table_name='D';
INDEX_NAME COLUMN_EXPRESSION
------------------------------ ------------------------------
D_1 UPPER("STATUS")
D_2 UPPER("STATUS")
-- That's the index bit; we've got two indexes with the same "function",
-- so Oracle will index the same pseudo-column! Eureka!
-- Some tests then...
SQL> set autotrace traceonly exp
SQL> select object_name from d where upper(status)='T';
Execution Plan
----------------------------------------------------------
Plan hash value: 2452139598
------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 21 | 462 | 165 (0)| 00:00:02 |
| 1 | TABLE ACCESS BY INDEX ROWID| D | 21 | 462 | 165 (0)| 00:00:02 |
|* 2 | INDEX RANGE SCAN | D_1 | 418 | | 157 (0)| 00:00:02 |
------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access(UPPER("STATUS")='T')
Note
-----
- dynamic sampling used for this statement
-- A column can go in the where clause, even pseudo-columns.
SQL> select object_name from d where sys_nc00014$='T';
Execution Plan
----------------------------------------------------------
Plan hash value: 2452139598
------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 21 | 462 | 165 (0)| 00:00:02 |
| 1 | TABLE ACCESS BY INDEX ROWID| D | 21 | 462 | 165 (0)| 00:00:02 |
|* 2 | INDEX RANGE SCAN | D_1 | 418 | | 157 (0)| 00:00:02 |
------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access(UPPER("STATUS")='T')
Note
-----
- dynamic sampling used for this statement
-- Since it is a column we can obviously select it as well.
SQL> select SYS_NC00014$ from d where object_id=500;
SYS_NC0
-------
VALID
Execution Plan
----------------------------------------------------------
Plan hash value: 2591304836
-------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 18 | 5 (0)| 00:00:01 |
|* 1 | INDEX RANGE SCAN| D_2 | 1 | | 3 (0)| 00:00:01 |
-------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("OBJECT_ID"=500)
Note
-----
- dynamic sampling used for this statement
-- You can even rename the pseudo-column.
-- Oracle actually dumped core on me after doing that, so better not.
SQL> alter table d rename column SYS_NC00014$ to dog;
Table altered.
SQL> alter table d rename column dog to SYS_NC00014$;
Table altered.
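By the way, you can also see the pseudo-column from the table side. The *_TAB_COLS views (unlike *_TAB_COLUMNS) include hidden columns, so something like this should list our SYS_NC00014$ column together with its expression:
SQL> select column_name, data_default, hidden_column
  2  from user_tab_cols
  3  where table_name='D' and hidden_column='YES';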
Wednesday, March 21
Solaris 10 NFS version when using the automounter
Classic problem: new stuff is cool and all, but sometimes people (like myself) can't be bothered configuring everything to the extreme so that every new feature works and is compatible. For basic things, the basics are enough.
Take NFSv4: a default Linux box these days does support NFSv4, but I'm usually too lazy to configure it and just slap an old NFSv3 line in my exports file. Fine with me, fine with most things, except one thing.
The Solaris 10 automounter, it detects the NFSv4 support and automatically tries to mount the NFS share in v4 mode when browsing /net.
Sidenote, if you are not familiar with the /net feature in Linux and Solaris it's pretty neat. See it as a cli version of "My network neighbourhood" in Windows. You can simply cd to /net/<machine name>/<share path> and the automounter will mount that share on that machine. Easy peasy.
Back to my problem, what do I want? I want the default NFS mount level to be vers=3.
Easy change, just uncomment and set NFS_CLIENT_VERSMAX to 3 in /etc/default/nfs.
Here's an example, the machine dolphin is my old trustworthy Linux workstation (yes, I like examples):
[root@marble /]$ cd /net/dolphin/
[root@marble dolphin]$ ls
shared
[root@marble dolphin]$ cd shared
bash: cd: shared: Permission denied
[root@marble dolphin]$ grep NFS_CLIENT_VERSMAX /etc/default/nfs
#NFS_CLIENT_VERSMAX=4
[root@marble dolphin]$ vim /etc/default/nfs
.. edit ...
[root@marble dolphin]$ grep NFS_CLIENT_VERSMAX /etc/default/nfs
NFS_CLIENT_VERSMAX=3
[root@marble dolphin]$ cd shared
[root@marble shared]$ ls -l
total 84
drwxrwxr-x 6 hlinden hlinden 4096 Mar 8 15:29 apps
drwxrwxr-x 6 hlinden hlinden 4096 Sep 7 2006 archive
drwxrwxr-x 11 hlinden hlinden 12288 Mar 21 16:32 download
drwxrwxr-x 15 hlinden hlinden 4096 Mar 1 11:05 tmp
[root@marble shared]$
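To double-check which NFS version the automounter actually negotiated, look at the mount flags with nfsstat (output trimmed and from memory, but the vers= field is the one you want):
[root@marble shared]$ nfsstat -m | grep vers
 Flags: vers=3,proto=tcp,sec=sys,hard,intr,...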
Sunday, February 25
Oracle constraints in XML data
Oracle introduced pretty cool XML support in 9i, and it's even cooler in 10g; I don't understand why people don't use it more often. I keep seeing XML data stored in CLOBs all the time. Why not store it as proper XML? It's possible to index, query and even update individual XML elements, attributes or nodes. Fast, simple, easy, no need for full-text indexes. Performance of XPath queries is pretty good if indexed correctly; 25,000 XML documents of 10k each still return in the sub-second range when hitting an index.
One thing that can be quite nice to have in the XML store is constraints to avoid duplicate data; indexes on XML data are pretty much plain standard pseudo-column indexes (or "function-based indexes" as some refer to them).
We just use the basic extract() or extractValue() functions in Oracle's XML feature set.
Here's an example of how to create unique constraints (indexes) on XML elements (or attributes):
SQL> create table x (a xmltype);
Table created.
SQL> insert into x values('<type><name>dog</name></type>');
1 row created.
SQL> insert into x values('<type><name>cat</name></type>');
1 row created.
SQL> create unique index xui on x(extractValue(a, '/type/name'));
Index created.
SQL> insert into x values('<type><name>cat</name></type>');
insert into x values('<type><name>cat</name></type>')
*
ERROR at line 1:
ORA-00001: unique constraint (HLINDEN.XUI) violated
SQL> insert into x values('<type><name>fish</name></type>');
1 row created.
-- Let's try the constraint on an attribute.
-- Attributes are handled just like elements but need a @ sign prefix.
SQL> drop index xui;
Index dropped.
SQL> truncate table x;
Table truncated.
SQL> insert into x(a) values('<type id="1"><name>sally</name></type>');
1 row created.
SQL> insert into x(a) values('<type id="2"><name>bob</name></type>');
1 row created.
SQL> create unique index xui on x(extractValue(a, '/type/@id'));
Index created.
SQL> insert into x(a) values('<type id="2"><name>carol</name></type>');
insert into x(a) values('<type id="2"><name>carol</name></type>')
*
ERROR at line 1:
ORA-00001: unique constraint (HLINDEN.XUI) violated
SQL> insert into x(a) values('<type id="3"><name>carol</name></type>');
1 row created.
-- Ok, let's see if we can still have duplicate names.
SQL> insert into x(a) values('<type id="4"><name>carol</name></type>');
1 row created.
SQL>
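And once the data is in there you query it with the same functions; a query like this (a sketch, untested) should be able to hit the xui index:
SQL> select extractValue(a, '/type/name') type_name
  2  from x
  3  where extractValue(a, '/type/@id') = '3';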
Wednesday, February 21
Viewing bind variable values in 10g
Oracle 10g introduced a couple of new nice views to help tune queries that use bind variables.
One cool view is v$sql_bind_capture; this view holds the latest captured value for each bind variable in queries that have been run.
First have a look in v$sql to find the SQL query you are looking for, then join on sql_id to v$sql_bind_capture to view the bind variable values for that query.
Example:
I found a pretty bad example here, an Oracle internal dictionary query, but it should show the point.
select
sql_id,
t.sql_text SQL_TEXT,
b.name BIND_NAME,
b.value_string BIND_STRING
from
v$sql t
join v$sql_bind_capture b
using (sql_id)
where
b.value_string is not null
and sql_id='f8pavn1bvsj7t'
/
SQL_TEXT BIND_NAME BIND_STRIN
------------------------------------------- ---------- ----------
select con#,obj#,rcon#,enabled,nvl(defer,0) :1 9110
from cdef$ where robj#=:1
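v$sql_bind_capture also carries the declared type of each bind in the datatype_string column, which is handy when hunting down implicit datatype conversions. For example:
select name, datatype_string, value_string
from v$sql_bind_capture
where sql_id='f8pavn1bvsj7t';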
Mixing Dell PowerEdge 1955 and 1855 blades
After looking around the Internet for information regarding mixing Dell PowerEdge 1855 and 1955 blades in one enclosure, I found some varying "opinions" on whether it works or not.
To set things straight:
Yes, it is possible to mix and match any 1855 and 1955 blades in one single enclosure.
However, there are two small requirements.
- The DRAC/MC needs firmware 1.3 or later (everyone should upgrade to 1.3, even if you don't have any 1955 blades).
- You need the digital KVM modules, they rock, get them!
The 1955 blades plug-in during operations just as any 1855 blades would, you get a sensor detect and they power on just fine.
Printout from the DRAC/MC:
[Server Module Power Consumption Table]
<Slot#> <Server Name> <Blade Type> <Power State> <Current/Max Consumption>
1 Server-1 PE1855 ON 300/300W
2 Server-2 PE1855 ON 300/300W
3 Server-3 PE1855 ON 300/300W
4 Server-4 PE1855 ON 300/300W
5 Server-5 PE1855 ON 300/300W
6 Server-6 PE1855 ON 300/300W
7 Server-7 PE1855 ON 300/300W
8 Server-8 PE1955 ON 304/304W
9 Server-9 PE1955 ON 304/304W
10 Server-10 N/A N/A N/A
Cool!
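If you want to check what firmware your DRAC/MC is running before slotting in 1955 blades, racadm getsysinfo over the telnet or serial console should report the firmware revision (going from memory here, so check the DRAC/MC documentation):
DRAC/MC: racadm getsysinfo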
Thursday, February 15
Extended deployment
Ok, I admit it. I've sucked at blogging lately.
In my defence I have had a quite annoying cold that's been hanging on for the last two weeks; I *really* hate having a fever, I get cranky. I wrote half a blog post about using Oracle XE for reporting with materialized views but never finished it; will probably do that tomorrow or Monday.
Beyond that I just haven't had anything exciting to do. At work I've been working with Sybase, and I can't say that's very exciting. Especially not when one has to support a major investment bank with a team of Sybase DBAs who seem to know even less about Sybase than I do (and trust me, I'm a Sybase noob).
One exciting thing I've been toying with is IBM Websphere Extended Deployment (XD). Everyone knows that Websphere isn't very exciting, but the XD edition has got some pretty cool features. And one has to love how simply IBM explains the huuuge cost savings it brings:
- "A client can buy a few as four medium sized mainframes and deploy a number of applications across these machines and achieve unprecedented utilization".
Ok, back up a bit here IBM. The concept of Websphere XD works pretty much all platforms. So why bring out the mainframes, I can't really see many clients needing four mainframes for a normal Websphere deployment.
Anyway, what XD brings is pretty much a resource manager and an object grid. Say you have a blade server with 10 blades, you want to deploy two applications which will be load balanced. Instead of telling Websphere to deploy each application on five servers you define metrics of what kind of response times you expect the applications to have, then Websphere will allocate as much resource as needed (or send angry emails requesting more servers). Say one application runs on 2 servers and the other application on the remaining 8, then every Friday everyone in the company needs to use the first application for a couple of hours. Websphere will see the extra utilization and assign a couple of more servers to this app, then when not used any more they will be returned to the second application. Neat!
Another thing you can use XD for is to distribute a large set of data, you can write a distributed application and requests will be sent to the server holding that data. It's way cheaper to buy 16 servers with 16Gb RAM each than to buy one server with 256Gb RAM, let each server hold a piece of the data in RAM in an object grid.
XD also brings some cool monitoring features and other crud. See the comparison of the different versions here.
Enough about Websphere, it's not that great. It's just a nice challenge to work with. :-)
Thursday, February 1
bash globbing and dot-files
Found a nifty little feature in bash.
Globbing is expanding file patterns: when you type "ls -l file*" in bash, it is not ls that does the file matching and filtering; bash globs ("expands") the pattern file* and ls gets all the matching files as arguments.
Now, by default bash doesn't glob dot-files. If I do "ls *" I will not get .bashrc and .bash_profile etc. Luckily it is easy to change this behaviour: set the bash option dotglob and it works!
Example:
Just put shopt -s dotglob in your .bashrc file or in a global /etc/profile.d file.
[hlinden@spinner testdir]$ ls -Al
total 0
-rw-rw-r-- 1 hlinden hlinden 0 Feb 1 2007 .dotfile1
-rw-rw-r-- 1 hlinden hlinden 0 Feb 1 2007 .dotfile2
-rw-rw-r-- 1 hlinden hlinden 0 Feb 1 2007 file1
-rw-rw-r-- 1 hlinden hlinden 0 Feb 1 2007 file2
-rw-rw-r-- 1 hlinden hlinden 0 Feb 1 2007 file3
[hlinden@spinner testdir]$ echo *
file1 file2 file3
[hlinden@spinner testdir]$ shopt -s dotglob
[hlinden@spinner testdir]$ echo *
.dotfile1 .dotfile2 file1 file2 file3
[hlinden@spinner testdir]$
Another quite nice globbing feature is to have case insensitive globbing.
Check this out:
[hlinden@spinner testdir]$ shopt -s nocaseglob
[hlinden@spinner testdir]$ ls -l F*1
-rw-rw-r-- 1 hlinden hlinden 0 Feb 1 2007 file1
[hlinden@spinner testdir]$
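And if you want to check or turn the options back off again, shopt without -s lists the current state and -u unsets:
[hlinden@spinner testdir]$ shopt dotglob nocaseglob
dotglob         on
nocaseglob      on
[hlinden@spinner testdir]$ shopt -u dotglob nocaseglob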
Sunday, January 21
Flashback a user or schema in Oracle
Since 10g we've had the quite cool 'flashback' feature; or rather, we've had it since waay back, but now Oracle gives us an easy way to use it. The recyclebin is a genuinely new feature though. Cool, but sometimes a bit confusing.
Flashback can be done at two levels: the whole database, or a single object. Why on earth didn't Oracle include a "flashback schema" feature? I would guess most users use flashback when testing, things like schema upgrades; if the test fails you want to restore the tables to the state before the scripts ran (OK, workspaces come to mind here, but that's another post).
I took some time and wrote a script to flashback all tables and dependent objects from the recyclebin, and to flashback existing tables to a previous version.
Since flashback requires row movement to be enabled, the script checks whether it needs to enable that for the tables and will do so if needed.
Download my script here: http://halisway.hifichoice.com/flashback_user.sql
-- Ok, let's create a little mess we can clear up.
SQL> drop table i;
Table dropped.
SQL> select * from a;
ID D
---------- ----------------------------
1 30-NOV-06 10.34.35.000000 PM
2 06-NOV-06 08.15.54.000000 AM
SQL> update a set d=sysdate;
2 rows updated.
SQL> commit;
Commit complete.
SQL> select * from a;
ID D
---------- ----------------------------
1 21-JAN-07 11.29.53.000000 PM
2 21-JAN-07 11.29.53.000000 PM
-- Committed and all! We sure screwed that up.
SQL> select tname,tabtype from tab;
TNAME TABTYPE
------------------------------ -------
A TABLE
T TABLE
D TABLE
BIN$J5ZNZK2dxdvgQKjAKF9ZFQ==$0 TABLE
4 rows selected.
-- Let's run the script to generate our flashback script.
-- The script will prompt you for the number of minutes you want to go back.
SQL> @flashback_user
How far back do you want to flashback (in minutes)?
Enter value for minute: 8
8
testuser
Spooling flashback_user_testuser.sql
alter table T enable row movement;
flashback table A to timestamp sysdate - interval '8' minute;
flashback table T to timestamp sysdate - interval '8' minute;
flashback table D to timestamp sysdate - interval '8' minute;
flashback table I to before drop;
SQL> @flashback_user_testuser
Table altered.
Flashback complete.
Flashback complete.
Flashback complete.
Flashback complete.
SQL> select tname,tabtype from tab;
TNAME TABTYPE
------------------------------ -------
A TABLE
D TABLE
T TABLE
I TABLE
4 rows selected.
SQL> select * from a;
ID D
---------- ----------------------------
1 30-NOV-06 10.34.35.000000 PM
2 06-NOV-06 08.15.54.000000 AM
2 rows selected.
-- And we are back in business.
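If you are ever wondering what those BIN$ tables are, just ask the recyclebin view, and purge it once you are sure the dropped objects can go for good:
SQL> select object_name, original_name, type from recyclebin;
SQL> purge recyclebin;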
Read more about flashback here.
Tuesday, January 16
Balancing SGA in Oracle 9i
Now that 10g has sga_target to automatically manage your SGA memory, some people get lazy and just forget about the SGA. But most of us still have 9i databases to maintain.
Time for a quick overview of how to reduce disk I/O with the help of the database buffer cache.
I've got a Sun v480 with 4GB RAM running one instance; it's not under a lot of load but could do with some tuning.
Let's look at how our current SGA settings look:
SQL> show parameter sga_max_size
NAME TYPE VALUE
------------------------------------ ----------- ------------
sga_max_size big integer 2283246744
SQL> select * from v$sgastat order by pool,bytes;
POOL NAME BYTES
----------- -------------------------- ----------
java pool free memory 67108864
shared pool trigger source 152
shared pool fixed allocation callback 496
shared pool trigger defini 576
shared pool trigger inform 920
shared pool PLS non-lib hp 2088
shared pool joxs heap init 4240
shared pool KQR S SO 5416
shared pool table definiti 18984
shared pool PX subheap 28096
shared pool session heap 29560
shared pool KGK heap 33368
shared pool DG Broker heap 39200
shared pool MTTR advisory 388024
shared pool errors 390856
shared pool sessions 905840
shared pool message pool freequeue 940944
shared pool sim memory hea 1014808
shared pool KSXR receive buffers 1034000
shared pool FileIdentificatonBlock 1791824
shared pool parameters 1827072
shared pool PL/SQL DIANA 1960192
shared pool 1M buffer 2098176
shared pool Checkpoint queue 2622720
shared pool KQR M PO 2885104
shared pool dictionary cache 3229952
shared pool KQR L PO 3372488
shared pool event statistics per sess 3762720
shared pool KQR L SO 5260312
shared pool KGLS heap 5424464
shared pool KQR M SO 6135256
shared pool FileOpenBlock 11813536
shared pool PL/SQL MPCODE 22044808
shared pool miscellaneous 25966760
shared pool library cache 66948552
shared pool sql area 166511368
shared pool free memory 248709688
fixed_sga 734360
log_buffer 787456
buffer_cache 1073741824
40 rows selected.
Ok, we can see that we have quite a lot of free memory in the shared pool; what should we do with that? From looking at iostat and wait statistics I've noticed that the disk subsystem (a simple Sun 3310 in this case) is getting a bit hammered sometimes.
Let's see how the db cache is doing.
SQL> select size_factor, size_for_estimate, estd_physical_read_factor
2 from v$db_cache_advice order by size_factor;
SIZE_FACTOR SIZE_FOR_ESTIMATE ESTD_PHYSICAL_READ_FACTOR
----------- ----------------- -------------------------
.0938 96 4344.0852
.1875 192 163.0333
.2813 288 23.782
.375 384 22.5502
.4688 480 20.6444
.5625 576 10.4806
.6563 672 1.4957
.75 768 1.0948
.8438 864 1.0367
.9375 960 1.0102
1 1024 1
1.0313 1056 .9999
1.125 1152 .813
1.2188 1248 .7185
1.3125 1344 .5829
1.4063 1440 .536 <- Let's aim for this
1.5 1536 .5196
1.5938 1632 .5139
1.6875 1728 .5079
1.7813 1824 .5055
1.875 1920 .5043
21 rows selected.
Right, from looking in the v$db_cache_advice view we can determine that a few hundred MB more of cache would be a good idea. There is quite a lot of free memory at the OS level, so we can probably increase sga_max_size by 250MB as well.
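Before stealing 150MB from the shared pool it's worth a quick look at v$shared_pool_advice as well, same idea as the db cache advisor (column names from memory, so verify against your 9iR2 dictionary):
SQL> select shared_pool_size_for_estimate, shared_pool_size_factor,
  2  estd_lc_time_saved_factor
  3  from v$shared_pool_advice
  4  order by shared_pool_size_factor;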
-- Reduce the shared pool by 150MB.
SQL> alter system set shared_pool_size=379584512 scope=spfile;
System altered.
-- Add max size with 250Mb
SQL> alter system set sga_max_size=2545390744 scope=spfile;
System altered.
-- Add the 250Mb we added + the 150Mb we reduced from the shared pool.
SQL> alter system set db_cache_size=1493172224 scope=spfile;
System altered.
-- Restart Oracle
SQL> shutdown immediate
...
SQL> startup
That should do the trick; let's see what the server thinks about things in a couple of weeks' time or so.
Don't forget:
Please have more than one look at the statistics before you do anything, the free memory in the shared pool could be free today but needed all other days. Take a few snapshots over time and analyze the results before doing any changes. And whatever you do, don't over allocate SGA.
The last thing you want is the SGA to be swapped to disk or the system swapping out other things.
Wednesday, January 10
Cleaning up sqlplus cut and pastes in vim
Classic problem: you write a short piece of SQL in sqlplus, then you want to cut and paste it into vim to edit it. But then you get the really annoying line numbers in front of each line.
Sure you can use save myquery in sqlplus and open the file myquery.sql in vim, but perhaps you want to run vim on another machine, perhaps you run gvim.
So ok, time to get rid of the line numbers.
Here's what we got in sqlplus:
SQL> list
1 select
2 e.ename,
3 d.dname
4 from
5 dept d,
6 emp e
7 where
8 d.deptno=e.deptno
9* order by e.ename
SQL>
Let's cut and paste those numbered lines into a vim session.
We now need to delete the first 4 characters of each line. Easy.
Switch to visual mode, jump to the end of the file, step two columns right, hit delete char.
Or in vim command terms (in normal mode (esc-esc)).
ctrl-v G l l x
Breakdown:
ctrl-v to get in to visual mode
G to jump to the end of the file
l to move the cursor right
x to delete the selected area
Visual mode is quite powerful, you can use almost the same commands to comment out code.
Place the cursor in the first line you want to comment out, hit ctrl-v to do a visual mark, use j to move the cursor down to the last line you want to comment out, hit I to get insert, type two hyphens and hit escape. Done.
In vim command terms:
ctrl-v j j j I -- esc
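If you prefer an ex command over visual mode, deleting the first four characters of every line is a one-liner; same caveat, make sure the line numbers really are four characters wide:
:%s/^....//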
Sunday, January 7
Time for HP Proliant?
Now, I can't say I've blogged much about HP servers, simply because it's not been very exciting hardware. Dell and Sun have always had better products at better prices and have always been better to deal with.
Recently though, especially with the G5 series of its Proliant line, HP has some quite exciting and unique products. The one thing I've always liked about HP is that they've been a long-term AMD Opteron supporter; they were the second big server vendor after IBM to launch Opteron-based servers. The DL385 was unique at the time and served me well at a previous company I worked for. The following DL585 (4 sockets, 128GB RAM max and 8x 2.5" SAS slots) is still a pretty ok Oracle server, and the G2 version, with its pretty cool front-accessible CPU and memory drawer for easy upgrades and repairs, is even better. But price-wise it's on par with the Sun x4600, and the x4600 will let you scale to 8 sockets over the 4-socket DL585.
The Intel equivalent, the DL580, offers front-accessible memory; CPUs are still accessed through the top lid.
Now, with the Intel-based machines (dual and quad core Xeon 3000, 5000, 5100 and 5300 boxes) in the "G5" series we have a few pretty cool new features. The focus on 2.5" SAS drives is nice to see; Sun is going the same way but Dell is clinging on to 3.5" drives (probably to keep the price tag down), though some Dell machines can be fitted with a 2.5" backplane (like the PE1950).
The DL360 makes a decent app server, with two sockets, up to 32GB RAM and a nice 6-slot SAS backplane; the DL380 adds more PCI I/O and can take up to 8 drives.
Big brother (in the 2-socket family) is the ML370. This is a pretty large machine for a 2-socket box, 5U in the rack, but it makes a great VMWare server: loads of local disk (16 SAS slots), loads of I/O, and it can take a massive 64GB RAM. Great for stand-alone Oracle deployments as well.
The smallest member of the 300 family is the DL320, a pretty standard 1-socket Xeon 3000 machine (3.5" drives though); it makes a decent webserver or network appliance.
Ok, now on to the reason I actually started getting excited about HP hardware again.
The recently introduced DL320s hybrid data server (or "storage server").
It's pretty much the same as the DL320, but the "s" adds some pretty cool stuff: it can take 12 SAS or SATA drives. That makes it a pretty interesting machine; one could say this is the "HP Thumper", the HP equivalent of the Sun x4500. It's not as big and impressive as the x4500 but it's still interesting. Applications like distributed data mining, data analysis and video processing fit like a glove on this box.
A few bad things about HP: everything is extra. The price you see is not the price you pay. The list price of a normal, say, DL380 includes exactly what is listed; you even have to add the second set of fans to get redundancy, and half the features in the iLO2 (remote management interface) require extra licenses to be activated. I think HP needs to re-think this. As it stands today I would not look at buying HP servers as generic servers, possibly only if I have a project that requires something only HP can offer (like a DL320s). It simply doesn't make sense financially.
If I want standard servers, I go to Dell; if I want business-critical servers, I go to Sun. It's as easy as that. If HP can offer competitive pricing I would not hesitate to at least evaluate deploying HP instead of Dell, and in some cases Sun, boxes.
Tuesday, January 2
Happy new 2007
It's been a pretty good year. Lots of cool stuff has happened and lots of interesting technology is maturing and evolving. Most notably OpenSolaris: while most other UNIX vendors are pretty much sitting on their behinds watching time go by, Sun is kicking into a higher gear. ZFS is being adopted by the FreeBSD and MacOS communities, iSCSI is maturing, VMWare is getting some pretty cool features, and Veritas is giving away the basic edition of Storage Foundation. Oracle 11g looks very promising.
All well, I'm flying back to the UK tomorrow (finally), can't wait to get back to London.