Tuesday, December 26

Merry Christmas and VMWare 6

.. and all that. :)

Been chilling in Sweden for the last few days; had a terrible flight back with a 2-hour delay due to fog over south-east England and then a very annoying crying baby on the plane.
We even had a disturbance in the queue to the gate: some guy from the Middle East was making a scene and started threatening a few other people for bumping into him. I have to admit, I knew it wasn't a good idea to mess with airport security, but these guys were hard. The scene was quite funny; when they asked him to come with them and he started questioning them, they said something along the lines of "Sir, you have 2 seconds to comply or we will remove you with force." We didn't see him again after that.
Anyway, got socks and a few good CDs for Christmas. Always good with new music.
I installed VMWare Workstation 6 beta yesterday and had a play around with that. A word of warning: although it has some cool features and is meant to be a bit faster, the beta version is a debug build, and in classic vmware-style the debug builds are slow... really slow. Booting CentOS 4 takes at least 50% longer. The "freeze bug" when starting a VM in Vista is fixed though.

All jolly good fun!

Thursday, December 21

Using Solaris projects with Oracle

In older versions of Solaris (9 and earlier) we used to set the system resource controls in /etc/system; in Solaris 10 and later we do it in a neater way. No more reboots or mucking about, it's all done online, even while Oracle is running.
It's all about projects now.
I've just finished installing 10gR2 under Solaris Express Community Release Build 54 (long name ey?) under VMware on my new laptop.
Example project:
root@solvm[~]$ projadd -U oracle -K \
"project.max-shm-ids=(priv,100,deny)" oraproj

-- Let's review our project

root@solvm[~]$ su - oracle
Sun Microsystems Inc. SunOS 5.11 snv_54 October 2007
oracle@solvm[~]$ projects
default oraproj
oracle@solvm[~]$ projects -l oraproj
projid : 101
comment: ""
users : oracle
groups : (none)
attribs: project.max-shm-ids=(priv,100,deny)
It's described slightly differently in the Oracle installation manual, but I'd say my way is slightly better; prctl is legacy. You can use usermod -K project=oraproj oracle to set the new project as the user's default project (not really necessary).

Thursday, December 14

Manual operations in an automatic SGA database

Oracle 10g introduced the nifty feature of automatic SGA management: simply set the parameters sga_target and sga_max_size to the maximum size you can afford and Oracle will do the rest. But we still have a few manual settings. Say you want a tablespace with a block size other than the database default (also a new feature): you create, say, a 4k block size tablespace in an 8k database and allocate a db_4k_cache_size for it. How does Oracle allocate that memory?

SQL> select name,value from v$parameter
where name like 'sga_target';

NAME            VALUE
--------------- --------------------
sga_target      599785472

SQL> select name,round(bytes/1024/1024) M from v$sgainfo;

NAME                                              M
---------------------------------------- ----------
Fixed SGA Size 2
Redo Buffers 6
Buffer Cache Size 416
Shared Pool Size 140
Large Pool Size 4
Java Pool Size 4
Streams Pool Size 0
Granule Size 4
Maximum SGA Size 572
Startup overhead in Shared Pool 68
Free SGA Memory Available 0

11 rows selected.

SQL> alter system set db_4k_cache_size=4M;

System altered.

SQL> alter system set db_4k_cache_size=0;

System altered.

-- What actually happened here? Where
-- did Oracle take those 4MB from?
-- Let's query the v$sga_resize_ops view to see what was done.

SQL> select parameter,oper_type,initial_size,target_size
from v$sga_resize_ops
where start_time>sysdate-1;

PARAMETER        OPER_TYPE     INITIAL_SIZE TARGET_SIZE
---------------- ------------- ------------ -----------
db_cache_size SHRINK 436207616 427819008
db_4k_cache_size GROW 0 8388608
db_4k_cache_size SHRINK 8388608 0
db_cache_size GROW 427819008 436207616

Ok, we can see that Oracle shrank the db_cache_size and allowed the db_4k_cache_size to grow (to 8MB, a bit more than the 4MB requested), and when the parameter was reset to zero it freed the memory and grew the db_cache_size again. Oracle could just as well have allocated the memory to the large_pool if the automatic SGA engine thought that would make more sense.
I've got the last 7 days' worth of resize ops in my weekly database report that is e-mailed to me every Monday morning; it's definitely worth keeping an eye on what's going on. Sometimes it's just easier to configure the SGA manually.

Monday, December 11

Testing iSCSI

I've been reading a lot about iSCSI lately; Sun is pushing the ZFS and iSCSI integration in the latest release of OpenSolaris and it's the new hot thing in the Oracle community. I remember testing the Linux target (the "server side") a few years ago and back then it was quite flaky, far from production-ready.
This time around it seems pretty solid.

I just used the very simple instructions here to compile the iSCSI kernel module in Ubuntu. I created two "zero files" using dd (one for my VMware Linux test and one for my Vista laptop) and added them as LUNs in the /etc/ietd.conf config file.
# cat /etc/ietd.conf
Target starbuck
Lun 0 Path=/u03/starbuck_lun0.dat,Type=fileio
Target oravm
Lun 0 Path=/u03/oravm_lun0.dat,Type=fileio
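The backing "zero files" themselves can be created with dd; a minimal sketch (the /tmp path and 256MiB size here are just examples, not from the post):

```shell
# create a 256MiB zero-filled backing file for an iSCSI LUN
# (524288 512-byte sectors, which is the size the initiator reports later)
dd if=/dev/zero of=/tmp/starbuck_lun0.dat bs=1M count=256
ls -l /tmp/starbuck_lun0.dat
```

On a real target you would of course put the file somewhere like /u03 as in the config above.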

The installation in CentOS 4 was simple: do a yum -y install iscsi-initiator-utils and set the discovery host and initiator name in /etc/iscsi.conf. Starting /etc/init.d/iscsi gives:
iscsi-sfnet: Loading iscsi_sfnet version 4:0.1.11-3
iscsi-sfnet: Control device major number 254
iscsi-sfnet:host3: Session established
scsi3 : SFNet iSCSI driver
Vendor: IET Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
SCSI device sdd: 524288 512-byte hdwr sectors (268 MB)
SCSI device sdd: drive cache: write through
SCSI device sdd: 524288 512-byte hdwr sectors (268 MB)
SCSI device sdd: drive cache: write through
sdd: unknown partition table
Attached scsi disk sdd at scsi3, channel 0, id 0, lun 0
Create a partition and file system on the whole disk with
printf "n\np\n1\n\n\nw\n" | fdisk /dev/sdd && mkfs.ext3 /dev/sdd1


Connecting the iSCSI disk in Vista was pretty straightforward as well: add the iSCSI discovery IP and do a refresh of your targets.

Read about Oracle RAC and iSCSI in this article.

Sunday, December 10

Monitoring RMAN I/O performance

When trying to cut RMAN backup times there are a lot of parameters to tune: database settings, storage subsystem settings, file system settings and so on.
It can sometimes be hard to remember all the results you get; luckily Oracle keeps them for you. Just query the v$rman_backup_job_details view for the results.

column END_TIME format a16
column START_TIME format a16
column COMMAND_ID format a20

select command_id,
to_char(start_time,'YYMMDD HH24:MI') start_time,
to_char(end_time,'YYMMDD HH24:MI') end_time,
round(input_bytes/1024/1024) backup_size,
round(INPUT_BYTES_PER_SEC/1024/1024) in_mb_sec,
round(OUTPUT_BYTES_PER_SEC/1024/1024) out_mb_sec
from v$rman_backup_job_details
where start_time > sysdate - 7
order by start_time;

COMMAND_ID           START_TIME       END_TIME         BACKUP_SIZE  IN_MB_SEC OUT_MB_SEC
-------------------- ---------------- ---------------- ----------- ---------- ----------
2006-12-10T00:34:25 061210 00:35 061210 00:45 13722 22 10
2006-12-10T23:17:48 061210 23:17 061210 23:26 13784 26 12

Oracle ASM instance startup script

Thought I'd publish my Oracle ASM instance SysV (Redhat/CentOS/Unbreakable etc.) init script. The default oracleasm script provided by asmlib only prepares the disks available to ASM; you also need to start the actual ASM instance. I've set the ASM instance to start just after oracleasm and to be killed just before it. Make sure your normal Oracle instance startup scripts are started in the correct order, i.e. after this script for startup and before this script for shutdown.
It reads the default /etc/sysconfig/oracle config file; if nothing is set there it falls back to defaults and should be able to start the instance. If you don't have the ASM instance name set in /etc/sysconfig/oracle (as ASM_SID=FOO) and your instance name is not +ASM you may have to edit the script to change the default ASM SID.
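The config handling boils down to a simple source-with-defaults shell pattern; a sketch, where the asm_sid helper name is mine and not part of the actual script:

```shell
# source the sysconfig file if readable, otherwise fall back to the +ASM default
asm_sid() {
    conf=${1:-/etc/sysconfig/oracle}
    [ -r "$conf" ] && . "$conf"
    printf '%s\n' "${ASM_SID:-+ASM}"
}

asm_sid /nonexistent/oracle.conf   # prints the default: +ASM
```

The same pattern works for any other variable the script wants to make overridable.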

Download the script here. Copy the script to /etc/init.d/ and use chkconfig --add oraasm to add it to startup. Do a chkconfig --list oraasm to verify that the script was added to the runlevels you use.

Please report any bugs, problems or suggestions for the script to me.

Tuesday, December 5

Date range generation in Oracle

Another interesting question someone asked in the IRC #oracle chat.
- How do I create a table with all dates from 2000 until today in it?
It's rather simple in plain SQL; no PL/SQL or scripting required.
SQL> create table date1 (d date);

Table created.

SQL> insert
into date1(d)
select to_date('00-01-01','YY-DD-MM') + level
from dual
where (to_date('00-01-01','YY-DD-MM')+level) < sysdate
connect by level<=1000000;

2530 rows created.

SQL> select min(d) s, max(d) e from date1;

--------- ---------
02-JAN-00 05-DEC-06
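As a sanity check, the row count should match the number of days between the start date and today. With GNU date (an assumption; BSD date parses dates differently) you can compute that in the shell:

```shell
# whole days between two dates; computed in UTC to avoid DST surprises
days_between() {
    echo $(( ( $(date -ud "$2" +%s) - $(date -ud "$1" +%s) ) / 86400 ))
}

days_between 2000-01-01 2000-03-01   # 60: 31 days of January + 29 of February
```

Run it with the end date set to today and the result should equal the number of rows the insert created.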


Oracle trace event 10132

The new "in" Oracle trace event seems to be 10132, and it's pretty cool. Most Oracle DBAs and developers have probably at least heard about trace events 10046 and 10053.
10132 is different from the classic two in the sense that it is really easy for anyone to interpret: the 10132 trace event gives you a full execution plan and the optimizer settings in the udump (user_dump_dest) directory.
It's an easy thing to throw into a script or even a logon trigger when troubleshooting query problems. Think Hibernate or other (weird) ORM tools.

Lets take a look:
SQL> alter session set events
2 '10132 trace name context forever, level 12';

Session altered.

SQL> select e.ename,d.dname from dept d, emp e where
2 d.deptno=e.deptno;

ENAME      DNAME
---------- --------------

14 rows selected.

And the trace we get looks like this.

Sunday, December 3

Laptops - 3 years ago and today

So I figured it was time to get a new laptop. My old trustworthy Dell Inspiron 8600 has done a great job but it's starting to show its age; it's actually 3 years old to the day next week (yes, remembering the exact date you bought your last laptop is weird, I know). My 8600 has a few scratches and marks but it still looks pretty ok. Performance-wise it's not too bad; I've been running OpenSolaris on it for the last 18 months and it's been working great, felt fast and snappy. But when it comes to heavy multitasking, Solaris containers and VMWare it's not really keeping up with the times.
After browsing the laptop market it was quite clear that getting a Core2 machine was the best option if I wanted performance. I was very tempted to get a cool Acer Ferrari 4006; the price is pretty great now that the 5000-series is just around the corner, but the AMD ML-40 benchmarks put me off, the Core2 totally trashes it. Since my old Dell has been working great I went and got a new Dell to replace it. Although I was looking for a smaller machine this time, I actually went for pretty much the exact replacement for the 8600, the new Dell Inspiron 6400. Funny thing: the price of the new machine turned out to be pretty much exactly the same as I paid for my old one.

                 Dell Inspiron 6400           Dell Inspiron 8600
CPU              Intel Core2 Duo 2.0GHz       Intel Pentium M 1.4GHz
Disk             120Gb 5400rpm                30Gb 5400rpm
Graphics         ATI X1400 128MB              Nvidia Geforce 5250Go
Network          54Mbit wifi + 100Mbit wired  11Mbit wifi + 100Mbit wired
Pointing device  Touchpad                     Touchpad / pointing stick
Screen           1680x1050 high gloss         1680x1050
Weight           2.8 kg                       3.1 kg
Size             A bit smaller                Quite big
VMWare speed     Fast                         Slow

It's quite nice to be able to boot up CentOS 4.4 running 10gR2 and still have great performance, both in the VM and in the host OS.
The one thing I don't like is that the 6400 doesn't have the pointing stick thingy, just the touchpad. But I'm getting used to it.

Wednesday, November 29

Oracle database links using full TNS entries

Database links can be most useful, say you want to extract data to a reporting system or something like that. This is a quite common scenario people have problems with and come to the Oracle support IRC channel to ask about, especially developers with just a user account on the remote database and a plain user account in the target database. They want to create a database link but don't have access to the tnsnames.ora file at the OS level on the server; they may not even have SSH access to the server.
No problem, you can use the full TNS entry pretty much everywhere you can use the TNS alias, just remember to single quote it.

Database link example:
[hlinden@whale dblink]$ rsqlplus /

SQL*Plus: Release - Production on Wed Nov 29 11:17:26 2006

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release - Production

SQL> select db_link from user_db_links;

no rows selected


Database link created.

SQL> select db_link from user_db_links;


SQL> select * from t@prodsyslink;

---------- ----------
1 2
2 1

Cool, now it's possible to create a nice materialized view or whatever is needed.

Oracle docs about CREATE DATABASE LINK.

Sunday, November 26

Redmond technology

Disclaimer: Long pro-Microsoft ranting ahead. I know most of my blog readers are hard-core Unix geeks, so be warned if you are sensitive to Microsoft material.

Ok, I have to admit it, I'm posting this entry from a machine running Windows Vista.
And on top of that, I have to admit that Microsoft SQL Server 2005 seems pretty cool. SQL Server is not comparable to Oracle by far, but it's certainly improved over the 2000 release. I've even fiddled around a bit with C# and OLEDB providers in Vista. It's pretty sweet; using something like the Vista search API is peanuts, it's easy and clean, and all the basics are there and easily accessible.
After using Vista for about a week (no, I did not download the torrent; I'm using a developer-licensed copy from MSDN) I quite like most aspects of it. The new Aero interface is quite fast; it's "cute" without getting annoying and slow. My favourite feature is the basic search functionality, which works pretty much everywhere; you can sort of filter the result set in all views. If you type "Comm" in the Start menu Windows filters the selection down to the Command Prompt, so instead of classic browsing you can just type the first few letters of something and hit enter, sort of like we have done in Unix for the last 15 years ;). Anyway, without a doubt, I feel more productive using Windows Vista than I have felt in a long time.

I've been quite distanced from the Redmond scene for a while, the "2003 kit" did nothing for me, but as a technologist I have to admit that there are one or two quite interesting things coming from the Redmond campus in the next six months (and I'm not talking about the Zune here).

Another thing that comes to mind is that Microsoft is in a way shifting away from its core customer base and trying to gain the interest of Unix shops and "the big-iron people".
Take a product like Exchange 2007: it's a pretty solid mail server (if you pair it with a couple of Postfix relays), but deploying even a quite basic Exchange installation is getting pretty tricky. I have to admit I've only seen the beta2 release so far, so the final version is probably better, but compare that to the super simple and straightforward Exchange 5.5 deployment. Sure, you had a few gotchas in 5.5 and you always swore about all its shortcomings, but the installation was done in 45 minutes or so. A company with say 100 employees and perhaps 3-4 servers can't just have "the IT guy" any more, not unless he is a pretty good and dedicated IT guy. Microsoft product deployments almost require consultants, or at least a decent support channel; things you previously associated more with larger back-office deployments like an Oracle database, SAP or IFS.
They are adding all this great functionality, but are they abandoning the ease of use and simplicity of their server products?
Myself, I don't mind the added complexity as long as the new features help me get the most out of my deployment, something Oracle has done an excellent job of. While it is a very complicated product, it is also the most powerful database on the market. Can we have it both ways? Probably not.

So what's cool?

  • Vista in itself isn't that cool, but I think what people will do with it will be pretty cool. Both developers and end-users will benefit. I think we'll see the same thing we saw when the MacOS X revolution started: innovation from all corners of the market.
  • SQL Server 2005 - pretty cool, but not great. It'll kick some more Sybase butt and perhaps excite a few more developers to give up their Access MDB databases.
  • .NET framework 3.0 - same as the two above. Managed code is here to stay: .NET, Java and Python. Now that you can have managed code inside unmanaged code (confusing, yes) I think we'll see more rapid adoption.

Enough of this silliness, I need to sort myself out, cat some files, pipe some stuff.
Seriously, I'll stick to my motto, use whatever tools are best for the job and don't be afraid to try new products. Homogeneous environments and all that is nice, but when it all comes down to providing the service your clients require you have to look at all the options. Web 2.0 is all about providing a better service than the competition. If that's Microsoft or OpenSolaris, so be it.

Classic screenshot:

Thursday, November 23

More cores, less space

Dell just started shipping the "mark ii" 9-series boxes capable of taking the new quad-core Intel Xeon processors.
Unfortunately the older 9th-generation boxes are not compatible with the new quad-core processors; Intel made some last-minute design changes to the new chip that required Dell to revise their motherboard designs.
Look out for a "II" symbol on the front of your 9th-gen boxes.
Quad-core will be available on the SC1430, 1900, 2900, 1950, 2950 and the 1955 blades.
Performance is said to be up to 60% better than a dual-core system, which sort of puts 2-socket machines in "enterprise territory" for the first time. Sure, we've got the 8-core Sun T1, but that is quite a niche-market machine.
An 8-core Dell PowerEdge 2950 with dual quad-core processors, 8Gb RAM and a pair of 72Gb SAS disks costs just under £6000 (about $10k USD).

I'd love to do some JBoss benchmark on a mk2 1950.

Wednesday, November 22

Oracle merge-statement

The merge statement has been in Oracle since version 9, but it is still not very widely used.
When looking at data loading and merging code, the most common bit of PL/SQL I've seen is the "merge" written with slow and boring if-statements to check if an entity exists in a table; if it exists the code does an update, otherwise an insert.
Two words: super slow.

A much better solution is to use the merge statement, which does exactly that; it's basically
"if exists then update else insert". Merge data from one data entity to another.
The basic concept is sort of a join between the loading entity and the destination table.
You can load from a normal table or from a select query.
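The "update if the key exists, otherwise insert" rule can be illustrated outside the database too; here is a toy sketch with awk over two comma-separated files (the file names and data are made up for the example):

```shell
# emp.txt holds the existing rows, emp_load.txt the incoming rows; column 1 is the key
printf '1,old-a\n2,old-b\n' > /tmp/emp.txt
printf '2,new-b\n3,new-c\n' > /tmp/emp_load.txt

awk -F, '
    NR==FNR    { load[$1] = $0; next }                   # read incoming rows first
    $1 in load { print load[$1]; delete load[$1]; next } # key exists: "update"
               { print }                                 # untouched existing row
    END        { for (k in load) print load[k] }         # leftovers: "insert"
' /tmp/emp_load.txt /tmp/emp.txt
```

Row 2 comes out updated, row 3 is appended, row 1 is untouched, which is exactly what the MERGE below does inside the database.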

SQL> select * from emp_load;

EMPNO ENAME      JOB         MGR HIREDATE    SAL     COMM     DEPTNO
----- ---------- --------- ----- --------- ----- -------- ----------
7654 MARTIN SALESMAN 7698 28-SEP-81 1450 1400 30
7788 SCOTT ANALYST 7566 19-APR-87 3500 20
7902 FORD SALESMAN 7698 03-DEC-81 3000 1300 20
7100 CHARLES CLERK 7788 22-NOV-06 1150 10
7302 SALLY CLERK 7902 22-NOV-06 1300 30

SQL> MERGE INTO emp e USING emp_load l
ON (e.empno=l.empno)
WHEN MATCHED THEN UPDATE SET
e.ename = l.ename,
e.job = l.job,
e.mgr = l.mgr,
e.hiredate = l.hiredate,
e.sal = l.sal,
e.comm = l.comm,
e.deptno = l.deptno
WHEN NOT MATCHED THEN INSERT
(e.empno, e.ename, e.job, e.mgr, e.hiredate, e.sal, e.comm, e.deptno)
VALUES (l.empno, l.ename, l.job,
l.mgr, l.hiredate,
l.sal, l.comm, l.deptno);

5 rows merged.

SQL> select * from emp;

EMPNO ENAME      JOB         MGR HIREDATE    SAL     COMM     DEPTNO
----- ---------- --------- ----- --------- ----- -------- ----------
7100 CHARLES CLERK 7788 22-NOV-06 1150 10
7302 SALLY CLERK 7902 22-NOV-06 1300 30

7369 SMITH CLERK 7902 17-DEC-80 800 20
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30
7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30
7566 JONES MANAGER 7839 02-APR-81 2975 20
7654 MARTIN SALESMAN 7698 28-SEP-81 1450 1400 30
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7788 SCOTT ANALYST 7566 19-APR-87 3500 20
7839 KING PRESIDENT 17-NOV-81 5000 10
7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30
7876 ADAMS CLERK 7788 23-MAY-87 1100 20
7900 JAMES CLERK 7698 03-DEC-81 950 30
7902 FORD SALESMAN 7698 03-DEC-81 3000 1300 20
7934 MILLER CLERK 7782 23-JAN-82 1300 10

16 rows selected.
If you want to load from a query you can use this syntax:
MERGE INTO emp e USING
(select * from emp_load
where hiredate > sysdate - 7
) l
ON (e.empno=l.empno)
...
Check out the merge docs.

The Week of Oracle Database Bugs

In an effort to get Oracle to improve their response time and responsiveness in fixing critical security flaws in the Oracle database, the security firm Argeniss has started "The Week of Oracle Database Bugs".
In all honesty, Oracle is really bad at fixing critical bugs. Not that many databases are Internet-facing, but social hacking and internal hacking are not unheard of.

I hope we see a positive response from Oracle.

Saturday, November 18

Random bits

I must admit that I haven't been very active on the blogging side lately. Been quite busy at work, and a fair amount of my free time has been consumed by excessive Battlestar Galactica viewing. :)
Work has been quite busy lately; not sure what I've actually done, lots of small things add up I guess. We recently signed a few new customers and need to configure staging environments, and of course they have to use the most boring stuff around: Sybase and boring Webfear.
Why can't people just face the facts and use JBoss and Oracle?

On the plus side I've had a closer look at the FSC PrimePower Solaris boxes. The SPARC64 processor isn't that bad actually, it's pretty fast. I'd love to get some "end customer" pricing for their systems (no one pays list).

Productivity-wise, I'm writing a small system to find new physical devices on a LAN: it simply polls the MAC address table from a switch, stores the entries in a PostgreSQL database, and sends an e-mail alert to the administrator when a new MAC is found. That bit is currently working, and I'm now writing a small PHP interface to add comments to MACs so they can be easily identified. I'll post the code when it's done if anyone is interested.

Album recommendation:
And the artist of the "random time unit" is Blondie!
The album Autoamerican must without a doubt be one of the most creative and uplifting albums of all time. Rap, jazz, reggae and disco on the same album is not something we see every day.

Tuesday, November 14

Oracle Alert log shell alias

Inspired by Laurent Schneider's recent shell alias blog post I decided to post a nice little shell alias I use.
The alias opens the current alert log file in vi; it reads the background dump dest from sqlplus and constructs the full path to the file name. It works pretty well, and it's quite convenient to have one single command to quickly open the alert log. It requires that your oraenv (ORACLE_HOME, PATH and ORACLE_SID) is set correctly.

The alias:
alias alert='vi + $(printf "set heading off pages 0 feedback off\n select value from v\$parameter where name='\''background_dump_dest'\'';\n"|sqlplus -S "/ as sysdba")/alert_${ORACLE_SID}.log'

If you just want the last 20 lines of the alert log you can use this alias:
alias talert='tail -20 $(printf "set heading off pages 0 feedback off\n select value from v\$parameter where name='\''background_dump_dest'\'';\n"|sqlplus -S "/ as sysdba")/alert_${ORACLE_SID}.log'
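Since both aliases misfire silently when the Oracle environment isn't set, a shell function with a guard is a slightly safer variant (a sketch; the sqlplus part is the same as in the alias above):

```shell
alert() {
    # bail out early instead of letting sqlplus fail cryptically
    if [ -z "$ORACLE_SID" ] || [ -z "$ORACLE_HOME" ]; then
        echo "oraenv not set (need ORACLE_SID and ORACLE_HOME)" >&2
        return 1
    fi
    bdump=$(printf "set heading off pages 0 feedback off\nselect value from v\$parameter where name='background_dump_dest';\n" \
            | sqlplus -S "/ as sysdba")
    vi + "$bdump/alert_${ORACLE_SID}.log"
}
```

Being a function rather than an alias also means it works the same from scripts.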

Sunday, November 5

Oracle controlfile backup script

Wrote a small script to get a "usable" copy of the "CREATE CONTROLFILE" statement one can back up to trace. In a heavily used database it can be quite tricky to find the controlfile backup without actually "looking for it", in the sense of querying the database for your thread number.
The script dumps the controlfile to trace and copies it to the local directory with the file name control_INSTANCENAME_TIMESTAMP.sql; it should be fairly simple to integrate the script into a generic backup process.

sqlplus -S / as sysdba <<EOF >/dev/null
set echo off feedback off verify off pages 0
column TRCLOC new_value TRCFILE
column TRCDATE new_value FILEDATE
column INSTNAME new_value INSTANCE
select to_char(sysdate, 'YYYYMMDD_HH24MISS') TRCDATE
from dual;
select instance INSTNAME
from v\$thread;
alter database backup controlfile to trace;
select c.value || '/' ||
instance || '_ora_' ||
ltrim(to_char(a.spid,'fm99999')) ||
'.trc' TRCLOC
from v\$process a, v\$session b,
v\$parameter c, v\$thread d
where a.addr = b.paddr
and b.audsid = userenv('sessionid')
and c.name = 'user_dump_dest';
host cp &TRCFILE control_&instance._&filedate..sql
EOF


Example usage:
oracle@htpc:bin$ ls
oracle@htpc:bin$ ./backup_controlfile.sh
oracle@htpc:bin$ ls -1
oracle@htpc:bin$ grep -m1 "CREATE CONTROLFILE" control*
Found the trace location query over at Asktom.

Quick update:
The blogger editor buggered up the script the first time around, had to use ampersand encoding to get the << and > to work properly.

Monday, October 30

rlwrap for rhel / centos / unbreakable

Just built a new RPM for rlwrap 0.26 for CentOS 4, Redhat 4 and Oracle Unbreakable Linux. I've added my (very) basic sqlplus auto-complete file.

If you are not familiar with rlwrap but use Oracle on Linux/Unix you should definitely have a look. rlwrap provides basic "bash style" command line editing in sqlplus (and other command line tools); you get command history with working arrow keys and normal command line editing.

Download 32-bit version: rlwrap-hali-0.26-1.i386.rpm
Download 64-bit version: rlwrap-hali-0.26-1.x86_64.rpm

Sunday, October 29

Running Oracle Unbreakable Linux under VMWare

I finally got around to doing a proper install of Unbreakable Linux today.
As expected, it's pretty much identical to CentOS. It's got yum and all that.

Took the opportunity to try the new free version of VMWare GSX Server, or at least what used to be known as GSX; they just call it VMware Server now.
The VMware tgz install on my 64-bit Ubuntu 6.10 was quite smooth. You still have to enter a serial, easily obtained by filling out a simple form on the vmware website. The installation of vmware-authd failed though; since Ubuntu's xinetd doesn't read /etc/inetd.conf I had to create an entry in /etc/xinetd.d/.
hlinden@htpc:~$ cat /etc/xinetd.d/vmware
service vmware
{
        disable         = no
        socket_type     = stream
        protocol        = tcp
        user            = root
        wait            = no
        server          = /usr/sbin/vmware-authd
        port            = 902
}
Speed-wise VMWare Server is great; I only allocated 384Mb RAM to the VM and it still feels quite snappy and fast. I've only tried running two VMs at the same time so far, but my first impressions are simply: great stuff! The VM creation wizard thingy even had Solaris x86 in the list; haven't tried it yet but I sure will.

Took a few screenshots during the Unbreakable Linux installation.

Wednesday, October 25

Oracle Unbreakable Linux

Ok, so Oracle is doing "a CentOS".
Oracle is taking the Redhat Enterprise Linux source code, removing all the trademarks, logos and references to Redhat, and recompiling it under the Unbreakable Linux label. If any bugs are found and Redhat doesn't fix them fast enough (or at all), Oracle will take care of it and guarantees that all bugs will be fixed and backported to all versions of Unbreakable Linux.
Sounds pretty good to me, especially when you look at the support cost model.

The basic support contract, "network level", gives you access to patches via Oracle's up2date equivalent. And the cost is a mere $99 per year per system.
If you want enterprise support you get that for a couple of hundred more.
If you want premium support you get that for about $1399 per year; you can't even get that support level from Redhat.
The full support policy can be found here.
And... even better... support contracts are 50% off for the first few months (although I still haven't found where you can actually buy them).

Read more. Take a few minutes (about 60) to watch Larry's keynote, quite interesting.

Download ISO images, free for all.

Monday, October 23

Dell AMD Opteron servers

We've known about this for quite some time; I think I heard the first rumors last November, and we saw new AMD desktop machines from Dell about a month ago.
And now we have two new shiny Opteron Socket-F machines. They all run Linux of course.

First the anticipated Dell PowerEdge SC1435. A nice little box; as I've said before, the rack mounts in the SC series just get things done. I do think the price tag at around $3000 USD is a bit high; that'll probably drop quite a lot if you haggle with Dell and buy in bulk. It is a perfect JBoss server, deployed in a nice farm of course, or it would make a very nice HPC node.
Now if just Dell would send me a few of these to test, they could use a bit of blogger hype. :)

The second new box looks even more interesting,
the Dell PowerEdge 6950.
A four-socket machine with up to 8 cores and 64 gigs of memory. On par with the HP DL585 and sort of a "half size" machine compared to the Sun X4600. I wonder which box packs the best bang for the buck, but that's for another post. A similarly specced X4600 costs about twice as much, but on the other hand you get the extra scalability and future-proof CPU cards instead of on-board Socket-F.
The PE 6950 takes 5 internal SAS disks and has got 7 free PCIe slots.
That's quite a lot of I/O for more NICs and HBAs; do I dare say this makes a very nice RAC node? Probably one of the top players in the area; team it up with a decent SAN and you are in business.
The starting price is not bad, not bad at all. An entry-level machine comes in at around $10000 USD.

See the product videos over at direct2dell.


Saturday, October 21

Oracle 10gR2 on Ubuntu 6.10 AMD64

I just finished installing Oracle 10gR2 on my Intel Core2 Ubuntu 6.10 machine running in x86_64 (AMD64) mode. The installation is fairly straightforward with a few gotchas. In comparison to CentOS/Redhat, Ubuntu 6.10 has got many more native 64-bit libraries; pretty much everything is actually 64-bit, while under Redhat most libraries are 32-bit. The Oracle Universal Installer (OUI) uses a 32-bit Java runtime, hence we have to install a few i386 compatibility libraries to get it working.

First of all make sure your machine's hostname resolves to your external IP and not the loopback IP (127.0.0.1).
Double check using this command:
$ grep $(hostname) /etc/hosts
If you see the loopback address there, just edit the hosts file to have the external IP.
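A small helper makes the check scriptable; the function name and arguments here are my own, for illustration:

```shell
# warn if the given name maps to a 127.x address in a hosts-style file
check_host_ip() {
    hostsfile=$1; name=$2
    if awk -v n="$name" '
        $0 !~ /^#/ { for (i = 2; i <= NF; i++)
                         if ($i == n && $1 ~ /^127\./) found = 1 }
        END        { exit !found }' "$hostsfile"
    then
        echo "WARNING: $name resolves to a loopback address"
    else
        echo "OK: $name does not map to 127.x"
    fi
}
```

Call it as check_host_ip /etc/hosts $(hostname) before starting the installer.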

Now you need to install a few compat libraries and 32-bit libraries.
sudo apt-get install gcc libaio1 lesstif2 \
lesstif2-dev make libc6 libstdc++5 \
lib32stdc++6 lib32z1 ia32-libs
This will probably cascade into quite a few packages depending on how much you have installed; be patient and let it download everything it needs. The extra ia32 libraries are probably the biggest change from the old 32-bit installation.

Edit /etc/sysctl.conf to set the kernel parameters required by Oracle (I've added a few extras here as well to avoid unnecessary swapping).
Add these lines:
# Oracle stuff
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
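One gotcha worth knowing: kernel.shmall is counted in pages while kernel.shmmax is in bytes, so a quick sanity check that the two values agree looks like this (the numbers are the ones from the file above):

```shell
page=$(getconf PAGE_SIZE)       # typically 4096 on x86
shmall=2097152                  # kernel.shmall, in pages
shmmax=2147483648               # kernel.shmmax, in bytes
shmall_bytes=$(( shmall * page ))

echo "shmall covers $shmall_bytes bytes"
if [ "$shmall_bytes" -ge "$shmmax" ]; then
    echo "OK: shmall >= shmmax"
fi
```

With a 4k page size shmall covers 8GB here, comfortably more than shmmax.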
Create the oracle user and required groups (you don't really need the nobody group; it's only used by root.sh as the group owning the oraenv and dbhome scripts).
sudo groupadd nobody
sudo groupadd oinstall
sudo groupadd dba
sudo useradd -s /bin/bash -g oinstall -G dba oracle
sudo passwd oracle

Increase system limits by adding these lines to /etc/security/limits.conf:
* soft nproc 2047
* hard nproc 16384
* soft nofile 1024
* hard nofile 65536
To avoid the non-redhat warnings during the prereqs run this command:
sudo sh -c 'echo 3 > /etc/redhat-release'
Add a few symlinks to avoid any script problems:
sudo ln -s /usr/bin/awk /bin/awk
sudo ln -s /bin/true /bin/rpm
sudo ln -s /usr/bin/basename /bin/basename
sudo ln -s /lib/libgcc_s.so.1 /lib/libgcc_s.so
sudo mkdir -p /u01/oracle/10g
sudo chown -R oracle:oinstall /u01/oracle/10g
sudo chmod -R 775 /u01/oracle
Time to run the installer, make sure you are allowed to use the X11 server (export DISPLAY, xhost, vnc whatever).
To get the 32-bit JRE to work you need to set the XLOCALELIBDIR before running the installer:
export XLOCALELIBDIR=/usr/lib32/X11/locale
cd /your/install/dir
I just ran the defaults straight through (except the SID name and password). The installation completed without any warnings or errors, the linker worked fine and everything was configured properly; dbca created the standard database.
oracle@htpc:~$ uname -a
Linux htpc 2.6.17-10-generic #2 SMP Fri Oct 13 15:34:39 UTC 2006 x86_64 GNU/Linux
oracle@htpc:~$ sqlplus / as sysdba

SQL*Plus: Release - Production on Sat Oct 21 20:13:45 2006

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> select platform_name from v$database;

Linux 64-bit for AMD


This guide is based on the Oracle 10gR2 x86_64 installation guide, and I've also used Dizwell's excellent Ubuntu 6.06 32-bit guide for some Ubuntu-specific notes. Thanks.

Cool, now I can alt-tab between sqlplus and my latest Prison Break episodes on the TV. :)

Friday, October 20

Funny temporary table behavior in Postgres

Was playing around with temporary tables in PostgreSQL and found this quite weird behavior.
When you have a table called, say, animal and create a new temporary table with the same name, Postgres is totally OK with the "duplicate" name, since the temporary table ends up in a different schema (pg_temp_N) that takes precedence in name resolution. Anyway, it was quite confusing before I figured out what was going on.
It's the same in both 8.1 and 8.2beta. Not sure if it's to be expected, an oversight on my side or simply a "bug".

[postgres@dolphin ~]$ psql
Welcome to psql 8.2beta1, the PostgreSQL interactive terminal.

Type: \copyright for distribution terms
\h for help with SQL commands
\? for help with psql commands
\g or terminate with semicolon to execute query
\q to quit

postgres=# create user test password 'test';
postgres=# create database testdb owner test template template0;
postgres=# \c testdb test
You are now connected to database "testdb".
testdb=> create table animal (dog int);
testdb=> \d
List of relations
Schema | Name | Type | Owner
public | animal | table | test
(1 row)

testdb=> \d animal
Table "public.animal"
Column | Type | Modifiers
dog | integer |

testdb=> create temporary table animal (cat int);
testdb=> \d
List of relations
Schema | Name | Type | Owner
pg_temp_1 | animal | table | test
(1 row)

testdb=> \d animal
Table "pg_temp_1.animal"
Column | Type | Modifiers
cat | integer |

testdb=> drop table animal;
testdb=> \d animal
Table "public.animal"
Column | Type | Modifiers
dog | integer |
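The same name resolution rule is easy to poke at in SQLite (used here only because it runs inline from Python; the behavior mirrors what the psql session above shows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE animal (dog INTEGER)")
con.execute("CREATE TEMP TABLE animal (cat INTEGER)")
# Unqualified names resolve against the temp schema first, as in Postgres
shadowed = [d[0] for d in con.execute("SELECT * FROM animal").description]
print(shadowed)  # ['cat']
con.execute("DROP TABLE animal")  # drops the temp table, not the real one
restored = [d[0] for d in con.execute("SELECT * FROM animal").description]
print(restored)  # ['dog']
```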


Tuesday, October 17

SQL challenge #1

Ok, time for a little competition. Lukas Smith and I were helping a user with an at first glance very simple SQL problem, but after a few minutes we still couldn't find a "nice" way to solve it.

So here is the problem, given a table with these (a, b) rows:
1, 2
2, 1
3, 4
3, 6
4, 3
6, 3
And the output from the query we want is:

All rows with a matching sister "a=b" relation should be "merged".

Lukas found one quite good way using exists in a self-join.
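For the curious, here is one shape such an exists self-join can take, sketched in SQLite via Python so it's runnable. I'm assuming the desired output keeps one row per sister pair; a row without a sister would need an extra NOT EXISTS branch to survive unmerged:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pairs (a INTEGER, b INTEGER)")
con.executemany("INSERT INTO pairs VALUES (?, ?)",
                [(1, 2), (2, 1), (3, 4), (3, 6), (4, 3), (6, 3)])
# Keep the a < b half of every sister pair
merged = con.execute("""
    SELECT a, b FROM pairs p
    WHERE a < b
      AND EXISTS (SELECT 1 FROM pairs q
                  WHERE q.a = p.b AND q.b = p.a)
""").fetchall()
print(sorted(merged))  # [(1, 2), (3, 4), (3, 6)]
```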

Any RDBMS goes, ANSI SQL is an advantage but not necessary. Analytics perhaps?
Nicest/fastest/cleanest query wins. Post any questions as comments.
E-mail me or post your solution as a comment to this blog.

See Lukas' invite to the challenge. :)

Thanks and good luck!

Saturday, October 14

Running Ubuntu 6.10 on Abit AB9

Playing around with my new media center PC at the moment.
Ubuntu 6.10 Edgy (daily build from Oct 13th) installed without too many problems yesterday; most of the hardware on my Abit AB9 Pro motherboard was detected out of the box.
However, the ethernet controller, a Realtek 8111B, was not detected.
From lspci I got Realtek Semiconductor Co., Ltd. Unknown device 8168 (rev 01), and after a quick look on Google I found this article on how to get the card working. Worked great: just download it, change the three MODULE_PARM lines in the src to MODULE_PARM_DESC and it compiles ok.

A few notes on installing Linux on the AB9 motherboard, from my experience. First of all, use a very recent version of Linux. Ubuntu 6.10 worked for me and I'd expect Fedora Core 6 to work as well.
Use the JMicron controller, not the ICH8R controller, for the disk and optical drive.

Another thing, which I think is just down to me not having used Debian/Ubuntu enough: after I partitioned my disk and created ext3 file systems using labels I was unable to mount using LABEL=foo; it turns out I had to restart the udev service before the labels showed up. In CentOS, they just appear under the device mapper.

I'm quite impressed by how good it looks on my LCD television. Even xvid files in HDTV resolution look great.

Friday, October 6

Blog design updates

Joined the blogger beta program yesterday which allowed me to do a few cool updates to the template of the blog.
Note the "Latest Oracle downloads" RSS-feed and the labels amongst other things.

Thursday, October 5

Media Center 10g

I'm currently shopping around for a combined media center PC / Oracle test box. It must be the first time someone has tried to combine those two in one box, and the first time someone plans to hook up an Oracle server to a television set. But hey, there is a first time for everything.
Currently looking at the new Intel Core2 processors. The 2.13GHz E6400 seems to be the best value for money. I wouldn't mind the E6600 with 4Mb cache, but the extra £67 can probably be spent on something that adds more value to the system, like an extra SATA disk to distribute the IO load. Not sure if AMD AM2 processors are a valid option anymore? The new Intel 965/975 chipsets are probably not supported in Solaris though, whereas an Nforce chipset probably works great.
Found a quite decent looking case for the system as well, the Silverstone LC17.
I'll post a photo of sqlplus on a 32" LCD television once I buy the system. :-)

Tuesday, October 3

Dell 9th generation servers

Quick overview of Dell's 9th generation rack-mount server line-up.

It's been a couple of months now since Dell introduced the first machines in the new "9th generation" server line-up. The PowerEdge 1950, 2950, 2900 and the blade 1955 all look like quite nice machines; they can be fitted with the new DRAC/5 card and onboard PERC.
They have recently been joined by the slightly simpler (and more cost effective) SC1435 and the baby 860.

All machines use the new generation Intel processors (64-bit), FB-DIMM memory and SAS and/or SATA disk drives.
I'm not going to go on about product specs and other sales crap, you can find all that on the Dell website. This is just my view on the machines and what they are suitable for, assuming Linux as the OS on the box. Solaris 10 is currently not confirmed as working: with 06/06 you still need a driver disk for the PERC card, and the onboard ethernet is unsupported so you need an add-on Intel PCI card to even make use of the server.

Dell PowerEdge 1950
Nice 1U pizza box machine. I would use this as an application or web server; if you equip it with the 4x 2.5" disk backplane you could possibly have a very small database on it, though disk I/O is very likely to be a bottleneck. Hooking it up to a SAN is another option. Good layout of the internal components, and the back connectors are easy to access. The PCIe slots are a bit limited in length, but then again there aren't many long cards available any more (anyone remember the old full-length PCI raid cards?). Quite redundant with dual "most things".
To summarize, a perfect box for an Apache HTTPd or Java app server.
Starting price in a usable spec: $3437

Dell PowerEdge 2950
Pretty much the same machine as the 1950 but with more I/O. Six 3.5" drives or eight 2.5" drives. Decent small office file server, takes 1.8Tb internally (3Tb if you go SATA), with room for extra ethernet cards if you want to run iSCSI. Would also be a good all-in-one intranet server: put Postgres, PHP, Postfix and a few other things on it. Or install Zimbra and use it as an e-mail server.
Slap eight 2.5" drives in it and it is a decent DB server. If you team up a group of these and hook them up to a SAN they make great RAC nodes. Like the 1950 it is fairly redundant and can be trusted to work by itself for non-critical things.
Starting price in a usable spec: $3757

Dell PowerEdge 2900
Not really a rack-mount box but it does come with a rackmount kit. Only included here as it is the only "one box solution" for larger installations such as an Oracle EE box, mid-sized file server or even a large e-mail server. Takes up to 10 internal drives (8 + 2 cage), that's 3Tb with current SAS drives. Lots of PCIe I/O: six slots for HBAs and NICs. If you like virtualization it would make a great VMware box, as it takes an impressive 48Gb RAM (you don't want to see the price tag for that though).
It is 5U in a rack though, so I hope you're not paying per U if you colocate it.
Starting price in a usable spec: $4370

Dell PowerEdge 860
This one I don't really understand. First of all, who came up with the name 860? It is clearly a 950. Second of all, I don't really get where this box is meant to be used; the SC1435 is a better choice in 90% of all cases. The only advantage over the SC1435 is the fact that it has the normal kick-ass DRAC/5 card as an option. It takes Celeron, Pentium D and Xeon 3000 series processors. I really *really* recommend against using a Celeron proc in anything that goes in a rack.
Possible uses (if I have to) would be a DNS server, mail relay, basic web server or management server.
Starting price in a usable spec: $1391

Dell PowerEdge SC1435
This little guy is not officially out yet, it should be available in the coming weeks. The tower equivalent SC1430 is currently available though. I must admit that this is probably one of my favorites in the line-up, not because it's a great, flexible, highly redundant machine, but simply because it's cheap and gets the job done! No hot-swap anything. This is probably the perfect app or web server for larger load balanced solutions. It is however an AMD Opteron Socket-F server, not an Intel one like the others. Quite confusing to be honest; I would have hoped that Dell launched a completely new line for the Opteron gear.
I would not consider this box unless I was installing at least 4 (or possibly even 6) nodes for a single deployment. If one fails, you'll have a longer downtime period than with most other boxes since you probably have to get it out of the rack to service it. But it's cheap and just gets things done. I can imagine many high performance computing farms rubbing their hands when they see this box; it's very suitable for a Linux Beowulf cluster.
You can read my review of the older SC1425.
Starting price in a usable spec: $2900 (estimated)

Dell PowerEdge 1955
The blade server in the new family. Data processing, web servers, app servers and possibly other uses if you add the FC daughter card. Quite decent machines, and as a 1855 admin I'm very pleased so far.
One thing to remember though: the 1955/1855 chassis backplane is not redundant, so deploy in groups of two chassis and at least 8 servers to make them cost effective. Prices exclude the chassis, which usually comes in at about $3000 with redundant PSU's, DRAC, IP-KVM and a pair of switch modules.
Starting price in usable spec: $2516

Regarding the price, the starting list price on Dell's website is unusable: one disk drive included, way too little RAM, no DRAC card etc. All machines have been specced out to a suitable minimal config for actual deployment use, with redundant PSU's, a PERC card, at least 2Gb RAM etc and Bronze NBD support. The size and usage of the box has also been considered; as an example, the 2900 was fitted with six drives since its main purpose is disk and I/O expandability.

For Solaris 10 support see this post.

Sunday, October 1

PostgreSQL 8.2 hits beta

PostgreSQL 8.2 hit beta the other day.
Just installed it in a Solaris zone on my laptop (you need a fairly recent Solaris Express release to use the dtrace [pdf] functionality).
A few new cool features.

  • dtrace support in Solaris
  • Warm standby server support
  • CREATE INDEX without locking the table
  • Improved syntax of the config file, support for SI units (kilobyte, megabyte, hour etc)
  • Basic support for SQL 2003 analytical functions (I couldn't find any docs about this).
  • Faster vacuuming
  • Improved WAL and log management

Quite a lot of exciting stuff. Analytics could be fun, and the speedups in WAL and logging could make an improvement for write-intense operations, an area where Postgres has been quite slow in the past.

Full list and downloads here. Cool.

Wednesday, September 27

Rotating tables (and sybase)

Got a question on IRC today about the equivalent of MySQLs MAX_ROW setting on a table. The problem at hand was more specific. The table should always contain N number of rows, if a new row was inserted, the oldest should be deleted. I wrote this quick example to provide a possible solution.
CREATE TABLE top10data (
  id   NUMBER,       -- column definitions reconstructed, types assumed
  name VARCHAR2(20)
);

-- view on top of the table, this is what clients insert into
CREATE VIEW top10 AS
SELECT id,name FROM top10data;

INSERT INTO top10data VALUES(1,'one');
INSERT INTO top10data VALUES(2,'two');
INSERT INTO top10data VALUES(3,'three');
INSERT INTO top10data VALUES(4,'four');
INSERT INTO top10data VALUES(5,'five');
INSERT INTO top10data VALUES(6,'six');
INSERT INTO top10data VALUES(7,'seven');
INSERT INTO top10data VALUES(8,'eight');
INSERT INTO top10data VALUES(9,'nine');
INSERT INTO top10data VALUES(10,'ten');

-- trigger on the view, catch the insert and
-- do it on the actual table
CREATE OR REPLACE TRIGGER top10_trigger
INSTEAD OF INSERT ON top10
FOR EACH ROW
BEGIN
  UPDATE top10data SET id=id+1;
  DELETE FROM top10data WHERE id>10;
  INSERT INTO top10data VALUES(1,:new.name);
END top10_trigger;
/

-- insert example, note that we insert on the view
SQL> select * from top10 order by id;

        ID NAME
---------- --------------------
1 one
2 two
3 three
4 four
5 five
6 six
7 seven
8 eight
9 nine
10 ten

10 rows selected.

SQL> insert into top10(name) values('New one');

1 row created.

SQL> select * from top10 order by id;

        ID NAME
---------- --------------------
1 New one
2 one
3 two
4 three
5 four
6 five
7 six
8 seven
9 eight
10 nine

10 rows selected.
This solution may have some concurrency problems if the table sees heavy insert activity (concurrent sessions renumbering each other's rows). To get around that you could use a sequence to generate the ID in the trigger and delete based on rank() over the numbers.
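The trigger logic is easy to test outside Oracle too; here is the same rotate-on-insert idea replayed procedurally against SQLite (table and column names as above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE top10data (id INTEGER, name TEXT)")
words = ["one", "two", "three", "four", "five",
         "six", "seven", "eight", "nine", "ten"]
con.executemany("INSERT INTO top10data VALUES (?, ?)",
                list(enumerate(words, start=1)))

def rotate_insert(name):
    # the same three statements the INSTEAD OF trigger runs
    con.execute("UPDATE top10data SET id = id + 1")
    con.execute("DELETE FROM top10data WHERE id > 10")
    con.execute("INSERT INTO top10data VALUES (1, ?)", (name,))

rotate_insert("New one")
rows = con.execute("SELECT id, name FROM top10data ORDER BY id").fetchall()
print(rows[:3])  # [(1, 'New one'), (2, 'one'), (3, 'two')]
```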

On another issue
I've been working a bit with Sybase lately.
To say the least, I do not like it.
  • The way it handles users, permissions and the "database owner" thing. I don't like it.
  • Data space and log space, what's this about? Isn't it just better to define logs and a number of "data storage thingies" (why don't we call them tablespaces) for anyone to use? Why pre-assign space and devices to a database (schema'ish). I don't like it.
  • The structure of transaction logging and checkpointing. It just feels ancient. I don't like it.
  • If you do a database dump on Solaris/SPARC and try to load it on a Linux/x86 machine you need to do a data conversion process thingie. What's that about? Sure, big endian vs. little endian matters when using raw devices, but a dump should be universal. Sybase should forget about always working with plain raw devices (disk, tape etc) and allow people to actually make use of the file system provided. Besides, if you want "endian-less" raw devices just use a cds volume under vxvm. Transportable tablespaces anyone? I don't like it.
  • The basic footprint is tiny compared to Oracle. The installation takes about 10 minutes and the sybase server uses about 40Mb of memory when you first start it up. So for a basic testing / dev database it is quite nice. I do like it.

And I've been messing around a bit with DB 2 as well. I'm not sure I like it yet but it is miles better than Sybase. I'll have to think about that one for a while (and play with it a lot more).

Wednesday, September 20

Update of updates

Another quickie update, nothing fun to report anyway. Been busy fighting WebSphere lately, trying to get decent performance on a Sun T2000 server. Out of the box performance was horrible. WebSphere 5.1 is still far from satisfactory; I'm putting my hopes on WebSphere 6.0 now.

Updated Sun boxes:
Sun finally decided to update the classic vXX0 range. Just a simple facelift to be honest, they still run the same range of processors. The v440 has been replaced by the new v445, the v240 by the v245 and the 1U v210 is replaced by the v215.
The two main updates are the use of SAS drives and the addition of PCIe. Same basic features as the Galaxy and Niagara servers. They updated the chassis to Andy's design.
My favorite new feature is the fact that you can have twice as many drives in the v445: 8 SAS drives instead of the old 4 SCSI. Might even be usable as an Oracle server without external storage. Oh, one last thing: no Solaris 8 support, bad bad. Who uses Solaris 9 anyway?!

Oracle patch release:
The patch release is out, not much new. Bug fixes and a patch for the dreaded select-view-permission-hack security hole. I have tried applying it to both and machines without any hiccups. 4547809 in Metalink (now if Oracle would just make these patches available for anyone to download *sigh*).

Btw, I've switched to KDE at work as well.

Tuesday, September 12

KDE - Not that krap after all

Ok, I've been quite "anti KDE" for the last few years, with good reason imo. The last time I tried KDE must have been around 2001 or 2002, and back then it wasn't that good.
Then the other week my beloved OpenSolaris-powered laptop's hard drive failed on me. After I spent about a week moaning about it I finally got around to buying a new drive for it, replacing the old 30gb 4200rpm with a snappy 80gb 5400rpm drive (yes, the speed difference is really significant). Since I didn't have a recent Nevada build ready and was too lazy to download one I decided to install Kubuntu instead, having got a free CD at an event the other day. Simple install as always with Ubuntu; even the widescreen res on my laptop worked out of the box. Easy setup, did the usual apt-get install oracle-xe, installed Postgres.
Then I realized I really liked using KDE, things were so much easier than in GNOME. KDE has probably got fewer features, but it just works. The integration between the applications is great, the little IM client Kopete is much nicer than gaim, and KDE seems to use less memory than GNOME as well. Oh well, I saved disk space for Solaris and will probably install that as soon as I get around to downloading a recent version. Nevada B48 should be out any day now.

Friday, September 8

Fancy group by rollup example

Played around with group by rollup today and thought I'd post a little example.

Let's look at the basic usage of the rollup clause
select deptno, ename,
sum(sal) sal from emp
group by rollup (deptno,ename)

-- Here's the output from sqlplus

    DEPTNO ENAME             SAL
---------- ---------- ----------
        10 KING             5000
        10 CLARK            2450
        10 MILLER           1300
        10                  8750
        20 FORD             3000
        20 ADAMS            1100
        20 JONES            2975
        20 SCOTT            3000
        20 SMITH             800
        20                 10875
        30 WARD             1250
        30 ALLEN            1600
        30 BLAKE            2850
        30 JAMES             950
        30 MARTIN           1250
        30 TURNER           1500
        30                  9400
                           29025

18 rows selected.
Ok, that's all well and cool. Oracle sums up the previous rows when the group by condition changes, which in our case is for each department and then for the whole table. When a rollup happens, all "non rolled up" columns are returned as nulls, as shown in the above example. Notice the last row, which has the sum of all employees and hence nulls in both rollup columns.
You probably want to label those rows with something more useful than nulls, like a small "Sum" descriptor or something. As usual, decode() is our friend.
Example with pretty labels:
with empt as
(
select deptno, ename,
sum(sal) sal from emp
group by rollup (deptno,ename)
)
select deptno,
decode(ename, null,
decode(rownum,count(*) over (),'TOTAL SUM','DEPT SUM'),
ename) ename,
sal
from empt

-- And the output from sqlplus

    DEPTNO ENAME             SAL
---------- ---------- ----------
        10 CLARK            2450
        10 KING             5000
        10 MILLER           1300
        10 DEPT SUM         8750
        20 ADAMS            1100
        20 FORD             3000
        20 JONES            2975
        20 SCOTT            3000
        20 SMITH             800
        20 DEPT SUM        10875
        30 ALLEN            1600
        30 BLAKE            2850
        30 JAMES             950
        30 MARTIN           1250
        30 TURNER           1500
        30 WARD             1250
        30 DEPT SUM         9400
           TOTAL SUM       29025

18 rows selected.
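If your engine lacks ROLLUP (SQLite, older MySQL), the same report can be emulated with UNION ALL, which is also a nice way to see exactly which rows ROLLUP adds. A small runnable sketch with a cut-down emp table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (deptno INTEGER, ename TEXT, sal INTEGER)")
con.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                [(10, "KING", 5000), (10, "CLARK", 2450), (20, "SMITH", 800)])
report = con.execute("""
    SELECT deptno, ename, SUM(sal) AS sal FROM emp GROUP BY deptno, ename
    UNION ALL
    SELECT deptno, NULL, SUM(sal) FROM emp GROUP BY deptno
    UNION ALL
    SELECT NULL, NULL, SUM(sal) FROM emp
    ORDER BY deptno, ename
""").fetchall()
for row in report:
    print(row)
# NULL sorts first in SQLite, so each summary row precedes its detail rows
```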


Wednesday, September 6

Sending SMS from Oracle (abuse warning)

Ok, this post is mostly to annoy people who think stuff like this should be done from the application layer. I do agree with them, but these things can be done from the db layer as well.
Anyway, here is a little code snippet to send an SMS text message from the database using just an insert statement. The SMS provider we use at work for basic Nagios alerts and stuff like that is Sign.Up To: cheap and quite reliable (they did have some downtime yesterday though). To send the actual message we use a trigger which in turn calls the quite handy (and abusable) package utl_http, which works in Oracle 8 and up (including XE).
If you haven't got utl_http installed you can do so by running the two scripts ?/rdbms/admin/utlhttp.sql and ?/rdbms/admin/prvthttp.plb as sysdba on the server. The response you get back is either a message id or the keyword "fail" if the message fails for some reason; the trigger simply catches that response and stores it in a table.
This code is just proof of concept and should not be used for anything "real": no error handling or input checking, it just sends the data off to the message aggregator. And it can't handle any fancy characters in the message, nor spaces. You can easily implement basic URL encoding using this function written by Tom Kyte.
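In the application layer that preprocessing is a one-liner; for example, Python's standard library does the URL encoding the trigger below skips (the message text is just an example):

```python
from urllib.parse import quote

# Encode a hypothetical message so spaces survive the query string
msg = quote("Eat more fruit")
print(msg)  # Eat%20more%20fruit
```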
create table smslog (
sms_id number(8),
smsno varchar2(18) not null,
message varchar2(160) not null,
status varchar2(150),
sendtime date
);

alter table smslog add constraint
smslog_pk primary key (sms_id);

create sequence smslog_pk_seq;

CREATE OR REPLACE TRIGGER smslog_insert_trigger
BEFORE INSERT ON smslog FOR EACH ROW
DECLARE
req UTL_HTTP.REQ; resp UTL_HTTP.RESP;
val VARCHAR2(2000); url VARCHAR2(300);
BEGIN
-- the URL was truncated in the original post; parameter names after 'from' are assumed
url := 'http://sms.sign-up.to/smpp_send.php?username=username&from=oracle&'
       || 'to=' || :new.smsno || '&message=' || :new.message;
req := UTL_HTTP.BEGIN_REQUEST(url);
UTL_HTTP.SET_HEADER(req, 'User-Agent', 'Mozilla/4.0 Oracle');
resp := UTL_HTTP.GET_RESPONSE(req);
UTL_HTTP.READ_LINE(resp, val, TRUE);
UTL_HTTP.END_RESPONSE(resp);
:new.status := val;
:new.sendtime := sysdate;
SELECT smslog_pk_seq.nextval INTO :new.sms_id FROM dual;
END smslog_insert_trigger;
/

-- And a sample sent message.
SQL> insert into smslog(message,smsno)
  2  values('Eat_more_fruit','4479697xxxxx');

1 row created.

SQL> select * from smslog;

 SMS_ID SMSNO           MESSAGE              STATUS        SENDTIME
------- --------------- -------------------- ------------- ---------
      1 4479697xxxxx    Eat_more_fruit       3211821       06-SEP-06

Monday, September 4

NFS locking issues

Been quite busy at work, decommissioning some old servers.
We are trying to get rid of everything pre RHEL/CentOS 3.x. After buying a new shiny file server for our home directories (a Dell 2850 with lots of cheap disk space) I ran into some very weird problems. Everything worked fine at first; I myself (as an XFCE user) didn’t have any problems at all, but our GNOME users ran into some serious ones. When logging in to GNOME the desktop did not start properly: gnome-panel looked like a grey bar without any tools and the desktop menu didn’t appear on right click. Simply, nothing in GNOME worked. Not in CentOS, not in Fedora, not even JDS in Solaris.
After installing the GNOME debuginfo packages for Linux it appeared to be some weird NFS locking problem (haven’t we all heard that one before) in gconfd: NLM auth lock calls got denied. Hard to troubleshoot why and how at the time, so we simply remounted the NFS shares with the nolock mount option in Linux and the sort-of-undocumented mount option llock in Solaris.
NFSv3 is just a mostly irritating patchwork of addons: statd, lockd etc etc.
I hate NFSv3; why didn’t I include an NFSv4 migration while we were at it.
The nolock "hack" will have to do for now, I’ll try to troubleshoot the problem properly later on.

fstab line in Linux:
fs1:/export/home  /export/home nfs rsize=16384,wsize=16384,intr,nolock
I'll try to post some rather interesting Oracle 10gR2 Sun Fire T200 server benchmarks later on this week. I've found a rather good benchmarking tool called Swingbench. Written in Java and easy to use. It comes with a data loading utility to populate the database with nice bogus data and then has a few tools to simulate different workloads.
Been reading a lot about file system tuning recently as well, ext3 tuning parameters and Oracles filesystemio_options parameter, need to dig up a good test box for that so I can evaluate the options. My current two disk machine doesn't cut it for I/O load testing.

Finally a quick movie recommendation.
Bought a new LCD TV the other day so I've been overindulging in movies recently.

And a big thanks to Lukas Smith for sending me a few DVDs from my wish list.

Friday, August 25

Oracle dates diffs using analytics

A simple little analytical query to find the difference between two dates on adjacent rows in Oracle using lead(). The lead() analytical function fetches a value from the next row in the result set; you can use lag() to fetch it from the previous row.
SQL> select * from leadtest;

        ID TAG        STAMP
---------- ---------- ---------
1 dog 05-JAN-06
2 dog 05-MAY-06
3 dog 05-SEP-06
4 cat 05-APR-06
5 cat 28-APR-06
6 cat 28-SEP-06
7 cat 28-OCT-06

7 rows selected.

SQL> with t as
2 (
3 select
4 tag,
5 stamp,
6 lead(stamp,1,stamp) over
7 (partition by tag order by stamp) stamp2
8 from
9 leadtest
10 )
11 select tag, stamp, stamp2, stamp2 - stamp diff from t
12 /

TAG        STAMP     STAMP2          DIFF
---------- --------- --------- ----------
cat 05-APR-06 28-APR-06 23
cat 28-APR-06 28-SEP-06 153
cat 28-SEP-06 28-OCT-06 30
cat 28-OCT-06 28-OCT-06 0
dog 05-JAN-06 05-MAY-06 120
dog 05-MAY-06 05-SEP-06 123
dog 05-SEP-06 05-SEP-06 0

7 rows selected.

-- Lets try lag() as well.
SQL> with t as
2 (
3 select
4 tag,
5 stamp,
6 lag(stamp,1,stamp) over
7 (partition by tag order by stamp) stamp2
8 from
9 leadtest
10 )
11 select tag, stamp, stamp2, stamp - stamp2 diff from t
12 /

TAG        STAMP     STAMP2          DIFF
---------- --------- --------- ----------
cat 05-APR-06 05-APR-06 0
cat 28-APR-06 05-APR-06 23
cat 28-SEP-06 28-APR-06 153
cat 28-OCT-06 28-SEP-06 30
dog 05-JAN-06 05-JAN-06 0
dog 05-MAY-06 05-JAN-06 120
dog 05-SEP-06 05-MAY-06 123

7 rows selected.
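The same lead() pattern works in any engine with window functions. Here is a runnable sketch in SQLite (3.25 or later) via Python, with julianday() standing in for Oracle's native date arithmetic:

```python
import sqlite3  # window functions require SQLite 3.25+

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE leadtest (tag TEXT, stamp TEXT)")
con.executemany("INSERT INTO leadtest VALUES (?, ?)", [
    ("dog", "2006-01-05"), ("dog", "2006-05-05"),
    ("cat", "2006-04-05"), ("cat", "2006-04-28"),
])
diffs = con.execute("""
    SELECT tag, stamp,
           CAST(julianday(LEAD(stamp, 1, stamp) OVER
                    (PARTITION BY tag ORDER BY stamp))
                - julianday(stamp) AS INTEGER) AS diff
    FROM leadtest
    ORDER BY tag, stamp
""").fetchall()
print(diffs)
# [('cat', '2006-04-05', 23), ('cat', '2006-04-28', 0),
#  ('dog', '2006-01-05', 120), ('dog', '2006-05-05', 0)]
```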

Tuesday, August 22

And there was Oracle

Found a cool link on the oracle-l mailing list to a guy running Oracle 4.1 under DOS. :)
Geeky indeed.
Can't say I remember UFI (the predecessor of sqlplus), since I was a toddler when it was released. At least they had scott/tiger.

Check the video http://technology.amis.nl/blog/?p=1127

Sunday, August 13

Using Oracle Workspaces

Here's another thing people think we can't do in Oracle: versioning. One really useful feature I often use for day to day stuff is workspaces. A workspace is an independent space where you can play with a table (or a whole bunch of tables) before you actually decide you want your new data to be the live data. Inside a workspace you can commit and rollback data without affecting any other workspace. Even though one should always test and prepare any work on a staging system, it can be reassuring not to actually modify a live table without knowing how it will look. You can have an almost unlimited number of workspaces and even have child workspaces under a parent. You can even do DDL operations. The workspace for the live data is simply called LIVE (in caps).
Let's do a little test.
-- Lets begin with enabling versioning 
-- and creating the new workspace

HALI@spinner1> exec dbms_wm.enableversioning('t1');

PL/SQL procedure successfully completed.

HALI@spinner1> exec dbms_wm.createworkspace('test');

PL/SQL procedure successfully completed.

-- Don't forget to actually switch to your new ws
HALI@spinner1> exec dbms_wm.gotoworkspace('test');

PL/SQL procedure successfully completed.

HALI@spinner1> select * from t1;

        ID NAME
---------- ----------
1 test
2 test2

-- Ok, lets update the table and insert a new row
OPS$HLINDEN@spinner1> insert into t1(id,name) values(3,'test 3');

1 row created.

OPS$HLINDEN@spinner1> update t1 set name='test 2' where id=2;

1 row updated.

-- Note that we actually commit the data.
OPS$HLINDEN@spinner1> commit;

Commit complete.

HALI@spinner1> select * from t1;

        ID NAME
---------- ----------
1 test
2 test 2
3 test 3

-- Now we switch back to the LIVE data.
HALI@spinner1> exec dbms_wm.gotoworkspace('LIVE');

PL/SQL procedure successfully completed.

-- And the live data is unchanged at this point.
HALI@spinner1> select * from t1;

        ID NAME
---------- ----------
1 test
2 test2

-- But we are happy with our updated data and want
-- our changes to be added to the live workspace.

HALI@spinner1> exec dbms_wm.mergetable('test','t1');

PL/SQL procedure successfully completed.

HALI@spinner1> select * from t1;

        ID NAME
---------- ----------
1 test
2 test 2
3 test 3

-- Don't forget to clean up after the operation.
HALI@spinner1> exec dbms_wm.removeworkspace('test');

PL/SQL procedure successfully completed.

HALI@spinner1> exec dbms_wm.disableversioning('t1');

PL/SQL procedure successfully completed.
Read the Oracle documentation for workspaces.

Sunday, August 6

New JBoss platform

I'm currently in the process of evaluating a new JBoss platform for our company. A few months ago this would have been a quite simple task, but now, with the new Intel Xeon 5100 "Woodcrest" processors on the market, it's a new ballgame.
The three different platforms I'm researching at the moment are the Sun X4100, the Sun T1000 and the Dell 1950.
My previous choice would have been the X4100 without question; the price/performance is good and it's a sort of common platform that can be used for other tasks as well if our needs change.
Sun recently introduced a dual SAS disk kit for the T1000, which actually makes it usable as an app server. Price-wise it is slightly pricey (the 2x72Gb SAS kit is about 650GBP (1000USD)) and it only has a single PSU. The T2000 would have been a better choice but the cost is simply too high. The T1000's biggest benefits are its top speed with huge numbers of threads and the fact that it runs Solaris. We're quite likely to run several JBoss instances on these boxes, so being able to run them in Solaris containers is quite a benefit.
I presume most people have seen the amazing Intel benchmarks of the 5100-series CPUs. They just fly. Trust me here, I know a lot of you probably hate Dell kit, but the Dell "9-series" is pretty darn good, with pretty decent remote management via the new DRAC/5 card (it is in fact better than the LOM in the Galaxy line (X4100 etc)).

Quick summary (for midspec machines with 8Gb RAM, procs as listed, the max number of 72Gb SAS disks and an extra dual port NIC in the 1950):

             Sun X4100              Sun T1000           Dell 1950
Processor    Dual AMD 280, 4 cores  SPARC T1, 8 cores   Dual Intel Xeon 5130, 4 cores
Max memory   16Gb                   16Gb                32Gb
Disks        4x SAS                 2x SAS              4x SAS or 2x SATA
Unit cost    £4500                  £6000               £3850
Support pa   £420                   £850                £110
3yr cost     £5760                  £8550               £4130
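As a sanity check on the table, the 3yr cost row looks like unit cost plus three years of support (my assumption, not stated above). That reproduces both Sun figures exactly; the Dell row comes out £50 higher than the listed £4130, so that one presumably includes some rounding or discount:

```python
# (unit cost, support per annum) in GBP, copied from the table above
boxes = {
    "Sun X4100": (4500, 420),
    "Sun T1000": (6000, 850),
    "Dell 1950": (3850, 110),
}
three_year = {name: unit + 3 * pa for name, (unit, pa) in boxes.items()}
print(three_year)
# {'Sun X4100': 5760, 'Sun T1000': 8550, 'Dell 1950': 4180}
```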

I'll post my decision when I make it. I have to say I'm leaning towards Dell at the moment.

Friday, August 4

Oracle group_concat() updated (again)

I've ranted about this twice before: the MySQL group_concat() equivalent in Oracle has in the past been quite a problematic query.
We found a quite good solution a while back, and after checking Tom Kyte's blog I found an even better solution today.
This is mighty impressive SQL code, very simple functions but used in a very clever way.

Have a peek.
with data as
(
select job,
row_number() over (partition by job order by ename) rn,
count(*) over (partition by job) cnt
from emp
)
select job, ltrim(sys_connect_by_path(ename,','),',') scbp
from data
where rn = cnt
start with rn = 1
connect by prior job = job and prior rn = rn-1
order by job

--------- ----------------------------------------

How cool is that, nothing extra to the query! Just beautiful use of the analytical functions.
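For comparison, engines with a native aggregate make this a one-liner. SQLite's group_concat() is shown below, runnable from Python (MySQL's works the same way); note that neither guarantees the concatenation order:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (ename TEXT, job TEXT)")
con.executemany("INSERT INTO emp VALUES (?, ?)",
                [("KING", "PRESIDENT"), ("BLAKE", "MANAGER"), ("CLARK", "MANAGER")])
rows = con.execute("""
    SELECT job, group_concat(ename, ',')
    FROM emp GROUP BY job ORDER BY job
""").fetchall()
print(rows)  # e.g. [('MANAGER', 'BLAKE,CLARK'), ('PRESIDENT', 'KING')]
```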

Full TK post over here.

Thursday, July 20

Fabric 7 servers

Found this very cool new server vendor called Fabric 7.
The thing that sets them apart from other vendors is that they actually have something new to offer. They recently introduced their first products, the Q80 and Q160, to the market. AMD Opteron servers, we've all seen those before and know what that's about.

Here's the cool part: they support hardware-level partitioning, similar to what you get in a Sun Fire 2900 and up. You can cut the box up into four domains, run VMware in one, Solaris 10 in two and Windows in the fourth. Sure, you could probably do all that in VMware, but in some cases you need dedicated hardware for a project. A basic system with 4 single-core processors and 8Gb RAM starts at $29k. Not a great price compared to the Sun x4600, which starts at $26k for a 4 dual-core system with 16Gb RAM. They do however come in at about the same price for an 8 dual-core system with 32Gb RAM. These are list prices, haggling is up to you.

This could be a very interesting company in the future. I wouldn't be surprised if one of the big four tries to acquire Fabric 7.

Tuesday, July 18

It's HOT!

Been quite slack with the blogging lately.
First of all, it's freakin' hot in London at the moment. 35C tomorrow (95F).
Second of all, I'm in the middle of moving to a new flat in east London, about 3 minutes away from London Bridge.
And lastly, I'm still quite busy at work, migrating lots of services to new hardware. Battling with Windows server for the first time in a year or so.

The only thing that keeps my mood up is the cool and refreshing new "frappe" from Caffe Nero. I prefer the mint flavored one. Yummy!

Thursday, July 13

Solaris 10 06/06 on Dell PE 1855 blades

Just installed Solaris 10 06/06 on a Dell PowerEdge 1855 blade.
Everything worked out of the box. Drivers for the RAID card, ethernet etc. were all included.
I just mounted the ISO image via the virtual media interface on the DRAC remote management card and ran the manual installer as usual. Default BIOS settings, except that I disabled hyperthreading to get better performance.
root@juliet:[etc]$ psrinfo -v
Status of virtual processor 0 as of: 07/13/2006 15:07:14
on-line since 07/13/2006 14:18:56.
The i386 processor operates at 3200 MHz,
and has an i387 compatible floating point processor.
Status of virtual processor 1 as of: 07/13/2006 15:07:14
on-line since 07/13/2006 14:19:02.
The i386 processor operates at 3200 MHz,
and has an i387 compatible floating point processor.
root@juliet:[etc]$ isainfo -v
64-bit amd64 applications
mon sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
32-bit i386 applications
mon sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu
root@juliet:[etc]$ prtdiag
System Configuration: Dell Computer Corporation PowerEdge 1855
BIOS Configuration: Dell Computer Corporation A04 08/24/2005
BMC Configuration: IPMI 1.5 (KCS: Keyboard Controller Style)

==== Processor Sockets ====================================

Version Location Tag
-------------------------------- --------------------------
Intel Xeon PROC_1
Intel Xeon PROC_2

==== Memory Device Sockets ================================

Type Status Set Device Locator Bank Locator
------- ------ --- ------------------- --------------------
DDR2 in use 1 DIMM1_A
DDR2 in use 1 DIMM1_B
DDR2 empty 2 DIMM2_A
DDR2 empty 2 DIMM2_B
DDR2 empty 3 DIMM3_A
DDR2 empty 3 DIMM3_B

==== On-Board Devices =====================================
LSI Logic 53C1020 Ultra 320 SCSI
Intel 82546GB Gigabit Ethernet
Intel 82546GB Gigabit Ethernet

==== Upgradeable Slots ====================================

ID Status Type Description
--- --------- ---------------- ----------------------------
1 available PCI-X DC_CONN

Have a look at prtconf -pv for the full device tree.
It was especially cool to have ZFS on a production-ready box. The blade I installed Solaris on had a pair of mirrored 146Gb drives, so I created a 40Gb system partition and a 100Gb partition for ZFS.
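For reference, the ZFS side of that setup boils down to a couple of commands. This is only a sketch: the slice name c1t0d0s4 and the pool name data are hypothetical, adjust them to your own layout.

```shell
# Create a pool on the 100Gb slice (hypothetical device name).
zpool create data c1t0d0s4
# Carve out a filesystem for application data.
zfs create data/app
# Check the pool came up healthy.
zpool status data
```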
Now I just need to investigate how well WebLogic 8.1 plays in a zone.

As a really annoying side note, I can mention that it took almost 2 days to get Windows 2000 Server installed on another blade. Getting drivers and a retail copy of Windows 2000 with slipstreamed SP4 was quite difficult. We usually use MSDN media to install development servers, but the Dell Server Assistant (required to get Windows 2000 working at all) requires a retail OEM version of Windows 2000. Then again, I'm not a very Windowsy person :)

Tuesday, July 11

New massive Sun servers

Sun, or should I say Andy Bechtolsheim, announced some really awesome servers at the NC event today.
I mean, these new boxes will kick some HP butt.

  • Sun x4600 - an 8-socket (16-way) Opteron machine. 4 SAS drives, 8 PCI-e, up to 128Gb! memory. Sounds like a fun box.
  • Sun x4500 - ok, most companies are moving away from internal storage, but now that iSCSI is making great progress it is time to rethink the mid-size storage sector. This little 4U box has nothing less than 48!! internal SATA drives. Up to 24Tb per machine, and with 10 machines in a rack that's 240Tb per rack. Yes please.
  • Sun Blade 8000 series - nice blade enclosure. Future proof, 2 PCI-e slots per blade, which is nice. And the network backplane has a smart config: instead of having an interface card on the blade and connecting via ethernet to the switch, the "switch" connects to the blade via PCI-e, so the actual ethernet card sits in the switch itself. Small customer base though, only massive datacenters can afford this type of kit.

I can see a very nice Oracle Data Warehouse setup here. One x4600 processing machine backed by 24Tb of iSCSI storage mounted from an x4500 :)
And we could build a massive 48-way 3-node RAC deployment in just 12U (plus proper storage).

Saturday, July 8

HPC Top500 - AMD vs. Intel

The latest Top500 HPC list was released about a month back. A lot of new systems on the list, which is great news. Opterons are climbing fast and IBM is gaining a lot of market share with Blue Gene systems.

One thing struck me as quite interesting when I first glanced at the list: Xeon processors seem to get higher "per CPU" benchmark figures. I know these numbers aren't the only thing to go by when comparing HPC processing power; the type of load etc. makes a difference. But still, it's a known fact that Opterons are faster in almost all tasks.
I did a quick calculation over the top 100 entries with Xeons or Opterons, and the average Rmax per CPU is almost 50% better for Xeons?!
Xeon gets 4.40 and Opteron gets 3.00.
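That "almost 50%" claim can be sanity-checked in a couple of lines (the two averages are from my own tally above):

```python
# Average Rmax per CPU from the top-100 tally.
xeon = 4.40
opteron = 3.00
advantage = xeon / opteron - 1
print(f"Xeon advantage: {advantage:.0%}")  # Xeon advantage: 47%
```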

This makes me even more curious about the next list, where we'll hopefully have some Xeon 5100 systems.

Wednesday, July 5

Busy with the blades

Have been quite slow on the blogging side lately. Actually quite busy at work installing a bunch of new servers. We ordered some Dell kit last week, PowerEdge 2850s and, more interestingly, PowerEdge 1855 blades.
I must say I really love the blade concept, though not so much for the annoying "data center cost saving" hype. I strongly doubt that 10 Xeon blades use much less power than 10 1U Xeon servers.
Power-wise, the enclosure wants 4 16 Amp 3-pole plugs.. go figure :)

No, what I really like about blade servers is the ease of management and administration. The Dell blade enclosures we got have 2 built-in switch modules, a DRAC/MC card and the optional IP-over-KVM module. All these cool modules are cheap, so I can't imagine anyone ordering a blade enclosure without them.

Then we have the cable side of things. Each enclosure holds 10 servers, and from the enclosure we have:

  • 4 power cables
  • 2 ethernet uplinks (one from each switch module)
  • 1 ethernet management interface

That's 7 in total. Compare to 10 1U servers:

  • 10x 2 power cables
  • 10x 2 ethernet cables to switches
  • 10x 3 kvm (mouse, keyboard and screen to a KVM switch)
  • 10x 1 ethernet management interface

That's 80!! in total, or about 180 meters of cable. Dang! If we'd had fibre-channel connectivity we would have saved another 18 cables.
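The tallies above check out; the fibre-channel figure assumes 2 FC cables per 1U server versus 2 on the enclosure, which is my reading of the "another 18" remark:

```python
# Per enclosure: power + uplinks + management.
enclosure = 4 + 2 + 1
# Per 1U server: power + ethernet + kvm + management, times 10 servers.
rack_of_10 = 10 * (2 + 2 + 3 + 1)
print(enclosure, rack_of_10)  # 7 80
# Assumed fibre-channel saving: 10*2 FC cables vs 2 on the enclosure.
print(10 * 2 - 2)  # 18
```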


Ok, now to what really counts. Price!
For us it started making sense after about 4-5 servers, which is just silly. The question was more "why not" buy blades. Each blade came in at about £1350 (dual 3.2Ghz, 4Gb, 2x146Gb) and the enclosure itself was about a silly £1000 (including the modules). That's £15000 for 10 machines, while a similarly specced PE 1850 costs about £1600. Not much of a saving, you say, a lousy £1000 for 10 machines.
But look at the extra costs of having 10 separate servers: take the 10 slots on a KVM switch, take the 18 extra ports in ethernet switches, take the freakin' cables. I'm assuming a cost of £1200 for a 16-port KVM switch, so allocating 10 of those ports costs £750; 18 switch ports across two switches at £500 each is £416; plus 28 cat5 cables at perhaps 60 pounds for a pack.
I'm not counting the extra allocation of PDUs, given that the blade enclosure uses the previously mentioned 16A 3-pole sockets, of which you probably have to have a few extra installed.
Now we have a cost saving of about £2260 per enclosure.
The biggest saving, however, will be in time; I'll blog about the ease of use another day. Funnily enough, the blades will be installed with a huge mix of operating systems: Windows, both 2000 Server and 2003 Server, alongside CentOS 3, CentOS 4 and Solaris 10. Most of them running WebLogic, WebSphere and JBoss.
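For the curious, the £2260 figure roughly reconstructs like this. The £100 cable cost is my own guess based on the pack price mentioned; everything else uses the post's rounded numbers:

```python
blades = 15000            # 10 blades + enclosure, rounded as in the post
rack_units = 10 * 1600    # ten similarly specced PE 1850s
kvm = 750                 # 10 of 16 ports on a £1200 KVM switch
switch_ports = 416        # 18 extra ports across two £500 switches
cables = 100              # assumption: ~28 cat5 cables
saving = (rack_units + kvm + switch_ports + cables) - blades
print(saving)  # 2266
```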

And a small PS. Nooo, I could not have bought Opterons nor the new Conroe procs. These machines are going to be used mainly for application support, mimicking customer environments. And as most people probably know, investment banks don't mix with the latest technology.