Wednesday, May 31

Sun cutting costs

It must be hard to be Jonathan Schwartz on a day like this, starting his career as CEO with a 10% head count reduction. Not really news, just a matter of when it would happen. Wall Street has been pushing for cost reductions for quite some time now.
Let's hope things change for the better and that Schwartz will be remembered as the man who turned Sun back into a profitable company.
The statement from Schwartz was pretty clear though. Focus the business around the core products and the technology they already have and spend less on specialised systems.

I'm still convinced Sun will stand strong and keep growing its market share in a number of sectors.

Schwartz has already got a long entry in his blog.

Read the official press release.

Pay for what you use

Ok, this is a long entry but it's worth the read.

I recently helped a friend of mine who works for a quite small company to sort out one of their secondary Oracle systems. They only really have one big database, but they need a secondary box for training, testing and development. The system they used to have was a four-proc HP DL580g2 server running RHEL 3.
Their main problem was that since they had a training system on the machine they had to pay for Oracle licensing and support for the machine, for all four CPUs. And since their application schema used partitioning and other EE features they had to have Enterprise Edition on this system as well.

They had opted for a term license from Oracle costing $32k annually. The server was starting to run a bit slow as well. Developers often have that effect on systems, starting with one test schema and ending up with about 37. The training system was used by a 10-user classroom perhaps twice a month. Minimal usage (for the training functionality) and load, still a massive license.
The problem: they needed a faster system and reduced costs. Both were hard requirements.

Their suggestion was to replace the machine with an HP DL585 with four Opteron processors running RHEL 4, still a $32k per year term license plus support on that. Time to cut back a bit here. Why do they need four CPUs on a training system?

Oracle are nice enough to accept per-zone licensing of Oracle under Solaris if you use the resource manager to limit the number of processors the zone can use. I first suggested that they opt for a Sun T2000 server but they were quite set on using the HP box. Fair enough, they are an HP shop and I'm not in any position to argue with them. So let's stick with the HP box, but let's run Solaris 10 on it. HP even officially support* Solaris 10 on that system. They agreed to do so. Very simple setup, two RAID1 arrays on 15kRPM disks. The databases on the machine aren't very large so there is currently no need for an external disk subsystem.

We installed Solaris 10 update 1 on the server and created two zones. The first zone, "training", was given one CPU; the second zone, "dev", was not resource limited at all and could use all CPUs in the system.
One CPU for the training system that had to be licensed, that's a neat little saving of $24k per annum plus the reduced support cost. The server itself came in at about $15k and the Solaris support contract costs $960 per annum compared to the $1500 per year for RedHat. So even with the new hardware they had a nifty saving of about $10k for the first year. Not to mention the full $24k saving for years two and three.
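For reference, here's a minimal sketch of how a one-CPU cap on a zone can be set up on Solaris 10 with resource pools. The pool, pset and zone names below are just placeholders; the exact setup on their box may well have looked a bit different.

# enable resource pools and save the running config as a starting point
pooladm -e
pooladm -s
# define a processor set pinned to exactly one CPU and a pool that uses it
poolcfg -c 'create pset training-pset (uint pset.min = 1; uint pset.max = 1)'
poolcfg -c 'create pool training-pool'
poolcfg -c 'associate pool training-pool (pset training-pset)'
pooladm -c
# bind the "training" zone to the one-CPU pool; the "dev" zone stays unbound
# and can therefore use every CPU in the box
zonecfg -z training 'set pool=training-pool'
zoneadm -z training reboot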

I almost sound like a Sun marketer here, but hey, it's plain facts. To prove I'm not a Sun fanboy I'm going to skip the step where I pull out the graphs and ROI calculations. Draw your own graphs! (And may I suggest you do that in Oracle BI Discoverer)

* http://www.hp.com/wwsolutions/solaris/index-all.html

Sunday, May 28

Testing VxVM under VMware

Just downloaded and tested the new "free" Veritas Storage Foundation software for Linux in a VMware instance.
Ran into a few problems, not unexpectedly.

First, the preinstall rpm scripts for VxVM fail if you are on an AMD box. Just run through the install script and manually add VRTSvxvm-common and VRTSvxvm-platform with the rpm --nopre option. After rpm is done just run vxinstall to finalize things.
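Roughly like this (the package file names below are from memory, check what the distribution actually ships):

# the preinstall scriptlets choke on AMD hardware, so skip them
rpm -ivh --nopre VRTSvxvm-common-*.rpm VRTSvxvm-platform-*.rpm
# then finish off the setup (license keys, starting vxconfigd etc.)
vxinstall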

Now for the second hiccup. CDS disks do not work under VMware. For those of you who are not familiar with cdsdisks, it is a new platform-independent disk format recently introduced in VxVM. It makes it possible to migrate SAN LUNs (or physical disks) from, for instance, an HP-UX box to a Solaris box without doing any conversion.
To avoid this problem we need to use the old "simple" disk format. The easiest way to always default to this is to create two files under /etc/default.
One file called vxdg with the line cds=off and one file called vxdisk with the line format=simple.

The rest is quite straight forward.
# cat /etc/default/vxdg
cds=off
# cat /etc/default/vxdisk
format=simple
# vxdiskadd sdb sdc
[ enter dg1 when prompted for disk group name ]
[ just hit enter for all other prompts ]

# vxdisk list
DEVICE TYPE DISK GROUP STATUS
sda auto:none - - online invalid
sdb auto:simple dg101 dg1 online
sdc auto:simple dg102 dg1 online

# vxassist -g dg1 make u02 1G layout=mirror
# vxprint
Disk group: dg1

TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg dg1 dg1 - - - - - -

dm dg101 sdb - 37701920 - - - -
dm dg102 sdc - 37701920 - - - -

v u02 fsgen ENABLED 2097152 - ACTIVE - -
pl u02-01 u02 ENABLED 2097152 - ACTIVE - -
sd dg101-01 u02-01 ENABLED 2097152 0 - - -
pl u02-02 u02 ENABLED 2097152 - ACTIVE - -
sd dg102-01 u02-02 ENABLED 2097152 0 - - -


# mkfs.vxfs /dev/vx/dsk/dg1/u02
# mount /dev/vx/dsk/dg1/u02 /u02
# df /u02

Filesystem 1K-blocks Used Available Use% Mounted on
/dev/vx/dsk/dg1/u02 1048576 17338 966793 2% /u02


Veritas actually let me create 5 user volumes before starting to complain about licensing.
VxVM vxvol WARNING V-5-1-12761 You have exceeded the authorized usage for this product and are out of compliance with your License Agreement. Please email sales_mail@symantec.com or contact your Symantec sales representative for information on how to obtain additional licenses for this product.

Veritas are scared

Ok, I missed this bit of news completely.
Veritas (yeah yeah, Symantec, whatever) have released a limited version of VxVM and VxFS, free to use. They labelled the new product Storage Foundation Basic. I'm not sure how useful it is. The free version has a few limitations.

From the Veritas website:
Download your free copy of Veritas Storage Foundation Basic for Linux and Solaris x64 servers. Note: This free version is limited to 4 user-data volumes, and/or 4 user-data file systems, and/or 2 processor sockets in a single physical system.

Veritas used to own the storage software market, but I must, to be honest, say I haven't bought any major Veritas licensing in quite a long while and haven't really considered doing so either.
Now with a production release of ZFS just around the corner it makes even less sense to use VxVM/VxFS on Solaris, and considering the awful performance on Linux I'm not very tempted to use it on any Linux systems.
Perhaps when deploying legacy Solaris 8 based systems. Err, can't do that either, since the Solaris/SPARC version of the product is not free.

Saturday, May 27

Ubuntu 6.06 - the aftermath

Ok, I got the Ubuntu RC installed without too much trouble.
There is a known bug with some ATI graphics cards (I know, I should have gone with Nvidia) so the installer crashed because it couldn't start X11. Luckily it just dropped back to the text console so I could edit /etc/X11/xorg.conf and switch back to the plain VESA driver. After I HUP'd gdm I got my nice installer desktop.
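For the record, the only change needed in /etc/X11/xorg.conf is the driver line in the Device section. The identifier string will be whatever the installer generated, and the original driver was presumably the ATI one; something like:

Section "Device"
        Identifier      "ATI Technologies, Inc. Radeon"   # whatever is already there
        Driver          "vesa"                            # switched from the ATI driver
EndSection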
The install itself is very painless. Six simple steps: selecting language, timezone, disk layout and creating the "admin user" (a normal user with sudo access). There is no package selection, you're given a nice, useful but quite clean system without a fuss. A good approach in my opinion.
It found all my NTFS partitions and added them to fstab with their respective disk labels. I'd say the installer is much easier than installing XP, it's at least much faster. In my twisted Oracle ways I added one OFA mount point (/u01, /u02 and /u03) on each of my three disks as well.

After the installer finished and I logged in I installed the latest updates and added two new very useful apt repositories to /etc/apt/sources.list. The first one to get all the media codecs and other nifty packages and the second one, obviously, to get Oracle XE. Sweet.
deb http://www.grawert.net/ubuntu/ warty universe
deb http://oss.oracle.com/debian unstable main non-free
I even used the Synaptic package manager to install Oracle XE, very simple.
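From the command line the same thing is just the usual apt dance. Assuming the package in the Oracle repository is called oracle-xe and that it ships the usual post-install configure script, it would look something like this:

sudo apt-get update
sudo apt-get install oracle-xe
# set the listener/HTTP ports, the SYS/SYSTEM password and whether to start on boot
sudo /etc/init.d/oracle-xe configure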

All in all a good first impression of 6.06.

Took a few screenshots.


Ubuntu 6.06RC and why Sun will rule

It's raining in London, as usual. So I got a bit bored and decided to see if Ubuntu 6.06 was released yet. It's not, but a nice looking release candidate is.
And now to the interesting part...
Ubuntu 6.06 LTS (Dapper Drake) RC is available for Sun UltraSPARC
It's even got support for the new CoolThreads servers. Many people were expecting this and now it's confirmed. Cool indeed.
To quote The Schwartz himself: Ubuntu is currently the most exciting Linux distribution on the market today. Take that, you Redhatters! (I'm not even going to mention the SuSE guys, you are so out of the picture already.)

This is yet another great move from Sun; they really show their willingness to let the customer do whatever they want with their hardware. Sun's core focus is to flog servers, and only if the customer wants more do they offer the end-to-end solution.
I, along with lots of other sysadmins and DBAs out there, like to have a bit of both. If we deploy a business-critical HA Oracle system we do want to run Solaris and have the Sun Cluster suite, we want a rock solid support agreement and may even ask someone from Sun to come press the power-on button when we go live.
On other occasions we don't want squat from Sun except a kick-ass piece of hardware. Take a development system: development systems often change over time and they have to be flexible and cost effective. If a proc fails we'll just take the next server in the pile and let Sun replace the proc later on (NBD). Not to mention that we tend to install all sorts of weird pieces of software on them.

My next question to Sun (and I'm pretty sure they'll answer "Yes, of course" to this one) is: will the Sun Ray server software be released for Linux on the T1s?

Can we dare Oracle to start supporting Ubuntu now? Come on Larry, you know it's the right thing to do.

Anyway, my 6.06 RC download is almost done. Time to burn a CD-RW and reboot.

Footnote:
The "LTS" bit in the Ubuntu release name stands for "Long Term Support"; they now offer 3-year support for desktop releases and an impressive 5-year support for server installs.

Wednesday, May 24

Installing CentOS linux on a remote machine

Most people know how easy it is to install Solaris on a remote SPARC server, and some people have probably tried using a kickstart file for Redhat'ish Linux installs.
I've used kickstart files for quite some time, with quite frequent hiccups. The annoying bit about kickstarts is that if anything goes wrong, the anaconda installer either blocks with a message on the screen or simply crashes and reboots the machine. Such situations can be quite tricky to troubleshoot over a 2Mbit WAN connection (2Mbit is better, but I've tried it over 512kbit DSL lines without too much hassle). Lately I've been more and more turned away from kickstarts for WAN installs; instead I just PXE boot the remote machine and pull up a remote VNC screen on my workstation. This is especially useful when the machine lacks a decent remote management card, like the Dell SC1425s where the IPMI card pretty much sucks.
Yep, anaconda has native VNC support.

It's quite simple to use, just remember to answer all pre-stage-2 installer questions on the PXE command line, things like install method and language support.

I've simply copied the vmlinuz and initrd files from the PXE dir in the CentOS distro to /tftpboot on my deployment server.
Here's an example entry from my pxelinux.cfg for that box, remember to put all the append stuff on one line.
label c4remote
kernel vmlinuz-c4-i386
append initrd=initrd-c4-i386.img lang=en_US.UTF-8 keymap=uk \
vnc vncconnect=haliws.uk.company.com ip=dhcp \
method=nfs:deploysrv:/dist/centos-4-i386
All you need now is to get the box to actually boot off the network. I usually use one of two methods to do this.
  • First method, use the remote management card to reboot the machine with PXE as the default boot device. ipmitool is useful for this (see the sketch after this list), or if you have a nicer box, like a midrange HP with a decent iLO interface, just ssh in and reboot it that way.

  • The second method is more tricky, you actually need someone on the remote site with a console to the server who can press F12 (or whatever kicks off the PXE boot on the server). The human error factor here is massive. :)

  • On the machine where you want the VNC screen you simply start a vnc listener with the command vncviewer -listen

Simple as that. Now when anaconda enters stage 2 of the install it will, instead of starting X on the server, start a VNC session connected to the listening viewer.
This method works on RHEL and Fedora as well as on CentOS.
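The ipmitool incantation for the first method looks roughly like this. The BMC address and credentials are obviously placeholders, and an older BMC may only speak the plain lan interface:

# tell the BMC to PXE boot on the next boot, then power cycle the box
ipmitool -I lan -H bmc.remote.company.com -U root -P secret chassis bootdev pxe
ipmitool -I lan -H bmc.remote.company.com -U root -P secret chassis power cycle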

    Saturday, May 20

    Oracle XML output (native) formatting

Got a few questions on how to actually get properly formatted XML straight from Oracle.
I'm not aware of any suitable DBMS function for this, if there is one.
One way is to simply insert the query result into an xmltype column in a temporary table and select it back. A temporary table is per session and is by default truncated when you commit.

    set pages 0 long 900000 lines 100
    -- create a temporary table with one xmltype column
    create global temporary table xmlformat (data xmltype);

-- do the select but with a prefixed insert
    insert into xmlformat select
    xmlelement("emp",
    xmlagg(xmlelement("employee",
    xmlattributes(empno as "id"),
    xmlelement("name",ename),
    xmlelement("job", job),
    xmlelement("hiredate", to_char(hiredate, 'YYYY/MM/DD')),
    xmlelement("deptno", deptno)
    )
    )
    )
    from
    emp
    where
    empno<=7566
    /

-- Ok, what have we got here? The output here is exactly
    -- as displayed by Oracle. No cheating. Do remember that this is
    -- a temporary table so the data is just for your current session
    -- and it will be lost when you commit (or end your session)


    select data from xmlformat;

    <emp>
    <employee id="7369">
    <name>SMITH</name>
    <job>CLERK</job>
    <hiredate>1980/12/17</hiredate>
    <deptno>20</deptno>
    </employee>
    <employee id="7499">
    <name>ALLEN</name>
    <job>SALESMAN</job>
    <hiredate>1981/02/20</hiredate>
    <deptno>30</deptno>
    </employee>
    <employee id="7521">
    <name>WARD</name>
    <job>SALESMAN</job>
    <hiredate>1981/02/22</hiredate>
    <deptno>30</deptno>
    </employee>
    <employee id="7566">
    <name>JONES</name>
    <job>MANAGER</job>
    <hiredate>1981/04/02</hiredate>
    <deptno>20</deptno>
    </employee>
    </emp>


    Querying XML data is next. :-)

    Thursday, May 18

    Ranking rows in Oracle

Oracle has a very nifty function for ranking rows; it is sort of comparable to an aggregate but returns a rank value for each row instead of just a single result row.
Perhaps you want to find the 5 most sold items for every month of the year, or you need to figure out who worked the longest hours from a timesheet system. The rank query returns a sequence over the "top ranked" rows as ordered within each partition. If you just want the top 1, select the whole rank expression under an alias in an inline view and add "where alias=1" in your outer where clause.

    Example
    I wrote this mockup query to show how it can be used.
We have a nice web-based application where we, for audit purposes, store all application logins in a logins table in the database. Let's say that you, for some unknown reason, need to figure out who was the first user to log in each month (yes I know, terrible use case).

select
  user_id,
  to_char(time,'YYYY-MM-DD HH24:MI:SS') LOGIN_TIME
from
(
  select
    user_id,
    time,
    rank() over(partition by trunc(time,'MM')
                order by time asc) rowrank
  from
    logins
)
where
  rowrank=1
/
    Returns
   USER_ID LOGIN_TIME
---------- -------------------
        68 2006-02-01 00:07:04
        68 2006-03-01 00:07:17
        68 2006-04-01 03:21:17
       108 2006-05-01 00:00:13


    More reading:
    http://www.adp-gmbh.ch/ora/sql/analytical/rank.html
    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14200/functions123.htm

I'm working on the second part of the XML stuff.

    Tuesday, May 16

    New MacBook Core Duo

Apple finally released the cool MacBook "non-Pro".
    Sweet 13" screen and dual core Centrino proc.

    A decent spec model (2Gb/100Gb) still lands over the 1kGBP mark :/

    Saturday, May 13

    Using XML in Oracle

Oracle has extremely nice built-in XML support; in 10gR2 you even have native XQuery support. 9iR2 and later versions still have excellent XML support in the shape of XML DB.

The first step in getting to know the XML functions in Oracle is to extract data to XML format, probably a quite common task, and from what I've seen this is usually done using some highly customised Java app that needs to be rewritten, recompiled, tested etc. every time there is a change. So why not use the functions available and let Oracle do the job?

The most basic XML functions to use are:
  • xmlelement() - simply creates an xmlelement. An element can have child elements and attributes.

  • xmlattributes() - adds an attribute to an element.

  • xmlagg() - aggregates a number of elements under another element, sort of like group by in a normal query.

Example query to generate employee records from scott.emp:
    select
    xmlelement("emp",
    xmlagg(xmlelement("employee",
    xmlattributes(empno as "id"),
    xmlelement("name",ename),
    xmlelement("job", job),
    xmlelement("hiredate", to_char(hiredate, 'YYYY/MM/DD')),
    xmlelement("deptno", deptno)
    )
    )
    )
    from
    emp
    where
    empno<=7566
    /
And the result will look like this (and no, the basic XML functions are not formatting-aware and will not indent the XML structure).
    <emp>
    <employee id="7369">
    <name>SMITH</name>
    <job>CLERK</job>
    <hiredate>1980/12/17</hiredate>
    <deptno>20</deptno>
    </employee>
    <employee id="7499">
    <name>ALLEN</name>
    <job>SALESMAN</job>
    <hiredate>1981/02/20</hiredate>
    <deptno>30</deptno>
    </employee>
    <employee id="7521">
    <name>WARD</name>
    <job>SALESMAN</job>
    <hiredate>1981/02/22</hiredate>
    <deptno>30</deptno>
    </employee>
    <employee id="7566">
    <name>JONES</name>
    <job>MANAGER</job>
    <hiredate>1981/04/02</hiredate>
    <deptno>20</deptno>
    </employee>
    </emp>
Check here for the official (10gR2) XML docs.

    Stay tuned, more XML to come.

    Wednesday, May 10

    TOra on Linux

    I'm not a fan of Toad, I'm not a fan of Tora. But a lot of people seem to be and keep asking me how to get it running on their system, mainly people who just installed Oracle XE.
    Here is a quick guide on how to get it working.

    Download Tora from http://tora.sourceforge.net/ and extract it to a directory.

Make sure you have all the required devel RPM packages before you start; use rpm -qa and look for things like kdelibs-devel, qt-devel and other KDE-related devel stuff. On my FC4 box ./configure complained that I didn't have qscintilla installed, yum did the job in a few seconds.

    yum -y install qscintilla qscintilla-devel
    ORACLE_HOME="/usr/lib/oracle/xe/app/oracle/product/10.2.0/server"
    LD_LIBRARY_PATH="$ORACLE_HOME/lib"
    export ORACLE_HOME LD_LIBRARY_PATH
    ./configure --prefix=/u01/app/tora --with-oci-version=10G
    # 'make' took ages on my machine and for some reason
    # it failed when I used the -j3 make option.

    make
    sudo make install
    All done!
    Launch tora with /u01/app/tora/bin/tora.

Last thing, Oracle SQL Developer is written in Java and takes 1 minute to install and configure. It's better than Tora in most aspects. Just use SQL Developer instead of Tora!

    Monday, May 8

Let's hope it's not the end

As most people know, SGI have had some financial troubles over the last few years and warned they may run out of cash by the end of the year.
And today they announced they were filing Chapter 11 to try to get rid of some debt.
A few debt holders have agreed to cancel debt in exchange for a share in the company.

It's truly sad. I'm a big fan of (some) SGI technology, NUMAlink is an awesome interconnect and the old O2 workstations were an amazing product when they were introduced. If I had the space I would get one just for nostalgia reasons.

    Read the official press release

    Sunday, May 7

    Nerd podcasting

Found some quite funny podcasts while browsing iTunes yesterday.
Bob Cringely has some sort of talk show where he interviews interesting people from the internet and unix community. I can't really say Bob Cringely is the best man for the job, but he did it. The people he interviews are indeed very interesting.

    http://www.pbs.org/cringely/nerdtv/

    My top 3 favorite people he has interviewed.
  • Bill Joy - co-founder of Sun and wrote some cool code
  • Judy Estrin - was CTO of Cisco and CEO of Bridge Communications
  • Anina - Model and queen of mobile stuff

Tuesday, May 2

    Copying Oracle CBO statistics

As at most sites, I work with a number of Oracle systems for each application: the production system, the DR system, the staging setup, the test & training setup, the development system(s) etc.
We obviously often do database exports and imports into various systems; sometimes after an import we end up with funny, skewed statistics and a fresh statistics gathering is required. It's not uncommon that we actually import the production schema into perhaps two or even three development schemas, and all schemas of course need these statistics.
One solution is to run dbms_stats.gather_schema_stats on each schema, but that's very time consuming.
I usually just gather the CBO stats for one schema and copy them between schemas. After all, all freshly imported schemas are identical.

exec dbms_stats.gather_schema_stats( -
    ownname => 'TEST1', -
    options => 'GATHER AUTO', -
    estimate_percent => dbms_stats.auto_sample_size, -
    degree => 3 -
    )

    -- Create the transport table
    exec dbms_stats.create_stat_table(user, 'TRANSPORT_STATS');
    -- Export stats to the newly created table
    exec dbms_stats.export_schema_stats(user, 'TRANSPORT_STATS');
    -- Simple exp of the table
    !exp test1/test1 tables=transport_stats
    -- All done, no need for this any more
    drop table transport_stats;

    -- Reconnect as the second schema owner
    connect test2/test2
    -- Vanilla imp of the table
!imp test2/test2 fromuser=test1 touser=test2
-- The imported rows in the stats table still reference the original
-- owner (the C5 column in the default stats-table layout), so repoint
-- them at the new schema before importing
update transport_stats set c5 = 'TEST2' where c5 = 'TEST1';
-- Import the stats from the transport table
exec dbms_stats.import_schema_stats(user, 'TRANSPORT_STATS');
-- Done! Drop the transport table again.
    drop table transport_stats;

    Cool, each additional schema in less than 30 seconds.