Sunday, March 9

Backups, cheaper and faster?

When did backups become easy and affordable?
I've been sort of slacking off in the backup game for the last year or so; our backups have been managed by other parts of the company, and some have been outsourced to BSPs. I recently specced and deployed a basic NetBackup solution for our office servers at work and was amazed at how simple the whole thing was. The first backup system I worked with was Legato Networker (now owned by EMC). It was a pretty great product at the time, but it was quite difficult to manage: lots of things needed to be scripted, and setting up a new tape library required quite a lot of tweaking.
Secondly, the budget looked fairly limited at first glance, but when we specced up what we needed and got a couple of quotes in, I was amazed at how much we actually got for our money.
I wanted basic disk staging followed by duplication to tape. The obvious choice for tape was LTO4: good price/performance, market-leading capacity, and a solid standard.
Most suppliers actually quoted the same tape library, just branded by different vendors (HP, Dell, IBM). All were IBM 3573 tape libraries, an excellent and affordable unit. Dell came in best on price, as usual, and we opted for the SAS-attached variant. The backup server is my first-choice server these days, the Dell PowerEdge 2950; I buy it for pretty much 75% of all requirements. It was specced out with a pair of quad-core 1.86GHz processors, 4GB RAM, 6x500GB SATA drives (in RAID 5) for the disk staging pool, and a SAS 5 HBA to attach the library.
Software-wise, the server is running CentOS 5.1 and Symantec/Veritas NetBackup 6.5 “Linux Starter Edition (LSE)”. The LSE is a pretty good deal: you get a server license, a tape library license and 5 client licenses (with bare metal restore, open file backup, etc.) for a fairly low cost.
Installing and configuring NetBackup was as easy as it gets: the tape library was detected on boot, with no sysctl.conf changes needed. I stepped through the NetBackup installation process, added the tape library (less than 5 clicks!!), defined a couple of tape label policies, and created a 2TB disk staging pool with my new tape group as the target. Just add water (ehm, clients actually).
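For the curious, the sanity checks I'd run at this point look roughly like the below. The /proc and /dev checks are plain Linux; the two NetBackup commands live under /usr/openv/volmgr/bin on a default install, so treat the exact paths as a sketch and adjust if yours differs:
# the robot (media changer) and the LTO4 drive should both be visible to the kernel
cat /proc/scsi/scsi
ls -l /dev/nst* /dev/sg*
# after the NetBackup install, the device layer should see them too
/usr/openv/volmgr/bin/scan          # probes the bus for robots and tape drives
/usr/openv/volmgr/bin/tpconfig -d   # lists the drives and robot as configured in NetBackup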

The total cost was under £13,000 (about $22,000) and installation time was less than one day. Included were:

  • Veritas NetBackup 6.5 Linux Starter Edition
  • NetBackup clients 5-pack
  • NetBackup Exchange agent
  • Dell PowerEdge 2950 server
  • Dell PowerVault TL2000 library w/ one LTO4 SAS drive
  • 25x LTO4 tapes
  • 3x LTO cleaning tapes
  • CentOS Linux 5.1 (free)

Wednesday, February 20

How to delete an object with a special character in Oracle

There are some things in Oracle that are possible but shouldn't be possible.
One thing I love to hate is the fact that you can create tables with almost any name, just as long as you double-quote it.
For example:
SQL> create table "My Fruit Table" (id number);

Table created.

SQL> select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
My Fruit Table                 TABLE
Horrible! And what's even more horrible is that people actually do this.

Now to the problem: a user created a table with a special character in the name. I'm not even sure which character, but I need a way to drop it. PL/SQL to the rescue.
Example:
SQL> set serveroutput on
-- Let's create a dummy table with a bogus character in the name
SQL> declare
  a varchar2(500);
begin
  a := 'create table "abctest'||chr(150)||'" (id number)';
  execute immediate a;
end;
/

PL/SQL procedure successfully completed.

SQL> select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
abctest?                       TABLE

SQL> desc "abctest?";
ERROR:
ORA-04043: object "abctest?" does not exist
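
A quick aside: if you want to see exactly which character you are dealing with, Oracle's dump() function prints the byte values of the name (for the dummy table above, the list will end in 150):
SQL> select table_name, dump(table_name) from user_tables where table_name like 'abctest%';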

-- And a bit of PL/SQL to drop it
-- change the abctest% string to something matching your single table.
SQL> declare
  a varchar2(500);
  tname varchar2(50);
begin
  a := 'select table_name from user_tables where table_name like ''abctest%''';
  execute immediate a into tname;
  dbms_output.put_line('Table name is: '||tname);
  execute immediate 'drop table "'||tname||'"';
end;
/

Table name is: abctest?

PL/SQL procedure successfully completed.

SQL> select * from tab;

no rows selected
Handy.

Sunday, February 3

Buying marketshare

OK, I'm way, way late in commenting on the two most recent IT company takeovers.

Sun buys MySQL
Tricky one; I honestly have a hard time seeing where MySQL fits into Sun's portfolio.
Sun is one of Oracle's biggest partners (the biggest after Dell?) and one of the biggest backers of PostgreSQL; they even employ a few of the lead Postgres developers.
Sun has two options with MySQL. They can invest a lot of R&D in MySQL and actually turn it into a really good product; MySQL is without a doubt a product with incredible potential, but today it's pretty much an over-sized way to store CSV index files (a bit harsh perhaps, but hey).
Or they can ride out the profits by selling LAMP-stack boxes and let MySQL sustain itself; I doubt they can make up for the billion dollars they paid for it that way, though.
Oh, and what will Sun make of the (rather evil) closed-source MySQL Enterprise Monitor administration product from MySQL? Sun shipping new closed-source software? Errr!

Some advice for Sun:
1) Add real constraint support. This is a show-stopper for many, and the InnoDB approach isn't good enough. MySQL needs native, solid, from-the-ground-up constraint support, independent of storage engine, version, bla bla (see the snippet below this list). If people don't want the "performance loss" of constraints, they can simply not add the constraints!
2) Sort out proper I/O management, tablespaces, clean partitioning, all that stuff. MySQL has improved a lot on this point over the last 12 months, but more work is needed, especially if they are planning on selling MySQL-powered Thumpers.
3) 100% solid transactional support. MVCC would be fun. I've seen a few too many transactional problems in MySQL; some sort of logging/undo/redo system is needed.
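
To make point 1 concrete, this is the kind of thing I mean; as of the 5.x versions around today, a CHECK constraint parses fine and is then silently ignored no matter which storage engine you pick (a quick illustrative snippet, nothing more):
-- the "constraint" below is accepted by the parser and then thrown away
CREATE TABLE accounts (
  id      INT PRIMARY KEY,
  balance INT,
  CHECK (balance >= 0)                  -- parsed, never enforced
) ENGINE=MyISAM;

INSERT INTO accounts VALUES (1, -100);  -- succeeds without a single complaint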

On top of that, I wouldn't mind a clean hot-backup tool (which requires proper transactional support, with something like SCNs, etc.) and third-party scripting language support (PgPerl anyone?).

That's about 250,000 R&D hours, so go get busy!

Oracle buys BEA
This is most likely very good news. I'm a big fan of the Oracle database (duh) and a fan of WebLogic Application Server.
What I'm not a big fan of is Oracle Application Server (OC4J).
I hope Oracle realizes that WebLogic is the superior product, treats WebLogic as the front runner, and ports OC4J features to WebLogic rather than the other way around. I doubt that will be the case though. If not, I hope they don't kill off WebLogic any time soon.

Hmm, that's all for now.

Friday, January 18

Websphere ND dmgr permission problems

Ran into a really weird problem with one of our WebSphere 6.1 Network Deployment setups yesterday, and as I couldn't find a single page about the problem on Google, I thought I'd blog it.
I'm not much of a WebSphere admin, but I managed to fix it after a while.

The problem started when a datasource was updated and all of a sudden all node agents stopped trusting the deployment manager (dmgr). Syncing the nodes failed, and hence pretty much everything failed to restart or deploy. Running servers were fine though.
We got this message in the logs:
[1/17/08 13:30:52:020 GMT] 00000028 RoleBasedAuth A   SECJ0305I: 
The role-based authorization check failed for admin-authz operation
SSLAdmin:temporarilyDisableCertificateAuthentication:java.lang.Long.
The user UNAUTHENTICATED (unique ID: unauthenticated) was not granted
any of the following required roles: administrator.
My best guess is that the node agent configuration was corrupted in some way.
How do you fix the problem, then?
Fairly easily, actually.
1) Stop all node agents that seem broken (that would probably be all of them!).
2) Go to the node agent's bin directory on the node (usually something like $WAS_HOME/profiles//bin/).
3) Manually sync the node with syncNode.sh, pointing it at the SOAP connector (default port is 8879) on the dmgr server. For example:
./syncNode.sh dmgrhost 8879 -username websphere -password webfear
4) Start the node agent and verify that the logs are happy, then kick off a cell sync from the dmgr. You should see entries similar to this in the logs:
[17/01/08 16:14:59:872 GMT] 0000002f NodeSyncTask  A   ADMS0003I: 
The configuration synchronization completed successfully.
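
For reference, the whole fix on each broken node boils down to something like this (the profile path below is just an assumption for a default install; adjust to your own layout):
cd /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin               # the node profile's bin directory
./stopNode.sh -username websphere -password webfear                 # stop the broken node agent
./syncNode.sh dmgrhost 8879 -username websphere -password webfear   # pull a fresh config from the dmgr over SOAP
./startNode.sh                                                      # bring the node agent back, then watch SystemOut.log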

Tuesday, January 1

Solaris Express on a Toshiba Satellite Pro A200

I bought myself one of those cheap laptops the other month. I needed a small machine for testing, and since laptops are just as cheap as (if not cheaper than) desktops these days, I got a laptop.
The machine came with Vista but I wanted to triple boot Vista, Ubuntu and Solaris Express Community Edition.

  • Use diskmgmt.msc in Vista to shrink the partition the machine came with; Windows can do this natively, so there is no need for Partition Magic or similar tools. Create at least three new partitions: one for Solaris, one for Linux and one for Linux swap.
  • Secondly, install Solaris: boot off the CD and go through the basic installer. The widescreen resolution worked out of the box (as usual). Do a full install; spending time "fixing" a smaller install is just annoying. Solaris will install its GRUB boot loader on both the MBR and the superblock (on the Solaris partition). It probably makes sense to leave a large slice unused so it can be used with ZFS after the installation is done (there's a short ZFS sketch after this list).
  • Install Ubuntu. Nothing simpler than that.
  • Edit Ubuntu's GRUB menu config (/boot/grub/menu.lst) to include Solaris. Simply point it at the Solaris partition (hd0,2 for me) by adding these lines at the end of the file:
    title Solaris
    root (hd0,2)
    chainloader +1
Done!
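
As a follow-up on the spare-slice note above: once Solaris is up, turning that leftover slice into a ZFS pool is a one-liner. The device name c1d0s7 here is just a guess for this laptop; check yours with format first:
zpool create -f tank c1d0s7     # create a pool named tank on the spare slice (-f in case of an old label)
zpool status tank               # check that the pool is online
zfs create tank/scratch         # a filesystem inside it, auto-mounted at /tank/scratch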

I had to install the gani NIC driver in Solaris to get the Ethernet card working and the Open Sound System sound card driver to get sound working.
The Atheros WiFi card is supposed to be supported but I couldn't get it to work, even after adding the pci device alias to the driver. I'll post an update if I get it to work.