SQL> select * from leadtest;
ID TAG STAMP
---------- ---------- ---------
1 dog 05-JAN-06
2 dog 05-MAY-06
3 dog 05-SEP-06
4 cat 05-APR-06
5 cat 28-APR-06
6 cat 28-SEP-06
7 cat 28-OCT-06
7 rows selected.
SQL> with t as
2 (
3 select
4 tag,
5 stamp,
6 lead(stamp,1,stamp) over
7 (partition by tag order by stamp) stamp2
8 from
9 leadtest
10 )
11 select tag, stamp, stamp2, stamp2 - stamp diff from t
12 /
TAG STAMP STAMP2 DIFF
---------- --------- --------- ----------
cat 05-APR-06 28-APR-06 23
cat 28-APR-06 28-SEP-06 153
cat 28-SEP-06 28-OCT-06 30
cat 28-OCT-06 28-OCT-06 0
dog 05-JAN-06 05-MAY-06 120
dog 05-MAY-06 05-SEP-06 123
dog 05-SEP-06 05-SEP-06 0
7 rows selected.
-- Let's try lag() as well.
SQL> with t as
2 (
3 select
4 tag,
5 stamp,
6 lag(stamp,1,stamp) over
7 (partition by tag order by stamp) stamp2
8 from
9 leadtest
10 )
11 select tag, stamp, stamp2, stamp - stamp2 diff from t
12 /
TAG STAMP STAMP2 DIFF
---------- --------- --------- ----------
cat 05-APR-06 05-APR-06 0
cat 28-APR-06 05-APR-06 23
cat 28-SEP-06 28-APR-06 153
cat 28-OCT-06 28-SEP-06 30
dog 05-JAN-06 05-JAN-06 0
dog 05-MAY-06 05-JAN-06 120
dog 05-SEP-06 05-MAY-06 123
7 rows selected.
Friday, August 25
Oracle date diffs using analytics
A simple little analytical query to find the difference between two dates on adjacent rows in Oracle using the lead() analytical function. lead() fetches the value from the next row in the result set; you can use lag() to fetch the previous row.
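For reference, lead() and lag() share the same three-argument form: lead(expr, offset, default), where default is returned when the window runs past the edge of the partition (that's why the queries above pass stamp itself as the default, giving a diff of 0 on the boundary rows). A combined sketch against the same leadtest table:

```sql
select tag,
       stamp,
       lead(stamp, 1, stamp) over (partition by tag order by stamp) next_stamp,
       lag(stamp, 1, stamp)  over (partition by tag order by stamp) prev_stamp
  from leadtest
 order by tag, stamp;
```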
Tuesday, August 22
And there was Oracle
Found a cool link on the oracle-l mailing list to a guy running Oracle 4.1 under DOS. :)
Geeky indeed.
Can't say I remember UFI (previous version of sqlplus), since I was a toddler when it was released. At least they had scott/tiger.
Check the video http://technology.amis.nl/blog/?p=1127
Sunday, August 13
Using Oracle Workspaces
There isn't much we can't do with versioning in Oracle. One really useful feature I often use for day-to-day stuff is workspaces. A workspace is an independent space where you can play with a table (or a whole bunch of tables) before you actually decide you want your new data to be the live data. Inside a workspace you can commit and roll back data without affecting any other workspace. Even though one should always test and prepare any work on a staging system, it can be reassuring not to modify a live table without knowing how it will turn out. You can have an almost unlimited number of workspaces, and even child workspaces under a parent. You can even do DDL operations. The workspace for the live data is simply called LIVE (in caps).
Let's do a little test.
Read the Oracle documentation for workspaces for the full details.
-- Let's begin by enabling versioning
-- and creating the new workspace
HALI@spinner1> exec dbms_wm.enableversioning('t1');
PL/SQL procedure successfully completed.
HALI@spinner1> exec dbms_wm.createworkspace('test');
PL/SQL procedure successfully completed.
-- Don't forget to actually switch to your new ws
HALI@spinner1> exec dbms_wm.gotoworkspace('test');
PL/SQL procedure successfully completed.
HALI@spinner1> select * from t1;
ID NAME
---------- ----------
1 test
2 test2
-- OK, let's update the table and insert a new row
OPS$HLINDEN@spinner1> insert into t1(id,name) values(3,'test 3');
1 row created.
OPS$HLINDEN@spinner1> update t1 set name='test 2' where id=2;
1 row updated.
-- Note that we actually commit the data.
OPS$HLINDEN@spinner1> commit;
Commit complete.
HALI@spinner1> select * from t1;
ID NAME
---------- ----------
1 test
2 test 2
3 test 3
-- Now we switch back to the LIVE data.
HALI@spinner1> exec dbms_wm.gotoworkspace('LIVE');
PL/SQL procedure successfully completed.
-- And the live data is unchanged at this point.
HALI@spinner1> select * from t1;
ID NAME
---------- ----------
1 test
2 test2
-- But we are happy with our updated data and want
-- our changes to be merged into the live workspace.
HALI@spinner1> exec dbms_wm.mergetable('test','t1');
PL/SQL procedure successfully completed.
HALI@spinner1> select * from t1;
ID NAME
---------- ----------
1 test
2 test 2
3 test 3
-- Don't forget to clean up after the operation.
HALI@spinner1> exec dbms_wm.removeworkspace('test');
PL/SQL procedure successfully completed.
HALI@spinner1> exec dbms_wm.disableversioning('t1');
PL/SQL procedure successfully completed.
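For completeness, if the test data hadn't worked out, the changes could have been discarded instead of merged. A sketch of that path, assuming the same workspace as above (dbms_wm.rollbackworkspace throws away everything done in the workspace):

```sql
-- Discard all changes made in the 'test' workspace instead of merging them
exec dbms_wm.gotoworkspace('LIVE');
exec dbms_wm.rollbackworkspace('test');
-- Then clean up as before
exec dbms_wm.removeworkspace('test');
exec dbms_wm.disableversioning('t1');
```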
Sunday, August 6
New JBoss platform
I'm currently in the process of evaluating a new JBoss platform for our company. A few months ago this would have been a quite simple task, but now, with the new Intel Xeon 5100 "Woodcrest" processors on the market, it's a whole new ballgame.
The three different platforms I'm researching at the moment are:
- Sun X4100 running Solaris 10
- Sun T1000 running Solaris 10
- Dell PowerEdge 1950 running CentOS 4
Sun recently introduced a dual SAS disk kit for the T1000, which actually makes it usable as an app server. Price-wise it is slightly expensive (the 2x72Gb SAS kit is about 650GBP (1000USD)), and it has only a single PSU. The T2000 would have been a better choice, but the cost is simply too high. The T1000's biggest benefits are its top speed with huge numbers of threads and the fact that it runs Solaris. We're quite likely to run several JBoss instances on these boxes, so being able to run them in Solaris containers is a real benefit.
I presume most people have seen the amazing Intel benchmarks of the 5100-series CPUs. It just flies. Trust me here: I know a lot of you probably hate Dell kit, but the Dell "9-series" is pretty darn good. It has pretty decent remote management with the new DRAC/5 card (in fact better than the LOM in the Galaxy line (X4100 etc.)).
Quick summary (for mid-spec machines with 8Gb RAM, processors as listed, the maximum number of 72Gb SAS disks, and an extra dual-port NIC in the 1950):
           | Sun X4100              | Sun T1000          | Dell 1950
Processor  | Dual AMD 280 (4 cores) | SPARC T1 (8 cores) | Dual Intel Xeon 5130 (4 cores)
Max memory | 16Gb                   | 16Gb               | 32Gb
Disks      | 4x SAS                 | 2x SAS             | 4x SAS or 2x SATA
NIC        | 4x GIG                 | 4x GIG             | 2x GIG
PCI        | 2 PCIe                 | 1 PCI              | 2 PCIe or PCI-X
PSU        | Dual                   | Single             | Dual
Unit cost  | £4500                  | £6000              | £3850
Support pa | £420                   | £850               | £110
3yr cost   | £5760                  | £8550              | £4130
I'll post my decision when I make it. I have to say I'm leaning towards Dell at the moment.
Friday, August 4
Oracle group_concat() updated (again)
I've ranted about this twice before. The MySQL group_concat() equivalent in Oracle has in the past been quite a problematic query.
We found a quite good solution a while back, and after checking Tom Kyte's blog I found an even better solution today.
This is mighty impressive SQL code, very simple functions but used in a very clever way.
Have a peek.
with data
as
(
select job,
ename,
row_number() over (partition by job order by ename) rn,
count(*) over (partition by job) cnt
from emp
)
select job, ltrim(sys_connect_by_path(ename,','),',') scbp
from data
where rn = cnt
start with rn = 1
connect by prior job = job and prior rn = rn-1
order by job
/
JOB SCBP
--------- ----------------------------------------
ANALYST FORD,SCOTT
CLERK ADAMS,JAMES,MILLER,SMITH
MANAGER BLAKE,CLARK,JONES
PRESIDENT KING
SALESMAN ALLEN,MARTIN,TURNER,WARD
How cool is that? Nothing extra added to the query! Just beautiful use of the analytical functions.
Full TK post over here.
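Worth a footnote: Oracle has since added a native listagg() aggregate (11g Release 2 onwards) that does this directly, so on newer versions the whole connect-by trick can be replaced with:

```sql
select job,
       listagg(ename, ',') within group (order by ename) scbp
  from emp
 group by job
 order by job;
```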