Thursday, July 20

Fabric 7 servers

Found this very cool new server vendor called Fabric 7.
The thing that sets them apart from other vendors is that they actually have something new to offer: they recently introduced their first products, the Q80 and Q160, to the market. AMD Opteron servers; we've all seen those before and know what that's about.


Here's the cool part: they support hardware-level partitioning, similar to the functionality you get in a Sun Fire 2900 and up system. You can cut the box up into four domains and run VMware in one, Solaris 10 in two and Windows in the fourth. Sure, you could probably do all that in VMware, but in some cases you need dedicated hardware for a project. A basic system with 4 single-core processors and 8Gb RAM starts at $29k. Not a great price compared to the Sun x4600, which starts at $26k for a 4 dual-core system with 16Gb RAM. They do however come in at about the same price for an 8 dual-core system with 32Gb RAM. These are list prices, haggling is up to you.
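
A rough back-of-the-envelope comparison, using the list prices above and counting cores only (ignoring memory and everything else):

# rough per-core list price based on the figures quoted above
echo $(( 29000 / 4 ))   # Fabric 7: 4 single-core CPUs  -> 7250 USD per core
echo $(( 26000 / 8 ))   # Sun x4600: 4 dual-core CPUs   -> 3250 USD per core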

This could be a very interesting company in the future. I wouldn't be surprised if one of the big four tries to acquire Fabric 7.

Tuesday, July 18

It's HOT!

Been quite slack with the blogging lately.
First of all, it's freakin' hot in London at the moment. 35C tomorrow (95F).
Second of all, I'm in the middle of moving to a new flat in east London, about 3 minutes away from London Bridge.
And lastly, I'm still quite busy at work, migrating lots of services to new hardware. Battling with Windows Server for the first time in a year or so.

The only thing that keeps my mood up is the cool and refreshing new "frappe" from Caffe Nero. I prefer the mint-flavored one. Yummy!

Thursday, July 13

Solaris 10 06/06 on Dell PE 1855 blades

Just installed Solaris 10 06/06 on a Dell PowerEdge 1855 blade.
Everything worked out of the box. Drivers for the RAID card, Ethernet etc. were all included.
I just mounted the ISO image via the virtual media interface on the DRAC remote management card and ran the manual installer as usual. Default BIOS settings, except that I disabled hyperthreading to get better performance.
root@juliet:[etc]$ psrinfo -v
Status of virtual processor 0 as of: 07/13/2006 15:07:14
on-line since 07/13/2006 14:18:56.
The i386 processor operates at 3200 MHz,
and has an i387 compatible floating point processor.
Status of virtual processor 1 as of: 07/13/2006 15:07:14
on-line since 07/13/2006 14:19:02.
The i386 processor operates at 3200 MHz,
and has an i387 compatible floating point processor.
root@juliet:[etc]$ isainfo -v
64-bit amd64 applications
mon sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
32-bit i386 applications
mon sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu
root@juliet:[etc]$ prtdiag
System Configuration: Dell Computer Corporation PowerEdge 1855
BIOS Configuration: Dell Computer Corporation A04 08/24/2005
BMC Configuration: IPMI 1.5 (KCS: Keyboard Controller Style)

==== Processor Sockets ====================================

Version Location Tag
-------------------------------- --------------------------
Intel Xeon PROC_1
Intel Xeon PROC_2

==== Memory Device Sockets ================================

Type Status Set Device Locator Bank Locator
------- ------ --- ------------------- --------------------
DDR2 in use 1 DIMM1_A
DDR2 in use 1 DIMM1_B
DDR2 empty 2 DIMM2_A
DDR2 empty 2 DIMM2_B
DDR2 empty 3 DIMM3_A
DDR2 empty 3 DIMM3_B

==== On-Board Devices =====================================
LSI Logic 53C1020 Ultra 320 SCSI
ATI RADEON 7000 PCI Video
Intel 82546GB Gigabit Ethernet
Intel 82546GB Gigabit Ethernet

==== Upgradeable Slots ====================================

ID Status Type Description
--- --------- ---------------- ----------------------------
1 available PCI-X DC_CONN

Have a look at prtconf -pv for the full details.
It was especially cool to have ZFS on a production-ready box. The blade I installed Solaris on had a pair of mirrored 146Gb drives, so I created a 40Gb system partition and a 100Gb partition for ZFS.
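
The ZFS side of it is a one-liner; something along these lines, where the slice name is just an example (check yours with format):

# create a pool on the slice set aside for ZFS (slice name is an example)
zpool create data c1t0d0s7
zpool status data
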
Now I just need to investigate how well WebLogic 8.1 plays in a zone.
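
If it does, the zone part itself is trivial. A minimal sketch of spinning up a test zone (zone name and zonepath are made up):

# minimal test zone (zone name and zonepath are made up)
zonecfg -z wls81 "create; set zonepath=/data/zones/wls81; set autoboot=true"
zoneadm -z wls81 install
zoneadm -z wls81 boot
zlogin -C wls81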

As a really annoying side note, I can mention that it took almost 2 days to get Windows 2000 Server installed on another blade. Getting drivers and a retail copy of Windows 2000 with slipstreamed SP4 was quite difficult. We usually use MSDN media to install development servers, but the Dell Server Assistant (required to get Windows 2000 working at all) requires a retail OEM version of Windows 2000. Then again, I'm not a very Windowsy person :)

Tuesday, July 11

New massive Sun servers

Sun, or should I say Andy Bechtolsheim, announced some really awesome servers at the NC event today.
I mean, these new boxes will kick some HP butt.

  • Sun x4600 - an 8-socket (16-way) Opteron machine. 4 SAS drives, 8 PCI-e slots, up to 128Gb of memory! Sounds like a fun box.
  • Sun x4500 - ok, most companies are moving away from internal storage, but now that iSCSI is making great progress it is time to rethink the mid-size storage sector. This little 4U box has nothing less than 48!! internal SATA drives. Up to 24Tb per machine; with 10 machines in a rack, that's 240Tb per rack. Yes please.
  • Sun Blade 8000 series - nice blade enclosure. Future-proof, with 2 PCI-e slots per blade, which is nice. And the network backplane has a smart config: instead of having an interface card on the blade and connecting via Ethernet to the switch, the "switch" connects to the blade via PCI-e, so the actual Ethernet card sits in the switch itself. Small customer base though; only massive datacenters can afford this type of kit.

I can see a very nice Oracle Data Warehouse setup here. One x4600 processing machine backed by 24Tb of iSCSI storage mounted from an x4500 :)
And we could build a massive 48-way 3-node RAC deployment in just 12U (plus proper storage).

Saturday, July 8

HPC Top500 - AMD vs. Intel

The latest Top500 HPC list was released about a month back. A lot of new systems on the list, which is great news. Opterons are climbing fast and IBM is gaining a lot of market share with Blue Gene systems.

One thing struck me as quite interesting when I first glanced at the list: Xeon processors seem to get higher "per CPU" benchmark figures. I know these numbers aren't the only thing to go by when it comes to comparing HPC processing power; the type of load etc. makes a difference. But still, the common wisdom is that Opterons are faster at almost all tasks.
I did a quick calculation over the top 100 entries that use Xeons or Opterons, and the average Rmax per CPU is almost 50% better for the Xeons?!
Xeon gets 4.40 and Opteron gets 3.00.
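
A quick sanity check on the "almost 50%" bit, using those two averages:

# ratio of the two average Rmax-per-CPU figures above
echo "scale=2; 4.40 / 3.00" | bc   # ~1.46, i.e. almost 50% higher for the Xeons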

This makes me even more curious about the next list, where we'll hopefully have some Xeon 5100 systems.

Wednesday, July 5

Busy with the blades

Have been quite slow on the blogging side lately. Actually quite busy at work installing a bunch of new servers. We ordered some Dell kit last week: PowerEdge 2850s and, more interestingly, PowerEdge 1855 blades.
I must say I really love the blade concept, though not so much for the annoying "data center cost saving" hype. I strongly doubt that 10 Xeon blades use much less power than 10 1U Xeon servers.
Power-wise, the enclosure wants four 16 Amp 3-pole plugs... go figure :)

No, what I really like about blade servers is the ease of management and administration. The Dell blade enclosures we got have 2 built-in switch modules, a DRAC/MC card and the optional IP-over-KVM module. All these cool modules are cheap, so I can't imagine anyone ordering a blade enclosure without them.

Then we have the cable side of things. Each enclosure holds 10 servers; from the enclosure we have:

  • 4 power cables
  • 2 ethernet uplinks (one from each switch module)
  • 1 ethernet management interface

That's 7 in total. Compare that to 10 1U servers:

  • 10x 2 power cables
  • 10x 2 ethernet cables to switches
  • 10x 3 kvm (mouse, keyboard and screen to a KVM switch)
  • 10x 1 ethernet management interface

That's 80!! in total, or about 180 meters of cable. Dang! If we had had fibre-channel connectivity we would have saved another 18 cables.
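
For anyone who wants to check my counting, a quick sketch:

# cables for one blade enclosure vs. 10 standalone 1U servers
echo $(( 4 + 2 + 1 ))                     # enclosure: power + uplinks + mgmt = 7
echo $(( 10*2 + 10*2 + 10*3 + 10*1 ))     # 1U boxes: power + eth + kvm + mgmt = 80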

PE1855

Ok, now to what really counts. Price!
For us it started making sense at about 4-5 servers, which is just silly. The question was more "why not" buy blades. Each blade came in at about £1350 (dual 3.2GHz, 4Gb, 2x146Gb) and the enclosure itself was about a silly £1000 (including the modules). That's £15000 for 10 machines; a similarly specced PE 1850 costs about £1600. Not that much of a saving, you say, a lousy £1000 for 10 machines.
But look at the extra costs around having 10 servers: take the 10 slots on a KVM switch, take the 18 extra ports in Ethernet switches, take the freakin' cables. I'm assuming a cost of £1200 for a 16-port KVM switch, so allocating 10 of those ports costs £750, 18 switch ports across two switches at £500 each is £416, and 28 Cat5 cables at perhaps £60 for a pack.
I'm not counting the extra allocation of PDUs due to the fact that the blade enclosure uses the previously mentioned 16A 3-pole sockets, of which you'll probably have to have a few extra installed.
Now we have a cost saving of about £2260 per enclosure.
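
Roughly, that figure breaks down like this (the Cat5 part is the remainder implied by the total rather than an exact price):

# rough breakdown of the per-enclosure saving
echo $(( 1000 + 750 + 416 ))   # hardware saving + KVM ports + switch ports = 2166
                               # plus roughly 90-100 pounds of Cat5 -> about 2260
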
The biggest saving, however, will be in time; I'll blog about the ease of use another day. Funnily enough, the blades will be installed with a huge mix of operating systems: Windows, both 2000 Server and 2003 Server, alongside CentOS 3, CentOS 4 and Solaris 10. Most of them running WebLogic, WebSphere and JBoss.

And a small PS. Nooo, I could not have bought Opterons or the new Conroe procs. These machines are going to be used mainly for application support, mimicking customer environments. And as most people probably know, investment banks don't mix with the latest technology.

Sunday, July 2

Turning your desktop into a desktop

Found a very cool video on YouTube.
Some Canadian grad student has written a prototype desktop system called BumpTop™, which is designed to work like an actual desktop, with piles of documents and applications. You can pile up documents and sort them just as you would on a real-life desktop.

Check out the video.

Project website.