Archive for January, 2008

Hyper-V (Windows Server 2008 x64) on 32 cores

Friday, January 25th, 2008

In the previous post, we tried Hyper-V with only 16 cores, as per the release notes. Now we added another 8 CPUs (16 AMD Opteron cores) to the same system. This was to test x64 Windows Server 2008 itself on 32 cores, rather than Hyper-V. We already did this for the x86 version here.

The system did boot up just fine. Here is a screen shot.

Windows Server 2008 x64 on 32 AMD Opteron cores

With that taken care of, we quickly browsed through the event logs to see whether the Hyper-V service / hypervisor had failed to start, as per the release notes. There was no such message. The only way to test whether the hypervisor has started is to fire up the Server Manager and try to boot up the virtual machines.

We were pleased to see that the hypervisor indeed started and there was no problem booting up the virtual machines. And here is a screenshot.

Hyper-V with 32 AMD Opteron cores

This opens up a wide range of use cases. With appropriate capacity planning, the entire data center of a small company can be replaced with one 5U server, or two for a highly available setup.



Hyper-V on 16 core AMD Opteron system (A5808-32)

Thursday, January 24th, 2008

After a successful install and test of x86 Windows Server 2008, it was time to put the x64 version through the same test.

We will talk about the x64 installation and experiences in another post. In this post, we will focus on the Hyper-V installation, configuration and experiences on our 16-core AMD Opteron server, A5808-32.

Hyper-V is the new Microsoft hypervisor technology included with certain SKUs of the Windows Server 2008 x64 version. The Hyper-V included in Windows Server 2008 RC1 is still in beta.


Hyper-V is a new server role. This role has to be added after Windows Server 2008 installs and boots up. Adding the role installs the hypervisor and reboots the server.
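We added the role through the Server Manager GUI. For what it's worth, Microsoft also documents a command-line route for adding the role; a sketch (we used the GUI, and the package name may differ between RC1 and later builds):

```
REM Add the Hyper-V role from an elevated command prompt
REM (documented for Server Core; the package name is case-sensitive)
start /w ocsetup Microsoft-Hyper-V
```

Either way, the server reboots once the hypervisor is installed.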

Hyper-V installation

Now a new role shows up under “Roles” in the “Server Manager”. The new role is “Hyper-V” and has the new category “Microsoft Hyper-V Servers”. Your server will show up under this category. In the future, we may see other categories besides “Microsoft Hyper-V Servers”. This could be a placeholder for the future management framework talked about here.

Creating a new VM is straightforward. Select New -> Virtual Machine from the Server Manager and follow the wizard. Here is a screenshot of a Fedora Core 6 x86_64 installation. Look, Ma … Linux on Windows!

New VM - Hyper-V

New VM boot up - Hyper-V

FedoraCore 6 on Hyper-V

Next up, Windows XP. Here are screenshots of the Windows XP install and of the FC6 & Windows XP VMs active on the Hyper-V server.

Windows XP installation on Hyper-V

FC6 and Windows XP on Hyper-V

Overall experience:

It works well. We did not face any major issues with the installation of Hyper-V or of VMs. The management capabilities are polished. High Availability (HA) is implemented as part of the Windows HA services.

There was no problem with mouse or keyboard inputs. No sticky mouse issues.

Certain Linux distros need noacpi flag to boot as VMs under Hyper-V.
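For reference, that flag goes on the guest kernel's boot line. A sketch for a GRUB-based guest (the file path, kernel version, and root device are placeholders, and the exact flag needed may vary by distro and Hyper-V build):

```
# /boot/grub/menu.lst inside the Linux guest (path varies by distro)
title Linux (Hyper-V guest)
    root (hd0,0)
    kernel /vmlinuz-2.6.18 ro root=/dev/sda1 noacpi
    initrd /initrd-2.6.18.img
```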

The Microsoft integration services CD-ROM was not recognized by FC6. It logs “this disc does not contain any tracks I recognize.” In any case, the integration services are supported only on Windows Server 2003 SP2 and Windows Server 2008.

Installation of both Windows XP and FC6 took much longer than on VMware or on a physical system.

For a beta release, Hyper-V is surprisingly usable and comes fully integrated with Windows Server 2008.

As per the release notes, Hyper-V does not support more than 16 cores, so we configured our 32-core system with only 16 cores.

Future Work:

Test HA services

Capacity Planning

Quick Migration


We used a Remote Desktop connection to connect to the server and manage it. Once the mouse is captured inside a VM, there is no way to release it without going to the console of the Windows server and releasing it from there. We also could not use the mouse inside FC6 from Remote Desktop, and we could not find the key sequence to send Ctrl+Alt+Left Arrow (the mouse-release combination) to the remote machine through Remote Desktop. Hopefully, once a standalone application like VI3 is released for the Windows hypervisor, it will be much easier to manage.


If you are looking for detailed installation instructions, this post is useful.

Windows Server 2008 Release Candidate 1 on 32 cores

Wednesday, January 23rd, 2008

Windows Server 2008 recently hit the RC1 milestone. Windows Server is a popular choice on our 8-socket Opteron server, A5808-32, for a number of customers. RC1 is a good time for small vendors to test the compatibility of the OS with their servers and storage.

Here is a screenshot of Windows Server 2008 RC1 with 32 cores – 8 sockets of quad-core AMD Opterons. This is the x86 version with a full install.

Windows Server 2008 RC 1 32 cores

Notice in the screenshot the report from the CPU-Z utility: 8 processors, 4 cores each, and the processor model. The Windows version is displayed in the winver dialog, and on the far left is the list of processors from the Windows Device Manager.

Installation experiences:

  • The first install hung at 60% and required a cold restart
  • No video driver; standard VGA. Well, you wouldn’t want the Aero interface on your server anyway. We did not try the Windows 2003 version of the driver, as the display was very usable.
  • All NICs successfully detected
  • On-board SATA successfully detected
  • 32 cores (8 sockets) successfully detected
  • The system was configured with only 16GB instead of the maximum 256GB; we do not foresee any issues at 256GB
  • iSCSI initiator successfully mounted a remote volume
  • Windows “System” control panel applet does not display processor or memory details.
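We mounted the iSCSI volume through the Initiator control panel applet. As an aside, the same steps can be driven from the command line with the built-in iscsicli tool; a rough sketch (the portal address and target IQN below are placeholders, not our actual values):

```
REM point the initiator at the target portal
iscsicli QAddTargetPortal 192.168.1.50
REM list the targets the portal exposes
iscsicli ListTargets
REM log in to a discovered target
iscsicli QLoginTarget iqn.2008-01.com.example:target0
```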

Now to try the x64 and Hyper-V versions. Stay tuned.



VMotion on HiPerStor & A5808 – Better Together II

Tuesday, January 15th, 2008

Trying VMotion was a natural next step after our success in initial tests with VMware ESX on A5808-32. iSCSI services on HiPerStor provide very affordable shared storage for VMware ESX. Well, let’s jump right in.

VMware ESX Server on 32 cores

Tuesday, January 8th, 2008

Well, what a pleasant way to start the day!

After many successful tests yesterday (install & test, iSCSI & NFS), I populated the system with 8 quad-core AMD Opteron processors and left for the day, hoping to come back and fire up ESX on 32 cores.

And here it is …. ESX working like a charm on 32 cores.

VMWare ESX on 32 cores

I hope to get those VMmark numbers out soon!



Better Together – HiPerStor & A5808-32 with VMware ESX Server

Monday, January 7th, 2008

After we successfully installed and tested VMware ESX Server 3.5 with Linux and Windows virtual machines, it was time to test HiPerStor with VMware ESX Server.

If you are not familiar with HiPerStor, it is a storage product from HPC Systems, Inc. featuring support for NFS, SMB/CIFS, iSCSI and Apple shares. Read more about it here.

To start with, we would like to use the iSCSI target features of HiPerStor to add a new VMFS datastore to our ESX server. ESX comes with a built-in iSCSI initiator, so there is no need for additional iSCSI hardware.

Since HiPerStor has no default iSCSI volumes defined, the first step is to open the web management GUI and add a new iSCSI volume. With that taken care of, the ESX server has to be configured as follows to access iSCSI volumes.

  • Create a VMkernel port group
  • Create a corresponding Service Console port group on the same subnet as the VMkernel port group
  • Enable the ESX software iSCSI initiator
  • Discover new LUNs after configuring appropriate security settings
  • Use the new storage device as a VMFS datastore

None of the above steps posed any serious challenges. Discovering the iSCSI volumes was a breeze. However, we needed to manually initiate a “rescan” on the software iSCSI initiator before the LUNs showed up in a reasonable time. ESX was also happy to extend an already “hybrid” VMFS datastore we created earlier (a SCSI partition on the LSI controller plus a raw SATA disk on the on-board controller). Well, that’s nice.
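For the record, the service-console side of this looks roughly like the following on ESX 3.x. This is a sketch, not a transcript of our session; the portal address is a placeholder and the vmhba name assigned to the software initiator will differ per system:

```
# enable the ESX software iSCSI initiator
esxcfg-swiscsi -e
# add the HiPerStor portal as a discovery (send targets) address
vmkiscsi-tool -D -a 192.168.1.50 vmhba40
# rescan so the new LUNs show up without waiting
esxcfg-swiscsi -s
```

The last command is the manual "rescan" mentioned above.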

But for now, I chose to create a separate datastore from the iSCSI volume so it will show up in the following screenshot.

Better Together - HiPerStor & A5808-32 w/ESX server 

You can also notice in the screenshot an NFS datastore created from an NFS share on HiPerStor. There cannot be a better testament to the versatility of HiPerStor than this screenshot. The same HiPerStor system is concurrently serving a bunch of CIFS shares as well. In the background you can catch a glimpse of the HiPerStor management GUI.

Now, to install a VM on the iSCSI datastore and boot it up. This time, let’s try it with SLES 10. And here is a screenshot of SLES 10 up & running successfully from the iSCSI datastore provided by our own HiPerStor.

SLES 10 running off an iSCSI datastore

In the background, you can see the datastore information for this VM (SLES 10) and other key parameters.

Next up: timing the boot of a VM from a local disk versus one from an iSCSI disk, if I can figure out a way to measure the time accurately.
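One rough way to measure it, assuming the guest comes up with a known IP address: start the VM from the service console and poll until the guest answers on the network. A sketch (the .vmx path and IP are placeholders; resolution is about a second at best):

```
#!/bin/sh
# Sketch: time from power-on until the guest answers ping
START=$(date +%s)
vmware-cmd /vmfs/volumes/iscsi-ds/sles10/sles10.vmx start
while ! ping -c 1 192.168.1.60 > /dev/null 2>&1; do
    sleep 1
done
END=$(date +%s)
echo "Guest reachable after $((END - START)) seconds"
```

This captures time-to-network rather than full boot, but it should be repeatable enough to compare the local and iSCSI datastores.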

EDIT: I had to log in to the ESX server and execute the commands to enable iSCSI before I could see the LUNs from the storage server.



VMware ESX Server 3.5 on A5808-32, 8-socket AMD Opteron

Monday, January 7th, 2008

A slight deviation from the benchmarking exercise – VMware

We started testing VMware ESX 3.5 on our A5808-32 8-socket AMD Opteron server. To start with, we populated the system with only 4 sockets instead of all 8, and 8 GB of RAM. We are using the LSI Logic PCI-Express SCSI adapter instead of the on-board SATA controller. The installation succeeded without any issues.

We decided to use the local SCSI disk to hold all the VMs. When the system booted, we had a surprise in store: ESX also detected the on-board SATA controllers. We promptly shut down the system and added a few SATA drives to the on-board controller, and as expected, ESX was happy to use the SATA drives as VMFS datastores. This was a pleasant surprise. However, we have yet to test whether ESX itself will install onto a SATA drive.

Please note that using SATA drives for your VMFS datastore is really a bad idea: SATA drives cannot provide the reliability of SAS or SCSI drives, and performance will also be an issue when using them for your VMFS datastores.

We did face a small problem after the initial boot-up. ESX partitioned the SCSI drive into the standard layout it needs.
But after boot-up, the VI client refused to see the local VMFS partition as a valid datastore. A little searching on the internet and this thread helped us fix that issue.

To test the SATA drive, we extended the VMFS3 partition (datastore) on the SCSI drive onto the SATA drive, and it worked like a charm. Now we have a VMFS datastore spanned across a partition on the SCSI drive and a full SATA drive connected to a totally different controller.

Well, we know it boots up fine. Time to install a VM. We quickly created a VM to install CentOS 5.0 64-bit and fired it up. Here is a screenshot of the install.

VMWare 3.5 CentOS 5 64 bit Install

And here is a screen shot of the virtual machine booted up. You can also see the firefox window open in the background.

CentOS 5.0 64-bit Running

The obvious next step was to try Windows. Here is a screen shot of Windows 2003 Standard Edition 64-bit installation.

Windows 2003 Standard Edition 64-bit Installation

Windows Server 2003 R2 up & running.

Windows Server 2003 R2 running successfully

In a nutshell, VMware ESX 3.5 installed and is working successfully on A5808-32.

What is next?

  • Use HiPerStor as a datastore

  • VMware VMmark test results

  • More workloads with VMware on A5808-32

Cheers and Happy New Year!