Archive for the ‘Virtualization’ Category

Building the ultimate media workhorse

Thursday, September 10th, 2009

A prominent post-production company approached us to build a system for post-production (obviously!) workloads. The interesting part is the card list: a high-end capture card that handles up to four HD or SD video streams, a high-end graphics card, and an output card for compositing real-time video and graphics. If that’s not enough, we are also adding a Fibre Channel card for storage access and a second HD video capture card.

In a nutshell, we are building a system powerful enough to handle the demands of an HD digital video pipeline (DVP) for real-time video processing, compositing and rich media production. Want more? They want it to run a Linux OS and a Windows OS at once. So, let’s try to wrap this up in a sentence: a dual-socket system supporting two HD video capture cards, a high-end graphics card, an HD output card and an FC card, running Windows and Linux virtualized with access to these cards. Can your vendor provide this system and support it?

The most popular vendor in digital media does provide digital media solutions (well, not quite like this system), but at a very high premium. Why? I don’t know. Maybe it is the price of all the time and energy they spent establishing themselves as the leader in solutions for the media space, which in the end are not much different from other high-end computing systems. And when you talk about high-end computing, HPC Systems has delivered more complex machines than most can imagine. We have delivered numerous multi-socket (four- and eight-socket) systems to some of the premier federal organizations for a variety of workloads: forest fire simulation, IC design and simulation, virtualization, desk-side supercomputing and financial modeling. We have even delivered a fully integrated cluster combining a Cell processor rack-mount system, CUDA cards, accelerators, Opteron processors and InfiniBand. There are not many who can deliver designs that complex.

The point of all this is not to brag but to say that we don’t charge our customers a premium based on their requirements. We don’t charge you more because of your company’s size or anything else, though we do charge for non-trivial software installation or configuration. Given the quality of the systems we deliver (with full software configuration and free phone and email support), the value is unbeatable.

Well, I will keep you posted on the progress of this project. Keep coming back!

RedHat/AMD trumps VMWare/Intel on live VM migration technology

Wednesday, December 3rd, 2008

In this YouTube video, RedHat & AMD demonstrate live migration of a virtual machine from an Intel-based server to an AMD-based server.

Ever since Intel bought a stake in VMWare, VMWare products have noticeably featured enhancements for Intel processors. One of the coolest things to come out of the VMWare/Intel alliance was the capability to move virtual machines from one generation of Intel processors to another. This capability is marketed as “VMWare Enhanced VMotion” and “Intel VT FlexMigration”. FlexMigration was a much-needed feature given how incompatible one generation of Intel processors is with the next. With Intel as one of its biggest investors, VMWare may be reluctant to add enhancements that work better with AMD’s products. A related post is here.

With the new demonstration, AMD might see this as a way to co-exist in data centers that are exclusively Intel. With power, rack space and cooling costs going up, virtualization is efficiently consolidating hardware for a good number of today’s applications. FlexMigration allowed customers to invest in (almost) the entire Intel product line without worrying about incompatibilities when using VMWare VMotion. However, the same technology will keep customers from investing in AMD technology, because they will be unable to migrate workloads without downtime.

Regardless of business decisions, this capability, once it is commercialized, will put the choice back in customers’ hands.

Good stuff.

Hyper-V (Windows Server 2008 x64) on 32 cores

Friday, January 25th, 2008

In the previous post, we tried Hyper-V with only 16 cores, as per the release notes. Now we added another 8 CPUs (16 AMD Opteron cores) to the same system. The first goal was to test x64 Windows Server 2008 itself on 32 cores, rather than Hyper-V; we already did this for the x86 version here.

The system did boot up just fine. Here is a screen shot.

Windows Server 2008 x64 on 32 AMD Opteron cores

With that taken care of, we quickly browsed through the event logs to see whether the Hyper-V service/hypervisor had failed to start, as the release notes suggested it would. There was no such message. The only way to test whether the hypervisor has started is to fire up the Server Manager and try to boot up the virtual machines.

We were pleased to see that the hypervisor indeed started and there was no problem booting up the virtual machines. And here is a screenshot.

Hyper-V with 32 AMD Opteron cores

This opens up a wide range of use cases. With appropriate capacity planning, the entire data center for a small company can be replaced with one 5U server, or two for a highly available setup.
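To put “appropriate capacity planning” in concrete terms, here is a back-of-the-envelope sketch in Python. Every number in it (the VM list, vCPU and RAM figures, host memory) is a hypothetical assumption for illustration, not something we measured.

```python
# Hypothetical consolidation check: would a small company's servers fit on one
# 32-core host? All figures below are illustrative assumptions, not measurements.
host_cores, host_ram_gb = 32, 64

vms = {
    # name: (vCPUs, RAM in GB) -- guesses for a typical small-company fleet
    "domain controller": (1, 2),
    "file/print":        (2, 4),
    "mail":              (4, 8),
    "database":          (4, 8),
    "web/app":           (2, 4),
}

needed_cores = sum(cpus for cpus, _ in vms.values())
needed_ram   = sum(ram for _, ram in vms.values())

print(f"vCPUs: {needed_cores} of {host_cores} | RAM: {needed_ram} GB of {host_ram_gb} GB")
```

If the totals leave comfortable headroom for the hypervisor and for peak loads, consolidation onto a single box looks feasible; if not, you size up or split the workloads across two hosts anyway for availability.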

Cheers,

Kalyan

Hyper-V on 16 core AMD Opteron system (A5808-32)

Thursday, January 24th, 2008

After a successful install and test of x86 Windows Server 2008, it was time to put the x64 version through the same test.

We will talk about the x64 installation and experiences in another post. In this post, we will focus on the Hyper-V installation, configuration and experiences on our 16-core AMD Opteron server, the A5808-32.

Hyper-V is the new Microsoft hypervisor technology included with certain SKUs of the x64 version of Windows Server 2008. The Hyper-V bits included in Windows Server 2008 RC1 are still in beta.

Installation:

Hyper-V is a new server role. The role has to be added after Windows Server 2008 installs and boots up; adding it installs the hypervisor and reboots the server.

Hyper-V installation
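For anyone who prefers the command line to the Server Manager GUI, something along the following lines should also add the role. This is only a sketch: the servermanagercmd role identifier (“Hyper-V” below) is my assumption, so check it with servermanagercmd -query first, and the Python wrapper is purely for illustration.

```python
import subprocess

# Sketch: add the Hyper-V role from the command line instead of the Server
# Manager GUI. The role identifier "Hyper-V" is an assumption -- list the valid
# identifiers with "servermanagercmd -query" before running this.
subprocess.check_call(["servermanagercmd", "-install", "Hyper-V"])

# The role installation still requires a reboot before the hypervisor loads.
subprocess.check_call(["shutdown", "/r", "/t", "0"])
```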

Now a new role shows up under “Roles” in the “Server Manager”. The new role is “Hyper-V” and it comes with a new category, “Microsoft Hyper-V Servers”, under which your server will appear. In the future, we may see categories other than just “Microsoft Hyper-V Servers”; this could be a placeholder for the future management framework talked about here.

Creating a new VM is pretty straightforward. Select New -> Virtual Machine from the Server Manager and follow the wizard. Here is a screenshot of a Fedora Core 6 x86_64 installation. Look, Ma … Linux on Windows!

New VM - Hyper-V

New VM boot up - Hyper-V

FedoraCore 6 on Hyper-V

Next up, Windows XP. Here is a screenshot of the Windows XP installation, followed by one of the FC6 and Windows XP VMs active on the Hyper-V server.

Windows XP installation on Hyper-V

FC6 and Windows XP on Hyper-V

Overall experience:

It works well. We did not face any major issues with the installation of Hyper-V or of the VMs. The management capabilities are polished. High Availability (HA) is implemented as a part of the Windows HA services.

There was no problem with mouse or keyboard inputs. No sticky mouse issues.

Certain Linux distros need the noacpi flag to boot as VMs under Hyper-V.

The Microsoft integration services CD-ROM was not recognized by FC6; it logs “this disc does not contain any tracks I recognize.” In any case, the integration services are supported only on Windows Server 2003 SP2 and Windows Server 2008.

Installation of both Windows XP and FC6 took much longer than on VMWare or on a physical system.

For a beta release, Hyper-V is surprisingly usable and comes fully integrated with Windows Server 2008.

As per the release notes, Hyper-V does not support more than 16 cores, so we configured our 32-core system with only 16 cores.

Future Work:

  • Test HA services
  • Capacity Planning
  • Quick Migration

Comments:

We used a Remote Desktop connection to connect to the server and manage it. Once the mouse is captured inside a VM, there is no way to release it without going to the console of the Windows server and releasing it from there. Also, we could not use the mouse inside FC6 over Remote Desktop, and we could not find the key sequence to send Ctrl+Alt+Left Arrow to the remote machine through Remote Desktop. Hopefully, once a standalone management application (like VI3) is released for the Windows hypervisor, it will be much easier to manage.

Edit:

If you are looking for detailed installation instructions, this post is useful.

VMotion on HiPerStor & A5808 – Better Together II

Tuesday, January 15th, 2008

Trying VMotion was a natural next step after our success in the initial tests of VMWare ESX on the A5808-32. The iSCSI services on HiPerStor provide very affordable shared storage for VMWare ESX. Well, let’s jump right in.

VMWare ESX server on 32 cores

Tuesday, January 8th, 2008

Well, what a pleasant way to start the day!

After many successful tests yesterday (install & test, iSCSI & NFS), I populated the system with eight quad-core AMD Opteron processors and left for the day, hoping to come back and fire up ESX on 32 cores.

And here it is …. ESX working like a charm on 32 cores.

VMWare ESX on 32 cores

I hope to get those VMMark numbers out soon!

 Cheers,

Kalyan

Better Together – HiPerStor & A5808-32 with VMWare ESX server

Monday, January 7th, 2008

After we successfully installed and tested VMWare ESX server 3.5 with Linux and Windows machines, it was time to test HiPerStor with VMWare ESX server.

If you are not familiar with HiPerStor, it is a storage product from HPC Systems, Inc. featuring support for NFS, SMB/CIFS, iSCSI and Apple shares. Read more about it here.

To start with, we would like to use the iSCSI target features of HiPerStor and add a new VMFS datastore to our ESX server. ESX comes with a built-in software iSCSI initiator, so there is no need for additional iSCSI hardware.

Since HiPerStor has no default iSCSI volumes defined, the first step is to open up the web management GUI and add a new iSCSI volume. With that taken care of, the ESX server has to be configured as follows to be able to access iSCSI volumes.

  • Create a VMkernel port group
  • Create a corresponding Service Console port group on the same subnet as the VMkernel port group
  • Enable the ESX software iSCSI initiator
  • Discover the new LUNs after configuring the appropriate security settings
  • Use the new storage device as a VMFS datastore

None of the above steps posed any serious challenges. Discovering the iSCSI volumes was a breeze, although we needed to manually initiate a “rescan” on the software iSCSI initiator before the LUNs showed up in a reasonable time. ESX was also happy to extend an already “hybrid” VMFS datastore we created earlier (a SCSI partition on the LSI controller plus a raw SATA disk on the on-board controller). Well, that’s nice.

But for now, I chose to create a separate datastore from the iSCSI volume so that it shows up in the following screenshot.

Better Together - HiPerStor & A5808-32 w/ESX server 

You can also see in the screenshot an NFS datastore created from an NFS share on HiPerStor. There cannot be a better testament to the versatility of HiPerStor than this screenshot; the same HiPerStor system is concurrently serving up a bunch of CIFS shares as well. In the background you can catch a glimpse of the HiPerStor management GUI.

Now, to install a VM on the iSCSI datastore and boot it up. This time, let’s try it with SLES 10. Here is a screenshot of SLES 10 up and running successfully from the iSCSI datastore provided by our own HiPerStor.

SLES 10 running off an iSCSI datastore

In the background you can see the datastore information for this VM (SLES 10) and other key parameters.

Next up: timing the boot of a VM from a local disk against one from an iSCSI disk, if I can figure out a way to measure the time accurately.
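One way I might measure it is a small Python sketch like the one below: hit Enter the moment the VM is powered on in the VI Client, then poll a TCP port on the guest until it answers. The guest IP address and the port (SSH on 22) are assumptions for illustration, and the result only approximates “boot time” as seen from the network.

```python
import socket
import time

# Assumed guest address and service; adjust for the VM being timed.
GUEST_IP = "192.168.1.50"
PORT = 22  # SSH on the SLES 10 guest

input("Press Enter the moment you power on the VM... ")
start = time.time()

# Poll until the guest's service answers, then report the elapsed time.
while True:
    try:
        with socket.create_connection((GUEST_IP, PORT), timeout=1):
            break
    except OSError:
        time.sleep(1)

print(f"Guest answered on port {PORT} after {time.time() - start:.1f} seconds")
```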

EDIT: I had to log in to the ESX server and execute the commands to enable iSCSI before I could see the LUNs from the storage server.
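For anyone repeating this, the commands I mean are the ESX 3.x service console tools for the software iSCSI initiator. Below is a minimal sketch of the enable-and-rescan step (wrapped in Python purely for illustration); the vmhba32 adapter name is an assumption and varies by system, and the VMkernel/Service Console networking from the checklist above is assumed to already be in place.

```python
import subprocess

# ESX 3.x service console commands to bring up the software iSCSI initiator.
# The networking prerequisites (VMkernel and Service Console port groups) are
# assumed to be configured already, e.g. through the VI Client.
commands = [
    ["esxcfg-swiscsi", "-e"],      # enable the software iSCSI initiator
    ["esxcfg-swiscsi", "-s"],      # scan for iSCSI targets and LUNs
    ["esxcfg-rescan", "vmhba32"],  # rescan the software iSCSI HBA
                                   # ("vmhba32" is an assumption; confirm the
                                   #  adapter name in the VI Client first)
]

for cmd in commands:
    print("running:", " ".join(cmd))
    subprocess.check_call(cmd)
```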

Cheers,

Kalyan

VMWare ESX Server 3.5 on A5808-32, 8 socket AMD Opteron

Monday, January 7th, 2008

A slight deviation from the benchmarking exercise – VMWare

We started testing VMWare ESX 3.5 on our A5808-32, an 8-socket AMD Opteron server. To start with, we populated the system with only four sockets instead of all eight, and 8 GB of RAM. We are using an LSI Logic PCI-Express SCSI adapter instead of the on-board SATA controller. The installation succeeded without any issues.

We decided to use the local SCSI disk to hold all the VMs. When the system booted, we had a surprise in store: ESX also detected the on-board SATA controllers. We promptly shut down the system, added a few SATA drives to the on-board controller and, as expected, ESX was happy to use the SATA drives as VMFS data stores. This was a pleasant surprise. However, we have yet to test whether ESX itself will install onto a SATA drive.

Please note that using SATA drives for your VMFS data store is really a bad idea, simply because SATA drives cannot provide the reliability of SAS or SCSI drives. Performance will also be an issue when using SATA drives for your VMFS data stores.

We did face a small problem after the initial boot-up. ESX partitioned the SCSI drive into the standard layout it needs:

  • /boot
  • /var/log
  • vmfs
  • vmkcore

But after boot-up, the VI client refused to see the local vmfs partition as a valid data store. A little searching on the internet turned up this thread, which helped us fix the issue.

To test the SATA drive, we extended the vmfs3 partition (data store) on the SCSI drive with the SATA drive, and it worked like a charm. Now we have a VMFS data store spanning a partition on the SCSI drive and a full SATA drive connected to a completely different controller.

Well, we know it boots up fine. Time to install a VM. We quickly created a VM to install CentOS 5.0 64-bit and fired up the VM. Here is a screenshot of the install.

VMWare 3.5 CentOS 5 64 bit Install

And here is a screenshot of the virtual machine after booting. You can also see the Firefox window open in the background.

CentOS 5.0 64-bit Running

The obvious next step was to try Windows. Here is a screenshot of a Windows 2003 Standard Edition 64-bit installation.

Windows 2003 Standard Edition 64-bit Installation

Windows Server 2003 R2 up & running.

Windows Server 2003 R2 running successfully

In a nutshell, VMWare ESX 3.5 is installed and working successfully on the A5808-32.

What is next?

  • Use HiPerStor as datastore

  • VMWare VMMark test results

  • More workloads with VMWare on A5808-32

Cheers and Happy New Year!

Kalyan