Archive for the ‘Storage’ Category

Pulling it all together, integrated updates

Thursday, October 29th, 2009

I know I am behind a little bit on updating this blog.

Today I made significant progress in this regard. I finally integrated all my social networks in one place and linked them all to this blog.

So when I publish a new post, it is automatically pushed to my Facebook, LinkedIn, Twitter, Google Talk, Yahoo and Live accounts. I think that’s enough distribution, isn’t it? :)

Stay tuned for an update on the new storage we announced recently.

Formatting large volumes with ext3

Friday, November 7th, 2008

In Red Hat Enterprise Linux 5.1, the maximum ext3 file system size was increased from 8 TB to 16 TB. However, getting mkfs to format a volume larger than 2 TB is not straightforward.

We do ship large volumes to customers regularly. We recommend that customers use XFS for large volumes for performance and size reasons. However, sometimes customers want ext3 because of their familiarity with the file system.

Before you can format such a volume, you must be able to create a partition larger than 2 TB. fdisk cannot do this, because MBR partition tables top out at 2 TB.

You will need to use GNU Parted (parted) to create partitions larger than 2 TB, since it supports GPT disk labels. Details on how to use parted can be found here and here.

Here is a simple example of using parted; we assume we are working on /dev/sdb, a 10 TB volume from a RAID controller.

$> parted /dev/sdb

GNU Parted 1.8.9
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.

(parted) mklabel gpt
(parted) mkpart primary ext3 0 10TB
(parted) print
(parted) quit

A straightforward mkfs command on any volume larger than 8 TB will yield the following error:

mkfs.ext3: Filesystem too large.  No more than 2**31-1 blocks
(8TB using a blocksize of 4k) are currently supported.
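The arithmetic in that message checks out: ext3 block numbers were limited to 2^31 − 1 at the time, and at 4 KiB per block that works out to just shy of 8 TiB. A quick shell sanity check:

```shell
# (2^31 - 1) blocks of 4096 bytes each, expressed in GiB:
echo $(( (2**31 - 1) * 4096 / 1024**3 ))   # prints 8191, i.e. just under 8 TiB (8192 GiB)
```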

A simple workaround is to force mkfs to format the device in spite of the size:

mkfs.ext3 -F -b 4096 /dev/<block device>

mkfs.ext3 -F -b 4096 /dev/<path to logical volume> if you are using LVM

To use the above command you need e2fsprogs 1.39 or newer. The -b 4096 option sets the block size to 4 KB.

You could also pass -m 0 to set the reserved block percentage (space held back for root) to zero.
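To put the reserved-block setting in perspective: mke2fs reserves 5% of the blocks for root by default, which on a multi-terabyte volume is a lot of hidden space. Back-of-the-envelope, for a 10 TiB (10240 GiB) volume:

```shell
# 5% default reservation on a 10 TiB = 10240 GiB volume:
echo $(( 10240 * 5 / 100 ))   # prints 512 (GiB reserved for root)
```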

Note that ext3 is not recommended for large volumes. XFS is better suited for that purpose.

Further reading:

Red Hat Knowledgebase article



VMotion on HiPerStor & A5808 – Better Together II

Tuesday, January 15th, 2008

Trying VMotion was a natural next step after our success in initial tests with VMware ESX on the A5808-32. iSCSI services on HiPerStor provide very affordable shared storage for VMware ESX. Well, let’s jump right in.

Better Together – HiPerStor & A5808-32 with VMware ESX Server

Monday, January 7th, 2008

After we successfully installed and tested VMware ESX Server 3.5 with Linux and Windows machines, it was time to test HiPerStor with VMware ESX Server.

If you are not familiar with HiPerStor, it is a storage product from HPC Systems, Inc. featuring support for NFS, SMB/CIFS, iSCSI and Apple shares. Read more about it here.

To start with, we would like to use the iSCSI target features of HiPerStor and add a new VMFS datastore to our ESX server. ESX comes with a built-in software iSCSI initiator, so there is no need for additional iSCSI hardware.

Since HiPerStor has no default iSCSI volumes defined, the first step is to open the web management GUI and add a new iSCSI volume. With that taken care of, the ESX server has to be configured as follows to access iSCSI volumes.

  • Create a VMkernel port group
  • Create a corresponding Service Console port group on the same subnet as the VMkernel port group
  • Enable the ESX software iSCSI initiator
  • Discover the new LUNs after configuring appropriate security settings
  • Use the new storage device as a VMFS datastore

None of the above steps posed any serious challenges. Discovering the iSCSI volumes was a breeze. However, we needed to manually initiate a “rescan” on the software iSCSI initiator before the LUNs would show up in a reasonable time. ESX was also happy to extend an already “hybrid” VMFS datastore we created earlier (SCSI partition on an LSI controller + raw SATA disk on the on-board controller). Well, that’s nice.
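For the record, the checklist above (including that rescan) can also be driven from the ESX 3.x service console. The sketch below is a hedged reconstruction: the esxcfg-* tools are the standard ones shipped with ESX 3.x, but the port group names, IP addresses and vmhba number are made-up examples for illustration, not values from our setup.

```shell
# Create a VMkernel port group on an existing vSwitch and give it an address
# (names and addresses are illustrative placeholders).
esxcfg-vswitch -A "VMkernel-iSCSI" vSwitch0
esxcfg-vmknic -a -i 192.168.1.50 -n 255.255.255.0 "VMkernel-iSCSI"

# Matching Service Console port group on the same subnet.
esxcfg-vswitch -A "SC-iSCSI" vSwitch0
esxcfg-vswif -a vswif1 -p "SC-iSCSI" -i 192.168.1.51 -n 255.255.255.0

# Enable the software iSCSI initiator.
esxcfg-swiscsi -e

# Point the initiator at the HiPerStor target (send-targets discovery),
# then force the rescan that makes the LUNs show up promptly.
vmkiscsi-tool -D -a 192.168.1.10:3260 vmhba40
esxcfg-swiscsi -s
```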

But for now, I chose to create a separate datastore from the iSCSI volume so it will show up in the following screenshot.

Better Together - HiPerStor & A5808-32 w/ESX server 

You can also see in the screenshot that there is an NFS datastore created from an NFS share on HiPerStor. There cannot be a better testament to the versatility of HiPerStor than this screenshot. You should also know that the same HiPerStor system is concurrently serving up a bunch of CIFS shares as well. In the background you can catch a glimpse of the HiPerStor management GUI.

Now, to install a VM on the iSCSI datastore and boot it up. This time, let’s try it with SLES 10. Here is a screenshot of SLES 10 up & running successfully from the iSCSI datastore provided by our own HiPerStor.

SLES 10 running off an iSCSI datastore

In the background, you can see the datastore information for this VM (SLES 10) and other key parameters.

Next up: comparing the time to boot a VM from local disk versus from an iSCSI disk, if I can figure out a way to measure the time accurately.

EDIT: I had to log in to the ESX server and execute the commands to enable iSCSI before I could see the LUNs from the storage server.