Tuesday, May 4, 2010

Evil ZVOL/ISCSI Tuning?

After a few days of VDI in service it was about time to start tuning the whole beast. The W7 guests showed rather poor performance when it came to disk IO. The virtual disks live on iSCSI targets provided by an X4540, so one would hope for decent performance. But the W7 guests hardly ever got more than 5MB/s from their virtual disks. So something needed some serious checking.
As a comparison I got close to 100MB/s when using NFS from the storage to the boxes running VirtualBox (VBox is running on X4450s with 64GB RAM, all NICs aggregated). So there was hope. Next I tried Solaris' iSCSI initiator and got transfer rates around 60MB/s. Both are way better than VBox's own iSCSI initiator in combination with whatever W7 does.
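The post doesn't say how those numbers were taken; a quick sequential check with dd is one way to get comparable ballpark figures. This is a sketch with assumptions: GNU dd syntax (Solaris' own dd wants bs=1024k and prints no MB/s summary, so time it instead), and TESTDIR is a made-up placeholder for whatever mount or guest disk you want to measure.

```shell
# Rough sequential-throughput check. TESTDIR is a placeholder --
# point it at the NFS mount or a disk inside the guest.
TESTDIR=${TESTDIR:-/tmp}
# Write 64 MB; GNU dd reports the transfer rate on stderr.
dd if=/dev/zero of="$TESTDIR/ddtest" bs=1M count=64
sync
# Read it back. Without flushing caches first this mostly measures
# RAM, so take the read number with a grain of salt.
dd if="$TESTDIR/ddtest" of=/dev/null bs=1M
rm -f "$TESTDIR/ddtest"
```

Run it once against the NFS share and once inside the guest and you see roughly where the throughput gets lost.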
Digging around a little I found this blog post. So it seems like Windows tries to flush the cache pretty often. The first thing I tried was to enable fast ACK for incoming writes on the iSCSI target(s) with
iscsitadm modify admin -f enable
Luckily the server is backed by UPS power, so I chose to ignore the warning that this might be dangerous. (Anyway, there is no data on the client images, so in the worst case I can just clone the guest again if something goes wrong ;-))
Next I set the MaxRecvDataSegmentLength to 128MB, which should give the clients a bit more memory to dump their stuff into faster:
iscsitadm modify target -m 128M <target-name>
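For reference, the whole target-side sequence looks roughly like this. Syntax is from memory and the target name is made up, so check iscsitadm(1M) before trusting any of it:

```shell
# Enable fast write ACK globally -- dangerous without UPS backing!
iscsitadm modify admin -f enable
# See which targets exist; 'vdi-tgt0' below is a made-up example.
iscsitadm list target
# Raise MaxRecvDataSegmentLength for one target.
iscsitadm modify target -m 128M vdi-tgt0
# Verify the settings took.
iscsitadm list target -v vdi-tgt0
```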
That made things a little better. I got nearly twice the throughput. Still way too slow.
So as a last resort I remembered this post by Robert Milkowski about enabling the write cache for zvols. As the X4540 is running Solaris 10, the COMSTAR option to ignore cache flushes doesn't exist yet (or more precisely: there is no COMSTAR in Sol10). So Milek's program to enable the write cache on zvols helps a lot.
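If I remember right, the tool simply flips the write-cache-enable flag on the zvol's raw device node (the DKIOCSETWCE ioctl underneath). The invocation below is a sketch from memory: the binary name, argument order, and pool/volume path are all made up, so adapt to whatever Milek's post actually ships:

```shell
# Hypothetical invocation -- pass the zvol's raw device node;
# 1 enables the write cache, 0 disables it again.
./zvol_wce /dev/zvol/rdsk/tank/vdi-vol01 1
```

Note this has to be redone after a reboot, as the setting doesn't persist.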
I now get about 30MB/s for the W7 guest, which I consider decent enough for a virtual Windooze box ;-)
For all of the above, always keep in mind: if you lose power, you may lose data.

3 comments:

Anonymous said...

You might want to run a test on the IO operations for VirtualBox itself without iSCSI involved, but rather with images on NFS, and see what the actual VirtualBox overhead is and what the Windows overhead is.

phaedrus77 said...

I tried VBox with local images before I imported them into VDI. Local images gave me about 45-50MB/s inside the guest (native throughput on the local disks is about 100-120MB/s). So I wouldn't expect much difference when the image comes in over NFS (considering NFS gives about twice the throughput I see inside the guest). But since I haven't found a way to configure images on NFS shares in VDI (it insists on using iSCSI), that option isn't available to test and compare.

Ursula said...

You're not writing anything on the blog any more?