Saturday, May 17, 2008

Hypervisor and I/O

I was reading a blog post by Avi, a kernel developer, and found it very interesting. In it he explains why I/O matters so much to a hypervisor and how vendors like VMware and Xen handle it. VMware's hypervisor is proprietary, which means any development or modification can be made ONLY by VMware, whereas Xen's hypervisor is open, so any kernel developer like Avi can modify it, for example to add drivers. It's also true that a hypervisor takes an I/O hit, because all driver and device communication has to pass through it. You can read his complete post here.


--------------------------------------------------
I/O performance is of great importance to a hypervisor. I/O is also a huge maintenance burden, due to the large number of hardware devices that need to be supported, numerous I/O protocols, high availability options, and management for it all.
VMware opted for the performance option, by putting the I/O stack in the hypervisor. Unfortunately the VMware kernel is proprietary, so VMware has to write and maintain the entire I/O stack. That means a slow development rate, and that your hardware may take a while to be supported.
Xen took the maintainability route, by doing all I/O within a Linux guest, called "domain 0". By reusing Linux for I/O, the Xen maintainers don't have to write an entire I/O stack. Unfortunately, this eats away performance: every interrupt has to go through the Xen scheduler so that Xen can switch to domain 0, and everything has to go through an additional layer of mapping.
Not that Xen solved the maintainability problem completely: the Xen domain 0 kernel is still stuck on the ancient Linux 2.6.18 release (whereas 2.6.25 is now available). These problems have led Fedora 9 to drop support for hosting Xen guests, leaving kvm as the sole hypervisor.
So how does kvm fare here? Like VMware, I/O is done within the hypervisor context, so full performance is retained. Like Xen, it reuses the entire Linux I/O stack, so kvm users enjoy the latest drivers and I/O stack improvements. Who said you can't have your cake and eat it?
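--------------------------------------------------

What makes this possible is that kvm is a kernel module driven by an ordinary Linux process: the VMM opens /dev/kvm and controls the guest through ioctls, so when the guest touches an I/O port, control simply returns to that process, which emulates the device using regular Linux drivers underneath. Below is a minimal sketch of that control loop in C, using the /dev/kvm ioctl interface; guest memory and register setup are omitted for brevity, so this illustrates the flow rather than being a bootable VMM.

#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void)
{
    /* kvm exposes the hypervisor as a plain device node. */
    int kvm = open("/dev/kvm", O_RDWR);
    if (kvm < 0 || ioctl(kvm, KVM_GET_API_VERSION, 0) != KVM_API_VERSION) {
        perror("kvm");
        return 1;
    }

    int vm   = ioctl(kvm, KVM_CREATE_VM, 0);
    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);

    /* The kernel shares the vcpu's run state with userspace via mmap. */
    int size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
    struct kvm_run *run = mmap(NULL, size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, vcpu, 0);

    /* Guest memory (KVM_SET_USER_MEMORY_REGION) and initial register
     * state would be set up here; omitted, so this skeleton will not
     * actually run a guest as-is. */

    for (;;) {
        ioctl(vcpu, KVM_RUN, 0);          /* enter guest mode */
        switch (run->exit_reason) {
        case KVM_EXIT_IO:
            /* A guest port access lands right back here, in host
             * context; there is no scheduler hop into a separate
             * domain 0. A real VMM emulates the device at this point. */
            printf("guest I/O on port 0x%x\n", run->io.port);
            break;
        case KVM_EXIT_HLT:
            return 0;
        default:
            fprintf(stderr, "unhandled exit %d\n", run->exit_reason);
            return 1;
        }
    }
}

This mirrors how a userspace VMM such as QEMU drives kvm guests, and it is why kvm inherits whatever drivers the host kernel already has.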
