
Linux hosts on VMware and disk subsystem timeouts.

Link to this post 22 Apr 09

This is perhaps a tough one to answer but it nags me sometimes.

When you have a Linux guest on a VMware host where the disk subsystem is under heavy load and takes too long to answer, the Linux guest often remounts the disk as read-only.

My ventures into the various documentation, web and support sites haven't given me a better solution than to throw more hardware at the problem.

I use "tune2fs -e continue /dev/sda" to avoid having the disk remounted read-only in a VMware environment, but I suspect it will then also continue on failures other than timeouts.
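For anyone who wants to try this safely first, here is a rough sketch using a throwaway filesystem image instead of a real disk (the /tmp/test.img path is just an example; assumes e2fsprogs is installed):

```shell
# Create a small scratch ext3 image (a plain file, no root needed)
dd if=/dev/zero of=/tmp/test.img bs=1M count=16 2>/dev/null
mke2fs -q -F -t ext3 /tmp/test.img

# Set the error behavior to "continue" instead of the remount-ro default
tune2fs -e continue /tmp/test.img >/dev/null

# Confirm the setting took effect
tune2fs -l /tmp/test.img | grep 'Errors behavior'
# prints something like: Errors behavior:      Continue
```

The same `tune2fs -l` check works against a real device to see what error behavior is currently configured.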

Is it possible to tune the timeout for disks or is that something thats hardcoded in the drivers?

Link to this post 22 Apr 09

tuxmania wrote:

Is it possible to tune the timeout for disks or is that something thats hardcoded in the drivers?

That's a good question. I'll have to do some research... OK, this is from an entry I found in the VMware user forums that might help.
Are you wanting to increase the disk timeout for your Linux guests? If so, I use the following script in my rc.local file for my 2.6 kernel based distros such as RHEL4 and RHEL5:

for i in `ls /sys/block | grep -P '^sd'`; do
    echo 60 > /sys/block/$i/device/timeout
done

This basically just loops through all SCSI disks and sets the timeout to 60 seconds. I'm not sure if there is a way to do the same with 2.4 based distros like RHEL3 and earlier, but I've actually never had a problem with those anyway, so maybe the default is already fairly high.
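To make the setting persist across reboots and also catch hot-added disks, a udev rule can do the same job as the rc.local loop. A sketch, assuming a rules file such as /etc/udev/rules.d/99-scsi-timeout.rules (the filename and the 180-second value are just examples, not anything VMware mandates):

```
# /etc/udev/rules.d/99-scsi-timeout.rules (example path)
# Set the SCSI command timeout for newly added sd* block devices;
# %k expands to the kernel device name (sda, sdb, ...)
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", RUN+="/bin/sh -c 'echo 180 > /sys/block/%k/device/timeout'"
```

Unlike the rc.local approach, the rule fires whenever a matching device appears, not just once at boot.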

I hope this helps.

Link to this post 22 Apr 09

I can't say I'm sure, but I think the value in /sys/block/$i/device/timeout affects command timeouts only. Other kinds of timeouts aren't handled by this, I think. The default values I have seen have been 60 seconds with udev and 30 without, but the timeouts that trigger the remounts have been much shorter than this.
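As a quick sanity check, the current value can be read straight out of sysfs (which devices exist will vary by machine; this just prints whatever sd* devices are present):

```shell
# Print the current command timeout (in seconds) for each sd* device, if any
for t in /sys/block/sd*/device/timeout; do
    [ -e "$t" ] && printf '%s: %ss\n' "$t" "$(cat "$t")"
done
```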

The only thing that has worked for me has been to use tune2fs, but that doesn't work for e.g. NSS filesystems on Novell Open Enterprise Server in a VMware guest.

Link to this post 06 Aug 09

I would be curious to see what the host's storage is up to. Usually in a virtual environment the kernel will remount the filesystem read-only to protect itself when errors start showing up. You see this when the SCSI bus is having trouble for whatever reason: error feedback comes back to the kernel, and if it does not know how to handle the errors, it protects itself.

Most newer kernels (2.6.16 and later) have had their error handling tweaked. However, if you are still having trouble, you can work around it by disabling barrier support. A write barrier is a mechanism the filesystem uses to make sure journal writes reach stable storage in the correct order before dependent writes proceed; problems can arise when the underlying device stops honoring barrier requests.

One of Novell's TIDs (Technical Information Documents) explains it pretty well:

When a kernel update (as discussed above) is not an option, the problem can also be worked around by explicitly disabling barrier support for the affected filesystems, e.g. by specifying barrier=0 in /etc/fstab's mount options field for the affected filesystems.

Error handling code in the ext3 filesystem is not properly handling the case where a device has stopped accepting barrier requests, which can happen with software RAID devices, LVM devices, device-mapper devices, and with third-party multipathing software like EMC PowerPath.
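For illustration, a hypothetical /etc/fstab entry with barriers disabled for an ext3 filesystem might look like this (the device name and mount point are assumptions, not from the TID):

```
# barrier=0 disables write barriers for this ext3 filesystem (example entry)
/dev/sda1   /data   ext3   defaults,barrier=0   1 2
```

After editing fstab, the filesystem has to be remounted (or the host rebooted) for the option to take effect.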
