[lustre-discuss] LU-11465 OSS/MDS deadlock in 2.10.5

Andreas Dilger adilger at whamcloud.com
Mon Oct 22 18:55:57 PDT 2018


On Oct 23, 2018, at 09:25, Marion Hakanson <hakansom at ohsu.edu> wrote:
> 
> I think Patrick's warning of data loss on a local ZFS filesystem is not
> quite right.  It's a design feature of ZFS that it flushes caches upon
> committing writes before returning a "write complete" back to the
> application.  Data loss can still happen if the storage lies to ZFS
> about having sent the data to stable storage.

Just to clarify, even ZFS on a local node does not avoid data loss if
the file is written only to RAM, and is not sync'd to disk.  That is
true of any filesystem, unless your writes are all O_SYNC (which can
hurt performance significantly), or until NVRAM is used exclusively to
store data.
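
As a rough illustration of the O_SYNC case (plain POSIX C, with a
placeholder path and data that are not from this thread): each write() on
an O_SYNC descriptor blocks until the data is on stable storage, which is
exactly why it can hurt performance so much:

    /* sketch only: O_SYNC makes every write() wait for stable storage */
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        const char buf[] = "important result\n";

        /* placeholder path, not from the original mail */
        int fd = open("/mnt/lustre/out.dat",
                      O_WRONLY | O_CREAT | O_SYNC, 0644);
        if (fd < 0)
            return 1;
        /* with O_SYNC, write() only returns once the data is persistent,
         * so a crash after a successful return cannot lose this write */
        if (write(fd, buf, sizeof(buf) - 1) != (ssize_t)(sizeof(buf) - 1))
            return 1;
        return close(fd);
    }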

There is some time after the write() syscall returns to an application
before the filesystem will even _start_ to write to the disk, to allow
it to aggregate data from multiple write() syscalls for efficiency.
Once the data is sent from RAM to disk, the disk should not ack the write
until it is persistent.  If sync() (or a variant such as fsync()) is
called from userspace, it should not return until the data is persistent,
and that holds for Lustre as well.
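
To make that ordering concrete, here is a minimal sketch (again plain
POSIX C with a placeholder path): the write() can return while the data
still sits only in cache, and it is the fsync() that does not return
until the data is persistent:

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        const char buf[] = "some data\n";
        int fd = open("/mnt/lustre/log.txt",
                      O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0)
            return 1;
        /* returns as soon as the data is in the page cache; nothing has
         * necessarily been written to disk (or sent to an OST) yet */
        if (write(fd, buf, sizeof(buf) - 1) < 0)
            return 1;
        /* should not return until the data is persistent on disk; this
         * is where delayed writeback is actually forced out */
        if (fsync(fd) < 0)
            return 1;
        return close(fd);
    }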

What Patrick was referring to is the case where the server crashes after
the client's write() has accepted the data, but before that data is
persistent on disk, and *then* the client is evicted from the server: in
that case the data is lost.  An fsync() on the file handle would still
return an error, but applications often do not call it.  The same is true
if a local disk disconnects from the node before the data is persistent
(e.g. USB device unplug, cable failure, external RAID enclosure power
failure, etc).
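
Detecting this kind of loss is therefore up to the application: it has to
check the return value of fsync() (and close()) itself.  A hypothetical
helper along those lines (the function name and message are placeholders,
not anything from an existing tool):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* hypothetical helper: after an eviction or server crash the error
     * surfaces here; an application that never calls fsync() never sees it */
    int flush_or_warn(int fd, const char *name)
    {
        if (fsync(fd) < 0) {
            fprintf(stderr, "%s: data may have been lost: %s\n",
                    name, strerror(errno));
            return -1;
        }
        return 0;
    }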

Cheers, Andreas

> Anyway, thanks, Andreas and others, for clarifying about the use of
> abort_recovery.  Using it turns out to not have been helpful in our
> situation so far, but this has been a useful discussion about the
> risks of data loss, etc.
> 
> Thanks and regards,
> 
> Marion
> 
> 
>> From: Patrick Farrell <paf at cray.com>
>> To: "Mohr Jr, Richard Frank (Rick Mohr)" <rmohr at utk.edu>, Marion Hakanson
>> 	<hakansom at ohsu.edu>
>> CC: "lustre-discuss at lists.lustre.org" <lustre-discuss at lists.lustre.org>
>> Subject: Re: [lustre-discuss] LU-11465 OSS/MDS deadlock in 2.10.5
>> Date: Fri, 19 Oct 2018 17:36:56 +0000
>> 
>> There is a somewhat hidden danger with eviction: You can get silent data loss.  The simplest example is buffered (ie, any that aren't direct I/O) writes - Lustre reports completion (ie your write() syscall completes) once the data is in the page cache on the client (like any modern file system, including local ones - you can get silent data loss on EXT4, XFS, ZFS, etc, if your disk becomes unavailable before data is written out of the page cache).
>> 
>> So if that client with pending dirty data is evicted from the OST the data is destined for - which is essentially what abort recovery does - that data is lost, and the application doesn't get an error (because the write() call has already completed).
>> 
>> A message is printed to the console on the client in this case, but you have to know to look for it.  The application will run to completion, but you may still experience data loss, and not know it.  It's just something to keep in mind - applications that run to completion despite evictions may not have completed *successfully*.
>> 
>> - Patrick
>> 
>> On 10/19/18, 11:42 AM, "lustre-discuss on behalf of Mohr Jr, Richard Frank (Rick Mohr)" <lustre-discuss-bounces at lists.lustre.org on behalf of rmohr at utk.edu> wrote:
>> 
>> 
>>> On Oct 19, 2018, at 10:42 AM, Marion Hakanson <hakansom at ohsu.edu> wrote:
>>> 
>>> Thanks for the feedback.  You're both confirming what we've learned so far, that we had to unmount all the clients (which required rebooting most of them), then reboot all the storage servers, to get things unstuck until the problem recurred.
>>> 
>>> I tried abort_recovery on the clients last night, before rebooting the MDS, but that did not help.  Could well be I'm not using it right:
>> 
>>    Aborting recovery is a server-side action, not something that is done on the client.  As mentioned by Peter, you can abort recovery on a single target after it is mounted by using “lctl --device <DEV> abort_recover”.  But you can also just skip over the recovery step when the target is mounted by adding the “-o abort_recov” option to the mount command.  For example,
>> 
>>    mount -t lustre -o abort_recov /dev/my/mdt /mnt/lustre/mdt0
>> 
>>    And similarly for OSTs.  So you should be able to just unmount your MDT/OST on the running file system and then remount with the abort_recov option.  From a client perspective, the lustre client will get evicted but should automatically reconnect.   
>> 
>>    Some applications can ride through a client eviction without causing issues, some cannot.  I think it depends largely on how the application does its IO and if there is any IO in flight when the eviction occurs.  I have had to do this a few times on a running cluster, and in my experience, we have had good luck with most of the applications continuing without issues.  Sometimes there are a few jobs that abort, but overall this is better than having to stop all jobs and remount lustre on all the compute nodes.
>> 
>>    --
>>    Rick Mohr
>>    Senior HPC System Administrator
>>    National Institute for Computational Sciences
>>    http://www.nics.tennessee.edu
>> 
>> 
>> 
> 
> _______________________________________________
> lustre-discuss mailing list
> lustre-discuss at lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org

Cheers, Andreas
---
Andreas Dilger
CTO Whamcloud





