[lustre-discuss] MDT mount stuck

Mohr, Rick mohrrf at ornl.gov
Thu Mar 11 15:05:10 PST 2021


Thomas,

Is the behavior any different if you mount with the "-o abort_recov" option to avoid the recovery phase?
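
For example (a sketch; the device path and mount point below are placeholders for your actual MDT device and mount point):

    mount -t lustre -o abort_recov /dev/mdt0 /mnt/lustre/mdt0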

--Rick

On 3/11/21, 11:48 AM, "lustre-discuss on behalf of Thomas Roth via lustre-discuss" <lustre-discuss-bounces at lists.lustre.org on behalf of lustre-discuss at lists.lustre.org> wrote:

    Hi all,

    after failing to get out of the ldlm_lockd situation, we are trying a shutdown plus restart.
    This does not work at all. The very first mount of the restart is, of course, MGS + MDT0.

    The node is quite busy writing traces like this one to the log:


    Mar 11 17:21:17 lxmds19.gsi.de kernel: INFO: task mount.lustre:2948 blocked for more than 120 seconds.
    Mar 11 17:21:17 lxmds19.gsi.de kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Mar 11 17:21:17 lxmds19.gsi.de kernel: mount.lustre    D ffff9616ffc5acc0     0  2948   2947 0x00000082
    Mar 11 17:21:17 lxmds19.gsi.de kernel: Call Trace:
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a785da9>] schedule+0x29/0x70
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a7838b1>] schedule_timeout+0x221/0x2d0
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a0e17f6>] ? select_task_rq_fair+0x5a6/0x760
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a78615d>] wait_for_completion+0xfd/0x140
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a0db990>] ? wake_up_state+0x20/0x20
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0b7c9a4>] llog_process_or_fork+0x244/0x450 [obdclass]
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0b7cbc4>] llog_process+0x14/0x20 [obdclass]
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0bafd05>] class_config_parse_llog+0x125/0x350 [obdclass]
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc077efc0>] mgc_process_cfg_log+0x790/0xc40 [mgc]
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc07824cc>] mgc_process_log+0x3dc/0x8f0 [mgc]
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc078315f>] ? config_recover_log_add+0x13f/0x280 [mgc]
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0bb7f40>] ? class_config_dump_handler+0x7e0/0x7e0 [obdclass]
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0783b2b>] mgc_process_config+0x88b/0x13f0 [mgc]
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0bbbb58>] lustre_process_log+0x2d8/0xad0 [obdclass]
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0a84177>] ? libcfs_debug_msg+0x57/0x80 [libcfs]
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0ba68b9>] ? lprocfs_counter_add+0xf9/0x160 [obdclass]
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0bea8f4>] server_start_targets+0x13a4/0x2a20 [obdclass]
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0bbebb0>] ? lustre_start_mgc+0x260/0x2510 [obdclass]
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0bb7f40>] ? class_config_dump_handler+0x7e0/0x7e0 [obdclass]
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0bed03c>] server_fill_super+0x10cc/0x1890 [obdclass]
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0bc1a08>] lustre_fill_super+0x468/0x960 [obdclass]
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0bc15a0>] ? lustre_common_put_super+0x270/0x270 [obdclass]
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a2510ff>] mount_nodev+0x4f/0xb0
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0bb99a8>] lustre_mount+0x38/0x60 [obdclass]
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a251c7e>] mount_fs+0x3e/0x1b0
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a2707d7>] vfs_kern_mount+0x67/0x110
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a272f0f>] do_mount+0x1ef/0xd00
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a249daa>] ? __check_object_size+0x1ca/0x250
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a2288ec>] ? kmem_cache_alloc_trace+0x3c/0x200
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a273d63>] SyS_mount+0x83/0xd0
    Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a792ed2>] system_call_fastpath+0x25/0x2a
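
    If useful, the Lustre debug buffer can also be dumped while the mount hangs, to see what the
    config-log processing is doing (a sketch; the output path is arbitrary):

        lctl dk /tmp/lustre-debug.log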




    Other than that, nothing is happening.

    The Lustre processes have started, but e.g. recovery_status = Inactive (read as sketched below).
    OK, perhaps that is because there is nothing out there to recover besides this MDS; all other Lustre
    servers and clients are still stopped.
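
    For reference, the recovery status can be read with something like this (the wildcard matches all
    local MDT targets):

        lctl get_param mdt.*.recovery_status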


    Still, on previous occasions the mount would not block in this way: the device would end up mounted.
    Now it does not even make it into /proc/mounts (see the check below).
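
    Checked with a plain grep, e.g.:

        grep lustre /proc/mounts

    which currently returns nothing.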

    Btw, the disk device can be mounted as type ldiskfs (as sketched below). So it exists, and on the
    inside it definitely looks like a Lustre MDT.
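
    For example, something like this succeeds (the device path and the mount point here are stand-ins
    for the real ones):

        mount -t ldiskfs /dev/mdt0 /mnt/ldiskfs-test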


    Best,
    Thomas

    -- 
    --------------------------------------------------------------------
    Thomas Roth
    Department: Informationstechnologie
    Location: SB3 2.291
    Phone: +49-6159-71 1453  Fax: +49-6159-71 2986


    GSI Helmholtzzentrum für Schwerionenforschung GmbH
    Planckstraße 1, 64291 Darmstadt, Germany, www.gsi.de

    Commercial Register / Handelsregister: Amtsgericht Darmstadt, HRB 1528
    Managing Directors / Geschäftsführung:
    Professor Dr. Paolo Giubellino, Dr. Ulrich Breuer, Jörg Blaurock
    Chairman of the Supervisory Board / Vorsitzender des GSI-Aufsichtsrats:
    State Secretary / Staatssekretär Dr. Volkmar Dietz
