[lustre-discuss] lustre-discuss Digest, Vol 175, Issue 2

Ms. Megan Larko dobsonunit at gmail.com
Tue Oct 6 11:49:01 PDT 2020


Re: Help mounting MDT, to Alastair:
Just to clarify: you mentioned that the MDT is ldiskfs, but are you
mounting the MDT as part of a full Lustre file system on the MDS server,
i.e. are you mounting the MDT as type lustre?

Cheers,
megan
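
P.S. If you are mounting as type lustre: the trace you posted shows
mount.lustre blocked in llog_process/mgc_process_log, which generally
means the mount is waiting on configuration logs from the MGS. A quick
LNet reachability check from the MDS might help narrow things down (the
NID below is a placeholder for your actual MGS NID):

    # Verify the MGS is reachable over LNet:
    lctl ping 10.0.0.2@o2ib

    # List the local obd devices and their setup state:
    lctl dl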

On Mon, Oct 5, 2020 at 4:50 PM <lustre-discuss-request at lists.lustre.org>
wrote:

> Send lustre-discuss mailing list submissions to
>         lustre-discuss at lists.lustre.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
> or, via email, send a message with subject or body 'help' to
>         lustre-discuss-request at lists.lustre.org
>
> You can reach the person managing the list at
>         lustre-discuss-owner at lists.lustre.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of lustre-discuss digest..."
>
>
> Today's Topics:
>
>    1. Help mounting MDT (Alastair Basden)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 5 Oct 2020 16:28:37 +0100 (BST)
> From: Alastair Basden <a.g.basden at durham.ac.uk>
> To: lustre-discuss at lists.lustre.org
> Subject: [lustre-discuss] Help mounting MDT
> Message-ID: <alpine.DEB.2.22.394.2010051620120.2902 at xps14>
> Content-Type: text/plain; format=flowed; charset=US-ASCII
>
> Hi all,
>
> We are having a problem mounting an ldiskfs MDT.  The mount command
> hangs, with /var/log/messages containing:
> Oct  5 16:26:17 c6mds1 kernel: INFO: task mount.lustre:4285 blocked for more than 120 seconds.
> Oct  5 16:26:17 c6mds1 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Oct  5 16:26:17 c6mds1 kernel: mount.lustre    D ffff92cd279de2a0     0  4285   4284 0x00000082
> Oct  5 16:26:17 c6mds1 kernel: Call Trace:
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffff8db7f229>] schedule+0x29/0x70
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffff8db7cbb1>] schedule_timeout+0x221/0x2d0
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffff8d4e64a8>] ? enqueue_task_fair+0x208/0x6c0
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffff8d4dd425>] ? sched_clock_cpu+0x85/0xc0
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffff8d4d6500>] ? check_preempt_curr+0x80/0xa0
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffff8d4d6539>] ? ttwu_do_wakeup+0x19/0xe0
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffff8db7f5dd>] wait_for_completion+0xfd/0x140
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffff8d4da0b0>] ? wake_up_state+0x20/0x20
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffffc0cd69d4>] llog_process_or_fork+0x244/0x450 [obdclass]
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffffc0cd6bf4>] llog_process+0x14/0x20 [obdclass]
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffffc0d09eb5>] class_config_parse_llog+0x125/0x350 [obdclass]
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffffc0804fc0>] mgc_process_cfg_log+0x790/0xc40 [mgc]
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffffc08084cc>] mgc_process_log+0x3dc/0x8f0 [mgc]
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffffc080915f>] ? config_recover_log_add+0x13f/0x280 [mgc]
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffffc0d120f0>] ? class_config_dump_handler+0x7e0/0x7e0 [obdclass]
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffffc0809b2b>] mgc_process_config+0x88b/0x13f0 [mgc]
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffffc0d15d08>] lustre_process_log+0x2d8/0xad0 [obdclass]
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffffc0bdff97>] ? libcfs_debug_msg+0x57/0x80 [libcfs]
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffffc0d00a69>] ? lprocfs_counter_add+0xf9/0x160 [obdclass]
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffffc0d449f4>] server_start_targets+0x13a4/0x2a20 [obdclass]
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffffc0d18d60>] ? lustre_start_mgc+0x260/0x2510 [obdclass]
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffffc0d120f0>] ? class_config_dump_handler+0x7e0/0x7e0 [obdclass]
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffffc0d4713c>] server_fill_super+0x10cc/0x1890 [obdclass]
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffffc0d1ba78>] lustre_fill_super+0x328/0x950 [obdclass]
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffffc0d1b750>] ? lustre_common_put_super+0x270/0x270 [obdclass]
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffff8d64c67f>] mount_nodev+0x4f/0xb0
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffffc0d13b58>] lustre_mount+0x38/0x60 [obdclass]
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffff8d64d1fe>] mount_fs+0x3e/0x1b0
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffff8d66b387>] vfs_kern_mount+0x67/0x110
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffff8d66dadf>] do_mount+0x1ef/0xce0
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffff8d64521a>] ? __check_object_size+0x1ca/0x250
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffff8d62368c>] ? kmem_cache_alloc_trace+0x3c/0x200
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffff8d66e913>] SyS_mount+0x83/0xd0
> Oct  5 16:26:17 c6mds1 kernel: [<ffffffff8db8cede>] system_call_fastpath+0x25/0x2a
>
>
> This is Lustre 2.12.2 on CentOS 7.6
>
> Does anyone have any suggestions?
>
> Cheers,
> Alastair.
>
>
> ------------------------------
>
> End of lustre-discuss Digest, Vol 175, Issue 2
> **********************************************
>