[lustre-discuss] [lustre-devel] GID only mapping in 2.8.60?

Stephane Thiell sthiell at stanford.edu
Mon Nov 28 17:57:01 PST 2016


Hi Kit,

Thanks so much! This is super helpful. I just tested your patch (on master) and it seems to work as expected, at least with map_mode = gid_only.
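For anyone else testing: the active setting can be read back with something like the following (a sketch, assuming the patch exposes map_mode as a nodemap parameter alongside the existing nodemap properties):

lctl get_param nodemap.my_nodemap.map_mode
nodemap.my_nodemap.map_mode=gid_only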

I will do more tests on this system and let you know if I notice any issues.

Thanks again,

Stephane


> On Nov 18, 2016, at 7:52 AM, Kit Westneat <kit.westneat at gmail.com> wrote:
> 
> Hi Stephane,
> 
> I wrote a patch that adds a couple of flags to allow GID only mapping:
> http://review.whamcloud.com/23853
> 
> How to activate it:
> lctl nodemap_modify --name my_nodemap --property map_mode --value gid_only
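> For context, a complete sequence on the MGS might look like this (a sketch; the nodemap name, NID range, and GID pair are just examples):
> 
> lctl nodemap_add my_nodemap
> lctl nodemap_add_range --name my_nodemap --range 10.0.0.[1-255]@tcp
> lctl nodemap_add_idmap --name my_nodemap --idtype gid --idmap 500:1500
> lctl nodemap_modify --name my_nodemap --property map_mode --value gid_only
> lctl nodemap_activate 1
> 
> With map_mode set to gid_only, only the GID idmaps should be applied, which is the point of the new flags.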
> 
> 
> 
> On Mon, Nov 7, 2016 at 5:59 PM, Stephane Thiell <sthiell at stanford.edu> wrote:
> 
> > On Nov 4, 2016, at 11:05 PM, Dilger, Andreas <andreas.dilger at intel.com> wrote:
> >
> > Actually, the nodemap feature will work with any client, since it only affects lookups on the MDS and quota on the OSS.
> 
> Great! :-)
> 
> >
> > It would probably take less time for you to implement the flag feature than it is taking to create the thousands of UID entries. While I think it should scale to very large sizes, I don't think we have tested the ~1M entries you are creating. The good news is that since this uses a hash table on the server, it shouldn't hurt performance too much.
> >
> > Let us know how many you finally create, and how it is working with so many entries.
> 
> I was able to add ~820k idmaps in 48h [with all servers up and running; see my note below]… but I can’t go much beyond that point, as I am seeing less than 1 idmap added per second. Even at such a low rate, I noticed a CPU-bound thread on every MDS and OSS:
> 
>    PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
>  30258 root      20   0       0      0      0 R 100.0  0.0   1798:38 ll_cfg_requeue
> 
> I have two vmcore dumps that show backtraces for this running thread; they can be found at the end of this email.
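> (In case it's useful: one way to pull these backtraces out of the vmcores, assuming the crash utility and matching kernel debuginfo are installed, is:
> 
> crash /usr/lib/debug/lib/modules/$(uname -r)/vmlinux vmcore
> crash> bt 30258
> 
> where 30258 is the PID of the spinning ll_cfg_requeue thread from the top output above.)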
> 
> I don’t know why nodemap_idx_insert is called so often, as the apparent idmap creation rate is very slow. Both backtraces below show it being reached from nodemap_save_config_cache on the MGC requeue path, so each update may be rewriting the whole cached nodemap configuration.
> 
> Last-minute note: I was able to add more idmaps from the MGS much faster by disconnecting the Lustre servers first. Once the idmaps were added to the MGS, I started all the MDS and OSS servers again, and the idmap synchronization from the MGS was immediate. So for large UID ranges, I would recommend adding all the idmaps with only the MGS up and running. That can be very useful in case of a writeconf, until a UID range-based or identity-mapping solution is available.
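> A bulk add of that kind can be scripted with a simple loop, for example (illustrative only; the nodemap name, GID range, and +100000 offset are made up):
> 
> for gid in $(seq 1000 1999); do
>     lctl nodemap_add_idmap --name my_nodemap --idtype gid --idmap ${gid}:$((gid + 100000))
> done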
> 
> Also, to finish with some good news: I just completed a few small-scale metadata benchmarks using files with mapped UIDs/GIDs, and I didn't notice any difference in metadata performance, such as create or delete rates per second, with the nodemap feature enabled versus disabled. Tested with 820k loaded idmaps.
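> For reference, a comparison like that can be run with mdtest (illustrative invocation; the task count, file count, and path are assumptions):
> 
> mpirun -np 16 mdtest -F -C -r -n 10000 -i 3 -d /mnt/lustre/nodemap_bench
> 
> once with the nodemap feature activated and once with it deactivated (lctl nodemap_activate 1 / lctl nodemap_activate 0 on the MGS), comparing the reported create and removal rates.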
> 
> All the best,
> Stephane
> 
> 
> 
> bt running ll_cfg_requeue #1:
> --- <NMI exception stack> ---
>  #4 [ffff881f3e93fae8] memset at ffffffff813017d9
>  #5 [ffff881f3e93fae8] kmem_cache_alloc_trace at ffffffff811c117e
>  #6 [ffff881f3e93fb30] lu_context_init at ffffffffa0a72d06 [obdclass]
>  #7 [ffff881f3e93fb50] osd_trans_start at ffffffffa0de0151 [osd_ldiskfs]
>  #8 [ffff881f3e93fb88] nodemap_idx_insert at ffffffffa0f28af4 [ptlrpc]
>  #9 [ffff881f3e93fbd0] nodemap_save_config_cache at ffffffffa0f2c5e0 [ptlrpc]
> #10 [ffff881f3e93fc78] nodemap_config_set_active_mgc at ffffffffa0f2c9ad [ptlrpc]
> #11 [ffff881f3e93fce0] mgc_process_recover_nodemap_log at ffffffffa0a0ce6b [mgc]
> #12 [ffff881f3e93fd70] mgc_process_log at ffffffffa0a0f894 [mgc]
> #13 [ffff881f3e93fe30] mgc_requeue_thread at ffffffffa0a11908 [mgc]
> #14 [ffff881f3e93fec8] kthread at ffffffff810a5aef
> #15 [ffff881f3e93ff50] ret_from_fork at ffffffff81645a58
> 
> bt running ll_cfg_requeue #2:
> --- <NMI exception stack> ---
>  #4 [ffff883f5f0239a8] iam_it_init at ffffffffa1168563 [osd_ldiskfs]
>  #5 [ffff883f5f0239b0] iam_insert at ffffffffa116a2f3 [osd_ldiskfs]
>  #6 [ffff883f5f023b20] osd_index_iam_insert at ffffffffa1157027 [osd_ldiskfs]
>  #7 [ffff883f5f023b88] nodemap_idx_insert at ffffffffa0d41c0c [ptlrpc]
>  #8 [ffff883f5f023bd0] nodemap_save_config_cache at ffffffffa0d455e0 [ptlrpc]
>  #9 [ffff883f5f023c78] nodemap_config_set_active_mgc at ffffffffa0d459ad [ptlrpc]
> #10 [ffff883f5f023ce0] mgc_process_recover_nodemap_log at ffffffffa0a30e6b [mgc]
> #11 [ffff883f5f023d70] mgc_process_log at ffffffffa0a33894 [mgc]
> #12 [ffff883f5f023e30] mgc_requeue_thread at ffffffffa0a35908 [mgc]
> #13 [ffff883f5f023ec8] kthread at ffffffff810a5aef
> #14 [ffff883f5f023f50] ret_from_fork at ffffffff81645a58
> 
> 
> _______________________________________________
> lustre-devel mailing list
> lustre-devel at lists.opensfs.org
> http://lists.opensfs.org/listinfo.cgi/lustre-devel-opensfs.org
> 


