[Lustre-discuss] [zfs-discuss] ZFS/Lustre echo 0 >> max_cached_mb chewing 100% cpu

Lee, Brett brett.lee at intel.com
Tue Oct 22 14:10:06 PDT 2013



> -----Original Message-----
> From: lustre-discuss-bounces at lists.lustre.org [mailto:lustre-discuss-
> bounces at lists.lustre.org] On Behalf Of Prakash Surya
> Sent: Tuesday, October 22, 2013 2:53 PM
> To: zfs-discuss at zfsonlinux.org
> Cc: lustre-discuss at lists.lustre.org
> Subject: Re: [Lustre-discuss] [zfs-discuss] ZFS/Lustre echo 0 >> max_cached_mb
> chewing 100% cpu
> 
> On Tue, Oct 22, 2013 at 07:01:47PM +0000, Lee, Brett wrote:
> > Andrew,
> >
> > If I recall correctly, "FSTYPE=zfs /usr/lib64/lustre/tests/llmount.sh" will create
> and start a sample ZFS-backed Lustre file system using loopback devices.
> 
> That's not entirely true with ZFS. It'll create ZFS pools backed by ordinary files.
> No need for loopback devices.

Ahh, yes.  I stand corrected.  Thanks for the reminder.
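
For anyone following along, a quick way to check which kind of backing store llmount.sh actually produced on a given node (nothing here is specific to this setup; these are just stock commands):

  # ZFS pools created by llmount.sh are backed by ordinary files; the
  # file paths show up as the vdevs here:
  zpool status

  # If loopback devices were in play, they would be listed here:
  losetup -a

  # The mounted Lustre targets and the client mount, if any, show up as:
  mount -t lustre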

--
Brett Lee
Sr. Systems Engineer
Intel High Performance Data Division
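
(Regarding the runaway writer discussed in the quoted thread below: two generic diagnostics that may help show where it is spinning. The pid is a placeholder, and /proc/<pid>/stack assumes the kernel exposes per-task stacks, which RHEL/CentOS 6 kernels normally do.)

  # Find the sh process that is writing to max_cached_mb:
  ps aux | grep max_cached_mb

  # Dump its kernel stack to see which Lustre function it is stuck in:
  cat /proc/<pid>/stack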
> 
> --
> Cheers, Prakash
> 
> >
> > Could you please check to see if there are loopback devices mounted as
> Lustre storage targets?  If so, unmounting these and stopping the Lustre file
> system should (could?) clean things up.
> >
> > --
> > Brett Lee
> > Sr. Systems Engineer
> > Intel High Performance Data Division
> >
> >
> > > -----Original Message-----
> > > From: lustre-discuss-bounces at lists.lustre.org
> > > [mailto:lustre-discuss- bounces at lists.lustre.org] On Behalf Of
> > > Andrew Holway
> > > Sent: Tuesday, October 22, 2013 10:44 AM
> > > To: zfs-discuss at zfsonlinux.org
> > > Cc: lustre-discuss at lists.lustre.org
> > > Subject: Re: [Lustre-discuss] [zfs-discuss] ZFS/Lustre echo 0 >>
> > > max_cached_mb chewing 100% cpu
> > >
> > > On 22 October 2013 16:21, Prakash Surya <surya1 at llnl.gov> wrote:
> > > > This probably belongs on the Lustre mailing list.
> > >
> > > I cross posted :)
> > >
> > > > Regardless, I don't
> > > > think you want to do that (do you?). It'll prevent any client side
> > > > caching, and more importantly, I don't think it's a case that's
> > > > been tested/optimized. What're you trying to achieve?
> > >
> > > Sorry, I was not clear. I didn't initiate this, and I can't kill the
> > > process. It seemed to start directly after running:
> > >
> > > "FSTYPE=zfs /usr/lib64/lustre/tests/llmount.sh"
> > >
> > > I have tried to kill it, first with -2 and then up to -9, but the process will not budge.
> > >
> > > Here are the top lines from perf top:
> > >
> > >  37.39%  [osc]              [k] osc_set_info_async
> > >  27.14%  [lov]              [k] lov_set_info_async
> > >   4.13%  [kernel]           [k] kfree
> > >   3.57%  [ptlrpc]           [k] ptlrpc_set_destroy
> > >   3.14%  [kernel]           [k] mutex_unlock
> > >   3.10%  [lustre]           [k] ll_wr_max_cached_mb
> > >   3.00%  [kernel]           [k] mutex_lock
> > >   2.82%  [ptlrpc]           [k] ptlrpc_prep_set
> > >   2.52%  [kernel]           [k] __kmalloc
> > >
> > > Thanks,
> > >
> > > Andrew
> > >
> > > >
> > > > Also, just curious, where's the CPU time being spent? What process
> > > > and/or kernel thread? What are the top entries listed when you run
> > > > "perf
> > > top"?
> > > >
> > > > --
> > > > Cheers, Prakash
> > > >
> > > > On Tue, Oct 22, 2013 at 12:53:44PM +0100, Andrew Holway wrote:
> > > >> Hello,
> > > >>
> > > >> I have just set up a "toy" Lustre file system using the guide here:
> > > >> http://zfsonlinux.org/lustre and have this process chewing 100% CPU:
> > > >>
> > > >> sh -c echo 0 >>
> > > >> /proc/fs/lustre/llite/lustre-ffff88006b0c7c00/max_cached_mb
> > > >>
> > > >> Until I get something beefier I am using my desktop machine
> > > >> with KVM, running standard CentOS 6.4 with the latest kernel
> > > >> (2.6.32-358.23.2). My machine has 2 GB of RAM.
> > > >>
> > > >> Any ideas?
> > > >>
> > > >> Thanks,
> > > >>
> > > >> Andrew
> > > >>
> >
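
Following up on the cleanup suggestion quoted above: the lustre-tests package that ships llmount.sh normally also installs a matching teardown script, and on a healthy client the cache limit is usually driven through lctl rather than by echoing into /proc. A rough sketch, assuming the stock test-script layout:

  # Tear down the toy file system that llmount.sh started
  # (unmounts the client and the MDT/OST targets):
  FSTYPE=zfs /usr/lib64/lustre/tests/llmountcleanup.sh

  # Read and, if needed, restore the per-mount client cache limit
  # (256 MB here is just an example value):
  lctl get_param llite.*.max_cached_mb
  lctl set_param llite.*.max_cached_mb=256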


