[Lustre-discuss] Fwd: Lustre and Large Pages

John Hammond jhammond at ices.utexas.edu
Fri Aug 20 06:21:16 PDT 2010


On 08/19/2010 11:10 PM, Oleg Drokin wrote:
> Hello!
>
> On Aug 19, 2010, at 7:07 PM, Andreas Dilger wrote:
>> If you want to flush all the memory used by a Lustre client
>> between jobs, you can do "lctl set_param
>> ldlm.namespaces.*.lru_size=clear".  Unlike Kevin's suggestion, this
>> is Lustre-specific; drop_caches will try to flush memory from
>> everything.
>
>
> Actually, there is one extra bit that won't get freed by dropping
> locks, namely the Lustre debug logs (assuming a non-zero debug
> level).  They can be cleared with "lctl clear".

Indeed, thanks.  On Ranger, the compute nodes use compact flash drives 
for /, and so they depend on tmpfs filesystems for /tmp, /var/run, 
/var/log, and of course /dev/shm.  So cleaning up these RAM-backed 
filesystems as much as practical before asking for any hugepages is 
also a win.
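
For concreteness, a rough between-jobs epilogue might look like the 
following (the tmpfs cleanup here is illustrative, not our exact 
scripts):

  # Drop the client's Lustre locks and debug log buffer, as suggested:
  lctl set_param ldlm.namespaces.*.lru_size=clear
  lctl clear
  # Reclaim tmpfs pages left behind by the previous job:
  rm -rf /tmp/* /dev/shm/*
  # Then drop pagecache, dentries, and inodes:
  sync
  echo 3 > /proc/sys/vm/drop_caches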

Also, in imitation of systems that pre-allocate all needed hugepages 
at boot time, we are considering first pre-allocating a large chunk of 
memory (say 7/8) in hugepages, then mounting the Lustre filesystems, 
then releasing the hugepages.  The hope is that Lustre's persistent 
structures will thereby fit into a more compact region of memory.
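
In sketch form (the fraction, NID, and fsname below are placeholders, 
not Ranger's actual configuration):

  # Request roughly 7/8 of memory as hugepages; the kernel may grant fewer.
  hpsize_kb=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)
  mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
  echo $(( mem_kb * 7 / 8 / hpsize_kb )) > /proc/sys/vm/nr_hugepages
  grep HugePages /proc/meminfo   # check how many were actually reserved
  # Mount Lustre while most of memory is pinned, so that its long-lived
  # structures land in the remaining 1/8:
  mount -t lustre 192.168.0.10@o2ib:/scratch /mnt/scratch
  # Release the hugepages for jobs to allocate later:
  echo 0 > /proc/sys/vm/nr_hugepages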

The main obstacle in testing all of this is that benchmarking the gains 
from a particular approach is difficult, since we have not yet found an 
easy way of producing external fragmentation of physical memory in 
short order.  Suggestions are welcome.
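
One thing we can at least measure is the supply of hugepage-sized free 
blocks; a rough awk over /proc/buddyinfo (a sketch, not a validated 
benchmark) reports how much free memory sits in blocks large enough 
for a 2 MB hugepage:

  # /proc/buddyinfo lists free block counts per zone by order, where a
  # block at order k is (4 KB << k); a 2 MB hugepage needs order 9.
  awk '$3 == "zone" {
         sub(",", "", $2); free = 0
         for (i = 14; i <= NF; i++)   # fields 14+ are orders >= 9
           free += $i * 2^(i-5) * 4   # KB per block = 2^order * 4 KB
         printf "node %s zone %-8s %10d KB in order>=9 blocks\n", $2, $4, free
       }' /proc/buddyinfo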

Best,

-John

-- 
John L. Hammond, Ph.D.
ICES, The University of Texas at Austin
jhammond at ices.utexas.edu
(512) 471-9304


