[Lustre-discuss] Job fails opening 24k files and keeps them open during execution
    Wang Yibin 
    Yibin.Wang at Sun.COM
       
    Wed Mar  4 18:37:51 PST 2009
    
    
  
Lustre does not impose a maximum number of open files; in practice the
limit depends on the amount of RAM on the MDS.
There are no "tables" of open files on the MDS; open files are simply
linked in a list attached to the given client's export.
Each client process is typically limited to a few thousand open files,
as determined by its ulimit.
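
For reference, here is a minimal sketch (using Python's standard
`resource` module, not anything Lustre-specific) of how a process can
inspect and try to raise its own open-file ulimit toward the 24,000
files the job below needs; the target value and error handling are
illustrative assumptions:

```python
import resource

# Query the per-process open-file limit (RLIMIT_NOFILE).
# The soft limit is what actually stops open(); the hard limit is the
# ceiling an unprivileged process may raise the soft limit to.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

target = 24000  # illustrative: the file count from the job below
if soft < target:
    try:
        # Without privileges we can only raise soft up to hard;
        # raising the hard limit itself needs root (limits.conf,
        # or fs.file-max via sysctl for the system-wide cap).
        resource.setrlimit(resource.RLIMIT_NOFILE,
                           (min(target, hard), hard))
    except (ValueError, OSError) as exc:
        print("could not raise limit:", exc)
```

Note this only adjusts the client-side process limit; it does not
change anything on the MDS.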
On Wed, 2009-03-04 at 16:27 -0500, Osvaldo Rentas wrote:
> Hello,
> 
>  
> 
> I am working with a user whose Fortran code opens 24,000 files and
> keeps them open during execution.  We had to adjust our kernel
> parameters to allow this, since Linux cuts you off at 1024 by
> default.  This job runs successfully for him on the local disk of a
> Linux machine, but when he moves the job to Lustre, it fails.  The
> metadata servers are running Red Hat …do they impose their own user
> limitations as well?  Or is there a limitation within Lustre or a
> config file? 
> 
>  
> 
> Thanks in advance,
> 
> Oz
> 
> 
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss