[Lustre-discuss] Setting quota on Lustre file system from client reboots MDS/MGS node

tay kian kianpishe at gmail.com
Sat Sep 4 05:23:22 PDT 2010


Hi,
I am using Lustre 1.8.3 on CentOS 5.4, with the kernel patched according to the
Lustre 1.8 Operations Manual.
I have a problem when I try to enable quotas.
My cluster configuration is as follows (a check of the stored quota parameters is sketched after the list):
1. one MGS/MDS host (with two devices: sda for the MGT and sdb for the MDT)
     with the following commands:
     1) mkfs.lustre --mgs /dev/sda
     2) mount -t lustre /dev/sda /mnt/mgt
     3) mkfs.lustre --fsname=lustre --mdt --mgsnode=<mgs IP>@<net> --param mdt.quota_type=ug /dev/sdb
     4) mount -t lustre /dev/sdb /mnt/mdt
2. one OSS host (with two devices: sda and sdb as OST targets)
    with the following commands:
    1) mkfs.lustre --fsname=lustre --ost --mgsnode=<mgs IP>@<net> --param ost.quota_type=ug /dev/sda
    2) mkfs.lustre --fsname=lustre --ost --mgsnode=<mgs IP>@<net> --param ost.quota_type=ug /dev/sdb
    3) mount -t lustre /dev/sda /mnt/ost1
    4) mount -t lustre /dev/sdb /mnt/ost2
3. and one client with the following command:
   mount -t lustre <mds IP>@<net>:/lustre /mnt/client1
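
In case it is relevant, I believe the quota_type parameters recorded on the
targets can be double-checked with tunefs.lustre on the server hosts (with the
target unmounted; --dryrun only prints the stored parameters):

   # on the MGS/MDS host
   tunefs.lustre --dryrun /dev/sdb | grep quota_type
   # on the OSS host
   tunefs.lustre --dryrun /dev/sda | grep quota_type
   tunefs.lustre --dryrun /dev/sdb | grep quota_type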

I tried to set a quota for user1 with the following commands:
  lfs quotacheck -ug /mnt/client1
  lfs setquota -u user1 10240 10440 3 5 /mnt/client1
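
For reference, once those commands succeed I would expect to be able to verify
the limits with lfs quota:

  lfs quota -u user1 /mnt/client1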

But the following problem occurred: when I ran the commands above, the MGS/MDS
host rebooted, and the operation failed with "Connection timed out".

I cannot understand how a client "can" force its server to reboot, or why these
client commands make the MGS/MDS host reboot in the first place.
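
I have not captured anything from the crash itself yet; assuming the usual
locations, I will check the MGS/MDS console and /var/log/messages for an LBUG
or kernel panic right before the reboot, and whether panic-on-LBUG is enabled,
roughly like this:

  # on the MGS/MDS host, after it comes back up
  grep -i -e LBUG -e oops -e 'kernel panic' /var/log/messages
  # I think this shows whether an LBUG triggers a panic/reboot (1 = panic)
  cat /proc/sys/lnet/panic_on_lbug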

Any ideas?
Best Regards
Kian