[lustre-discuss] Lustre and server upgrade

STEPHENS, DEAN - US dean.stephens at caci.com
Tue Nov 30 05:48:54 PST 2021


You are right, I am new to setting up Lustre, and the system that we are working on has been around for a while.

I have verified that the LUN that is attached to the VM is the correct one, using the UUID and WWID as reference. The interesting thing is that when I attach the LUN as a disk to the VM, it immediately comes up with the partition /dev/sdb1. When I attach the OSS LUNs to the OSS servers, they do not show a partition; they are just /dev/sdX. We are running Puppet on these servers, so I am not sure whether that is what is creating the /dev/sdb1 partition or not.
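For reference, a minimal way to cross-check the LUN identity and see what is sitting on it (a sketch, assuming standard RHEL 7 tooling; /dev/sdb is only used here as an example device name):

  lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sdb    # disk and any partitions, with filesystem type
  /usr/lib/udev/scsi_id -g -u /dev/sdb       # WWID of the LUN, for comparison against the SAN side
  blkid /dev/sdb /dev/sdb1                   # TYPE="LVM2_member" means the partition holds an LVM physical volume
  parted -s /dev/sdb print                   # the GPT partition table already written on the LUN

A pre-existing GPT label would also explain why the partition "appears" as soon as the LUN is attached: the kernel is simply reading a table that was written earlier, rather than Puppet creating one.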

Dean

From: Andreas Dilger <adilger at whamcloud.com>
Sent: Monday, November 29, 2021 2:04 PM
To: STEPHENS, DEAN - US <dean.stephens at caci.com>
Cc: Colin Faber <cfaber at gmail.com>; lustre-discuss at lists.lustre.org
Subject: Re: [lustre-discuss] Lustre and server upgrade

On Nov 29, 2021, at 13:10, STEPHENS, DEAN - US <dean.stephens at caci.com> wrote:

Unfortunately I do not have that as that was done years ago before I got involved. Let me ask a few questions here:


  1.  If I were to do a mkfs.lustre on the MDS nodes, that will destroy all existing data. How do the MDS nodes know how to see the OSS nodes and the lustre filesystem to derive the meta data?
  2.  Since the lustre meta data LUN is 1.1TB, any idea how long it will take to do the mkfs and rebuild the lustre metadata?
  3.  Do you think that this will resolve the "Invalid argument" error that I am seeing on the OSS nodes after they report finding the Lustre filesystem data, per the output from before?

I think you misunderstand how Lustre works.  If you reformat the MDT then your filesystem will be totally lost, so definitely don't do that.  There is no "rebuild the metadata from the OSTs" functionality available, just as a reformatted local filesystem will not rebuild itself unless you restore it from backup.

Is it possible that you are not checking the correct block device?  The output below shows that /dev/sdb1 is an LVM physical volume (LVM2_member), so it would appear that the MDT is an LV like /dev/vg<something>/lv<something>.  Running "blkid" on the system might tell you what the volume name is.
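For example, roughly along these lines (the volume group and LV names are placeholders, since they are not shown in the thread):

  blkid                                          # /dev/sdb1 should report TYPE="LVM2_member"
  pvs /dev/sdb1                                  # confirms it is an LVM physical volume and names its volume group
  vgs; lvs                                       # list the volume groups and logical volumes on the node
  vgchange -ay <vgname>                          # activate the VG if its LVs are not yet visible under /dev
  tunefs.lustre --dryrun /dev/<vgname>/<lvname>  # then inspect the LV itself, not /dev/sdb or /dev/sdb1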


From: Colin Faber <cfaber at gmail.com>
Sent: Monday, November 29, 2021 12:12 PM
To: STEPHENS, DEAN - US <dean.stephens at caci.com>
Cc: Andreas Dilger <adilger at whamcloud.com>; lustre-discuss at lists.lustre.org
Subject: Re: [lustre-discuss] Lustre and server upgrade

Well, all signs indicate that this target has not been prepared for lustre. Can you post the output of your original formatting command?

On Mon, Nov 29, 2021 at 8:26 AM STEPHENS, DEAN - US <dean.stephens at caci.com> wrote:
That was my fault. I did not use the correct command.

The output of the lsblk command showing the attached storage (I did not create the /dev/sdb1 partition, and the OSS servers do not have /dev/sdX1 partitions; they are just /dev/sdX):

sdb        1.1TB     0              disk
  sdb1     1.1TB     0              part

Here is the output of tune2fs -l for /dev/sdb and for /dev/sdb1, as they are different:

tune2fs -l /dev/sdb
tune2fs: Bad magic number in super-block while trying to open /dev/sdb
Found a gpt partition table in /dev/sdb

tune2fs -l /dev/sdb1
tune2fs: Bad magic number in super-block while trying to open /dev/sdb1
/dev/sdb1 contains a LVM2_member file system

Dean

From: Colin Faber <cfaber at gmail.com>
Sent: Monday, November 29, 2021 8:02 AM
To: STEPHENS, DEAN - US <dean.stephens at caci.com>
Cc: Andreas Dilger <adilger at whamcloud.com>; lustre-discuss at lists.lustre.org
Subject: Re: [lustre-discuss] Lustre and server upgrade

Hi, tune2fs and tunefs.lustre are different tools which yield different information about the block device. I'd like to be sure that we're working with the right type of device here and that basic ext4/ldiskfs data is present (never mind whether the Lustre configuration data is present).
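As a rough illustration of the difference (the device path is a placeholder):

  tune2fs -l /dev/<target>               # ldiskfs/ext4 superblock: volume name, features, last mount time
  tunefs.lustre --dryrun /dev/<target>   # Lustre view: target name, index, flags and mgsnode/failover parameters from CONFIGS/mountdata

If tune2fs cannot find a superblock at all, then tunefs.lustre has nothing to work with on that particular device either.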

-cf


On Mon, Nov 29, 2021 at 6:22 AM STEPHENS, DEAN - US <dean.stephens at caci.com> wrote:
No worries. I was out last week with Thanksgiving and was not able to respond to your email.

The tunefs.lustre /dev/sdb says this:

Checking for existing lustre data: not found

tunefs.lustre FATAL: Device /dev/sdb has not been formatted with mkfs.lustre
tunefs.lustre: exiting with 19 (no such device)

I think that this is a bit weird, as the "disk" /dev/sdb is a LUN from a SAN that is attached to the VM and is the same LUN that was attached to the RHEL6 VM (I verified the WWID is the same). The LUNs that are attached to the OSS servers do not seem to have that same issue, as the tunefs.lustre command comes back seeing Lustre data.

Any idea what the "Invalid argument" from the OSS tunefs.lustre command means?

Dean




From: Colin Faber <cfaber at gmail.com>
Sent: Wednesday, November 24, 2021 8:35 PM
To: STEPHENS, DEAN - US <dean.stephens at caci.com>
Cc: Andreas Dilger <adilger at whamcloud.com>; lustre-discuss at lists.lustre.org
Subject: Re: [lustre-discuss] Lustre and server upgrade

what does tune2fs report for /dev/sdb on the MDS?

(Also sorry, this somehow got lost in my inbox)

On Mon, Nov 22, 2021 at 8:57 AM STEPHENS, DEAN - US <dean.stephens at caci.com> wrote:
Colin and Andreas, to clarify some points for you, this is what I am seeing:

rpm -qa | grep lustre
kmod-lustre-2.12.6-1.el7.x86_64
lustre-iokit-2.12.6-1.el7.x86_64
lustre-tests-2.12.6-1.el7.x86_64
kernel-devel-3.10.0-1160.2.el7_lustre.x86_64
lustre-osd-ldiskfs-2.12.6-1.el7.x86_64
kmod-lustre-osd-ldiskfs-2.12.6-1.el7.x86_64
kmod-lustre-tests-2.12.6-1.el7.x86_64
lustre-resource-agents-2.12.6-1.el7.x86_64
kernel-3.10.0-1160.2.el7_lustre.x86_64
lustre-2.12.6-1.el7.x86_64

rpm -qa | grep e2fs
e2fsprogs-libs-1.45.6.wc1-0.el7.x86_64
e2fsprogs-1.45.6.wc1-0.el7.x86_64

With all of that installed, and with llmount.sh and llmountcleanup.sh having run and cleaned up successfully, I am still getting the errors:
"Unable to mount /dev/sdb: Invalid argument"
"tunefs.lustre: FATAL: failed to write local files" and "tunefs.lustre: exiting with 22 (Invalid argument)"

These appear when I use the command tunefs.lustre /dev/sdb (which is one of the Lustre LUNs that is attached as a "disk" to the VM).

Full output of the tunefs.lustre /dev/sdb command (as much as I can show, anyway):

tunefs.lustre /dev/sdb
Checking for existing lustre data: found
Reading CONFIGS/mountdata

     Read previous values:
Target:                <name>-OST0009
Index:                 9
Lustre FS:             <name>
Mount type:            ldiskfs
Flags:                 0x1002
                       (OST no_primnode )
Persistent mount opts: errors=remount-ro
Parameters: mgsnode=<IP of the 1st MGS node>@tcp mgsnode=<IP of the 2nd MGS node>@tcp failover.node=<IP of the 1st OSS node>@tcp failover.node=<IP of the 2nd OSS node>@tcp

     Permanent disk data:
Target:                <name>-OST0009
Index:                 9
Lustre FS:             <name>
Mount type:            ldiskfs
Flags:                 0x1002
                       (OST no_primnode )
Persistent mount opts: errors=remount-ro
Parameters: mgsnode=<IP of the 1st MGS node>@tcp mgsnode=<IP of the 2nd MGS node>@tcp failover.node=<IP of the 1st OSS node>@tcp failover.node=<IP of the 2nd OSS node>@tcp

tunefs.lustre: Unable to mount /dev/sdb: Invalid argument

tunefs.lustre: FATAL: failed to write local files
tunefs.lustre: exiting with 22 (Invalid argument)

Now, to be clear, the MDS nodes are not working correctly, as I am not able to mount /dev/sdb on them, which is where the existing metadata is served out from. To this point I have been concentrating on the OSS nodes, as that is where the Lustre data is coming from. I have installed the Lustre kernel and the same software on the MDS nodes in the same way that I have on the OSS nodes. When I try to use tunefs.lustre /dev/sdb on the MDS nodes I get an error saying:

Checking for existing lustre data: not found

tunefs.lustre: FATAL: device /dev/sdb has not been formatted with mkfs.lustre
tunefs.lustre: exiting with 19 (no such device)

I am assuming that this is correct, as that attached LUN does not need to have Lustre data on it since it is the metadata server. Is there anything that I can or need to check on the MDS nodes to see what is running and working correctly?

I know that this is a lot and I appreciate any help that you can give me to troubleshoot this.

Dean



From: STEPHENS, DEAN - US
Sent: Monday, November 22, 2021 5:58 AM
To: Andreas Dilger <adilger at whamcloud.com>
Cc: Colin Faber <cfaber at gmail.com>; lustre-discuss at lists.lustre.org
Subject: RE: [lustre-discuss] Lustre and server upgrade

Thanks for the clarification. I am using llmount.sh to test the install of the OST and MDT, not to run in production. I hope to have more done today and will reach out to let you all know what I find.

Dean

From: Andreas Dilger <adilger at whamcloud.com>
Sent: Friday, November 19, 2021 5:25 PM
To: STEPHENS, DEAN - US <dean.stephens at caci.com>
Cc: Colin Faber <cfaber at gmail.com>; lustre-discuss at lists.lustre.org
Subject: Re: [lustre-discuss] Lustre and server upgrade

Dean,
it should be emphasized that "llmount.sh" and "llmountcleanup.sh" are for quickly formatting and mounting *TEST* filesystems.  They only create a few small (400MB) loopback files in /tmp and format them as OSTs and MDTs.  This should *NOT* be used on a production system, or you will be very sad when the files in /tmp disappear after the server is rebooted and/or they reformat your real filesystem devices.
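A quick way to see exactly what llmount.sh has touched (a sketch; the /tmp paths are the test-script defaults, as also seen in the output quoted later in this thread):

  ls -lh /tmp/lustre-*      # the small loopback image files (e.g. /tmp/lustre-mdt1) the script formats
  losetup -a                # the loop devices currently backing them
  mount -t lustre           # the test targets and client mounts under /mnt

None of this goes anywhere near the real /dev/sdX LUNs.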

I mention this here because it isn't clear to me whether you are using them for testing, or trying to get a real filesystem mounted.

Cheers, Andreas

On Nov 19, 2021, at 13:25, STEPHENS, DEAN - US via lustre-discuss <lustre-discuss at lists.lustre.org> wrote:

I also figured out how to clean up after the llmount.sh script is run. There is an llmountcleanup.sh that will do that.

Dean

From: STEPHENS, DEAN - US
Sent: Friday, November 19, 2021 1:08 PM
To: Colin Faber <cfaber at gmail.com>
Cc: lustre-discuss at lists.lustre.org
Subject: RE: [lustre-discuss] Lustre and server upgrade

One more thing that I have noticed using the llmount.sh script: the directories that were created by the script under /mnt have 000 set for the permissions. The ones that I have configured under /mnt/lustre are set to 750 permissions.

Is this something that needs to be fixed? I have these servers being configured via Puppet, and that is how the /mnt/lustre directories are being created and the permissions set.

Dean


From: STEPHENS, DEAN - US
Sent: Friday, November 19, 2021 7:14 AM
To: Colin Faber <cfaber at gmail.com>
Cc: lustre-discuss at lists.lustre.org
Subject: RE: [lustre-discuss] Lustre and server upgrade

The other question that I have is how to clean up after llmount.sh has been run. If I do a df on the server I see that mds1, ost1, and ost2 are still mounted under /mnt. Do I need to manually umount them, since llmount.sh completed successfully?
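If it helps, a sketch of the cleanup (mount point names as reported by df on the node; the script path follows the llmount.sh location mentioned elsewhere in this thread):

  sh /usr/lib64/lustre/tests/llmountcleanup.sh   # preferred: unmounts the test targets and detaches the loop devices
  # or, manually:
  umount /mnt/lustre /mnt/lustre2                # the test client mounts
  umount /mnt/<each test target shown by df>     # then the mds/ost test mounts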

Also, I have not done anything to my MDS node, so some direction on what to do there would be helpful as well.

Dean

From: STEPHENS, DEAN - US
Sent: Friday, November 19, 2021 7:00 AM
To: Colin Faber <cfaber at gmail.com>
Cc: lustre-discuss at lists.lustre.org
Subject: RE: [lustre-discuss] Lustre and server upgrade

Thanks for the help yesterday; I was able to install the Lustre kernel and software on a VM, including the test RPM.

This is what I did, following these directions: https://wiki.lustre.org/Installing_the_Lustre_Software#Lustre_Servers_with_LDISKFS_OSD_Support
Installed the Lustre kernel and kernel-devel (the other RPMs listed were not in my lustre-server repo)
Rebooted the VM
Installed kmod-lustre, kmod-lustre-osd-ldiskfs, lustre-osd-ldiskfs-mount, lustre, lustre-resource-agents, lustre-tests
Ran modprobe -v lustre (it did not show that it loaded kernel modules, as it has done in the past)
Ran lustre_rmmod (got an error "Module lustre in use")
Rebooted again
Ran llmount.sh and it looked like it completed successfully
Ran tunefs.lustre /dev/sdb (at the bottom of the output I am seeing "tunefs.lustre: Unable to mount /dev/sdb: Invalid argument", "tunefs.lustre: FATAL: failed to write local files" and "tunefs.lustre: exiting with 22 (Invalid argument)")

Any idea what the "Invalid argument" is referring to?
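One way to narrow that down (a sketch; /dev/sdb as in the output above):

  tunefs.lustre --dryrun /dev/sdb              # read-only: prints CONFIGS/mountdata without trying to remount the target
  dmesg | tail -30                             # run right after the failing command; the kernel log usually states the real reason behind the EINVAL
  lsmod | egrep 'ldiskfs|osd_ldiskfs|lustre'   # confirm the ldiskfs and osd-ldiskfs modules are actually loaded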

Dean

From: Colin Faber <cfaber at gmail.com>
Sent: Thursday, November 18, 2021 3:34 PM
To: STEPHENS, DEAN - US <dean.stephens at caci.com>
Cc: lustre-discuss at lists.lustre.org
Subject: Re: [lustre-discuss] Lustre and server upgrade

The VM will need a full install of all server packages, as well as the tests package to allow for this test.

On Thu, Nov 18, 2021 at 2:26 PM STEPHENS, DEAN - US <dean.stephens at caci.com> wrote:
I have not tried that, but I can do that on a new VM that I can create. I assume all that I need is the lustre-tests RPM and associated dependencies, and not the full-blown Lustre install?

Dean

From: Colin Faber <cfaber at gmail.com>
Sent: Thursday, November 18, 2021 2:22 PM
To: STEPHENS, DEAN - US <dean.stephens at caci.com>
Cc: lustre-discuss at lists.lustre.org
Subject: Re: [lustre-discuss] Lustre and server upgrade

So that indicates that your installation is incomplete, or something else is preventing lustre, ldiskfs, and possibly other modules from loading.  Have you been able to reproduce this behavior on a fresh RHEL install with Lustre 2.12.7 (i.e., llmount.sh failing)?

-cf


On Thu, Nov 18, 2021 at 2:20 PM STEPHENS, DEAN - US <dean.stephens at caci.com> wrote:
Thanks for the direction. I found it and installed lustre-tests.x86_64, and now I have llmount.sh; it defaulted to /usr/lib64/lustre/tests/llmount.sh, and when I ran it, it failed with:

Stopping clients: <hostname> /mnt/lustre (opts: -f)
Stopping clients: <hostname> /mnt/lustre2 (opts: -f)
Loading modules from /usr/lib64/lustre/tests/..
Detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
Formatting mgs, mds, osts
Format mds1: /tmp/lustre-mdt1
mkfs.lustre: Unable to mount /dev/loop0: No such device (even though /dev/loop0 is a thing)
Is the ldiskfs module loaded?

mkfs.lustre FATAL: failed to write local files
mkfs.lustre: exiting with 19 (no such device)
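A few checks that speak directly to that error (a sketch, not taken from the thread):

  uname -r                     # the running kernel must match the /lib/modules/<version> directory that holds ldiskfs.ko
  modinfo ldiskfs              # verifies the module can be found for the running kernel
  modprobe -v ldiskfs && lsmod | grep ldiskfs
  modprobe -v osd_ldiskfs      # loading the OSD module also pulls in ldiskfs as a dependency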

From: Colin Faber <cfaber at gmail.com>
Sent: Thursday, November 18, 2021 2:03 PM
To: STEPHENS, DEAN - US <dean.stephens at caci.com>
Cc: lustre-discuss at lists.lustre.org
Subject: Re: [lustre-discuss] Lustre and server upgrade

This would be part of the lustre-tests RPM package and will install llmount.sh to /usr/lib/lustre/tests/llmount.sh I believe.

On Thu, Nov 18, 2021 at 1:45 PM STEPHENS, DEAN - US <dean.stephens at caci.com> wrote:
Not sure what you mean by “If you install the test suite”. I am not seeing a llmount.sh file on the server using “locate llmount.sh” at this point. What are the steps to install the test suite?

Dean

From: Colin Faber <cfaber at gmail.com>
Sent: Thursday, November 18, 2021 1:34 PM
To: STEPHENS, DEAN - US <dean.stephens at caci.com>
Cc: lustre-discuss at lists.lustre.org
Subject: Re: [lustre-discuss] Lustre and server upgrade

Hm, if you install the test suite, does llmount.sh succeed? This should set up a single-node cluster on whatever node you're running Lustre on, and I believe it will load modules as needed (IIRC). If this test succeeds, then you know that Lustre is installed correctly (or correctly enough); if not, I'd focus on the installation, as the target issue may be a red herring.

-cf


On Thu, Nov 18, 2021 at 1:01 PM STEPHENS, DEAN - US <dean.stephens at caci.com> wrote:
Thanks for the fast reply.
When I do the tunefs.lustre /dev/sdX command I get:
Target: <name>-OST0009
Index: 9

Target: <name>-OST0008
Index: 8
I spot checked some others and they seem to be good with the exception of one. It shows:

Target: <name>-OST000a
Index: 10

But since there are 11 LUNs attached, that makes sense to me.

As far as the upgrade goes, it was a fresh install using the legacy targets, as the OSS and MDS nodes are virtual machines with the LUN disks attached to them, so Red Hat sees them as /dev/sdX devices.

When I loaded Lustre on the server I did a yum install lustre, and since we were pointed at the lustre-2.12 repo in our environment it picked up the following RPMs to install:
lustre-resource-agents-2.12.6-1.el7.x86_64
kmod-lustre-2.12.6-1.el7.x86_64
kmod-zfs-3.10.0-1160.2.1.el7_lustre.x86_64-0.7.13-1.el7.x86_64
kmod-lustre-osd-zfs-2.12.6-1.el7.x86_64
lustre-2.12.6-1.el7.x86_64
kmod-spl-3.10.0-1160.2.1.el7_lustre.x86_64-0.7.13-1.el7.x86_64
lustre-osd-zfs-mount-2.12.6-1.el7.x86_64
lustre-osd-ldiskfs-mount-2.12.6-1.el7.x86_64

Dean

From: Colin Faber <cfaber at gmail.com>
Sent: Thursday, November 18, 2021 12:35 PM
To: STEPHENS, DEAN - US <dean.stephens at caci.com>
Cc: lustre-discuss at lists.lustre.org
Subject: Re: [lustre-discuss] Lustre and server upgrade


Hi,

I believe that sometime in 2.10 (someone correct me if I'm wrong) the index parameter became required and needs to be specified. On an existing system this should already be set, but can you check the parameters line with tunefs.lustre for correct index=N values across your storage nodes?
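For example, something along these lines on each OSS (assuming the OST LUNs are still /dev/sdb through /dev/sdl, as described later in this thread):

  for d in /dev/sd[b-l]; do
      echo "== $d =="
      tunefs.lustre --dryrun $d 2>&1 | egrep 'Target:|Index:|Parameters:'
  done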

Also, with your "upgrade", was this a fresh install utilizing legacy targets?

The last thing I can think of: IIRC, there were on-disk format changes between 2.5 and 2.12. These should be transparent to you, but some other issue may be preventing a successful upgrade, though the missing-module error really speaks to possible issues around how Lustre was installed and loaded on the system.

Cheers!

-cf


On Thu, Nov 18, 2021 at 12:24 PM STEPHENS, DEAN - US via lustre-discuss <lustre-discuss at lists.lustre.org> wrote:
I am by no means a Lustre expert and am seeking some help with our system. I am not able to post log files, as the servers are in the closed area with no access to the Internet.

Here is a bit of history of our system:
The OSS and MDS nodes were RHEL6, running the Lustre server kernel 2.6.32-431.23.3.el6_lustre.x86_64 and Lustre version 2.5.3; the client version was 2.10. That was in a working state.
We upgraded the OSS and MDS nodes to RHEL7 and installed the Lustre server 2.12 software and kernel.
The attached 11 LUNs are showing up as /dev/sdb - /dev/sdl
Right now, on the OSS nodes, if I use the command tunefs.lustre /dev/sdb I get some data back saying that Lustre data has been found, but at the bottom of the output it shows "tunefs.lustre: Unable to mount /dev/sdb: No such device" and "Is the ldiskfs module available?"
When I do a "modprobe -v lustre" I do not see ldiskfs.ko being loaded, even though there is an ldiskfs.ko file in the /lib/modules/3.10.0-1160.2.1.el7_lustre.x86_64/extra/lustre/fs directory. I am not sure how to get it to load via the modprobe command.
I used "insmod /lib/modules/3.10.0-1160.2.1.el7_lustre.x86_64/extra/lustre/fs/ldiskfs.ko" and re-ran the "tunefs.lustre /dev/sdb" command with the same result.
If I use the same command on the MDS nodes I get "no Lustre data found" and "/dev/sdb has not been formatted with mkfs.lustre". I am not sure that is what is needed here, as the MDS nodes do not really have the Lustre data since they are the metadata servers.
I tried to use the command “tunefs.lustre --mgs --erase_params --mgsnode=<IP address>@tcp --writeconf --dryrun /dev/sdb” and get the error “/dev/sdb has not been formatted with mkfs.lustre”.

I need some help and guidance and I can provide what may be needed though it will need to be typed out as I am not able to get actual log files from the system.

Dean Stephens
CACI
Linux System Admin



Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Whamcloud
