[lustre-discuss] Installing lustre 2.15.6 server on rhel-8.10 fails
Carlos Adean
carlosadean at linea.org.br
Mon Apr 28 13:15:08 PDT 2025
Hi Martin,
I really appreciate the help.
My answers are inline below.
> One question: are you using the precompiled Lustre RPMs (e.g. those
> available from:
> https://downloads.whamcloud.com/public/lustre/lustre-2.15.6/ ) or are you
> compiling your own RPMs from the Lustre git repository (
> https://github.com/lustre/lustre-release ) ?
>
> In our case we use the second approach and I think it is better for two
> reasons:
>
> 1- You make sure that everything is consistent, especially with your MOFED
> environment
>
> 2- You are not forced to use the specific versions corresponding to the
> tags exactly; you can choose any version available in the git repository or
> cherry-pick the fixes you think are useful (more details on this later).
>
>
Precompiled RPMs.
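
For anyone else following the thread, my rough understanding of the
build-from-git workflow you describe is sketched below. I have not run these
exact commands; the tag name and the kernel/MOFED source paths are
placeholders that would need adjusting to the local environment.

    # Untested sketch: building Lustre server RPMs from the git repository.
    # The tag and the --with-linux / --with-o2ib paths are placeholders.
    git clone https://github.com/lustre/lustre-release.git
    cd lustre-release
    git checkout 2.15.6          # or any branch/commit, plus cherry-picked fixes
    sh autogen.sh
    ./configure --enable-server \
        --with-linux=/usr/src/kernels/$(uname -r) \
        --with-o2ib=/usr/src/ofa_kernel/default   # point at the MOFED sources
    make rpms                    # builds the server RPMs against that kernel/MOFED
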
> In our case we upgraded a small HPC cluster last week, using RHEL 8 for the
> file server and RHEL 9 for the clients. The upgrade was successful and so
> far we have had no problems related to MOFED, Lustre, PMIx, Slurm, or MPI
> (including MPI-IO).
>
Your upgrade scenario is similar to ours. We’re upgrading our servers from
RHEL 7 with Lustre 2.12.6 to RHEL 8.10 with Lustre 2.15.x. The clients
previously ran RHEL 7 and will now run RHEL 9.5.
> Our upgrade is described in a message posted on this mailing list on April
> 7th:
>
> http://lists.lustre.org/pipermail/lustre-discuss-lustre.org/2025-April/019471.html
>
Actually, our Lustre environment is a bit more complex. It has approximately
570 TB of capacity, organized into two tiers: T0 (70 TB) and T1 (500 TB).
Its infrastructure is composed of two MDS servers connected to a Dell
ME4024 storage array, and four OSS servers. Two of these OSS nodes are
equipped with NVMe SSDs and provide the T0 tier (high-performance scratch
space), while the other two OSS nodes are connected via SAS to two ME4084
storage arrays, supporting the T1 tier (long-term data). The entire system
operates with high availability (HA) and load balancing (LB) mechanisms.
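
Just to illustrate how a layout like that is often exposed to users (this is
not necessarily exactly how ours is configured), a T0/T1 split can be
expressed with OST pools, roughly as follows; the filesystem name "lustre",
the pool names, the OST indices, and the path below are all made up.

    # Illustration only: assumes the tiers map to OST pools named "t0" and "t1".
    lctl pool_new lustre.t0
    lctl pool_add lustre.t0 lustre-OST[0-1]   # the NVMe OSTs (T0 scratch)
    lctl pool_new lustre.t1
    lctl pool_add lustre.t1 lustre-OST[2-5]   # the SAS/ME4084 OSTs (T1)
    lfs setstripe -p t0 /lustre/scratch       # directories whose files should land on T0
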
Cheers,
---
Carlos Adean
www.linea.org.br