[Lustre-discuss] Lustre 1.8.4 with new kernel 2.6.18-194.11.4
Jason Hill
hilljj at ornl.gov
Tue Sep 21 12:41:47 PDT 2010
Brian,
While I agree with the question, some of us have chosen to run the patched
Lustre kernel on user-facing client machines -- we see a 50-300 MB/s bump from
using a patched client. Those machines are user facing, and while we've put a
workaround in place, a patched client + kernel that does not require the
workaround is something we'd like to get to. So while I don't think most of the
community would have user accounts on their Lustre servers, the packages
provided by Oracle are not used solely for server purposes.
Yes, we could undertake building the kernels ourselves, and we're working on an
effort to do so -- so don't take this as me adding to the list of people who
would like this done "real soon now"; it's just another point of view.
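(Not from the thread, but for anyone following along: a minimal sketch of how a
site could check whether a client's running kernel is at least 2.6.18-194.11.4,
the EL5 build carrying the CVE-2010-3081 / RHSA-2010:0704 fix. The version
string handling and the use of `sort -V` are assumptions, not anything Oracle
or Red Hat ships.)

```shell
#!/bin/sh
# Hypothetical check: is the running kernel at or past the build that
# carries the CVE-2010-3081 fix? (Assumes GNU sort with -V support.)
fixed="2.6.18-194.11.4"
running="$(uname -r | sed 's/\.el5.*$//')"   # strip the .el5* suffix, if any

# sort -V orders version strings numerically; if the fixed version sorts
# first (or is equal), the running kernel is new enough.
if [ "$(printf '%s\n%s\n' "$fixed" "$running" | sort -V | head -n1)" = "$fixed" ]
then
    echo "kernel $running includes the RHSA-2010:0704 fix"
else
    echo "kernel $running predates the fix"
fi
```

On a vulnerable box this prints the "predates the fix" line, which is a cue to
schedule the update before re-enabling non-privileged user access.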
Thanks,
--
-Jason
-------------------------------------------------
// Jason J. Hill //
// HPC Systems Administrator //
// National Center for Computational Sciences //
// Oak Ridge National Laboratory //
// e-mail: hilljj at ornl.gov //
// Phone: (865) 576-5867 //
-------------------------------------------------
On Tue, Sep 21, 2010 at 03:25:01PM -0400, Brian J. Murrell wrote:
> On Tue, 2010-09-21 at 13:05 -0500, Mike Hanby wrote:
> > Are there any plans to build new Lustre 1.8.4 patched kernel packages for EL5 kernel 2.6.18-194.11.4
> >
> > This kernel has the patch that prevents the much talked about privilege escalation CVE-2010-3081:
> > https://rhn.redhat.com/errata/RHSA-2010-0704.html
>
> Without commenting one way or the other about whether we will produce a
> 1.8.4.1 to deal with this kernel issue (because I don't know), I'd ask
> do you have other (i.e. network) services or non-privileged user
> accounts on your Lustre servers?
>
> b.
>
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss