[lustre-devel] Survey request to help with LLM development for Lustre
John Bent
johnbent at gmail.com
Wed Jun 26 14:23:01 PDT 2024
Hello all,
Thanks to the respondents so far! Just a reminder that this survey is still
open and we are hoping to collect more responses. The early data shows that
human responses are preferred by almost a 3:1 ratio, but some LLM responses
are preferred over the human ones.
Any time spent helping with this will be repaid by me personally beaming
loving kindness into the universe during my next meditation session. Please
let me know how many minutes of loving kindness you want beamed.
Thanks!
John
On Tue, Jun 11, 2024 at 10:05 AM John Bent <johnbent at gmail.com> wrote:
> Dear Lustre Community Members,
>
> We hope this message finds you well.
>
> We are a research team from FIU and LANL. Following up on my recent talk
> at last month’s Lustre User Group (LUG) meeting (LINK
> <https://www.depts.ttu.edu/hpcc/events/LUG24/slides/Day2/LUG_2024_Talk_09-TASSI_John_Bent_LUG24.pdf>),
> we are reaching out to invite you to participate in a survey to evaluate
> the accuracy of Large Language Models (LLMs) in answering questions about
> Lustre. Your expertise and experience are crucial for assessing how well
> LLMs compare to human experts within our community.
>
> Here are the details of the survey:
>
> - Number of Questions: 10
> - Estimated Completion Time: 15 to 30 minutes
> - Link: https://forms.gle/MHEf2FBYTjyRioa16
>
> Note: We very much appreciate your time and contribution! Although it
> would be wonderful if all of you could answer all ten questions, we realize
> that this is a very large request. If you are able to help, please try to
> respond to at least 3 questions and feel free to select the option
> “I prefer not to answer” for the others.
>
> In this work, our ultimate goal is to improve the ability of local LLMs
> to help administrators, users, and developers of Lustre. We will publish
> the anonymized results of this survey back to this mailing list and hope to
> publish research results in peer-reviewed conferences as well as report
> back on our progress at future LUGs. Additionally, our work with local LLMs
> will be done in open source. Finally, we recognize that LLMs are evolving
> rapidly, so we plan to repeat this exercise with other systems such as Ceph,
> and to repeat it with Lustre again at some point in the future. We hope to
> present those results at a future LUG. Although no future LUG could ever
> surpass the awesome one just organized by the great folks at Texas Tech, we
> know that they are all pretty great. :)
>
> Thank you for your time and dedication to the Lustre community. Your
> insights are invaluable, and we eagerly await your input.
>
> In addition to the survey, we welcome any and all feedback on this
> specific exercise as well as our research in general.
>
> Thanks,
>
> Hohnpeng, John, Raju, and Yanzhao
>