[lustre-discuss] CPU usages of MDT/MDS and OST/OSS

Grigory Shamov Grigory.Shamov at umanitoba.ca
Mon Feb 25 12:13:47 PST 2019


Hi Masudul Hasan,

Have you looked at Prometheus? The general system metrics can be gathered with its node exporter:

https://github.com/prometheus/node_exporter
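
As a quick example (an untested sketch; the hostname "oss01" is a placeholder, and 9100 is node_exporter's default port), you can read CPU load straight from a node exporter even without a full Prometheus server:

    # Sample node_cpu_seconds_total twice and derive a rough CPU busy
    # fraction from the change in idle time, summed across all CPUs.
    import time
    import urllib.request

    EXPORTER_URL = "http://oss01:9100/metrics"  # placeholder host

    def cpu_seconds(url):
        """Return (idle_seconds, total_seconds) summed over all CPUs."""
        idle = total = 0.0
        with urllib.request.urlopen(url) as resp:
            for raw in resp:
                line = raw.decode()
                if not line.startswith("node_cpu_seconds_total"):
                    continue
                labels, value = line.rsplit(" ", 1)
                total += float(value)
                if 'mode="idle"' in labels:
                    idle += float(value)
        return idle, total

    idle1, total1 = cpu_seconds(EXPORTER_URL)
    time.sleep(10)
    idle2, total2 = cpu_seconds(EXPORTER_URL)
    busy = 1.0 - (idle2 - idle1) / (total2 - total1)
    print("approx CPU busy fraction: %.1f%%" % (busy * 100))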

And HPE has produced a working exporter for Lustre metrics:

https://github.com/HewlettPackard/lustre_exporter
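
Once Prometheus is scraping both exporters, per-server CPU load is one PromQL query away. A minimal sketch, assuming a Prometheus server at "prometheus:9090" that already scrapes node_exporter on the MDS/OSS hosts (both names are placeholders):

    # Ask Prometheus for the average non-idle CPU percentage per host
    # over the last 5 minutes, via the standard HTTP query API.
    import json
    import urllib.parse
    import urllib.request

    PROM_URL = "http://prometheus:9090/api/v1/query"  # placeholder server
    QUERY = ('100 * (1 - avg by (instance) '
             '(rate(node_cpu_seconds_total{mode="idle"}[5m])))')

    resp = urllib.request.urlopen(
        PROM_URL + "?" + urllib.parse.urlencode({"query": QUERY}))
    for series in json.load(resp)["data"]["result"]:
        instance = series["metric"]["instance"]
        _, value = series["value"]  # [unix timestamp, value string]
        print("%s: %.1f%% CPU busy" % (instance, float(value)))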

--
Grigory Shamov
WestGrid Site Lead / HPC Specialist
University of Manitoba
E2-588 EITC Building,
(204) 474-9625


From: lustre-discuss <lustre-discuss-bounces at lists.lustre.org> on behalf of Masudul Hasan Masud Bhuiyan <masud.hasan at nevada.unr.edu>
Date: Monday, February 25, 2019 at 2:06 PM
To: "lustre-discuss at lists.lustre.org" <lustre-discuss at lists.lustre.org>
Subject: [lustre-discuss] CPU usages of MDT/MDS and OST/OSS

I need to know the CPU usage of a particular OST and MDT. I have seen that this information is available directly to the system administrator, but I don't have access to it. So I was wondering: how can I get an idea of the CPU load from other metrics? Which other metrics can give a rough idea of the CPU load on the OST/MDT?

These are the available metrics for the OST (a sketch that samples them follows the list):

active
blocksize
checksum_dump
checksums
checksum_type
connect_flags
contention_seconds
cur_dirty_bytes
cur_dirty_grant_bytes
cur_grant_bytes
cur_lost_grant_bytes
destroys_in_flight
filesfree
filestotal
grant_shrink_interval
import
kbytesavail
kbytesfree
kbytestotal
lockless_truncate
max_dirty_mb
max_pages_per_rpc
max_rpcs_in_flight
osc_cached_mb
osc_stats
ost_conn_uuid
ost_server_uuid
ping
pinger_recov
resend_count
rpc_stats
srpc_contexts
srpc_info
state
stats
timeouts
unstable_stats
uuid
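
None of these client-side OSC counters reports server CPU directly, but sampling the stats file twice gives per-target request rates, which at least correlate with load on the serving OST. A minimal, untested sketch, assuming the stats files are readable at /proc/fs/lustre/osc/*/stats (the exact path varies by Lustre version; "lctl get_param osc.*.stats" may be needed instead):

    # Diff cumulative event counts from a Lustre OSC stats file over a
    # short interval to estimate per-target request rates.
    import glob
    import time

    INTERVAL = 5.0  # seconds between samples

    def sample_counts(path):
        """Map counter name -> cumulative event count from a stats file."""
        counts = {}
        with open(path) as f:
            for line in f:
                fields = line.split()
                if len(fields) >= 2 and fields[1].isdigit():
                    counts[fields[0]] = int(fields[1])
        return counts

    for path in glob.glob("/proc/fs/lustre/osc/*/stats"):
        before = sample_counts(path)
        time.sleep(INTERVAL)
        after = sample_counts(path)
        print(path)
        for name, start in sorted(before.items()):
            rate = (after.get(name, start) - start) / INTERVAL
            if rate > 0:
                print("  %s: %.1f events/sec" % (name, rate))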

Regards.

