[lustre-devel] [PATCH v3 23/26] staging: lustre: ptlrpc: use current CPU instead of hardcoded 0
James Simmons
jsimmons at infradead.org
Mon Jun 25 15:51:38 PDT 2018
> > From: Dmitry Eremin <dmitry.eremin at intel.com>
> >
> > Fix a crash when CPU 0 is disabled.
> >
> > Signed-off-by: Dmitry Eremin <dmitry.eremin at intel.com>
> > WC-bug-id: https://jira.whamcloud.com/browse/LU-8710
> > Reviewed-on: https://review.whamcloud.com/23305
> > Reviewed-by: Doug Oucharek <dougso at me.com>
> > Reviewed-by: Andreas Dilger <adilger at whamcloud.com>
> > Signed-off-by: James Simmons <jsimmons at infradead.org>
> > ---
> > drivers/staging/lustre/lustre/ptlrpc/service.c | 11 ++++++-----
> > 1 file changed, 6 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/staging/lustre/lustre/ptlrpc/service.c b/drivers/staging/lustre/lustre/ptlrpc/service.c
> > index 3fd8c74..8e74a45 100644
> > --- a/drivers/staging/lustre/lustre/ptlrpc/service.c
> > +++ b/drivers/staging/lustre/lustre/ptlrpc/service.c
> > @@ -421,7 +421,7 @@ static void ptlrpc_at_timer(struct timer_list *t)
> > * there are.
> > */
> > /* weight is # of HTs */
> > - if (cpumask_weight(topology_sibling_cpumask(0)) > 1) {
> > + if (cpumask_weight(topology_sibling_cpumask(smp_processor_id())) > 1) {
>
> This pops a warning for me:
> [ 1877.516799] BUG: using smp_processor_id() in preemptible [00000000] code: mount.lustre/14077
>
> I'll change it to disable preemption, both here and below.
For .config I have:
# CONFIG_PREEMPT_NONE is not set
CONFIG_PREEMPT_VOLUNTARY=y
# CONFIG_PREEMPT is not set
What does yours look like? It's strange that no one has reported this
before; presumably the smp_processor_id() debug check (CONFIG_DEBUG_PREEMPT)
is only built on preemptible kernels, so a CONFIG_PREEMPT_VOLUNTARY config
like mine would never trip it. Thanks for finding this!
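
For the record, I assume the preemption-safe version of the first hunk
ends up looking roughly like this (untested sketch, not the actual
change; get_cpu() disables preemption and returns the current CPU id,
put_cpu() re-enables it, so the sibling mask is read against a stable
CPU), and similarly for the ptlrpc_hr_init() hunk below:

	int cpu = get_cpu();

	/* weight is # of HTs */
	if (cpumask_weight(topology_sibling_cpumask(cpu)) > 1) {
		/* depress thread factor for hyper-thread */
		factor = factor - (factor >> 1) + (factor >> 3);
	}
	put_cpu();
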
>
> > /* depress thread factor for hyper-thread */
> > factor = factor - (factor >> 1) + (factor >> 3);
> > }
> > @@ -2221,15 +2221,16 @@ static int ptlrpc_hr_main(void *arg)
> > struct ptlrpc_hr_thread *hrt = arg;
> > struct ptlrpc_hr_partition *hrp = hrt->hrt_partition;
> > LIST_HEAD(replies);
> > - char threadname[20];
> > int rc;
> >
> > - snprintf(threadname, sizeof(threadname), "ptlrpc_hr%02d_%03d",
> > - hrp->hrp_cpt, hrt->hrt_id);
> > unshare_fs_struct();
> >
> > rc = cfs_cpt_bind(ptlrpc_hr.hr_cpt_table, hrp->hrp_cpt);
> > if (rc != 0) {
> > + char threadname[20];
> > +
> > + snprintf(threadname, sizeof(threadname), "ptlrpc_hr%02d_%03d",
> > + hrp->hrp_cpt, hrt->hrt_id);
> > CWARN("Failed to bind %s on CPT %d of CPT table %p: rc = %d\n",
> > threadname, hrp->hrp_cpt, ptlrpc_hr.hr_cpt_table, rc);
> > }
> > @@ -2528,7 +2529,7 @@ int ptlrpc_hr_init(void)
> >
> > init_waitqueue_head(&ptlrpc_hr.hr_waitq);
> >
> > - weight = cpumask_weight(topology_sibling_cpumask(0));
> > + weight = cpumask_weight(topology_sibling_cpumask(smp_processor_id()));
> >
> > cfs_percpt_for_each(hrp, i, ptlrpc_hr.hr_partitions) {
> > hrp->hrp_cpt = i;
> > --
> > 1.8.3.1
>
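
For the ptlrpc_hr_init() hunk I assume the same pattern collapses to
something like:

	weight = cpumask_weight(topology_sibling_cpumask(get_cpu()));
	put_cpu();

(again untested; just pinning the CPU for the duration of the read).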