From: Yury Norov <yury.norov@gmail.com>
To: linux-kernel@vger.kernel.org, Yury Norov, "James E.J. Bottomley",
 "Martin K. Petersen", Michał Mirosław, "Paul E. McKenney",
Wysocki" , Alexander Shishkin , Alexey Klimov , Amitkumar Karwar , Andi Kleen , Andrew Lunn , Andrew Morton , Andy Gross , Andy Lutomirski , Andy Shevchenko , Anup Patel , Ard Biesheuvel , Arnaldo Carvalho de Melo , Arnd Bergmann , Borislav Petkov , Catalin Marinas , Christoph Hellwig , Christoph Lameter , Daniel Vetter , Dave Hansen , David Airlie , David Laight , Dennis Zhou , Emil Renner Berthing , Geert Uytterhoeven , Geetha sowjanya , Greg Kroah-Hartman , Guo Ren , Hans de Goede , Heiko Carstens , Ian Rogers , Ingo Molnar , Jakub Kicinski , Jason Wessel , Jens Axboe , Jiri Olsa , Joe Perches , Jonathan Cameron , Juri Lelli , Kees Cook , Krzysztof Kozlowski , Lee Jones , Marc Zyngier , Marcin Wojtas , Mark Gross , Mark Rutland , Matti Vaittinen , Mauro Carvalho Chehab , Mel Gorman , Michael Ellerman , Mike Marciniszyn , Nicholas Piggin , Palmer Dabbelt , Peter Zijlstra , Petr Mladek , Randy Dunlap , Rasmus Villemoes , Russell King , Saeed Mahameed , Sagi Grimberg , Sergey Senozhatsky , Solomon Peachy , Stephen Boyd , Stephen Rothwell , Steven Rostedt , Subbaraya Sundeep , Sudeep Holla , Sunil Goutham , Tariq Toukan , Tejun Heo , Thomas Bogendoerfer , Thomas Gleixner , Ulf Hansson , Vincent Guittot , Vineet Gupta , Viresh Kumar , Vivien Didelot , Vlastimil Babka , Will Deacon , bcm-kernel-feedback-list@broadcom.com, kvm@vger.kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-crypto@vger.kernel.org, linux-csky@vger.kernel.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-perf-users@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-snps-arc@lists.infradead.org, linuxppc-dev@lists.ozlabs.org Subject: [PATCH 09/17] lib/cpumask: add cpumask_weight_{eq,gt,ge,lt,le} Date: Sat, 18 Dec 2021 13:20:05 -0800 Message-Id: <20211218212014.1315894-10-yury.norov@gmail.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211218212014.1315894-1-yury.norov@gmail.com> References: <20211218212014.1315894-1-yury.norov@gmail.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 2339B1C003A X-Stat-Signature: jbj8igwzpnfx18zfamo4aif77ts546jo Authentication-Results: imf18.hostedemail.com; dkim=pass header.d=gmail.com header.s=20210112 header.b=eGpZJJOz; spf=pass (imf18.hostedemail.com: domain of yury.norov@gmail.com designates 209.85.210.51 as permitted sender) smtp.mailfrom=yury.norov@gmail.com; dmarc=pass (policy=none) header.from=gmail.com X-Rspamd-Server: rspam10 X-HE-Tag: 1639862442-944528 Content-Transfer-Encoding: quoted-printable X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Kernel code calls cpumask_weight() to compare the weight of cpumask with a given number. We can do it more efficiently with cpumask_weight_{eq, ..= .} because conditional cpumask_weight may stop traversing the cpumask earlie= r, as soon as condition is met. 
Signed-off-by: Yury Norov <yury.norov@gmail.com>
---
 arch/ia64/mm/tlb.c                       |  2 +-
 arch/mips/cavium-octeon/octeon-irq.c     |  4 +-
 arch/mips/kernel/crash.c                 |  2 +-
 arch/powerpc/kernel/smp.c                |  2 +-
 arch/powerpc/kernel/watchdog.c           |  2 +-
 arch/powerpc/xmon/xmon.c                 |  4 +-
 arch/s390/kernel/perf_cpum_cf.c          |  2 +-
 arch/x86/kernel/smpboot.c                |  4 +-
 drivers/firmware/psci/psci_checker.c     |  2 +-
 drivers/hv/channel_mgmt.c                |  4 +-
 drivers/infiniband/hw/hfi1/affinity.c    |  9 ++---
 drivers/infiniband/hw/qib/qib_file_ops.c |  2 +-
 drivers/infiniband/hw/qib/qib_iba7322.c  |  2 +-
 drivers/scsi/lpfc/lpfc_init.c            |  2 +-
 drivers/soc/fsl/qbman/qman_test_stash.c  |  2 +-
 include/linux/cpumask.h                  | 50 ++++++++++++++++++++++++
 kernel/sched/core.c                      |  8 ++--
 kernel/sched/topology.c                  |  2 +-
 kernel/time/clockevents.c                |  2 +-
 19 files changed, 78 insertions(+), 29 deletions(-)

diff --git a/arch/ia64/mm/tlb.c b/arch/ia64/mm/tlb.c
index 135b5135cace..a5bce13ab047 100644
--- a/arch/ia64/mm/tlb.c
+++ b/arch/ia64/mm/tlb.c
@@ -332,7 +332,7 @@ __flush_tlb_range (struct vm_area_struct *vma, unsigned long start,
 
 	preempt_disable();
 #ifdef CONFIG_SMP
-	if (mm != current->active_mm || cpumask_weight(mm_cpumask(mm)) != 1) {
+	if (mm != current->active_mm || !cpumask_weight_eq(mm_cpumask(mm), 1)) {
 		ia64_global_tlb_purge(mm, start, end, nbits);
 		preempt_enable();
 		return;
diff --git a/arch/mips/cavium-octeon/octeon-irq.c b/arch/mips/cavium-octeon/octeon-irq.c
index 844f882096e6..914871f15fb7 100644
--- a/arch/mips/cavium-octeon/octeon-irq.c
+++ b/arch/mips/cavium-octeon/octeon-irq.c
@@ -763,7 +763,7 @@ static void octeon_irq_cpu_offline_ciu(struct irq_data *data)
 	if (!cpumask_test_cpu(cpu, mask))
 		return;
 
-	if (cpumask_weight(mask) > 1) {
+	if (cpumask_weight_gt(mask, 1)) {
 		/*
 		 * It has multi CPU affinity, just remove this CPU
 		 * from the affinity set.
@@ -795,7 +795,7 @@ static int octeon_irq_ciu_set_affinity(struct irq_data *data,
 	 * This removes the need to do locking in the .ack/.eoi
 	 * functions.
 	 */
-	if (cpumask_weight(dest) != 1)
+	if (!cpumask_weight_eq(dest, 1))
 		return -EINVAL;
 
 	if (!enable_one)
diff --git a/arch/mips/kernel/crash.c b/arch/mips/kernel/crash.c
index 81845ba04835..5b690d52491f 100644
--- a/arch/mips/kernel/crash.c
+++ b/arch/mips/kernel/crash.c
@@ -72,7 +72,7 @@ static void crash_kexec_prepare_cpus(void)
 	 */
 	pr_emerg("Sending IPI to other cpus...\n");
 	msecs = 10000;
-	while ((cpumask_weight(&cpus_in_crash) < ncpus) && (--msecs > 0)) {
+	while (cpumask_weight_lt(&cpus_in_crash, ncpus) && (--msecs > 0)) {
 		cpu_relax();
 		mdelay(1);
 	}
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index c338f9d8ab37..00da2064ddf3 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -1655,7 +1655,7 @@ void start_secondary(void *unused)
 		if (has_big_cores)
 			sibling_mask = cpu_smallcore_mask;
 
-		if (cpumask_weight(mask) > cpumask_weight(sibling_mask(cpu)))
+		if (cpumask_weight_gt(mask, cpumask_weight(sibling_mask(cpu))))
 			shared_caches = true;
 	}
 
diff --git a/arch/powerpc/kernel/watchdog.c b/arch/powerpc/kernel/watchdog.c
index bfc27496fe7e..62937a077de7 100644
--- a/arch/powerpc/kernel/watchdog.c
+++ b/arch/powerpc/kernel/watchdog.c
@@ -483,7 +483,7 @@ static void start_watchdog(void *arg)
 
 	wd_smp_lock(&flags);
 	cpumask_set_cpu(cpu, &wd_cpus_enabled);
-	if (cpumask_weight(&wd_cpus_enabled) == 1) {
+	if (cpumask_weight_eq(&wd_cpus_enabled, 1)) {
 		cpumask_set_cpu(cpu, &wd_smp_cpus_pending);
 		wd_smp_last_reset_tb = get_tb();
 	}
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index f9ae0b398260..b9e9d0b20a7b 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -469,7 +469,7 @@ static bool wait_for_other_cpus(int ncpus)
 
 	/* We wait for 2s, which is a metric "little while" */
 	for (timeout = 20000; timeout != 0; --timeout) {
-		if (cpumask_weight(&cpus_in_xmon) >= ncpus)
+		if (cpumask_weight_ge(&cpus_in_xmon, ncpus))
 			return true;
 		udelay(100);
 		barrier();
@@ -1338,7 +1338,7 @@ static int cpu_cmd(void)
 	case 'S':
 	case 't':
 		cpumask_copy(&xmon_batch_cpus, &cpus_in_xmon);
-		if (cpumask_weight(&xmon_batch_cpus) <= 1) {
+		if (cpumask_weight_le(&xmon_batch_cpus, 1)) {
 			printf("There are no other cpus in xmon\n");
 			break;
 		}
diff --git a/arch/s390/kernel/perf_cpum_cf.c b/arch/s390/kernel/perf_cpum_cf.c
index ee8707abdb6a..4d217f7f5ccf 100644
--- a/arch/s390/kernel/perf_cpum_cf.c
+++ b/arch/s390/kernel/perf_cpum_cf.c
@@ -975,7 +975,7 @@ static int cfset_all_start(struct cfset_request *req)
 		return -ENOMEM;
 	cpumask_and(mask, &req->mask, cpu_online_mask);
 	on_each_cpu_mask(mask, cfset_ioctl_on, &p, 1);
-	if (atomic_read(&p.cpus_ack) != cpumask_weight(mask)) {
+	if (!cpumask_weight_eq(mask, atomic_read(&p.cpus_ack))) {
 		on_each_cpu_mask(mask, cfset_ioctl_off, &p, 1);
 		rc = -EIO;
 		debug_sprintf_event(cf_dbg, 4, "%s CPUs missing", __func__);
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 617012f4619f..e851e9945eb5 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1608,7 +1608,7 @@ static void remove_siblinginfo(int cpu)
 		/*/
 		 * last thread sibling in this cpu core going down
 		 */
-		if (cpumask_weight(topology_sibling_cpumask(cpu)) == 1)
+		if (cpumask_weight_eq(topology_sibling_cpumask(cpu), 1))
 			cpu_data(sibling).booted_cores--;
 	}
 
@@ -1617,7 +1617,7 @@
 
 	for_each_cpu(sibling, topology_sibling_cpumask(cpu)) {
 		cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));
-		if (cpumask_weight(topology_sibling_cpumask(sibling)) == 1)
+		if (cpumask_weight_eq(topology_sibling_cpumask(sibling), 1))
 			cpu_data(sibling).smt_active = false;
 	}
 
diff --git a/drivers/firmware/psci/psci_checker.c b/drivers/firmware/psci/psci_checker.c
index 116eb465cdb4..90c9473832a9 100644
--- a/drivers/firmware/psci/psci_checker.c
+++ b/drivers/firmware/psci/psci_checker.c
@@ -90,7 +90,7 @@ static unsigned int down_and_up_cpus(const struct cpumask *cpus,
 		 * cpu_down() checks the number of online CPUs before the TOS
 		 * resident CPU.
 		 */
-		if (cpumask_weight(offlined_cpus) + 1 == nb_available_cpus) {
+		if (cpumask_weight_eq(offlined_cpus, nb_available_cpus - 1)) {
 			if (ret != -EBUSY) {
 				pr_err("Unexpected return code %d while trying "
 				       "to power down last online CPU %d\n",
diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
index 2829575fd9b7..da297220230d 100644
--- a/drivers/hv/channel_mgmt.c
+++ b/drivers/hv/channel_mgmt.c
@@ -762,8 +762,8 @@ static void init_vp_index(struct vmbus_channel *channel)
 		}
 		alloced_mask = &hv_context.hv_numa_map[numa_node];
 
-		if (cpumask_weight(alloced_mask) ==
-		    cpumask_weight(cpumask_of_node(numa_node))) {
+		if (cpumask_weight_eq(alloced_mask,
+				cpumask_weight(cpumask_of_node(numa_node)))) {
 			/*
 			 * We have cycled through all the CPUs in the node;
 			 * reset the alloced map.
diff --git a/drivers/infiniband/hw/hfi1/affinity.c b/drivers/infiniband/hw/hfi1/affinity.c
index 38eee675369a..7c5ca5c5306a 100644
--- a/drivers/infiniband/hw/hfi1/affinity.c
+++ b/drivers/infiniband/hw/hfi1/affinity.c
@@ -507,7 +507,7 @@ static int _dev_comp_vect_cpu_mask_init(struct hfi1_devdata *dd,
 	 * available CPUs divide it by the number of devices in the
 	 * local NUMA node.
 	 */
-	if (cpumask_weight(&entry->comp_vect_mask) == 1) {
+	if (cpumask_weight_eq(&entry->comp_vect_mask, 1)) {
 		possible_cpus_comp_vect = 1;
 		dd_dev_warn(dd,
 			    "Number of kernel receive queues is too large for completion vector affinity to be effective\n");
@@ -593,7 +593,7 @@ int hfi1_dev_affinity_init(struct hfi1_devdata *dd)
 {
 	struct hfi1_affinity_node *entry;
 	const struct cpumask *local_mask;
-	int curr_cpu, possible, i, ret;
+	int curr_cpu, i, ret;
 	bool new_entry = false;
 
 	local_mask = cpumask_of_node(dd->node);
@@ -626,10 +626,9 @@ int hfi1_dev_affinity_init(struct hfi1_devdata *dd)
 						local_mask);
 
 		/* fill in the receive list */
-		possible = cpumask_weight(&entry->def_intr.mask);
 		curr_cpu = cpumask_first(&entry->def_intr.mask);
 
-		if (possible == 1) {
+		if (cpumask_weight_eq(&entry->def_intr.mask, 1)) {
 			/* only one CPU, everyone will use it */
 			cpumask_set_cpu(curr_cpu, &entry->rcv_intr.mask);
 			cpumask_set_cpu(curr_cpu, &entry->general_intr_mask);
@@ -1017,7 +1016,7 @@ int hfi1_get_proc_affinity(int node)
 		cpu = cpumask_first(proc_mask);
 		cpumask_set_cpu(cpu, &set->used);
 		goto done;
-	} else if (current->nr_cpus_allowed < cpumask_weight(&set->mask)) {
+	} else if (cpumask_weight_gt(&set->mask, current->nr_cpus_allowed)) {
 		hfi1_cdbg(PROC,
 			  "PID %u %s affinity set to CPU set(s) %*pbl",
 			  current->pid, current->comm, cpumask_pr_args(proc_mask));
diff --git a/drivers/infiniband/hw/qib/qib_file_ops.c b/drivers/infiniband/hw/qib/qib_file_ops.c
index aa290928cf96..add89bc21b0a 100644
--- a/drivers/infiniband/hw/qib/qib_file_ops.c
+++ b/drivers/infiniband/hw/qib/qib_file_ops.c
@@ -1151,7 +1151,7 @@ static void assign_ctxt_affinity(struct file *fp, struct qib_devdata *dd)
 	 * reserve a processor for it on the local NUMA node.
 	 */
 	if ((weight >= qib_cpulist_count) &&
-	    (cpumask_weight(local_mask) <= qib_cpulist_count)) {
+	    (cpumask_weight_le(local_mask, qib_cpulist_count))) {
 		for_each_cpu(local_cpu, local_mask)
 			if (!test_and_set_bit(local_cpu, qib_cpulist)) {
 				fd->rec_cpu_num = local_cpu;
diff --git a/drivers/infiniband/hw/qib/qib_iba7322.c b/drivers/infiniband/hw/qib/qib_iba7322.c
index ab98b6a3ae1e..636a080b2952 100644
--- a/drivers/infiniband/hw/qib/qib_iba7322.c
+++ b/drivers/infiniband/hw/qib/qib_iba7322.c
@@ -3405,7 +3405,7 @@ static void qib_setup_7322_interrupt(struct qib_devdata *dd, int clearpend)
 		local_mask = cpumask_of_pcibus(dd->pcidev->bus);
 		firstcpu = cpumask_first(local_mask);
 		if (firstcpu >= nr_cpu_ids ||
-		    cpumask_weight(local_mask) == num_online_cpus()) {
+		    cpumask_weight_eq(local_mask, num_online_cpus())) {
 			local_mask = topology_core_cpumask(0);
 			firstcpu = cpumask_first(local_mask);
 		}
diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index a56f01f659f8..325e9004dacd 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -12643,7 +12643,7 @@ lpfc_cpuhp_get_eq(struct lpfc_hba *phba, unsigned int cpu,
 		 * gone offline yet, we need >1.
 		 */
 		cpumask_and(tmp, maskp, cpu_online_mask);
-		if (cpumask_weight(tmp) > 1)
+		if (cpumask_weight_gt(tmp, 1))
 			continue;
 
 		/* Now that we have an irq to shutdown, get the eq
diff --git a/drivers/soc/fsl/qbman/qman_test_stash.c b/drivers/soc/fsl/qbman/qman_test_stash.c
index b7e8e5ec884c..28b08568a349 100644
--- a/drivers/soc/fsl/qbman/qman_test_stash.c
+++ b/drivers/soc/fsl/qbman/qman_test_stash.c
@@ -561,7 +561,7 @@ int qman_test_stash(void)
 {
 	int err;
 
-	if (cpumask_weight(cpu_online_mask) < 2) {
+	if (cpumask_weight_lt(cpu_online_mask, 2)) {
 		pr_info("%s(): skip - only 1 CPU\n", __func__);
 		return 0;
 	}
diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index 64dae70d31f5..1906e3225737 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -575,6 +575,56 @@ static inline unsigned int cpumask_weight(const struct cpumask *srcp)
 	return bitmap_weight(cpumask_bits(srcp), nr_cpumask_bits);
 }
 
+/**
+ * cpumask_weight_eq - Check if # of bits in *srcp is equal to a given number
+ * @srcp: the cpumask to count bits (< nr_cpu_ids) in.
+ * @num: the number to check.
+ */
+static inline bool cpumask_weight_eq(const struct cpumask *srcp, unsigned int num)
+{
+	return bitmap_weight_eq(cpumask_bits(srcp), nr_cpumask_bits, num);
+}
+
+/**
+ * cpumask_weight_gt - Check if # of bits in *srcp is greater than a given number
+ * @srcp: the cpumask to count bits (< nr_cpu_ids) in.
+ * @num: the number to check.
+ */
+static inline bool cpumask_weight_gt(const struct cpumask *srcp, int num)
+{
+	return bitmap_weight_gt(cpumask_bits(srcp), nr_cpumask_bits, num);
+}
+
+/**
+ * cpumask_weight_ge - Check if # of bits in *srcp is greater than or equal to a given number
+ * @srcp: the cpumask to count bits (< nr_cpu_ids) in.
+ * @num: the number to check.
+ */
+static inline bool cpumask_weight_ge(const struct cpumask *srcp, int num)
+{
+	return bitmap_weight_ge(cpumask_bits(srcp), nr_cpumask_bits, num);
+}
+
+/**
+ * cpumask_weight_lt - Check if # of bits in *srcp is less than a given number
+ * @srcp: the cpumask to count bits (< nr_cpu_ids) in.
+ * @num: the number to check.
+ */
+static inline bool cpumask_weight_lt(const struct cpumask *srcp, int num)
+{
+	return bitmap_weight_lt(cpumask_bits(srcp), nr_cpumask_bits, num);
+}
+
+/**
+ * cpumask_weight_le - Check if # of bits in *srcp is less than or equal to a given number
+ * @srcp: the cpumask to count bits (< nr_cpu_ids) in.
+ * @num: the number to check.
+ */
+static inline bool cpumask_weight_le(const struct cpumask *srcp, int num)
+{
+	return bitmap_weight_le(cpumask_bits(srcp), nr_cpumask_bits, num);
+}
+
 /**
  * cpumask_shift_right - *dstp = *srcp >> n
  * @dstp: the cpumask result
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 9b3ec14227e1..60f7d04a05f8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6006,7 +6006,7 @@ static void sched_core_cpu_starting(unsigned int cpu)
 	WARN_ON_ONCE(rq->core != rq);
 
 	/* if we're the first, we'll be our own leader */
-	if (cpumask_weight(smt_mask) == 1)
+	if (cpumask_weight_eq(smt_mask, 1))
 		goto unlock;
 
 	/* find the leader */
@@ -6047,7 +6047,7 @@ static void sched_core_cpu_deactivate(unsigned int cpu)
 	sched_core_lock(cpu, &flags);
 
 	/* if we're the last man standing, nothing to do */
-	if (cpumask_weight(smt_mask) == 1) {
+	if (cpumask_weight_eq(smt_mask, 1)) {
 		WARN_ON_ONCE(rq->core != rq);
 		goto unlock;
 	}
@@ -9053,7 +9053,7 @@ int sched_cpu_activate(unsigned int cpu)
 	/*
 	 * When going up, increment the number of cores with SMT present.
 	 */
-	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
+	if (cpumask_weight_eq(cpu_smt_mask(cpu), 2))
 		static_branch_inc_cpuslocked(&sched_smt_present);
 #endif
 	set_cpu_active(cpu, true);
@@ -9128,7 +9128,7 @@ int sched_cpu_deactivate(unsigned int cpu)
 	/*
 	 * When going down, decrement the number of cores with SMT present.
 	 */
-	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
+	if (cpumask_weight_eq(cpu_smt_mask(cpu), 2))
 		static_branch_dec_cpuslocked(&sched_smt_present);
 
 	sched_core_cpu_deactivate(cpu);
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 8478e2a8cd65..79395571599f 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -169,7 +169,7 @@ static const unsigned int SD_DEGENERATE_GROUPS_MASK =
 
 static int sd_degenerate(struct sched_domain *sd)
 {
-	if (cpumask_weight(sched_domain_span(sd)) == 1)
+	if (cpumask_weight_eq(sched_domain_span(sd), 1))
 		return 1;
 
 	/* Following flags need at least 2 groups */
diff --git a/kernel/time/clockevents.c b/kernel/time/clockevents.c
index 003ccf338d20..32d6629a55b2 100644
--- a/kernel/time/clockevents.c
+++ b/kernel/time/clockevents.c
@@ -648,7 +648,7 @@ void tick_cleanup_dead_cpu(int cpu)
 	 */
 	list_for_each_entry_safe(dev, tmp, &clockevent_devices, list) {
 		if (cpumask_test_cpu(cpu, dev->cpumask) &&
-		    cpumask_weight(dev->cpumask) == 1 &&
+		    cpumask_weight_eq(dev->cpumask, 1) &&
 		    !tick_is_broadcast_device(dev)) {
 			BUG_ON(!clockevent_state_detached(dev));
 			list_del(&dev->list);
-- 
2.30.2