Date: Tue, 1 Jul 2025 13:07:53 -0500
From: Dan Carpenter <dan.carpenter@linaro.org>
To: Joshua Hahn
Cc: linux-mm@kvack.org
Subject: [bug report] mm/mempolicy: skip extra call to __alloc_pages_bulk in weighted interleave
Message-ID: <22b9509b-3d92-4ecd-91b4-7a08c1da956c@sabinyo.mountain>
Hello Joshua Hahn,

Commit 72cf498b9e8a ("mm/mempolicy: skip extra call to __alloc_pages_bulk
in weighted interleave") from Jun 26, 2025 (linux-next), leads to the
following Smatch static checker warning:

	mm/mempolicy.c:2641 alloc_pages_bulk_weighted_interleave()
	error: uninitialized symbol 'i'.

mm/mempolicy.c
  2559  static unsigned long alloc_pages_bulk_weighted_interleave(gfp_t gfp,
  2560                  struct mempolicy *pol, unsigned long nr_pages,
  2561                  struct page **page_array)
  2562  {
  2563          struct weighted_interleave_state *state;
  2564          struct task_struct *me = current;
  2565          unsigned int cpuset_mems_cookie;
  2566          unsigned long total_allocated = 0;
  2567          unsigned long nr_allocated = 0;
  2568          unsigned long rounds;
  2569          unsigned long node_pages, delta;
  2570          u8 *weights, weight;
  2571          unsigned int weight_total = 0;
  2572          unsigned long rem_pages = nr_pages, carryover = 0;
  2573          nodemask_t nodes;
  2574          int nnodes, node;
  2575          int resume_node = MAX_NUMNODES - 1;
  2576          u8 resume_weight = 0;
  2577          int prev_node;
  2578          int i;
  2579
  2580          if (!nr_pages)
  2581                  return 0;
  2582
  2583          /* read the nodes onto the stack, retry if done during rebind */
  2584          do {
  2585                  cpuset_mems_cookie = read_mems_allowed_begin();
  2586                  nnodes = read_once_policy_nodemask(pol, &nodes);
  2587          } while (read_mems_allowed_retry(cpuset_mems_cookie));
  2588
  2589          /* if the nodemask has become invalid, we cannot do anything */
  2590          if (!nnodes)
  2591                  return 0;
  2592
  2593          /* Continue allocating from most recent node and adjust the nr_pages */
  2594          node = me->il_prev;
  2595          weight = me->il_weight;
  2596          if (weight && node_isset(node, nodes)) {
  2597                  if (rem_pages <= weight) {
  2598                          node_pages = rem_pages;
  2599                          me->il_weight -= node_pages;
  2600                          goto allocate;

i is not initialized here.

  2601                  }
  2602                  carryover = weight;
  2603          }
  2604          /* clear active weight in case of an allocation failure */
  2605          me->il_weight = 0;
  2606          prev_node = node;
  2607
  2608          /* create a local copy of node weights to operate on outside rcu */
  2609          weights = kzalloc(nr_node_ids, GFP_KERNEL);
  2610          if (!weights)
  2611                  return 0;
  2612
  2613          rcu_read_lock();
  2614          state = rcu_dereference(wi_state);
  2615          if (state) {
  2616                  memcpy(weights, state->iw_table, nr_node_ids * sizeof(u8));
  2617                  rcu_read_unlock();
  2618          } else {
  2619                  rcu_read_unlock();
  2620                  for (i = 0; i < nr_node_ids; i++)
  2621                          weights[i] = 1;
  2622          }
  2623
  2624          /* calculate total, detect system default usage */
  2625          for_each_node_mask(node, nodes)
  2626                  weight_total += weights[node];
  2627
  2628          /*
  2629           * Calculate rounds/partial rounds to minimize __alloc_pages_bulk calls.
  2630           * Track which node weighted interleave should resume from.
  2631           * Account for carryover. It is always allocated from the first node.
  2632           *
  2633           * if (rounds > 0) and (delta == 0), resume_node will always be
  2634           * the node following prev_node and its weight.
  2635           */
  2636          rounds = (rem_pages - carryover) / weight_total;
  2637          delta = (rem_pages - carryover) % weight_total;
  2638          resume_node = next_node_in(prev_node, nodes);
  2639          resume_weight = weights[resume_node];
  2640          node = carryover ? prev_node : next_node_in(prev_node, nodes);
--> 2641        for (i = 0; i < nnodes; i++) {
                     ^^^^^^^^^^^^^^^
Uninitialized variable.  In production people should use the config that
zeroes out stack variables (CONFIG_INIT_STACK_ALL_ZERO), but I always
encourage developers to use CONFIG_INIT_STACK_ALL_PATTERN=y in their
testing.

  2642                  weight = weights[node];
  2643                  /* when delta is depleted, resume from that node */
  2644                  if (delta && delta < weight) {
  2645                          resume_node = node;
  2646                          resume_weight = weight - delta;
  2647                  }
  2648                  /* Add the node's portion of the delta, if there is one */
  2649                  node_pages = weight * rounds + min(delta, weight) + carryover;
  2650                  delta -= min(delta, weight);
  2651                  carryover = 0;
  2652
  2653                  /* node_pages can be 0 if an allocation fails and rounds == 0 */
  2654                  if (!node_pages)
  2655                          break;
  2656  allocate:
  2657                  nr_allocated = __alloc_pages_bulk(gfp, node, NULL, node_pages,
  2658                                                    page_array);
  2659                  page_array += nr_allocated;
  2660                  total_allocated += nr_allocated;
  2661                  if (total_allocated == nr_pages)
  2662                          break;
  2663                  prev_node = node;
  2664                  node = next_node_in(prev_node, nodes);
  2665          }
  2666
  2667          if (weights) {
  2668                  me->il_prev = resume_node;
  2669                  me->il_weight = resume_weight;
  2670                  kfree(weights);
  2671          }
  2672          return total_allocated;
  2673  }

regards,
dan carpenter
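P.S. In case it helps to see the shape of the problem, here is a tiny
userspace sketch of the same pattern.  It is not kernel code and not a
proposed fix; the names (walk, take_shortcut, stop_after_first, body)
are made up purely for illustration.  The point is just that a goto
into the loop body bypasses the for statement's "i = 0", the same way
the "goto allocate" at line 2600 bypasses the initializer at line 2641.

	#include <stdio.h>

	static int walk(int take_shortcut, int stop_after_first)
	{
		int visited = 0;
		int i;			/* never assigned on the shortcut path */

		if (take_shortcut)
			goto body;	/* skips the "i = 0" initializer below */

		for (i = 0; i < 4; i++) {
	body:
			visited++;
			if (stop_after_first)
				break;	/* analogous to the total_allocated == nr_pages check */
		}
		return visited;
	}

	int main(void)
	{
		printf("%d\n", walk(0, 0));	/* 4: normal path, "i" is well defined */
		printf("%d\n", walk(1, 1));	/* 1: shortcut taken, but "i" is never read */
		return 0;
	}

With take_shortcut set and stop_after_first clear, the loop's increment
and condition would read an indeterminate "i", which is the situation
Smatch is flagging at line 2641.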