Date: Thu, 12 Jan 2023 11:31:49 +0100
Subject: Re: [PATCH v3 2/5] mm: mlock: use folios and a folio batch internally
From: Vlastimil Babka <vbabka@suse.cz>
To: Lorenzo Stoakes, linux-mm@kvack.org, Andrew Morton, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Hugh Dickins, Liam Howlett, William Kucharski, Christian Brauner, Jonathan Corbet, Mike Rapoport, Joel Fernandes, Geert Uytterhoeven
In-Reply-To: <03ac78b416be5a361b79464acc3da7f93b9c37e8.1672043615.git.lstoakes@gmail.com>
References: <03ac78b416be5a361b79464acc3da7f93b9c37e8.1672043615.git.lstoakes@gmail.com>
On 12/26/22 09:44, Lorenzo Stoakes wrote:
> This brings mlock in line with the folio batches declared in mm/swap.c and
> makes the code more consistent across the two.
>
> The existing mechanism for identifying which operation each folio in the
> batch is undergoing is maintained, i.e. using the lower 2 bits of the
> struct folio address (previously struct page address). This should continue
> to function correctly as folios remain at least system word-aligned.
>
> All invocations of mlock() pass either a non-compound page or the head of a
> THP-compound page and no tail pages need updating so this functionality
> works with struct folios being used internally rather than struct pages.
>
> In this patch the external interface is kept identical to before in order
> to maintain separation between patches in the series, using a rather
> awkward conversion from struct page to struct folio in relevant functions.
>
> However, this maintenance of the existing interface is intended to be
> temporary - the next patch in the series will update the interfaces to
> accept folios directly.
>
> Signed-off-by: Lorenzo Stoakes

Acked-by: Vlastimil Babka

with some nits:

> -static struct lruvec *__munlock_page(struct page *page, struct lruvec *lruvec)
> +static struct lruvec *__munlock_folio(struct folio *folio, struct lruvec *lruvec)
>  {
> -	int nr_pages = thp_nr_pages(page);
> +	int nr_pages = folio_nr_pages(folio);
>  	bool isolated = false;
>
> -	if (!TestClearPageLRU(page))
> +	if (!folio_test_clear_lru(folio))
>  		goto munlock;
>
>  	isolated = true;
> -	lruvec = folio_lruvec_relock_irq(page_folio(page), lruvec);
> +	lruvec = folio_lruvec_relock_irq(folio, lruvec);
>
> -	if (PageUnevictable(page)) {
> +	if (folio_test_unevictable(folio)) {
>  		/* Then mlock_count is maintained, but might undercount */
> -		if (page->mlock_count)
> -			page->mlock_count--;
> -		if (page->mlock_count)
> +		if (folio->mlock_count)
> +			folio->mlock_count--;
> +		if (folio->mlock_count)
>  			goto out;
>  	}
>  	/* else assume that was the last mlock: reclaim will fix it if not */
>
> munlock:
> -	if (TestClearPageMlocked(page)) {
> -		__mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
> -		if (isolated || !PageUnevictable(page))
> +	if (folio_test_clear_mlocked(folio)) {
> +		zone_stat_mod_folio(folio, NR_MLOCK, -nr_pages);

AFAIK the 1:1 replacement would be __zone_stat_mod_folio(), this is
stronger thus not causing a bug, but unnecessary?
> +		if (isolated || !folio_test_unevictable(folio))
>  			__count_vm_events(UNEVICTABLE_PGMUNLOCKED, nr_pages);
>  		else
>  			__count_vm_events(UNEVICTABLE_PGSTRANDED, nr_pages);
>  	}
>
> -	/* page_evictable() has to be checked *after* clearing Mlocked */
> -	if (isolated && PageUnevictable(page) && page_evictable(page)) {
> -		del_page_from_lru_list(page, lruvec);
> -		ClearPageUnevictable(page);
> -		add_page_to_lru_list(page, lruvec);
> +	/* folio_evictable() has to be checked *after* clearing Mlocked */
> +	if (isolated && folio_test_unevictable(folio) && folio_evictable(folio)) {
> +		lruvec_del_folio(lruvec, folio);
> +		folio_clear_unevictable(folio);
> +		lruvec_add_folio(lruvec, folio);
>  		__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
>  	}
> out:
>  	if (isolated)
> -		SetPageLRU(page);
> +		folio_set_lru(folio);
>  	return lruvec;
>  }
>
>  /*
> - * Flags held in the low bits of a struct page pointer on the mlock_pvec.
> + * Flags held in the low bits of a struct folio pointer on the mlock_fbatch.
>   */
>  #define LRU_PAGE 0x1
>  #define NEW_PAGE 0x2

Should it be X_FOLIO now?

> -static inline struct page *mlock_lru(struct page *page)
> +static inline struct folio *mlock_lru(struct folio *folio)
>  {
> -	return (struct page *)((unsigned long)page + LRU_PAGE);
> +	return (struct folio *)((unsigned long)folio + LRU_PAGE);
>  }
>
> -static inline struct page *mlock_new(struct page *page)
> +static inline struct folio *mlock_new(struct folio *folio)
>  {
> -	return (struct page *)((unsigned long)page + NEW_PAGE);
> +	return (struct folio *)((unsigned long)folio + NEW_PAGE);
>  }
>
>  /*
> - * mlock_pagevec() is derived from pagevec_lru_move_fn():
> - * perhaps that can make use of such page pointer flags in future,
> - * but for now just keep it for mlock. We could use three separate
> - * pagevecs instead, but one feels better (munlocking a full pagevec
> - * does not need to drain mlocking pagevecs first).
> + * mlock_folio_batch() is derived from folio_batch_move_lru(): perhaps that can
> + * make use of such page pointer flags in future, but for now just keep it for
                       ^ folio?
> + * mlock. We could use three separate folio batches instead, but one feels
> + * better (munlocking a full folio batch does not need to drain mlocking folio
> + * batches first).
>   */
> -static void mlock_pagevec(struct pagevec *pvec)
> +static void mlock_folio_batch(struct folio_batch *fbatch)