Date: Thu, 12 Jan 2023 11:27:19 +0000
From: Lorenzo Stoakes <lstoakes@gmail.com>
To: Vlastimil Babka
Cc: linux-mm@kvack.org, Andrew Morton, linux-kernel@vger.kernel.org,
	Matthew Wilcox, Hugh Dickins, Liam Howlett, William Kucharski,
	Christian Brauner, Jonathan Corbet, Mike Rapoport, Joel Fernandes,
	Geert Uytterhoeven
Subject: Re: [PATCH v3 2/5] mm: mlock: use folios and a folio batch internally
References: <03ac78b416be5a361b79464acc3da7f93b9c37e8.1672043615.git.lstoakes@gmail.com>

On Thu, Jan 12, 2023 at 11:31:49AM +0100, Vlastimil Babka wrote:
> On 12/26/22 09:44, Lorenzo Stoakes wrote:
> > This brings mlock in line with the folio batches declared in mm/swap.c
> > and makes the code more consistent across the two.
> >
> > The existing mechanism for identifying which operation each folio in
> > the batch is undergoing is maintained, i.e. using the lower 2 bits of
> > the struct folio address (previously struct page address). This should
> > continue to function correctly as folios remain at least system
> > word-aligned.
> >
> > All invocations of mlock() pass either a non-compound page or the head
> > of a THP-compound page, and no tail pages need updating, so this
> > functionality works with struct folios being used internally rather
> > than struct pages.
> >
> > In this patch the external interface is kept identical to before in
> > order to maintain separation between patches in the series, using a
> > rather awkward conversion from struct page to struct folio in relevant
> > functions.
> >
> > However, this maintenance of the existing interface is intended to be
> > temporary - the next patch in the series will update the interfaces to
> > accept folios directly.
> >
> > Signed-off-by: Lorenzo Stoakes
>
> Acked-by: Vlastimil Babka
>
> with some nits:
>
> > -static struct lruvec *__munlock_page(struct page *page, struct lruvec *lruvec)
> > +static struct lruvec *__munlock_folio(struct folio *folio, struct lruvec *lruvec)
> >  {
> > -	int nr_pages = thp_nr_pages(page);
> > +	int nr_pages = folio_nr_pages(folio);
> >  	bool isolated = false;
> >
> > -	if (!TestClearPageLRU(page))
> > +	if (!folio_test_clear_lru(folio))
> >  		goto munlock;
> >
> >  	isolated = true;
> > -	lruvec = folio_lruvec_relock_irq(page_folio(page), lruvec);
> > +	lruvec = folio_lruvec_relock_irq(folio, lruvec);
> >
> > -	if (PageUnevictable(page)) {
> > +	if (folio_test_unevictable(folio)) {
> >  		/* Then mlock_count is maintained, but might undercount */
> > -		if (page->mlock_count)
> > -			page->mlock_count--;
> > -		if (page->mlock_count)
> > +		if (folio->mlock_count)
> > +			folio->mlock_count--;
> > +		if (folio->mlock_count)
> >  			goto out;
> >  	}
> >  	/* else assume that was the last mlock: reclaim will fix it if not */
> >
> > munlock:
> > -	if (TestClearPageMlocked(page)) {
> > -		__mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
> > -		if (isolated || !PageUnevictable(page))
> > +	if (folio_test_clear_mlocked(folio)) {
> > +		zone_stat_mod_folio(folio, NR_MLOCK, -nr_pages);
>
> AFAIK the 1:1 replacement would be __zone_stat_mod_folio(); this is
> stronger, thus not causing a bug, but unnecessary?

I used this rather than __zone_stat_mod_folio() as this is what
mlock_folio() does and I wanted to maintain consistency with that
function. However, given we were previously using the weaker page version
of this function, I agree that we should do the same with the folio, will
change!
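(For context on the distinction: the plain zone_stat_mod_folio() is safe to
call from any context, whereas the double-underscore variant assumes the
caller has already disabled interrupts/preemption - which __munlock_folio()
has, since it runs under the lruvec lock taken via folio_lruvec_relock_irq().
A rough sketch of the two wrappers, paraphrased from include/linux/vmstat.h;
the exact bodies in any given tree may differ:

static inline void zone_stat_mod_folio(struct folio *folio,
		enum zone_stat_item item, long nr)
{
	/* "Strong" form: handles IRQ/preemption safety itself. */
	mod_zone_page_state(folio_zone(folio), item, nr);
}

static inline void __zone_stat_mod_folio(struct folio *folio,
		enum zone_stat_item item, long nr)
{
	/* "Weak" form: caller must already be in a safe, IRQ-off context. */
	__mod_zone_page_state(folio_zone(folio), item, nr);
}

Hence the nit above: the stronger form is not a bug here, just redundant
work in a path that already holds the required protection.)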
> > +		if (isolated || !folio_test_unevictable(folio))
> >  			__count_vm_events(UNEVICTABLE_PGMUNLOCKED, nr_pages);
> >  		else
> >  			__count_vm_events(UNEVICTABLE_PGSTRANDED, nr_pages);
> >  	}
> >
> > -	/* page_evictable() has to be checked *after* clearing Mlocked */
> > -	if (isolated && PageUnevictable(page) && page_evictable(page)) {
> > -		del_page_from_lru_list(page, lruvec);
> > -		ClearPageUnevictable(page);
> > -		add_page_to_lru_list(page, lruvec);
> > +	/* folio_evictable() has to be checked *after* clearing Mlocked */
> > +	if (isolated && folio_test_unevictable(folio) && folio_evictable(folio)) {
> > +		lruvec_del_folio(lruvec, folio);
> > +		folio_clear_unevictable(folio);
> > +		lruvec_add_folio(lruvec, folio);
> >  		__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
> >  	}
> > out:
> >  	if (isolated)
> > -		SetPageLRU(page);
> > +		folio_set_lru(folio);
> >  	return lruvec;
> >  }
> >
> >  /*
> > - * Flags held in the low bits of a struct page pointer on the mlock_pvec.
> > + * Flags held in the low bits of a struct folio pointer on the mlock_fbatch.
> >   */
> >  #define LRU_PAGE 0x1
> >  #define NEW_PAGE 0x2
>
> Should it be X_FOLIO now?
>
> > -static inline struct page *mlock_lru(struct page *page)
> > +static inline struct folio *mlock_lru(struct folio *folio)
> >  {
> > -	return (struct page *)((unsigned long)page + LRU_PAGE);
> > +	return (struct folio *)((unsigned long)folio + LRU_PAGE);
> >  }
> >
> > -static inline struct page *mlock_new(struct page *page)
> > +static inline struct folio *mlock_new(struct folio *folio)
> >  {
> > -	return (struct page *)((unsigned long)page + NEW_PAGE);
> > +	return (struct folio *)((unsigned long)folio + NEW_PAGE);
> >  }
> >
> >  /*
> > - * mlock_pagevec() is derived from pagevec_lru_move_fn():
> > - * perhaps that can make use of such page pointer flags in future,
> > - * but for now just keep it for mlock. We could use three separate
> > - * pagevecs instead, but one feels better (munlocking a full pagevec
> > - * does not need to drain mlocking pagevecs first).
> > + * mlock_folio_batch() is derived from folio_batch_move_lru(): perhaps that can
> > + * make use of such page pointer flags in future, but for now just keep it for
>
>                            ^ folio?
>
> > + * mlock. We could use three separate folio batches instead, but one feels
> > + * better (munlocking a full folio batch does not need to drain mlocking folio
> > + * batches first).
> >   */
> > -static void mlock_pagevec(struct pagevec *pvec)
> > +static void mlock_folio_batch(struct folio_batch *fbatch)

Ack on all remaining comments also; will spin a v4. Thanks for the review!
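P.S. For anyone skimming who hasn't seen the trick the batch relies on:
because struct folio objects are at least word-aligned, the bottom two bits
of their addresses are always zero and can temporarily carry a tag saying
which operation a batch entry wants. A minimal standalone sketch of the
idea (illustrative only - tag_ptr()/untag_ptr() are made-up names, not the
kernel's, and mm/mlock.c adds the flag with + rather than |, which is
equivalent for aligned pointers):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define LRU_FLAG 0x1UL	/* plays the role of LRU_PAGE */
#define NEW_FLAG 0x2UL	/* plays the role of NEW_PAGE */
#define TAG_MASK (LRU_FLAG | NEW_FLAG)

struct folio_like { long dummy; };	/* word-aligned: low 2 bits are free */

/* Encode: stash the operation flags in the low bits of the pointer. */
static void *tag_ptr(struct folio_like *f, unsigned long flags)
{
	assert(((uintptr_t)f & TAG_MASK) == 0);	/* guaranteed by alignment */
	return (void *)((uintptr_t)f | flags);
}

/* Decode: recover the clean pointer and flags, as the drain loop must. */
static struct folio_like *untag_ptr(void *tagged, unsigned long *flags)
{
	*flags = (uintptr_t)tagged & TAG_MASK;
	return (struct folio_like *)((uintptr_t)tagged & ~TAG_MASK);
}

int main(void)
{
	struct folio_like f;
	unsigned long flags;
	struct folio_like *back = untag_ptr(tag_ptr(&f, NEW_FLAG), &flags);

	printf("roundtrip ok: %d, flags: 0x%lx\n", back == &f, flags);
	return 0;
}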