From: Vivek Kasireddy <vivek.kasireddy@intel.com>
To: dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Cc: Vivek Kasireddy, David Hildenbrand, Matthew Wilcox, Christoph Hellwig,
	Jason Gunthorpe, Peter Xu
Subject: [PATCH v14 2/8] mm/gup: Introduce check_and_migrate_movable_folios()
Date: Wed, 10 Apr 2024 23:59:38 -0700
Message-ID: <20240411070157.3318425-3-vivek.kasireddy@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240411070157.3318425-1-vivek.kasireddy@intel.com>
References: <20240411070157.3318425-1-vivek.kasireddy@intel.com>

This helper is the folio equivalent of check_and_migrate_movable_pages().
Therefore, all the rules that apply to check_and_migrate_movable_pages()
also apply to this one. Currently, this helper is only used by
memfd_pin_folios().

This patch also renames and converts the internal functions
collect_longterm_unpinnable_pages() and migrate_longterm_unpinnable_pages()
to work on folios. As a result, check_and_migrate_movable_pages() is now a
wrapper around check_and_migrate_movable_folios().
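For reference, callers are expected to use the usual longterm-pin retry
pattern with this helper: re-pin the range and call it again for as long as
it returns -EAGAIN. A minimal sketch of such a caller follows; it is
illustrative only and not part of this patch, and repin_folios() is a
hypothetical stand-in for however the caller obtains FOLL_PIN pins on the
range (memfd_pin_folios(), added later in this series, is the real user):

	static long longterm_pin_folios(unsigned long nr_folios,
					struct folio **folios)
	{
		long ret;

		do {
			/* hypothetical: take a FOLL_PIN pin on each folio */
			ret = repin_folios(nr_folios, folios);
			if (ret < 0)
				return ret;

			/*
			 * Returns 0 with all folios still pinned, or -EAGAIN
			 * after migrating and unpinning all of them, or some
			 * other -errno on migration failure.
			 */
			ret = check_and_migrate_movable_folios(nr_folios,
							       folios);
		} while (ret == -EAGAIN);

		return ret;
	}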
Cc: David Hildenbrand
Cc: Matthew Wilcox
Cc: Christoph Hellwig
Cc: Jason Gunthorpe
Cc: Peter Xu
Suggested-by: David Hildenbrand
Acked-by: David Hildenbrand
Signed-off-by: Vivek Kasireddy
---
 mm/gup.c | 124 ++++++++++++++++++++++++++++++++++---------------------
 1 file changed, 77 insertions(+), 47 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 14e94fdfa827..20ec66e3c2c9 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2413,19 +2413,19 @@ struct page *get_dump_page(unsigned long addr)
 
 #ifdef CONFIG_MIGRATION
 /*
- * Returns the number of collected pages. Return value is always >= 0.
+ * Returns the number of collected folios. Return value is always >= 0.
  */
-static unsigned long collect_longterm_unpinnable_pages(
-					struct list_head *movable_page_list,
-					unsigned long nr_pages,
-					struct page **pages)
+static unsigned long collect_longterm_unpinnable_folios(
+					struct list_head *movable_folio_list,
+					unsigned long nr_folios,
+					struct folio **folios)
 {
 	unsigned long i, collected = 0;
 	struct folio *prev_folio = NULL;
 	bool drain_allow = true;
 
-	for (i = 0; i < nr_pages; i++) {
-		struct folio *folio = page_folio(pages[i]);
+	for (i = 0; i < nr_folios; i++) {
+		struct folio *folio = folios[i];
 
 		if (folio == prev_folio)
 			continue;
@@ -2440,7 +2440,7 @@ static unsigned long collect_longterm_unpinnable_pages(
 			continue;
 
 		if (folio_test_hugetlb(folio)) {
-			isolate_hugetlb(folio, movable_page_list);
+			isolate_hugetlb(folio, movable_folio_list);
 			continue;
 		}
 
@@ -2452,7 +2452,7 @@ static unsigned long collect_longterm_unpinnable_pages(
 		if (!folio_isolate_lru(folio))
 			continue;
 
-		list_add_tail(&folio->lru, movable_page_list);
+		list_add_tail(&folio->lru, movable_folio_list);
 		node_stat_mod_folio(folio,
 				    NR_ISOLATED_ANON + folio_is_file_lru(folio),
 				    folio_nr_pages(folio));
@@ -2462,27 +2462,28 @@ static unsigned long collect_longterm_unpinnable_pages(
 }
 
 /*
- * Unpins all pages and migrates device coherent pages and movable_page_list.
- * Returns -EAGAIN if all pages were successfully migrated or -errno for failure
- * (or partial success).
+ * Unpins all folios and migrates device coherent folios and movable_folio_list.
+ * Returns -EAGAIN if all folios were successfully migrated or -errno for
+ * failure (or partial success).
  */
-static int migrate_longterm_unpinnable_pages(
-					struct list_head *movable_page_list,
-					unsigned long nr_pages,
-					struct page **pages)
+static int migrate_longterm_unpinnable_folios(
+					struct list_head *movable_folio_list,
+					unsigned long nr_folios,
+					struct folio **folios)
 {
 	int ret;
 	unsigned long i;
 
-	for (i = 0; i < nr_pages; i++) {
-		struct folio *folio = page_folio(pages[i]);
+	for (i = 0; i < nr_folios; i++) {
+		struct folio *folio = folios[i];
 
 		if (folio_is_device_coherent(folio)) {
 			/*
-			 * Migration will fail if the page is pinned, so convert
-			 * the pin on the source page to a normal reference.
+			 * Migration will fail if the folio is pinned, so
+			 * convert the pin on the source folio to a normal
+			 * reference.
 			 */
-			pages[i] = NULL;
+			folios[i] = NULL;
 			folio_get(folio);
 			gup_put_folio(folio, 1, FOLL_PIN);
 
@@ -2495,24 +2496,24 @@ static int migrate_longterm_unpinnable_pages(
 		}
 
 		/*
-		 * We can't migrate pages with unexpected references, so drop
+		 * We can't migrate folios with unexpected references, so drop
 		 * the reference obtained by __get_user_pages_locked().
-		 * Migrating pages have been added to movable_page_list after
+		 * Migrating folios have been added to movable_folio_list after
 		 * calling folio_isolate_lru() which takes a reference so the
-		 * page won't be freed if it's migrating.
+		 * folio won't be freed if it's migrating.
 		 */
-		unpin_user_page(pages[i]);
-		pages[i] = NULL;
+		unpin_folio(folios[i]);
+		folios[i] = NULL;
 	}
 
-	if (!list_empty(movable_page_list)) {
+	if (!list_empty(movable_folio_list)) {
 		struct migration_target_control mtc = {
 			.nid = NUMA_NO_NODE,
 			.gfp_mask = GFP_USER | __GFP_NOWARN,
 			.reason = MR_LONGTERM_PIN,
 		};
 
-		if (migrate_pages(movable_page_list, alloc_migration_target,
+		if (migrate_pages(movable_folio_list, alloc_migration_target,
 				  NULL, (unsigned long)&mtc, MIGRATE_SYNC,
 				  MR_LONGTERM_PIN, NULL)) {
 			ret = -ENOMEM;
@@ -2520,48 +2521,71 @@ static int migrate_longterm_unpinnable_pages(
 		}
 	}
 
-	putback_movable_pages(movable_page_list);
+	putback_movable_pages(movable_folio_list);
 
 	return -EAGAIN;
 
 err:
-	for (i = 0; i < nr_pages; i++)
-		if (pages[i])
-			unpin_user_page(pages[i]);
-	putback_movable_pages(movable_page_list);
+	unpin_folios(folios, nr_folios);
+	putback_movable_pages(movable_folio_list);
 
 	return ret;
 }
 
 /*
- * Check whether all pages are *allowed* to be pinned. Rather confusingly, all
- * pages in the range are required to be pinned via FOLL_PIN, before calling
- * this routine.
+ * Check whether all folios are *allowed* to be pinned indefinitely (longterm).
+ * Rather confusingly, all folios in the range are required to be pinned via
+ * FOLL_PIN, before calling this routine.
  *
- * If any pages in the range are not allowed to be pinned, then this routine
- * will migrate those pages away, unpin all the pages in the range and return
+ * If any folios in the range are not allowed to be pinned, then this routine
+ * will migrate those folios away, unpin all the folios in the range and return
  * -EAGAIN. The caller should re-pin the entire range with FOLL_PIN and then
  * call this routine again.
  *
  * If an error other than -EAGAIN occurs, this indicates a migration failure.
  * The caller should give up, and propagate the error back up the call stack.
  *
- * If everything is OK and all pages in the range are allowed to be pinned, then
- * this routine leaves all pages pinned and returns zero for success.
+ * If everything is OK and all folios in the range are allowed to be pinned,
+ * then this routine leaves all folios pinned and returns zero for success.
  */
-static long check_and_migrate_movable_pages(unsigned long nr_pages,
-					    struct page **pages)
+static long check_and_migrate_movable_folios(unsigned long nr_folios,
+					     struct folio **folios)
 {
 	unsigned long collected;
-	LIST_HEAD(movable_page_list);
+	LIST_HEAD(movable_folio_list);
 
-	collected = collect_longterm_unpinnable_pages(&movable_page_list,
-						nr_pages, pages);
+	collected = collect_longterm_unpinnable_folios(&movable_folio_list,
+						       nr_folios, folios);
 	if (!collected)
 		return 0;
 
-	return migrate_longterm_unpinnable_pages(&movable_page_list, nr_pages,
-						pages);
+	return migrate_longterm_unpinnable_folios(&movable_folio_list,
+						  nr_folios, folios);
+}
+
+/*
+ * This routine just converts all the pages in the @pages array to folios and
+ * calls check_and_migrate_movable_folios() to do the heavy lifting.
+ *
+ * Please see the check_and_migrate_movable_folios() documentation for details.
+ */
+static long check_and_migrate_movable_pages(unsigned long nr_pages,
+					    struct page **pages)
+{
+	struct folio **folios;
+	long i, ret;
+
+	folios = kmalloc_array(nr_pages, sizeof(*folios), GFP_KERNEL);
+	if (!folios)
+		return -ENOMEM;
+
+	for (i = 0; i < nr_pages; i++)
+		folios[i] = page_folio(pages[i]);
+
+	ret = check_and_migrate_movable_folios(nr_pages, folios);
+
+	kfree(folios);
+	return ret;
 }
 #else
 static long check_and_migrate_movable_pages(unsigned long nr_pages,
@@ -2569,6 +2593,12 @@ static long check_and_migrate_movable_pages(unsigned long nr_pages,
 {
 	return 0;
 }
+
+static long check_and_migrate_movable_folios(unsigned long nr_folios,
+					     struct folio **folios)
+{
+	return 0;
+}
 #endif /* CONFIG_MIGRATION */
 
 /*
-- 
2.43.0