From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Huang, Ying" <ying.huang@intel.com>
To: Jan Kara
Cc: Hugh Dickins, Andrew Morton, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, Zi Yan,
 Yang Shi, Baolin Wang, Oscar Salvador, Matthew Wilcox, Bharata B Rao,
 Alistair Popple, Xin Hao, Minchan Kim, Mike Kravetz,
 Hyeonggon Yoo <42.hyeyoo@gmail.com>
Subject: Re: [PATCH -v5 0/9] migrate_pages(): batch TLB flushing
In-Reply-To: <20230227110614.dngdub2j3exr6dfp@quack3> (Jan Kara's message of
 "Mon, 27 Feb 2023 12:06:14 +0100")
References: <20230213123444.155149-1-ying.huang@intel.com>
 <87a6c8c-c5c1-67dc-1e32-eb30831d6e3d@google.com>
 <20230227110614.dngdub2j3exr6dfp@quack3>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)
Date: Tue, 28 Feb 2023 09:13:26 +0800
Message-ID: <87pm9ubnih.fsf@yhuang6-desk2.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ascii

Hi, Honza,

Jan Kara writes:

> On Fri 17-02-23 13:47:48, Hugh Dickins wrote:
>> On Mon, 13 Feb 2023, Huang Ying wrote:
>>
>> > From: "Huang, Ying" <ying.huang@intel.com>
>> >
>> > Now, migrate_pages() migrates folios one by one, as in the
>> > pseudocode below:
>> >
>> >   for each folio
>> >     unmap
>> >     flush TLB
>> >     copy
>> >     restore map
>> >
>> > If multiple folios are passed to migrate_pages(), there are
>> > opportunities to batch the TLB flushing and copying. That is, we
>> > can change the code to something like the following:
>> >
>> >   for each folio
>> >     unmap
>> >   for each folio
>> >     flush TLB
>> >   for each folio
>> >     copy
>> >   for each folio
>> >     restore map
>> >
>> > The total number of TLB flushing IPIs can be reduced considerably,
>> > and we may use a hardware accelerator such as DSA to accelerate
>> > the folio copying.
>> >
>> > So in this patchset, we refactor the migrate_pages()
>> > implementation and implement batched TLB flushing. Based on this,
>> > hardware-accelerated folio copying can be implemented.
>> >
>> > If too many folios are passed to migrate_pages(), the naive
>> > batched implementation may unmap too many folios at the same
>> > time. That increases the chance that a task has to wait for the
>> > migrated folios to be mapped again, hurting latency. To deal with
>> > this issue, the maximum number of folios unmapped in one batch is
>> > restricted to no more than HPAGE_PMD_NR pages, so the impact is
>> > at the same level as THP migration.
>> >
>> > We use the following test to measure the performance impact of
>> > the patchset.
>> >
>> > On a 2-socket Intel server:
>> >
>> > - Run the pmbench memory-accessing benchmark.
>> >
>> > - Run `migratepages` to migrate pages of pmbench between node 0
>> >   and node 1 back and forth.
>> >
>> > With the patch, TLB flushing IPIs are reduced by 99.1% during the
>> > test and the number of pages migrated successfully per second
>> > increases by 291.7%.
>> >
>> > Xin Hao helped to test the patchset on an ARM64 server with 128
>> > cores and 2 NUMA nodes. Test results show that the page migration
>> > performance increases by up to 78%.
>> >
>> > This patchset is based on mm-unstable 2023-02-10.
>>
>> And back in linux-next this week: I tried next-20230217 overnight.
>>
>> There is a deadlock in this patchset (and in previous versions:
>> sorry it's taken me so long to report), but I think one that's
>> easily solved.
>>
>> I've not bisected to precisely which patch (load can take several
>> hours to hit the deadlock), but it doesn't really matter, and I
>> expect that you can guess.
>>
>> My root and home filesystems are ext4 (4kB blocks with 4kB
>> PAGE_SIZE), and so is the filesystem I'm testing, ext4 on
>> /dev/loop0 on tmpfs. So, plenty of ext4 page cache and
>> buffer_heads.
>>
>> Again and again, the deadlock is seen with
>> buffer_migrate_folio_norefs(), either in kcompactd0 or in
>> khugepaged trying to compact, or in both: it ends up calling
>> __lock_buffer(), and that schedules away, waiting forever to get
>> BH_lock. I have not identified who is holding BH_lock, but I
>> imagine a jbd2 journalling thread, and presume that it wants one of
>> the folio locks which migrate_pages_batch() is already holding; or
>> maybe it's all more convoluted than that. Other tasks then back up
>> waiting on those folio locks held in the batch.
>>
>> Never a problem with buffer_migrate_folio(), always with the "more
>> careful" buffer_migrate_folio_norefs(). And the patch below fixes
>> it for me: I've had enough hours with it now, on enough occasions,
>> to be confident of that.
>>
>> Cc'ing Jan Kara, who knows buffer_migrate_folio_norefs() and jbd2
>> very well, and who I hope can assure us that there is an
>> understandable deadlock here, from holding several random folio
>> locks, then trying to lock buffers. Cc'ing fsdevel, because there's
>> a risk that mm folk think something is safe when it's not
>> sufficient to cope with the diversity of filesystems. I hope
>> nothing more than the below is needed (and I've had no other
>> problems with the patchset: good job), but cannot be sure.
>
> I suspect it can indeed be caused by the presence of the loop
> device, as Huang Ying has suggested. What filesystems using
> buffer_heads do is a pattern like:
>
>   bh = page_buffers(loop device page cache page);
>   lock_buffer(bh);
>   submit_bh(bh);
>     - now on the loop device this ends up doing:
>       lo_write_bvec()
>         vfs_iter_write()
>           ...
>             folio_lock(backing file folio);
>
> So if the migration code holds the "backing file folio" lock and at
> the same time waits for the 'bh' lock (while trying to migrate the
> loop device page cache page), it is a deadlock.
>
> The proposed solution of never waiting for locks in batched mode
> looks like a sensible one to me...

Thank you very much for the detailed explanation!

Best Regards,
Huang, Ying