From: Ruan Shiyang <ruansy.fnst@cn.fujitsu.com>
Subject: Re: Reply: Re: [RFC PATCH 0/8] dax: Add a dax-rmap tree to support reflink
Date: Fri, 5 Jun 2020 10:30:57 +0800
Message-ID: <841a9dbb-daa7-3827-6bf9-664187e45a94@cn.fujitsu.com>
In-Reply-To: <20200605013023.GZ2040@dread.disaster.area>
To: Dave Chinner, "Darrick J. Wong"
Cc: Matthew Wilcox, linux-kernel@vger.kernel.org, linux-xfs@vger.kernel.org,
 linux-nvdimm@lists.01.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
 dan.j.williams@intel.com, hch@lst.de, rgoldwyn@suse.de, "Qi, Fuli",
 "Gotou, Yasunori"
References: <20200427084750.136031-1-ruansy.fnst@cn.fujitsu.com>
 <20200427122836.GD29705@bombadil.infradead.org>
 <20200428064318.GG2040@dread.disaster.area>
 <153e13e6-8685-fb0d-6bd3-bb553c06bf51@cn.fujitsu.com>
 <20200604145107.GA1334206@magnolia>
 <20200605013023.GZ2040@dread.disaster.area>

On 2020/6/5 9:30 AM, Dave Chinner wrote:
> On Thu, Jun 04, 2020 at 07:51:07AM -0700, Darrick J. Wong wrote:
>> On Thu, Jun 04, 2020 at 03:37:42PM +0800, Ruan Shiyang wrote:
>>>
>>>
>>> On 2020/4/28 2:43 PM, Dave Chinner wrote:
>>>> On Tue, Apr 28, 2020 at 06:09:47AM +0000, Ruan, Shiyang wrote:
>>>>>
>>>>> On 2020/4/27 20:28:36, "Matthew Wilcox" wrote:
>>>>>
>>>>>> On Mon, Apr 27, 2020 at 04:47:42PM +0800, Shiyang Ruan wrote:
>>>>>>> This patchset is an attempt to resolve the shared 'page cache'
>>>>>>> problem for fsdax.
>>>>>>>
>>>>>>> In order to track multiple mappings and indexes on one page, I
>>>>>>> introduced a dax-rmap rb-tree to manage the relationship.  A dax
>>>>>>> entry will be associated more than once if it is shared.  The
>>>>>>> second time we associate an entry, we create the rb-tree and
>>>>>>> store its root in page->private (which is otherwise unused in
>>>>>>> fsdax).  We insert (->mapping, ->index) in dax_associate_entry()
>>>>>>> and delete it in dax_disassociate_entry().
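
(For concreteness, the tracking described in the quoted patchset might
look roughly like the sketch below.  This is not the actual patch code
-- the names dax_rmap_node and dax_rmap_insert are assumptions for
illustration -- but it shows the shape of the idea: each shared fsdax
page carries an rb-tree, rooted in the otherwise-unused page->private,
with one (mapping, index) node per owner.

#include <linux/rbtree.h>
#include <linux/mm_types.h>
#include <linux/pagemap.h>
#include <linux/slab.h>

/* Sketch only: one node per (mapping, index) pair mapping a shared page. */
struct dax_rmap_node {
	struct rb_node		node;
	struct address_space	*mapping;	/* owning file's mapping */
	pgoff_t			index;		/* page offset within that file */
};

static int dax_rmap_insert(struct page *page, struct address_space *mapping,
			   pgoff_t index)
{
	struct rb_root *root = (struct rb_root *)page->private;
	struct rb_node **p = &root->rb_node, *parent = NULL;
	struct dax_rmap_node *new = kmalloc(sizeof(*new), GFP_KERNEL);

	if (!new)
		return -ENOMEM;
	new->mapping = mapping;
	new->index = index;

	/* Keep nodes sorted by (mapping, index) so lookups are O(log n). */
	while (*p) {
		struct dax_rmap_node *cur =
			rb_entry(*p, struct dax_rmap_node, node);

		parent = *p;
		if (mapping < cur->mapping ||
		    (mapping == cur->mapping && index < cur->index))
			p = &(*p)->rb_left;
		else
			p = &(*p)->rb_right;
	}
	rb_link_node(&new->node, parent, p);
	rb_insert_color(&new->node, root);
	return 0;
}

This per-page overhead is exactly what the replies below push back
against.)
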
>>>>>>
>>>>>> Do we really want to track all of this on a per-page basis?  I would
>>>>>> have thought a per-extent basis was more useful.  Essentially, create
>>>>>> a new address_space for each shared extent.  Per page just seems like
>>>>>> a huge overhead.
>>>>>>
>>>>> Per-extent tracking sounds like a good idea to me.  I hadn't thought
>>>>> of it yet...
>>>>>
>>>>> But the extent info is maintained by the filesystem.  I think we need
>>>>> a way to obtain this info from the FS when associating a page.  That
>>>>> may be a bit complicated.  Let me think about it...
>>>>
>>>> That's why I want the -user of this association- to do a filesystem
>>>> callout instead of keeping its own naive tracking infrastructure.
>>>> The filesystem can do an efficient, on-demand reverse mapping lookup
>>>> from its own extent tracking infrastructure, and there's zero
>>>> runtime overhead when there are no errors present.
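
(Such a callout could be as small as a single op registered by the
holder of the DAX device.  The sketch below is an assumption for
illustration -- dax_holder_ops and notify_failure are not an existing
kernel interface in this thread's timeframe:

#include <linux/types.h>

struct dax_device;

/*
 * Hypothetical callout: the filesystem sitting on the DAX device
 * registers these ops, and the DAX/pmem core reports failed ranges
 * instead of tracking per-page ownership itself.
 */
struct dax_holder_ops {
	/*
	 * @offset and @len describe the failed byte range relative to
	 * the start of the DAX device; the filesystem resolves it to
	 * the owning file(s) and offsets via its own rmap.
	 */
	int (*notify_failure)(struct dax_device *dax_dev,
			      u64 offset, u64 len, int mf_flags);
};

XFS would presumably implement the op as an on-demand walk of its rmap
btree over the failed range, so nothing extra is maintained until an
error actually occurs.)
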
>>>
>>> Hi Dave,
>>>
>>> I ran into some difficulties when trying to implement the per-extent
>>> rmap tracking.  So, I re-read your comments and found that I was
>>> misunderstanding what you described here.
>>>
>>> I think what you mean is: we don't need the in-memory dax-rmap
>>> tracking now.  Just ask the FS for the owner information associated
>>> with a page when a memory failure occurs.  So the per-page (or even
>>> per-extent) dax-rmap is unnecessary in this case.  Is that right?
>>
>> Right.  XFS already has its own rmap tree.
>
> *nod*
>
>>> Based on this, we only need to store the extent information of a
>>> fsdax page in its ->mapping (by searching the FS).  Then we can
>>> obtain the owners of this page (also by searching the FS) when a
>>> memory failure or other rmap case occurs.
>>
>> I don't even think you need that much.  All you need is the "physical"
>> offset of that page within the pmem device (e.g. 'this is the 307th 4k
>> page == offset 1257472 since the start of /dev/pmem0') and xfs can look
>> up the owner of that range of physical storage and deal with it as
>> needed.
>
> Right.  If we have the dax device associated with the page that had
> the failure, then we can determine the offset of the page into the
> block device address space, and that's all we need to find the owner
> of the page in the filesystem.
>
> Note that there may actually be no owner - the page that had the
> fault might land in free space, in which case we can simply zero
> the page and clear the error.

OK.  Thanks for pointing that out.

>
>>> So, a fsdax page is no longer associated with a specific file, but
>>> with a FS (or the pmem device).  I think that's easier to understand
>>> and implement.
>
> Effectively, yes.  But we shouldn't need to actually associate the
> page with anything at the filesystem level because it is already
> associated with a DAX device at a lower level via a dev_pagemap.
> The hardware page fault already runs through this code,
> memory_failure_dev_pagemap(), before it gets to the DAX code, so
> really all we need to do is have that function pass us the page, the
> offset into the device and, say, the struct dax_device associated
> with that page, so we can get to the filesystem superblock we can
> then use for rmap lookups...

OK.  I was just thinking about how I can execute the FS rmap search
from the memory-failure path.  Thanks again for pointing that out.  :)

--
Thanks,
Ruan Shiyang.

> Cheers,
>
> Dave.
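
(Putting the pieces of the thread together: a rough sketch of the
memory-failure path being discussed.  memory_failure_dev_pagemap() is
the real entry point; every other name here -- dax_report_failure(),
dax_start_pfn(), dax_get_holder_sb(), ->corrupted_range() -- is
hypothetical, a stand-in for whatever interface eventually gets
written:

#include <linux/fs.h>
#include <linux/mm.h>

static int dax_report_failure(struct dax_device *dax_dev, struct page *page)
{
	struct super_block *sb;
	u64 offset;

	/*
	 * Darrick's example: the 307th 4k page of /dev/pmem0 sits at
	 * byte offset 307 * 4096 == 1257472 from the device start.
	 */
	offset = (u64)(page_to_pfn(page) - dax_start_pfn(dax_dev))
			<< PAGE_SHIFT;

	/* Find the filesystem mounted on this DAX device, if any. */
	sb = dax_get_holder_sb(dax_dev);
	if (!sb)
		return -ENXIO;

	/*
	 * The FS walks its rmap btree for this range.  If no owner is
	 * found, the page is free space: zero it and clear the error.
	 */
	return sb->s_op->corrupted_range(sb, offset, PAGE_SIZE);
}

memory_failure_dev_pagemap() would call something like this once it has
resolved the struct page, so that the filesystem, not the DAX core,
decides what owns the poisoned page.)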