To: SeongJae Park
Cc: akpm@linux-foundation.org, markubo@amazon.com, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, SeongJae Park
References: <20210831104938.33548-1-sjpark@amazon.de>
From: David Hildenbrand
Organization: Red Hat
Subject: Re: [PATCH] mm/damon/vaddr: Safely walk page table
Date: Tue, 31 Aug 2021 13:46:42 +0200
In-Reply-To: <20210831104938.33548-1-sjpark@amazon.de>

On 31.08.21 12:49, SeongJae Park wrote:
> From: SeongJae Park
>
> On Tue, 31 Aug 2021 11:53:05 +0200 David Hildenbrand wrote:
>
>> On 27.08.21 17:04, SeongJae Park wrote:
>>> From: SeongJae Park
>>>
>>> Commit d7f647622761 ("mm/damon: implement primitives for the virtual
>>> memory address spaces") of linux-mm[1] tries to find the PTE or PMD for
>>> an arbitrary virtual address using 'follow_invalidate_pte()' without
>>> proper locking[2]. This commit fixes the issue by using another page
>>> table walk function for a more general use case, under proper locking.
>>>
>>> [1] https://github.com/hnaz/linux-mm/commit/d7f647622761
>>> [2] https://lore.kernel.org/linux-mm/3b094493-9c1e-6024-bfd5-7eca66399b7e@redhat.com
>>>
>>> Fixes: d7f647622761 ("mm/damon: implement primitives for the virtual memory address spaces")
>>> Reported-by: David Hildenbrand
>>> Signed-off-by: SeongJae Park
>>> ---
>>>  mm/damon/vaddr.c | 81 +++++++++++++++++++++++++++++++++++++++++++-----
>>>  1 file changed, 74 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
>>> index 230db7413278..b3677f2ef54b 100644
>>> --- a/mm/damon/vaddr.c
>>> +++ b/mm/damon/vaddr.c
>>> @@ -8,10 +8,12 @@
>>>  #define pr_fmt(fmt) "damon-va: " fmt
>>>
>>>  #include
>>> +#include
>>>  #include
>>>  #include
>>>  #include
>>>  #include
>>> +#include
>>>  #include
>>>  #include
>>>  #include
>>> @@ -446,14 +448,69 @@ static void damon_pmdp_mkold(pmd_t *pmd, struct mm_struct *mm,
>>>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>>>  }
>>>
>>> +struct damon_walk_private {
>>> +	pmd_t *pmd;
>>> +	pte_t *pte;
>>> +	spinlock_t *ptl;
>>> +};
>>> +
>>> +static int damon_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
>>> +		struct mm_walk *walk)
>>> +{
>>> +	struct damon_walk_private *priv = walk->private;
>>> +
>>> +	if (pmd_huge(*pmd)) {
>>> +		priv->ptl = pmd_lock(walk->mm, pmd);
>>> +		if (pmd_huge(*pmd)) {
>>> +			priv->pmd = pmd;
>>> +			return 0;
>>> +		}
>>> +		spin_unlock(priv->ptl);
>>> +	}
>>> +
>>> +	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
>>> +		return -EINVAL;
>>> +	priv->pte = pte_offset_map_lock(walk->mm, pmd, addr, &priv->ptl);
>>> +	if (!pte_present(*priv->pte)) {
>>> +		pte_unmap_unlock(priv->pte, priv->ptl);
>>> +		priv->pte = NULL;
>>> +		return -EINVAL;
>>> +	}
>>> +	return 0;
>>> +}
>>> +
>>> +static struct mm_walk_ops damon_walk_ops = {
>>> +	.pmd_entry = damon_pmd_entry,
>>> +};
>>> +
>>> +int damon_follow_pte_pmd(struct mm_struct *mm, unsigned long addr,
>>> +		struct damon_walk_private *private)
>>> +{
>>> +	int rc;
>>> +
>>> +	private->pte = NULL;
>>> +	private->pmd = NULL;
>>> +	rc = walk_page_range(mm, addr, addr + 1, &damon_walk_ops, private);
>>> +	if (!rc && !private->pte && !private->pmd)
>>> +		return -EINVAL;
>>> +	return rc;
>>> +}
>>> +
>>>  static void damon_va_mkold(struct mm_struct *mm, unsigned long addr)
>>>  {
>>> -	pte_t *pte = NULL;
>>> -	pmd_t *pmd = NULL;
>>> +	struct damon_walk_private walk_result;
>>> +	pte_t *pte;
>>> +	pmd_t *pmd;
>>>  	spinlock_t *ptl;
>>>
>>> -	if (follow_invalidate_pte(mm, addr, NULL, &pte, &pmd, &ptl))
>>> +	mmap_write_lock(mm);
>>
>> Can you elaborate why mmap_read_lock() isn't sufficient for your use
>> case? The write mode might heavily affect DAMON's performance and
>> workload impact.
>
> Because, as you also mentioned in the previous mail, 'we can walk page tables
> ignoring VMAs with the mmap semaphore held in write mode', and in this case we
> don't know to which VMA the address belongs. I thought the link to the mail
> could help people understand the reason. But, as you are suggesting, I now
> think putting an elaborated explanation here would be much better. I will also
> put a warning about the possible performance impact.

walk_page_range() makes sure to skip any VMA holes and only walks ranges
within VMAs. With the mmap sem held in read mode, the VMA layout (mostly)
cannot change, so calling walk_page_range() is fine. So pagewalk.c
properly takes care of VMAs.
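For illustration, a minimal read-locked walk could look like the
following (untested sketch, not from the patch; every example_*
identifier is made up here, only the core mm/pagewalk APIs are real):

#include <linux/mm.h>
#include <linux/mmap_lock.h>
#include <linux/pagewalk.h>

/* Clear the young bit of the PTE mapping @addr, if one is present. */
static int example_pmd_entry(pmd_t *pmd, unsigned long addr,
		unsigned long next, struct mm_walk *walk)
{
	spinlock_t *ptl;
	pte_t *pte;

	/* Skip empty/unstable PMDs; huge PMD handling omitted for brevity. */
	if (pmd_trans_unstable(pmd))
		return 0;

	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
	if (pte_present(*pte))
		ptep_test_and_clear_young(walk->vma, addr, pte);
	pte_unmap_unlock(pte, ptl);
	return 0;
}

/* Returning 1 tells pagewalk.c to skip the whole VMA. */
static int example_test_walk(unsigned long start, unsigned long end,
		struct mm_walk *walk)
{
	return vma_is_anonymous(walk->vma) ? 0 : 1;
}

static const struct mm_walk_ops example_walk_ops = {
	.pmd_entry	= example_pmd_entry,
	.test_walk	= example_test_walk,
};

static void example_mkold(struct mm_struct *mm, unsigned long addr)
{
	/* Read mode is enough: the VMA layout cannot change under us. */
	mmap_read_lock(mm);
	walk_page_range(mm, addr, addr + 1, &example_walk_ops, NULL);
	mmap_read_unlock(mm);
}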
As an example, take a look at the MADV_COLD handling in mm/madvise.c:
madvise_need_mmap_write() returns "0", and we end up calling
madvise_cold()->...->walk_page_range() with mmap_read_lock().

You can exclude any VMAs you don't care about in the test_walk()
callback, if required.

--
Thanks,

David / dhildenb