Message-ID: <6e2539b7-b4c7-95dc-e4ac-27692d955936@redhat.com>
Date: Wed, 24 Aug 2022 17:13:08 +0200
Subject: Re: [PATCH v2 1/5] mm/hugetlb: fix races when looking up a CONT-PTE size hugetlb page
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
To: Baolin Wang <baolin.wang@linux.alibaba.com>, Mike Kravetz
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
In-Reply-To: <22585fc8-b0bc-0e14-d121-2767cd178424@linux.alibaba.com>

On 24.08.22 17:06, Baolin Wang wrote:
> On 8/24/2022 10:33 PM, David Hildenbrand wrote:
>> On 24.08.22 16:30, Baolin Wang wrote:
>>> On 8/24/2022 7:55 PM, David Hildenbrand wrote:
>>>> On 24.08.22 11:41, Baolin Wang wrote:
>>>>> On 8/24/2022 3:31 PM, David Hildenbrand wrote:
>>>>>>>>>> IMHO, these follow_huge_xxx() functions were arch-specific at
>>>>>>>>>> first and were moved into the common hugetlb.c by commit
>>>>>>>>>> 9e5fc74c3025 ("mm: hugetlb: Copy general hugetlb code from x86
>>>>>>>>>> to mm"), and there are still some arch-specific
>>>>>>>>>> follow_huge_xxx() definitions, for example:
>>>>>>>>>> ia64: follow_huge_addr
>>>>>>>>>> powerpc: follow_huge_pd
>>>>>>>>>> s390: follow_huge_pud
>>>>>>>>>>
>>>>>>>>>> What I mean is that follow_hugetlb_page() is a common,
>>>>>>>>>> non-arch-specific function; is it suitable to change it to be
>>>>>>>>>> arch-specific?
>>>>>>>>>> And thinking more, can we rename follow_hugetlb_page() to
>>>>>>>>>> hugetlb_page_faultin() and simplify it to only handle the page
>>>>>>>>>> faults of hugetlb, like faultin_page() for normal pages? That
>>>>>>>>>> way we can make sure only follow_page_mask() handles hugetlb.
>>>>>>>>
>>>>>>>> Something like that might work, but you still have two page table
>>>>>>>> walkers for hugetlb. I like David's idea (if I understand it
>>>>>>>> correctly) of
>>>>>>>
>>>>>>> What I mean is we may change the hugetlb handling to be like that
>>>>>>> for normal pages:
>>>>>>> 1) use follow_page_mask() to look up the hugetlb page first.
>>>>>>> 2) if we cannot get the hugetlb page, try to fault it in with
>>>>>>> hugetlb_page_faultin().
>>>>>>> 3) if the page fault succeeded, retry the lookup with
>>>>>>> follow_page_mask().
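[A minimal sketch of the 1)-3) flow proposed above, for illustration
only: hugetlb_page_faultin() is the helper proposed here and does not
exist in the kernel, hugetlb_lookup_or_fault() is an invented name, and
the follow_page_context bookkeeping of mm/gup.c is heavily simplified.]

static struct page *hugetlb_lookup_or_fault(struct vm_area_struct *vma,
					    unsigned long address,
					    unsigned int foll_flags)
{
	struct follow_page_context ctx = { NULL };
	struct page *page;

	for (;;) {
		/* 1) try to look the hugetlb page up first */
		page = follow_page_mask(vma, address, foll_flags, &ctx);
		if (page)
			return page;

		/* 2) not present: fault it in, like faultin_page() does */
		if (hugetlb_page_faultin(vma, address, foll_flags))
			return NULL;	/* the fault failed */

		/* 3) the fault succeeded: retry the lookup */
	}
}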
>>>>>>
>>>>>> That implies putting more hugetlbfs special code into generic GUP,
>>>>>> turning it even more complicated. But of course, it depends on what
>>>>>> the end result looks like. My gut feeling was that hugetlb is better
>>>>>> handled in follow_hugetlb_page() separately (just like we do with a
>>>>>> lot of other page table walkers).
>>>>>
>>>>> OK, fair enough.
>>>>>
>>>>>>> Just a rough thought; I need to investigate both my idea and
>>>>>>> David's idea further.
>>>>>>>
>>>>>>>> using follow_hugetlb_page for both cases. As noted, it will need
>>>>>>>> to be taught how to not trigger faults in the follow_page_mask
>>>>>>>> case.
>>>>>>>
>>>>>>> Anyway, I also agree we need some cleanup, and first I think we
>>>>>>> should clean up those arch-specific follow_huge_xxx()
>>>>>>> implementations on architectures where they are similar to the
>>>>>>> common ones. I will look into these.
>>>>>>
>>>>>> There was a recent discussion on that, e.g.:
>>>>>>
>>>>>> https://lkml.kernel.org/r/20220818135717.609eef8a@thinkpad
>>>>>
>>>>> Thanks.
>>>>>
>>>>>>> However, considering that the cleanup may need more investigation
>>>>>>> and refactoring, for now I prefer to get the bug-fix patches of
>>>>>>> this patchset into mainline first, since they are suitable for
>>>>>>> backporting to older versions to fix potential race issues. Mike
>>>>>>> and David, what do you think? Could you help to review these
>>>>>>> patches? Thanks.
>>>>>>
>>>>>> Patch #1 certainly adds more special code just to handle another
>>>>>> hugetlb corner case (CONT pages), and maybe just making it all use
>>>>>> follow_hugetlb_page() would be even cleaner and less error prone.
>>>>>>
>>>>>> I agree that the locking is shaky, but I'm not sure if we really
>>>>>> want to backport this to stable trees:
>>>>>>
>>>>>> https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
>>>>>>
>>>>>> "It must fix a real bug that bothers people (not a, 'This could be a
>>>>>> problem...' type thing)."
>>>>>>
>>>>>> Do we actually have any instance of this being a real (and not a
>>>>>> theoretical) problem? If not, I'd rather clean it all up right away.
>>>>>
>>>>> I think this is a real problem (not theoretical), and it is easy to
>>>>> write some code to show the issue. For example, suppose thread A is
>>>>> trying to look up a CONT-PTE size hugetlb page under the lock while
>>>>> another thread B migrates that CONT-PTE hugetlb page at the same
>>>>> time. That can cause thread A to get an incorrect page; if thread A
>>>>> then wants to do something with this incorrect page, an error occurs.
>>>>>
>>>>> Actually we also want to backport these fixes to distros with old
>>>>> kernel versions to make hugetlb more stable. Otherwise we will hit
>>>>> these issues sooner or later if customers use CONT-PTE/PMD hugetlb.
>>>>>
>>>>> Anyway, if you and Mike still think these issues are not important
>>>>> enough to be fixed in the old versions, I can do the cleanup first.
>>>>
>>>> [asking myself which follow_page() users actually care about hugetlb,
>>>> and why we need this handling in follow_page at all]
>>>>
>>>> Which follow_page() user do we care about here? Primarily mm/migrate.c
>>>> only, I assume?
>>>
>>> Right, it mainly affects the move_pages() syscall, I think. I cannot
>>> know all of the users of the move_pages() syscall now or in the future
>>> in our data center, but like I said, the move_pages() syscall + hugetlb
>>> can be a real potential stability issue.
>>
>> I wonder if we can get rid of follow_page() completely; there are not
>> too many users. Or alternatively, simply make it use the general GUP
>> infrastructure more clearly. We'd need something like FOLL_NOFAULT that
>> also covers "absolutely no faults".
>
> I am not sure I get your point. So you want to change it to use
> __get_user_pages() (or similar wrappers) to look up a normal page or a
> hugetlb page instead of follow_page(), and to add a new FOLL_NOFAULT
> flag to __get_user_pages()?

Essentially just getting rid of follow_page() completely or making it a
wrapper of __get_user_pages().

> If I understand correctly, we still need more work to move those
> arch-specific follow_huge_xxx() functions into follow_hugetlb_page()
> first, like we discussed before? That seems not backportable either.

I'm not sure we need all that magic in these arch-specific helpers after
all. I haven't looked into the details, but I really wonder why they
handle something that follow_hugetlb_page() cannot easily handle. It all
smells like legacy cruft.

> I am not against your idea, and I also agree that we should do some
> cleanup. But the point is whether we need to backport patches to fix
> this issue, which affects the move_pages() syscall; if the answer is
> yes, I think my current fixes are suitable for backporting.

I really don't like adding more make-legacy-cruft-happy code unless
there is *real* need for it.

(you could always just fix old kernels you care about with your patches
here -- do they have to be in mainline? don't think so)

But of course, it's up to Mike to decide, just my 2 cents :)

-- 
Thanks,

David / dhildenb
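[The lookup-vs-migration race described above could be exercised with a
userspace reproducer roughly like the untested sketch below. It assumes
an arm64 machine with 4kB base pages, preallocated 64kB (CONT-PTE sized)
hugetlb pages, and at least two NUMA nodes; move_pages(2) with a NULL
nodes array performs a lookup-only walk that reaches follow_page().
Build with something like: gcc -pthread repro.c -lnuma]

#define _GNU_SOURCE
#include <numaif.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MAP_HUGE_SHIFT
#define MAP_HUGE_SHIFT	26
#endif
#define MAP_HUGE_64KB	(16 << MAP_HUGE_SHIFT)	/* 64kB: CONT-PTE size on arm64/4k */
#define LEN		(64UL * 1024)

static void *buf;

/* Thread B: keep migrating the hugetlb page between nodes 0 and 1. */
static void *migrate_loop(void *arg)
{
	void *pages[1] = { buf };
	int status[1];
	int node;

	(void)arg;
	for (;;) {
		node = 0;
		move_pages(0, 1, pages, &node, status, MPOL_MF_MOVE);
		node = 1;
		move_pages(0, 1, pages, &node, status, MPOL_MF_MOVE);
	}
	return NULL;
}

int main(void)
{
	void *pages[1];
	int status[1];
	pthread_t tid;

	buf = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_64KB,
		   -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(buf, 0, LEN);	/* fault the CONT-PTE hugetlb page in */
	pages[0] = buf;

	pthread_create(&tid, NULL, migrate_loop, NULL);

	/*
	 * Thread A: lookup-only move_pages() (nodes == NULL), which walks
	 * the page tables via follow_page(). On an affected kernel this can
	 * race with the concurrent migration and act on the wrong page.
	 */
	for (;;)
		move_pages(0, 1, pages, NULL, status, 0);
}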
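[Likewise, the follow_page()-as-a-wrapper idea might look roughly like
the sketch below. Note that the FOLL_NOFAULT flag that exists today only
tells GUP not to fault pages in; the stricter "absolutely no faults"
semantics assumed here would be new, so this is a sketch of intent, not
working code.]

/* Sketch: follow_page() reduced to a thin wrapper over __get_user_pages(). */
struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
			 unsigned int foll_flags)
{
	struct page *page;
	long ret;

	/* Assumes FOLL_NOFAULT with strengthened "absolutely no faults"
	 * semantics, so __get_user_pages() never triggers a fault here. */
	ret = __get_user_pages(vma->vm_mm, address, 1,
			       foll_flags | FOLL_NOFAULT, &page, NULL, NULL);
	return ret == 1 ? page : NULL;
}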