Message-ID: <3fc8a61b-ad70-8092-9197-4920e0897593@redhat.com>
Date: Mon, 1 Aug 2022 10:30:45 +0200
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
Subject: Re: [PATCH V2] mm: fix use-after free of page_ext after race with memory-offline
To: Charan Teja Kalla, akpm@linux-foundation.org, quic_pkondeti@quicinc.com,
 pasha.tatashin@soleen.com, sjpark@amazon.de, sieberf@amazon.com,
 shakeelb@google.com, dhowells@redhat.com, willy@infradead.org,
 liuting.0x7c00@bytedance.com, minchan@kernel.org, Michal Hocko
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
References: <1658931303-17024-1-git-send-email-quic_charante@quicinc.com>
 <6168cf49-bf75-2ebb-ab55-30de473835e3@redhat.com>

On 28.07.22 11:53, Charan Teja Kalla wrote:
> Thanks David for the inputs!!
>
> On 7/27/2022 10:59 PM, David Hildenbrand wrote:
>>> Fix those paths where offline races with page_ext access by
>>> maintaining synchronization with the rcu lock; this is achieved in
>>> 3 steps:
>>>
>>> 1) Invalidate all the page_ext's of the sections of a memory block
>>> by storing a flag in the LSB of mem_section->page_ext.
>>>
>>> 2) Wait till all the existing readers finish working with the
>>> ->page_ext's, using synchronize_rcu(). Any parallel process that
>>> starts after this call will not get a page_ext, through
>>> lookup_page_ext(), for the block on which the offline operation is
>>> being performed.
>>>
>>> 3) Now safely free all sections' ->page_ext's of the block on which
>>> the offline operation is being performed.
>>>
>>> Thanks to David Hildenbrand for his views/suggestions on the initial
>>> discussion[1] and Pavan Kondeti for various inputs on this patch.
>>>
>>> FAQ's:
>>>
>>> Q) Does page_ext_[get|put]() need to be used for every page_ext
>>> access?
>>>
>>> A) NO, the synchronization is really not needed in all the paths of
>>> accessing page_ext. One case is where an extra refcount is taken on
>>> a page whose memory block is being offlined. This extra refcount
>>> makes the offline operation fail, so the page_ext is never freed.
>>> Another case is where the page is already being freed and we only
>>> reset its page_owner.
>>>
>>> Some examples where the rcu lock is not taken while accessing
>>> page_ext:
>>>
>>> 1) In migration (where we also migrate the page_owner information),
>>> we take an extra refcount on the source and destination pages and
>>> then start the migration. This extra refcount makes
>>> test_pages_isolated() fail, thus retrying the offline operation.
>>>
>>> 2) In free_pages_prepare(), we reset the page_owner (through
>>> page_ext), which again doesn't need protection because the page is
>>> already being freed (through only one path).
>>>
>>> So, users need not use page_ext_[get|put]() when they are sure that
>>> an extra refcount is taken on the page, preventing the offline
>>> operation.
>>>
>>> Q) Why can't page_ext be freed in the hot_remove path, where the
>>> memmap is also freed?
>>>
>>> A) As per David's answers, there are many reasons and a few are:
>>>
>>> 1) Discussions have happened in the past about eventually also using
>>> rcu protection for handling pfn_to_online_page(). So doing it
>>> cleanly here is certainly an improvement.
>>>
>>> 2) It's not good having to scatter section online checks all over
>>> the place in the page_ext code. Once there is a difference between
>>> active vs. stale page_ext data, things get a bit messy and error
>>> prone. This is already ugly enough in our generic memmap handling
>>> code.
>>>
>>> 3) Having on-demand allocations, such as KASAN or page_ext, from the
>>> memory online notifier is at least currently cleaner, because we
>>> don't have to handle each and every subsystem that hooks into that
>>> during the core memory hotadd/remove phase, which primarily only
>>> sets up the vmemmap, direct map and memory block devices.
>>>
>>> [1] https://lore.kernel.org/linux-mm/59edde13-4167-8550-86f0-11fc67882107@quicinc.com/
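(Side note for anyone skimming the thread: the scheme described above
boils down to roughly the following. This is only a simplified sketch,
not the actual patch code: page_ext_get()/page_ext_put() are the
accessors the patch proposes, while for_each_section_of_block() and
free_page_ext_of_section() are made-up placeholders, not real kernel
helpers.)

	/* Reader side: page_ext is only dereferenced inside an RCU
	 * read-side critical section, so it cannot be freed under us. */
	struct page_ext *page_ext = page_ext_get(page); /* takes rcu_read_lock() */

	if (page_ext) {
		/* ... use page_ext (e.g. page_owner data) ... */
		page_ext_put(page_ext);                 /* drops rcu_read_lock() */
	}

	/* Offline side: */
	/* 1) Invalidate: tag the pointers so that new calls to
	 *    lookup_page_ext() return NULL for this block. */
	for_each_section_of_block(ms)
		WRITE_ONCE(ms->page_ext,
			   (void *)((unsigned long)ms->page_ext | PAGE_EXT_INVALID));

	/* 2) Wait until every reader that might still hold a stale
	 *    pointer has left its RCU read-side critical section. */
	synchronize_rcu();

	/* 3) Only now actually free the page_ext storage of the block. */
	for_each_section_of_block(ms)
		free_page_ext_of_section(ms);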
>>
>> I guess if we care about the synchronize_rcu() we could go crazy with
>> temporary allocations for data-to-free + call_rcu().
>
> IMO, the overhead of a single synchronize_rcu() call shouldn't matter,
> especially when the memory offline operation itself is expected to
> take seconds. On the Snapdragon system, the fastest I have seen a
> complete memory block of size 512M go offline is 3-4 secs. Agreed that
> this time depends on a lot of other factors too, but I wanted to raise
> the point that this is really not a path where tiny optimizations need
> to be strictly considered. Please help correct me if I am really
> downplaying the scenario here.

I agree that we should optimize only if we find this to be an issue.

> But then I moved to a single synchronize_rcu() just to avoid any
> visible effects that could be caused by multiple synchronize_rcu()
> calls for a single memory block with a lot of sections.

Makes sense.

> Having said that, I am open to going for call_rcu(), and in fact it
> would be a much simpler change where I can do the freeing of page_ext
> in __free_page_ext() itself, which is called for every section,
> thereby avoiding the extra tracking flag PAGE_EXT_INVALID:
>
>	WRITE_ONCE(ms->page_ext, NULL);
>	call_rcu(rcu_head, fun); // Free in fun()
>
> Or is your opinion to use call_rcu() only once, in place of
> synchronize_rcu(), after invalidating all the page_ext's of the memory
> block?

Yeah, that would be an option. And if you fail to allocate a temporary
buffer to hold the data-to-free (a structure containing the rcu_head),
the slower fallback path would be synchronize_rcu().

But again, I'm also not sure if we have to optimize here right now.

-- 
Thanks,

David / dhildenb
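For completeness, the call_rcu()-with-fallback idea discussed above
could look roughly like the following. This is an untested, purely
illustrative sketch: struct page_ext_free_work, free_page_ext_chunk()
and MAX_BLOCK_SECTIONS are made-up names, not anything from the actual
patch.

	/* Untested, illustrative sketch only. */
	struct page_ext_free_work {
		struct rcu_head rcu;
		void *chunks[MAX_BLOCK_SECTIONS];	/* stale page_ext storage */
		int nr;
	};

	static void page_ext_free_rcu(struct rcu_head *rcu)
	{
		struct page_ext_free_work *work =
			container_of(rcu, struct page_ext_free_work, rcu);
		int i;

		/* Runs after a grace period: no reader can still see these. */
		for (i = 0; i < work->nr; i++)
			free_page_ext_chunk(work->chunks[i]);	/* made-up helper */
		kfree(work);
	}

	/* In the offline path, after invalidating ms->page_ext for the block: */
	struct page_ext_free_work *work = kmalloc(sizeof(*work), GFP_KERNEL);

	if (work) {
		/* Collect the stale page_ext pointers of the block into
		 * work->chunks, then let RCU free them asynchronously. */
		call_rcu(&work->rcu, page_ext_free_rcu);
	} else {
		/* Allocation failed: fall back to the slow but simple path. */
		synchronize_rcu();
		/* ... free the page_ext storage directly here ... */
	}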