In-Reply-To: <20210106095520.GJ13207@dhcp22.suse.cz>
From: Dan Williams
Date: Tue, 12 Jan 2021 01:15:12 -0800
Subject: Re: [PATCH] mm: Teach pfn_to_online_page() about ZONE_DEVICE section collisions
To: Michal Hocko
Cc: Linux MM, Andrew Morton, David Hildenbrand, Linux Kernel Mailing List

On Wed, Jan 6, 2021 at 1:55 AM Michal Hocko wrote:
>
> On Tue 05-01-21 20:07:18, Dan Williams wrote:
> > While pfn_to_online_page() is able to determine pfn_valid() at
> > subsection granularity it is not able to reliably determine if a given
> > pfn is also online if the section is mixed with ZONE_DEVICE pfns.
>
> I would call out the problem more explicitly. E.g. something like
> "
> This means that pfn_to_online_page can lead to false positives and allow
> to return a struct page which is not fully initialized because it
> belongs to ZONE_DEVICE (PUT AN EXAMPLE OF SUCH AN UNINITIALIZED PAGE
> HERE). That can lead to either a crash on a PagePoisoned assertion or a
> silently broken page state with debugging disabled.
> "

Apologies for the long pause in this conversation; I went off and wrote
a test to trigger this condition so I could quote it directly. It turns
out soft_offline_page(), even before fixing pfn_to_online_page(), is
broken as it leaks a page reference.

> I would also appreciate a more specific note about how the existing HW
> can trigger this. You have mentioned 64MB subsection examples in the
> other email. It would be great to mention it here as well.

Sure.

> > Update move_pfn_range_to_zone() to flag (SECTION_TAINT_ZONE_DEVICE) a
> > section that mixes ZONE_DEVICE pfns with other online pfns.
> > With SECTION_TAINT_ZONE_DEVICE to delineate, pfn_to_online_page() can
> > fall back to a slow-path check for ZONE_DEVICE pfns in an online
> > section.
> >
> > With this implementation of pfn_to_online_page(), pfn-walkers mostly
> > only need to check section metadata to determine pfn validity. In the
> > rare case of mixed-zone sections the pfn-walker will skip offline
> > ZONE_DEVICE pfns as expected.
>
> The above paragraph is slightly confusing. You do not require
> pfn-walkers to check anything, right? Section metadata is something that
> is and should be hidden from them. They are asking for an online page
> and get it or NULL. Nothing more, nothing less.

Yeah, I'll drop it. I was describing what service pfn_to_online_page()
performs for a pfn-walker, but it's awkwardly worded.

> > Other notes:
> >
> > Because the collision case is rare, and for simplicity, the
> > SECTION_TAINT_ZONE_DEVICE flag is never cleared once set.
>
> I do not see a problem with that.
>
> > pfn_to_online_page() was already borderline too large to be a macro /
> > inline function, but the additional logic certainly pushed it over that
> > threshold, and so it is moved to an out-of-line helper.
>
> Worth a separate patch.
>
> The approach is sensible.

Thanks!

> I was worried that we do not have sufficient space for a new flag, but
> the comment explains we have 6 bits available. I haven't double-checked
> that for the current state of the code. The comment is quite recent and
> I do not remember any substantial changes in this area. Still, this is
> rather fragile, because an unexpected alignment would be a runtime
> failure, which is good to stop random corruptions but not ideal.
>
> Is there any way to explicitly test for this? E.g. enforce a shared
> section by qemu and then trigger a pfn walk?
> > Fixes: ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
> > Cc: Andrew Morton
> > Reported-by: Michal Hocko
> > Reported-by: David Hildenbrand
> > Signed-off-by: Dan Williams
>
> [...]
>
> > +static int zone_id(const struct zone *zone)
> > +{
> > +	struct pglist_data *pgdat = zone->zone_pgdat;
> > +
> > +	return zone - pgdat->node_zones;
> > +}
>
> We already have zone_idx()

Noted.

> > +static void section_taint_zone_device(struct zone *zone, unsigned long pfn)
> > +{
> > +	struct mem_section *ms = __nr_to_section(pfn_to_section_nr(pfn));
> > +
> > +	if (zone_id(zone) != ZONE_DEVICE)
> > +		return;
> > +
> > +	if (IS_ALIGNED(pfn, PAGES_PER_SECTION))
> > +		return;
> > +
> > +	ms->section_mem_map |= SECTION_TAINT_ZONE_DEVICE;
> > +}
> > +
> >  /*
> >   * Associate the pfn range with the given zone, initializing the memmaps
> >   * and resizing the pgdat/zone data to span the added pages. After this
> > @@ -707,6 +769,15 @@ void __ref move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
> >  	resize_pgdat_range(pgdat, start_pfn, nr_pages);
> >  	pgdat_resize_unlock(pgdat, &flags);
> >
> > +	/*
> > +	 * Subsection population requires care in pfn_to_online_page().
> > +	 * Set the taint to enable the slow path detection of
> > +	 * ZONE_DEVICE pages in an otherwise ZONE_{NORMAL,MOVABLE}
> > +	 * section.
> > +	 */
> > +	section_taint_zone_device(zone, start_pfn);
> > +	section_taint_zone_device(zone, start_pfn + nr_pages);
>
> I think it would be better to add the checks here and only set the flag
> in the called function. SECTION_TAINT_ZONE_DEVICE should go to where we
> define the helpers for the other flags.

Done.