From: Pavel Tatashin
Date: Wed, 30 Oct 2019 13:53:52 -0400
Subject: Re: [PATCH] mm/sparse: Consistently do not zero memmap
To: Michal Hocko
Cc: Vincent Whitchurch, "akpm@linux-foundation.org", "osalvador@suse.de",
 "linux-mm@kvack.org", "linux-kernel@vger.kernel.org"
In-Reply-To: <20191030173123.GK31513@dhcp22.suse.cz>
References: <20191030131122.8256-1-vincent.whitchurch@axis.com>
 <20191030132958.GD31513@dhcp22.suse.cz>
 <20191030140216.i26n22asgafckfxy@axis.com>
 <20191030141259.GE31513@dhcp22.suse.cz>
 <20191030153150.GI31513@dhcp22.suse.cz>
 <20191030173123.GK31513@dhcp22.suse.cz>

On Wed, Oct 30, 2019 at 1:31 PM Michal Hocko wrote:
>
> On Wed 30-10-19 12:53:41, Pavel Tatashin wrote:
> > On Wed, Oct 30, 2019 at 11:31 AM Michal Hocko wrote:
> > >
> > > On Wed 30-10-19 11:20:44, Pavel Tatashin wrote:
> > > > On Wed, Oct 30, 2019 at 10:13 AM Michal Hocko wrote:
> > > > >
> > > > > [Add Pavel - the email thread starts http://lkml.kernel.org/r/20191030131122.8256-1-vincent.whitchurch@axis.com
> > > > > but it used your old email address]
> > > > >
> > > > > On Wed 30-10-19 15:02:16, Vincent Whitchurch wrote:
> > > > > > On Wed, Oct 30, 2019 at 02:29:58PM +0100, Michal Hocko wrote:
> > > > > > > On Wed 30-10-19 14:11:22, Vincent Whitchurch wrote:
> > > > > > > > (I noticed this because on my ARM64 platform, with 1 GiB of memory the
> > > > > > > > first [and only] section is allocated from the zeroing path, while with
> > > > > > > > 2 GiB of memory the first 1 GiB section is allocated from the
> > > > > > > > non-zeroing path.)
> > > > > > >
> > > > > > > Do I get it right that sparse_buffer_init couldn't allocate memmap for
> > > > > > > the full node for some reason, and so sparse_init_nid would have to
> > > > > > > allocate one for each memory section?
> > > > > >
> > > > > > Not quite. The sparsemap_buf is successfully allocated with the correct
> > > > > > size in sparse_buffer_init(), but sparse_buffer_alloc() fails to
> > > > > > allocate the same size from it.
> > > > > >
> > > > > > The reason it fails is that sparse_buffer_alloc() for some reason wants
> > > > > > to return a pointer which is aligned to the allocation size. But the
> > > > > > sparsemap_buf was only allocated with PAGE_SIZE alignment, so there's
> > > > > > not enough space to align it.
> > > > > >
> > > > > > I don't understand the reason for this alignment requirement, since the
> > > > > > fallback path also allocates with PAGE_SIZE alignment. I'm guessing the
> > > > > > alignment is for the VMEMMAP code, which also uses sparse_buffer_alloc()?
> > > > >
> > > > > I am not 100% sure, TBH. Aligning makes some sense when mapping the
> > > > > memmaps to page tables, but that would suggest that sparse_buffer_init
> > > > > is using the wrong alignment, then. It is quite wasteful to allocate
> > > > > a large misaligned block like that.
> > > > >
> > > > > Your patch still makes sense, but this is something to look into.
> > > > >
> > > > > Pavel?
> > > >
> > > > I remember thinking about this large alignment, as it looked out of
> > > > place to me also.
> > > > It was there to keep memmap in single chunks on larger x86 machines.
> > > > Perhaps it can be revisited now.
> > >
> > > Don't we need 2MB aligned memmaps for their PMD mappings?
> >
> > Yes, PMD_SIZE should be the alignment here. It just does not make
> > sense to align to the size.
>
> What about this? It still aligns to the size, but that should be
> correctly done at the section size level.
>
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 72f010d9bff5..ab1e6175ac9a 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -456,8 +456,7 @@ struct page __init *__populate_section_memmap(unsigned long pfn,
>  	if (map)
>  		return map;
>
> -	map = memblock_alloc_try_nid(size,
> -			PAGE_SIZE, addr,
> +	map = memblock_alloc_try_nid(size, size, addr,
>  			MEMBLOCK_ALLOC_ACCESSIBLE, nid);
>  	if (!map)
>  		panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d from=%pa\n",
> @@ -474,8 +473,13 @@ static void __init sparse_buffer_init(unsigned long size, int nid)
>  {
>  	phys_addr_t addr = __pa(MAX_DMA_ADDRESS);
>  	WARN_ON(sparsemap_buf);	/* forgot to call sparse_buffer_fini()? */
> +	/*
> +	 * Pre-allocated buffer is mainly used by __populate_section_memmap
> +	 * and we want it to be properly aligned to the section size - this is
> +	 * especially the case for VMEMMAP which maps memmap to PMDs
> +	 */
>  	sparsemap_buf =
> -		memblock_alloc_try_nid_raw(size, PAGE_SIZE,
> +		memblock_alloc_try_nid_raw(size, section_map_size(),
>  					   addr,
>  					   MEMBLOCK_ALLOC_ACCESSIBLE, nid);
>  	sparsemap_buf_end = sparsemap_buf + size;

This looks good. I think we should also change the alignment in the fallback
path of vmemmap_alloc_block() to be section_map_size().

+++ b/mm/sparse-vmemmap.c
@@ -65,9 +65,10 @@ void * __meminit vmemmap_alloc_block(unsigned long size, int node)
 			warned = true;
 		}
 		return NULL;
-	} else
-		return __earlyonly_bootmem_alloc(node, size, size,
+	} else {
+		return __earlyonly_bootmem_alloc(node, size, section_map_size(),
 				__pa(MAX_DMA_ADDRESS));
+	}
 }

Pasha

>
> --
> Michal Hocko
> SUSE Labs
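
A minimal user-space sketch of the alignment arithmetic discussed above. It is
illustrative only, not the kernel implementation: align_up() mirrors the
rounding that sparse_buffer_alloc() does via PTR_ALIGN(), and the PAGE_SIZE,
SECTION_MAP_SIZE and start-address values are made-up assumptions. It shows why
a sparsemap_buf that is only PAGE_SIZE-aligned and exactly one
section_map_size() long cannot satisfy a single size-aligned request of that
same size, forcing the per-section fallback allocation.

/*
 * Illustrative sketch only -- not the kernel code.  align_up() mimics the
 * rounding that sparse_buffer_alloc() performs with PTR_ALIGN(); the
 * constants and the start address below are assumptions for the example.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE        4096UL
#define SECTION_MAP_SIZE (2UL * 1024 * 1024)  /* e.g. one PMD-sized memmap chunk */

/* Same rounding as the kernel's ALIGN()/PTR_ALIGN() helpers. */
static uintptr_t align_up(uintptr_t p, uintptr_t a)
{
	return (p + a - 1) & ~(a - 1);
}

int main(void)
{
	/* Pretend memblock returned a PAGE_SIZE-aligned, section-sized buffer. */
	uintptr_t buf = align_up(0x40000001UL, PAGE_SIZE); /* 4 KiB aligned, not 2 MiB aligned */
	uintptr_t buf_end = buf + SECTION_MAP_SIZE;

	/* sparse_buffer_alloc(size) first rounds its cursor up to 'size'. */
	uintptr_t ptr = align_up(buf, SECTION_MAP_SIZE);

	if (ptr + SECTION_MAP_SIZE > buf_end)
		printf("request does not fit: aligning would skip %lu of %lu bytes\n",
		       (unsigned long)(ptr - buf), (unsigned long)SECTION_MAP_SIZE);
	else
		printf("request fits at 0x%lx\n", (unsigned long)ptr);
	return 0;
}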