Date: Wed, 16 Oct 2019 10:51:23 +0200
From: Michal Hocko
To: David Hildenbrand
Cc: Anshuman Khandual, linux-mm@kvack.org, Mike Kravetz, Andrew Morton,
	Vlastimil Babka, David Rientjes, Andrea Arcangeli, Oscar Salvador,
	Mel Gorman, Mike Rapoport, Dan Williams, Pavel Tatashin,
	Matthew Wilcox, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm/page_alloc: Make alloc_gigantic_page() available for general use
Message-ID: <20191016085123.GO317@dhcp22.suse.cz>
References: <1571211293-29974-1-git-send-email-anshuman.khandual@arm.com>

On Wed 16-10-19 10:08:21, David Hildenbrand wrote:
> On 16.10.19 09:34, Anshuman Khandual wrote:
[...]
> > +static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
> > +					unsigned long nr_pages)
> > +{
> > +	unsigned long i, end_pfn = start_pfn + nr_pages;
> > +	struct page *page;
> > +
> > +	for (i = start_pfn; i < end_pfn; i++) {
> > +		page = pfn_to_online_page(i);
> > +		if (!page)
> > +			return false;
> > +
> > +		if (page_zone(page) != z)
> > +			return false;
> > +
> > +		if (PageReserved(page))
> > +			return false;
> > +
> > +		if (page_count(page) > 0)
> > +			return false;
> > +
> > +		if (PageHuge(page))
> > +			return false;
> > +	}
> 
> We might still try to allocate a lot of ranges that contain unmovable data
> (we could avoid isolating a lot of page blocks in the first place). I'd love
> to see something like pfn_range_movable() (similar, but different to
> is_mem_section_removable(), which uses has_unmovable_pages()).

Just to make sure I understand: do you want has_unmovable_pages to be
called inside pfn_range_valid_contig?

[...]
> > +struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask,
> > +				int nid, nodemask_t *nodemask)
> > +{
> > +	unsigned long ret, pfn, flags;
> > +	struct zonelist *zonelist;
> > +	struct zone *zone;
> > +	struct zoneref *z;
> > +
> > +	zonelist = node_zonelist(nid, gfp_mask);
> > +	for_each_zone_zonelist_nodemask(zone, z, zonelist,
> > +					gfp_zone(gfp_mask), nodemask) {
> 
> One important part is to never use the MOVABLE zone here (otherwise
> unmovable data would end up on the movable zone). But I guess the caller is
> responsible for that (not pass GFP_MOVABLE) like gigantic pages do.

Well, if the caller uses GFP_MOVABLE then the movability should be
implemented in some form. If that is not the case then it is a bug on
the caller's behalf.

> > +		spin_lock_irqsave(&zone->lock, flags);
> > +
> > +		pfn = ALIGN(zone->zone_start_pfn, nr_pages);
> 
> This alignment does not make too much sense when allowing passing in !power
> of two orders. Maybe the caller should specify the requested alignment
> instead? Or should we enforce this to be aligned to make our life easier for
> now?

Are there any use cases that would require more than the page alignment?

-- 
Michal Hocko
SUSE Labs
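
For illustration of the pfn_range_movable() idea raised above, one way to fold
has_unmovable_pages() into the range check would be to walk the range pageblock
by pageblock and bail out as soon as one block is known to hold unmovable pages.
This is only a rough, untested sketch (not part of the posted patch), and it
assumes has_unmovable_pages() keeps its current
(zone, page, count, migratetype, flags) signature:

/*
 * Rough sketch only: reject a candidate range up front if any pageblock
 * in it holds unmovable pages, instead of discovering that later during
 * isolation/migration.
 */
static bool pfn_range_movable(struct zone *z, unsigned long start_pfn,
			      unsigned long nr_pages)
{
	unsigned long pfn, end_pfn = start_pfn + nr_pages;

	for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages) {
		struct page *page = pfn_to_online_page(pfn);

		if (!page || page_zone(page) != z)
			return false;

		/* has_unmovable_pages() scans one pageblock from @page. */
		if (has_unmovable_pages(z, page, 0, MIGRATE_MOVABLE, 0))
			return false;
	}

	return true;
}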
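
On the alignment question, a hypothetical variant in which the caller states the
alignment it actually needs (rather than deriving it from nr_pages) might look
roughly like the following. The function name and the align_pfn parameter are
made up for illustration; this is an untested sketch built on the helpers the
patch already quotes (pfn_range_valid_contig, alloc_contig_range), not a
proposed implementation:

/*
 * Hypothetical interface: a caller-specified alignment avoids forcing an
 * over-sized ALIGN() of the search start for non-power-of-two nr_pages.
 */
struct page *alloc_contig_pages_aligned(unsigned long nr_pages,
					unsigned long align_pfn, gfp_t gfp_mask,
					int nid, nodemask_t *nodemask)
{
	struct zonelist *zonelist = node_zonelist(nid, gfp_mask);
	unsigned long flags, pfn;
	struct zoneref *z;
	struct zone *zone;

	for_each_zone_zonelist_nodemask(zone, z, zonelist,
					gfp_zone(gfp_mask), nodemask) {
		spin_lock_irqsave(&zone->lock, flags);

		/* Scan the zone in caller-specified alignment steps. */
		for (pfn = ALIGN(zone->zone_start_pfn, align_pfn);
		     pfn + nr_pages <= zone_end_pfn(zone);
		     pfn += align_pfn) {
			if (!pfn_range_valid_contig(zone, pfn, nr_pages))
				continue;

			/* Drop the zone lock; alloc_contig_range() may sleep. */
			spin_unlock_irqrestore(&zone->lock, flags);
			if (!alloc_contig_range(pfn, pfn + nr_pages,
						MIGRATE_MOVABLE, gfp_mask))
				return pfn_to_page(pfn);
			spin_lock_irqsave(&zone->lock, flags);
		}
		spin_unlock_irqrestore(&zone->lock, flags);
	}

	return NULL;
}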