From: Barry Song <21cnbao@gmail.com>
Date: Sat, 13 Jan 2024 13:11:03 +1300
Subject: Re: [RFC PATCH v1] mm/filemap: Allow arch to request folio size for exec memory
To: Matthew Wilcox
Cc: Ryan Roberts, Catalin Marinas, Will Deacon, Mark Rutland, Andrew Morton,
 David Hildenbrand, John Hubbard, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
References: <20240111154106.3692206-1-ryan.roberts@arm.com> <654df189-e472-4a75-b2be-6faa8ba18a08@arm.com>
On Sat, Jan 13, 2024 at 12:04 PM Matthew Wilcox wrote:
>
> On Sat, Jan 13, 2024 at 11:54:23AM +1300, Barry Song wrote:
> > > > Perhaps an alternative would be to double ra->size and set ra->async_size to
> > > > (ra->size / 2)? That would ensure we always have 64K aligned blocks but would
> > > > give us an async portion so readahead can still happen.
> > >
> > > this might be worth to try as PMD is exactly doing this because async
> > > can decrease
> > > the latency of subsequent page faults.
> > >
> > > #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > >         /* Use the readahead code, even if readahead is disabled */
> > >         if (vm_flags & VM_HUGEPAGE) {
> > >                 fpin = maybe_unlock_mmap_for_io(vmf, fpin);
> > >                 ractl._index &= ~((unsigned long)HPAGE_PMD_NR - 1);
> > >                 ra->size = HPAGE_PMD_NR;
> > >                 /*
> > >                  * Fetch two PMD folios, so we get the chance to actually
> > >                  * readahead, unless we've been told not to.
> > >                  */
> > >                 if (!(vm_flags & VM_RAND_READ))
> > >                         ra->size *= 2;
> > >                 ra->async_size = HPAGE_PMD_NR;
> > >                 page_cache_ra_order(&ractl, ra, HPAGE_PMD_ORDER);
> > >                 return fpin;
> > >         }
> > > #endif
> > >
> >
> > BTW, rather than simply always reading backwards, we did something very
> > "ugly" to simulate "read-around" for CONT-PTE exec before[1]
> >
> > if page faults happen in the first half of cont-pte, we read this 64KiB
> > and its previous 64KiB. otherwise, we read it and its next 64KiB.
>
> I don't think that makes sense. The CPU executes instructions forwards,
> not "around". I honestly think we should treat text as "random access"
> because function A calls function B and functions A and B might well be
> very far apart from each other. The only time I see you actually
> getting "readahead" hits is if a function is split across two pages (for
> whatever size of page), but that's a false hit! The function is not,
> generally, 64kB long, so doing readahead is no more likely to bring in
> the next page of text that we want than reading any other random page.
>

It seems you are in favor of Ryan's modification even for filesystems
that don't support large mappings?

> Unless somebody finds the GNU Rope source code from 1998, or recreates it:
> https://lwn.net/1998/1029/als/rope.html
> Then we might actually have some locality.
>
> Did you actually benchmark what you did? Is there really some locality
> between the code at offset 256-288kB in the file and then in the range
> 192kB-256kB?

I really didn't have benchmark data; at that point I instinctively didn't
want to break the read-around logic, so I wrote the code that way.
The information you provide makes me rethink whether the read-around code
is necessary, thanks!

I was using filesystems without large-mapping support but worked around
the problem by:
1. preparing 16*n normal pages
2. inserting the normal pages into the xarray
3. letting the filesystem read 16 normal pages
4. after all I/O completed, transforming the 16 pages into an mTHP and
   reinserting the mTHP into the xarray

That was very painful and ultimately brought no improvement, probably due
to various sync overheads, so I gave up and didn't dig up more data.

Thanks
Barry
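
For reference, a minimal sketch of what Ryan's suggestion quoted above
(double ra->size and set ra->async_size to ra->size / 2) could look like,
modeled on the PMD block in the quoted code; the helper name
exec_fault_around() and the way the order is passed in are illustrative
assumptions for this sketch, not the actual RFC patch:

/*
 * Illustrative only: size exec fault-around by an arch-requested folio
 * order, and (unless readahead is disabled) fetch two such blocks with
 * the second block marked async, mirroring the PMD path above.
 */
static struct file *exec_fault_around(struct vm_fault *vmf, struct file *fpin,
				      struct readahead_control *ractl,
				      struct file_ra_state *ra,
				      unsigned long vm_flags, unsigned int order)
{
	unsigned long nr = 1UL << order;

	fpin = maybe_unlock_mmap_for_io(vmf, fpin);
	ractl->_index &= ~(nr - 1);		/* keep the window block-aligned */
	ra->size = nr;
	ra->async_size = 0;
	if (!(vm_flags & VM_RAND_READ)) {
		ra->size *= 2;			/* fetch two blocks... */
		ra->async_size = ra->size / 2;	/* ...the second one async */
	}
	page_cache_ra_order(ractl, ra, order);
	return fpin;
}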
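
Similarly, a sketch of the "read-around" heuristic described in the quoted
text (a fault in the first half of a 64KiB block also pulls in the previous
block, otherwise the next one); the name exec_readaround() and the constants
are illustrative and this is not the out-of-tree code referred to above:

#define EXEC_RA_NR	(SZ_64K >> PAGE_SHIFT)	/* 16 pages with 4K base pages */

static void exec_readaround(struct readahead_control *ractl,
			    struct file_ra_state *ra, pgoff_t index)
{
	pgoff_t block = index & ~((pgoff_t)EXEC_RA_NR - 1);

	if (index - block < EXEC_RA_NR / 2 && block >= EXEC_RA_NR)
		ractl->_index = block - EXEC_RA_NR;	/* this block + previous */
	else
		ractl->_index = block;			/* this block + next */

	ra->size = 2 * EXEC_RA_NR;
	ra->async_size = 0;
	page_cache_ra_order(ractl, ra, ilog2(EXEC_RA_NR));
}

Either way the window stays 64KiB-aligned and two blocks wide, which is
what made it resemble read-around rather than plain readahead.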