Subject: Re: [PATCH] drm/ttm: switch back to static allocation limits for now
To: Daniel Vetter
Cc: dri-devel, Linux MM, Liang.Liang@amd.com,
 Thomas Hellström (VMware)
References: <20210324134845.2338-1-christian.koenig@amd.com>
 <0f2308b4-f36e-ab02-c26d-4065e9972b50@gmail.com>
From: Christian König
Date: Thu, 25 Mar 2021 10:27:36 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.7.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US

On 25.03.21 at 09:31, Daniel Vetter wrote:
> On Thu, Mar 25, 2021 at 9:07 AM Christian König wrote:
>> On 24.03.21 at 20:29, Daniel Vetter wrote:
>>> On Wed, Mar 24, 2021 at 02:48:45PM +0100, Christian König wrote:
>>>> The shrinker based approach still has some flaws. Especially that we need
>>>> temporary pages to free up the pages allocated to the driver is problematic
>>>> in a shrinker.
>>>>
>>>> Signed-off-by: Christian König
>>>> ---
>>>>   drivers/gpu/drm/ttm/ttm_device.c |  14 ++--
>>>>   drivers/gpu/drm/ttm/ttm_tt.c     | 112 ++++++++++++-------------------
>>>>   include/drm/ttm/ttm_tt.h         |   3 +-
>>>>   3 files changed, 53 insertions(+), 76 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/ttm/ttm_device.c b/drivers/gpu/drm/ttm/ttm_device.c
>>>> index 95e1b7b1f2e6..388da2a7f0bb 100644
>>>> --- a/drivers/gpu/drm/ttm/ttm_device.c
>>>> +++ b/drivers/gpu/drm/ttm/ttm_device.c
>>>> @@ -53,7 +53,6 @@ static void ttm_global_release(void)
>>>>            goto out;
>>>>
>>>>    ttm_pool_mgr_fini();
>>>> -  ttm_tt_mgr_fini();
>>>>
>>>>    __free_page(glob->dummy_read_page);
>>>>    memset(glob, 0, sizeof(*glob));
>>>> @@ -64,7 +63,7 @@ static void ttm_global_release(void)
>>>>   static int ttm_global_init(void)
>>>>   {
>>>>            struct ttm_global *glob = &ttm_glob;
>>>> -  unsigned long num_pages;
>>>> +  unsigned long num_pages, num_dma32;
>>>>            struct sysinfo si;
>>>>            int ret = 0;
>>>>            unsigned i;
>>>> @@ -79,8 +78,15 @@ static int ttm_global_init(void)
>>>>             * system memory.
>>>>             */
>>>>            num_pages = ((u64)si.totalram * si.mem_unit) >> PAGE_SHIFT;
>>>> -  ttm_pool_mgr_init(num_pages * 50 / 100);
>>>> -  ttm_tt_mgr_init();
>>>> +  num_pages /= 2;
>>>> +
>>>> +  /* But for DMA32 we limit ourself to only use 2GiB maximum. */
>>>> +  num_dma32 = (u64)(si.totalram - si.totalhigh) * si.mem_unit
>>>> +          >> PAGE_SHIFT;
>>>> +  num_dma32 = min(num_dma32, 2UL << (30 - PAGE_SHIFT));
>>>> +
>>>> +  ttm_pool_mgr_init(num_pages);
>>>> +  ttm_tt_mgr_init(num_pages, num_dma32);
>>>>
>>>>            spin_lock_init(&glob->lru_lock);
>>>>            glob->dummy_read_page = alloc_page(__GFP_ZERO | GFP_DMA32);
>>>> diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
>>>> index 2f0833c98d2c..5d8820725b75 100644
>>>> --- a/drivers/gpu/drm/ttm/ttm_tt.c
>>>> +++ b/drivers/gpu/drm/ttm/ttm_tt.c
>>>> @@ -40,8 +40,18 @@
>>>>
>>>>   #include "ttm_module.h"
>>>>
>>>> -static struct shrinker mm_shrinker;
>>>> -static atomic_long_t swapable_pages;
>>>> +static unsigned long ttm_pages_limit;
>>>> +
>>>> +MODULE_PARM_DESC(pages_limit, "Limit for the allocated pages");
>>>> +module_param_named(pages_limit, ttm_pages_limit, ulong, 0644);
>>>> +
>>>> +static unsigned long ttm_dma32_pages_limit;
>>>> +
>>>> +MODULE_PARM_DESC(dma32_pages_limit, "Limit for the allocated DMA32 pages");
>>>> +module_param_named(dma32_pages_limit, ttm_dma32_pages_limit, ulong, 0644);
>>>> +
>>>> +static atomic_long_t ttm_pages_allocated;
>>>> +static atomic_long_t ttm_dma32_pages_allocated;
>>> Making this configurable looks an awful lot like "job done, move on". Just
>>> the revert to hardcoded 50% (or I guess just revert the shrinker patch at
>>> that point) for -fixes is imo better.
>> Well this is the revert to hardcoded 50%. We had that configurable
>> before and have it configurable now.
> Hm I looked, but I missed it I guess.
>
> Acked-by: Daniel Vetter
>
> I just really want to make sure we don't walk down the path of first
> reinventing kswapd, then reinventing direct reclaim (i915-gem has done
> that because of the single dev->struct_mutex, despite our shrinker,
> it's real ugly and fragile) in selective calls, then realizing that's
> still not enough because especially compute loves userptr, and then
> we're back to having to reinvent the GFP hierarchy. Been there with
> i915-gem for dev->struct_mutex reasons, but the effect is kinda
> similar: In some cases the shrinker is defacto not there, and
> everything then keels over.

Yeah, I mean we really (REALLY) want to get away from this 50% hack. So
no worries about that.

Doing the shrinker is likely the right approach for this, but we need to
get it bulletproof or otherwise we will run into issues.

> btw one thing that cross my mind a bunch times is to add a GFP_NOGPU.
> That fixes things for everything else except our own gpu memory usage,
> and since the goal is to let that approach 100% it's not really a
> gain.

Huh, what? How exactly should that help?

Regards,
Christian.

>
>> Reverting everything back would mean that we also need to revert the
>> sysfs changes and move all the memory accounting code from VMWGFX back
>> into TTM.
> Hm yeah ...
> -Daniel
>
>> Christian.
>>
>>> Then I guess retry again for 5.14 or so.
>>> -Daniel
>>>
>>>>   /*
>>>>    * Allocates a ttm structure for the given BO.
>>>> @@ -294,8 +304,6 @@ static void ttm_tt_add_mapping(struct ttm_device *bdev, struct ttm_tt *ttm)
>>>>
>>>>    for (i = 0; i < ttm->num_pages; ++i)
>>>>            ttm->pages[i]->mapping = bdev->dev_mapping;
>>>> -
>>>> -  atomic_long_add(ttm->num_pages, &swapable_pages);
>>>>   }
>>>>
>>>>   int ttm_tt_populate(struct ttm_device *bdev,
>>>> @@ -309,12 +317,25 @@ int ttm_tt_populate(struct ttm_device *bdev,
>>>>    if (ttm_tt_is_populated(ttm))
>>>>            return 0;
>>>>
>>>> +  atomic_long_add(ttm->num_pages, &ttm_pages_allocated);
>>>> +  if (bdev->pool.use_dma32)
>>>> +          atomic_long_add(ttm->num_pages, &ttm_dma32_pages_allocated);
>>>> +
>>>> +  while (atomic_long_read(&ttm_pages_allocated) > ttm_pages_limit ||
>>>> +         atomic_long_read(&ttm_dma32_pages_allocated) >
>>>> +         ttm_dma32_pages_limit) {
>>>> +
>>>> +          ret = ttm_bo_swapout(ctx, GFP_KERNEL);
>>>> +          if (ret)
>>>> +                  goto error;
>>>> +  }
>>>> +
>>>>    if (bdev->funcs->ttm_tt_populate)
>>>>            ret = bdev->funcs->ttm_tt_populate(bdev, ttm, ctx);
>>>>    else
>>>>            ret = ttm_pool_alloc(&bdev->pool, ttm, ctx);
>>>>    if (ret)
>>>> -          return ret;
>>>> +          goto error;
>>>>
>>>>    ttm_tt_add_mapping(bdev, ttm);
>>>>    ttm->page_flags |= TTM_PAGE_FLAG_PRIV_POPULATED;
>>>> @@ -327,6 +348,12 @@ int ttm_tt_populate(struct ttm_device *bdev,
>>>>    }
>>>>
>>>>    return 0;
>>>> +
>>>> +error:
>>>> +  atomic_long_sub(ttm->num_pages, &ttm_pages_allocated);
>>>> +  if (bdev->pool.use_dma32)
>>>> +          atomic_long_sub(ttm->num_pages, &ttm_dma32_pages_allocated);
>>>> +  return ret;
>>>>   }
>>>>   EXPORT_SYMBOL(ttm_tt_populate);
>>>>
>>>> @@ -342,12 +369,9 @@ static void ttm_tt_clear_mapping(struct ttm_tt *ttm)
>>>>            (*page)->mapping = NULL;
>>>>            (*page++)->index = 0;
>>>>    }
>>>> -
>>>> -  atomic_long_sub(ttm->num_pages, &swapable_pages);
>>>>   }
>>>>
>>>> -void ttm_tt_unpopulate(struct ttm_device *bdev,
>>>> -                struct ttm_tt *ttm)
>>>> +void ttm_tt_unpopulate(struct ttm_device *bdev, struct ttm_tt *ttm)
>>>>   {
>>>>    if (!ttm_tt_is_populated(ttm))
>>>>            return;
>>>> @@ -357,76 +381,24 @@ void ttm_tt_unpopulate(struct ttm_device *bdev,
>>>>            bdev->funcs->ttm_tt_unpopulate(bdev, ttm);
>>>>    else
>>>>            ttm_pool_free(&bdev->pool, ttm);
>>>> -  ttm->page_flags &= ~TTM_PAGE_FLAG_PRIV_POPULATED;
>>>> -}
>>>> -
>>>> -/* As long as pages are available make sure to release at least one */
>>>> -static unsigned long ttm_tt_shrinker_scan(struct shrinker *shrink,
>>>> -                                  struct shrink_control *sc)
>>>> -{
>>>> -  struct ttm_operation_ctx ctx = {
>>>> -          .no_wait_gpu = false
>>>> -  };
>>>> -  int ret;
>>>> -
>>>> -  ret = ttm_bo_swapout(&ctx, GFP_NOFS);
>>>> -  return ret < 0 ? SHRINK_EMPTY : ret;
>>>> -}
>>>> -
>>>> -/* Return the number of pages available or SHRINK_EMPTY if we have none */
>>>> -static unsigned long ttm_tt_shrinker_count(struct shrinker *shrink,
>>>> -                                   struct shrink_control *sc)
>>>> -{
>>>> -  unsigned long num_pages;
>>>> -
>>>> -  num_pages = atomic_long_read(&swapable_pages);
>>>> -  return num_pages ? num_pages : SHRINK_EMPTY;
>>>> -}
>>>>
>>>> -#ifdef CONFIG_DEBUG_FS
>>>> +  atomic_long_sub(ttm->num_pages, &ttm_pages_allocated);
>>>> +  if (bdev->pool.use_dma32)
>>>> +          atomic_long_sub(ttm->num_pages, &ttm_dma32_pages_allocated);
>>>>
>>>> -/* Test the shrinker functions and dump the result */
>>>> -static int ttm_tt_debugfs_shrink_show(struct seq_file *m, void *data)
>>>> -{
>>>> -  struct shrink_control sc = { .gfp_mask = GFP_KERNEL };
>>>> -
>>>> -  fs_reclaim_acquire(GFP_KERNEL);
>>>> -  seq_printf(m, "%lu/%lu\n", ttm_tt_shrinker_count(&mm_shrinker, &sc),
>>>> -             ttm_tt_shrinker_scan(&mm_shrinker, &sc));
>>>> -  fs_reclaim_release(GFP_KERNEL);
>>>> -
>>>> -  return 0;
>>>> +  ttm->page_flags &= ~TTM_PAGE_FLAG_PRIV_POPULATED;
>>>>   }
>>>> -DEFINE_SHOW_ATTRIBUTE(ttm_tt_debugfs_shrink);
>>>> -
>>>> -#endif
>>>> -
>>>> -
>>>>
>>>>   /**
>>>>    * ttm_tt_mgr_init - register with the MM shrinker
>>>>    *
>>>>    * Register with the MM shrinker for swapping out BOs.
>>>>    */
>>>> -int ttm_tt_mgr_init(void)
>>>> +void ttm_tt_mgr_init(unsigned long num_pages, unsigned long num_dma32_pages)
>>>>   {
>>>> -#ifdef CONFIG_DEBUG_FS
>>>> -  debugfs_create_file("tt_shrink", 0400, ttm_debugfs_root, NULL,
>>>> -                      &ttm_tt_debugfs_shrink_fops);
>>>> -#endif
>>>> -
>>>> -  mm_shrinker.count_objects = ttm_tt_shrinker_count;
>>>> -  mm_shrinker.scan_objects = ttm_tt_shrinker_scan;
>>>> -  mm_shrinker.seeks = 1;
>>>> -  return register_shrinker(&mm_shrinker);
>>>> -}
>>>> +  if (!ttm_pages_limit)
>>>> +          ttm_pages_limit = num_pages;
>>>>
>>>> -/**
>>>> - * ttm_tt_mgr_fini - unregister our MM shrinker
>>>> - *
>>>> - * Unregisters the MM shrinker.
>>>> - */
>>>> -void ttm_tt_mgr_fini(void)
>>>> -{
>>>> -  unregister_shrinker(&mm_shrinker);
>>>> +  if (!ttm_dma32_pages_limit)
>>>> +          ttm_dma32_pages_limit = num_dma32_pages;
>>>>   }
>>>> diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h
>>>> index 069f8130241a..134d09ef7766 100644
>>>> --- a/include/drm/ttm/ttm_tt.h
>>>> +++ b/include/drm/ttm/ttm_tt.h
>>>> @@ -157,8 +157,7 @@ int ttm_tt_populate(struct ttm_device *bdev, struct ttm_tt *ttm, struct ttm_oper
>>>>    */
>>>>   void ttm_tt_unpopulate(struct ttm_device *bdev, struct ttm_tt *ttm);
>>>>
>>>> -int ttm_tt_mgr_init(void);
>>>> -void ttm_tt_mgr_fini(void);
>>>> +void ttm_tt_mgr_init(unsigned long num_pages, unsigned long num_dma32_pages);
>>>>
>>>>   #if IS_ENABLED(CONFIG_AGP)
>>>>   #include
>>>> --
>>>> 2.25.1
>>>>
>> _______________________________________________
>> dri-devel mailing list
>> dri-devel@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>
>
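
For reference, the defaults that the hunk in ttm_global_init() derives can be
reproduced from userspace with the same arithmetic: half of totalram for
pages_limit, and low memory capped at 2 GiB worth of pages for
dma32_pages_limit. The sketch below is illustrative only and not part of the
patch; it assumes a 64-bit Linux system and uses sysinfo(2) plus the runtime
page size in place of the kernel's struct sysinfo and PAGE_SHIFT. Since both
limits are declared with module_param_named(..., 0644), they should also be
visible (and writable by root) under /sys/module/ttm/parameters/ when TTM is
loaded as the ttm module, though that path is an assumption based on the
module name rather than something stated in the thread.

/*
 * Userspace sketch: compute the default TTM limits this patch would pick
 * on the local machine (50% of total RAM; DMA32 capped at 2 GiB).
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/sysinfo.h>

int main(void)
{
	struct sysinfo si;
	unsigned long page_shift, num_pages, num_dma32;

	if (sysinfo(&si))
		return 1;

	/* PAGE_SHIFT equivalent, derived from the runtime page size. */
	page_shift = __builtin_ctzl((unsigned long)sysconf(_SC_PAGESIZE));

	/* Same math as the patch: limit TTM to 50% of system memory. */
	num_pages = ((unsigned long long)si.totalram * si.mem_unit) >> page_shift;
	num_pages /= 2;

	/* DMA32: low memory only, and never more than 2 GiB worth of pages. */
	num_dma32 = ((unsigned long long)(si.totalram - si.totalhigh) *
		     si.mem_unit) >> page_shift;
	if (num_dma32 > (2UL << (30 - page_shift)))
		num_dma32 = 2UL << (30 - page_shift);

	printf("default pages_limit:       %lu pages\n", num_pages);
	printf("default dma32_pages_limit: %lu pages\n", num_dma32);
	return 0;
}

On a 16 GiB x86-64 box with 4 KiB pages this would print roughly 2097152
pages (8 GiB) for pages_limit and 524288 pages (2 GiB) for dma32_pages_limit.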