Subject: Re: [HMM 07/16] mm/migrate: new memory migration helper for use with device memory v4
From: John Hubbard
Date: Thu, 16 Mar 2017 17:57:35 -0700
Message-ID: <94e0d115-7deb-c748-3dc2-60d6289e6551@nvidia.com>
To: Balbir Singh
Cc: Andrew Morton, Jérôme Glisse, linux-kernel@vger.kernel.org, linux-mm,
 Naoya Horiguchi, David Nellans, Evgeny Baskakov, Mark Hairgrove,
 Sherry Cheung, Subhash Gutti
References: <1489680335-6594-1-git-send-email-jglisse@redhat.com>
 <1489680335-6594-8-git-send-email-jglisse@redhat.com>
 <20170316160520.d03ac02474cad6d2c8eba9bc@linux-foundation.org>

On 03/16/2017 05:45 PM, Balbir Singh wrote:
> On Fri, Mar 17, 2017 at 11:22 AM, John Hubbard wrote:
>> On 03/16/2017 04:05 PM, Andrew Morton wrote:
>>>
>>> On Thu, 16 Mar 2017 12:05:26 -0400 Jérôme Glisse wrote:
>>>
>>>> +static inline struct page *migrate_pfn_to_page(unsigned long mpfn)
>>>> +{
>>>> +	if (!(mpfn & MIGRATE_PFN_VALID))
>>>> +		return NULL;
>>>> +	return pfn_to_page(mpfn & MIGRATE_PFN_MASK);
>>>> +}
>>>
>>> i386 allnoconfig:
>>>
>>> In file included from mm/page_alloc.c:61:
>>> ./include/linux/migrate.h: In function 'migrate_pfn_to_page':
>>> ./include/linux/migrate.h:139: warning: left shift count >= width of type
>>> ./include/linux/migrate.h:141: warning: left shift count >= width of type
>>> ./include/linux/migrate.h: In function 'migrate_pfn_size':
>>> ./include/linux/migrate.h:146: warning: left shift count >= width of type
>>
>> It seems clear that this was never meant to work with < 64-bit pfns:
>>
>> // migrate.h excerpt:
>> #define MIGRATE_PFN_VALID	(1UL << (BITS_PER_LONG_LONG - 1))
>> #define MIGRATE_PFN_MIGRATE	(1UL << (BITS_PER_LONG_LONG - 2))
>> #define MIGRATE_PFN_HUGE	(1UL << (BITS_PER_LONG_LONG - 3))
>> #define MIGRATE_PFN_LOCKED	(1UL << (BITS_PER_LONG_LONG - 4))
>> #define MIGRATE_PFN_WRITE	(1UL << (BITS_PER_LONG_LONG - 5))
>> #define MIGRATE_PFN_DEVICE	(1UL << (BITS_PER_LONG_LONG - 6))
>> #define MIGRATE_PFN_ERROR	(1UL << (BITS_PER_LONG_LONG - 7))
>> #define MIGRATE_PFN_MASK	((1UL << (BITS_PER_LONG_LONG - PAGE_SHIFT)) - 1)
>>
>> ...obviously, there is not enough room for these flags in a 32-bit pfn.
>>
>> So, given the current HMM design, I think we are going to have to
>> provide a 32-bit version of these routines (migrate_pfn_to_page, and
>> related) that is a no-op, right?
>
> Or make the HMM Kconfig feature 64BIT only by making it depend on 64BIT?
>

Yes, that was my first reaction too, but these particular routines aspire
to be generic routines--in fact, you have had an influence there, because
these might possibly help with NUMA migrations. :)
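(As an aside, the warning is easy to reproduce outside the kernel tree.
Here is a minimal userspace sketch -- my own demo names, not code from the
patch -- that triggers the same "left shift count >= width of type"
complaint when built 32-bit, e.g. with gcc -m32 -Wall:

#include <stdio.h>

#define BITS_PER_LONG_LONG 64

/*
 * Mirrors the MIGRATE_PFN_VALID pattern quoted above: on a 32-bit
 * build, unsigned long is only 32 bits wide, so shifting 1UL left
 * by 63 bits exceeds the width of the type.
 */
#define DEMO_PFN_VALID	(1UL << (BITS_PER_LONG_LONG - 1))

int main(void)
{
	/* Show how wide unsigned long actually is for this build. */
	printf("unsigned long: %zu bits\n", sizeof(unsigned long) * 8);
	printf("DEMO_PFN_VALID: 0x%lx\n", DEMO_PFN_VALID);
	return 0;
}

The same program compiles cleanly as a 64-bit build, which is exactly why
this only showed up in Andrew's i386 allnoconfig run.)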
So it would look odd to see this:

#ifdef CONFIG_HMM
int migrate_vma(const struct migrate_vma_ops *ops,
		struct vm_area_struct *vma,
		unsigned long mentries,
		unsigned long start,
		unsigned long end,
		unsigned long *src,
		unsigned long *dst,
		void *private)
{
	/* ...implementation */
}
#endif

...because migrate_vma() does not sound HMM-specific, and it is, after
all, in migrate.h and migrate.c. We probably want a more generic approach
(not sure if I've picked exactly the right token to #ifdef on, but it's
close):

#ifdef CONFIG_64BIT
int migrate_vma(const struct migrate_vma_ops *ops,
		struct vm_area_struct *vma,
		unsigned long mentries,
		unsigned long start,
		unsigned long end,
		unsigned long *src,
		unsigned long *dst,
		void *private)
{
	/* ... full implementation */
}
#else
int migrate_vma(const struct migrate_vma_ops *ops,
		struct vm_area_struct *vma,
		unsigned long mentries,
		unsigned long start,
		unsigned long end,
		unsigned long *src,
		unsigned long *dst,
		void *private)
{
	return -EINVAL; /* or something more appropriate */
}
#endif

thanks
John Hubbard
NVIDIA

>
> Balbir Singh