* incoming
From: Andrew Morton @ 2021-11-11 4:32 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-mm, mm-commits
The post-linux-next material.
7 patches, based on debe436e77c72fcee804fb867f275e6d31aa999c.
Subsystems affected by this patch series:
mm/debug
mm/slab-generic
mm/migration
mm/memcg
mm/kasan
Subsystem: mm/debug
Yixuan Cao <caoyixuan2019@email.szu.edu.cn>:
mm/page_owner.c: modify the type of argument "order" in some functions
Subsystem: mm/slab-generic
Ingo Molnar <mingo@kernel.org>:
mm: allow only SLUB on PREEMPT_RT
Subsystem: mm/migration
Baolin Wang <baolin.wang@linux.alibaba.com>:
mm: migrate: simplify the file-backed pages validation when migrating its mapping
Alistair Popple <apopple@nvidia.com>:
mm/migrate.c: remove MIGRATE_PFN_LOCKED
Subsystem: mm/memcg
Christoph Hellwig <hch@lst.de>:
Patch series "unexport memcg locking helpers":
mm: unexport folio_memcg_{,un}lock
mm: unexport {,un}lock_page_memcg
Subsystem: mm/kasan
Kuan-Ying Lee <Kuan-Ying.Lee@mediatek.com>:
kasan: add kasan mode messages when kasan init
Documentation/vm/hmm.rst | 2
arch/arm64/mm/kasan_init.c | 2
arch/powerpc/kvm/book3s_hv_uvmem.c | 4
drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 2
drivers/gpu/drm/nouveau/nouveau_dmem.c | 4
include/linux/migrate.h | 1
include/linux/page_owner.h | 12 +-
init/Kconfig | 2
lib/test_hmm.c | 5 -
mm/kasan/hw_tags.c | 14 ++
mm/kasan/sw_tags.c | 2
mm/memcontrol.c | 4
mm/migrate.c | 151 +++++--------------------------
mm/page_owner.c | 6 -
14 files changed, 61 insertions(+), 150 deletions(-)
* [patch 1/7] mm/page_owner.c: modify the type of argument "order" in some functions
From: Andrew Morton @ 2021-11-11 4:32 UTC (permalink / raw)
To: akpm, caoyixuan2019, linux-mm, mm-commits, torvalds
From: Yixuan Cao <caoyixuan2019@email.szu.edu.cn>
Subject: mm/page_owner.c: modify the type of argument "order" in some functions
The type of "order" in struct page_owner is unsigned short.
However, it is unsigned int in the following 3 functions:
__reset_page_owner
__set_page_owner_handle
__set_page_owner_handle
The type of "order" in argument list is unsigned int, which is
inconsistent.
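
For illustration only (a simplified sketch, not code from the kernel
tree; the *_sketch names are made up), the mismatch being fixed looks
like this:

    struct page_owner_sketch {
        unsigned short order;    /* the stored field is unsigned short */
    };

    /* before: the setter took a wider type than the field it fills */
    static void set_order_before(struct page_owner_sketch *po, unsigned int order)
    {
        po->order = order;    /* silent narrowing conversion */
    }

    /* after: the argument type matches the struct field */
    static void set_order_after(struct page_owner_sketch *po, unsigned short order)
    {
        po->order = order;
    }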
[akpm@linux-foundation.org: update include/linux/page_owner.h]
Link: https://lkml.kernel.org/r/20211020125945.47792-1-caoyixuan2019@email.szu.edu.cn
Signed-off-by: Yixuan Cao <caoyixuan2019@email.szu.edu.cn>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
include/linux/page_owner.h | 12 ++++++------
mm/page_owner.c | 6 +++---
2 files changed, 9 insertions(+), 9 deletions(-)
--- a/include/linux/page_owner.h~mm-page_ownerc-modify-the-type-of-argument-order-in-some-functions
+++ a/include/linux/page_owner.h
@@ -8,9 +8,9 @@
extern struct static_key_false page_owner_inited;
extern struct page_ext_operations page_owner_ops;
-extern void __reset_page_owner(struct page *page, unsigned int order);
+extern void __reset_page_owner(struct page *page, unsigned short order);
extern void __set_page_owner(struct page *page,
- unsigned int order, gfp_t gfp_mask);
+ unsigned short order, gfp_t gfp_mask);
extern void __split_page_owner(struct page *page, unsigned int nr);
extern void __folio_copy_owner(struct folio *newfolio, struct folio *old);
extern void __set_page_owner_migrate_reason(struct page *page, int reason);
@@ -18,14 +18,14 @@ extern void __dump_page_owner(const stru
extern void pagetypeinfo_showmixedcount_print(struct seq_file *m,
pg_data_t *pgdat, struct zone *zone);
-static inline void reset_page_owner(struct page *page, unsigned int order)
+static inline void reset_page_owner(struct page *page, unsigned short order)
{
if (static_branch_unlikely(&page_owner_inited))
__reset_page_owner(page, order);
}
static inline void set_page_owner(struct page *page,
- unsigned int order, gfp_t gfp_mask)
+ unsigned short order, gfp_t gfp_mask)
{
if (static_branch_unlikely(&page_owner_inited))
__set_page_owner(page, order, gfp_mask);
@@ -52,7 +52,7 @@ static inline void dump_page_owner(const
__dump_page_owner(page);
}
#else
-static inline void reset_page_owner(struct page *page, unsigned int order)
+static inline void reset_page_owner(struct page *page, unsigned short order)
{
}
static inline void set_page_owner(struct page *page,
@@ -60,7 +60,7 @@ static inline void set_page_owner(struct
{
}
static inline void split_page_owner(struct page *page,
- unsigned int order)
+ unsigned short order)
{
}
static inline void folio_copy_owner(struct folio *newfolio, struct folio *folio)
--- a/mm/page_owner.c~mm-page_ownerc-modify-the-type-of-argument-order-in-some-functions
+++ a/mm/page_owner.c
@@ -125,7 +125,7 @@ static noinline depot_stack_handle_t sav
return handle;
}
-void __reset_page_owner(struct page *page, unsigned int order)
+void __reset_page_owner(struct page *page, unsigned short order)
{
int i;
struct page_ext *page_ext;
@@ -149,7 +149,7 @@ void __reset_page_owner(struct page *pag
static inline void __set_page_owner_handle(struct page_ext *page_ext,
depot_stack_handle_t handle,
- unsigned int order, gfp_t gfp_mask)
+ unsigned short order, gfp_t gfp_mask)
{
struct page_owner *page_owner;
int i;
@@ -169,7 +169,7 @@ static inline void __set_page_owner_hand
}
}
-noinline void __set_page_owner(struct page *page, unsigned int order,
+noinline void __set_page_owner(struct page *page, unsigned short order,
gfp_t gfp_mask)
{
struct page_ext *page_ext = lookup_page_ext(page);
_
* [patch 2/7] mm: allow only SLUB on PREEMPT_RT
From: Andrew Morton @ 2021-11-11 4:32 UTC (permalink / raw)
To: akpm, bigeasy, cl, iamjoonsoo.kim, linux-mm, mingo, mm-commits,
penberg, rientjes, tglx, torvalds, vbabka
From: Ingo Molnar <mingo@kernel.org>
Subject: mm: allow only SLUB on PREEMPT_RT
Memory allocators may disable interrupts or preemption as part of the
allocation and freeing process.  For PREEMPT_RT it is important that
these sections remain deterministic and short, and therefore do not
depend on the size of the memory to allocate/free or on the inner state
of the algorithm.

Until v3.12-RT the SLAB allocator was an option, but supporting it
required several changes to meet all the requirements.  The SLUB design
fits the PREEMPT_RT model better, so the SLAB patches were dropped in
the 3.12-RT patchset.  Comparing the two allocators, SLUB outperformed
SLAB in both throughput (time needed to allocate and free memory) and
the maximal latency of the system as measured by cyclictest during
hackbench.

SLOB was never evaluated since it was unlikely to perform better than
SLAB.  During a quick test, the kernel crashed with SLOB enabled during
boot.
Disable SLAB and SLOB on PREEMPT_RT.
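
As an illustration (not part of this patch), with both dependencies in
place a PREEMPT_RT build can only end up with SLUB as its slab
allocator; the resulting configuration would contain something like:

    CONFIG_PREEMPT_RT=y
    CONFIG_SLUB=y
    # CONFIG_SLAB is not set
    # CONFIG_SLOB is not set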
[bigeasy@linutronix.de: commit description]
Link: https://lkml.kernel.org/r/20211015210336.gen3tib33ig5q2md@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
init/Kconfig | 2 ++
1 file changed, 2 insertions(+)
--- a/init/Kconfig~mm-allow-only-slub-on-preempt_rt
+++ a/init/Kconfig
@@ -1896,6 +1896,7 @@ choice
config SLAB
bool "SLAB"
+ depends on !PREEMPT_RT
select HAVE_HARDENED_USERCOPY_ALLOCATOR
help
The regular slab allocator that is established and known to work
@@ -1916,6 +1917,7 @@ config SLUB
config SLOB
depends on EXPERT
bool "SLOB (Simple Allocator)"
+ depends on !PREEMPT_RT
help
SLOB replaces the stock allocator with a drastically simpler
allocator. SLOB is generally more space efficient but
_
* [patch 3/7] mm: migrate: simplify the file-backed pages validation when migrating its mapping
From: Andrew Morton @ 2021-11-11 4:32 UTC (permalink / raw)
To: akpm, apopple, baolin.wang, linux-mm, mm-commits, shy828301,
torvalds, willy
From: Baolin Wang <baolin.wang@linux.alibaba.com>
Subject: mm: migrate: simplify the file-backed pages validation when migrating its mapping
There is no need to validate the file-backed page's refcount before
trying to freeze the page's expected refcount; instead we can rely on
folio_ref_freeze() to validate that the page has the expected refcount
before migrating its mapping.

Moreover, we are always under the page lock when migrating the page
mapping, which means nothing else can remove the page from the page
cache, so we can drop the xas_load() validation under the i_pages lock.
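
For background, a simplified user-space model of the freeze semantics
relied on above (not the actual implementation, which operates on the
folio's refcount): the freeze only succeeds when the refcount equals
the expected value, so a separate comparison beforehand is redundant.

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Succeeds only if *refcount == expected_count, atomically
     * replacing it with 0 -- a sketch of what folio_ref_freeze()
     * is assumed to guarantee here.
     */
    static bool ref_freeze_sketch(atomic_int *refcount, int expected_count)
    {
        int expected = expected_count;

        return atomic_compare_exchange_strong(refcount, &expected, 0);
    }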
Link: https://lkml.kernel.org/r/cover.1629447552.git.baolin.wang@linux.alibaba.com
Link: https://lkml.kernel.org/r/df4c129fd8e86a95dbc55f4663d77441cc0d3bd1.1629447552.git.baolin.wang@linux.alibaba.com
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Suggested-by: Matthew Wilcox <willy@infradead.org>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Alistair Popple <apopple@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/migrate.c | 6 ------
1 file changed, 6 deletions(-)
--- a/mm/migrate.c~mm-migrate-simplify-the-file-backed-pages-validation-when-migrating-its-mapping
+++ a/mm/migrate.c
@@ -404,12 +404,6 @@ int folio_migrate_mapping(struct address
newzone = folio_zone(newfolio);
xas_lock_irq(&xas);
- if (folio_ref_count(folio) != expected_count ||
- xas_load(&xas) != folio) {
- xas_unlock_irq(&xas);
- return -EAGAIN;
- }
-
if (!folio_ref_freeze(folio, expected_count)) {
xas_unlock_irq(&xas);
return -EAGAIN;
_
* [patch 4/7] mm/migrate.c: remove MIGRATE_PFN_LOCKED
From: Andrew Morton @ 2021-11-11 4:32 UTC (permalink / raw)
To: akpm, alexander.deucher, apopple, bskeggs, Felix.Kuehling, hch,
jglisse, jhubbard, linux-mm, mm-commits, rcampbell, torvalds,
ziy
From: Alistair Popple <apopple@nvidia.com>
Subject: mm/migrate.c: remove MIGRATE_PFN_LOCKED
MIGRATE_PFN_LOCKED is used to indicate to migrate_vma_prepare() that a
source page was already locked during migrate_vma_collect().  If it
wasn't, a second attempt is made to lock the page.  However, if the
first attempt failed it is unlikely that a second attempt will succeed,
and the retry adds complexity.  So clean this up by removing the retry
and the MIGRATE_PFN_LOCKED flag.

Destination pages are also meant to have the MIGRATE_PFN_LOCKED flag
set, but nothing actually checks that.
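
For reference, a small user-space model of the migrate PFN encoding
(the flag values and shift are taken from the include/linux/migrate.h
hunk below; migrate_pfn_sketch() is assumed to mirror the kernel's
migrate_pfn() helper).  It illustrates that drivers now simply store
migrate_pfn(page_to_pfn(dpage)) and that bit 2 is left unused:

    #include <stdio.h>

    #define MIGRATE_PFN_VALID    (1UL << 0)
    #define MIGRATE_PFN_MIGRATE  (1UL << 1)
    /* bit 2 was MIGRATE_PFN_LOCKED and is no longer used */
    #define MIGRATE_PFN_WRITE    (1UL << 3)
    #define MIGRATE_PFN_SHIFT    6

    static unsigned long migrate_pfn_sketch(unsigned long pfn)
    {
        return (pfn << MIGRATE_PFN_SHIFT) | MIGRATE_PFN_VALID;
    }

    int main(void)
    {
        unsigned long dst = migrate_pfn_sketch(0x1234) | MIGRATE_PFN_WRITE;

        printf("entry=%#lx pfn=%#lx\n", dst, dst >> MIGRATE_PFN_SHIFT);
        return 0;
    }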
Link: https://lkml.kernel.org/r/20211025041608.289017-1-apopple@nvidia.com
Signed-off-by: Alistair Popple <apopple@nvidia.com>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
Documentation/vm/hmm.rst | 2
arch/powerpc/kvm/book3s_hv_uvmem.c | 4
drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 2
drivers/gpu/drm/nouveau/nouveau_dmem.c | 4
include/linux/migrate.h | 1
lib/test_hmm.c | 5
mm/migrate.c | 145 ++++-----------------
7 files changed, 35 insertions(+), 128 deletions(-)
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c~mm-migratec-remove-migrate_pfn_locked
+++ a/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -560,7 +560,7 @@ static int __kvmppc_svm_page_out(struct
gpa, 0, page_shift);
if (ret == U_SUCCESS)
- *mig.dst = migrate_pfn(pfn) | MIGRATE_PFN_LOCKED;
+ *mig.dst = migrate_pfn(pfn);
else {
unlock_page(dpage);
__free_page(dpage);
@@ -774,7 +774,7 @@ static int kvmppc_svm_page_in(struct vm_
}
}
- *mig.dst = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED;
+ *mig.dst = migrate_pfn(page_to_pfn(dpage));
migrate_vma_pages(&mig);
out_finalize:
migrate_vma_finalize(&mig);
--- a/Documentation/vm/hmm.rst~mm-migratec-remove-migrate_pfn_locked
+++ a/Documentation/vm/hmm.rst
@@ -360,7 +360,7 @@ between device driver specific code and
system memory page, locks the page with ``lock_page()``, and fills in the
``dst`` array entry with::
- dst[i] = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED;
+ dst[i] = migrate_pfn(page_to_pfn(dpage));
Now that the driver knows that this page is being migrated, it can
invalidate device private MMU mappings and copy device private memory
--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c~mm-migratec-remove-migrate_pfn_locked
+++ a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
@@ -317,7 +317,6 @@ svm_migrate_copy_to_vram(struct amdgpu_d
migrate->dst[i] = svm_migrate_addr_to_pfn(adev, dst[i]);
svm_migrate_get_vram_page(prange, migrate->dst[i]);
migrate->dst[i] = migrate_pfn(migrate->dst[i]);
- migrate->dst[i] |= MIGRATE_PFN_LOCKED;
src[i] = dma_map_page(dev, spage, 0, PAGE_SIZE,
DMA_TO_DEVICE);
r = dma_mapping_error(dev, src[i]);
@@ -610,7 +609,6 @@ svm_migrate_copy_to_ram(struct amdgpu_de
dst[i] >> PAGE_SHIFT, page_to_pfn(dpage));
migrate->dst[i] = migrate_pfn(page_to_pfn(dpage));
- migrate->dst[i] |= MIGRATE_PFN_LOCKED;
j++;
}
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c~mm-migratec-remove-migrate_pfn_locked
+++ a/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -166,7 +166,7 @@ static vm_fault_t nouveau_dmem_fault_cop
goto error_dma_unmap;
mutex_unlock(&svmm->mutex);
- args->dst[0] = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED;
+ args->dst[0] = migrate_pfn(page_to_pfn(dpage));
return 0;
error_dma_unmap:
@@ -602,7 +602,7 @@ static unsigned long nouveau_dmem_migrat
((paddr >> PAGE_SHIFT) << NVIF_VMM_PFNMAP_V0_ADDR_SHIFT);
if (src & MIGRATE_PFN_WRITE)
*pfn |= NVIF_VMM_PFNMAP_V0_W;
- return migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED;
+ return migrate_pfn(page_to_pfn(dpage));
out_dma_unmap:
dma_unmap_page(dev, *dma_addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
--- a/include/linux/migrate.h~mm-migratec-remove-migrate_pfn_locked
+++ a/include/linux/migrate.h
@@ -110,7 +110,6 @@ static inline int migrate_misplaced_page
*/
#define MIGRATE_PFN_VALID (1UL << 0)
#define MIGRATE_PFN_MIGRATE (1UL << 1)
-#define MIGRATE_PFN_LOCKED (1UL << 2)
#define MIGRATE_PFN_WRITE (1UL << 3)
#define MIGRATE_PFN_SHIFT 6
--- a/lib/test_hmm.c~mm-migratec-remove-migrate_pfn_locked
+++ a/lib/test_hmm.c
@@ -613,8 +613,7 @@ static void dmirror_migrate_alloc_and_co
*/
rpage->zone_device_data = dmirror;
- *dst = migrate_pfn(page_to_pfn(dpage)) |
- MIGRATE_PFN_LOCKED;
+ *dst = migrate_pfn(page_to_pfn(dpage));
if ((*src & MIGRATE_PFN_WRITE) ||
(!spage && args->vma->vm_flags & VM_WRITE))
*dst |= MIGRATE_PFN_WRITE;
@@ -1137,7 +1136,7 @@ static vm_fault_t dmirror_devmem_fault_a
lock_page(dpage);
xa_erase(&dmirror->pt, addr >> PAGE_SHIFT);
copy_highpage(dpage, spage);
- *dst = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED;
+ *dst = migrate_pfn(page_to_pfn(dpage));
if (*src & MIGRATE_PFN_WRITE)
*dst |= MIGRATE_PFN_WRITE;
}
--- a/mm/migrate.c~mm-migratec-remove-migrate_pfn_locked
+++ a/mm/migrate.c
@@ -2362,7 +2362,6 @@ again:
* can't be dropped from it).
*/
get_page(page);
- migrate->cpages++;
/*
* Optimize for the common case where page is only mapped once
@@ -2372,7 +2371,7 @@ again:
if (trylock_page(page)) {
pte_t swp_pte;
- mpfn |= MIGRATE_PFN_LOCKED;
+ migrate->cpages++;
ptep_get_and_clear(mm, addr, ptep);
/* Setup special migration page table entry */
@@ -2406,6 +2405,9 @@ again:
if (pte_present(pte))
unmapped++;
+ } else {
+ put_page(page);
+ mpfn = 0;
}
next:
@@ -2510,15 +2512,17 @@ static bool migrate_vma_check_page(struc
}
/*
- * migrate_vma_prepare() - lock pages and isolate them from the lru
+ * migrate_vma_unmap() - replace page mapping with special migration pte entry
* @migrate: migrate struct containing all migration information
*
- * This locks pages that have been collected by migrate_vma_collect(). Once each
- * page is locked it is isolated from the lru (for non-device pages). Finally,
- * the ref taken by migrate_vma_collect() is dropped, as locked pages cannot be
- * migrated by concurrent kernel threads.
+ * Isolate pages from the LRU and replace mappings (CPU page table pte) with a
+ * special migration pte entry and check if it has been pinned. Pinned pages are
+ * restored because we cannot migrate them.
+ *
+ * This is the last step before we call the device driver callback to allocate
+ * destination memory and copy contents of original page over to new page.
*/
-static void migrate_vma_prepare(struct migrate_vma *migrate)
+static void migrate_vma_unmap(struct migrate_vma *migrate)
{
const unsigned long npages = migrate->npages;
const unsigned long start = migrate->start;
@@ -2527,32 +2531,12 @@ static void migrate_vma_prepare(struct m
lru_add_drain();
- for (i = 0; (i < npages) && migrate->cpages; i++) {
+ for (i = 0; i < npages; i++) {
struct page *page = migrate_pfn_to_page(migrate->src[i]);
- bool remap = true;
if (!page)
continue;
- if (!(migrate->src[i] & MIGRATE_PFN_LOCKED)) {
- /*
- * Because we are migrating several pages there can be
- * a deadlock between 2 concurrent migration where each
- * are waiting on each other page lock.
- *
- * Make migrate_vma() a best effort thing and backoff
- * for any page we can not lock right away.
- */
- if (!trylock_page(page)) {
- migrate->src[i] = 0;
- migrate->cpages--;
- put_page(page);
- continue;
- }
- remap = false;
- migrate->src[i] |= MIGRATE_PFN_LOCKED;
- }
-
/* ZONE_DEVICE pages are not on LRU */
if (!is_zone_device_page(page)) {
if (!PageLRU(page) && allow_drain) {
@@ -2562,16 +2546,9 @@ static void migrate_vma_prepare(struct m
}
if (isolate_lru_page(page)) {
- if (remap) {
- migrate->src[i] &= ~MIGRATE_PFN_MIGRATE;
- migrate->cpages--;
- restore++;
- } else {
- migrate->src[i] = 0;
- unlock_page(page);
- migrate->cpages--;
- put_page(page);
- }
+ migrate->src[i] &= ~MIGRATE_PFN_MIGRATE;
+ migrate->cpages--;
+ restore++;
continue;
}
@@ -2579,80 +2556,20 @@ static void migrate_vma_prepare(struct m
put_page(page);
}
- if (!migrate_vma_check_page(page)) {
- if (remap) {
- migrate->src[i] &= ~MIGRATE_PFN_MIGRATE;
- migrate->cpages--;
- restore++;
-
- if (!is_zone_device_page(page)) {
- get_page(page);
- putback_lru_page(page);
- }
- } else {
- migrate->src[i] = 0;
- unlock_page(page);
- migrate->cpages--;
+ if (page_mapped(page))
+ try_to_migrate(page, 0);
- if (!is_zone_device_page(page))
- putback_lru_page(page);
- else
- put_page(page);
+ if (page_mapped(page) || !migrate_vma_check_page(page)) {
+ if (!is_zone_device_page(page)) {
+ get_page(page);
+ putback_lru_page(page);
}
- }
- }
-
- for (i = 0, addr = start; i < npages && restore; i++, addr += PAGE_SIZE) {
- struct page *page = migrate_pfn_to_page(migrate->src[i]);
- if (!page || (migrate->src[i] & MIGRATE_PFN_MIGRATE))
+ migrate->src[i] &= ~MIGRATE_PFN_MIGRATE;
+ migrate->cpages--;
+ restore++;
continue;
-
- remove_migration_pte(page, migrate->vma, addr, page);
-
- migrate->src[i] = 0;
- unlock_page(page);
- put_page(page);
- restore--;
- }
-}
-
-/*
- * migrate_vma_unmap() - replace page mapping with special migration pte entry
- * @migrate: migrate struct containing all migration information
- *
- * Replace page mapping (CPU page table pte) with a special migration pte entry
- * and check again if it has been pinned. Pinned pages are restored because we
- * cannot migrate them.
- *
- * This is the last step before we call the device driver callback to allocate
- * destination memory and copy contents of original page over to new page.
- */
-static void migrate_vma_unmap(struct migrate_vma *migrate)
-{
- const unsigned long npages = migrate->npages;
- const unsigned long start = migrate->start;
- unsigned long addr, i, restore = 0;
-
- for (i = 0; i < npages; i++) {
- struct page *page = migrate_pfn_to_page(migrate->src[i]);
-
- if (!page || !(migrate->src[i] & MIGRATE_PFN_MIGRATE))
- continue;
-
- if (page_mapped(page)) {
- try_to_migrate(page, 0);
- if (page_mapped(page))
- goto restore;
}
-
- if (migrate_vma_check_page(page))
- continue;
-
-restore:
- migrate->src[i] &= ~MIGRATE_PFN_MIGRATE;
- migrate->cpages--;
- restore++;
}
for (addr = start, i = 0; i < npages && restore; addr += PAGE_SIZE, i++) {
@@ -2665,12 +2582,8 @@ restore:
migrate->src[i] = 0;
unlock_page(page);
+ put_page(page);
restore--;
-
- if (is_zone_device_page(page))
- put_page(page);
- else
- putback_lru_page(page);
}
}
@@ -2693,8 +2606,8 @@ restore:
* it for all those entries (ie with MIGRATE_PFN_VALID and MIGRATE_PFN_MIGRATE
* flag set). Once these are allocated and copied, the caller must update each
* corresponding entry in the dst array with the pfn value of the destination
- * page and with the MIGRATE_PFN_VALID and MIGRATE_PFN_LOCKED flags set
- * (destination pages must have their struct pages locked, via lock_page()).
+ * page and with MIGRATE_PFN_VALID. Destination pages must be locked via
+ * lock_page().
*
* Note that the caller does not have to migrate all the pages that are marked
* with MIGRATE_PFN_MIGRATE flag in src array unless this is a migration from
@@ -2764,8 +2677,6 @@ int migrate_vma_setup(struct migrate_vma
migrate_vma_collect(args);
if (args->cpages)
- migrate_vma_prepare(args);
- if (args->cpages)
migrate_vma_unmap(args);
/*
_
* [patch 5/7] mm: unexport folio_memcg_{,un}lock
From: Andrew Morton @ 2021-11-11 4:32 UTC (permalink / raw)
To: akpm, hannes, hch, linux-mm, mhocko, mm-commits, torvalds, vdavydov.dev
From: Christoph Hellwig <hch@lst.de>
Subject: mm: unexport folio_memcg_{,un}lock
Patch series "unexport memcg locking helpers".
Neither the old page-based nor the new folio-based memcg locking helpers
are used in modular code at all, so drop the exports.
This patch (of 2):
folio_memcg_{,un}lock are only used in built-in core mm code.
Link: https://lkml.kernel.org/r/20210820095815.445392-1-hch@lst.de
Link: https://lkml.kernel.org/r/20210820095815.445392-2-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/memcontrol.c | 2 --
1 file changed, 2 deletions(-)
--- a/mm/memcontrol.c~mm-unexport-folio_memcg_unlock
+++ a/mm/memcontrol.c
@@ -2058,7 +2058,6 @@ again:
memcg->move_lock_task = current;
memcg->move_lock_flags = flags;
}
-EXPORT_SYMBOL(folio_memcg_lock);
void lock_page_memcg(struct page *page)
{
@@ -2092,7 +2091,6 @@ void folio_memcg_unlock(struct folio *fo
{
__folio_memcg_unlock(folio_memcg(folio));
}
-EXPORT_SYMBOL(folio_memcg_unlock);
void unlock_page_memcg(struct page *page)
{
_
* [patch 6/7] mm: unexport {,un}lock_page_memcg
From: Andrew Morton @ 2021-11-11 4:32 UTC (permalink / raw)
To: akpm, hannes, hch, linux-mm, mhocko, mm-commits, torvalds, vdavydov.dev
From: Christoph Hellwig <hch@lst.de>
Subject: mm: unexport {,un}lock_page_memcg
These are only used in built-in core mm code.
Link: https://lkml.kernel.org/r/20210820095815.445392-3-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/memcontrol.c | 2 --
1 file changed, 2 deletions(-)
--- a/mm/memcontrol.c~mm-unexport-unlock_page_memcg
+++ a/mm/memcontrol.c
@@ -2063,7 +2063,6 @@ void lock_page_memcg(struct page *page)
{
folio_memcg_lock(page_folio(page));
}
-EXPORT_SYMBOL(lock_page_memcg);
static void __folio_memcg_unlock(struct mem_cgroup *memcg)
{
@@ -2096,7 +2095,6 @@ void unlock_page_memcg(struct page *page
{
folio_memcg_unlock(page_folio(page));
}
-EXPORT_SYMBOL(unlock_page_memcg);
struct obj_stock {
#ifdef CONFIG_MEMCG_KMEM
_
* [patch 7/7] kasan: add kasan mode messages when kasan init
From: Andrew Morton @ 2021-11-11 4:32 UTC (permalink / raw)
To: akpm, andreyknvl, catalin.marinas, chinwen.chang, david, dvyukov,
elver, glider, Kuan-Ying.Lee, linux-mm, matthias.bgg, mm-commits,
nicholas.tang, ryabinin.a.a, torvalds, will, yee.lee
From: Kuan-Ying Lee <Kuan-Ying.Lee@mediatek.com>
Subject: kasan: add kasan mode messages when kasan init
There are multiple kasan modes.  It makes sense to print a message
during boot indicating which mode is active; see [1].
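
With this change the boot log identifies the mode, for example (lines
derived from the pr_info() format strings in the hunks below; the
mode= and stacktrace= values depend on the configuration):

    KernelAddressSanitizer initialized (generic)
    KernelAddressSanitizer initialized (sw-tags)
    KernelAddressSanitizer initialized (hw-tags, mode=sync, stacktrace=on)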
Link: https://bugzilla.kernel.org/show_bug.cgi?id=212195 [1]
Link: https://lkml.kernel.org/r/20211020094850.4113-1-Kuan-Ying.Lee@mediatek.com
Signed-off-by: Kuan-Ying Lee <Kuan-Ying.Lee@mediatek.com>
Reviewed-by: Marco Elver <elver@google.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Chinwen Chang <chinwen.chang@mediatek.com>
Cc: Yee Lee <yee.lee@mediatek.com>
Cc: Nicholas Tang <nicholas.tang@mediatek.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
arch/arm64/mm/kasan_init.c | 2 +-
mm/kasan/hw_tags.c | 14 +++++++++++++-
mm/kasan/sw_tags.c | 2 +-
3 files changed, 15 insertions(+), 3 deletions(-)
--- a/arch/arm64/mm/kasan_init.c~kasan-add-kasan-mode-messages-when-kasan-init
+++ a/arch/arm64/mm/kasan_init.c
@@ -310,7 +310,7 @@ void __init kasan_init(void)
kasan_init_depth();
#if defined(CONFIG_KASAN_GENERIC)
/* CONFIG_KASAN_SW_TAGS also requires kasan_init_sw_tags(). */
- pr_info("KernelAddressSanitizer initialized\n");
+ pr_info("KernelAddressSanitizer initialized (generic)\n");
#endif
}
--- a/mm/kasan/hw_tags.c~kasan-add-kasan-mode-messages-when-kasan-init
+++ a/mm/kasan/hw_tags.c
@@ -106,6 +106,16 @@ static int __init early_kasan_flag_stack
}
early_param("kasan.stacktrace", early_kasan_flag_stacktrace);
+static inline const char *kasan_mode_info(void)
+{
+ if (kasan_mode == KASAN_MODE_ASYNC)
+ return "async";
+ else if (kasan_mode == KASAN_MODE_ASYMM)
+ return "asymm";
+ else
+ return "sync";
+}
+
/* kasan_init_hw_tags_cpu() is called for each CPU. */
void kasan_init_hw_tags_cpu(void)
{
@@ -177,7 +187,9 @@ void __init kasan_init_hw_tags(void)
break;
}
- pr_info("KernelAddressSanitizer initialized\n");
+ pr_info("KernelAddressSanitizer initialized (hw-tags, mode=%s, stacktrace=%s)\n",
+ kasan_mode_info(),
+ kasan_stack_collection_enabled() ? "on" : "off");
}
void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags)
--- a/mm/kasan/sw_tags.c~kasan-add-kasan-mode-messages-when-kasan-init
+++ a/mm/kasan/sw_tags.c
@@ -42,7 +42,7 @@ void __init kasan_init_sw_tags(void)
for_each_possible_cpu(cpu)
per_cpu(prng_state, cpu) = (u32)get_cycles();
- pr_info("KernelAddressSanitizer initialized\n");
+ pr_info("KernelAddressSanitizer initialized (sw-tags)\n");
}
/*
_