Hi,

The patch below fixes two races in SysV shared memory.

The first (minor) one is in shmem_free_swp():

	swap_free (entry);
	*ptr = (swp_entry_t){0};
	freed++;
	if (!(page = lookup_swap_cache(entry)))
		continue;
	delete_from_swap_cache(page);
	page_cache_release(page);

There is a window between the first swap_free() and the lookup_swap_cache(). If the swap_free() frees the last reference to the swap entry, and another CPU allocates and caches the same entry before the lookup, we end up destroying another task's swap cache.

The second is nastier. shmem_nopage() uses the inode semaphore to serialise access to shmem_getpage_locked() when paging in shared memory segments. Lookups in the page cache and in the shmem swap vector are done to locate the entry. _getpage_ can move entries from swap to the page cache under protection of the shmem's info->lock spinlock.

shmem_writepage() is serialised via the page lock, and moves shmem pages from the page cache to the swap cache under protection of the same info->lock spinlock. However, shmem_nopage() does not hold this spinlock while doing its lookups in the page cache and swap vector, so it can race with writepage: one CPU can be midway through moving the page out of the page cache in writepage, while the other CPU then fails to find the entry either in the page cache or in the shm swap entry vector.

Feedback welcome.

Cheers,
 Stephen