Hi,

OK, this is the bit which addresses the dirtying of pages mapped into a kiobuf. It adds mapping-specific information to the kiobuf, so that just as map_user_kiobuf can record the struct file vector needed for any writepage after the IO completes, other mapping functions can specify their own operations on the mapped pages to deal with IO completion.

The patch adds a "map_private" field to the kiobuf, for use by the mapping function's callbacks, plus a vector of callbacks currently containing:

mark_dirty: marks all pages mapped in the kiobuf as dirty. Used by the raw device code after read IOs complete to propagate the dirty state into any mmaped files.

unmap: allows cleanup of the map_private field when the kiobuf is destroyed.

map_user_kiobuf does nothing special for kiobufs marked for write (meaning IO writes, ie. reads from memory), but for read IOs it sets up map_private as a vector of struct file *s. The mark_dirty callback propagates that information into the page if page->mapping->a_ops->writepage exists. If writepage does not exist, mark_dirty should simply do a SetPageDirty on the page, but the VM cannot cope with this at present: the swapout code does not yet handle flushing of dirty pages during page eviction, and if a process with such a page mapped exits, __free_pages_ok() will bugcheck on seeing the dirty bit. Rik, you said you were going to look at deferred swapout using the page dirty flag for anonymous pages --- do you want to take this up?

One other thing: at some point in the future I'd like to add a "mark_dirty" a_ops callback to be used in preference to writepage. That would allow filesystems such as ext2, which don't require the struct file * for page writes, to defer the write of these mmaped pages until later rather than force a flush to disk every time we dirty a kiobuf-mapped mmaped page.

Cheers,
Stephen