Tags: xzpeter/linux
FIX: KVM: guest_memfd: Fix huge page leak Signed-off-by: Peter Xu <peterx@redhat.com>
KVM: selftests: Test guest-memfd full share mode on 2M pages Signed-off-by: Peter Xu <peterx@redhat.com>
fixup! mm: Introduce ARCH_SUPPORTS_HUGE_PFNMAP and special bits to pmd/pud Signed-off-by: Peter Xu <peterx@redhat.com>
mm/arm64: Support large pfn mappings

Support huge pfnmaps by using bit 56 (PTE_SPECIAL) for "special" on pmds/puds. Provide the pmd/pud helpers to set/get the special bit.

There's one more thing missing for arm64, which is pxx_pgprot() for pmd/pud. Add them too; they are mostly the same as the pte version, except that the pfn field is dropped. These helpers are needed by the new follow_pfnmap*() API to report valid pgprot_t results.

Cc: linux-arm-kernel@lists.infradead.org
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Peter Xu <peterx@redhat.com>
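A minimal sketch of what the pmd side of these helpers might look like, mirroring arm64's existing pte helpers (set_pmd_bit() usage and the exact definitions here are assumptions for illustration, not the literal patch):

#define pmd_special(pmd)        (!!(pmd_val(pmd) & PTE_SPECIAL))

static inline pmd_t pmd_mkspecial(pmd_t pmd)
{
        /* Reuse bit 56 (PTE_SPECIAL) to mark a pfnmap pmd as "special" */
        return set_pmd_bit(pmd, __pgprot(PTE_SPECIAL));
}

#define pmd_pgprot pmd_pgprot
static inline pgprot_t pmd_pgprot(pmd_t pmd)
{
        unsigned long pfn = pmd_pfn(pmd);

        /* Same trick as the pte version: xor out the pfn bits, keep the rest */
        return __pgprot(pmd_val(pfn_pmd(pfn, __pgprot(0))) ^ pmd_val(pmd));
}

The pud variants would follow the same pattern with the pud accessors.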
HACK: always fault MMIO on demand Signed-off-by: Peter Xu <peterx@redhat.com>
fs/userfaultfd: Use exclusive waitqueue for poll()

Userfaultfd is the kind of fd that does not need wake-all semantics on wakeup. Enqueue using the new POLL_ENQUEUE_EXCLUSIVE flag.

Signed-off-by: Peter Xu <peterx@redhat.com>
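A hedged sketch of how the uffd poll handler could request that behavior. Only POLL_ENQUEUE_EXCLUSIVE comes from the patch description; carrying it in the poll_table (the _flags field) is an assumption about the mechanism:

static __poll_t userfaultfd_poll(struct file *file, poll_table *wait)
{
        struct userfaultfd_ctx *ctx = file->private_data;
        __poll_t ret = 0;

        /*
         * Ask the poll core to enqueue us on ctx->fd_wqh with
         * add_wait_queue_exclusive() semantics, so one wakeup wakes a
         * single poller instead of every waiter.  How the flag reaches
         * the enqueue path is assumed here.
         */
        if (wait)
                wait->_flags |= POLL_ENQUEUE_EXCLUSIVE;
        poll_wait(file, &ctx->fd_wqh, wait);

        smp_mb();
        if (waitqueue_active(&ctx->fault_pending_wqh) ||
            waitqueue_active(&ctx->event_wqh))
                ret = EPOLLIN;

        return ret;
}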
fixup! hugetlb: add HGM support for follow_hugetlb_page Signed-off-by: Peter Xu <peterx@redhat.com>
selftests/uffd: Enable uffd-wp for shmem/hugetlbfs

Now that uffd-wp support has been added for shmem and hugetlbfs, we can always enable the uffd-wp tests.

Signed-off-by: Peter Xu <peterx@redhat.com>
mm: Rework zap ptes on swap entries

The goal of this small series is to replace the previous patch (which is the 5th patch of the series): https://lore.kernel.org/linux-mm/20210908163628.215052-1-peterx@redhat.com/

This patch uses a more aggressive (but IMHO cleaner and correct) approach by removing the trick that skips swap entries, so that we always handle swap entries.

The behavior of "skipping swap entries" has existed since the initial git commit: we skip swap entries when zapping ptes if the zap_details pointer is specified. I believe it is now broken because of the later introduction of the page migration mechanism, which did not exist in the world of the first git commit; with it, we can erroneously skip scanning swap entries for file-backed memory, like shmem, when we should not. I'm afraid RSS accounting can go wrong for those shmem pages during migration, so there could be leftover SHMEM RSS counts.

Patch 1 removes that trick; details are in the commit message. Patch 2 is a further cleanup of zap pte swap handling that can be done after patch 1, with no functional change intended.

The change should only affect the slow path for zapping swap entries (e.g., none/present ptes are always handled in the early code path, so they are not affected at all), but if anyone worries about a specific workload that may be affected by this patchset, please let me know and I'll be happy to run more tests. I could also have overlooked something buried in history; in that case please kindly point it out. Marking the patchset RFC for this. Smoke tested only.

Please review, thanks.
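A simplified sketch of the intended end state in the non-present branch of zap_pte_range()'s pte loop after patch 1 (a fragment only; helper names such as should_zap_page() are illustrative and not necessarily the literal patch):

/* Non-present pte: always decode the swap-like entry, never skip it */
entry = pte_to_swp_entry(ptent);
if (is_migration_entry(entry)) {
        struct page *page = pfn_swap_entry_to_page(entry);

        /* Keep RSS accounting correct for shmem pages under migration */
        if (!should_zap_page(details, page))
                continue;
        rss[mm_counter(page)]--;
} else if (!non_swap_entry(entry)) {
        /* Genuine swap entry */
        rss[MM_SWAPENTS]--;
        if (unlikely(!free_swap_and_cache(entry)))
                print_bad_pte(vma, addr, ptent, NULL);
}
pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);

The key difference from the old behavior is that a non-NULL zap_details no longer short-circuits this path, so migration entries of shmem pages get their RSS counters dropped properly.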
mm: Install marker pte when page out for shmem pages

When shmem pages are swapped out, instead of clearing the pte entry, we leave a marker pte showing that this page was swapped out, as a hint for pagemap. A new TTU flag is introduced to identify this case.

This can be useful for detecting swapped-out cold shmem pages. Then, after some background memory scanning work (which will fault the shmem page back in and confuse page reclaim), we can do MADV_PAGEOUT explicitly on this page to swap it out again, since we know it was cold.

For pagemap, we don't need to explicitly set the PM_SWAP bit, because SWP_PTE_MARKER ptes are already counted as PM_SWAP by virtue of their swap pte format.

Signed-off-by: Peter Xu <peterx@redhat.com>
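A rough sketch of the unmap-side change this implies in try_to_unmap_one() (the TTU flag and the marker-entry helper names are illustrative only, not the literal patch):

if (flags & TTU_HINT_PAGEOUT) {         /* illustrative flag name */
        /*
         * Leave a marker pte instead of a none pte, so that pagemap
         * later reports this slot as PM_SWAP: userspace can tell the
         * page was paged out (it was cold), not merely never touched.
         */
        swp_entry_t marker = make_marker_swp_entry();   /* illustrative helper */

        set_pte_at(mm, address, pvmw.pte, swp_entry_to_pte(marker));
} else {
        /* Default behavior: clear the pte entirely */
        pte_clear(mm, address, pvmw.pte);
}

A later pagemap read of that virtual address then sees a swap-format pte and reports PM_SWAP without any extra special-casing.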