RDMA/mlx5: Fix page size bitmap calculation for KSM mode
[ Upstream commit 372fdb5c75b61f038f4abf596abdcf01acbdb7af ]
When using KSM (Key Scatter-gather Memory) access mode, the HW requires
the IOVA to be aligned to the selected page size.
Without this alignment, the HW may not function correctly.
Currently, mlx5_umem_mkc_find_best_pgsz() does not filter out page sizes
that would result in misaligned IOVAs for KSM mode. This can lead to
selecting page sizes that are incompatible with the given IOVA.
Fix this by filtering the page size bitmap when in KSM mode, keeping
only page sizes to which the IOVA is aligned.
Fixes: fcfb03597b ("RDMA/mlx5: Align mkc page size capability check to PRM")
Signed-off-by: Edward Srouji <edwards@nvidia.com>
Link: https://patch.msgid.link/20250824144839.154717-1-edwards@nvidia.com
Reviewed-by: Michael Guralnik <michaelgur@nvidia.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Committed by: Greg Kroah-Hartman
Parent: f6ca78b753
Commit: 5beca7388b
@@ -1803,6 +1803,10 @@ mlx5_umem_mkc_find_best_pgsz(struct mlx5_ib_dev *dev, struct ib_umem *umem,
 
 	bitmap = GENMASK_ULL(max_log_entity_size_cap, min_log_entity_size_cap);
 
+	/* In KSM mode HW requires IOVA and mkey's page size to be aligned */
+	if (access_mode == MLX5_MKC_ACCESS_MODE_KSM && iova)
+		bitmap &= GENMASK_ULL(__ffs64(iova), 0);
+
 	return ib_umem_find_best_pgsz(umem, bitmap, iova);
 }