mm/slab: only allow SLAB_OBJ_EXT_IN_OBJ for unmergeable caches

While SLAB_OBJ_EXT_IN_OBJ reduces the memory overhead of accounting
slab objects, it prevents slab merging because merging can change
the metadata layout.

As pointed out by Vlastimil Babka, disabling merging solely for this
memory optimization may not be a net win, because disabling slab merging
tends to increase overall memory usage.

Restrict SLAB_OBJ_EXT_IN_OBJ to caches that are already unmergeable for
other reasons (e.g., those with constructors or SLAB_TYPESAFE_BY_RCU).

Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
Link: https://patch.msgid.link/20260127103151.21883-3-harry.yoo@oracle.com
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Harry Yoo
2026-01-27 19:31:51 +09:00
committed by Vlastimil Babka
parent a77d6d3386
commit 2f35fee943
3 changed files with 4 additions and 3 deletions


@@ -411,6 +411,7 @@ extern void create_boot_cache(struct kmem_cache *, const char *name,
 			unsigned int useroffset, unsigned int usersize);
 int slab_unmergeable(struct kmem_cache *s);
+bool slab_args_unmergeable(struct kmem_cache_args *args, slab_flags_t flags);
 slab_flags_t kmem_cache_flags(slab_flags_t flags, const char *name);


@@ -174,8 +174,7 @@ int slab_unmergeable(struct kmem_cache *s)
 	return 0;
 }
-static bool slab_args_unmergeable(struct kmem_cache_args *args,
-				  slab_flags_t flags)
+bool slab_args_unmergeable(struct kmem_cache_args *args, slab_flags_t flags)
 {
 	if (slab_nomerge)
 		return true;


@@ -8382,7 +8382,8 @@ static int calculate_sizes(struct kmem_cache_args *args, struct kmem_cache *s)
 	 */
 	aligned_size = ALIGN(size, s->align);
 #if defined(CONFIG_SLAB_OBJ_EXT) && defined(CONFIG_64BIT)
-	if (aligned_size - size >= sizeof(struct slabobj_ext))
+	if (slab_args_unmergeable(args, s->flags) &&
+	    (aligned_size - size >= sizeof(struct slabobj_ext)))
 		s->flags |= SLAB_OBJ_EXT_IN_OBJ;
 #endif
 	size = aligned_size;