
10 Critical Requirements for New Linux File-Systems to Avoid Kernel Bloat

The growing maintenance burden that in-tree file-systems place on the Linux kernel's VFS layer has led to strict guidelines for new ones: maintainability, testing, uniqueness, and a long-term commitment.

Casino88 · 2026-05-04 20:13:39 · Linux & DevOps

Introduction

The Linux kernel's ever-growing collection of file-systems has become a significant maintenance burden, especially for developers maintaining the Virtual File System (VFS) layer. With each new proposal, the core code grows more complex and harder to manage. In response, the Linux community has drafted guidelines to ensure that future file-systems meet strict criteria before being accepted into the mainline kernel. Here are the ten key requirements that any new file-system must fulfill to avoid contributing to kernel bloat.


1. The Growing Burden of File-System Proliferation

The Linux kernel now supports dozens of file-systems, from legacy designs like ext2 to modern ones such as Btrfs (and, out of tree, ZFS via external modules). This proliferation strains the VFS layer, which must interface with each file-system through a common API. Each addition introduces potential bugs, compatibility issues, and code duplication. As a result, upstream maintainers spend increasing amounts of time on VFS-related patches and bug fixes, reducing bandwidth for innovation. To prevent this burden from worsening, new file-systems must justify their inclusion by addressing real use cases not already covered by existing solutions.

2. Impact on Virtual File-System (VFS) Maintenance

The VFS code is the heart of Linux's storage stack. Every new file-system adds hooks, flags, and data structures that can destabilize the entire layer. For example, the introduction of overlay file-systems required changes to VFS locking and namespace handling. Similarly, network file-systems like NFS and CIFS demand special attention to caching and consistency. Future file-systems must work within existing VFS abstractions without forcing major overhauls. They should re-use common infrastructure (e.g., page cache, inode operations) and minimize new VFS-specific code. This requirement reduces the risk of regression and eases long-term maintenance.

3. The Push for Clear Acceptance Guidelines

To curb the flood of underbaked proposals, kernel developers have compiled a formal set of guidelines—similar to the rules for drivers and networking code. These guidelines mandate that any new file-system must be accompanied by a detailed design document, performance benchmarks, and evidence of real-world usage. Moreover, the proposal must pass through a dedicated review process involving VFS maintainers and storage subsystem experts. The goal is to ensure that only file-systems with a clear purpose, robust design, and committed maintainers enter the mainline. This filter saves community effort and prevents orphaned code.

4. Code Quality and Maintainability Standards

New file-systems must adhere to the same coding standards as the rest of the kernel: strict use of kernel APIs, proper error handling, and avoidance of global state. The code should be modular, well-commented, and easy to navigate. Additionally, maintainers expect a clear separation between generic file-system logic and hardware-specific parts. The use of existing helper functions and libraries is encouraged over reinventing the wheel. A file-system that is clean and maintainable today reduces the chance of bugs tomorrow and makes it easier for other developers to contribute improvements.

5. Comprehensive Testing Requirements

Before acceptance, a new file-system must demonstrate that it passes the community's fstests (formerly xfstests) suite and the relevant Linux Test Project (LTP) cases, as well as stress tools like fsx and fsstress. Test coverage should include metadata operations, concurrent access, power-failure recovery, and large-scale configurations. The submitter must also provide a test driver that can be run automatically by the kernel's continuous integration (CI) systems. Furthermore, the file-system needs to be tested on multiple architectures (x86, ARM, etc.) and under different memory/page-size configurations. This thorough testing ensures that the code doesn't introduce latent bugs in other parts of the kernel.

6. Performance Benchmarks and Scalability

Any new file-system must prove it can perform comparably or better than existing options for its intended workload. Submitters must provide benchmarks on a variety of workloads: sequential and random I/O, small and large file operations, directory tree traversals, and mixed read/write patterns. Scalability across many CPU cores and high-capacity storage devices is also scrutinized. For example, a file-system targeting SSDs should demonstrate low latency under high concurrency, while a network file-system must show acceptable throughput over typical links. Performance regressions in the VFS or core kernel due to the new code are unacceptable.
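
To make the expectation concrete, the kind of workload matrix reviewers ask for can be sketched as a fio job file. The mount point, sizes, and job names below are illustrative placeholders, not a prescribed configuration:

```ini
; Hypothetical fio job file for benchmarking a new file-system
; mounted at /mnt/newfs. All values are examples.
[global]
directory=/mnt/newfs
size=1g
runtime=60
time_based
ioengine=libaio
direct=1

[seq-write]
rw=write
bs=1m
numjobs=1

[rand-rw]
rw=randrw
rwmixread=70
bs=4k
iodepth=32
numjobs=4
```

Running the same job file against an established file-system (say, ext4 or XFS) on the same hardware gives reviewers the apples-to-apples comparison they are looking for.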

7. Uniqueness of Features and Use Cases

One of the strongest arguments for a new file-system is that it offers features not available in existing ones. For instance, ZFS brought snapshots and checksumming, while Btrfs introduced subvolumes and online defragmentation. A new proposal must clearly identify the gap it fills—whether it's a specialized on-disk format for embedded devices, a log-structured design for flash memory, or a distributed file-system with POSIX compatibility. If the same features can be achieved by modifying an existing file-system, the community may prefer that approach. Uniqueness reduces redundancy and keeps the kernel lean.

8. Documentation and User Support

Complete documentation is mandatory: a Documentation/filesystems/*.rst file describing usage, mount options, on-disk layout, and recovery procedures. Man pages for userspace tools (e.g., mkfs.foo) should also be provided. Additionally, the maintainer is expected to be responsive on mailing lists and participate in bug triage. Without clear documentation, administrators cannot adopt the file-system reliably. The kernel community has learned from past mistakes where poorly documented file-systems hindered debugging. A commitment to ongoing support ensures the file-system remains usable over kernel releases.
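
A skeleton of such a document, for a hypothetical file-system called "foofs", might begin like this; the section names mirror the pattern used by existing files under Documentation/filesystems/, but the content shown is purely illustrative:

```rst
.. SPDX-License-Identifier: GPL-2.0

=====
foofs
=====

Overview
========

What foofs is for, its intended workloads, and a high-level
description of the on-disk format.

Mount options
=============

compress=<mode>
    Hypothetical example of documenting a mount option and its
    accepted values.

Recovery
========

How to check and repair a damaged volume, e.g. with a
(hypothetical) fsck.foofs tool.
```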

9. Long-Term Maintenance Commitment

Kernel acceptance is not a one-time submission; it's a long-term commitment. The submitter (or a designated team) must pledge to maintain the file-system for at least several years, including handling bug reports, security fixes, and compatibility adjustments as the VFS evolves. Orphaned file-systems are eventually removed from the kernel, as happened with reiserfs. The new guidelines discourage short-lived experiments. A maintenance plan should specify how critical fixes will be backported to stable kernel series. This requirement protects users who depend on the file-system for production environments.

10. Integration with Existing Ecosystem

Finally, a new file-system must integrate smoothly with the rest of the Linux stack: udev for device detection, systemd for mount management, fsck, dump/restore, and backup tools. It cannot break existing applications or filesystem-independent tools. For example, it should expose standard attributes via statx() and support ACLs, extended attributes, and quotas where applicable. Network-aware file-systems need to work with NFS exporting and Samba. Poor integration leads to a fragmented user experience. Meeting this requirement guarantees that users can rely on familiar admin tools and workflows.

Conclusion

The Linux kernel's VFS maintainers are rightly cautious about adding new file-systems. The ten requirements outlined above—from code quality to long-term support—serve as a filter to ensure only well-designed, essential file-systems make it into the mainline. Developers considering submitting a new file-system should study these guidelines carefully and prepare to address each point. This disciplined approach preserves the kernel's stability and performance while still encouraging innovation in storage technology.
