(Rocky Linux 9/10, the RHEL family, and most systemd-based distros: a vendor-neutral guide)
If you’ve ever rebooted a Linux server and landed on a black screen that says something like:
- “Give root password for maintenance”
- “You are in emergency mode”
- “Cannot mount /home”
- “Dependency failed for Local File Systems”
…there’s a strong chance the culprit is /etc/fstab.
This article explains why it happens, where it happens, and how to fix it safely (without guessing device names like /dev/sdb).
What is /etc/fstab and why does it break boots?
/etc/fstab (filesystem table) tells Linux what to mount automatically at boot:
- Root filesystem (/)
- EFI partition (/boot/efi) on UEFI systems
- Separate data partitions (/home, /data, /mnt/...)
- Swap (a swapfile or a swap partition)
On modern Linux (systemd-based), boot expects the entries in fstab to be mountable. If a required mount fails, systemd often drops into emergency/maintenance mode to prevent booting into a half-broken system.
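Under the hood, systemd-fstab-generator turns each fstab line into a .mount unit at boot, and local-fs.target pulls in the non-optional ones. You can inspect those generated units yourself; home.mount below is just an example unit name:
systemctl cat home.mount
systemctl list-units --type=mount --all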
Common reasons /etc/fstab causes emergency mode
1) Using unstable device names (/dev/sdb, /dev/vdb, /dev/nvme1n1)
Device naming can change between boots depending on:
- Disk detection order
- VM device enumeration changes
- Adding/removing volumes
- Hypervisor or storage controller changes
So an entry like this is risky:
/dev/sdb /home ext4 defaults 0 2
If the disk shows up as /dev/sdc after a reboot, the mount fails → emergency mode.
Best practice: Use UUID or stable /dev/disk/by-id/... paths.
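To see which stable identifiers a filesystem offers, run (treating /dev/sdb1 below as a placeholder for your real partition):
blkid /dev/sdb1
ls -l /dev/disk/by-uuid/ /dev/disk/by-id/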
2) Wrong partition vs whole disk (/dev/sdb vs /dev/sdb1)
A common mistake is pointing to the whole disk when the filesystem is on a partition.
Example:
- Filesystem is on /dev/sdb1
- fstab says /dev/sdb
Mount fails → emergency mode.
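A quick blkid comparison makes the distinction obvious (/dev/sdb is again a placeholder):
blkid /dev/sdb     # at most partition-table info (PTTYPE), no filesystem UUID
blkid /dev/sdb1    # filesystem TYPE plus the UUID that belongs in fstab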
3) The disk/volume isn’t attached at boot
Common in cloud and virtual environments:
- Extra data disks attach late or fail to attach
- Volumes are detached accidentally
- Storage/network delays
If fstab insists that the disk must mount, the boot can fail.
4) Filesystem needs repair (unclean shutdown)
Power loss, crash, or forced reset can mark a filesystem inconsistent.
Boot may attempt an fsck; if it fails or the mount can’t proceed, emergency mode may appear.
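If the journal points at a dirty filesystem, you can usually repair it from the emergency shell. A minimal sketch for an ext4 filesystem, assuming it lives on /dev/sdb1 (substitute your real device, and never fsck a mounted filesystem):
umount /dev/sdb1 2>/dev/null   # make sure it isn't mounted
fsck -f /dev/sdb1              # add -y to auto-answer repair prompts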
5) Swapfile listed but missing
If your fstab references /swapfile but the file is gone or moved, that can also create boot-time failures on some setups.
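Before editing anything, confirm what the system actually sees:
ls -lh /swapfile   # does the file still exist at the path fstab references?
swapon --show      # which swap areas are active right now?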
Where does this problem happen?
This is vendor-neutral and can appear in many environments:
Bare metal
- Added/removed disks
- Controller changes
- RAID/HBA enumeration changes
Virtual machines
- KVM/QEMU, VMware, Hyper-V, VirtualBox
- Disks added, bus changes (SCSI/VirtIO), disk order changes
Cloud instances
- Attached volumes (block storage) are not guaranteed to appear in the same order
- cloud-init templates sometimes add fstab entries that aren’t safe long-term
Any systemd-based distro
- RHEL family, Debian family, SUSE family, and more
- The behavior varies slightly, but the root issue is the same
How to fix it (step-by-step)
Step 1: Boot into the maintenance prompt
When you see “Give root password for maintenance”, enter the root password.
You’re now in a minimal shell where you can inspect logs and fix fstab.
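One common gotcha: in this shell the root filesystem is often mounted read-only, so remount it read-write before you try to edit fstab:
mount -o remount,rw /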
Step 2: Identify what failed
Run:
systemctl --failed
systemctl status local-fs.target
journalctl -xb --no-pager | tail -200
If the failure mentions a mount unit like home.mount, it’s likely your /home entry.
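You can then zoom in on that specific unit (substitute whatever unit name systemctl --failed actually reported):
systemctl status home.mount
journalctl -b -u home.mount --no-pager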
Step 3: Check real disk layout and UUIDs
Run:
lsblk -f
blkid
This shows which filesystem exists on which device and the UUID you should use in fstab.
Step 4: Fix /etc/fstab properly
Open the file:
nano /etc/fstab
Replace device names with UUID
Instead of:
/dev/sdb /home ext4 defaults 0 2
Use:
UUID=YOUR-UUID-HERE /home ext4 defaults 0 2
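To print just the UUID value for copy-pasting (again with /dev/sdb1 as a stand-in for your real partition):
blkid -s UUID -o value /dev/sdb1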
Boot-safe mounting for “optional” disks (recommended)
If /home or a data disk is not required to boot, make it boot-safe:
UUID=YOUR-UUID-HERE /home ext4 defaults,nofail,x-systemd.device-timeout=10s 0 2
What these options do
- nofail → don’t drop to emergency mode if the device isn’t present
- x-systemd.device-timeout=10s → don’t hang the boot for a long time waiting for the disk
This is extremely useful in cloud/VM environments where volumes can attach late.
Swapfile entries: make them resilient
If your system uses /swapfile, ensure it won’t block boot:
/swapfile none swap defaults,nofail 0 0
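If the swapfile was deleted, you can recreate it instead of removing the entry. A sketch for a 2 GiB swapfile (the size is an example; dd is used because some filesystems don't support swapfiles created with fallocate):
dd if=/dev/zero of=/swapfile bs=1M count=2048   # allocate 2 GiB of real blocks
chmod 600 /swapfile                             # swap must not be world-readable
mkswap /swapfile
swapon /swapfile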
Example: Clean, modern, vendor-neutral fstab
UUID=ROOT-UUID-HERE / ext4 defaults 1 1
UUID=EFI-UUID-HERE /boot/efi vfat umask=077,uid=0,gid=0 0 2
UUID=HOME-UUID-HERE /home ext4 defaults,nofail,x-systemd.device-timeout=10s 0 2
/swapfile none swap defaults,nofail 0 0
Validate before reboot (very important)
After editing fstab, always validate it:
systemctl daemon-reload
mount -av
swapon -a
If mount -av returns errors, fix them now — don’t reboot yet.
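On systems with a reasonably recent util-linux, findmnt adds a dry-run style check that parses fstab without mounting anything:
findmnt --verify
findmnt --verify --verbose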
Recovery trick: boot now, fix later
If you just need the system to boot immediately:
- Comment out the failing line in fstab by adding # at the start
- Run mount -av again
- Reboot
If the system now boots cleanly, that confirms fstab was the cause.
One important warning about /home
If you mount a separate disk on /home, anything currently in /home on the root filesystem becomes hidden (not deleted, just covered by the mount).
If you already had users/data in /home on /, copy it to the new disk first before permanently mounting it.
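A minimal migration sketch, assuming the new disk is /dev/sdb1 (a placeholder) and it is not yet mounted on /home:
mkdir -p /mnt/newhome
mount /dev/sdb1 /mnt/newhome
rsync -aHAX /home/ /mnt/newhome/   # -aHAX preserves permissions, hard links, ACLs, xattrs
umount /mnt/newhome
The trailing slashes on the rsync paths matter: they copy the contents of /home rather than the directory itself.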
Summary
When Linux drops to emergency mode during boot, fstab is one of the first things to suspect. The most reliable fixes are:
- Stop using /dev/sdb-style device paths in fstab
- Use UUID= (or stable /dev/disk/by-id/... paths)
- Add nofail + x-systemd.device-timeout=10s for optional mounts
- Always test with mount -av before rebooting