
Storage Models

July 17, 2025

The UFS Storage Model

For backward compatibility with AhsayUBS v6, the UFS storage model is also supported. After the upgrade, FreeBSD loads the 'geom_concat.ko', 'geom_stripe.ko', and 'geom_raid5.ko' kernel modules to support the UFS storage model. To check that these kernel modules have been loaded correctly, run the "kldstat" command, which returns output similar to the following:

ahsayubs:/# kldstat
Id Refs Address            Size    Name
 1   46 0xffffffff80200000 10d1490 kernel
 2    1 0xffffffff812d2000 8cf0    vesa.ko
 3    1 0xffffffff8139c000 17378   ahci.ko
 4    1 0xffffffff813b4000 f108    mvs.ko
 5    1 0xffffffff8c3c4000 7b68    geom_concat.ko
 6    1 0xffffffff8c3cc000 8f60    geom_stripe.ko
 7    1 0xffffffff8c3d5000 25ae8   geom_mirror.ko
 8    1 0xffffffff8c3fb000 25c38   geom_raid5.ko
 9    1 0xffffffff8c611000 221398  zfs.ko
10    1 0xffffffff8c833000 7500    opensolaris.ko
11    1 0xffffffff8c83b000 11150   krpc.ko
12    1 0xffffffff8c84d000 9afc    iscsi_initiator.ko
13    1 0xffffffff8c857000 14bd    splash_bmp.ko
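
The module check above can be scripted. The sketch below searches kldstat output for the required GEOM modules; the sample output variable stands in for a live `kldstat` call on an AhsayUBS system.

```shell
#!/bin/sh
# Sketch: verify that the GEOM kernel modules required by the UFS storage
# model are loaded. On a live AhsayUBS system, capture real output with:
#   kldstat_out=$(kldstat)
# The sample string below stands in for that call.
kldstat_out="kernel vesa.ko ahci.ko mvs.ko geom_concat.ko geom_stripe.ko geom_mirror.ko geom_raid5.ko"

missing=""
for mod in geom_concat.ko geom_stripe.ko geom_raid5.ko; do
  case "$kldstat_out" in
    *"$mod"*) ;;                    # module present
    *) missing="$missing $mod" ;;   # module not loaded
  esac
done

if [ -z "$missing" ]; then
  echo "all UFS storage modules loaded"
else
  echo "missing:$missing"
fi
```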

The "Master Storage Device" on AhsayUBS is preserved in UFS format which is mounted on '/ubs/mnt/esosfw' and '/ubs/mnt/esfmfw' upon system boot time. The following example shows a UFS filesystem mount as '/ubs/mnt/esosfw' and '/ubs/mnt/esfmfw'.

ahsayubs:/# df -h
Filesystem                   Size Used Avail Capacity Mounted on
/dev/md0                     170M 152M   18M      90% /
devfs                        1.0K 1.0K    0B     100% /dev
/dev/mirror/9689F4EFxesosfw  186M 108M   63M      63% /ubs/mnt/esosfw
/dev/mirror/9689F4EFxesfmfw  744M  20K  684M       0% /ubs/mnt/esfmfw
eslsfwx9689F4EF               87G  87G   66M     100% /ubs/mnt/eslsfw
/dev/md1                      15M 2.5M   11M      18% /var

The Optional Labelled Device from AhsayUBS v6 is migrated in this version of AhsayUBS as one of the storage types, called "Optional Storage", under "Additional Storage". Volume status and UFS filesystem integrity checking (fsck) are also available in this AhsayUBS version. For details, please refer to the Storage section.

The ZFS Storage Model

AhsayUBS v9 is implemented with ZFS v5 and ZPOOL v28. Existing ZPOOL(s) created on AhsayUBS v6 with ZPOOL v13 will not be upgraded to ZPOOL v28; only newly created ZPOOLs will use the ZIL (ZFS Intent Log).

As the ZFS storage model is based on the GMIRROR and ZFS designs, FreeBSD loads the 'geom_mirror.ko', 'opensolaris.ko', and 'zfs.ko' kernel modules. The GEOM kernel modules used previously for UFS support ('geom_concat.ko', 'geom_stripe.ko', and 'geom_raid5.ko') are also loaded. To check that these kernel modules have been loaded correctly, run the 'kldstat' command, which returns output similar to the following:

ahsayubs:/# kldstat
Id Refs Address            Size    Name
 1   46 0xffffffff80200000 10d1490 kernel
 2    1 0xffffffff812d2000 8cf0    vesa.ko
 3    1 0xffffffff8139c000 17378   ahci.ko
 4    1 0xffffffff813b4000 f108    mvs.ko
 5    1 0xffffffff8c3c4000 7b68    geom_concat.ko
 6    1 0xffffffff8c3cc000 8f60    geom_stripe.ko
 7    1 0xffffffff8c3d5000 25ae8   geom_mirror.ko
 8    1 0xffffffff8c3fb000 25c38   geom_raid5.ko
 9    1 0xffffffff8c611000 221398  zfs.ko
10    1 0xffffffff8c833000 7500    opensolaris.ko
11    1 0xffffffff8c83b000 11150   krpc.ko
12    1 0xffffffff8c84d000 9afc    iscsi_initiator.ko
13    1 0xffffffff8c857000 14bd    splash_bmp.ko

The "Master Storage Device" on AhsayUBS is configured as a ZPOOL with a pool name in the 'eslsfwx{UID}' format. The ZFS pool is mounted at '/ubs/mnt/eslsfw' at system boot time. The following example shows an 87 GB zpool volume "eslsfwx9689F4EF" mounted at '/ubs/mnt/eslsfw':

ahsayubs:/# df -h
Filesystem                   Size Used Avail Capacity Mounted on
/dev/md0                     170M 152M   18M      90% /
devfs                        1.0K 1.0K    0B     100% /dev
/dev/mirror/9689F4EFxesosfw  186M 108M   63M      63% /ubs/mnt/esosfw
/dev/mirror/9689F4EFxesfmfw  744M  20K  684M       0% /ubs/mnt/esfmfw
eslsfwx9689F4EF               87G  87G   66M     100% /ubs/mnt/eslsfw
/dev/md1                      15M 2.5M   11M      18% /var
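
A quick way to confirm the pool is mounted is to scan `df` output for the expected mount point. A minimal sketch, using a sample line in place of a live `df -h` call (pool name and mount point match the example above):

```shell
#!/bin/sh
# Sketch: check that the ZFS pool is mounted at /ubs/mnt/eslsfw.
# On a live system, capture real output with: df_out=$(df -h)
df_out="eslsfwx9689F4EF  87G  87G  66M  100%  /ubs/mnt/eslsfw"

if printf '%s\n' "$df_out" | grep -q '/ubs/mnt/eslsfw$'; then
  result="mounted"
else
  result="not mounted"
fi
echo "ZFS pool: $result"
```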

For volume status and ZFS filesystem integrity checking, please refer to the Storage section for details.

ahsayubs:/# zpool status
  pool: eslsfwx9689F4EF
 state: ONLINE
  scan: scrub repaired 0 in 0h42m with 0 errors on Sun Mar  6 00:00:07 2022
config:

        NAME                     STATE   READ WRITE CKSUM
        eslsfwx9689F4EF          ONLINE     0     0     0
          label/9689F4EFxd00p09  ONLINE     0     0     0
        logs
          label/9689F4EFxd00p07  ONLINE     0     0     0

errors: No known data errors

The other "esgpbt", "esosfw", and "esfmfw" System Firmware Devices are still mounted from the /etc/fstab file.

ahsayubs:/# cat /etc/fstab
/dev/md0                     /                ufs  rw  0  0
/dev/mirror/9689F4EFxesosfw  /ubs/mnt/esosfw  ufs  ro  1  1
/dev/mirror/9689F4EFxesfmfw  /ubs/mnt/esfmfw  ufs  ro  1  1

The ZFS storage model is used for the following AhsayCBS locations:

  1. /ubs/mnt/eslsfw/obsr/user
  2. /ubs/mnt/eslsfw/obsr/system
  3. /ubs/mnt/eslsfw/obsr/system/obs/policies
  4. /ubs/mnt/eslsfw/obsr/conf
  5. /ubs/mnt/eslsfw/obsr/system/obr/webapps
  6. /ubs/mnt/eslsfw/obsr/rcvshome

The other System Firmware Devices such as "esgpbt", "esosfw", and "esfmfw" will remain unchanged as GEOM MIRROR based on UFS volumes. The GEOM device names are in the following formats:

  1. GPT Boot – {UID}xesgpbt
  2. Operating System Framework – {UID}xesosfw
  3. Firmware Module Framework – {UID}xesfmfw
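
The device-name format above can be illustrated with a short sketch; the UID value is taken from the earlier examples in this document and is illustrative only:

```shell
#!/bin/sh
# Sketch: derive the GEOM mirror device names for the System Firmware
# Devices from the system UID, using the {UID}x{name} format above.
# The UID value is illustrative (from the examples in this document).
uid="9689F4EF"

names=""
for fw in esgpbt esosfw esfmfw; do
  names="$names /dev/mirror/${uid}x${fw}"
  echo "/dev/mirror/${uid}x${fw}"
done
```
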

For production AhsayUBS servers configured with ZFS volume(s), it is strongly recommended to install at least 4 GB of RAM, as ZFS volumes require a relatively large amount of memory to run. The amount of memory required depends on the size of the ZFS volume and the amount of I/O activity.

ZFS Integrity Checking

To safeguard the data integrity of the files on the ZFS volume, a weekly "zpool scrub" (zpool volume data integrity check) is performed, starting at 00:00 every Sunday morning, to verify that the checksums of all the data in the specified ZFS pools are correct.

The scheduled start time of the "zpool scrub" is currently not user configurable, and it cannot be disabled in this version of AhsayUBS.

Once the "zpool scrub" job is started, it is not possible to stop it.

To check the status of the "zpool scrub", use the "zpool status" command, which returns output like the following. In this example, the "zpool scrub" has checked 14.51% of the pool eslsfwx9689F4EF:

ahsayubs:/# zpool status
  pool: eslsfwx9689F4EF
 state: ONLINE
  scan: scrub in progress since Mon Apr 26 08:31:27 2021
        12.6G scanned out of 86.6G at 35.9M/s, 0h35m to go
        0 repaired, 14.51% done
config:

        NAME                     STATE   READ WRITE CKSUM
        eslsfwx9689F4EF          ONLINE     0     0     0
          label/9689F4EFxd00p09  ONLINE     0     0     0
        logs
          label/9689F4EFxd00p07  ONLINE     0     0     0

errors: No known data errors
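
When monitoring a scrub from a script, the progress figure can be pulled out of the "zpool status" text. A sketch, using a captured sample of the output above in place of a live call:

```shell
#!/bin/sh
# Sketch: extract the scrub completion percentage from `zpool status`
# output. On a live system, capture real output with:
#   status=$(zpool status eslsfwx9689F4EF)
status='scan: scrub in progress since Mon Apr 26 08:31:27 2021
        12.6G scanned out of 86.6G at 35.9M/s, 0h35m to go
        0 repaired, 14.51% done'

# Find the "... 14.51% done" line and print the field ending in '%'.
pct=$(printf '%s\n' "$status" | awk '/% done/ { for (i = 1; i <= NF; i++) if ($i ~ /%$/) print $i }')
echo "scrub progress: $pct"
```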

If an additional data integrity check is required in between the scheduled weekly checks, it is possible to initiate a manual "zpool scrub" using the "zpool scrub {%POOL_NAME%}" command. As with the weekly "zpool scrub", the AhsayCBS service and backup/restore operations can continue to run as normal.

There may be some performance overhead associated with a "zpool scrub", i.e. CPU utilization, memory, and increased I/O activity. The performance overhead is proportional to the amount of data on the ZFS volume.

FreeBSD and ZFS Implementation

The ZFS v5 and ZPOOL v28 implementation on AhsayUBS has undergone an extended period of intensive performance and load testing, consistently delivering superior performance and data integrity compared with UFS.

ahsayubs:/# dmesg | grep ZFS
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
ahsayubs:/# zpool get version
NAME             PROPERTY  VALUE  SOURCE
eslsfwx9689F4EF  version   -      default
ahsayubs:/# dmesg | tail -20
SMP: AP CPU #11 Launched!
SMP: AP CPU #13 Launched!
SMP: AP CPU #2 Launched!
SMP: AP CPU #8 Launched!
SMP: AP CPU #9 Launched!
SMP: AP CPU #4 Launched!
SMP: AP CPU #6 Launched!
SMP: AP CPU #15 Launched!
SMP: AP CPU #7 Launched!
SMP: AP CPU #12 Launched!
SMP: AP CPU #5 Launched!
Random: unblocking device.
Trying to mount root from ufs:/dev/md0 []…
GEOM_MIRROR: Device mirror/9689F4EFxesgpbt launched (1/1).
GEOM_MIRROR: Device mirror/9689F4EFxesosfw launched (1/1).
GEOM_MIRROR: Device mirror/9689F4EFxesfmfw launched (1/1).
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
iscsi: version 2.3.1

Storage Module Migration

For AhsayUBS v6 environments migrating from the UFS to the ZFS storage model, only a manual migration method is available: offload your locally stored User Home data, AhsayUBS settings, and AhsayCBS settings to a temporary storage device; reinstall AhsayUBS from scratch; then copy the data and settings from the temporary storage back to the new AhsayUBS installation.

The migration process will generally involve:

  1. Copying the existing user data from AhsayUBS server to another storage device.

    • This refers to the data in all locally stored User Homes on the filesystem.
  2. Back up your AhsayUBS configuration via the AhsayUBS Management Console.

    • System > Backup/Restore > Backup Configuration
  3. Back up your AhsayCBS configuration (conf/*) and policies (system/obs/policies/*), export your branding properties, and back up any non-standard customizations. If you need to retain logs (logs/*) and (system/*), exclude (system/cbs/Installers/*), which contains old branded builds.
  4. Use the latest AhsayUBS installer to install a new version of AhsayUBS on the existing machine. This overwrites all existing data, returning your server to a bare state.
  5. Set the AhsayUBS IP address so that you can log in to the management console.
  6. Restore your AhsayUBS configuration.
  7. Stop the AhsayCBS service.
  8. Restore your AhsayCBS configuration.
  8. Restore your AhsayCBS configuration.
  9. Copy the user data from the temporary storage device back to the AhsayUBS server.
  10. Start the AhsayCBS service and verify AhsayCBS state is normal.
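
Steps 1 and 9 above amount to archiving the User Home directories to temporary storage and unpacking them onto the fresh installation. A hypothetical sketch using tar; all paths here are throwaway temporary directories so the commands are safe to run anywhere, and the real User Home locations depend on your AhsayCBS configuration:

```shell
#!/bin/sh
# Hypothetical sketch of migration steps 1 and 9: offload user data to
# temporary storage, then copy it back after reinstalling AhsayUBS.
# All paths are throwaway temp directories, not real AhsayUBS paths.
srcroot=$(mktemp -d)     # stands in for the old User Home parent directory
tmpstore=$(mktemp -d)    # stands in for the temporary storage device
destroot=$(mktemp -d)    # stands in for the fresh installation

mkdir -p "$srcroot/user"
echo "backup-data" > "$srcroot/user/user1.dat"

# Step 1: copy the user data off the server (tar preserves ownership,
# permissions, and symlinks, which a plain copy may not).
tar -C "$srcroot" -cf "$tmpstore/user.tar" user

# ... steps 2-8: back up configs, reinstall AhsayUBS, restore configs ...

# Step 9: restore the user data onto the new installation.
tar -C "$destroot" -xf "$tmpstore/user.tar"
restored=$(cat "$destroot/user/user1.dat")
echo "$restored"
```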