Storage Optimisation - Separate User Config / Index from Mail Data
Idea shared by Nathan - 5/27/2021 at 1:58 AM
SSDs are getting cheaper, but they are still expensive compared to HDDs, particularly as everyone 'needs' 50GB mailboxes because they cannot possibly delete anything. On that note, it would be beneficial if we could separate user config/index data so it is stored in one path while mail data is stored in another. This would allow config and index to be kept on SSD for rapid search, while mail data is kept on more cost-effective HDD.

Extending this further, it would be even better if we could have a policy to store recent mail data on SSD (configurable for X days) and then automatically move it to HDD. That way the latest 'hot' email would be speedy, but the emails from 1998 which must be kept but never read could still be retrieved from slow disk if needed.

Having the 'static' split would be a great feature with the 'tiering' a nice to have.
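The tiering policy described above could be sketched roughly like this. SmarterMail has no such built-in option today; the paths, the mtime-based cutoff, and the function name are all hypothetical, purely to illustrate the "move mail older than X days to HDD" idea:

```python
import os
import shutil
import time

HOT_DAYS = 30  # the configurable "X days" window


def tier_old_mail(ssd_root: str, hdd_root: str, max_age_days: int = HOT_DAYS) -> int:
    """Move mail files whose mtime is older than the cutoff from the SSD path
    to the HDD path, preserving the relative directory layout.
    Returns the number of files moved."""
    cutoff = time.time() - max_age_days * 86400
    moved = 0
    for dirpath, _dirnames, filenames in os.walk(ssd_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            if os.path.getmtime(src) < cutoff:
                rel = os.path.relpath(src, ssd_root)
                dst = os.path.join(hdd_root, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)
                moved += 1
    return moved
```

A real implementation inside the mail server would of course key off message metadata rather than file timestamps, and would need the server to know to look in both locations when serving a mailbox.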

10 Replies

This is an excellent idea.
I do have another thought for admins thinking about drive utilization. We have used a RAM disk as the location for the spool (also Declude and Message Sniffer) for years. It completely eliminates beating the heck out of your drives and improves performance significantly. There are commercial RAM disk solutions that will store the RAM disk image to HDD as a late step in the shutdown process and then restore it on the way back up, so that everything is in place when the email services start.

Storing the historical log files in another location would be great. As a thought, maybe the current uncompressed ones could be treated differently from the compressed ones. That would provide rapid loading and searching on the SSD for troubleshooting current items, and then, when they are compressed, they could optionally be moved to another location. We are doing a compression and disk write at that point anyway.
SmarterMail(tm) 17
MAPI over HTTP - Let's flesh it out with Exchange like features!
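The compress-then-relocate idea for logs could look roughly like this. The function name, the `.log` suffix, and the age threshold are assumptions for illustration; "current" logs are approximated here as anything written to recently:

```python
import gzip
import os
import shutil
import time


def archive_old_logs(log_dir: str, archive_dir: str, min_age_hours: float = 24) -> list:
    """Gzip log files that have not been written to recently and move the
    .gz results to slower storage; current (still-growing) logs stay on
    the fast disk. Returns the list of archived paths."""
    os.makedirs(archive_dir, exist_ok=True)
    cutoff = time.time() - min_age_hours * 3600
    archived = []
    for name in sorted(os.listdir(log_dir)):
        if not name.endswith(".log"):
            continue
        src = os.path.join(log_dir, name)
        if os.path.getmtime(src) > cutoff:
            continue  # treat recently written files as "current"
        dst = os.path.join(archive_dir, name + ".gz")
        with open(src, "rb") as f_in, gzip.open(dst, "wb") as f_out:
            shutil.copyfileobj(f_in, f_out)
        os.remove(src)
        archived.append(dst)
    return archived
```

Since the server is compressing and writing to disk at rotation time anyway, pointing that write at the slower volume would add essentially no extra I/O.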
We also use this (a RAM disk for the spool) - it is one of the biggest performance improvements you can make (and it will also increase the lifespan of your disks).
We use Primo Ramdisk for this, from Romex Software: https://www.romexsoftware.com/en-us/index.html, as well as Primo Cache for other applications (e.g. accelerating file server shares), with very good success. We highly recommend them (the software is also very cheap).

Now, I would not recommend write caching for the mail store, but using a RAM disk for the spool works great.
This is just an idea, but maybe SmarterTools could implement some kind of RAM spool directly into SmarterMail, as an option (disk vs. RAM spool storage), and handle the mail service stopping/starting: messages in the spool would be saved to disk during shutdown and imported back into RAM during service startup. Of course, this could only work if no custom-made software is interacting with the spool folder (like Declude; in that case the disk/RAM software mentioned above, or one built into the system, should be used).
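The flush-on-shutdown / restore-on-startup handling described here could be sketched as follows. This is only an illustration of the idea, not anything SmarterMail implements; the function names and the notion of a RAM spool mount point are hypothetical:

```python
import os
import shutil


def flush_spool(ram_spool: str, disk_backup: str) -> None:
    """On clean service shutdown: persist undelivered spool messages to disk."""
    if os.path.isdir(disk_backup):
        shutil.rmtree(disk_backup)          # replace any stale backup
    shutil.copytree(ram_spool, disk_backup)


def restore_spool(ram_spool: str, disk_backup: str) -> None:
    """On service startup: reload persisted messages into the RAM spool."""
    if not os.path.isdir(disk_backup):
        return                              # nothing to restore
    os.makedirs(ram_spool, exist_ok=True)
    for name in os.listdir(disk_backup):
        shutil.move(os.path.join(disk_backup, name),
                    os.path.join(ram_spool, name))
    shutil.rmtree(disk_backup)              # avoid re-importing twice
```

The obvious weakness is that a hard crash never reaches the flush step, so anything sitting only in RAM is lost; that trade-off is exactly what the discussion below turns on.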
Webio, that would be great, but if the process crashes the spool would be lost, and therefore important mails not yet delivered could be lost too.

Why not just use a PCIe NVMe disk for the spool? They are quite cheap and can handle 3.5 GB/s and ~600,000 IOPS.

Okay, it is slower than RAM, but it should be sufficient for a robust spool directory.
Sébastien Riccio
System & Network Admin

For domains data we use a QNAP NAS with HDD drives (RAID 10) and an SSD cache, connected by iSCSI (Mellanox 40GbE network cards) to the host server. And we have an NVMe SSD drive on the server only for spool handling.

We are having very good results with this solution.
Zach Sylvester Replied
Employee Post
Hey Everyone,

I really like this idea. I went ahead and submitted this as a feature request. We will update this thread as updates come. 

Zach Sylvester
Technical Support Specialist
SmarterTools Inc.
(877) 357-6278
Tim Uzzanti Replied
Employee Post
SmarterMail is already optimized for NAS storage solutions that use SSD caches. In fact, we built SmarterMail to store email in daily GRP files for this very reason. When mail clients poll the server for "new email", it is often served from the SSD cache. Another reason we use GRP files is to avoid the disk fragmentation and backup issues that occur with an overwhelming number of files.

SmarterMail was built for ISPs, hosting companies and service providers - basically, any company that services a very large, diverse number of end users spread across multiple servers. I owned one of the largest hosting companies in the world before selling it to a public company, and built SmarterMail specifically for my own hosting business and my customers. That's why it is so efficient from a CPU, memory and disk I/O standpoint.

Regarding spool handling: that should not be on your NAS, and it would greatly benefit from local storage. Many customers use leftover small SSDs for this purpose.

I hope this helps.
Tim Uzzanti
SmarterTools Inc.
(877) 357-6278
Another reason for separating the config from the data is if you need to go 'dial tone' during a recovery, to expedite the return of some degree of service quickly. With that in mind, it is a darn sight easier to restore the 'config' folder for all users than to have to pick through folders.

@Manuel, out of interest, which QNAP do you have and how many spindles? I have seen mixed performance even with their enterprise (ZFS-based) storage, so it is interesting that someone has good real-world experience with iSCSI.
Hi @Nathan,

We are using a QNAP TES-1885U, ZFS-based (RAID 10 HDD drives for domains data + RAID 0 SSDs for the SSD cache), connected by iSCSI with Mellanox 40GbE network cards to a dual-Xeon server running Windows Server 2019 Standard.

On the Windows server we have two SSDs: one for the operating system + SmarterMail config, and another only for spool handling.

On the QNAP TES-1885U we take hourly snapshots of the domains data, and after each snapshot we replicate it to another QNAP NAS, a TS-h1886XU-RP (also connected with Mellanox 40GbE network cards). So if the first NAS fails, we have all domains data replicated on the second NAS (at most 59 minutes behind); we just redirect the iSCSI to the second NAS and the system is online again.

So far we are experiencing very good performance with this solution. I hope we never need the second NAS to go live, but I think it's a very good disaster recovery solution.

If you want to know anything more, just ask.
