Migrating SmarterMail to a new server
Question asked by Ashish Shah - 2/10/2022 at 3:05 AM
Answered
Hello,
Let me first explain the scenario.

Currently we are running a SmarterMail (SM) server with 250 websites and 8 TB of data, but it is a standalone server with no data backup.

We have set up a new environment with primary/secondary SM servers and shared NFS storage configured with RAID 10, so three servers in total for the new setup.

We want to migrate all the data, along with the SM license, to the new server, but the data copy is painful: with robocopy it took almost 4 days to transfer the 8 TB. Since this is a production server we cannot afford long downtime and have to migrate with minimal interruption.
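For context, the staged copy we have in mind to keep the final window short looks roughly like this (paths and options are illustrative; /XD IndexV2 assumes the search index folders can simply be rebuilt on the new side):

    # Pass 1..n while the server stays live; /MIR re-copies only what
    # changed since the previous pass, so each run gets shorter.
    robocopy D:\SmarterMail\Domains \\smstorage\SmartermailDomains\Domains `
        /MIR /COPY:DAT /DCOPY:T /MT:32 /R:1 /W:1 /XD IndexV2 /LOG:C:\Temp\sync.log

    # Final pass during a short window: stop SmarterMail, run the same
    # command once more to catch the last delta, re-point the domain
    # paths, and start the service against the new storage.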

The re-sync also consumes a lot of time, so we cannot move all the domains at the same time. Below are the two options we have:

Option 1
Take a long downtime window, 4 hours or more, change the domain paths, and re-sync the data from one server to the other.

Option 2
Map the shared storage on the current production server, move the data of one website at a time, and change its domain path so that all new data is written to the shared storage.
Once all the website domain paths have been shifted to the shared storage, change the SM license and domain paths to match the new environment.

Option 2 seems doable, but we are facing issues while changing the domain path.

1) We tried changing the domain path in the domains.json file and then stopping and starting the SM service, but after this change the service does not start. Once we change the domain path back to the original, the service starts within 2-3 seconds (see the sketch after this list).

2) We tried to attach/detach the domain, but it gives an error. When I map the NFS storage as H: it gives the error "Invalid folder path or domain name", and when I give the UNC storage path it gives the error "domain path does not exists".
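Regarding point 1, this is roughly how we are making the domains.json change; the service name and file paths below are assumptions for illustration (check yours with Get-Service), and note that JSON stores backslashes escaped:

    # Hypothetical sketch: re-point one domain while SM is stopped.
    Stop-Service 'MailService'
    $cfg = 'C:\Program Files (x86)\SmarterTools\SmarterMail\Service\Settings\domains.json'
    Copy-Item $cfg "$cfg.bak"    # keep a backup so the original path can be restored
    $old = 'D:\\SmarterMail\\Domains\\example.com'                    # JSON-escaped old path
    $new = '\\\\smstorage\\SmartermailDomains\\Domains\\example.com'  # JSON-escaped UNC path
    (Get-Content $cfg -Raw).Replace($old, $new) | Set-Content $cfg
    Start-Service 'MailService'

If the service refuses to start after a change like this, the two usual suspects are broken JSON and a path the service account cannot reach.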

I tried creating a new domain on the UNC path and it is created properly; the problem is only with moving an existing domain.
The same path is mapped on the new server and works properly there, so what am I doing wrong here?

If it is a permissions issue, what could it be? The NFS client is installed on all the SM servers. (A quick check is sketched below.)
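One detail that may matter here: drive letters mapped in a logged-in session (like H:) are not visible to Windows services, so the attach via H: can never work from the SM service side; the UNC form is the right one, and the "domain path does not exists" error then usually means the share is not reachable by the account the service runs under. A quick check, with the service name again an assumption:

    # Which account does the SM service run as? Per-user mapped drives
    # are invisible to services, so always test with the UNC path.
    Get-CimInstance Win32_Service -Filter "Name='MailService'" |
        Select-Object Name, StartName

    # Is the share reachable by UNC from this box at all?
    Test-Path '\\smstorage\SmartermailDomains\Domains'

    # What does the NFS server actually export? (showmount ships with
    # the Windows NFS client tools.)
    showmount -e smstorage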

Can someone please help me here?

5 Replies

Tim Uzzanti Replied
Employee Post Marked As Answer
Having two live boxes and moving data over requires a careful process. Please open a ticket with support so they can evaluate the current and new servers and what kind of bandwidth and other variables you have involved. They can give you ideas based on your situation and goals.
Tim Uzzanti CEO SmarterTools Inc. www.smartertools.com
Ashish Shah Replied
I already did that, but they were only able to help me up to a certain point. I was told that this is something to do with permissions, but where and which, I am not aware.

In their test environment it is working. If you could guide me to the right team, it would be really helpful.
Manuel Martins Replied
Hi Ashish,

Why do you use shared NFS storage? Have you considered an iSCSI or Fibre Channel connection instead?

Are you implementing automatic SM server failover?

Ashish Shah Replied
We have implemented SM server failover and hence used NFS.
The only issue I am facing is that the current live server is not allowing me to change the domain paths to the NFS storage, yet the same thing works if I add a new domain and assign the shared path from the beginning.

Both servers are connected to each other over private IPs and a 10-gig switch.
Ashish Shah Replied
Hello,

We were able to deploy the SM HA setup with 2 servers for failover and 1 storage server, but soon after the migration we faced issues with indexing and with settings not being retained.

Indexing would not work; it was something to do with a write.lock error:

Error indexing user: Lucene.Net.Store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@\\smstorage\SmartermailDomains\Domains\xxxxx.com\Users\xyz\IndexV2\write.lock: System.IO.IOException: The request is not supported.

The above is the error we were getting in the logs for indexing. Also, whenever we changed any setting for a domain/user, it would reset back to the previous value after some time.

We reverted to a single server with storage on the same server and the issue was resolved, so I believe this has something to do with the NFS share (a minimal way to reproduce the lock failure is sketched below).
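"The request is not supported" here comes from the byte-range lock Lucene tries to take on write.lock, which suggests the Windows NFS client is rejecting that lock type on the share. A minimal check, taking the same kind of lock on a scratch file on the share (path is an assumption based on the log above):

    # Try a byte-range lock like the one behind write.lock. On a share
    # without lock support this throws "The request is not supported".
    $fs = [System.IO.File]::Open('\\smstorage\SmartermailDomains\locktest.bin',
        [System.IO.FileMode]::OpenOrCreate,
        [System.IO.FileAccess]::ReadWrite,
        [System.IO.FileShare]::None)
    try     { $fs.Lock(0, 1); 'lock acquired - byte-range locks are supported' }
    catch   { $_.Exception.Message }
    finally { $fs.Dispose() }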

Has anyone set up a similar architecture for SM and managed to solve the issue we are facing?
