In this article, I will discuss the migration of an on-premise system to the Azure cloud using the DMO procedure (serial mode), with emphasis on data transfer methods and their optimization.
There are two options for transferring the SUM directory data to the target host: serial or parallel transfer mode.
- Serial data transfer mode
SUM exports all files to the file system; this file system must then be transferred manually to the target host.
(Source: SAP)
- Parallel data transfer mode
In this mode, the SUM directory including all files is transferred to the target host in the HOSTCHANGE_MOVE phase. The DMO procedure then continues on the source host: SUM creates export files that are copied to the target system, while SUM on the target host is started in parallel for the import phase.
Note: Parallel mode is strongly recommended, since shadow operations take a long time. However, if parallel mode is not possible for any reason, we can proceed with serial mode and leverage the AzCopy tool, which gives an advantage similar to running in parallel mode, as explained further in this article.
The Database Migration Option (DMO), part of the Software Update Manager, gives you the option to migrate your system to SAP HANA or SAP ASE (Sybase) as the target database, optionally combined with a system upgrade and Unicode conversion.
SELECTION OF SOFTWARE UPDATE MANAGER (SUM)
Based on the NetWeaver release of your source system, the SUM version below needs to be selected, with the latest patch level:
- NetWeaver 7.4 or lower – SUM 1.0 should be used
- NetWeaver 7.5 or higher – SUM 2.0 should be used
(Source: SAP)
Our scenario was a lift-and-shift migration from on-premise to the Azure cloud; no system upgrade was involved. DMO was invoked for the migration scenario only, without a system upgrade or Unicode conversion.
We need to add the parameter migration_only = 1 to the file SAPup_add.par in the path /usr/sap/SID/SUM/abap/bin/ (where SID is your system ID).
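The parameter file is a plain key-value list; for this migration-only scenario, SAPup_add.par needs just this single line:

```
migration_only = 1
```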
When SUM is started, it reads this parameter and enables the migration-only scenario, in which no stack XML is required, as shown in the screen below.
Since we are migrating to Azure cloud from on-premise, we need to select the option “Enable the migration with System Move”
Based on the number of CPUs, table sizes, and so on, finalize optimal values for the parameters below. They play a significant role in the package splitting of large tables and affect the duration of the export and import phases.
SUM then continues with the configuration, checks, and preprocessing phases, which take around 3 to 4 hours.
In the preprocessing phase, the development environment must be locked, and the shadow instance is built.
Since we had already decided to proceed with serial mode, no action is required in the step below; SUM proceeds to the export phase.
The EXPORT phase is now complete. It's time to move the data to the target server.
DATA TRANSFER OPTIONS AND OPTIMISATION
The table below can be used to estimate the transfer time and, based on that, to choose between an offline transfer and a transfer over the network. It shows the projected time for network data transfer at various available network bandwidths (assuming 90% utilization).
(Source: Microsoft)
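The bandwidth math behind such a table is simple: time = data size in bits / effective throughput. A quick sanity check in awk (decimal terabytes assumed):

```shell
# Hours to move 1.5 TB over a 500 Mbps link at 90% utilization:
# 1.5 * 8e12 bits / (500e6 bps * 0.9) / 3600 s per hour
awk -v tb=1.5 -v mbps=500 \
  'BEGIN { printf "%.1f\n", tb * 8e12 / (mbps * 1e6 * 0.9) / 3600 }'
# prints 7.4
```

Note this is a single-link estimate; an on-premise-to-Azure migration copies the data twice (source to Blob storage, then Blob storage to the target server), so plan for the end-to-end time accordingly.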
While migrating a system from on-premise to Azure using an online transfer, the data needs to be transferred in two phases (this approach was used during this migration):
- Data from source system to Azure Blob storage
- Data from Azure Blob storage to Target server on Azure
AzCopy – a command-line tool for copying data to and from Azure Blob, File, and Table storage with optimal performance. AzCopy supports concurrency and parallelism, and it can resume copy operations that were interrupted.
There are two options for copying data from the source to the target system via the AzCopy tool.
- OPTION – 1
Once the export phase is complete, copy the entire SUM folder to Azure Blob storage, and from Blob storage to the target server.
For example, it would take around 12 hours to copy 1.5 TB of data from source to target at 500 Mbps.
- OPTION – 2
While the export phase is running, data can be transferred intermittently to Azure Blob storage, and from Blob storage to the target server.
For example, if the EXPORT phase runs for 16 hours, the data can be copied every 3 hours to Azure Blob storage and on to the target server, since AzCopy supports parallelism and can resume interrupted copy operations.
In this scenario, data is copied to the target server in parallel while the EXPORT phase is still running, which optimizes the copy method and saves around 12 hours compared with OPTION 1 above.
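Both options come down to AzCopy v10 invocations. Below is a minimal sketch wrapped as shell functions; the storage account, container, SAS token, and SID are placeholders, not values from this migration:

```shell
# Placeholders: substitute your storage account, container, SAS token, and SUM path.
BLOB_URL='https://<account>.blob.core.windows.net/<container>?<SAS>'
SUM_DIR='/usr/sap/<SID>/SUM'

# OPTION 1: one full upload after the export has finished.
full_copy() {
  azcopy copy "$SUM_DIR" "$BLOB_URL" --recursive
}

# OPTION 2: incremental uploads while the export is still running.
# "azcopy sync" transfers only new or changed files, so it can be re-run
# safely (e.g. every 3 hours) and picks up where an interrupted run left off.
incremental_copy() {
  azcopy sync "$SUM_DIR" "$BLOB_URL" --recursive
}
```

The same two commands, with source and destination swapped, cover the second hop from Blob storage down to the target server on Azure.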
Once the data was available on the target server, we started SUM and proceeded with the IMPORT phase of DMO.
We can decide whether to reuse the current database tenant or recreate the existing tenant.
NOTE: Before proceeding further, it's good practice to perform disk defragmentation on the target server if the system is already in use.
IMPORT begins as shown below
The import is now complete, and we proceed with the post-processing part of the DMO procedure.
Once post-processing is complete, the DMO procedure comes to an end.
In a nutshell, we can still use serial mode for the DMO procedure and get an advantage similar to parallel mode by choosing OPTION 2, which minimizes or eliminates the data transfer time. This significantly brings down the downtime window of the entire migration.
Sources:
- Azure data transfer options for large datasets, moderate to high network bandwidth
- SAP Help Portal
- www.sap.com
______________________________________________________________________
I blog this article to share information that is intended as a general resource and personal insights. Errors or omissions are not intentional. Opinions are my own and not the views of my employers (past, present or future) or any organization that I may be affiliated with. Content from third party websites, Microsoft, SAP and other sources reproduced in accordance with Fair Use criticism, comment, news reporting, teaching, scholarship, and research.