Data Migration Toolkit for Dynamics 365 Experience Overview

Introduction

Previously, I wrote a post about a data upgrade in self-service environments. This time I would like to share my experience with the "Data Migration Toolkit for Dynamics 365". 

Parameters

Based on the standard documentation, to optimize the replication latency/performance, you can update the following distributor parameters in the App.config file:

  • MaxBcpThreads – By default, this parameter is set to 6. If the machine has fewer than six cores, update the value to the number of cores. The maximum value that you can set is 8.

  • NumberOfPublishers – By default, this parameter is set to 2. The common recommendation is to use this value. However, there can be situations where you may want to increase the number of publishers to distribute a smaller number of tables to each publisher. This, in conjunction with the manual snapshot start process, allows you to run smaller initial snapshots, which can be useful if you have limited maintenance windows and must split the startup of the replication over several of them.

  • SnapshotPostPublication – This option adds a 5-minute delay between the starts of the automatic snapshot processes, which can reduce the load on the source server. The toolkit also allows manual snapshot starts; if you choose that option, you don't need to set this parameter.

Let me share my experience with the parameters:

  • MaxBcpThreads – I adjusted the parameter according to the recommendations. 

  • NumberOfPublishers – During my data migration projects, if the Ax 2012 database was not huge (30-50 GB), I used the default NumberOfPublishers value. If the database is 80 GB or bigger, it's better to select "Yes" for the "Manual startup" option on Step 5 when configuring the replication, then run each publication one at a time and wait for it to push the snapshot before you start the next one. Also, for larger databases, it's better to set the number of publishers to 4 or 6 in the toolkit config file "DataMigrationTool.exe.config". The advantage is that if one of the publishers has an issue, you only have to reinitialize 25% or less of the data.

  • SnapshotPostPublication – I have not changed this parameter. Each time the process worked fine with the default value.

    One more piece of advice:

    Each time I need to perform the data migration in a Tier-2 or higher environment, I deploy a new environment in LCS from scratch. I have tried to perform the data migration with the same LCS self-service environment, but the process failed and I could not resume it. After that case, I prefer to deploy a new Tier-2 or higher LCS environment for data migration and not waste my time on unnecessary troubleshooting.

    And don't forget to clean up extra data in Ax 2012 with the standard cleanup operations. It can reduce the time required for the data upgrade, and you may avoid the issues mentioned below.

    Experienced issues

    There is a brilliant troubleshooting guide from Microsoft. It is really useful and contains reasonable advice. If you have any issues with the data upgrade, look at it first. It is constantly updated by Microsoft. Nevertheless, I have experienced issues that have not been described in the guide yet.

    Batch step failed with a time-out 

    The message was: The postsync-wait for batch step is in in-progress state for more than 240 minutes.

    I asked the Microsoft support team for advice, and the response was:

    In some scenarios it's expected behaviour for the sandbox environment to take a longer time to finish; hence, we request you to wait for the step to be completed using the DS command, then resume the operation.

    If you can get JIT access, you can try to run the following SQL query to check the queries currently running in the database; you may need to run the query a few times to check:

    SELECT
        SPID         = er.session_id,
        [Status]     = ses.status,
        [Login]      = ses.login_name,
        Host         = ses.host_name,
        BlkBy        = er.blocking_session_id,
        DBName       = DB_NAME(er.database_id),
        CommandType  = er.command,
        ObjectName   = OBJECT_NAME(st.objectid),
        CPUTime      = er.cpu_time,
        StartTime    = er.start_time,
        TimeElapsed  = CAST(GETDATE() - er.start_time AS TIME),
        SQLStatement = st.text
    FROM sys.dm_exec_requests er
        OUTER APPLY sys.dm_exec_sql_text(er.sql_handle) st
        LEFT JOIN sys.dm_exec_sessions ses
            ON ses.session_id = er.session_id
    WHERE st.text IS NOT NULL

    Once the final step is "Completed", you can then resume the operation from the toolkit. This will quickly cycle through the servicing again in LCS to bring the environment status to "Completed". That might take around 10-20 minutes, sometimes less.

    In my particular case, this solution worked.

    DBSync step failed with a time-out

    This happens due to the size of the tables and the precision change on numeric fields between AX 2012 and D365. SQL Server is exceptionally slow when doing an alter table on these data types. If a table in Ax 2012 is larger than 5 GB and this type change is applied to its fields in D365, you will probably face a synchronization issue.

    To get past this, run DS from the toolkit and then resume; it will get through, but it could time out on another table.

    If there are multiple huge tables in Ax 2012, I would recommend asking Microsoft for support. They have a solution for this called the Shadow Copy sync.

    INVENTSUMLOGTTS table size

    In some cases, the INVENTSUMLOGTTS table can grow to a huge size. In Ax 2012 the INVENTSUMLOGTTS table is used for two purposes:

    • As part of the Inventory Multi Transaction System (IMTS) in order to roll back unsuccessful transactions.

    • It allows Master scheduling to do an update based on changes to the on hand. After the update, the records are deleted by Master scheduling.  
    When a customer is registered for Master planning but does not use Master scheduling, this table will continue to grow and will never get cleared out. Running Master scheduling will automatically clear the contents of this table.  

    If you are registered for Master planning but are not running Master scheduling, you can manually purge the table through SQL with the following statement: 
    DELETE FROM INVENTSUMLOGTTS WHERE ISCOMMITTED = 1

    Troubleshooting a deployable package installation from the command line on a development (one-box) environment

    Overview

    The deployable package installation in a development environment is a common task when you need to install an ISV solution or another partner's solution. The installation process is well described in the standard guide. In theory, the process should go smoothly; in reality, you may face various errors during it.

    Time out issue

    Sometimes, the installation process can fail with the error:

    Generated runbook with id: Runbook20230907093027
    Start executing runbook : Runbook20230907093027
    Executing step: 1
    Stop script for service model: AOSService on machine: localhost
    Stop AOS service and Batch service

    Error during AOS stop: File locks found on K:\AosService\WebRoot\bin\AOSKernel.dll for more than 300 seconds. Check previous logs for identified locking processes. [Log: C:\Temp\ZZZ_10.02.15.12\RunbookWorkingFolder\Runbook20230907093027\localhost\AOSService\1\Log\AutoStopAOS.log]

    The step failed


    In this case, you need to stop IIS Express in your dev box and repeat the installation process.

    Don't forget to run database synchronization from Microsoft Visual Studio after you install the deployable package.

    Check-in issue

    When you try to check in the installed binary files in Visual Studio, you may face the issue below:

    **:\AosService\PackageLocalDirectory\ModelName\bin\file_name
    TFS30063: You are not authorized to access.

    You can try to do the following:

    • Reopen Visual Studio
    • Try to do a check-in again

    If the previous actions have not worked:

    • Reopen Visual Studio
    • Stop the Batch service and IIS Express on the dev box
    • Try to do a check-in again

    Additional guidance to database upgrade scripts for D365 Finance and Operations. Part 2. Custom inventory dimensions data upgrade.

    Introduction

    About a year ago I posted an article about inventory dimension data migration from Ax 2012 to D365. That post contains general ideas about the inventory dimension data migration approach. This time I'm going to share more detailed guidance. All my conclusions and ideas are based on my personal experience with custom dimension data migration projects. The intercompany functionality is not covered in this post.

    Before you go on reading this post, please read my other posts about the data upgrade process, since some terms and definitions I use are explained there.

    Data requirements

    The "InventDim" table is different in Ax 2012 and D365. In Ax 2012 you could add the custom inventory dimension as new fields to the "InventDim" table and change some InventDim* macros. In D365 you should map your custom dimension to the "InventDimension1" – "InventDimension10" based on this guide.  

    In fact, during the data migration, you need to move the values from the Ax 2012 "InventDim" table fields to the D365 "InventDimension1" – "InventDimension10" fields of the "InventDim" table before the data upgrade logic is triggered. It should be done during the "PreSync" stage, when the database contains the table structure of both the Ax 2012 and D365 versions.

    Also, the structure of the "WHSInventReserveTable" has been changed in D365 and those changes should be taken into consideration.

    Functional requirements

    During the data upgrade execution, the system should be configured properly from an inventory management perspective. This means that:

    1. The required configuration keys ("InventDimension1" – "InventDimension10") should be activated.
    2. All reservation hierarchies should be set correctly.
    3. All item dimension groups (Tracking, Storage, Product) should have proper setups.

    If one of the mentioned requirements is not met during the data upgrade at the "PostSync" step, the custom inventory dimension migration will fail from a data consistency perspective.

    Data upgrade

    The "PreSync" stage

    In order to meet the data requirements, we need to develop a script like this at the "PreSync" step:

    [
        UpgradeScriptDescription("Script description"),
        UpgradeScriptStage(ReleaseUpdateScriptStage::PreSync),
        UpgradeScriptType(ReleaseUpdateScriptType::StandardScript),
        UpgradeScriptTable(tableStr(InventDim), false, true, true, false)
    ]
    public void updateInventDimensionField()
    {
        FieldName     field2012Name = 'YOURDIMENSIONFIELD';
        FieldName     field365Name  = 'INVENTDIMENSION1';
        SysDictTable  inventDimDT   = SysDictTable::newTableId(tableNum(InventDim));
        TableName     inventDimName = inventDimDT.name(DbBackend::Sql);

        str sqlStatement = strFmt(@"
            UPDATE [dbo].[%1]
            SET [dbo].[%1].[%2] = [dbo].[%1].[%3]
            WHERE [dbo].[%1].[%3] <> ''",
            inventDimName,
            field365Name,
            field2012Name);

        ReleaseUpdateDB::statementExeUpdate(sqlStatement);
    }

    It’s one of the possible options. The code is provided "as is" without any warranty. You can develop your own script in another way. It’s up to you. The key point is that the "InventDim" field values are to be moved at the "PreSync" stage properly.

    The "PostSync" stage

    At the "PostSync" stage when the database has a D365 table structure, the system performs the key steps from an inventory dimension data upgrade perspective.

    There is a class "ReleaseUpdateDB73_WHS". It has the methods of the "WHSInventReserveTable" data upgrade. The key methods here are "populateParentInventDimIdOfWhsInventReserveMinor" and "populateParentInventDimIdOfWhsInventReserveMajor". Those methods call the "WHSInventReservePopulateParentInventDimId::populateForAllItems();" method. The method "populateForAllItems" populates the "WHSInventReserve" table based on the system setups that I mentioned in the functional requirements paragraph.

    So, we need to develop an extension of this method and place the configuration key activation code and the updates of the reservation hierarchies and dimension groups before the "next" call. This is how we can meet the functional requirements before the data upgrade is triggered. Add the extension to this exact method, since the data upgrade scripts can execute independently, as I mentioned here.

    The code changes can be like this:

    /// <summary>
    /// <c>WHSInventReservePopulateParentInventDimIdAXPUPG_Extension</c> class extension of <c>WHSInventReservePopulateParentInventDimId</c> class
    /// </summary>
    [ExtensionOf(classStr(WHSInventReservePopulateParentInventDimId))]
    final class WHSInventReservePopulateParentInventDimIdAXPUPG_Extension
    {
        /// <summary>
        /// Populates the <c>ParentInventDimId</c> field for all items.
        /// </summary>
        public static void populateForAllItems()
        {
            FieldId     fieldId2012; // you have to set this to the field id of your Ax 2012 custom dimension
            FieldId     fieldIdD365;

            ConfigurationKeySet keySet = new ConfigurationKeySet();
            SysGlobalCache      cache = appl.globalCache();
            boolean             isConfigChanged;
            boolean             isCalledDuringDataUpgrade; // placeholder: set this from your own data-upgrade detection logic
     
            void updateWHSReservationHierarchyElement()
            {
                WHSReservationHierarchyElement  hierarchyElement;

                hierarchyElement.skipDatabaseLog(true);

                ttsbegin;

                update_recordset hierarchyElement
                setting DimensionFieldId = fieldIdD365
                    where hierarchyElement.DimensionFieldId == fieldId2012;

                ttscommit;
            }
     
            void updateEcoResTrackingDimensionGroupFldSetup()
            {
                EcoResTrackingDimensionGroupFldSetup     dimensionGroupFldSetup;
     
                dimensionGroupFldSetup.skipDatabaseLog(true);
     
                ttsbegin;
     
                update_recordset dimensionGroupFldSetup
                setting DimensionFieldId = fieldIdD365
                    where dimensionGroupFldSetup.DimensionFieldId == fieldId2012;
     
                ttscommit;
            }
     
            // Don't forget to add a check that the extension is called during the data
            // upgrade only; isCalledDuringDataUpgrade above is a placeholder for that check.
            if (isCalledDuringDataUpgrade)
            {
                keySet.loadSystemSetup();
                if (!isConfigurationkeyEnabled(configurationKeyNum(InventDimension1)))
                {
                    keySet.enabled(configurationKeyNum(InventDimension1), true);
                    isConfigChanged  = true;
                }
     
                if (isConfigChanged)
                {
                    SysDictConfigurationKey::save(keySet.pack());
                    SysSecurity::reload(true, true, true, false, true);
                }
     
                fieldIdD365 = fieldNum(InventDim, InventDimension1);
           
                updateWHSReservationHierarchyElement();
                updateEcoResTrackingDimensionGroupFldSetup();
            }
     
            next populateForAllItems();
        }
    }

    It’s one of the possible options. The code is provided "as is" without any warranty. You can develop your own script in another way. It’s up to you. The key point is to activate the required inventory setups and configuration keys before the system starts the upgrade of the "WHSInventReserve" table.

    Final steps

    When the data upgrade is completed, the "InventDimension1" - "InventDimension10" configuration keys could be disabled. You should check their status on the "License configuration" form (System administration/Setup/License configuration) under the configuration key "Trade". Enable the required keys manually if needed, and align the SQL warehouse procedures with the command:

    https://YOUR_ENVIRONMENT_URL/?mi=SysClassRunner&cls=Tutorial_WHSSetup

    Then you can open the "On-Hand" form, add your custom dimensions to the displayed dimensions, and verify the outcomes. It also makes sense to do some functional tests; you can pick items via inventory journals, for instance.


    Additional guidance to database upgrade scripts for D365 Finance and Operations. Part 1. General recommendations.

    Introduction

    As you may know, you can develop your own data upgrade scripts. There are several posts related to this topic on my blog, and you can also find documentation on it. This time, I would like to share my notes and experience with the technical aspects of developing and executing such scripts. Some of the points are not described on the Microsoft Docs website.

    Data upgrade methods name convention

    All methods with data upgrade attributes must have unique names in the system. If you create two methods with the same name in the same module in different AXPUPGReleaseUpdate* classes, the compiler will not show any errors, but you will get an error during the data upgrade execution:

    Failed operation step '/DataUpgrade/PreSync/ExecuteScripts/ScheduleScripts' 
    Cannot create a record in Release update scripts (ReleaseUpdateScripts).
    Class ID: YOUR CLASS ID, METHOD NAME.
    The record already exists.
       at Microsoft.Dynamics.Ax.MSIL.Interop.throwException(Int32 ExceptionValue, interpret* ip)
       at Microsoft.Dynamics.Ax.MSIL.cqlCursorIL.insert(IntPtr table)
       at Microsoft.Dynamics.Ax.Xpp.NativeCommonImplementation.Insert()

    Also, if your methods have unique names, it's easier to find them in the logs and in traces.
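
    As a hedged illustration (the attribute values, table, and method name below are hypothetical), a naming pattern that helps is to include the target table and your module in the method name, which keeps it unique across all AXPUPGReleaseUpdate* classes and easy to spot in the logs:

    [
        UpgradeScriptDescription("Copies MyIsvModule custom field values on InventTrans"),
        UpgradeScriptStage(ReleaseUpdateScriptStage::PreSync),
        UpgradeScriptType(ReleaseUpdateScriptType::StandardScript),
        UpgradeScriptTable(tableStr(InventTrans), false, true, true, false)
    ]
    public void updateInventTransMyIsvModuleFields()
    {
        // Upgrade logic goes here; the unique, descriptive method name is what matters.
    }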

    Data upgrade script execution isolation

    Technically, each data upgrade method is executed as a separate batch task. This means that data upgrade scripts run separately and "don't know" about each other. If you have not specified dependencies between data upgrade scripts, you cannot be sure about their execution sequence. It is good to keep this in mind if you are going to develop complex data upgrade scenarios.

    Using configuration keys

    When the data upgrade is triggered, the configuration keys in D365 Finance and Operations are activated based on the "Enabled by default" property. If a configuration key is not enabled by default but you need the business logic covered by this key, you can enable the required key via code:

    ConfigurationKeySet keySet = new ConfigurationKeySet();
    SysGlobalCache      cache = appl.globalCache();
     
    keySet.loadSystemSetup();
    keySet.enabled(configurationKeyNum(RetailCDXBackwardCompatibility), true);
     
    SysDictConfigurationKey::save(keySet.pack());
     
    // Call SysSecurity::reload with the following parameters:
    // _configurationChanged: true,
    // _allowSynchronize: false,
    // flushtable: true,
    // _promptSynchronize: false,
    // _syncRoleLicensesOnConfigurationChange: false

    SysSecurity::reload(true, false, true, false, false);

    Given the previous point about data upgrade script isolation, to be on the safe side you need to enable the required configuration key in each data upgrade method that works with tables and fields under a configuration key that is disabled by default (see the helper sketch below).

    I have had several cases where I needed to develop data upgrade scripts for functionality whose configuration keys are disabled by default. When I enabled the configuration key in one method but did not enable the same key in another method, I got wrong results from a data consistency perspective.
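
    To avoid repeating this boilerplate, you can wrap it in a small helper and call it at the beginning of every such upgrade method. The sketch below only reuses the calls shown above; the class and method names are my own and not a standard API.

    // Hypothetical helper for data upgrade scripts.
    final class AXPUPGConfigurationKeyHelper
    {
        // Enables the given configuration key if it is not enabled yet.
        public static void ensureConfigurationKeyEnabled(ConfigurationKeyId _configKeyId)
        {
            ConfigurationKeySet keySet = new ConfigurationKeySet();

            if (!isConfigurationkeyEnabled(_configKeyId))
            {
                keySet.loadSystemSetup();
                keySet.enabled(_configKeyId, true);

                SysDictConfigurationKey::save(keySet.pack());
                SysSecurity::reload(true, false, true, false, false);
            }
        }
    }

    Each upgrade method that depends on such a key would then start with a call like AXPUPGConfigurationKeyHelper::ensureConfigurationKeyEnabled(configurationKeyNum(RetailCDXBackwardCompatibility));.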

    General performance guidelines

    Below you can find some points copied from the guide for the Ax 2012 version and adjusted to D365. In fact, most of these recommendations still apply to D365. Since performance is a critical part of the upgrade process, I believe it's a good idea to highlight these points once again.

    In fact, most companies will perform this task over a weekend, so it must be possible to complete the entire upgrade process within 48 hours.

    When you develop a new script, please try to apply the following recommendations to your upgrade script:

    • Use record set functions whenever possible. If the script performs inserts, updates, or deletes within a loop, consider changing the logic to use one of the set-based statements so that a single set-based operation does the work (see the sketch after this list).

      • If your script runs delete_from or update_from on a large table where the delete() or update() methods of the target table have been overridden, the bulk database operation will fall back to record-by-record processing. To prevent this, call the skipDataMethods(true) method so that the update() and delete() methods are skipped. You can also call the skipDatabaseLog(true) method to improve performance.

      • If the business scenario cannot be written as insert_recordset, consider using the RecordInsertList class to batch multiple inserts to reduce network calls. This operation is not as fast as insert_recordset, but is faster than individual inserts in a loop.

    • Break down your scripts into smaller pieces. For example, do not upgrade two independent tables in the same script even if there is a pattern in how the scripts work. This is because:

      • Each script, by default, runs in its own single transaction (= one rollback segment). If the segment becomes too large, the database server will start swapping memory to disk, and the script will slow to a halt.

      • Each script can be executed in parallel with other scripts, as mentioned above.

    • Take care when you sequence the scripts. For example, do not update data first and then delete it afterward.

    • Be careful when calling normal business logic in your script. Normal business logic is not usually optimized for upgrade performance. For example, the same parameter record may be fetched for each record you need to upgrade. The parameter record is cached, but just calling the Find method takes an unacceptable amount of time. For example, the kernel overhead for each function call is about 5 ms. Usually, 10-15 ms will elapse before the Find method returns (when the record is cached). If there are a million rows, two hours will be spent getting the information you already have. The solution is to cache whatever is possible in local variables.

    • If there is no business logic in the script, rewrite the script to issue a direct query to bulk update the data.
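
    To make the first two points more concrete, below is a rough sketch of a row-by-row update rewritten as a single set-based statement with the data methods and the database log skipped. The table and field names (MyTable, StatusField) are purely illustrative and not part of any standard API.

    public void updateMyTableStatus()
    {
        MyTable myTable; // illustrative table that has an overridden update() method

        // Slow pattern (avoid): one round trip and one update() call per record.
        //
        // while select forupdate myTable
        //     where myTable.StatusField == 0
        // {
        //     myTable.StatusField = 1;
        //     myTable.update();
        // }

        // Fast pattern: a single set-based statement for the whole table.
        myTable.skipDataMethods(true);  // prevent the fallback to record-by-record processing
        myTable.skipDatabaseLog(true);  // skip database log entries for this bulk update

        ttsbegin;

        update_recordset myTable
        setting StatusField = 1
            where myTable.StatusField == 0;

        ttscommit;
    }

    When a single set-based statement is not possible, the RecordInsertList class can at least batch the inserts instead of sending them to the database one by one.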

    New requirement for developer cloud-hosted (OneBox) environments running version 10.0.36 or later

    If you are going to upgrade your developer cloud-hosted (OneBox) environments to version 10.0.36 or later, keep in mind that additional components must be installed manually in advance. The required components are listed here.

    Otherwise, you will see the following error message during the upgrade process:

    Error during AOS stop: Please upgrade to the latest Visual C++ redistributable package to continue with the installation. For more details, visit aka.ms/PreReqChanges [Log: C:\Temp\PU20\AXPlatformUpdate\RunbookWorkingFolder\AZH81-runbook\localhost\RetailHQConfiguration\2\Log\AutoStopAOS.log]

    The step failed


    P.S. All new required components are automatically installed in all newly deployed cloud-hosted environments.

    Dynamics 365 Supply Chain Management new update policy

    Currently, the vendor releases seven updates for Dynamics 365 Finance & Supply Chain Management per year, and only two or three of them are major ones.

    Starting in 2024, there will be significant changes to the release pattern and cadence. The changes go into effect with updates to some of the release milestones for 10.0.38.

    At the moment, based on the official documentation, the key points of the new approach are:

    • The vendor will release four updates in December, March, June, and September.
    • The major updates will be released in March and September.
    • Starting February 19, 2024, the maximum number of consecutive pauses of updates allowed will be reduced from three to one.
    • With the release durations extended, the same minimum of two annual service updates is maintained.

    The following table illustrates the allowed pauses by month, based on your installed version, until the transition is completed.


    If you have any other questions, please see the One Version service updates FAQ to learn how these changes affect the release process.


    Deploy development environment for Dynamics 365 Finance and Operations

    When you deploy a new development environment, there are two options:

    • A cloud development environment in your Lifecycle Services (LCS) project
    • A VM that is running locally

    All the necessary documentation is available in the Microsoft resources. I would like to say a few words about my experience with local VM deployment.

    1. You must be an administrator on the instance for developer access. To provision your own credentials as an administrator on a local VM, run the "Admin user provisioning tool"; you can find a link to it on the local VM desktop. The tool should be run as an administrator (right-click the icon and then click Run as administrator).

    Note: If you see the Admin Provisioning Tool error "The value’s length for key ‘password’ exceeds it’s limit of ‘128’", it most probably means that you are using a VM with a virtual hard drive (VHD) that was released for versions 10.0.24 and later. In this case, you should follow the guidelines to resolve the issue.

    Another possible reason is that your email is related to an inactive Azure Active Directory tenant, or you just made a typo.

    2. If more than one developer uses local VMs and they are going to be linked to the same DevOps project, I would recommend using unique environment names so that the developer workspaces have unique names.

    For this purpose, you need to go to the Control Panel and rename the VM:


    The system will ask for a restart. I would postpone this action until the SQL Server instance is renamed.

    3. The next step is to rename the SQL Server instance. 

    Note: My advice is to give the SQL Server instance the same name as the VM.

    You need to run SQL Server Management Studio as an administrator (right-click the icon and then click Run as administrator). Then run the following query:

    --Run this with the updated names
    sp_dropserver 'MININT-F36S5EH' --Old name
    GO
    sp_addserver 'New VM Name', 'local' --New name
    GO

    After that, restart the VM. When it is running again, you should be able to connect to the SQL Server instance with the new name via SQL Server Management Studio. Then you can run Visual Studio to establish the connection with your DevOps project and configure your workspace.

    Run a runnable class (a job in terms of Ax 2009/2012) in Dynamics 365 Finance and Operations

    Overview

    In the previous versions of the system (AX 2009, AX 2012), you could create a new job in the AOT:

    And run it later via the F5 button from the AOT:

    In Dynamics 365 Finance and Operations there is another way, and the option depends on the environment type.

    Tier-1 environment

    In a Tier-1 environment, you have Visual Studio installed, so you can create a runnable class (a job in terms of Ax 2009/2012) with a “main” method. Then you can run it via the link:

    https://<D365URL>/?cmp=<YourCompanyName>&mi=SysClassRunner&cls=<YourRunnableClassName>
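
    For reference, a minimal runnable class might look like the sketch below (the class name and the message are hypothetical):

    internal final class MyDataFixRunnableClass
    {
        public static void main(Args _args)
        {
            // Put the one-off logic here; keep it idempotent in case the class is run twice.
            info("MyDataFixRunnableClass has been executed.");
        }
    }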

    Tier-2 and higher environments

    If your runnable class (a job in terms of Ax 2009/2012) was included in a binary package and the package is installed in a Tier-2 environment, you can follow the same approach and run it via the link:

    https://<D365URL>/?cmp=<YourCompanyName>&mi=SysClassRunner&cls=<YourRunnableClassName>

    If you need to run a runnable class (a job in terms of Ax 2009/2012) that was not installed in the environment, you can use the "X++ scripts with zero downtime" feature. This feature allows you to upload and run deployable packages that contain custom X++ scripts without having to go through Microsoft Dynamics Lifecycle Services (LCS) or suspend your system. Therefore, you can correct minor data inconsistencies without causing any disruptive downtime.

    Of course, the feature requires a regular deployable package that can be created in Visual Studio. The deployable package must contain only one runnable X++ class. In other words, it must have one class that includes a “main” method. Then you need to upload and run the deployable package in the environment as described in the documentation.

    In fact, you can create a package on a development machine (Tier-1) and add one class to this package, as mentioned in the requirements. It is not necessary to check the code in to your Azure DevOps. You can create a deployable package in your development environment and use it with the feature.

    From a technical perspective, the "Run custom X++ scripts with zero downtime" feature works as follows:

    The system uses the Assembly.LoadFrom API. This means the package is never deployed or installed in the traditional way. Once the execution is completed, there is no way to access this code again, and when the AOS eventually restarts, the assembly disappears from memory too. No other AOS instances will know about it, and no other users are affected. Since it is loaded temporarily for the shortest duration possible, no action is needed from an ALM or uninstallation perspective.

    If you upload a package/model with the same name but with a new runnable class (a job in terms of Ax 2009/2012) inside, the system can show the following message:


    If you ignore this message and run the new script, the previous runnable class will be executed.

    It means that you should give unique names to your X++ script binary packages; otherwise, you might get unexpected results.

    If you would like to test the feature in a Tier-1 environment, you can enable the "AppConsistencyCustomScriptFlight" flag within the "SYSFLIGHTING" table.

    Upgrade from AX 2012 to D365 Finance and Operations. Data upgrade in self-service environments.

    Overview

    When a successful upgrade test has been completed in a Tier-1 environment with customer data and the developed data upgrade scripts, you can start the data migration process on a Tier-2 machine.

    It has been almost two years since the old process via "bacpac" files stopped being available. The only way to upgrade data now is to use the "Data Migration Toolkit for Dynamics 365", which uses the SQL replication process.

    Note: The old name of the "Data Migration Toolkit for Dynamics 365" is "AX2012 Database Upgrade Toolkit for Dynamics365 Version".

    The data upgrade process is described in the Microsoft standard documentation. There is also a TechTalk session about the migration process. Below, I would like to share my personal experience with the data upgrade in self-service Tier-2 environments.

    Key points before you start

    • You should have free disk space on the Ax 2012 database SQL Server for the distribution and snapshot folders. The free space should be about two times the size of the Ax 2012 business database.

    • "Data Migration Toolkit for Dynamics 365" uses native SQL logins only. I would recommend to create a new SQL server login for this purpose.

    • The new SQL login should have the db_owner privilege in the source Ax 2012 database and access to the master database in the source SQL Server instance.

    • Make sure that the replication feature is installed and enabled in the source Ax 2012 SQL Server instance. If the replication components aren't installed, follow the steps in Install SQL Server replication to install them.

    • Don’t forget to enable and start SQL Server Agent on the source Ax 2012 database server.

    • You have to know the external IP address of the SQL Server machine. You can use this website to help you; take the IPv4 address field value.

    • It is required to enable support for TLS 1.2 on your Ax 2012 SQL Server machine for Azure AD. You can find information on how to do it at this link.

    If you don't enable TLS 1.2 support in advance, the LCS authentication window might not work properly or might not appear at all, and then you will see errors in the logs:

    2023-06-25 06:47:27.202 -04:00 [Information] User Login started.
    2023-06-25 06:49:38.538 -04:00 [Information] User Login failed.
    2023-06-25 06:49:38.541 -04:00 [Error] AADSTS1002016: You are using TLS version 1.0, 1.1 and/or 3DES cipher which are deprecated to improve the security posture of Azure AD. 
    Your TenantID is: g9370196-3d9a-9d85-a5e9-3604ec7ffbdd. 
    Please refer to https://go.microsoft.com/fwlink/?linkid=2161187 and conduct needed actions to remediate the issue. For further questions, please contact your administrator.
    Trace ID: g02f1232-caf4-479c-8c0e-0442059e5e01
    Correlation ID: 52f25db8-e61a-4023-9e2a-c487f87e6c5d
    Timestamp: 2023-06-25 10:49:37Z 2023-06-25 06:49:38.545 -04:00 
    [Error] User login failed / not authorized.

    • You need to make sure that TLS 1.2 is enabled by default and that the previous versions are disabled:


    Tips to improve the process performance

    • Since the "Data Migration Toolkit for Dynamics 365" is based on the SQL Server replication feature it moves data as it is from the source database. If you are short on time for data migration and would like to reduce the time span it is a good idea to perform cleanup operations in the source database.

    • It is recommended to start the replication during off-peak hours, when system resource usage is at its minimum.

    • Also, you can use this article to improve replication performance on the Ax 2012 SQL server.

