Additional guidance to database upgrade scripts for D365 Finance and Operations. Part 2. Custom inventory dimensions data upgrade.

Introduction

About a year ago I posted an article about inventory dimension data migration from Ax 2012 to D365. That post outlined general ideas about the inventory dimension data migration approach. This time I'm going to share more consistent guidance. All my conclusions and ideas are based on my personal experience with custom dimension data migration projects. Intercompany functionality is not covered in this post.

Before you read this post, please read my other posts about the data upgrade process, since some terms and definitions I use are explained there.

Data requirements

The "InventDim" table differs between Ax 2012 and D365. In Ax 2012 you could add custom inventory dimensions as new fields to the "InventDim" table and change some InventDim* macros. In D365 you should map your custom dimensions to the "InventDimension1" – "InventDimension10" fields based on this guide.

In fact, during the data migration you need to move values from the Ax 2012 "InventDim" table fields to the D365 "InventDimension1" – "InventDimension10" fields of the "InventDim" table before the data upgrade logic is triggered. It should be done during the "PreSync" stage, when the database still contains the table structure of both the Ax 2012 and D365 versions.

Also, the structure of the "WHSInventReserveTable" has been changed in D365, and those changes should be taken into consideration.

Functional requirements

During the data upgrade execution, the system should be configured properly from an inventory management perspective. This means that:

  1. The required configuration keys ("InventDimension1" – "InventDimension10") should be activated.
  2. All reservation hierarchies should be set correctly.
  3. All item dimension groups (Tracking, Storage, Product) should have proper setups.

If any of these requirements is not met during the data upgrade at the "PostSync" step, the custom inventory dimension migration will fail from a data consistency perspective.

Data upgrade

The "PreSync" stage

In order to meet the data requirements, we need to develop a script like this at the "PreSync" step:

[
    UpgradeScriptDescription("Script description"),
    UpgradeScriptStage(ReleaseUpdateScriptStage::PreSync),
    UpgradeScriptType(ReleaseUpdateScriptType::StandardScript),
    UpgradeScriptTable(tableStr(InventDim), false, true, true, false)
]
public void updateInventDimensionField()
{
    FieldName     field2012Name  = 'YOURDIMENSIONFIELD';
    FieldName     field365Name   = 'INVENTDIMENSION1';
    SysDictTable  inventDimDT    = SysDictTable::newTableId(tableNum(InventDim));
    TableName     inventDimName  = inventDimDT.name(DbBackend::Sql);

    str sqlStatement = strFmt(@"
                UPDATE [dbo].[%1]
                SET [%2] = [%3]
                WHERE [%3] <> ''",
                inventDimName,
                field365Name,
                field2012Name);

    ReleaseUpdateDB::statementExeUpdate(sqlStatement);
}

It’s one of the possible options. The code is provided "as is" without any warranty. You can develop your own script in another way. It’s up to you. The key point is that the "InventDim" field values are to be moved at the "PreSync" stage properly.
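To double-check the move before continuing, a simple count query can help. The sketch below is only an illustration (the method name is hypothetical, and 'YOURDIMENSIONFIELD' / 'INVENTDIMENSION1' are the same placeholders as in the script above); it reuses the same direct-SQL approach:

```xpp
// Hypothetical validation sketch: counts InventDim rows where the Ax 2012
// custom field still holds a value but the D365 target field is empty.
// Replace the placeholder field names with your own.
public void validateInventDimensionMove()
{
    SysDictTable inventDimDT   = SysDictTable::newTableId(tableNum(InventDim));
    TableName    inventDimName = inventDimDT.name(DbBackend::Sql);

    str sqlStatement = strFmt(@"
            SELECT COUNT(*)
            FROM [dbo].[%1]
            WHERE [%2] <> '' AND [%3] = ''",
            inventDimName,
            'YOURDIMENSIONFIELD',
            'INVENTDIMENSION1');

    // Execute the statement with your preferred direct-SQL helper and
    // fail the upgrade step if the count is not zero.
}
```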

The "PostSync" stage

At the "PostSync" stage, when the database already has the D365 table structure, the system performs the key steps of the inventory dimension data upgrade.

There is a class "ReleaseUpdateDB73_WHS" that contains the data upgrade methods for the "WHSInventReserveTable". The key methods here are "populateParentInventDimIdOfWhsInventReserveMinor" and "populateParentInventDimIdOfWhsInventReserveMajor". Both call the "WHSInventReservePopulateParentInventDimId::populateForAllItems" method, which populates the "WHSInventReserve" table based on the system setups mentioned in the functional requirements paragraph.

So, we need to develop an extension of this method and place the configuration key activation code and the updates of reservation hierarchies and dimension groups before the "next" call. This is how we can meet the functional requirements before the data upgrade is triggered. Add the extension to this exact method, since the data upgrade scripts can execute independently, as I mentioned here.

The code changes can be like this:

/// <summary>
/// <c>WHSInventReservePopulateParentInventDimIdAXPUPG_Extension</c> class extension of <c>WHSInventReservePopulateParentInventDimId</c> class
/// </summary>
[ExtensionOf(classStr(WHSInventReservePopulateParentInventDimId))]
final class WHSInventReservePopulateParentInventDimIdAXPUPG_Extension
{
    /// <summary>
    /// Populates the <c>ParentInventDimId</c> field for all items.
    /// </summary>
    public static void populateForAllItems()
    {
        FieldId     fieldId2012; //you have to set your field identifier
        FieldId     fieldIdD365;

        ConfigurationKeySet keySet = new ConfigurationKeySet();
        SysGlobalCache      cache = appl.globalCache();
        boolean             isConfigChanged;
 
        void updateWHSReservationHierarchyElement()
        {
            WHSReservationHierarchyElement  hierarchyElement;

            hierarchyElement.skipDatabaseLog(true);

            ttsbegin;

            update_recordset hierarchyElement
            setting DimensionFieldId = fieldIdD365
                where hierarchyElement.DimensionFieldId == fieldId2012;

            ttscommit;
        }
 
        void updateEcoResTrackingDimensionGroupFldSetup()
        {
            EcoResTrackingDimensionGroupFldSetup     dimensionGroupFldSetup;
 
            dimensionGroupFldSetup.skipDatabaseLog(true);
 
            ttsbegin;
 
            update_recordset dimensionGroupFldSetup
            setting DimensionFieldId = fieldIdD365
                where dimensionGroupFldSetup.DimensionFieldId == fieldId2012;
 
            ttscommit;
        }
 
        if (true) // TODO: add a check that the extension is called during the data upgrade
        {
            keySet.loadSystemSetup();
            if (!isConfigurationkeyEnabled(configurationKeyNum(InventDimension1)))
            {
                keySet.enabled(configurationKeyNum(InventDimension1), true);
                isConfigChanged  = true;
            }
 
            if (isConfigChanged)
            {
                SysDictConfigurationKey::save(keySet.pack());
                SysSecurity::reload(true, true, true, false, true);
            }
 
            fieldIdD365 = fieldNum(InventDim, InventDimension1);
       
            updateWHSReservationHierarchyElement();
            updateEcoResTrackingDimensionGroupFldSetup();
        }
 
        next populateForAllItems();
    }
}

It’s one of the possible options. The code is provided "as is" without any warranty. You can develop your own script in another way. It’s up to you. The key point is to activate the required inventory setups and configuration keys before the system starts the upgrade of the "WHSInventReserve" table.

Final steps

When the data upgrade is completed, the "InventDimension1" – "InventDimension10" configuration keys may end up disabled. You should check their status on the "License configuration" form (System administration/Setup/License configuration) under the "Trade" configuration key. Enable the required keys manually if needed, and align the SQL warehouse procedures with the command:

https://YOUR_ENVIRONMENT_URL/?mi=SysClassRunner&cls=Tutorial_WHSSetup

Then you can open the "On-hand" form, add your custom dimensions to the display, and verify the outcomes. It also makes sense to do some functional tests; for instance, you can pick items via inventory journals.
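Beyond the form, the outcome can also be spot-checked in code. The following runnable-class sketch is only an illustration (the class name is made up, and it assumes the custom dimension was mapped to "InventDimension1"); it aggregates physical on-hand per custom dimension value:

```xpp
// Illustrative sketch: sums physical on-hand per custom dimension value.
// Assumes the custom dimension was mapped to InventDimension1.
internal final class AXPUPGVerifyOnHand
{
    public static void main(Args _args)
    {
        InventSum inventSum;
        InventDim inventDim;

        while select sum(PhysicalInvent) from inventSum
            group by ItemId
            where inventSum.PhysicalInvent != 0
        join InventDimension1 from inventDim
            group by InventDimension1
            where inventDim.InventDimId == inventSum.InventDimId
        {
            info(strFmt("%1 / %2: %3",
                inventSum.ItemId,
                inventDim.InventDimension1,
                inventSum.PhysicalInvent));
        }
    }
}
```

If the totals per dimension value match the pre-upgrade Ax 2012 figures, the migration of that dimension is most likely consistent.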


Additional guidance to database upgrade scripts for D365 Finance and Operations. Part 1. General recommendations.

Introduction

As you may know, you can develop your own data upgrade scripts. There are several posts related to this topic in my blog, and you can also find documentation on it. This time, I would like to share my notes and experience with the technical aspects of developing and executing scripts. Some of the points are not described on the Microsoft Docs website.

Data upgrade methods name convention

All methods with data upgrade attributes must have unique names in the system. If you create two methods with the same name in the same module in different AXPUPGReleaseUpdate* classes, the compiler will not show errors, but you will get an error during the data upgrade execution:

Failed operation step '/DataUpgrade/PreSync/ExecuteScripts/ScheduleScripts' 
Cannot create a record in Release update scripts (ReleaseUpdateScripts).
Class ID: YOUR CLASS ID, METHOD NAME.
The record already exists.
   at Microsoft.Dynamics.Ax.MSIL.Interop.throwException(Int32 ExceptionValue, interpret* ip)
   at Microsoft.Dynamics.Ax.MSIL.cqlCursorIL.insert(IntPtr table)
   at Microsoft.Dynamics.Ax.Xpp.NativeCommonImplementation.Insert()

Also, if your methods have unique names, it's easier to find them in the logs and traces.
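As a simple convention, embedding the table name and stage in the method name keeps names unique system-wide and searchable in logs. A sketch using the same attribute pattern as above (the table and method names are made up):

```xpp
// Hypothetical example: the method name embeds the table ("MyTableA") and
// the stage ("PreSync"), so it cannot collide with scripts in other
// AXPUPGReleaseUpdate* classes and is easy to find in the upgrade logs.
[UpgradeScriptDescription("Move custom field values on MyTableA"),
 UpgradeScriptStage(ReleaseUpdateScriptStage::PreSync),
 UpgradeScriptType(ReleaseUpdateScriptType::StandardScript),
 UpgradeScriptTable(tableStr(MyTableA), false, true, true, false)]
public void updateMyTableAPreSync()
{
    // ...
}
```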

Data upgrade script execution isolation

Technically, each data upgrade method is considered a separate batch task. This means that data upgrade scripts are executed separately and "don't know" about each other. If you don't specify dependencies between data upgrade scripts, you cannot be sure about their execution sequence. It is good to keep this in mind if you are going to develop complex data upgrade scenarios.

Using configuration keys

When the data upgrade is triggered, the configuration keys in D365 Finance and Operations are activated based on the "Enabled by default" property. If a configuration key is not enabled by default, but you need the business logic covered by this key, it is possible to enable the required key via code:

ConfigurationKeySet keySet = new ConfigurationKeySet();
SysGlobalCache      cache = appl.globalCache();
 
keySet.loadSystemSetup();
keySet.enabled(configurationKeyNum(RetailCDXBackwardCompatibility), true);
 
SysDictConfigurationKey::save(keySet.pack());
 
// Call SysSecurity::reload with the following parameters:
// _configurationChanged: true,
// _allowSynchronize: false,
// flushtable: true,
// _promptSynchronize: false,
// _syncRoleLicensesOnConfigurationChange: false

SysSecurity::reload(true, false, true, false, false);

Given the previous point about data upgrade script isolation, to be on the safe side you need to enable the required configuration key in each data upgrade method that touches tables or fields under a configuration key that is disabled by default.

I had several cases where I needed to develop data upgrade scripts for functionality whose configuration keys are disabled by default. When I enabled the configuration key in one method but not in another, I got wrong results from a data consistency perspective.
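One way to avoid missing a key in some method is a small helper that every upgrade method calls first. The sketch below is an assumption (the helper name is made up); it reuses the same calls as the snippet above:

```xpp
// Hypothetical helper: enables a configuration key if it is not active yet.
// Call it at the start of every upgrade method that touches tables or
// fields under a key that is disabled by default.
private void ensureConfigurationKeyEnabled(ConfigurationKeyId _configKeyId)
{
    ConfigurationKeySet keySet = new ConfigurationKeySet();

    if (!isConfigurationkeyEnabled(_configKeyId))
    {
        keySet.loadSystemSetup();
        keySet.enabled(_configKeyId, true);

        SysDictConfigurationKey::save(keySet.pack());
        SysSecurity::reload(true, false, true, false, false);
    }
}
```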

General performance guidelines

Below you can find some points copied from the guide for the Ax 2012 version and adjusted to D365. Most of these recommendations still apply. Since performance is a critical part of the upgrade process, I believe it's a good idea to highlight these points once again.

In fact, most companies will perform this task over a weekend, so the entire upgrade process must complete within 48 hours.

When you develop a new script, consider applying the following recommendations:

  • Use record set functions whenever possible. If the script performs inserts, updates, or deletes within a loop, consider changing the logic to use one of the set-based statements so the work is done in a single set-based operation.

    • If your script runs delete_from or update_recordset on a large table where the delete() or update() methods of the target table have been overridden, the bulk database operation will fall back to record-by-record processing. To prevent this, call the skipDataMethods(true) method to cause the update() and delete() methods to be skipped. Also, you can call the skipDatabaseLog(true) method to improve performance.

    • If the business scenario cannot be written as insert_recordset, consider using the RecordInsertList class to batch multiple inserts to reduce network calls. This operation is not as fast as insert_recordset, but is faster than individual inserts in a loop.

  • Break down your scripts into smaller pieces. For example, do not upgrade two independent tables in the same script even if there is a pattern in how the scripts work. This is because:

    • Each script, by default, runs in one transaction (= one rollback segment) separately. If the segment becomes too large, the database server will start swapping memory to disk, and the script will slow to a halt.

    • Each script can be executed in parallel with other scripts as it was mentioned above.

  • Take care when you sequence the scripts. For example, do not update data first and then delete it afterward.

  • Be careful when calling normal business logic in your script. Normal business logic is not usually optimized for upgrade performance. For example, the same parameter record may be fetched for each record you need to upgrade. The parameter record is cached, but just calling the Find method takes an unacceptable amount of time. For example, the kernel overhead for each function call is about 5 ms. Usually, 10-15 ms will elapse before the Find method returns (when the record is cached). If there are a million rows, two hours will be spent getting the information you already have. The solution is to cache whatever is possible in local variables.

  • If there is no business logic in the script, rewrite the script to issue a direct query to bulk update the data.
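The set-based points above can be sketched as follows (all table and field names are hypothetical):

```xpp
// Set-based update: skip overridden data methods and the database log so
// the operation is not downgraded to record-by-record processing.
MySourceTable sourceTable;
MyTargetTable targetTable;

sourceTable.skipDataMethods(true);
sourceTable.skipDatabaseLog(true);

ttsbegin;
update_recordset sourceTable
    setting NewValue = sourceTable.LegacyValue
    where sourceTable.LegacyValue != '';
ttscommit;

// Where a single set-based statement is not possible, batch the inserts
// with RecordInsertList instead of inserting row by row.
RecordInsertList insertList = new RecordInsertList(tableNum(MyTargetTable));

while select LegacyValue from sourceTable
    where sourceTable.LegacyValue != ''
{
    targetTable.clear();
    targetTable.NewValue = sourceTable.LegacyValue;
    insertList.add(targetTable);
}

insertList.insertDatabase();
```

RecordInsertList reduces the number of network round trips; it is not as fast as insert_recordset, but it is noticeably faster than individual inserts in a loop.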

New requirement for developer cloud-hosted (OneBox) environments running version 10.0.36 or later

If you are going to upgrade your developer cloud-hosted (OneBox) environments to version 10.0.36 or later you should keep in mind that additional components should be installed manually in advance. The required components are listed here.

Otherwise, you will see the following error message during the upgrade process:

Error during AOS stop: Please upgrade to the latest Visual C++ redistributable package to continue with the installation. For more details, visit aka.ms/PreReqChanges [Log: C:\Temp\PU20\AXPlatformUpdate\RunbookWorkingFolder\AZH81-runbook\localhost\RetailHQConfiguration\2\Log\AutoStopAOS.log]

The step failed


P.S. All new required components are automatically installed in all newly deployed cloud-hosted environments.

Dynamics 365 Supply Chain Management new update policy

Currently, the vendor releases seven updates for Dynamics 365 Finance & Supply Chain Management per year, and only two or three of them are major ones.

Starting in 2024, there will be significant changes to the release pattern and cadence. The changes go into effect with updates to some of the release milestones for 10.0.38.

At the moment, based on the official documentation, the key points of the new approach are:

  • The vendor will release four updates in December, March, June, and September. 
  • The major updates will be released in March and September.
  • Starting February 19, 2024, the maximum number of consecutive pauses of updates allowed will be reduced from three to one.
  • With the release durations extended, the same minimum of two annual service updates is maintained.

The following table illustrates allowed pauses by month based on your installed version until the transition is completed.


If you have any other questions, please go to One Version service updates FAQ to learn how these changes affect the release process.

