Dynamics 365 Finance and Operations. New development features in version 10.0.40 PU64.

Version 10.0.40 (PU64) introduces new features for developers.

First of all, modified objects can now be added to the current project automatically, so you no longer need to worry about adding objects to the right project by hand. To enable this feature, turn on the "Add files to solution on save" option.


The second feature is related to the build process: it is now possible to select which label files to build.



By default, all label files are compiled. Use this option if you would like to build only specific labels or speed up the build process.

D365 Finance and Operations. Data modifications in Production and Sandbox environments.

Introduction

If you are migrating from one of the previous versions (Ax 2009/Ax 2012), you probably have a pack of jobs for correcting data in typical cases. From my perspective, it makes sense to adjust, test, and move those jobs to D365 Finance and Operations so that you can keep using them on demand as before.

In addition, you can create and use a set of classes for data corrections that can be applied on demand while the IT team is solving the root cause of the data inconsistencies.

Still, the question remains: how can we fix urgent data issues in D365 Finance and Operations Production or Sandbox environments?

Important: Any data adjustment in the Production environment must be tested first in a copy of production (a Sandbox/UAT environment). Do not apply data modifications in Production that have not been tested beforehand: in Production there is no second attempt if the data modification goes wrong.

Overview

In Ax 2012 we had access to the AOT and could develop a job, or open a table via the table browser, and adjust data in close to real time. In D365 Finance and Operations Production or Sandbox environments there is no AOT anymore, and the table browser can be opened only in view mode.

Luckily, there are some ways to correct data in D365 Finance and Operations as well.

•  First of all, it is still possible to develop job-like classes. In D365 Finance and Operations this
    feature is called "X++ scripts with zero downtime", and I described it in one of my previous posts.
    With this feature you can call the standard system business logic and use all available X++
    commands. In my opinion, it is the best option if, for some reason, you can't deploy a new
    deployable package with the new jobs to the production environment. A minimal sketch of such a
    class is shown after this list.

•  Another option is the Microsoft Dynamics Excel add-in together with data entities. If data
    entities exist for the required tables/forms and there are not too many records, this can be a
    convenient option. From my perspective, it mostly applies to master data. In addition, the
    standard entities have edit restrictions similar to their source tables, so if you want to use
    entities for data modifications you may need to copy the standard entities and change their field
    properties. If you choose this way, keep the approach in mind and develop the required entities,
    or add the required fields to the standard entities, in advance. However, this is also a way of
    cheating, as you may bypass some system restrictions, so you need to be sure that you understand
    all the consequences of the changes.

•  Another available option is the Open Data Protocol (OData). In this case you can use the Postman
    application and connect it to the D365 Finance and Operations instance. Keep in mind that the
    data entities you are going to use must have the "Public" property set to "Yes" and the
    "Public Collection Name" property populated. As I mentioned above, you can use the standard
    entities or create new ones for data modification purposes.

    On the Internet you can find a lot of posts on how to use Postman with D365 Finance and
    Operations OData entities. Just to give you an idea, here are a couple of examples:
    link1, link2.

The next options are not the best ones, if you ask me. They should be applied only if you know exactly what you are doing and they are your "last hope". With those options, you make direct changes in the database and bypass validation or other code that would normally run.

•  The next option is Just-In-Time (JIT) database access. From a data manipulation perspective, it is
    the same opportunity you had in Ax 2012 when you connected to the Ax 2012 SQL database via
    SQL Server Management Studio: you can create, edit, and delete records in Production and Sandbox
    database tables directly. However, this is not really recommended, as you bypass validation and
    other code that would normally run if you were changing the data through the system forms and
    table methods. Before introducing any changes in the tables via SQL, you have to understand all
    the consequences of those changes.

•  The final option is to use a custom tool for data correction. As far as I know, there are some
    available tools and extensions, for example, this one; you can find others as well. Be aware that
    Microsoft may decline support cases if they find this kind of tool in your Prod environment. I saw
    such a notification in one of the posts on the Microsoft Community forum; unfortunately, I could
    not find it again to share the link.
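
Just to illustrate the first option ("X++ scripts with zero downtime"), below is a minimal sketch of what such a data-correction class could look like. The MyStuckTable table, its DocumentId and Status fields, and the selected value are hypothetical placeholders; the point is that the script goes through the normal table methods, so table-level validation and events still run.

internal final class MyDataCorrectionScript
{
    /// <summary>
    /// Entry point of the data-correction script.
    /// </summary>
    public static void main(Args _args)
    {
        // MyStuckTable and its fields are placeholders:
        // replace them with the table and the record you actually need to fix.
        MyStuckTable stuckRecord;

        ttsbegin;

        select firstonly forupdate stuckRecord
            where stuckRecord.DocumentId == 'DOC-000123';

        if (stuckRecord.RecId)
        {
            stuckRecord.Status = NoYes::No;
            stuckRecord.update(); // table-level validation and update events still run
        }

        ttscommit;

        info(strFmt("Processed record: %1", stuckRecord.DocumentId));
    }
}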

Conclusion

As you can see, there are different options for data correction, and most likely you will be able to solve any data issue with one of them. My personal advice is to collect jobs/classes and Postman queries; as a result, you will have a set of tested classes and Postman requests for data corrections that you can use on demand.

I would recommend using the Just-In-Time (JIT) database access really carefully. Before introducing any changes with this option, you have to understand all the consequences of the changes.

Troubleshooting the package installation to a Tier-1 environment by using LCS

Introduction

When you install a package in a Tier-1 environment by using the LCS portal, the process should complete without any issues 99% of the time. Unfortunately, there is still the remaining 1% where you may experience various issues.

Issue types


Disc space

Before you install a binary package in a Tier-1 environment via the LCS portal (Maintain – Apply updates), please make sure that there is enough space on the disk where the folder *\AosService\PackageLocalDirectory is hosted. Usually, it is the "K" drive.

Otherwise, the preparation process can fail and the LCS environment will be in the "Preparation failed" state. If you download the log, it will be empty and the reason for the failure can be unclear.

Visual Studio

If the process fails at the Preparation stage or later, the downloaded log contains data about the preparation steps only, and the "PreServicing-xxxxxxxxxxxxxxxxxxxx.error" file contains only one line:

The specified module 'C:\Program Files\Microsoft Security Client\MpProvider' was not loaded because no valid module file was found in any module directory.

It may mean that a Visual Studio instance was open in the environment when you started the process. In this case, log in to the environment via an RDP connection, close the Visual Studio instances in all user sessions, close your RDP connection, and resume the process.

If you see no open Visual Studio instances in the environment, open Visual Studio in your own session. Most probably it will install some updates or components in the background, so keep it open for 5-10 minutes. Then close the Visual Studio instance, close your RDP connection, and resume the process.

Maintenance mode

Occasionally, the environment can be in maintenance mode. In this case, you can face an issue at the step "GlobalUpdate script for service model: AOSService on machine: Machine Name", and the environment will be in the "Failed" state.

In this case, you need to log in to the environment via an RDP connection and check the Services (Control Panel\System and Security\Administrative Tools\Services). If you see that the "Microsoft Dynamics 365 Unified Operations: Batch Management Service" is stopped and cannot be started, this is the root cause of the issue. You need to be able to start this service; otherwise, you can't resume the installation process.

To find the reason, take a look at the Windows event logs.

If you see the following three errors in a row, it may mean that the environment is in maintenance mode.

  The description for Event ID 110 from source Microsoft Dynamics 365 for Finance and 
    Operations cannot be found. Either the component that raises this event is not installed on 
    your local computer or the installation is corrupted. 
    You can install or repair the component on the local computer.

    If the event originated on another computer, the display information had to be saved with the 
    event. The following information was included in the event:

    AX is shutting down due to an error.
    Microsoft.Dynamics.AX.Batch.BatchService+MaintenanceModeBatchStartupException: 
    Exception of type
    'Microsoft.Dynamics.AX.Batch.BatchService+MaintenanceModeBatchStartupException' 
    was thrown.
    at Microsoft.Dynamics.AX.Batch.BatchService.OnStart(String[] args)
    Exception details:
    Microsoft.Dynamics.AX.Batch.BatchService+MaintenanceModeBatchStartupException: 
    Exception of type
    'Microsoft.Dynamics.AX.Batch.BatchService+MaintenanceModeBatchStartupException' 
    was thrown.
    at Microsoft.Dynamics.AX.Batch.BatchService.OnStart(String[] args)

  Faulting application name: Batch.exe, version: 7.0.7198.49, time stamp: 0xb4453638
    Faulting module name: unknown, version: 0.0.0.0, time stamp: 0x00000000
    Exception code: 0x80131623
    Fault offset: 0x00007ffb9cb54093
    Faulting process id: 0x1508
    Faulting application start time: 0x01da8cdf84d8bc61
    Faulting application path: K:\AosService\WebRoot\bin\Batch.exe
    Faulting module path: unknown
    Report Id: 6054eadb-7058-44e4-966c-fcd600a10af7
    Faulting package full name: 
    Faulting package-relative application ID:

 Application: Batch.exe
   Framework Version: v4.0.30319
   Description: The application requested process termination through       System.Environment.FailFast(string message).
   Message: Tearing the process down due to an unhandled exception.
   Stack:
   at System.Environment.FailFast(System.String, System.Exception)
   at <Module>.FailFast(UInt32, System.String, System.Exception)
   at <Module>.OnAppDomainUnhandledException(System.Object,
   System.UnhandledExceptionEventArgs)
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.ServiceProcess.ServiceBase.Run(System.ServiceProcess.ServiceBase[])
   at Microsoft.Dynamics.AX.Batch.Entrypoint.Main(System.String[])

To be on the safe side, you can follow the standard documentation: query the "SQLSYSTEMVARIABLES" table, and if the 'CONFIGURATIONMODE' value is 1, the system is in maintenance mode.

If it is, you need to turn maintenance mode off and return the environment to its normal state. Then start the "Microsoft Dynamics 365 Unified Operations: Batch Management Service" manually; if it starts, you have fixed the issue. After that, close your RDP connection and resume the process.

Dynamics 365 Supply Chain Management WHS extending. Adding a new work type for "User directed" mobile device flows.

Introduction

I would like to share my experience with adding a completely new work type. In my case, I have added a new work type for "User directed" mobile device flows. My post is not a complete guide on how to do it. It is more about the main idea and some thoughts. 

Note: There is a post on how to work with the “Custom” work type. It is different.

Overview

WHSWorkType enum extending

First, it is necessary to extend the standard "WHSWorkType" enum in order to add a new work type.

Adding new mobile device step

Then, since it is a new work type, it makes sense to add a new mobile device step so that the system can process it. The new step must not reuse one of the existing numbers from the standard "WHSWorkExecuteDisplayCases" macro. We can create a new macro, or use an existing one, and add the new step:

#define.NewWorkStep(10000)

Adding a new handler class for the new work type

After that, we need to create a new handler class for the new work type:

/// <summary>
/// The <c>WHSNewWorkTypeHandler</c> class handles the new work type.
/// </summary>
[WhsWorkTypeFactory(WHSWorkType::NewWorkType)]
class WHSNewWorkTypeHandler extends WhsWorkTypeHandler
{}

In this class, we need to implement the following methods:

  • findWorkCreateLocationQty – sets the processing parameters.
  • determineStep – defines the first step of the flow for the work type; here you can also specify the mobile device screen to be shown to users.
  • executeWorkLine – defines the actions performed on the work line.

Note: The system contains the "WhsWorkTypePrintHandler" and "WhsWorkTypeCustomHandler" classes, which can be used as references for a better understanding of work type handler classes.

Below is a mockup of the possible solution:

public WhsWorkCreateLocationQtyResult findWorkCreateLocationQty(WhsWorkCreateLocationQtyParameters _parameters)
{
    WhsWorkCreateLocationQtyResult result;

    result = WhsWorkCreateLocationQtyResult::construct();

    result.locationId       = '';
    result.pickUnitId       = _parameters.unitId;
    result.pickQty          = _parameters.qtyWork;
    result.inventPickQty    = _parameters.inventQtyWork;

    return result;
}

public void determineStep(WhsWorkStepContext _context)
{
    WhsWorkExecuteDisplay   workExecuteDisplay = _context.workExecuteDisplay;

    // We can go to a custom dialog here if we want to (DrawNewScreen is a custom screen-building method)
    _context.nextForm   = workExecuteDisplay.DrawNewScreen();
    _context.step       = #NewWorkStep;
}

public WHSWorkLine executeWorkLine(WhsWorkExecute     _workExecute,
                                   WHSWorkLine        _workLine,
                                   WHSUserId          _userId)
{
    return _workExecute.processNewWorkType(_workLine.WorkId,
                                           _workLine.LineNum,
                                           _userId);
}

Once the handler class is implemented and the new step appears in the mobile device flow for the new work type, we need to be able to process that step correctly.


WhsWorkExecuteDisplay class extending

If we take a look at the "processWorkLine" method of the "WHSWorkExecuteDisplay" class we will see that there is a default section for new mobile device steps.

default:
    boolean finishedProcessing = this.processWorkLineForCustomStep(state);

    if (finishedProcessing)
    {
        return [state.nextForm, step, state.recall, pass.pack()];
    }
    break;

So, the next step is to create an extension of the WhsWorkExecuteDisplay class and implement the "processWorkLineForCustomStep" method. The "processWorkLineForCustomStep" method has the "Replaceable" attribute, so in your extension you can write any business logic.

Note: I would recommend using the “next” command in the “processWorkLineForCustomStep”, for instance:

protected boolean processWorkLineForCustomStep(WhsWorkProcessWorkLineState _state)
{
    boolean    ret;
    // ...
    switch (step)
    {
        case #NewWorkStep:
            // do something
            ret = true; // in case the step has been processed correctly
            break;

        default:
            ret = next processWorkLineForCustomStep(_state);
    }

    return ret;
}

In this case, if there is more than one extension of this method (for example, from multiple vendors) all of them will be called by the system. 

If you don't call the "next" command, only your method extension will be called; the other method extensions will be skipped by the system.

If the new step has been processed correctly, the method must return "true". In this case, the system returns the values from your method to the mobile device flow, and the standard code after the "processWorkLineForCustomStep" call is not executed.

When you jump into your new step you can develop your own mobile device screens and switch between them depending on the buttons and controls in use. When you are done with the new work type you need to "go back" to the standard mobile device steps.


Conclusion

Of course, code adjustments and extensions may also be desirable in other objects, depending on the business logic that is planned for the new work type. In my opinion, the text above can be used as a high-level guide.


Workflow Issue: Stopped (error): X++ Exception: Work item could not be created. Insufficient security permissions for user XXXX. Please review the user's security permissions to ensure they are sufficient for this workflow document and then resume the workflow.

Recently, we experienced a workflow security issue when the new D365 update was installed. Originally, the workflow process was copied from one of the standard ones and had worked for at least one year.

The error message was:

Workflow Issue: Stopped (error): X++ Exception: Work item could not be created. Insufficient security permissions for user XXXX. Please review the user's security permissions to ensure they are sufficient for this workflow document and then resume the workflow.

Our first idea was that there were changes in the standard workflow process that we should apply, but there were none. We read the standard documentation and this article on workflow security.

We searched the Internet and found a lot of posts and articles on this error.

Below you can find a list of possible reasons for the error:

1. The user doesn't have approval rights (the approval-related security is not assigned).
2. The user doesn't have access to the menu item associated with the workflow.
3. No employee is mapped to the user.
4. There is a dynamic rule assigned to the user that prevents them from performing the operation.
5. The menu items specified on the approval or task elements for the workflow step have their configuration key disabled.

Unfortunately, none of those options applied to our case.

We tried:

  • Operations "Data > Synchronize all" and "Data > Repair" on the "Security Configuration" form. 
  • Re-assigning user roles. (System admin > Security > Assign users to roles)
  • Applying the "System administrator" role to user accounts as much as possible.

and so on.

Finally, we were able to find the cause of the issue. The error message says it is a security issue, but in our case it was a workflow query issue: users had started entering documents a bit differently, and the query inside our workflow returned no records. The system interpreted this as a security issue and raised the security error.

So, the lesson is: this error is not always about security permissions or configuration keys. If you see it, it also makes sense to verify that the workflow query returns data, as shown in the sketch below.
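
For example, a quick way to check the workflow query is a small runnable class like this sketch. The MyWorkflowDocumentQuery query, the PurchTable data source, and the RecId value are placeholders; use the query returned by your workflow type's document class and the record your workflow instance is stuck on.

internal final class CheckWorkflowDocumentQuery
{
    public static void main(Args _args)
    {
        // MyWorkflowDocumentQuery is a placeholder: use the query that your
        // workflow type's document class returns from getQueryName().
        Query    query = new Query(queryStr(MyWorkflowDocumentQuery));
        QueryRun queryRun;

        // Restrict the query to the record the workflow instance is stuck on
        // (PurchTable and the RecId value are just examples).
        query.dataSourceTable(tableNum(PurchTable))
            .addRange(fieldNum(PurchTable, RecId))
            .value(queryValue(5637144576));

        queryRun = new QueryRun(query);

        if (queryRun.next())
        {
            info("The workflow document query returns the record.");
        }
        else
        {
            warning("The workflow document query returns no records - this can surface as the 'Insufficient security permissions' error.");
        }
    }
}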

Refresh caches in Dynamics 365 Finance and Operations

Sometimes, it is necessary to refresh different types of caches in the system. In Ax 2012 you could do it from the development workspace: Tools > Caches, and then select the required option.


In D365 we can clear or refresh the cache types using the following commands in the browser:
  • Refresh dictionary:
    https://ENVIRONMENT_URL/?mi=SysClassRunner&cls=SysFlushDictionary
  • Refresh data:
    https://ENVIRONMENT_URL/?mi=SysClassRunner&cls=SysFlushData
  • Refresh elements:
    https://ENVIRONMENT_URL/?mi=SysClassRunner&cls=SysFlushAOD
    Note: The refresh of the code extension cache was included in the SysFlushAOD::main() method in 2021. The SysExtensionCache::clearAllScopes() method has been marked as deprecated since then.
  • Refresh report server:
    https://ENVIRONMENT_URL/?mi=SysClassRunner&cls=SysFlushReportServer

As in Ax 2012, we can refresh the cache for database logging:
  • Refresh database log:
    https://ENVIRONMENT_URL/?mi=SysClassRunner&cls=SysFlushDatabaseLogSetup

There is a new cache type in D365, related to the cross-company data sharing feature. This feature resembles the virtual companies feature in Microsoft Dynamics AX 2012.
  • Refresh cross-company data sharing:
    https://ENVIRONMENT_URL/?mi=SysClassRunner&cls=SysFlushSysSharingRules
There is still "SysFlushSystemSequence" class in D365. In the previous versions, it was used to record identifier alignment during the data import process via group definitions. Starting D365 this approach has been deprecated the use of this class makes no sense anymore.

The relations or tables are not available when configuring the electronic reporting report

Recently, we experienced an issue when we needed to change one of the reports, which was configured via Electronic reporting (ER). The issue was that we could not see the actual relations between the tables; moreover, we could not see new tables while configuring the ER report.

We checked our configuration in Organization administration > Workspaces > Electronic reporting and opened Designer > Map model to datasource > Designer. This is where we map the list of fields to be printed in the report against the data source in Dynamics 365.

I had not been told that the tables to be added to the report configuration were new; they had been delivered with the latest code release from our partners.

Once I learned that, the solution was pretty clear: it is necessary to refresh the ER metadata to make newly added artifacts visible in the ER model mapping designer. This is done with the Rebuild table references menu item (Organization administration > Electronic reporting > Rebuild table references), which brings the AOT changes into the ER metadata.

As a result, the lessons are:

  • Read the documentation thoroughly.
  • When you add new Application Object Tree (AOT) artifacts or update existing AOT artifacts that are used as data sources (tables, views, or data entities) in ER, use the Rebuild table references menu item (Organization administration > Electronic reporting > Rebuild table references) to bring your AOT changes into the ER metadata.

Data Migration Toolkit for Dynamics 365 Experience Overview

Introduction

Previously, I wrote a post about a data upgrade in self-service environments. This time I would like to share my experience with the "Data Migration Toolkit for Dynamics 365". 

Parameters

Based on the standard documentation, to optimize the replication latency/performance, you can update the following distributor parameters in the App.config file:

  • MaxBcpThreads – By default, this parameter is set to 6. If the machine has fewer than six cores, update the value to the number of cores. The maximum value that you can set is 8.

  • NumberOfPublishers – By default, this parameter is set to 2, and the common recommendation is to use this value. However, there can be situations where you may want to increase the number of publishers to distribute a smaller number of tables to each publisher. This, in conjunction with the manual snapshot start process, allows you to run smaller initial snapshots, which can be useful if you have limited maintenance windows and must split the startup of the replication over several of them.

  • SnapshotPostPublication – This option adds a 5-minute delay between automatic snapshot process starts, which can help with the load on the source server. The toolkit also allows manual snapshot starts; if you choose that option, you don't need to set this parameter.

Let me share my experience with the parameters:

  • MaxBcpThreads – I adjusted the parameter according to the recommendations. 

  • NumberOfPublishers – During my data migration projects, if the Ax 2012 database was not very big (30-50 GB), I used the default NumberOfPublishers value. If the database is 80 GB or bigger, it's better to select "Yes" for the "Manual startup" option in Step 5 of the replication, then run each publication one at a time and wait for it to push the snapshot before you start the next one. Also, in the toolkit config file "DataMigrationTool.exe.config", for larger databases it's better to set the number of publishers to 4 or 6. The advantage is that if one of the publishers has an issue, you only have to reinitialize 25% or less of the data.

  • SnapshotPostPublication – I have not changed this parameter. Each time the process worked fine with the default value.

    One more piece of advice:

    Each time I need to perform the data migration in Tier-2 or higher environments, I deploy a new LCS environment from scratch. I once tried to perform the data migration with the same LCS self-service environment, but the process failed and I could not resume it. Since then, I prefer to deploy a new Tier-2 or higher LCS environment for data migration and not waste my time on unnecessary troubleshooting.

    And don't forget to clean up extra data in Ax 2012 with the standard clean-up operations. It can reduce the time required for the data upgrade, and you may avoid the issues mentioned below.

    Experienced issues

    There is a brilliant troubleshooting guide from Microsoft. It is really useful and contains reasonable advice; if you have any issues with the data upgrade, look at it first. It is constantly being updated by Microsoft. Nevertheless, I have run into issues that are not described in the guide yet.

    Batch step failed with a time-out 

    The message was: The postsync-wait for batch step is in in-progress state for more than 240 minutes.

    I asked the Microsoft support team for advice, and the response was:

    In some scenarios it is expected behaviour for the sandbox environment to take a longer time to finish; hence we request you to wait for the step to be completed using the DS command, then resume the operation.

    If you can get JIT access, you can try to run the following SQL query to check the queries currently running in the database; you may need to run the query a few times:

    SELECT
        SPID         = er.session_id,
        STATUS       = ses.STATUS,
        [Login]      = ses.login_name,
        Host         = ses.host_name,
        BlkBy        = er.blocking_session_id,
        DBName       = DB_Name(er.database_id),
        CommandType  = er.command,
        ObjectName   = OBJECT_NAME(st.objectid),
        CPUTime      = er.cpu_time,
        StartTime    = er.start_time,
        TimeElapsed  = CAST(GETDATE() - er.start_time AS TIME),
        SQLStatement = st.text
    FROM sys.dm_exec_requests er
        OUTER APPLY sys.dm_exec_sql_text(er.sql_handle) st
        LEFT JOIN sys.dm_exec_sessions ses
            ON ses.session_id = er.session_id
    WHERE st.text IS NOT NULL

    Once the final step is "Completed", you can then resume from the toolkit. This will quickly cycle through the servicing again in LCS to bring the environment status to "Completed". That might take around 10-20 minutes, sometimes less.

    In my particular case, this solution worked.

    DBSync step failed with a time-out

    This happens due to the size of the tables and the precision changes on numeric fields between AX 2012 and D365. SQL Server is exceptionally slow when altering tables with these data type changes. If the table size in Ax 2012 is more than 5 GB and this type of change was applied to the table's fields in D365, you will probably face a synchronization issue.

    To get past this, run DS from the toolkit and then resume; the step will get through, but it could time out on another table.

    If there are multiple huge tables in Ax 2012, I would recommend asking Microsoft for support. They have a solution for this called the Shadow Copy sync.

    INVENTSUMLOGTTS table size

    In some cases, the INVENTSUMLOGTTS table can have a huge size. In Ax 2012 the INVENTSUMLOGTTS table is used for two purposes:

    • As part of the Inventory Multi Transaction System (IMTS) in order to roll back unsuccessful transactions.

    • It allows Master scheduling to do an update based on changes to the on hand. After the update, the records are deleted by Master scheduling.  
    When a customer is registered for Master planning but does not use Master scheduling, this table will continue to grow and will never get cleared out. Running Master scheduling will automatically clear the contents of this table.  

    If you are registered for Master planning but are not running Master scheduling, you can manually purge the table through SQL with the following statement: 
    DELETE INVENTSUMLOGTTS WHERE ISCOMMITTED =1

    Troubleshooting the deployable packages installation from the command line on a development (one box) environment

    Overview

    Installing a deployable package in a development environment is a common task when you need to install an ISV solution or another partner's solution. The installation process is well described in the standard guide. Theoretically, the process should go smoothly; in reality, you may face various errors along the way.

    Time out issue

    Sometimes, the installation process can fail with the error:

    Generated runbook with id: Runbook20230907093027
    Start executing runbook : Runbook20230907093027
    Executing step: 1
    Stop script for service model: AOSService on machine: localhost
    Stop AOS service and Batch service

    Error during AOS stop: File locks found on K:\AosService\WebRoot\bin\AOSKernel.dll for more than 300 seconds. Check previous logs for identified locking processes. [Log: C:\Temp\ZZZ_10.02.15.12\RunbookWorkingFolder\Runbook20230907093027\localhost\AOSService\1\Log\AutoStopAOS.log]

    The step failed


    In this case, you need to stop IIS Express in your dev box and repeat the installation process.

    Don't forget to run database synchronization from Microsoft Visual Studio after you install the deployable package.

    Check in issue

    When you try to check in the installed binary files in Visual Studio you may face the issue below: 

    **:\AosService\PackageLocalDirectory\ModelName\bin\file_name
    TFS30063: You are not authorized to access.

    You can try the following:

    • Reopen Visual Studio
    • Try to do a check-in again
    If the previous actions have not worked:
    • Reopen Visual Studio
    • Stop the Batch service and IIS Express in the dev box
    • Try to do a check-in again

    Additional guidance to database upgrade scripts for D365 Finance and Operations. Part 2. Custom inventory dimensions data upgrade.

    Introduction

    About a year ago I posted an article about inventory dimension data migration from Ax 2012 to D365. That post contains general ideas about the inventory dimension data migration approach; this time I'm going to share some more concrete guidance. All my conclusions and ideas are based on my personal experience with custom dimension data migration projects. The intercompany functionality is not covered in this post.

    Before you go on reading this post, please read my other posts about the data upgrade process, since some of the terms and definitions I use are explained there.

    Data requirements

    The "InventDim" table is different in Ax 2012 and D365. In Ax 2012 you could add the custom inventory dimension as new fields to the "InventDim" table and change some InventDim* macros. In D365 you should map your custom dimension to the "InventDimension1" – "InventDimension10" based on this guide.  

    In fact, during the data migration you need to move the values from the Ax 2012 custom "InventDim" fields to the D365 "InventDimension1" – "InventDimension10" fields of the "InventDim" table before the data upgrade logic is triggered. This should be done at the "PreSync" stage, when the database still contains the table structures of both the Ax 2012 and the D365 versions.

    Also, the structure of the "WHSInventReserve" table has changed in D365, and those changes should be taken into consideration.

    Functional requirements

    During the data upgrade execution, the system should be configured properly, from an inventory management perspective. It means that:

    1. The required configuration keys ("InventDimension1" – "InventDimension10") should be activated.
    2. All reservation hierarchies should be set correctly.
    3. All item dimension groups (Tracking, Storage, Product) should have proper setups.

    If one of the mentioned requirements is not met during the data upgrade at the "PostSync" step, the custom inventory dimension migration will fail from a data consistency perspective.

    Data upgrade

    The "PreSync" stage

    In order to meet the data requirements, we need to develop a script like this at the "PreSync" step:

    [
        UpgradeScriptDescription("Script description"),
        UpgradeScriptStage(ReleaseUpdateScriptStage::PreSync),
        UpgradeScriptType(ReleaseUpdateScriptType::StandardScript),
        UpgradeScriptTable(tableStr(InventDim), false, true, true, false)
    ]
    public void updateInventDimensionField()
    {
        FieldName     field2012Name  = 'YOURDIMENSIONFIELD'; // your Ax 2012 custom dimension field
        FieldName     field365Name   = 'INVENTDIMENSION1';   // the target D365 dimension field
        SysDictTable  inventDimDT    = SysDictTable::newTableId(tableNum(InventDim));
        TableName     inventDimName  = inventDimDT.name(DbBackend::Sql);

        str sqlStatement = strFmt(@"
                    UPDATE [dbo].[%1]
                    SET [dbo].[%1].[%2] = [dbo].[%1].[%3]
                    WHERE [dbo].[%1].[%3] <> ''",
                    inventDimName,
                    field365Name,
                    field2012Name);

        ReleaseUpdateDB::statementExeUpdate(sqlStatement);
    }

    It’s one of the possible options. The code is provided "as is" without any warranty. You can develop your own script in another way. It’s up to you. The key point is that the "InventDim" field values are to be moved at the "PreSync" stage properly.

    The "PostSync" stage

    At the "PostSync" stage when the database has a D365 table structure, the system performs the key steps from an inventory dimension data upgrade perspective.

    There is a class "ReleaseUpdateDB73_WHS". It has the methods of the "WHSInventReserveTable" data upgrade. The key methods here are "populateParentInventDimIdOfWhsInventReserveMinor" and "populateParentInventDimIdOfWhsInventReserveMajor". Those methods call the "WHSInventReservePopulateParentInventDimId::populateForAllItems();" method. The method "populateForAllItems" populates the "WHSInventReserve" table based on the system setups that I mentioned in the functional requirements paragraph.

    So, we need to develop an extension of this method that activates the configuration keys and updates the reservation hierarchies and dimension groups before the "next" call. This is how we can meet the functional requirements before the data upgrade is triggered. Add the extension to this exact method, since the data upgrade scripts can execute independently, as I mentioned here.

    The code changes can be like this:

    /// <summary>
    /// The <c>WHSInventReservePopulateParentInventDimIdAXPUPG_Extension</c> class is an extension of the <c>WHSInventReservePopulateParentInventDimId</c> class.
    /// </summary>
    [ExtensionOf(classStr(WHSInventReservePopulateParentInventDimId))]
    final class WHSInventReservePopulateParentInventDimIdAXPUPG_Extension
    {
        /// <summary>
        /// Populates the <c>ParentInventDimId</c> field for all items.
        /// </summary>
        public static void populateForAllItems()
        {
            FieldId     fieldId2012; // you have to set your Ax 2012 custom dimension field identifier here
            FieldId     fieldIdD365;

            ConfigurationKeySet keySet = new ConfigurationKeySet();
            SysGlobalCache      cache  = appl.globalCache();
            boolean             isConfigChanged;

            void updateWHSReservationHierarchyElement()
            {
                WHSReservationHierarchyElement  hierarchyElement;

                hierarchyElement.skipDatabaseLog(true);

                ttsbegin;

                // Replace the Ax 2012 dimension field with the D365 one in the reservation hierarchies
                update_recordset hierarchyElement
                setting DimensionFieldId = fieldIdD365
                    where hierarchyElement.DimensionFieldId == fieldId2012;

                ttscommit;
            }

            void updateEcoResTrackingDimensionGroupFldSetup()
            {
                EcoResTrackingDimensionGroupFldSetup     dimensionGroupFldSetup;

                dimensionGroupFldSetup.skipDatabaseLog(true);

                ttsbegin;

                // Replace the Ax 2012 dimension field with the D365 one in the tracking dimension group setup
                update_recordset dimensionGroupFldSetup
                setting DimensionFieldId = fieldIdD365
                    where dimensionGroupFldSetup.DimensionFieldId == fieldId2012;

                ttscommit;
            }

            if (...) // don't forget to add a check that the extension is being called during the data upgrade
            {
                keySet.loadSystemSetup();

                if (!isConfigurationkeyEnabled(configurationKeyNum(InventDimension1)))
                {
                    keySet.enabled(configurationKeyNum(InventDimension1), true);
                    isConfigChanged = true;
                }

                if (isConfigChanged)
                {
                    SysDictConfigurationKey::save(keySet.pack());
                    SysSecurity::reload(true, true, true, false, true);
                }

                fieldIdD365 = fieldNum(InventDim, InventDimension1);

                updateWHSReservationHierarchyElement();
                updateEcoResTrackingDimensionGroupFldSetup();
            }

            next populateForAllItems();
        }
    }

    It’s one of the possible options. The code is provided "as is" without any warranty. You can develop your own script in another way; it's up to you. The key point is to activate the required inventory setups and configuration keys before the system starts the upgrade of the "WHSInventReserve" table.

    Final steps

    When the data upgrade is completed, the "InventDimension1" – "InventDimension10" configuration keys may be disabled again. You should check their status on the "License configuration" form (System administration > Setup > License configuration) under the "Trade" configuration key. Enable the required keys manually if needed, and align the SQL warehouse procedures with the command:

    https://YOUR_ENVIRONMENT_URL/?mi=SysClassRunner&cls=Tutorial_WHSSetup

    Then you can open the "On-Hand" form, and add your custom dimensions to display and verify the outcomes. Also, it makes sense to do some functional tests. You can pick items via inventory journals, for instance.


    Ax 2012 data upgrade on the self-service environment. The error "There is no row at position 0."

    During the Ax 2012 data upgrade, after providing connection details to Ax2012 SQL Server, the migration tool closed and I received the follo...