How to run batch tasks using the SysOperation framework

Overview

As you may know, the system has batch tasks functionality. It can be used to create a chain of operations when you want to define the order of task execution according to certain criteria.
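For illustration, here is a hedged sketch of building such a chain from X++ using the standard BatchHeader API; the two controller classes are placeholders for your own batch-enabled classes:

BatchHeader batchHeader = BatchHeader::construct();

// Placeholders: any RunBaseBatch or SysOperation controller instances
MyFirstTaskController  task1 = new MyFirstTaskController();
MySecondTaskController task2 = new MySecondTaskController();

batchHeader.parmCaption("Chained batch tasks");
batchHeader.addTask(task1);
batchHeader.addTask(task2);

// The second task starts only after the first one has finished successfully
batchHeader.addDependency(task2, task1, BatchDependencyStatus::Finished);
batchHeader.save();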

Issue description


When you create a new batch task you have to specify the "Class name" field. If you know the class name in advance, you can enter it directly instead of waiting for the lookup list. However, not every class is designed to be used with this feature. In that case, the following warning message will appear:
"The specified class is not fully designed for use in the Batch job form. Execution might provide unexpected results."

If you ignore this message and try to define the class parameters, you will see the error:
"Access denied: Class name"

If we use the "RunBaseBatch" framework it is required to override the method "canGoBatchJournal" and set its return value to "true".

If you use the newer SysOperation framework, you have to decorate the controller class with the "[SysOperationJournaledParameters(true)]" attribute:

[SysOperationJournaledParameters(true)]
class WHSReleaseOutboundShipmentOrderToWarehouseController

If you add this attribute, the issue with the warning message "The specified class is not fully designed for use in the Batch job form. Execution might provide unexpected results." will be solved.

But if you try to set the parameters, you will still see the error: "Access denied: Class name".

To solve this, you need to add a reference to the service class and method in the "new" method. You can take a look at the "WHSReleaseOutboundShipmentOrderToWarehouseController" class to get an idea of the pattern.

void new(
    IdentifierName            _className     = '',
    IdentifierName            _methodName    = '',
    SysOperationExecutionMode _executionMode = SysOperationExecutionMode::Synchronous)
{
    IdentifierName parmClassName  = _className != '' ? _className :
        classStr(WHSReleaseOutboundShipmentOrderToWarehouseService);

    IdentifierName parmMethodName = _methodName != '' ? _methodName :
        methodStr(WHSReleaseOutboundShipmentOrderToWarehouseService,
            autoReleaseOutboundShipmentOrders);

    super(parmClassName, parmMethodName, _executionMode);

    this.parmDialogCaption(
        WHSReleaseOutboundShipmentOrderToWarehouseController::description());
}

When the "new" method is executed, the framework looks at the "run" method and determines the contract based on the input parameters. The call to "startOperation" method causes the system to build the user interface and execute code based on the user’s actions. A dialog is created using data types specified in data contract member methods. This explains why the new method requires class and method names. The third parameter specifies the execution mode, which takes effect if we programmatically execute the controller. Synchronous execution runs in line with the current process unless the user chooses to run it in the background.

So the reason for the error "Access denied: Class name" was that the service class name and method name were not set by default in the "new" method. As a result, the default UI builder class did not have a reference to the contract class, so parameters could not be set for batch tasks.

If you experience other challenges with the SysOperation batch tasks feature, you can use the "WHSReleaseOutboundShipmentOrderToWarehouse*" classes as a guide.

Data upgrade from AX 2012 in development environments: database collation change.

Overview

As you may know, the collation of the AX 2012 database must be SQL_Latin1_General_CP1_CI_AS when you perform a data upgrade in development environments. If your database uses a different collation, you have to change it. Otherwise, you will experience strange errors during the data upgrade process. I would like to share my experience with this process.

Issue description

In order to change the collation of the AX 2012 database, you can follow this guide: Change the database collation for development environments. When I tried to use the command from the mentioned guide:

SqlPackage.exe /Action:Export /SourceServerName:localhost /SourceDatabaseName:MicrosoftDynamicsAX /TargetFile:"C:\Temp\MicrosoftDynamicsAX.bacpac" /Properties:CommandTimeout=1200 /Properties:VerifyFullTextDocumentTypesSupported=False

The following error appeared:

*** Changes to connection setting default values were incorporated in a recent release.  More information is available at https://aka.ms/dacfx-connection
*** Error exporting database:Could not connect to database server.
A connection was successfully established with the server, but then an error occurred during the login process. 
(provider: SSL Provider, error: 0 - The certificate chain was issued by an authority that is not trusted.)
The certificate chain was issued by an authority that is not trusted
*** The settings for connection encryption or server certificate trust may lead to connection failure if the server is not properly configured.

The cause of the error is that the server may not have encryption enabled, or the configured certificate may not be issued by a trusted certificate authority (for example, a self-signed certificate).

In order to avoid this error, I used the following guide: SqlPackage Export parameters and properties. As a result, I modified the export command given in the guide:

SqlPackage.exe /Action:Export /SourceServerName:localhost /SourceDatabaseName:MicrosoftDynamicsAX /TargetFile:"C:\Temp\MicrosoftDynamicsAX.bacpac" /Properties:CommandTimeout=4200 /Properties:VerifyFullTextDocumentTypesSupported=False /SourceTrustServerCertificate:True

The idea is to add the "/SourceTrustServerCertificate:True" parameter. It tells SqlPackage to use TLS to encrypt the source database connection and to bypass walking the certificate chain to validate trust. In other words, the modified SqlPackage command trusts the server certificate instead of validating it. In addition, I extended the timeout to 4,200 seconds.

After you change the "Collation" property in the model.xml file, you need to import the *.bacpac file back into the database server to create a new database. Therefore, it makes sense to apply a similar change to the import command. The idea of the changes is the same: trust the server's certificate and extend the timeout.
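For orientation, the "Collation" property lives inside the model.xml that SqlPackage exports with the *.bacpac. A sketch of the fragment to look for (the exact element layout depends on the DacFx version, so treat this as an assumption):

<Element Type="SqlDatabaseOptions">
  <Property Name="Collation" Value="SQL_Latin1_General_CP1_CI_AS" />
</Element>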

As a result, I modified the import command given in the guide: 

SqlPackage.exe /Action:Import /SourceFile:"C:\Temp\MicrosoftDynamicsAX.bacpac" /TargetServerName:localhost /TargetDatabaseName:MicrosoftDynamicsAX_NewCollation /Properties:CommandTimeout=4200 /ModelFilePath:"C:\Temp\model.xml" /TargetTrustServerCertificate:True

After that, I was able to import the *.bacpac file into a new database with the SQL_Latin1_General_CP1_CI_AS collation.

AX 2012 data upgrade on a self-service environment hosted in the Europe region (eu.lcs.dynamics.com).

When you run the DataMigrationTool.exe application, a console window opens where you can specify the cloud environment type:

  • Public: [ lcs.dynamics.com ]
  • GCC: [ gov.lcs.microsoftdynamics.us ]
  • UAE: [ uae.lcs.dynamics.com ]


As you can see, there is no option for the EU/European LCS geography. As a result, if your environment is hosted in the European LCS region, you will not be able to proceed with the data upgrade.

To solve this issue, you need to do the following:

    1. Edit the "DataMigrationTool.exe.config" file. It is located in the folder where you extracted 
    the Data Migration Toolkit

    2. You need to find the lines: 
    <add key="lcsApiEndpoint" value="https://lcsapi.lcs.dynamics.com/" />
    <add key="lcsApiEndpoint_us" value="https://lcsapi.gov.lcs.microsoftdynamics.us/" />
    <add key="lcsApiEndpoint_uae" value="https://lcsapi.uae.lcs.dynamics.com/" />

    3. Modify the line: 
    <add key="lcsApiEndpoint" value="https://lcsapi.lcs.dynamics.com/" />
    It should be like this:
    <add key="lcsApiEndpoint" value="https://lcsapi.eu.lcs.dynamics.com/" />

    4. Run the DataMigrationTool.exe application again and select the default option. 
    It should now point to the EU LCS geography.

As a result, you should be able to proceed with the data upgrade on the self-service environment hosted in the Europe region (https://eu.lcs.dynamics.com/).

Dynamics 365 Finance and Operations. New development features in version 10.0.40 PU64.

In version 10.0.40 (PU64), new features for developers have been introduced.

First of all, it is possible to add modified objects to the current project automatically, so you no longer have to worry about adding objects to the right project. To enable this behavior, turn on the "Add files to solution on save" option.


The second feature is related to the build process. It is possible to select the label files to be built.

All label files are compiled by default. If you would like to build only labels or to speed up the build process, you can use this option.

D365 Finance and Operations. Data modifications in Production and Sandbox environments.

Introduction

If you migrate from one of the previous versions (AX 2009/AX 2012), you might have a pack of jobs for correcting data in typical cases. From my perspective, it makes sense to adjust, test, and move those jobs to D365 Finance and Operations so that you can use them on demand as before.

In addition, you can create and use a set of classes for data corrections that can be applied on demand while the IT team is solving the root cause of the data inconsistencies.

Still, the question remains: how can we fix urgent data issues in D365 Finance and Operations Production or Sandbox environments?

Important: Any data adjustment in the Production environment must be tested in a Prod copy (Sandbox UAT) environment first. Do not apply data modification operations in the Production environment if they have not been tested. In the Production environment you have no second attempt if the data modification goes wrong.

Overview

In AX 2012 we had access to the AOT and could develop a job, or open a table via the table browser, and adjust data in close to real time. In D365 Finance and Operations Production or Sandbox environments, there is no AOT anymore, and the table browser can be opened only in view mode.

Luckily, there are some ways to correct data in D365 Finance and Operations as well.

•  First of all, it is still possible to develop a kind of job. In D365 Finance and Operations this
    feature is called "X++ scripts with zero downtime". I described this standard feature in one of 
    my previous posts. With this feature, you can call the standard system business logic and use all 
    available X++ commands. In my opinion, it is the best option if, for some reason, you can't 
    deploy a new deployable package with the new jobs into the Production environment.

•  Another option is using the Microsoft Dynamics Excel add-in and data entities. If you 
    have data entities for the required tables/forms and there are not too many records in the 
    table/form, that can be a convenient option. From my perspective, this option applies mostly to 
    master data. In addition, the standard entities have edit restrictions similar to their 
    source tables. If you would like to create new entities for data modifications, you may need 
    to copy standard entities and change their field properties. If you choose this way, keep this 
    approach in mind and develop the required entities, or add the required fields to the standard 
    entities, in advance. However, this is a cheating way as well, as you may bypass some system 
    restrictions. You need to be sure that you understand all the consequences of the changes.

•  Another available option is using the Open Data Protocol (OData). In this case, you can use 
    the Postman application and connect it to the D365 Finance and Operations instance. 
    Please keep in mind that the data entities you are going to use should have the "Public" 
    property set to "Yes", and the "Public Collection Name" property should be populated. 
    As I mentioned above, you can use the standard entities or create new ones for the data 
    modification purpose.

    On the Internet you can find a lot of posts on how to use Postman with D365 Finance and 
    Operations OData entities. Just to give you an idea, here are a couple of examples: 
    link1, link2.
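    To give you an idea of the shape of such a request, below is a hedged sketch of an OData 
    PATCH call that updates one field on one record; the entity, key values, and field name are 
    placeholders, and a bearer token must be obtained from Microsoft Entra ID first:

    PATCH https://YOUR_ENVIRONMENT/data/CustomersV3(dataAreaId='usmf',CustomerAccount='US-001')
    Authorization: Bearer <token>
    Content-Type: application/json

    {
        "CreditLimit": 10000
    }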

The next options are not the best ones, if you ask me. They should be applied only if you know exactly what you are doing and they are your "last hope". With these options, you make direct changes in the database and bypass validation or other code that would normally run.

•  The next option is using Just-In-Time (JIT) database access. 
    From a data manipulation perspective, it is the same opportunity you had in AX 2012 when you
    connected to the AX 2012 SQL database via SQL Server Management Studio. In this way, 
    you can create, edit, and delete records in Production and Sandbox database tables directly. 
    However, this is not really recommended, as you may bypass validation or other code 
    that would normally run if you were using the system forms and table methods to change 
    data in the table. Before introducing any changes in the tables via SQL, you have to 
    understand all the consequences of the changes.

•  The final option is to use a custom tool for data correction. As far as I know, there are some 
    available tools and extensions. For example, this one. You can find others. Be aware that 
    Microsoft may decline support cases if they find this kind of tool in your Prod environment. 
    I saw this notice in one of the posts on the Microsoft Community forum; unfortunately, 
    I could not find it again to share the link.

Conclusion

As you can see, there are different options for data correction. Most likely you will be able to solve any data issue with one of the available options. My personal advice would be to collect jobs/classes and Postman queries. As a result, you will have a list of tested classes and Postman requests for data corrections that you can use on demand.

I would recommend using Just-In-Time (JIT) database access very carefully. Before introducing any changes with this option, you have to understand all the consequences of the changes.

Troubleshooting the package installation to a Tier-1 environment by using LCS

Introduction

When you install a package in a Tier-1 environment by using the LCS portal, the process should complete without any issues in 99% of cases. Unfortunately, there is still the 1% where you may experience various issues.

Issue types


Disk space

Before you install a binary package in a Tier-1 environment via the LCS portal (Maintain – Apply updates), please make sure that there is enough space on the disk that hosts the *\AosService\PackageLocalDirectory folder. Usually, it is the "K" drive.

Otherwise, the preparation process can fail and the LCS environment will be in the "Preparation failed" state. If you download the log, it will be empty and the reason for the failure can be unclear.

Visual Studio

If the process fails at the Preparation stage or later, the downloaded log contains data about the preparation steps only, and the file "PreServicing-xxxxxxxxxxxxxxxxxxxx.error" contains only one line:

The specified module 'C:\Program Files\Microsoft Security Client\MpProvider' was not loaded because no valid module file was found in any module directory.

It may mean that a Visual Studio instance was open in the environment when you started the process. In this case, you need to log in to the environment via an RDP connection. Then close the Visual Studio instances in all user sessions, close your RDP connection, and resume the process.

If you see no open Visual Studio instances in the environment, open Visual Studio in your session. Most probably it will install some updates or components in the background. Keep it open for 5-10 minutes. Then close the Visual Studio instance, close your RDP connection, and resume the process.

Maintenance mode

Occasionally, the environment can be in maintenance mode. In this case, you can face an issue on the step "GlobalUpdate script for service model: AOSService on machine: Machine Name", and the environment will be in the "Failed" state.

In this case, you need to log in to the environment via an RDP connection and check the services (Control Panel\System and Security\Administrative Tools\Services). If the "Microsoft Dynamics 365 Unified Operations: Batch Management Service" is stopped and cannot be started, this is the root cause of the issue. You need to be able to start this service; otherwise, you can't resume the installation process.

To find the reason, take a look at the Windows event logs.

If you see the following three errors in a row, it may mean that the environment is in maintenance mode.

  The description for Event ID 110 from source Microsoft Dynamics 365 for Finance and 
    Operations cannot be found. Either the component that raises this event is not installed on 
    your local computer or the installation is corrupted. 
    You can install or repair the component on the local computer.

    If the event originated on another computer, the display information had to be saved with the 
    event. The following information was included in the event:

    AX is shutting down due to an error.
    Microsoft.Dynamics.AX.Batch.BatchService+MaintenanceModeBatchStartupException: 
    Exception of type
    'Microsoft.Dynamics.AX.Batch.BatchService+MaintenanceModeBatchStartupException' 
    was thrown.
    at Microsoft.Dynamics.AX.Batch.BatchService.OnStart(String[] args)
    Exception details:
    Microsoft.Dynamics.AX.Batch.BatchService+MaintenanceModeBatchStartupException: 
    Exception of type
    'Microsoft.Dynamics.AX.Batch.BatchService+MaintenanceModeBatchStartupException' 
    was thrown.
    at Microsoft.Dynamics.AX.Batch.BatchService.OnStart(String[] args)

  Faulting application name: Batch.exe, version: 7.0.7198.49, time stamp: 0xb4453638
    Faulting module name: unknown, version: 0.0.0.0, time stamp: 0x00000000
    Exception code: 0x80131623
    Fault offset: 0x00007ffb9cb54093
    Faulting process id: 0x1508
    Faulting application start time: 0x01da8cdf84d8bc61
    Faulting application path: K:\AosService\WebRoot\bin\Batch.exe
    Faulting module path: unknown
    Report Id: 6054eadb-7058-44e4-966c-fcd600a10af7
    Faulting package full name: 
    Faulting package-relative application ID:

 Application: Batch.exe
   Framework Version: v4.0.30319
   Description: The application requested process termination through       System.Environment.FailFast(string message).
   Message: Tearing the process down due to an unhandled exception.
   Stack:
   at System.Environment.FailFast(System.String, System.Exception)
   at <Module>.FailFast(UInt32, System.String, System.Exception)
   at <Module>.OnAppDomainUnhandledException(System.Object,
   System.UnhandledExceptionEventArgs)
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.ServiceProcess.ServiceBase.Run(System.ServiceProcess.ServiceBase[])
   at Microsoft.Dynamics.AX.Batch.Entrypoint.Main(System.String[])

To be on the safe side, you can use the standard documentation: query the "SQLSYSTEMVARIABLES" table, and if you see that 'CONFIGURATIONMODE' is 1, the system is in maintenance mode, as the query below shows.
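A minimal query for this check (run it against the AxDB database; per the standard documentation, the flag lives in the PARM/VALUE columns):

SELECT PARM, VALUE
FROM SQLSYSTEMVARIABLES
WHERE PARM = 'CONFIGURATIONMODE'
-- VALUE = 1 means the environment is in maintenance mode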

If it is true, you need to revert the environment to the normal state (see the sketch below). Then start the "Microsoft Dynamics 365 Unified Operations: Batch Management Service" manually. If it starts, you have fixed the issue. After that, you can close your RDP connection and resume the process.
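For reference, on a Tier-1 environment maintenance mode is normally turned off with the documented deployment setup command along the following lines; the paths, database name, and credentials are placeholders for your environment:

K:\AosService\PackagesLocalDirectory\Bin\Microsoft.Dynamics.AX.Deployment.Setup.exe --metadatadir K:\AosService\PackagesLocalDirectory --bindir K:\AosService\PackagesLocalDirectory\Bin --sqlserver . --sqldatabase AxDB --sqluser <user> --sqlpassword <password> --setupmode maintenancemode --isinmaintenancemode false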

Dynamics 365 Supply Chain Management WHS extending. Adding a new work type for "User directed" mobile device flows.

Introduction

I would like to share my experience with adding a completely new work type. In my case, I have added a new work type for "User directed" mobile device flows. My post is not a complete guide on how to do it. It is more about the main idea and some thoughts. 

Note: There is a post on how to work with the “Custom” work type. It is different.

Overview

WHSWorkType enum extending

First, it is necessary to extend the standard "WHSWorkType" enum in order to add a new work type.

Adding a new mobile device step

Then, since it is a new work type, it makes sense to add a new mobile device step so that the system can process the new work type. The new step must not reuse one of the existing numbers in the standard "WHSWorkExecuteDisplayCases" macro.
We can create a new macro, or use an existing one for this purpose, and add a new step:

#define.NewWorkStep(10000)

Adding a new handler class for the new work type

After that, we need to create a new handler class for the new work type:

/// <summary>
/// The <c>WHSNewWorkTypeHandler</c> class handles the new work type.
/// </summary>
[WhsWorkTypeFactory(WHSWorkType::NEWWorkType)]
class WHSNewWorkTypeHandler extends WhsWorkTypeHandler
{}

In this class, we need to implement the methods:

  • findWorkCreateLocationQty – sets the processing parameters (location, unit, and quantities).
  • determineStep – defines the first step of the flow for the work type; here you can also specify the mobile device screen to be shown to users.
  • executeWorkLine – defines the actions performed on the work line.

Note: The standard "WhsWorkTypePrintHandler" and "WhsWorkTypeCustomHandler" classes can be used for a better understanding of the work type handler classes.

Below is a mockup of the possible solution:

public WhsWorkCreateLocationQtyResult findWorkCreateLocationQty(
    WhsWorkCreateLocationQtyParameters _parameters)
{
    WhsWorkCreateLocationQtyResult result;

    result = WhsWorkCreateLocationQtyResult::construct();

    result.locationId    = '';
    result.pickUnitId    = _parameters.unitId;
    result.pickQty       = _parameters.qtyWork;
    result.inventPickQty = _parameters.inventQtyWork;

    return result;
}

public void determineStep(WhsWorkStepContext _context)
{
    WhsWorkExecuteDisplay workExecuteDisplay = _context.workExecuteDisplay;

    // we can go to a custom dialog if we would like to
    _context.nextForm = workExecuteDisplay.DrawNewScreen();
    _context.step     = #NewWorkStep;
}

public WHSWorkLine executeWorkLine(WhsWorkExecute _workExecute,
                                   WHSWorkLine    _workLine,
                                   WHSUserId      _userId)
{
    return _workExecute.processNewWorkType(_workLine.WorkId,
                                           _workLine.LineNum,
                                           _userId);
}

Once we have implemented the handler class and introduced the new step into the mobile device flow for the new work type, the system needs to be able to process the new step correctly.


WhsWorkExecuteDisplay class extending

If we take a look at the "processWorkLine" method of the "WHSWorkExecuteDisplay" class, we will see that there is a default section for new mobile device steps:

default:
    boolean finishedProcessing = this.processWorkLineForCustomStep(state);

    if (finishedProcessing)
    {
        return [state.nextForm, step, state.recall, pass.pack()];
    }
    break;

So, the next step is to create an extension of the WhsWorkExecuteDisplay class and implement the "processWorkLineForCustomStep" method. The "processWorkLineForCustomStep" method has the "Replaceable" attribute, so in your extension you can write any business logic.

Note: I would recommend calling the "next" command in "processWorkLineForCustomStep", for instance:

protected boolean processWorkLineForCustomStep(WhsWorkProcessWorkLineState _state)
{
    boolean ret;
    // ...

    switch (step)
    {
        case #NewWorkStep:
            // do something
            ret = true; // in case the step has been processed correctly
            break;

        default:
            ret = next processWorkLineForCustomStep(_state);
    }

    return ret;
}

In this case, if there is more than one extension of this method (for example, from multiple vendors), all of them will be called by the system.

If you don't call the "next" command, only your method extension will be called, and all other method extensions will be skipped by the system.

If the new step has been processed correctly, the method must return "true". In this case, the system returns the values from your method to the mobile device flow, and the standard code after the "processWorkLineForCustomStep" call is not executed.

Once you have jumped into your new step, you can develop your own mobile device screens and switch between them depending on the buttons and controls in use. When you are done with the new work type, you need to "go back" to the standard mobile device steps.


Conclusion

For sure, some code adjustments and extensions can be desirable in other objects too; it depends on the business logic that is planned for the new work type. In my opinion, the text above can be used as a high-level guide.


Workflow Issue: Stopped (error): X++ Exception: Work item could not be created. Insufficient security permissions for user XXXX. Please review the user's security permissions to ensure they are sufficient for this workflow document and then resume the workflow.

Recently, we experienced a workflow security issue when a new D365 update was installed. Originally, the workflow process was copied from one of the standard ones and had worked for at least one year.

The error message was:

Workflow Issue: Stopped (error): X++ Exception: Work item could not be created. Insufficient security permissions for user XXXX. Please review the user's security permissions to ensure they are sufficient for this workflow document and then resume the workflow.

The first idea was that there were some changes in the standard workflow process that we should apply, but there were none. We read the standard documentation and this article on workflow security.

We searched the Internet and found a lot of posts and articles on this error.

Below you can find a list of possible reasons for the error:

1. The user doesn't have approval rights (approval-related security is not assigned).
2. The user doesn't have access to the menu item associated with the workflow.
3. No employee is mapped to the user.
4. There is a dynamic rule assigned to the user that prevents this user from performing the operation.
5. The menu items specified on the approval or task elements for the step in the workflow have their configuration key disabled.

Unfortunately, none of those options applied to our case.

We tried:

  • Operations "Data > Synchronize all" and "Data > Repair" on the "Security Configuration" form. 
  • Re-assigning user roles. (System admin > Security > Assign users to roles)
  • Applying the "System administrator" role to user accounts as much as possible.

and so on.

Finally, we were able to find the cause of the issue. The error message says that it is a security issue, but in our case it was a workflow query issue. Users had started creating documents a bit differently, and the query inside our workflow returned no records. The system interpreted this case as a security issue and raised the security error.

So, the lesson is: it is not always about security permissions or configuration keys. If you see this error, it makes sense to verify that the workflow query returns data.
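A quick way to perform this check is a small runnable job that executes the workflow document query against the stuck record. Below is a hedged sketch; the query name (PurchTableDocument), table, and RecId are placeholders for your own workflow document:

internal final class VerifyWorkflowDocumentQuery
{
    public static void main(Args _args)
    {
        // Placeholders: substitute your workflow document query, table, and record ID
        Query query    = new Query(queryStr(PurchTableDocument));
        RecId docRecId = 5637144576;

        query.dataSourceTable(tableNum(PurchTable))
            .addRange(fieldNum(PurchTable, RecId))
            .value(queryValue(docRecId));

        QueryRun queryRun = new QueryRun(query);

        if (queryRun.next())
        {
            info("The workflow document query returns the record.");
        }
        else
        {
            warning("The workflow document query returns no record - work item creation will fail.");
        }
    }
}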

Refresh caches in Dynamics 365 Finance and Operations

Sometimes, it is necessary to refresh different types of caches in the system. In AX 2012 you can do it from the development workspace: Tools > Caches, and select the required option.


In D365 we can clear or refresh the cache types using the following commands in the browser:

  • Refresh dictionary:
    https://ENVIRONMENT_URL/?mi=SysClassRunner&cls=SysFlushDictionary
  • Refresh data:
    https://ENVIRONMENT_URL/?mi=SysClassRunner&cls=SysFlushData
  • Refresh elements:
    https://ENVIRONMENT_URL/?mi=SysClassRunner&cls=SysFlushAOD

Note: The refreshing of the code extension cache was included in the SysFlushAOD::main() method in 2021. The SysExtensionCache::clearAllScopes() method has been marked as deprecated since then.

  • Refresh report server:
    https://ENVIRONMENT_URL/?mi=SysClassRunner&cls=SysFlushReportServer

As in AX 2012, we can refresh the cache for database logging:

  • Refresh database log:
    https://ENVIRONMENT_URL/?mi=SysClassRunner&cls=SysFlushDatabaseLogSetup

There is a new cache type in D365, related to the cross-company data sharing feature. This feature resembles the virtual companies feature in Microsoft Dynamics AX 2012.

  • Refresh cross-company data sharing:
    https://ENVIRONMENT_URL/?mi=SysClassRunner&cls=SysFlushSysSharingRules
There is still "SysFlushSystemSequence" class in D365. In the previous versions, it was used to record identifier alignment during the data import process via group definitions. Starting D365 this approach has been deprecated the use of this class makes no sense anymore.

The relations or tables are not available when configuring the electronic reporting report

Recently we experienced an issue when we needed to change one of the reports. The report was configured via Electronic reporting (ER). The issue was that we were not able to see the actual relations between the tables. Moreover, we could not see the new tables when we were configuring the report.

We checked our configuration in Organization administration > Workspaces > Electronic reporting and opened Designer > Map model to datasource > Designer. This is where we map the list of fields to be printed in the report against the data source in Dynamics 365.

I had not been told that the tables which should be added to the report configuration were new; those tables were delivered with the latest code release from our partners.

Once I learned that, the solution was pretty clear: it is necessary to refresh the ER metadata to make the newly added artifacts visible in the ER model mapping designer. This is done with the Rebuild table references menu item (Organization administration > Electronic reporting > Rebuild table references), which brings the AOT changes into the ER metadata.

As a result, the lessons are:

  • Read the documentation thoroughly.
  • When you add new Application Object Tree (AOT) artifacts or update existing AOT artifacts that are used as data sources (tables, views, or data entities) in ER, use the Rebuild table references menu item (Organization administration > Electronic reporting > Rebuild table references) to bring your AOT changes into the ER metadata.
