
AX2012 R3: InventSum findSum* methods (Inventory On Hand)

Introduction

In this post, I describe performance issues with on-hand inventory activities that I have faced at different clients, and how I handled them.
Detailed performance analysis showed that SQL Server generates bad query execution plans for the joins between the InventSum and InventDim tables. This is a commonly known issue called "parameter sniffing". We had many cases where such queries randomly stopped the whole company's processes, and it could happen once a week, once a month, or many times a day.

Pre-steps

Before changing or adjusting these functions, please make sure that the following clean-up jobs are scheduled in the Inventory and Warehouse management modules (Periodic/Clean up). Based on my experience, these jobs should be scheduled one by one as batch tasks within a single batch job (they can themselves be a root cause of DB locks during execution, which I will describe in another post). Also please make sure that you have KB 4033280 (AX 2012 R2, AX 2012 R3) installed, which is quite important for cleaning up unused records.
  • Warehouse management on-hand entries cleanup
  • On-hand entries cleanup
  • On-hand entries aggregation by financial dimensions
  • Clean up unused inventory dimensions
Having fewer physically and financially closed on-hand records and unused inventory dimensions can boost performance a lot.
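The jobs above can be chained as sequential tasks within one batch job using the AX 2012 BatchHeader API. A minimal sketch, assuming the cleanup jobs are standard RunBaseBatch classes (the class names below are placeholders; check the actual classes behind the Periodic/Clean up menu items in your application):

```xpp
static void scheduleOnHandCleanup(Args _args)
{
    BatchHeader  batchHeader;
    RunBaseBatch task1, task2;

    // Placeholder class names: substitute the real cleanup batch classes
    // from Inventory management > Periodic > Clean up in your application.
    task1 = new InventSumOnHandCleanUp();   // hypothetical name
    task2 = new InventDimCleanUp();         // hypothetical name

    batchHeader = BatchHeader::construct();
    batchHeader.parmCaption("On-hand clean up");

    batchHeader.addTask(task1);
    batchHeader.addTask(task2);

    // Run the tasks one by one: task2 starts only after task1 finishes.
    batchHeader.addDependency(task2, task1, BatchDependencyStatus::Finished);

    batchHeader.save();
}
```

The dependency is the important part: running the cleanup jobs in parallel is exactly what can produce the DB locks mentioned above.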

InventSum findSum* methods

Most of the issues caused by these functions occur during the second month after go-live, but some customers reported such issues a year after going live, or after opening a new warehouse or legal entity.
The findSum and findSumQty methods of the InventSum table are called thousands of times a day, and we had to find a way to make them faster and more stable. Microsoft redesigned these functions in D365 and published a few KBs for AX 2009 and AX 2012, but related issues still occur and stop the business from time to time (including customers on D365 😊; maybe that is one of the reasons Microsoft re-enabled index hints: https://docs.microsoft.com/en-us/dynamics365/fin-ops-core/fin-ops/get-started/whats-new-platform-update-23#enabling-index-hints-in-x-again).
Let's start with the existing data, using two customers as examples.
Customer | Legal entities | InventSum size | InventTrans size   | InventDim size     | Total DB size | SQL parameters
A        | 7              | 351,320 (1 GB) | 42,153,290 (70 GB) | 21,788,431 (32 GB) | 600 GB        | 256 GB RAM, 8 cores
B        | 2              | 835,345 (4 GB) | 67,235,345 (95 GB) | 34,888,293 (48 GB) | 900 GB        | 384 GB RAM, 8 cores

I tried many variants to make it stable: the forceLiterals, forceSelectOrder and forceNestedLoop hints; custom plan guides; additional indexes; frequent statistics updates; upgrading SQL Server and the AX kernel; DBCC FREEPROCCACHE; and so on (some of them are not best practice, but there was no other way). I ended up with the following solution, which I would like to share. It is now implemented at 5 customers that have a similar proportion of data in the InventSum and InventDim tables.

Going back to the solution: as you know, the "License plate" field is very selective, and in most cases the related queries should always use the InventDim(LicensePlateIdIdx) index. In reality, SQL Server often does not use it, especially when a range on "Invent status" is present in the where clause. SQL Server picks a bad plan, and query execution can take more than 5 minutes.
So I had to adjust the findSum* methods by adding the following statements for the "License plate" case, after the first condition if (_inventDimParm.InventSerialIdFlag && _inventDimCriteria.InventSerialId).
else if (_inventDimParm.LicensePlateFlag && _inventDimCriteria.LicensePlateId)
{
        switch (_sumFields)
        {
            case InventSumFields::Financial:
                #inventDimSelect(inventDim,_inventDimCriteria,_inventDimParm,forcePlaceholders tableId from,LicensePlateIdIdx)
                join sum(PostedQty),sum(PostedValue)
                from inventSum
                    index hint ItemDimIdx
                    where inventSum.InventDimId == inventDim.InventDimId
                       && inventSum.ItemId      == _itemId
                       && inventSum.Closed      == NoYes::No;
                break;

            case InventSumFields::Physical:
                #inventDimSelect(inventDim,_inventDimCriteria,_inventDimParm,forcePlaceholders tableId from,LicensePlateIdIdx)
                join sum(PostedQty),sum(PostedValue),sum(Received),sum(Deducted),sum(Picked),sum(Registered)
                from inventSum
                    index hint ItemDimIdx
                    where inventSum.InventDimId == inventDim.InventDimId
                       && inventSum.ItemId      == _itemId
                       && inventSum.Closed      == NoYes::No;
                break;

            default:
                #inventDimSelect(inventDim,_inventDimCriteria,_inventDimParm,forcePlaceholders tableId from,LicensePlateIdIdx)
                join
                #inventSumFields
                from inventSum
                    index hint ItemDimIdx
                    where inventSum.InventDimId == inventDim.InventDimId
                       && inventSum.ItemId      == _itemId
                       && inventSum.Closed      == NoYes::No;
        }
}

I put the InventDim table first, as Microsoft did for serial numbers. Now let's look at the results.

Based on an external temporary log table, I determined that more than 85% of all queries executed through these functions have a license plate id populated as a range. Based on our traces after the new change was implemented (the change is enabled by a new check box), the execution time of a call dropped below 1 millisecond.
I created a few jobs and classes to test this approach; for instance, one of them is a simple job that runs calls through a list of items and dimension sets. I compared both functions, standard and custom. Before each test run I cleared the SQL plan cache with DBCC FREEPROCCACHE.
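A simplified version of such a test job might look like the sketch below. The item id and license plate id are placeholder values; substitute data that exists in your environment, and note that the findSum signature shown here follows the standard AX 2012 pattern but should be verified against your application:

```xpp
static void testFindSumLicensePlate(Args _args)
{
    InventDim     inventDimCriteria;
    InventDimParm inventDimParm;
    InventSum     inventSum;
    ItemId        itemId = '1000';   // placeholder item id
    int           startTime;

    // Build the dimension criteria: only the license plate is populated,
    // which is the selective case this post is about.
    inventDimCriteria.LicensePlateId = 'LP-000123';   // placeholder value
    inventDimParm.initFromInventDim(inventDimCriteria);

    startTime = WinAPI::getTickCount();

    inventSum = InventSum::findSum(itemId, inventDimCriteria, inventDimParm);

    info(strFmt("Physical: %1, elapsed: %2 ms",
                inventSum.physicalInvent(),
                WinAPI::getTickCount() - startTime));
}
```

Running the same job against the standard and the customized method (with DBCC FREEPROCCACHE in between) gives a like-for-like comparison.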

The standard function took between 4 and 9 minutes, and the custom one always took less than 30 seconds. :)

I ran more and more tests and found that for some customers we still occasionally got bad plans for queries from the last "else" branch of the methods. I tried the forceLiterals keyword, and the calls became almost 2-3 times faster when the functions are called hundreds of thousands of times. forceLiterals does trigger a plan compilation per call, and the statement is quite complex, so it consumes a lot of CPU and memory; but since the "License plate" change reduced the number of "else" executions, there was no big impact on SQL CPU.
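For illustration, forceLiterals is applied directly on the select statement; it makes the AX kernel send literal values instead of placeholders, so SQL Server compiles a plan for the actual parameter values rather than reusing a sniffed one. This is only a sketch, not the exact code of the "else" branch:

```xpp
// Sketch only: the real statement also joins InventDim via the
// #inventDimSelect macro. forceLiterals trades a compile per call
// for a plan that matches the actual parameter values.
select forceLiterals sum(PostedQty), sum(Received), sum(Deducted)
    from inventSum
    where inventSum.ItemId == _itemId
       && inventSum.Closed == NoYes::No;
```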

And you think that is it? 😊 Nope... a few months after the fix we still had to create about 2-3 plan guides to cover some bad plans coming from that last "else" statement.
But since then we have not faced the "expensive" queries generated by these functions.
I hope this post is helpful for you; do not hesitate to ask for my assistance if you have any questions or would like to share your results.
Please test it carefully before implementing (different customers have different sets of data, issues and solutions 😊).

