Business central blog

Performance of Number Series and Number Sequence in Cloud. Detailed Analysis.

This is a continuation of the discussion of Number Series and Number Sequence. This time we will be testing in Cloud Business Central, where we will encounter unexpected results and try to understand the reasons. We will also analyze the underlying SQL statements.
Part 1

Local performance measurements

Important: The test was performed in a local Docker environment with a Sandbox BC23 US image. Computer specifications can be found in the first article.
First of all, I would like to thank Duilio Tacconi and Marcus Nordlund for their leading questions after the first article was published. I probably would have been too lazy to develop this topic further, but their interest inspired me.
So, let's remind ourselves of the old measurements taken on my local Sandbox Docker environment, i.e. with my computer acting as the server.
In addition, I added a new test that uses a simple Number Increment in combination with FindLast. This is a common use case: we need to add a record with a new Entry No. or Line No. to a table with existing records. I also apply a filter in this test, in order to bypass the optimization that kicks in when we loop through several thousand FindLast calls on the same table. In practice this often happens when we add a SalesLine filtered by Document Type and Document No.

local procedure TestNumberIncrementFindLast()
var
	ListOfText: List of [Text];
	StartDateTime: DateTime;
	DummyEntryNo: Integer;
	i: Integer;
begin
	if not IsEntryTableReady then
		Error('You must Init Entry Table before you go.');

	StartDateTime := CurrentDateTime();
	for i := 1 to NumberOfIterations do begin
		// The changing filter value bypasses the FindLast optimization described above
		DummyEntryNo := SimulateGetNextEntryNo(i - 1);
		// Use the loop counter so the loop cannot be optimized away
		ListOfText.Add(Format(i));
	end;
	Message('Number increment: %1 iterations took %2.', NumberOfIterations, CurrentDateTime() - StartDateTime);
end;

local procedure SimulateGetNextEntryNo(CurrentCounter: Integer): Integer
var
	EntryTable: Record "NSP Entry Table";
begin
	EntryTable.SetRange("Entry No.", CurrentCounter);
	if EntryTable.FindLast() then
		exit(EntryTable."Entry No." + 1);
	exit(1);
end;
Moreover, I think it's a good idea to add an Auto Increment test. Auto Increment is usually used to increment a single numeric primary key value, but in this case you can't control or reset the number from Business Central.

Unfortunately, the Auto Increment property can't be measured in isolation from Insert, so I'll estimate it roughly as the difference between inserting 100,000 records with Auto Increment and without.

local procedure TestAutoIncrementInsertRecord()
var
	EntryTable2: Record "NSP Entry Table2";
	StartDateTime: DateTime;
	i: Integer;
begin
	if not IsEntryTableReady then
		Error('You must Init Entry Table before you go.');

	StartDateTime := CurrentDateTime();
	for i := 1 to NumberOfIterations do begin
		EntryTable2.Init();
		// 0 in an AutoIncrement field lets SQL Server assign the next value on Insert
		EntryTable2."Entry No." := 0;
		EntryTable2.Description := 'test';
		EntryTable2.Insert();
	end;
	Message('Auto Increment on Insert: %1 iterations took %2.', NumberOfIterations, CurrentDateTime() - StartDateTime);
end;
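The Auto Increment test relies on a table whose primary key field has the AutoIncrement property. For reference, here is a minimal sketch of what such a table definition could look like; the object ID and field lengths are my assumptions, not taken from the original source:

```al
table 50102 "NSP Entry Table2"
{
    fields
    {
        field(1; "Entry No."; Integer)
        {
            // SQL Server generates the value; AL cannot control or reset it
            AutoIncrement = true;
        }
        field(2; Description; Text[100]) { }
    }
    keys
    {
        key(PK; "Entry No.") { Clustered = true; }
    }
}
```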
Of course, the results of these tests can't be perfectly clean, but let's look at them nonetheless.
Here are our old measurements with new tests:

Cloud performance measurements

Important: The test was performed in Cloud Sandbox/Production US Business Central 23.4 (Platform 23.0.16919.0 + Application 23.4.15643.15715)
But what kind of results will we get in the Cloud? I basically expected it to be a little slower, nothing more. To measure in Cloud Business Central I used Telemetry with a KQL query. Event RT0018 is used for analyzing long-running AL methods.

traces
| where timestamp > ago(1h)
| where customDimensions.eventId == 'RT0018'
| where customDimensions.clientType == "WebClient"
| where customDimensions.environmentName == "DEV"
| where customDimensions.companyName == "DEV"
| project
    timestamp,
    companyName = customDimensions.companyName,
    alMethod = customDimensions.alMethod,
    executionTimeInMs = toreal(totimespan(customDimensions.executionTime)) / 10000,
    sqlExecutes = customDimensions.sqlExecutes,
    sqlRowsRead = customDimensions.sqlRowsRead
What's unexpected about this result? Well, there are two main points.
  • All tests are much slower on the Cloud environment.
  • Tests that use Number Sequence are the slowest in Cloud.
Let's get to the bottom of this. Why is the result so different? Let's start with the first point: why is Cloud so much slower than the Local environment? Here the answer is actually very simple: Cloud Business Central allocates less CPU and less memory, and has a slower disk than my local computer.
Logically, you shouldn't expect every Cloud Business Central environment to have 24 cores, 64 GB of RAM, and the fastest NVMe storage. Although, why not offer such an option, at least for a fee?
In any case, I assume that resources for Cloud Business Central are most likely allocated dynamically depending on the load, number of users, time of day and phase of the moon :)
But why is Number Sequence the slowest in the Cloud? This is quite a tricky question, so let's try to analyze it. Let's start by looking at the number of SQL Rows Read and SQL Executes in the Cloud environment; you can see them via the Visual Studio Code debugger or Telemetry. Since I had already used Telemetry to measure performance, it was convenient to see these numbers there right away.
As expected, No. Series where gaps are not allowed gets the highest number of SQL executes and SQL rows read. But why the hell are both Number Sequence methods the slowest, with almost no SQL rows read?

SQL statements analysis

In order to understand a little bit of what is going on, I suggest looking at what SQL statements are being sent from AL to SQL Server.
For No. Series where gaps are not allowed there will be two main SQL statements:

SELECT TOP (1) "309"."timestamp",
       "309"."Series Code",
       "309"."Line No_",
       "309"."Starting Date",
       "309"."Starting No_",
       "309"."Ending No_",
       "309"."Warning No_",
       "309"."Increment-by No_",
       "309"."Last No_ Used",
       "309"."Open",
       "309"."Last Date Used",
       "309"."Allow Gaps in Nos_",
       "309"."Sequence Name",
       "309"."Starting Sequence No_",
       "309"."Series",
       "309"."Authorization Code",
       "309"."Authorization Year",
       "309"."$systemId",
       "309"."$systemCreatedAt",
       "309"."$systemCreatedBy",
       "309"."$systemModifiedAt",
       "309"."$systemModifiedBy"
FROM "db_bcprodus_t81151820_20240123_07045562_df96".dbo."DEV$No_ Series Line$437dbf0e-84ff-417a-965d-ed2bb9650972" "309" WITH(UPDLOCK)
WHERE ("309"."Series Code"=@0
       AND "309"."Starting Date"=@1
       AND "309"."Open"=@2)
ORDER BY "Series Code" ASC,
         "Starting Date" ASC,
         "Starting No_" ASC,
         "Line No_" ASC
OPTION(OPTIMIZE FOR UNKNOWN, FAST 50)

UPDATE "db_bcprodus_t81151820_20240123_07045562_df96".dbo."DEV$No_ Series Line$437dbf0e-84ff-417a-965d-ed2bb9650972"
SET "Last No_ Used"=@0,
    "$systemModifiedAt"=@1,
    "$systemModifiedBy"=@2
OUTPUT inserted."timestamp"
WHERE ("Series Code"=@3
       AND "Line No_"=@4
       AND "timestamp"=@5)
Literally, we read the No. Series Line from the database with UPDLOCK in order to update the Last No. Used field. So far, so clear.
For Number Increment with FindLast and a filter the result is also clear. We just read a row from SQL; since we don't specify LockTable or ReadIsolation, it defaults to READUNCOMMITTED.

SELECT "81751"."timestamp",
       "81751"."Entry No_",
       "81751"."Description",
       "81751"."$systemId",
       "81751"."$systemCreatedAt",
       "81751"."$systemCreatedBy",
       "81751"."$systemModifiedAt",
       "81751"."$systemModifiedBy"
FROM "db_bcprodus_t81151820_20240123_07045562_df96".dbo."DEV$NSP Entry Table$53b8213d-c4d8-429c-b4cf-168ec4470685" "81751" WITH(READUNCOMMITTED)
WHERE ("81751"."Entry No_"=@0)
OPTION(OPTIMIZE FOR UNKNOWN, FAST 50)
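If you prefer to state the isolation level explicitly rather than rely on the default, recent AL versions let you set it on the record before reading. A minimal sketch, my own illustration rather than part of the original test suite:

```al
local procedure GetNextEntryNoExplicit(CurrentCounter: Integer): Integer
var
    EntryTable: Record "NSP Entry Table";
begin
    // Make the default explicit: a dirty read that takes no locks
    EntryTable.ReadIsolation := IsolationLevel::ReadUncommitted;
    EntryTable.SetRange("Entry No.", CurrentCounter);
    if EntryTable.FindLast() then
        exit(EntryTable."Entry No." + 1);
    exit(1);
end;
```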
No. Series where gaps are allowed and Number Sequence (which are almost the same):

SELECT NEXT VALUE
FOR dbo.[$SEQ$NumberSequence$EA9B2114-5ECB-40A6-AF53-124F75ED0B92$DEV]

SELECT NEXT VALUE
FOR dbo.[$SEQ$NumberSequence$NSP_NumberSequence$DEV]
This is where it gets interesting: with Number Sequence we don't use a table to get the number at all. We are actually creating a sequence object on SQL Server! This object is specifically designed to generate numeric sequences across tables, and NEXT VALUE FOR is used to retrieve the next number.
So NumberSequence.Insert(NumberSequenceLbl) is converted to a CREATE SEQUENCE SQL statement:

CREATE SEQUENCE dbo.[$SEQ$NumberSequence$NSP_NumberSequence$DEV] AS BIGINT
START WITH 0 INCREMENT BY 1 NO CACHE
As far as I know, the NEXT VALUE FOR statement depends on CPU and disk speed, perhaps on the disk more than anything else. Which means that Cloud Business Central sits on a very slow disk. The hardware is so weak that even Number Sequence is slower than a SQL UPDATE, lol :D
Just imagine, 3 ms for each next number! Now it's clear why it took 300 s for 100,000 numbers: 3 ms * 100,000 = 300,000 ms = 300 s.

How to improve Number Sequence performance?

So, the main question is: how do we improve the performance of Number Sequence in a Cloud environment? The first thing that comes to mind is better hardware for Cloud Business Central; maybe Microsoft will offer this possibility in the future. But there is another way that does not require any additional hardware.
As you and I have learned, Business Central creates the Number Sequence in SQL Server with the NO CACHE parameter. Here is the documentation for this parameter:
From this information we learn that we can reduce disk IO by using a cache. Of course, this is not a panacea: in case of an unexpected shutdown of the SQL server we may get gaps. But let's be realistic, how often do Microsoft servers lose power or just shut down unexpectedly? This is very rare, so why not use this option?
Currently, NumberSequence in Business Central does not allow you to control this parameter, so I decided to get around this limitation by installing Business Central OnPrem 2023 Wave 2.
After installation, I ran a direct query against SQL Server to create a Number Sequence, bypassing the Business Central interface. I decided to create the Number Sequence with CACHE 100.

CREATE SEQUENCE dbo.[$SEQ$NumberSequence$PERFORMANCETESTCACHE$CRONUS USA, Inc_] AS BIGINT
START WITH 0 INCREMENT BY 1 CACHE 100
This query created an object in the Sequences folder on SQL Server; here is a screenshot of that object from my local server. As we can see, the cache parameter is set to 100.
And yes, after that I was able to access that Number Sequence directly, without calling NumberSequence.Insert() in Business Central. On my fast local computer, it cut the time to generate 100,000 numbers in half. I am sure this setting would speed up Number Sequence on a slow disk many times over!
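For completeness, a minimal sketch of how reading such a pre-created sequence from AL could look. It assumes the PERFORMANCETESTCACHE sequence created above already exists in SQL Server, so NumberSequence.Insert() is never called:

```al
local procedure GetNextCachedNumber(): BigInteger
begin
    // The sequence was created directly in SQL Server with CACHE 100,
    // so here we only read the next value from it.
    exit(NumberSequence.Next('PERFORMANCETESTCACHE'));
end;
```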
So I'm pretty sure we need additional Cache parameter for NumberSequence.Insert() in Business Central.

Summary

  • Hardware strongly influences the result of performance measurements. Sometimes even unexpectedly.
  • Cloud Business Central is often much slower than your local server.
  • Cloud Business Central most likely has a rather weak hard disk.
  • We can improve the performance of Number Sequence with better hardware for Cloud Business Central
  • We can improve the performance of Number Sequence with an additional Cache parameter
  • We need the ability to control the Cache parameter in Business Central, on NumberSequence.Insert()
  • For generating numbers, the fastest way is still Number Increment, but Auto Increment (for record insert) isn't bad either
  • Each number generation method serves its own purpose and should be applied accordingly.
  • Many simple things hide a depth of problems and mysteries
Important: Be careful using the No Gaps method, any locks can slow down performance a lot.
Important: Cloud environments use Azure SQL, which is not the same as SQL Server
Important: OnPrem BC can use Azure SQL or SQL Server

Source Code

March 4, 2024