Some of you may already know the limitation of 2 billion entries in a single table / partition.
Another hint regarding this limitation is alert ID 17, '<table> contains xxx records'; the default threshold of this alert is 300,000,000 rows. The limitation applies to BSoH / S/4HANA as well as BWoH / BW/4HANA, because it is a general HANA limitation, but each of them has its own distribution rules. So I will split this blog into two parts: BWoH and BSoH.
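To get a first overview yourself, you can check the record counts per column-store partition in the monitoring view M_CS_TABLES. A minimal sketch, using the alert threshold of 300,000,000 rows as an example filter:
-- list column-store partitions that already exceed the alert threshold (one row per partition)
SELECT SCHEMA_NAME, TABLE_NAME, PART_ID, RECORD_COUNT
FROM "SYS"."M_CS_TABLES"
WHERE RECORD_COUNT > 300000000
ORDER BY RECORD_COUNT DESC;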
First of all, you have to know that you must execute manual steps to create new partitions and distribute the data! I have heard very often that customers think this is an automatic task.
There is one exception: dynamic range partitioning. But even this must be implemented manually.
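Just to illustrate what such a manual implementation can look like, here is a minimal sketch of a range partitioning with a dynamic OTHERS partition. Schema, table, column and threshold are placeholders, and the exact syntax depends on your HANA revision, so check the SQL reference for your release:
-- partition an existing table by range with a dynamic OTHERS partition (placeholders only)
ALTER TABLE "<SCHEMA>"."<TABLE_NAME>"
PARTITION BY RANGE ("<COLUMN>")
(PARTITION 0 <= VALUES < 1000000, PARTITION OTHERS DYNAMIC THRESHOLD 10000000);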
Test Environment:
VMware 6.0
11xCPU: Intel(R) Xeon(R) CPU E7-8880 v3 @ 2.30GHz
22vCPUs
SLES for SAP 12 SP1
HANA 1.0 SPS12 Rev. 122.11
Type | SAP Note
BWoH | 1908075 – BW on SAP HANA: Table placement and landscape redistribution
BWoH | 2334091 – BW/4HANA: Table Placement and Landscape Redistribution
BWoH | 2019973 – Handling Very Large Data Volumes in SAP BW on SAP HANA
BSoH | 1899817 – Business Suite on SAP HANA database: table placement
If you face such an issue in a BW system, first check whether you can solve it in the application layer, i.e. whether you can use the partitioning options in the backend system described in SAP note 2019973.
Only when this is no longer possible do you have to perform the following manual steps.
1. Check table distribution rules / table placement
Follow the steps in SAP note 1908075 for BWoH / 2334091 for BW/4HANA.
Download the attachment and choose your script regarding your HANA revision and topology.
In this example I have chosen the most widespread variant: HANA 1.0 SPS12 scale-up.
Normally this step has already been performed at the time of installation or migration, but some thresholds may have changed over time. So first check the current parameters and thresholds and compare them with the values from the script:
ALTER SYSTEM ALTER CONFIGURATION ('global.ini','system')
SET ('table_placement','same_num_partitions') = 'true' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini','system')
SET ('table_placement','max_rows_per_partition') = '1500000000' WITH RECONFIGURE;
SELECT
SCHEMA_NAME,GROUP_TYPE,MIN_ROWS_FOR_PARTITIONING,
INITIAL_PARTITIONS,REPARTITIONING_THRESHOLDS,
LOCATION,DYNAMIC_RANGE_PARTITIONING
FROM
"SYS"."TABLE_PLACEMENT";
If all the values match the values from the downloaded SQL script, there is nothing for you to do.
Otherwise, replace $$PLACEHOLDER with the SAP schema and execute the script with a user with sufficient privileges.
The current rules are also explained in the attached PDF in the same folder as your SQL script:
InfoCube fact tables are ROUNDROBIN partitioned on first level (sap.bw.cube).
DSO tables are HASH partitioned on first level (sap.bw.dso). They have 1 partition on first level regardless of the number of records in the tables – except for tables with more than 1.5 billion records, see remarks below. Tables for InfoCubes and DSOs are located on the master node.
InfoObjects tables (sap.bw.iobj) are not partitioned, except for InfoObjects with high cardinality. Those tables are HASH partitioned on first level. All other InfoObject tables are not partitioned i.e. they do not have a partitioning specification. InfoObject tables are located on the master node.
PSA tables (sap.bw.psa) and errorstack tables (sap.bw.dtp) are HASH partitioned on first level. They have 1 partition on first level regardless of the number of records in the tables – except for tables with more than 1.5 billion records, see remarks below. PSA and errorstack tables are located on the master node.
SAP HANA DataSources (sap.bw.dsrc) can have an insert and an upsert table. The insert table is dynamic range and the upsert table is HASH partitioned on first level.
Temporary BW tables (sap.bw.temp, sap.bw.trex) and OpenHub tables (sap.bw.openhub) are not partitioned i.e. they do not have a partitioning specification. They are located on the master node.
The number of partitions on first level according to the rules above is only set when the table is initially created; the number of partitions on first level is not adapted dynamically.
The number of first level partitions of a table does not exceed the number of nodes that are potential valid locations for that table. This rule is disregarded if a higher number of first level partitions is required to avoid first level partitions with more than 1.5 billion records (global.ini, section [table_placement], max_rows_per_partition = 1500000000).
=> this means a split will not be performed before the table has reached 1.5 billion rows.
=> in a scale-up system all tables are created without partitioning, unless the table was already above 1.5 billion records at the time of the migration – SUM will take care of that.
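If you want to verify how a specific table is currently partitioned, you can look at its partition specification. A minimal sketch, assuming the SAP schema is SAPSR3 (replace schema and table name with your own):
-- show the current first-level partition specification of a single table
SELECT SCHEMA_NAME, TABLE_NAME, PARTITION_SPEC
FROM "SYS"."TABLES"
WHERE SCHEMA_NAME = 'SAPSR3' AND TABLE_NAME = '<TABLE_NAME>';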
2. Check table grouping
If you are on an old release or your migration was not completed correctly, some tables may have a missing table group type.
select * from "SYS"."TABLE_GROUPS" where TABLE_NAME like '%<TABLE_NAME>%';
When the group type is missing, you have to implement the latest notes for report RSDU_TABLE_CONSISTENCY (currently: 2264053 – RSHDB: SAP HANA Tools 08 for BW) and execute it to classify the tables.
Usage: 2569097 – How to use of report RSDU_TABLE_CONSISTENCY [VIDEO]
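To get an overview of which large tables are not classified yet, you can combine the table sizes with the grouping information. A minimal sketch, assuming the SAP schema is SAPSR3; the join and the filter are for illustration only:
-- list the largest tables in the SAP schema without an assigned group type
SELECT t.SCHEMA_NAME, t.TABLE_NAME, t.RECORD_COUNT
FROM "SYS"."M_TABLES" t
LEFT JOIN "SYS"."TABLE_GROUPS" g
ON g.SCHEMA_NAME = t.SCHEMA_NAME AND g.TABLE_NAME = t.TABLE_NAME
WHERE t.SCHEMA_NAME = 'SAPSR3'
AND (g.GROUP_TYPE IS NULL OR g.GROUP_TYPE = '')
ORDER BY t.RECORD_COUNT DESC;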
3. Repartitioning plan
With the SQL statement above you checked the table group type; in the output you will also find the group name.
This is the name we have to use for the reorg plan => the steps can be found at the end of SAP note 2019973.
Note: In case of a scale-out system you can skip this step and execute a table redistribution.
Make sure that you execute these steps in one SQL session with the schema owner or a user with sufficient privileges on the tables and content!
1. Generate a reorg plan
call reorg_generate(6,'GROUP_NAME=><your-BW-group-name>');
2. Check the generated plan
select * from reorg_plan;
If there is no output, you have to check whether your table is above the defined threshold (max_rows_per_partition). I have adjusted the value so that my plan results in 8 partitions (current number of partitions: 1).
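For such a test you can temporarily lower the threshold with the same parameter syntax as shown in section 1. The value below is only an example and should not stay in a productive system; the value from the SAP note script is 1,500,000,000:
-- temporarily lower the repartitioning threshold so a smaller table shows up in the plan
ALTER SYSTEM ALTER CONFIGURATION ('global.ini','system')
SET ('table_placement','max_rows_per_partition') = '200000000' WITH RECONFIGURE;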
3. Execute the plan
call reorg_execute(?);
Check the free resources of the system (data and log disk space, CPU and memory) and the configured parallelism.
Adjust the parameters (see the sketch after the parameter list) or execute the plan only in a time frame without high business load.
indexserver.ini => partitioning => split_threads => 16 (default)
indexserver.ini => partitioning => bulk_load_threads => 4 (default)
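A minimal sketch of how such an adjustment can look, e.g. to reduce the split parallelism during business hours; the value 8 is only an example, revert it to the default afterwards:
-- reduce the number of threads used for table split operations
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','SYSTEM')
SET ('partitioning','split_threads') = '8' WITH RECONFIGURE;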
4. Check the status of the plan execution
select * from reorg_steps
where reorg_id = <value returned by reorg_execute before>;
select * from reorg_steps where reorg_id = 1;
Normally the reorg_id should be 1. Now you can monitor the process via this statement and the running sessions.
You will see the splitting per column, parallelized as configured via split_threads. The number of 'ZipResultJob' threads correlates with the value of the parameter bulk_load_threads.
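One way to watch these threads while the plan is running is the monitoring view M_SERVICE_THREADS. The filters below are only an assumption of what to look for, so adjust them to what you actually see on your system:
-- show long-running worker threads of the current split operation
SELECT HOST, THREAD_TYPE, THREAD_METHOD, THREAD_DETAIL, THREAD_STATE, DURATION
FROM "SYS"."M_SERVICE_THREADS"
WHERE THREAD_TYPE LIKE '%JobWorker%' OR THREAD_METHOD LIKE '%ZipResultJob%'
ORDER BY DURATION DESC;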
In my example the splitting operation (steps 1-8) took 50 minutes for 1.8 billion rows (23 GB of data).
Note: Be aware of this limitation and of the tasks needed to split the table before it becomes an issue. If you do not do this, no more inserts are possible once the limit is reached, and you will get a lot of errors in the backend.