
Estimating Disk and Memory Requirements


This appendix helps you estimate disk and memory requirements. This appendix contains the following sections: Understanding How Analytic Services Stores Data, Determining Disk Space Requirements, and Estimating Memory Requirements.

This appendix uses a worksheet approach to help you keep track of the many components that you calculate. If you are using the printed version of this book, you can photocopy the worksheets. Otherwise, you can reproduce the worksheets on your own paper. Labels, such as DA and MA, help you keep track of the various calculated disk and memory component values.

Note: The calculations in this appendix apply only to block storage databases.

Understanding How Analytic Services Stores Data

To size a database, you must understand the units of storage that Analytic Services uses. This discussion assumes that you are familiar with basic concepts such as dimensions and members, dense and sparse dimensions, and data blocks.

An Analytic Services database consists of many different components. In addition to an outline file and a data file, Analytic Services uses several types of files and memory structures to manage data storage, calculation, and retrieval operations.

Table 1 describes the major components that you must consider when you estimate the disk and memory requirements of a database. In the table, "Yes" means the indicated type of storage is relevant; "No" means it is not.


Table 1: Storage Units Relevant to Calculation of Disk and Memory Requirements

Outline (Disk: Yes; Memory: Yes)
A structure that defines all elements of a database. The number of members in an outline determines the size of the outline.

Data files (Disk: Yes; Memory: Yes)
Files in which Analytic Services stores data values in data blocks. Named essxxxxx.pag, where xxxxx is a number. Analytic Services increments the number, starting with ess00001.pag, on each disk volume. Memory is also affected because Analytic Services copies the files into memory.

Data blocks (Disk: Yes; Memory: Yes)
Subdivisions of a data file. Each block is a multidimensional array that represents all cells of all dense dimensions relative to a particular intersection of sparse dimensions.

Index files (Disk: Yes; Memory: Yes)
Files that Analytic Services uses to retrieve data blocks from data files. Named essxxxxx.ind, where xxxxx is a number. Analytic Services increments the number, starting with ess00001.ind, on each disk volume.

Index pages (Disk: Yes; Memory: Yes)
Subdivisions of an index file. Index pages contain index entries that point to data blocks. The size of index pages is fixed at 8 KB.

Index cache (Disk: No; Memory: Yes)
A buffer in memory that holds index pages. Analytic Services allocates memory to the index cache at startup of the database.

Data file cache (Disk: No; Memory: Yes)
A buffer in memory that holds data files. When direct I/O is used, Analytic Services allocates memory to the data file cache during data load, calculation, and retrieval operations, as needed. Not used with buffered I/O.

Data cache (Disk: No; Memory: Yes)
A buffer in memory that holds data blocks. Analytic Services allocates memory to the data cache during data load, calculation, and retrieval operations, as needed.

Calculator cache (Disk: No; Memory: Yes)
A buffer in memory that Analytic Services uses to create and track data blocks during calculation operations.



Determining Disk Space Requirements

Analytic Services uses disk space for its server software and for each database. Before estimating disk storage requirements for a database, you must know how many dimensions the database includes, the sparsity and density of the dimensions, the number of members in each dimension, and how many of the members are stored members.

To calculate the disk space required for a database, perform these tasks:

  1. Calculate the factors identified in Calculating the Factors To Be Used in Sizing Disk Requirements.

  2. Use the process described in Estimating Disk Space Requirements for a Single Database to calculate the space required for each component of a single database. If a server contains more than one database, you must perform calculations for each database.

  3. Use the procedure outlined in Estimating the Total Analytic Server Disk Space Requirement to calculate the final estimate for the server.

Note: The database sizing calculations in this chapter assume an ideal scenario with an optimum database design and unlimited disk space. The amount of space required is difficult to determine precisely because most multidimensional applications are sparse.

Calculating the Factors To Be Used in Sizing Disk Requirements

Before estimating disk space requirements for a database, you must calculate the factors to be used in calculating the estimate. Later in the chapter you use these values to calculate the components of a database. For each database, you add together the sizes of its components.

Table 2 lists the sections that provide instructions to calculate these factors. Go to the section indicated, perform the calculation, then write the calculated value in the Value column.


Table 2: Factors Affecting Disk Space Requirements of a Database

Database Sizing Factor             Label   Value
Potential Number of Data Blocks    DA      __________
Number of Existing Data Blocks     DB      __________
Size of Expanded Data Block        DC      __________
Size of Compressed Data Block      DD      __________



Potential Number of Data Blocks

The potential number of data blocks is the maximum number of data blocks possible in the database.

If the database is already loaded, you can see the potential number of blocks on the Statistics tab of the Database Properties dialog box of Administration Services.

If the database is not already loaded, you must calculate the value.

To determine the potential number of data blocks, assume that data values exist for all combinations of stored members.

  1. Using Table 3 as a worksheet, list each sparse dimension and its number of stored members. If there are more than seven sparse dimensions, list the dimensions elsewhere and include all sparse dimensions in the calculation.

    The following types of members are not stored members: Label Only members, shared members, and Dynamic Calc members.

  2. Multiply the number of stored members of the first sparse dimension (line a.) by the number of stored members of the second sparse dimension (line b.) by the number of stored members of the third sparse dimension (line c.), and so on. Write the resulting value to the cell labeled DA in Table 2.
    a * b * c * d * e * f * g (and so on) 
    = potential number of blocks
    


    Table 3: List of Sparse Dimensions with Numbers of Stored Members

        Sparse Dimension Name        Number of Stored Members
    a.  _____________________        ________________________
    b.  _____________________        ________________________
    c.  _____________________        ________________________
    d.  _____________________        ________________________
    e.  _____________________        ________________________
    f.  _____________________        ________________________
    g.  _____________________        ________________________



Example

The Sample Basic database contains the following sparse dimensions: Product, with 19 stored members, and Market, with 25 stored members.

Therefore, there are 19 * 25 = 475 potential data blocks.
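For illustration, the potential-block arithmetic can be scripted. The following Python sketch is not part of Analytic Services; the function name and argument structure are illustrative only.

from math import prod

def potential_blocks(sparse_stored_members):
    """Multiply the stored-member counts of all sparse dimensions (DA)."""
    return prod(sparse_stored_members)

# Sample Basic: Product has 19 stored members, Market has 25
print(potential_blocks([19, 25]))  # 475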

Number of Existing Data Blocks

As compared with the potential number of blocks, the term existing blocks refers to those data blocks that Analytic Services actually creates. For Analytic Services to create a block, at least one value must exist for a combination of stored members from sparse dimensions. Because many combinations can be missing, the number of existing data blocks is usually much less than the potential number of data blocks.

To see the number of existing blocks for a database that is already loaded, look for the number of existing blocks on the Statistics tab of the Database Properties dialog box of Administration Services. Write the value in the cell labeled DB in Table 2.

If the database is not already loaded, you must estimate a value.

To estimate the number of existing data blocks, perform these tasks:

  1. Estimate a database density factor that represents the percentage of sparse dimension stored-member combinations that have values.

  2. Multiply this percentage by the potential number of data blocks and write the resulting number of existing blocks to the cell labeled DB in Table 2.
    number of existing blocks 
    = estimated density 
    * potential number of blocks

Example

For example, assuming 100,000,000 potential data blocks and an estimated density of 5% (.05), the estimated number of existing blocks is .05 * 100,000,000 = 5,000,000 blocks.
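As a minimal sketch of this estimate (the density value is an assumption you supply, and the function name is illustrative):

def existing_blocks(potential_blocks, estimated_density):
    """Estimate existing blocks (DB) as a density fraction of potential blocks."""
    return int(potential_blocks * estimated_density)

# 100,000,000 potential blocks at an estimated 5% density
print(existing_blocks(100_000_000, 0.05))  # 5,000,000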

Size of Expanded Data Block

The potential, expanded (uncompressed) size of each data block is based on the number of cells in a block and the number of bytes used for each cell. The number of cells in a block is based on the number of stored members in the dense dimensions. Analytic Services uses eight bytes to store each intersecting value in a block.

To see the size of an expanded data block for a database that is already loaded, look on the Statistics tab of the Database Properties dialog box of Administration Services.

If the database is not already loaded, you must estimate the value.

To determine the size of an expanded data block, perform these tasks:

  1. Using Table 4 as a worksheet, enter each dense dimension and its number of stored members. If there are more than seven dense dimensions, list the dimensions elsewhere and include all dense dimensions in the calculation.

    The following types of members are not stored members: Label Only members, shared members, and Dynamic Calc members.

  2. Multiply the number of stored members of the first dense dimension (line a) by the number of stored members of the second dense dimension (line b) by the number of stored members of the third dense dimension (line c), and so on, to determine the total number of cells in a block.
    a * b * c * d * e * f * g (and so on) 
    = the total number of cells 

  3. Multiply the resulting number of cells by 8 bytes to determine the expanded block size. Write the resulting value to the cell labeled DC in Table 2.

    (Total number of cells) * 8 bytes per cell
    = expanded block size


    Table 4: Determining the Size of a Data Block

        Dense Dimension Name         Number of Stored Members
    a.  ____________________         ________________________
    b.  ____________________         ________________________
    c.  ____________________         ________________________
    d.  ____________________         ________________________
    e.  ____________________         ________________________
    f.  ____________________         ________________________
    g.  ____________________         ________________________



Example

The Sample Basic database contains the following dense dimensions: Year, with 12 stored members; Measures, with 8 stored members; and Scenario, with 2 stored members.

Perform the following calculations to determine the potential size of a data block in Sample Basic:

12 * 8 * 2 = 192 data cells 

192 data cells 
* 8 bytes 
= 1,536 bytes (potential data block size) 
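The same calculation in Python (an illustrative sketch; the function name is not part of Analytic Services):

from math import prod

def expanded_block_bytes(dense_stored_members):
    """Cells in a block times 8 bytes per cell (DC)."""
    return prod(dense_stored_members) * 8

# Sample Basic: Year (12), Measures (8), Scenario (2) stored members
print(expanded_block_bytes([12, 8, 2]))  # 1536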
 

Size of Compressed Data Block

Compression affects the actual disk space used by a data file. The four compression types (bitmap, run-length encoding (RLE), zlib, and index value) affect disk space differently. For a comprehensive discussion of data compression unrelated to estimating size requirements, see Data Compression.

If you are not using compression or if you have enabled RLE compression, skip this calculation and proceed to Stored Data Files.

Note: Because sparsity also exists within blocks, actual (compressed) block density varies widely from block to block. The calculations in this discussion are for estimation purposes only.

To calculate an average compressed block size when bitmap compression is enabled, perform the following tasks:

  1. Determine an average block density value.

  2. To determine the compressed block size, perform the following calculation and write the resulting block size to the cell labeled DD in Table 2.
    expanded block size * block density 
    = compressed block size
Example

Assume an expanded block size of 1,536 bytes and a block density of 25% (.25):

1,536 bytes 
* .25 
= 384 bytes (compressed block size) 
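A one-line Python equivalent (illustrative only; the block density must be estimated as described above):

def compressed_block_bytes(expanded_block_bytes, block_density):
    """Average compressed block size (DD) under bitmap compression."""
    return expanded_block_bytes * block_density

print(compressed_block_bytes(1536, 0.25))  # 384.0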
 

Estimating Disk Space Requirements for a Single Database

To estimate the disk-space requirement for a database, make a copy of Table 5 or use a separate sheet of paper as a worksheet for a single database. If multiple databases are on a server, repeat this process for each database. Write the name of the database on the worksheet.

Each row of this worksheet refers to a section that describes how to size that component. Perform each calculation and write the results in the appropriate cell in the Size column. The calculations use the factors that you wrote in Table 2.


Table 5: Worksheet for Estimating Disk Requirements for a Database

Database Name: _______________________

Database Component                                       Label   Size
Stored Data Files                                        DE      __________
Index Files                                              DF      __________
Fragmentation Allowance                                  DG      __________
Outline                                                  DH      __________
Work Areas (sum of DE through DH)                        DI      __________
Linked Reporting Objects Considerations, if needed       DJ      __________
Total disk space required for the database
(total of the size values from DE through DJ;
write the result to Table 6)                                     __________



After writing all the sizes in the Size column, add them together to determine the disk space requirement for the database. Add the database name and size to the list in Table 6. Table 6 is a worksheet for determining the disk space requirement for all databases on the server.

Repeat this exercise for each database on the server. After estimating disk space for all databases on the server, proceed to Estimating the Total Analytic Server Disk Space Requirement.

The following sections describe the calculations to use to estimate components that affect the disk-space requirements of a database.

Stored Data Files

The size of the stored database depends on whether or not the database is compressed and the compression method chosen for the database. Analytic Services provides five compression-method options: bitmap, run-length encoding (RLE), zlib, index-value, and none.

Calculating the size of a compressed database is complex, in part because the compression method actually applied can vary from block to block.

For a comprehensive discussion of data compression unrelated to estimating size requirements, see Data Compression. The calculations in this discussion are for estimation purposes only.

The calculation for the space required to store the compressed data files (essxxxxx.pag) uses the following factors: the number of existing blocks, the expanded data block size, and a fixed overhead of 72 bytes per block.

Calculations for No Compression

To calculate database size when the compression option is none, use the following formula:

Number of blocks * (72 bytes + size of expanded data block) 
 

Write the result in the cell labeled DE in Table 5. Proceed to Index Files.

Calculations for Compressed Databases

Because the compression method used can vary per block, the following calculation formulas are, at best, general estimates of the database size.

Bitmap Compression

To estimate database size when the compression option is bitmap, use the following formula:

Number of blocks 
* (72 bytes 
+ size of expanded data block/64) 
 

Write the result in the cell labeled DE in Table 5. Proceed to Index Files.

Index-Value Compression

To estimate database size when the compression option is index value, use the following formula:

Number of blocks 
* (72 bytes 
+ (1.5 * database density * expanded data block size)) 

Write the result in the cell labeled DE in Table 5. Proceed to Index Files.

RLE Compression

To estimate database size when the compression option is RLE, use the formula for calculating Bitmap Compression.

When the compression method is RLE, Analytic Services automatically uses the bitmap or index-value method for a block if it determines that better compression can be gained. Therefore, using the bitmap calculation estimates the maximum size.

Write the result in the cell labeled DE in Table 5. Proceed to Index Files.

zlib Compression

To estimate database size when the compression option is zlib, use the formula for calculating Bitmap Compression.

It is difficult to determine the size of a data block when zlib compression is used; individual blocks can be larger or smaller than blocks compressed with other compression types. Calculating with the bitmap compression formula at least provides an approximation to use for this exercise.

Write the result in the cell labeled DE in Table 5. Proceed to Index Files.
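The per-option formulas above can be collected into one sketch. This Python function is illustrative only; it mirrors the estimates in this section, including the 72-byte per-block overhead and the bitmap fallback for RLE and zlib. The sample block count is an assumption.

def data_file_bytes(num_blocks, expanded_block_bytes, method, density=None):
    """Estimate the total size of the stored data files (DE)."""
    if method == "none":
        per_block = 72 + expanded_block_bytes
    elif method in ("bitmap", "rle", "zlib"):
        # RLE and zlib fall back to the bitmap estimate, as described above
        per_block = 72 + expanded_block_bytes / 64
    elif method == "index-value":
        if density is None:
            raise ValueError("index-value estimate requires a database density")
        per_block = 72 + 1.5 * density * expanded_block_bytes
    else:
        raise ValueError("unknown compression method: " + method)
    return num_blocks * per_block

# Assumed example: 15,000,000 existing blocks, 1,536-byte expanded blocks
print(data_file_bytes(15_000_000, 1536, "bitmap"))  # 1,440,000,000.0
print(data_file_bytes(15_000_000, 1536, "index-value", density=0.25))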

Index Files

The calculation for the space required to store the index files (essxxxxx.ind) uses the following factors: the number of existing blocks and a fixed index-entry size of 112 bytes per block.

To calculate the total size of a database index, including all index files, perform the following calculation. Write the resulting size of the database index to the cell labeled DF in Table 5.

number of existing blocks * 112 bytes = size of database index 
 
Example

Assume a database with 15,000,000 blocks.

15,000,000 blocks 
* 112 
= 1,680,000,000 bytes 
 

Note: If the database is already loaded, you can see the actual index size on the Storage tab of the Database Properties window.

Fragmentation Allowance

If you are using bitmap or RLE compression, a certain amount of fragmentation occurs. The amount of fragmentation is based on individual database and operating system configurations and cannot be precisely predicted.

As a rough estimate, calculate 20% of the compressed database size (value DE from Table 5) and write the result to the cell labeled DG in the same table.

Example

Calculating fragmentation allowance assuming a compressed database size of 5,769,000,000 bytes:

5,769,000,000 bytes 
* .2 
= 1,153,800,000 bytes 
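Both of the preceding estimates are simple enough to script. The following Python sketch (illustrative names; sample values from the examples above) computes the index size (DF) and the fragmentation allowance (DG):

def index_bytes(existing_blocks):
    """Index size (DF): one 112-byte entry per existing block."""
    return existing_blocks * 112

def fragmentation_bytes(compressed_db_bytes):
    """Fragmentation allowance (DG): roughly 20% of the compressed size (DE)."""
    return compressed_db_bytes * 0.20

print(index_bytes(15_000_000))             # 1,680,000,000
print(fragmentation_bytes(5_769_000_000))  # 1,153,800,000.0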
 

Outline

The space required by an outline can have two components: the main area of the outline and, if the outline includes attribute dimensions, an attribute association area.

To estimate the size of the outline file, perform these tasks:

  1. Estimate the main area of the outline by multiplying the number of members by a name-length factor between 350 and 450 bytes.

    If the database includes few aliases or very short aliases and short member names, use a smaller number within this range. If you know that the member names or aliases are very long, use a larger number within this range.

    Because the name-length factor is an estimated average, the following formula provides only a rough estimate of the main area of the outline.

    number of members * name-length factor
    = size of main area of outline

    Note: See Limits for the maximum sizes of member names and aliases.

    For memory space requirements calculated later in this chapter, use the size of the main area of the outline.

  2. For disk space requirements, if the outline includes attribute dimensions, calculate the size of the attribute association area for the database. Calculate the size of this area for each base dimension. Multiply the number of members of the base dimension by the sum of the count of members of all attribute dimensions associated with the base dimension, and then divide by 8.

    Note: Within the count of members, do not include Label Only members and shared members.

    (number of base-dimension members
    * sum of count of attribute-dimension members)/8
    = size of attribute association area for a base dimension

  3. Sum the attribute association areas of each dimension to determine the total attribute association area for the outline.

  4. For the total disk space required for the outline, add together the main outline area and the attribute association area, and write the result of this calculation to the cell labeled DH in Table 5.
    main area of outline + total attribute association area
Example

Assume an outline with 26,000 members, a medium name length, and attribute dimensions associated with two base dimensions.

Perform the following calculations:

  1. Calculate the main area of the outline:
    name-length factor of 400 bytes 
    * 26,000 members 
    = 10,400,000 bytes
    

  2. Calculate the attribute association area for each base dimension. In this example, the two base dimensions with associated attribute dimensions yield areas of 201,250 bytes and 3,750 bytes.

  3. Sum these areas for the total attribute association area for the database:
    201,250 bytes + 3,750 bytes = 205,000 bytes
    

  4. For a total estimate of outline disk space, add the main area of the outline and the total attribute association area:

    10,400,000 bytes
    + 205,000 bytes
    = 10,605,000 bytes (outline disk space requirement)

Note: Do not use this procedure to calculate outline memory space requirements. Use the process described in Outline Size Used in Memory.
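The outline disk estimate can also be expressed compactly in Python. This sketch is illustrative; in particular, the two association pairs below were chosen only to reproduce the example's 201,250-byte and 3,750-byte areas and are not taken from a real outline.

def outline_disk_bytes(num_members, name_length_factor, association_pairs=()):
    """Outline disk estimate (DH): main area plus attribute association area.

    Each pair is (base-dimension members, total associated attribute members).
    """
    main = num_members * name_length_factor
    assoc = sum(base * attrs / 8 for base, attrs in association_pairs)
    return main + assoc

# 26,000 members at 400 bytes each; assumed pairs giving 201,250 and 3,750 bytes
print(outline_disk_bytes(26_000, 400, [(23_000, 70), (3_000, 10)]))  # 10605000.0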

Work Areas

Three different processes create temporary work areas on the disk: database restructuring, database migration, and database recovery.

To create these temporary work areas, Analytic Services may require disk space equal to the size of the entire database. Restructuring and migration need additional work space the size of the outline. Because none of these activities occur at the same time, a single allocation can represent all three requirements.

To calculate the size of a work area used for restructuring, migration, and recovery, calculate the sum of the sizes of the following database components from Table 5: stored data files (DE), index files (DF), fragmentation allowance (DG), and outline (DH).

Use the following formula to calculate the size of the work area:

work area = size of compressed data files 
+ size of index files 
+ fragmentation allowance 
+ outline size  
 

Write the result of this calculation to the cell labeled DI in Table 5.

Linked Reporting Objects Considerations

You can use the Linked Reporting Objects (LROs) feature to associate objects with data cells. The objects can be flat files, HTML files, graphics files, and cell notes. For a comprehensive discussion of linked reporting objects, see Linking Objects to Analytic Services Data.

Two aspects of LROs affect disk space: the space required to store the objects themselves and the space required for the LRO catalog.

To estimate the disk space requirements for linked reporting objects, perform the following tasks:

  1. Estimate the size of the objects. If a limit is set, multiply the number of LROs by that limit. Otherwise, sum the size of all anticipated LROs.

  2. Size the LRO catalog. Multiply the total number of LROs by 8192 bytes.

  3. Add together the two areas and write the result of this calculation to the cell labeled DJ in Table 5.
Example

Assume the database uses 1500 LROs, composed of 1000 URLs, each with a maximum size of 512 bytes, and 500 cell notes.

Perform the following calculations:

  1. Multiply 1000 URLs by the 512-byte limit: 1000 * 512 bytes = 512,000 bytes maximum required for the stored URLs.

  2. Calculate the size of the LRO catalog. Multiply 1500 total LROs * 8192 bytes = 12,288,000 bytes.

  3. Add together the two areas; for example:
    512,000 bytes 
    + 12,288,000 bytes 
    = 12,800,000 bytes total LRO disk space requirement
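A short Python sketch of the LRO estimate (illustrative only; the sample values repeat the example above):

def lro_disk_bytes(total_lros, object_bytes):
    """LRO estimate (DJ): stored object sizes plus an 8192-byte catalog entry per LRO."""
    return object_bytes + total_lros * 8192

# 1500 LROs; 1000 stored URLs capped at 512 bytes each
print(lro_disk_bytes(1500, 1000 * 512))  # 12,800,000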

Estimating the Total Analytic Server Disk Space Requirement

The earlier calculations in this chapter estimate the data storage requirement for a single database. Often, more than one database resides on the server.

In addition to the data storage required for each database, the total Analytic Services data storage requirement on a server includes Analytic Services software. Allow approximately 200 MB (209,715,200 bytes) for the base installation of Analytic Services software and sample applications. The allowance varies by platform and file management system. For details, see the Essbase Analytic Services Installation Guide.

To estimate the total server disk space requirement, perform the following tasks:

  1. In the worksheet in Table 6, list the names and disk space requirements that you calculated for each database.

  2. Sum the database requirements and write the total in bytes in the cell labeled DL.

  3. In the cell labeled DM, write the appropriate disk space requirement for the software installed on the server; for example, 209,715,200 bytes.

  4. For the total server disk space requirement in bytes, sum the values in cells DL and DM. Write this value in the cell labeled DN.

  5. Convert the total in cell DN to megabytes (MB) by dividing the value by 1,048,576 bytes. Write this value in the cell labeled DO.


Table 6: Worksheet for Total Server Disk Space Requirement

List of Databases (From Table 5)                              Size
a. _______________________                                    __________
b. _______________________                                    __________
c. _______________________                                    __________
d. _______________________                                    __________
e. _______________________                                    __________
f. _______________________                                    __________
g. _______________________                                    __________
Sum of database disk sizes (a + b + c + d + e + f + g)        DL: _______
Size in bytes for Analytic Services server software           DM: _______
Total Analytic Server disk requirement in bytes (DL + DM)     DN: _______
Total Analytic Server disk requirement in megabytes (MB)
(DN divided by 1,048,576 bytes)                               DO: _______



Estimating Memory Requirements

The minimum memory requirement for running Analytic Services is 64 MB. On UNIX systems, the minimum requirement is 128 MB. Depending on the number of applications and databases and the database operations on the server, you may require more memory.

Analytic Services provides a memory management feature that enables you to specify the maximum memory to be used for all server activities, or a maximum memory that can be used to manage specific applications. For additional information about this feature, see the Memory Manager Configuration section of the Technical Reference.

If you use the memory management feature to limit the amount of memory available to the server, you do not need to calculate a memory requirement. The total memory required on the computer is equal to the sum of the operating system memory requirement plus the Analytic Server limit you specify in the MEMORYLIMIT configuration setting in the config.mem memory configuration file.

To estimate the memory required on Analytic Server, use the Worksheet for Total Server Memory Requirement, Table 11, to collect and total server memory requirements. To calculate the requirements for this worksheet, review the following topics: Estimating Memory Requirements for Applications, Estimating Startup Memory Requirements for Databases, Estimating Additional Memory Requirements for Database Operations, and Estimating Total Essbase Memory Requirements.

Estimating Memory Requirements for Applications

If you use the memory management feature to control the amount of memory used by Analytic Server for all applications, do not calculate application and database memory requirements. See Estimating Memory Requirements.

The approach to determining the amount of memory required for an application varies, depending on whether or not you set memory limits on individual applications. As appropriate to your individual applications, follow the instructions in Application Memory Limited by Memory Manager and Startup Memory Requirement for Applications.

Application Memory Limited by Memory Manager

If you use the memory management feature to limit the amount of memory available for individual applications, you do not need to calculate the memory requirements for those applications. For information about setting memory maximums for individual applications, see the Memory Manager Configuration section of the Technical Reference.

To determine the maximum amount of memory that can be used by applications for which memory limits are established in application memory configuration files, list the applications in Table 7 and write the associated memory limit in the Maximum Size column.


Table 7: Applications With Specified Maximum Sizes

    Application Name             Maximum Size, in Megabytes (MB)
a.  ____________________         _______________________________
b.  ____________________         _______________________________
c.  ____________________         _______________________________
d.  ____________________         _______________________________
e.  ____________________         _______________________________
f.  ____________________         _______________________________
g.  ____________________         _______________________________



Total the memory values and write the result to the cell labeled ML in Table 11, Worksheet for Total Server Memory Requirement.

Startup Memory Requirement for Applications

For application memory use that is not controlled by Memory Manager, you need to calculate overall memory used at application startup plus the memory requirements for each database.

Each open application has a fixed memory requirement at startup; the amount varies by platform.

Multiply the number of applications that will be running simultaneously on the server by the appropriate startup requirement and write the resulting value to the cell labeled MM in Table 11. Do not include in this calculation applications for which the amount of memory used is controlled by Memory Manager.

Estimating Startup Memory Requirements for Databases

Calculate memory requirements for each database on Analytic Server.

Note: Do not include in this calculation databases within applications for which you use the memory management feature to limit the amount of memory available to them.

For each database, make a copy of Table 8 or use a separate sheet of paper as a worksheet for a single database. If multiple databases are on Analytic Server, repeat this process for each database. Write the name of the database on the worksheet.

Each row links to information that describes how to size that component. Perform each calculation and note the results in the appropriate cell in the Size column. Some calculations use the factors that you wrote in Table 9. After filling in all the sizes in the Size column, add them together to determine the memory requirement for that database.

After estimating memory requirements for all databases on the server, proceed to Estimating Total Essbase Memory Requirements.


Table 8: Worksheet for Estimating Memory Requirements for a Database

Database Name: _______________________

Memory Requirement                                                    Size
Startup requirements per database:
  Outline size in memory (see Outline Size Used in Memory)            MA: _______
  Index cache (see Index Cache)                                       MB: _______
  Cache-related overhead (see Cache-Related Overhead)                 MC: _______
  Memory area for data structures (see Memory Area for Data
  Structures)                                                         MD: _______
Operational requirements:
  Additional memory for data retrievals (see Estimating Additional
  Memory Requirements for Data Retrievals)                            ME: _______
  Additional memory for calculations (see Estimating Additional
  Memory Requirements for Calculations)                               MF: _______
Sum of the size values from MA through MF: the estimated total
memory required for the database, in bytes                            MG: _______
MG divided by 1,048,576 bytes: the total database memory
requirement in megabytes (MB)                                         MH: _______



In Table 11, enter the name of the database and the total memory requirement in megabytes, MH.

Factors to Be Used in Sizing Memory Requirements

Before you start the estimate, calculate the factors that are used in the estimate.

Table 9 lists sizing factors with references to sections in this chapter and other chapters that provide information to determine these sizes. Go to the section indicated, perform the calculation, then return to Table 9 and write the size, in bytes, in the Value column of this table.

Later in this chapter, you can refer to Table 9 for values to use in various calculations.


Table 9: Factors Used to Calculate Database Memory Requirements

Database Sizing Factor                                            Value
The number of cells in a logical block
(see The Number of Cells in a Logical Block)                      MI: _______
The number of threads allocated through the SERVERTHREADS
setting (see the Technical Reference)                             MJ: _______
Potential stored-block size (see Size of Expanded Data Block)     MK: _______



The calculations in this chapter do not account for certain other factors that affect how much memory is used. These factors have complex implications and are not included in the discussion.

Outline Size Used in Memory

The attribute association area included in disk space calculations is not a sizing factor for memory. Calculate only the main area of the outline.

For memory size requirements, outline size is calculated using the following factors: the number of members in the outline and a name-length factor between 300 and 400 bytes.

To calculate the outline memory requirement, multiply the number of members by a name-length factor between 300 and 400 bytes and write the result to the cell labeled MA in Table 8.

If the database includes few aliases or very short aliases and short member names, use a smaller number within the 300-400 byte range. If you know that the names or aliases are very long, use a larger number within this range.

Because the name-length factor is an estimated average, the following formula provides only a rough estimate of the main area of the outline:

memory size of outline 
= number of members 
* name-length factor 
 

Note: See Limits for the maximum sizes of member names and aliases.

Example

Assuming the outline has 26,000 members and a median name-length, use the following calculation to estimate the outline size used in memory:

26,000 members 
* 350 bytes per member 
= 9,100,000 bytes 
 

Index Cache

At database startup, Analytic Services sets aside memory for the index cache, the size of which you can specify. To determine the size of the index cache, see Sizing Caches, and write the size in the cell labeled MB in Table 8.

Cache-Related Overhead

Analytic Services uses additional memory while it works with the caches.

The calculation for this cache-related overhead uses the following factors: the size of the index cache and the number of server threads allocated to the Analytic Server process.

To calculate the cache-related overhead at startup, perform the following tasks:

  1. Calculate half the index cache size, in bytes.
    index cache size 
    * .5 
    = index cache-related overhead
    

  2. Calculate additional cache overhead in bytes using the following formula:
    ((# of server threads allocated to the Analytic Server process * 3) 
    * 256) 
    + 5242880 bytes 
    = additional cache overhead
    

  3. Sum the index cache overhead plus the additional cache overhead. Write the result to the cell labeled MC in Table 8.
    cache-related overhead 
    = index cache-related overhead 
    + additional cache overhead
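The overhead formula in Python (an illustrative sketch; the 1 MB index cache below is an assumed value, not a recommendation):

def cache_overhead_bytes(index_cache_bytes, server_threads):
    """Cache-related overhead (MC): half the index cache plus additional overhead."""
    index_overhead = index_cache_bytes * 0.5
    additional = ((server_threads * 3) * 256) + 5_242_880
    return index_overhead + additional

print(cache_overhead_bytes(1_048_576, 20))  # 5,782,528.0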

The Number of Cells in a Logical Block

The term logical block refers to an expanded block in memory.

To determine the cell count of a logical block, multiply together all members of each dense dimension (including Dynamic Calc and Dynamic Calc and Store members but excluding Label Only and shared members).

  1. Using Table 10 as a worksheet, enter each dense dimension and its number of members excluding Label Only and shared members. If there are more than seven dense dimensions, list the dimensions elsewhere and include all dense dimensions in the calculation.

  2. Multiply the number of members of the first dense dimension (line a.) by the number of members of the second dense dimension (line b.) by the number of members of the third dense dimension (line c.), and so on, to determine the total number of cells in a logical block. Write the result to the cell labeled MI in Table 9.

    a * b * c * d * e * f * g = the total number of cells


    Table 10: Determining the Number of Cells in a Logical Block

        Dense Dimension Name         Number of Members
    a.  ____________________         _________________
    b.  ____________________         _________________
    c.  ____________________         _________________
    d.  ____________________         _________________
    e.  ____________________         _________________
    f.  ____________________         _________________
    g.  ____________________         _________________



Example

Excluding Label Only and shared members, the dense dimensions in Sample Basic contain 17 (Year), 14 (Measures), and 4 (Scenario) members. The calculation for the cell count of a logical block in Sample Basic is as follows:

17 * 14 * 4 = 952 cells 
 

Memory Area for Data Structures

At application startup time, Analytic Services sets aside an area of memory based on the following factors: the number of server threads, the number of members in the outline, and the number of cells in a logical block.

To calculate the data structure area in memory, perform the following tasks:

  1. Use the following formula to calculate the size in bytes:
    Number of threads 
    * ((Number of members in the outline * 26 bytes) 
    + (Logical block cell count * 36 bytes))
    

  2. Write the result to the cell labeled MD in Table 8.
Example

Assuming 20 threads for the Sample Basic database, the startup area in memory required for data structures is calculated as follows:

20 threads 
* ((79 members * 26 bytes) + (952 cells * 36 bytes)) 
= 726,520 bytes

726,520 bytes / 1,048,576 bytes per MB = .7 MB
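The same calculation in Python (illustrative; the sample values repeat the Sample Basic example):

def data_structure_bytes(threads, outline_members, logical_block_cells):
    """Startup data-structure area (MD)."""
    return threads * ((outline_members * 26) + (logical_block_cells * 36))

# Sample Basic: 20 threads, 79 members, 952-cell logical block
print(data_structure_bytes(20, 79, 952))  # 726,520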
 

Estimating Additional Memory Requirements for Database Operations

In addition to startup memory requirements, operations such as queries and calculations require additional memory. Because of many variables, the only way to estimate the memory requirements of operations is to run sample operations and monitor the amount of memory used during them. This topic provides guidelines for the following estimation tasks: estimating additional memory requirements for data retrievals and estimating additional memory requirements for calculations.

Estimating Additional Memory Requirements for Data Retrievals

Analytic Services processes requests for database information (queries) from a variety of sources. For example, Analytic Services processes queries from the Spreadsheet Add-in and from Report Writer. Analytic Services uses additional memory when it retrieves the data for these queries, especially when Analytic Services must perform dynamic calculations to retrieve the data. This section describes Analytic Services memory requirements for query processing.

Analytic Services is a multithreaded application in which queries get assigned to threads. Threads are automatically created when Analytic Services is started. In general, a thread exists until you shut down Analytic Server. For an explanation of how Analytic Services uses threads, see Multithreading.

As Analytic Services processes queries, it cycles through the available threads. For example, assume 20 threads are available at startup. As each query is processed, Analytic Services assigns each succeeding query to the next sequential thread. After it has assigned the 20th thread, Analytic Services cycles back to the beginning, assigning the 21st query to the first thread.

While processing a query, a thread allocates some memory, and then releases most of it when the query is completed. Some of the memory is released to the operating system and some of it is released to the dynamic calculator cache for the database being used. However, the thread holds on to a portion of the memory for possible use in processing subsequent queries. As a result, after a thread has processed its first query, the memory held by the thread is greater than it was when Analytic Services first started.

Analytic Services uses the maximum amount of memory for query processing when both of these conditions are true: every thread has processed at least one query, and the maximum number of simultaneous queries are in process.

In the example where 20 threads are available at startup, the maximum amount of memory is used for queries when at least 20 queries have been processed and the maximum number of simultaneous queries are in process.

Calculating the Maximum Amount of Additional Memory Required

To estimate query memory requirements by observing actual queries, perform the following tasks:

  1. Observe the memory used during queries.

  2. Calculate the maximum possible use of memory for query processing by adding together the memory used by queries that will be run simultaneously, then add the extra memory that had been acquired by threads that are now waiting for queries.

Use the following variables when you calculate the formula in Estimating the Maximum Memory Usage for A Query Before and After Processing: Total#Threads, Max#ConcQueries, MemBeforeP, MemDuringP, MemAfterP, MAXAdditionalMemDuringP, and MAXAdditionalMemAfterP. The following topics describe how to determine these values.

Determining the Total Number of Threads

The potential number of threads available is based on the number of licensed ports that are purchased. The actual number of threads available depends on settings you define for the Agent or the server. Use the number of threads on the system as the value for Total#Threads in later calculations.

Estimating the Maximum Number of Concurrent Queries

Determine the maximum number of concurrent queries and use this value for Max#ConcQueries in later calculations. This value cannot exceed the value for Total#Threads.

Estimating the Maximum Memory Usage for A Query Before and After Processing

The memory usage of individual queries depends on the size of each query and the number of data blocks that Analytic Services needs to access to process each query. To estimate the memory usage, calculate the additional memory Analytic Services uses during processing and after processing each query.

Decide on several queries that you expect to use the most memory. Consider queries that must process large numbers of members; for example, queries that perform range or rank processing.

To estimate the memory usage of a query, perform the following tasks:

  1. Turn the dynamic calculator cache off by setting the essbase.cfg setting DYNCALCACHEMAXSIZE to 0 (zero). Turning off the dynamic calculator cache enables measurement of memory still held by a thread by ensuring that, after the query is complete, the memory used for blocks during dynamic calculations is released by the ESSSVR process to the operating system.

  2. Start the Analytic Services application.

  3. Using memory monitoring tools for the operating system, note the memory used by Analytic Server before processing the query. Use the value associated with the ESSSVR process.

    Use this value for MemBeforeP.

  4. Run the query.

  5. Using memory monitoring tools for the operating system, note the peak memory usage of Analytic Server while the query is processed. This value is associated with the ESSSVR process.

    Use this value for MemDuringP.

  6. Using memory monitoring tools for the operating system, after the query is completed, note the memory usage of Analytic Services. This value is associated with the ESSSVR process.

    Use this value for MemAfterP.

  7. Calculate the following two values: the additional memory used while the query was processed (MemDuringP - MemBeforeP) and the additional memory still held after the query completed (MemAfterP - MemBeforeP).

  8. When you have completed the above calculations for all the relevant queries, compare all results to determine the following two values: MAXAdditionalMemDuringP, the largest additional memory used during processing, and MAXAdditionalMemAfterP, the largest additional memory still held after processing.

  9. Insert the two values from step 8 into the formula in the following statement.

    The amount of additional memory required for data retrievals will not exceed the following:

    (Max#ConcQueries * MAXAdditionalMemDuringP)
    + ((Total#Threads - Max#ConcQueries) * MAXAdditionalMemAfterP)

    Write the result of this calculation, in bytes, to the cell labeled ME in Table 8.

Because this calculation method assumes that all of the concurrent queries are maximum-sized queries, the result may exceed your actual requirement. It is difficult to estimate the actual types of queries that will be run concurrently.

To adjust the memory used during queries, you can set values for the retrieval buffer and the retrieval sort buffer. For a review of methods, see Setting the Retrieval Buffer Size and Setting the Retrieval Sort Buffer Size.
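The retrieval bound can be computed directly once the measured values are known. This Python sketch is illustrative; the sample numbers are assumptions, not measurements.

def retrieval_memory_bytes(total_threads, max_conc_queries,
                           max_mem_during, max_mem_after):
    """Upper bound on additional retrieval memory (ME)."""
    waiting_threads = total_threads - max_conc_queries
    return (max_conc_queries * max_mem_during
            + waiting_threads * max_mem_after)

# Assumed: 20 threads, 10 concurrent queries, 4 MB held during
# a query, 1 MB retained afterward
print(retrieval_memory_bytes(20, 10, 4 * 2**20, 1 * 2**20))  # 52,428,800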

Estimating Additional Memory Requirements Without Monitoring Actual Queries

If you cannot perform this test with actual queries, you can calculate a very rough estimate for operational query requirements. Requirements for each retrieval vary considerably. As a generalization, this estimate uses the following fixed factors:

This estimate also uses the following variables:

You can then use the following two calculations for the memory needed for retrievals:

Summarize the calculations and write the result, in bytes, to the cell labeled ME in Table 8.

Example

To estimate the maximum memory needed for concurrent queries, assume the following values for this example:

Estimated memory for retrievals is as follows:

184,000 bytes + (20 concurrent inquiries 
* (10,240 bytes + 20,480 bytes + 144,000 bytes 
+ 761,600 bytes + 3,145,728 bytes + 400,000 bytes)) 
= 75,824,960 bytes 
 

Estimating Additional Memory Requirements for Calculations

For existing calculation scripts, you can use the memory monitoring tools provided for the operating system on the server to observe memory usage. Run the most complex calculation and take note of the memory usage both before and while running the calculation. Calculate the difference and use that figure as the additional memory requirement for the calculation script.

For a comprehensive discussion of calculation performance, see Optimizing Calculations.

If you cannot perform a test with a calculation script, you can calculate a very rough estimate for the operational requirement of a calculation by adding the following values:

For the total calculation requirement, summarize the amount of memory needed for all calculations that will be run simultaneously and write that total to the cell labeled MF in Table 8.

Note: The size and complexity of the calculation scripts affect the amount of memory required. The effects are difficult to estimate.

Estimating Total Essbase Memory Requirements

You can use Table 11 as a worksheet on which to calculate an estimate of the total memory required on the server.


Table 11: Worksheet for Total Server Memory Requirement

Component                                                    Memory Required,
                                                             in Megabytes (MB)
Sum of memory maximums established for individual
applications (see Application Memory Limited by Memory
Manager)                                                     ML: _______
Sum of application startup memory requirements (see
Startup Memory Requirement for Applications)                 MM: _______
Concurrent databases (from copies of Table 8), listed in
rows a through g, with their memory requirements (MH):
  a. _______________________                                 MH: _______
  b. _______________________                                 MH: _______
  c. _______________________                                 MH: _______
  d. _______________________                                 MH: _______
  e. _______________________                                 MH: _______
  f. _______________________                                 MH: _______
  g. _______________________                                 MH: _______
Operating system memory requirement                          MN: _______
Total estimated memory requirement for the server            MO: _______



To estimate the total Analytic Services memory requirement on a server, perform the following tasks:

  1. If the memory management feature is used to limit the amount of memory used by Analytic Services, the total memory required on the computer is equal to the sum of the operating system memory requirement plus the Analytic Server limit you specify in the MEMORYLIMIT configuration setting in the config.mem memory configuration file. No further calculation is necessary.

  2. If the memory management feature is not used to limit the amount of memory used by Analytic Services for all applications, perform the following steps:

    1. Record in the cell labeled ML the sum of memory maximums defined for individual applications, as described in topic Application Memory Limited by Memory Manager.

    2. Record in the cell labeled MM the total startup memory requirement for applications, as described in topic Startup Memory Requirement for Applications.

    3. List the largest set of databases that will run concurrently on the server. In the Memory Required column, for the MH value for each database, note the memory requirement estimated in the database requirements worksheet, Table 8.

    4. Determine the operating system memory requirement and write the value in megabytes to the cell labeled MN in Table 11.

    5. Total all values and write the result in the cell labeled MO.

    6. Compare the value in MO with the total available random-access memory (RAM) on the server.

      Note: In addition, be sure to consider memory requirements for client software, such as Essbase Administration Services, that may be installed on the Analytic Server computer. See the appropriate documentation for details.

      If cache memory locking is enabled, the total memory requirement should not exceed two-thirds of available RAM; otherwise, system performance can be severely degraded. If cache memory locking is disabled, the total memory requirement should not exceed available RAM.

      If there is insufficient memory available, you can redefine the cache settings and recalculate the memory requirements. This process can be iterative. For guidelines and considerations, see Fine Tuning Cache Settings. In some cases, you may need to purchase additional RAM.


