Hyperion Essbase Tutorials
The following list provides links to the main topics in the Database
Administrator's Guide:
Preface
Introducing Hyperion Essbase
Understanding Multidimensional Databases
Quick Start for Implementing Analytic Services
Basic Architectural Elements
Case Study: Designing a Single-Server, Multidimensional Database
About Essbase XTD Administration Services
Creating Applications and Databases
Creating and Changing Database Outlines
Setting Dimension and Member Properties
Working with Attributes
Linking Objects to Analytic Services Data
Designing and Building Currency Conversion Applications
Designing Partitioned Applications
Creating and Maintaining Partitions
Accessing Relational Data with Hybrid Analysis
Understanding Data Loading and Dimension Building
Creating Rules Files
Using a Rules File to Perform Operations on Records, Fields, and Data
Performing and Debugging Data Loads or Dimension Builds
Understanding Advanced Dimension Building Concepts
Calculating Analytic Services Databases
Developing Formulas
Reviewing Examples of Formulas
Defining Calculation Order
Dynamically Calculating Data Values
Calculating Time Series Data
Developing Calculation Scripts
Reviewing Examples of Calculation Scripts
Developing Custom-Defined Calculation Macros
Developing Custom-Defined Calculation Functions
Understanding Report Script Basics
Developing Report Scripts
Mining an Analytic Services Database
Copying Data Subsets and Exporting Data to Other Programs
Learning to Write Queries With MaxL Data Manipulation Language
Managing Security for Users and Applications
Controlling Access to Database Cells
Security Examples
The Analytic Services Implementation of Unicode
Working With Unicode-Mode Applications
Running Analytic Servers, Applications, and Databases
Managing Applications and Databases
Using Analytic Services Logs
Managing Database Settings
Allocating Storage and Compressing Data
Ensuring Data Integrity
Backing Up and Restoring Data
Using MaxL Data Definition Language
Monitoring Performance
Improving Analytic Services Performance
Optimizing Analytic Services Caches
Optimizing Database Restructuring
Optimizing Data Loads
Optimizing Calculations
Optimizing with Intelligent Calculation
Optimizing Reports and Other Types of Retrieval
Limits
Handling Errors and Troubleshooting Analytic Services
Estimating Disk and Memory Requirements
Using ESSCMD
Glossary
Preface
Welcome to the Essbase XTD Analytic Services Database
Administrator's Guide. This preface discusses the following topics:
• Purpose
• Audience
• Document Structure
• Where to Find Documentation
• Conventions
• Additional Support
Purpose
This guide provides you with all the information that you need to
implement, design, and maintain an optimized Essbase XTD Analytic
Services multidimensional database. It explains the Analytic Services
features and options, and contains the concepts, processes, and
examples that you need to use the software.
Audience
This guide is for database administrators or system administrators who
are responsible for designing, creating, and maintaining applications,
databases, and database objects (for example, data load rules, and
calculation scripts).
Document Structure
This document contains the following information:
Where to Find
Documentation
All Analytic Services documentation is accessible from the following
locations:
Conventions
The following table shows the conventions that are used in this
document:
Ellipses (...) Ellipsis points indicate that text has been omitted
from an example.
Education Services
Hyperion offers instructor-led training, custom training, and eTraining
covering all Hyperion applications and technologies. Training is geared
to administrators, end users, and information systems (IS)
professionals.
Consulting Services
Experienced Hyperion consultants and partners implement software
solutions tailored to clients' particular reporting, analysis, modeling,
and planning requirements. Hyperion also offers specialized consulting
packages, technical assessments, and integration solutions.
Technical Support
Hyperion provides enhanced electronic-based and telephone support to
clients to resolve product issues quickly and accurately. This support is
available for all Hyperion products at no additional cost to clients with
current maintenance agreements.
Documentation
Feedback
Hyperion strives to provide complete and accurate documentation. We
value your opinions on this documentation and want to hear from you.
Send us your comments by clicking the link for the Documentation
Survey, which is located on the Information Map for your product.
Introducing Hyperion
Essbase
This chapter provides an architectural overview of the product
components and introduces the key features of Essbase products,
including:
• Key Features
• Essbase Product Components
Key Features
Essbase products provide the analytic solution that integrates data
from multiple sources and meets the needs of users across an
enterprise. Essbase products enable the quick and easy
implementation of solutions, add value to previously inaccessible data,
and transform data into actionable information.
Integration with Existing Infrastructure
Essbase products integrate with your existing business intelligence
infrastructure. Essbase products meet the enterprise analytic demands
of users for critical business information with a minimum of
information technology (IT) overhead and thus enable organizations to
realize maximum return on their existing IT investments:
Data Integration
Essbase products allow organizations to leverage data in their data
warehouses, legacy systems, online transaction processing (OLTP)
systems, enterprise resource planning (ERP) systems, e-business
systems, customer relationship management (CRM) applications, Web
log files and other external data. For relational database integration,
Essbase XTD Integration Services provides a suite of graphical tools,
data integration services, and a metadata catalog that tie into a data
warehouse environment.
Powerful Querying
Large communities of business users can interact with data in real
time, analyzing business performance at the speed of thought. Using
Essbase products you can organize and present data along familiar
business dimensions, thus enabling users to view and explore the data
intuitively and to turn the data into actionable information.
Complex Calculations
Essbase XTD Analytic Services includes powerful calculation features
for demanding analytic requirements. A rich library of functions makes
it easy to define advanced and sophisticated business logic and
relationships. Analytic Services gives users the flexibility to build,
customize and extend the calculator through custom-defined macros
and functions, as well as the ability to span calculations across
databases. On multiprocessor systems, a database administrator can
configure a single calculation request to use multiple threads to
accomplish the calculation, providing enhanced calculation speed.
Essbase Product
Components
Essbase products incorporate powerful architectural features to handle
a wide range of analytic applications across large multi-user
environments. This section provides information on the main product
components and the information flow from the source data to the end
user. Figure 1 provides a high-level view of the information flow
between the source data and the product components.
Analytic Services
Analytic Services, multithreaded OLAP database software that takes
advantage of symmetric multiprocessing hardware platforms, is based
on a Web-deployable, thin-client architecture. The server acts as a
shared resource, handling all data storage, caching, calculations, and
data security. The Analytic Server client needs only to retrieve and
view data that resides on a server.
All Analytic Services application components, including database
outlines and calculation scripts, application control, and
multidimensional database information, reside on a server. With Analytic
Services you can configure server disk storage to span multiple disk
drives, enabling you to store large databases. Analytic Services
requires a server to run a multi-threaded operating system so a server
can efficiently manage multiple, simultaneous requests. A server also
runs a server agent process that acts as a traffic coordinator for all
user requests to applications.
Administration Services
Administration Services, the database and system administrators'
interface to Analytic Services, provides a single-point-of-access console
to multiple Analytic Servers. Using Administration Services you can
design, develop, maintain, and manage multiple Analytic Servers,
applications, and databases. You can also use custom Java plug-ins to
leverage and extend key functionality.
Deployment Services
Deployment Services allows multiple instances of Analytic Server to
run on multiple machines, while serving the user as one logical unit
and removing any single point of failure. Deployment Services enables
database clustering with load balancing and fail-over capabilities.
Integration Services
Developer Products
Essbase developer products enable the rapid creation, management
and deployment of tailored enterprise analytic applications-with or
without programming knowledge.
Data Mining
Data Mining, an optional product component of Analytic Services, shows
you hidden relationships and patterns in your data, enabling you to
make better business decisions. Using Data Mining you can plug in
various data mining algorithms, build models, and then apply them to
existing Analytic Services applications and databases.
Understanding
Multidimensional
Databases
Essbase XTD Analytic Services contains multidimensional databases
that support analysis and management reporting applications. This
chapter discusses multidimensional concepts and terminology. This
chapter contains the following topics:
OLAP and
Multidimensional
Databases
Online analytical processing (OLAP) is a multidimensional, multi-user,
client-server computing environment for users who need to analyze
enterprise data. OLAP applications span a variety of organizational
functions. Finance departments use OLAP for applications such as
budgeting, activity-based costing (allocations), financial performance
analysis, and financial modeling. Sales departments use OLAP for sales
analysis and forecasting. Among other applications, marketing
departments use OLAP for market research analysis, sales forecasting,
promotions analysis, customer analysis, and market/customer
segmentation. Typical manufacturing OLAP applications include
production planning and defect analysis. An OLAP application can
answer questions such as the following:
• How did Product A sell last month? How does this figure
compare to sales in the same month over the last five years?
How did the product sell by branch, region, and territory?
• Did this product sell better in particular regions? Are there
regional trends?
• Did customers return Product A last year? Were the returns
due to product defects? Did the company manufacture the
products in a specific plant?
• Did commissions and pricing affect how salespeople sold the
product? Did particular salespeople do a better job of selling
the product?
Dimensions and
Members
This section introduces the concepts of outlines, dimensions and
members within a multidimensional database. If you understand
dimensions and members, you are well on your way to understanding
the power of a multidimensional database.
Outline Hierarchies
All Analytic Services database development begins with creating a
database outline. A database outline accomplishes the following:
Figure 4: Generations
• Level also refers to a branch within a dimension; however,
levels reverse the numerical ordering that Analytic Services
uses for generations. The levels count up from the leaf
member toward the root. The root level number varies
depending on the depth of the branch. In the example in
Figure 3, Sales and Cost of Goods Sold are level 0. All other
leaf members are also level 0. Margin is level 1, and Profit is
level 2. Notice that the level number of Measures varies
depending on the branch. For the Ratios branch, Measures is
level 2. For the Total Expenses branch, Measures is level 3.
Figure 5 shows part of the Product dimension with its levels
numbered.
Figure 5: Levels
Data Storage
This topic describes how data is stored in a multidimensional database.
Each data value is stored in a single cell in the database. You refer to a
particular data value by specifying its coordinates along each standard
dimension.
The shaded cells in Figure 14 illustrate that when you specify Sales,
you are specifying the portion of the database containing eight Sales
values.
When you specify Actual Sales, you are specifying the four Sales
values where Actual and Sales intersect as shown by the shaded area
in Figure 15.
Note: This chapter assumes that you are a new Analytic Services
user. If you are migrating from a previous version of Analytic
Services, see the Essbase XTD Analytic Services Installation Guide
for important migration information.
Basic Architectural
Elements
In this chapter, you will learn how Essbase XTD Analytic Services
improves performance by reducing storage space and speeding up
data retrieval for multidimensional databases. This chapter contains
the following sections:
Attribute Dimensions
and Standard
Dimensions
Analytic Services has two types of dimensions: attribute dimensions
and standard dimensions (non-attribute dimensions). This chapter
primarily considers standard dimensions because Analytic Services
does not allocate storage for attribute dimension members. Instead it
dynamically calculates the members when the user requests data
associated with them.
An attribute dimension is a special type of dimension that is associated
with a standard dimension. For a comprehensive discussion of attribute
dimensions, see Working with Attributes.
Analytic Services creates an index entry for each data block. The index
represents the combinations of sparse standard dimension members.
It contains an entry for each unique combination of sparse standard
dimension members for which at least one data value exists.
Figure 24: Product and Market Dimensions from the Sample Basic
Database
If data exists for Caffeine Free Cola in New York, then Analytic
Services creates a data block and an index entry for the sparse
member combination of Caffeine Free Cola (100-30) -> New York. If
Caffeine Free Cola is not sold in Florida, then Analytic Services does
not create a data block or an index entry for the sparse member
combination of Caffeine Free Cola (100-30) -> Florida.
Figure 26 shows part of a data block for the Sample Basic database.
Each dimension of the block represents a dense dimension in the
Sample Basic database: Time, Measures, and Scenario. A data block
exists for each unique combination of members of the Product and
Market sparse dimensions (providing that at least one data value
exists for the combination).
Figure 26: Part of a Data Block for the Sample Basic Database
Each data block is a multidimensional array that contains a fixed,
ordered location for each possible combination of dense dimension
members. Accessing a cell in the block does not involve sequential or
index searches. The search is almost instantaneous, resulting in
optimal retrieval and calculation speed.
Analytic Services orders the cells in a data block according to the order
of the members in the dense dimensions of the database outline.
A (Dense)
    a1
    a2
B (Dense)
    b1
        b11
        b12
    b2
        b21
        b22
C (Dense)
    c1
    c2
    c3
D (Sparse)
    d1
    d2
        d21
        d22
E (Sparse)
    e1
    e2
    e3
The block in Figure 27 represents the three dense dimensions from
within the combination of the sparse members d22 and e3 in the
preceding database outline. In Analytic Services, member
combinations are denoted by the cross-dimensional operator. The
symbol for the cross-dimensional operator is ->. So d22, e3 is written
d22 -> e3. A, b21, c3 is written A -> b21 -> c3.
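The cross-dimensional operator appears throughout formulas and calculation scripts. The following minimal sketch assumes the Sample Basic member names (Sales, Actual, Budget) rather than the abstract outline above:

/* Within the Budget scenario, set Sales to 105 percent of the
   corresponding Actual value; Sales -> Actual identifies the
   intersection of the Sales and Actual members. */
FIX (Budget)
   Sales = (Sales -> Actual) * 1.05;
ENDFIX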
Data blocks, such as the one shown in Figure 27, may include cells
that do not contain data values. A data block is created if at least one
data value exists in the block. Analytic Services compresses data
blocks with missing values on disk, expanding each block fully as it
brings the block into memory. Data compression is optional, but is
enabled by default. For more information, see Data Compression.
TBC does not sell every product in every market; therefore, the data
set is reasonably sparse. Data values do not exist for many
combinations of members in the Product and Market dimensions. For
example, if Caffeine Free Cola is not sold in Florida, then data values
do not exist for the combination Caffeine Free Cola (100-30)->Florida.
So, Product and Market are sparse dimensions. Therefore, if no data
values exist for a specific combination of members in these
dimensions, Analytic Services does not create a data block for the
combination.
Analytic Services creates dense blocks that can fit into memory easily
and creates a relatively small index as shown in Figure 32. Your
database runs efficiently using minimal resources.
Figure 34: Data Blocks Created for Sparse Members on Region and
Product
This example effectively concentrates all the sparseness into the index
and concentrates all the data into fully utilized blocks. This
configuration provides efficient data storage and retrieval.
Now consider a reversal of the dense and sparse dimension selections:
Region and Product as dense dimensions, and Time and Accounts as
sparse dimensions. Figure 36 shows 12 data blocks. Data values exist for all combinations
of members in the Time and Accounts dimensions; therefore, Analytic
Services creates data blocks for all the member combinations. Because
data values do not exist for all products in all regions, the data blocks
have many empty cells. Data blocks with many empty cells store data
inefficiently.
Figure 36: Data Blocks Created for Sparse Members on Time and
Accounts
TBC has determined that Analytic Services is the best tool for creating
a centralized repository for financial data. The data repository will
reside on a server that is accessible to analysts throughout the
organization. Users will have access to the server and will be able to
load data from various sources and retrieve data as needed. TBC has a
variety of users, so TBC expects that different users will have different
security levels for accessing data.
Make sure that the data is ready to load into Analytic Services.
• Does data come from a single source or from multiple
sources?
• Is data in a format that Analytic Services can import? For a
list of valid data sources that you can import into Analytic
Services, see Data Sources.
• Is all data that you want to use readily available?
• Who are the users and what permissions should they have?
• Who should have load data permissions?
• Which users can be grouped, and as a group, given similar
permissions?
When selecting dimensions, consider categories such as the following:
• Time periods
• Accounting measures
• Scenarios
• Products
• Distribution channels
• Geographical regions
• Business units
Use the following topics to help you gather information and make
decisions:
The dimensions that you choose determine what types of analysis you
can perform on the data. With Analytic Services, you can use as many
dimensions as you need for analysis. A typical Analytic Services
database contains at least seven standard dimensions (non-attribute
dimensions) and many more attribute dimensions.
When you have an idea of what dimensions and members you need,
review the following topics and develop a tentative database design:
After you determine the dimensions of the database model, choose the
elements or items within the perspective of each dimension. These
elements become the members of their respective dimensions. For
example, a perspective of time may include the time periods that you
want to analyze, such as quarters, and within quarters, months. Each
quarter and month becomes a member of the dimension that you
create for time. Quarters and months represent a two-level hierarchy
of members and their children. Months within a quarter consolidate to
a total for each quarter.
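For example, a minimal time dimension that follows this design (using the month and quarter names from the Sample Basic database) looks like this; Jan, Feb, and Mar consolidate into Qtr1, and the quarters consolidate into Year:

Year
    Qtr1
        Jan
        Feb
        Mar
    Qtr2
        Apr
        May
        Jun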
Consider the analysis questions that users will want answered, for example:
• What are sales for a particular month? How does this figure
compare to sales in the same month over the last five years?
• By what percentage is profit margin increasing?
• How close are actual values to budgeted values?
The cells within the cube, where the members intersect, contain the
data relevant to all three intersecting members; for example, the
actual sales in January.
Scenario
    Actual
    Budget
    Variance
    Variance %
            Cust A   Cust B   Cust C
New York       100      N/A      N/A
Illinois       N/A      150      N/A
California     N/A      N/A       30
Cust A is only in New York, Cust B is only in Illinois, and Cust C is only
in California. The company can define the data in one standard
dimension:
Market
    New York
        Cust A
    Illinois
        Cust B
    California
        Cust C
However, if you look at a larger sampling of data, you may see that
there can be many customers in each market. Cust A and Cust E are in
New York; Cust B, Cust M, and Cust P are in Illinois; Cust C and Cust F
are in California. In this situation, the company typically defines the
large dimension, Customer, as a standard dimension and the smaller
dimension, Market, as an attribute dimension. The company associates
the members of the Market dimension as attributes of the members of
the Customer dimension. The members of the Market dimension
describe locations of the customers.
Customer (Standard dimension)
    Cust A (Attribute: New York)
    Cust B (Attribute: Illinois)
    Cust C (Attribute: California)
    Cust E (Attribute: New York)
    Cust F (Attribute: California)
    Cust M (Attribute: Illinois)
    Cust P (Attribute: Illinois)
Market (Attribute dimension)
    New York
    Illinois
    California
            Cust A   Cust B   Cust C
New York       100       75      N/A
Illinois       N/A      150      N/A
California     150      N/A       30
Cust A is in New York and California. Cust B is in New York and Illinois.
Cust C is only in California. Using an attribute dimension does not work
in this situation, because a base member cannot be associated with more
than one member of the same attribute dimension. Therefore, the company
designs the data in two standard dimensions:
Customer
    Cust A
    Cust B
    Cust C
Market
    New York
    Illinois
    California
Dimension Combinations
Break each combination of two dimensions into a two-dimensional
matrix. For example, proposed dimensions at TBC (as listed in
Table 2) include the following combinations:
To help visualize each dimension, you can draw a matrix and include a
few of the first generation members. Figure 39 shows a simplified set
of matrixes for three dimensions.
Repetition in Outlines
The repetition of elements in an outline often indicates a need to split
dimensions. Here is an example of repetition and a solution:
Repetition:
Accounts
    Budget
        Profit
            Margin
                Sales
                COGS
            Expenses
    Actual
        Profit
            Margin
                Sales
                COGS
            Expenses

No Repetition:
Accounts
    Profit
        Margin
            Sales
            COGS
        Expenses
Scenario
    Budget
    Actual
The left column of the following table uses shared members in the Diet
dimension to analyze diet beverages. You can avoid the repetition of
the left column and simplify the design of the outline by creating a Diet
attribute dimension, as shown in the second example.
Repetition:
Product
    100 (Alias: Colas)
        100-10 (Alias: Cola)
        100-20 (Alias: Diet Cola)
    200 (Alias: Root Beer)
        200-20 (Alias: Diet Root Beer)
        200-30 (Alias: Birch Beer)
    300 (Alias: Cream Soda)
        300-10 (Alias: Dark Cream)
        300-20 (Alias: Diet Cream)
    Diet (Alias: Diet Drinks)
        100-20 (Alias: Diet Cola)
        200-20 (Alias: Diet Root Beer)
        300-20 (Alias: Diet Cream)

No Repetition:
Product (Diet)
    100 (Alias: Colas)
        100-10 (Alias: Cola) (Diet: False)
        100-20 (Alias: Diet Cola) (Diet: True)
    200 (Alias: Root Beer)
        200-20 (Alias: Diet Root Beer) (Diet: True)
        200-30 (Alias: Birch Beer) (Diet: False)
    300 (Alias: Cream Soda)
        300-10 (Alias: Dark Cream) (Diet: False)
        300-20 (Alias: Diet Cream) (Diet: True)
Diet Attribute (Type: Boolean)
    True
    False
Attribute dimensions also provide additional analytic capabilities. For a
review of the advantages of using attribute dimensions, see Designing
Attribute Dimensions.
Interdimensional Irrelevance
Interdimensional irrelevance occurs when many members of a
dimension are irrelevant across other dimensions. Analytic Services
defines irrelevant data as data that Analytic Services stores only at the
summary (dimension) level. In such a situation, you may be able to
remove a dimension from the database and add its members to
another dimension or split the model into separate databases.
[Table: the accounts Revenue, Variable Costs, COGS, Advertising,
Salaries, Fixed Costs, Expenses, and Profit mapped against the members
of another dimension, with an "x" marking each combination where data
is relevant. In the original matrix, Salaries is marked relevant across
all columns, while each other account is marked in only one column.]
There are many reasons for splitting a database; for example, suppose
that a company maintains an organizational database that contains
several international subsidiaries located in several time zones. Each
subsidiary relies on time-sensitive financial calculations. You may want
to split the database for groups of subsidiaries in the same time zone
to ensure that financial calculations are timely. You can also use a
partitioned application to separate information by subsidiary.
Drafting Outlines
At this point, you can create the application and database and build
the first draft of the outline in Analytic Services. The draft defines all
dimensions, members, and consolidations. Use the outline to design
consolidation requirements and identify where you need formulas and
calculation scripts.
Note: Before you create a database and build its outline, you must
create an Analytic Services application to contain it.
The TBC planners issued the following draft for a database outline. In
this plan, the bold words are the dimensions: Year, Measures, Product,
Market, Scenario, Pkg Type, and Ounces. Observe how TBC anticipated
consolidations, calculations and formulas, and reporting requirements.
The planners also used product codes rather than product names to
describe products.
Database: Design
    Year (Type: time)
    Measures (Type: accounts)
    Product
    Market
    Scenario
    Pkg Type (Type: attribute)
    Ounces (Type: attribute)
Time: Defines the time periods for which you report and update data.
You can tag only one dimension as time. The time dimension enables
several accounts dimension functions, such as first and last time
balances.
You can change the default logic for each member by changing the
data storage property tag for the member. For example, you can
change a store data member to a label only member. Members with the
label only tag do not have data associated with them.
Store data: The member stores data. Store data is the default
storage property.
Checking System
Requirements
After you determine the approximate number of dimensions and
members in your Analytic Services database, you are ready to
determine the system requirements for the database.
• Make sure that you have enough disk space. See Determining
Disk Space Requirements.
• Make sure that you have enough memory. See Estimating
Memory Requirements.
• Make sure that your caches are set correctly. See Optimizing
Analytic Services Caches.
Loading Test Data
Before you can test calculations, consolidations, and reports, you need
data in the database. During the design process, loading mocked-up
data or a subset of real data provides flexibility and shortens the time
required to test and analyze results.
After you run your preliminary test, if you are satisfied with your
database design, test the loading of the complete set of real data with
which you will populate the final database, using the test rules files if
possible. This final test may reveal problems with the source data that
you did not anticipate during earlier phases of the database design
process.
Defining Calculations
Calculations are essential to derive certain types of data. Data that is
derived from a calculation is called calculated data; basic
noncalculated data is called input data.
The following topics use the Product and Measures dimensions of the
TBC application to illustrate several types of common calculations that
are found in many Analytic Services databases:
Be aware that Analytic Services always begins with the top member
when it consolidates, so the order and the labels of the members are
very important. For an example of how Analytic Services applies
operators, see Calculating Members with Different Operators.
Table 7 shows the Analytic Services consolidation operators.
Ending Inventory data represents the inventory that TBC carries at the
end of each month. The quarterly value for Ending Inventory is equal
to the ending value for the quarter. Ending Inventory requires the time
balance tag, TB last. Table 8 shows the time balance tags for the
accounts dimension.
Time Balance First: The value for the first child is carried to the
parent. For example, Jan is carried to Qtr1.

Accounts -> Time       Jan   Feb   Mar   Qtr1   Year
Member2 (TB First)      20    25    21     20     20
For examples of the use of time balance tags, see Setting Time
Balance Properties.
Variance Reporting
One of the TBC Analytic Services requirements is the ability to perform
variance reporting on actual versus budget data. The variance
reporting calculation requires that any item that represents an
expense to the company must have an expense reporting tag.
Inventory members, Total Expense members, and the COGS member
each receive an expense reporting tag for variance reporting.
Two-Pass Calculations
In the TBC database, both Margin % and Profit % contain the label
two-pass. This default label indicates that some member formulas
need to be calculated twice to produce the desired value. The two-pass
property works only on members of the dimension tagged as accounts
and on members tagged as Dynamic Calc or Dynamic Calc and Store.
The following examples illustrate why Profit % (based on the formula
Profit % Sales) has a two-pass tag.
First, Analytic Services calculates the Measures dimension. Profit % is
calculated from the monthly Profit and Sales values:

Measures -> Year      Jan    Feb    Mar
Profit                100    100    100
Sales                1000   1000   1000
Profit %              10%    10%    10%

Next, Analytic Services calculates the Year dimension. The data rolls
up across the dimension:

Measures -> Year      Jan    Feb    Mar    Qtr1
Profit                100    100    100     300
Sales                1000   1000   1000    3000
Profit %              10%    10%    10%     30%

The result in Profit % -> Qtr1 of 30% is not correct. However, because
TBC tagged Profit % as two-pass calculation, Analytic Services
recalculates profit percent at each occurrence of the member Profit %.
The data is then correct and is displayed as follows:

Measures -> Year      Jan    Feb    Mar    Qtr1
Profit                100    100    100     300
Sales                1000   1000   1000    3000
Profit %              10%    10%    10%     10%
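The member formula behind this example is simply the ratio. A minimal sketch in Analytic Services formula syntax, where the % operator divides the first operand by the second and multiplies by 100:

Profit % Sales;

Because Profit % carries the two-pass tag, Analytic Services evaluates this formula again after the consolidation pass, which replaces the rolled-up 30% at Qtr1 with the correct 10%.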
Analytic Services provides several tools that can help you during the
design process to display and format data quickly and to test whether
the database design meets user needs. You can use Administration
Services Console Report Script Editor to write report scripts quickly.
Those familiar with spreadsheets can use the Spreadsheet Add-in or
Spreadsheet Services (Spreadsheet Services requires Deployment
Services).
If you provide predesigned reports for users, now is the time to use
the appropriate tool to create those reports against the test data. The
reports that you design should provide information that meets your
original objectives. The reports should be easy to use. They should
provide the right combinations of data and the right amount of data.
Reports with too many columns and rows are very hard to use. It may
be better to create a number of different reports instead of one or two
all-inclusive reports.
Do the calculations give them the information they need? Are they
able to generate reports quickly? Are they satisfied with consolidation
times? In short, ask users if the database works for them.
Near the end of the design cycle, you need to test with real data. Does
the outline build correctly? Does all data load? If the database fails in
any area, repeat the steps of the design cycle to identify the cause of
the problem.
Most likely, you will need to repeat one or more steps of the design
process to arrive at the ideal database solution.
Administration Services
Architecture
Administration Services works with Analytic Servers in a three-tiered
system that consists of a client user interface, a middle-tier server, and
one or more Analytic Servers. The middle tier coordinates interactions
and resources between the user interface and Analytic Servers. The
three tiers may or may not be on the same computer or platform. The
three tiers include the following components, as illustrated below:
Deploying
Administration Services
Administration Services can be deployed in a variety of scenarios. For
example, you can install Analytic Server on a computer running UNIX
and install Administration Server and Administration Services Console
on a computer running Windows. You can also install Administration
Server and Administration Services Console on separate computers
and platforms.
Connecting to
Administration Services
In Administration Services, connections to individual Analytic Servers
are handled by the middle tier Administration Server. You do not need
to provide a username and password to connect to individual Analytic
Servers. For information about how Analytic Server connections are
established, see "About Analytic Services Connections and Ports" in
Essbase XTD Administration Services Online Help.
Note: If you change the value for the Administration Server port,
you must specify the new port value when you log in to the
Administration Services Console.
Creating Applications
and Databases
An Analytic Services application is a container for a database and its
related files. This chapter provides an overview of Analytic Services
applications and databases and explains how to create applications,
databases, and some Analytic Services objects, including substitution
variables and location aliases. For information on everyday
management of applications, databases, and their associated files, see
the Essbase XTD Analytic Services Optimization and Database
Administration information in this guide.
This chapter includes the following sections:
ARBORPATH/app/sample/basic/basic.otl
• Text files
• Spreadsheet files
• Spreadsheet audit log files
• External databases, such as an SQL database
For information about creating rules files, see Rules Files and Creating
Rules Files.
Understanding Calculation Scripts
Calculation scripts are text files that contain sets of instructions telling
Analytic Services how to calculate data in the database. Calculation
scripts perform different calculations than the consolidations and
mathematical operations that are defined in the database outline.
Because calculation scripts perform specific mathematical operations
on members, they are typically associated with a particular database.
You can, however, define a calculation script for use with multiple
databases. Calculation script files have the .csc extension.
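For example, the following minimal calculation script (a sketch that assumes the Sample Basic outline) recalculates only part of the database:

/* Recalculate the Year, Measures, and Product dimensions,
   restricted to the Budget scenario and the East market. */
FIX (Budget, East)
   CALC DIM (Year, Measures, Product);
ENDFIX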
For more information, see the Essbase XTD Spreadsheet Add-in User's
Guide for Excel or Lotus 1-2-3.
• Data load
• Calculation
• Lock and send from Spreadsheet Add-in
Creating Applications
and Databases
Since applications contain one or more databases, first create an
application and then create databases. If desired, annotate the
databases. The following sections describe how to create applications,
databases, and database notes:
Annotating a Database
A database note can provide useful information in situations where you
need to broadcast messages to users about the status of a database,
deadlines for updates, and so on. Users can view database notes in
Spreadsheet Add-in. In Excel, for example, users use the Note button
in the Connect dialog box.
Enter the name in the case you want it to appear in. The application or
database name will be created exactly as you enter it. If you enter the
name as all capital letters (for instance, NEWAPP), Analytic Services
will not automatically convert it to upper and lower case (for instance,
Newapp).
Using Substitution
Variables
Substitution variables act as global placeholders for information that
changes regularly; each variable has a value assigned to it. The value
can be changed at any time by the database designer; thus, manual
changes are reduced.
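In a calculation script or report script, you reference a substitution variable by prefixing its name with an ampersand (&). The following sketch assumes a hypothetical variable named CurMonth whose value (for example, Jan) the administrator updates each month:

/* &CurMonth is replaced with the variable's current value
   when the script runs. */
FIX (&CurMonth)
   CALC DIM (Measures);
ENDFIX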
After you create a location alias, you can use the alias to refer to that
database. If the location of the database changes, you can edit the
location definition accordingly.
Note: You can use location aliases only with the @XREF function.
With this function, you can retrieve a data value from another
database to include in a calculation on the current database. In this
case, the location alias points to the database from which the value
is to be retrieved. For more information on @XREF, see the
Technical Reference.
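For example, a member formula can pull a value from the database that a location alias identifies. This sketch assumes a hypothetical location alias named EastDB:

/* Retrieve the Actual Sales value from the database that the
   EastDB location alias points to. */
Sales = @XREF(EastDB, Sales, Actual);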
You can also change outlines using data sources and rules files. For
more information, see Understanding Data Loading and Dimension
Building.
When you open an outline in Outline Editor, you can view and
manipulate the dimensions and members graphically. An outline is
always locked when it is opened in edit mode. If you have Supervisor
permissions, you can unlock a locked outline. For more information,
see "Locking and Unlocking Outlines" in Essbase XTD Administration
Services Online Help.
Caution: If you open the same outline with two instances of the
Administration Services Console using the same login ID, each save
overwrites the changes of the other instance. Because it can be
difficult to keep track of what changes are saved or overwritten,
Hyperion does not recommend this practice.
Understanding the
Rules for Naming
Dimensions and
Members
When naming dimensions, members, and aliases in the database
outline, follow these rules:
Positioning Dimensions
and Members
Dimensions are the highest level of organization in an outline.
Dimensions contain members. You can nest members inside of other
members in a hierarchy. For more information on dimensions and
members, see Dimensions and Members.
Verifying Outlines
You can verify an outline automatically when you save it or you can
verify the outline manually at any time. When verifying an outline,
Analytic Services checks the following items:
• All member and alias names are valid. Members and aliases
cannot have the same name as other members, aliases,
generations, or levels. See Understanding the Rules for
Naming Dimensions and Members for more information.
• Only one dimension is tagged as accounts, time, currency
type, or country.
• Shared members are valid as described in Understanding the
Rules for Shared Members.
• Level 0 members are not tagged as label only.
• Label-only members have not been assigned formulas.
• The currency category and currency name are valid for the
currency outline.
• Dynamic Calc members in sparse dimensions do not have
more than 100 children.
• If a parent member has one child and if that child is a
Dynamic Calc member, the parent member must also be
Dynamic Calc.
• If a parent member has one child and if that child is a
Dynamic Calc, Two-Pass member, the parent member must
also be Dynamic Calc, Two-Pass.
• The names of the two members of each Boolean attribute
dimension match the two Boolean attribute member names
defined for the outline.
• The level 0 member name of a date attribute dimension must
match the date format name setting (mm-dd-yyyy or dd-mm-
yyyy). If the dimension has no members, because the
dimension name is the level 0 member, the dimension name
must match the setting.
• The level 0 member name of a numeric attribute dimension is
a numeric value. If the dimension has no members, because
the dimension name is the level 0 member, the dimension
name must be a numeric value.
• Attribute dimensions are located at the end of the outline,
following all standard dimensions.
• Level 0 Dynamic Calc members of standard dimensions have
a formula.
• Formulas for members are valid.
• In a Hybrid Analysis outline, only the level 0 members of a
dimension can be Hybrid Analysis-enabled.
Saving Outlines
You can save outlines to the Analytic Server or to a client computer or
network. By default, Analytic Services saves outlines to the database
directory on Analytic Server. If you are saving changes to an existing
outline, Analytic Services may restructure the outline. For example, if
you change a member name from Market to Region, Analytic Services
moves data stored in reference to Market to Region. Each time that
you save an outline, Analytic Services verifies the outline to make sure
that it is correct.
If you add one or more new standard dimensions and then attempt to
save the outline, Analytic Services prompts you to associate data of
previously existing dimensions to one member of each new dimension.
If you delete one or more dimensions and then attempt to save the
outline, Analytic Services prompts you to select a member of the
deleted dimension whose data values will be retained and associated
with the members of the other dimensions.
Set the time balance as first when you want the parent value to
represent the value of the first member in the branch (often at the
beginning of a time period).
OpeningInventory (TB First), Cola, East, Actual, Jan(+), 50
OpeningInventory (TB First), Cola, East, Actual, Feb(+), 60
OpeningInventory (TB First), Cola, East, Actual, Mar(+), 70
OpeningInventory (TB First), Cola, East, Actual, Qtr1(+), 50
Set the time balance as last when you want the parent value to
represent the value of the last member in the branch (often at the end
of a time period).
EndingInventory (TB Last), Cola, East, Actual, Jan(+), 50
EndingInventory (TB Last), Cola, East, Actual, Feb(+), 60
EndingInventory (TB Last), Cola, East, Actual, Mar(+), 70
EndingInventory (TB Last), Cola, East, Actual, Qtr1(+), 70
Set the time balance as average when you want the parent value to
represent the average value of its children.
AverageInventory (TB Average), Cola, East, Actual, Jan(+), 60
AverageInventory (TB Average), Cola, East, Actual, Feb(+), 62
AverageInventory (TB Average), Cola, East, Actual, Mar(+), 67
AverageInventory (TB Average), Cola, East, Actual, Qtr1(+), 63
Setting Skip Properties
If you set the time balance as first, last, or average, you must set the
skip property to tell Analytic Services what to do when it encounters
missing values or values of 0.

Missing and Zeros: Skips both #MISSING data and data that equals
zero when calculating the parent value.
Cola, East, Actual, Jan, EndingInventory (Last), 60
Cola, East, Actual, Feb, EndingInventory (Last), 70
Cola, East, Actual, Mar, EndingInventory (Last), #MI
Cola, East, Actual, Qtr1, EndingInventory (Last), 70
Setting Variance Reporting Properties
Variance reporting properties determine how Analytic Services
calculates the difference between actual and budget data in a member
with the @VAR or @VARPER function in its member formula. Any
member that represents an expense to the company requires an
expense property.
When you are budgeting expenses for a time period, the actual
expenses should be lower than the budget. When actual expenses are
greater than budget, the variance is negative. The @VAR function
calculates Budget - Actual. For example, if budgeted expenses were
$100, and you actually spent $110, the variance is -10.
When you are budgeting non-expense items, such as sales, the actual
sales should be higher than the budget. When actual sales are less
than budget, the variance is negative. The @VAR function calculates
Actual - Budget. For example, if budgeted sales were $100, and you
actually made $110 in sales, the variance is 10.
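For example, the Variance and Variance % members of the Scenario dimension can carry formulas such as the following (Sample Basic uses this pattern); @VAR and @VARPER reverse the subtraction automatically for members tagged as Expense:

Variance = @VAR(Actual, Budget);
"Variance %" = @VARPER(Actual, Budget);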
Setting Member
Consolidation
Member consolidation properties determine how children roll up into
their parents. By default, new members are given the addition (+)
operator, meaning that members are added. For example, Jan, Feb,
and Mar figures are added and the result stored in their parent, Qtr1.
Calculating Members
with Different
Operators
When siblings have different operators, Analytic Services calculates the
data in top-down order. The following section describes how Analytic
Services calculates the members in Figure 53.
Parent1
Member1 (+) 10
Member2 (+) 20
Member3 (-) 25
Member4 (*) 40
Member5 (%) 50
Member6 (/) 60
Member7 (~) 70
(((Member1 + Member2) + (-1)Member3) * Member4) = X
(((10 + 20) + (-25)) * 40) = 200
(X/Member5) * 100 = Y
(200/50) * 100 = 400
Y/Member6 = Z
400/60 = 6.67
Member7 (~) is not included in the consolidation, and Parent1 is set to Z.
Determining How
Members Store Data
Values
You can determine how and when Analytic Services stores the data
values for a member. For example, you can tell Analytic Services to
only calculate the value for a member when a user requests it and
then discard the data value. Table 11 describes each storage property.
You cannot associate attributes with label only members. If you tag as
label only a base dimension member that has attributes associated
with it, Analytic Services removes the attribute associations and
displays a warning message.
If you created a test dimension with all shared members based on the
members of the dimension East from the Sample Basic outline, the
outline would be similar to the following:
If you retrieved just the children of East, all results would be from
stored members because Analytic Services retrieves stored members
by default.
If, however, you retrieved data with the children of test above it in the
spreadsheet, Analytic Services would retrieve the shared members:
New York
Massachusetts
Florida
Connecticut
New Hampshire
test
If you moved test above its last two children, Analytic Services would
retrieve the first three children as shared members, but the last two as
stored members. Similarly, if you inserted a member in the middle of
the list above which was not a sibling of the shared members (for
example, California inserted between Florida and Connecticut), then
Analytic Services would retrieve shared members only between the
non-sibling and the parent (in this case, between California and test).
You could modify the Sample Basic outline to create a shared member
whose stored member counterpart was a sibling to its own parent:
If you created a spreadsheet with shared members in this order,
Analytic Services would retrieve all the shared members, except it
would retrieve the stored member West, not the shared member west:
west
New York
Massachusetts
Connecticut
New Hampshire
test
500 (+)
    500-10 (+)
• A parent has only one child that consolidates to the
parent. If the parent has four children, but three of them are
marked as no consolidation, then the parent and child that
consolidates contain the same data. Analytic Services ignores
the consolidation property on the child and stores the data
only once; thus the parent has an implied shared relationship
with the child. In Figure 58, for example, the parent 500 has
only one child, 500-10, that rolls up to it. The other children
are marked as No Consolidation (~), so the parent implicitly
shares the value of 500-10.
500 (+)
    500-10 (+)
    500-20 (~)
    500-30 (~)
Setting Aliases
An alias is an alternate name for a member or shared member.
For example, members in the Product dimension in the Sample Basic
database are identified both by product codes, such as 100, and by
more descriptive aliases, such as Cola. Aliases are stored in alias tables.
Aliases can improve the readability of an outline or a report.
You can set more than one alias for a member using alias tables. For
example, you could use different aliases for different kinds of reports:
users may be familiar with 100-10 as Cola, but advertisers and
executives may be familiar with it as The Best Cola. This list shows
some products in the Sample Basic database that have two descriptive
alias names:
Product    Default                Long Names
100-10     Cola                   The Best Cola
100-20     Diet Cola              Diet Cola with Honey
100-30     Caffeine Free Cola     All the Cola, none of the Caffeine
For a comprehensive discussion of alias tables, see Alias Tables.
• Alias Tables
• Creating Aliases
• Creating and Managing Alias Tables
Alias Tables
Aliases are stored in one or more tables as part of a database outline.
An alias table maps a specific, named set of alias names to member
names. When you create a database outline, Analytic Services creates
an empty alias table named Default. If you do not create any other alias
tables, the aliases that you create are stored in the Default alias table.
If you want to create more than one set of aliases for outline
members, create a new alias table for each set. When you view the
outline or retrieve data, you can use the alias table name to indicate
which set of alias names you want to see. Identifying which alias table
contains the names that you want to see while viewing an outline is
called making an alias table the active alias table. See Setting an Alias
Table as Active for further information.
Creating Aliases
You can provide an alias for any member. Alias names must follow the
same rules as member names. See Understanding the Rules for
Naming Dimensions and Members.
When you first create an alias table, it is empty. For information about
adding aliases to an alias table and assigning them to members, see
Creating Aliases.
Setting an Alias Table as Active
The active alias table contains the aliases that Analytic Services
currently displays in the outline.
To view a list of alias tables in the outline and to set the current alias
table, use any of the following methods:
• The first line in the file starts with $ALT_NAME. Add one or
two spaces followed by the name of the alias table. If the
alias table name contains a blank character, enclose the name
in single quotation marks.
• The last line of the file must be $END.
• Each line between the first and the last lines contains two
values that are separated by one or more spaces or tabs. The
first value must be the name of an existing outline member;
the second value is the alias for the member.
• Any member or alias name that contains a blank or
underscore must be enclosed in double quotation marks.
$ALT_NAME 'Quarters'
Qtr1 Quarter1
Jan January
Feb February
Mar March
$END
You can also export an alias table from the Analytic Services outline to
a text file. The alias table contains all the member names with aliases
and the corresponding aliases.
Setting Two-Pass
Calculations
By default, Analytic Services calculates outlines from the bottom up,
first calculating the values for the children and then the values for the
parent. Sometimes, however, the values of the children may be based
on the values of the parent or the values of other members in the
outline. To obtain the correct values for these members, Analytic
Services must first calculate the outline and then re-calculate the
members that are dependent on the calculated values of other
members. The members that are calculated on the second pass
through the outline are called two-pass calculations.
For example, to calculate the ratio between Sales and Margin, Analytic
Services needs first to calculate Margin, which is a parent member
based on its children, including Sales. To ensure that the ratio is
calculated based on a freshly calculated Margin figure, tag the Margin
% ratio member as a two-pass calculation. Analytic Services calculates
the database once and then calculates the ratio member again. This
calculation produces the correct result.
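A sketch of such a ratio in formula syntax, using the Sample Basic member names; the % operator divides Margin by Sales and multiplies by 100:

Margin % Sales;

Attached to the Margin % member together with the two-pass tag, this formula is recomputed after Margin and Sales have been consolidated, so parent-level percentages are correct.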
Creating Formulas
You can apply formulas to standard dimensions and members. You
cannot set formulas for attribute dimensions and their members. The
formula determines how Analytic Services calculates the outline data.
For a more comprehensive discussion of formulas, see Developing
Formulas.
To add formulas to a dimension or member, see "Creating and
Editing Formulas in Outlines" in the Essbase XTD Administration
Services Online Help.
Naming Generations
and Levels
You can create names for generations and levels in an outline, such as
a word or phrase that describes the generation or level. For example,
you might create a generation name called Cities for all cities in the
outline. For information about generations and levels, see Dimension
and Member Relationships.
You can define only one name for each generation or level. When you
name generations and levels, follow the same naming rules as for
members. See Understanding the Rules for Naming Dimensions and
Members.
Creating UDAs
You can create your own user-defined attributes for members. A user-
defined attribute (UDA) is a word or phrase about a member. For
example, you might create a UDA called Debit. Use UDAs in the
following places:
Adding Comments
You can add comments to dimensions and members. A comment can
be up to 255 characters long. Outline Editor displays comments to the
right of the dimension or member in the following format:
/* comment */
To add comments to a dimension or member, see "Setting
Comments on Dimensions and Members" in the Essbase XTD
Administration Services Online Help.
Understanding
Attributes
You can use the Analytic Services attribute feature to retrieve and
analyze data not only from the perspective of dimensions, but also in
terms of characteristics, or attributes, of those dimensions. For
example, you can analyze product profitability based on size or
packaging, and you can make more effective conclusions by
incorporating into the analysis market attributes such as the
population size of each market region.
Such an analysis could tell you that decaffeinated drinks sold in cans in
small (less than 6,000,000-population) markets are less profitable
than you anticipated. For more details, you can filter the analysis by
specific attribute criteria, including minimum or maximum sales and
profits of different products in similar market segments.
Product Year Florida Profit Actual

              Bottle        Can   Pkg Type
           =========  =========  =========
32               946        N/A        946
20               791        N/A        791
16               714        N/A        714
12               241      2,383      2,624
Ounces         2,692      2,383      5,075
Understanding
Attribute Dimensions
In the Sample Basic database, products have attributes that are
characteristics of the products. For example, products have an
attribute that describes their packaging. In the outline, you see these
characteristics as two dimensions, the Products dimension, and the
Pkg Type attribute dimension that is associated with it. An attribute
dimension has the word Attribute next to its name in the outline.
Figure 61 shows part of the Sample Basic outline featuring the Product
dimension and three attribute dimensions, Caffeinated, Ounces, and
Pkg Type.
Data Storage
• You can associate attribute dimensions only with sparse standard
dimensions.

Data Retrieval
• Because attributes have a text, Boolean, date, or numeric type, you
can use appropriate operators and functions to work with and display
attribute data. For example, you can view sales totals of all products
introduced after a specific date.
• You can group numeric attributes into ranges of values and let the
dimension building process automatically associate the base member
with the appropriate range. For example, you can group sales in
various regions based on ranges of their populations: less than
3 million, between 3 and 6 million, and so on.
• Through the Attribute Calculations dimension, you can view
aggregations of attribute values as sums, counts, minimums,
maximums, and averages.
• You can use an attribute in a calculation that defines a member. For
example, you can use the weight of a product in ounces to define the
Profit per Ounce member of the Measures dimension.

Data Conversion

Calculation Scripts
• You can perform calculations on base members whose attribute
value satisfies conditions that you specify. For example, you can
calculate the Profit per Ounce of each base member.
Designing Attribute Dimensions
Analytic Services provides more than one way to design attribute
information into a database. Most often, defining characteristics of the
data through attribute dimensions and their members is the best
approach. The following sections discuss when to use attribute
dimensions, when to use other features, and how to optimize
performance when using attributes.
• You can view attribute data only when you want to, create
meaningful summaries through crosstabs, and, using type-based
comparisons, selectively view just the data you want to see.
• Additional calculation functionality
Define the member name settings before you define or build the
attribute dimensions. Changing the settings after the attribute
dimensions and members are defined could result in invalid member
names.
The convention that you select applies to the level 0 member names of
all numeric, Boolean, and date attribute dimensions in the outline. You
can define aliases for these names if you wish to display shorter names
in retrievals.
Before you can set an attribute dimension type as Boolean, you must
delete all existing members in the dimension.
If you change the date member name format, the names of existing
members of date attribute dimensions may be invalid. For example, if
the 10-18-1999 member exists and you change the format to dd-mm-
yyyy, outline verification will find this member invalid. If you change
the date format, you must rebuild the date attribute dimensions.
In the dimension build rules file, specify the size of the range for each
member of the numeric attribute dimension. In the above example,
each attribute represents a range of 3,000,000.
Regardless of the name that you use for a member, its function
remains the same. For example, the second (Count) member always
counts, no matter what you name it.
To change the names of the members in the Attribute Calculations
dimension, see "Changing Member Names of Attribute Calculations
Dimensions" in the Essbase XTD Administration Services Online Help.
Calculating Attribute Data
Analytic Services calculates attribute data dynamically at retrieval
time, using members from a system-defined dimension created
specifically by Analytic Services. Using this dimension, you can apply
different calculation functions, such as a sum or an average, to the
same attribute. You can also perform specific calculations on members
of attribute dimensions; for example, to determine profitability by
ounce for products sized by the ounce.
You can change these default member names, subject to the same
naming conventions as standard members. For a discussion of
Attribute Calculations member names, see Changing the Member
Names of the Attribute Calculations Dimension.
Note: For syntax information and examples for these functions, see
the Technical Reference. For an additional example using
@ATTRIBUTEVAL in a formula, see Calculating an Attribute Formula.
For example:
Member 1 (stored)
   Member A (stored)
      Member 2 (shared)
   Member B (stored)
      Member 1 (shared member whose stored member is Member 1 above)
In this example, when an attribute calculation is performed, the
calculation starts with level 0 Member 2, and stops when it encounters
the first stored member, Member A. Therefore, Member 1 would not be
included in the calculation.
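As noted above, @ATTRIBUTEVAL can be used in a member formula. For example, the Profit per Ounce calculation discussed earlier can be written as the following formula on a Profit per Ounce member (a sketch based on the Sample Basic example; @ATTRIBUTEVAL returns the numeric attribute value, here Ounces, associated with the current base member):

/* Divide Profit by the Ounces attribute value of the current Product member */
Profit/@ATTRIBUTEVAL(@NAME(Ounces));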
Linking Objects to Analytic Services Data
This chapter describes how you can link various kinds of data with any
cell in an Analytic Services database, using a linked reporting object
(LRO). This ability is similar to the file attachment features in an e-
mail software package.
• Understanding LROs
• Understanding LRO Types and Data Cells
• Setting Up Permissions for LROs
• Viewing and Deleting LROs
• Exporting and Importing LROs
• Limiting LRO File Sizes for Storage Conservation
Understanding LROs
LROs are objects that you associate with specific data cells in an
Analytic Services database. Users create linked objects through
Spreadsheet Add-in by selecting a data cell and choosing a menu item.
There is no limit to the number of objects you can link to a cell. The
objects are stored on the Analytic Server where they are available to
any user with the appropriate access permissions. Users retrieve and
edit the objects through the Linked Objects Browser, which displays all
objects linked to the selected cell.
Understanding LRO Types and Data Cells
LROs are linked to data cells, not to the data contained in the cells.
The link is based on a specific member combination in the database.
Adding or removing links to a cell does not affect the cell contents.
Before you perform any tasks related to LROs, be aware of these facts:
Setting Up Permissions for LROs
Users who add, edit, and delete LROs through client interfaces need to
have the appropriate security permissions in the active database. If
the object is a linked partition, the user must also have the required
permissions in the database containing the linked partition.
This table lists the permissions required for several different tasks.
Sometimes you might want to prevent users from linking files to data
cells without changing their access to other data in a database. You
can accomplish this by setting the maximum file size for linked files to
1. Users can then create cell notes, link to a URL, or view linked
partitions but can only attach very small files (under 1 kilobyte).
To set the maximum LRO file size for an application, see "Limiting
LRO File Sizes" in Essbase XTD Administration Services Online Help.
To view the linked objects for a database, use any of the following
methods:
To delete the linked objects for a database, use any of the following
methods:
Exporting and Importing LROs
To improve backup and data-migration capabilities, you can export and
re-import LROs from data intersections in a database.
To export and import linked objects for a database, use any of the
following methods:
To prevent users from attaching anything except very small files, enter
1. Setting the file size to 1 lets users link only cell notes, URLs, and
files less than 1 kilobyte in size.
Note: The maximum file size setting applies only to linked files and
does not affect cell notes or URLs. The maximum cell note length is
fixed at 599 characters. The maximum URL string length is fixed at
512 characters.
To limit the size of a linked object, use any of the following methods:
Data for each TBC market location is captured in local currency. U.S.
dollar values are derived by applying exchange rates to local values.
After all actuals are processed, budget data is converted with budget
exchange rates.
Structure of Currency Applications
In a business application requiring currency conversion, the main
database is divided into at least two slices. One slice handles input of
the local data, and another slice holds a copy of the input data
converted to a common currency.
Main Database
To enable Analytic Services to generate the currency database outline
automatically, you modify dimensions and members in the main
database outline. In the Sample currency application, the main
database is Interntl.
Europe
   UK            GBP (British pound)
   Germany       EUR (Euro)
   Switzerland   CHF (Swiss franc)
   Sweden        SEK (Swedish krona)

CurName   Country
   USD    US dollar
   CND    Canadian dollar
   GBP    British pound
   EUR    Euro
   CHF    Swiss franc
   SEK    Swedish krona
Conversion Methods
Different currency applications have different conversion requirements.
Analytic Services supports two conversion methods:
• Overwriting local values with converted values.
• Keeping both local and converted values in the main database.
Building Currency Conversion Applications and Performing Conversions
To build a currency conversion application and perform conversions,
use the following process:
1. Create or open the main database outline. See Creating Main
Database Outlines.
2. Prepare the main database outline for currency conversion.
See Preparing Main Database Outlines.
3. Generate the currency database outline. See Generating
Currency Database Outlines.
4. Link the main and currency databases. See Linking Main and
Currency Databases.
5. Convert currency values. See Converting Currency Values.
6. Track currency conversions. See Tracking Currency
Conversions.
7. If necessary, troubleshoot currency conversion. See
Troubleshooting Currency Conversion.
You can convert all or part of the main database using the rates
defined in the currency database. You can overwrite local values with
converted values, or you can keep both local and converted values in
the main database, depending on your tracking and reporting needs.
Note: When running a currency conversion, ensure that the data
being converted is not simultaneously being updated by other user
activities (for example, a calculation, data load, or currency
conversion against the same currency partition). Concurrent activity
on the data being converted may produce incorrect results. Analytic
Services does not display a warning message in this situation.
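The following three calculation script fragments illustrate the CCONV command (comments added here for clarity; member and rate names are from the Sample currency application):

/* Convert the entire database to USD, using the rates defined in the currency database */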
CCONV USD;
CALC ALL;
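/* Convert to USD, using the January exchange rate */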
CCONV Jan->USD;
CALC ALL;
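/* Convert converted values back to their local currencies, using the "Act xchg" rate */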
CCONV TOLOCALRATE "Act xchg";
CALC ALL;
Note: You cannot use the FIX command unless you are using a
currency partition dimension and the CCTRACK setting is TRUE in
the essbase.cfg file.
/* Copy data from the local partition to the master
partition (for converted values) */
DATACOPY Act TO Actual;
DATACOPY Bud TO Budget;
/* Convert the Actual data values using the "Act xchg" rate
*/
FIX(Actual)
CCONV "Act xchg">US$;
ENDFIX
/* Convert the Budget data values using the "Bud xchg" rate
*/
FIX(Budget)
CCONV "Bud xchg">US$;
ENDFIX
/* Convert the "Actual @ Bud XChg" data values using the
"Bud xchg" rate */
FIX("Actual @ Bud XChg")
CCONV "Bud xchg">US$;
ENDFIX
/* Recalculate the database */
CALC ALL;
CALC TWOPASS;
The following calculation script converts the Actual and Budget values
back to their original local currency values:
FIX(Actual)
CCONV TOLOCALRATE "Act xchg";
ENDFIX
FIX(Budget)
CCONV TOLOCALRATE "Bud xchg";
ENDFIX
CALC ALL;
Calculating Databases
If you execute a CALC ALL command to consolidate the database after
running a conversion, meaningful total-level data is generated in the
converted base rate partition, but the local rate partition contains a
meaningless consolidation of local currency values. To prevent
meaningless consolidation, use the calculation command SET
UPTOLOCAL, which restricts consolidations to parents with the same
defined currency. For example, all cities in the US use dollars as the
unit of currency. Therefore, all children of US consolidate to US.
Consolidation stops at the country level, however, because North
America contains countries that use other currencies.
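A minimal sketch of such a script (assuming the SET UPTOLOCAL calculation command described above, with ON enabling the restriction):

/* Restrict consolidation to parents that have the same defined currency */
SET UPTOLOCAL ON;
CALC ALL;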
Converting Currencies in Report Scripts
You can convert currencies in report scripts, using the CURRENCY
command to set the output currency and the currency type. For the
syntax and definitions of Report Writer commands, see the Technical
Reference.
The following Sample report contains first quarter Budget Sales for
colas, using the January exchange rate for the Peseta currency.
                     Illinois Sales Budget
                   Jan       Feb       Mar
                ========  ========  ========
100-10               3         3         3
100-20               2         2         2
100-30        #Missing  #Missing  #Missing
100                  5         5         5

            Currency: Jan>Peseta>Act xchg

                     Illinois Sales Budget
                   Jan       Feb       Mar
                ========  ========  ========
100-10               3         3         3
100-20               2         2         2
100-30        #Missing  #Missing  #Missing
100                  5         5         5
Use the following script to create the Sample currency conversion
report:
<Page (Market, Measures, Scenario)
{SupCurHeading}
Illinois Sales Budget
<Column (Year)
<children Qtr1
<Currency "Jan>Peseta>Act xchg"
<Ichildren Colas
!
{CurHeading}
Illinois Sales Budget
<Column (Year)
<children Qtr1
!
Note: Always do a partial data load to the local partition and use
the DATACOPY command to copy the entire currency partition to the
converted partition before running the currency conversion.
Updating data directly into the converted partition causes incorrect
results.
Designing Partitioned Applications
An Analytic Services partitioned application can span multiple servers,
processors, or computers. A partition is the piece of a database that is
shared with another database. This chapter discusses the following
topics:
• What Is a Partition?
• Data Sources and Data Targets
• Overlapping Partitions
• Attributes in Partitions
What Is a Partition?
A partition is a piece of a database that is shared with another
database. Partitions contain the following parts, as illustrated in
Figure?69.
A single database can serve as the data source or data target for
multiple partitions. To share data among many databases, create
multiple partitions, each with the same data source and a different
data target:
Overlapping Partitions
An overlapping partition occurs when similar data from two or more
databases serves as the data source for a single data target in a
partition. For example, IDESC East, Sales from database 1 and
Boston, Sales from database 2 are mapped to IDESC East, Sales
and Boston, Sales in database 3. Because Boston is a member of the
dimension East, the data for Boston that is mapped to database 3 from
database 1 and from database 2 overlaps. This data overlap results in
an overlapping partition.
Attributes in Partitions
You can use attribute functions for partitioning on attribute values. But
you cannot partition an attribute dimension. Use attribute values to
partition a database when you want to access members of a dimension
according to their characteristics.
For example, in the Sample Basic database, you cannot partition the
Pkg Type attribute dimension. But you can create a partition that
contains all the members of the Product dimension that are associated
with either or both members (Bottle and Can) of the Pkg Type
dimension. If you create a partition that contains members associated
with Can, you can access data only on Product members that are
packaged in cans; namely, 100-10, 100-20, and 300-30.
Source                    Target
@ATTRIBUTE(Caffeinated)   @ATTRIBUTE(Caffeinated)

Source        Target
Caffeinated   Caffeinated
Deciding Whether to Partition a Database
Partitioning a database is not always the correct option. The following
sections provide questions you can use to determine if partitioning the
database is the best solution for you.
Determining Which Data to Partition
When designing a partitioned database, find out the following
information about the data in the database:
• Which database should be the data source and which the data
target? The database that "owns" the data should be the data
source. Owning the data means that this is the database
where the data is updated and where most of the detail data
is stored.
• Are some parts of the database accessed more frequently
than others?
• What data can you share among multiple sites?
• How granular does the data need to be at each location?
• How frequently is the data accessed, updated, or calculated?
• What are the available resources? How much disk space is
available? CPUs? Network resources?
• How much data needs to be transferred over the network?
How long does that take?
• Where is the data stored? Is it in one location or in more than
one location?
• Where is the data accessed? Is it in one location or in more
than one location?
• Is there information in separate databases that should be
accessed from a central location? How closely are groups of
data related?
Replicated Partitions
A replicated partition is a copy of a portion of the data source that is
stored in the data target. Some users can then access the data in the
data source while others access it in the data target.
Changes to the data in a replicated partition flow from the data source
to the data target. Changes made to replicated data in the data target
do not flow back to the data source. If users change the data at the
data target, Analytic Services overwrites their changes when the
database administrator updates the replicated partition.
The database administrator can prevent the data in the replicated
portion of the data target from being updated. This setting takes
precedence over access provided by security filters and is also honored
by batch operations such as data load and calculation. By default,
replicated partitions are not updateable. For directions on how to set a
partition as updateable, see the Essbase XTD Administration Services
Online Help.
• You need more disk space, because you are storing the data
in two or more locations.
• The data must be refreshed regularly by the database
administrator, so it is not up-to-the-minute.
Transparent Partitions
A transparent partition allows users to manipulate data that is stored
remotely as if it were part of the local database. The remote data is
retrieved from the data source each time that users at the data target
request it. Users do not need to know where the data is stored,
because they see it as part of their local database.
• Keep the partition fully within the calculator cache area (see
Sizing the Calculator Cache). Keeping a partition fully within
the calculator cache means that any sparse members in the
partition definition must be contained within the calculator
cache. For example, in the Sample Basic database, if a
partition definition includes @IDESC(East), all descendants of
East must be within the calculator cache.
• Enable the calculator cache, and assign a sufficient amount of
memory to it.
• Do not use complex formulas on any members that define the
partition. For example, in Sample Basic, assigning a complex
formula to New York or New Jersey (both children of East)
forces Analytic Services to use the top-down calculation
method. For more information, see Bottom-Up and Top-Down
Calculation.
For example, suppose that the data source and data target outlines
both contain a Market dimension with North and South members, and
children of North and South. On the data target, Market is calculated
from the data for the North and South members (and their children)
on the data source. If any of these members on the data source
contain member formulas, these formulas are calculated, thus
affecting the calculated value of Market on the data target. These
results may differ from the results that would be produced if Market
were calculated from the North and South members on the data
target, where those formulas may not exist.
Make sure that any formulas you assign to members in the data source
and data target produce the desired results.
Linked Partitions
A linked partition connects two different databases with a data cell.
When the end user clicks the linked cell in the data source, you drill
across to a second database, the data target, and view the data there.
If you are using Spreadsheet Add-in, for example, a new sheet opens
displaying the dimensions in the second database. You can then drill
down into these dimensions.
For example, if TBC grew into a large company, they might have
several business units. Some data, such as profit and sales, exists in
each business unit. TBC can store profit and sales in a centralized
database so that the profit and sales for the entire company are
available at a glance. The database administrator can link individual
business unit databases to the corporate database. For an example of
creating a linked partition, see Case Study 3: Linking Two Databases.
A user in such a scenario can perform these tasks:
For linked partitions, the spreadsheet that the user first views is
connected to the data target, and the spreadsheet that opens when
the user drills across is connected to the data source. This setup is the
opposite of replicated and transparent databases, where users move
from the data target to the data source.
• You can view data in a different context; that is, you can
navigate between databases containing many different
dimensions.
• You do not have to keep the data source and data target
outlines closely synchronized, because less of the outline is
shared.
• A single data cell can allow the user to navigate to more than
one database. For example, the Total Profit cell in the
Accounting database can link to the Profit cells in the
databases of each business unit.
• Performance may improve, because Analytic Services is
accessing the database directly and not through a data
target.
Smaller databases                x    x
Easier to recover                x
Less synchronization required    x
Everyone agreed that the eastern region needed to access its own data
directly, without going through the company database. In addition,
TBC decided to change where budgeting information was stored. The
corporate budget stays at company headquarters, but the eastern
region budget moves to the eastern region's database.
So, assume that TBC decided to ask you to partition their large
centralized database into two smaller databases: Company and East.
Now that the Sample Basic database is partitioned, users and database
administrators see the following benefits:
The database administrator for the Sample Basic database notices that
more and more users are requesting that she add channel information
to the Sample Basic database. But, since she does not own the data
for channel information, she is reluctant to do so. She decides instead
to allow her users to link to the TBC Demo database which already
contains this information.
Now that the databases are linked, users and database administrators
see the following benefits:
Creating and Maintaining Partitions
When you build a new partition, each database in the partition uses a
partition definition file to record all information about the partition,
such as its data source and data target and the areas to share. You
must have Database Designer permissions or higher to create a
partition. This chapter contains the following sections that describe how
to create a replicated, transparent, or linked partition:
After you create a partition, you must maintain the partition. This
chapter contains the following sections that describe how to maintain
an existing partition:
• Transfer data between the data source and the data target for
replicated and transparent partitions. Local security filters
apply to prevent end users from seeing privileged data.
• Synchronize database outlines for all partition types.
When you define a replicated area, make sure that both the data
source and data target contain the same number of cells. This verifies
that the two partitions have the same shape. For example, if the area
covers 18 cells in the data source, the data target should contain an
area covering 18 cells into which to put those values. The cell count
does not include the cells of attribute dimensions.
Mapping Members
To create a partition, Analytic Services must be able to map all shared
data source members to data target members. Hyperion recommends
that data source member names and data target member names are
the same to reduce the maintenance requirements for the partition,
especially when the partition is based on member attributes.
If the data source and the data target contain the same number of
members and use the same member names, Analytic Services
automatically maps the members. You need only validate, save, and
test the partitions, as described in Validating Partitions, Saving
Partitions, and Testing Partitions. If Analytic Services cannot map
automatically, you must map manually.
Map data source members to data target members in any of the
following ways:
• Enter or select member names manually.
• Import the member mappings from an external data file.
• Create area-specific mappings.
Source     Target
Product    Product
   Cola       Cola
Year       Year
   1998       1998
Market     Market
   East       East_Region

Source     Target
Product    Product
   Cola       Cola
Market     Market
   East       East
Year
   1999
   1998
   1997
If you want to map member 1997 of the Year dimension from the data
source to the data target, you can map it to Void in the data target.
But first, you must define the areas of the data source to share with
the data target:
Source Target
@DESCENDANTS(Market), 1997 @DESCENDANTS(Market)
You can then map the data source member to Void in the data target:
Source Target
1997 Void
If you do not include at least one member from the extra dimension in
the area definition, you will receive an error message when you
attempt to validate the partition.
The following example illustrates a case where the data target includes
more dimensions than the data source:
Source     Target
Product    Product
   Cola       Cola
           Market
              East
Year       Year
   1997       1997
In such cases, you must first define the shared areas of the data
source and the data target:
Source Target
@IDESCENDANTS(Product) @IDESCENDANTS(Product), East
You can then map member East from the Market dimension of the data
target to Void in the data source:
Source Target
Void East
If member East from the Market dimension in the data target is not
included in the target areas definition, you will receive an error
message when you attempt to validate the partition.
In the following example, the outline for the data source contains a
Product dimension with a member 100 (Cola). Children 100-10 and
100-20 are associated with member TRUE of the Caffeinated attribute
dimension, and child 100-30 is associated with member FALSE of the
Caffeinated attribute dimension.
The data target outline has a Product dimension with a member 200
(Cola). Children 200-10 and 200-20 are associated with member Yes
of the With_Caffeine attribute dimension, and child 200-30 is
associated with No of the With_Caffeine attribute dimension.
First define the areas to be shared from the data source to the data
target:
Source Target
@DESCENDANTS(100) @DESCENDANTS(200)
@DESCENDANTS(East) @DESCENDANTS(East)
Source              Target
100-10              200-10
100-20              200-20
100-30              200-30
Caffeinated         With_Caffeine
Caffeinated_True    With_Caffeine_True
Caffeinated_False   With_Caffeine_No

Source        Target
Caffeinated
   True
   False
If, however, you need to control how Analytic Services maps members
at a more granular level, you may need to use area-specific mapping.
Area-specific mapping maps members in one area to members in
another area only in the context of a particular area map.
The data source and data target contain the following dimensions and
members:
Source     Target
Product    Product
   Cola       Cola
Market     Market
   East       East
Year       Year
   1998       1998
   1999       1999
           Scenario
              Actual
              Budget
You know that 1998 in the data source should correspond to 1998,
Actual in the data target and 1999 in the data source should
correspond to 1999, Budget in the data target. So, for example, if the
data value for Cola, East, 1998 in the data source is 15, then the data
value for Cola, East, 1998, Actual in the data target should be?15.
Because the data source does not have Actual and Budget members,
you must also map these members to Void in the data target.
You can also use advanced area-specific mapping if the data source
and data target are structured very differently but contain the same
kind of information.
This strategy works, for example, if your data source and data target
contain the following dimensions and members:
Source     Target
Market     Customer_Planning
   NY         NY_Actual
   CA         NY_Budget
              CA_Actual
              CA_Budget
Scenario
   Actual
   Budget
You know that NY and Actual in the data source should correspond to
NY_Actual in the data target and NY and Budget in the data source
should correspond to NY_Budget in the data target. So, for example, if
the data value for NY, Budget in the data source is 28, then the data
value for NY_Budget in the data target should be 28.
Because the data target does not have NY and CA members, you must
also map these members to Void in the data target so that the
dimensionality is complete when going from the data source to the
data target.
Validating Partitions
When you create a partition, validate it to ensure that it is accurate
before you use it. To validate a partition, you must have Database
Designer permissions or higher. After you validate, save the partition
definition; if necessary, you can edit an existing partition. When you
save a partition, the partition definition is saved to two different .ddb
files, on both the data source server and the data target server.
Saving Partitions
After you validate the partition definition, you can save the partition
definition to any of the following locations:
• To both the data source server and the data target server. The
partition definition is stored in two .ddb files.
• To a client machine. The partition definition is stored in a
single .ddb file.
To save a partition definition, see "Saving Partitions" in the Essbase
XTD Administration Services Online Help.
Testing Partitions
To test a partition:
Synchronizing Outlines
When you partition a database, Analytic Services must be able to map
each dimension and member in the data source outline to the
appropriate dimension and member in the data target outline. After
you map the two outlines to each other, Analytic Services can make
the data in the data source available from the data target as long as
the outlines are synchronized and the partition definitions are up-to-
date.
If you make changes to one of the outlines, the two outlines are no
longer synchronized. Although Analytic Services does try to make
whatever changes it can to replicated and transparent partitions when
the outlines are not synchronized, Analytic Services may not be able to
make the data in the data source available in the data target.
By default, the source outline is from the same database as the data
source; that is, outline and data changes flow in the same direction.
For example, if the East database is the data source and the Company
database is the data target, then the default source outline is East.
You can also use the data target outline as the source outline. You
might want to do this if the structure of the outline (its dimensions,
members, and properties) is maintained centrally at a corporate level,
while the data values in the outline are maintained at the regional level
(for example, East). This allows the database administrator to make
changes in the Company outline and apply those changes to each
regional outline when she synchronizes the outline.
To set the source outline, see Setting up the Data Source and the
Data Target.
Action You Take                Action Analytic Services Takes
Any change made to a member that does not have at least one actual
member (or shared member) in the defined partition area is not
propagated to the target outline. For example, in Figure 82, a change
to the parent 100 is not propagated to the target outline because it is
in the undefined partition area and does not have an associated shared
member in the defined partition area.
If a shared member is included in the partition area, it is recommended
that you also include its parent. In the above example, the parent
Diet is included in the outline because its children are shared members
in the defined partition area.
The reverse is also true: if A1 is not defined in the partition area but
its implied shared member is, then any change to A1 is propagated to
the target outline.
Analytic Services also tracks which cells in a partition are changed. You
can choose to update:
• Just the cells that have changed since the last replication.
Updating only the changed cells is fastest.
• All cells. Updating all cells is much slower. You may need to
update all cells if you are recovering from a disaster in which
the data in the data target has been destroyed or corrupted.
This chapter helps you understand Hybrid Analysis and explains how
you can take advantage of its capabilities. The chapter includes the
following topics:
Understanding Hybrid Analysis
Hybrid Analysis integrates a relational database with an Analytic
Services multidimensional database so that applications and reporting
tools can directly retrieve data from both databases. Figure 84
illustrates the hybrid analysis architecture:
Using the OLAP model, you define hierarchies and tag levels whose members
are to be enabled for hybrid analysis. You then build the metaoutline, a
template containing the structure and rules for creating the Analytic
Services outline, down to the desired hybrid analysis level. The
information enabling hybrid analysis is stored in the OLAP Metadata
Catalog, which describes the nature, source, location, and type of data
in the hybrid analysis relational source.
Data Retrieval
Applications and reporting tools, such as spreadsheets and Report
Writer interfaces, can directly retrieve data from both databases (2 in
Figure 84). Using the dimension and member structure defined in the
outline, Analytic Services determines the location of a member and
then retrieves data from either the Analytic Services database or the
hybrid analysis relational source. If the data resides in the hybrid
analysis relational source, Analytic Services retrieves the data through
SQL commands. Data retrieval is discussed in Retrieving Hybrid
Analysis Data.
If you want to modify the outline, you can use Outline Editor in
Administration Services to enable or disable dimensions for hybrid
analysis on an as-needed basis (3 in Figure 84). For information on
using Outline Editor, see Using Outline Editor with Hybrid Analysis.
Defining Hybrid Analysis Relational Sources
A hybrid analysis relational source is defined in Integration Services
Console. Detailed information and the specific procedures for defining
a hybrid analysis relational source are available in the Integration
Services online help.
Note: If you are using two servers during hybrid analysis, the
Data Source Name (DSN) must be configured on both servers
and the DSN must be the same.
Note: For detailed information on the above steps, see the Essbase
XTD Integration Services Console online help.
Retrieving Hybrid Analysis Data
In Hybrid Analysis, applications and reporting tools can directly
retrieve data from both the relational and the Analytic Services
databases by using the following tools:
The <ASYM and <SYM commands are not supported with Hybrid
Analysis. If these commands are present in a report, errors may
result. The <SPARSE command is ignored in reports retrieving data
from a hybrid analysis relational source and does not generate errors.
<PAGE (Accounts, Scenario, Market)
Sales
Actual
<Column (Time)
<CHILDREN Time
<Row (Product)
<IDESCENDANTS 100-10
!
Managing Data Consistency
When you create a hybrid analysis relational source, the data and
metadata are stored and managed in both the relational database and
the Analytic Services database.
Warnings are listed in the application log. You should decide if the
warnings reflect a threat to data consistency. To view the application
log, see Viewing the Analytic Server and Application Logs.
The Analytic Services administrator has the responsibility to ensure
that the Analytic Services multidimensional database, the relational
database, and the Integration Services OLAP model and metaoutline
remain in sync. Both Administration Services and Integration Services
Console provide commands that enable the administrator to perform
consistency checks and make the appropriate updates.
Managing Security in Hybrid Analysis
The Analytic Services administrator determines access to the hybrid
analysis relational source on an individual Analytic Services user level.
Access for Hybrid Analysis is governed by the same factors that affect
overall Analytic Services security:
Assume that you have the following outline, where San Francisco and
San Jose are relational children of California, and Miami and Orlando
are relational children of Florida:
In this example, if a filter allows you to view only level 0 member
California and its descendants, you can view California and its
relational children, San Francisco and San Jose; however, you cannot
view the children of level 0 member Florida.
Error executing formula for member
[membernametowhichformulaisattached] (line [line#
where the offending function appears inside the formula):
function [Name of the offending function] cannot be used in
Hybrid Analysis.
Unsupported Functions in Hybrid Analysis
Hybrid Analysis does not support all Analytic Services functions. The
following topics specify the categories of significant Analytic Services
functions not supported by Hybrid Analysis.
Relationship Functions
Hybrid Analysis does not support functions that look up specific values
in the database based on current cell location and a series of
parameters. Some examples of these functions are given next.
@ANCEST @SPARENT
@SANCEST @CURLEV
@PARENT @CURGEN
@ISIANCEST @ISLEV
@ISIPARENT @ISSAMEGEN
@ISISIBLING @ISUDA
@ISMBR
Range Functions
Hybrid Analysis does not support functions that use a range of
members as arguments. Rather than return a single value, these
functions calculate a series of values internally based on the range
specified. Some examples of range functions that are not supported
are listed next.
@PRIOR @MOVAVG
@SHIFT @ALLOCATE
@PRIORS @MDALLOCATE
@SHIFTS @VAR
@NEXT @VARPER
@MDSHIFT @MEDIAN
@MOVSUM @RANK
Attribute Functions
Hybrid Analysis does not support any Analytic Services functions that
deal with attributes. Some examples of these functions are listed next.
@ATTRIBUTEVAL
@ATTRIBUTESVAL
@WITHATTR
@CURRMBR
@XREF
Understanding Data Loading and Dimension Building
An Analytic Services database contains dimensions, members, and
data values.
• You can add data values, that is, numbers, to an Analytic
Services database from a data source, such as a spreadsheet
or a SQL database. This process is called loading data. If the
data source is not perfectly formatted, you need a rules file to
load the data values.
• You can add dimensions and members to an Analytic Services
database manually, by using Outline Editor. You can also load
dimensions and members into a database by using a data
source and a rules file. This process is called building
dimensions.
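For example, a data load can also be launched from MaxL, which is covered later in this guide. A minimal sketch (the data file name 'monthly' and the rules file name 'loadrule' are illustrative; both files are assumed to reside on Analytic Server):

import database sample.basic data
   from server data_file 'monthly'
   using server rules_file 'loadrule'
   on error abort;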
Data Sources
Data sources contain the information that you want to load into the
Analytic Services database. A data source can contain data values;
information about members, such as member names, member aliases,
formulas and consolidation properties; generation and level names;
currency name and category; data storage properties; attributes; and
UDAs (user-defined attributes).
Analytic Services reads data sources starting at the top and proceeding
from left to right.
• Data fields contain the numeric data values that are loaded
into the intersections of the members of the database. Each
data value must map to a dimension intersection. In
Figure 87, for example, 42 is the data value that corresponds
to the intersection of Texas, 100-10, Jan, Sales, and Actual.
You can specify information in the header and in an individual
record. In Figure 88, for example, 100 is the data value that
corresponds to the intersection of Jan, Actual, Cola, East, Sales
and 200 is the data value that corresponds to the intersection of
Jan, Actual, Cola, West, Sales.
Jan, Actual
Cola   East    Sales   100
Cola   West    Sales   200
Cola   South   Sales   300
Data fields are used only for data loading; dimension builds ignore
data fields. The following sections describe each item in a data source:
A dimension field must contain a valid dimension name. If you are not
performing a dimension build, the dimension must already exist in the
database. If you are performing a dimension build, the dimension
name can be new, but the new name must be specified in the rules
file.
Either the data source or the rules file must contain enough
information for Analytic Services to determine where to put each data
value. A data field contains the data value for its intersection in the
database. In Figure 87, for example, 42 is a data field. It is the dollar
sales of 100-10 (Cola) in Texas in January.
If the data source contains a member field for every dimension and
one field that contains data values, you must define the field that
contains data values as a data field in the rules file. To read Figure 89
into the Sample Basic database, for example, define the last field as a
data field.
Jan Cola East Sales Actual 100
Feb Cola East Sales Actual 200
To define a data field, see "Defining a Column as a Data Field" in
Essbase XTD Administration Services Online Help.
Note: If the data source contains blank fields for data values,
replace them with #MI or #MISSING. Otherwise, the data may not
load correctly. For instructions on how to replace a blank field with
#MI or #MISSING, see Replacing an Empty Field with Text.
Valid Delimiters
You must separate fields from each other with delimiters. If you are
loading data without a rules file, you must use spaces to delimit
fields.
If you are using a rules file, delimiters can be any of the following:
East   Cola   Actual   Jan   Sales   10
East   Cola   Actual   Feb   Sales   21
East   Cola   Actual   Mar   Sales   30
East,,Cola,Actual,Jan,Sales,10
East,Cola,Actual,Feb,Sales,21
East,Cola,Actual,Mar,Sales,30
To solve the problem, delete the extra delimiter from the data source.
Valid Formatting Characters
Analytic Services views some characters in the data source as
formatting characters only. For that reason, Analytic Services ignores
the following characters:
East?????Actual????"10010"
?????????Sales?????Marketing
?????????=====?????=========
Jan???????10???????????8
Feb???????21??????????16
Rules Files
Rules are a set of operations that Analytic Services performs on data
values or on dimensions and members when it processes a data
source. Use rules to map data values to an Analytic Services database
or to map dimensions and members to an Analytic Services outline.
After you create a dimension build rules file, you may want to
automate the process of updating dimensions by using ESSCMD.
You do not need a rules file if you are performing a data load and the
data source maps perfectly to the database. For a description of a data
source that maps perfectly, see Data Sources That Do Not Need a
Rules File.
Note: If you are using a rules file, each record in the rules file must
have the same number of fields. See Dealing with Missing Fields in
a Data Source.
Sales "10010" Ohio Jan Actual 25
Sales "10020" Ohio Jan Actual 25
Sales "10030" Ohio Jan Actual 25
If the data source is not correctly formatted, it will not load. You can
edit the data source using a text editor and fix the problem. If you find
that you must perform many edits (such as moving several fields and
records), it might be easier to use a rules file to load the data source.
For a definition and discussion of rules files, see Rules Files.
The following sections describe more complicated ways to format free-
form data sources:
A data source can contain ranges from more than one dimension at a
time.
In Figure 95, for example, Jan and Feb form a range in the Year
dimension and Sales and COGS form a range in the Measures
dimension.
Actual Texas     Sales         COGS
                 Jan    Feb    Jan    Feb
"100-10"         98     89     26     19
"100-20"         87     78     23     32
In Figure 95, Sales is defined for the first two columns and COGS for
the last two columns.
Texas Sales
                   Jan   Feb   Mar
Actual  "100-10"   98    89    58
        "100-20"   87    78   115
Figure 97, for example, contains more data fields than member fields
in the defined range of members. The data load stops when it reaches
the 10 data field. Analytic Services loads the 100 and 120 data fields
into the database.
Figure 97: Extra Data Values
Cola    Actual    East
          Jan     Feb
Sales     100     120     10
COGS       30      34     32
The file in Figure 98 contains two ranges: Actual to Budget and Sales
to COGS. It also contains duplicate members.
Cola East
Actual Budget Actual Budget
Sales Sales COGS COGS
Jan 108 110 49 50
Feb 102 120 57 60
For Actual, the first member of the first range, Analytic Services maps
data values to each member of the second range (Sales and COGS).
Analytic Services then proceeds to the next value of the first range,
Budget, similarly mapping values to each member of the second
range. As a result, Analytic Services interprets the file as shown in
Figure 100.
Cola East
Actual Budget
Sales COGS Sales COGS
Jan 108 110 49 50
Feb 102 120 57 60
Symmetric Columns
If you are performing a dimension build, skip this section. You cannot
perform a dimension build without a rules file.
Product     Measures    Market    Year    Scenario
"100-10"    Sales       Texas     Jan     Actual      112
"100-10"    Sales       Ohio      Jan     Actual      145
The columns in the following file are also symmetric, because Jan and
Feb have the same number of members under them:
Jan Feb
Actual Budget Actual Budget
"10010" Sales Texas 112 110 243 215
"10010" Sales Ohio 145 120 81 102
Asymmetric Columns
If you are performing a dimension build, skip this section. You cannot
perform a dimension build without a rules file.
Jan Jan Feb
Actual Budget Budget
"10010" Sales Texas 112 110 243
"10010" Sales Ohio 145 120 81
The file in Figure 104, for example, is not valid because the column
labels are incomplete. The Jan label must appear over both the Actual
and Budget columns.
Jan Feb
Actual Budget Budget
"10010" Sales Texas 112 110 243
"10010" Sales Ohio 145 120 81
The file in Figure 105 is valid because the Jan label is now over both
Actual and Budget. It is clear to Analytic Services that both of those
columns map to Jan.
Jan Jan Feb
Actual Budget Budget
"10010" Sales Texas 112 110 243
"10010" Sales Ohio 145 120 81
• Security Issues
You can load data values while multiple users are connected to a
database. Analytic Services uses a block locking scheme for
handling multi-user issues. When you load data values, Analytic
Services does the following:
o Locks the block it is loading into so that no one can
write to the block.
o Updates the block.
See Ensuring Data Integrity for information on Analytic
Services transaction settings, such as identifying whether
other users get read-only access to the locked block or
noting how long Analytic Services waits for a locked block
to be released.
Understanding the Process for Creating Data Load Rules Files
To create a data load rules file, follow these steps:
1. Determine whether to use the same rules file for data loading
and dimension building.
For a discussion of factors that influence your decision, see
Combining Data Load and Dimension Build Rules Files.
2. Create a new rules file.
For a process map, see Creating Rules Files.
3. Set the file delimiters for the data source.
For a description of file delimiters, see Setting File Delimiters.
4. If necessary, set record, field, and data operations to change
the data in the data source during loading.
For a comprehensive discussion, see Using a Rules File to
Perform Operations on Records, Fields, and Data.
5. Validate and save the rules file.
For references for pertinent topics, see Validating and Saving.
Understanding the Process for Creating Dimension Build Rules Files
To create a dimension build rules file, follow these steps:
1. Determine whether to use the same rules file for data loading
and dimension building.
For a discussion of factors that influence your decision, see
Combining Data Load and Dimension Build Rules Files.
2. Create a new rules file.
For a process map, see Creating Rules Files.
3. Set the file delimiters for the data source.
For a description of file delimiters, see Setting File Delimiters.
4. If you are creating a new dimension, name the dimension.
For references to pertinent topics, see Naming New Dimensions.
5. Select the build method.
For references to pertinent topics, see Selecting a Build Method.
6. If necessary, change or set the properties of members and
dimensions you are building.
For references to pertinent topics, see Setting and Changing
Member and Dimension Properties.
7. If necessary, set record and field operations to change the
members in the data source during loading.
For a comprehensive discussion, see Using a Rules File to
Perform Operations on Records, Fields, and Data.
8. Set field type information, including field type, field number,
and dimension.
For references to pertinent topics, see Setting Field Type
Information.
9. Validate and save the rules file.
For references to pertinent topics, see Validating and Saving.
Use the same rules file for both data load and dimension build if you
wish to load the data source and build new dimensions at the same
time.
Use separate rules files for data load and dimension build under any of
the following circumstances:
You can open a SQL data source only if you have licensed Essbase SQL
Interface. The Essbase SQL Interface Guide provides information on
supported environments, installation, and connection to supported
data sources. Contact your Analytic Services administrator for more
information. When you open a SQL data source, the fields in the rules
file default to the column names of the SQL data source. If the names are not the
same as the Analytic Services dimension names, you need to map the
fields to the dimensions. For a comprehensive discussion of mapping,
see Changing Field Names.
To open text files and spreadsheet files, see "Opening a Data File" in
the Essbase XTD Administration Services Online Help.
To open SQL data sources, see "Opening a SQL Data Source" in the
Essbase XTD Administration Services Online Help.
Setting File Delimiters
A file delimiter is the character (or characters) used to separate fields
in the data source. By default, a rules file expects fields to be
separated by tabs. You can set the expected file delimiter to a
comma, tab, space, fixed-width column, or custom value. Acceptable
custom values are characters in the standard ASCII character set,
numbered from 0 through 127. Usually, setting the file delimiters is the
first thing you do after opening a data source.
Note: You do not need to set file delimiters for SQL data.
Naming New Dimensions
If you are not creating a new dimension in the rules file, skip this
section.
If you are creating a new dimension, you must name it in the rules file.
Before choosing a dimension name, see Understanding the Rules for
Naming Dimensions and Members.
If you are performing a dimension build, you can set or change the
properties of the members and dimensions in the outline. Some
changes affect all members of the selected dimension, some affect
only the selected dimension, and some affect all dimensions in the
rules file.
You can set or change member and dimension properties by using the
Data Prep Editor or by changing the member properties in the data
source.
Using the Data Prep Editor to Set Dimension and Member
Properties
To set dimension properties, see "Setting Dimension Properties" in
the Essbase XTD Administration Services Online Help.
To set member properties, see "Setting Member Properties" in the
Essbase XTD Administration Services Online Help.
The following table lists all member codes used to assign properties to
members in the data source.
Code      Description
Formula   A formula
If the rules file is correct, you can perform a data load or dimension
build. For a comprehensive discussion of how to load data and
members, see Performing and Debugging Data Loads or Dimension
Builds.
• Selecting Records
• Rejecting Records
• Combining Multiple Select and Reject Criteria
• Setting the Records Displayed
• Defining Header Records
• Ignoring Fields
• Arranging Fields
• Changing Field Names
Performing Operations on Records
You can perform operations at the record level. For example, you can
reject certain records before they are loaded into the database.
• Selecting Records
• Rejecting Records
• Combining Multiple Select and Reject Criteria
• Setting the Records Displayed
• Defining Header Records
Selecting Records
You can specify which records Analytic Services loads into the
database or uses to build dimensions by setting selection criteria.
Selection criteria are string and number conditions that must be met
by one or more fields within a record before Analytic Services loads the
record. If a field or fields in the record do not meet the selection
criteria, Analytic Services does not load the record. You can define one
or more selection criteria. For example, to load only 2003 Budget data
from a data source, create a selection criterion to load only records in
which the first field is Budget and the second field is 2003.
Note: If you define selection criteria on more than one field, you
can specify how Analytic Services combines the criteria. For a brief
discussion, see Combining Multiple Select and Reject Criteria.
Rejecting Records
You can specify which records Analytic Services ignores by setting
rejection criteria. Rejection criteria are string and number conditions
that, when met by one or more fields within a record, cause Analytic
Services to reject the record. You can define one or more rejection
criteria. If no field in the record meets the rejection criteria, Analytic
Services loads the record. For example, to reject Actual data from a
data source and load only Budget data, create a rejection criterion to
reject records in which the first field is Actual.
Note: If you define rejection criteria on more than one field, you
can specify how Analytic Services should combine the criteria. For a
brief discussion, see Combining Multiple Select and Reject Criteria.
Defining Header Records
Rules files contain records that translate the data of the data source to
map it to the database. As part of that information, rules files can also
contain header records. For example, the Sample Basic database has a
dimension for Year. If several data sources arrive with monthly
numbers from different regions, the month itself might not be specified
in the data sources. You must set header information to specify the
month.
You can create a header record using either of the following methods:
• You can define header information in the rules file. Rules file
headers are used only during data loading or dimension
building and do not change the data source. Header
information set in a rules file is not used if the rules file also
points to header records in the data source.
• You can define header information in the data source by using
a text editor or spreadsheet and then pointing to the header
records in the rules file. Placing header information in the
data source makes it possible to use the same rules file for
multiple data sources with different formats, because the data
source format is specified in the data source header and not
in the rules file.
When you add one or more headers to the data source, you
must also specify the location of the headers in the data source
in the rules file. The rules file then tells Analytic Services to read
the header information as a header record and not a data
record. You can also specify which type of header information is
in which header record.
Header information defined in the data source takes precedence
over header information defined in the rules file.
To define a header in the rules file, see "Setting Headers in the Rules
File" in the Essbase XTD Administration Services Online Help.
To define a header in the data source, see "Setting Headers in the
Data Source" in the Essbase XTD Administration Services Online Help.
The header record lists field definitions for each field. The field
definition includes the field type, the field number, and the dimension
name into which to load the fields. The format of a header record is
illustrated in Figure 107:
After you set the header information in the data source, you must
specify the location of the header information in the rules file. If a
rules file refers to header information in a data source, Analytic Services uses the information in the data source, rather than the information in the rules file, to determine field types and dimensions.
For each field type that you set, you must also enter a field number.
When the field type is the name of an attribute dimension, the field
number cannot be greater than 9. For a brief discussion and references
to pertinent topics, see Setting Field Type Information.
Performing Operations
on Fields
You can perform operations at the field level, for example, moving a
field to a new position in the record.
• Ignoring Fields
• Ignoring Strings
• Arranging Fields
• Mapping Fields
• Changing Field Names
Ignoring Fields
You can ignore all fields of a specified column of the data source. The
fields still exist in the data source, but they are not loaded into the
Analytic Services database.
If the data source contains fields that you do not want to load into the
database, tell Analytic Services to ignore those fields. For example, the
Sample Basic database has five standard dimensions: Year, Product,
Market, Measures, and Scenario. If the data source has an extra field,
such as Salesperson, that is not a member of any dimension, ignore
the Salesperson field.
Ignoring Strings
You can ignore any field in the data source that matches a string called
a token. When you ignore fields based on string values, the fields are
ignored everywhere they appear in the data source, not just in a
particular column. Consider, for example, a data source that is a
computer generated report in text format. Special ASCII characters
might be used to create horizontal lines between pages or boxes
around headings. These special characters can be defined as tokens to
be ignored.
To ignore all instances of a string, see "Ignoring Fields Based on
String Matches" in the Essbase XTD Administration Services Online
Help.
Arranging Fields
You can set the order of the fields in the rules file to be different from the order of the fields in the data source. The data source is unchanged. The following sections describe:
• Moving Fields
• Joining Fields
• Creating a New Field by Joining Fields
• Copying Fields
• Splitting Fields
• Creating Additional Text Fields
• Undoing Field Operations
Note: To undo a single operation, select Edit > Undo. To undo one
or more field operations, see "Undoing Field Operations" in the
Essbase XTD Administration Services Online Help.
Moving Fields
You can move fields to a different location using a rules file. For
example, a field might be the first field in the data source, but you
want to move it to be the third field during the data load or dimension
build.
For example, consider the following records, where <tab> represents a tab delimiter and (null) represents an empty field:
1<tab>2<tab>3
1<tab>2<tab>(null)
If you move a field that contains empty cells and the moved field
becomes the last field in the record, the field may merge with the field
to its left.
Joining Fields
You can join multiple fields into one field. The new field is given the
name of the first field in the join. For example, if you receive a data
source with separate fields for product number (100) and product
family (-10), you must join the fields (100-10) before you load them
into the Sample Basic database.
Before you join fields, move the fields to join into the order in which
you want to join them. If you do not know how to move fields, see
"Moving Fields" in the Essbase XTD Administration Services Online
Help.
Creating a New Field by Joining Fields
Suppose that you receive a data source with separate fields for product number (100) and product family (-10) and that you must join the fields (100-10) before you load them into the Sample Basic database.
But suppose that you want the 100 and -10 fields to exist in the data
source after the join; that is, you want the data source to contain
three fields: 100, -10, and 100-10. To do this, create the new field
using a join.
Before you join fields, move the fields to join into the order in which
you want to join them. If you do not know how to move fields, see
Moving Fields.
To create a new field by joining existing fields, see "Creating a New
Field Using Joins" in the Essbase XTD Administration Services Online
Help.
Copying Fields
You can create a copy of a field while leaving the original field intact.
For example, assume that, during a single dimension build, you want
to define a multilevel attribute dimension and associate attributes with
members of a base dimension. To accomplish this task, you need to
copy some of the fields. For more information about attribute
dimensions, see Working with Multilevel Attribute Dimensions.
To copy a field, select one field and then create a new field using a
join.
To create a new field by joining existing fields, see "Creating a New
Field Using Joins" in the Essbase XTD Administration Services Online
Help.
Splitting Fields
You can split a field into two fields. For example, if a data source for
the Sample Basic database has a field containing UPC100-10-1, you
can split the UPC out of the field and ignore it. Then only 100-10-1,
that is, the product number, is loaded. To ignore a field, see Ignoring
Fields.
Creating Additional Text Fields
To create a new field and populate it with text, see "Creating a New Field Using Text" in the Essbase XTD Administration Services Online Help.
Note: To undo a field you created using text, see "Undoing Field
Operations" in the Essbase XTD Administration Services Online
Help.
Mapping Fields
This section applies to data load only. If you are performing a
dimension build, skip this section.
You use a rules file to map data source fields to Analytic Services
member names during a data load. You can map fields in a data source
directly to fields in the Analytic Services database during a data load
by specifying which field in the data source maps to which member or
member combination in the Analytic Services database. The data
source is not changed.
Note: When you open a SQL data source, the fields default to the
SQL data source column names. If the SQL column names and the
Analytic Services dimension names are the same, you do not have
to map the column names.
Performing Operations
on Data
This section applies to data load only. If you are performing a
dimension build, skip this section.
You can perform operations on the data in a field, for example, scaling the data values or flipping their signs.
Market, Product, Year, Measures, Scenario
Texas 100-10 Jan Sales Actual 42
Texas 100-20 Jan Sales Actual 82
Texas 100-10 Jan Sales Actual 37
You can use incoming data values to add to or subtract from existing
database values. For example, if you load weekly values, you can add
them to create monthly values in the database.
You can clear existing data values from the database before you load
new values. By default, Analytic Services overwrites the existing
values of the database with the new values of the data source. If you
are adding and subtracting data values, however, Analytic Services
adds or subtracts the new data values to and from the existing values.
Before adding or subtracting new values, make sure that the existing
values are correct. Before loading the first set of values into the
database, you must make sure that there is no existing value.
For example, assume that the Sales figures for January are calculated
by adding the values for each week in January:
January Sales = Week 1 Sales + Week 2 Sales + Week 3 Sales +
Week 4 Sales
When you load Week 1 Sales, clear the database value for January
Monthly Sales. If there is an existing value, Analytic Services performs
the following calculation:
January Sales = Existing Value + Week 1 Sales + Week 2 Sales
+ Week 3 Sales + Week 4 Sales
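As a quick numeric check with hypothetical values: if January Sales already holds 100 when you load a Week 1 Sales value of 25 with the add option, the result is January Sales = 100 + 25 = 125 rather than the intended 25.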
You can also clear data from fields that are not part of the data load.
For example, if a data source contains data for January, February, and
March and you want to load only the March data, you can clear the
January and February data.
Note: If you are using transparent partitions, clear the values using
the steps that you use to clear data from a local database.
You can scale data values if the values of the data source are not in
the same scale as the values of the database.
For example, assume the real value of sales was $5,460. If the Sales
data source tracks the values in hundreds, the value is 54.6. If the
Analytic Services database tracks the real value, you need to multiply
the value coming in from the Sales data source (54.6) by 100 to have
the value display correctly in the Analytic Services database (as 5460).
To scale data values, see "Scaling Data Values" in the Essbase XTD
Administration Services Online Help.
You can reverse or flip the value of a data field by flipping its sign.
Sign flips are based on the UDAs (user-defined attributes) of the
outline. When loading data into the accounts dimension, for example,
you can specify that any record whose accounts member has a UDA of
Expense change from a plus sign to a minus sign. See Creating UDAs
for more information on user-defined attributes.
Performing and
Debugging Data Loads
or Dimension Builds
This chapter describes how to load data or members from one or more
external data sources to an Analytic Server. You can load data without
updating the outline, you can update the outline without loading data,
or you can load data and build dimensions simultaneously. For
information about setting up data sources and rules files, see
Understanding Data Loading and Dimension Building and Creating
Rules Files.
Note: If you are loading data into a transparent partition, follow the
same steps as for loading data into a local database.
If you load data into a parent member, when you calculate the
database, the consolidation of the children's data values can overwrite
the parent data value. To prevent overwriting, be aware of the
following:
These methods work only if the child values are empty (#MISSING). If the children have data values, the data values
overwrite the data values of the parent. For a discussion of how
Analytic Services calculates #MISSING values, see Aggregating
#MISSING Values.
Note: You cannot load data into Dynamic Calc, Dynamic Calc and
Store, or attribute members. For example, if Year is a Dynamic Calc
member, you cannot load data into it. Instead, load data into Qtr1,
Qtr2, Qtr3, and Qtr4, which are not Dynamic Calc members.
For example, the following record is not valid because the Apr data field is empty:
Actual Ohio Sales Cola
Jan Feb Mar Apr
10 15 20
The record in Figure 110 is valid because #MI replaces the missing field:
Actual Ohio Sales Cola
Jan Feb Mar Apr
10 15 20 #MI
If a rules file has extra blank fields, join the empty fields with the field
next to them. For a brief discussion, see Joining Fields.
Note: You cannot reject more records than the error log can
hold. By default, the limit is 1000, but you can change it by
setting DATAERRORLIMIT in the essbase.cfg file. See the
Technical Reference for more information.
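For example, to allow up to 5000 rejected records, you might add a line such as the following to essbase.cfg (the value 5000 is illustrative; confirm the exact syntax in the Technical Reference, and remember that essbase.cfg settings are read when Analytic Server starts):
DATAERRORLIMIT 5000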
Debugging Data Loads
and Dimension Builds
If you try to load a data source into Analytic Server and it does not load correctly, use the following sections to determine what the problem is and to correct it.
When you correct the problems, you can reload the records that did
not load by reloading the error log. For more information, see Loading
Dimension Build and Data Load Error Logs.
• Is the data source already open? The data source will already
be open if a user is editing the data source. Analytic Services
can load only data sources that are not locked by another
user or application.
• Does the data source have the correct file extension? All text
files must have a file extension of .TXT. All rules files must
have a file extension of .RUL.
• Are the data source name and the path name correct? Check for misspellings.
• Is the data source in the specified location? Check to make
sure that no one has moved or deleted the data source.
• If you are using a SQL data source, is the connection
information (such as the user name, password, and database
name) correct?
• If you are using a SQL data source, can you connect to the
SQL data source without using Analytic Services?
• Did the person running the data load set up an error log?
Click Help in the Data Load dialog box for information on
setting up an error log. By default, when you use a rules file,
Analytic Services creates an error log.
• Are you sure that the data source and Analytic Server are
available? See Verifying That Analytic Server Is Available and
Verifying That the Data Source Is Available for lists of items to
check.
• Did the Analytic Server crash during the data load? If so, you
probably received a time-out error on the client. If the server
crashed, see Recovering from an Analytic Server Crash to
identify a recovery procedure.
• Check the application log. For a review of log information, see
Understanding Analytic Server and Application Logs.
If the error log exists but is empty, Analytic Services did not record any errors during loading. Check the following:
• Are you sure that you loaded the correct data source? If so,
check the data source again to make sure that it contains the
correct values.
• Are there any blank fields in the data source? You must insert
#MI or #MISSING into a data field that has no value.
Otherwise, the data source may not load correctly. To replace
a blank field with #MI or #MISSING using a rules file, see
Replacing an Empty Field with Text.
• Is the data source formatted correctly? Are all ranges set up
properly?
• Are there any implicitly shared members that you were
unaware of? Implicit shares happen when a parent and child
share the same data value. This situation occurs if a parent
has only one child or only one child rolls up into the parent.
For a definition and discussion of implied sharing, see
Understanding Implied Sharing.
• Did you add incoming data to existing data instead of replacing existing data with incoming data? For a discussion of the adding and subtracting process, see Adding to and Subtracting from Existing Values.
• Have you selected or rejected any records that you did not
intend to select or reject? For a brief discussion of selecting
and rejecting, see Selecting Records and Rejecting Records.
• If the sign is reversed (for example, a minus sign instead of a
plus sign), did you perform any sign flips on UDAs (user-
defined attributes)? For a discussion of the sign flipping
process, see Flipping Field Signs.
• Did you clear data combinations that you did not intend to
clear? For a discussion of the process of clearing data, see
Clearing Existing Data Values.
• Did you scale the incoming values incorrectly? For examples
of scaling data, see Scaling Data Values.
• Are all member and alias names less than 79 characters long?
Note: You can check data by exporting it, by running a report on it,
or by using a spreadsheet. If doing exports and reports, see
Developing Report Scripts and Using ESSCMD. If using a
spreadsheet, see the Essbase XTD Spreadsheet Add-in User's
Guide.
For example, when you load Figure 111 into the Sample Basic
database, Analytic Services maps the Ohio member field into the
Market dimension for all records, including the records that have Root
Beer and Diet Cola in the Product dimension.
Jan Sales Actual Ohio
Cola 25
"Root Beer" 50
"Diet Cola" 19
Analytic Services stops the data load if no prior record contains a value
for the missing member field. If you try to load Figure 112 into the Sample Basic database, for example, the data load stops, because the Market dimension member (Ohio, in Figure 111) is not specified.
Jan Sales Actual
Cola 25
"Root Beer" 50
"Diet Cola" 19
For information on restarting the load, see Loading Dimension Build
and Data Load Error Logs.
Jan, Sales, Actual
Ohio Cola 2
"Root Beer" 12
"Ginger Ale" 15
"Cream Soda" 11
Note: If you are performing a dimension build, you can add the
new member to the database. See Performing Data Loads or
Dimension Builds.
East Cola Actual
Sales Jan $10
Feb $21
Mar $15
Apr $16
Understanding
Advanced Dimension
Building Concepts
This chapter discusses advanced dimension building concepts.
Understanding Build
Methods
The build method that you select determines the algorithm that
Analytic Services uses to add, change, or remove dimensions,
members, and aliases in the outline. The kind of build method that you
select depends on the type of data in the data source.
When you use the generation references build method, you can choose
to use null processing. Null processing specifies what actions Analytic Services takes when it encounters empty fields, also known as null fields, in the data source.
GEN2,Products   GEN3,Products   GEN4,Products
100                             100-10a
• If a null occurs directly before a secondary field, Analytic
Services ignores the secondary field. Secondary field types
are alias, property, formula, duplicate generation, duplicate
generation alias, currency name, currency category, attribute
parent, UDA, and name of an attribute dimension. In
Figure 119, for example, there is no field in the GEN2,
Products or the ALIAS2,Products column. When Analytic
Services reads the following record, it ignores the ALIAS2
field and promotes the GEN3 field (100-10) to GEN2 and the
GEN4 field (100-10a) to GEN3.
GEN2,Products   ALIAS2,Products   GEN3,Products   GEN4,Products
                Cola              100-10          100-10a
• If the null occurs where Analytic Services expects a secondary
field, Analytic Services ignores the secondary null field and
continues loading. In Figure 120, for example, there is no
field in the ALIAS2, Products column. When Analytic Services
reads the following record, it ignores the ALIAS2 field.
GEN2,Products   ALIAS2,Products   GEN3,Products   GEN4,Products
100                               100-10          100-10a
To build the outline in Figure 121, you can use the bottom-up data source shown in Figure 122.
100-10-12   100-10   100
100-20-12   100-20   100
In a level reference build, the lowest level members are sequenced left
to right. Level 0 members are in the first field, level 1 members are in
the second field, and so on. This organization is the opposite of how
data is presented for generation references (top-down).
The rules file in Figure 123 uses the level reference build method to
add members to the Product dimension of the Sample Basic database.
The first column of the data source contains new members (600-10-11, 600-20-10, and 600-20-18). The second column contains the
parents of the new members (600-10 and 600-20), and the third
column contains parents of the parents (600).
The rules file specifies the level number and the field type for each
field of the data source. For more information on setting field types
and references to pertinent topics, see Setting Field Type Information.
To build the tree in Figure 124, for example, use Figure 123 to set up the data source, LEVEL.TXT, and the rules file, LEVEL.RUL.
When you use the level references build method, you can choose to
use null processing. Null processing specifies what actions Analytic Services takes when it encounters empty fields, also known as null fields, in the data source.
LEVEL0,Products   ALIAS0,Products   LEVEL1,Products   LEVEL2,Products
                  Cola              100-10            100
• If a null occurs where Analytic Services expects a secondary field, Analytic Services ignores the secondary null field and continues loading. In the following record, for example, there is no field in the ALIAS0,Products column, so when Analytic Services reads the record, it ignores the ALIAS0 field.
LEVEL0,Products   ALIAS0,Products   LEVEL1,Products   LEVEL2,Products
100-10a                             100-10            100
Using Parent-Child
References
Use the parent-child references build method when every record of the
data source specifies the name of a new member and the name of the
parent to which you want to add the new member.
After Analytic Services adds all new members to the outline, it may be
necessary to move the new members into their correct positions using
Outline Editor. For a brief discussion and references to pertinent topics,
see Positioning Dimensions and Members.
To add the example members to the database, set the following values
in the rules file:
Figure 131: Rules File Fields Set to Add Members as Siblings with
String Matches
Figure?132 shows the tree that Analytic Services builds from this data
source and rules file.
Figure 132: Tree for Adding Members as Siblings with String Matches
Figure 133: Rules File Fields Set to Add Members as Siblings of the
Lowest Level
Figure?134 shows the tree that Analytic Services builds from this data
source and rules file.
Figure 134: Tree for Adding Members as Siblings of the Lowest Level
Figure 135: Rules File Fields Set to Add Members as a Child of a Specified Parent
Figure?136 shows the tree that Analytic Services builds from this data
source and rules file.
• If the base dimension does not exist, you must build it.
• You must build the attribute dimension.
• You must associate members of the base dimension with
members of the attribute dimension.
• Build both the base and attribute dimensions and perform the
associations all at once. When you use an all-at-once
approach, you use a single rules file to build the base
dimension and one or more attribute dimensions and to
associate each attribute with the appropriate member of
the base dimension. Because this approach uses a single rules
file, it can be the most convenient. Use this approach if the
base dimension does not exist and each source data record
contains all attribute information for each member of the base
dimension.
• Build the attribute dimension and perform the associations in
one rules file. Assuming that the base dimension is built in a
separate step or that the base dimension already exists, you
can build an attribute dimension and associate the attributes
with the members of the base dimension in a single step. You
need only to define the attribute associations in the rules file.
For a brief description of this process, see Associating
Attributes.
• Build the attribute dimension and then perform the
associations using separate rules files. Assuming that the
base dimension is built in a separate step or that the base
dimension already exists, you can build an attribute
dimension and associate the attributes with the members of
the base dimension in separate steps. Build the attribute
dimension, and then associate the attribute members with
members of the base dimension. You must use this approach
when you build numeric attribute dimensions that are
multilevel or that have members that represent different-
sized ranges.
Whichever approach you choose, when you define the rules file for building attribute dimensions, be sure to specify the base dimension and the name of the attribute dimension file.
Associating Attributes
Whether you build the attribute dimension and associate the attribute
members with the members of the base dimension in one step or in
separate steps, define the fields as described in this section.
Every record of the source data must include at least two columns, one
for the member of the base dimension and one for the attribute value
of the base dimension member. In the same source data record you
can include additional columns for other attributes that you want to
associate with the member of the base dimension. You must position
the field for the member of the base dimension before any of the fields
for the members of the attribute dimension.
Define the field type for the attribute dimension member as the name
of the attribute dimension, use the generation or level number of the
associated member of the base dimension, and specify the base
dimension name. For example, as shown in the ATTRPROD.RUL file in Figure 137, the field definition Ounces3,Product specifies that the field
contains members of the Ounces attribute dimension. Each member of
this field is associated with the data field that is defined as the
generation 3 member of the base dimension Product. Based on this
field definition, Analytic Services associates the attribute 64 with the
500-10 member.
You can have Analytic Services use the attribute columns to build the
members of the attribute dimensions. In Data Prep Editor, in the
Dimension Build Settings tab of the Dimension Build Settings dialog
box, for the base dimension, clear the Do Not Create Mbrs option. For
more information, see "Setting Member Properties" in the Essbase XTD
Administration Services Online Help.
When you are working with numeric ranges, you may need to build
attribute dimensions and perform associations in separate steps. For a
discussion and example of using separate steps, see Working with
Numeric Ranges.
The following steps describe how to define the fields in the rules file to
build a multilevel attribute dimension and associate its members with
members of its base dimension. This example uses the level references
build method.
As shown in Figure 140, the rules file now contains the field definitions
to build the attribute dimension Size and to associate the members of
Size with the appropriate members of the base dimension Product.
Figure 140: Source Data and Rules File for Building a Multilevel
Attribute Dimension
When you run a dimension build with the data shown in Figure 140,
Analytic Services builds the Size attribute dimension and associates its
members with the appropriate members of the base dimension.
Figure 141 shows the updated outline.
You must use one rules file to build the Population dimension and
another rules file to associate the Population dimension members as
attributes of members of the base dimension.
Building Attribute Dimensions that Accommodate
Ranges
First, create a rules file that uses the generation, level, or parent-child
build method to build the attribute dimension. In the rules file, be sure
to specify the following:
Figure 143: Rules File for Building a Numeric Attribute Dimension with
Ranges
Figure 143 also shows how you can associate aliases with attributes.
To allow for values in the source data that are outside the ranges in
the attribute dimension, enter a range size, such as 1000000.
Analytic Services uses the range size to add members to the
attribute dimension above the existing highest member or below
the existing lowest member, as needed.
Caution: After you associate members of the base dimension with
members of the attribute dimension, be aware that if you manually
insert new members into the attribute dimension or rename
members of the attribute dimension, you may invalidate existing
attribute associations.
Adding Members to the Base Dimension: You can use the same
rules file to add new members to the base dimension and to associate
the new members with their numeric range attributes simultaneously.
Be sure to provide a value for the range size. In Data Prep Editor, on
the Dimension Building Properties tab in the Field Properties dialog
box, click the Ranges button and specify the range size for the
attribute dimension.
Getting Ready
In the Dimension Build Settings dialog box, select the Do Not Create
Mbrs option for the attribute dimension. For more information, see
"Setting Member Properties" in the Essbase XTD Administration
Services Online Help.
Controlling Associations
Note: Because attributes are defined only in the outline, the data
load process does not affect them.
Building Shared
Members by Using a
Rules File
The data associated with a shared member comes from a real member
with the same name as the shared member. The shared member
stores a pointer to data contained in the real member; thus the data is
shared between the members and is stored only one time.
In the Sample Basic database, for example, the 100-20 (Diet Cola)
member rolls up into the 100 (Cola) family and into the Diet family.
You can share members among as many parents as you want. Diet
Cola has two parents, but you can define it to roll up into even more
parents.
This scenario is the simplest way to share members. You can share
members at the same generation by using any of these build methods.
These methods are discussed in the following sections:
For example, to create the Diet parent and share the 100-20, 200-20,
300-20, and 400-20 members, use the sample file, SHGENREF.TXT, and
set up the rules file so that the fields look like SHGENREF.RUL, shown in
Figure 148. Remember that 100 is the Cola family, 200 is the Root Beer family, 300 is the Cream Soda family, and the -20 after the family name indicates a diet version of the soda.
The data source and rules file illustrated in Figure 148 build the
following tree:
Define the field type for the shared member as LEVEL. Then enter the
level number. To create a shared member of the same generation, set
the level number of the secondary roll-up to have the same number of
levels as the primary roll-up. While processing the data source,
Analytic Services creates a parent at the specified level and inserts the
shared members under it.
For example, to create the shared 100-20 (Diet Cola), 200-20 (Diet
Root Beer), 300-20 (Diet Cream Soda), and 400-20 (Fruit Soda)
members in the Sample Basic database, use the sample file,
SHLEV.TXT, and set up the rules file so that the fields look like SHLEV.RUL, shown in Figure 150.
The data source and rules file illustrated in Figure 150 build the
following tree:
The data source and rules file illustrated in Figure 152 build the
following tree:
Define the field type for the shared member as LEVEL. Then enter the
level number. While processing the data source, Analytic Services
creates a parent at the specified level and inserts the shared members
under it.
For example, to share the products 100-20, 200-20, and 300-20 with
a parent called Diet and two parents called TBC (The Beverage
Company) and Grandma's, use the sample data file and the rules file
in Figure 155.
Figure 155: Level References Sample Rules File for Shared Members at
Different Generations
The data source and rules file illustrated in Figure 155 build the tree illustrated in Figure 154.
The data source and rules file illustrated in Figure 156 build the tree illustrated in Figure 154.
Define the field type for the parent of the shared member as duplicate
level (DUPLEVEL). Then enter the level number. To create a shared
member of the same generation, set the level number of the
secondary roll-up to have the same number of levels as the primary
roll-up. While processing the data source, Analytic Services creates a
parent at the specified level and inserts the shared members under it.
For example, to share the product lines 100, 200, and 300 with a
parent called Soda and two parents called TBC and Grandma's, use the
sample data file and rules file shown in Figure 158. This data source
and rules file work only if the Diet, TBC, and Grandma's members exist
in the outline. The DUPLEVEL field is always created as a child of the
dimension (that is, at generation 2), unless the named level field
already exists in the outline.
Figure 158: Level References Sample Rules File for Non-Leaf Shared
Members at Different Generations
The data source and rules file illustrated in Figure 158 build the tree illustrated in Figure 157.
Using Parent-Child References to Create Non-Leaf
Shared Members
To create shared non-leaf members at the same generation using the
parent-child references build method, define the PARENT and CHILD
field types. Make sure that Analytic Services is set up to allow sharing
(clear Do Not Share in the Dimension Build Settings tab of the
Dimension Build Settings dialog box). When sharing is enabled,
Analytic Services automatically creates duplicate members under a new parent as shared members.
The data source and rules file illustrated in Figure 159 build the tree illustrated in Figure 157.
Building Multiple Roll-Ups by Using Level References
To enable the retrieval of totals from multiple perspectives, you can
also put shared members at different levels in the outline. Use the
level references build method. The rules file, LEVELMUL.RUL, in Figure 160 specifies an example of build instructions for levels in the Product dimension.
Figure 160: Rules File Fields Set to Build Multiple Roll-Ups Using Level
References
Because the record is so long, this second graphic shows the rules file
after it has been scrolled to the right to show the extra members:
When you run the dimension build using the data in Figure 160,
Analytic Services builds the following member tree:
"Soft Drinks" Cola
"Soft Drinks" "Root Beer"
Cola TBC
"Root Beer" Grandma's
Vendor TBC
Vendor Grandma's
Calculating Analytic
Services Databases
This chapter explains the basic concept of multidimensional database
calculation and provides information about how to calculate an Analytic
Services database.
Analytic Services offers two ways that you can calculate a database:
• Outline calculation
• Calculation script calculation
Which way you choose depends on the type of calculation that you
want to do.
Outline Calculation
Outline calculation is the simplest method of calculation. Analytic
Services bases the calculation of the database on the relationships
between members in the database outline and on any formulas that
are associated with members in the outline.
About Multidimensional
Calculation Concepts
For an illustration of the nature of multidimensional calculations,
consider the following, simplified database:
The Time dimension has four quarters. The example displays only the members in Qtr1: Jan, Feb, and Mar.
The Scenario dimension has two child members: Budget for budget values and Actual for actual values.
1. Margin -> Jan -> Actual as a percentage of Sales -> Jan ->
Actual. The result is placed in Margin% -> Jan -> Actual.
2. Margin -> Feb -> Actual as a percentage of Sales -> Feb ->
Actual. The result is placed in Margin% -> Feb -> Actual.
3. Margin -> Mar -> Actual as a percentage of Sales -> Mar ->
Actual. The result is placed in Margin% -> Mar -> Actual.
4. Margin -> Qtr1 -> Actual as a percentage of Sales -> Qtr1 ->
Actual. The result is placed in Margin% -> Qtr1 -> Actual.
5. Margin -> Jan -> Budget as a percentage of Sales -> Jan ->
Budget. The result is placed in Margin% -> Jan -> Budget.
6. Analytic Services continues cycling through the database until
it has calculated Margin% for every combination of members
in the database.
However, you can specify any calculation script as the default database
calculation. Thus, you can assign a frequently-used script to the
database rather than loading the script each time you want to perform
its calculation. Also, if you want a calculation script to work with
calculation settings defined at the database level, you must set the
calculation script as the default calculation.
Canceling Calculations
To stop a calculation before Analytic Services completes it, click the
Cancel button while the calculation is running.
Security Considerations
To calculate a database, you must have Calculate permissions for the database outline. With Calculate permissions, you can calculate any value in the database, even if a security filter denies you read and update permissions. Therefore, grant Calculate permissions carefully.
Developing Formulas
This chapter explains how to develop and use formulas to calculate a
database. It provides detailed examples of formulas, which you may
want to adapt for your own use. For more examples, see Reviewing
Examples of Formulas.
This chapter includes the following topics:
• Understanding Formulas
• Understanding Formula Calculation
• Understanding Formula Syntax
• Reviewing the Process for Creating Formulas
• Displaying Formulas
• Composing Formulas
• Estimating Disk Size for a Calculation
• Using Formulas in Partitions
Understanding
Formulas
Formulas calculate relationships between members in a database
outline. You can use formulas in two ways:
The following figure shows the Measures dimension from the Sample
Basic database. The Margin %, Profit %, and Profit per Ounce
members are calculated using the formulas applied to them.
The following sections discuss the components of formulas:
• Operators
• Functions
• Dimension and Member Names
• Constant Values
• Non-Constant Values
Operators
The following table shows the types of operators you can use in
formulas:
Functions
Functions are predefined routines that perform specialized calculations
and return sets of members or data values. The following table shows
the types of functions you can use in formulas.
Dimension and Member Names
You can include dimension names and member names in a formula, as in the following examples:
Scenario
100-10
Feb
Constant Values
You can assign a constant value to a member:
California = 120;
In this formula, California is a member in a sparse dimension and 120
is a constant value. Analytic Services automatically creates all possible
data blocks for California and assigns the value 120 to all data cells.
Many thousands of data blocks may be created. To assign constants in
a sparse dimension to only those intersections that require a value,
use FIX as described in Constant Values Assigned to Members in a
Sparse Dimension.
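As a minimal calculation-script sketch of that alternative, assuming Budget is the dense member you want to set, the following assigns the constant only within blocks that already exist for California rather than creating every possible California block:
/* Set Budget to 120 only in existing California blocks. */
FIX (California)
Budget = 120;
ENDFIX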
Non-Constant Values
If you assign anything other than a constant to a member in a sparse
dimension, and no data block exists for that member, new blocks may
not be created unless Analytic Services is enabled to create blocks on
equations.
For example, to create blocks for West that did not exist prior to running the calculation, you need to enable Create Blocks on Equations for this formula:
West = California + 120;
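In a calculation script, one way to enable this behavior is the SET CREATEBLOCKONEQ command, sketched below; see the Technical Reference for the full syntax and the performance implications:
/* Allow assignments to sparse members to create missing blocks. */
SET CREATEBLOCKONEQ ON;
West = California + 120;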
Understanding Formula
Calculation
For formulas applied to members in a database outline, Analytic
Services calculates formulas when you do the following:
You cannot use substitution variables in formulas that you apply to the
database outline. For an explanation of how substitution variables can
be used, see Using Substitution Variables.
Understanding Formula
Syntax
When you create member formulas, make sure the formulas follow
these rules:
When writing formulas, you can check the syntax using the Formula Editor syntax checker. For a discussion of syntax checking, see Checking Formula Syntax.
Formulas are plain text. If required, you can create a formula in the
text editor of your choice and paste it into Formula Editor.
Displaying Formulas
To display an existing formula, use any of the following methods:
Composing Formulas
The following sections discuss and give examples of the main types of
formulas:
• Basic Equations
• Conditional Tests
• Examples of Conditional Tests
• Value-Related Formulas
• Member-Related Formulas
• Formulas That Use Various Types of Functions
Basic Equations
You can apply a mathematical operation to a formula to create a basic
equation. For example, you can apply the following formula to the
Margin member in Sample Basic.
Sales - COGS;
Member = mathematical operation;
where Member is a member name from the database outline and
mathematical operation is any valid mathematical operation, as
illustrated in the following example:
Margin = Sales - COGS;
(Retail - Cost) % Retail;
Markup = (Retail - Cost) % Retail;
Conditional Tests
You can define formulas that use a conditional test or a series of
conditional tests to control the flow of calculation.
IF(Sales > 500000)
Commission = Sales * .01;
ENDIF;
Commission(IF(Sales > 500000)
Commission = Sales * .01;
ENDIF;)
In the next example, the formula tests the ancestry of the current
member and then applies the appropriate Payroll calculation formula.
IF(@ISIDESC(East) OR @ISIDESC(West))
Payroll = Sales * .15;
ELSEIF(@ISIDESC(Central))
Payroll = Sales * .11;
ELSE
Payroll = Sales * .10;
ENDIF;
Payroll(IF(@ISIDESC(East) OR @ISIDESC(West))
Payroll = Sales * .15;
ELSEIF(@ISIDESC(Central))
Payroll = Sales * .11;
ELSE
Payroll = Sales * .10;
ENDIF;)
Value-Related Formulas
Use this section to find information about formulas related to values:
Sales 50 70 100
Addition 70 60 150
1. January Ending = January Opening - Sales + Additions
2. February Opening = January Ending
3. February Ending = February Opening - Sales + Additions
4. March Opening = February Ending
5. March Ending = March Opening - Sales + Additions
IF(NOT @ISMBR (Jan))
"Opening Inventory" = @PRIOR("Ending Inventory");
ENDIF;
"Ending Inventory" = "Opening Inventory" Sales +
Additions;
Allocating Values
You can allocate values that are input at the parent level across child
members in the same dimension or in different dimensions by using
the following allocation functions.
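As an illustration, a formula along the following lines might use @ALLOCATE, one function in this family, to spread a value entered at a parent across its children in proportion to Sales; the member names are hypothetical, and the exact argument list should be verified in the Technical Reference:
/* Spread the East-level Budget across the children of East, */
/* using each child's Sales value as the allocation basis.   */
Budget = @ALLOCATE(Budget->East, @CHILDREN(East), Sales, , share);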
Forecasting Values
You can manipulate data for the purposes of smoothing data,
interpolating data, or calculating future values by using the following
forecasting functions.
• @MOVAVG: To apply a moving average to a data set and replace each term in the list with a trailing average. This function modifies the data set for smoothing purposes.
• @MOVMAX: To apply a moving maximum to a data set and replace each term in the list with a trailing maximum. This function modifies the data set for smoothing purposes.
• @MOVMED: To apply a moving median to a data set and replace each term in the list with a trailing median. This function modifies the data set for smoothing purposes.
• @MOVMIN: To apply a moving minimum to a data set and replace each term in the list with a trailing minimum. This function modifies the data set for smoothing purposes.
• @MOVSUM: To apply a moving sum to a data set and replace each term with a trailing sum. This function modifies the data set for smoothing purposes.
• @MOVSUMX: To apply a moving sum to a data set and replace each term with a trailing sum, specifying how to assign values to members before you reach the number to sum. This function modifies the data set for smoothing purposes.
• @SPLINE: To apply a smoothing spline to a set of data points. A spline is a mathematical curve that is used to smooth or interpolate data.
• @TREND: To calculate future values, basing the calculation on curve-fitting to historical values.
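As a brief sketch, a formula such as the following might store a trailing three-month average of Sales in a separate member (Mov_Avg is a hypothetical member name):
/* Replace each term in Jan:Jun with the trailing three-period average of Sales. */
Mov_Avg = @MOVAVG(Sales, 3, Jan:Jun);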
@ISMBR(&UpToCurr)
@ISMBR(Jan:Jun)
Member-Related Formulas
This section provides information you need to create formulas that
refer to members:
• A range of all members at the same level, between and including the two defining members: the two defining member names separated by a colon (:). For example: Jan2000:Dec2000
• A range of all members in the same generation, between and including the two defining members: the two defining member names separated by two colons (::). For example: Q1_2000::Q4_2000
Misc_Expenses = Misc_Expenses->Market->Product * (Sales / (Sales->Market->Product));
• Mathematical Operations
• Statistical Functions
• Range Functions
• Financial Functions
• Date and Time Functions
• Calculation Mode Functions
• Custom-Defined Functions
Mathematical Operations
You can perform many mathematical operations in formulas by using
the following mathematical functions.
• @ABS: To return the absolute value of an expression
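For instance, a formula like the following (member names hypothetical) returns the magnitude of a variance regardless of its sign:
/* Absolute difference between Actual and Budget. */
Variance_Abs = @ABS(Actual - Budget);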
Statistical Functions
You can use these statistical functions to calculate advanced statistics
in Analytic Services.
Range Functions
You can execute a function for a range of members by using these
range functions.
Financial Functions
You can include financial calculations in formulas by using these
financial functions.
Calculation Mode Functions
• @CALCMODE: To specify that Analytic Services uses cell, block, bottom-up, and top-down calculation modes to calculate a formula.
Note: You can also use the configuration setting CALCMODE to set
calculation modes to BLOCK or BOTTOMUP at the database,
application, or server level. For details, see the Technical Reference,
under "essbase.cfg Settings" for CALCMODE or "Analytic Services
Functions" for @CALCMODE.
Custom-Defined Functions
Custom-defined functions are calculation functions that you create to
perform calculations not otherwise supported by the Analytic Services
calculation scripting language. You can use custom-defined functions in
formulas and calculation scripts. These custom-developed functions
are written in the Java programming language and registered on the
Analytic Server. The Analytic Services calculator framework calls them
as external functions.
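Once registered, a custom-defined function is called from a formula or calculation script like any built-in function. For example, assuming a registered function named @JSUM (a hypothetical name for illustration), a formula might read:
/* Call the registered custom-defined function @JSUM on a member list. */
Qtr1 = @JSUM(@LIST(Jan, Feb, Mar));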
Error: line 1: invalid statement; expected semicolon
After you have corrected the formula and saved the outline, the
message in the member comment is deleted. You can view the
updated comment when you reopen the outline.
Using Formulas in
Partitions
An Analytic Services partition can span multiple Analytic Servers,
processors, or computers. For a comprehensive discussion of
partitioning, see Designing Partitioned Applications and Creating and
Maintaining Partitions.
You can use formulas in partitioning, just as you use formulas on your
local database. However, if a formula you use in one database
references a value from another database, Analytic Services has to
retrieve the data from the other database when calculating the
formula. In this case, you need to ensure that the referenced values
are up-to-date and to consider carefully the performance impact on
the overall database calculation. For a discussion of how various
options affect performance, see Writing Calculation Scripts for
Partitions.
With transparent partitions, you need to consider carefully how you use formulas on the
data target. For a detailed example of the relationship between member formulas and
transparent partitioning, see Transparent Partitions and Member Formulas. For a
discussion of the performance implications, see Performance Considerations for
Transparent Partition Calculations.
Reviewing Examples of
Formulas
This chapter provides detailed examples of formulas, which you may
want to adapt for your own use. For examples of using formulas in
calculation scripts, see Reviewing Examples of Calculation Scripts.
Calculating Period-to-
Date Values
If the outline includes a dimension tagged as accounts, you can use
the @PTD function to calculate period-to-date values. You can also use
Dynamic Time Series members to calculate period-to-date values. For
an explanation of how to calculate time series data, see Calculating
Time Series Data.
For example, the following figure shows the Inventory branch of the
Measures dimension from the Sample Basic database.
Inventory (~) (Label Only)
    Opening Inventory (+) (TB First) (Expense Reporting) IF(NOT @ISMBR(Jan))
    Additions (~) (Expense Reporting)
    Ending Inventory (~) (TB Last) (Expense Reporting)
To calculate period-to-date values for the year and for the current quarter, add two members to the Year dimension, QTD for quarter-to-date and YTD for year-to-date:
QTD (~) @PTD(Apr:May)
YTD (~) @PTD(Jan:May)
For example, assuming that the current month is May, you would add this formula to the QTD member:
@PTD(Apr:May);
And this formula to the YTD member:
@PTD(Jan:May);
Measures -> Time Jan Feb Mar Apr May QTD YTD
Opening Inventory 100 110 120 110 140 110 100
Opening Inventory has a First tag. For QTD, Analytic Services takes
the first value in the current quarter, which is Apr. For YTD, Analytic
Services takes the first value in the year, which is Jan.
Ending Inventory has a Last tag. For QTD, Analytic Services takes the
last value in the current quarter, which is May. For YTD, Analytic
Services takes the last value in the year, which is also May.
Calculating Rolling
Values
You can use the @AVGRANGE function to calculate rolling averages
and the @ACCUM function to calculate rolling year-to-date values.
@AVGRANGE(SKIPNONE, Sales, @CURRMBRRANGE(Year, LEV, 0, ,
0));
@ACCUM(Sales);
Analytic Services calculates the average Sales values across the
months in the dimension tagged as time. The SKIPNONE parameter
means that all values are included, even #MISSING values. Analytic
Services places the results in AVG_Sales. For an explanation of how
Analytic Services calculates #MISSING values, see Aggregating
#MISSING Values.
This table shows the results when Analytic Services calculates the
cumulative Sales values and places the results in YTD_Sales:
The values for YTD_Sales are the cumulative values up to the current
month. So YTD_Sales -> Feb is the sum of Sales -> Jan and Sales ->
Feb.
Calculating Monthly
Asset Movements
You can use the @PRIOR function to calculate values based on a
previous month's value.
For example, assume that a database contains assets data values that
are stored on a month-by-month basis. You can calculate the
difference between the assets values of successive months (the asset
movement) by subtracting the previous month's value from the
present month's value.
Assume these three members manage the asset values for the
database:
IF(@ISMBR(Jan)) Asset_MVNT = Assets Opening_Balance;
ELSE Asset_MVNT = Assets @PRIOR(Assets);
ENDIF;
This table shows the results when Analytic Services calculates the
difference between the values of assets in successive months:
Asset_MVNT 200 -100 500
IF(Sales <> #MISSING) Commission = Sales * .1;
ELSE Commission = #MISSING;
ENDIF;
Commission(IF(Sales <> #MISSING) Commission = Sales * .1;
ELSE Commission = #MISSING;
ENDIF;);
Calculating an Attribute
Formula
You can perform specific calculations on attribute-dimension members
in a database.
Profit/@ATTRIBUTEVAL(Ounces);
Each data block contains all the dense dimension member values for
its unique combination of sparse dimension members.
Figure 180: Product and Market Dimensions from the Sample Basic
Database
Member Calculation
Order
Analytic Services calculates a database at the data block level,
bringing one or more blocks into memory and calculating the required
values within the block. Analytic Services calculates the blocks in
order, according to their block numbers. The database outline tells
Analytic Services how to order the blocks. Within each block, Analytic
Services calculates the values in order according to the hierarchy in
the database outline. Therefore, overall, Analytic Services calculates a
database based on the database outline.
You can override the default order by using a calculation script. For a
comprehensive discussion of how to develop and use calculation
scripts, see Developing Calculation Scripts.
If a parent member has a label only tag, Analytic Services does not
calculate the parent from its children. If a member has a ~ tag,
Analytic Services does not consolidate the member up to its parent.
In the above example, assume that the user wants to divide the total
of Child 2 and Child 3 by Child 1. However, if Child 1 is the first
member, Analytic Services starts with Child 1, taking the value of
Parent 1 (currently #MISSING) and dividing it by Child 1. The result is
#MISSING. Analytic Services then adds Child 2 and Child 3. Obviously,
this result is not the required one.
To calculate the correct result, make Child 1 the last member in the
branch. For more information on #MISSING values, see Aggregating
#MISSING Values.
Consider the five members under Diet. The members P100-20, P300-20, and P500-20 have forward calculation references:
The other sparse dimension is Market. The first 19 data blocks contain
the first member to be calculated in the Market dimension, which is
New York.
This table shows the sparse member combinations for the first 5 of
these 19 data blocks.
Block # Product Member Market Member
This table shows the sparse member combinations for the block
numbers 19 through 23.
Analytic Services continues until blocks have been created for all
combinations of sparse dimension members for which at least one data
value exists.
Analytic Services creates a data block only if at least one value exists
for the block. For example, if no data values exist for Old Fashioned
Root Beer (200-10) in Massachusetts, then Analytic Services does not
create a data block for 200-10 -> Massachusetts. However, Analytic
Services does reserve the appropriate block number for 200-10 ->
Massachusetts in case data is loaded for that member combination in
the future.
Data Block
Renumbering
Analytic Services renumbers the data blocks when you make any of
these changes:
The order in which Analytic Services calculates the cells within each
block depends on how you have configured the database. How you
have configured the database defines the member calculation order of
dense dimension members within each block. It also defines the
calculation order of blocks that represent sparse dimension members.
Market and Year are both dense dimensions. The table shows a subset
of the cells in a data block. Data values have been loaded into the
input cells. Analytic Services calculates the shaded cells. The numbers
in bold show the calculation order for these cells. The cell with multiple
consolidation paths is darkly shaded.
    Qtr1 1 2 6
Analytic Services knows that Qtr1 -> East has multiple consolidation
paths. Therefore, it calculates Qtr1 -> East only once and uses the
consolidation path of the dimension calculated last. In the above
example, this dimension is Market.
Note: Qtr1 -> East has been calculated only once by aggregating
the values for Qtr1.
From the calculation order, you can see that if you place a member
formula on Qtr1 in the database outline, Analytic Services ignores it
when calculating Qtr1 -> East. If you place a member formula on East
in the database outline, the formula is calculated when Analytic
Services consolidates Qtr1 -> East on the Market consolidation path. If
required, you can use a calculation script to calculate the dimensions
in the order you choose. For a comprehensive discussion of how to
develop and use calculation scripts, see Developing Calculation Scripts.
Cell Calculation Order: Example 2
Consider a second case in which both of these conditions are true:
Market and Year are both dense dimensions. The table shows a subset
of the cells in a data block. Data values have been loaded into the
input cells. Analytic Services calculates the shaded cells. The numbers
in bold show the calculation order for these cells. The cell with multiple
consolidation paths is darkly shaded.
    Qtr1 1 2 3/7
The results are identical to the previous case. However, Qtr1 -> East
has been calculated twice. This fact is significant when you need to
load data at parent levels. For an example in which data is loaded at
the parent level, see Cell Calculation Order: Example 3.
From the calculation order, you can see that if you place a member
formula on Qtr1 in the database outline, its result is overwritten when
Analytic Services consolidates Qtr1 -> East on the Market
consolidation path. If you place a member formula on East in the
database outline, the result is retained because the Market
consolidation path is calculated last.
Market and Year are both dense dimensions. The table shows a subset
of the cells in a data block. Data values have been loaded into cells at
the parent level.
Year -> Market New York Massachusetts East
The cells are calculated in the same order as in Example 2. Qtr1 ->
East is calculated on both the Year and Market consolidation paths.
However, if any of the child data values were not #MISSING, these
values are consolidated and overwrite the parent values. For example,
if Jan -> New York contains 50000.00, this value overwrites the values
loaded at parent levels.
Analytic Services first correctly calculates the Qtr1 -> East cell by aggregating Jan -> East, Feb -> East, and Mar -> East. Second, it calculates on the Market consolidation path. However, it does not aggregate the #MISSING values in Qtr1 -> New York and Qtr1 -> Massachusetts and so the value in Qtr1 -> East is not overwritten.
Analytic Services needs to calculate the Qtr1 -> East cell twice in order
to ensure that a value is calculated for the cell. If Qtr1 -> East is
calculated according to only the last consolidation path, the result is
#MISSING, which is not the required result.
This table shows a subset of the cells in a data block. Data values have
been loaded into the input cells. Analytic Services calculates the
shaded cells. The numbers in bold show the calculation order for these
cells. Cells with multiple consolidation paths are darkly shaded.
The Marketing, Payroll, and Misc Expenses values have been loaded at
the Qtr1, parent level.
Measures/Year Jan Feb Mar Qtr1
    Margin 1 4 7 10/15
        Profit 3 6 9 12/17
From the calculation order, you can see that if you place a member
formula on, for example, Margin in the database outline, its result is
overwritten by the consolidation on Qtr1.
Calculation Passes
Whenever possible, Analytic Services calculates a database in one
calculation pass through the database. Thus, it reads each of the
required data blocks into memory only once, performing all relevant
calculations on the data block and saving it. However, in some
situations, Analytic Services needs to perform more than one
calculation pass through a database. On subsequent calculation
passes, Analytic Services brings data blocks back into memory,
performs further calculations on them, and saves them again.
To display the application log, see Viewing the Analytic Server and
Application Logs.
Calculation of Shared Members
Shared members are those that share data values with other
members. For example, in the Sample Basic database, Diet Cola, Diet
Root Beer, and Diet Cream are consolidated under two different
parents. They are consolidated under Diet. They are also consolidated
under their individual product types: Colas, Root Beer, and Cream
Soda.
The members under the Diet parent are shared members. For a
comprehensive discussion of shared members, see Understanding
Shared Members.
Dynamically Calculating Data Values
This chapter explains how you calculate data values dynamically and
how you benefit from doing so. Dynamically calculating some of the
values in a database can significantly improve the performance of an
overall database calculation.
The information in this chapter assumes that you are familiar with the
concepts of member combinations, dense and sparse dimensions, and
data blocks. For a comprehensive discussion of these concepts, see
Understanding Multidimensional Databases.
Understanding Dynamic Calculation
When you design the overall database calculation, it may be more
efficient to calculate some member combinations when you retrieve
their data, instead of pre-calculating the member combinations during
a batch database calculation. In Analytic Services, you can define two
types of dynamically calculated members:
• Dynamic Calc
• Dynamic Calc and Store
Recalculation of Data
When Analytic Services detects that the data value for a Dynamic Calc
and Store member needs recalculating, it places an indicator on the
data block that contains the value, so that Analytic Services knows to
recalculate the block on the next retrieval of the data value.
Analytic Services places the indicator on the data block containing the
value and not on the data value itself. In other words, Analytic
Services tracks Dynamic Calc and Store members at the data block
level. For detailed information on data blocks, see Data Blocks and the
Index System.
Analytic Services recalculates the indicated data blocks when you next
retrieve the data value.
If you load data into the children of a Dynamic Calc and Store member,
and the member is a consolidation of its child members, Analytic
Services does not know to recalculate the Dynamic Calc and Store
member during the next retrieval. The parent member is recalculated
only during the next batch calculation.
For example, assume that Market is a parent member and that East
and West are Dynamic Calc and Store child members that consolidate
up to Market. When you retrieve a data value for Market, Analytic
Services calculates East and West, even though you have not
specifically retrieved them. However, Analytic Services does not store
the values of East or West.
Benefitting from Dynamic Calculation
Dynamically calculating some database values can significantly
improve the performance of an overall database calculation.
Using Dynamic Calculation
You can tag any member as Dynamic Calc or Dynamic Calc and Store,
except the following:
Outline Editor shows which members are Dynamic Calc and which
members are Dynamic Calc and Store.
Choosing Values to Calculate Dynamically
Dynamically calculating some data values decreases calculation time,
lowers disk usage, and reduces database restructure time, but
increases retrieval time for dynamically calculated data values.
FIX (East, Colas)
Qtr1;
ENDFIX
Qtr1 = Jan + Feb;
Year = Qtr1 + Qtr2;
Choosing Between Dynamic Calc and Dynamic Calc and Store
In most cases you can optimize calculation performance and lower disk
usage by using Dynamic Calc members instead of Dynamic Calc and
Store members. However, in specific situations, using Dynamic Calc
and Store members is optimal:
Analytic Services stores only the data blocks that contain the
requested data values. If Analytic Services needs to calculate any
intermediate data blocks in order to calculate the requested data
blocks, it does not store the intermediate blocks.
Figure 192: Sample Basic Outline, Market is Dynamic Calc and Store
Member
Understanding How Dynamic Calculation Changes Calculation Order
Using dynamically calculated data values changes the order in which
Analytic Services calculates the values and can have implications for
the way you administer a database:
1. Sparse dimensions
o If the dimension tagged as time is sparse and the
database outline uses time series data, Analytic
Services bases the sparse calculation on the time
dimension.
o Otherwise, Analytic Services bases the calculation on
the dimension that it normally uses for a batch
calculation.
2. Dense dimensions
a. Dimension tagged as accounts, if dense
b. Dimension tagged as time, if dense
c. Time series calculations
d. Remaining dense dimensions
e. Two-pass calculations
f. Attributes
This calculation order does not produce the required result because
Analytic Services needs to calculate Margin % -> Variance using the
formula on Margin %, and not the formula on Variance. You can avoid
this problem by making Scenario a dense dimension. This problem
does not occur if the Measures dimension (the accounts dimension) is
sparse, because Analytic Services still calculates Margin % first.
The calculation for Qtr1 -> Profit produces the same result whether you
calculate along the dimension tagged as time or the dimension tagged
as accounts. Calculating along the time dimension, add the values for
Jan, Feb, and Mar:
50 + 100 + 150 = 300
Calculating along the accounts dimension, subtract Qtr1 -> COGS from
Qtr1 -> Sales:
600 - 300 = 300
               Jan    Feb    Mar    Qtr1
Units Sold      10     20     20     50
Price            5      5      5     15
50 + 100 + 100 = 250
15 * 50 = 750
If East and Sales are tagged as Dynamic Calc, then Analytic Services
calculates a different result than it does if East and Sales are not
tagged as Dynamic Calc.
If East and Sales are not Dynamic Calc members, Analytic Services
produces the correct result (250) by calculating the monthly Sales
values first and then consolidating them into Qtr1.
To avoid this problem and ensure that you obtain the required results,
do not tag the Sales member as Dynamic Calc or Dynamic Calc and
Store.
The following sections discuss ways you can analyze and manage the
effect of Dynamic Calc members on a database:
Note: For a list of functions that have the most significant effect on
query retrieval, see Choosing Between Member Set Functions and
Performance.
An outline with a high retrieval factor (for example, greater than 2000)
can cause long delays when users retrieve data. However, the actual
impact on retrieval time also depends on how many dynamically
calculated data values a user retrieves. The retrieval factor is only an
indicator. In some applications, using Dynamic Calc members may
reduce retrieval time because the database size and index size are
reduced.
[Wed Sep 20 20:04:13 2000] Local/Sample///Info (1012710)
Essbase needs to retrieve [1] Essbase kernel blocks in order
to calculate the top dynamically calculated block.
This message tells you that Analytic Services needs to retrieve one
block in order to calculate the most expensive dynamically calculated
data block.
[Wed Sep 20 20:04:13 2000]Local/Sample///Info(1007125)
The number of Dynamic Calc NonStore Members = [ 8 6 0 0 2]
[Wed Sep 20 20:04:13 2000]Local/Sample///Info(1007126)
The number of Dynamic Calc Store Members = [ 0 0 0 0 0]
This message tells you that there are eight Dynamic Calc members in
the first dimension of the database outline, six in the second
dimension, and two in the fifth dimension. Dynamic Time Series
members are included in this count.
This example does not include Dynamic Calc and Store members.
By default, the retrieval buffer size is 10 KB. However, you may speed
up retrieval time if you set the retrieval buffer size greater than 10 KB.
For information about sizing the retrieval buffer, see Setting the
Retrieval Buffer Size.
Use any of the following methods to set the retrieval buffer size:
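One of those methods is MaxL. A minimal sketch follows; the retrieve_buffer_size keyword is assumed here for illustration, and the size value and its units should be checked against the Technical Reference:
alter database Sample.Basic set retrieve_buffer_size 20;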
Note: The dynamic calculator cache and the calculator cache use
different approaches to optimizing calculation performance.
For details about sizing and reviewing dynamic calculator cache usage,
see Sizing the Calculator Cache.
Figure 193: Application Log Example of Memory Usage for Data Blocks
Containing Dynamic Calc Members
[Thu Aug 03 14:33:00 2000]Local/Sample/Basic/aspen/Info(1001065)
Regular Extractor Elapsed Time : [0.531] seconds
[Thu Aug 03 14:33:00 2000]Local/Sample/Basic/aspen/Info(1001401)
Regular Extractor Big Blocks Allocs Dyn.Calc.Cache : [30]
non-Dyn.Calc.Cache : [0]
Using Dynamic Calculations with Standard Procedures
Using dynamic calculations with standard Analytic Services procedures
affects these processes:
When you load data, Analytic Services does not load data into
member combinations that contain a Dynamic Calc or Dynamic
Calc and Store member. Analytic Services skips these members
during data load. Analytic Services does not display an error
message.
To place data into Dynamic Calc and Dynamic Calc and Store
members, you need to ensure that Analytic Services recalculates the
Dynamic Calc and Store members after you load data. For an
explanation of how to ensure that Analytic Services recalculates
Dynamic Calc and Store members, see Effect of Updated Values
on Recalculation.
• Exporting data
Restructuring Databases
When you add a Dynamic Calc member to a dense dimension, Analytic
Services does not reserve space in the data block for the member's
values. Therefore, Analytic Services does not need to restructure the
database. However, when you add a Dynamic Calc and Store member
to a dense dimension, Analytic Services does reserve space in the
relevant data blocks for the member's values and therefore needs to
restructure the database.
When you add a Dynamic Calc or a Dynamic Calc and Store member to
a sparse dimension, Analytic Services updates the index, but does not
change the relevant data blocks. For information on managing the
database index, see Index Manager.
Analytic Services can save changes to the database outline
significantly faster if it does not have to restructure the database.
Dynamically Calculating Data in Partitions
You can define Dynamic Calc and Dynamic Calc and Store members in
transparent, replicated, or linked regions of the partitions. For a
comprehensive discussion of partitions, see Designing Partitioned
Applications.
For example, you might want to tag an upper level, sparse dimension
member with children that are on a remote database (transparent
database partition) as Dynamic Calc and Store. Because Analytic
Services needs to retrieve the child values from the other database,
retrieval time is increased. You can use Dynamic Calc instead of
Dynamic Calc and Store; however, the impact on subsequent retrieval
time might be too great.
If you are using a replicated partition, then you might want to use
Dynamic Calc members instead of Dynamic Calc and Store members.
When calculating replicated data, Analytic Services does not retrieve
the child blocks from the remote database, and therefore the impact
on retrieval time is not great.
Note: When Analytic Services replicates data, it checks the time
stamp on each source data block and each corresponding target
data block. If the source data block is more recent, Analytic
Services replicates the data in the data block. However, for
dynamically calculated data, data blocks and time stamps do not
exist. Therefore Analytic Services always replicates dynamically
calculated data.
Figure 194: Sample Basic Outline Showing Accounts and Time Tags
Calculating Period-to-Date Values
You can calculate period-to-date values for data. For example, you can
calculate the sales values for the current quarter up to the current
month. If the current month is May, using a standard calendar quarter,
the quarter total is the total of the values for April and May.
In Analytic Services, you can calculate period-to-date values in two
ways:
You do not create the Dynamic Time Series member directly in the
database outline. Instead, you enable a predefined Dynamic Time
Series member and associate it with an appropriate generation
number. This procedure creates a Dynamic Time Series member for
you.
H-T-D History-to-date
Y-T-D Year-to-date
S-T-D Season-to-date
P-T-D Period-to-date
Q-T-D Quarter-to-date
M-T-D Month-to-date
W-T-D Week-to-date
D-T-D Day-to-date
If the database contains monthly data for the past 5 years, you might
want to report year-to-date (Y-T-D) and history-to-date (H-T-D)
information, up to a specific year.
If the database tracks data for seasonal time periods, you might want
to report period-to-date (P-T-D) or season-to-date (S-T-D) information.
You can associate a Dynamic Time Series member with any generation
in the time dimension except the highest generation number,
irrespective of the data. For example, if you choose, you can use the P-
T-D member to report quarter-to-date information. You cannot
associate Dynamic Time Series members with level 0 members of the
time dimension.
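When you retrieve a Dynamic Time Series member, you specify the period through which to calculate by giving that period in parentheses. For example, assuming the Q-T-D member has been enabled (as in the outline fragment below), the following specification in a query or report returns quarter-to-date values through February:
Q-T-D(Feb)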
Year Time (Active Dynamic Time Series Members: HTD, QTD)
(Dynamic Calc)
Disabling Dynamic Time Series Members
To disable a Dynamic Time Series member, tell Analytic Services not to
use the predefined member.
You can create up to eight alias names for each Dynamic Time Series
member. Analytic Services saves each alias name in the Dynamic Time
Series alias table that you specify.
The following table shows the Dynamic Time Series members and their
corresponding generation names:
Member   Generation Name        Member   Generation Name
H-T-D    History                Q-T-D    Quarter
Y-T-D    Year                   M-T-D    Month
S-T-D    Season                 W-T-D    Week
P-T-D    Period                 D-T-D    Day
These member and generation names are reserved for use by Analytic
Services. If you use one of these generation names to create a
generation name on the time dimension, Analytic Services
automatically creates and enables the corresponding Dynamic Time
Series member for you.
Developing Calculation Scripts
This chapter explains how to develop calculation scripts and how to
use them to control the way Analytic Services calculates a database.
This chapter provides some examples of calculation scripts, which you
may want to adapt for your own use. This chapter also shows you how
to create and execute a simple calculation script.
Understanding Calculation Scripts
A calculation script contains a series of calculation commands,
equations, and formulas. You use a calculation script to define
calculations other than the calculations that are defined by the
database outline.
Calculation scripts are ASCII text. Using Calculation Script Editor, you
can create calculation scripts by:
• Typing the contents of the calculation script directly into the
text area of the script editor
• Using the user interface features of the script editor to build
the script
• Creating the script in the text editor of your choice and
pasting it into Calculation Script Editor.
FIX (Actual)
CALC DIM(Year, Measures, Market, Product);
ENDFIX
You can use a calculation script to specify exactly how you want
Analytic Services to calculate a database. For example, you can
calculate part of a database or copy data values between members.
You can design and run custom database calculations quickly by
separating calculation logic from the database outline.
Understanding Calculation Script Syntax
Analytic Services provides a flexible set of commands that you can use
to control how a database is calculated. You can construct calculation
scripts from commands and formulas. In Calculation Script Editor, the
different elements of the script are color-coded to aid in script
readability.
Example 2:
DATACOPY Plan TO Revised_Plan;
Example 3:
"Market Share" = Sales % Sales?>?Market;
Example 4:
IF (Sales <> #MISSING)
    Commission = Sales * .9;
ELSE
    Commission = #MISSING;
ENDIF;
You do not need to end the following commands with
semicolons: IF, ENDIF, ELSE, ELSEIF, FIX, ENDFIX, LOOP, and
ENDLOOP.
Although ending ENDIF statements with a semicolon (;) is not
required, it is good practice to follow each ENDIF statement in a
formula with a semicolon.
• Enclose a member name in double quotation marks (" ") if
that member name meets any of the following conditions:
o Contains spaces; for example,
"Opening Inventory" = "Ending Inventory" Sales +
Additions;
o Is the same as an operator or function name. For a list
of operator and function names, see Understanding the
Rules for Naming Dimensions and Members.
o Includes any non-alphanumeric character; for example,
hyphen (-), asterisk (*), and slash (/ ). For a complete
list of special characters, see Understanding the Rules
for Naming Dimensions and Members.
o Contains only numerals or starts with a numeral; for
example, "100" or "10Prod".
o Begins with an ampersand (&). The leading ampersand
(&) is reserved for substitution variables. If a member
name begins with &, enclose it in quotation marks. Do
not enclose substitution variables in quotation marks in
a calculation script.
o Contains a dot (.); for example, 1999.Jan or .100.
• If you are using an IF statement or an interdependent
formula, enclose the formula in parentheses to associate it
with the specified member. For example, the following formula
is associated with the Commission member in the database
outline:
Commission
(IF(Sales < 100)
Commission = 0;
ENDIF;)
• End each IF statement in a formula with an ENDIF statement.
For example, the previous formula contains a simple
IF...ENDIF statement.
• If you are using an IF statement that is nested within another
IF statement, end each IF with an ENDIF statement; for
example:
• "Opening Inventory"
• (IF (@ISMBR(Budget))
• IF (@ISMBR(Jan))
• "Opening Inventory" = Jan;
• ELSE
• "Opening Inventory" = @PRIOR("Ending
Inventory");
• ENDIF;
• ENDIF;)
• You do not need to end ELSE or ELSEIF statements with
ENDIF statements; for example:
Marketing
(IF (@ISMBR(@DESCENDANTS(West)) OR
@ISMBR(@DESCENDANTS(East)))
Marketing = Marketing * 1.5;
ELSEIF(@ISMBR(@DESCENDANTS(South)))
Marketing = Marketing * .9;
ELSE Marketing = Marketing * 1.1;
ENDIF;)
When you write a calculation script, you can use the Calculation Script
Editor syntax checker to check the syntax. For a brief discussion of the
syntax checker, see Checking Syntax.
Note: For detailed information on calculation script syntax, see the
Technical Reference.
You can also use the IF and ENDIF commands to specify conditional
calculations.
Variable and array names are character strings that contain any of the
following characters:
ARRAY Discount[Scenario];
SET MSG DETAIL;
CALC DIM(Year);
SET MSG SUMMARY;
CALC DIM(Measures);
SET AGGMISSG ON;
Qtr1;
SET AGGMISSG OFF;
East;
/* This is a calculation script comment
that spans two lines.*/
Planning Calculation Script Strategy
You can type a calculation script directly into the text area of
Calculation Script Editor, or you can use the user interface features of
Calculation Script Editor to build the calculation script.
Variance;
Expenses = Payroll + Marketing + Misc;
Basic Equations
You can use equations in a calculation script to assign value to a
member, as follows:
Member = mathematical expression;
Margin = Sales - COGS;
The next formula cycles through the database subtracting the values in
Cost from the values in Retail, calculating the resulting values as a
percentage of the values in Retail, and placing the results in Markup:
Markup = (Retail - Cost) % Retail;
Conditional Equations
When you use an IF statement as part of a member formula in a
calculation script, you need to perform both of the following tasks:
Profit
(IF (Sales > 100)
    Profit = (Sales - COGS) * 2;
ELSE
    Profit = (Sales - COGS) * 1.5;
ENDIF;)
Interdependent Formulas
When you use an interdependent formula in a calculation script, the
same rules apply as for the IF statement. You need to perform both of
the following tasks:
"Opening Inventory"
(IF(NOT @ISMBR (Jan))"Opening Inventory" =
@PRIOR("Ending Inventory"));
ENDIF;
"Ending Inventory" = "Opening Inventory" Sales +
Additions;)
Analytic Services always recalculates the data block that contains the
formula, even if the data block is marked as clean for the purposes of
Intelligent Calculation. For more information, see Calculating Data
Blocks. For more information about Intelligent Calculation, see
Optimizing with Intelligent Calculation.
Profit = (Sales - COGS) * 1.5;
Market = East + West;
Similarly, the following configurations cause Analytic Services to cycle
through the database only once, calculating the formulas on the
members Qtr1, Qtr2, and Qtr3:
Qtr1;
Qtr2;
Qtr3;
or
(Qtr1;
Qtr2;
Qtr3;)
In contrast, the following configuration causes Analytic Services to
cycle through the database twice, because Qtr3 is not grouped with
Qtr1 and Qtr2:
(Qtr1;
Qtr2;)
Qtr3;
CALC DIM(Year, Measures);
FIX(&CurQtr)
CALC DIM(Measures, Product);
ENDFIX
You then define the substitution variable CurQtr as the current
quarter; for example, Qtr3. Analytic Services replaces the variable
CurQtr with the value Qtr3 when it runs the calculation script.
Clearing Data
You can use the following commands to clear data. If you want to clear
an entire database, see "Clearing Data" in the Essbase XTD
Administration Services Online Help.
FIX(Actual)
CLEARBLOCK NONINPUT;
ENDFIX
For example, the following command clears all the Actual data values
for Colas:
CLEARDATA Actual -> Colas;
Copying Data
You can use the DATACOPY calculation command to copy data cells
from one range of members to another range of members in a
database. The two ranges must be the same size.
DATACOPY Actual TO Budget;
FIX (Jan)
DATACOPY Actual TO Budget;
ENDFIX
Note: When you have Intelligent Calculation turned on, the newly
calculated data blocks are not marked as clean after a partial
calculation of a database. When you calculate a subset of a
database, you can use the SET CLEARUPDATESTATUS AFTER
command to ensure that the newly calculated blocks are marked as
clean. Using this command ensures that Analytic Services
recalculates the database as efficiently as possible using Intelligent
Calculation. For a comprehensive discussion of Intelligent
Calculation, see Optimizing with Intelligent Calculation.
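For illustration, a minimal sketch of such a partial calculation (member names are from Sample Basic):
/* Mark the newly calculated blocks as clean, even though
   this is a partial calculation of the database. */
SET CLEARUPDATESTATUS AFTER;
FIX(Colas)
CALC DIM(Year);
ENDFIX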
For detailed information on these and other member set functions, see
the Technical Reference.
FIX(@CHILDREN(East) AND @UDA(Market,"New Mkt"))
Marketing = Marketing * 1.1;
ENDFIX
The next example uses a wildcard match to fix on member names that
end in the characters -10. In Sample Basic, this example fixes on the
members 100-10, 200-10, 300-10, and 400-10.
FIX(@MATCH(Product, "???-10"))
Price = Price * 1.1;
ENDFIX
When you use the FIX command only on a dense dimension, Analytic
Services retrieves the entire block that contains the required value or
values for the member or members that you specify. Thus, I/O is not
affected, and the calculation performance time is improved.
Analytic Services cycles through the database once for each FIX
command that you use on dense dimension members. When possible,
combine FIX blocks to improve calculation performance. For example,
the following calculation script causes Analytic Services to cycle
through the database only once, calculating both the Actual and the
Budget values:
FIX(Actual,Budget)
CALC DIM(Year, Measures);
ENDFIX
FIX(Actual)
CALC DIM(Year, Measures);
ENDFIX
FIX(Budget)
CALC DIM(Year, Measures);
ENDFIX
FIX(@CHILDREN(East) AND @UDA(Market,"New Mkt"))
CALC DIM(Year, Measures, Product, Market);
ENDFIX
For detailed information on using the FIX command, see the Technical
Reference.
FIX(Budget)
(Sales = Sales -> Actual * 1.1;
Expenses = Expenses -> Actual * .95;)
ENDFIX
Note that Sales and Expenses, the results of the equations, are dense
dimension members, and the operand, Actual, is in a sparse
dimension. Because Analytic Services executes dense member
formulas only on existing data blocks, the calculation script above does
not create the required data blocks and Budget data values are not
calculated for blocks that do not already exist.
You can solve the problem using the following techniques, each with its
own advantages and disadvantages:
DATACOPY Sales -> Actual TO Sales -> Budget;
DATACOPY Expenses -> Actual TO Expenses -> Budget;
FIX(Budget)
(Sales = Sales -> Actual * 1.1;
Expenses = Expenses -> Actual * .95;)
ENDFIX
Analytic Services creates blocks that contain the Budget values for
each corresponding Actual block that already exists. After the
DATACOPY commands are finished, the remaining part of the script
changes the values.
FIX(Budget)
SET CREATENONMISSINGBLK ON;
(Sales = Sales?>?Actual * 1.1;
Expenses = Expenses?>?Actual * .95;)
ENDFIX
However, when you use partitioning, you need to perform both of the
following tasks:
• Consider carefully the performance impact on the overall
database calculation. You might choose to use any of the
following methods to improve performance:
o Redesign the overall calculation to avoid referencing
remote values that are in a transparent partition in a
remote database.
o Dynamically calculate a value in a remote database. See
Dynamically Calculating Data in Partitions.
o Replicate a value in the database that contains the
applicable formula. For example, if you are replicating
quarterly data for the Eastern region, replicate only the
values for Qtr1, Qtr2, Qtr3, and Qtr4, and calculate the
parent Year values locally.
• Ensure that a referenced value is up-to-date when Analytic
Services retrieves it. Choose one of the options previously
discussed (redesign, dynamically calculate, or replicate) or
calculate the referenced database before calculating the
formula.
West, Central, and East contain only actual values. Corporate contains
actual and budgeted values. Although you can view the West, Central,
and East data in the Corporate database, the data exists only in the
West, Central, and East databases; it is not duplicated in the Corporate
database.
Checking Syntax
Analytic Services includes a syntax checker that tells you about any
syntax errors in a calculation script. For example, Analytic Services
tells you if you have typed a function name incorrectly. The syntax
checker cannot tell you about semantic errors in a calculation script.
Semantic errors occur when a calculation script does not work as you
expect. To find semantic errors, always run the calculation, and check
the results to ensure they are as you expect.
Error: line 1: invalid statement; expected semicolon
When you reach the first or last error, Analytic Services displays the
following message:
No more errors
To check the syntax of a calculation script in Calculation Script Editor,
see "Checking Script Syntax" in the Essbase XTD Administration
Services Online Help.
To display the application log, see Viewing the Analytic Server and
Application Logs.
Reviewing Examples of Calculation Scripts
The examples in this chapter illustrate different types of calculation
scripts, which you may want to adapt for your own use.
• Calculating Variance
• Calculating Database Subsets
• Loading New Budget Values
• Calculating Product Share and Market Share Values
• Allocating Costs Across Products
• Allocating Values Within or Across Dimensions
• Goal Seeking Using the LOOP Command
• Forecasting Future Values
Calculating Variance
The Sample Basic database includes a calculation of the percentage of
variance between Budget and Actual values.
Figure 202: Calculating Variance and Variance %
CALC ALL;
SET UPDATECALC OFF;
SET CLEARUPDATESTATUS AFTER;
"Variance %";
Calculating Database Subsets
In this example, based on the Sample Basic database, the Marketing
managers of the regions East, West, South, and Central need to
calculate their respective areas of the database.
/* Calculate the Budget data values for the descendants of
East */
FIX(Budget, @DESCENDANTS(East))
CALC DIM(Year, Measures, Product);
ENDFIX
/* Consolidate East */
FIX(Budget)
@DESCENDANTS(East);
ENDFIX
The script calculates the Year, Measures, and Product dimensions for
each child of East.
Loading New Budget Values
/* Calculate all Budget values */
FIX(Budget)
CALC DIM(Year, Product, Market, Measures);
ENDFIX
/* Recalculate the Variance and Variance % formulas, which
require two passes */
Variance;
"Variance %";
Calculating Product Share and Market Share Values
This example, based on the Sample Basic database, calculates product
share and market share values for each market and each product.
/* First consolidate the Sales values to ensure that they
are accurate */
FIX(Sales)
CALC DIM(Year, Market, Product);
ENDFIX
/* Calculate each market as a percentage of the
total market for each product */
"Market Share" = Sales % Sales > Market;
/* Calculate each product as a percentage of the
total product for each market */
"Product Share" = Sales % Sales > Product;
/* Calculate each market as a percentage of its
parent for each product */
"Market %" = Sales % @PARENTVAL(Market, Sales);
/* Calculate each product as a percentage of its
parent for each market */
"Product %" = Sales % @PARENTVAL(Product, Sales);
Allocating Costs Across Products
The overhead costs are allocated based on each product's Sales value
as a percentage of the total Sales for all products.
/* Declare a temporary array called ALLOCQ
based on the Year dimension */
ARRAY ALLOCQ[Year];
/*Turn the Aggregate Missing Values setting off.
If this is your system default, omit this line */
SET AGGMISSG OFF;
/* Allocate the overhead costs for Actual values */
FIX(Actual)
OH_Costs (ALLOCQ = Sales / Sales -> Product; OH_Costs =
OH_TotalCost -> Product * ALLOCQ;);
/* Calculate and consolidate the Measures dimension */
CALC DIM(Measures);
ENDFIX
Allocating Values Within or Across Dimensions
/* Allocate budgeted total expenses based on prior year */
FIX("Total Expenses")
Budget = @ALLOCATE(Budget>"Total Expenses",
@CHILDREN("Total Expenses"),"PY Actual",,
spread,SKIPMISSING,roundAmt,0,errorsToHigh)
ENDFIX
                    Budget    PY Actual
Colas  Marketing    334*      150
For this example, a value of 750 (for Budget -> Total Expenses ->
Product -> East -> Jan) needs to be allocated to each expense
category for the children of product 100 across the states in the East.
The allocation uses values from PY Actual to determine the percentage
share that each category should receive.
/* Allocate budgeted total expenses based on prior year,
across 3 dimensions */
SET UPDATECALC OFF;
FIX (East, "100", "Total Expenses")
BUDGET = @MDALLOCATE(750,3,@CHILDREN("100"),@CHILDREN("Total
Expenses"),@CHILDREN(East),"PY Actual",,share);
ENDFIX
This table shows the values for PY Actual:
Jan, PY Actual      Marketing   Payroll   Misc   Total Expenses
Florida                 27        31        0          58
Connecticut             40        31        0          71
New Hampshire           15        31        1          47
Connecticut             26        23        0          49
Massachusetts           12        22        1          35
Florida                 12        22        1          35
Connecticut             94        51        0         145
New Hampshire           23        31        1          55
Jan, Budget         Marketing   Payroll   Misc   Total Expenses
Goal Seeking Using the LOOP Command
You want to know what sales value you have to reach in order to
obtain a certain profit on a specific product.
This example adjusts the Budget value of Sales to reach a goal of
15,000 Profit for Jan. The results are shown for product 100-10.
Profit                    12,278.50
    Margin                30,195.50
        Sales             49,950.00
        COGS              19,755.00
    Total Expenses        17,917.00
        Marketing          3,515.00
        Payroll           14,402.00
        Misc                      0
VAR
Target = 15000,
AcceptableErrorPercent = .001,
AcceptableError,
PriorVar,
PriorTar,
PctNewVarChange = .10,
CurTarDiff,
Slope,
Quit = 0,
DependencyCheck,
NxtVar;
/*Declare a temporary array variable called Rollback and
base it on the Measures dimension */
ARRAY Rollback [Measures];
/* Fix on the appropriate member combinations and perform
the goal-seeking calculation */
FIX(Budget, Jan, Product, Market)
LOOP (35, Quit)
Sales (Rollback = Budget;
AcceptableError = Target *
(AcceptableErrorPercent);
PriorVar = Sales;
PriorTar = Profit;
Sales = Sales + PctNewVarChange * Sales;);
CALC DIM(Measures);
Sales (DependencyCheck = PriorVar - PriorTar;
    IF(DependencyCheck <> 0)
        CurTarDiff = Profit - Target;
        IF(@ABS(CurTarDiff) > @ABS(AcceptableError))
            Slope = (Profit - PriorTar) / (Sales - PriorVar);
            NxtVar = Sales - (CurTarDiff / Slope);
            PctNewVarChange = (NxtVar - Sales) / Sales;
        ELSE
            Quit = 1;
        ENDIF;
    ELSE
        Budget = Rollback;
        Quit = 1;
    ENDIF;);
ENDLOOP
CALC DIM(Measures);
ENDFIX
Profit                    15,000.00
    Margin                32,917.00
        Sales             52,671.50
        COGS              19,755.00
    Total Expenses        17,917.00
        Marketing          3,515.00
        Payroll           14,402.00
        Misc                      0
Forecasting Future Values
The following example uses the @TREND function to forecast sales
data for June through December, assuming that data currently exists
only up to May. Using the linear regression forecasting method, this
example produces a trend, or line, that starts with the known data
values from selected previous months and continues with forecasted
values based on the known values. In addition, this example
demonstrates how to check the results of the trend for "goodness of
fit" to the known data values.
Sales
(@TREND(@LIST(Jan,Mar,Apr),@LIST(1,3,4),,
@RANGE(ErrorLR,@LIST(Jan,Mar,Apr)),
@LIST(6,7,8,9,10,11,12),
Jun:Dec,LR););
Developing Custom-Defined Calculation Macros
For more details about the macro language syntax and rules, and
examples of the use of the macro language, see the Technical
Reference.
Understanding Custom-Defined Macros
Custom-defined macros use an internal macro language that enables
you to combine calculation functions and operate on multiple input
parameters.
Viewing Custom-Defined Macros
View a custom-defined macro to determine whether a macro has been
successfully created or whether a custom-defined macro is local or
global.
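For illustration, assuming the same application-scoping convention shown for displaying functions later in this guide, a MaxL statement such as the following would list the macros visible to the Sample application:
display macro Sample;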
Creating Custom-Defined Macros
When you create a custom-defined macro, Analytic Services records
the macro definition and stores it for future use. Create the macro
once, and then you can use it in formulas and calculation scripts until
the macro is updated or removed from the catalog of macros.
• Understanding Scope
• Naming Custom-Defined Macros
• Creating Macros
• Refreshing the Catalog of Custom-Defined Macros
Understanding Scope
You can create custom-defined macros locally or globally. When you
create a local custom-defined macro, the macro is only available in the
application in which it was created. When you create a global custom-
defined macro, the macro is available to all applications on the server
where it was created.
Be sure to add the application name plus a period (.) as a prefix before
the name of the local macro. In this example, Sample is the prefix for
the local macro name. This prefix assigns the macro to an application,
so the macro is only available within that application.
For example, use the following MaxL statement to create a local macro
named @COUNTRANGE used only in the Sample application:
create macro Sample.'@COUNTRANGE'(Any) AS
'@COUNT(SKIPMISSING, @RANGE(@@S))'
spec '@COUNTRANGE(MemberRange)'
comment 'counts all nonmissing values';
To create a global macro that is available to all applications on the
server, omit the application prefix; for example:
create macro '@COUNTRANGE'(Any) AS
'@COUNT(SKIPMISSING, @RANGE(@@S))'
spec '@COUNTRANGE(MemberRange)'
comment 'counts all nonmissing values';
For example, use the following MaxL statement to refresh the catalog
of custom-defined macros for the Sample application:
refresh custom definition on application sample;
Using Custom-Defined Macros
After creating custom-defined macros, you can use them like native
calculation commands. Local macros (created using the AppName.
prefix on the macro name) are only available for use in calculation
scripts or formulas within the application in which they were created.
Global macros (created without the AppName. prefix) are available to all
calculation scripts and formulas on the server where they were
created.
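For illustration, a minimal sketch of a formula that uses the @COUNTRANGE macro created earlier in this chapter; the member name "Active Months" is hypothetical:
/* Count the nonmissing monthly values. */
"Active Months" = @COUNTRANGE(Jan:Dec);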
Updating Custom-Defined Macros
When you update a custom-defined macro, you must determine
whether the macro is registered locally or globally. Local custom-
defined macros are created using an AppName. prefix in the macro
name and can be used only within the application where they were
created. For a review of the methods used to determine whether a
custom-defined macro is local or global, see Viewing Custom-Defined
Macros. For a review of the methods used to update the catalog of
macros after you change a custom-defined macro, see Refreshing the
Catalog of Custom-Defined Macros.
For example, use the following MaxL statement to change the local
macro @COUNTRANGE which is used only in the Sample application:
create or replace macro Sample.'@COUNTRANGE'(Any)
as '@COUNT(SKIPMISSING, @RANGE(@@S))';
For example, use the following MaxL statement to change the global
macro @COUNTRANGE:
create or replace macro '@COUNTRANGE'(Any)
as '@COUNT(SKIPMISSING, @RANGE(@@S))';
Copying Custom-Defined Macros
You can copy custom-defined macros to any Analytic Server and
application to which you have appropriate access.
Removing Custom-Defined Macros
When removing a custom-defined macro, you must first determine
whether the macro is registered locally or globally. The procedure for
removing global custom-defined macros is more complex than that for
removing local custom-defined macros and should only be performed
by a database administrator. For a review of methods used to
determine whether a custom-defined macro is local or global, see
Viewing Custom-Defined Macros.
For example, use the following MaxL statement to remove the local
macro @COUNTRANGE which is used only in the Sample application:
drop macro Sample.'@COUNTRANGE';
For example, use the following MaxL statement to remove the global
macro @COUNTRANGE:
drop macro '@COUNTRANGE';
Developing Custom-Defined Calculation Functions
This chapter explains how to develop custom-defined functions and
use them in Analytic Services formulas and calculation scripts.
Custom-defined functions are written in the Java™ programming
language and enable you to create calculation functions not otherwise
supported by the Analytic Services calculation scripting language.
Analytic Services does not provide tools for creating Java classes and
archives. This chapter assumes that you have a compatible version of
the Java Development Kit (JDK) and a text editor installed on the
computer you use to develop custom-defined functions. For
information on compatible versions of Java, see the Essbase XTD
Analytic Services Installation Guide.
Viewing Custom-Defined Functions
You can view custom-defined functions to determine whether a
function has been registered successfully and whether it is registered
locally or globally. No custom-defined functions are displayed until they
have been created and registered. Analytic Services does not supply
sample custom-defined functions.
For example, use the following MaxL statement to view the custom-
defined functions in the Sample application and any registered global
functions:
display function Sample;
Creating Custom-Defined Functions
There are several steps required to create a custom-defined function:
You can create more than one method in a class for use as a custom-
defined function. In general, it is recommended that you create all the
methods you want to use as custom-defined functions in a single class.
However, if you want to add new custom-defined functions that are not
going to be used across all applications on the Analytic Server, create
them in a new class and add them to the Analytic Server in a separate
.jar file.
When creating multiple Java classes that contain methods for use as
custom-defined functions, verify that each class name is unique.
Duplicate class names cause methods in the duplicate class not to be
recognized, and you cannot register those methods as custom-defined
functions.
After creating the Java classes and methods for custom-defined
functions, test them using test programs in Java. When you are
satisfied with the output of the methods, install them on Analytic
Server and register them in a single test application. Do not register
functions globally for testing, because registering functions globally
makes it more difficult to update them if you find problems.
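For illustration, a minimal sketch of such a class follows. The class and method names match the CalcFunc.sum registration example shown later in this chapter; the exact signature conventions are documented in the Technical Reference:
// A sketch: a Java class whose public static method can be
// registered as a custom-defined function.
public class CalcFunc {
    // Sums a list of input values. Member ranges are assumed
    // to arrive as an array of doubles.
    public static double sum(double[] data) {
        double total = 0.0d;
        for (int i = 0; i < data.length; i++) {
            total += data[i];
        }
        return total;
    }
}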
A method used as a custom-defined function can take parameters of
data types that include the following:
boolean        byte
char           java.lang.String
The method return data type can be void or any of the preceding data
types. Returned data types are converted to Analytic Services-specific
data types. Strings are mapped to a string type. Boolean values are
mapped to the CalcBoolean data type. All other values are mapped to
a double type.
Use the same process for updating the catalog of functions as you do
for updating the catalog of macros. After you register a custom-
defined function, see Refreshing the Catalog of Custom-Defined
Macros.
For example, use the following MaxL statement to register a global
function named @JSUM:
create function '@JSUM'
as 'CalcFunc.sum'
spec '@JSUM(memberRange)'
comment 'adds list of input members';
The AppName. prefix is not included in the name of the function. The
lack of a prefix makes a function global.
Using Registered Custom-Defined Functions
After registering custom-defined functions, you can use them like
native Analytic Services calculation commands. Functions you
registered locally (using the AppName. prefix on the function name) are
only available for use in calculation scripts or formulas within the
application in which they were registered. If you registered the
custom-defined functions globally, then the functions are available to
all calculation scripts and formulas on the Analytic Server where the
functions are registered.
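For illustration, a minimal sketch of a formula that uses the @JSUM function registered earlier in this chapter; the member name "Total Sales" is hypothetical:
/* Add up the monthly values using the custom-defined function. */
"Total Sales" = @JSUM(Jan:Dec);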
Updating Custom-Defined Functions
When you update a custom-defined function, there are two major
issues to consider:
• Is the function registered locally or globally?
• Have the class name, method name, or input parameters
changed in the Java code for the custom-defined function?
Removing Custom-Defined Functions
When removing a custom-defined function, you must first determine
whether the function is registered locally or globally to identify the
security permissions required.
For example, use the following MaxL statement to remove the local
@JSUM function, which is registered only in the Sample application:
drop function Sample.'@JSUM';
For example, use the following MaxL statement to remove the global
@JSUM function:
drop function '@JSUM';
Copying Custom-Defined Functions
You can copy custom-defined functions to any Analytic Server and
application to which you have appropriate access.
Considering How Custom-Defined Functions Affect Performance and Memory
The ability to create and run custom-defined functions is provided as
an extension to the Analytic Services calculator framework. When you
use custom-defined functions, consider how their use affects memory
resources and calculator performance.
Performance Considerations
Because custom-defined functions are implemented as an extension of
the Analytic Services calculator framework, you can expect custom-
defined functions to operate less efficiently than functions that are
native to the Analytic Services calculator framework. In tests using a
simple addition function running in the Java Runtime Environment 1.3
on the Windows NT 4.0 platform, the Java function ran 1.8 times
(about 80%) slower than a similar addition function performed by
native Analytic Services calculation commands.
Memory Considerations
Use of the Java Virtual Machine (JVM) and Java API for XML Parsing
has an initial effect on the memory required to run Analytic Services.
The memory required to run these additional components is
documented in the memory requirements for Analytic Services. For
more information about memory requirements, see the Essbase XTD
Analytic Services Installation Guide.
Creating a Simple Report Script
When you combine report commands that include page, row, and
column dimension declarations with selected members, you have all
the elements of a simple report script.
1. Create a report script.
   For more information, see "Creating Scripts" in the Essbase XTD
   Administration Services Online Help.
2. Type the following information in the report script, with the
   exception of the commented (//) lines, which are for your
   reference:
   // This is a simple report script example
   // Define the dimensions to list on the current page, as below
   <PAGE (Market, Measures)

   // Define the dimensions to list across the page, as below
   <COLUMN (Year, Scenario)

   // Define the dimensions to list down the page, as below
   <ROW (Product)

   // Select the members to include in the report
   Sales
   <ICHILDREN Market
   Qtr1 Qtr2
   Actual Budget Variance
   <ICHILDREN Product

   // Finish with a bang
   !
3. Save the report script.
   For more information, see "Saving Scripts" in the Essbase XTD
   Administration Services Online Help.
4. Execute the report script.
   For information, see "Executing Report Scripts" in the Essbase
   XTD Administration Services Online Help.
When you execute this report against the Sample Basic database, the
script produces the following report:
                          East Sales

                       Qtr1                           Qtr2
           Actual    Budget  Variance     Actual    Budget  Variance
         ========  ========  ========   ========  ========  ========
100         9,211     6,500     2,711     10,069     6,900     3,169
200         6,542     3,700     2,842      6,697     3,700     2,997
300         6,483     4,500     1,983      6,956     5,200     1,756
400         4,725     2,800     1,925      4,956     3,200     1,756
Product    26,961    17,500     9,461     28,678    19,000     9,678

                          West Sales

                       Qtr1                           Qtr2
           Actual    Budget  Variance     Actual    Budget  Variance
         ========  ========  ========   ========  ========  ========
100         7,660     5,900     1,760      7,942     6,500     1,442
200         8,278     6,100     2,178      8,524     6,200     2,324
300         8,599     6,800     1,799      9,583     7,600     1,983
400         8,403     5,200     3,203      8,888     6,300     2,588
Product    32,940    24,000     8,940     34,937    26,600     8,337

                          South Sales

                       Qtr1                           Qtr2
           Actual    Budget  Variance     Actual    Budget  Variance
         ========  ========  ========   ========  ========  ========
100         5,940     4,100     1,840      6,294     4,900     1,394
200         5,354     3,400     1,954      5,535     4,000     1,535
300         4,639     4,000       639      4,570     3,800       770
400      #Missing  #Missing  #Missing   #Missing  #Missing  #Missing
Product    15,933    11,500     4,433     16,399    12,700     3,699

                         Central Sales

                       Qtr1                           Qtr2
           Actual    Budget  Variance     Actual    Budget  Variance
         ========  ========  ========   ========  ========  ========
100         9,246     6,500     2,746      9,974     7,300     2,674
200         7,269     6,800       469      7,440     7,000       440
300        10,405     6,200     4,205     10,784     6,800     3,984
400        10,664     5,200     5,464     11,201     5,800     5,401
Product    37,584    24,700    12,884     39,399    26,900    12,499

                         Market Sales

                       Qtr1                           Qtr2
           Actual    Budget  Variance     Actual    Budget  Variance
         ========  ========  ========   ========  ========  ========
100        32,057    23,000     9,057     34,279    25,600     8,679
200        27,443    20,000     7,443     28,196    20,900     7,296
300        30,126    21,500     8,626     31,893    23,400     8,493
400        23,792    13,200    10,592     25,045    15,300     9,745
Product   113,418    77,700    35,718    119,413    85,200    34,213
Understanding How Report Writer Works
The Report Writer consists of three main components:
• Report Script Editor is a text editor that you use to write the
report script. In Report Script Editor, you use report
commands to define formatted reports, export data subsets
from a database, and produce free-form reports. You can then
execute the script to generate a report. Report Script Editor
features a text editing window, a customized right-click menu,
a toolbar, and a message panel. Saved report scripts have the
file extension .rep.
• Report Extractor retrieves data from the
Analytic Services database when you run a report script.
• Report Viewer displays the complete report. Saved reports
have the file extension .txt.
Report Extractor
The Report Extractor processes the report script and retrieves data in
the following order:
Parts of a Report
Understanding the parts of a report is essential as you plan and design
your own reports.
You can enter one or more report scripts in a report script file. A report
script file is an ASCII text file that you create with Report Script Editor
or any text editor.
See the Technical Reference for detailed information about the various
report commands that you can use.
Planning Reports
Report design is an important part of presenting information.
Designing a report is easy if you include the proper elements and
arrange information in an attractive, easy-to-read layout.
Note: As you plan the report, minimize use of numeric row names.
To avoid ambiguity, give the rows names that describe their
content.
Considering Security and Multiple-User Issues
To use Report Script Editor to create or modify a report script, you
must use Essbase Administration Services; you can also create report
script files in any text editor. Report Script Editor lets you create and
modify report scripts stored on your desktop machine as well as on
the Analytic Server. To modify report scripts stored on the server, you
must have Application Designer or Database Designer access.
To users who are only reporting data, locks placed by other users are
transparent. Even if a user has locked and is updating part of the data
required by the report, the lock does not interfere with the report in
any way. The data in the report reflects the data in the database at the
time you run the report. Running the same report later reflects any
changes made after the last report ran.
To save a report script using Report Script Editor, see "Saving Report
Scripts" in Essbase XTD Administration Services Online Help.
Executing Report Scripts
When you execute a report script using Essbase Administration
Services, you can send the results to the Report Viewer window, to a
printer, and/or to a file. From the Report Viewer window, you can print,
save, and copy the report.
Using Essbase Administration Services, you can execute a report in the
background so that you can continue working as the report processes.
You can then check the status of the background process to see when
the report has completed.
Developing Free-Form Reports
Free-form reports are often easier to create than structured reports.
The free-form reporting style is ideal for ad hoc reporting in the Report
Script Editor window.
Sales Colas
Jan Feb Mar
Actual Budget
Illinois
Ohio
Wisconsin
Missouri
Iowa
Colorado
{UCHARACTERS}
Central
!
                        Sales 100

                 Jan            Feb            Mar
            Actual Budget  Actual Budget  Actual Budget
           ======= ======  ====== ======  ====== ======
Illinois       829    700     898    700     932    700
Ohio           430    300     397    300     380    300
Wisconsin      490    300     518    400     535    400
Missouri       472    300     470    300     462    300
Iowa           161      0     162      0     162      0
Colorado       643    500     665    500     640    500
           ======= ======  ====== ======  ====== ======
Central      3,025  2,100   3,110  2,200   3,111  2,200
Sales
Jan Feb Mar
Actual Budget
Apr May Jun
California
Oregon
Washington
Utah
Nevada
{UCHARACTERS}
West
!
                      Product Sales

                   Actual                  Budget
              Apr    May    Jun       Apr    May    Jun
           ====== ====== ======    ====== ====== ======
California  3,814  4,031  4,319     3,000  3,400  3,700
Oregon      1,736  1,688  1,675     1,100  1,000  1,100
Washington  1,868  1,908  1,924     1,500  1,600  1,700
Utah        1,449  1,416  1,445       900    800    800
Nevada      2,442  2,541  2,681     1,900  2,000  2,100
           ====== ====== ======    ====== ====== ======
West       11,309 11,584 12,044     8,400  8,800  9,400
9,400
Understanding Extraction and Formatting Commands
Extraction commands perform the following actions:
• Determine the selection, orientation, grouping, and ordering
of raw data records extracted from the database. Extraction
commands are based on either dimension or member names,
or keywords. Their names begin with the less-than symbol
(<).
• Apply to the report from the line on which they occur until the
end of the report. If another extraction command occurs on a
subsequent line of the report, it overrides the previous
command.
Understanding Report Script Syntax
To build a report, you enter commands that define the layout, member
selection, and format you want in Report Script Editor. The different
elements of a script are color-coded to aid in readability. When you
write a report script, follow these guidelines:
<PAGE            <COLUMN            <ROW
<PAGE (Product, Measures)
<COLUMN (Scenario, Year)
Actual
<ICHILDREN Qtr1
<ROW (Market)
<IDESCENDANTS East
!
                  Product Measures Actual

                     Jan      Feb      Mar     Qtr1
                ======== ======== ======== ========
New York             512      601      543    1,656
Massachusetts        519      498      515    1,532
Florida              336      361      373    1,070
Connecticut          321      309      290      920
New Hampshire         44       74       84      202
East               1,732    1,843    1,805    5,380
You can create page, column, and row headings with members of
attribute dimensions. The following report script is based on the
Sample Basic database:
<PAGE (Measures,Caffeinated)
Profit
<COLUMN (Year,Ounces)
Apr May
"12"
<ROW (Market,"Pkg Type")
Can
<ICHILDREN East
!
              Profit Caffeinated 12 Scenario

                              Apr       May
                         ========  ========
New York       Can            276       295
Massachusetts  Can            397       434
Florida        Can            202       213
Connecticut    Can            107        98
New Hampshire  Can             27        31
East           Can          1,009     1,071
Modifying Headings
You can perform the following modifications to headings in the report:
Symmetric report:
               East                       West
        Budget      Actual         Budget      Actual
      Q1 Q2 Q3     Q1 Q2 Q3      Q1 Q2 Q3     Q1 Q2 Q3

Asymmetric report:
               East                West
        Budget      Actual        Budget
      Q1 Q2 Q3     Q1 Q2 Q3      Q1 Q2 Q3
By default, Analytic Services creates a symmetric report unless you
select the same number of members for all column dimensions.
<PAGE (Measures, Market)
Texas Sales
<COLUMN (Scenario, Year)
Actual Budget
{DECIMAL 2 3 }
Jan Feb Mar
{DECIMAL 1 1 4 }
<ROW (Product)
<DESCENDANTS "100"
!
                        Sales Texas

                  Actual                      Budget
          Jan       Feb       Mar       Jan       Feb       Mar
          ===       ===       ===       ===       ===       ===
100-10  452.0       465    467.00     560.0       580    580.00
100-20  190.0       190    193.00     230.0       230    240.00
100-30  #MISSING #MISSING #MISSING  #MISSING  #MISSING  #MISSING
//Script One: Format Columns by Distributing the Formats
<PAGE (Measures, Market)
California Sales
<COLUMN (Scenario, Year)
Actual Budget Variance
{DECIMAL 1 1 }
{DECIMAL 2 3 }
Jan Feb
// {DECIMAL 1 1 4 }    These lines are commented; the
// {DECIMAL 2 3 6 }    Report Extractor ignores them.
<ROW (Product)
<DESCENDANTS "100"
!
The two {DECIMAL} commands are positioned to format the individual
columns 1, 3, 4, and 6.
// Script Two: Format Columns by Direct Assignment
<PAGE (Measures, Market)
California Sales
<COLUMN (Scenario, Year)
Actual Budget Variance
// {DECIMAL 1 1 }    These lines are commented; the
// {DECIMAL 2 3 }    Report Extractor ignores them.
Jan Feb
{DECIMAL 1 1 4 7 }
{DECIMAL 2 3 6 9 }
<ROW (Product)
<DESCENDANTS "100"
!
                      Sales California

              Actual             Budget            Variance
           Jan     Feb       Jan      Feb       Jan       Feb
         =====    ====      ====     ====     =====      ====
100-10   678.0     645    840.00    800.0     (162)  (155.00)
100-20   118.0     122    140.00    150.0      (22)   (28.00)
100-30   145.0     132    180.00    160.0      (35)   (28.00)
Totaling Columns
The CALCULATE COLUMN command lets you create a new report
column, perform on-the-fly calculations, and display the calculation
results in the newly created column.
If you use the same name for more than one column, Analytic Services
creates only the last column specified in the CALCULATE COLUMN
command. Use a leading space with the second name (and two leading
spaces with the third name, and so on) to create a unique column
name.
Alternatively, you can add descriptive text far enough to the right that it
is truncated to the column width. You can, for example, use the names
Q1 Actual and Q1 Budget to distinguish similar column names without
affecting the appearance of the report. Column names are printed with
right justification until the column header space is filled. Excess
characters are then truncated to the right.
Divide lengthy column name labels into two or more lines. The
maximum number of lines across which you can divide a label is equal
to the number of column dimensions designated in the report
specification. To break a column name, insert the tilde character (~) in
the name at the point where you want the break. You must also
specify at least two members for each column dimension to use the
maximum number of lines.
                          Sales East

           Actual       Year to Date     Budget       Year to Date
           Jan    Feb   Actual Total     Jan    Feb   Budget Total
         =====  =====  =============   =====  =====  =============
400-10     562    560          1,122     580    580          1,702
400-20     219    243            462     230    260            722
400-30     432    469            901     440    490          1,391
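For illustration, the Actual Total column in a report like the preceding one could be defined as a calculated column whose name contains a tilde, assuming that the CALC COL abbreviation parallels the CALC ROW form shown later in this chapter (the column expression is illustrative only):
{ CALC COL "Year to Date~Actual Total" = 1 + 2 }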
In cases where there are fewer column header dimensions than the
number of levels that you want, you can create multi-line column
labels. In this case, use TEXT, STARTHEADING, ENDHEADING, and
other formatting commands to create a custom heading.
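For illustration, a sketch of such a custom heading follows; the text content and column positions are arbitrary, and the exact syntax of TEXT, SKIP, STARTHEADING, and ENDHEADING is documented in the Technical Reference:
{ STARTHEADING
TEXT 2 "The Beverage Company"
SKIP
ENDHEADING }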
Numbering Columns
If the number of regular (not calculated) columns varies in the report
because multiple sections in the report have different numbers of
columns, the column numbers used to identify the calculated columns
shift accordingly, as illustrated in the following examples:
In the example, CC1, CC2, and CC3 represent three calculated
columns. The column numbering for a report that has two
sections with different numbers of regular columns looks as
follows:
internal
col #s:   0     1    2    3    4    5    6    7
               Jan  Feb  Mar  Apr  CC1  CC2  CC3
               ===  ===  ===  ===  ===  ===  ===
Sales           1    3    5    3   22   55   26
Expense         1    2    5    3   23   65   33

same report, new section
internal
col #s:   0     1    2    3    4    5
              Qtr1  YTD  CC1  CC2  CC3
               ===  ===  ===  ===  ===
Sales           2    9   22   57   36
Expense         4    8   56   45   33
Totaling Rows
Row calculations create summary rows in a report. You can use
summary rows to calculate the sum of data across a range of rows or
to calculate an arithmetic expression composed of simple
mathematical operators.
For the syntax and definitions of row calculation commands, see the
Technical Reference.
Commands that designate columns must use valid data column
numbers, as determined by the original order of the columns.
{ CALC ROW "Total Sales" = "Sales..Group1" + "Sales..Group2" }
The example creates "Total Sales" based on two other calculated rows.
Underlining
Use underlining as a visual aid to break up blocks of information in a
report.
Indenting
Use indenting to provide visual cues to the row levels in the report.
Titles repeat at the top of each report page, and provide the following
information about a report:
To add a title to the report, use the TEXT command, combined with
any of the following:
Note: You can also use the TEXT command at the bottom of the
report to provide summary information.
See the Technical Reference for the syntax and definitions of Report
Writer commands.
The report displays the default #MISSING label in the data cell when
no data values are found.
To replace the #MISSING label with a text label, add the following to
the report script:
At the point in the script where you want to replace the #MISSING
label with a text label, type:
{MISSINGTEXT ["text"]}
where text is any text string that you want to display in the data cells.
You can place the MISSINGTEXT command at any point in the report
script; the command applies throughout the script.
To replace zeros with a text label, add the following to the report
script:
At the point in the script where you want to replace zeros with a text
label, type:
{ZEROTEXT ["text"]}
where text is any text string that you want to display in the data cells.
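For example, a minimal sketch (the label text is illustrative):
{MISSINGTEXT "No Data"}
{ZEROTEXT "Zero"}
Data cells that contain #MISSING then display No Data, and cells that contain zeros display Zero.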
Task                                                          Report Command

Turn on the display of commas for numbers greater than 999,   COMMAS
after commas have been suppressed with either a SUPCOMMAS
or SUPALL command.

Turn on the display of brackets around negative numbers       BRACKETS
instead of negative signs, after using the SUPBRACKETS
command earlier in the script.

Include a percentage sign or other character after the data   AFTER
values.

Include a dollar sign or other character before the data      BEFORE
values.
Selecting Members
Member selection commands are extraction commands that select
ranges of members based on database outline relationships, such as
sibling, generation, and level. Using member selection commands
ensures that any changes to the outline are automatically reflected in
the report, unless you change the member name on which the
member selection command is based. Attribute dimensions can be
included in member selection commands.
Task                                                          Report Command

Select all the members from the same dimension as the         ALLINSAMEDIM
dimension member.

Include all the siblings of the specified member.             ALLSIBLINGS
When you use generation and level names, changes to the outline are
automatically reflected in the report. You can either define your own
generation and level names, or you can use the default names
provided by Analytic Services. For a discussion, including examples, of
generations and levels, see Generations and Levels.
To use default level names, add the following to the report script:
At the point in the script where you want to select a member by the
default level name, use the following format:
Levn,dimensionName
For example, Lev1,Year selects all the level 1 members of the Year
dimension.
To use default generation names, add the following to the report script:
At the point in the script where you want to select a member by the
default generation name, use the following format:
Genn,dimensionName
Note: These default generation and level names are not displayed
in Outline Editor.
The following example is based on the Sample Basic database. It uses
the default generation name Gen2,Year to generate a report that
includes the members Qtr1, Qtr2, Qtr3, and Qtr4 from the Year
dimension.
<PAGE(Product)
<COLUMN(Year)
<ROW (Measures)
{OUTALTNAMES}
Cola
Gen2,Year
Sales Profit
!
Cola Market Scenario
Qtr1 Qtr2 Qtr3 Qtr4
======== ======== ======== ========
Sales 14,585 16,048 17,298 14,893
Profit 5,096 5,892 6,583 5,206
Note: The database header message for the outline identifies the
number of dynamic members that are enabled in the current
outline.
At the point in the script where you want to select a Dynamic Time
Series member, use either of the following formats:
<LATEST memberName
If you use this syntax to specify a Dynamic Time Series, the time
series name is associated only to the member listed in the argument.
When you run the report script, the members are dynamically
updated, and the information is incorporated into the final report.
Note: You must type the Dynamic Time Series string exactly as it is
displayed in the database outline; you cannot create your own
string and incorporate it into the final report.
You can create an alias table for the Dynamic Time Series members
in the database outline, and use the aliases instead of the
predefined generation names.
At the point in the script where you want to use linking, enter the
following format:
<LINK (extractionCommand [operator extractionCommand ])
where extractionCommand is the member selection command to
retrieve data from, and operator is either the AND or OR operator.
Note: You must select members from the same dimension, and all
extraction command arguments must be enclosed in parentheses,
as in the examples below. NOT can only be associated with an
extraction command, and does not apply to the entire expression.
Examples:
<LINK ((<IDESCENDANTS("100") AND <UDA(Product,Sweet)) OR
<ONSAMELEVELAS "100-10")
selects sweet products from the "100" subtree, plus all products on the
same level as "100-10."
<LINK ((<IDESCENDANTS("100") AND NOT <UDA (Product,Sweet))
OR <ONSAMELEVELAS "100-10")
selects products that are not sweet from the "100" subtree, plus all
products on the same level as "100-10."
At the point in the script where you want to use the variable, use the
following format:
&variableName
For example,
<ICHILDREN &CurQtr
becomes
<ICHILDREN Qtr1
At the point in the script where you want to select members based on
a specific attribute, use the following format:
<ATTRIBUTE memberName
For example:
<ATTRIBUTE Bottle
<ATTRIBUTE Ounces_24
Attribute types can be text, numeric, date, and Boolean. For a
description of each attribute type, see Understanding Attribute Types.
At the point in the script where you want to select base dimension
members based on their attributes, use the following format:
<WITHATTR (attributeDimensionName, "Operator", Value)
The following command returns all base dimension members that are
associated with the attribute Small from the Population attribute
dimension.
<WITHATTR (Population, "IN", Small)
The following command returns all base dimension members that are
associated with a value of less than 32 from the Ounces attribute
dimension.
<WITHATTR (Ounces, "<", 32)
The following command returns all base dimension members that are
packaged in cans and have an Ounces attribute value of less than 32.
<LINK ((<WITHATTR (Ounces, "<", 32)) AND (<WITHATTR ("Pkg
Type", "=", Can)))
The following format returns data on all products that were introduced
on December 10, 1996.
<WITHATTR ("Intro Date", "=", <TODATE ("mm-dd-yyyy", "12-10-1996"))
The following format returns data on all products that were introduced
before December 10, 1996.
<WITHATTR ("Intro Date", "<", <TODATE ("mm-dd-yyyy", "12-10-1996"))
The following format returns data on all products that were introduced
after December 10, 1996.
<WITHATTR ("Intro Date", ">", <TODATE ("mm-dd-yyyy", "12-10-1996"))
UDAs are different from attributes. UDAs are member labels that you
create to extract data based on a particular characteristic, but you
cannot use UDAs to group data, to perform crosstab reporting, or to
retrieve data selectively. Hence, for data analysis, UDAs are not as
powerful as attributes.
You can use the UDA command in conjunction with Boolean operators
to refine report queries further. See Selecting Members by Using
Boolean Operators for examples of the UDA command being used with
a Boolean operator.
At the point in the script where you want to select members based on
the UDA, use the following format:
<UDA (dimensionName, "UDAstring")
<UDA (product,"Sweet")
When you run the report script, Analytic Services incorporates the UDA
members into the final report.
Note: You must type the UDA string exactly as it is displayed in the
database outline; you cannot create your own UDA string and
incorporate it into the report script.
At the point in the script where you want to select members using a
trailing wildcard, use the following format:
<MATCH (memberName,"character*")
where memberName is the name of the member whose descendants
you want to search, and character represents the beginning character
or characters of the member names to match. Using the Sample Basic
database,
<MATCH (Year,"J*")
returns Jan, Jun, and Jul.
At the point in the script where you want to select members using a
pattern-matching wildcard, use the following format:
<MATCH (memberName,"???characters")
<MATCH (Product,"???-10")
returns 100-10, 200-10, 300-10, and 400-10.
"Unknown Member [Widgets]."
<PAGE(Year)
<COLUMN(Product)
<ROW (Measures)
Qtr1
ProductGroups
Sales Profit
!
The report script produces the following report:
                    Qtr1 Market Scenario
              100      200      300      400      Diet
           ======== ======== ======== ======== ========
Sales        25,048   26,627   23,997   20,148   25,731
Profit        7,048    6,721    5,929    5,005    7,017
When you run a report that includes static member definitions, the
report displays members in order of their definition in the report script
by member name. Sort commands have no effect on static member
definitions. See Sorting Members for a discussion of the effects of
sorting members.
You can suppress the display of duplicate shared members when you
select members with the following commands:
• Generation names
• Level names
• DIMBOTTOM command
• OFSAMEGEN command
• ONSAMELEVELAS command
To suppress the display of shared members, add the following command
to the report script:
<SUPSHARE
Task                                                          Report Command

Display the alias set in the current alias table, without     OUTALT
the member name.

Display the alias set in the current alias table, followed    OUTALTMBR
by the member name.

Display the member name, followed by the alias set in the     OUTMBRALT
current alias table.

Display the alias set in the current alias table, without     OUTALTNAMES
the member name, for all members in the report.

Reset the default display of member names after using the     OUTMBRNAMES
OUTALTNAMES command.

Include several alias tables within one report script.        OUTALTSELECT
<PAGE (Product, Measures)
<COLUMN (Scenario, Year)
{OUTALTNAMES}
<OUTALTMBR
Actual
<ICHILDREN Qtr1
<ROW (Market)
<IDESCENDANTS "300"
!
Dark Cream 300-10 Measures Actual
Jan Feb Mar Qtr1
======== ======== ======== ========
Market 800 864 880 2,544
Vanilla Cream 300-20 Measures Actual
Jan Feb Mar Qtr1
======== ======== ======== ========
Market 220 231 239 690
Diet Cream 300-30 Measures Actual
Jan Feb Mar Qtr1
======== ======== ======== ========
Market 897 902 896 2,695
Cream Soda 300 Measures Actual
Jan Feb Mar Qtr1
======== ======== ======== ========
Market 1,917 1,997 2,015 5,929
Sorting Members
When you sort the members you include in a report, be aware that
sorting commands affect members differently, depending on whether
they are referenced by member selection commands or by static
member definitions. Report Writer commands sort members either by
member name or data values.
Task                                                          Report Command

Sort all members alphabetically by the alias name of the      SORTALTNAMES
member, if aliases are used in the report script.

Sort all following members in ascending order, starting       SORTASC
with the lowest generation and moving toward the
highest generation.

Sort all following members in descending order, starting      SORTDESC
with the highest generation and moving toward the
lowest generation.

Sort all following members according to the generation        SORTGEN
of the member in the database outline.

Sort all following members according to the level of the      SORTLEVEL
member in the database outline.

Sort all members alphabetically by member name.               SORTMBRNAMES
Restricting and
Ordering Data Values
Several Report Writer commands let you perform conditional retrieval
and data sorting in reports.
Task                                                          Report Command

Specify the number of rows to return. These rows must         TOP
contain the top values of a specific data column.

Specify the number of rows to return. These rows must         BOTTOM
contain the lowest values of a specific data column.

Specify the conditions the columns of a data row must         RESTRICT
satisfy before the row is returned.

Specify the ordering of the rows of a report, based on        ORDERBY
the data values of data columns.
Using RESTRICT
The arguments of the <RESTRICT command let you specify
qualifications for selecting rows. Analytic Services includes only
qualified rows in the resulting report output.
<RESTRICT works only on the range of rows that you specify in a row
member selection.
Analytic Services processes the restrictions from left to right, and does
not allow grouping with parentheses in the list of arguments. For
example, the following statement is not allowed, because it groups
conditions with parentheses:
RESTRICT (... (@DATACOL(1) > 300 AND @DATACOL(2) < 600)...)
Because restrictions are processed from left to right, the following
statement:
RESTRICT (@DATACOL(1) > @DATACOL(2) AND 800 < @DATACOL(3)
OR @DATACOL(4) <> #MISSING)
is evaluated as if it were grouped in this way:
RESTRICT (((@DATACOL(1) > @DATACOL(2)) AND
(800 < @DATACOL(3))) OR (@DATACOL(4) <> #MISSING))
Using ORDERBY
The <ORDERBY command orders the output rows according to the
data values in the specified columns. You can specify either ascending
order (ASC, the default) or descending order (DESC). You can specify
different sorting directions for different columns of the same report.
You can use <TOP and <BOTTOM together in the same report, but only
one <TOP and one <BOTTOM command is allowed per report. In this
case, the two commands should use the same data column as their
argument to prevent confusion. The results of the <TOP and <BOTTOM
commands are sorted by the value of the data column specified in the
command, in descending order.
<TOP and <BOTTOM work only on the range of rows specified in row
member selection.
Note: If <TOP or <BOTTOM occurs with <ORDERBY, the ordering
column of the <ORDERBY does not have to be the same as the data
column of the <TOP or the <BOTTOM.
For example, this command returns two rows with the highest data
values in col2 (Actual, Qtr2) per row group:
TOP (2, @DATACOL(2))
When you run this command against the Sample Basic database, the
row grouping is Product, which implies that for Florida, the report
returns 100-10 and 100-30 product rows, and for Maine, the report
returns 100-10, 100-40 product rows, and so on.
                      Actual           Budget
                    Qtr1    Qtr2     Qtr1    Qtr2
Florida   100-10     570     670      570     650
          100-20     235     345      321     432
          100-30     655     555      455     865
          100-40     342     342      432     234
Maine     100-10     600     800      800     750
          100-20     734     334      734     534
          100-30     324     321      235     278
          100-40     432     342      289     310
New York  100-10    1010    1210     1110     910
          100-20     960     760      650     870
          100-30     324     550      432     321
          100-40     880     980      880    1080
          100-50     #MI     #MI      #MI     #MI
This example returns rows with the highest data values in col2 (Actual,
Qtr2) per report, because the row grouping is the "market."
2 TOP("market", 3, @DATACOL(2))
New York 10010 1010 1210 1110 910
10040 880 980 880 1080
Maine 10010 600 800 800 750
This example returns two rows with the lowest data values in col2
(Actual, Qtr2) per row group.
3 BOTTOM ("market", 2, @DATACOL(2))
Maine 10020 734 334 734
534
10030 324 321 235
278
SCRIPT 1                    SCRIPT 2
....                        ....
<ROW Market                 {UCOL}
{UCOL}                      Florida (row member)
<ICHILDREN Market           <BOTTOM ....
<TOP ....
Converting Data to a
Different Currency
If the database has a currency partition, you can calculate currency
conversions in report scripts. Use the <CURRENCY command to set the
output currency and currency type. Use the <CURHEADING command
to display the currency conversion heading.
Note: Currency conversion is not supported across transparent
partitions.
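For example, a minimal sketch (the currency name and exchange rate type are assumptions modeled on the Sample currency application):
// Convert subsequent report output to US dollars, actual exchange rate
<CURRENCY "USD:Act xchg"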
For the syntax and definitions of Report Writer commands, see the
Technical Reference.
Generating Reports
Using the C, Visual
Basic, and Grid APIs
Use the following table to determine the report API calls that you can
make:
See the API Reference for syntax and descriptions of these API
functions.
The following sections describe the data mining process in more detail.
The target accessor has the same set of domains as the predictor. You write MaxL DML
expressions to define the predictor and target accessors.
For example, consider this sample data mining problem:
Given the number of TVs, DVDs, and VCRs sold during a particular period, in the East
region, how many cameras were sold in the same period in the East? Restrict sales data to
prior year actual sales.
Using the regression algorithm, the predictor and target accessors to define the model for
this problem are as follows:
Note: In this example, the target accessor is the same with regard to all the predictor
attributes except the target domain ({[Camera]} ). However, the domain expressions for
different accessors are not required to be the same. The only requirement is that a
predictor component (for example predictor.sequence) and the corresponding target
component (target.sequence) must be the same size.
For each city in the East ({[East].Children}), the algorithm models camera sales as a
function of TV, DVD, and VCR sales. The Data Mining Framework creates, under the
same name, a family of results, or models; a separate result for each city in the East.
Training the Model
The final step of specifying a build task is to execute the algorithm against the data
specified by the accessors to build or train the model. During the training process, the
algorithm discovers and describes the patterns and relationships in the data that can be
used for prediction.
Internally, the algorithm represents the patterns and relationships it has discovered as a
set of mathematical coefficients. Later, the trained model can use these patterns and
relationships to generate new information from a different, but similarly structured, set of
data.
Note: If you cancel a data mining model while you are training it, the transaction is rolled
back.
See "Creating or Modifying a Test Task" in Essbase Administration Services Online Help
for information about creating a test task.
Viewing Data Mining Results
Data Mining Framework writes mining results back to the Analytic Services cube. Data
Mining Framework creates a result record, in XML format, that contains accessors that
specify the location of the result data in the cube.
You can view data mining results through the Data Mining node in Administration
Services or by using MaxL statements.
Preparing for Data Mining
The one essential prerequisite for performing data mining is that you understand your
data and the problem you are trying to solve. Data mining is a powerful
tool that can yield new insights. If you already have a strong hunch
about your data, data mining can be particularly useful in confirming or
refuting your hunch and giving you additional insights and directions to
follow.
Before you mine an Analytic Services database, make sure that the database is loaded and
calculated.
Built-in Algorithms
Hyperion supplies the following basic algorithms:
• Regression. Identifies dependencies between a specific value and other values.
For example, multilinear regression can determine how the amount of money
spent on advertising and payroll affects sales values.
• Clustering. Arranges items into groups with similar patterns. You use the
clustering algorithm for unsupervised classification. The algorithm examines data
and determines itself how to split the data into groups, or clusters, based on
specific properties of the data. The input required to build the model consists of a
collection of vectors with numeric coordinates. The algorithm organizes these
vectors into clusters based on their proximity to each other. The basic
assumption is that the clusters are small relative to the distance between
them and, therefore, can be effectively represented by their respective
centers. Hence, the model consists of the coordinates of the center
vectors.
Sequential runs on the same training set may produce slightly different results due
to the stochastic nature of the method. You specify the number of clusters to
generate, but it is possible the algorithm will find fewer clusters than requested.
Clusters can provide useful information about market segmentation and can be
used with other predictive tools. For example, clusters can determine the kinds of
users most likely to respond to an advertising campaign and then target just those
users.
• Neural network. Generalizes and learns from data. For example, neural networks
can be used to predict financial results.
You can use the neural net algorithm for both prediction and classification. This
algorithm is much more powerful and flexible than linear regression. For
example, you can specify multiple targets as well as multiple predictors.
On the other hand, the model generated by the neural net algorithm is not as easy
to interpret as that from linear regression.
One use of neural nets is binary classification. A series of inputs (predictors)
produces a set of results (targets) normalized to values between zero and one. For
example, a set of behaviors results in values between 0 and 1, with 1 being risky
and 0 being risk free. Values in between require interpretation; for example, 0.4 is
the high end of safe and 0.6 is the low end of risky.
• Decision tree. Determines simple rules for making decisions. The algorithm
results are the answers to a series of yes and no questions. A yes answer leads to
one part of the tree and a no answer to another part of the tree. The end result is a
yes or no answer. Decision trees are used for classification and prediction. For
example, a decision tree can tell you to suggest ice cream to a particular customer
because that customer is more likely to buy ice cream with root beer.
Use the decision tree algorithm to organize a collection of data belonging to
several different classes or types. In the build phase, you specify a set of data
vectors and provide the class of each. In the apply phase, you provide a set of
previously unknown vectors and the algorithm deduces their classes from the
model.
The algorithm constructs a series of simple tests or predicates to create a tree
structure. To determine the class of a data vector, the algorithm takes the input
data and traverses the tree from the root to the leaves performing a test at each
branch.
• Association Rules. Discovers rules in a series of events. The typical application
for this algorithm is market basket analysis: determining which other items
people who buy particular items also buy. For example, the result of a market
basket analysis might be that men who buy beer also buy diapers.
You define support and confidence parameters for the algorithm. The algorithm
selects sufficiently frequent subsets from a predefined set of items. On input, it
reads a sequence of item sets and looks for an item set (or its subset) whose
frequency in the whole sequence is greater than the support level. Such item
sets are broken into antecedent-consequent pairs, called rules. Rule confidence is
the ratio of the item set's frequency to the antecedent's frequency in all the item
sets. Rules with confidence greater than the given confidence level are added to
the list of "confident" rules.
Although the algorithm uses logical shortcuts during computation, avoiding
the need to consider all combinations of the item sets (whose number can be
practically infinite), the speed with which the algorithm executes depends on the
number of attributes to consider and the frequency with which they occur.
• Naive Bayes. Predicts class membership probabilities. Naive Bayes is a
lightweight classification algorithm. It is fast, uses little memory, and in many
applications performs quite satisfactorily, so you can try it first before moving to
a decision tree or a full-fledged clustering scheme.
The algorithm treats all the attributes of the case vector as if they were
independent of each other. It uses a training sequence of vectors and the
theoretical definition of the conditional probability to calculate the probabilities or
likelihoods that an attribute with a certain value belongs to a case with a certain
class. The model stores these probabilities. In the apply mode the case attributes
are used to calculate the likelihood of the case for each class. Then a class with
the maximal likelihood is assigned to the case.
Copying a Database
Subset
You can install both the Analytic Server and client on a Windows NT or
Windows 2000 workstation using Personal Essbase. Personal Essbase
is a one-port license and has its own license number. For information
about installing and configuring Personal Essbase on a computer, see
the Essbase XTD Analytic Services Installation Guide.
Once you have installed Personal Essbase, you can copy the outline file
(dbname.otl) and a data subset from the Analytic Server and load
them into Personal Essbase. The Personal Essbase server does not
communicate with the OLAP Server.
If required, you can repeat steps 3 and 4 to create an output file from
the database on the Personal Essbase server and load the data back
into the main Analytic Services database on a different computer.
How you copy the outline file depends on whether you can connect to
the source Analytic Services database from the Personal Essbase
computer.
You now have a copy of the database outline on the Personal Essbase
server.
To create a text file that contains the required data subset, follow
these steps:
1. Select the source database. For example, select West
Westmkts.
See "Navigating and Selecting Objects" in the Essbase XTD
Administration Services Online Help.
o If you can connect to the main Analytic Services
database from the Personal Essbase computer, you can
select the source database from the Personal Essbase
computer.
o If you cannot connect, use a different computer from
the Personal Essbase computer to select the source
database.
2. Create a new report.
See "Creating Scripts" in the Essbase XTD Administration
Services Online Help.
3. Write a report script that selects the required data subset. For
fundamental information on writing report scripts, see
Understanding Report Script Basics.
For example, the following report script selects the Actual,
Measures data for the West market from Sample Basic:
{TABDELIMIT}
<QUOTEMBRNAMES
Actual
<IDESC West
<IDESC Measures
o Use TABDELIMIT to place tab stops between data,
instead of spaces, to ensure that no member names or
data values are truncated.
o Use QUOTEMBRNAMES to place quotation marks (" ")
around member names that contain blank spaces.
Analytic Services then recognizes the member names
when it loads the data.
4. Execute the report script.
See "Executing Report Scripts" in the Essbase XTD
Administration Services Online Help.
5. Save the report output with a .txt extension; for example,
westout.txt.
If you are not using the Personal Essbase computer, save the
output file anywhere on the current computer. By default,
Analytic Services saves the file on the Analytic Services client
computer, not on the server. After you run the report, copy the
output file from the Analytic Services client directory to the
\ARBORPATH\app\appname\dbname directory on the Personal
Essbase server; for example, copy the output file to
c:\essbase\app\west\westmkts\westout.txt. You can use a
disk to copy the file.
You are now ready to load the text file into the new database.
To load a file into a database, see "Loading Data" in the Essbase XTD
Administration Services Online Help.
For detailed information on loading data and any errors that may
occur, see Performing and Debugging Data Loads or Dimension Builds.
You can now view the data on the Personal Essbase computer. You
might need to recalculate the database subset. Because you are
viewing a subset of the database, a percentage of the data values will
be #MISSING.
If required, you can copy report scripts and other object files to the
Personal Essbase computer to use with the database subset you have
created.
Before you can import data into some programs, you must separate,
or delimit, the data with specific characters.
<ROW (Year, Measures, Product, Market, Scenario)
{ROWREPEAT}
<ICHILDREN Year
Sales
<ICHILDREN "400"
East
Budget
!
Qtr1    Sales   400-10  East    Budget  900
Qtr1    Sales   400-20  East    Budget  1,100
Qtr1    Sales   400-30  East    Budget  800
Qtr1    Sales   400     East    Budget  2,800
Qtr2    Sales   400-10  East    Budget  1,100
Qtr2    Sales   400-20  East    Budget  1,200
Qtr2    Sales   400-30  East    Budget  900
Qtr2    Sales   400     East    Budget  3,200
Qtr3    Sales   400-10  East    Budget  1,200
Qtr3    Sales   400-20  East    Budget  1,100
Qtr3    Sales   400-30  East    Budget  900
Qtr3    Sales   400     East    Budget  3,200
Qtr4    Sales   400-10  East    Budget  1,000
Qtr4    Sales   400-20  East    Budget  1,200
Qtr4    Sales   400-30  East    Budget  600
Qtr4    Sales   400     East    Budget  2,800
Year    Sales   400-10  East    Budget  4,200
Year    Sales   400-20  East    Budget  4,600
Year    Sales   400-30  East    Budget  3,200
Year    Sales   400     East    Budget  12,000
<ROW (Year, Measures, Product, Market, Scenario)
{ROWREPEAT}
{DECIMAL 2}
<CHILDREN Qtr1
Sales
<DIMBOTTOM Product
East
Budget
!
Jan     Sales   100-10  East    Budget  1,600.00
Jan     Sales   100-20  East    Budget  400.00
Jan     Sales   100-30  East    Budget  200.00
Jan     Sales   200-10  East    Budget  300.00
Jan     Sales   200-20  East    Budget  200.00
Jan     Sales   200-30  East    Budget  #Missing
Jan     Sales   200-40  East    Budget  700.00
Jan     Sales   300-10  East    Budget  #Missing
Jan     Sales   300-20  East    Budget  400.00
Jan     Sales   300-30  East    Budget  300.00
Jan     Sales   400-10  East    Budget  300.00
Jan     Sales   400-20  East    Budget  400.00
Jan     Sales   400-30  East    Budget  200.00
Feb     Sales   100-10  East    Budget  1,400.00
Feb     Sales   100-20  East    Budget  300.00
Feb     Sales   100-30  East    Budget  300.00
Feb     Sales   200-10  East    Budget  400.00
Feb     Sales   200-20  East    Budget  200.00
Feb     Sales   200-30  East    Budget  #Missing
Feb     Sales   200-40  East    Budget  700.00
Feb     Sales   300-10  East    Budget  #Missing
Feb     Sales   300-20  East    Budget  400.00
Feb     Sales   300-30  East    Budget  300.00
Feb     Sales   400-10  East    Budget  300.00
Feb     Sales   400-20  East    Budget  300.00
Feb     Sales   400-30  East    Budget  300.00
Mar     Sales   100-10  East    Budget  1,600.00
Mar     Sales   100-20  East    Budget  300.00
Mar     Sales   100-30  East    Budget  400.00
Mar     Sales   200-10  East    Budget  400.00
Mar     Sales   200-20  East    Budget  200.00
Mar     Sales   200-30  East    Budget  #Missing
Mar     Sales   200-40  East    Budget  600.00
Mar     Sales   300-10  East    Budget  #Missing
Mar     Sales   300-20  East    Budget  400.00
Mar     Sales   300-30  East    Budget  300.00
Mar     Sales   400-10  East    Budget  300.00
Mar     Sales   400-20  East    Budget  400.00
Mar     Sales   400-30  East    Budget  300.00
For an additional example of formatting for data export, see Sample
12 on the Examples of Report Scripts page in the Report Writer
Commands section of the Technical Reference.
Exporting Data
To export data from a database, use any of the following methods:
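A minimal MaxL DML query, sketched here for reference, places an empty set on the column axis:
SELECT
{}
ON COLUMNS
FROM Sample.Basic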
SELECT in line 1 is the keyword that begins the main body of all MaxL DML
statements.
The curly braces {} in line 2 are a placeholder for a set. In the above query, the set is
empty, but the curly braces remain as a placeholder.
Exercise 1: Creating a Query Template
In the following query, {([100-10], [Actual])} is also a set consisting of one tuple,
though in this case, the tuple is not a single member name. Rather, ([100-10], [Actual])
represents a tuple consisting of members from two different dimensions, Product and
Scenario.
SELECT
{([100-10], [Actual])}
ON COLUMNS
FROM Sample.Basic
When a set has more than one tuple, the following rule applies: In each tuple of the set,
members must represent the same dimensions as do the members of other tuples of the
set. Additionally, the dimensions must be represented in the same order. In other words,
each tuple of the set must have the same dimensionality.
For example, the following set consists of two tuples of the same dimensionality.
{(West, Feb), (East, Mar)}
The following set breaks the dimensionality rule because Feb and Sales are from different
dimensions.
{(West, Feb), (East, Sales)}
The following set breaks the dimensionality rule because although the two tuples contain
the same dimensions, the order of dimensions is reversed in the second tuple.
{(West, Feb), (Mar, East)}
A set can also be a collection of sets, and it can also be empty (containing no tuples).
A set must be enclosed in curly brackets {} except in some cases where the set is
represented by a MaxL DML function which returns a set.
100-10 100-20
Qtr1 5096 1359
Qtr2 5892 1534
Qtr3 6583 1528
Qtr4 5206 1287
Exercise 4: Querying Multiple Dimensions on a Single Axis
100-10 100-20
East East
Qtr1 Profit 2461 212
Qtr2 Profit 2490 303
Qtr3 Profit 3298 312
Qtr4 Profit 2430 287
Cube Specification
A cube specification is the part of the MaxL DML query that determines which database
is being queried. The cube specification fits into a DML query as follows:
SELECT <axis> [, <axis>...]
FROM <database>
The <database> section follows the FROM keyword and should consist of delimited or
non delimited identifiers that specify an application name and a database name.
The first identifier should be an application name and the second one should be a
database name. For example, all of the following are valid cube specifications:
FROM Sample.Basic
FROM [Sample.Basic]
FROM [Sample].[Basic]
FROM 'Sample'.'Basic'
The MemberRange function returns the range of members between two specified
members. Its syntax is as follows:
MemberRange (member1, member2)
where the first argument you provide is the member that begins the range, and the second
argument is the member that ends the range.
Note: An alternate syntax for MemberRange is to use a colon between the two members,
instead of using the function name: member1 : member2.
The CrossJoin function takes two sets from different dimensions as input and creates a set
that is a cross product of the two input sets. This is useful for creating symmetric reports.
When using CrossJoin, the order of arguments has an effect on the order of tuples in the
output.
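For example, a minimal sketch against Sample Basic (the member choices are illustrative):
SELECT
CrossJoin ({[East], [West]}, {[Qtr1], [Qtr2]})
ON COLUMNS
FROM Sample.Basic
Reversing the argument order would group the output tuples by quarter first rather than by market.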
Exercise 7: Using the Children Function
The Children function returns a set of all child members of the given member. Its syntax
is as follows:
Children (member)
Note: An alternate syntax for Children is to use it like an operator on the input member,
as follows: member.Children. We will use the operator syntax in this exercise.
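For example, a sketch using the operator syntax (the dimension choice is illustrative):
SELECT
[Product].Children
ON COLUMNS
FROM Sample.Basic
Run against Sample Basic, this returns the product families 100, 200, 300, 400, and Diet on the column axis.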
where the layer argument you provide indicates the generation or level of members you
want returned.
Note: An alternate syntax for Members is layer.Members.
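For example, a sketch that selects all level 0 members of Product, using the same layer syntax as the property queries later in this chapter:
SELECT
[Product].Levels(0).Members
ON COLUMNS
FROM Sample.Basic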
The slicer axis is used to set the context of the query, and is usually the default context
for all the other axes.
For example, if you want a query to select only Actual Sales in the Sample Basic
database, excluding budgeted sales, the WHERE clause might look like the following:
WHERE ([Actual], [Sales])
Because (Actual, Sales) is specified in the slicer axis, it is not necessary to include them
in the ON AXIS(n) set specifications.
Exercise 9: Limiting the Results with a Slicer Axis
Relationship Function   Description

Children                Returns the children of the input member.
Siblings                Returns the siblings of the input member.
Descendants             Returns the descendants of a member, with varying options.
The following functions are also relationship functions, but they return a single member
rather than a set:
Relationship Function   Description

Ancestor                Returns an ancestor at the specified layer.
Cousin                  Returns a child member at the same position as a member
                        from another ancestor.
Parent                  Returns the parent of the input member.
FirstChild              Returns the first child of the input member.
LastChild               Returns the last child of the input member.
FirstSibling            Returns the first child of the input member's parent.
LastSibling             Returns the last child of the input member's parent.
For examples using relationship functions, see the MaxL DML section of the Technical
Reference.
Pure Set Function   Description

CrossJoin           Returns a cross-section of two sets from different dimensions.
Distinct            Deletes duplicate tuples from a set.
Except              Returns a subset containing the differences between two sets.
Generate            An iterative function. For each tuple in set1, returns set2.
Head                Returns the first n members or tuples present in a set.
Intersect           Returns the intersection of two input sets.
Subset              Returns a subset from a set, in which the subset is a
                    numerically specified range of tuples.
Tail                Returns the last n members or tuples present in a set.
Union               Returns the union of two input sets.
Start with a query frame that calls the Intersect function:
SELECT
Intersect (
)
ON COLUMNS
FROM Sample.Basic
• Add two comma-separated pairs of curly braces to use as placeholders for the two
set arguments you will provide to the Intersect function:
SELECT
Intersect (
{ },
{ }
)
ON COLUMNS
FROM Sample.Basic
• Specify children of East as the first set argument.
SELECT
Intersect (
{ [East].children },
{ }
)
ON COLUMNS
FROM Sample.Basic
• For the second set argument, specify all members of the Market dimension that
have a UDA of "Major Market."
Note: To learn more about how the UDA function works, see the Technical
Reference.
SELECT
Intersect (
{ [East].children },
{ UDA([Market], "Major Market") }
)
ON COLUMNS
FROM Sample.Basic
• Paste the query into the MaxL Shell and run it, as described in Exercise 2:
Running Your First Query.
The results will be all children of East that have a UDA of "Major Market":
You can use the Union function to combine two sets into one set.
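For example, a sketch that combines the children of East and West into a single column set (the member choices are illustrative):
SELECT
Union ({[East].Children}, {[West].Children})
ON COLUMNS
FROM Sample.Basic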
For more examples using pure set-operative functions, see the Technical Reference.
Calculated Members
A calculated member is a hypothetical member existing for the duration of the query
execution. Calculated members enable you to perform complex analysis without the
necessity of adding new members to the database outline. Essentially, calculated
members are a storage place for calculation results performed on real members.
You can give a calculated member any name you want, with the following guidelines:
• You must associate the calculated member with a dimension; for example, to
associate the member MyCalc with the Measures dimension, you would name it
[Measures].[MyCalc].
• Do not use real member names to name calculated members; for example, do not
name a calculated member [Measures].[Sales], because Sales already exists in
the Measures dimension.
Exercise 12: Creating a Calculated Member
This exercise will include the Max function, a common function for calculations. The
Max function returns the maximum of values found in the tuples of a set. Its syntax is as
follows:
Max (set, numeric_value)
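A minimal sketch of the kind of query the exercise builds (the calculated member name [Measures].[Max Qtr Sales] is hypothetical):
WITH
MEMBER [Measures].[Max Qtr Sales] AS
'Max ([Year].Children, [Measures].[Sales])'
SELECT
{[Measures].[Max Qtr Sales]} ON COLUMNS,
{[Product].Children} ON ROWS
FROM Sample.Basic
For each product family, the calculated member returns the largest quarterly Sales value.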
Named Sets
A named set is a set specification just like those you would define in the SELECT axes,
except you define the sets in the WITH section of the query, and associate them with a
name. This is useful because you can reference the set by name when building the
SELECT section of the query.
For example, a named set called Best5Prods identifies a set of the five top-selling
products in December:
WITH
SET [Best5Prods] AS
'Topcount (
[Product].members,
5,
([Measures].[Sales], [Scenario].[Actual],
[Year].[Dec])
)'
SELECT [Best5Prods] ON AXIS(0),
{[Year].[Dec]} ON AXIS(1)
FROM Sample.Basic
Function    Description

Filter      Returns the subset of tuples in set for which the value of the search
            condition is TRUE.

IIF         Performs a conditional test, and returns an appropriate numeric
            expression or set depending on whether the test evaluates to TRUE
            or FALSE.

Case        Performs conditional tests and returns the results you specify.

Generate    An iterative function. For each tuple in set1, returns set2.
The Filter function in MaxL DML is comparable to the RESTRICT command in Report
Writer.
For more examples of Filter and other iterative functions, see the Technical Reference.
Including the optional keywords NON EMPTY before the set specification in an axis
causes suppression of slices in that axis that would contain entirely #MISSING values.
For any given tuple on an axis (such as (Qtr1, Actual)), a slice consists of the cells
arising from combining this tuple with all tuples of all other axes. If all of these cell
values are #MISSING, the NON EMPTY keyword causes the tuple to be eliminated.
For example, if even one value in a row is not empty, the entire row is returned. Including
NON EMPTY at the beginning of the row axis specification would eliminate the
following row slice from the set returned by a query:
Qtr1
Actual #Missing #Missing #Missing #Missing #Missing
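For example, a sketch that suppresses entirely empty rows (the axis layout is illustrative):
SELECT
{[Scenario].[Actual]} ON COLUMNS,
NON EMPTY {[Product].Children} ON ROWS
FROM Sample.Basic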
When querying for member properties using the DIMENSION PROPERTIES section of
an axis, a property can be identified by the dimension name and the name of the property,
or just by using the property name itself. When a property name is used by itself, that
property information is returned for all members from all dimensions on that axis to
which that property applies. In the following query, the MEMBER_ALIAS property is
evaluated on the row axis for both the Year and Product dimensions.
SELECT
[Market].Members
DIMENSION PROPERTIES [Market].[GEN_NUMBER] on columns,
CrossJoin([Product].Children, Year.Children)
DIMENSION PROPERTIES [MEMBER_ALIAS] on rows
from Sample.Basic
In the second approach, properties can be used inside value expressions in a MaxL DML
query. For example, you can filter a set based on a value expression that uses properties
of members in the input set.
The following query returns all caffeinated products that are packaged in cans.
SELECT
Filter([Product].levels(0).members,
[Product].CurrentMember.Caffeinated and
[Product].CurrentMember.[Pkg Type] = "Can")
Dimension Properties
[Caffeinated], [Pkg Type] on columns
FROM Sample.Basic
The following query calculates the value [BudgetedExpenses] based on whether the
current Market is a major market, using the UDA [Major Market].
WITH
MEMBER [Measures].[BudgetedExpenses] AS
'IIF([Market].CurrentMember.[Major Market],
[Marketing] * 1.2, [Marketing])'
SELECT
{[Measures].[BudgetedExpenses]} ON COLUMNS,
[Market].Members ON ROWS
WHERE ([Budget])
FROM Sample.Basic
The Value Type of Properties
The value of a MaxL DML property can be a numeric, Boolean, or string type.
MEMBER_NAME and MEMBER_ALIAS properties return string values.
LEVEL_NUMBER and GEN_NUMBER properties return numeric values.
The attribute properties return numeric, Boolean, or string values based on the attribute
dimension type. For example, in Sample Basic, the [Ounces] attribute property is a
numeric property. The [Pkg Type] attribute property is a string property. The
[Caffeinated] attribute property is a Boolean property.
Analytic Services allows attribute dimensions with date types. The date type properties
are treated as numeric properties in MaxL DML. When comparing these property values
with dates, you need to use the TODATE function to convert date strings to numeric
before comparison.
The following query returns all Product dimension members that were introduced on
03/25/1996. Since the property [Intro Date] is a date type, the TODATE function
must be used to convert the date string "03-25-1996" to a number before comparing it.
SELECT
Filter ([Product].Members,
[Product].CurrentMember.[Intro Date] =
TODATE("mm-dd-yyyy","03-25-1996"))ON COLUMNS
FROM Sample.Basic
When a property is used in a value expression, you must use it appropriately based on its
value type: string, numeric, or Boolean.
You can also query attribute dimensions with numeric ranges.
The following query retrieves Sales data for Small, Medium and Large population ranges.
SELECT
{Sales} ON COLUMNS,
{Small, Medium, Large} ON ROWS
FROM Sample.Basic
When attributes are used as properties in a value expression, you can use range members
to check whether a member's property value falls within a given range, using the IN
operator.
For example, the following query returns all Market dimension members with the
population range in Medium:
SELECT
Filter(
Market.Members, Market.CurrentMember.Population
IN "Medium"
)
ON AXIS(0)
FROM Sample.Basic
None of the members in the Year dimension have aliases defined for them; therefore, the
query returns NULL values for the MEMBER_ALIAS property for members in the Year
dimension.
The attribute properties are defined for members of a specific dimension and a specific
level in that dimension. In the Sample Basic database, the [Ounces] property is defined
only for level-0 members of the Product dimension.
Therefore, if you query for the [Ounces] property of a member from the Market
dimension, as shown in the following query, you will get a syntax error:
SELECT
Filter([Market].members,
[Market].CurrentMember.[Ounces] = 32) ON COLUMNS
FROM Sample.Basic
Additionally, if you query for the [Ounces] property of a non level-0 member of the
dimension, you will get a NULL value.
When using property values in value expressions, you can use the function IsValid() to
check for NULL values. The following query returns all Product dimension members
with an [Ounces] property value of 12, after eliminating members with NULL values.
SELECT
Filter([Product].Members,
IsValid([Product].CurrentMember.[Ounces]) AND
[Product].CurrentMember.[Ounces] = 12)
ON COLUMNS
FROM Sample.Basic
Understanding Security
and Permissions
The Analytic Services security system addresses a wide variety of
database security needs with a multilayered approach that enables you
to develop the best plan for your environment. Various levels of
permission can be granted to users and groups or defined at the
system, application, or database scope. You can apply security in the
following ways:
• Define database permissions that users and groups can have for
particular members, down to the individual data value (cell). To
learn more about how and why to use filters, see Controlling
Access to Database Cells.
Table 28 describes all security permissions and the tasks that can be
performed with those permissions.
• Creating Users
• Creating Groups
If you are using Administration Services, you also need to create users
on the Administration Server. For more information, see About
Administration Services Users.
Creating Users
To create a user means to define the user name, password, and
permission. You can also specify group membership for the user, and
you can specify that the user is required to change the password at
the next login attempt, or that the user name is disabled, preventing
the user from logging on.
User names can contain only characters defined within the code page
referenced by the ESSLANG variable and they cannot contain the
backslash (\) character. User names must begin with a letter or a
number.
For example, to create a user named admin and grant that user
Supervisor permissions, use the following MaxL statements:
create user admin identified by 'password';
grant supervisor to admin;
Creating Groups
A group is a collection of users who share the same minimum access
permissions. Placing users in groups can save you the time of
assigning identical permissions to users again and again.
When you create a new user, you can assign the user to a group.
Similarly, when you create a new group, you can assign users to the
group. You must define a password for each user; there are no
passwords for groups.
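For example, a minimal MaxL sketch (the group name managers is an assumption):
create group managers;
alter user admin add to group managers;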
Granting Permissions to
Users and Groups
You can define security permissions for individual users and groups.
Groups are collections of users that share the same minimum
permissions. Users inherit all permissions of the group and can
additionally have access to permissions that exceed those of the
group.
• Supervisor.
This type of user or group has full access to all applications,
databases, and related files on the server.
• Create/Delete Applications.
This type of user or group can create and delete applications and
control permissions and resources applicable to those
applications or databases they created.
Users with Create/Delete Applications permission cannot create
or delete users, but they can manage application-level
permission for those applications that they have created. For a
comprehensive discussion of application-level permission, see
Managing Global Security for Applications and Databases.
For instructions about creating users and groups, see Creating Users
and Creating Groups.
You can grant or modify user and group application and database
permissions from an edit-user standpoint or from an application or
database security perspective. The results are the same.
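For example, a minimal MaxL sketch (the user names and objects are assumptions):
grant designer on application Sample to Sue;
grant read on database Sample.Basic to Bill;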
Editing Groups
To edit a group means to modify the security profile established when
the group was created.
You can also create new groups by copying the security profile of an
existing group. The new group is assigned the same group type, user
membership, and application access as the original group.
You can copy users and groups on the same Analytic Server or from
one Analytic Server to another, according to your permissions. You can
also migrate users and groups across servers along with an
application. For more information, see "Copying Users" in Essbase XTD
Administration Services Online Help.
Users and groups with lower than the minimum permissions inherit at
least the minimum permissions for any applications or databases.
A session is the time between login and logout for a user connected to
Analytic Server at the system, application, or database scope. A user
can have more than one session open at any given time. For example,
a user may be logged on to different databases. If you have the
appropriate permissions, you can log off sessions based on any criteria
you choose; for example, an administrator can log off a user from all
databases or from a particular database.
Only Supervisors can view users holding locks and remove their locks.
To view or remove locks, use either of the following methods:
Understanding the
essbase.sec Security
File and Backup
All information about users, groups, passwords, permissions, filters,
applications, databases, and their corresponding directories is stored in
the essbase.sec file in the ARBORPATH\bin directory. Each time you
successfully start the Analytic Server, a backup copy of the security file
is created as essbase.bak. You can also update the security backup
file more often by using one of the following methods:
Access Level   Description

None           No data can be retrieved or updated for the specified member list.

Read           Data can be retrieved but not updated for the specified member list.

Write          Data can be retrieved and updated for the specified member list.

MetaRead       Metadata (dimension and member names) can be retrieved and
               updated for the corresponding member specification.
Note:
The MetaRead access level overrides all other access levels. If additional filters for data
are defined, they are enforced within any defined MetaRead filters.
If you have assigned a MetaRead filter on a substitution variable and then try to retrieve
the substitution variable, an unknown member error occurs, but the value of the
substitution variable is displayed. This is expected behavior.
Metadata security cannot be completely turned off in partitions. Therefore, do not set
metadata security at the source database; otherwise, incorrect data may result at the target
partition.
When drilling up or retrieving on a member that has metadata security turned on and has
shared members in the children, an unknown member error occurs because the original
members of the shared members have been filtered. To avoid getting this error, be sure to
give the original members of the shared members metadata security access.
Any cells that are not specified in the filter definition inherit the database access level.
Filters can, however, add or remove access assigned at the database level, because the
filter definition, being more data-specific, indicates a greater level of detail than the more
general database access level.
Note: Data values not covered by filter definitions default first to the access levels
defined for users and second to the global database access levels. For a detailed
discussion of user access levels, see Granting Permissions to Users and Groups. For a
detailed discussion of global access levels, see Managing Global Security for
Applications and Databases.
Calculation access is controlled by minimum global permissions or by permissions
granted to users and groups. Users who have calculate access to the database are not
blocked by filters; they can affect all data elements that the execution of their calculations
would update.
Creating Filters
You can create a filter for each set of access restrictions you need to place on database
values. There is no need to create separate filters for users with the same access needs;
once you have created a filter, you can assign it to multiple users or groups of users.
However, only one filter per database can be assigned to a user or group.
Note: If you use a calculation function that returns a set of members, such as children or
descendants, and it evaluates to an empty set, the security filter is not created. An error is
written to the application log stating that the region definition evaluated to an empty set.
Before creating a filter, perform the following actions:
• Connect to the server and select the database associated with the filter.
• Check the naming rules for filters in Limits.
All data for Sales is blocked from view, as well as all data for January, inside and outside
of the Sales member. Data for COGS (Cost of Goods Sold), a sibling of Sales and a child
of Margin, is available, with the exception of COGS for January.
Filtering Member Combinations
To filter data for member combinations, define the access for each member combination
using a single row in Filter Editor. A filter definition using one row and a comma is
treated as an AND relationship.
For example, assume that user RChinn is assigned this filter:
Figure 213: Filter Blocking Access to Sales for Jan
The filter blocks only the intersection of the members Sales and Jan in the Sample Basic
database.
The next time user RChinn connects to Sample Basic, she has no access to the data value
at the intersection of members Sales and Jan. Her spreadsheet view of the profit margin
for Qtr1 looks like this view:
Figure 214: Results of Filter Blocking Access to Sales, Jan
Sales data for January is blocked from view. However, Sales data for other months is
available, and non-Sales data for January is available.
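A minimal MaxL sketch of such a filter (the filter name filt1 is an assumption):
create filter Sample.Basic.filt1 no_access on 'Sales, Jan';
grant filter Sample.Basic.filt1 to RChinn;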
Managing Filters
You can perform the following actions on filters:
• Viewing Filters
• Editing Filters
• Copying Filters
• Renaming Filters
• Deleting Filters
Viewing Filters
Editing Filters
Copying Filters
You can copy filters to applications and databases on any Analytic Server, according to
your permissions. You can also copy filters across servers as part of application
migration.
To copy an existing filter, use any of the
following methods:
Renaming Filters
Deleting Filters
Assigning Filters
Once you have defined filters, you can assign them to users or groups. Assigning filters to
users and groups lets you manage multiple users who require the same filter settings.
Modifications to the definition of a filter are automatically inherited by users of that filter.
Filters do not affect users who have the role of Supervisor. Only one filter per database
can be assigned to a user or group.
The third specification defines security at a greater level of detail than the other two.
Therefore, Read access is granted to all Actual data for members in the New York branch.
Because Write access is a higher access level than None, the remaining data values in
Actual are granted Write access.
All other data points, such as Budget, are accessible according to the minimum database
permissions.
Note: If you have Write access, you also have Read access.
Changes to members in the database outline are not reflected automatically in filters. You
must manually update member references that change.
In addition, Mary uses the filter object RED (for the database FINPLAN). The filter has
two filter rows:
Figure 217: RED Filter for Database FINPLAN
The Group Marketing also uses a filter object BLUE (for the database FINPLAN). The
filter has two filter rows:
Figure 218: BLUE Filter for Database FINPLAN
Mary's effective rights from the overlapping filters, and the permissions assigned to her
and her group, are as follows:
Security Examples
This chapter describes some sample security problems and solutions, which are based on
the Sample application. These examples use security procedures described in Managing
Security for Users and Applications.
• Security Problem 1
• Security Problem 2
• Security Problem 3
• Security Problem 4
• Security Problem 5
Security Problem 1
Three employees need to use Analytic Services: Sue Smith, Bill Brown, and Jane Jones.
Each requires update access to all databases in the Sample application.
Solution:
Because the users need update access to only one application, they do not need to have
Supervisor permission. Because the users do not need to create or delete applications,
users, or groups, they do not need to be defined as special types of users with
Create/Delete permission. All these users need is Application Designer permission for the
Sample application.
The supervisor should perform the following tasks:
• Set up the users with Administration Services.
For more information, see Essbase XTD Administration Services Installation
Guide.
• Create Sue, Bill, and Jane as ordinary users with Application Designer
permission.
If Sue, Bill, and Jane are created without Application Designer permission, assign
Application Designer permission to the three users.
For more information, see Creating Users or Granting Designer Permissions to
Users and Groups.
Security Problem 2
Three employees need to use Analytic Services: Sue Smith, Bill Brown, and Jane Jones.
Sue and Bill require full access to all databases in the Sample application. Jane requires
full calculate access to all databases in the Sample application, but she does not need to
define or maintain database definitions.
Solution:
The supervisor should perform the following tasks:
1. Set up the users with Administration Services.
See the Essbase XTD Administration Services Installation Guide.
2. Create Sue and Bill as ordinary users with Application Designer permission.
If Sue and Bill are created without Application Designer permission, assign
Application Designer permission to the two users.
For more information, see Creating Users or Granting Designer Permissions to
Users and Groups.
3. Define global Calculate access for the Sample application as the Minimum
Database Access setting to give all additional users Calculate access to all
databases for the application.
See "Setting Minimum Permissions for Applications" in the Essbase XTD
Administration Services Online Help.
4. Create Jane as an ordinary user with no additional permissions. She inherits the
Calculate access from the application global setting.
For more information, see Creating Users.
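In MaxL terms, Calculate access corresponds to the execute permission. A minimal sketch of the global setting and Jane's account follows; the statement forms are assumptions to verify against the Technical Reference, and the password is a placeholder:

alter application sample set minimum permission execute;
create user 'Jane Jones' identified by 'password';
/* Jane inherits Calculate access from the application minimum setting. */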
Security Problem 3
Three employees need to use Analytic Services: Sue Smith, Bill Brown, and Jane Jones.
Sue and Bill require full access to all databases in the Sample application. Jane requires
full update and calculate access to all databases within the Sample application, but she
will not define or maintain the database definitions. Additional users will be added, all of
whom will require Read access to all databases.
Solution:
Because the current users have different needs for application and database access, define
their user permissions individually. Then, to save time assigning individual Read
permissions for future users, make Read the global setting for the application. (It does not
matter in what order you assign the user permissions and the global access.)
The supervisor should perform the following tasks:
1. Set up the users with Administration Services.
For more information, see the Essbase XTD Administration Services Installation
Guide.
2. Create or edit Sue and Bill as ordinary users with Application Designer
permissions.
For more information, see Creating Users and Granting Designer Permissions to
Users and Groups.
3. Create Jane as an ordinary user, and give her Calculate permission for the Sample
application.
For more information, see Creating Users and Granting Application and Database
Access to Users and Groups.
4. Define global Read access for the Sample application as the Minimum Database
Access setting to give all additional users Read access to all databases in the
Sample application.
See "Setting Minimum Permissions for Databases" in the Essbase XTD
Administration Services Online Help.
Security Problem 4
Three employees need to use Analytic Services: Sue Smith, Bill Brown, and Jane Jones.
Sue requires full access only to the Sample application; Jane requires calculate access to
all members of the Basic database; Bill requires Read access to all members. No other
users should have access to the databases.
Furthermore, Jane and Bill need to run report scripts that are defined by Sue.
Solution:
Because the different users have different needs for application and database access,
define the global access setting as None, and assign the user permissions individually.
The supervisor should perform the following tasks:
• Set up the users with Administration Services. (Because Jane and Bill need to run
the report scripts, they must use Administration Services.)
For more information, see the Essbase XTD Administration Services Installation
Guide.
• Create Sue as an ordinary user, but grant her Application Designer permission for
the Sample application.
For more information, see Creating Users and Granting Designer Permissions to
Users and Groups.
• Create Jane as an ordinary user, and give her Calculate permission for the Sample
application.
For more information, see Creating Users and Granting Application and Database
Access to Users and Groups.
• Create Bill as an ordinary user and give him Read permission on the Sample
application.
For more information, see Creating Users and Granting Application and Database
Access to Users and Groups.
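The same permissions can be sketched in MaxL, assuming that no_access corresponds to the global setting of None and execute to Calculate permission (verify the grammar in the Technical Reference):

alter application sample set minimum permission no_access;
grant designer on application sample to 'Sue Smith';
grant execute on application sample to 'Jane Jones';
grant read on application sample to 'Bill Brown';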
Security Problem 5
The Supervisor, Sue Smith, needs to perform some maintenance on the Sample
application. She must make changes to the database outline and reload actual data. While
she changes the application, Sue must prevent other users from connecting to the
application.
Solution:
Sue should perform the following tasks:
• Disable the Allow Commands setting to prevent other users from connecting to
the application, and also prevent connected users from performing any further
operations.
For more information, see "Clearing Applications of User Activity" in the Essbase
XTD Administration Services Online Help.
• Check to see if any users have active locks.
If any users have active locks, Sue's calculation or data load command might halt,
waiting for access to the locked records. Sue can allow the users to complete their
updates or clear their locks.
For more information, see "Viewing Data Locks" in the Essbase XTD
Administration Services Online Help.
• After confirming that no users have active locks, proceed to perform maintenance
on the application.
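In MaxL, the lockout can be sketched as follows; disable commands corresponds to turning off the Allow Commands setting:

alter application sample disable commands;
/* Perform the outline changes and the data load, then restore access. */
alter application sample enable commands;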
About Unicode
Sharing data across national and language boundaries is a challenge for multinational
businesses. Traditionally, each computer stores and renders text based on its locale
specification. A locale identifies the local language and cultural conventions such as the
formatting of currency and dates, sort order of the data, and the character set encoding to
be used on the computer. The encoding of a character set refers to the specific set of bit
combinations used to store the character text as data, as defined by a code page or an
encoding format. In Analytic Services, code pages map characters to bit combinations for
non-Unicode encodings.
Because different encodings can map the same bit combination to different characters, a
file created on one computer can be misinterpreted by another computer that has a
different locale.
The Unicode Standard was developed to enable computers with different locales to share
character data. Unicode provides encoding forms with thousands of bit combinations,
enough to support the character sets of multiple languages simultaneously. By combining
all character mappings into a single encoding form, Unicode enables users to correctly
view character data created on computers with different locale settings.
Analytic Services conforms to version 2.1 of the Unicode Standard and uses the popular
UTF-8 encoding form within the Unicode Standard.
For additional information about the Unicode Standard, see www.unicode.org.
User-defined character sets (UDC) are not supported, nor is the Chinese National
Standard GB 18030-2000. Unicode-mode applications do not support the hybrid analysis,
query logging, triggers, and data mining features, and they do not support the MaxL
Data Manipulation Language (MaxL DML). SQL Interface does not work with
Unicode-mode applications.
Table 34: Compatibility Between Different Versions of Clients and Analytic Server

Not Unicode-enabled client (for example, Application Manager and the pre-7.0 Spreadsheet Add-in):
• Not Unicode-enabled Analytic Server: Yes
• Unicode-enabled Analytic Server, non-Unicode-mode application: Yes
• Unicode-enabled Analytic Server, Unicode-mode application: No

Non-Unicode-mode client program on a Unicode-enabled client (for example, the MaxL Shell and the Excel Spreadsheet Add-in packaged with Unicode-enabled Analytic Services):
• Not Unicode-enabled Analytic Server: Yes, except for clients using the grid API
• Unicode-enabled Analytic Server, non-Unicode-mode application: Yes
• Unicode-enabled Analytic Server, Unicode-mode application: Yes, but no changes can be made to outlines, and outline synchronization is not supported

Unicode-mode client program on a Unicode-enabled client (for example, Administration Services and Spreadsheet Services):
• Not Unicode-enabled Analytic Server: Yes, except for clients using the grid API; synchronization of outlines is not supported for applications encoded to locales with multi-byte characters
• Unicode-enabled Analytic Server, non-Unicode-mode application: Yes; synchronization of outlines is not supported for applications encoded to locales with multi-byte characters
• Unicode-enabled Analytic Server, Unicode-mode application: Yes
Unicode-Enabled Administration Tools
Hyperion provides Administration Services and MaxL to administer Unicode-mode
applications. In addition to the normal Analytic Services administration activities, the
main administration activities include:
• Changing the Unicode-related mode of the server to enable or disable creation of
Unicode-mode applications
• Creating Unicode-mode applications
• Migrating non-Unicode-mode applications to Unicode mode
• Viewing the Unicode-related status of servers and applications
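Hedged MaxL sketches of these activities follow; the unicode_mode keywords and statement forms shown are assumptions to check against the MaxL documentation for your release:

alter system enable unicode_mode;                /* permit creation of Unicode-mode applications */
create application Sample_U using unicode_mode;  /* create an application in Unicode mode */
alter application sample set type unicode_mode;  /* migrate an existing application to Unicode mode */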
Administration Services is a Unicode-mode client. You can use Administration Services
with both Unicode and non-Unicode-mode applications. See Working With Unicode-
Mode Applications for information about Unicode-related administration tasks.
To administer non-Unicode-mode applications, you can use Application Manager from
previous Analytic Services releases that were not Unicode enabled.
Unicode-Enabled C API
Without recompilation, existing custom-written client programs are not Unicode-enabled.
These programs use short strings and short buffers. You can continue to use these
programs with non-Unicode-mode applications.
Depending on how they are written, existing custom-written client programs can be
recompiled with a Unicode-enabled release of Analytic Services to provide restricted
access to Unicode-mode applications. Simply recompiled, these programs work
with long buffers but short strings.
For complete access to Unicode-mode and non-Unicode-mode applications, existing
custom-written applications need to be modified using the new Analytic Services API
functions for Unicode. Rewritten and compiled clients work with long buffers and long
strings for full Unicode support. For information about updating custom-written client
programs, see the API Reference.
Spreadsheet Retrieval
The Analytic Services Spreadsheet Add-in for Excel supports viewing data in both
Unicode and non-Unicode-mode applications. Older versions of the spreadsheet add-ins
for Excel and Lotus 1-2-3 can view data only in non-Unicode-mode applications. The
older versions of these add-ins are available only through older releases of Analytic
Services that are not Unicode-enabled.
You can use Spreadsheet Services to view data in both Unicode-mode applications and
non-Unicode-mode applications. To run Spreadsheet Services you must also run
Deployment Services. See the installation guides for each of these products for
preparation and installation information.
Sample_U Basic
To demonstrate Unicode-mode applications, the sample applications include a Unicode-
mode application and database: Sample_U Basic. Member names in Sample_U Basic are
in English.
Sample_U Basic includes four non-English alias tables and their import files:
nameschn.alt (Chinese), namesger.alt (German), namesjpn.alt (Japanese), and
namesrsn.alt (Russian).
When deciding on using Unicode-mode applications, you should also consider the
following points:
• Using non-Unicode text files with Unicode-mode applications requires an
understanding of locales and care in managing them. To prevent errors that
could cause database corruption, using UTF-8-encoded files is recommended. For
details, see Managing File Encoding.
• To work with Unicode-mode applications, custom client applications that were
written to support non-Unicode-mode applications must be built to use the longer
string lengths used by Unicode-mode applications. This may be a simple rebuild
or may involve reprogramming, depending on the design of the applications.
Also, depending on how they are coded, the new client applications may require
more memory. For details, see the API Reference.
Several software manufacturers provide UTF-8 fonts and Unicode editors.
Note: You can work with Unicode-mode applications without Analytic Server being set to
Unicode mode.
When Analytic Services performs a dimension build or data load, the rules file and data
file can have different encodings; for example, the text in a rules file can be in UTF-8
encoding while the data source can be encoded to a non-Unicode computer locale.
Note: When you use Administration Services Console to create script files or data
sources, the appropriate encoding indicator is automatically included in the file. When
you use any tool other than Administration Services Console to create Unicode-encoded
text files, you must ensure that the UTF-8 signature is included. Non-Unicode-encoded
text files require a locale indicator only if their encoding differs from the locale of
Analytic Server.
The following Analytic Services text system files are encoded to the locale specified by
the ESSLANG value defined for Analytic Server:
• The configuration file (essbase.cfg)
• ESSCMD scripts
Encoding Indicators
To interpret text such as member names properly, Analytic Services must know how the
text is encoded. Many files contain an encoding indicator, but you may occasionally be
prompted to specify the encoding of a file; for example, when you create a file and
store it in a location other than on Analytic Server, or when you read a file created by a
previous release of Analytic Server. The type of encoding indicator depends on the type
of file:
• Files that are internal to applications and databases and that users cannot directly
edit are primarily binary files and do not contain any type of encoding indicator.
Character text in these files is encoded to match the application, which is either a
Unicode-mode or a non-Unicode-mode application:
  - Text in Unicode-mode application files is UTF-8 encoded.
  - Text in non-Unicode-mode application files is encoded to the locale
    specified in the ESSLANG of the Analytic Server where the application
    was created.
• Binary files that you can edit include outline files and rules files. Analytic
Services keeps track internally, in outline files and rules files, of whether the
character text is in UTF-8 encoding. If the text is not UTF-8, Analytic Services
uses an internal locale indicator to identify the locale used for character text
encoding.
• The following text files that you can edit use a UTF-8 signature or a locale header
to indicate their encoding:
  - Calculation scripts
  - Report scripts
  - MaxL scripts
  - Data sources for dimension builds and data loads
  - Alias table import files
Note: Essbase Administration Services requires alias table import files to
be UTF-8 encoded.
The UTF-8 signature is a mark at the beginning of a text file. The UTF-8 signature,
visible in some third-party editors, indicates that the file is encoded in UTF-8. Many
UTF-8 text editors can create the UTF-8 signature. You can also use the Analytic Services
Unicode File Utility (ESSUTF8) to insert the UTF-8 signature into a file. For more
information, see Analytic Services Unicode File Utility. When you create one of these
files using Administration Services Console, a UTF-8 signature is automatically inserted
in the file.
UTF-8-encoded text files must contain the UTF-8 signature.
The locale header record is an additional text record that identifies the encoding of the
non-Unicode-encoded text file. You can add the locale header at the time you create the
file or you can use the Analytic Services Unicode File Utility to insert the record. For
details about the locale header, see Locale Header Records.
Note: Do not combine a UTF-8 signature and locale header in the same file. If a text file
contains both types of encoding indicators, the file will be interpreted as UTF-8 encoded,
and the locale header will be read as the first data record.
Caution: Do not use non-Unicode-encoded files containing locale indicators with
Analytic Server installations that are not Unicode-enabled; that is, installed using releases
prior to Release 7.0. The Analytic Services Unicode File Utility (ESSUTF8) enables you
to remove locale indicators.
Locale Header Records
The locale header record has the following format:
//ESS_LOCALE <locale-name>
<locale-name> is a supported Global C locale, in the same format as is used for the
ESSLANG variable:
<language>_<territory>.<code page name>@<sortsequence>
Note: Analytic Services consults only the <code page name> portion of the record. The
<sortsequence> specification does not affect sort sequences in report scripts.
See Supported Locales for a list of supported Global C locales.
The following example displays a locale header for a specific Russian code page:
//ESS_LOCALE Russian_Russia.ISO-8859-5@Default
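As a further example, a calculation script encoded to the US English Latin-1 locale could begin with the locale header as its first record (the one-line script body is a minimal sketch):

//ESS_LOCALE English_UnitedStates.Latin1@Binary
CALC ALL;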
Supported Locales
Arabic_SaudiArabia.ISO-8859-6@Default
Arabic_SaudiArabia.MS1256@Default
Croatian_Croatia.ISO-8859-2@Croatian
Croatian_Croatia.MS1250@Croatian
CyrillicSerbian_Yugoslavia.ISO-8859-5@Default
CyrillicSerbian_Yugoslavia.MS1251@Default
Czech_CzechRepublic.ISO-8859-2@Czech
Czech_CzechRepublic.MS1250@Czech
Danish_Denmark.ISO-8859-15@Danish
Danish_Denmark.IBM500@Danish
Danish_Denmark.Latin1@Danish
Danish_Denmark.MS1252@Danish
Dutch_Netherlands.IBM037@Default
Dutch_Netherlands.IBM500@Default
Dutch_Netherlands.ISO-8859-15@Default
Dutch_Netherlands.Latin1@Default
Dutch_Netherlands.MS1252@Default
English_UnitedStates.IBM037@Binary
English_UnitedStates.IBM285@Binary
English_UnitedStates.IBM500@Binary
English_UnitedStates.Latin1@Binary
English_UnitedStates.MS1252@Binary
English_UnitedStates.US-ASCII@Binary
Finnish_Finland.IBM500@Finnish
Finnish_Finland.ISO-8859-15@Finnish
Finnish_Finland.Latin1@Finnish
Finnish_Finland.MS1252@Finnish
French_France.IBM297@Default
French_France.IBM500@Default
French_France.ISO-8859-15@Default
French_France.Latin1@Default
French_France.MS1252@Default
German_Germany.IBM273@Default
German_Germany.IBM500@Default
German_Germany.ISO-8859-15@Default
German_Germany.Latin1@Default
German_Germany.MS1252@Default
Greek_Greece.ISO-8859-7@Default
Greek_Greece.MS1253@Default
Hebrew_Israel.ISO-8859-8@Default
Hebrew_Israel.MS1255@Default
Hungarian_Hungary.ISO-8859-2@Hungarian
Hungarian_Hungary.MS1250@Hungarian
Italian_Italy.IBM280@Default
Italian_Italy.IBM500@Default
Italian_Italy.ISO-8859-15@Default
Italian_Italy.Latin1@Default
Italian_Italy.MS1252@Default
Japanese_Japan.IBM930@Binary
Japanese_Japan.JapanEUC@Binary
Japanese_Japan.JEF@Binary
Japanese_Japan.MS932@Binary
Japanese_Japan.Shift_JIS@Binary
Korean_Korea.MS1361@Binary
Korean_Korea.MS949@Binary
Norwegian_Norway.IBM500@Danish
Norwegian_Norway.ISO-8859-10@Danish
Norwegian_Norway.ISO-8859-15@Danish
Norwegian_Norway.ISO-8859-4@Danish
Norwegian_Norway.Latin1@Danish
Norwegian_Norway.MS1252@Danish
Polish_Poland.ISO-8859-2@Polish
Polish_Poland.MS1250@Polish
Portuguese_Portugal.IBM037@Default
Portuguese_Portugal.IBM500@Default
Portuguese_Portugal.ISO-8859-15@Default
Portuguese_Portugal.Latin1@Default
Portuguese_Portugal.MS1252@Default
Romanian_Romania.ISO-8859-2@Romanian
Romanian_Romania.MS1250@Romanian
Russian_Russia.ISO-8859-5@Default
Russian_Russia.MS1251@Default
Serbian_Yugoslavia.ISO-8859-2@Default
Serbian_Yugoslavia.MS1250@Default
SimplifiedChinese_China.IBM935@Binary
SimplifiedChinese_China.MS936@Binary
SimplifiedChinese_China.UTF-8@Binary
Slovak_Slovakia.ISO-8859-2@Slovak
Slovak_Slovakia.MS1250@Slovak
Slovenian_Slovenia.ISO-8859-10@Slovenian
Slovenian_Slovenia.ISO-8859-2@Slovenian
Slovenian_Slovenia.ISO-8859-4@Slovenian
Slovenian_Slovenia.MS1250@Slovenian
Spanish_Spain.IBM500@Spanish
Spanish_Spain.ISO-8859-15@Spanish
Spanish_Spain.Latin1@Spanish
Spanish_Spain.MS1252@Spanish
Swedish_Sweden.IBM500@Swedish
Swedish_Sweden.ISO-8859-15@Swedish
Swedish_Sweden.Latin1@Swedish
Swedish_Sweden.MS1252@Swedish
Thai_Thailand.MS874@Thai
TraditionalChinese_Taiwan.EUC-TW@Binary
TraditionalChinese_Taiwan.IBM937@Binary
TraditionalChinese_Taiwan.MS950@Binary
Turkish_Turkey.ISO-8859-3@Turkish
Turkish_Turkey.ISO-8859-9@Turkish
Turkish_Turkey.MS1254@Turkish
Ukrainian_Ukraine.ISO-8859-5@Ukrainian
Ukrainian_Ukraine.MS1251@Ukrainian
Analytic Services Unicode File Utility
Located in the ARBORPATH\bin directory, this utility program is called essutf8.exe (in
Windows) or ESSUTF8 (in UNIX). You can use the Analytic Services Unicode File
Utility program with the following files:
• Calculation scripts
• Report scripts
• MaxL scripts
• Text data sources for dimension builds and data loads
• Alias table import files
• Outline files
• Rules files
For information about this utility and its command syntax, see the Technical Reference.