XY Theory McGregor

http://www.businessballs.com/mcgregor.htm

Data Integration Challenge – Storing Timestamps

Storing timestamps that indicate when a record arrived or when its value changed is a must in a data warehouse. We tend to take this for granted, adding timestamp fields to table structures while overlooking how much storage a timestamp field occupies: in many databases, such as SQL Server and Oracle, a timestamp takes roughly twice the space of an integer, and if we keep two fields, one for the insert timestamp and one for the update timestamp, the required storage doubles again. In many instances we can avoid timestamps altogether, especially when they are used primarily to identify incremental records or are stored only for audit purposes.

   

How can we manage the data storage effectively while still leveraging the benefit of a timestamp field?

One way to manage the storage cost of timestamp fields is to introduce a process id field and a process table. The following steps apply this method to the table structures and to the ETL process.

Data Structure

  1. Consider a table named PAYMENT with two timestamp fields, INSERT_TIMESTAMP and UPDATE_TIMESTAMP, used to capture the changes for every record present in the table
  2. Create a table named PROCESS_TABLE with columns PROCESS_NAME Char(25), PROCESS_ID Integer and PROCESS_TIMESTAMP Timestamp
  3. Drop the TIMESTAMP fields from the table PAYMENT
  4. Create two integer fields in the table PAYMENT, INSERT_PROCESS_ID and UPDATE_PROCESS_ID
  5. These newly created id fields INSERT_PROCESS_ID and UPDATE_PROCESS_ID are logically linked to the table PROCESS_TABLE through its field PROCESS_ID

ETL Process

  1. Consider an ETL process called 'Payment Process' that loads data into the table PAYMENT
  2. Create a pre-process that runs before the 'payment process'. In the pre-process, insert a record with values such as ('payment process', sequence number, current timestamp) into the PROCESS_TABLE table. The PROCESS_ID in PROCESS_TABLE can be generated by a database sequence.
  3. Pass the newly generated PROCESS_ID of PROCESS_TABLE as 'current_process_id' from the pre-process step to the 'payment process' ETL process
  4. In the 'payment process', if a record is inserted into the PAYMENT table, set the current_process_id value in both columns INSERT_PROCESS_ID and UPDATE_PROCESS_ID; if a record is updated in the PAYMENT table, set the current_process_id value only in the column UPDATE_PROCESS_ID
  5. The timestamp values for records inserted or updated in the table PAYMENT can then be picked from PROCESS_TABLE by joining its PROCESS_ID with the INSERT_PROCESS_ID and UPDATE_PROCESS_ID columns of the PAYMENT table
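The whole flow above can be sketched end-to-end with SQLite. Table and column names come from the post; the AUTOINCREMENT key stands in for the database sequence, and the extra PAYMENT columns (PAYMENT_ID, AMOUNT) are illustrative:

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# PROCESS_TABLE holds one row per ETL run; its autoincrement key
# stands in for the database sequence mentioned in the post.
cur.execute("""CREATE TABLE PROCESS_TABLE (
    PROCESS_ID        INTEGER PRIMARY KEY AUTOINCREMENT,
    PROCESS_NAME      CHAR(25),
    PROCESS_TIMESTAMP TIMESTAMP)""")

# PAYMENT stores two compact integer ids instead of two timestamps.
cur.execute("""CREATE TABLE PAYMENT (
    PAYMENT_ID        INTEGER PRIMARY KEY,
    AMOUNT            REAL,
    INSERT_PROCESS_ID INTEGER,
    UPDATE_PROCESS_ID INTEGER)""")

# Pre-process: register this run and capture its process id.
cur.execute("INSERT INTO PROCESS_TABLE (PROCESS_NAME, PROCESS_TIMESTAMP) VALUES (?, ?)",
            ("payment process", datetime.now(timezone.utc).isoformat()))
current_process_id = cur.lastrowid

# Insert path: both id columns get the current process id.
cur.execute("INSERT INTO PAYMENT VALUES (1, 99.50, ?, ?)",
            (current_process_id, current_process_id))

# A later run updates the record: only UPDATE_PROCESS_ID changes.
cur.execute("INSERT INTO PROCESS_TABLE (PROCESS_NAME, PROCESS_TIMESTAMP) VALUES (?, ?)",
            ("payment process", datetime.now(timezone.utc).isoformat()))
cur.execute("UPDATE PAYMENT SET AMOUNT = 120.00, UPDATE_PROCESS_ID = ? WHERE PAYMENT_ID = 1",
            (cur.lastrowid,))

# Recover the timestamps by joining back to PROCESS_TABLE (step 5).
insert_ts, update_ts = cur.execute("""
    SELECT ins.PROCESS_TIMESTAMP, upd.PROCESS_TIMESTAMP
    FROM PAYMENT p
    JOIN PROCESS_TABLE ins ON ins.PROCESS_ID = p.INSERT_PROCESS_ID
    JOIN PROCESS_TABLE upd ON upd.PROCESS_ID = p.UPDATE_PROCESS_ID
    WHERE p.PAYMENT_ID = 1""").fetchone()
```

Every row loaded in the same run shares one PROCESS_TABLE entry, which is what makes the integer ids so much cheaper than per-row timestamps.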

Benefits

  • The fields INSERT_PROCESS_ID and UPDATE_PROCESS_ID occupy less space than the timestamp fields
  • Both columns INSERT_PROCESS_ID and UPDATE_PROCESS_ID are index friendly
  • It is easier to handle these process id fields when picking records to determine incremental changes or for audit reporting

Business Intelligence Value Curve

Every business software system has an economic life. This essentially means that a software application exists for a period of time to accomplish its intended business functionality after which it has to be replaced or re-engineered. This is a fundamental truth that has to be taken into account when a product is bought or for a system that is developed from scratch.

During its useful life, the software system goes through a maturity life cycle – I would like to call it the "Value Curve" to establish the fact that the real intention of creating the system is to provide business value. As a BI practitioner, my focus is on the "Business Intelligence Value Curve", which in my humble opinion typically goes through the following phases, as shown in the diagram.

[Diagram: Business Intelligence Value Curve]

Stage 1 – Deployment and Proliferation
The BI infrastructure is created at this stage, catering to one or two subject areas. Both the process and technology infrastructure are established, and there are tangible benefits to the business users (usually the finance team!). Seeing the initial success, more subject areas are brought into the BI landscape, which leads to the first set of problems – lack of data quality and completeness, and duplication of data across data marts / repositories.

Stage 2 – Leveraging for Enterprise Decision Making
This stage takes off by addressing the problems seen in Stage-1, and the overall enterprise data warehouse architecture starts taking shape. There is increased business value as compared to Stage-1 as the Enterprise Data Warehouse becomes a single source of truth for the enterprise. But as the data volume grows, the value is diminished by scalability issues. For example, the data loads that used to take 'x' hours to complete now need at least '2x' hours.

Stage 3 – Integrating and Sustaining
The scalability issues seen at the end of Stage-2 are alleviated and the BI landscape sees much higher levels of integration. Knowledge is built into the setup by leveraging metadata, and user adoption of the BI system is almost complete. But the emergence of a disruptive technology (for example, BI Appliances), a completely different service model for BI (Ex: Cloud Analytics) or a regulatory mandate (Ex: IFRS) may force the organization to start evaluating completely different ways of analyzing information.

Stage 4 – Reinvent
The organization, after appropriate feasibility tests and ROI calculations, reinvents its business intelligence landscape and starts constructing one that is relevant for its future.

I do acknowledge the fact that not all organizations will go through this particular lifecycle but based on my experience in architecting BI solutions, most of them do have stages of evolution similar to the one described in this blog. A good understanding of the value curve would help BI practitioners provide the right solutions to the problems encountered at different stages.

Valuing your Business Intelligence System - Part 1

Sample these statements:

  • The Dow Jones Industrial Average jumped 200 points today, a 2% increase from the previous close
  • The carbon footprint of an average individual in the world is about 4 tonnes per year, a 3% increase over last year
  • The number of unique URLs as of July 2008 on the World Wide Web is 1 trillion. The previous landmark of 1 billion was reached in 2000
  • The one-day 5% VaR (Value at Risk) for the portfolio is $1 Million, as compared to the VaR of $1.3 Million a couple of weeks back

Most of us buy into the idea of having a single number that encapsulates complex phenomena. Though the details of the underlying processes are important, the single number (and the trend) does act like a bellwether of sorts helping us quickly get a feel of the current situation.

As a BI practitioner, I feel that it is about time that we formulated a way for valuing the BI infrastructure in organizations. Imagine a scenario where the Director of BI in company X can announce thus: "The value of the BI system in this organization has grown 15% over the past 1 year to touch $50 Million" (substitute your appropriate currencies here!).

The core idea of this post is to find a way to "scientifically put a number to your data warehouse". Here are a few level setting points:

  1. Valuation of BI systems is different from computing the Return on Investment (ROI) for BI initiatives. ROI calculations are typically done using Discounted Cash Flow techniques and are used in organizations to some extent
  2. More than the absolute number, the trends are important which means that the BI system has to be valued using the same norms at different points in time. Scientific / Mathematical rigor helps in bringing the consistency aspect.

My perspective on valuation is based on "Outside-in" logic, whose fundamental premise is that the value of the BI infrastructure is completely determined by its consumption. In other words, if there are no consumers for your data warehouse, its value is zero. One simple yet powerful technique in the "Outside-in" category is RFM Analysis. RFM stands for Recency, Frequency and Monetary, and it is very popular in the direct marketing world. My 2-step hypothesis for BI system valuation using the RFM technique is:

  • Step 1: Value of BI system = Sum of the values of individual BI consumers
  • Step 2: Value of each individual consumer = Function (Recency, Frequency, Monetary parameters)

Qualitatively speaking, from the business user standpoint, one who has accessed information from the BI system more recently, has been using data more frequently, and uses that information to make decisions that are critical to the organization will be given a higher value. A calibration chart provides the specific value associated with the RFM parameters based on the categories within them. For example, for the Recency parameter, usage of information within the last 1 day can be fixed at 10 points, while access 10 days back fetches 1 point. I will explain my version of the calibration chart in detail in subsequent posts. (Please note that the conversion of points to dollar values is also an interesting, non-trivial exercise.)
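The two-step hypothesis can be sketched as a small scoring function. All calibration values and band boundaries below are hypothetical placeholders (the post only fixes the two Recency examples), and the conversion of points to dollars is left out:

```python
# Hypothetical RFM calibration for valuing BI consumers. The point
# values (10 for use within a day, 1 for use ten days back, etc.)
# and the band boundaries are illustrative, not prescriptive.
RECENCY_BANDS = [(1, 10), (10, 1)]            # (max days since last access, points)
FREQUENCY_BANDS = [(20, 10), (5, 5), (1, 1)]  # (min accesses per month, points)
MONETARY_BANDS = {"critical": 10, "tactical": 5, "informational": 1}

def recency_points(days_since_access):
    for max_days, points in RECENCY_BANDS:
        if days_since_access <= max_days:
            return points
    return 0

def frequency_points(accesses_per_month):
    for min_accesses, points in FREQUENCY_BANDS:
        if accesses_per_month >= min_accesses:
            return points
    return 0

def consumer_value(days_since_access, accesses_per_month, decision_tier):
    """Step 2: value of one consumer as a function of R, F and M."""
    return (recency_points(days_since_access)
            + frequency_points(accesses_per_month)
            + MONETARY_BANDS.get(decision_tier, 0))

def bi_system_value(consumers):
    """Step 1: value of the BI system = sum over individual consumers."""
    return sum(consumer_value(*c) for c in consumers)
```

Because the calibration is fixed up front, re-running the same scoring at different points in time gives the consistent, trend-friendly valuation the level-setting points call for.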

I am sure people acknowledge that valuing data assets is difficult, tricky at best. But then, far more difficult questions about nature and behavior have been reduced to mathematical equations - probably the day when BI practitioners can apply standardized techniques to value their BI infrastructure is not too far off.

Zachman Framework for BI Assessments

The Zachman Framework for Enterprise Architecture has become the model around which major organizations view and communicate their enterprise information infrastructure. Enterprise Architecture provides the blueprint, or architecture, for the organization's information infrastructure. More information on the Zachman Framework can be obtained at www.zifa.com.

For BI practitioners, the Zachman Framework provides a way of articulating the current state of the BI infrastructure in the organization. Ralph Kimball in his eminently readable book "The Data Warehouse Lifecycle Toolkit" illustrates how the Zachman Framework can be adapted to the Business Intelligence context.

Given below is a version of the Zachman Framework that I have used in some of my consulting engagements. This is just one way of using this framework but does illustrate the power of this model in some measure.

[Diagram: Zachman Framework adapted for BI assessments]

Some salient points with respect to the above diagram:

1) The framework answers the basic questions of "What", "How", "Who" and "Where" across 4 important dimensions – Business Requirements, Conceptual Model, Logical/Physical Model and Actual Implementation.

2) The Zachman Framework reinforces the fact that a successful enterprise system combines the ingredients of business, process, people and technology in proper measure.

3) It is typically used to assess the current state of the BI infrastructure in any organization.

4) Each cell that lies at the intersection of the rows and columns (Ex: Information Requirements of Business) has to be documented in detail as part of the assessment document.

5) Information on each cell is gathered through subjective and objective questionnaires.

6) Scoring models can be developed to provide an assessment score for each of the cells. Based on the scores, a set of recommendations can be provided to achieve the intended goals.

7) Another interesting thought is to create an As-Is Zachman framework and overlay it with a To-Be one in situations where re-engineering of a BI environment is undertaken. This will help provide a transition path from the current state to the future.
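A cell-level scoring model of the kind mentioned in point 6 can be sketched over the framework grid. The row and column names follow the adaptation described above; the scores, threshold and the sample weak cell are hypothetical, and in practice each cell's score would come from the questionnaires:

```python
# Minimal sketch of a scoring model over the Zachman-for-BI grid.
ROWS = ["Business Requirements", "Conceptual Model",
        "Logical/Physical Model", "Actual Implementation"]
COLS = ["What", "How", "Who", "Where"]

def assess(scores, threshold=3):
    """scores: {(row, col): 1..5 from questionnaires}. Returns the
    overall average and the cells falling below the threshold, which
    become candidates for recommendations."""
    avg = sum(scores.values()) / len(scores)
    weak = sorted(cell for cell, s in scores.items() if s < threshold)
    return avg, weak

# Illustrative assessment: everything healthy except one cell.
grid = {(r, c): 4 for r in ROWS for c in COLS}
grid[("Actual Implementation", "Who")] = 2   # e.g. unclear ownership
overall, weak_cells = assess(grid)
```

Running the same assessment against an As-Is grid and a To-Be grid (point 7) makes the gap between the two states explicit, cell by cell.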

Thanks for reading. If you have used the Zachman framework differently in your environment, please do share your thoughts.

CAM-CRP-1085 when running cogconfig.sh on UNIX

IBM Cognos Configuration displays an error when attempting to start on UNIX. The only option offered when the error appears is to exit the application.

Error Message:

CAM-CRP-1085 An error occurred while verifying that the security provider classes were loaded.
Reason: java.lang.ClassNotFoundException: org.bouncycastle125.jce.provider.BouncyCastleProvider

The error with IBM Cognos 8.4 will be:

CAM-CRP-1085 An error occurred while verifying that the security provider classes were loaded.
Reason: java.lang.ClassNotFoundException: org.bouncycastle134.jce.provider.BouncyCastleProvider

Root Cause:

An operating system user other than the one launching IBM Cognos Configuration was used to copy the cryptographic jar files from beneath the /bin/jre/1.X.X/lib/ext directory to the /lib/ext folder. The copied files were left with permissions that prevented read access to anyone but the user who copied them.

Because the user launching IBM Cognos Configuration cannot read these files, the application cannot use them.

Solution:

Ensure that all the files and sub-directories referenced by the JAVA_HOME environment variable are readable by the user running cogconfig.sh.
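To verify this, a small script can walk the JRE referenced by JAVA_HOME and report any files the current user cannot read. This is a sketch of the check described above, not an IBM-provided tool; run it as the same user that runs cogconfig.sh:

```python
import os

def unreadable_files(java_home=None):
    """Walk the directory tree referenced by JAVA_HOME and return the
    paths the current user lacks read permission on."""
    root = java_home or os.environ.get("JAVA_HOME", "")
    problems = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if not os.access(path, os.R_OK):
                problems.append(path)
    return problems
```

An empty result means every file under JAVA_HOME (including the copied cryptographic jars in lib/ext) is readable, which is the condition the solution requires.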

SDK - Bad version number in .class file

An error occurs when trying to implement the SDK sample TrustedSignonSample.

Error Message:

[ ERROR ] CAM-AAA-0064 The function 'CAM_AAA_JniAuthProvider::Configure' failed.
CAM-AAA-0154 Unable to load the Java authentication provider class 'TrustedSignonSample'.
Bad version number in .class file


Root Cause:

This error occurs when the classes were compiled with JDK 1.6 while Cognos 8.4 is running on JDK 1.5.

Solution:

Use JDK version 1.5 to build the classes and JAR file.
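To confirm which JDK produced a given class file, the major version can be read directly from the class-file header. This is a quick diagnostic sketch (per the class file format, major version 49 corresponds to Java 5 and 50 to Java 6):

```python
import struct

def class_major_version(path):
    """Read the major version from a .class file header
    (magic 0xCAFEBABE, then minor and major version)."""
    with open(path, "rb") as f:
        magic, _minor, major = struct.unpack(">IHH", f.read(8))
    if magic != 0xCAFEBABE:
        raise ValueError("not a class file: %s" % path)
    return major
```

If this reports 50 for the compiled TrustedSignonSample classes, they were built with JDK 1.6 and must be rebuilt with JDK 1.5.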

Steps:

Example build.bat:

@echo off

rem Copyright © 2008 Cognos ULC, an IBM Company. All Rights Reserved.
rem Cognos and the Cognos logo are trademarks of Cognos ULC (formerly Cognos Incorporated).

rem Build Java files in directory TrustedSignonSample

echo Building TrustedSignonSample

rem Build the CLASSPATH required to build Java files in the directory TrustedSignonSample

set _CLASSPATH=..\lib\CAM_AAA_CustomIF.jar;..\adapters

rem Compile Java files
D:\jdk1.5.0_11\bin\javac -classpath %_CLASSPATH% -d . *.java

rem Create jar file
D:\jdk1.5.0_11\bin\jar cfm0 CAM_AAA_TrustedSignonSample.jar MANIFEST *.class

echo done

Incorrect Filters When Users Drill Through to Upgraded 8.4 GA Targets in Analysis Studio

If the target of a drill-through definition is an Analysis Studio report with a drill-through filter, and
the application has been upgraded from IBM Cognos 8 version 8.3 to IBM Cognos 8 version 8.4,
then filters may not be correctly passed from the source to the target. Instead, the Analysis Studio
report appears with the authored settings unchanged, or users may be prompted to select a context.
This is true for authored drill-through definitions (created in a Report Studio report) and package
drill-through definitions (created in Cognos Connection) that use parameterized drill through.


Environment:

Drill through reports upgraded from 8.3 to 8.4 GA

Root Cause:

This problem occurs because of changes in how parameters are automatically named in Analysis
Studio.

Solution:

To correct the problem, recreate the mapping in the drill-through definition, and save the
definition.


Steps:

Steps for Authored Drill Through
1. In Report Studio, Professional Authoring, open the source report.
2. Select the report item that contains the drill-through definition.
3. From the Properties pane, open the drill-through definition (Data, Drill-Through Definitions).
4. From the Drill-Through Definitions window, open the Parameters table, and re-select the target
parameter(s).
5. Save the drill-through definition settings and then save the report.
6. Test the drill through to confirm that the problem is resolved.
For more information, see the Report Studio Professional Authoring User Guide.

Steps for Package Drill Through
1. In IBM Cognos Connection, launch Drill-through Definitions.
2. Navigate to the root of the source package, locate the drill-through definition to be updated,
and click Set Properties.
3. In the Target tab, under Parameter mapping, re-select the target parameters.
4. Save the drill-through definition.
5. Test the drill through to confirm that the problem is resolved.
For more information, see the IBM Cognos 8 BI Administration and Security Guide.

Cannot Start IBM Cognos Configuration on Red Hat 5 Linux

You are using IBM Java on Red Hat 5 Linux and while running the ./cogconfig.sh command, the
following error message appears:

Exception in thread "main" java.lang.UnsatisfiedLinkError: ///jre/bin/xawt/libmawt.so (/usr/libXft.so.2:undefined symbol: FT_GlyphSlot_Embolden)


Error Message:

Exception in thread "main" java.lang.UnsatisfiedLinkError: ///jre/bin/xawt/libmawt.so (/usr/libXft.so.2:undefined symbol: FT_GlyphSlot_Embolden)

Environment:

Cognos 8.4 GA

Root Cause:

The versions of Freetype that are used by IBM Cognos 8 and Red Hat 5 are not compatible.

Solution:

To resolve the problem, you must add the LD_PRELOAD environment variable on your Red Hat 5
Linux system.


Steps:

Set the LD_PRELOAD environment variable by using the following command (setenv is csh-style syntax; in sh or bash use export LD_PRELOAD=<path> instead):

  • On 32-bit systems, type
setenv LD_PRELOAD /usr/lib/libfreetype.so

  • On 64-bit systems, type
setenv LD_PRELOAD /usr/lib64/libfreetype.so
