TOAD

Toad® is a market-leading tool that provides quick and easy database development and administration.
Welcome to ToadSoft!

Toad is a powerful, low-overhead tool that makes database and application development faster and easier and simplifies day-to-day administration tasks.

Whether you are a database developer, application developer, DBA or business analyst, Toad offers specific features to make you more productive than ever before.

http://www.toadsoft.com/

The following article outlines the key points for installing and configuring a networked workgroup for Maximizer 9.

  1. Follow the instructions on pages 23 to 29 in the Maximizer User’s Guide to install Maximizer 9 on each computer you are networking.

    If you would like the Pervasive database engine to be installed on a drive other than the one where Maximizer is installed, follow the instructions on page 21. In all cases, both Maximizer and the Pervasive database engine are installed on each computer.

  2. After Maximizer is installed on all the computers you are connecting in the networked environment, install and apply Maximizer and Pervasive Workgroup licensing on the dedicated file server by following the instructions on pages 35 to 36. You must ensure you have adequate licensing for all users connecting to Maximizer.


  3. On the computer you are using as the file server, share the folder where your Address Books will be hosted allowing full access permissions to all workgroup computers. The default folder for the shared Address Books is ...\All Users\Application Data\Maximizer\AddrBks. As described in the manual, the location of the default folder varies depending on the server’s operating system. For example, on Windows 2000, this folder is found under the Documents and Settings folder.


  4. If you are creating an Address Book for the first time, do so by following the instructions on page 35. Although not all steps listed in the Checklist are necessary, setting up a new Address Book must be performed first, as you will need at least one Address Book for other workstations to connect to. You should also create all users that will be accessing the shared Address Book before proceeding. Depending on your needs, you can go back and adjust security for each user after you have the workgroup environment up and running.


  5. On each of the workgroup computers, ensure you can connect to the file server. You can do this by pinging the file server from each computer (a scripted version of this check is sketched after this list). For example:

    1. Click Start > Run
    2. Type CMD and click OK
    3. Type PING [ComputerName]


    If the name of the computer is MAX01, the command would be PING MAX01.


  6. On each of the workgroup computers, map a network drive to the file server's shared Address Book folder:

    1. Locate and select the folder in Windows Explorer
    2. Right-click and choose Map Network Drive.
    3. For Windows XP:
      1. Click Tools > Map Network Drive.
      2. Specify the drive letter for connection and the folder you want to connect to.
      3. Select Reconnect at logon.
      4. Click Finish to complete the mapping of a network drive.


    Verify that the "Reconnect at logon" option in the Map Network Drive dialog is enabled.


  7. On each of the workgroup computers, launch Maximizer and select File > New Address Book. Enter the Address Book name (this can be any name) and click the ellipsis button beside the Location of the Address Book field to browse to the shared Address Book folder. Note that if the Address Book does not yet exist on the file server, you must create it on the file server before browsing to it on your workgroup computers.


  8. On the file server, configure your Pervasive Gateway. Although Maximizer does not require a gateway, configuring one can improve network performance.

    By default, Maximizer is installed with a Floating Gateway configuration. You can leave the gateway configuration with the default Floating Gateway. However, the file server must be left running and logged on at all times when the workgroup computers are accessing the Maximizer Address Book.

    To configure a Fixed Gateway, follow the instructions on pages 30 to 32 of the manual.
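
The connectivity checks in steps 5 and 6 can also be scripted. The following is a minimal sketch in Python, assuming Python is installed on a Windows workgroup computer; the server name MAX01 comes from the example in step 5, while the UNC path \\MAX01\AddrBks is a hypothetical share name for the Address Book folder described in step 3.

    import os
    import subprocess

    SERVER = "MAX01"             # file server name from the example in step 5
    SHARE = r"\\MAX01\AddrBks"   # hypothetical UNC path to the shared Address Book folder

    # Ping the file server once (the -n flag is specific to the Windows ping command).
    result = subprocess.run(["ping", "-n", "1", SERVER], capture_output=True, text=True)
    if result.returncode == 0:
        print(f"{SERVER} is reachable")
    else:
        print(f"Cannot reach {SERVER}:\n{result.stdout}")

    # Confirm the shared Address Book folder is visible before mapping a drive to it.
    if os.path.isdir(SHARE):
        print(f"Shared folder {SHARE} is accessible")
    else:
        print(f"Shared folder {SHARE} is not accessible")

If either check fails, resolve the network or sharing issue before continuing with steps 6 and 7.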

APPLIES TO

Maximizer 9

Can You Trust Your Metadata If You Have Poor Quality Data?

Peter Ku

Over the past several quarters, I've had the privilege of speaking with a number of companies involved in data governance. The interesting thing I found: firms that identify both data quality and business metadata as critical, but invest in only one and not the other.

Case in point: a leading financial services firm implemented a data governance program to improve the comprehension and accuracy of the company's existing board reports. I learned that one of their goals was to define their business terms and definitions (i.e. business metadata) to help non-technical users improve their understanding of the data used to run the business. What I found fascinating was that this was being done prior to addressing their data quality issues. In fact, when asked, "Do you have data quality challenges?" most business users said "yes". Unfortunately, no one at this company knew to what extent. Instead, their focus was on defining their business metadata. This leads me to ask, "Can you trust your metadata without addressing your data quality issues as part of a data governance practice?"

If metadata is information about your data that your business users rely on to drive decisions, but the source data is not clean, how will that affect your business? The answer seems self-explanatory: of course you can't trust your metadata if you have poor quality data. For example, business metadata is defined from an approved list of valid values. If the data used to define those values is incorrect, the downstream impact is that you end up with inaccurate metadata.

Organizations implementing data governance programs need to consider the lifecycle of how data is captured, processed, and delivered to downstream systems, whether that is a data warehouse, master data management application, data hub, or CRM system. Creating, defining, and publishing business metadata without addressing your data quality issues may not help companies looking to benefit from data governance.

Can Data Governance Help Wall Street Firms Survive?

Peter Ku

Now that the $700 billion Troubled Asset Relief Program (TARP) has been approved by the government, firms on Wall Street are preparing themselves for even more oversight and scrutiny by lawmakers and taxpayers. Survivors of the market meltdown will be required to establish tighter controls, policies, standards, and processes for managing and delivering trusted information for decision making, auditing, and regulatory reporting than ever before.

While many companies have invested in data integration, data quality, data warehousing, and business intelligence technologies over the years, only a handful have adopted formal data governance practices to manage one of their company's most important assets: their data. Take, for example, GE Asset Management, whose Chief Information Officer, Anthony Sirabella, states:

"Data governance is one of our top 3 enterprise projects to generate more-consistent, accurate and timely data across the business; increase confidence in the data; and increase productivity for those supporting the data infrastructure. The data governance initiative involves a business focus on carefully defining data sources and definitions to meet user requirements, consistently applying that data across various analytic tools and reports, greater control over supplying systems and processes, and continuous monitoring to make sure that the data remains accurate and timely."

As firms journey into these uncharted waters, data governance is not just interesting; it will soon effectively be a requirement that all firms document their data management processes, policies, standards, and participants. In fact, companies may be required to measure the quality and value of their data just like any other asset on their books.

Why? Business transparency is no longer a luxury but a requirement. Companies can no longer allow departments to operate in silos or rogue individuals to carry out fraudulent activities that harm shareholders (remember Société Générale?). Managing risk is impossible without insight across business functions, access to accurate and timely information, and policies to enforce and protect the data and information used to run the business. This is the essence of data governance.

Therefore, can companies on Wall St. survive without a formal data governance program?  I highly doubt it.

Informatica Performance Tuning -- Minimizing Deadlocks

Minimizing Deadlocks

If the Integration Service encounters a deadlock when it tries to write to a target, the deadlock only affects targets in the same target connection group. The Integration Service still writes to targets in other target connection groups.

Encountering deadlocks can slow session performance. To improve session performance, you can increase the number of target connection groups the Integration Service uses to write to the targets in a session. To use a different target connection group for each target in a session, use a different database connection name for each target instance. You can specify the same connection information for each connection name.

Informatica Performance Tuning --- Optimizing Oracle Target Databases

Optimizing Oracle Target Databases

If the target database is Oracle, you can optimize the target database by checking the storage clause, space allocation, and rollback or undo segments.

When you write to an Oracle database, check the storage clause for database objects. Make sure that tables are using large initial and next values. The database should also store table and index data in separate tablespaces, preferably on different disks.

When you write to Oracle databases, the database uses rollback or undo segments during loads. Ask the Oracle database administrator to ensure that the database stores rollback or undo segments in appropriate tablespaces, preferably on different disks. The rollback or undo segments should also have appropriate storage clauses.

To optimize the Oracle database, tune the Oracle redo log. The Oracle database uses the redo log to log loading operations. Make sure the redo log size and buffer size are optimal. You can view redo log properties in the init.ora file.

If the Integration Service runs on a single node and the Oracle instance is local to the Integration Service process node, you can optimize performance by using IPC protocol to connect to the Oracle database. You can set up Oracle database connection in listener.ora and tnsnames.ora.
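
As a rough illustration of that IPC setup, the entries below sketch what the additions to listener.ora and tnsnames.ora might look like. The key name MAXIPC, the host dbhost, and the service name orcl are placeholder values; the actual entries should come from the Oracle database administrator.

    # listener.ora: add an IPC address to the listener on the database host
    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = IPC)(KEY = MAXIPC))
          (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
        )
      )

    # tnsnames.ora: an alias the Integration Service can use for local IPC connections
    ORCL_IPC =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = IPC)(KEY = MAXIPC))
        (CONNECT_DATA = (SERVICE_NAME = orcl))
      )

The relational connection used by the Integration Service would then reference the ORCL_IPC alias rather than a TCP-based alias.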

Informatica Performance Tuning --- Using External Loads

Using External Loads

You can use an external loader to increase session performance. If you have a DB2 EE or DB2 EEE target database, you can use the DB2 EE or DB2 EEE external loaders to bulk load target files. The DB2 EE external loader uses the Integration Service db2load utility to load data. The DB2 EEE external loader uses the DB2 Autoloader utility.

If you have a Teradata target database, you can use the Teradata external loader utility to bulk load target files. To use the Teradata external loader utility, set up the attributes, such as Error Limit, Tenacity, MaxSessions, and Sleep, to optimize performance.

If the target database runs on Oracle, you can use the Oracle SQL*Loader utility to bulk load target files. When you load data to an Oracle database using a pipeline with multiple partitions, you can increase performance if you create the Oracle target table with the same number of partitions you use for the pipeline.

If the target database runs on Sybase IQ, you can use the Sybase IQ external loader utility to bulk load target files. If the Sybase IQ database is local to the Integration Service process on the UNIX system, you can increase performance by loading data to target tables directly from named pipes. If you run the Integration Service on a grid, configure the Integration Service to check resources, make Sybase IQ a resource, and make the resource available on all nodes of the grid. Then, in the Workflow Manager, assign the Sybase IQ resource to the applicable sessions.

Informatica Performance Tuning -- Dropping Indexes and Key Constraints

Dropping Indexes and Key Constraints

When you define key constraints or indexes in target tables, you slow the loading of data to those tables. To improve performance, drop indexes and key constraints before you run the session. You can rebuild those indexes and key constraints after the session completes.

If you decide to drop and rebuild indexes and key constraints on a regular basis, you can use the following methods to perform these operations each time you run the session:

Use pre-load and post-load stored procedures.
Use pre-session and post-session SQL commands (an example is sketched after the note below).
 
Note: To optimize performance, use constraint-based loading only if necessary.
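
For illustration, the pre- and post-session SQL for an Oracle target might look like the following sketch; the table, index, and constraint names are hypothetical placeholders.

    -- Pre-session SQL: drop the index and key constraint before the load
    ALTER TABLE sales_fact DROP CONSTRAINT pk_sales_fact;
    DROP INDEX idx_sales_fact_date;

    -- Post-session SQL: rebuild them after the load completes
    CREATE INDEX idx_sales_fact_date ON sales_fact (sale_date);
    ALTER TABLE sales_fact ADD CONSTRAINT pk_sales_fact PRIMARY KEY (sale_id);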

Informatica Performance Tuning -- Using Bulk Loads

Using Bulk Loads

You can use bulk loading to improve the performance of a session that inserts a large amount of data into a DB2, Sybase ASE, Oracle, or Microsoft SQL Server database. Configure bulk loading in the session properties.

When bulk loading, the Integration Service bypasses the database log, which speeds performance. Without writing to the database log, however, the target database cannot perform rollback. As a result, you may not be able to perform recovery. When you use bulk loading, weigh the importance of improved session performance against the ability to recover an incomplete session.

When bulk loading to Microsoft SQL Server or Oracle targets, define a large commit interval to increase performance. Microsoft SQL Server and Oracle start a new bulk load transaction after each commit. Increasing the commit interval reduces the number of bulk load transactions, which increases performance.

How to Harness the Power of Grid Computing to Achieve Greater Data Integration Scalability and Performance

Historically, IT organizations have relied on large, multi-CPU symmetric multiprocessing (SMP) servers for data processing. The underlying assumption was that by adding capacity (more CPUs, memory, and disk), IT could answer the need to process greater data volumes in ever-shrinking load windows.

That capacity, however, came at a high price. Acquisition, maintenance, and support of a single SMP server could amount to millions of dollars. And SMP systems offered little flexibility to "scale down," meaning that costly resources were often underutilized except for periodic peak loads. Faced with budget reductions in the early 21st century, IT organizations began to explore alternatives for more cost-effective data processing platforms.

The grid computing architecture is rapidly emerging as a compelling alternative for data processing. A grid is typically a collection of low-cost, commodity blade x86 processor-based server nodes connected over a high-speed Ethernet network in which resources are pooled and shared. Grid computing can offer several advantages over monolithic SMP systems:

• Greater flexibility for incremental growth
• Cost-effective scalability and capacity on demand
• Optimized resource utilization

Despite its benefits, the grid computing paradigm presents a number of challenges to both IT organizations and infrastructure vendors. Software applications running on the grid (databases, application servers, storage systems, data integration platforms, and others) must be equipped with grid-specific functionality to take advantage of a grid's capability to evenly disperse workloads across multiple servers.
