Setting additional properties for ODBC or text sources

After creating and setting up the general properties of the ODBC or text sources, you need to set up the columns, rows, sorting, and script properties of the source.

To set up additional properties for an ODBC or text source:

  1. Click the Columns tab to specify the column properties for the source.

If you are creating a text source and the text file does not contain column names, you may want to edit the column names to make them more meaningful.

If you rename any of the column names, you must rename all of the column names in order for the integration to be successful.


Column Name   These are the names of the data items that are available for the source. If you marked the option indicating that the first row of the text file contained column names, those names appear here. If the text file does not contain column names, Integration Manager supplies its own names. You can edit the column names to make them more meaningful.

Don’t use ODBC reserved words in the source name. They can prevent the integration from running properly. For a list of ODBC reserved words, go to the MSDN online library (http://msdn.microsoft.com) and search for “ODBC reserved words”. You’ll find a list of reserved keywords in Appendix C of the ODBC Programmer’s Reference.


Datatype   This value indicates what type of data is contained in the column. If you are creating a text source, Integration Manager examines the data in the text file and provides a default value. If you’re using a simple ODBC data source, the data type value is automatically retrieved from the ODBC data source and can’t be changed.

Column names and the data types you have chosen are stored in the Schema.ini file, which is in the same location as the text file.
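For example, a Schema.ini entry for a comma-delimited text file is a plain-text section keyed by the file name. A minimal sketch, using a hypothetical file name and hypothetical columns, might look similar to the following:

[customers.txt]
ColNameHeader=False
Format=CSVDelimited
Col1=CustomerID Text Width 15
Col2=CustomerName Text Width 65
Col3=SalesAmount Currency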


You can change the value in the Datatype column, based on the type of data in the text file and how the information is used in the destination of the integration. Refer to Data types to help determine the most appropriate data type for each column in the text file.

Size   If you are creating a simple ODBC source, the Size column indicates the data size of each column, in bytes. You can’t edit this value.

Show   Select the Show check box to include the data in this column in your integration. If you are using a large source file and you do not want all the data to integrate, clear this check box for the columns to exclude.

Is Key   Select the Is Key check box to indicate that the data items within a column are unique identifiers. For example, you might select Is Key for a column called Customer ID to indicate that the values within that column are unique.

Integration Manager uses the values selected as Is Key to identify specific rows that cause errors in the integration.


You don’t need to select this check box for any columns.

If the properties of the text file or the ODBC data source used for the source have changed, click the Refresh Columns button to update the columns list.


  2. Choose the Filter tab to display the restriction criteria for the source (optional). Restrictions allow you to specify the rows that will be included in the source. All rows that do not fit the criteria are excluded from the source results.

A restriction is composed of several individual criteria. You can use logical AND and logical OR operators to apply several criteria. If necessary, you can include parentheses to group items in the criteria.
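For example, assuming hypothetical numeric columns named SalesAmount and CreditLimit, a restriction that combines several criteria with parentheses might look similar to the following:

(SalesAmount > 1000 AND SalesAmount < 5000) OR CreditLimit > 10000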

The LIKE operator allows you to perform basic pattern matching with string data types. You can use the percent sign (%) as a wildcard character, representing any sequence of characters. The percent sign can be used multiple times within a single string. For example, to include all customers whose names include the word “ACME”, the expression would be similar to the following:

CustomerName Like %ACME%

To exclude all rows where a particular field is blank, use a single wildcard character. For example, to exclude all customers where the “CustomerName” field is blank, the expression would be similar to the following:


CustomerName Like %

  3. Choose the Sorting tab to specify sorting and grouping information for the source.

By adding columns to the Order By list, you indicate the order rows should appear in the source. Individual columns can be sorted in ascending or descending order. Specifying a sort order is needed when the data in the source text file is not in the desired order. This is useful for integrations such as General Ledger transactions, where the order in which items are added is important.

For example, you typically want General Ledger transactions created in transaction date order. If the transactions in the source text file are not in this order, you can use the sorting capability to import them in the desired order.

To remove a column from the list, click the line selection button (to the left of the item) to select the entire line, and press Delete.


Sorting also can improve the performance of integrations. If multiple sources are used for an integration, sort the source results by the columns that are used to define the relationship. For example, if a relationship between two sources is based on the DocNumber column, sort both sources by that column.

The Group By list allows you to “collapse” the rows in the source that have identical values for corresponding columns. To compare values, specify the columns containing the values by adding them to the Group By list. These columns must also have the Show option marked in the Columns tab. When the rows are collapsed, only one row is included in the source results for any group of rows for which all of the selected columns match.

If you group the items in a source, only the columns included in the group appear in the source results; no other columns are included.


Grouping is typically used when you have header information and line item information in a single source file. For example, General Ledger transactions are composed of a transaction header and line items. The transaction header contains the transaction date and reference. The line items contain information for each item in the transaction.

For example, suppose the source text file contains General Ledger transaction information.
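Such a file might contain rows similar to the following; the Trx Date and Reference columns are the ones discussed below, and the Account, Debit, and Credit columns are hypothetical line item details:

Trx Date     Reference   Account        Debit      Credit
04/12/2018   Payroll     100-5100-00    1,000.00   0.00
04/12/2018   Payroll     100-2100-00    0.00       1,000.00
04/15/2018   Sales       100-1200-00    500.00     0.00
04/15/2018   Sales       100-4100-00    0.00       500.00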

To get the transaction header information from the text file, the items in the file need to be “collapsed.” Each transaction is uniquely identified by the transaction date and reference, but several lines in the text file have the same transaction date and reference. To collapse the file, the Trx Date and Reference columns are added to the Group By list.
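With the Trx Date and Reference columns in the Group By list, the source results from the sample above would look similar to the following:

Trx Date     Reference
04/12/2018   Payroll
04/15/2018   Sales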

The results contain only one row for each unique transaction date and reference combination. The only columns included in the source results are those in the Group By list.

  4. To attach a script to the integration (optional), choose the Scripts tab.

Scripts are written in VBScript, a subset of the Microsoft Visual Basic® programming language. Refer to Using scripts for more information.
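As a minimal, generic sketch of the language (not tied to any particular Integration Manager event, field, or object; the variable name and value are hypothetical), a few lines of VBScript might look like the following:

Dim customerName
customerName = "  acme corporation  "
' Trim removes leading and trailing spaces; UCase converts the text to uppercase
customerName = UCase(Trim(customerName))
' MsgBox displays the result, "ACME CORPORATION"
MsgBox customerName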

  5. After you set up properties for the text or simple ODBC source query, click OK. The source you created is added to the Integration window.

Be sure to save your source after setting the source properties. From the File menu, click Save Integration.


