I haven't enabled the connection filter, but somehow most databases aren't listed in the Data Source Explorer. However, all databases are listed when I open the Database Cache Properties settings window.
Did anybody come across this issue?
Most databases aren't listed in my Data Source Explorer
Mload upsert - 6705 - SAP BW as source
Greetings Experts,
We use Informatica to load data from SAP BW into a Teradata staging layer. One job does not drop the ET table (some records remain with error code 6705) during its MultiLoad upsert operation, yet the job itself succeeds. During the next run, the job fails in the first MultiLoad phase because it cannot create the ET table, which already exists.
How can we identify the source records (which we don't have access to) that violated constraints and landed in the ET table? (The source data lands on the Informatica server through a flat file; within Informatica the data is kept in a pipe, which is deleted once the job completes.)
If Teradata had a database link feature like Oracle's, we could have accessed the SAP BW source data directly from Teradata and troubleshot the issue. Can anyone explain why Teradata doesn't natively support database links (as I suspect)? I have gone through the following link and couldn't find native db_link support for external source systems (SAP, Oracle, ...) other than through external tools.
http://forums.teradata.com/forum/database/teradata-dblink
One option is to take all the distinct values of the column recorded in the ET table in Teradata and look them up in the source SAP BW system to identify the offending records, which may be a tedious task; or
use a flat file in Informatica rather than a pipe, so that we can read the file afterwards.
Is there any provision to find the offending records directly from Teradata?
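In the meantime, a minimal sketch of how the rejected rows could be inspected directly in Teradata (the database and ET table names are hypothetical; the column names follow the usual MultiLoad error-table layout):

/* Summarize the errors by code and offending column. */
SELECT ErrorCode, ErrorField, COUNT(*) AS RowCnt
FROM   STG_DB.ET_STG_TABLE     /* hypothetical ET table name */
GROUP BY 1, 2
ORDER BY 1, 2;

/* HostData holds each rejected source record as a byte string, from
   which the offending values can be recovered. */
SELECT HostData
FROM   STG_DB.ET_STG_TABLE
WHERE  ErrorCode = 6705;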
TQS only schedules queries if submitted by TDWM
I just upgraded from 13.10 to 14.10. All jobs were working fine before, so some parameter must have changed.
Now any user except TDWM gets a "bad user account or password" error.
All profiles are set up the same as before. What am I forgetting?
RGlass
Error while loading data from teradata to SQL Server - TPT registry key cannot be opened
Hi all,
I am building an SSIS package in SQL Server BIDS 2008 to pull data from Teradata and load it into SQL Server.
1. I am able to connect to Teradata successfully as the source.
2. The connection to SQL Server as the destination is also good.
3. When I run the package I get the error
********* ERROR *******
[Teradata Source [173]] Error: The Teradata TPT registry key cannot be opened. Verify that the TPT API 12.0 or 13.0 Edition 2 (13.0.0.2) for
Windows x86 is installed properly.
Details:
Windows 7
Teradata 14.0
Microsoft® Connectors v1.2 for Oracle and Teradata
SQL Server BIDS 2008 to create the SSIS package.
When I check Control Panel -> Programs and Features, I have:
1. Teradata Parallel Transport Base 14.0
2. Teradata Parallel Transport Stream 14.0
On my machine, the TPT registry keys are:
HKEY_LOCAL_MACHINE\SOFTWARE\Teradata\Client\14.00\Teradata Parallel Transporter Base
HKEY_LOCAL_MACHINE\SOFTWARE\Teradata\Client\14.00\Teradata Parallel Transporter Stream
So why is it looking for TPT API 12.0 or 13.0 Edition 2 (13.0.0.2)?
How can I solve this issue?
Please help.
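One quick check (a sketch): the error text asks specifically for TPT API 12.0 or 13.0 Edition 2, and the Programs and Features list above shows only the Base and Stream packages, so the separate TPT API component may simply not be installed, or only a 14.00 key may exist where the connector looks for the older versions. Listing the registered client components shows what is actually there; note that 32-bit BIDS on 64-bit Windows reads the Wow6432Node hive:

rem List the Teradata client components registered on this machine,
rem and the 32-bit registry view that a 32-bit BIDS process sees.
reg query "HKEY_LOCAL_MACHINE\SOFTWARE\Teradata\Client\14.00"
reg query "HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Teradata\Client\14.00"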
Abhay Nautiyal
TPT error - TPT
Hi,
I'm getting the following error when launching a TPT job from the command line -
"Teradata Parallel Transporter Version 13.10.00.07
TPT_INFRA: TPT04013: Error: opening temporary job script output file: "No such file or directory" (2).
Job script compilation failed.
Job terminated with status 12."
The user id does have write access to the checkpoint and logs folders specified in the twbcfg.ini file. Any suggestions?
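For reference, a sketch of the twbcfg.ini entries in question (the paths are hypothetical). Besides checkpoint and log files, tbuild also writes temporary files while compiling the job script, so it is worth confirming that any temp directory under the TPT install root exists and is writable by the same user id:

CheckpointDirectory='/opt/teradata/client/13.10/tbuild/checkpoint'
LogDirectory='/opt/teradata/client/13.10/tbuild/logs'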
Thanks,
Sebastian
Standard File Extensions for Teradata Tools Scripts
Is there a standard file extension recommended by Teradata for its tool scripts, such as BTEQ, FastExport, FastLoad, MultiLoad, TPump, TPT, and job variables files?
Any suggestions for a naming standard that distinguishes the different kinds of scripts from one another?
Also, I find that most of the sample scripts in the Teradata documentation are .txt files, and of course all scripts are plain text. So is .txt the standard extension for all Teradata scripts?
Query Scheduler
Does the Query Scheduler tool allow you to check a table's status before running, or can it only run a query at a specified time? E.g., can it be set up so that a query runs only once table xyz has been updated, instead of at a hardcoded time each day?
BTEQ date issue
Hi,
I am having an issue when using BTEQ on Linux.
Let's say I have a table in Teradata that has two columns:
Article Date_issue
Article1 12/18/2013
When I query this table from BTEQ on Linux, the result is:
Article Date_Issue
Article1 13/12/18
Is there a way to get the date displayed in the correct format, just as it exists in the table?
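If the issue is the session's default date display format, a minimal sketch of an explicit FORMAT in the query (the table name is hypothetical; BTEQ honors the FORMAT attribute in field mode):

SELECT Article,
       CAST(Date_issue AS DATE FORMAT 'MM/DD/YYYY') AS Date_issue
FROM   mytable;

/* If a tool ignores the FORMAT attribute, force a character result:
   CAST(CAST(Date_issue AS FORMAT 'MM/DD/YYYY') AS CHAR(10))        */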
TPT API, TIMESTAMP(0)
Hi:
I am trying to use the TPT API to import data into Teradata. From other forum posts, a DATE is 4 bytes and can be represented as date = (Year - 1900) * 10000 + Month * 100 + Day.
For TIMESTAMP, I cannot find anything in the sample code or forum posts. How do I store a TIMESTAMP(0) value in a byte array and write the buffer into Teradata using the TPT API?
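A sketch of the idea, drawn from how the script-based utilities represent it rather than from the TPT API samples: a TIMESTAMP(0) value travels as a fixed-length 19-byte ANSI character string, so the row buffer holds the ASCII bytes of 'YYYY-MM-DD HH:MM:SS' rather than a packed binary value. In schema terms:

DEFINE SCHEMA ts_example      /* hypothetical schema name */
(
    ts_col TIMESTAMP(0)       /* client representation: CHAR(19),
                                 e.g. '2013-12-18 09:30:00'       */
);

In the TPT API this would correspond to adding the column to the Schema object with the timestamp type and a length of 19, then copying those 19 bytes into the row buffer at the column's offset.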
Thanks,
J.D.
BTEQ blank spaces
I am very new to Teradata. I am using BTEQ to unload data from the database. I have to unload data into multiple files, so I am using a generic Unix script and passing each SQL file to BTEQ (.run file <sql_file>). I am getting the data with column names and pad spaces in the file; I want to remove them. Kindly advise.
Output file format:
cntry_grp_type_cd|cntry_grp_cd|cntry_cd
R |EMEA |TD
L |ISC |MV
IR |APAC |AU
IR |EMEA |FR
L |BEE |TM
R |LAC |GP
R |EMEA |BT
L |SGP |MM
IR |EMEA |ES
I would like it to look as follows:
R|EMEA|TD
L|ISC|MV
IR|APAC|AU
IR|EMEA|FR
L|BEE|TM
R|LAC|GP
R|EMEA|BT
L|SGP|MM
IR|EMEA|ES
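A minimal sketch of one way to do this in BTEQ (the output file and table names are hypothetical): trim and concatenate the columns yourself and blank out the title, so neither the heading line nor the pad spaces appear:

.SET TITLEDASHES OFF;
.EXPORT REPORT FILE = out.dat;
SELECT TRIM(cntry_grp_type_cd) || '|' ||
       TRIM(cntry_grp_cd)      || '|' ||
       TRIM(cntry_cd)          (TITLE '')
FROM   some_db.some_table;
.EXPORT RESET;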
TPT with Unicode columns in source.
Hi,
We have been trying to load data using TPT from an Oracle source that has Unicode columns, but the job fails at script compilation. The problem is with the UNICODE columns.

SCHEMA SECTION (NOT ALL COLUMNS):
---------------------------------
DESCRIPTION 'TABLE table_ld ODBC SCHEMA'
(
    SourceCol_1   NUMBER(38),
    SourceCol_2   TIMESTAMP,
    SourceCol_3   VARCHAR(60)   CHARACTER SET UNICODE,
    SourceCol_4   TIMESTAMP,
    SourceCol_5   VARCHAR(60)   CHARACTER SET UNICODE,
    SourceCol_6   VARCHAR(60)   CHARACTER SET UNICODE,
    SourceCol_7   TIMESTAMP,
    SourceCol_8   VARCHAR(20)   CHARACTER SET UNICODE,
    SourceCol_9   VARCHAR(255)  CHARACTER SET UNICODE,
    SourceCol_10  VARCHAR(20)   CHARACTER SET UNICODE,
    SourceCol_11  VARCHAR(20)   CHARACTER SET UNICODE,
    SourceCol_12  VARCHAR(255)  CHARACTER SET UNICODE,
    SourceCol_13  VARCHAR(255)  CHARACTER SET UNICODE,
    SourceCol_14  VARCHAR(60)   CHARACTER SET UNICODE,
    SourceCol_15  VARCHAR(255)  CHARACTER SET UNICODE,
    SourceCol_16  VARCHAR(60)   CHARACTER SET UNICODE,
    SourceCol_17  VARCHAR(60)   CHARACTER SET UNICODE,
    SourceCol_18  VARCHAR(255)  CHARACTER SET UNICODE,
    SourceCol_19  VARCHAR(255)  CHARACTER SET UNICODE,
    SourceCol_20  VARCHAR(255)  CHARACTER SET UNICODE,
    SourceCol_21  VARCHAR(255)  CHARACTER SET UNICODE,
    SourceCol_22  VARCHAR(255)  CHARACTER SET UNICODE,
    SourceCol_23  VARCHAR(4000) CHARACTER SET UNICODE,
    SourceCol_24  VARCHAR(255)  CHARACTER SET UNICODE,
    SourceCol_25  VARCHAR(60)   CHARACTER SET UNICODE

APPLY SECTION:
--------------
APPLY
(
    'INSERT INTO LoadTablesaa.branch_ld (
        id_num, created_date, created_by, last_modified, last_modified_by,
        lock_user, lock_date, state, cr_state, branch_type, release_target,
        cr_release_target, branch_name, responsible, cr_responsible,
        branch_manager, branch_steward, keywords, baseline_planned,
        baseline_actual ......
    ) VALUES (
        :SourceCol_1, :SourceCol_2, :SourceCol_3, :SourceCol_4, :SourceCol_5,
        :SourceCol_6, :SourceCol_7, :SourceCol_8, :SourceCol_9, :SourceCol_10,
        :SourceCol_11, :SourceCol_12, :SourceCol_13, :SourceCol_14,
        :SourceCol_15 ....
    )'
)
TO OPERATOR ( STREAM_OPERATOR[1] )
SELECT * FROM OPERATOR ( table_ld_READ_OPERATOR[1] );

TBUILD CALL:
------------
tbuild -f $CTL_FILE -u "TDPassword = '$DSS_PWD', SRCPassword = '$SRC_PWD'" wsl-$LOAD_TABLE-$SEQUENCE >> $AUD_FILE

ERROR MESSAGE RECEIVED:
-----------------------
TPT_INFRA: At "CHARACTER" missing { RPAREN_ COMMA_ MACROCHARSET_ METADATA_ OFFSET_ } in Rule: Column Definition
Compilation failed due to errors. Execution Plan was not generated.
Job script compilation failed.
Teradata Parallel Transporter Version 14.10.00.02
Job terminated with status 8.
Regards,
Indrajit
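For what it's worth, a sketch of one workaround, under the assumption (suggested by the parser error) that this TPT release does not accept a CHARACTER SET clause inside a schema column definition: declare the client character set at the job level and size the VARCHARs in bytes instead. All names, job variables, and the reduced column list below are illustrative only:

USING CHARACTER SET UTF8
DEFINE JOB load_branch_ld_sketch
DESCRIPTION 'Unicode columns without CHARACTER SET in the schema'
(
    DEFINE SCHEMA table_ld_schema
    (
        SourceCol_1 NUMBER(38),
        SourceCol_3 VARCHAR(180)   /* 60 chars x 3 bytes/char for UTF8 */
    );

    DEFINE OPERATOR table_ld_READ_OPERATOR
    TYPE ODBC
    SCHEMA table_ld_schema
    ATTRIBUTES
    (
        VARCHAR UserName     = @SRCUser,
        VARCHAR UserPassword = @SRCPassword,
        VARCHAR DSNName      = @SRCDsn,
        VARCHAR SelectStmt   = 'SELECT id_num, created_by FROM branch_src;'
    );

    DEFINE OPERATOR STREAM_OPERATOR
    TYPE STREAM
    SCHEMA *
    ATTRIBUTES
    (
        VARCHAR TdpId        = @TDTdpId,
        VARCHAR UserName     = @TDUser,
        VARCHAR UserPassword = @TDPassword
    );

    APPLY
    (
        'INSERT INTO LoadTablesaa.branch_ld (id_num, created_by)
         VALUES (:SourceCol_1, :SourceCol_3);'
    )
    TO OPERATOR ( STREAM_OPERATOR[1] )
    SELECT * FROM OPERATOR ( table_ld_READ_OPERATOR[1] );
);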
When are secondary indexes required?
Hi All,
I understand that access through a secondary index is a two-AMP (unique SI) or all-AMP (non-unique SI) operation, and I understand the subtable that is created and its contents.
However, I am not sure in what scenarios a secondary index (unique or non-unique) should be defined.
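For concreteness, a sketch of the two kinds (hypothetical tables and columns): a USI is typically defined to enforce uniqueness and give two-AMP access on a non-primary-index column, while a NUSI is typically defined to support frequent filtering on a column with many rows per value:

CREATE UNIQUE INDEX (cust_nbr) ON db1.customer;        /* USI  */
CREATE INDEX idx_order_dt (order_dt) ON db1.orders;    /* NUSI */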
I would really appreciate some help here.
Thanks,
Aarsh
TPT Template operators with LDAP authentication
I have coded a simple TPT script using operator templates ($EXPORT, $INSERT and $LOAD) to copy data for table A from TDSERVER-A to TDSERVER-B. Both TDSERVER-A and TDSERVER-B use LDAP-based authentication. TBUILD raises errors whether I initialize the LogonMech, UserName and UserPassword variables for the source/target operators in the job variables file or override those variables inline in the TPT script using the ATTR option. The error messages I see are given below. The strange thing is that I was able to run the same script in a different Teradata environment that does not use LDAP-based authentication. Has anyone experienced a similar issue? Are there any tweaks needed in the TPT script to use operator templates successfully with LDAP authentication? Any input is appreciated.
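For reference, a sketch of the kind of job-variables entries described above (the names follow the standard template conventions; the values are placeholders):

SourceTdpId        = 'TDSERVER-A',
SourceLogonMech    = 'LDAP',
SourceUserName     = 'myuser',
SourceUserPassword = 'mypassword',
TargetTdpId        = 'TDSERVER-B',
TargetLogonMech    = 'LDAP',
TargetUserName     = 'myuser',
TargetUserPassword = 'mypassword'

If the templates still fail, a plain BTEQ logon with .LOGMECH LDAP against the same servers can help separate a TPT scripting issue from a TDGSS/LDAP configuration one.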
Error Msgs:
Teradata Parallel Transporter Version 14.00.00.08
TPT_INFRA: TPT05014: RDBMS error 8017: The UserId, Password or Account is invalid.
TPT_INFRA: TPT04032: Error: Schema generation failed for table 'DBNAME.TABLENAME' in DBS 'T7DEV':
"GetTableSchema" status: 48.
Job script preprocessing failed.
Job terminated with status 12.
Variant records and TPT
I have a file with fixed-length records, but they need to be interpreted in various ways depending on the first field (let's call it RECORD_TYPE). We currently do this easily with MultiLoad:
.LAYOUT usage_record_layout;
    .FIELD RECORD_TYPE      1  CHAR(5);
    /* First variant */
    .FIELD variant1_field1  6  CHAR(15);
    .FIELD variant1_field2  *  CHAR(15);
    /* Second variant */
    .FIELD variant2_field1  6  CHAR(10);
    .FIELD variant2_field2  *  CHAR(12);
    .FIELD variant2_field3  *  CHAR(8);
    /* (......) */
.IMPORT INFILE file_name
    FORMAT TEXT
    LAYOUT usage_record_layout
    APPLY DML_for_variant1 WHERE RECORD_TYPE='TYPE1'
    APPLY DML_for_variant2 WHERE RECORD_TYPE='TYPE2';
Can something similar be done with TPT?
I can't find a way, other than multiple passes through the file, which is unacceptable.
The DataConnector producer may have only one schema, and the schema is just a collection of fields; it's not possible to create variant records (you can't specify an offset or starting position for a field).
Is this a case where MultiLoad cannot be replaced with TPT?
Regards,
Jacek
BTET/ANSI Transaction in TPT/FASTLOAD
Hi All,
I am having one problem.
I know a BTET transaction is "all or none": if I submit a multi-statement transaction and any one statement fails, the entire transaction is rolled back in BTET mode. In ANSI mode, however, if I use COMMIT, only the failed statements are rolled back while the others commit.
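(As a minimal sketch of that BTET behaviour in a hypothetical BTEQ session: if the second insert fails, the first is rolled back as well.)

BT;
INSERT INTO db1.t1 VALUES (1, 'a');
INSERT INTO db1.t1 VALUES (2, NULL);   /* fails if the column is NOT NULL */
ET;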
Also, I know TPT runs in Teradata mode, as defined in DBS Control; I guess FastLoad runs the same way.
Now, I was running a TPT job (using the Load operator) through Informatica. The source file had a NULL at one column position, and that column was defined as NOT NULL in the DDL. My expectation was that the entire transaction would fail, but only the affected records were excluded, with errors written to the _ET table, and the rest were committed to the target table.
Then I tried to load the same source file using FastLoad with CHECKPOINT 10 and got the same result: only the affected records were excluded, with errors in the ET table, and the rest were committed.
Why am I getting this result? What is the logic behind it?
Does this relate to the fact that FastLoad/TPT Load send data in 64K blocks?
Please let me know your thoughts.
Thanking You
Santanu
Delimiter Issue With TPT on Linux Platform
We are experiencing a delimiter issue with TPT on Linux. The same script works fine on AIX.
The problem shows up as incorrect data in the selection of CURRENT_DATE and also as incorrect data in a character field.
There are delimiters that can be used and yield correct data, but they do not solve our problem: they are two-character delimiters (^^), and we have downstream systems that cannot use them.
Is this a known TPT issue? Can anyone suggest solutions?
TPT Error - Not able to get rows having incorrect format
Hi TPT Users,
I see there's some issue with the datafile that I have been trying to load.
I keep getting exit status 8 or exit status 12 whenever I try to load the full file.
If I load just one record from that datafile, it loads fine, so I believe there is an issue with the data in one of the records.
Now I am trying to find that bad record.
I am using the attributes below:
VARCHAR RecordErrorFilePrefix='tablename_err.dat',
VARCHAR RecordErrorVerbosity ='Med',
But the logfile says
WARNING! Definition of attribute RecordErrorFilePrefix ignored
WARNING! Definition of attribute RecordErrorVerbosity ignored
Any idea why? All I am trying to achieve is to find the record with the bad format.
Please suggest!
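For reference, a sketch of where these attributes are expected to live, assuming the file is read by a DataConnector producer (all other names and values are illustrative). As far as I know they are ignored, with exactly these warnings, when the operator or the TPT version in use does not support them:

DEFINE OPERATOR FILE_READER
TYPE DATACONNECTOR PRODUCER
SCHEMA file_schema
ATTRIBUTES
(
    VARCHAR FileName              = 'datafile.dat',
    VARCHAR Format                = 'Delimited',
    VARCHAR TextDelimiter         = '|',
    VARCHAR OpenMode              = 'Read',
    VARCHAR RecordErrorFilePrefix = 'tablename_err.dat',
    VARCHAR RecordErrorVerbosity  = 'Med'
);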
CPU skew of FastLoad
Hi all,
We use the FastLoad utility and have noticed that its CPU skew is generally very high, around 99%. When the FastLoad finishes, we can confirm that the data in the loaded table is well distributed across the AMPs. So why is FastLoad's CPU skew so high even though the loaded table's data is evenly distributed across the AMPs?
Thanks.
OLELoad and FastLoad on Linux
Hi,
Two questions:
1. We have AMJ files generated by OLELoad on Windows. Can I use them with the FastLoad tool on a Linux machine to export data from Microsoft SQL Server to Teradata?
2. What kind of driver should I install on Linux so FastLoad can communicate with Microsoft SQL Server?
Thanks!
J.D.
Unable to update a row that contains a blank/null value in an indexed column.
Hello,
I have a table with a primary index made up of two columns, let's say colA and colB. colA can contain blank/null values; colB is always populated. Using MultiLoad, when I execute the upsert step of the TPT script, records with a non-blank value in both colA and colB are updated successfully, but records with a blank colA are treated as inserts and get duplicated in my table, even though those records already exist and I am only trying to apply updates.
I defined the table as SET since it is not supposed to allow duplicates. The source is a staging table that I first populate from a flat file. I am new to Teradata, so it is possible I don't have this set up correctly, but I have run out of ideas. Has anyone encountered this problem? If I have not provided enough information, please let me know.
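For reference, a sketch of the upsert DML in question (all names are hypothetical). One observation worth checking, rather than a confirmed diagnosis: a WHERE clause like colA = :colA can never match a row whose colA is NULL, because NULL never compares equal, which would make the upsert fall through to its insert branch exactly as described:

APPLY
(
    'UPDATE tgt_db.tgt_tbl
        SET some_col = :some_col
      WHERE colA = :colA
        AND colB = :colB;',
    'INSERT INTO tgt_db.tgt_tbl (colA, colB, some_col)
     VALUES (:colA, :colB, :some_col);'
)
INSERT FOR MISSING UPDATE ROWS
TO OPERATOR ( UPDATE_OPERATOR[1] )
SELECT * FROM OPERATOR ( STAGE_READER[1] );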
Thank you,
Jim