Hi All,
Can anyone please provide a sample TPT or MultiLoad script that loads data from multiple flat files into a Teradata table, and a sample script that upserts (inserts/updates) a Teradata table from a staging table?
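For context, here is a rough sketch of the kind of TPT job I have in mind; every name in it (directory, file pattern, logon values, tables, columns) is a placeholder:

DEFINE JOB load_multiple_files
DESCRIPTION 'Sketch: load every matching flat file into a staging table'
(
  DEFINE SCHEMA stg_schema
  (
    cust_id   VARCHAR(10),
    cust_name VARCHAR(50)
  );

  DEFINE OPERATOR file_reader
  TYPE DATACONNECTOR PRODUCER
  SCHEMA stg_schema
  ATTRIBUTES
  (
    VARCHAR DirectoryPath = '/data/incoming/',
    VARCHAR FileName      = 'cust_*.txt',   /* wildcard picks up all matching files */
    VARCHAR Format        = 'Delimited',
    VARCHAR TextDelimiter = '|',
    VARCHAR OpenMode      = 'Read'
  );

  DEFINE OPERATOR load_op
  TYPE LOAD
  SCHEMA *
  ATTRIBUTES
  (
    VARCHAR TdpId        = 'mytdpid',
    VARCHAR UserName     = 'myuser',
    VARCHAR UserPassword = 'mypassword',
    VARCHAR LogTable     = 'stg_db.stg_customer_log',
    VARCHAR TargetTable  = 'stg_db.stg_customer'
  );

  APPLY ('INSERT INTO stg_db.stg_customer (cust_id, cust_name)
          VALUES (:cust_id, :cust_name);')
  TO OPERATOR (load_op)
  SELECT * FROM OPERATOR (file_reader);
);

For the staging-to-target step, I am picturing a plain SQL MERGE along these lines (again, all names made up):

MERGE INTO tgt_db.customer AS tgt
USING stg_db.stg_customer AS src
  ON tgt.cust_id = src.cust_id   /* must cover the target's primary index */
WHEN MATCHED THEN
  UPDATE SET cust_name = src.cust_name
WHEN NOT MATCHED THEN
  INSERT (cust_id, cust_name) VALUES (src.cust_id, src.cust_name);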
TPT output from ODBC includes Indicator Bytes?
Hi,
I'm trying to transfer data from a MySQL table into a Teradata table. The export uses the ODBC operator and the import uses the LOAD operator.
All records go into the ET table, usually with 2673 (source parcel length incorrect) errors; some fail with 6760 (invalid timestamp field).
Looking at the DataParcel column, it appears that every data record sent to Teradata has an additional 2 bytes added to the front. I'm certain that these two bytes are indicator bytes. They are not a record length (or if they are, they are completely wrong!) or a field length, because the first field in the output is a BIGINT (i.e. a fixed-length field).
Also, interpreting them as indicator bytes and comparing to the source data they match up. I confess that I haven't checked all records, but the first few match up.
So it looks like the data parcel includes indicator bytes, which makes sense because the data may include nulls, but the LOAD operator is not expecting them (and therefore is not telling the DBMS to expect them). There is no "indicator bytes" attribute in the LOAD operator that I can see.
Looking at the TPT LOAD operator documentation, it appears that you have to use the DataConnector operator to handle indicators. Is this correct? (If so, this would seem to be a missing piece of design/functionality in the TPT LOAD operator.)
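For what it's worth, the DataConnector operator does expose an explicit attribute for this when reading or writing files; a minimal sketch (schema and file names assumed):

DEFINE OPERATOR file_writer
TYPE DATACONNECTOR CONSUMER
SCHEMA src_schema
ATTRIBUTES
(
  VARCHAR FileName      = 'export.dat',
  VARCHAR Format        = 'Formatted',
  VARCHAR OpenMode      = 'Write',
  VARCHAR IndicatorMode = 'Y'   /* write the leading indicator bytes so NULLs survive */
);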
I am using TPT 14.10.
Cheers,
Dave
TMM - Replace Data Representation
I previously imported a Logical Model as a Data Representation and mapped some source tables to the LDM. I have since updated the LDM and am trying to re-import it so that it replaces the original LDM DR with the new information.
I tried Import -> Replace Contents to existing DR and Import -> New Data Representation -> Relate this new DR to an existing DR, but both seem to create a new DR rather than replace the existing version, and the mappings I created against the original DR are not copied over to the new DR.
Has anyone been able to get the Import wizard to replace an existing DR with a new version? Thanks
End of CSV File / Fastload
I wrote a FastLoad script that loads my CSV files fine. It looks like the following:
sessions 2;
errlimit 25;
logon xxxx/yyyy,zzzz;
DROP TABLE scsDataImport;
CREATE TABLE scsDataImport( .... );
set record VARTEXT " ";
define
  ....
  file = testEndSequence.CSV;
show;
RECORD 2;
begin loading scsDataImport errorfiles error_1, error_2;
insert into scsDataImport( ..... );
end loading;
logoff;
There is just one thing I don't like. At the end of each CSV file there are two empty lines, which means the load "crashes" with the message "not enough fields in vartext data record number: n". If I remove the empty lines, so that there is just a newline after the last entry, everything works fine and the import is successful.
I know I could edit all the CSV files manually, but that is recurring work and not an elegant solution. I'm sure there is a workaround, but I haven't found it so far. Do you have any suggestions?
Rejected Rows in Teradata
Hi All,
Is there any way to redirect the rejected rows into some other file while loading a Teradata table from a flat file?
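Right now the only way I know to get at them afterwards is to pull the DataParcel column out of the first error table with BTEQ, e.g. (error table and file names are placeholders):

.EXPORT DATA FILE = rejected_rows.dat
/* FastLoad error table 1 keeps the rejected record image in DataParcel */
SELECT DataParcel FROM stg_db.load_err1;
.EXPORT RESET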
Thanks
SQL Assist - CTRL-D not working
I am using version 14.01.0.03 of SQL Assistant. In the query window when I press CTRL-D, instead of commenting out the selected text, it brings up a dialog box titled "Choose Command". One of the options in this box is CommentSelection. When chosen it does comment out the text. In previous versions, the comment was inserted without having to go through this Choose Command box. Is that a thing of the past or have I inadvertently changed a setting somewhere?
TPT logtable
In our program we have the DEFINE clause below, but at execution time it gives an error saying that the log table is not there.
Do we need to pass the log table as a parameter?
DEFINE OPERATOR INPUTLAYOUT
TYPE DATACONNECTOR PRODUCER
SCHEMA INPUTLAYOUT
ATTRIBUTES
(
VARCHAR TdpId = @TargetTdpId
,VARCHAR UserName = @TargetUserName
,VARCHAR UserPassword = @TargetUserPassword
,VARCHAR LogTable = 'TPT_TABLES.PUBLIC_AUTOSYS_TPTLOG'
,VARCHAR FileName = 'cex_shared.dat'
,VARCHAR DirectoryPath = '/home/gssa0/data'
,VARCHAR ErrorTable1 = 'TPT_TABLES.PUBLIC_ERR1'
,VARCHAR ErrorTable2 = 'TPT_TABLES.PUBLIC_ERR2'
,VARCHAR TargetWorkingDatabase = 'TPT_TABLES'
,VARCHAR TargetDatabase = 'TPT_TABLES'
,VARCHAR TargetTable ='TPT_TABLES.PUBLIC_AUTOSYS_TEMP'
,VARCHAR Format = 'Text'
,VARCHAR OpenMode = 'Read'
,VARCHAR IndicatorMode = 'N'
,VARCHAR PrivateLogName = 'Read'
);
This is the output error message:
Teradata Parallel Transporter Version 14.00.00.07
Job log: /opt/teradata/client/14.00/tbuild/logs/cel1_shared-579.out
Job id is cel1_shared-579, running on teraqds4.ca.xxxxxcom
Found CheckPoint file: /opt/teradata/client/14.00/tbuild/checkpoint/cel1_sharedLVCP
This is a restart job; it restarts at step MAIN_STEP.
Teradata Parallel Transporter Load Operator Version 14.00.00.07
$LOAD: private log not specified
Teradata Parallel Transporter INPUTLAYOUT: TPT19006 Version 14.00.00.07
INPUTLAYOUT Instance 1 directing private log report to 'Read-1'.
INPUTLAYOUT: TPT19008 DataConnector Producer operator Instances: 1
INPUTLAYOUT: TPT19003 ECI operator ID: INPUTLAYOUT-11498
INPUTLAYOUT: TPT19222 Operator instance 1 processing file '/home/ptersa0/data/cex_shared.dat'.
$LOAD: TPT10306: Error 5 retrieving attribute 'LogTable'
INPUTLAYOUT: TPT19221 Total files processed: 0.
$LOAD: Total processor time used = '0.01 Second(s)'
$LOAD: Start : Fri Jul 19 12:40:58 2013
$LOAD: End : Fri Jul 19 12:40:58 2013
Job step MAIN_STEP terminated (status 12)
Job cel1_shared terminated (status 12)
Please advise.
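For reference, one conventional arrangement keeps the Load-side attributes (the logon values, LogTable, TargetTable, ErrorTable1/2) on a LOAD operator definition rather than on the DataConnector producer; a sketch reusing the names above:

DEFINE OPERATOR LOAD_TARGET
TYPE LOAD
SCHEMA *
ATTRIBUTES
(
VARCHAR TdpId = @TargetTdpId
,VARCHAR UserName = @TargetUserName
,VARCHAR UserPassword = @TargetUserPassword
,VARCHAR LogTable = 'TPT_TABLES.PUBLIC_AUTOSYS_TPTLOG'
,VARCHAR TargetTable = 'TPT_TABLES.PUBLIC_AUTOSYS_TEMP'
,VARCHAR ErrorTable1 = 'TPT_TABLES.PUBLIC_ERR1'
,VARCHAR ErrorTable2 = 'TPT_TABLES.PUBLIC_ERR2'
,VARCHAR TargetWorkingDatabase = 'TPT_TABLES'
);

while the DATACONNECTOR PRODUCER keeps only the file-side attributes (FileName, DirectoryPath, Format, OpenMode, IndicatorMode, PrivateLogName).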
Zip/GZip Support in TTU 14?
Hello everyone,
I am currently operating on TTU 13.10 and have been reading about TTU 14 ahead of an upgrade I found out we are due to receive. The TPT Reference Manual for version 14, dated June 2012, states that the DataConnector operator now supports GZip and Zip files. I want to check an implicit assumption I am making about this statement: does it imply that the DataConnector will read an entire zipped archive, i.e. a zip file that contains multiple files within it? Or is it assumed that each zip file contains only one file?
I would love to test it out, but I do not have TTU 14 yet and the answer to this question alters my workload for the near future.
Many thanks in advance.
Connecting to Teradata through Perl
Hello,
I am not able to connect to Teradata from my Perl script.
I have installed the DBD-Teradata-1.52 driver, which I downloaded from the internet.
But when I try to connect with
$dbh = DBI->connect('dbi:Teradata:hostname', $user, $pw);
I get the error below:
"perhaps the DBD::Teradata perl module hasn't been fully installed"
Please help!
-Aravind
BTEQ Export not showing column name if no rows are returned
My BTEQ export is working fine.
However, when there are no rows to return, it does not even output the column names; a blank file is generated.
I want the column names in the output file even when no rows are present, and I don't want to hardcode them. Is there an option for exporting at least the column names when no rows are returned?
I am using the options below, as per my requirements:
.set width 5000
.Set echoreq off
.Set Titledashes off
.Set Separator '~'
.Set Format off
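The usual fallback, which I am trying to avoid, is to emit the header as a one-row SELECT of literals, since a SELECT of literals always returns data; a sketch with made-up names:

.EXPORT REPORT FILE = /tmp/employee.txt
/* header row: always returns exactly one row, so the column
   names land in the file even when the real query is empty */
SELECT 'emp_id' (TITLE ''), 'emp_name' (TITLE '');
/* actual query; may return zero rows */
SELECT emp_id (TITLE ''), emp_name (TITLE '')
FROM hr_db.employee;
.EXPORT RESET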
Thanks in advance.
BTEQ
Can someone please help me with a script to create and load a table from an Excel sheet?
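In case it helps, what I have in mind is saving the sheet as CSV and then loading it in BTEQ; a rough sketch with made-up table and column names:

/* table to hold the spreadsheet rows */
CREATE TABLE mydb.employee
(
  emp_id   INTEGER,
  emp_name VARCHAR(50)
)
PRIMARY INDEX (emp_id);

/* load the CSV exported from Excel */
.IMPORT VARTEXT ',' FILE = employees.csv
.QUIET ON
.REPEAT *
USING (c1 VARCHAR(10), c2 VARCHAR(50))
INSERT INTO mydb.employee (emp_id, emp_name)
VALUES (:c1, :c2);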
Thanks in advance.
Thank You,
Mayank
TPT Wizard 14.0 Can't see columns when defining Job Source
Hello,
I'm connecting to an Oracle database using the ODBC DSN type. When specifying my Job Source, I can connect to the source to see a list of tables, but when I select a table, the columns of the table do not appear. I just get the spinning "hourglass".
I tried logging on as a "dba" type user but I got the same result.
Are there specific privileges required on the source system to be able to retrieve the column information?
thanks
TMM - Grid Columns Displayed
Hello,
I have a two-part question regarding the grid columns within Teradata Mapping Manager.
1. Is it possible to configure a default list of grid columns to always be displayed? For instance, in a working data set I can use the "Change Grid Columns Displayed" interface to modify the columns, but after a DR import-relate you have to reconfigure the grid columns of the new working data set to match the existing data set it was related to.
2. Is it possible to reference the standard grid column properties (Modify User, Modify Timestamp, etc.) in the Source Element List template? I tried to add the columns Modify User and Modify Timestamp to the import template, but they are imported as custom columns [i.e. (DE) Modify User (custom)] in the grid.
Thank you for the help
when replacing view with comments from SQL assistant, comments get truncated on database
Hi! In some cases, after I execute REPLACE VIEW statements containing comments in SQL Assistant, the comments are missing from the view definition stored in the data dictionary (DBC.TABLES.REQUESTTEXT column) and from the SHOW VIEW output. The comments are truncated/dropped for some reason, though this behavior is not consistent. I am using TD SQL Assistant 13.10.0.02 with Teradata .NET provider version 13.1.0.2 to connect to the database. Has anyone experienced a similar issue?
64bit TPTAPI Download for Windows Server 2008
Hello,
I am looking to upgrade our current version of the Teradata TPT API from 13.00 to 13.10. Can anyone tell me if these downloads are available?
TPT API 13.10 X8664
TPT Export Operator 13.10 X8664
TPT Load Operator 13.10 X8664
TPT Stream Operator 13.10 X8664
TPT Update Operator 13.10 X8664
Thanks in advance.
TPUMP process
Hi all,
Can you please comment on implementations/solutions for the requirements below?
This is our current process:
1) We get data files every day from the source, which we load through the TPump utility into a work table.
2) If any records get inserted into the TPump error table, Teradata automatically aborts that TPump operation.
3) If there are no records in the error table and TPump succeeds, we insert-select all the records from the work table into the main table.
New requirement:
Even if some erroneous records are present in a given day's data file (e.g. date/time or NOT NULL issues):
a) We don't want the TPump operation to be aborted; instead we would like to capture those records into a separate error table holding a complete copy of each erroneous record (unlike the E1/E2 tables of TPump, which capture only the sequence number of the erroneous record), so that they are available for future investigation.
b) All the good records should be processed into the work table without any abort in TPump.
Solutions we are considering:
1) Increasing the ERRLIMIT in TPump, but we would still need to capture a complete copy of the erroneous records, which does not seem possible there.
2) Defining all the columns in the TPump work table as nullable VARCHAR so that nothing is rejected during the TPump load and every record lands in the work table. From the work table we can then separate the good records from the erroneous ones by checking nullability and date/time validity during the update/insert operations in a BTEQ script.
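A rough sketch of that BTEQ filtering step (table names, column names, and the date check are all made up):

/* good records: pass the NOT NULL checks and a crude date-pattern test */
INSERT INTO main_db.orders (order_id, order_dt)
SELECT CAST(order_id AS INTEGER),
       CAST(order_dt AS DATE FORMAT 'YYYY-MM-DD')
FROM   work_db.orders_wrk
WHERE  order_id IS NOT NULL
AND    order_dt LIKE '____-__-__';
/* note: the LIKE test is crude; a value like '2013-13-40' would still fail the CAST */

/* error records: a complete copy of every row that failed a check */
INSERT INTO main_db.orders_err
SELECT *
FROM   work_db.orders_wrk
WHERE  order_id IS NULL
OR     order_dt IS NULL
OR     order_dt NOT LIKE '____-__-__';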
Please let me know your suggestions on the above.
thanks in advance!
cheers!
Nishant
TPT - Instances Vs Sessions
Greetings Experts,
What is the basic difference between instances and sessions in TPT?
Suppose I declare a MaxSessions attribute of 10 for an operator on a 10-AMP system, and the consumer operator uses all 10 sessions (the producer is able to keep the consumer busy).
Given those 10 sessions, if I increase the number of consumer instances from 1 to 2, each using 5 sessions, what kind of advantage is gained, such that the work done by 2 instances with 5 sessions each >= the work done by 1 instance with 10 sessions?
I have read somewhere that multiple instances are used when we read from multiple sources. Are multiple instances only for multiple data sources? I have seen scripts that use multiple instances for a single data source.
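For concreteness, my understanding is that instance counts are requested with the bracket notation in the APPLY statement, while MaxSessions caps the sessions shared across all instances of that operator; a sketch with assumed operator names:

APPLY ('INSERT INTO tgt_db.t1 (c1) VALUES (:c1);')
TO OPERATOR (load_op[2]                       /* two consumer instances...    */
     ATTRIBUTES (INTEGER MaxSessions = 10))   /* ...sharing at most 10 sessions */
SELECT * FROM OPERATOR (file_reader[3]);      /* three producer instances     */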
Does TPT allow parallel extraction from different data sources within a single script? (I know the data is not landed in files, but carried in the data stream.) What kind of performance advantage does TPT give over achieving the same with the standalone load utilities and scripting? (Say I have to extract data from 3 sources with 3 producers: I guess these would have to run in sequence as multiple job steps in TPT, and could also be done as multiple loader utility jobs where applicable.) Can the extraction be done in parallel for 3 data sources in a single TPT script?
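On the last point, the form I am picturing is the UNION ALL construct, where several producers with identical schemas feed one consumer within a single APPLY; a sketch (operator names assumed):

APPLY ('INSERT INTO tgt_db.t1 (c1, c2) VALUES (:c1, :c2);')
TO OPERATOR (load_op[1])
SELECT * FROM OPERATOR (reader_src1[1])
UNION ALL
SELECT * FROM OPERATOR (reader_src2[1])
UNION ALL
SELECT * FROM OPERATOR (reader_src3[1]);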
Mload Vs Fastload in loading empty target table
Greetings Experts,
Say I have an empty target table that is ideal to load with FastLoad.
1) If I load the target table with FastLoad versus MultiLoad, which would be faster and why?
2) If the target table is a fallback table, is the fallback copy built simultaneously while the table is loading, or after completion?
I have read somewhere that FastLoad is faster than MultiLoad under the above conditions. I suppose it's partly due to the fallback protection MultiLoad provides for its ET, UV, and ML work tables, which may take a bit longer. (Is fallback maintained for the auxiliary tables in FastLoad?)
3) If a NUSI is defined on the target, which will take some time to build, is it created only after the load completes or in parallel while the table is loading?
Thank you for your valuable insights.
Mload through Informatica - errorlimit, errorcode 6705, restore tgt table if auxiliary table dropped in application phase
Greetings Experts,
We have some jobs that use an MLoad connection in Informatica. If the number of rows in the ET table is >= the specified error limit, will the session be forced to fail?
We have a job that often fails with "External Loader Error", and we can see rows in the ET table with error code 6705 on a column "abc". I used TRANSLATE_CHK on the column "abc" and got a row with a null value for that column. The data is the same in both PROD and QA for the column "abc", but I'm not sure why it fails in QA and not in PROD. I know it's hard to tell from this little information; could some other settings differ, and if so, can you please name them?
How is the control file generated through Informatica? Is it through the API on the Teradata side?
If one of the auxiliary tables of an MLoad target table (not a staging table) is accidentally dropped in the application phase, it is not possible to restart or rerun the job. Can the table be restored to its original form using a before-image permanent journal, if enabled? If not, how can we restore the table to its original state when the backup copy is out of date relative to the current data (say 2-3 days old, with major changes every day)?
Thank you for your valuable insights.