TPT read fixed file and convert space to null value
Hi,
I am looking for a TPT syntax, but I'm not sure if what I want is possible.
I have an input fixed-format file with two columns, a VARCHAR(15) and a VARCHAR(8); the second column can contain 8 space characters, which should be converted to a NULL value.
In the SCHEMA or OPERATOR block, is it possible to define this conversion, or is it only possible in the APPLY statement with a CASE WHEN expression?
DEFINE SCHEMA FILE_xxx
(
aaa VARCHAR(15)
, bbb VARCHAR(8)
);
DEFINE OPERATOR FILE_xxx_READER
DESCRIPTION ''
TYPE DATACONNECTOR PRODUCER
SCHEMA FILE_xxx
ATTRIBUTES
(
VARCHAR DirectoryPath = @DirectoryPath,
VARCHAR FileName = @FileName,
VARCHAR Format = 'Text',
VARCHAR TrimColumns = 'Both'
);
Thanks a lot for any response.
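One approach that may avoid the APPLY-level CASE (an untested sketch: it assumes your TPT release supports the DataConnector NullColumns attribute, which, together with TrimColumns, turns fields that trim down to zero length into NULLs):

DEFINE OPERATOR FILE_xxx_READER
DESCRIPTION 'Read fixed file, trim blanks, null out empty fields'
TYPE DATACONNECTOR PRODUCER
SCHEMA FILE_xxx
ATTRIBUTES
(
VARCHAR DirectoryPath = @DirectoryPath,
VARCHAR FileName = @FileName,
VARCHAR Format = 'Text',
VARCHAR TrimColumns = 'Both',
VARCHAR NullColumns = 'Y'
);

If that attribute is not available in your version, the fallback is indeed the APPLY statement, e.g. CASE WHEN bbb = '' THEN NULL ELSE bbb END (the 8 spaces trim to an empty string first).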
TDCH-TPT Interface--For loading into Hadoop
I need to ingest a large volume of data from Teradata into Hadoop using TPT. I saw in the TPT documentation that we can achieve this using the TDCH-TPT interface. I would like to know the following about the process (a sketch of the relevant operator attributes follows the list):
- Does it follow the same process and extract data block by block?
- Does it utilize all the nodes in the cluster while loading into Hadoop?
- In that case, does TPT need to be installed on all the nodes in the Hadoop cluster?
- For a single-table export to Hadoop, are both the read (Teradata) and write (Hadoop) processes multithreaded when using the TDCH-TPT interface?
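For reference, the TDCH integration is configured through Hadoop-specific attributes on the DataConnector operator. The sketch below is unverified and based on the TPT 15.x documentation; every attribute name and value here (HadoopHost, HadoopJobType, the target names) is an assumption to check against your release:

DEFINE OPERATOR HDFS_WRITER
DESCRIPTION 'DataConnector consumer handing rows to TDCH'
TYPE DATACONNECTOR CONSUMER
SCHEMA *
ATTRIBUTES
(
VARCHAR HadoopHost = 'default', /* assumption: name-node alias */
VARCHAR HadoopJobType = 'hive', /* assumption: or 'hdfs' */
VARCHAR HadoopTargetDatabase = 'target_db', /* hypothetical */
VARCHAR HadoopTargetTable = 'target_tbl' /* hypothetical */
);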
In List Blues - How to Make Large In-List Queries More Efficient
I work for a big company with a lot of Business Objects users. One of our most popular universes points to a Teradata environment. This universe is by far our best BI resource: it has a ton of data and delivers it to the users lightning fast. However, some of our users like to use a prompt that allows them to input a tremendous number of values as an "in list" where clause.
While Teradata does a tremendous job on all other queries, it seems to choke on large "in list" queries. The CPU usage for just a few of these queries is sometimes equivalent to hundreds of other queries. I've been told that this is because Teradata is not able to make use of parallel processing for these types of queries.
Our goal is to improve the efficiency of these searches with minimal impact on the users.
So, one possible solution is to create a global temp table in Teradata and fill it with the prompt values from the user. The temp table would support the pertinent joins, which should, theoretically, improve performance by allowing Teradata to make use of its parallel processing. Keep in mind that the majority of queries use the same object (column) as the search criterion, and this object happens to be the indexed unique key, so this solution could work well.
The problem is that Business Objects does not really have a way to write data to a database. So one of our lines of thought is to use a BTEQ script to load the values into the global temp table.
Is it possible to create an if/then construct using BTEQ on the Teradata database that checks for a condition like:
If user = user X and object X is in the list "Abc, ABd......", then insert into the global temp table,
then run the query and deliver the results to the customer. Can this be done on the database, or does it have to be done in Business Objects?
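For what it's worth, a minimal sketch of the GTT approach (all object names are hypothetical; note that BTEQ itself has only limited conditional logic, via .IF ERRORCODE/ACTIVITYCOUNT, so the per-user branching would have to live in whatever generates the script):

.LOGON tdpid/bo_user,password;
/* One-time DDL: a global temp table keyed on the search column */
CREATE GLOBAL TEMPORARY TABLE sandbox.gtt_prompt_vals
( key_val VARCHAR(20) )
PRIMARY INDEX ( key_val )
ON COMMIT PRESERVE ROWS;

/* Per-run: load the prompt values... */
INSERT INTO sandbox.gtt_prompt_vals VALUES ('Abc');
INSERT INTO sandbox.gtt_prompt_vals VALUES ('ABd');

/* ...then join to the GTT instead of using a giant IN list */
SELECT t.*
FROM proddb.big_table t
JOIN sandbox.gtt_prompt_vals g
ON t.key_col = g.key_val;
.LOGOFF;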
Determine #sessions to set for load / exp jobs
TPT - Scripting
Hello Folks
I have a requirement to move 90 tables totalling about 200 GB from one Teradata system to another. While I know there are different options, I plan to use TPT to move them. All the column data has to be moved.
TPT requires a schema layout to be defined. Is there a way to do this without writing the schema layout manually? Has anyone developed a reusable script for moving data, where the script reads one table, moves its data, then moves on to the second table, and so on?
thanks
KN
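Two things worth looking at (a sketch, assuming TPT 14.x or later; system names and credentials are placeholders). The simplified TPT syntax with the $EXPORT/$LOAD operator templates derives the schema from the tables themselves, and the TPT Easy Loader command, tdload, wraps a whole table-to-table copy into one invocation, so a loop over a 90-line table list gives the reusable script described above:

tdload --SourceTdpId prod_td --SourceUserName etl_user --SourceUserPassword xxx --SourceTable mydb.table_001 --TargetTdpId dr_td --TargetUserName etl_user --TargetUserPassword xxx --TargetTable mydb.table_001 job_table_001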
Float data type export using Fast Export
Hello everyone,
I am trying to export a table with a FLOAT column using the FastExport utility. As we know, Teradata rounds off FLOAT values with more than 15 significant digits; for instance, 1231231231234585.1111 will be stored as 1,231,231,231,234,590.00.
I am using the expression TRIM(CAST((columnName (FORMAT '-Z(17)9.99')) AS CHAR(100))) to pull the data in the FastExport script, but after exporting, the output comes out as "1231231231234585.11" instead of the rounded value stored in the table, i.e. 1,231,231,231,234,590.00.
Also, if a value needs more than 17 integer digits, it no longer fits the format mask and the data is getting filtered out.
Can anyone suggest any workaround for this scenario?
Regards,
Indranil Roy
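One workaround that may be worth trying (a sketch; the DECIMAL(38,2) cast, the widened mask, and the table/column names are assumptions to adjust): cast the FLOAT to a wide DECIMAL first, so the character conversion is driven by the decimal value, and size the format mask and CHAR generously so nothing overflows.

SELECT TRIM(CAST((CAST(columnName AS DECIMAL(38,2)) (FORMAT '-Z(35)9.99')) AS CHAR(45)))
FROM mydb.mytable;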
FOR EACH clause in Teradata SQL
I have the statement below:
INSERT INTO P_WENL.RR_tmpParque2
SELECT
Prqe.dt_foto_lnha
,Pssa.cdCliente
,Pssa.cdGrupo
,Pssa.nmCliente
,Pssa.nmGrupo
,Pssa.dsRazaoSocial
,Pssa.nmGerenteSenior
,Pssa.nmDiretor
,Prqe.id_uf
,TpMtrl.ds_tipo_mtrl
,Area.nm_area_rgnl
,Plno.ds_plno
,EstdLnha.ds_estd_lnha
,CAST(CAST(( Prqe.dt_prmr_atvc_lnha (FORMAT 'YYYYMM') ) AS VARCHAR(6)) AS INTEGER) AS prmr_atvc
,COUNT(*) AS qtde
FROM P_Viedb.vw_fat_prqe_lnha_dspt Prqe
INNER JOIN P_WENL.RR_tmpPessoa2 Pssa
ON CASE WHEN Prqe.Id_Pssa_rspl_cnta <= 0 THEN Prqe.Id_Pssa ELSE Prqe.Id_Pssa_rspl_cnta END = Pssa.Id_Pssa
LEFT JOIN P_Viedb.Vw_Dim_Crtr Crtr
ON Prqe.Id_Crtr = Crtr.Id_Crtr
LEFT JOIN P_Viedb.Vw_Dim_Tipo_Mtrl TpMtrl
ON Prqe.Id_Tipo_Mtrl_Srvc = TpMtrl.Id_Tipo_Mtrl
LEFT JOIN P_Viedb.Vw_Dim_Area_Rgnl Area
ON Prqe.Id_Area_Rgnl = Area.Id_Area_Rgnl
LEFT JOIN P_Viedb.Vw_Dim_Tipo_Crtr TpCrtr
ON Prqe.Id_Tipo_Crtr = TpCrtr.Id_Tipo_Crtr
LEFT JOIN P_Viedb.Vw_Dim_Sist_Pgto SisPgto
ON Prqe.Id_Sist_Pgto = SisPgto.Id_Sist_Pgto
LEFT JOIN P_Viedb.Vw_Dim_Plno Plno
ON Prqe.Id_Plno = Plno.Id_Plno
LEFT JOIN P_Viedb.Vw_Dim_Estd_Lnha EstdLnha
ON Prqe.Id_Estd_Lnha = EstdLnha.Id_Estd_Lnha
WHERE Prqe.dt_foto_lnha = '2016-01-31'
AND Prqe.Fl_Prqe_Ofcl = 1
GROUP BY Prqe.dt_foto_lnha
,Pssa.cdCliente
,Pssa.cdGrupo
,Pssa.nmCliente
,Pssa.nmGrupo
,Pssa.dsRazaoSocial
,Pssa.nmGerenteSenior
,Pssa.nmDiretor
,Prqe.id_uf
,TpMtrl.ds_tipo_mtrl
,Area.nm_area_rgnl
,Plno.ds_plno
,EstdLnha.ds_estd_lnha
,CAST(CAST(( Prqe.dt_prmr_atvc_lnha (FORMAT 'YYYYMM') ) AS VARCHAR(6)) AS INTEGER)
As you can see in the WHERE clause, I have a date as a condition. I need to run the same statement for each month of 2015.
I was thinking of using a FOR EACH statement, but I didn't find anything about it.
How can I do this without manually changing the date?
Thanks
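Teradata SQL has no FOR EACH construct, but if dt_foto_lnha always holds month-end snapshot dates, one option (a sketch, assuming the standard sys_calendar view is available) is to replace the date equality with a subquery that yields all twelve 2015 month-end dates, so a single INSERT covers the whole year:

WHERE Prqe.dt_foto_lnha IN (
SELECT MAX(calendar_date)
FROM sys_calendar.CALENDAR
WHERE year_of_calendar = 2015
GROUP BY month_of_year )
AND Prqe.Fl_Prqe_Ofcl = 1

Because dt_foto_lnha is already in the SELECT and GROUP BY, the results stay broken out by month.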
How read consistency is maintained during Teradata Fast Export
Hi guys,
I would like to know how read consistency is managed by Teradata FastExport. When it reads data from the database, does it automatically lock the rows of the table before the export process starts? Is there any special option that needs to be used to maintain read consistency? I did find something like a LOCKING modifier in the FastExport reference guide, but its usage is not clear.
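For reference: FastExport places a utility-level read lock on the referenced tables when the SELECT is submitted, which is what gives the export a consistent snapshot. The LOCKING modifier is only needed to change that default, e.g. to downgrade to an ACCESS lock (dirty read) so that concurrent writers are not blocked. A minimal sketch (names hypothetical):

.BEGIN EXPORT SESSIONS 20;
.EXPORT OUTFILE /data/extract.dat MODE RECORD FORMAT TEXT;
LOCKING TABLE mydb.mytable FOR ACCESS
SELECT * FROM mydb.mytable;
.END EXPORT;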
Installing Teradata ODBC on AIX
Hi,
I am in need of installing Teradata ODBC on AIX 7.1.
I have downloaded the AIX Teradata ODBC file. According to its readme file:
Prior to installing the "ODBC Driver for Teradata", the TTU dependency products that are listed
in section 3.3 must be installed first and in the order listed below.
TTU Products
============
1. Shared common components for Internationalization for Teradata (tdicu)
2. Teradata GSS client package (TeraGSS_aix_power)
It appears that both of the dependencies can be installed as part of the Teradata Tools and Utilities (TTU) package. However, I have only been able to locate the Windows TTU download. Its readme seems to offer hope that the Windows TTU can install the necessary packages for AIX, but its installation .exe files are DOS executables:
TTUExpress_Windows.15.10.05.readme.txt
----------------------------
The 15.10.05 release now supports installing TTU products in any location for all OSes (Windows, OS X and UNIX).
I remain confused. How do I download and install the required components for the ODBC AIX driver?
Many thanks,
-- John
SQL Error highlighting w/.Net connector not working (SQL Assistant v15)
I've got two installs of SQL Assistant, and neither is highlighting the error line when running queries; I'm using the .Net connector. One of them is the latest version, a clean install on a new PC.
A coworker has it working, and he's on v14 or v15, I believe. Is there some setting that I should check to enable this feature?
Windows(x64)
DB Version: Teradata 15.10.01.07
Teradata.Net 15.0.0.0 (and also Teradata.Net 15.11.0.0)
SQL Assistant: 15.00.0.04 (and also 15.10.1.4)
See what I get here:
http://imgur.com/a/je23V
Cannot download Presto JDBC/ODBC client package
Hi, I cannot download the client package from the following link; I can only download a 6-byte file. Can somebody help me? Thank you in advance.
http://www.teradata.com/Presto-Download-Get-Started/?LangType=1033&LangSelect=true
/Xiaoyong.
Ctrl-A delimiter with TPT Export.
Hi,
I want to use Ctrl-A as the delimiter for TPT exports, and I would like to know how to specify this in the TPT script.
Thanks & Regards,
Srivignesh KN
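A sketch that should work (assuming a DataConnector consumer writing delimited text; the TextDelimiterHEX attribute takes the delimiter as hex bytes, and Ctrl-A is 0x01; directory and file names are placeholders):

DEFINE OPERATOR FILE_WRITER
DESCRIPTION 'Write Ctrl-A delimited output'
TYPE DATACONNECTOR CONSUMER
SCHEMA *
ATTRIBUTES
(
VARCHAR DirectoryPath = @DirectoryPath,
VARCHAR FileName = @FileName,
VARCHAR Format = 'Delimited',
VARCHAR TextDelimiterHEX = '01'
);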
Export with header using TPT
Hi,
I am using TPT export to export data from TD to a file and want to include column headers in the generated file.
I would like to know if there are any attributes included with a newer version of TPT that we can use for this. I am using TPT version 14.10.
Regards,
Srivignesh KN
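A common workaround when no header attribute is available (a sketch; the column names, the pipe delimiter, and the use of ORDER BY in the export SELECT are assumptions to verify against your setup): union a literal header row into the export SELECT and force it to sort first.

SELECT line FROM (
SELECT 0 AS ord, 'eno|ename|sal' AS line
UNION ALL
SELECT 1, TRIM(eno) || '|' || ename || '|' || TRIM(sal)
FROM mydb.mytable
) t
ORDER BY ord;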
Delimiter issue in Fastload
Hi All,
When I was loading a CSV file into a staging table using FastLoad, I got the issue below.
In my script the delimiter is ",", but a few fields in the CSV file have a comma as part of the data, so the data was not loading into the respective columns of the table.
e.g.
Data available in CSV file is as mentioned below
eno,ename,sal
"101","narsi", 1000
"102","varchan,reddy", 2000
"103","madhu", 5000
Regards,
Narsi
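Assuming a reasonably recent TTU (I believe FastLoad's VARTEXT mode gained quote support around 14.10; verify for your release), declaring the quote character should let the embedded commas through (a sketch):

SET RECORD VARTEXT "," QUOTE OPTIONAL '"';

On older releases, the usual fallbacks are pre-processing the file or loading through TPT's DataConnector operator, which handles quoted fields via its QuotedData attribute.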
Not-sign delimiter issue in Fastload
Hi All,
In my flat file the delimiter is "¬". This file needs to be loaded in UTF-8 format.
Can any one of you guide me on how to run the FastLoad script in UTF-8 mode from the Windows command prompt?
Regards,
Narsi
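A sketch of what may work (assumptions: the -c option sets FastLoad's session character set, the script and data files are actually saved as UTF-8, and the not sign is a multi-byte character in UTF-8, so the session charset must match the file encoding):

fastload -c UTF8 < load_script.fl

with the delimiter declared in the script as usual:

SET RECORD VARTEXT "¬";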
HELP cannot display long field names
Does anyone know how to get HELP in SQL Assistant to display field names longer than 30 characters?
I use Teradata 15, so field names can be up to 128 characters long. However, when I use HELP to list the fields, the long field names are all truncated to 30 characters.
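A workaround that avoids HELP entirely (a sketch; database and table names are placeholders): query the DBC "V" views, which return object names at their full 128-character width.

SELECT ColumnName, ColumnType, ColumnLength
FROM DBC.ColumnsV
WHERE DatabaseName = 'mydb'
AND TableName = 'mytable'
ORDER BY ColumnId;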
Queries for Teradata Fast Export Utility
We need to offload a huge volume of data from a Teradata table to a network-attached filesystem using the FastExport utility.
The database will be a standalone server, while the FastExport script will be executed from the edge node of a Hadoop cluster.
The data will be offloaded onto the local file system of the edge node of the cluster. I have the following concerns with respect to this job:
- Do I need to install the FastExport utility on the database server or on the edge node, i.e. where the script will be executed?
- Performance tuning: according to the FastExport reference documentation, the maximum number of sessions is the same as the number of AMPs. Are there any other ways to fine-tune FastExport's performance?
- What should the CPU and memory configuration of the edge node be to get the maximum performance from the export?
Regards,
I Roy
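On the first point: FastExport is a client utility, so it runs wherever the script runs, i.e. it (and its TTU dependencies) only needs to be installed on the edge node. For session tuning, a minimal script skeleton showing where the session count is set (all names hypothetical):

.LOGTABLE mydb.fexp_log;
.LOGON tdpid/user,password;
.BEGIN EXPORT SESSIONS 32;
.EXPORT OUTFILE /mnt/nas/extract.dat MODE RECORD FORMAT TEXT;
LOCKING TABLE mydb.big_table FOR ACCESS
SELECT * FROM mydb.big_table;
.END EXPORT;
.LOGOFF;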
MLOAD APPLY WHERE condition which is not an equality
I have a very simple working MLOAD and have a new requirement to block some data from being INSERTed. I see many examples where the WHERE of an APPLY statement is specified as an equality (=), and I am sure I've done that successfully in the past, but my new requirement is to skip records based on 'not equal to'.
Layout of file:
.LAYOUT INFILE_LAYOUT;
.FIELD INP_STR * INTEGER ;
.FIELD INP_ITM * INTEGER ;
Current IMPORT statement
.IMPORT INFILE INSERTS
LAYOUT INFILE_LAYOUT
APPLY INSERT
;
I have attempted all of the following variations, and every one of them causes all records to fail the "Apply conditions satisfied" check:
0017 .IMPORT INFILE INSERTS
LAYOUT INFILE_LAYOUT
APPLY INSERT
WHERE INP_STR <> 10108
;
0017 .IMPORT INFILE INSERTS
LAYOUT INFILE_LAYOUT
APPLY INSERT
WHERE INP_STR <> '10108'
;
0017 .IMPORT INFILE INSERTS
LAYOUT INFILE_LAYOUT
APPLY INSERT
WHERE INP_STR ¬= 10108
;
0017 .IMPORT INFILE INSERTS
LAYOUT INFILE_LAYOUT
APPLY INSERT
WHERE INP_STR NOT IN (10108)
;
0017 .IMPORT INFILE INSERTS
LAYOUT INFILE_LAYOUT
APPLY INSERT
WHERE INP_STR < 10000
;
All the attempted filters skip all the records. (I have not yet put data in the file for the entity to be filtered.)
. IMPORT 1 Total thus far
. ========= ==============
Candidate records considered:........ 4659129....... 4659129
Apply conditions satisfied:.......... 0....... 0
Candidate records not applied:....... 4659129....... 4659129
Candidate records rejected:.......... 0....... 0
I've confirmed the data should satisfy all of the conditions I have tried. Example data follows:
File-AID - Browse - T01.SEH.TD3
COMMAND ===>
store item FILLER
4/BI 4/BI 1/AN
(1-4) (5-8) (9-9)
1---------- 2---------- 3-------
********************************
1802 105472
866 551254528
482 551255040
1080 551255808
2136 105472
MultiLoad Utility Release MLOD.15.10.00.002
Platform MVS
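One thing that may be worth ruling out (an unverified guess from the File-AID display, which shows a 1-byte AN field in position 9 that the .LAYOUT never declares): the layout describes an 8-byte record while the file holds 9-byte records, and a length mismatch like that can make every record miss the APPLY condition. Declaring the extra byte with .FILLER would make the layout match the data:

.LAYOUT INFILE_LAYOUT;
.FIELD INP_STR * INTEGER ;
.FIELD INP_ITM * INTEGER ;
.FILLER INP_FILL * CHAR(1) ;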
Teradata Parallel Transporter - error with automatic schema
Using Teradata Parallel Transporter with the simplified syntax, we get the following error:
TPT02638: Error: Conflicting data length for column(3) - "sr_abo_netto_do_oferty". Source column's data length (16) Target column's data length (8).
Source table:
CREATE SET TABLE DB_APL_CM_TEMP.tt_konwersje_oferty ,
NO FALLBACK, NO BEFORE JOURNAL, NO AFTER JOURNAL,
CHECKSUM = DEFAULT, DEFAULT MERGEBLOCKRATIO
(
oferta_id INTEGER,
linia VARCHAR(10) CHARACTER SET LATIN NOT CASESPECIFIC,
sr_abo_netto_do_oferty DECIMAL(20,2),
karencja INTEGER,
oferta_kat VARCHAR(2) CHARACTER SET LATIN NOT CASESPECIFIC,
sr_subsydium DECIMAL(20,2),
data_wstawienia DATE FORMAT 'YY/MM/DD',
oferta_nazwa VARCHAR(60) CHARACTER SET UNICODE NOT CASESPECIFIC,
abo_brutto DECIMAL(20,2),
nr_oferty BYTEINT)
PRIMARY INDEX ( oferta_id );
Target table:
CREATE SET TABLE DB_APL_CM_TEMP.tt_konwersje_oferty ,
NO FALLBACK, NO BEFORE JOURNAL, NO AFTER JOURNAL,
CHECKSUM = DEFAULT, DEFAULT MERGEBLOCKRATIO
(
oferta_id INTEGER,
linia VARCHAR(10) CHARACTER SET LATIN NOT CASESPECIFIC,
sr_abo_netto_do_oferty DECIMAL(20,2),
karencja INTEGER,
oferta_kat VARCHAR(2) CHARACTER SET LATIN NOT CASESPECIFIC,
sr_subsydium DECIMAL(20,2),
data_wstawienia DATE FORMAT 'YY/MM/DD',
oferta_nazwa VARCHAR(60) CHARACTER SET UNICODE NOT CASESPECIFIC,
abo_brutto DECIMAL(20,2),
nr_oferty BYTEINT)
PRIMARY INDEX ( oferta_id );
Script:
SET SelectStmt = 'LOCKING TABLE DB_APL_CM_TEMP.tt_konwersje_oferty FOR ACCESS SELECT * FROM DB_APL_CM_TEMP.tt_konwersje_oferty ';
SET SourceTdpId = 'dbc';
SET SourceUserName = 'QA_ROOT';
SET SourceUserPassword = '********';
/* LOAD */
SET TargetTable = 'DB_APL_CM_TEMP.tt_konwersje_oferty';
SET TargetTdpId = 'dbi';
SET TargetUserName = 'QA_ROOT';
SET TargetUserPassword = '********';
/* LOAD SPECIFIC */
SET LogTable = 'QA_ROOT_TPP_WD.LT_tt_konwersje_oferty';
SET ErrorTable1 = 'QA_ROOT_TPP_WD.ET_tt_konwersje_oferty';
SET ErrorTable2 = 'QA_ROOT_TPP_WD.UV_tt_konwersje_oferty';
USING CHAR SET ASCII
DEFINE JOB TransferData
(
APPLY $INSERT TO OPERATOR ( $LOAD )
SELECT * FROM OPERATOR ( $EXPORT );
);
Teradata Parallel Transporter Version 15.00.00.00
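The lengths in the message line up with a known cause: DECIMAL(20,2) is a 16-byte decimal, but by default the Export operator returns decimals with at most 18 digits, i.e. 8 bytes, which is exactly the conflict reported for column 3. The usual fix (hedged: verify the exact job-variable name for the 15.00 templates) is to raise MaxDecimalDigits in the job variables:

SET MaxDecimalDigits = 38;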
TPT with JDBC to Kafka
I can get TPT working from the Teradata utilities on Linux, but I have to write a massive script to specify each column name and data type.
I'm trying to get it working from Kafka's JDBC connector, so I can just tell it to pull in data from a table incrementally.
Is it possible to use JDBC with TPT? Or do I just use JDBC with FastExport, and is that as good as TPT in terms of speed?
Do I just need to specify TYPE=FASTEXPORT in the connection string?
thanks,
imran
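For what it's worth: TPT and JDBC are separate client interfaces, so the Kafka JDBC source connector talks to Teradata through the Teradata JDBC driver directly, and TYPE=FASTEXPORT is indeed a Teradata JDBC URL option that switches eligible large result sets to the FastExport protocol. A sketch of a Kafka Connect source config under those assumptions (host, credentials, table, and column names are placeholders):

name=teradata-incremental-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:teradata://tdhost/DATABASE=mydb,TYPE=FASTEXPORT
connection.user=etl_user
connection.password=secret
mode=incrementing
incrementing.column.name=row_id
table.whitelist=mytable
topic.prefix=td-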