Channel: Teradata Downloads - Tools

MLOAD will not read in null timestamps


I am using MLoad to load a large data set, but none of the data loads because of error 6760 ("invalid timestamp").

One TIMESTAMP(0) column is read in perfectly. A second column is also TIMESTAMP(0), but it contains no values at all, and it is this entirely empty column that raises the 6760 error. How can I read in an entire column of null timestamps without getting this error?
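A common workaround (a sketch, assuming the input file carries the timestamp as text and an empty field should load as NULL; the layout, field, and table names here are hypothetical) is to read the column as VARCHAR and null it explicitly with NULLIF in the .FIELD definition, so an unparseable empty string never reaches the TIMESTAMP(0) column:

.LAYOUT input_layout;
.FIELD in_id       * VARCHAR(10);
.FIELD in_ts_good  * VARCHAR(19);
.FIELD in_ts_empty * VARCHAR(19) NULLIF in_ts_empty = '';  /* empty -> NULL */

.DML LABEL ins_dml;
INSERT INTO mydb.mytable
( id_col, ts_good, ts_empty )
VALUES
( :in_id, :in_ts_good, :in_ts_empty );  /* NULL is accepted by TIMESTAMP(0) */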


How to Trim multiple characters from a string


Hello,

I found this code on your forum but need help to tweak it a bit. My list has two delimiters, a comma (,) and a pipe (|) (see below).

1580046|58421,1580047|58421,1580048|58421,1580049|58421,1580050|58421,1580051|58421,1580052|58421,1580053|58421,1580054|58421,1580055|58421,1580057|58421,1580058|58421,1580059|58421,1580060|58421,1580061|58421,1580062|58421,1580063|58421,1580064|58421,1*

I need the data before the (|) and want to omit the rest, so the final rows should look like the list below. Any help tweaking the code (shown after the list) would be appreciated. Thanks.

1580046
1580047
1580048
1580049
1580050
1580051
1580052
1580053
1580054
1580055
1580057
1580058
1580059
1580060
1580061
1580062
1580063
1580064

 

WITH RECURSIVE SPLIT_ONE_TO_MANY (POS,SEQ, NEW_STRING, REAL_STRING) AS
(
SELECT
  0,0, CAST('' AS VARCHAR(100)),TRIM( order_quesn_list)
FROM cl_otl_1
where  cl_otl_1.OTL_ID ='1509136706'
UNION ALL
SELECT
  CASE WHEN POSITION('|' IN REAL_STRING) >0 
    THEN POSITION('|' IN REAL_STRING)
    ELSE CHARACTER_LENGTH(REAL_STRING)
  END DPOS,
  SEQ + 1,
  TRIM( both '|' FROM SUBSTR(REAL_STRING, 0, DPOS )),
  TRIM(SUBSTR ( REAL_STRING, DPOS +1 ))
FROM SPLIT_ONE_TO_MANY
WHERE DPOS > 0 
)
SELECT *
FROM SPLIT_ONE_TO_MANY
WHERE SEQ > 0 ;
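One way to tweak it (a sketch, assuming every comma-separated item has the form number|number and only the part before the pipe is wanted; the table and column names are kept from the original) is to split on the comma instead of the pipe, then strip everything from the pipe onward in the final SELECT:

WITH RECURSIVE SPLIT_ONE_TO_MANY (POS, SEQ, NEW_STRING, REAL_STRING) AS
(
SELECT
  0, 0, CAST('' AS VARCHAR(100)), TRIM(order_quesn_list)
FROM cl_otl_1
WHERE cl_otl_1.OTL_ID = '1509136706'
UNION ALL
SELECT
  CASE WHEN POSITION(',' IN REAL_STRING) > 0
    THEN POSITION(',' IN REAL_STRING)
    ELSE CHARACTER_LENGTH(REAL_STRING)
  END DPOS,
  SEQ + 1,
  TRIM(BOTH ',' FROM SUBSTR(REAL_STRING, 0, DPOS)),
  TRIM(SUBSTR(REAL_STRING, DPOS + 1))
FROM SPLIT_ONE_TO_MANY
WHERE DPOS > 0
)
SELECT CASE WHEN POSITION('|' IN NEW_STRING) > 0
         THEN SUBSTR(NEW_STRING, 1, POSITION('|' IN NEW_STRING) - 1)
         ELSE NEW_STRING
       END AS BEFORE_PIPE
FROM SPLIT_ONE_TO_MANY
WHERE SEQ > 0;

The recursion walks the list one comma at a time; the outer CASE keeps only the digits before each pipe (an item without a pipe, such as a truncated trailing fragment, is returned as-is).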

 

 


SAS to Teradata


Hi, I have a SAS expression like this:
prxmatch(prxparse('~^abcd_\d{4}_\w{3}\d{3}$~o'), trim(external_id))

How can I write this in Teradata?
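One possible translation (a sketch, assuming Teradata 14.0 or later, where REGEXP_SIMILAR is available, and spelling out the Perl shorthands \d and \w as explicit character classes; the table name some_table is a placeholder):

SELECT *
FROM   some_table
WHERE  REGEXP_SIMILAR(TRIM(external_id),
         '^abcd_[0-9]{4}_[A-Za-z0-9_]{3}[0-9]{3}$', 'c') = 1;

REGEXP_SIMILAR returns 1 when the string matches, mirroring prxmatch's "found" result; 'c' makes the match case-sensitive.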


Execute Oracle's stored procedures from tpt


Hi everyone,
I found a thread saying that it is not possible to delete from an Oracle table through TPT, but I was wondering:
Is it possible to launch Oracle stored procedures from TPT?
Or to do updates on Oracle tables from TPT?
Thanks,
py
 


WINCLI32.dll

Exporting CSV from TPT


Hi,
I have installed FastExport, but apparently it cannot output CSV files.
So now I'm trying to get hold of Teradata Parallel Transporter to export a large table (hundreds of millions of rows).
I've read this post and the TPT quick start guide, but I still don't know how to export a table to CSV.
Could someone please provide a sample script that connects to a database with a username/password and saves a table as a .csv file?

Thank you,
imran
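A minimal sketch of such a job (system name, credentials, database, table, column names, and file path are all placeholders; note that the Delimited format requires every column in the schema to be VARCHAR, hence the casts in the SELECT):

DEFINE JOB export_to_csv
DESCRIPTION 'Export one table to a CSV file'
(
  DEFINE SCHEMA csv_schema
  (
    col1 VARCHAR(20),
    col2 VARCHAR(100)
  );

  DEFINE OPERATOR export_op
  TYPE EXPORT
  SCHEMA csv_schema
  ATTRIBUTES
  (
    VARCHAR TdpId        = 'mysystem',
    VARCHAR UserName     = 'myuser',
    VARCHAR UserPassword = 'mypassword',
    VARCHAR SelectStmt   = 'SELECT CAST(col1 AS VARCHAR(20)), CAST(col2 AS VARCHAR(100)) FROM mydb.mytable;'
  );

  DEFINE OPERATOR file_writer
  TYPE DATACONNECTOR CONSUMER
  SCHEMA csv_schema
  ATTRIBUTES
  (
    VARCHAR FileName      = '/tmp/mytable.csv',
    VARCHAR Format        = 'Delimited',
    VARCHAR TextDelimiter = ',',
    VARCHAR OpenMode      = 'Write',
    VARCHAR IndicatorMode = 'N'
  );

  APPLY TO OPERATOR (file_writer[1])
  SELECT * FROM OPERATOR (export_op[2]);
);

Save it as export_to_csv.tpt and run it with: tbuild -f export_to_csv.tpt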


Using RANK, numbers are getting skipped in Teradata 13.10

Select Region,Store,Sales,
   Rank()over(partition by Region Order by sales desc) as SalesRank 
from SalesTable 
Qualify SalesRank>=4

Input:
Region  Store  Sales  SalesRank
2       821    82224  1
2       615    5323   2
2       143    5323   2
2       242    4123   4

Desired output:
Region  Store  Sales  SalesRank
2       821    82224  1
2       615    5323   2
2       143    5323   2
2       242    4123   3

My ranking should be continuous, e.g. 1, 2, 3, 4, 5, ...
Please suggest how to achieve the desired output.
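What the desired output describes is a dense rank. On releases that have DENSE_RANK it is a one-line change; if it is not available on 13.10, a running sum over a "value changed" flag emulates it (a sketch using the poster's table and column names):

/* If DENSE_RANK() is available: */
SELECT Region, Store, Sales,
       DENSE_RANK() OVER (PARTITION BY Region ORDER BY Sales DESC) AS SalesRank
FROM SalesTable;

/* Emulation without DENSE_RANK: add 1 whenever Sales differs
   from the previous row's Sales within the region. */
SELECT Region, Store, Sales,
       SUM(chg) OVER (PARTITION BY Region ORDER BY Sales DESC
                      ROWS UNBOUNDED PRECEDING) AS SalesRank
FROM (
  SELECT Region, Store, Sales,
         CASE WHEN Sales = MIN(Sales) OVER (PARTITION BY Region ORDER BY Sales DESC
                                            ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING)
              THEN 0 ELSE 1 END AS chg
  FROM SalesTable
) AS t;

On the sample data both return 1, 2, 2, 3 for region 2.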


TPTEXP - TPTLOAD - using Named pipe hanging at step 'EXPORT_OPERATOR: connecting sessions'


Hi,
I am testing the named pipe option for a TPT export and TPT load in one script, using just 100 records in the source table. However, it hangs for hours at the step shown below (connecting sessions). I have tried a generic named pipe and also the access module 'np_axsmod.so', with no luck, and used 'mknod mypipe p' to create the pipe. The manual suggests FastExport with a TPT load for named pipes, but I want to test this with a TPT export and TPT load in one single script (Consumer -> Export -> Producer -> Load is the sequence of operators I am invoking in the ctl). The same script works fine if I use an output file instead of a named pipe. Could anyone please suggest what the issue is? Appreciate your help.
 
=========
Log:
====
$ tbuild -f tptexp_loadpi.ctl pi02
Teradata Parallel Transporter Version 14.10.00.10
Job log: /opt/teradata/client/14.10/tbuild/logs/pi02-8238703.out
Job id is pi02-8238703, running on xyzfileserver
Teradata Parallel Transporter DataConnector Operator Version 14.10.00.10D2D.23102.2
FILE_WRITER: Instance 1 directing private log report to 'dataconnector_log-1'.
Teradata Parallel Transporter Export Operator Version 14.10.00.10
EXPORT_OPERATOR: private log specified: export_log
FILE_WRITER: DataConnector Consumer operator Instances: 1
FILE_WRITER: ECI operator ID: 'FILE_WRITER-11141658'
EXPORT_OPERATOR: connecting sessions
=========

CTL:
=========
DEFINE OPERATOR FILE_WRITER
TYPE DATACONNECTOR CONSUMER
SCHEMA contact_schema
ATTRIBUTES
(
    VARCHAR PrivateLogName = 'dataconnector_log',
    VARCHAR DirectoryPath = '/home/tmp/',
    VARCHAR FileName = 'mypipe',
    VARCHAR Format = 'DELIMITED',
    VARCHAR OpenMode = 'Write',
    VARCHAR IndicatorMode = 'N',
    VARCHAR TextDelimiter = '@#$',
    VARCHAR EscapeTextDelimiter = '\',
    VARCHAR DateForm = 'ANSIDATE'
);

DEFINE OPERATOR EXPORT_OPERATOR
TYPE EXPORT
SCHEMA contact_schema
ATTRIBUTES
(.....)

STEP STEP_NAME
(
    APPLY
    TO OPERATOR (FILE_WRITER[1])
    SELECT *
    FROM OPERATOR (EXPORT_OPERATOR[1]);
);

DEFINE OPERATOR FILE_READER
TYPE DATACONNECTOR PRODUCER
SCHEMA contact_schema_load
ATTRIBUTES
(
    VARCHAR PrivateLogName = 'dataconnector_log',
    INTEGER ErrorLimit = 1,
    VARCHAR DirectoryPath = '/home/tmp/',
    VARCHAR FileName = 'mypipe',
    VARCHAR AccessModuleName = '/usr/lib/np_axsmod.so',
    VARCHAR AccessModuleInitStr = 'ld=. fd=.',
    VARCHAR IndicatorMode = 'N',
    VARCHAR Format = 'Delimited',
    VARCHAR TextDelimiter = '@#$',
    VARCHAR OpenMode = 'Read',
    VARCHAR DateForm = 'ANSIDATE'
);

DEFINE OPERATOR LOAD_OPERATOR
TYPE LOAD
SCHEMA *
ATTRIBUTES
(...)
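A possible explanation worth checking (an assumption, not a confirmed diagnosis): steps within a single tbuild job run serially, and opening a named pipe for writing blocks until some process opens it for reading, so an export step that must complete before the load step begins can never get past opening the pipe. A sketch of the usual workaround is two separate jobs started concurrently, so both ends of the pipe are open at the same time (the script names are hypothetical):

mknod /home/tmp/mypipe p
tbuild -f tptexp_pi.ctl  expjob &    # writer: export to the pipe, in background
tbuild -f tptload_pi.ctl loadjob     # reader: load from the pipe concurrently
wait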

 

Thanks.


TPT error loading CSV with double-quoted fields


Hello, I'm loading a set of CSV files, each with a header row; some files have an occasional double-quoted text field. With the example below, I keep getting a TPT error: DataConnector error, "Delimited Data Parsing error: Column Length overflow(s) in row 1".
In TPT nothing loads, and the job exits with code 8 and the error above. I cross-checked by setting up a FastLoad job, which did load the data file but returned RC=4: it rejected the first row, complaining about the second column, and after I removed that first row from the data set (in case it was bad data) it did the same exact thing. Looking at the raw data, nothing appears afoul.
I have tried widening the table's columns so the VARCHARs are wide enough for the incoming data, but that didn't work for either TPT or FastLoad. I also tried just removing the first row, but it keeps failing the same way.
What else can I try to get TPT to load successfully?
Thank you kindly for help and direction.
--Sample of data file:
ID,Login,TaskId,QueId,TaskStartTime,CNumber,Dur,FUp,Upddt,RunDur
62081217,rr0581,7850,0,3/3/2016 12:31:38 PM -06:00,62081217,127.761,False,,127.761
62126271,rr0581,7850,0,3/4/2016 10:57:34 AM -06:00,62126271,353.269,False,,353.269
62394805,pr4033,7850,0,3/10/2016 11:29:19 AM -06:00,62394805,4869.313,False,,4869.313
62396446,rr0581,7850,0,3/10/2016 11:43:54 AM -06:00,62396446,143.903,False,,143.903
62662785,rr0581,7850,0,3/16/2016 1:00:29 PM -05:00,62662785,60.157,False,,60.157
62664082,rr0581,7850,0,3/16/2016 1:14:01 PM -05:00,62664082,122.775,False,,122.775
62665044,rr0581,7850,0,3/16/2016 1:21:57 PM -05:00,62665044,502.476,False,,502.476
-- Table DDL
CREATE TABLE database.table1
,NO FALLBACK
,NO BEFORE JOURNAL
,NO AFTER JOURNAL
(
  ID VARCHAR(8)
, Login VARCHAR(6)
, TaskID VARCHAR(4)
, QueID VARCHAR(5)
, TaskStartTime VARCHAR(30) 
, CNumber VARCHAR(8)
, Dur VARCHAR(10)
, FUp VARCHAR(5)
, UpdDt VARCHAR(30)
, RunDur VARCHAR(10)
)
UNIQUE PRIMARY INDEX (ID)
;

-- TPT Load Script
DEFINE JOB LOAD_TPT_TABLE1_RAW
DESCRIPTION 'LOAD TABLE1 WORK RAW TABLE FROM FLAT FILE'
(
DEFINE SCHEMA FFILESCHEMA
DESCRIPTION 'DATABASE.TABLE1'
(
  ID VARCHAR(8)
, Login VARCHAR(6)
, TaskID VARCHAR(4)
, QueID VARCHAR(5)
, TaskStartTime varchar(30)  
, CNumber VARCHAR(8)
, Dur VARCHAR(10)
, FUp VARCHAR(5)
, UpdDt VARCHAR(30)
, RunDur VARCHAR(10)
) ;
DEFINE OPERATOR DATACONNECTION
DESCRIPTION 'TPT CONNECTIONS OPERATOR'
TYPE DATACONNECTOR PRODUCER
SCHEMA FFILESCHEMA
ATTRIBUTES
(
  VARCHAR PrivateLogName = 'table1_work_hist_raw.log'
, VARCHAR DirectoryPath  = '**dir path**\' 
, VARCHAR FileName  = '**file_name.csv*'
, VARCHAR Format  = 'Delimited'
, VARCHAR TextDelimiter  = ','
, VARCHAR QuotedData  = 'optional'
, VARCHAR OpenQuoteMark  = '"'
, VARCHAR CloseQuoteMark = '"'
, VARCHAR OpenMode  = 'read'
, VARCHAR SkipRowsEveryFile = 'y'
, Integer SkipRows   = 1
) ;
DEFINE OPERATOR INSERT_START_RAW
DESCRIPTION 'START INSERT OPERATOR'
TYPE INSERTER
SCHEMA *
ATTRIBUTES
(
  VARCHAR PrivateLogName = 'TPT_TABLE1_RAW.log'
, VARCHAR TdpId   = '**sys dns name*'
, VARCHAR UserName  = '**userid*'
, VARCHAR UserPassword  = '**pw*'
, VARCHAR TargetTable  = 'db.table1'
, VARCHAR LogTable  = 'db.table1_log'
, VARCHAR ErrorTable1  = 'db.table1_ERR1'
, VARCHAR ErrorTable2  = 'db.table1_ERR2'
) ;
 APPLY
( 'INSERT INTO db.table1
  (
   :ID
  ,:Login
  ,:TaskID
  ,:QueID
  ,:TaskStartTime 
  ,:CNumber
  ,:Dur
  ,:FUp
  ,:UpdDt
  ,:RunDur
  ) ;
' )
TO OPERATOR (INSERT_START_RAW[8])
SELECT
  ID
, Login
, TaskID
, QueID
, TaskStartTime
, CNumber
, Dur
, FUp
, UpdDt
, RunDur
FROM OPERATOR
  (DATACONNECTION[8]) ;
)
;
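One detail that may matter here (an observation, not a confirmed diagnosis): the "Column Length overflow" check is made against the lengths in the TPT DEFINE SCHEMA, not against the target table's DDL, so widening the table alone does not help. Several schema columns also have zero slack against the sample data (ID is exactly 8 characters in a VARCHAR(8), Login exactly 6 in a VARCHAR(6)), so a single invisible character, such as a UTF-8 byte-order mark at the start of the file or a Windows carriage return at the end of a line, is enough to overflow them. A more forgiving schema to test with might look like:

DEFINE SCHEMA FFILESCHEMA
DESCRIPTION 'DATABASE.TABLE1'
(
  ID VARCHAR(20)
, Login VARCHAR(20)
, TaskID VARCHAR(20)
, QueID VARCHAR(20)
, TaskStartTime VARCHAR(40)
, CNumber VARCHAR(20)
, Dur VARCHAR(20)
, FUp VARCHAR(20)
, UpdDt VARCHAR(40)
, RunDur VARCHAR(20)
) ;

If the rows then load, inspect the loaded values for the stray character.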


Error locating in SQL Assistant 15


We are writing queries against our data to find data quality issues. Currently there are about 500, and we have quite a few more to do.
We have a request for a high-level count of failures for each one. With some pixie dust, we generate a SELECT ID#, BU, COUNT(*) for each of these.
In an effort to automate these, we generated a new query with a UNION between the COUNT(*) queries.
Now, when we run this, because of the modification of these queries to COUNT(*), there are some errors that we need to resolve.
Is there a way that SQL Assistant can highlight the error or display the line number where the error occurs? It rather defeats the purpose of automation if we still have to run them one at a time to locate the errors.
Is this possible?
 
 


ARC MultiStream & ARC Server Questions


Hi! I have some questions about the arcmain utility that I'm hoping someone can answer; I couldn't find documentation that specifically addresses them.

1. In order to use the "multistream" option, is an ARC server required? (I assume yes.)
2. As far as ARC servers go, I believe both TARA and the Data Mover Agent act as ARC servers. Correct?
3. Is there any kind of standalone ARC server? In other words, is there a way I could use multistream archives without heavy coupling to BAR/DM?
4. Does a multistream archive have any performance advantage over, say, multiple cluster-level backups where each arcmain instance points to the node that contains the AMPs? Or is it mostly for convenience and for easier, more accurate configuration and management?

Essentially, I'd like to have a "named pipe" style multistream arc without using TARA or Data Mover.


TPT script generator and PDCR query to find AJI


Hi friends,
I am new to this forum. I have two questions; kindly help.
1) Could anyone let me know how to generate a TPT script based on parameters? My requirement is to offload data from Teradata into flat files, with a TPT script generated on the fly for each table in a database. I understand that FastExport is also feasible, but the requirement is an automated TPT script. Could anyone please provide ideas or pointers? (See the sketch after this post.)
2) My second request: I need to find the build time taken to create an Aggregate Join Index for a database using PDCR. This is to analyse the peak AJI periods so we can then make some enhancements. Could anyone please let me know the query, table, or macro name to get AJI info in PDCR?
Thanks in advance.
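For question 1, a common pattern (a sketch; the template file, job-variable names, and tables.txt list are hypothetical) is a single generic script that takes the table name as a job variable, driven by a shell loop:

#!/bin/sh
# tables.txt holds one table name per line for the database
while read tbl; do
  tbuild -f export_template.tpt \
         -u "MyTableName='${tbl}', OutputFileName='/data/${tbl}.dat'" \
         "exp_${tbl}"
done < tables.txt

Inside export_template.tpt, a line such as DEFINE SCHEMA my_schema FROM TABLE @MyTableName picks up each table's layout at run time, so one template serves every table.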


TPT Export issue -- Not able to export large tables


Good morning, gentlemen,

I am facing a weird issue while doing a TPT export on Linux. It happens when I try to export more than 4 million records (not sure of the row size); the same code ran fine for fewer records (approx. 3 million).
I checked the DBQL logs for this session and found no related error codes (though, this being an export job, is there anything there to error out?).

My TPT file is below:
 

DEFINE JOB EXPORT_RN_TABLE_TO_FILE
DESCRIPTION 'EXPORT RN VIEWS TO FILES'
(
   DEFINE SCHEMA RN_SCHEMA FROM TABLE @MyTableName;

   DEFINE OPERATOR FILE_WRITER()
   DESCRIPTION 'TERADATA PARALLEL TRANSPORTER DATA CONNECTOR OPERATOR'
   TYPE DATACONNECTOR CONSUMER
   SCHEMA *
   ATTRIBUTES
   (
      VARCHAR PrivateLogName    = 'RN_LODG_DATA',
      VARCHAR FileName          = @OutputFileName,
      VARCHAR IndicatorMode     = 'N',
      VARCHAR OpenMode          = 'Write',
      VARCHAR Format            = 'Delimited',
      VARCHAR TextDelimiter     = '|'
   );

   DEFINE OPERATOR EXPORT_OPERATOR()
   DESCRIPTION 'TERADATA PARALLEL TRANSPORTER EXPORT OPERATOR'
   TYPE EXPORT
   SCHEMA RN_SCHEMA
   ATTRIBUTES
   (
      VARCHAR PrivateLogName    = 'EXPORT_RN_LODG_DATA',
      INTEGER MaxSessions       = 32,
      INTEGER MinSessions       = 1,
      INTEGER MaxDecimalDigits  = 38,
      VARCHAR DateForm          = 'ANSIDATE',
      VARCHAR TdpId             = @MyTdpId,
      VARCHAR UserName          = @MyUserName,
      VARCHAR UserPassword      = @MyPassword,
      VARCHAR AccountId,
      VARCHAR SelectStmt        = 'select * from <table_name>'
   );

   STEP export_to_file
   (
      APPLY TO OPERATOR (FILE_WRITER()[1])
      SELECT * FROM OPERATOR (EXPORT_OPERATOR()[1]);
   );
);

 

-----

 

[username@servername ~]$ cat log_RN_tpt_RN_td_test_3_6.log
Teradata Parallel Transporter Version 15.00.00.02
Job log: /home/<username>/tmp/tbuild/logs/TEST_JOB_RN_36-7053.out
Job id is TEST_JOB_RN_36-7053, running on servername.domainname
Teradata Parallel Transporter Export Operator Version 15.00.00.02
EXPORT_OPERATOR: private log specified: EXPORT_RN_LODG_DATA
Teradata Parallel Transporter FILE_WRITER[1]: TPT19006 Version 15.00.00.02
FILE_WRITER[1]: TPT19010 Instance 1 directing private log report to 'RN_LODG_DATA-1'.
FILE_WRITER[1]: TPT19007 DataConnector Consumer operator Instances: 1
FILE_WRITER[1]: TPT19003 ECI operator ID: 'FILE_WRITER-11137'
FILE_WRITER[1]: TPT19222 Operator instance 1 processing file '/tmp/outputfile_RN_3_6.dat'.
EXPORT_OPERATOR: connecting sessions
EXPORT_OPERATOR: sending SELECT request
Unexpected error.
TPT_INFRA: TPT01036: Error: Task (TaskID: 4, Task Name: INSERT_1[0001]) terminated due to the receipt of signal number 6
TPT_INFRA: TPT01037: Error: Task (TaskID: 4, Task Name: INSERT_1[0001]) core dumped
TPT_INFRA: TPT02595: Error: DSAC_DataStreamSingularOutput - send error
TPT_INFRA: TPT02268: Error: Cannot write message to Data Stream, status = DataStream Error
TPT_INFRA: TPT02269: Error: Data Stream status = -3197160
OS_GetLocalShmAddr: Invalid cntl shm address 31303030
TPT_INFRA: TPT01036: Error: Task (TaskID: 5, Task Name: SELECT_2[0001]) terminated due to the receipt of signal number 6
TPT_INFRA: TPT01037: Error: Task (TaskID: 5, Task Name: SELECT_2[0001]) core dumped
Job step export_to_file terminated (status 8)
Job TEST_JOB_RN_36 terminated (status 8)
Job start: Tue Apr 19 16:25:42 2016
Job end:  Tue Apr 19 16:28:26 2016
[username@servername ~]$

 

--------------

 

Any ideas or suggestions to resolve this?


Suppressing TPT warnings


Just wanted to check whether there is a way to suppress warnings raised during a TPT load; our requirement is to receive RC 0.

(We don't want to fix the warnings, fyi.) I tried VARCHAR ErrorList/WarningList in the FILE_READER and LOAD_OPERATOR sections, but it isn't helping. Could someone please suggest?
$ tbuild -f tptload_xyz.ctl xyz4
Teradata Parallel Transporter Version 14.10.00.10 
Job log: /opt/teradata/client/14.10/tbuild/logs/xyz4-5227283.out
Job id is xyz4-5227283, running on ..
Teradata Parallel Transporter DataConnector Operator Version 14.10.00.10
FILE_READER: Instance 1 directing private log report to 'dataconnector_log-1'.
Teradata Parallel Transporter Load Operator Version 14.10.00.10
LOAD_OPERATOR: private log specified: load_log
FILE_READER: DataConnector Producer operator Instances: 1
FILE_READER: ECI operator ID: 'FILE_READER-10944852'
FILE_READER: Operator instance 1 processing file 'xyz.out'.
LOAD_OPERATOR: connecting sessions
LOAD_OPERATOR: preparing target table
LOAD_OPERATOR: entering Acquisition Phase
FILE_READER: TPT19003 TPT Exit code set to 4.   ------> Warning
LOAD_OPERATOR: entering Application Phase
LOAD_OPERATOR: Statistics for Target Table:  'db.tab1'
LOAD_OPERATOR: Total Rows Sent To RDBMS:      2
LOAD_OPERATOR: Total Rows Applied:            2
LOAD_OPERATOR: Total Rows in Error Table 1:   0
LOAD_OPERATOR: Total Rows in Error Table 2:   0
LOAD_OPERATOR: Total Duplicate Rows:          0
LOAD_OPERATOR: disconnecting sessions
FILE_READER: Total files processed: 1.
LOAD_OPERATOR: Total processor time used = '0.432734 Second(s)'
LOAD_OPERATOR: Start : Wed Apr 20 20:45:51 2016
LOAD_OPERATOR: End   : Wed Apr 20 20:46:28 2016
Job xyz4 completed successfully, but with warning(s). 
Job start: Wed Apr 20 20:45:50 2016
Job end:   Wed Apr 20 20:46:28 2016
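If the warnings themselves are acceptable, one option (a sketch of a wrapper; it assumes that on your version an exit code of 4 always means "completed, warnings only", as in the log above) is to remap the return code in the calling script rather than trying to silence TPT itself:

tbuild -f tptload_xyz.ctl xyz4
rc=$?
if [ $rc -eq 4 ]; then
  rc=0    # job completed with warnings only: report success
fi
exit $rc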

 


Different behavior in reserved words between TPT 14.00.00.03 and 14.10.00.13


Hi,
I've got a TPT LOAD job where two of the columns in the target table are named with reserved words. Ignoring the fact that this is a bad idea (I don't have control over what a column is going to be named), I went ahead and put double quotes around those reserved words for every occurrence in the TPT script. With TPT version 14.10.00.13 (my dev environment) this works fine; with version 14.00.00.03 (my production environment) it fails in the acquisition phase:

LOAD_OPERATOR: connecting sessions
LOAD_OPERATOR: preparing target table
LOAD_OPERATOR: entering Acquisition Phase
LOAD_OPERATOR: TPT10508: RDBMS error 3707: Syntax error, expected something like a name or a Unicode delimited identifier or an 'UDFCALLNAME' keyword between ',' and the 'DATE' keyword.
LOAD_OPERATOR: disconnecting sessions

From my TPT script, here's the load step:

STEP Load_Trans_Table
(
APPLY
('INSERT INTO <TARGET>(
COL1
, COL2
, COL3
, "DATE"
, "TIME"
, COL6
, COL7
, COL8
, COL9
)
VALUES (
:COL1
, :COL2
, :COL3
, :"DATE"
, :"TIME"
, :COL6
, :COL7
, :COL8
, :COL9
);')
TO OPERATOR (LOAD_OPERATOR[2])

My question is: why the different behavior between the two versions, and what can be done to make them behave the same? The options I've come up with are to upgrade the production server to TPT 14.10.00.13 or to change the column names. The target database is version 15.00.04.07.
Thanks,
Mike


ASCII value of the delimiter in TPT Export Operator


Hi All,
Is there any way to pass the ASCII value of a character as the TextDelimiter for the TPT Export operator? I want to export a TAB-delimited file, but when I give a tab as the separator it is taken as a space. If I could give the ASCII code instead of the actual character, it might produce the export file with a tab delimiter.
Below is my tpt control file:

DEFINE JOB EXPORT_CODE_ORG_HIER_V_TO_FILE
DESCRIPTION 'export EXPORT_CODE_ORG_HIER_V_TO_FILE'
(
    DEFINE SCHEMA SCHEMA_CODE_ORG_HIER_V
    (
        SYSTEM_CD VARCHAR(1000)
       ,PRINCIPAL_CD VARCHAR(1000)
       ,CORPORATION_CD VARCHAR(1000)
       ,ENTITY_NAME VARCHAR(1000)
       ,STATE VARCHAR(1000)
       ,REGION_NAME VARCHAR(1000)
       ,DIVISION_NAME VARCHAR(1000)
    );

    DEFINE OPERATOR o_ExportOper
    TYPE EXPORT
    SCHEMA SCHEMA_CODE_ORG_HIER_V
    ATTRIBUTES (
        VARCHAR UserName = @UserName
       ,VARCHAR UserPassword = @UserPassword
       ,VARCHAR TdpId = @TdpId
       ,INTEGER MaxSessions = @MaxSessions
       ,INTEGER MinSessions = @MinSessions
       ,VARCHAR PrivateLogName = 'Export'
       ,VARCHAR SpoolMode = 'NoSpool'
       ,VARCHAR WorkingDatabase = @WorkingDatabase
       ,VARCHAR SourceTable = @SourceTable
       ,VARCHAR SelectStmt = @SelectStmt
    );

    DEFINE OPERATOR o_FileWritter
    TYPE DATACONNECTOR CONSUMER
    SCHEMA SCHEMA_CODE_ORG_HIER_V
    ATTRIBUTES (
        VARCHAR FileName = @FileName
       ,VARCHAR Format = @Format
       ,VARCHAR TextDelimiter = @TextDelimiter
       ,VARCHAR IndicatorMode = 'N'
       ,VARCHAR OpenMode = 'Write'
       ,VARCHAR PrivateLogName = 'DataConnector'
    );

    APPLY TO OPERATOR (o_FileWritter[@LoadInst])
       SELECT * FROM OPERATOR (o_ExportOper[@ReadInst]);
)
;

 

Below is the tbuild command I'm executing:

 

tbuild -f /home/aroy001c/Sample/ctl/code_org_hier_v.tpt.ctl -v /home/aroy001c/Sample/logon/aroy001c_tpt.logon -u " WorkingDatabase='NDW_EXTRACT_VIEWS' , SourceTable='CODE_ORG_HIER_V' , MacroDatabase='NDW_TEMP' , load_op=o_ExportOper , LoadInst=1 , ReadInst=1 , MaxSessions=10 , MinSessions=5 , FileName='/home/aroy001c/Sample/tgtfile/code_org_hier_v.out' , LOAD_DTS='2016-04-21 08:21:34' , Format='DELIMITED' TextDelimiter='' , SkipRows=0 , SelectStmt='SELECT TRIM(CAST(SYSTEM_CD  AS VARCHAR(1000))),TRIM(CAST(PRINCIPAL_CD  AS VARCHAR(1000))),TRIM(CAST(CORPORATION_CD  AS VARCHAR(1000))),TRIM(CAST(ENTITY_NAME  AS VARCHAR(1000))),TRIM(CAST(STATE  AS VARCHAR(1000))),TRIM(CAST(REGION_NAME  AS VARCHAR(1000))),TRIM(CAST(DIVISION_NAME  AS VARCHAR(1000))) FROM NDW_EXTRACT_VIEWS.CODE_ORG_HIER_V;'" CODE_ORG_HIER_V

 

Below is the sample output file:

 

  01626 Belt Ce VA BEY REG ND

8497 9500  Mex NM MOU REG WD
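If your client version supports it (worth verifying; this sketch assumes the DataConnector operator's TextDelimiterHEX attribute is available in your release), the delimiter can be given as a hex code instead of a literal character, which keeps the tab from being mangled on the command line:

DEFINE OPERATOR o_FileWritter
TYPE DATACONNECTOR CONSUMER
SCHEMA SCHEMA_CODE_ORG_HIER_V
ATTRIBUTES (
    VARCHAR FileName = @FileName
   ,VARCHAR Format = 'Delimited'
   ,VARCHAR TextDelimiterHEX = '09'   /* 09 = horizontal tab */
   ,VARCHAR IndicatorMode = 'N'
   ,VARCHAR OpenMode = 'Write'
   ,VARCHAR PrivateLogName = 'DataConnector'
);

When using the hex form, drop the TextDelimiter assignment from the tbuild -u string.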

Please help.
Thanks & Regards,
Arpan.
(+919903062694)


SQL Assistant - Copy to Notepad

TPT API Update driver: why does it fail?


/* teradata odbc sample */

#include <stdio.h>
#include <iostream>
#include "connection.h"
#include "schema.h"
#include "DMLGroup.h"
#include "sqlunx.h"
#include <sql.h>
#include <sqlext.h>
#include <stdlib.h>

#define TRUE  1
#define FALSE  0

#define MAX_STR  256
#define MAXLINE  1000000
#define MAX_NAME_LEN 50
#define MAXCOLS  2048

#define MAXCONNECTIONS 100 

using namespace std;
using namespace teradata::client::API;

struct COL_INFO{
    char colname[32];
    int coltype;
    int colnamelen;
    int nullable;
    int scale;
    int collen;
};   

struct ENV_HANDLE{
    HENV henv;
    HDBC hdbc;
    HSTMT hstmt;

    char  dbName[MAX_STR];
    char  user[MAX_STR];
    char  passwd[MAX_STR];
    char  tableName[MAX_STR];
    COL_INFO* colInfo;
};

int main(void)
{
    int            i;
    ENV_HANDLE     env;
    int            len, returnValue;
    int            rc;
    SWORD          nresultcols;
    char           db[MAX_STR];
    SQLSMALLINT    dblen;
    TD_ErrorType   errorType;
    
    Connection *conn = new Connection();
    char* errorMsg = NULL;
    SQLCHAR           dbname[256] = "testdsn";
    SQLCHAR           ip[256] = "192.168.10.112";
    SQLCHAR           user[256] = "truser2";
    SQLCHAR           pass[256] = "trpass2";
    SQLCHAR           query[256] = "select * from coldata_test where 1 = 0";
    
    memset(&env, 0x00, sizeof(ENV_HANDLE) );
    rc = SQLAllocHandle(SQL_HANDLE_ENV,SQL_NULL_HANDLE,&env.henv);
    if (rc != SQL_SUCCESS && rc != SQL_SUCCESS_WITH_INFO)
        printf("fail Alloc Handle\n");

    rc = SQLSetEnvAttr(env.henv, SQL_ATTR_ODBC_VERSION,(void *)SQL_OV_ODBC3,4);
    if (rc != SQL_SUCCESS && rc != SQL_SUCCESS_WITH_INFO)
        printf("SEtEnvAttr fail \n");

    rc = SQLAllocHandle(SQL_HANDLE_DBC,env.henv, &env.hdbc);
    if (rc != SQL_SUCCESS && rc != SQL_SUCCESS_WITH_INFO)
        printf("SQLAllocHandle fail ");    

    /* use SQL_NTS for null-terminated strings instead of hard-coded lengths */
    rc = SQLConnect(env.hdbc,
    dbname, SQL_NTS,
    user, SQL_NTS,
    pass, SQL_NTS);

    if (rc != SQL_SUCCESS && rc != SQL_SUCCESS_WITH_INFO)
        printf("connect fail\n");

    printf("connection ok\n");

    rc = SQLGetInfo(env.hdbc, SQL_ODBC_VER, &db, (SWORD)sizeof(db), &dblen);
    if (rc != SQL_SUCCESS && rc != SQL_SUCCESS_WITH_INFO)
        printf("GetInfo ODBC VERSION");

    cout <<"ODBC version        = -"<< (char *)db << "-"<< endl;    


    rc = SQLGetInfo(env.hdbc, SQL_DBMS_NAME, &db, (SWORD) sizeof(db), &dblen);
    if (rc != SQL_SUCCESS && rc != SQL_SUCCESS_WITH_INFO)
     printf("DBMS NAME GET FAIL");

    cout <<"DBMS name           = -"<< (char *)db << "-"<< endl;
    

    rc = SQLGetInfo(env.hdbc, SQL_DBMS_VER, &db, (SWORD) sizeof(db), &dblen);
    if (rc != SQL_SUCCESS && rc != SQL_SUCCESS_WITH_INFO)
        printf("faile");

    cout <<"DBMS version        = -"<< (char *)db << "-"<< endl;

    rc = SQLAllocHandle(SQL_HANDLE_STMT, env.hdbc, &env.hstmt);    
    if (rc != SQL_SUCCESS && rc != SQL_SUCCESS_WITH_INFO)
       printf("handler alloc fail \n");
    
    rc = SQLExecDirect(env.hstmt, query , SQL_NTS);
    if (rc != SQL_SUCCESS)
    {
       printf("fail execute query \n");
    }
    
    do{

        rc = SQLNumResultCols(env.hstmt, &nresultcols);
        env.colInfo = (COL_INFO*)malloc( sizeof(COL_INFO) * nresultcols );
        memset( env.colInfo, 0x00, sizeof(COL_INFO) * nresultcols );
        for( i=0; i< nresultcols; i++)
        {
            SQLDescribeCol( env.hstmt, i+1, (SQLCHAR*)env.colInfo[i].colname, (SWORD) sizeof(env.colInfo[i].colname),
                            (SWORD*)&env.colInfo[i].colnamelen, (SWORD*)&env.colInfo[i].coltype, (SQLUINTEGER*)&env.colInfo[i].collen,
                            (SWORD*)&env.colInfo[i].scale, (SWORD*)&env.colInfo[i].nullable);         
            printf("name [%s], len [%d], type [%d]\n", env.colInfo[i].colname, env.colInfo[i].collen, env.colInfo[i].coltype);
        }

    }
    while((rc = SQLMoreResults(env.hstmt)) == 0 );

    conn->AddAttribute(TD_SYSTEM_OPERATOR, TD_UPDATE);
    conn->AddAttribute(TD_TRACE_OUTPUT,"update.txt");
    conn->AddArrayAttribute(TD_TRACE_LEVEL, 2, TD_OPER_ALL,TD_GENERAL, NULL);
    //conn->AddAttribute(TD_BUFFER_MODE,"yes"); 

    conn->AddAttribute(TD_TDP_ID, (char*)ip );
    conn->AddAttribute(TD_USER_NAME, (char*)user );
    conn->AddAttribute(TD_USER_PASSWORD, (char*)pass );
    conn->AddArrayAttribute(TD_TARGET_TABLE, 1,"coldata_test", NULL);
    conn->AddArrayAttribute(TD_WORK_TABLE , 1,"coldata_test_wt", NULL);
    conn->AddAttribute(TD_LOG_TABLE,"log_table");
    conn->AddArrayAttribute(TD_ERROR_TABLE_1,1,"coldata_test_e1",NULL);
    conn->AddArrayAttribute(TD_ERROR_TABLE_2,1,"coldata_test_e2",NULL);
    Schema * schema = new Schema("input");
    for( i= 0; i< nresultcols; i++){
    //schema->AddColumn(env.colInfo[i].colname, (TD_DataType)env.colInfo[i].coltype, env.colInfo[i].collen);
    schema->AddColumn(env.colInfo[i].colname, TD_VARCHAR, env.colInfo[i].collen);
    }
    conn->AddSchema(schema);
    TD_Index dmlGroupIndex = 0;
    DMLGroup * dmlGr = new DMLGroup();
    memset(query , 0x00, sizeof( query ) );
    sprintf((char*)query, "INSERT INTO coldata_test ( :%s, :%s, :%s ); ", (char*)env.colInfo[0].colname, (char*)env.colInfo[1].colname, (char*)env.colInfo[2].colname );
    printf("%s\n", query);
    dmlGr->AddStatement((char*)query);
    dmlGr->AddDMLOption(MARK_DUPLICATE_ROWS);
    returnValue = conn->AddDMLGroup(dmlGr,&dmlGroupIndex);
    cout << "DMLGroups added with status "<< returnValue << endl;


    
    returnValue = conn->Initiate();
    cout << "Driver Initiated with status "<< returnValue << endl;
    if( returnValue < TD_Error ){
        printf("Initiate OK!\n");
        /* Likely cause of error 2673 ("source parcel length does not match"):
           PutRow expects the row in the record format described by the schema,
           not delimited text. Each TD_VARCHAR field must be a 2-byte length
           followed by that many bytes of data, with no separators. */
        char rowBuffer[1024];
        int  pos = 0;
        const char* vals[3] = { "col01", "col02", "col03" };
        memset(rowBuffer, 0x00, sizeof(rowBuffer));
        for( i = 0; i < 3; i++ ){
            unsigned short flen = (unsigned short) strlen(vals[i]);
            memcpy(rowBuffer + pos, &flen, sizeof(flen));  /* 2-byte length prefix */
            pos += sizeof(flen);
            memcpy(rowBuffer + pos, vals[i], flen);        /* field bytes, no delimiter */
            pos += flen;
        }
        returnValue = conn->UseDMLGroups(&dmlGroupIndex, 1);
        if( returnValue >= TD_Error){
            printf("UseDMLGroup fail \n");
        }
        cout << "Sending First Row"<< endl;
        returnValue = conn->PutRow( rowBuffer, pos );
        if( returnValue < TD_Error ){
            returnValue = conn->EndAcquisition();
            if( returnValue < TD_Error ){
                returnValue = conn->ApplyRows();
            }
            else{
                conn->GetErrorInfo(&errorMsg,&errorType);
                if ( errorMsg != NULL ){
                    cout << errorMsg << endl;
                    cout << "Type: "<< errorType << endl;
                }else{
                    cout << "No Error Info Available"<< endl;
                }
            }
        }
        else{
            conn->GetErrorInfo(&errorMsg,&errorType);
            if ( errorMsg != NULL ){
                cout << errorMsg << endl;
                cout << "Type: "<< errorType << endl;
            }
            else{
                cout << "No Error Info Available"<< endl;
            }
        }

    }
    else{
        cout << "Error occured during Acquisition"<< endl;
        conn->GetErrorInfo(&errorMsg,&errorType);
        if ( errorMsg != NULL ){
            cout << errorMsg << endl;
            cout << "Type: "<< errorType << endl;
        }else{
            cout << "No Error Info Available"<< endl;
        }
    }
    returnValue = conn->Terminate();
   if ( returnValue >= TD_Error )
   {
      //Get Error Information
      cout << "Error occured during Terminate"<< endl;
      conn->GetErrorInfo(&errorMsg,&errorType);
      if ( errorMsg != NULL ){
         cout << errorMsg << endl;
         cout << "Type: "<< errorType << endl;
      }else{
         cout << "No Error Info Available"<< endl;
      }
   }

   delete dmlGr;
   delete schema;
   delete conn;
    return 0;
}

Teradata version: 13.10
APIs used: ODBC and the TPT API (Update operator)

Table schema:
           create table coldata_test(
               col01 varchar(255),
               col02 varchar(255),
               col03 varchar(255)
            )

On execution there is no failure message, but the error table is created and the error code is 2673. Why doesn't it execute successfully?
 
 


Teradata procedure creation || Aquadata studio


Hi all,
I am using Aqua Data Studio and am trying to create a procedure. Every time, it fails to compile and gives an error, while the same procedure works in Teradata Studio Express. Has anyone faced similar issues, and is there a potential fix? Any help will be greatly appreciated.

Regards,
Chandan


Report with different number formatting than expected * BTEQ *


Good afternoon.

I'm trying to export the result of the query below with BTEQ, using the script that follows.

SELECT DATABASENAME AS "Banco de Dados"
,SUM(CURRENTPERM) AS "Ocupado"
,(MAX(CURRENTPERM) * 72) AS "Ocupado_comskew"
,SUM(MAXPERM) AS "Alocado"
,Alocado - Ocupado AS "Disponível"
,Alocado - Ocupado_comskew AS "Disponível_comskew"
,(CASE WHEN Alocado = 0 THEN 0
 ELSE (Ocupado * 100 / Alocado)
 END) AS "Porcentagem_ocup"
,(CASE WHEN Alocado = 0 THEN 0
 ELSE (Ocupado_comskew * 100 / Alocado)
 END) AS "Porcentagem_ocup_comskew"
,Porcentagem_ocup_comskew - Porcentagem_ocup (INTEGER) AS "Diferenca"
FROM DBC.ALLSPACE
WHERE TABLENAME = 'all'
GROUP BY 1
ORDER BY 2 DESC, 1,3,4,5 ASC
;

 

In SQL Assistant the SELECT returns the result as expected; when I run it through BTEQ, the result comes back different from SQL Assistant.

 

 

=================================================

Script used in BTEQ:

.RUN FILE RUNFILE_XXX;

.SET TITLEDASHES OFF;
.SET FORMAT OFF;
.SET FOLDLINE ON 1;
.SET WIDTH 254;

.HEADING ''

.EXPORT REPORT FILE=D:\teste\scripts\\RELAT.txt
SELECT DATABASENAME AS "Banco de Dados"
,SUM(CURRENTPERM) AS "Ocupado"
,(MAX(CURRENTPERM) * 10) AS "Ocupado_comskew"
,SUM(MAXPERM) AS "Alocado"
,Alocado - Ocupado AS "Disponível"
,Alocado - Ocupado_comskew AS "Disponível_comskew"
,(CASE WHEN Alocado = 0 THEN 0
 ELSE (Ocupado * 100 / Alocado)
 END) AS "Porcentagem_ocup"
,(CASE WHEN Alocado = 0 THEN 0
 ELSE (Ocupado_comskew * 100 / Alocado)
 END) AS "Porcentagem_ocup_comskew"
,Porcentagem_ocup_comskew - Porcentagem_ocup (INTEGER) AS "Diferenca"
FROM DBC.ALLSPACE
WHERE TABLENAME = 'all'
GROUP BY 1
ORDER BY 2 DESC, 1,3,4,5 ASC
;

.IF ACTIVITYCOUNT = 0 THEN .QUIT 99

.EXPORT RESET

.LOGOFF

.QUIT

 

===================================

Result of the export via BTEQ (incorrect):

Ocupado_comskew = 1.13235904512000E 012
Porcentagem_ocup = 4.18930010627790E 001
Porcentagem_ocup_comskew = 4.20911320595271E 001

===================================

Result of the export via SQL Assistant (correct):

Ocupado_comskew = 157.321.559.040,00
Porcentagem_ocup = 2.532.934.059.456,00
Porcentagem_ocup_comskew = 41,91
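A likely explanation (an assumption based on the output shown): SQL Assistant applies its own client-side number formatting, while BTEQ prints FLOAT results, such as aggregates over DBC.ALLSPACE, in Teradata's default scientific format. A sketch of a fix is to cast the affected expressions and attach an explicit FORMAT phrase, which BTEQ honors in report mode:

SELECT DATABASENAME AS "Banco de Dados"
,CAST(SUM(CURRENTPERM) AS DECIMAL(18,2)) (FORMAT 'Z(14)9.99') AS "Ocupado"
,CAST(MAX(CURRENTPERM) * 72 AS DECIMAL(18,2)) (FORMAT 'Z(14)9.99') AS "Ocupado_comskew"
...

The same treatment applies to the percentage columns; adjust the FORMAT picture to taste (for example 'ZZ9.99' for a percentage).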

 
