Channel: Teradata Downloads - Tools

Viewpoint: Not able to access VP Portal via Google Chrome


Hi,
One of my users is getting the error "Server has a weak ephemeral Diffie-Hellman public key" when trying to access the Viewpoint portal via Google Chrome, even after he tried the following options:
1) Cleared the cache and rechecked
2) Tried an alternate workaround:
Step 1: Place the “Google Chrome” icon on the desktop.
Step 2: Right-click the “Google Chrome” icon and select the option “Properties”.
Step 3: Append the following text to the “Target” field (which is editable):
"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --cipher-suite-blacklist=0x0088,0x0087,0x0039,0x0038,0x0044,0x0045,0x0066,0x0032,0x0033,0x0016,0x0013"
Step 4: Click the “Apply” button and then “OK”.
Step 5: Open Google Chrome as usual.
 
But he is still getting the error.
Is there any other solution to fix this issue?
 
Thanks,
Harsha.


TPT 15.00 For Linux and Informatica ETL


Hello,
 
I have installed the following on a Linux server:
a. Teradata GSS Client (teragss)
b. Shared ICU Libraries for Teradata (tdicu)
c. Teradata Call-Level Interface version 2 (cliv2)
d. Teradata Data Connector API (piom)
e. tptbase1500__linux_indep.15.00.00.05-1.tar.gz
f. tptstream1500__linux_indep.15.00.00.05-1.tar.gz
g. tdodbc__linux_indep.15.00.00.04-1.tar
 
After this I set the following environment variables on Linux:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/INFA/PowerExchange9.6.1
export LD_LIBRARY_PATH=/opt/teradata/client/15.00/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/opt/teradata/client/ODBC_32/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/opt/teradata/client/ODBC_64/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/opt/INFA/java/bin:/opt/INFA/java/jre/bin:$LD_LIBRARY_PATH
export COPLIB=/opt/teradata/client/15.00/lib/clispb.dat
export COPERR=/opt/teradata/client/15.00/lib/errmsg.cat
export TD_ICU_DATA=/opt/teradata/client/15.00/tdicu/lib64
export THREADONOFF=1
export TWB_ROOT=/opt/teradata/client/15.00/tbuild/bin
export PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin:${PATH}
export NLSPATH=/opt/teradata/client/15.00/odbc_64/msg:/opt/teradata/client/15.00/odbc_32/msg:/opt/teradata/client/15.00/tbuild/msg/%N:/opt/teradata/client/15.00/tbuild/msg/%N:/opt/teradata/client/15.00/odbc_32/msg/%N
 
When I try to start a workflow, I receive the following errors:
 
Message Code: TPTWR_35132
Message: [ERROR] Plug-in failed to load Teradata Parallel Transporter API Library.
Message Code: TPTWR_31301
Message: [ERROR] Plug-in failed to initialize Teradata PT Writer Plug-In.
Message Code: SDKS_38007
Message: Error occurred during [initializing] writer plug-in #315000.
Message Code: SDKS_38003
Message: Plug-in #315000's method PPluginDriver::init() failed.
 
Could you please help me with these errors?
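
For what it's worth, here is a quick check of whether the TPT API library is even visible at runtime (a sketch; the tbuild path and the library name libtelapi.so are assumptions based on a default TTU 15.00 layout — note that the exports above only add .../15.00/lib to LD_LIBRARY_PATH, not the tbuild lib directory):

# Add the TPT (tbuild) library directory to the search path and verify that
# the TPT API shared library resolves all of its dependencies.
export LD_LIBRARY_PATH=/opt/teradata/client/15.00/tbuild/lib64:$LD_LIBRARY_PATH
ldd /opt/teradata/client/15.00/tbuild/lib64/libtelapi.so | grep "not found"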


Please Help - Error: (UPDATE_OPERATOR: TPT10322: Error -1 while trying to initialize CLI)


Hello,
I am installing TPT 15.00 on a Linux server that has Informatica on it.
I have set the environment variables, and now I am getting the error below when I run my workflows:
Message Code: TPTWR_10322

[ERROR] Type:(Teradata PT API Error), Error: (UPDATE_OPERATOR: TPT10322: Error -1 while trying to initialize CLI)
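
For reference, a quick sanity check of the files CLI needs at initialization (a sketch; paths are assumed from a default 15.00 install, and it is my assumption that COPLIB and COPERR conventionally point at the directories containing clispb.dat and errmsg.cat rather than at the files themselves):

# Point COPLIB/COPERR at the directories holding the CLI parameter and
# message files, then confirm both files are actually there.
export COPLIB=/opt/teradata/client/15.00/lib
export COPERR=/opt/teradata/client/15.00/lib
ls "$COPLIB/clispb.dat" "$COPERR/errmsg.cat"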

Could you please help me resolve this issue?

Thanks in advance

VA


TPT - Timestamp format error 6760


I have a simple load step

DEFINE JOB Run_Load
(
    STEP Load_Tables
    (
        APPLY $INSERT TO OPERATOR
        (
            $LOAD()
            ATTRIBUTES
            (
                TargetTable = @TargetDatabase || '.' || @TargetTable,
                LogTable    = @TargetDatabase || '.' || @TargetTable || '_LOG',
                ErrorTable1 = @TargetDatabase || '.' || @TargetTable || '_E1',
                ErrorTable2 = @TargetDatabase || '.' || @TargetTable || '_E2'
            )
        )
        SELECT * FROM OPERATOR($FILE_READER( DELIMITED @TargetDatabase || '.' || @TargetTable ));
    );
);

If I have understood correctly, the schema will be inferred from the target table.
This is working fine when I have a table defined as:

CREATE MULTISET TABLE MYDATABASE.MYTABLE
     (   cola TIMESTAMP(6) FORMAT 'YYYY/MM/DDbHH:MI:SS')

and data such as:

2015/09/09 06:24:02

BUT, this is not the format my data is coming in. What I really want to get working is data such as:

08/09/2015 06:24:02

So, I am using this table structure:

CREATE MULTISET TABLE MYDATABASE.MYTABLE
     (   cola TIMESTAMP(6) FORMAT 'DD/MM/YYYYbHH:MI:SS')

However, I then get error 6760, and I don't understand why. Isn't the date format taken from the table DDL? This approach seems to have worked fine with dates previously.
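
For what it's worth, the conversion can be tested outside TPT with the sample value above (a sketch):

-- Should succeed if the column's FORMAT drives the parse of the input string.
SELECT CAST('08/09/2015 06:24:02' AS TIMESTAMP(6) FORMAT 'DD/MM/YYYYbHH:MI:SS');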

Installing Teradata Administrator

Issue with suppressing column names in bteq export file


Hi,
I am trying to export the output of 3 different SELECT queries to a single file (see below):
bteq << EOB > log.txt 
.logon **********************
.export report file=result.txt
.set width 1000
.set TITLEDASHES ON;
--1st query
select metric, count(*) from tab1;
-- 2nd query
select metric,count(*) from tab2;
--3rd query
select metric, count(*) from tab3;
.export reset
.quit;
EOB
 
The above script prints the column names 3 times in result.txt, but I want them printed only once. I tried multiple options to avoid printing the column names for the 2nd and 3rd queries, but I couldn't figure it out. Could someone please help me with this?
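
One workaround sketch (assuming the three queries are union-compatible; the GROUP BY each aggregate needs is added here, and the literal src column to tell the sources apart is my addition): combine the SELECTs into a single statement, so BTEQ prints only one set of headings.

select 'tab1' as src, metric, count(*) from tab1 group by 1, 2
union all
select 'tab2', metric, count(*) from tab2 group by 1, 2
union all
select 'tab3', metric, count(*) from tab3 group by 1, 2;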
 
Thanks,


30 minute window from date time field


Hi
 
I have a date/time field with values like "1/1/2014 01:52:23". I need to count the number of records with a condition of flag = y within a 30-minute window of this date/time column.
So basically, if the flag = y within the 30-minute window, count that record; if not, ignore it.
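
A minimal sketch of one interpretation, counting rows with flag = 'y' inside a fixed 30-minute window (table and column names are hypothetical):

SELECT COUNT(*)
FROM my_table
WHERE flag = 'y'
  AND event_ts >= TIMESTAMP '2014-01-01 01:52:23'
  AND event_ts <  TIMESTAMP '2014-01-01 01:52:23' + INTERVAL '30' MINUTE;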
 
Please help!


MDS synchronization ERROR


Good day,
I have a database assigned only for views, and when I try to update MDS with new views that have been created, it gives me the following error when synchronizing this database:
10/14/2015 16:49:45|PID-3512|THREAD-3772|2|[ReformatODBCTeradataVersion] Teradata Release 14.00.0205  14.00.02.05 is malformed - assuming 12.0 for now
10/14/2015 16:49:46|PID-3512|THREAD-3772|2|[ReformatODBCTeradataVersion] Teradata Release 14.00.0205  14.00.02.05 is malformed - assuming 12.0 for now
10/14/2015 16:49:48|PID-3512|THREAD-3772|2|[ReformatODBCTeradataVersion] Teradata Release 14.00.0205  14.00.02.05 is malformed - assuming 12.0 for now
10/14/2015 16:50:11|PID-3992|THREAD-3340|2|CDBAccess::DBODBCError: ODBC error (connection=00EE1348, return code=-1, native error=0, SQL state=HY000) [NCR][ODBC Teradata Driver] Major Status=0x04bd Minor Status=0xe0000007-Success/Continue needed
10/14/2015 16:50:11|PID-3992|THREAD-3340|2|CDBAccess::DBConnect: Database connect failed (DSN, user) (DSN_MDS, MDS_ADM), result=8000701B
10/14/2015 16:50:11|PID-3992|THREAD-3340|2|[metaload::StartupChild] STxInitialize() ERROR=META_E_DBACCESS_ERROR
10/14/2015 16:52:03|PID-3512|THREAD-3772|1|metaload::WriteSQLBuf: Child process had died
10/14/2015 16:52:03|PID-3512|THREAD-3772|2|metaload::ChildProcessCleanup: child process 1196 returned error (META_E_DBACCESS_ERROR)
10/14/2015 16:52:04|PID-3512|THREAD-3772|2|metaload:  ERROR=META_E_METALOAD_CHILD_PROCESS_DIED
10/14/2015 16:52:04|PID-3548|THREAD-3892|2|CMetaLaunch::OnMetaloadComplete()           MetaLoad returned 0x8000711E
 
The user I use to synchronize the MDS has SELECT access on all databases.
Can you help me? Thanks in advance.
 


FastLoad UTF8 charset issue - for special characters


Hi,
I am using the UTF8 character set in FastLoad, and any record that contains a special character (such as a box character) goes to the error table (ET). Please refer to the attachment for a sample problem value. Kindly help me with how to handle this situation; we need to load the data exactly as it is in the source.
Thanks,
Prem.


TPT sessions are left behind?


A user test scenario uses the TPT STREAM operator.
 
The test is inspected with respect to its sensitivity to a network outage / broken connection.
 
The following query is issued repeatedly in order to track our testing "SUPPORT" team's sessions.
 
SELECT Username , DefaultDatabase, LogonSequenceNo , PARTITION AS Utility_Type,
     MAXIMUM ( CURRENT_TIMESTAMP - ( CAST( ( CAST( LogonDate AS DATE FORMAT 'YYYY-MM-DD'  ) ( CHAR ( 10 ) )  ) || '' || LogonTime  AS TIMESTAMP ) ) HOUR TO SECOND ) AS TimeLoggedIn ,
     COUNT ( * ) AS num_of_sessions   
FROM Dbc.Sessioninfo       
GROUP BY 1 , 2 , 3, 4;
 
Initially, the following is reported:
 
Username, DefaultDatabase, LogonSequenceNo, Utility_Type, Time_LoggedIn, num_of_sessions
 
DBC,DBC    ,0x00000000,DBC/SQL                         ,  0:00:00.060000,1
DBC,SUPPORT,0x00000000,DBC/SQL                         ,  0:03:27.590000,2
 
So the number of “SUPPORT” sessions is 2.
 
The TPT program runs.
Now our SUPPORT guys cause a connection error by blocking the IP of the Teradata DB.
The C/API-based TPT program senses this exception, issues conn->Terminate(), and exits.
 
Repeating the above query now shows:
 
DBC,DBC    ,0x00000000,DBC/SQL                         ,  0:00:00.060000,1
DBC,SUPPORT,0x00000000,DBC/SQL                         ,  0:06:03.920000,3
 
The number of “SUPPORT” sessions is now 3.
 
 
Repeating this sequence again, we see:
 
 
DBC,DBC    ,0x00000000,DBC/SQL                         ,  0:00:00.130000,1
DBC,SUPPORT,0x00000000,DBC/SQL                         ,  0:08:32.410000,4
 
The number of “SUPPORT” sessions is now 4.
 

These sessions were initially subjected to conn->Initiate() as they started and were cleaned up by a corresponding conn->Terminate() thereafter.

But we see that the number of sessions is constantly growing, regardless of the fact that the backend TPT program terminates cleanly.
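
To see where the leftover sessions come from, something like this may help (a sketch; LogonSource in DBC.SessionInfo typically identifies the client host and utility):

SELECT SessionNo, UserName, LogonDate, LogonTime, LogonSource
FROM DBC.SessionInfo
WHERE UserName = 'SUPPORT';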
 
 
What is happening here?
Why are these TPT sessions left behind?

Cannot connect to database using Linux Teradata client 15.10


I have a Linux machine with Teradata client 14.10 which works nicely with the database.
On another Linux machine, where client 15.10 is installed, I cannot connect to the same database using the same connection string.
The error I get is "STATE=632, CODE=0, MSG=523 630", which, unfortunately, is not too helpful.
What could be the problem?

./tdxodbc  -C "DRIVER=Teradata;SERVER=teradata141;UID=dbc;PWD=dbc;DBCNAME=MSTEST"
Connecting with SQLDriverConnect("DRIVER=Teradata;SERVER=teradata141;UID=dbc;PWD=*;DBCNAME=MSTEST")...
adhoc: (SQL Diagnostics) STATE=632, CODE=0, MSG=523 630
ODBC connection closed.

Thanks

comparing 2 Teradata tables with different structure


Hi All,
 
I have to compare 2 tables that have different structures, i.e., one table has some additional columns at the end. I have to do a sample record comparison. Please suggest whether a MINUS query will work.
Also, please suggest how I can highlight the mismatched data. Earlier I was using a manual process of exporting the records to Excel and then comparing and highlighting the data values. Is it possible to highlight mismatched values by comparing within Teradata?
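
For the columns the two tables share, a MINUS sketch would look like this (table and column names are hypothetical; only the shared columns are projected):

-- rows in table_a (shared columns only) that have no exact match in table_b
SELECT col1, col2, col3 FROM table_a
MINUS
SELECT col1, col2, col3 FROM table_b;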
Thanks in Advance!


Difference in Acquisition phase of Mload & FastLoad


Hello Everyone,
I have a question on Mload & FastLoad.
Let's assume that I have an empty table and I am trying to load a file into it using MultiLoad or FastLoad. Based on the understanding below, I want to work out which of these utilities will perform better in the application phase.
To my knowledge, in MultiLoad, data gets loaded to a work table in the acquisition phase and is then moved to the actual table in the application phase. My questions here are:

  1. Will we have a copy of the work table on all AMPs?
  2. Will the parser hash the data from the file and pass each record to the work table on the corresponding AMP, or will it push records randomly as in FastLoad and then redistribute in the application phase?

In FastLoad, data first gets pushed onto the AMPs in the acquisition phase, and then in the application phase it gets redistributed across AMPs. Does this mean that hashing in FastLoad happens in the application phase, with the data moved across AMPs over the BYNET? If this is true, the application phase of FastLoad should take more time compared to MultiLoad, since hashing is involved there.
Please help and correct me if my understanding is wrong.

BTEQ TPT


I want to understand the difference in the usage of a TPT script vs. a BTEQ script.
 
Say I am writing a complex SQL with joins on 15 tables (each table having approximately 1 million rows) and a few ordered analytical functions. I want to load the data produced by this SQL into another table in the same database / same server. Which of the following is better, considering performance and also best practice?
 
1. Create a BTEQ script, add this complex SQL with an INSERT INTO the target table, and run that BTEQ from the command line (a minimal sketch follows below).
2. Create a TPT script, use a stream or export operator to retrieve the data, and then use an update operator to load it into a table.
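
A minimal sketch of option 1 (logon details, table names, and columns are hypothetical):

bteq << EOF > insert_select.log
.logon tdpid/user,password;
INSERT INTO mydb.target_table
SELECT a.key_col, b.amount,
       RANK() OVER (PARTITION BY a.key_col ORDER BY b.amount DESC) AS amt_rank
FROM mydb.t1 a
JOIN mydb.t2 b ON a.key_col = b.key_col;
.quit;
EOF

Note that with option 1 the INSERT...SELECT executes entirely inside the database, so no data leaves the server; that is typically the deciding consideration for same-system loads.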
 
 


TPT binary mode export


I tried exporting data using TPT in binary mode and it worked; the file was created.
But when I tried to examine the exported data file, I was not able to make sense of it.
Will it be an LPF (length-prefixed file)?
How do I interpret the data, i.e., what is the starting point and end point of a record?
What is the record terminator if we export data in binary mode?
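
As a starting point for inspecting the file (a sketch; the file name is assumed — if the format is length-prefixed, the first bytes of each record would hold a record length rather than data):

# dump the leading bytes with decimal offsets and hex values
od -A d -t x1 export.dat | head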
 


Teradata CRM Help Needed


Hi all,
I'm a technical recruiter working with Randstad Technologies, but I'm not here trying to badger people about new job opportunities (though if you're interested, let me know). I am working on a Teradata Administrator/Specialist role for an insurance client of ours on the east coast that needs a strong background with the Teradata CRM suite, and also interaction with tools such as Customer Interaction Manager (CIM), Real-time Interaction Manager (RTIM), and Digital Media Center (DMC). After weeks of searching and researching, I have yet to find anyone with that sort of experience/skill set. Am I way off in my search, or is there another way to go about finding these qualified candidates? Please let me know if anyone has any insight...
PS - $500 referral bonus paid by Randstad for placed candidates, so if you know of anyone (Teradata or otherwise), feel free to get in touch.
 
- Mike McKeon
mike.mckeon@randstadusa.com
860-256-8651


TTU 15.0 not properly installing or uninstalling


I need help manually removing TTU 15. It didn't install properly, so I tried uninstalling it in order to install it again. The uninstall does not work, and a repair install does not work either. How can I get it properly fixed on my laptop?
 
 


Date formatting error when porting SQL from TD 13 to TD 15


Hi
We are migrating from TD 13 to TD 15.
Following is the SQL that we need to run:

SELECT CAST( '15/11/03 00:00:00' AS TIMESTAMP(0) FORMAT 'YY/MM/DDbhh:mi:ss')

Output on TD15:
11/3/1915 00:00:00
Output on TD13, which is the correct date that is required:
11/3/2015 00:00:00
One way to fix this is to write the script with the complete year:

SELECT CAST( '2015/11/03 00:00:00' AS TIMESTAMP(0) FORMAT 'YY/MM/DDbhh:mi:ss')

The problem is that this script is used in a lot of places, and it is going to be a problem tracing all the places and replacing the dates.
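
If the scripts live on disk, tracing all the places might at least be scriptable (a sketch; the path and the exact FORMAT string to search for are assumptions):

# list every script that uses the two-digit-year FORMAT
grep -rn "FORMAT 'YY/" /path/to/sql/scripts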
Any help on how to fix this?
Thanks
Wasim


Scheduling Stored Procedures in Teradata


Good Afternoon All,
Hoping someone can help me out with an issue that's causing me a great deal of frustration...
I need to schedule a stored procedure to run at midnight every night (the procedure generates a new table). I had thought that Teradata Query Scheduler would be able to help me with this, but since downloading and installing it I am unable to open it. Every time I try, it comes up with the following 2 error pop-ups:
Query Scheduler Startup Error
(system time)|COMM |009616|QmCreateShm     |CreateFileMap|00005|00005

Query Scheduler Startup Error
(system time)|COMM |009616|Comm:DllMain     |CreateShm|00000|99999

I saw that someone had a similar error a few years ago (here https://forums.teradata.com/forum/tools/td-13-query-scheduler-startup-error-on-windows-7-64-bit) and went through all the troubleshooting steps suggested there, but they did not solve the issue. I have also tried uninstalling and reinstalling the scheduler, to no effect.

Additionally, now that I have installed Query Scheduler, I get the same errors when I start up SQL Assistant (although after the errors SQL Assistant runs fine).
 
My question is: does anyone know how to solve this issue and get Query Scheduler working? If not, does anyone know of an alternative tool that I can use to schedule queries ahead of time?
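
One commonly used alternative is to wrap the procedure call in a BTEQ script and let the Windows Task Scheduler run it at midnight (a sketch; the TDPID, credentials, and paths are hypothetical):

:: run_proc.bat -- schedule this with Windows Task Scheduler
bteq < C:\scripts\call_proc.bteq > C:\scripts\call_proc.log 2>&1

where C:\scripts\call_proc.bteq contains:

.logon mytdpid/myuser,mypassword;
CALL mydb.my_nightly_proc();
.quit;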
Thank you in advance!
 
I am running Windows 7 64-bit and using Teradata Tools 15.0.0.0.
 
 


Error while loading data from specific row number using Multiload


I have a CSV file in which some rows have fewer columns, which causes issues when loading with MultiLoad. I am getting the error: Access module error '61' received during 'pmReadDDparse' operation: 'Warning, too few columns !ERROR! Delimited Data Parsing error: Too few columns in row 1524856'.
I know which row has fewer columns (in this case, row # 1524856). So, to skip loading that row, I initially load my data with FROM 2 FOR 1524855, which works perfectly fine. But when I skip the error row and load FROM 1524858 THRU <the last record>, I still get the above error. Below is the MLOAD script written for this:
#!/bin/ksh

mload << !
.LOGTABLE DBNAME.TEST_LOGTABLE_MLOAD;
.LOGON tpid/userid,password;

DROP TABLE DBNAME.TEST_WT;
DROP TABLE DBNAME.TEST_ET;
DROP TABLE DBNAME.TEST_UV;

.BEGIN IMPORT MLOAD TABLES DBNAME.TEST
     WORKTABLES  DBNAME.TEST_WT
     ERRORTABLES DBNAME.TEST_ET
                 DBNAME.TEST_UV
        ERRLIMIT 5000
;

.LAYOUT TEST_LAYOUT;
          .FIELD COL1 * VARCHAR(100);
          .FIELD COL2 * VARCHAR(100);
          .FIELD COL3 * VARCHAR(100);
          .FIELD COL4 * VARCHAR(100);
          .FIELD COL5 * VARCHAR(100);
          .FIELD COL6 * VARCHAR(100);
          .FIELD COL7 * VARCHAR(100);
          .FIELD COL8 * VARCHAR(300);
          .FIELD COL9 * VARCHAR(300);
          .FIELD COL10 * VARCHAR(100);

.DML LABEL INSERT_TEST;
INSERT INTO DBNAME.TEST
(
      COL1,
      COL2,
      COL3,
      COL4,
      COL5,
      COL6,
      COL7,
      COL8,
      COL9,
      COL10
) VALUES
(
      :COL1,
      :COL2,
      :COL3,
      :COL4,
      :COL5,
      :COL6,
      :COL7,
      :COL8,
      :COL9,
      :COL10
)
;

.IMPORT INFILE /data/testdata.csv
        FROM 2  FOR 1524855
        FORMAT VARTEXT  ','
LAYOUT  TEST_LAYOUT
APPLY INSERT_TEST
;

.END MLOAD;
.LOGOFF;

!
The above run was successful.
To skip the error record and load the rest, I gave the IMPORT statement as below:
.IMPORT INFILE /data/testdata.csv
        FROM 1524858 THRU <last record>
        FORMAT VARTEXT  ','
LAYOUT  TEST_LAYOUT
APPLY INSERT_TEST
;
I am getting the error: Access module error '61' received during 'pmReadDDparse' operation: 'Warning, too few columns !ERROR! Delimited Data Parsing error: Too few columns in row 1524856'.
It looks like it is trying to read the data again from record 1 for the above import.
Please suggest how to skip the error record and continue loading the data.
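
As a workaround, the known bad row can be dropped from the file before loading (a sketch; this assumes the access module's row numbering matches the file's physical line numbering):

# delete line 1524856, then load the cleaned file end-to-end
sed '1524856d' /data/testdata.csv > /data/testdata_clean.csv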

 
