Channel: Teradata Downloads - Tools

Attempted to read or write protected memory error


Hello,
I am pretty new to Teradata, so I was hoping somebody could give me some insight into a problem I am having. Using Teradata SQL Assistant, I am trying to load a text file (477 MB) from my computer into a database on a server. The load always fails with a memory error after it has read about 1.0-1.1 million rows. This does not make sense to me, because I have 8 GB of RAM, and at the point the error is thrown only about 4 GB of total RAM is in use, with Teradata using about 1.7 GB of it.
The code used is:

INSERT INTO DB_NAME.TABLE_NAME VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,DB_NAME.TIME_STAMP())

The error window is titled "Unable to open History database" and the error says "Attempted to read or write protected memory. This is often an indication that other memory is corrupt..."
The log says: 

12/10/2014 9:05:23 AM
SQLA Version: 13.10.0.2
System.AccessViolationException
Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
   at Microsoft.Office.Interop.Access.Dao.DBEngineClass.OpenDatabase(String Name, Object Options, Object ReadOnly, Object Connect)
   at Teradata.SQLA.HistoryTbl.OpenDatabase() in F:\ttu1310_efix_snap\tdcli\qman\sqla\HistoryTbl.vb:line 508

 


Fast Export Data File Issue


Hi,
I am trying to export the column below using FastExport.
 
SELECT
CAST(COL1 as VARCHAR(4000))
from
TABLE;
 
When the actual size of COL1 exceeds 2559 characters, a new record containing only one alphabetic character gets added after that row.
I also observed one strange behaviour: usually when we export VARCHAR data with FEXP, each record starts with some control characters, which are just the 2-byte VARCHAR length indicator. But in the case above, when the VARCHAR record length exceeds 2559, that record comes out with no control characters, and a new one-character record is added after it in the file.
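For context, the export is run with a FastExport script essentially like this (a sketch only; the logtable, logon and file names are placeholders, and MODE RECORD FORMAT TEXT is what we normally use):

.LOGTABLE mydb.fexp_log;
.LOGON tdpid/user,password;
.BEGIN EXPORT SESSIONS 4;
.EXPORT OUTFILE col1_export.dat
        MODE RECORD FORMAT TEXT;
SELECT CAST(COL1 AS VARCHAR(4000))
FROM   mydb.mytable;
.END EXPORT;
.LOGOFF;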
 


Different Setting of MLoadDiscardDupRowUPI on Teradata 14.10 and Teradata 12


Hi all,
we have just migrated from Teradata 12 to Teradata 14.10, and we have a problem with MultiLoad and duplicate rows. On the old version (TD12) these rows were skipped because of the setting MLoadDiscardDupRowUPI = TRUE.
On the new system (TD14.10) MLoadDiscardDupRowUPI = FALSE. The Teradata 14.10 manual says:

  • TERADATA 14.10: MLoadDiscardDupRowUPI = FALSE: "MultiLoad logs rows with UPI violations to the application error table."

The Teradata 12 manual says:
MLoadDiscardDupRowUPI = TRUE: "MultiLoad logs rows with UPI violations to the application error table."
So it seems we should have the same behavior on the two systems, but on Teradata 14.10 rows are being discarded to the ERR2 table due to UPI violations, which did not happen on Teradata 12. We have run tests with the same data on both systems.
Could you please help me understand this strange behavior?
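For reference, this is how we check (and could change) the setting with the dbscontrol utility on a node; a sketch of the interactive session, since the field number for MLoadDiscardDupRowUPI differs by release and we read it from the display output first:

dbscontrol
display general                         (find MLoadDiscardDupRowUPI and its field number)
modify general <field number> = true
write                                   (save the change)
quit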
 
Thanks
 


confidence levels wrt date of stats


Will the optimizer check the date on which the stats were collected before deciding the confidence levels?


Teradata tpt


Hi All, I am new to this forum and to Teradata.

I am trying to create a Unix script that will sync a table from one system to another (for COB sync) using the TPT utility.

 

I need the script to unload from the source table and load to the target table with TPT, ideally unloading the file in delimited format. I also need to generate the DML for the unload file and save it for future use.

 

Can you please tell me if there is anything in TPT like the .ml script that FastExport creates automatically; if not, how can I achieve that?

I also want the TPT load to be an upsert, which tdload is not doing. I tried tdload, but it only does inserts.
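To frame what I am after on the load side, this is roughly the upsert pattern I have in mind with the Update operator (just a sketch: the schema, table, file and variable names are placeholders, not a working job):

DEFINE JOB UPSERT_FROM_FILE
(
  DEFINE SCHEMA SRC_SCHEMA
  (
    cust_id   VARCHAR(10),
    cust_name VARCHAR(50)
  );

  DEFINE OPERATOR FILE_READER
  TYPE DATACONNECTOR PRODUCER
  SCHEMA SRC_SCHEMA
  ATTRIBUTES
  (
    VARCHAR FileName      = 'cust.dat',
    VARCHAR Format        = 'Delimited',
    VARCHAR TextDelimiter = '|'
  );

  DEFINE OPERATOR UPSERT_OPER
  TYPE UPDATE
  SCHEMA *
  ATTRIBUTES
  (
    VARCHAR TdpId        = @TdpId,
    VARCHAR UserName     = @UserName,
    VARCHAR UserPassword = @UserPassword,
    VARCHAR TargetTable  = 'TGT_DB.CUSTOMER',
    VARCHAR LogTable     = 'TGT_DB.CUSTOMER_LOG'
  );

  APPLY
  ( 'UPDATE TGT_DB.CUSTOMER SET cust_name = :cust_name WHERE cust_id = :cust_id;',
    'INSERT INTO TGT_DB.CUSTOMER (cust_id, cust_name) VALUES (:cust_id, :cust_name);'
  ) INSERT FOR MISSING UPDATE ROWS
  TO OPERATOR ( UPSERT_OPER[1] )
  SELECT * FROM OPERATOR ( FILE_READER[1] );
);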

 


TPT - passing CHAR SET as variable


Hi all
I'm new to TPT, so please be gentle with me. I have a generic script which will load data files to a Teradata table. One of my files contains Unicode (UTF16) data.
I have successfully loaded the data by invoking a TPT script with the following statement:
USING CHAR SET UTF16
DEFINE JOB EXTRACT_TABLE_LOAD
.....
I have also successfully loaded the data by declaring a variable in the .vars file to define the CHAR SET E.g.
DATA_CHARACTER_SET='UTF16'
...
USING CHAR SET @DATA_CHARACTER_SET
DEFINE JOB EXTRACT_TABLE_LOAD
However, what I want to do is define a variable which holds the entire USING statement. Having tried lots of different permutations, I continually get the error "Keywords 'DEFINE JOB' are missing from the beginning of the job script.". Is there any restriction on doing this? I often see or hear the statement "you can use variables everywhere in TPT", but it appears that I can't in this scenario.
My attempt:
Vars:
DATA_CHARACTER_SET='UTF16'
COMMAND_LINE ='USING CHAR SET ' || @DATA_CHARACTER_SET
TPT script statements
@COMMAND_LINE
DEFINE JOB EXTRACT_TABLE_LOAD
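For completeness, this is how I invoke the working version, with just the charset value passed as a job variable (file and job names here are placeholders):

tbuild -f extract_table_load.tpt -v extract_table_load.vars -u "DATA_CHARACTER_SET='UTF16'" extract_job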
Thanks for any assistance.
 
 


TPT & Quoted data bug?


Hi,
I'm currently using TPT 14.10.00.05 with the default configuration:

Native I/O (non-PIPE)

     Data Format: 'DELIMITED'

     Character set: ASCII

     Delimiter = '|' (length 1), x'7C'

     EscapeTextCharacter = '' (length 0), x''

     Quoted data: No

     OpenQuoteMark = '"' (length 1), x'22'

     CloseQuoteMark = '"' (length 1), x'22'
I'm getting a parse error on this line:
1|2|string3|"string4|string5|6|7
Parameter 4 is not being parsed correctly: because of the opening quote, everything up to the end of the line is read as the parameter.
Quoted data is set to No!
So why is TPT attaching importance to the (") and expecting a CloseQuoteMark? Surely it should just be treated as an ordinary character?
Is this a bug in TPT?
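For reference, this is the sort of explicit setting I would expect to control it on the DataConnector producer (operator definition abbreviated; file name and schema are placeholders):

DEFINE OPERATOR FILE_READER
TYPE DATACONNECTOR PRODUCER
SCHEMA INPUT_SCHEMA
ATTRIBUTES
(
  VARCHAR FileName      = 'input.dat',
  VARCHAR Format        = 'Delimited',
  VARCHAR TextDelimiter = '|',
  VARCHAR QuotedData    = 'No'    /* quotes should then be plain data characters */
);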


Script Repository


In a regionalized environment where no single administrator has access to all servers, what tool/repository do you use to share scripts between all administrators?
Thanks!


Cached credentials causing ABU jobs to fail?


Hello,
We are running into issues with performing backups using ABU.

We are running version 14 of Teradata on 2 nodes.

The specific issue is that the jobs are failing at the initial logon with the error:

Failure 8017: The UserId, Password or Account is invalid.

 

When the ABU GUI is initialized, it prompts for an Administrator password; the password field is empty, and when I enter the p/w, it is accepted.

From within the GUI, selecting Task > Backup Selected Objects... results in an additional credential check.

The UserId and Password fields here are populated; when I click Connect with the cached credentials, the ABU errors:

Error 505: Query to Teradata failed with DBS error 8017: The UserId, Password or Account is invalid.

If I manually enter the correct password for the listed account and click Connect, it is accepted.

 

If I attempt to run a saved backup job, or create a new backup job, the job fails with the same error.

If I manually edit the saved ARC script for a saved job, and manually enter the logon credentials into the script instead of using the LOGON$ string, the job succeeds.

 

I have followed the directions in the ABU Installation and User Guide (pp. 21-22) for re-running the ABU configuration script, but this does not seem to have corrected the ABU cached credentials. It did result in the creation of a new 'defaults' configuration file; however, since the password values in the file are hashed, I cannot absolutely verify that the file has the correct passwords for the listed UserIDs.

Is there some other place that could be caching the credentials used by ABU, or is there something else here that I'm not seeing?

Thanks,

 

Mike


TERADATA PROFILER QUERIES


Hi All,
Can anyone please help me with extracting the Teradata Profiler queries? I would like to know, in as much detail as possible, what kind of queries Teradata Profiler issues.


MLOAD and loading of: Empty Strings and Null Strings


When loading data from flat (ASCII text) files into tables, I find the following. If the file is a delimited file, with all the fields defined as VARCHAR, then columns containing blank values (all spaces, not just two consecutive delimiters) are loaded as NULLs into target columns defined as CHAR or VARCHAR. Whereas if the file is a fixed-width file, with the layout defined as CHAR for the columns, those columns are loaded with an empty string. My questions are: (1) What is the theory behind this behavior, if it is really by design? (2) How do I make MLOAD load an empty string (without using a CASE expression) when loading data from delimited files?
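To illustrate the two cases, the layouts look roughly like this (a sketch only; field names, positions and lengths are made up):

.LAYOUT DELIM_LAYOUT;            /* delimited file, read with .IMPORT ... FORMAT VARTEXT '|' */
.FIELD col1 * VARCHAR(10);       /* blank fields come through as NULL in the target table    */
.FIELD col2 * VARCHAR(20);

.LAYOUT FIXED_LAYOUT;            /* fixed-width text file                                    */
.FIELD col1 1  CHAR(10);         /* blank columns come through as an empty string / spaces   */
.FIELD col2 11 CHAR(20);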


TPT log file when run from Informatica


If we run a TPT script directly using tbuild, it generates a job id; based on that, we can check the current status using twbstat and view the log file using tlogview.
But if we run TPT from Informatica, how do we get the job id and the TPT log file?
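For the tbuild case, the commands I mean are roughly these (a sketch; exact options may vary by TTU release):

twbstat                        # list the TPT job ids/names known on this machine
tlogview -j <job id> -f '*'    # dump the private logs for that job id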
 
 
Regards,
Sunny


Teradata Sql Assistant V13.10 error


I installed Teradata SQL Assistant 13.10 on a new machine. When I open it, it gives the error "Microsoft Access Database Engine [ACEDAO.dll] not found, or not registered correctly. History will not be available." With this error, we are not able to see the history. How can I fix this?


plink ssh connection with no output


I am trying to make an SSH connection with plink from C#, but I get empty output.

Process p = new Process();
p.StartInfo.FileName = @"plink.exe";
// Build the plink command line (Pass, User, Host and Cmd are set elsewhere)
param = "-ssh -pw " + Pass + "" + User + "@" + Host + "" + Cmd;

string cd = Environment.CurrentDirectory;

if (File.Exists("plink.exe") == false)
{
    throw new Exception("SSHClient: plink.exe not found.");
}
else
{
    // Redirect all standard streams so the output can be captured
    p.StartInfo.UseShellExecute = false;
    p.StartInfo.RedirectStandardInput = true;
    p.StartInfo.RedirectStandardOutput = true;
    p.StartInfo.RedirectStandardError = true;
    p.StartInfo.CreateNoWindow = true;
    p.StartInfo.Arguments = param;

    p.Start();
    // Also write the command line to the process's stdin
    p.StandardInput.Write("plink.exe " + param);

    standerOut = p.StandardOutput;
    string dd = p.Responding.ToString();

    string f = p.StandardOutput.ReadToEnd();

    // Keep reading lines until the process exits
    while (!p.HasExited)
    {
        if (!standerOut.EndOfStream)
        {
            strReturn += standerOut.ReadLine() + Environment.NewLine;
        }
    }
    string x = p.StandardError.ToString();
}

MessageBox.Show(strReturn);

1. dd is true
2. f is empty
3. for x I get "System.IO.StreamReader"
4. strReturn is "" (empty result)
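For reference, this is the simpler pattern I have been trying to get to (a sketch; the -batch flag and reading both streams before WaitForExit are my assumptions about what is needed, and host/user/pass/cmd are placeholders):

using System;
using System.Diagnostics;

class SshClient
{
    // Run plink.exe once with the full command line and capture its output.
    static string RunPlink(string host, string user, string pass, string cmd)
    {
        var psi = new ProcessStartInfo
        {
            FileName = "plink.exe",
            // Note the spaces between arguments; without them the password,
            // user@host and command all run together into one token.
            Arguments = "-ssh -batch -pw " + pass + " " + user + "@" + host + " " + cmd,
            UseShellExecute = false,
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            CreateNoWindow = true
        };

        using (var p = Process.Start(psi))
        {
            // Read the streams to the end, then wait for the process to exit.
            string output = p.StandardOutput.ReadToEnd();
            string error = p.StandardError.ReadToEnd();
            p.WaitForExit();

            if (!string.IsNullOrEmpty(error))
                Console.Error.WriteLine(error);   // plink writes prompts and errors here

            return output;
        }
    }
}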

Please please help
tx
matan


Named Pipe jobs failing on checkpoint file


We utilize an ETL tool from a vendor I wish to remain anonymous (starts with an S and ends with a P). :)  This particular vendor has chosen to execute TPTLOAD via tbuild instead of the TPTAPI; there is no option in the GUI to execute via the TPTAPI. We are in the process of upgrading this tool, and we are starting to experience random named-pipe failures that may be related to checkpointing. When a TPTLOAD job fails, it creates a checkpoint file in the checkpoint directory on the server, so that the user can restart the job from the failure point. This is all very logical and straightforward in my mind. Where things start to fall apart is when a "named pipes" TPTLOAD is executed via tbuild. By definition a named-pipe job cannot be restarted via a checkpoint file; this feature is reserved for physical files being read from disk. If a TPTLOAD "named pipes" job fails, why would the Data Connector look for a checkpoint file in the checkpoint directory on the server and then fail if it is not found? If the checkpoint file were found, it could not be used as part of a "named pipes" restart process anyway... correct?
 
If a TPTLOAD "named pipes" job fails, we normally just clean up the error tables and restart the job from the beginning.  We can do this because we do not have very tight SLAs or very large amounts of data.  Sometimes (not sure what triggers this), when we try to restart, the job fails again because it found a checkpoint file associated with a prior run failure.  A couple of specific questions that I have are:
 
1. Are checkpoint files always created for TPTLOAD "named pipe" jobs even though they will never be used?  If so, why?
2. When a TPTLOAD "named pipe" job fails, are the checkpoint files always created and left in the checkpoint directory on the server?  If so, do these files always need to be deleted before the "named pipe" job can be re-submitted?
3. Any idea why a GUI generated tbuild "named pipes" job would fail, be re-submitted, and then fail again because a checkpoint file exists?
 
I am really trying to understand the relationship between "named pipe" jobs and checkpoint files since they seem to be mutually exclusive.
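In case it helps clarify question 2: is the expectation that we should be doing something like this by hand before every re-submit? (Paths and the job name below are examples only; our checkpoint directory is the tbuild default on the server.)

# remove leftover checkpoint files for the failed job before re-submitting
ls /opt/teradata/client/tbuild/checkpoint/MYJOB*
rm /opt/teradata/client/tbuild/checkpoint/MYJOB*

# or, if I have understood the docs, the TPT utility that does the same by job name
twbrmcp MYJOB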
 
Thanks,
 
Joe


Teradata SQL Grammar


Hi,
I am planning to write a custom parser for Teradata SQL. Is the Teradata SQL grammar available for download somewhere? Is this grammar an open specification?
Thanks,
Sundar


Write Hierarchial file using Fast export


Hi All,
I would like to know if there is a way to write a hierarchical file using the FastExport utility.
For example, let's assume we have 5 levels of hierarchy (L1, L2, L3, L4, L5),
and let's assume an L1 can have two L2's, three L3's, one L4 and one L5.
Each of these levels is a separate table.
Assuming there are two records at the L1 level, the FastExport file should look something like:
L1 (1)
L2 (Record corresponding to L1(1))
L2 (Record corresponding to L1(1))
L3 (Record corresponding to L1(1))
L3 (Record corresponding to L1(1))
L3 (Record corresponding to L1(1))
L4 (Record corresponding to L1(1))
L5 (Record corresponding to L1(1))
L1 (2)
L2 (Record corresponding to L1(2))
L2 (Record corresponding to L1(2))
L3 (Record corresponding to L1(2))
L3 (Record corresponding to L1(2))
L3 (Record corresponding to L1(2))
L4 (Record corresponding to L1(2))
L5 (Record corresponding to L1(2))

 

Is it possible to write such a hierarchical file using FastExport? I would appreciate it if anyone could provide some pointers.
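The only approach I have thought of so far is a single SELECT that UNION ALLs the level tables with sort keys, so the rows come out in hierarchy order; a rough sketch (table, key and column names are made up):

SELECT rec
FROM (
  SELECT l1_key, 1 AS lvl, 0 AS seq, 'L1 ' || l1_col AS rec FROM db.l1_table
  UNION ALL
  SELECT l1_key, 2, ROW_NUMBER() OVER (PARTITION BY l1_key ORDER BY l2_key),
         'L2 ' || l2_col FROM db.l2_table
  UNION ALL
  SELECT l1_key, 3, ROW_NUMBER() OVER (PARTITION BY l1_key ORDER BY l3_key),
         'L3 ' || l3_col FROM db.l3_table
  UNION ALL
  SELECT l1_key, 4, 1, 'L4 ' || l4_col FROM db.l4_table
  UNION ALL
  SELECT l1_key, 5, 1, 'L5 ' || l5_col FROM db.l5_table
) AS dt
ORDER BY dt.l1_key, dt.lvl, dt.seq;

This would be the SELECT inside the .EXPORT; I am not sure it is the right way, hence the question.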


TERADATA TPUMP


Hi all,
Can anyone let me know the detailed functionality of the TPump utility? On a PI table, the row distribution is fairly even, but in the case of NoPI, how is even distribution achieved?
 
In the case of FastLoad on NoPI tables, if the number of FastLoad sessions is less than the number of AMPs in the system, then only those AMPs will be used for receiving the data, and the deblocker task will then use a round-robin technique to distribute the rows evenly to the other AMPs.
In the case of a TPump load on a NoPI table, I read that the hashing is done on the query ID, and all the rows that TPump fetches will be loaded to that AMP. Let's say we have written a query to load data into a NoPI table using TPump. The query may fetch one row or multiple rows (say 100 rows). Since in TPump the hashing is done on the query ID, the output is a 32-bit row hash; if we take 16 or 20 bits to map to an AMP, all 100 rows go to the same AMP. If this is the case, does it not lead to skewing?
 
Is the deblocker task performing a round-robin technique, as in the case of FastLoad, to accomplish even distribution?
 
Please help me in understanding the TPUMP utility functionality.
 
Thanks in advance,
 
Best Regards,
Shankar


SQL Assistant "execute the query one statement at a time"


SQL Assistant 15.0 on Win7 32 bit enterprise connected to TD 14.10.
I tried to create a simple macro: create macro testdb.test as (select 1;); and it returns "Execute the query one statement at a time".
I thought this might have something to do with ODBC/JDBC, but Teradata Administrator connected through the same ODBC connection works, and so does Studio.
Any suggestions (other than "don't use SQL Assistant")? (Doctor, doctor, it hurts when I do that -- Then don't do that.)


How can I perform the same with a TPT script ?


Hello ,

 

I very often use this kind of code based on a BTEQ export, and I do not know how to convert it to TPT.

Thanks for your help

bteq << EOF
 .logon ${CNX_SRC};
 .set width 5000;
 .EXPORT DATA FILE=${FIC_DDL_TABLE_SRC}
 .SET RECORDMODE OFF;
 .set titledashes off;

 show table  ${SRC_TBL};
 .IF ERRORCODE <> 0 THEN .QUIT 1

 show stat on ${SRC_TBL};

 .QUIT 0
EOF
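My understanding so far is that the SELECT part maps to the Export operator feeding a DataConnector consumer, something like the sketch below (the schema, select statement and variable names are placeholders); but SHOW TABLE and SHOW STATS are not SELECT statements, so I do not see how they fit into SelectStmt, which is really my question:

DEFINE JOB EXPORT_TO_FILE
(
  DEFINE SCHEMA ONE_COL ( out_line VARCHAR(5000) );

  DEFINE OPERATOR EXPORT_OPER
  TYPE EXPORT
  SCHEMA ONE_COL
  ATTRIBUTES
  (
    VARCHAR TdpId        = @TdpId,
    VARCHAR UserName     = @UserName,
    VARCHAR UserPassword = @UserPassword,
    VARCHAR SelectStmt   = 'SELECT CAST(some_col AS VARCHAR(5000)) FROM ' || @SRC_TBL || ';'
  );

  DEFINE OPERATOR FILE_WRITER
  TYPE DATACONNECTOR CONSUMER
  SCHEMA *
  ATTRIBUTES
  (
    VARCHAR FileName = @FIC_DDL_TABLE_SRC,
    VARCHAR Format   = 'Text'
  );

  APPLY TO OPERATOR ( FILE_WRITER[1] )
  SELECT * FROM OPERATOR ( EXPORT_OPER[1] );
);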
 
