SQL Data Definition Language Syntax and Examples > CREATE TABLE
  
Syntax - CREATE TABLE … AS

[Syntax diagram not reproduced.]

where:
 
Syntax Element …
Specifies …
NONTEMPORAL Request Modifier
 
You can only preface a CREATE TABLE request with the NONTEMPORAL request modifier if you are copying an existing table definition to a new name using the CREATE TABLE … AS syntax. See Temporal Table Support for rules and examples.
NONTEMPORAL
See Temporal Table Support for information about this request modifier.
Table Kind
MULTISET
 
SET
whether duplicate rows are permitted in the table (duplicate row control). See SQL Data Definition Language Detailed Topics for details.
If there are uniqueness constraints on any column or set of columns in the table definition, then the table cannot have duplicate rows even if it is declared as MULTISET.
Some client utilities have restrictions with respect to MULTISET tables. See the CREATE TABLE (Table Kind Clause) topic “SET and MULTISET” in SQL Data Definition Language Detailed Topics.
You cannot create the following kinds of tables as SET tables.
Temporal
Column‑partitioned
Unpartitioned NoPI
GLOBAL TEMPORARY
 
GLOBAL TEMPORARY TRACE
 
VOLATILE
whether the table to be created is a global temporary table, global temporary trace table, or a volatile table.
GLOBAL TEMPORARY specifies that a temporary table definition be created and stored in the data dictionary for future materialization. You can create global temporary tables by copying a table WITH NO DATA, but not by copying a table WITH DATA.
You cannot create a column‑partitioned global temporary table.
GLOBAL TEMPORARY TRACE specifies that a global temporary trace table definition be created and stored in the data dictionary. See “CREATE GLOBAL TEMPORARY TRACE TABLE” for further information.
VOLATILE specifies that a volatile table be created, with its definition retained in memory only for the course of the session in which it is defined.
You cannot create a column‑partitioned volatile table.
See the CREATE TABLE (Table Kind Clause) topic “Volatile Tables” in SQL Data Definition Language Detailed Topics for further information.
If you do not specify one of these options for the table, then Teradata Database defines it to be a persistent user data table. Less formally, such tables are also referred to as base tables, though hash and join index tables are also base tables in the strict sense of the term.
Both global temporary and volatile tables can be created as NoPI tables or as temporal tables.
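As an illustration of the table kind options (all database, table, and column names here are hypothetical):

```sql
-- Persistent MULTISET table: duplicate rows are allowed because no
-- uniqueness constraint is defined on any column set.
CREATE MULTISET TABLE sales_db.web_clicks (
    click_ts TIMESTAMP(0),
    page_id  INTEGER,
    user_id  INTEGER
) PRIMARY INDEX (user_id);

-- Global temporary table: the definition is stored in the data
-- dictionary and materialized per session on first access.
CREATE GLOBAL TEMPORARY TABLE sales_db.gt_staging (
    order_id INTEGER,
    amount   DECIMAL(10,2)
) ON COMMIT DELETE ROWS;

-- Volatile table: the definition exists only for the current session.
CREATE VOLATILE TABLE session_scratch (
    item_id INTEGER,
    qty     INTEGER
) ON COMMIT PRESERVE ROWS;
```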
Table Name
database_name.table_name
 
user_name.table_name
the name of the new table and the name of the database or user in which it is to be contained if different from the current database.
See SQL Fundamentals for the rules for naming database objects.
If the name is not fully qualified, then the system assigns the name of the default database for the current session.
Table Options
FALLBACK
 
NO FALLBACK
duplicate copy protection or no duplicate copy protection for the table.
When you specify FALLBACK, Teradata Database creates and stores duplicate copies of rows in the table.
The default for this option is established by a CREATE DATABASE, CREATE USER, MODIFY DATABASE, or MODIFY USER request for the database in which the table is to be created.
When a hardware read error occurs, the file system reads the fallback copy of the data and reconstructs the rows in memory on their home AMP. Support for Read From Fallback is limited to the following cases:
Requests that do not attempt to modify data in the bad data block
Primary subtable data blocks
Reading the fallback data in place of the primary data. The system does not attempt to fix the bad data block.
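The FALLBACK option is specified in the comma-separated table options list that follows the table name. A minimal sketch, with hypothetical names:

```sql
-- FALLBACK stores a duplicate copy of every row, enabling the file
-- system to Read From Fallback when a hardware read error occurs.
CREATE TABLE sales_db.orders, FALLBACK (
    order_id INTEGER NOT NULL,
    amount   DECIMAL(10,2)
) UNIQUE PRIMARY INDEX (order_id);
```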
PROTECTION
an optional keyword that has no effect. It can be specified after the FALLBACK keyword.
WITH JOURNAL TABLE = database_name.table_name
 
WITH JOURNAL TABLE = user_name.table_name
the name of the permanent journal table to be used for the data table being created.
table_name can be contained within the same database as the table being created or in a different database:
If you specify a database or user name, then that name must exist and table_name must have been defined as its default permanent journal name.
If you do not specify a database or user name, then the default database or user for the current session is assumed and table_name must have been defined as the default permanent journal table.
See SQL Fundamentals for the rules for naming database objects.
You cannot specify permanent journal options for an unpartitioned NoPI table or for a column‑partitioned table.
This clause is required if the CREATE TABLE request specifies some level of journaling, but a default journal table was not defined for the database in which the new table is being created.
If you specify permanent journaling for this table, but do not specify a journal table with this clause, then you must define a default permanent journal table for the containing database.
If a default permanent journal table was defined for the database, then this clause can be used to override the default.
LOG
 
NO LOG
whether transient journaling occurs for a global temporary or volatile table or not. This option pertains to global temporary and volatile tables only.
LOG specifies that any updates, inserts, or deletes made to the global temporary or volatile table be logged in the transient journal. This is the default.
NO LOG specifies that transient journal logging is not to be performed. Because Teradata Database does no transient journaling for the table, performance is enhanced.
If any sort of abort or restart occurs and the table is defined as NO LOG, then any updates, inserts, or deletes made to the global temporary or volatile table cannot be recovered.
If the table is defined as NO LOG, contents of any materialized global temporary table or any volatile table are emptied when a transaction aborts.
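For example (hypothetical names), a volatile table that trades recoverability for load performance:

```sql
-- NO LOG suppresses transient journaling; after an abort or restart
-- the table's contents are emptied rather than rolled back.
CREATE VOLATILE TABLE temp_results, NO LOG (
    id  INTEGER,
    val FLOAT
) ON COMMIT PRESERVE ROWS;
```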
BEFORE JOURNAL
 
NO BEFORE JOURNAL
 
DUAL BEFORE JOURNAL
the number of before change images to be maintained.
If the JOURNAL keyword is specified without NO or DUAL, then a single copy of the image is maintained unless FALLBACK is in effect or is also specified.
If journaling is requested for a table that uses fallback protection, DUAL images are maintained automatically.
Permanent journaling is not supported for unpartitioned NoPI tables, column‑partitioned tables, global temporary tables, or volatile tables.
AFTER JOURNAL
 
NO AFTER JOURNAL
 
DUAL AFTER JOURNAL
 
LOCAL AFTER JOURNAL
 
NOT LOCAL AFTER JOURNAL
the type of after-image to be maintained for the table; any existing images are not affected until the table is updated.
The NO and DUAL options specify the number of after-change images to be maintained for the table.
The NOT LOCAL and LOCAL options specify whether single after-image journal rows for non‑fallback data tables are written on the same virtual AMP (LOCAL) as the changed data rows, or on another virtual AMP in the cluster (NOT LOCAL).
Permanent journaling is not supported for unpartitioned NoPI tables, column‑partitioned tables, global temporary tables, or volatile tables.
JOURNAL
 
BEFORE JOURNAL
 
AFTER JOURNAL
the type of image to be maintained for the table.
This option can appear twice in the same request: once to specify a BEFORE or AFTER image, and again to specify the alternate type.
The default for this option is set by a CREATE DATABASE, CREATE USER, or MODIFY USER request for the database in which the table is to be created.
If you specify either BEFORE or AFTER, then Teradata Database maintains only the default journal image for that type of journal.
For example, if you specify AFTER, before-image journaling remains at the default setting.
If you specify both BEFORE and AFTER, then Teradata Database maintains both journal images, but the two specifications must not conflict with one another.
If you specify neither BEFORE nor AFTER, then Teradata Database maintains both journal images.
Permanent journaling is not supported for unpartitioned NoPI tables, column‑partitioned tables, global temporary tables, or volatile tables.
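A sketch combining the journaling options; journal_db.default_journal is a hypothetical permanent journal table assumed to already exist:

```sql
-- Non-fallback table with dual before-images and a single after-image
-- written on the same AMP as the changed rows (LOCAL).
CREATE TABLE sales_db.ledger, NO FALLBACK,
    WITH JOURNAL TABLE = journal_db.default_journal,
    DUAL BEFORE JOURNAL,
    LOCAL AFTER JOURNAL (
    entry_id INTEGER NOT NULL,
    amount   DECIMAL(12,2)
) UNIQUE PRIMARY INDEX (entry_id);
```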
FREESPACE integer
the percent of free space that remains on a cylinder during certain loading operations. This option sets the percent freespace attribute for the table to integer.
Caution: You cannot use PACKDISK to change this attribute after you have created a table.
Instead, submit an ALTER TABLE request (see “ALTER TABLE”) to change the free space percent value for the table and then immediately afterward submit a PACKDISK command using the same free space percent value (see the documentation for the Ferret utility in Utilities: Volume 1 (A-K) for more information and instructions for running PACKDISK).
If you specify integer as a DECIMAL or FLOATING POINT constant, Teradata Database converts it to an integer by truncating its value.
If the specified value does not fall within the allowable range (0 to 75%, inclusive), the system returns an error message to the requestor.
The following operations honor the FREESPACE setting you specify.
FastLoad data loads
MultiLoad data loads into unpopulated tables
Archive/Recovery RESTORE
Table Rebuild
Reconfiguration
Ferret PACKDISK
MiniCylPack
If few cylinders are available, and storage space is limiting, MiniCylPack might not be able to honor the specified FREESPACE value.
ALTER TABLE request that adds fallback protection to a table
CREATE INDEX request that defines or redefines a secondary index on a populated table
Creation of a fallback table during an INSERT … SELECT operation into an empty table that is defined with fallback.
Creation of a secondary index during an INSERT … SELECT operation into an empty table that is defined with a secondary index.
The following operations do not honor the FREESPACE setting you specify.
SQL INSERT operations
Teradata Parallel Data Pump INSERT operations
MultiLoad data loads into populated tables.
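For example (hypothetical names):

```sql
-- Reserve 10% of each cylinder as free space during the bulk loading
-- operations listed above; valid values are 0 through 75.
CREATE TABLE sales_db.history, FREESPACE = 10 PERCENT (
    event_ts TIMESTAMP(0),
    payload  VARCHAR(200)
) PRIMARY INDEX (event_ts);
```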
PERCENT
an optional keyword you can type following the value for integer.
DEFAULT MERGEBLOCKRATIO
 
MERGEBLOCKRATIO = integer
 
MERGEBLOCKRATIO = integer PERCENT
 
NO MERGEBLOCKRATIO
the merge block ratio to be used for this table when Teradata Database combines smaller data blocks into a single larger data block.
You can only specify a numeric merge block ratio for permanent base tables and permanent journal tables. You cannot specify a numeric merge block ratio for global temporary or volatile tables. You can, however, specify either the DEFAULT MERGEBLOCKRATIO or the NO MERGEBLOCKRATIO options for global temporary and volatile tables.
If you do not specify this option, Teradata Database uses the value that is specified for the MergeBlockRatio parameter in the DBS Control record at the time a data block merge operation on the table begins.
The merge block ratio for a table does not affect the resulting block size when only a single block is modified. Setting the merge block ratio too high can cause the resulting merged block to be split during subsequent modifications.
The following list explains what the various options mean. See Utilities: Volume 1 (A-K) for details.
DEFAULT MERGEBLOCKRATIO, which is the default, means that Teradata Database uses the value for MergeBlockRatio that is defined by the DBS Control record at the time a data block merge operation on this table begins.
MERGEBLOCKRATIO = integer PERCENT means that Teradata Database uses the value specified by integer as the merge block ratio when a data block merge operation occurs on this table.
The valid range for integer is from 1 to 100, inclusive. The default value is 60.
PERCENT is an optional keyword that has no effect.
NO MERGEBLOCKRATIO means that Teradata Database does not merge small data blocks for this table.
Whether Teradata Database uses the merge block ratio you specify depends on the setting for the DBS Control parameter DisableMergeBlocks. See Utilities: Volume 1 (A-K) for details.
If DisableMergeBlocks is set FALSE, then the value you specify for MERGEBLOCKRATIO overrides the system‑wide default setting for the DBS Control parameter MergeBlockRatio.
If DisableMergeBlocks is set TRUE, then Teradata Database ignores all table‑level settings for the merge block ratio and does not merge data blocks for any table in the system.
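For example (hypothetical names):

```sql
-- Merge small data blocks up to 75% of the maximum block size for
-- this table, overriding the DBS Control MergeBlockRatio default.
CREATE TABLE sales_db.archive, MERGEBLOCKRATIO = 75 PERCENT (
    doc_id INTEGER,
    body   VARCHAR(1000)
) PRIMARY INDEX (doc_id);

-- Disable data block merging for this table entirely.
CREATE TABLE sales_db.hot_updates, NO MERGEBLOCKRATIO (
    k INTEGER,
    v INTEGER
) PRIMARY INDEX (k);
```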
DATABLOCKSIZE = data_block_size
 
DATABLOCKSIZE = data_block_size BYTES
 
DATABLOCKSIZE = data_block_size KBYTES
 
DATABLOCKSIZE = data_block_size KILOBYTES
the maximum data block size for blocks that contain multiple rows. You can express the value either as a decimal or integer number or using exponential notation. For example, you can write one thousand as either 1000 or 1E3.
This specification is optional. If you do not specify DATABLOCKSIZE, then data blocks default to the sizes set in the PermDBSize and JournalDBSize fields of the DBS Control record (see Utilities: Volume 1 (A-K) for more information about DBS Control).
For systems running without large cylinders, the minimum data block size you can specify is 9,216 bytes (18 sectors). You can specify a value of 8,960 bytes (17.5 sectors), but Teradata Database rounds that value up to 9,216 bytes internally, and that is the actual data block size the system uses.
For systems running with large cylinders, the minimum data block size you can specify is 21,504 bytes (42 sectors). You can specify a value of 21,248 bytes (41.5 sectors), but Teradata Database rounds that value up to 21,504 bytes internally, and that is the value the system actually uses.
The Teradata default is 130,560 bytes for all systems.
The maximum data block size value is 130,560 bytes (255 sectors) for all systems, whether running with large cylinders or not.
If you specify BYTES, then the data block size is the result of rounding data_block_size to the nearest multiple of the sector size in bytes.
Rounding is upward when the value is exactly halfway between two consecutive multiples.
If you specify KILOBYTES, then the data block size is the result of rounding 1024 × data_block_size to the nearest multiple of the sector size in bytes.
If you specify nothing, then BYTES is assumed by default.
The default values for different data block types are determined by different DBS Control parameters. See Utilities: Volume 1 (A-K) for more detailed information about setting default data block sizes.
For a permanent data block, the default size is determined by the PermDBSize parameter.
For transient and permanent journals, the default data block size is determined by the JournalDBSize parameter.
MINIMUM DATABLOCKSIZE
to set the DATABLOCKSIZE for this table to the minimum value that can be set.
MINIMUM DATABLOCKSIZE sets the maximum data block size for blocks that contain multiple rows to the minimum legal value of 21,504 bytes (42 sectors) for systems running with large cylinders or 9,216 bytes (18 sectors) for systems running without large cylinders.
You can abbreviate MINIMUM as MIN.
For details about minimum data block sizes, including the release‑specific rules for specifying a value for MINIMUM DATABLOCKSIZE, see the topic “DATABLOCKSIZE” in SQL Data Definition Language Detailed Topics.
MAXIMUM DATABLOCKSIZE
to set the DATABLOCKSIZE for this table to the maximum value that can be set.
Both the default data block size and the maximum data block size for all systems are 130,560 bytes (255 sectors).
The value can be expressed either as a decimal or integer number or using exponential notation. For example, you can write one thousand as either 1000 or 1E3. You can abbreviate MAXIMUM as MAX.
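For example (hypothetical names):

```sql
-- Cap multirow data blocks at 64 kilobytes (rounded to a sector
-- multiple); BYTES is assumed when no unit keyword is given.
CREATE TABLE sales_db.wide_rows, DATABLOCKSIZE = 64 KILOBYTES (
    row_id  INTEGER,
    payload VARCHAR(8000)
) PRIMARY INDEX (row_id);

-- Use the smallest legal maximum block size for the platform.
CREATE TABLE sales_db.tiny_blocks, MINIMUM DATABLOCKSIZE (
    k INTEGER,
    v INTEGER
) PRIMARY INDEX (k);
```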
BLOCKCOMPRESSION = block_compression_option
whether the data in the table should be block‑compressed based on the temperature of the cylinders on which it is stored. The definitions of the various thresholds are determined by the DBS Control TempBLCThresh parameter.
You cannot specify this option to define global temporary tables or volatile tables.
The following bullets list the valid specifications for block_compression_option. See “CREATE TABLE (Table Options Clause)” in SQL Data Definition Language Detailed Topics for details about the BLOCKCOMPRESSION options.
AUTOTEMP
AUTOTEMP identifies tables whose temperature‑based block‑level compression state can be changed by the file system at any time based on its Teradata Virtual Storage temperature.
DEFAULT
DEFAULT identifies tables whose temperature‑based block‑level compression is determined by the DBS Control parameter DefaultTableMode (see Utilities: Volume 1 (A-K) for details).
MANUAL
MANUAL identifies tables whose temperature‑based block‑level compression state is not managed automatically by the file system, unlike AUTOTEMP tables.
NEVER
NEVER identifies tables that should never be block‑level compressed, even if a query band or the applicable DBS Control parameter defaults indicate otherwise. The file system does not block‑level compress the table and its subtables even if the DBS Control block compression settings indicate otherwise.
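For example (hypothetical names):

```sql
-- Let the file system compress and decompress data blocks as cylinder
-- temperature changes, per the DBS Control TempBLCThresh thresholds.
CREATE TABLE sales_db.cold_history, BLOCKCOMPRESSION = AUTOTEMP (
    event_ts TIMESTAMP(0),
    detail   VARCHAR(500)
) PRIMARY INDEX (event_ts);
```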
CHECKSUM = integrity_checking_level
a table‑specific disk I/O integrity checksum.
The checksum setting applies to primary data rows, fallback data rows, and all secondary index rows for the table.
If you do not specify a value, then the system assumes the system‑wide default value for this table type. The result is identical to specifying DEFAULT.
If you are changing the checksum for this table to the system‑wide default value, then specify DEFAULT.
You can change the percentage of sampling for the system‑wide default values for LOW, MEDIUM, and HIGH using the Checksum Level Definitions parameters in the DBS Control utility (see Utilities: Volume 1 (A-K)). The values provided here for those checksum percentages are the defaults specified by the installation or upgrade.
To detect all hardware read errors, which Teradata Database can handle only if fallback is defined for the table, the integrity checking level must be set to ALL. See Database Design for a complete list of the errors that invoke the Read From Fallback feature.
You cannot change the sampling percentage for the following:
The system‑wide default values ALL and NONE.
Dictionary tables.
The following list specifies the valid options and their checksum sample count percentages. See SQL Data Definition Language Detailed Topics for more information about these options.
ALL: 100%
DEFAULT: the percentage specified for this table type in the Checksum Levels fields of the Disk I/O Integrity fields in the DBS Control utility
HIGH: 67%
MEDIUM: 33%
LOW: 2%
NONE: disables checksum disk I/O integrity checks
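For example (hypothetical names):

```sql
-- CHECKSUM = ALL samples 100% of each data block, which is required
-- to detect all hardware read errors; FALLBACK lets the file system
-- recover from them via Read From Fallback.
CREATE TABLE sales_db.critical_data, FALLBACK, CHECKSUM = ALL (
    acct_id INTEGER NOT NULL,
    balance DECIMAL(12,2)
) UNIQUE PRIMARY INDEX (acct_id);
```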
REPLICATION GROUP replication_group_name
the name of the replication group to which this table is to be added if it matches a rule set up by a CREATE REPLICATION RULESET request (see Teradata Replication Services Using Oracle GoldenGate).
Note: Teradata Replication Services Using Oracle GoldenGate is not scheduled for future enhancements after Release 13.10, although it continues to be supported. Teradata Replication Services 13.10 can be used with Teradata 14.0; however, attempts to use Teradata 14.0 new features with replication will return an error. Teradata Access Module for Oracle GoldenGate (TAM) 13.10 is compatible with Teradata 14.0. No further enhancements of TAM are planned.
Because Teradata Replication Services Using Oracle GoldenGate does not support column‑partitioned tables, you cannot specify a replication group for a column‑partitioned table.
See SQL Fundamentals for the rules for naming database objects.
The following rules apply to the REPLICATION GROUP option.
The subscriber session logon user is assigned as the creator of the replicated table on the subscriber system by default.
As a result, users on the subscriber system do not have the same automatically generated privileges on the replicated table as do users on the primary system, so you must grant those privileges explicitly using GRANT requests on the subscriber system.
The subscriber users must be granted the appropriate privileges WITH GRANT OPTION to ensure that GRANT requests are replicated successfully.
The SQL text of CREATE TABLE requests is acquired by the subscriber system as a member of the replication_group_name specified by the REPLICATION GROUP option. To enable this, subscriber session users must have the REPLCONTROL privilege.
Column Definitions
COLUMN
 
ROW
the format to be used for the storage of a column partition. A column partition consists either of a series of containers or a series of subrows.
You can only specify this option for a column‑partitioned table.
If you precede a column grouping that is specified in the column list of the table definition with the keyword COLUMN, then Teradata Database stores the column partition, which consists of the columns in the specified grouping, using COLUMN format.
If you precede a column grouping that is specified in the column list of the table definition with the keyword ROW, then Teradata Database stores the column partition, which consists of the columns in the grouping, using ROW format.
If you do not specify either COLUMN or ROW, then Teradata Database determines which format to use based on the width of the column partition value.
As a general rule, Teradata Database assigns COLUMN format to narrow column‑valued partitions and ROW format to wide column‑valued partitions.
If you do not specify NO AUTO COMPRESS, Teradata Database autocompresses data as physical rows that are inserted into a column partition of a column‑partitioned table unless there are no applicable autocompression methods that reduce the size of the physical row. See “CREATE TABLE” in SQL Data Definition Language Detailed Topics for more information about autocompression.
column_name
the name of one or more columns, in the order in which they and their attributes are to be defined for the table. Up to 2,048 columns can be defined for a table.
(column_name)
to group the values from the specified column set of 1 or more columns into the same partition of a column‑partitioned table. You can specify that a single column is to be “grouped” within a partition.
This option is only valid for column‑partitioned tables.
You should consider grouping columns into a column partition when either of the following conditions occurs.
The columns are frequently accessed together by queries.
The columns are infrequently accessed if autocompression on the individual columns or subsets of columns is not effective.
You cannot specify a column to be in more than one partition.
If you do not specify a column grouping for a COLUMN specification in the PARTITION BY clause, Teradata Database defines a column partition for each individual column and column group specified in the column list for table_name.
NO AUTO COMPRESS
not to apply auto compression to values inserted into this column partition.
You can only specify this option for column partitions of a column‑partitioned table.
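Putting the column‑partitioning options together (hypothetical names):

```sql
-- Column-partitioned NoPI table: one two-column partition stored in
-- ROW format, one single-column partition in COLUMN format with
-- autocompression disabled; the remaining columns each get their own
-- partition in a system-chosen format.
CREATE TABLE sales_db.cp_orders (
    order_id INTEGER,
    order_ts TIMESTAMP(0),
    ROW (ship_addr VARCHAR(200), bill_addr VARCHAR(200)),
    COLUMN (status CHAR(1)) NO AUTO COMPRESS
) NO PRIMARY INDEX
PARTITION BY COLUMN;
```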
data type
 
one or more data definition phrases that define data for the column.
You must specify a single data type for each column_name except for those defined in a CREATE TABLE … AS request. It is not valid to specify data types for such columns.
Column data attribute specifications are optional. If you specify attributes for a column, you must define its data type before you define its attributes.
The default for the NULL/NOT NULL attribute is NULL, except when you copy a table definition using the CREATE TABLE … AS syntax, in which case the system carries the specification made for the source table over to the target table definition. You must specify NOT NULL explicitly if you do not want nulls to be valid for the column.
If you do not specify explicit formatting, a column assumes the default format for the data type, which can be specified by a custom data formatting specification (SDF) defined by the tdlocaledef utility (see Utilities: Volume 2 (L-Z)). Explicit formatting applies both to the parsing and to the retrieval of character strings.
Data types are the subject of SQL Data Types and Literals.
column attributes
the data type attributes, column constraints, or both for a column.
Teradata Database supports the following column attributes.
UPPERCASE
CASESPECIFIC and NOT CASESPECIFIC
FORMAT quotestring
If you specify FORMAT, the format string must be valid for the external type of the UDT, the external type being the data type returned by its fromsql transform routine, as defined either by default or by user definition using the CREATE TRANSFORM statement. See “CREATE TRANSFORM/ REPLACE TRANSFORM” for details.
If you do not specify a format, the system automatically applies the default display format of the external type.
TITLE quotestring
NAMED name
DEFAULT
WITH DEFAULT
The WITH DEFAULT attribute is valid for Period columns, but not for UDT, or Geospatial columns.
CHARACTER SET server_character_set
You cannot specify a server character set of KANJI1. If you attempt to do so, Teradata Database aborts the request and returns an error to the requestor.
NULL and NOT NULL
For information about temporal data type attributes, see Temporal Table Support.
For information about converting data between data types, see SQL Functions, Operators, Expressions, and Predicates.
The following column attributes are supported for UDT, Geospatial, and Period columns.
DEFAULT NULL
FORMAT
NAMED
NOT NULL
NULL
TITLE
The following column attributes are not supported for UDT, Geospatial, and Period columns.
CASESPECIFIC
NOT CASESPECIFIC
CS
NOT CS
CHARACTER SET
COMPRESS
DEFAULT DATE
DEFAULT TIME
DEFAULT USER
UPPERCASE
UC
See SQL Data Types and Literals for a complete list of the attributes that are supported for Teradata Database.
See Temporal Table Support for information about the additional column attributes that apply to the columns of temporal tables.
Teradata Database supports the following column constraint forms.
UNIQUE
PRIMARY KEY
REFERENCES referenced_table_name
CHECK (boolean_condition)
Neither column‑level nor table‑level constraints are supported for UDT columns. This means that none of the following constraints support UDT, Geospatial, or Period columns.
BETWEEN … AND
CHECK
PRIMARY KEY
REFERENCES referenced_table_name
UNIQUE
UDT, Geospatial, and Period columns are not supported as Identity columns.
The following column attributes are explained in more detail in the documentation for these attribute clauses.
Column Storage Attributes
Identity Column Parameters
Column Constraint Attributes
Row‑Level Security Constraints
Compression Attributes
 
Compression attributes are a Teradata extension to the ANSI SQL:2008 specification.
COMPRESS
a specified set of distinct values in a column that is to be multi-value compressed to zero space.
The following rules apply to specifying multi-value compression for a table.
Multi-value compression is not supported for any column that is a member of the primary index column set for a table.
Multi-value compression is not supported for partitioning columns.
Multi-value compression is not supported for any column in a Standard referential integrity relationship.
Multi-value compression can be specified for a Batch or Referential Constraint (soft referential integrity) in a referential integrity relationship.
Multi-value compression is supported for columns defined with distinct UDT types based on the following predefined data types.
All numeric types
DATE
CHARACTER and GRAPHIC
VARCHAR and VARGRAPHIC
BYTE
VARBYTE
Multi-value compression is not supported for columns defined with the following data types:
BLOB
CLOB
Geospatial
Period
Structured, ARRAY/VARRAY, Period, and Geospatial UDTs
If a column is constrained as NOT NULL, then none of the specifications in the multi‑value compression list can be the literal NULL.
Columns defined with the COMPRESS attribute cannot participate in fast path INSERT … SELECT operations, so if you perform an INSERT … SELECT to a target table that has compressed columns, the Optimizer does not specify fast path optimization for the access plan it creates.
For virtually all applications, the performance cost of not being able to take advantage of the fast path INSERT … SELECT is more than offset by the compensating performance advantages of multi-valued compression.
If you specify both multi-value compression and algorithmic compression for the same column, there is no required ordering between the two specifications.
Teradata provides a few external UDFs for algorithmic compression and decompression. See “CREATE TABLE” in SQL Data Definition Language Detailed Topics for a list of the available functions.
See Database Design for a detailed description of multi-value compression.
See SQL Data Types and Literals for more information about the COMPRESS attribute.
NULL specifies that nulls are compressed for the column.
The following rules apply.
You cannot specify NULL if the column is defined as NOT NULL.
LOB‑based UDT nulls cannot be compressed.
You can only specify NULL once per column.
constant specifies that the specified value or list of values are compressed.
You cannot compress nulls and constants if the column is defined as NOT NULL.
Using COMPRESS on character data can save space depending on the percentage of rows for which the compressed value is assigned.
See SQL Data Types and Literals for information about limits for this value.
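For example (hypothetical names and value lists):

```sql
CREATE TABLE sales_db.customers (
    cust_id  INTEGER NOT NULL,
    -- Compress the three most common state codes to zero space.
    state    CHAR(2) COMPRESS ('CA', 'TX', 'NY'),
    -- Bare COMPRESS compresses nulls only.
    comments VARCHAR(500) COMPRESS
) UNIQUE PRIMARY INDEX (cust_id);
```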
COMPRESS USING compress_UDF_name
 
COMPRESS USING database_name.compress_UDF_name
 
that compress_UDF_name is the name of the UDF to be used to compress CHARACTER, GRAPHIC, or BYTE data in this column algorithmically.
The default containing database for compress_UDF_name is SYSUDTLIB. If compress_UDF_name is a fast path UDF, then its default containing database is TD_SYSFNLIB. If Teradata Database cannot find compress_UDF_name in either database, the system returns an error to the requestor.
You can also use algorithmic compression for all UDT data types in non‑temporal tables except those that contain BLOB or CLOB data. You cannot use algorithmic compression on Period data types for VALIDTIME and TRANSACTIONTIME columns in a temporal table, however.
You can combine multi-value compression, algorithmic compression, and block-level compression for the same table to achieve better compression, but as a general rule you should not use algorithmic compression with block-level compression because of the possibility of a negative performance impact for other workloads.
See “CREATE FUNCTION (External Form)/ REPLACE FUNCTION (External Form)” in SQL Data Definition Language Detailed Topics and SQL External Routine Programming for the rules you must follow to create valid algorithmic compression UDFs for UDT columns.
The rules that apply to multi-value compression also apply to algorithmic compression.
If you specify COMPRESS USING, you must also specify DECOMPRESS USING. The COMPRESS USING and DECOMPRESS USING options can be specified in either order.
You can specify multi-value compression and algorithmic compression in either order. Algorithmic compression is only applied to values that are not specified for multi-value compression.
See SQL Fundamentals for the rules for naming database objects.
See SQL Data Definition Language Detailed Topics and Database Design for more information about algorithmic compression.
DECOMPRESS USING decompress_UDF_name
 
DECOMPRESS USING database_name.decompress_UDF_name
that decompress_UDF_name is the name of the UDF to be used to decompress character, graphic, or byte data in this column algorithmically. See SQL Fundamentals for the rules for naming database objects.
The default containing database for decompress_UDF_name is SYSUDTLIB. If decompress_UDF_name is a fast path UDF, then its default containing database is TD_SYSFNLIB. If decompress_UDF_name is not contained in SYSUDTLIB or TD_SYSFNLIB, Teradata Database returns an error to the requestor.
See “CREATE FUNCTION (External Form)/ REPLACE FUNCTION (External Form)” in SQL Data Definition Language Detailed Topics and SQL External Routine Programming for the rules you must follow to create valid algorithmic decompression UDFs for UDT columns.
If you specify DECOMPRESS USING, you must also specify COMPRESS USING. The COMPRESS USING and DECOMPRESS USING options can be specified in either order.
Identity Column
 
Identity columns are ANSI SQL:2008-compliant.
 
You should not specify an identity column for a table that will be used with Teradata Unity.
GENERATED ALWAYS AS IDENTITY
that identity column values are always system-generated.
You cannot insert values into, nor can you update, an identity column defined as GENERATED ALWAYS AS IDENTITY.
GENERATED BY DEFAULT AS IDENTITY
that identity column values can be system-generated or user-inserted, depending on the circumstance.
If you insert a null into the column, then Teradata Database generates an identity column value to replace it.
If you insert a value into the column, then Teradata Database inserts the value into the identity column.
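The difference between the two forms can be sketched as follows; the table and column names are hypothetical, and a table can have only one identity column.

    -- Values are always system-generated; you cannot insert or update them.
    CREATE TABLE order_hdr (
      order_id   INTEGER GENERATED ALWAYS AS IDENTITY,
      order_date DATE);

    -- Values are system-generated only when the insert supplies NULL
    -- or omits the column; otherwise the user-supplied value is stored.
    CREATE TABLE order_stg (
      order_id   INTEGER GENERATED BY DEFAULT AS IDENTITY,
      order_date DATE);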
Identity Column Parameters
 
The five identity column parameters are optional and can be specified in any order.
 
You cannot specify identity columns for NoPI tables.
 
You should not specify an identity column for a table that will be used with Teradata Unity.
START WITH constant
the lowest integer value in the system-generated numeric series for an identity column. The default is 1.
The value you specify can be any exact negative or positive integer within the range of the data type for the column as long as it is less than MAXVALUE for an incremental series or greater than MINVALUE for a decremental series.
INCREMENT BY constant
the interval on which to increment system‑generated numbers.
The value you specify can be a negative integer. The default is 1.
The value can be any integer (except 0) less than or equal to the value of DBS Control parameter IdCol Batch Size.
MINVALUE constant
 
NO MINVALUE
the minimum value to which a system‑generated numeric series can decrement. MINVALUE applies only to system‑generated numbers.
You can specify any integer value with an absolute value less than the value specified for START WITH.
The default is the minimum integer number for the data type defined for the column.
When cycling is not enabled, the sum of the specified values for START WITH and INCREMENT BY must be greater than MINVALUE. If it is not, then Teradata Database generates only one number before the minimum limit is exceeded.
Specifying NO MINVALUE causes numbering to restart its cycle from the minimum value that can be expressed for the data type of the column when the MAXVALUE limit is reached.
You can only specify NO MINVALUE when the INCREMENT BY interval is a negative number. The default is the minimum value for the data type specified for the identity column.
The following rules apply to INCREMENT BY and CYCLE specifications for MINVALUE.
If INCREMENT BY is a positive number and you also specify CYCLE, then renumbering begins from MINVALUE when MAXVALUE is reached.
If INCREMENT BY is a negative number, then MINVALUE, if specified, must be a whole number such that MINVALUE < START WITH.
If you do not specify INCREMENT BY, but do specify NO CYCLE, then MINVALUE is not applicable for positive increments.
No warning or error is returned if you specify a MINVALUE with NO CYCLE.
MAXVALUE constant
 
NO MAXVALUE
the maximum value to which a system‑generated numeric series can increment. MAXVALUE applies only to system‑generated numbers. Its value can be any integer greater than the value specified for START WITH.
The default is the maximum number for the data type defined for the column.
When cycling is not enabled, the sum of the specified values for START WITH and INCREMENT BY must be less than MAXVALUE. If it is not, then Teradata Database generates only one number before the maximum limit is exceeded.
Specifying NO MAXVALUE causes numbering to restart its cycle from the minimum value that can be expressed for the data type of the column when the maximum value for the type is reached.
You can only specify NO MAXVALUE when the INCREMENT BY interval is a positive number. The default is the maximum value for the data type specified for the identity column.
The following rules apply to INCREMENT BY and CYCLE specifications for MAXVALUE.
If INCREMENT BY is a positive number, but you do not specify CYCLE, then MAXVALUE, if specified, must be a whole number such that MAXVALUE > START WITH.
MAXVALUE cannot be larger than the maximum value for the data type assigned to the identity column.
If INCREMENT BY is a negative number and you also specify CYCLE, then renumbering begins with MAXVALUE when MINVALUE is reached.
If you do not specify INCREMENT BY, but do specify NO CYCLE, then MAXVALUE is not applicable for negative increments.
No warning or error is returned if you specify a MAXVALUE with NO CYCLE.
CYCLE
 
NO CYCLE
whether system-generated values can be recycled when their minimum or maximum is reached.
The default is NO CYCLE.
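The five parameters can be combined in a single definition. The following sketch uses hypothetical table and column names; note that the values satisfy the rules above (MINVALUE is below START WITH, and START WITH plus INCREMENT BY is below MAXVALUE).

    CREATE TABLE ticket (
      ticket_no INTEGER GENERATED ALWAYS AS IDENTITY
                (START WITH 100
                 INCREMENT BY 10
                 MINVALUE 1
                 MAXVALUE 1000000
                 NO CYCLE),            -- numbering stops when MAXVALUE is exceeded
      subject   VARCHAR(60));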
Column Attributes
CONSTRAINT constraint_name
that constraint_name is the optional name for a constraint.
The CONSTRAINT constraint_name specification is mandatory for named constraints for UNIQUE, PRIMARY KEY, FOREIGN KEY, and CHECK when those constraints are named, and applies to both column‑level and table‑level constraints.
It does not apply to row‑level security constraints.
Constraint names must conform to the rules for Teradata Database object names (see SQL Fundamentals for a description of Teradata Database identifiers and a list of the characters they can contain) and be unique among all other constraint, primary index, and secondary index names specified in the table definition.
Unnamed constraints are not assigned system-generated names.
See Temporal Table Support for information about the constraints that can be defined for temporal tables.
time option
See Temporal Table Support for information about this option and its components.
UNIQUE
that a column or column set is unique. This means that no two rows in the table can have the same value in the constrained column or column set.
You can specify a UNIQUE constraint as a column attribute or as a table attribute. You can specify a UNIQUE constraint on a single column or on a composite column set, but you cannot specify a UNIQUE constraint for volatile tables.
You cannot specify UNIQUE constraints on columns having any of the following data types.
BLOB
BLOB UDT
CLOB
CLOB UDT
VARIANT_TYPE
ARRAY
VARRAY
Period
Geospatial
For any non‑temporal table defined with NO PRIMARY INDEX, Teradata Database not only implements the constraint as a USI, but also changes the SQL create text for the table to specify a USI, removing the UNIQUE constraint from the defining SQL text. You cannot create a NoPI temporal table, so this point does not apply to them.
Teradata Database uses a unique secondary index for non‑temporal tables and a single‑table join index for the majority of temporal tables to enforce this constraint (see Temporal Table Support for details).
Any system‑defined secondary or single‑table join indexes used to implement this constraint count against the collective maximum of 32 secondary, hash, and join indexes per table. This includes the system‑defined secondary indexes used to implement UNIQUE constraints for non‑temporal tables and the single‑table join indexes used to implement UNIQUE constraints for temporal tables.
UNIQUE constraints ensure that the uniqueness of alternate keys is enforced by the system. Columns with UNIQUE constraints can be used to create referential integrity relationships with other tables.
UNIQUE constraints are valid for non‑temporal and temporal tables. For details about temporal tables and UNIQUE constraints, see Temporal Table Support.
See “Primary Index Defaults” in SQL Data Definition Language Detailed Topics for a complete list of the rules for implicitly defined unique indexes.
If all of the following conditions are true, the implicitly defined index is a unique primary index if the table is non‑temporal.
No explicit primary index is specified.
No explicit primary key is specified.
This is the first unique constraint defined for the table.
If the table is temporal, the system returns an error to the requestor.
If none of the conditions listed in the previous bullet is true, the implicitly defined index is a unique secondary index if the table is non‑temporal.
If the table is temporal, the implicitly defined index is a system‑defined single‑table join index. See Temporal Table Support for details.
Column Attribute Syntax
 
UNIQUE
 
CONSTRAINT constraint_name UNIQUE
A column specified with a UNIQUE constraint must also be defined as NOT NULL.
If you specify UNIQUE as a column attribute, the syntax varies depending on whether the constraint is named or not.
Use the following syntax for an unnamed UNIQUE column constraint.
     UNIQUE
Use the following syntax for a named UNIQUE column constraint.
     CONSTRAINT constraint_name UNIQUE
Table Attribute Syntax
 
CONSTRAINT constraint_name UNIQUE (column_name)
 
UNIQUE (column_name)
The column_name variable in a table attribute UNIQUE constraint specifies a column set to be defined as the UNIQUE constraint column set for table_name. The specified columns must all be defined as NOT NULL.
If you specify more than one column_name, the UNIQUE column set is based on the combined values of the specified columns.
Use the following syntax for an unnamed UNIQUE table constraint.
     UNIQUE (column_name)
Use the following syntax for a named UNIQUE table constraint.
     CONSTRAINT constraint_name UNIQUE (column_name)
When you specify UNIQUE as a table constraint, the constraint for the table can be defined on a maximum of 64 columns.
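Both forms can be sketched in one definition; the table and column names are hypothetical. Each constrained column must be declared NOT NULL.

    CREATE MULTISET TABLE employee (
      emp_no  INTEGER NOT NULL UNIQUE,      -- unnamed column attribute constraint
      ssn     CHAR(9) NOT NULL,
      dept_no INTEGER,
      CONSTRAINT uq_ssn UNIQUE (ssn));      -- named table attribute constraint

Because uniqueness constraints are present, this table cannot contain duplicate rows even though it is declared MULTISET.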
PRIMARY KEY
that a column or column set is the primary key for table_name. This means that the defined column set is one that makes each row in the table unique. The primary key is also used to enforce referential constraints.
You can either specify a PRIMARY KEY constraint as a column attribute or as a table attribute.
You can specify a PRIMARY KEY constraint on a single column or on a composite column set, but you cannot specify a PRIMARY KEY constraint for volatile tables.
You cannot specify PRIMARY KEY constraints on columns having any of the following data types.
BLOB
BLOB UDT
CLOB
CLOB UDT
VARIANT_TYPE
ARRAY
VARRAY
Period
Geospatial
Only one primary key can be specified per table. To specify candidate primary keys for referential integrity relationships with other tables, use the UNIQUE column attribute. This is not necessary for Referential Constraints, but is required for standard referential integrity constraints and batch referential integrity constraints.
PRIMARY KEY
 
PRIMARY KEY constraints are valid for both non‑temporal and temporal tables. See Temporal Table Support for details about how temporal tables support PRIMARY KEY constraints.
Teradata Database implicitly implements any column set specified as a table attribute PRIMARY KEY as a USI for non‑temporal tables and as a single‑table join index for the majority of temporal tables to enforce this table attribute constraint.
For any non‑temporal table defined with NO PRIMARY INDEX, Teradata Database not only implements the constraint as a USI, but also changes the SQL create text for the table to specify a USI, removing the PRIMARY KEY constraint from the defining SQL text.
Any system‑defined secondary or single‑table join indexes used to implement this constraint count against the collective maximum of 32 secondary, hash, and join indexes per table. This includes the system‑defined secondary indexes used to implement PRIMARY KEY constraints for non‑temporal tables and the single‑table join indexes used to implement PRIMARY KEY constraints for temporal tables.
Like UNIQUE constraints, PRIMARY KEY constraints are used to ensure that the uniqueness of alternate keys is enforced by the system when they are specified as part of a referential integrity relationship.
If you do not explicitly specify a primary index, the implicitly defined index is:
a unique primary index for a non‑temporal table.
not defined, and the table is a NoPI or a column‑partitioned table.
If you do not explicitly specify a primary index for a temporal table, Teradata Database aborts the request and returns an error to the requestor.
If you explicitly specify a primary index, the implicitly defined index is a:
Secondary index if the table is non‑temporal.
System‑defined single‑table join index if the table is temporal.
Column Attribute Syntax
 
PRIMARY KEY
 
CONSTRAINT constraint_name PRIMARY KEY
A column specified with a PRIMARY KEY constraint must also be defined as NOT NULL.
If you specify PRIMARY KEY as a column attribute, the syntax varies depending on whether the constraint is named or not.
Use the following syntax for an unnamed PRIMARY KEY column constraint.
     PRIMARY KEY
Use the following syntax for a named PRIMARY KEY column constraint.
     CONSTRAINT constraint_name PRIMARY KEY
Use the column attribute form of PRIMARY KEY when the constraint applies only to the column for which it is specified, and the primary key for the table is defined on that single column. The column must also be specified as NOT NULL.
The column_name variable specifies the one column to be defined as the primary key for table_name.
See SQL Fundamentals for the rules for naming database objects.
Table Attribute Syntax
 
PRIMARY KEY (column_name)
 
CONSTRAINT constraint_name PRIMARY KEY (column_name)
The column_name variable in a table attribute PRIMARY KEY constraint specifies a column set to be defined as the composite PRIMARY KEY constraint column set for table_name. Columns specified in a PRIMARY KEY constraint must all be defined as NOT NULL.
If you specify PRIMARY KEY as a table attribute, the syntax varies depending on whether the constraint is named or not.
Use the following syntax for an unnamed PRIMARY KEY table constraint.
     PRIMARY KEY (column_name)
Use the following syntax for a named PRIMARY KEY table constraint.
     CONSTRAINT constraint_name PRIMARY KEY (column_name)
PRIMARY KEY (column_name) is one or more columns that specify the PRIMARY KEY constraint for the table. When you specify PRIMARY KEY as a table constraint, the primary key for the table can be defined on a maximum of 64 columns.
If you specify more than one column_name for PRIMARY KEY as a table constraint, the primary key column set is based on the combined values of the specified columns.
Teradata Database implicitly implements any column set specified as a table attribute PRIMARY KEY as a USI for non‑temporal tables and as a single‑table join index for the majority of temporal tables to enforce this table attribute constraint.
For any non‑temporal table defined with NO PRIMARY INDEX, Teradata Database not only implements the constraint as a USI, but also changes the SQL create text for the table to specify a USI, removing the PRIMARY KEY constraint from the defining SQL text.
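The column attribute and table attribute forms can be sketched as follows; the table and column names are hypothetical.

    -- Column attribute form: single-column primary key.
    CREATE TABLE department (
      dept_no   INTEGER NOT NULL PRIMARY KEY,
      dept_name VARCHAR(30));

    -- Table attribute form: named composite primary key.
    CREATE TABLE emp_phone (
      emp_no     INTEGER NOT NULL,
      phone_type CHAR(4) NOT NULL,
      phone_no   VARCHAR(15),
      CONSTRAINT pk_emp_phone PRIMARY KEY (emp_no, phone_type));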
CHECK
an optionally named simple boolean conditional expression used to constrain the values that can be inserted into, or updated for, a column.
You can specify CHECK constraints as column attributes or as table attributes.
You cannot specify subqueries, aggregate, or ordered analytic functions in a CHECK constraint.
You cannot specify CHECK constraints for identity columns or for the columns of a volatile table.
If you specify a combination of table-level, column-level, and FOREIGN KEY … WITH CHECK OPTION constraints on a table that underlies a view, the constraints can create a constraint expression that is too large to be parsed for INSERT and UPDATE requests.
You cannot specify CHECK constraints on columns that are defined with any of the following data types.
BLOB
CLOB
UDT
ARRAY/VARRAY
Period
Geospatial
Teradata Database tests CHECK constraints for character columns using the current session collation. As a result, a CHECK constraint might be met for one session collation, but violated for another even though the identical data is inserted or updated for both.
Teradata Database considers unnamed CHECK constraints with identical boolean conditions and case to be duplicates, and the system returns an error when you specify them.
CHECK
 
For example, the following CREATE TABLE request is valid because the case of f1 and F1 is different:
    CREATE TABLE t1 (
      f1 INTEGER, CHECK (f1 > 0), CHECK (F1 > 0));
The following CREATE TABLE request is not valid because the case of both f1 specifications is identical.
    CREATE TABLE t1 (
      f1 INTEGER, CHECK (f1 > 0), CHECK (f1 > 0));
You can also use the BETWEEN … AND operator as a form of CHECK constraint except for volatile table columns, identity columns, UDT columns, ARRAY, VARRAY, Geospatial columns, Period columns, BLOB columns, or CLOB columns.
The following rules apply to column attribute and table attribute CHECK constraints.
A CHECK constraint can include any simple boolean search condition.
You cannot specify subqueries, aggregate expressions, or ordered analytic expressions.
You cannot specify CHECK constraints for identity columns or volatile table columns.
The search condition for a CHECK constraint cannot specify SET operators. Valid column attribute CHECK constraints that reference candidate keys of other tables are permitted.
Teradata Database supports the following non-ANSI SQL constraint syntax for table attribute CHECK constraints.
    BETWEEN value_1 AND value_2
The system treats it as if it were the following ANSI-compliant constraint.
    CHECK (column_name BETWEEN value_1 AND value_2)
See Database Design for information about using CHECK constraints to construct domains.
Column Attribute Syntax
 
CHECK (boolean_condition)
 
CONSTRAINT constraint_name CHECK (boolean_condition)
If you specify CHECK as a simple column attribute, the syntax varies depending on whether the constraint is named or not.
Use the following syntax for an unnamed CHECK column constraint.
     CHECK (boolean_condition)
Use the following syntax for a named CHECK column constraint.
     CONSTRAINT constraint_name CHECK (boolean_condition)
In both cases, column_name is the left-hand side of boolean_condition.
The following rules apply only to column attribute CHECK constraints.
A column attribute CHECK constraint cannot reference other columns within the same table or in another table.
You can specify multiple CHECK constraints on a single column.
If you specify multiple unnamed column‑level CHECK constraints for a column, Teradata Database combines them into a single column‑level CHECK constraint.
If you specify multiple named column‑level CHECK constraint for a column, Teradata Database handles each of them separately.
You can specify column‑level CHECK constraints for both non‑temporal and temporal tables. See Temporal Table Support for details about how temporal tables support CHECK constraints.
Table Attribute Syntax
 
CHECK (boolean_condition)
 
CONSTRAINT constraint_name CHECK (boolean_condition)
If you specify CHECK as a composite table attribute, the syntax varies depending on whether the constraint is named or not.
Use the following syntax for an unnamed CHECK table constraint.
     CHECK (boolean_condition)
Use the following syntax for a named CHECK table constraint.
     CONSTRAINT constraint_name CHECK (boolean_condition)
In both cases, the constrained columns appear within boolean_condition.
The following rules apply only to table attribute CHECK constraints.
A table-level CHECK constraint must reference at least 2 columns from its table.
A maximum of 100 table-level constraints can be defined for a given table.
A table attribute CHECK constraint can compare any columns defined for its table, both against each other and against constants.
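A table attribute CHECK constraint that compares two columns can be sketched as follows; the table and column names are hypothetical.

    CREATE TABLE pay_grade (
      grade   INTEGER NOT NULL,
      min_sal DECIMAL(10,2),
      max_sal DECIMAL(10,2),
      CONSTRAINT ck_sal_range CHECK (min_sal < max_sal));  -- references two columns

The constraint satisfies the rule that a table-level CHECK constraint must reference at least two columns from its table.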
FOREIGN KEY
a foreign key reference from a column set in this table to the primary key or an alternate key in another table.
You can either specify a foreign key REFERENCES constraint as a column attribute or as a table attribute.
See SQL Fundamentals for the rules for naming database objects.
The following rules apply to foreign key REFERENCES constraints:
You must either have the REFERENCES privilege on the referenced table or on all specified columns of the referenced table.
A maximum of 64 foreign keys can be defined for a table and a maximum of 64 referential constraints can be defined for a table.
Similarly, a maximum of 64 other tables can reference a single table. Therefore, a maximum of 128 reference indexes can be stored in the table header per table, but only 64 of these, the reference indexes that map the relationship between the table and its child tables, are stored per reference index subtable. The table header limit on reference indexes includes both references to and from the table.
See Temporal Table Support for information about how temporal tables support FOREIGN KEY REFERENCES constraints.
FOREIGN KEY
 
While it is possible to create a child table at a time that its parent table does not yet exist, a REFERENCES constraint that makes a forward reference to a table that has not yet been created cannot qualify the parent table name with a database name.
In other words, the forward‑referenced parent table that has not yet been created must be assumed to be “contained” in the same database as its child table that is currently being created.
Each column in the foreign key referenced_column_name list must correspond to a column of referenced_table_name in REFERENCES referenced_table_name, and you cannot specify the same column name more than once.
The foreign key column list must contain the same number of column names as the referenced primary or alternate key in referenced_table_name. The ith column of the referencing list corresponds to the ith column identified in the referenced list.
The data type of each foreign key referencing column must be the same as the data type of the corresponding REFERENCES referenced column.
Each individual foreign key can be defined on a maximum of 64 columns.
A maximum of 100 table-level constraints can be defined for any table.
You cannot specify foreign key REFERENCES constraints on columns with the following data types.
  •  BLOB
  •  CLOB
  •  UDT
  •  ARRAY/VARRAY
  •  Period
  •  Geospatial
You cannot specify a foreign key REFERENCES constraint on an identity column.
Foreign key REFERENCES constraints can be null.
Though foreign key REFERENCES constraints can be unique, it is rarely the case that one is.
An example of when a foreign key would be unique is the case of a vertical partitioning of a logical table into multiple tables.
You cannot specify foreign key REFERENCES constraints for global temporary trace, global temporary, volatile, or queue tables (see “CREATE TABLE (Queue Table Form)”).
Foreign key REFERENCES constraints cannot be copied to a new table using the copy table syntax.
You can specify a mix of standard referential integrity constraints, batch referential integrity constraints, and Referential Constraints for the same table, but not for the same column sets.
The various types have different applications and are described in “CREATE TABLE” in SQL Data Definition Language Detailed Topics.
Column Attribute Syntax
 
RI time option REFERENCES referenced_table_name (referenced_column_name)
 
CONSTRAINT constraint_name
RI time option REFERENCES referenced_table_name (referenced_column_name)
If you specify REFERENCES as a column attribute, the syntax varies depending on whether the constraint is named or not.
Use the following syntax for an unnamed REFERENCES column constraint.
     REFERENCES referenced_table_name (referenced_column_name)
Use the following syntax for a named REFERENCES column constraint.
     CONSTRAINT constraint_name REFERENCES referenced_table_name
                               (referenced_column_name)
where:
referenced_table_name is the name of the table that contains the primary key or alternate key referenced by the specified referencing_column.
referenced_column_name is the name of the primary key or alternate key column in referenced_table_name that is referenced by referencing_column.
constraint_name is the name of the column attribute foreign key constraint.
The following rules apply to column attribute REFERENCES constraints only.
Do not specify FOREIGN KEY (referencing_column) for a column attribute foreign key constraint. The referencing column is implicit in a column foreign key constraint and the FOREIGN KEY keywords are only used for table constraint foreign key specifications.
If you do not specify column_name, the referenced table must have a simple primary key, and the specified foreign key column references that primary key column in the referenced table.
See Temporal Table Support for information about the referential integrity time options.
The following rules apply to column attribute REFERENCES constraints only.
If you specify column_name, it must refer to the simple primary key of the referenced table or to a simple alternate key in the referenced table that is defined as UNIQUE.
This rule does not apply for Referential Constraints. In such cases, the candidate key acting as the primary key in the referenced table need not be explicitly declared to be unique using the PRIMARY KEY or UNIQUE keywords or by declaring it to be a USI in the table definition.
However, the candidate key in the relationship actually must always be unique even if it is not explicitly declared to be unique; otherwise, you can produce incorrect results and corrupt your databases in some situations when appropriate care is not taken to ensure that data integrity is maintained. This is true by definition. See Database Design for details.
When you specify Referential Constraints, you must ensure that any candidate key in a referential integrity relationship is unique, just as you must ensure that the referential relationship it anchors holds true in all cases, because Teradata Database enforces neither constraint.
Table Attribute Syntax
 
FOREIGN KEY referencing_column_name REFERENCES referenced_table_name (referenced_column_name)
 
CONSTRAINT constraint_name FOREIGN KEY referencing_column_name REFERENCES referenced_table_name (referenced_column_name)
Unlike the case for a column attribute foreign key specification, you must specify the SQL text FOREIGN KEY referencing_column_name before you specify REFERENCES referenced_table_name (referenced_column_name) for a table attribute foreign key constraint.
If you specify FOREIGN KEY REFERENCES as a table attribute, the syntax varies depending on whether the constraint is named or not.
Use the following syntax for an unnamed FOREIGN KEY REFERENCES table constraint.
     FOREIGN KEY (referencing_column) REFERENCES
                  referenced_table_name (referenced_column_name)
Use the following syntax for a named FOREIGN KEY REFERENCES table constraint.
     CONSTRAINT constraint_name FOREIGN KEY (referencing_column)
       REFERENCES referenced_table_name (referenced_column_name)
where:
referencing_column is one or more columns in table_name that reference a primary key or alternate key in referenced_table_name.
referenced_table_name is the name of the table that contains the primary key or alternate key referenced by the referencing_column set.
referenced_column_name is the column set in referenced_table_name that is referenced by the referencing_column set in table_name.
constraint_name is the name of the table attribute foreign key constraint.
See Temporal Table Support for information about the referential integrity time options.
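A table attribute foreign key constraint can be sketched as follows; the table and column names are hypothetical, and the example assumes a department table already exists with dept_no as its primary key or a UNIQUE alternate key.

    CREATE TABLE employee (
      emp_no  INTEGER NOT NULL,
      dept_no INTEGER,
      CONSTRAINT fk_emp_dept
        FOREIGN KEY (dept_no) REFERENCES department (dept_no));

Unlike the column attribute form, the FOREIGN KEY keywords and the referencing column list are required here.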
WITH CHECK OPTION
that referential integrity is checked only when the entire transaction of which it is a component completes.
For a column attribute foreign key constraint, specify this clause as follows.
REFERENCES WITH CHECK OPTION referenced_table_name
                            (referenced_column_name)
For a table attribute foreign key constraint, specify this clause as follows.
FOREIGN KEY (referencing_column_name) REFERENCES WITH CHECK OPTION
            referenced_table_name (referenced_column_name)
If any of the rows in the transaction are found to violate the Referential Integrity rule, then the entire transaction is rolled back. See Database Design for information about the Referential Integrity rule.
This option is not valid for temporal tables (see Temporal Table Support for details).
This clause is a Teradata extension to the ANSI SQL:2008 standard.
WITH NO CHECK OPTION
that referential integrity is not to be enforced for the specified primary key‑foreign key relationship.
For a column attribute foreign key constraint, specify this clause as follows.
REFERENCES WITH NO CHECK OPTION referenced_table_name
                               (referenced_column_name)
For a table attribute foreign key constraint, specify this clause as follows.
FOREIGN KEY (referencing_column_name) REFERENCES WITH NO CHECK OPTION
            referenced_table_name (referenced_column_name)
This is the only valid referential integrity option for temporal tables (see Temporal Table Support for details).
Row‑Level Security Constraint Columns
row_level_security_constraint_name CONSTRAINT
the name of a row‑level security constraint column for table_name.
Although the syntax you use to define a row‑level security constraint is similar to the syntax used for table‑level constraints, when you create a table that is defined with a row‑level security constraint, Teradata Database implicitly creates a table column with the same name that you specify for row_level_security_constraint_name, so row‑level security constraints are actually column‑level constraints, not table‑level constraints.
You cannot create a row‑level security constraint column for a global temporary table or for a volatile table.
constraint_name must be the name of a constraint that has already been created using a CREATE CONSTRAINT request (see “CREATE CONSTRAINT” for information about creating row‑level security constraints).
You cannot assign any additional attributes to a security constraint column. A security constraint is assigned a name and data type when it is created, and those are the only attributes it can take.
Once you have defined a security constraint column for a table, you cannot use an ALTER TABLE request to modify its definition directly. You can only drop it from the table definition.
The maximum number of security constraint columns you can define for a table is 5.
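A row-level security constraint column can be sketched as follows. The example assumes a constraint named classification_level has already been created with a CREATE CONSTRAINT request; the table and column names are hypothetical.

    CREATE TABLE secure_document (
      doc_id   INTEGER,
      doc_text VARCHAR(2000),
      classification_level CONSTRAINT);  -- implicitly creates a column named classification_level

No data type or other attributes are specified here because the constraint received its name and data type when it was created.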
AS Clause (Copy Table Syntax)
 
You cannot specify data types for source table columns, but you can specify column attributes for them. You can also specify table constraints for the target table to be created.
column_name
a column or list of columns from source_table_name to be copied to target table table_name.
Note: You cannot specify a column data type using the AS clause syntax.
column attributes
one or more new column attributes for the column set specified by column_name to be used for the new definition for table_name.
Note: You cannot specify a foreign key constraint as a column attribute using the AS clause syntax.
COLUMN
ROW
whether to use COLUMN or ROW format for the column partition being copied from a column‑partitioned source_table_name.
table constraints
one or more new table constraints for the column set specified by column_name to be used for the new definition for table_name.
Note: You cannot specify a foreign key constraint as a table constraint using the AS clause syntax.
AS
keyword that precedes source table definition.
database_name
 
user_name
the containing database or user for source_table_name if not the current database or user.
source_table_name
the name of the source table whose column‑level and table‑level definitions (with some restrictions) and, optionally, data and statistics are to be copied to table_name.
You must specify source_table_name whether you use the AS source_table_name WITH DATA form or the AS SELECT … FROM source_table_name subquery form.
See SQL Fundamentals for the rules for naming database objects.
temporal qualifier
See Temporal Table Support for information about this option.
SELECT column_name FROM source_table_name
 
SELECT * FROM source_table_name
either a subset of the column definitions or all of the column definitions from source_table_name.
WHERE expression
a search condition, also referred to as a conditional expression, to identify the rows to copy to the new target table.
WITH DATA
that the data for the source table or subquery is to be copied to a new target table. You cannot create global temporary tables using the WITH DATA option.
See Temporal Table Support for documentation of the rules for copying temporal constraints WITH DATA.
WITH NO DATA
that none of the data from the source table or query expression is to be copied to a new table.
The table definitions for source_table_name are copied to a new table, not including data.
For information about copying temporal constraints WITH NO DATA, see Temporal Table Support.
AND STATISTICS
AND NO STATISTICS
that the statistics for the source table be copied to a new table.
The system also sets up the appropriate statistical histograms in the dictionary for the target table and copies the statistics from the source table into them. See SQL Request and Transaction Processing.
When you copy PARTITION statistics, the statistics copied to the target table might not correctly represent the data in the target table because of differences in internal partition number mapping between the source and target tables. This is true even if the table definitions returned by a SHOW TABLE request are identical and the data is the same in both tables.
If you use a CREATE TABLE … AS … WITH DATA AND STATISTICS request to create a target table, the PARTITION statistics you copy from the source table are not valid if the internal partition numbers in the target table differ from those in the source table.
It is critical to understand that there is no way to guarantee that a target table created using a CREATE TABLE … AS … WITH DATA AND STATISTICS request is identical to the source table down to the level of internal partition numbers, even though the two tables might appear to be identical when you compare their definitions using the output of SHOW TABLE requests.
As a general rule, you should always recollect the PARTITION statistics for the target table when you copy them from a source table.
If you specify WITH NO DATA with the AND STATISTICS option, Teradata Database sets up the appropriate statistical histograms for the target table, but does not populate them with the available statistical information. This state is referred to as zeroed statistics.
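As a sketch of the copy‑table forms described above (hypothetical table and column names):

```sql
-- Copy the definition, the rows, and the statistics.
CREATE TABLE sales_copy AS sales WITH DATA AND STATISTICS;

-- Copy the definition and set up zeroed statistics histograms, but no rows.
CREATE TABLE sales_shell AS sales WITH NO DATA AND STATISTICS;

-- Subquery form: copy a column subset and only the rows that satisfy the search condition.
CREATE TABLE sales_2024 AS
  (SELECT order_id, amount
   FROM sales
   WHERE sale_date >= DATE '2024-01-01')
WITH DATA;

-- Recommended after copying a partitioned table: recollect PARTITION statistics.
COLLECT STATISTICS COLUMN (PARTITION) ON sales_copy;
```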
INDEX Definitions
 
You can define a new primary index for a new table using the CREATE TABLE … AS syntax.
 
If you specify neither PRIMARY INDEX (column_list) nor NO PRIMARY INDEX explicitly, then whether Teradata Database creates a non‑temporal table with or without a primary index depends on the setting of the DBS Control parameter PrimaryIndexDefault. All temporal tables must have an explicitly defined non‑unique primary index.
UNIQUE
that the named primary index is defined to be unique.
Temporal tables cannot have a unique primary index. See Temporal Table Support for details.
Any secondary indexes and the primary index can be defined to be unique with the exception of a PPI whose definition does not include all of its partitioning columns.
PRIMARY INDEX
the primary index definition.
The primary index is used by the hashing algorithm to partition table rows across the AMPs.
If neither a primary index nor the NO PRIMARY INDEX option is specified for a non‑temporal table, the result depends on the setting of the DBS Control parameter PrimaryIndexDefault with the exception of a column‑partitioned table, where the default is always NO PRIMARY INDEX regardless of the setting of PrimaryIndexDefault (see Utilities: Volume 1 (A-K)). You must always specify an explicit non‑unique primary index for temporal tables.
You cannot specify this option for non-partitioned NoPI tables or column‑partitioned tables.
See Temporal Table Support for documentation of the rules for primary indexes defined on temporal tables.
See “CREATE TABLE” in SQL Data Definition Language Detailed Topics for information about primary index defaults.
primary_index_column
a column in the column set that defines a partitioned primary index.
See SQL Fundamentals for the rules for naming database objects.
You cannot define a primary index on a column defined with any of the following data types.
BLOB
CLOB
LOB UDT
VARIANT_TYPE
ARRAY
VARRAY
Period
Geospatial
You can define a primary index on a column that has a UDT data type.
You cannot define a primary index on a column that is a row‑level security constraint.
If you specify more than one column name, the index is created on the combined values of each column named. A maximum of 64 columns can be specified for an index, and a maximum of 32 secondary indexes can be created for one table. A multicolumn NUSI defined with an ORDER BY clause counts as two consecutive indexes in this calculation.
NO PRIMARY INDEX
that the table is defined with no primary index.
The NO PRIMARY INDEX specification is optional for NoPI and column‑partitioned tables.
You cannot specify a column name list preceding a NO PRIMARY INDEX specification.
You cannot create a non-partitioned NoPI or column‑partitioned temporal table. If you attempt to do so, Teradata Database aborts the request and returns an error to the requestor.
The default Table Kind for all NoPI tables is MULTISET. This cannot be changed for either unpartitioned NoPI tables or column‑partitioned tables.
If the preceding item in the index list is a partitioning clause that is not part of an index clause, you must specify a COMMA character preceding NO PRIMARY INDEX. Otherwise, the comma is optional.
See Database Design for the complete set of rules that control primary index defaults.
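For illustration, the following hypothetical definitions contrast an explicit unique primary index with a NoPI table:

```sql
-- Explicit unique primary index on the hash-distribution column.
CREATE TABLE orders (
  order_id INTEGER NOT NULL,
  cust_id  INTEGER,
  amount   DECIMAL(10,2)
) UNIQUE PRIMARY INDEX (order_id);

-- A NoPI staging table; NoPI tables default to MULTISET.
CREATE MULTISET TABLE staging_orders (
  order_id INTEGER,
  amount   DECIMAL(10,2)
) NO PRIMARY INDEX;
```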
UNIQUE INDEX
that the table is to be defined with a unique secondary index.
INDEX
a keyword used to define any secondary indexes for the table.
You cannot define a secondary index on a column defined with any of the following data types.
BLOB
CLOB
LOB UDT
VARIANT_TYPE
ARRAY
VARRAY
Period
Geospatial
You can define a secondary index on a column that has a UDT data type.
If you specify INDEX without the preceding keyword UNIQUE, the index is a non‑unique secondary index.
The INDEX list is an extension to ANSI SQL.
Unlike the indexes created by the UNIQUE and PRIMARY KEY constraint definitions, indexes defined by the index list can have nullable columns.
index_name
an optional name for a secondary index.
See SQL Fundamentals for the rules for naming database objects.
index_column_name
the column set on which a secondary index is defined.
ALL
that the system should ignore the assigned case specificity for a column. This property enables a NUSI defined with the ALL option to do the following:
Include case‑specific values.
Cover a table on a NOT CASESPECIFIC column set.
ALL enables a NUSI to cover a query, enhancing performance by eliminating the need to access the base table itself when all columns needed by a query are contained in the NUSI.
Be aware that specifying the ALL option might also require additional index storage space.
You cannot specify multiple NUSIs that differ only by the presence or absence of the ALL option.
You cannot specify the ALL option for primary or unique secondary indexes.
PARTITION BY partitioning_level
that a table is partitioned by one or more partitioning levels.
You cannot specify a partitioning level that includes a row‑level security constraint column.
You cannot specify a character partitioning level for columns or constants that use the Kanji1 or KanjiSJIS server character sets.
A partitioning level can be defined using a single COLUMN keyword, a partitioning expression, or a combination of both.
You cannot define column partitioning for any of the following table types.
Primary-indexed
Global temporary
Volatile
Temporal
A partitioned NoPI table must be column‑partitioned.
A partitioning expression cannot specify external or SQL UDFs or columns having any of the following data types.
UDT
ARRAY
VARRAY
BLOB
CLOB
Geospatial
Period
However, you can reference Period columns indirectly through the use of the BEGIN and END bound functions (see “Example 38: CASE_N Partitioning Expression Using the END Bound Function” through “Example 41: Partitioning Expression Using the UNTIL_CHANGED Predicate Variable” and Temporal Table Support).
If you specify multiple partitioning levels, each member of the set must be separated by a COMMA character, and the entire set must be delimited on the left by a LEFT PARENTHESIS character and on the right by a RIGHT PARENTHESIS character.
The maximum number of partitioning levels that you can specify for 2‑byte partitioning is 15.
The maximum number of partitioning levels that you can specify for 8‑byte partitioning is 62.
If a partitioning expression is not a RANGE_N or CASE_N function, its result must always be eligible to be implicitly cast to an INTEGER if it is not already an INTEGER.
Use a CASE_N function to define a mapping of conditions to INTEGER or BIGINT numbers.
The maximum number of partitions for a CASE_N partitioning level is typically limited to 4,000 or less.
Use a RANGE_N function to define a mapping of ranges of INTEGER, CHARACTER, or DATE values to INTEGER numbers or to define a mapping of ranges of BIGINT or TIMESTAMP values to BIGINT numbers.
The maximum number of ranges, not including the NO RANGE, NO RANGE OR UNKNOWN, and UNKNOWN partitions for a RANGE_N partitioning level, is 9,223,372,036,854,775,805.
You cannot specify an EACH clause if the RANGE_N function specifies a character or graphic test value.
See SQL Functions, Operators, Expressions, and Predicates for documentation of the CASE_N and RANGE_N functions.
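The following hypothetical definitions sketch RANGE_N and CASE_N partitioning levels:

```sql
-- One partition per month, plus NO RANGE and UNKNOWN partitions.
CREATE TABLE order_history (
  order_id   INTEGER NOT NULL,
  order_date DATE NOT NULL,
  amount     DECIMAL(10,2)
) PRIMARY INDEX (order_id)
PARTITION BY RANGE_N (
  order_date BETWEEN DATE '2020-01-01' AND DATE '2024-12-31'
             EACH INTERVAL '1' MONTH,
  NO RANGE, UNKNOWN);

-- Conditions mapped to partition numbers with CASE_N.
CREATE TABLE order_size (
  order_id INTEGER NOT NULL,
  amount   DECIMAL(10,2)
) PRIMARY INDEX (order_id)
PARTITION BY CASE_N (
  amount <  100.00,
  amount < 1000.00,
  NO CASE, UNKNOWN);
```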
COLUMN specifies a column partitioning level with COLUMN format and that the table is column‑partitioned.
If you do not specify NO AUTO COMPRESS for a column partition, then Teradata Database autocompresses data as physical rows that are inserted into that column partition of a column‑partitioned table unless there are no applicable autocompression methods that reduce the size of the physical row. See “CREATE TABLE” in SQL Data Definition Language Detailed Topics for more information about autocompression.
You can define a variety of partitionings with a wide range in the number of combined partitions. However, you must consider the usefulness of defining a particular partitioning and its impact, both positive and negative, on performance and disk storage.
Column grouping in the COLUMN clause enables more flexibility in specifying which columns go into which partitions while still being able to specify the display order (that is, when selecting the columns from the table using an asterisk) in the table element list. Column grouping in the column list for a table enables a simpler, but less flexible, specification of column groupings than you can specify in a partitioning level.
ROW specifies a column partition that has ROW format. A ROW format means that only one column-partition value is stored in a physical row as a subrow.
If you specify neither COLUMN nor ROW for a column partition, Teradata Database determines whether COLUMN or ROW format is used for the column partition (see “CREATE TABLE” in SQL Data Definition Language Detailed Topics for information about what COLUMN format and ROW format mean).
ADD constant
that the maximum number of partitions for a partitioning level is the number of partitions it defines plus the value of the BIGINT constant value specified by constant.
The value for constant must be an unsigned BIGINT constant and cannot exceed 9,223,372,036,854,775,807.
You can only specify this option for column‑partitioned tables.
ALL BUT (column_name_list)
that a single-column partition with autocompression and a system-determined COLUMN or ROW format is defined for each column, if any, that is not specified in the column group list.
You can only specify this option for column‑partitioned tables.
NO AUTO COMPRESS
to disable autocompression for the physical rows of a column partition of a column‑partitioned table.
If you do not specify NO AUTO COMPRESS, then Teradata Database applies a system‑determined method of autocompression for physical rows if it can find compression methods that reduce the size of the physical row.
Teradata Database still applies any user-specified compression and, for column partitions with COLUMN format, row header compression.
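A column‑partitioned definition might combine these options as follows (hypothetical names; a sketch, not a tuned design):

```sql
CREATE TABLE web_clicks (
  click_ts TIMESTAMP(0),
  user_id  INTEGER,
  url      VARCHAR(2000),
  referrer VARCHAR(2000)
) NO PRIMARY INDEX
PARTITION BY COLUMN ALL BUT (
  (click_ts, user_id) NO AUTO COMPRESS,  -- multicolumn partition, autocompression disabled
  ROW (url)                              -- one column-partition value per physical row (subrow)
);
-- referrer is not in the group list, so ALL BUT gives it a single-column
-- partition with autocompression and a system-determined format.
```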
INDEX
a keyword to introduce a secondary index definition. The INDEX list is an extension to ANSI SQL.
Unlike the indexes created by the UNIQUE and PRIMARY KEY constraint definitions, indexes defined by the index list can include nullable columns.
index_name
an optional name for the index.
See SQL Fundamentals for the rules for naming database objects.
ALL
that the system should ignore the assigned case specificity for a column. This property enables a NUSI defined with the ALL option to do the following:
Include case‑specific values.
Cover a table or join index on a NOT CASESPECIFIC column set.
ALL enables a NUSI to cover a query, enhancing performance by eliminating the need to access the base table itself when all columns needed by a query are contained in the NUSI.
Be aware that specifying the ALL option might also require additional index storage space.
You cannot specify multiple NUSIs that differ only by the presence or absence of the ALL option.
You cannot specify the ALL option for primary or unique secondary indexes.
index_column_name
a column set whose values are to be used as the basis for a secondary index.
See SQL Fundamentals for the rules for naming database objects.
No column in the list can have a Period, Geospatial, BLOB, or CLOB data type.
If you specify more than one column name, the index is created on the combined values of each column named. A maximum of 64 columns can be specified for an index, and a maximum of 32 secondary indexes can be created for one table.
ORDER BY
row ordering on each AMP by a single NUSI column: either value-ordered or hash‑ordered.
If you specify ORDER BY, but do not specify VALUES or HASH, the ordering defaults to VALUES.
If you do not specify ORDER BY, then Teradata Database orders the rows on their hash values.
If you specify HASH, then Teradata Database orders the rows on their hash values.
VALUES
value-ordering for the ORDER BY NUSI column.
Select VALUES to optimize queries that return a contiguous range of values, especially for a covering index or a nested join.
You can specify VALUES with or without an order column name. If you do not specify order_column_name, Teradata Database orders the NUSI on the values of its first column.
HASH
hash-ordering for the ORDER BY column.
Select HASH to limit hash-ordering to one column, rather than all columns. This is the default if you do not specify an ORDER BY clause.
Hash-ordering a multicolumn NUSI on one of its columns allows the NUSI to participate in a nested join where join conditions involve only that ordering column.
You can specify HASH with or without an order column name. If you do not specify order_column_name, Teradata Database orders the NUSI on the hash of its first column.
order_column_name
an optional column in the INDEX column list for a NUSI that specifies the sort order used to store index rows.
See SQL Fundamentals for the rules for naming database objects.
If you do not specify an order_column_name, then the system orders the NUSI rows using the first column in the index definition by default.
The supported data types for a value-ordered order_column_name are the following:
BYTEINT
DATE
DECIMAL
INTEGER
SMALLINT
Values for order_column_name are limited to 4 or fewer bytes.
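The secondary index options above can be sketched with hypothetical names as follows; sale_date (a 4‑byte DATE) is a valid value‑ordering column:

```sql
CREATE TABLE sales_detail (
  sale_id   INTEGER NOT NULL,
  store_id  INTEGER,
  sale_date DATE,
  amount    DECIMAL(10,2)
) PRIMARY INDEX (sale_id)
INDEX idx_cover ALL (store_id, sale_date) ORDER BY VALUES (sale_date)
INDEX idx_store (store_id, amount) ORDER BY HASH (store_id);
```

The ALL option lets idx_cover retain case‑specific values so it can cover queries; ORDER BY VALUES favors range predicates on sale_date, while ORDER BY HASH limits hash‑ordering of the multicolumn NUSI to store_id.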
Temporary/Volatile Table Preservation
ON COMMIT DELETE ROWS
 
ON COMMIT PRESERVE ROWS
to delete or preserve the contents of an instance of a global temporary or volatile table when a transaction completes.
DELETE ROWS clears an instance of a global temporary or volatile table of all rows.
PRESERVE ROWS retains the rows in the global temporary or volatile table after the transaction commits.
DELETE is the default.
This option is valid for global temporary and volatile tables only.
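As a sketch with hypothetical names:

```sql
-- Volatile table: definition lives only for the session; rows survive commits.
CREATE VOLATILE TABLE session_scratch (
  k INTEGER,
  v VARCHAR(100)
) PRIMARY INDEX (k)
ON COMMIT PRESERVE ROWS;

-- Global temporary table: definition is stored in the data dictionary; each
-- materialized instance is cleared at transaction end (the DELETE ROWS default).
CREATE GLOBAL TEMPORARY TABLE gt_work (
  k INTEGER,
  v VARCHAR(100)
) PRIMARY INDEX (k)
ON COMMIT DELETE ROWS;
```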