Class MergeTableAsUtil

java.lang.Object
org.apache.flink.table.planner.operations.MergeTableAsUtil

public class MergeTableAsUtil extends Object
A utility class with logic for handling the CREATE TABLE ... AS SELECT clause.
  • Constructor Summary

    Constructors
    Constructor
    Description
    MergeTableAsUtil(org.apache.calcite.sql.validate.SqlValidator validator, Function<org.apache.calcite.sql.SqlNode,String> escapeExpression, org.apache.flink.table.catalog.DataTypeFactory dataTypeFactory)
     
  • Method Summary

    Modifier and Type
    Method
    Description
    PlannerQueryOperation
    maybeRewriteQuery(org.apache.flink.table.catalog.CatalogManager catalogManager, FlinkPlannerImpl flinkPlanner, PlannerQueryOperation origQueryOperation, org.apache.calcite.sql.SqlNode origQueryNode, org.apache.flink.table.catalog.ResolvedCatalogTable sinkTable)
    Rewrites the query operation to include only the fields that may be persisted in the sink.
    org.apache.flink.table.api.Schema
    mergeSchemas(org.apache.calcite.sql.SqlNodeList sqlColumnList, org.apache.flink.sql.parser.ddl.SqlWatermark sqlWatermark, List<org.apache.flink.sql.parser.ddl.constraint.SqlTableConstraint> sqlTableConstraints, org.apache.flink.table.catalog.ResolvedSchema sourceSchema)
    Merges the schema defined by the specified columns, watermark, and constraints with the sourceSchema.
    org.apache.flink.table.api.Schema
    reorderSchema(org.apache.calcite.sql.SqlNodeList sqlColumnList, org.apache.flink.table.catalog.ResolvedSchema sourceSchema)
    Reorders the columns from the source schema based on the list of column identifiers.

    Methods inherited from class java.lang.Object

    clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
  • Constructor Details

    • MergeTableAsUtil

      public MergeTableAsUtil(org.apache.calcite.sql.validate.SqlValidator validator, Function<org.apache.calcite.sql.SqlNode,String> escapeExpression, org.apache.flink.table.catalog.DataTypeFactory dataTypeFactory)
  • Method Details

    • maybeRewriteQuery

      public PlannerQueryOperation maybeRewriteQuery(org.apache.flink.table.catalog.CatalogManager catalogManager, FlinkPlannerImpl flinkPlanner, PlannerQueryOperation origQueryOperation, org.apache.calcite.sql.SqlNode origQueryNode, org.apache.flink.table.catalog.ResolvedCatalogTable sinkTable)
      Rewrites the query operation to include only the fields that may be persisted in the sink.
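      The effect of the rewrite can be illustrated with a small, self-contained sketch (hypothetical names; not Flink's implementation): the original query is, in effect, wrapped in a projection that drops fields the sink table cannot persist.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch only: models the projection that maybeRewriteQuery
// applies, using plain column-name lists instead of planner operations.
public class RewriteSketch {

    // Keep only the query fields that also exist in the sink schema,
    // preserving the query's field order.
    static List<String> projectPersistedFields(List<String> queryFields, List<String> sinkFields) {
        return queryFields.stream()
                .filter(sinkFields::contains)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> query = Arrays.asList("id", "name", "tmp_rank");
        List<String> sink = Arrays.asList("id", "name");
        // Prints [id, name]: tmp_rank is not persisted in the sink.
        System.out.println(projectPersistedFields(query, sink));
    }
}
```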
    • mergeSchemas

      public org.apache.flink.table.api.Schema mergeSchemas(org.apache.calcite.sql.SqlNodeList sqlColumnList, @Nullable org.apache.flink.sql.parser.ddl.SqlWatermark sqlWatermark, List<org.apache.flink.sql.parser.ddl.constraint.SqlTableConstraint> sqlTableConstraints, org.apache.flink.table.catalog.ResolvedSchema sourceSchema)
      Merges the schema defined by the specified columns, watermark, and constraints with the sourceSchema.

      The resulting schema will contain the following elements:

      • columns
      • computed columns
      • metadata columns
      • watermarks
      • primary key

      It is expected that the sourceSchema contains only physical/regular columns.

      Columns of the sourceSchema are appended to the schema columns defined in the sqlColumnList. If a column in the sqlColumnList is already defined in the sourceSchema, then the two column types must be compatible according to the implicit cast rules; if they are compatible, the column keeps the position given by the appended sourceSchema.
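      The ordering rule above can be sketched with plain column-name lists (hypothetical names; the implicit-cast compatibility check and the actual Schema builder are omitted):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch only: models the column-ordering rule of mergeSchemas
// using name lists. Type compatibility checks are left out.
public class MergeOrderSketch {

    static List<String> mergeColumns(List<String> declaredColumns, List<String> sourceColumns) {
        List<String> merged = new ArrayList<>();
        // Columns declared only in the column list (e.g. computed or
        // metadata columns) keep their declared position.
        for (String column : declaredColumns) {
            if (!sourceColumns.contains(column)) {
                merged.add(column);
            }
        }
        // Source columns are appended in source order; a column that is
        // redeclared in the column list therefore stays at the position
        // given by the appended source schema.
        merged.addAll(sourceColumns);
        return merged;
    }

    public static void main(String[] args) {
        List<String> declared = Arrays.asList("rowtime_computed", "id");
        List<String> source = Arrays.asList("id", "name");
        // Prints [rowtime_computed, id, name]
        System.out.println(mergeColumns(declared, source));
    }
}
```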

    • reorderSchema

      public org.apache.flink.table.api.Schema reorderSchema(org.apache.calcite.sql.SqlNodeList sqlColumnList, org.apache.flink.table.catalog.ResolvedSchema sourceSchema)
      Reorders the columns from the source schema based on the list of column identifiers.
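      A minimal sketch of the reordering semantics (hypothetical names; the real method performs richer validation against the resolved schema):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch only: reorders source-schema columns to follow the
// order given by the identifier list.
public class ReorderSketch {

    static List<String> reorder(List<String> identifiers, List<String> sourceColumns) {
        return identifiers.stream()
                .map(id -> {
                    // Every identifier must name an existing source column.
                    if (!sourceColumns.contains(id)) {
                        throw new IllegalArgumentException("Unknown column: " + id);
                    }
                    return id;
                })
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Prints [name, id]: the source order (id, name) is replaced by
        // the identifier order.
        System.out.println(reorder(Arrays.asList("name", "id"), Arrays.asList("id", "name")));
    }
}
```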