Class CommonExecTableSourceScan

java.lang.Object
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase<org.apache.flink.table.data.RowData>
org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecTableSourceScan
All Implemented Interfaces:
ExecNode<org.apache.flink.table.data.RowData>, ExecNodeTranslator<org.apache.flink.table.data.RowData>, FusionCodegenExecNode, MultipleTransformationTranslator<org.apache.flink.table.data.RowData>
Direct Known Subclasses:
BatchExecTableSourceScan, StreamExecTableSourceScan

public abstract class CommonExecTableSourceScan extends ExecNodeBase<org.apache.flink.table.data.RowData> implements MultipleTransformationTranslator<org.apache.flink.table.data.RowData>
Base ExecNode to read data from an external source defined by a ScanTableSource.
  • Constructor Details

    • CommonExecTableSourceScan

      protected CommonExecTableSourceScan(int id, ExecNodeContext context, org.apache.flink.configuration.ReadableConfig persistedConfig, DynamicTableSourceSpec tableSourceSpec, List<InputProperty> inputProperties, org.apache.flink.table.types.logical.LogicalType outputType, String description)
  • Method Details

    • getSimplifiedName

      public String getSimplifiedName()
      Overrides:
      getSimplifiedName in class ExecNodeBase<org.apache.flink.table.data.RowData>
    • getTableSourceSpec

      public DynamicTableSourceSpec getTableSourceSpec()
    • translateToPlanInternal

      protected org.apache.flink.api.dag.Transformation<org.apache.flink.table.data.RowData> translateToPlanInternal(PlannerBase planner, ExecNodeConfig config)
      Description copied from class: ExecNodeBase
      Internal method that translates this node into a Flink operator.
      Specified by:
      translateToPlanInternal in class ExecNodeBase<org.apache.flink.table.data.RowData>
      Parameters:
      planner - The planner.
      config - per-ExecNode configuration containing the configuration merged from various layers. All nodes implementing this method should use it instead of retrieving configuration from the planner. For more details, see ExecNodeConfig.
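      The "merged configuration from various layers" idea behind ExecNodeConfig can be sketched outside of Flink as a chain of maps consulted in priority order. The class, method, and key names below are hypothetical, not Flink's API; real ExecNodeConfig merges the node's persisted config over the planner's table config behind the ReadableConfig interface.

      ```java
      import java.util.List;
      import java.util.Map;
      import java.util.Optional;

      /** Sketch only (not Flink's API): a read-only view over config layers,
       *  where earlier layers take precedence over later ones. */
      class LayeredConfig {
          private final List<Map<String, String>> layers;

          LayeredConfig(List<Map<String, String>> layers) {
              this.layers = layers;
          }

          /** Return the first value found, scanning layers in priority order. */
          Optional<String> get(String key) {
              for (Map<String, String> layer : layers) {
                  if (layer.containsKey(key)) {
                      return Optional.of(layer.get(key));
                  }
              }
              return Optional.empty();
          }

          public static void main(String[] args) {
              // Hypothetical keys for illustration only.
              Map<String, String> nodePersisted = Map.of("source.parallelism", "4");
              Map<String, String> plannerWide =
                      Map.of("source.parallelism", "2", "exec.mode", "batch");
              LayeredConfig config =
                      new LayeredConfig(List.of(nodePersisted, plannerWide));
              // The node-level setting shadows the planner-wide default.
              System.out.println(config.get("source.parallelism").orElse("none"));
              System.out.println(config.get("exec.mode").orElse("none"));
          }
      }
      ```

      This is why the Javadoc warns against reading configuration from the planner directly: a node would then see only one layer and miss per-node overrides.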
    • getPhysicalRowType

      protected org.apache.flink.table.types.logical.RowType getPhysicalRowType(org.apache.flink.table.catalog.ResolvedSchema schema)
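      Conceptually, this method keeps only the physical columns of the resolved schema (dropping computed and metadata columns) and builds a RowType from them. A minimal stand-alone sketch of that filtering, using hypothetical column records rather than Flink's ResolvedSchema and Column classes:

      ```java
      import java.util.List;
      import java.util.stream.Collectors;

      /** Sketch only: stand-ins for Flink's column kinds. */
      class PhysicalRowTypeSketch {
          enum Kind { PHYSICAL, COMPUTED, METADATA }

          record Column(String name, String type, Kind kind) {}

          /** Keep only physical columns, mirroring the idea behind getPhysicalRowType. */
          static List<Column> physicalColumns(List<Column> schema) {
              return schema.stream()
                      .filter(c -> c.kind() == Kind.PHYSICAL)
                      .collect(Collectors.toList());
          }

          public static void main(String[] args) {
              List<Column> schema = List.of(
                      new Column("id", "BIGINT", Kind.PHYSICAL),
                      new Column("ts", "TIMESTAMP(3)", Kind.METADATA),    // e.g. connector metadata
                      new Column("total", "DECIMAL(10,2)", Kind.COMPUTED), // e.g. price * qty
                      new Column("price", "DECIMAL(10,2)", Kind.PHYSICAL));
              // Only "id" and "price" remain: the fields the source actually produces.
              physicalColumns(schema).forEach(c -> System.out.println(c.name()));
          }
      }
      ```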
    • getPrimaryKeyIndices

      protected int[] getPrimaryKeyIndices(org.apache.flink.table.types.logical.RowType sourceRowType, org.apache.flink.table.catalog.ResolvedSchema schema)
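      The index computation amounts to looking up each primary-key column name in the source row type's field list. A self-contained sketch of that mapping (hypothetical helper, not Flink's implementation):

      ```java
      import java.util.Arrays;
      import java.util.List;

      class PrimaryKeyIndicesSketch {
          /** For each primary-key column, find its position among the row's field names. */
          static int[] primaryKeyIndices(List<String> fieldNames, List<String> primaryKey) {
              return primaryKey.stream()
                      .mapToInt(fieldNames::indexOf)
                      .toArray();
          }

          public static void main(String[] args) {
              List<String> fields = List.of("id", "name", "region", "ts");
              int[] indices = primaryKeyIndices(fields, List.of("region", "id"));
              System.out.println(Arrays.toString(indices)); // [2, 0]
          }
      }
      ```

      Note the result preserves the primary key's declared column order, not the field order of the row type.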
    • createSourceFunctionTransformation

      @Deprecated protected org.apache.flink.api.dag.Transformation<org.apache.flink.table.data.RowData> createSourceFunctionTransformation(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment env, org.apache.flink.streaming.api.functions.source.legacy.SourceFunction<org.apache.flink.table.data.RowData> function, boolean isBounded, String operatorName, org.apache.flink.api.common.typeinfo.TypeInformation<org.apache.flink.table.data.RowData> outputTypeInfo, int sourceParallelism, boolean sourceParallelismConfigured)
      Deprecated.
      This method relies on the SourceFunction API, which is due to be removed.
      Adapted from StreamExecutionEnvironment.addSource(SourceFunction, String, TypeInformation) but with custom Boundedness.
    • createInputFormatTransformation

      protected abstract org.apache.flink.api.dag.Transformation<org.apache.flink.table.data.RowData> createInputFormatTransformation(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment env, org.apache.flink.api.common.io.InputFormat<org.apache.flink.table.data.RowData,?> inputFormat, org.apache.flink.table.runtime.typeutils.InternalTypeInfo<org.apache.flink.table.data.RowData> outputTypeInfo, String operatorName)
      Creates a Transformation based on the given InputFormat. The implementation is different for streaming mode and batch mode.