Class DeserializationSchemaAdapter

java.lang.Object
    org.apache.flink.connector.file.table.DeserializationSchemaAdapter

All Implemented Interfaces:
    Serializable, org.apache.flink.api.java.typeutils.ResultTypeQueryable<org.apache.flink.table.data.RowData>, BulkFormat<org.apache.flink.table.data.RowData,FileSourceSplit>

@Internal
public class DeserializationSchemaAdapter
extends Object
implements BulkFormat<org.apache.flink.table.data.RowData,FileSourceSplit>

Adapter to turn a DeserializationSchema into a BulkFormat.
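Because this class is @Internal and its real signatures depend on Flink types, the following is only a self-contained sketch of the adapter idea it implements: wrapping a per-record deserializer so it can serve records through a batch-oriented reader interface. All interface and class names below (DeserializationSchema, BulkFormatReader, SchemaAdapter) are simplified stand-ins, not the actual Flink API.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical stand-in for a per-record deserializer.
interface DeserializationSchema<T> {
    T deserialize(byte[] message); // one record per raw message
}

// Hypothetical stand-in for a batch-oriented (bulk) reader.
interface BulkFormatReader<T> {
    Iterator<T> readBatch();
}

// The adapter: feeds raw messages through the record-level schema
// and exposes the results as one batch, mirroring the record-to-bulk
// adaptation that DeserializationSchemaAdapter performs in Flink.
final class SchemaAdapter<T> implements BulkFormatReader<T> {
    private final DeserializationSchema<T> schema;
    private final Iterator<byte[]> rawMessages;

    SchemaAdapter(DeserializationSchema<T> schema, Iterator<byte[]> rawMessages) {
        this.schema = schema;
        this.rawMessages = rawMessages;
    }

    @Override
    public Iterator<T> readBatch() {
        List<T> batch = new ArrayList<>();
        while (rawMessages.hasNext()) {
            batch.add(schema.deserialize(rawMessages.next()));
        }
        return batch.iterator();
    }
}

public class AdapterDemo {
    public static List<String> run() {
        List<byte[]> raw = List.of("a".getBytes(), "b".getBytes());
        // String::new matches deserialize(byte[]) -> String
        SchemaAdapter<String> adapter = new SchemaAdapter<>(String::new, raw.iterator());
        List<String> out = new ArrayList<>();
        adapter.readBatch().forEachRemaining(out::add);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints [a, b]
    }
}
```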
Nested Class Summary

Nested classes/interfaces inherited from interface org.apache.flink.connector.file.src.reader.BulkFormat:
    BulkFormat.RecordIterator<T>
Constructor Summary

Constructor:
    DeserializationSchemaAdapter(org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData> deserializationSchema)
Method Summary

org.apache.flink.connector.file.table.DeserializationSchemaAdapter.Reader
    createReader(org.apache.flink.configuration.Configuration config, FileSourceSplit split)
    Creates a new reader that reads from the split's path, starting at the split's offset, and reads length bytes after the offset.

org.apache.flink.api.common.typeinfo.TypeInformation<org.apache.flink.table.data.RowData>
    getProducedType()
    Gets the type produced by this format.

boolean
    isSplittable()
    Checks whether this format is splittable.

org.apache.flink.connector.file.table.DeserializationSchemaAdapter.Reader
    restoreReader(org.apache.flink.configuration.Configuration config, FileSourceSplit split)
    Creates a new reader that reads from split.path(), starting at offset, and reads until length bytes after the offset.
Constructor Details

DeserializationSchemaAdapter
public DeserializationSchemaAdapter(org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData> deserializationSchema)
Method Details

createReader
public org.apache.flink.connector.file.table.DeserializationSchemaAdapter.Reader createReader(org.apache.flink.configuration.Configuration config, FileSourceSplit split) throws IOException

Description copied from interface: BulkFormat
Creates a new reader that reads from the split's path, starting at the split's offset, and reads length bytes after the offset.

Specified by:
    createReader in interface BulkFormat<org.apache.flink.table.data.RowData,FileSourceSplit>
Throws:
    IOException
restoreReader
public org.apache.flink.connector.file.table.DeserializationSchemaAdapter.Reader restoreReader(org.apache.flink.configuration.Configuration config, FileSourceSplit split) throws IOException

Description copied from interface: BulkFormat
Creates a new reader that reads from split.path(), starting at offset, and reads until length bytes after the offset. A number of recordsToSkip records should be read and discarded after the offset. This is typically part of restoring a reader to a checkpointed position.

Specified by:
    restoreReader in interface BulkFormat<org.apache.flink.table.data.RowData,FileSourceSplit>
Throws:
    IOException
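The recordsToSkip contract described above (read and discard some records past the offset when restoring to a checkpointed position) can be sketched in isolation. This is a hypothetical illustration of the skip-on-restore idea, not the actual Flink reader implementation; the method and class names here are invented for the example.

```java
import java.util.Iterator;
import java.util.List;

public class RestoreSkipDemo {

    // After seeking to the checkpointed offset, discard `recordsToSkip`
    // records (they were already emitted before the checkpoint), then
    // hand the remaining records to the caller.
    static <T> Iterator<T> restore(Iterator<T> atOffset, long recordsToSkip) {
        for (long i = 0; i < recordsToSkip && atOffset.hasNext(); i++) {
            atOffset.next(); // already consumed before the checkpoint; skip
        }
        return atOffset;
    }

    public static void main(String[] args) {
        // Suppose the checkpoint recorded that 2 records past the offset
        // were already emitted: restore resumes at the third record.
        Iterator<String> it = restore(List.of("r0", "r1", "r2", "r3").iterator(), 2);
        while (it.hasNext()) {
            System.out.println(it.next()); // prints r2 then r3
        }
    }
}
```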
isSplittable
public boolean isSplittable()

Description copied from interface: BulkFormat
Checks whether this format is splittable. Splittable formats allow Flink to create multiple splits per file, so that Flink can read multiple regions of the file concurrently. See the top-level JavaDocs (section "Splitting") for details.

Specified by:
    isSplittable in interface BulkFormat<org.apache.flink.table.data.RowData,FileSourceSplit>
getProducedType
public org.apache.flink.api.common.typeinfo.TypeInformation<org.apache.flink.table.data.RowData> getProducedType()

Description copied from interface: BulkFormat
Gets the type produced by this format. This type will be the type produced by the file source as a whole.

Specified by:
    getProducedType in interface BulkFormat<org.apache.flink.table.data.RowData,FileSourceSplit>
    getProducedType in interface org.apache.flink.api.java.typeutils.ResultTypeQueryable<org.apache.flink.table.data.RowData>