Class LongHybridHashTable
java.lang.Object
org.apache.flink.table.runtime.hashtable.BaseHybridHashTable
org.apache.flink.table.runtime.hashtable.LongHybridHashTable
- All Implemented Interfaces:
org.apache.flink.core.memory.MemorySegmentSource, MemorySegmentPool
A specialized hash table optimized for long keys.
See LongHashPartition. TODO: add a min/max long filter and a Bloom filter to spilled
partitions.
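The specialization matters because a primitive long key avoids the boxing and generic hash-code dispatch that a `HashMap<Long, V>` would incur per lookup. The following is a minimal, self-contained sketch of that idea only; it is a toy open-addressing table, not the Flink implementation, and all names in it are invented for illustration:

```java
/** Toy open-addressing map keyed by primitive long (illustrative only; no resizing). */
final class ToyLongTable {
    private final long[] keys;
    private final int[] values;
    private final boolean[] used;   // marks occupied slots

    /** Capacity must be a power of two so masking can replace modulo. */
    ToyLongTable(int capacity) {
        keys = new long[capacity];
        values = new int[capacity];
        used = new boolean[capacity];
    }

    // Common long-key mixing trick: fold the high 32 bits into the low 32 bits.
    private int slot(long key) {
        long h = key ^ (key >>> 32);
        return (int) (h & (keys.length - 1));
    }

    void put(long key, int value) {
        int i = slot(key);
        while (used[i] && keys[i] != key) {
            i = (i + 1) & (keys.length - 1);   // linear probing on collision
        }
        used[i] = true;
        keys[i] = key;
        values[i] = value;
    }

    /** Returns the value for key, or -1 if the key is absent. */
    int get(long key) {
        int i = slot(key);
        while (used[i]) {
            if (keys[i] == key) {
                return values[i];
            }
            i = (i + 1) & (keys.length - 1);
        }
        return -1;
    }
}
```

The real table additionally partitions its data and spills partitions to disk under memory pressure; the sketch only shows why a dedicated long-key layout beats a boxed generic map.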
-
Field Summary
Fields inherited from class org.apache.flink.table.runtime.hashtable.BaseHybridHashTable
buildRowCount, buildSpillRetBufferNumbers, buildSpillReturnBuffers, closed, compressionBlockSize, compressionCodecFactory, compressionEnabled, currentEnumerator, currentRecursionDepth, currentSpilledBuildSide, currentSpilledProbeSide, initPartitionFanOut, internalPool, ioManager, LOG, MAX_NUM_PARTITIONS, MAX_RECURSION_DEPTH, numSpillFiles, segmentSize, segmentSizeBits, segmentSizeMask, spillInBytes, totalNumBuffers, tryDistinctBuildRow
Constructor Summary
Constructors
LongHybridHashTable(Object owner, boolean compressionEnabled, int compressionBlockSize, BinaryRowDataSerializer buildSideSerializer, BinaryRowDataSerializer probeSideSerializer, org.apache.flink.runtime.memory.MemoryManager memManager, long reservedMemorySize, org.apache.flink.runtime.io.disk.iomanager.IOManager ioManager, int avgRecordLen, long buildRowCount)
Method Summary
protected void clearPartitions()
void close()
    Closes the hash table.
int compressionBlockSize()
org.apache.flink.runtime.io.compression.BlockCompressionFactory compressionCodecFactory()
boolean compressionEnabled()
void endBuild()
void free()
final RowIterator<org.apache.flink.table.data.binary.BinaryRowData> get(long probeKey)
    This method is only used by operator fusion codegen to get a build row from the hash table.
abstract long getBuildLongKey(org.apache.flink.table.data.RowData row)
    For code gen: gets the build-side long key.
org.apache.flink.table.data.RowData getCurrentProbeRow()
abstract long getProbeLongKey(org.apache.flink.table.data.RowData row)
    For code gen: gets the probe-side long key.
final void insertIntoProbeBuffer(org.apache.flink.table.data.RowData probeRecord)
    If the partition corresponding to the probe row has been spilled to disk, call this method to spill the probe row to disk as well.
boolean nextMatching()
abstract org.apache.flink.table.data.binary.BinaryRowData probeToBinary(org.apache.flink.table.data.RowData row)
    For code gen: converts a probe-side row to BinaryRowData.
void putBuildRow(org.apache.flink.table.data.binary.BinaryRowData row)
int spillPartition()
boolean tryProbe(org.apache.flink.table.data.RowData record)
Methods inherited from class org.apache.flink.table.runtime.hashtable.BaseHybridHashTable
createInputView, ensureNumBuffersReturned, freeCurrent, freePages, getNextBuffer, getNextBuffers, getNotNullNextBuffer, getNumSpillFiles, getSpillInBytes, getUsedMemoryInBytes, hash, maxInitBufferOfBucketArea, maxNumPartition, nextSegment, pageSize, readAllBuffers, releaseMemoryCacheForSMJ, remainBuffers, returnAll, returnPage
-
Constructor Details
-
LongHybridHashTable
public LongHybridHashTable(Object owner, boolean compressionEnabled, int compressionBlockSize, BinaryRowDataSerializer buildSideSerializer, BinaryRowDataSerializer probeSideSerializer, org.apache.flink.runtime.memory.MemoryManager memManager, long reservedMemorySize, org.apache.flink.runtime.io.disk.iomanager.IOManager ioManager, int avgRecordLen, long buildRowCount)
-
-
Method Details
-
putBuildRow
public void putBuildRow(org.apache.flink.table.data.binary.BinaryRowData row) throws IOException
- Throws:
IOException
-
endBuild
public void endBuild() throws IOException
- Throws:
IOException
-
get
@Nullable public final RowIterator<org.apache.flink.table.data.binary.BinaryRowData> get(long probeKey) throws IOException
This method is only used by operator fusion codegen to get a build row from the hash table. If the build partition has been spilled to disk, this returns null directly, which requires the join operator to also spill the probe row to disk.
- Throws:
IOException
-
insertIntoProbeBuffer
public final void insertIntoProbeBuffer(org.apache.flink.table.data.RowData probeRecord) throws IOException
If the partition corresponding to the probe row has been spilled to disk, call this method to spill the probe row to disk as well.
Note: this must be called only after the get(long) method.
- Throws:
IOException
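Together, get(long) and insertIntoProbeBuffer(RowData) form a small protocol: a null from get means the matching build partition has been spilled, and the caller must buffer the probe row for a later pass over the spilled partitions. The following toy model sketches that control flow under stated assumptions; class and method names are invented for illustration, and it is not the Flink API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Toy model of the probe-side protocol: in-memory partitions answer directly,
 * while a "spilled" partition forces the probe row to be buffered for a later pass.
 * We pretend a key belongs to a spilled partition when (key & spilledMask) != 0.
 */
final class ToyProbeProtocol {
    private final Map<Long, String> inMemory = new HashMap<>();
    private final List<Long> spilledProbeBuffer = new ArrayList<>();
    private final long spilledMask;

    ToyProbeProtocol(long spilledMask) {
        this.spilledMask = spilledMask;
    }

    /** Build rows whose partition "spilled" are not kept in memory in this toy. */
    void putBuild(long key, String row) {
        if ((key & spilledMask) == 0) {
            inMemory.put(key, row);
        }
    }

    /** Mirrors get(long): null means "partition spilled, spill the probe row too". */
    String get(long probeKey) {
        if ((probeKey & spilledMask) != 0) {
            return null;
        }
        return inMemory.get(probeKey);
    }

    /** Mirrors insertIntoProbeBuffer: only meaningful after get() returned null. */
    void insertIntoProbeBuffer(long probeKey) {
        spilledProbeBuffer.add(probeKey);
    }

    int bufferedProbeRows() {
        return spilledProbeBuffer.size();
    }
}
```

The caller's loop then follows the pattern the javadoc implies: probe first, and spill the probe row only when get returns null, so that spilled build and probe partitions can be re-joined disk-to-disk later.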
-
tryProbe
public boolean tryProbe(org.apache.flink.table.data.RowData record) throws IOException
- Throws:
IOException
-
nextMatching
public boolean nextMatching() throws IOException
- Throws:
IOException
-
getCurrentProbeRow
public org.apache.flink.table.data.RowData getCurrentProbeRow()
getBuildSideIterator
-
close
public void close()
Description copied from class: BaseHybridHashTable
Closes the hash table. This effectively releases all internal structures, and closes all open files and removes them. The call to this method is valid both as a cleanup after the complete inputs were properly processed, and as a cancellation call, which cleans up all resources that are currently held by the hash join.
- Overrides:
close in class BaseHybridHashTable
-
free
public void free()
- Overrides:
free in class BaseHybridHashTable
-
getBuildLongKey
public abstract long getBuildLongKey(org.apache.flink.table.data.RowData row)
For code gen: gets the build-side long key.
getProbeLongKey
public abstract long getProbeLongKey(org.apache.flink.table.data.RowData row)
For code gen: gets the probe-side long key.
probeToBinary
public abstract org.apache.flink.table.data.binary.BinaryRowData probeToBinary(org.apache.flink.table.data.RowData row)
For code gen: converts a probe-side row to BinaryRowData.
spillPartition
- Specified by:
spillPartition in class BaseHybridHashTable
- Throws:
IOException
-
getPartitionsPendingForSMJ
-
getSpilledPartitionBuildSideIter
- Throws:
IOException
-
getSpilledPartitionProbeSideIter
- Throws:
IOException
-
clearPartitions
protected void clearPartitions()
- Specified by:
clearPartitions in class BaseHybridHashTable
-
compressionEnabled
public boolean compressionEnabled()
compressionCodecFactory
public org.apache.flink.runtime.io.compression.BlockCompressionFactory compressionCodecFactory()
compressionBlockSize
public int compressionBlockSize()
-