spark_partition_id {SparkR} | R Documentation
Description

Return the partition ID of the Spark task as a SparkDataFrame column. Note that this is nondeterministic because it depends on data partitioning and task scheduling.
Usage

spark_partition_id(x)

## S4 method for signature 'missing'
spark_partition_id()
Details

This is equivalent to the SPARK_PARTITION_ID function in SQL.
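For comparison, the same value can be obtained by calling the SQL function directly on a registered view. A minimal sketch (the view name "tbl" and the SparkDataFrame df are illustrative assumptions):

## Not run: 
##D # Register df as a temporary view, then query the SQL function of the same name
##D createOrReplaceTempView(df, "tbl")
##D head(sql("SELECT SPARK_PARTITION_ID() AS pid FROM tbl"))
## End(Not run)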
Note

spark_partition_id since 2.0.0
Examples

## Not run: select(df, spark_partition_id())
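A slightly fuller sketch, assuming a local SparkR session and the built-in mtcars data (the column name "pid" is an arbitrary choice):

## Not run: 
##D sparkR.session()
##D df <- createDataFrame(mtcars)
##D # Attach each row's physical partition ID as a column; the values depend on
##D # how the data happens to be partitioned, so they change after a repartition.
##D head(withColumn(df, "pid", spark_partition_id()))
##D head(withColumn(repartition(df, 4L), "pid", spark_partition_id()))
## End(Not run)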