pyspark.sql.SparkSession.range
SparkSession.range(start, end=None, step=1, numPartitions=None)

Create a DataFrame with a single pyspark.sql.types.LongType column named id, containing elements in a range from start to end (exclusive) with step value step.

New in version 2.0.0.
- Parameters
- start : int
the start value
- end : int, optional
the end value (exclusive)
- step : int, optional
the incremental step (default: 1)
- numPartitions : int, optional
the number of partitions of the DataFrame
- Returns
- DataFrame
Examples
>>> spark.range(1, 7, 2).collect()
[Row(id=1), Row(id=3), Row(id=5)]
If only one argument is specified, it will be used as the end value.
>>> spark.range(3).collect()
[Row(id=0), Row(id=1), Row(id=2)]