pyspark.sql.DataFrame.cube
DataFrame.cube(*cols)
Create a multi-dimensional cube for the current DataFrame using the specified columns, allowing aggregations to be performed on them.

New in version 1.4.0.

Changed in version 3.4.0: Supports Spark Connect.
Parameters
cols : str, Column or int
    The columns to cube by. Each element should be a column name (str), a column expression (Column), or a column ordinal (int, starting from 1).
Returns
GroupedData
    Cube of the data based on the specified columns.
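Since the return value is a GroupedData, any aggregate expression can be applied to the cube, not only count(). A minimal sketch (the sum("age") aggregation and the total_age alias are illustrative, not part of this page's examples):

>>> from pyspark.sql import functions as F
>>> df = spark.createDataFrame([("Alice", 2), ("Bob", 5)], schema=["name", "age"])
>>> # the NULL row is the grand total across all names: 2 + 5 = 7
>>> df.cube("name").agg(F.sum("age").alias("total_age")).orderBy("name").show()
+-----+---------+
| name|total_age|
+-----+---------+
| NULL|        7|
|Alice|        2|
|  Bob|        5|
+-----+---------+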
Notes

A column ordinal starts from 1, which is different from the 0-based __getitem__().

Examples
>>> df = spark.createDataFrame([("Alice", 2), ("Bob", 5)], schema=["name", "age"])
Example 1: Creating a cube on ‘name’ and counting the number of rows in each dimension.
>>> df.cube("name").count().orderBy("name").show() +-----+-----+ | name|count| +-----+-----+ | NULL| 2| |Alice| 1| | Bob| 1| +-----+-----+
Example 2: Creating a cube on ‘name’ and ‘age’, and counting the number of rows in each dimension.
>>> df.cube("name", df.age).count().orderBy("name", "age").show() +-----+----+-----+ | name| age|count| +-----+----+-----+ | NULL|NULL| 2| | NULL| 2| 1| | NULL| 5| 1| |Alice|NULL| 1| |Alice| 2| 1| | Bob|NULL| 1| | Bob| 5| 1| +-----+----+-----+
Example 3: Creating the same cube on ‘name’ and ‘age’, but using column ordinals.
>>> df.cube(1, 2).count().orderBy(1, 2).show()
+-----+----+-----+
| name| age|count|
+-----+----+-----+
| NULL|NULL|    2|
| NULL|   2|    1|
| NULL|   5|    1|
|Alice|NULL|    1|
|Alice|   2|    1|
|  Bob|NULL|    1|
|  Bob|   5|    1|
+-----+----+-----+
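The NULL rows that cube produces are subtotal markers, which can be ambiguous when the data itself contains null values. A hedged sketch using pyspark.sql.functions.grouping() to tell the two apart (the is_subtotal alias is illustrative, not part of this page's examples):

>>> from pyspark.sql import functions as F
>>> # grouping() returns 1 when the column is aggregated away (a subtotal row), 0 otherwise
>>> df.cube("name").agg(
...     F.grouping("name").alias("is_subtotal"), F.count("*").alias("count")
... ).orderBy("name").show()
+-----+-----------+-----+
| name|is_subtotal|count|
+-----+-----------+-----+
| NULL|          1|    2|
|Alice|          0|    1|
|  Bob|          0|    1|
+-----+-----------+-----+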