class SparkSession extends Serializable with Closeable with Logging
The entry point to programming Spark with the Dataset and DataFrame API.
In environments where this has been created up front (e.g. REPL, notebooks), use the builder to get the existing session:
SparkSession.builder().getOrCreate()
The builder can also be used to create a new session:
SparkSession.builder
  .master("local")
  .appName("Word Count")
  .config("spark.some.config.option", "some-value")
  .getOrCreate()
Value Members
-
final
def
!=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
final
def
##(): Int
- Definition Classes
- AnyRef → Any
-
final
def
==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
def
addArtifact(uri: URI): Unit
Add a single artifact to the client session.
Currently only local files with extensions .jar and .class are supported.
- Annotations
- @Experimental()
- Since
3.4.0
-
def
addArtifact(path: String): Unit
Add a single artifact to the client session.
Currently only local files with extensions .jar and .class are supported.
- Annotations
- @Experimental()
- Since
3.4.0
-
def
addArtifacts(uri: URI*): Unit
Add one or more artifacts to the session.
Currently only local files with extensions .jar and .class are supported.
- Annotations
- @Experimental() @varargs()
- Since
3.4.0
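For illustration, a minimal sketch of adding artifacts, assuming an existing session named spark and hypothetical local paths:
// Paths below are illustrative; only local .jar and .class files are supported.
spark.addArtifact("/tmp/udfs/my-udfs.jar")
spark.addArtifacts(
  new java.net.URI("file:///tmp/udfs/extra.jar"),
  new java.net.URI("file:///tmp/udfs/MyUdf.class"))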
-
final
def
asInstanceOf[T0]: T0
- Definition Classes
- Any
-
def
clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( ... ) @native()
-
def
close(): Unit
Close the SparkSession. This closes the connection, and the allocator. The latter will throw an exception if there are still open SparkResults.
- Definition Classes
- SparkSession → Closeable → AutoCloseable
- Since
3.4.0
-
val
conf: RuntimeConfig
Runtime configuration interface for Spark.
This is the interface through which the user can get and set all Spark configurations that are relevant to Spark SQL. When getting the value of a config, this defaults to the value set in the server, if any.
- Since
3.4.0
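For illustration, a minimal sketch of reading and updating a runtime setting through conf, assuming an existing session named spark:
// Get the current value (or the server-side default) and override it for this session.
val partitions = spark.conf.get("spark.sql.shuffle.partitions")
spark.conf.set("spark.sql.shuffle.partitions", "64")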
-
def
createDataFrame(data: List[_], beanClass: Class[_]): DataFrame
Applies a schema to a List of Java Beans.
WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.
- Since
3.4.0
-
def
createDataFrame(rows: List[Row], schema: StructType): DataFrame
:: DeveloperApi :: Creates a DataFrame from a java.util.List containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided List matches the provided schema. Otherwise, there will be a runtime exception.
- Since
3.4.0
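For illustration, a minimal sketch of the Row/schema overload above, assuming an existing session named spark:
import java.util.Arrays
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

// Every Row must match the schema, otherwise a runtime exception is thrown.
val schema = StructType(Seq(
  StructField("name", StringType, nullable = false),
  StructField("age", IntegerType, nullable = false)))
val rows = Arrays.asList(Row("Alice", 29), Row("Bob", 31))
val df = spark.createDataFrame(rows, schema)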
-
def
createDataFrame[A <: Product](data: Seq[A])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[A]): DataFrame
Creates a DataFrame from a local Seq of Product.
- Since
3.4.0
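For illustration, a minimal sketch of the Product (case class) overload above, assuming an existing session named spark:
// Column names and types are derived from the case class fields.
case class Person(name: String, age: Long)
val df = spark.createDataFrame(Seq(Person("Michael", 29), Person("Andy", 30)))
df.show()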
-
def
createDataset[T](data: List[T])(implicit arg0: Encoder[T]): Dataset[T]
Creates a Dataset from a java.util.List of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.
Java Example
List<String> data = Arrays.asList("hello", "world");
Dataset<String> ds = spark.createDataset(data, Encoders.STRING());
- Since
3.4.0
-
def
createDataset[T](data: Seq[T])(implicit arg0: Encoder[T]): Dataset[T]
Creates a Dataset from a local Seq of data of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.
Example
import spark.implicits._
case class Person(name: String, age: Long)
val data = Seq(Person("Michael", 29), Person("Andy", 30), Person("Justin", 19))
val ds = spark.createDataset(data)
ds.show()
// +-------+---+
// |   name|age|
// +-------+---+
// |Michael| 29|
// |   Andy| 30|
// | Justin| 19|
// +-------+---+
- Since
3.4.0
-
val
emptyDataFrame: DataFrame
Returns a DataFrame with no rows or columns.
- Since
3.4.0
-
def
emptyDataset[T](implicit arg0: Encoder[T]): Dataset[T]
Creates a new Dataset of type T containing zero elements.
- Since
3.4.0
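For illustration, a minimal sketch, assuming an existing session named spark; the encoder comes from the session's implicits:
import spark.implicits._
// A Dataset[String] with zero rows.
val empty = spark.emptyDataset[String]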
-
final
def
eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
-
def
equals(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
def
execute(extension: Any): Unit
- Annotations
- @DeveloperApi()
-
def
finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( classOf[java.lang.Throwable] )
-
final
def
getClass(): Class[_]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
-
def
hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
-
def
initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
- Attributes
- protected
- Definition Classes
- Logging
-
def
initializeLogIfNecessary(isInterpreter: Boolean): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
final
def
isInstanceOf[T0]: Boolean
- Definition Classes
- Any
-
def
isTraceEnabled(): Boolean
- Attributes
- protected
- Definition Classes
- Logging
-
def
log: Logger
- Attributes
- protected
- Definition Classes
- Logging
-
def
logDebug(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logDebug(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logError(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logError(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logInfo(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logInfo(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logName: String
- Attributes
- protected
- Definition Classes
- Logging
-
def
logTrace(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logTrace(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logWarning(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logWarning(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
final
def
ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
-
def
newDataFrame(extension: Any): DataFrame
- Annotations
- @DeveloperApi()
-
def
newDataset[T](extension: Any, encoder: AgnosticEncoder[T]): Dataset[T]
- Annotations
- @DeveloperApi()
- def newSession(): SparkSession
-
final
def
notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
-
final
def
notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
-
def
range(start: Long, end: Long, step: Long, numPartitions: Int): Dataset[Long]
Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value, with partition number specified.
- Since
3.4.0
-
def
range(start: Long, end: Long, step: Long): Dataset[Long]
Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value.
- Since
3.4.0
-
def
range(start: Long, end: Long): Dataset[Long]
Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with step value 1.
- Since
3.4.0
-
def
range(end: Long): Dataset[Long]
Creates a Dataset with a single LongType column named id, containing elements in a range from 0 to end (exclusive) with step value 1.
- Since
3.4.0
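For illustration, a minimal sketch of the range variants above, assuming an existing session named spark:
// id column with values 0, 2, 4, 6, 8, spread over 2 partitions.
val ds = spark.range(0, 10, 2, 2)
ds.show()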
-
def
read: DataFrameReader
Returns a DataFrameReader that can be used to read non-streaming data in as a DataFrame.
sparkSession.read.parquet("/path/to/file.parquet")
sparkSession.read.schema(schema).json("/path/to/file.json")
- Since
3.4.0
-
def
sql(query: String): DataFrame
Executes a SQL query using Spark, returning the result as a DataFrame. This API eagerly runs DDL/DML commands, but not for SELECT queries.
- Since
3.4.0
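For illustration, a minimal sketch, assuming an existing session named spark:
// DDL/DML commands run eagerly; SELECT results are computed when an action runs.
val df = spark.sql("SELECT id, id * 2 AS doubled FROM range(5)")
df.show()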
-
def
sql(sqlText: String, args: Map[String, Any]): DataFrame
Executes a SQL query substituting named parameters by the given arguments, returning the result as a DataFrame. This API eagerly runs DDL/DML commands, but not for SELECT queries.
- sqlText
A SQL statement with named parameters to execute.
- args
A map of parameter names to Java/Scala objects that can be converted to SQL literal expressions. See Supported Data Types for supported value types in Scala/Java. For example, map keys: "rank", "name", "birthdate"; map values: 1, "Steven", LocalDate.of(2023, 4, 2). A map value can also be a Column of a literal expression, in which case it is taken as is.
- Annotations
- @Experimental()
- Since
3.4.0
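For illustration, a minimal sketch of parameterized SQL, assuming an existing session named spark and the :name parameter-marker syntax:
// minId is substituted from the args map before execution.
val df = spark.sql("SELECT * FROM range(10) WHERE id > :minId", Map("minId" -> 5))
df.show()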
-
def
sql(sqlText: String, args: Map[String, Any]): DataFrame
Executes a SQL query substituting named parameters by the given arguments, returning the result as a DataFrame. This API eagerly runs DDL/DML commands, but not for SELECT queries.
- sqlText
A SQL statement with named parameters to execute.
- args
A map of parameter names to Java/Scala objects that can be converted to SQL literal expressions. See Supported Data Types for supported value types in Scala/Java. For example, map keys: "rank", "name", "birthdate"; map values: 1, "Steven", LocalDate.of(2023, 4, 2). A map value can also be a Column of a literal expression, in which case it is taken as is.
- Annotations
- @Experimental()
- Since
3.4.0
-
def
stop(): Unit
Synonym for close().
- Since
3.4.0
-
final
def
synchronized[T0](arg0: ⇒ T0): T0
- Definition Classes
- AnyRef
-
def
table(tableName: String): DataFrame
Returns the specified table/view as a DataFrame. If it's a table, it must support batch reading and the returned DataFrame is the batch scan query plan of this table. If it's a view, the returned DataFrame is simply the query plan of the view, which can either be a batch or streaming query plan.
- tableName
is either a qualified or unqualified name that designates a table or view. If a database is specified, it identifies the table/view from the database. Otherwise, it first attempts to find a temporary view with the given name and then matches the table/view from the current database. Note that the global temporary view database is also valid here.
- Since
3.4.0
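For illustration, a minimal sketch, assuming an existing session named spark and a hypothetical table named people in the current database:
val people = spark.table("people")
people.printSchema()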
-
def
time[T](f: ⇒ T): T
Executes some code block and prints to stdout the time taken to execute the block. This is available in Scala only and is used primarily for interactive testing and debugging.
- Since
3.4.0
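For illustration, a minimal sketch, assuming an existing session named spark; the elapsed time of the block is printed to stdout and the block's result is returned:
val n = spark.time {
  spark.range(0, 1000000).count()
}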
-
def
toString(): String
- Definition Classes
- AnyRef → Any
- lazy val version: String
-
final
def
wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
-
final
def
wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
-
final
def
wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... ) @native()
-
object
implicits extends SQLImplicits
(Scala-specific) Implicit methods available in Scala for converting common names and Symbols into Columns, and for converting common Scala objects into DataFrames.
val sparkSession = SparkSession.builder.getOrCreate()
import sparkSession.implicits._
- Since
3.4.0
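For illustration, a minimal sketch of what the import above enables, assuming the sparkSession shown there:
// toDF/toDS on local collections become available after the import.
val df = Seq(("Alice", 29), ("Bob", 31)).toDF("name", "age")
val ds = Seq(1, 2, 3).toDS()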