Conversation

@hvanhovell
Contributor

What changes were proposed in this pull request?

This PR reduces the number of configuration RPCs needed for building a LocalRelation in the Scala client to 1.

Why are the changes needed?

This reduces the number of RPCs between the Scala client and the server when building a LocalRelation, which improves performance.
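
For illustration, a minimal, self-contained sketch of the batching idea (the ConfigClient trait and fetchConfigs helper below are hypothetical stand-ins for the Connect RPC layer, not the actual client code):

// Hypothetical sketch: resolve many config keys in one round trip
// instead of issuing one RPC per key.
trait ConfigClient {
  // A single batched request that returns (key, value) pairs.
  def getAll(keys: Seq[String]): Seq[(String, String)]
}

def fetchConfigs(client: ConfigClient, keys: String*): Map[String, String] =
  if (keys.isEmpty) Map.empty
  else client.getAll(keys).toMap // one RPC instead of keys.size RPCs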

Does this PR introduce any user-facing change?

No.

How was this patch tested?

Added tests for RuntimeConfig.getMap(..).

Was this patch authored or co-authored using generative AI tooling?

No.

@hvanhovell hvanhovell marked this pull request as ready for review December 22, 2025 18:17
}
}

private[connect] def getMap(keys: String*): Map[String, String] = {
Contributor

nit: getMap is a bit vague. It indicates that the return type is a Map but doesn’t say much about its behavior. In my opinion, batchGet might be clearer, as it suggests that the method is used for batching config requests and improving efficiency.
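
For context, a hedged usage sketch of the batched accessor, mirroring the call shape of the signature quoted above and the test added in this PR (the keys shown are existing Spark SQL confs):

// Resolve several session confs in a single request; returns key -> value.
val confs: Map[String, String] = spark.conf.getMap(
  "spark.sql.session.timeZone",
  "spark.sql.ansi.enabled")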

Member

@HyukjinKwon HyukjinKwon left a comment

cc @zhengruifeng since you did the same thing on the Python side

maxRecordsPerBatch = maxChunkSizeRows,
maxBatchSize = math.min(maxChunkSizeBytes, maxBatchOfChunksSize),
timeZoneId = timeZoneId,
largeVarTypes = largeVarTypes,
Contributor

It seems timeZoneId and largeVarTypes also need config RPCs.
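
If those values are indeed fetched separately, one option (a sketch only, assuming the batched accessor discussed above) would be to fold their keys into the same request:

// Hypothetical: include the timezone conf in the same batched lookup as
// the LocalRelation sizing confs, saving the extra round trips.
val confs = spark.conf.getMap(
  SqlApiConf.LOCAL_RELATION_CACHE_THRESHOLD_KEY,
  SqlApiConf.LOCAL_RELATION_CHUNK_SIZE_ROWS_KEY,
  SqlApiConf.LOCAL_RELATION_CHUNK_SIZE_BYTES_KEY,
  "spark.sql.session.timeZone") // source of timeZoneId, shown illustratively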

SqlApiConf.LOCAL_RELATION_BATCH_OF_CHUNKS_SIZE_BYTES_KEY)
val threshold = confs(SqlApiConf.LOCAL_RELATION_CACHE_THRESHOLD_KEY).toInt
val maxChunkSizeRows = confs(SqlApiConf.LOCAL_RELATION_CHUNK_SIZE_ROWS_KEY).toInt
val maxChunkSizeBytes = confs(SqlApiConf.LOCAL_RELATION_CHUNK_SIZE_BYTES_KEY).toInt
Contributor

@LuciferYang LuciferYang Dec 23, 2025

From the definition, this is a longConf, so should we call toLong here?

val LOCAL_RELATION_CHUNK_SIZE_BYTES =
  buildConf(SqlApiConfHelper.LOCAL_RELATION_CHUNK_SIZE_BYTES_KEY)
    .doc("The chunk size in bytes when splitting ChunkedCachedLocalRelation.data " +
      "into batches. A new chunk is created when either " +
      "spark.sql.session.localRelationChunkSizeBytes " +
      "or spark.sql.session.localRelationChunkSizeRows is reached. " +
      "Limited by the spark.sql.session.localRelationBatchOfChunksSizeBytes, " +
      "a minimum of the two confs is used to determine the chunk size.")
    .version("4.1.0")
    .longConf
    .checkValue(_ > 0, "The chunk size in bytes must be positive")
    .createWithDefault(16 * 1024 * 1024L)
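
That is, a sketch of the suggested change: parse the value as a Long to match the longConf declaration and avoid truncation for values above Int.MaxValue.

// The conf is declared with .longConf, so read it back as a Long.
val maxChunkSizeBytes = confs(SqlApiConf.LOCAL_RELATION_CHUNK_SIZE_BYTES_KEY).toLong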

test("RuntimeConfig.get multiple keys") {
assert(spark.conf.getMap().isEmpty)
val result = spark.conf.getMap(
"spark.sql.ansi.enabled",
Contributor

@LuciferYang LuciferYang Dec 23, 2025

We have a no-ansi daily test pipeline, and this case might fail in such a scenario:

SPARK_ANSI_SQL_MODE="false" build/sbt  clean "connect-client-jvm/testOnly org.apache.spark.sql.connect.ClientE2ETestSuite"
[info] - RuntimeConfig.get multiple keys *** FAILED *** (4 milliseconds)
[info]   Map("spark.sql.ansi.enabled" -> "false", "spark.sql.session.timeZone" -> "Asia/Shanghai", "spark.sql.binaryOutputStyle" -> "") did not equal Map("spark.sql.ansi.enabled" -> "true", "spark.sql.session.timeZone" -> "Asia/Shanghai", "spark.sql.binaryOutputStyle" -> "") (ClientE2ETestSuite.scala:1066)
[info]   Analysis:
[info]   Map("spark.sql.ansi.enabled": "false" -> "true")
[info]   org.scalatest.exceptions.TestFailedException:
[info]   at org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:472)
[info]   at org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:471)
[info]   at org.scalatest.Assertions$.newAssertionFailedException(Assertions.scala:1231)
[info]   at org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:1295)
[info]   at org.apache.spark.sql.connect.ClientE2ETestSuite.$anonfun$new$139(ClientE2ETestSuite.scala:1066)
