diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index 9cf480caba3e45da48a0975df53a508329c3d154..235f5ecc40c9fe1d975b0eb46a2d1c6fd26593c3 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -1272,9 +1272,9 @@ turning on some experimental options.
 
 ## Caching Data In Memory
 
-Spark SQL can cache tables using an in-memory columnar format by calling `spark.cacheTable("tableName")` or `dataFrame.cache()`.
+Spark SQL can cache tables using an in-memory columnar format by calling `spark.catalog.cacheTable("tableName")` or `dataFrame.cache()`.
 Then Spark SQL will scan only required columns and will automatically tune compression to minimize
-memory usage and GC pressure. You can call `spark.uncacheTable("tableName")` to remove the table from memory.
+memory usage and GC pressure. You can call `spark.catalog.uncacheTable("tableName")` to remove the table from memory.
 
 Configuration of in-memory caching can be done using the `setConf` method on `SparkSession` or
 by running `SET key=value` commands using SQL.
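
For context on the rename: the patch updates the guide to the Spark 2.x API, where catalog operations live under `SparkSession.catalog` rather than directly on the session. Below is a minimal sketch of the documented calls in use; the `people` view name and the sample rows are hypothetical, not part of the patch.

```scala
import org.apache.spark.sql.SparkSession

object CachingExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("CachingExample")
      .master("local[*]")
      .getOrCreate()

    import spark.implicits._

    // Hypothetical sample data, registered as a temporary view named "people".
    val df = Seq(("Alice", 34), ("Bob", 45)).toDF("name", "age")
    df.createOrReplaceTempView("people")

    // Cache the table in Spark SQL's in-memory columnar format
    // (the renamed call this patch documents).
    spark.catalog.cacheTable("people")

    // Subsequent queries scan only the required columns of the cached data.
    spark.sql("SELECT name FROM people WHERE age > 40").show()

    // In-memory caching can also be tuned via configuration,
    // as the surrounding section of the guide describes.
    spark.conf.set("spark.sql.inMemoryColumnarStorage.compressed", "true")

    // Remove the table from memory when done.
    spark.catalog.uncacheTable("people")

    spark.stop()
  }
}
```

The old `spark.cacheTable(...)` spelling in the guide dated from when these methods hung off `SQLContext`; in Spark 2.x they are reachable through the session only via the `Catalog` interface, which is what the corrected text reflects.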