
I have the following Spark SQL test query:

Seq("france").toDF.createOrReplaceTempView("countries")
SELECT CASE WHEN country = 'italy' THEN 'Italy' 
    ELSE ( CASE WHEN country IN (FROM countries) THEN upperCase(country) ELSE country END ) 
    END AS country FROM users

which throws the following error:

Exception in thread "main" org.apache.spark.sql.AnalysisException: 
    IN/EXISTS predicate sub-queries can only be used in a Filter

The following part of the query, CASE WHEN country IN (FROM countries), is the reason for that.

Is there any workaround in Spark SQL to emulate country IN (FROM countries) in the select conditions? I am interested in a pure SQL implementation, not an implementation via the API.

import sparkSession.implicits._
Seq("france").toDF("country").createOrReplaceTempView("countries")
Seq(("user1", "france"), ("user2", "italy"), ("user2", "usa"))
  .toDF("user", "country").createOrReplaceTempView("users")
val query =
  """
    |SELECT
    |  CASE
    |    WHEN u.country = 'italy' THEN 'Italy'
    |    ELSE (
    |      CASE
    |        WHEN u.country = c.country THEN upper(u.country)
    |        ELSE u.country
    |      END
    |    ) END AS country
    |FROM users u
    |LEFT JOIN countries c
    |  ON u.country = c.country
  """.stripMargin
sparkSession.sql(query).show()

Result:

+-------+
|country|
+-------+
| FRANCE|
|  Italy|
|    usa|
+-------+

The reason you can use the IN/EXISTS SQL operators only in predicates is this: logic in projections (CASE-WHEN in our case) is evaluated for every row in the data set returned by the selection. With this in mind, it is not a good idea to run the equivalent of CASE WHEN country IN (SELECT * FROM countries) for each row of the users table, so SQL prevents this at the language level (in the SQL parser engine).
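For contrast, here is a minimal sketch, using the same users and countries views defined above, showing that the very same IN subquery is accepted once it appears in a WHERE clause (a Filter) rather than in a projection:

// The IN subquery is legal here because it sits in a Filter, not a projection.
sparkSession.sql(
  """
    |SELECT u.country
    |FROM users u
    |WHERE u.country IN (SELECT country FROM countries)
  """.stripMargin).show()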

Is there any way to operate on each row? That's exactly what I am trying to do with a decision table whose rules span multiple rows and contain arrays of values. If column A has the rules, then I use CASE WHEN to check column B for 'in' / 'not in': CASE WHEN column B = 'in' THEN fieldToMap IN (exploded column A), CASE WHEN column B = 'not in' THEN fieldToMap NOT IN (exploded column A). I nest this inside a sum that counts the matches, which determines the final value for that row. All the SQL works outside the CASE WHEN block, but I don't know how to go row by row without CASE WHEN. – Blaisem Jan 29, 2022 at 16:06

Could you provide an example, please? CASE-WHEN operates exactly at row level. An alternative approach might be the declarative DataFrame API and/or a custom UDF. – morsik Jan 31, 2022 at 6:17
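As a hedged sketch of the custom-UDF alternative mentioned in the last comment (the set contents and names here are assumptions for illustration, not part of either answer):

import org.apache.spark.sql.functions.{col, udf, upper, when}

// Assumed contents of the countries view, hardcoded for illustration.
val knownCountries = Set("france")

// UDF evaluated once per row: a plain membership check against a small set.
val isKnownCountry = udf((c: String) => c != null && knownCountries.contains(c))

val mapped = sparkSession.table("users")
  .withColumn("country",
    when(col("country") === "italy", "Italy")
      .when(isKnownCountry(col("country")), upper(col("country")))
      .otherwise(col("country")))
mapped.show()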

Alternatively, you can use the when and isin functions (from spark.sql.functions):

import org.apache.spark.sql.functions.{col, upper, when}

val users = Seq(("1", "france"), ("2", "Italy"), ("3", "italy")).toDF("userId", "country")
val countriesList = Seq("france", "italy", "germany").toList
val result = users.withColumn("country",
  when(col("country") === "italy", "Italy")
    .when(col("country").isin(countriesList: _*), upper(col("country")))
    .otherwise(col("country")))
result.show()

Result:

+------+-------+
|userId|country|
+------+-------+
|     1| FRANCE|
|     2|  Italy|
|     3|  Italy|
+------+-------+
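Note that countriesList above is hardcoded. If the allowed values actually live in the countries temp view, one possible sketch (an assumption on my part, not part of the original answer) is to collect them to the driver first, which is only sensible for small lookup tables:

import sparkSession.implicits._

// Sketch: derive the list from the countries view instead of hardcoding it.
// collect() pulls the values to the driver, so keep this to small lookup tables.
val countriesList = sparkSession.table("countries")
  .select("country")
  .as[String]
  .collect()
  .toList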
        
