Append Output Mode Not Supported When There Are Streaming Aggregations On Streaming DataFrames/DataSets Without Watermark (Join Inner)
I want to join two streams, but I get the following error and I don't know how to fix it:

Append output mode not supported when there are streaming aggregations on streaming DataFrames/DataSets without watermark;;
Join Inner
from pyspark.sql.functions import col, expr, max, unix_timestamp
from pyspark.sql.types import TimestampType

df_stream = spark.readStream \
    .schema(schema_clicks) \
    .option("ignoreChanges", True) \
    .option("header", True) \
    .format("csv") \
    .load("s3://mybucket/*.csv")

display(df_stream.select("SendID", "EventType", "EventDate"))
I want to join df1 with df2:
df1 = df_stream \
    .withColumn('timestamp', unix_timestamp(col('EventDate'), "MM/dd/yyyy hh:mm:ss aa").cast(TimestampType())) \
    .select(col("SendID"), col("timestamp"), col("EventType")) \
    .withColumnRenamed("SendID", "SendID_update") \
    .withColumnRenamed("timestamp", "timestamp_update") \
    .withWatermark("timestamp_update", "1 minutes")

df2 = df_stream \
    .withColumn('timestamp', unix_timestamp(col('EventDate'), "MM/dd/yyyy hh:mm:ss aa").cast(TimestampType())) \
    .withWatermark("timestamp", "1 minutes") \
    .groupBy(col("SendID")) \
    .agg(max(col('timestamp')).alias("timestamp")) \
    .orderBy('timestamp', ascending=False)
join = df2.alias("A").join(df1.alias("B"), expr(
"A.SendID = B.SendID_update" +
" AND " +
"B.timestamp_update >= A.timestamp " +
" AND " +
"B.timestamp_update <= A.timestamp + interval 1 hour"))
Finally, when I write the result in append mode:
join \
    .writeStream \
    .outputMode("Append") \
    .option("checkpointLocation", "s3://checkpointjoin_delta") \
    .format("delta") \
    .table("test_join")
I get the error shown above:
AnalysisException                          Traceback (most recent call last)
<command> in <module>()
----> 1 join.writeStream.outputMode("Append").option("checkpointLocation", "s3://checkpointjoin_delta").format("delta").table("test_join")

/databricks/spark/python/pyspark/sql/streaming.py in table(self, tableName)
   1137         """
   1138         if isinstance(tableName, basestring):
-> 1139             return self._sq(self._jwrite.table(tableName))
   1140         else:
   1141             raise TypeError("tableName can be only a single string")

/databricks/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258
   1259         for temp_arg in temp_args:

/databricks/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
     67                 e.java_exception.getStackTrace()))
Answer
The problem is the .groupBy: in append mode, a streaming aggregation must include the watermarked event-time column among its grouping keys, so the timestamp has to be added to the groupBy. For example:
# Group on the watermarked "timestamp" column as well; the aggregate is renamed
# so it does not collide with the grouping column of the same name. The orderBy
# is dropped because sorting a streaming DataFrame is only supported in
# complete output mode.
df2 = df_stream \
    .withColumn('timestamp', unix_timestamp(col('EventDate'), "MM/dd/yyyy hh:mm:ss aa").cast(TimestampType())) \
    .withWatermark("timestamp", "1 minutes") \
    .groupBy(col("SendID"), col("timestamp")) \
    .agg(max(col('timestamp')).alias("max_timestamp"))
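For reference, an alternative that also satisfies append mode (my own sketch, not part of the original answer; the 1-minute window duration is an assumption) is to aggregate over an event-time window on the watermarked column:

from pyspark.sql import functions as F

# Sketch: group by SendID and a 1-minute event-time window on the watermarked
# column; each group is finalized and emitted in append mode once the watermark
# passes the end of its window.
df2_windowed = df_stream \
    .withColumn("timestamp", F.unix_timestamp(F.col("EventDate"), "MM/dd/yyyy hh:mm:ss aa").cast(TimestampType())) \
    .withWatermark("timestamp", "1 minutes") \
    .groupBy(F.col("SendID"), F.window(F.col("timestamp"), "1 minute")) \
    .agg(F.max("timestamp").alias("max_timestamp"))

In both cases the aggregation is keyed to the watermarked column, which is what append mode requires.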