dongjoon-hyun commented on code in PR #46349: URL: https://github.com/apache/spark/pull/46349#discussion_r1588525182
docs/configuration.md:

```diff
@@ -3670,14 +3670,17 @@ Note: When running Spark on YARN in `cluster` mode, environment variables need t

 # Configuring Logging

 Spark uses [log4j](http://logging.apache.org/log4j/) for logging. You can configure it by adding a
-`log4j2.properties` file in the `conf` directory. One way to start is to copy the existing
-`log4j2.properties.template` located there.
+`log4j2.properties` file in the `conf` directory. One way to start is to copy the existing templates `log4j2.properties.template` or `log4j2.properties.pattern-layout-template` located there.

-By default, Spark adds 1 record to the MDC (Mapped Diagnostic Context): `mdc.taskName`, which shows something
-like `task 1.0 in stage 0.0`. You can add `%X{mdc.taskName}` to your patternLayout in
-order to print it in the logs.
+## Structured Logging
+Starting from version 4.0.0, Spark has adopted the [JSON Template Layout](https://logging.apache.org/log4j/2.x/manual/json-template-layout.html) for logging, which outputs logs in JSON format. This format facilitates querying logs using Spark SQL with the JSON data source. Additionally, the logs include all Mapped Diagnostic Context (MDC) information for search and debugging purposes.
+
+To implement structured logging, start with the `log4j2.properties.template` file.
+
+## Plain Text Logging
```

Review Comment:

> Thank you for providing this section together.
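Since the docs change above says JSON-formatted logs can be queried as data, a minimal sketch may help illustrate the idea. The log line below is illustrative only: the field names (`ts`, `level`, `msg`, `mdc`) are assumptions for the example, not the exact schema Spark 4.0.0 emits.

```python
import json

# A hypothetical JSON-formatted log line of the kind a JSON template layout
# produces. Field names here are illustrative, not Spark's exact schema.
log_line = (
    '{"ts": "2024-05-02T10:15:30.123Z", "level": "INFO", '
    '"msg": "Finished task", "mdc": {"taskName": "task 1.0 in stage 0.0"}}'
)

# Because each line is JSON, it can be parsed and filtered on fields
# instead of being grepped as free-form text.
record = json.loads(log_line)

if record["level"] == "INFO":
    task = record["mdc"].get("taskName")
    print(f'{record["ts"]} [{task}] {record["msg"]}')
```

At scale, the same lines could be loaded with Spark's JSON data source (`spark.read.json(...)`) and queried with Spark SQL, which is the workflow the new docs section describes.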