Let's look at the log4j configuration for the Spark history server. The target version is Spark 3.5.2.
The Spark history server uses a log4j2 configuration. The following example configuration writes log output to both the console (console) and a daily rolling file (DRFA).
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Set everything to be logged to the console and a daily rolling file
rootLogger.level = info
rootLogger.appenderRefs = stdout, fileout
rootLogger.appenderRef.stdout.ref = console
rootLogger.appenderRef.fileout.ref = DRFA
# In the pattern layout configuration below, we specify an explicit `%ex` conversion
# pattern for logging Throwables. If this was omitted, then (by default) Log4J would
# implicitly add an `%xEx` conversion pattern which logs stacktraces with additional
# class packaging information. That extra information can sometimes add a substantial
# performance overhead, so we disable it in our default logging config.
# For more information, see SPARK-39361.
appender.console.type = Console
appender.console.name = console
appender.console.target = SYSTEM_ERR
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n%ex
appender.DRFA.type = RollingRandomAccessFile
appender.DRFA.name = DRFA
appender.DRFA.fileName = ${sys:spark.log.dir}/${sys:spark.log.file}
appender.DRFA.filePattern = ${sys:spark.log.dir}/${sys:spark.log.file}.%d{yyyy-MM-dd}
appender.DRFA.layout.type = PatternLayout
appender.DRFA.layout.pattern = %d{ISO8601} %5p [%t] %c{2}: %m%n
appender.DRFA.policies.type = Policies
appender.DRFA.policies.time.type = TimeBasedTriggeringPolicy
appender.DRFA.policies.time.interval = 1
appender.DRFA.policies.time.modulate = true
appender.DRFA.strategy.type = DefaultRolloverStrategy
appender.DRFA.strategy.max = 30
# Set the default spark-shell/spark-sql log level to WARN. When running the
# spark-shell/spark-sql, the log level for these classes is used to overwrite
# the root logger's log level, so that the user can have different defaults
# for the shell and regular Spark apps.
logger.repl.name = org.apache.spark.repl.Main
logger.repl.level = warn
logger.thriftserver.name = org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver
logger.thriftserver.level = warn
# Settings to quiet third party logs that are too verbose
logger.jetty1.name = org.sparkproject.jetty
logger.jetty1.level = info
logger.jetty2.name = org.sparkproject.jetty.util.component.AbstractLifeCycle
logger.jetty2.level = info
logger.jetty3.name = org.sparkproject.jetty.server.handler.ConnectHandler
logger.jetty3.level = debug
logger.jersey.name = com.sun.jersey
logger.jersey.level = info
logger.hadoop.name = org.apache.hadoop
logger.hadoop.level = info
logger.jetty4.name = org.apache.httpcomponents
logger.jetty4.level = info
logger.replexprTyper.name = org.apache.spark.repl.SparkIMain$exprTyper
logger.replexprTyper.level = info
logger.replSparkILoopInterpreter.name = org.apache.spark.repl.SparkILoop$SparkILoopInterpreter
logger.replSparkILoopInterpreter.level = info
logger.parquet1.name = org.apache.parquet
logger.parquet1.level = error
logger.parquet2.name = parquet
logger.parquet2.level = error
# SPARK-9183: Settings to avoid annoying messages when looking up nonexistent UDFs in SparkSQL with Hive support
logger.RetryingHMSHandler.name = org.apache.hadoop.hive.metastore.RetryingHMSHandler
logger.RetryingHMSHandler.level = fatal
logger.FunctionRegistry.name = org.apache.hadoop.hive.ql.exec.FunctionRegistry
logger.FunctionRegistry.level = error
# For deploying Spark ThriftServer
# SPARK-34128: Suppress undesirable TTransportException warnings involved in THRIFT-4805
appender.console.filter.1.type = RegexFilter
appender.console.filter.1.regex = .*Thrift error occurred during processing of message.*
appender.console.filter.1.onMatch = deny
appender.console.filter.1.onMismatch = neutral
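Note that the DRFA appender resolves its output path from the Java system properties spark.log.dir and spark.log.file through the ${sys:...} lookups, and these properties are not set automatically. A minimal sketch of supplying them, assuming illustrative paths and a daemon started with sbin/start-history-server.sh (SPARK_HISTORY_OPTS is appended to the history server's JVM options):
# conf/spark-env.sh -- the paths below are assumptions for illustration
export SPARK_HISTORY_OPTS="-Dspark.log.dir=/var/log/spark -Dspark.log.file=spark-history-server.log"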
Appender configuration
An appender defines where the log events produced by a logger are written.
# Root logger configuration: stdout references console, fileout references DRFA
rootLogger.level = info
rootLogger.appenderRefs = stdout, fileout
rootLogger.appenderRef.stdout.ref = console
rootLogger.appenderRef.fileout.ref = DRFA
# Example of a console-output appender
appender.console.type = Console
appender.console.name = console
# Example of a file-output appender with daily rolling
appender.DRFA.type = RollingRandomAccessFile
appender.DRFA.name = DRFA
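The DRFA appender rolls the log once a day (TimeBasedTriggeringPolicy with interval 1), and DefaultRolloverStrategy keeps at most 30 rolled files. If a single day's log can grow too large, a size-based trigger can be combined with the daily one. A minimal sketch; the 100MB threshold and gzip compression are my own illustrative assumptions, not part of the Spark template:
# %i must appear in the file pattern for size-based rolling; the .gz suffix enables compression
appender.DRFA.filePattern = ${sys:spark.log.dir}/${sys:spark.log.file}.%d{yyyy-MM-dd}.%i.gz
appender.DRFA.policies.type = Policies
appender.DRFA.policies.time.type = TimeBasedTriggeringPolicy
appender.DRFA.policies.time.interval = 1
appender.DRFA.policies.time.modulate = true
appender.DRFA.policies.size.type = SizeBasedTriggeringPolicy
appender.DRFA.policies.size.size = 100MB
appender.DRFA.strategy.type = DefaultRolloverStrategy
appender.DRFA.strategy.max = 30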
Logger configuration
A logger entry overrides the log level for classes whose package name matches the configured name. Names match hierarchically, so a setting on org.sparkproject.jetty also applies to its sub-packages unless a more specific logger overrides it, as shown in the sketch after this excerpt.
logger.jetty1.name = org.sparkproject.jetty
logger.jetty1.level = info
logger.jetty2.name = org.sparkproject.jetty.util.component.AbstractLifeCycle
logger.jetty2.level = info
logger.jetty3.name = org.sparkproject.jetty.server.handler.ConnectHandler
logger.jetty3.level = debug
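For example, to troubleshoot the history server itself you could raise its own package to debug while the root logger stays at info. A minimal sketch; the logger key historyserver is an arbitrary label chosen for illustration:
# the key "historyserver" is arbitrary; the name must be the package prefix to match
logger.historyserver.name = org.apache.spark.deploy.history
logger.historyserver.level = debug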