
Secure Java Logging with Logback

Deploying an application into a secure environment adds restrictions on logging and log management. The OWASP community gives some useful recommendations.

OWASP Security Testing Guide Recommendations #

The OWASP Security Testing Guide defines a number of questions to be answered when reviewing application logging configuration (see OTG-CONFIG-002):

1. Do the logs contain sensitive information? #

Log files should not contain any sensitive data. In any case, log file access must be restricted:

Event log information should never be visible to end users. Even web administrators should not be able to see such logs since it breaks separation of duty controls. Ensure that any access control schema that is used to protect access to raw logs and any applications providing capabilities to view or search the logs is not linked with access control schemas for other application user roles. Neither should any log data be viewable by unauthenticated users.

The consequence is that you should not use the same authentication mechanism for accessing the application and for accessing the log files.

Also, in some jurisdictions, storing some sensitive information in log files, such as personal data, might oblige the enterprise to apply the data protection laws that they would apply to their back-end databases to log files too. And failure to do so, even unknowingly, might carry penalties under the data protection laws that apply.

Update: Things got even worse after the GDPR came into force in the EU.

It’s not easy to make sure that no sensitive information is printed to the log.

When using Logback it is possible to configure a regexp replace pattern to wipe certain data from log files as they are written, e.g. mask passwords.

1.1. Mask sensitive data with logging pattern #

To mask credit card number (PAN) you may use the following expression (logback.xml):

<pattern>%-5level - %replace(%msg){'\d{12,19}', 'XXXX'}%n</pattern>

This expression will replace every number with 12 to 19 digits with XXXX, so some non-PAN data will be masked as well.
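For context, such a replace pattern sits inside an encoder of a complete appender configuration. A minimal sketch (the appender name and file path are illustrative):

```xml
<configuration>
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <!-- illustrative log file path -->
    <file>/var/log/java/app.log</file>
    <encoder>
      <!-- %replace masks any 12-19 digit sequence in the message -->
      <pattern>%-5level - %replace(%msg){'\d{12,19}', 'XXXX'}%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="FILE"/>
  </root>
</configuration>
```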

Another pattern variation matches only 16-digit card numbers (PANs) with a restricted first digit and supports spaces or dashes between digit groups:

<pattern>%-5level - %replace(%msg){'[1-6][0-9]{3}[\s-]?[0-9]{4}[\s-]?[0-9]{4}[\s-]?[0-9]{4}|5[1-5][0-9]{2}[\s-]?[0-9]{4}[\s-]?[0-9]{4}[\s-]?[0-9]{4}', 'XXXX'}%n</pattern>

Masking PANs with Logback is a last resort: it ensures the data is masked, at the cost of false-positive hits. It is preferable to mask the data in the application code, before it is written to the log.

You may read about secure coding practices in my next post.

1.2. Use “owasp-security-logging” library for masking sensitive data #

Another option is to use the owasp-security-logging library from the OWASP Security Logging Project.

Add dependency:
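A sketch of the Maven dependency; the coordinates and version below are my assumption, so verify them against the project page:

```xml
<!-- groupId/artifactId/version assumed; check the OWASP Security Logging project page -->
<dependency>
  <groupId>org.owasp</groupId>
  <artifactId>security-logging-logback</artifactId>
  <version>1.1.7</version>
</dependency>
```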


In Java source code, add the CONFIDENTIAL marker to log statements that could contain confidential information:

LOGGER.info("userid={}", userid);
LOGGER.info(SecurityMarkers.CONFIDENTIAL, "password={}", password);

The intent is to produce the following output in the log:

2014-12-16 13:54:48,860 [main] INFO - userid=joebob
2014-12-16 13:54:48,860 [main] [CONFIDENTIAL] INFO - password=***********

See the project wiki.

2. Are logs stored in a dedicated server? #

It is advised to keep log files on a separate server, to prevent an attacker from removing or cleaning log files, and to ease centralized log file analysis.

Logback offers SocketAppender with SimpleSocketServer and SSLSocketAppender with SimpleSSLSocketServer for logging to a remote server instance.
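A minimal SSLSocketAppender configuration might look like this (the remote host and port are illustrative):

```xml
<appender name="SOCKET" class="ch.qos.logback.classic.net.SSLSocketAppender">
  <!-- illustrative host and port of the remote SimpleSSLSocketServer -->
  <remoteHost>logserver.example.com</remoteHost>
  <port>6000</port>
  <reconnectionDelay>10000</reconnectionDelay>
</appender>
```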

A second option is DBAppender, which writes logs to a database, thus keeping them apart from the application instance.

Another option is to use SyslogAppender and delegate logging to the system syslog service. But it is not secure enough: if the system is hacked, the attacker may reconfigure syslog not to send any events to the remote log server.

When using a Logstash server, you may send events via Logstash Logback Encoder. It provides a handful of appenders.

Another option is to use the Logstash encoder to write logs in JSON format to a file and have a Fluentd collector transfer them to an Elasticsearch server.

Also, you may consider using logback-audit, which provides logging via a dedicated log server or directly to a database.

3. Can log usage generate a Denial of Service condition? #

If exceptions occur on production due to invalid data provided in requests, they may be printed to the logs and cause high IO consumption. This may lead to server unavailability.

Log Asynchronously #

Logback offers some protection against logging overhead. The first option is using AsyncAppender to queue log events and spread the load. Set queueSize wisely: the default value of 256 is not enough.

If you’re fine with losing some less important details, use AsyncAppender with discardingThreshold: when the event queue has only 20% capacity remaining, events of fine-grained levels (TRACE, DEBUG, INFO) will be dropped.
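As a sketch, an AsyncAppender wrapping another appender could be configured like this (the queue size is illustrative, and FILE refers to an appender defined elsewhere in logback.xml):

```xml
<appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
  <!-- the default queueSize of 256 is usually too small -->
  <queueSize>8192</queueSize>
  <!-- by default, fine-grained events are dropped when less than 20%
       of the queue remains; set the threshold to 0 to never discard -->
  <discardingThreshold>0</discardingThreshold>
  <appender-ref ref="FILE"/>
</appender>
```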

Logstash Logback Encoder provides AsyncDisruptorAppender, which is similar to Logback’s AsyncAppender except that an LMAX Disruptor RingBuffer is used as the queuing mechanism instead of a BlockingQueue, providing higher throughput and less GC overhead. These async appenders can delegate to any other underlying Logback appender, including the standard file appenders. Set the LMAX RingBuffer size wisely: too low a value may block the entire application.

Think twice before enabling async logging! If you must ensure that a message has been successfully written before the application continues, you should not log asynchronously.

Use Appropriate Logging Levels #

Specifying inappropriate log levels in the application and appenders may cause excessive load on a production server. You’re not going to debug on production, right? Then why print valuable data at DEBUG level? In the production configuration, the default appender logging level should be INFO. If you always need some information, use the INFO level in the application and use the database to save data like raw requests. Debugging should be enabled on production only in critical situations.

4. How are the log files rotated? Are logs kept for the sufficient time? #

Log files should be rotated at least daily. A reasonable log history depth is 6 months. Some regulations may require keeping log files longer for the purpose of investigations.
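Daily rotation with roughly a six-month history can be expressed with Logback’s time-based rolling policy (file paths are illustrative):

```xml
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>/var/log/java/app.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <!-- daily rollover, driven by the %d date pattern -->
    <fileNamePattern>/var/log/java/app.%d{yyyy-MM-dd}.log</fileNamePattern>
    <!-- keep roughly six months of daily files -->
    <maxHistory>180</maxHistory>
  </rollingPolicy>
  <encoder>
    <pattern>%d %-5level - %msg%n</pattern>
  </encoder>
</appender>
```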

Some servers might rotate logs when they reach a given size. If this happens, it must be ensured that an attacker cannot force logs to rotate in order to hide his tracks.

5. How are logs reviewed? Can administrators use these reviews to detect targeted attacks? #

Log files can be used for attack detection. For example, the first phases of a SQL injection attack may produce 50x (server error) or 40x (request error) messages.

Log statistics or analysis should not be generated, nor stored, in the same server that produces the logs. Otherwise, an attacker might, through a web server vulnerability or improper configuration, gain access to them and retrieve similar information as would be disclosed by log files themselves.

6. How are log backups preserved? #

Make Log Files Append-only #

Another type of attack is modifying existing log files in order to hide the traces of an attack. Use Mandatory Access Controls on the log file to make it append-only for users of the app, to mitigate the possibility of tampering with or removing existing messages.

The simplest way to make files append-only is probably this:

sudo chattr +a *.log



Also, don’t forget to set default file attributes for the log directory:

# make root the owner and java the group
sudo chown root:java /var/log/java
# set the setuid/setgid bits; setgid makes new files inherit the java group
sudo chmod ug+s /var/log/java
# default ACL: give the group write access
sudo setfacl -d -m g::w /var/log/java
# default ACL: give others no access
sudo setfacl -d -m o::--- /var/log/java

Make Backups #

You definitely need to back up the logs, as well as other application data.

You could additionally take periodic backups of the log file to ensure that nothing has been changed or removed between backups. This assumes that access to your backups is also controlled – a third party who can tamper with your backups can tamper with your log files in an undetectable fashion.

7. Is the data being logged validated (min/max length, chars etc.) prior to being logged? #

Be careful what you are writing to logs. Always ask yourself: “Is it possible to produce big or huge logging output?”

Be careful when implementing the toString() method. Include only the minimum necessary information in it.

Further steps: Protect your logging configuration #

Logback configuration can be included inside the application (jar file) or located in an external file (logback.xml). An attacker may try to modify or remove logback.xml. In order to prevent this attack:

  1. logback.xml must not be modifiable by the application user.
  2. The file should be monitored by an intrusion detection system.
  3. Logback’s auto-reload feature must not be enabled, to prevent replacing the configuration of the running Java application.

Although auto-reload is a very attractive feature of Logback, it is reasonable to sacrifice it in favor of security.

*[GDPR]: The EU General Data Protection Regulation