Log Aggregation Guide

Log aggregation involves creating a single pool of log messages from multiple sources. Although this task sounds straightforward, there is a catch. All of the IT systems that you use in your business produce log messages that range from debugging hints to security alerts. However, different systems, and different types of messages, use different layouts.

The volume of log messages that the typical business now generates means that reading them manually is not a realistic option. Getting a search program to look through messages and pick out important information is a worthwhile exercise, but you need to know what to look for, which requires a pattern to match and a context. The content of each field in a log message is determined by the message's format. Some fields might contain codes that need to be looked up against a reference, and the field separator might differ between standards.
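
As a very basic illustration of that kind of targeted search, the Python sketch below scans a raw log file for lines that match a pattern. The file name and the wording of the pattern are assumptions made purely for the example.

    import re

    # Hypothetical example: scan a raw log file for failed-login alerts.
    # The file name and the message wording are assumptions for illustration.
    pattern = re.compile(r"authentication failure|failed login", re.IGNORECASE)

    with open("example.log", "r", encoding="utf-8", errors="replace") as log_file:
        for line_number, line in enumerate(log_file, start=1):
            if pattern.search(line):
                print(f"{line_number}: {line.rstrip()}")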

Log parsing

A software house can make up its own standard for log message formats and then explain the layout in a manual. A nice feature would be an accompanying data viewer that automatically splits the long string of each log message into separate fields. However, if every piece of software had its own standard for log message layout, a systems administrator would need many, many different data viewers to get meaningful information out of those logs.

Dividing a string into sections, which is what splitting out the fields in a log message involves, is called "parsing". Any IT system that produces output needs to follow a formatting system. At its most basic, that could just be an end-of-line character to mark the end of each record. Within records there are fields, so the system also needs a field separator. The programming team that creates a messaging format can make up its own standard for those separators.
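
To make the idea concrete, here is a minimal Python sketch that parses one hypothetical tab-separated log record; the field order and the separator are invented for the illustration.

    # A hypothetical tab-separated log record, used only to illustrate parsing.
    raw_record = "2024-05-01T10:32:07Z\tWEB01\tERROR\t4031\tDisk quota exceeded"

    # The record separator is the end-of-line character; the field separator is a tab.
    fields = raw_record.rstrip("\n").split("\t")

    timestamp, host, severity, code, message = fields
    print(severity, message)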

To parse a log message into its component fields, the log aggregator needs to know what the field and record separators are. So, the creators of log aggregation systems have to equip their tools with a method to detect the format in use and a library of the related separators in order to parse messages from different sources. If the log message follows an undocumented proprietary standard, the aggregator can only guess; the most common separators are the comma and the tab.
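
One way such a guess could be implemented is sketched below, using Python's csv.Sniffer to decide whether a sample of records is comma- or tab-delimited. The sample records are invented; a real aggregator would take the sample from the log source itself.

    import csv

    # Invented sample records; in reality this would be a sample read from the log source.
    sample = (
        "2024-05-01 10:32:07,WEB01,ERROR,4031,Disk quota exceeded\n"
        "2024-05-01 10:32:09,WEB01,INFO,2000,Session started\n"
    )

    # csv.Sniffer inspects the sample and guesses which delimiter is in use.
    dialect = csv.Sniffer().sniff(sample, delimiters=",\t")
    print("Detected field separator:", repr(dialect.delimiter))

    # The detected dialect can then be used to parse the records.
    for row in csv.reader(sample.splitlines(), dialect):
        print(row)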

Once log messages have been parsed, scanning them becomes much simpler. It is easy to sort and filter records on a specific field, looking for specific values; you could even set up a macro in a spreadsheet to do it for you.
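
The same sorting and filtering can be done in a few lines of Python once the records have been parsed. The field layout below is the invented one used in the earlier sketches.

    # Parsed records, using the invented field layout from the earlier examples.
    records = [
        {"timestamp": "2024-05-01T10:32:07Z", "host": "WEB01", "severity": "ERROR"},
        {"timestamp": "2024-05-01T10:32:09Z", "host": "WEB01", "severity": "INFO"},
        {"timestamp": "2024-05-01T10:31:55Z", "host": "DB01", "severity": "ERROR"},
    ]

    # Filter on a specific field value, then sort the matches by timestamp.
    errors = [r for r in records if r["severity"] == "ERROR"]
    for record in sorted(errors, key=lambda r: r["timestamp"]):
        print(record["timestamp"], record["host"])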

So, parsing log messages is the start of analyzing them. Once the fields are separated, they need to be identified with labels and field formats. This process associates a meaning with each field in a log message. If the parser also has access to reference databases or files, it can add descriptive columns that explain the meanings of any codes contained in the messages.
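
Here is a hedged sketch of that enrichment step: a small dictionary stands in for a reference database and adds a descriptive column for a severity code. The codes and labels are illustrative rather than taken from any particular standard.

    # A stand-in for a reference database: maps illustrative severity codes to descriptions.
    severity_lookup = {
        "0": "Emergency",
        "3": "Error",
        "6": "Informational",
    }

    record = {"timestamp": "2024-05-01T10:32:07Z", "severity_code": "3",
              "message": "Disk quota exceeded"}

    # Add a descriptive column that explains the meaning of the code.
    record["severity_label"] = severity_lookup.get(record["severity_code"], "Unknown")
    print(record)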

Other actions performed by the parser include checks on the valid values for each field. This extends the parser's role into record validation and catches formatting errors before data gets added to the common pool of records. Records that don't comply with a common standard of values make consolidation and aggregation impossible.
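
A sketch of what that validation might look like is shown below, with a couple of invented rules that reject records before they reach the common pool.

    VALID_SEVERITIES = {"DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"}

    def is_valid(record):
        """Apply a few invented validation rules before a record joins the common pool."""
        if record.get("severity") not in VALID_SEVERITIES:
            return False
        # A timestamp must be present and non-empty.
        if not record.get("timestamp"):
            return False
        return True

    incoming = [
        {"timestamp": "2024-05-01T10:32:07Z", "severity": "ERROR", "message": "Disk quota exceeded"},
        {"timestamp": "", "severity": "NOISE", "message": "malformed record"},
    ]

    accepted = [r for r in incoming if is_valid(r)]
    print(len(accepted), "of", len(incoming), "records accepted")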

Log aggregation

In IT, aggregation usually means summarizing data. However, in terms of logging, there is a bit more to the term. The fundamental action in log aggregation is the standardization of record formats. The process is also known as consolidation.

If you can identify each field in a log message, you can list all records in a table, which you could store in a file or a database. If you have each column in a table clearly defined, you can output fields in whatever order you want.

Log aggregation relies on the fact that a log message always contains a predictable set of fields: timestamp, message code, severity, source device, source program, and program line are all obvious fields, and there isn't much else of meaning that anyone could want from a log message.

One log format might start with a timestamp, then have a severity code, then an error code, and then a program line number. Another log standard might output the severity code, a message code, the source system identifier, the software package name, the software package version, and then a timestamp. If messages from these two standards were read into a single table, the records would be a mess. Some log standards even specify different layouts and different formats for fields, such as the timestamp, according to the type of message being generated.

Log aggregation reorders the fields in log messages so that they all have the same layout. There is no universal standard for log aggregation message formats; each log aggregator uses its own layout. The most important factor is that the fields are labeled so that they can be loaded into an associated viewer.
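
As a minimal sketch of that consolidation step, the snippet below maps two invented source layouts onto one common, labeled layout; the field names are assumptions for the example rather than any aggregator's real format.

    # Two invented source layouts, mapped onto one common, labeled layout.
    def normalize_format_a(fields):
        # Format A (invented): timestamp, severity, error code, program line
        return {"timestamp": fields[0], "severity": fields[1],
                "code": fields[2], "line": fields[3], "source_format": "A"}

    def normalize_format_b(fields):
        # Format B (invented): severity, message code, source system, package, version, timestamp
        return {"timestamp": fields[5], "severity": fields[0],
                "code": fields[1], "line": None, "source_format": "B"}

    record_a = normalize_format_a(["2024-05-01T10:32:07Z", "ERROR", "4031", "212"])
    record_b = normalize_format_b(["ERROR", "E77", "WEB01", "shopcart", "2.4",
                                   "2024-05-01T10:33:10Z"])

    # Both records now share the same layout and can sit in one table.
    for record in (record_a, record_b):
        print(record)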

The log server itself can add values, such as the message source and the logging standard, to a log record during the aggregation process. This is particularly the case with log management systems that provide both the local collector/client and the log server. Some systems, such as the CrowdStrike Falcon Prevent endpoint protection service, can also generate their own messages to add to the pool of logs gathered from other applications.

Different log management systems can exchange data, with the search and display systems produced by one company able to read the aggregated messages formed by a log aggregator produced by another software house. However, this interoperability is not universal and has to be organized by two companies that agree to exchange formats. Although the formats of log messages are usually published, those used by log aggregators are not.

Log message standards

The layout of a log message needs to be defined, and the standard laid down in that definition can then be reused by developers. The standard might be a proprietary one, used by a software producer over and over again for its different products, or it could be an open standard, maintained by a non-profit organization, such as Mozilla or Apache.

Given that log messages need to be interpreted by the buyers of the software that generates them, log message standards are not kept secret – they are not just for in-house use. So, publishing a logging standard is necessary. If other businesses use that same standard for their software products, then the importance of that standard grows. Log message standards have no commercial value, so there is no downside to encouraging their wider use.

The two main log message standards in use today are Syslog and the Windows Event Log. The Windows Event Log standard is a creation of Microsoft, while Syslog was created as part of the Sendmail project. Both systems have been around for a long time: Syslog dates back to the 1980s and the Windows Event Log was introduced in 1993.

Windows Event Log

Microsoft's Windows Event Log is a system that the company uses for logging in its software packages as well as in the Windows operating system. The benefit of the standard is that it is linked to a notifications process within Windows and provides a central store for all Microsoft-related issues. The service includes a message screen, called the Event Viewer, which is a console that provides search, filter, and sort facilities.

It is possible to save and forward Windows Events with rules set up within the Event Viewer. The tool also allows Event records to be read into the log message listing screen from a file or from other sources, such as cloud platforms. The facilities in the free tool are equal to those expected from paid log managers. However, the system doesn't allow for consolidation with logs in other formats.

Windows Event Log isn't limited to providing log messages from Microsoft products because the message generator is available to developers through an API. Third-party software houses can choose to use the Windows Event Log standard for their systems, and those messages will be stored along with Windows and Microsoft logs and can be filtered in the Event Viewer.
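
As a hedged illustration of reading from that store programmatically, the Python sketch below uses the third-party pywin32 package to pull recent records from the Application log. It assumes pywin32 is installed and, of course, only runs on Windows.

    import win32evtlog  # third-party pywin32 package; Windows only

    # Open the Application log on the local machine and read the most recent records.
    handle = win32evtlog.OpenEventLog("localhost", "Application")
    flags = win32evtlog.EVENTLOG_BACKWARDS_READ | win32evtlog.EVENTLOG_SEQUENTIAL_READ

    events = win32evtlog.ReadEventLog(handle, flags, 0)
    for event in events:
        print(event.TimeGenerated, event.SourceName, event.EventID)

    win32evtlog.CloseEventLog(handle)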

Syslog

Syslog was developed for BSD Unix by Eric Allman while he was working at the University of California at Berkeley. A long-standing contributor to Unix and Internet technology, Allman has been involved in many key developments, including the INGRES DBMS. He has also worked on many underlying standards for computing and communications, covering areas such as address standardization, email management, and IoT platforms. Syslog is probably his greatest achievement.

The Syslog standard is managed by the Internet Engineering Task Force (IETF) and is defined in an RFC, which is free for anyone to download. The first document defining Syslog, RFC 3164, entitled The BSD Syslog Protocol, was published in 2001, although the protocol had already been in use for a long time before that date. This definition has since been superseded by RFC 5424, entitled The Syslog Protocol, which the IETF published in 2009.
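
To give a feel for the RFC 5424 layout, the Python sketch below pulls the header fields out of a message modeled on the examples in the RFC. The regular expression is a simplification; production parsers handle many more edge cases, such as multi-element structured data.

    import re

    # A sample message modeled on the RFC 5424 examples.
    message = "<34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 - 'su root' failed on /dev/pts/8"

    # Header layout: <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID STRUCTURED-DATA MSG.
    # This simple pattern only copes with the "-" (empty) structured-data case.
    header = re.match(r"<(\d+)>(\d+) (\S+) (\S+) (\S+) (\S+) (\S+) (\S+) (.*)", message)
    pri, version, timestamp, hostname, app, procid, msgid, sdata, msg = header.groups()

    # The priority value encodes both the facility and the severity.
    facility, severity = int(pri) // 8, int(pri) % 8
    print(facility, severity, timestamp, hostname, app, msg)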

Thanks to its longstanding usage, Syslog is thought of as the de facto logging standard for Unix and Unix-like operating systems and the software that runs on them. That group of operating systems includes all of the Linux distros and macOS.

Apache logs

Apache HTTP Server is one of the most widely-used Web server systems in operation. This, along with other Apache tools, such as Ant, Cassandra, and Spark, uses a logging system called the Common Log Format (CLF).

Annoyingly, there are three branches of this standard: system messages, error messages, and a custom format. The first two of these are well known and can be recognized by most log servers and log managers. The way the custom format is declared also makes it possible for log servers to integrate that log message layout quickly.
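
For reference, a Common Log Format access record looks like the sample line below (taken from the Apache documentation), and the short Python sketch splits it into fields. The regular expression is a simplification that ignores escaped quotes and other corner cases.

    import re

    # The classic Common Log Format sample line used in the Apache documentation.
    line = '127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326'

    # Layout: host identity user [timestamp] "request" status size
    clf_pattern = re.compile(r'(\S+) (\S+) (\S+) \[([^\]]+)\] "([^"]*)" (\d{3}) (\S+)')

    host, ident, user, timestamp, request, status, size = clf_pattern.match(line).groups()
    print(host, timestamp, request, status, size)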

To make things even more complicated, some Apache systems give the developer a choice of log messaging standards. So, even those systems that have CLF as the default logging method, such as Ant, might use different standards.

Apache doesn't provide a native log viewer. You can access the logs at the operating system level, which is usually Linux, and either cat or move them. The logs are kept under the /var/log directory, and each Apache product has a separate subdirectory in that location to store its logs.

Log4j

Log4j is an Apache logging standard that is used by some Apache products and can also be integrated into third-party systems. This is a Java-based package and it outputs log messages in plain text, JSON, XML, or YAML. The output format can be altered by the end-user through a value contained in a configuration file that is checked each time an application using the Log4j standard starts up.

There are two versions of Log4j and Log4j 2 is not backward compatible, meaning that all systems based on versions 1.2 and 1.3 that are still in circulation need to be updated.

The Log4j standard has recently become headline news because of a flaw in the system that was exploited by hackers. The flaw lies in Log4j 2, which is a major problem because it cannot be removed simply by deprecating the version 1 editions. The exploit is known as Log4Shell and is believed to have existed since 2013, but it was only identified by cybersecurity researchers in November 2021.

The exploit enables hackers to deliver executable packages as plug-ins to the Log4j system. Plug-ins don’t need to be bound into the program at the point of development but can be added at any time during the lifetime of the application.

Fortunately, Apache has produced a patch to close down this vulnerability. However, there are still millions of unpatched implementations of Log4j in operation. The exploit is also present in Log4j usage by java.util.logging.

There are several alternative logging projects that can replace Log4j. These include Logback, SLF4J, and reload4j, which come from the original author of Log4j rather than from Apache.

Java logs

The Java programming system, produced by Oracle Corporation, includes a package for log creation, called java.util.logging. Messages can be produced in plain text or XML format. There is a standard layout for a Java log message, but the programmer can choose a different standard, such as tinylog, Log4j, or Logback, or create a custom layout. Apart from the custom format option, the logging layouts offered within the Java logging system are well known and can be recognized by most log aggregators.
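
From the aggregator's side, the Python sketch below pulls the main fields out of a single XML-formatted record. The sample is hand-written to resemble the java.util.logging XMLFormatter layout; real output wraps a series of such records in a log root element.

    import xml.etree.ElementTree as ET

    # A hand-written record resembling the java.util.logging XMLFormatter layout.
    sample_record = """
    <record>
      <date>2024-05-01T10:32:07.000Z</date>
      <logger>com.example.shopcart</logger>
      <level>SEVERE</level>
      <message>Disk quota exceeded</message>
    </record>
    """

    record = ET.fromstring(sample_record)
    fields = {child.tag: child.text for child in record}
    print(fields["date"], fields["level"], fields["message"])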

Some Apache projects use java.util.logging instead of the native Apache CLF – Apache Tomcat is an example.

Other log messaging standards

The sections above explain the most widely-used logging standards at the moment. However, the rapid shift in IT infrastructure strategies towards cloud platforms introduces new logging standards that need to be recognized by log aggregators. AWS provides Amazon CloudWatch Logs for its platform, and Azure provides Azure Monitor, which includes both a messaging standard and a data viewer. Users of Google Cloud Platform get the Cloud Logging system.

Serverless systems require new monitoring methods, known collectively as distributed tracing, and there are several standards in this field: OpenTelemetry, OpenTracing, and OpenCensus are three of these.

The value of a log aggregator lies in the list of different log formats that it can recognize. Therefore, some old, poorly supported log management systems that are infrequently updated are becoming less and less usable.

Log management tools

Log aggregation is rarely provided by standalone tools. The consolidation process is usually built into the log server functions of a log management system.

There are many log management tools available but what you need to look out for is a package that includes the following:

  • Log server
  • Associated log collectors and clients for installation on remote platforms
  • Log aggregator with the ability to recognize a long list of formats
  • Logfile manager that can rotate log files and create a meaningful directory structure
  • Data viewer that will show a live tail of log messages and load the contents of log files
  • Analytical tools
  • File integrity monitoring and backup services
  • Archiving and revival utilities

You can read our list of the Best Log Management & Analysis Tools, but if you haven’t got time to go to that comparison review, here is a selection of the tools that you could consider.

  1. ManageEngine EventLog Analyzer (FREE TRIAL) This on-premises package is a log server with added analytical tools. It will collect logs from diverse sources in different formats and convert them into a common format. Logs can be searched, parsed, viewed, and stored, and there is an opportunity to build analysis services on top of them. Available for Windows Server and Linux. Get a 30-day free trial.
  2. Datadog Log Management A cloud-based log server with a library of collectors that can be installed on local systems through the Datadog console. This package includes aggregation, a data viewer with analytical tools, and log file management that includes an archiving and revival system. Available for a 14-day free trial.
  3. Graylog A log management bundle that includes a server and a library of collectors plus a log file manager and a data analysis console. Available in free and paid versions and offered as a SaaS system or as a virtual appliance.
  4. Loggly This cloud-based log server includes the ability to recognize a very long list of log formats and has a data analyzer in its Web-based console. There is a free trial for this system, plus a free version.
  5. Papertrail This cloud-based service includes storage space and lets you insert your own rules into the log aggregator, optionally splitting out parts of messages and directing them to different storage locations. Store or forward logs and use the onboard data viewer for analysis.
  6. Logstash This is a free log server package that is part of the ELK stack, using its associated Elasticsearch package to analyze logs, and Kibana to view data.