Analytics/EventLogging

[Image: EventLogging architecture diagram]

EventLogging (also EL for short) is a platform for modelling, logging and processing arbitrary analytic data. It consists of an event-processing back-end which aggregates events, validates them for compliance with pre-declared data models, and streams them to clients. There is also a MediaWiki extension that provides JavaScript and PHP APIs for logging events (Extension:EventLogging). The back-end is implemented in Python. This documentation covers the Wikimedia Foundation installation of EventLogging. To learn more about the MediaWiki extension, refer to https://www.mediawiki.org/wiki/Extension:EventLogging.

For users

Schemas

Here's the list of existing schemas. Note that not all of them are active: some are still in development (not active yet), and others are obsolete and listed only for historical reference.

https://meta.wikimedia.org/wiki/Research:Schemas

The schema's discussion page is the place to comment on the schema design and related topics. It contains a template that specifies: the schema maintainer, the team and project the schema belongs to, its status (active, inactive, in development), and its purging strategy.

Creating a schema

There's thorough documentation on designing and creating a new schema here:

https://www.mediawiki.org/wiki/Extension:EventLogging/Guide#Creating_a_schema

Please don't forget to fill in the schema's talk page with the template: https://meta.wikimedia.org/wiki/Template:SchemaDoc Note that for new schemas the default purging strategy is to automatically purge all events older than 90 days (see the Data retention and auto-purging section).
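
For orientation, here is a minimal sketch of validating an event against a JSON Schema using the Python jsonschema library. The fields and the required list below are hypothetical and use generic JSON Schema conventions; the authoritative schema format and validation live on Meta-Wiki and in the EventLogging back-end.

import jsonschema

# Hypothetical schema, for illustration only; real schemas live on Meta-Wiki.
schema = {
    "type": "object",
    "properties": {
        "action": {"type": "string", "description": "What the user did"},
        "pageTitle": {"type": "string", "description": "Title of the page"},
    },
    "required": ["action"],
}

event = {"action": "click", "pageTitle": "Main_Page"}

# Raises jsonschema.ValidationError if the event does not match the schema.
jsonschema.validate(event, schema)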

Send events

See Extension:EventLogging/Programming for how to instrument your MediaWiki code.
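
For orientation, the mechanism underneath those APIs can be sketched by hand: the client JSON-stringifies an event capsule and URL-encodes it onto a /beacon request (see the size limitation section below). A rough Python sketch follows; the capsule fields, revision value and target host are hypothetical.

import json
import urllib.parse

# Hypothetical event capsule; the real field set is defined by the extension.
capsule = {
    "schema": "Edit",
    "revision": 123,                      # schema revision id (made-up value)
    "wiki": "enwiki",
    "event": {"action": "saveSuccess"},
}

# The client JSON-stringifies the capsule and URL-encodes it onto a /beacon request.
query = urllib.parse.quote(json.dumps(capsule))
url = "https://en.wikipedia.org/beacon/event?" + query  # hypothetical host
print(url)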

Verify received events

Validation error logs are visible in the eventlogging-errors Logstash dashboard for up to 30 days. Access to Logstash requires an LDAP account with membership in a user group indicating that the user has signed an NDA.

Accessing Data

Access data in MySQL

Data stored by EventLogging varies in sensitivity from schema to schema and can include personally identifiable and otherwise sensitive information, so access to it requires an NDA.

See Analytics/EventLogging/Data representations for an explanation on where the data lives and how to access it.

See also: Analytics/Data access#Analytics slaves.

Sample query

Note that you need SSH access to stat1003 and authorization to access the database.

On stat1003.eqiad.wmnet, type this command:

mysql --defaults-extra-file=/etc/mysql/conf.d/research-client.cnf \
  --host dbstore1002.eqiad.wmnet \
  -e "SELECT LEFT(timestamp, 8) AS ts, COUNT(*) FROM log.NavigationTiming_10785754 WHERE timestamp >= '20150402062613' GROUP BY ts ORDER BY ts;"
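
The same query can also be run from Python, for example with the pymysql library (assuming it is available on the stat host); the defaults file provides the research credentials:

import pymysql

# Reuse the research credentials from the same MySQL defaults file.
conn = pymysql.connect(
    read_default_file="/etc/mysql/conf.d/research-client.cnf",
    host="dbstore1002.eqiad.wmnet",
)

with conn.cursor() as cur:
    cur.execute("""
        SELECT LEFT(timestamp, 8) AS ts, COUNT(*)
        FROM log.NavigationTiming_10785754
        WHERE timestamp >= '20150402062613'
        GROUP BY ts
        ORDER BY ts
    """)
    for row in cur.fetchall():
        print(row)

conn.close()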

Access data in Hadoop

EventLogging data is imported hourly into Hadoop by Camus. It is written in hourly partitions to directories named after each schema, visible on stat1002 under /mnt/hdfs/wmf/data/raw/eventlogging/eventlogging_<schema>/hourly/<year>/<month>/<day>/<hour>. There are many ways to access this data, including Hive and Spark. Below are a few examples; there may be many (better!) ways to do this.

NOTE: EventLogging data in HDFS is auto purged after 90 days.
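
As a quick sanity check before querying, you can list a schema's hourly partition directories through the fuse mount on stat1002. A small sketch (the schema name and year are just examples):

import glob

# Hourly partition directories for one schema, via the HDFS fuse mount on stat1002.
base = "/mnt/hdfs/wmf/data/raw/eventlogging/eventlogging_NavigationTiming"
for path in sorted(glob.glob(base + "/hourly/2015/*/*/*")):
    print(path)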

Hive

Hive has a couple of built-in functions for parsing JSON. Since EventLogging records are stored as JSON strings, you can access this data by creating a Hive table with a single string column and then parsing that string in your queries:

ADD JAR file:///usr/lib/hive-hcatalog/share/hcatalog/hive-hcatalog-core.jar;

-- Make sure you don't create tables in the default Hive database.
USE otto;

-- Create a table with a single string field
CREATE EXTERNAL TABLE `CentralNoticeBannerHistory` (
  `json_string` string
)
PARTITIONED BY (
  year int,
  month int,
  day int,
  hour int
)
STORED AS INPUTFORMAT
  'org.apache.hadoop.mapred.SequenceFileInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 
  '/wmf/data/raw/eventlogging/eventlogging_CentralNoticeBannerHistory';

-- Add a partition
ALTER TABLE CentralNoticeBannerHistory
ADD PARTITION (year=2015, month=9, day=17, hour=16)
LOCATION '/wmf/data/raw/eventlogging/eventlogging_CentralNoticeBannerHistory/hourly/2015/09/17/16';

-- Parse the single string field as JSON and select a nested key out of it
SELECT get_json_object(json_string, '$.event.l.b') as banner_name
FROM CentralNoticeBannerHistory
WHERE year=2015;

Spark

Spark Python (pyspark):

import json

# Read one hourly partition; the sequence file values (x[1]) are JSON event strings.
data = sc.sequenceFile("/wmf/data/raw/eventlogging/eventlogging_CentralNoticeBannerHistory/hourly/2015/09/17/07")
records = data.map(lambda x: json.loads(x[1]))

# Count events per banner name.
records.map(lambda x: (x['event']['l'][0]['b'], 1)).countByKey()
# Out[33]: defaultdict(<class 'int'>, {'WMES_General_Assembly': 5})

MobileWikiAppFindInPage events with SparkSQL in Spark Python (pyspark):

# Load the JSON string values out of the compressed sequence file.
# Note that this uses * globs to expand to all data in 2016.
data = sc.sequenceFile(
    "/wmf/data/raw/eventlogging/eventlogging_MobileWikiAppFindInPage/hourly/2016/*/*/*"
).map(lambda x: x[1])

# parse the JSON strings into a DataFrame
json_data = sqlCtx.jsonRDD(data)
# Register this DataFrame as a temp table so we can use SparkSQL.
json_data.registerTempTable("MobileWikiAppFindInPage")

top_k_page_ids = sqlCtx.sql(
"""SELECT event.pageID, count(*) AS cnt
    FROM MobileWikiAppFindInPage
    GROUP BY event.pageID
    ORDER BY cnt DESC
    LIMIT 10"""
)
for r in top_k_page_ids.collect():
    print("%s: %s" % (r.pageID, r.cnt))

Edit events with SparkSQL in Spark scala (spark-shell):

// Load the JSON string values out of the compressed sequence file
val edit_data = sc.sequenceFile[Long, String](
    "/wmf/data/raw/eventlogging/eventlogging_Edit/hourly/2015/10/21/16"
).map(_._2)

// parse the JSON strings into a DataFrame
val edits = sqlContext.jsonRDD(edit_data)
// Register this DataFrame as a temp table so we can use SparkSQL.
edits.registerTempTable("edits")

// SELECT top 10 edited wikis
val top_k_edits = sqlContext.sql(
    """SELECT wiki, count(*) AS cnt
    FROM edits
    GROUP BY wiki
    ORDER BY cnt DESC
    LIMIT 10"""
)
// Print them out (collect to the driver first so the rows print locally)
top_k_edits.collect().foreach(println)

Accessing Data from Kafka

There are many Kafka tools with which you can read the EventLogging data streams. kafkacat is one that is installed on stat1002.

# Uses kafkacat CLI to print window ($1)
# seconds of data from $topic ($2)
function kafka_timed_subscribe {
    timeout $1 kafkacat -C -b kafka1012 -t $2
}

# Prints the top K most frequently
# occurring values from stdin.
function top_k {
    sort        |
    uniq -c     |
    sort -nr    |
    head -n $1
}

while true; do
    date; echo '------------------------------' 
    # Subscribe to eventlogging_Edit topic for 5 seconds
    kafka_timed_subscribe 5 eventlogging_Edit |
    # Filter for the "wiki" field 
    jq .wiki |
    # Count the top 10 wikis that had the most edits
    top_k 10
    echo ''
done
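
If you prefer Python over the shell, a similar ad-hoc consumer can be sketched with the kafka-python library (assuming it is installed; the broker mirrors the kafkacat example above, and the port is the standard Kafka port, an assumption here):

import json
from collections import Counter
from kafka import KafkaConsumer

# Read the eventlogging_Edit topic from the same broker used above.
consumer = KafkaConsumer(
    "eventlogging_Edit",
    bootstrap_servers="kafka1012:9092",   # port is an assumption
    auto_offset_reset="latest",
    consumer_timeout_ms=5000,             # stop iterating after 5 idle seconds
)

# Count the wikis seen in the consumed window and print the top 10.
wikis = Counter(json.loads(msg.value.decode("utf-8"))["wiki"] for msg in consumer)
print(wikis.most_common(10))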

Generate reports and dashboards

In addition to ad-hoc queries, there are a couple of tools that make it easy to generate periodic reports on EventLogging data and display them in the form of dashboards. You can find more info on them here:

Data retention and auto-purging

Since the EventLogging database audit in Q1 2015, all database tables are automatically purged to comply with WMF's data retention guidelines (some schemas have more restrictive purging strategies than others). Note that new tables (new schemas) default to the most strict form of auto-purging.

Read more on this topic at Analytics/EventLogging/Data retention and auto-purging.

Publishing data

See Analytics/EventLogging/Publishing for how to proceed if you want to publish reports based on EventLogging data, or datasets that contain EventLogging data.

Operational support

We consider EventLogging a tier-2 system. Informally, a tier-2 system is one that helps you operate better, while a tier-1 system powers essential operations, meaning that you cannot function without it. We consider EventLogging tier-2 because it provides data that helps us improve our applications, but we can certainly function without it.

Being tier-2 means that we provide support for EventLogging during business hours, in the absence of any tier-1 issue that might be affecting our infrastructure. EventLogging could therefore be down for up to 48 hours (a weekend), so make sure your reporting can deal with gaps in the data.

For developers

Codebase

The EventLogging Python codebase can be found at https://gerrit.wikimedia.org/r/#/admin/projects/eventlogging

Architecture

See Analytics/EventLogging/Architecture for EventLogging architecture.

Performance

On this page you'll find information about EventLogging performance, such as load tests and benchmarks:

https://wikitech.wikimedia.org/wiki/Analytics/EventLogging/Performance

Size limitation

There is a limitation on the size of individual EventLogging events due to the underlying infrastructure (the limited size of URLs in Varnish's varnishncsa/varnishlog, as well as of Wikimedia UDP packets). For the purpose of this size limitation, an "entry" is a /beacon request URL containing URL-encoded, JSON-stringified event data. Entries longer than 1014 bytes are truncated. When an entry is truncated, it fails validation because the result is no longer valid JSON.

This should be taken into account when creating a schema. Avoid large schemas, and avoid schema fields with long keys and/or values. Consider splitting up a very large schema, or replacing long fields with shorter ones.

To help with testing event sizes, EventLogging's dev server logs a warning to the console for each event that exceeds the size limit.
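
To get a rough pre-flight estimate of an event's encoded size before instrumenting it, you can approximate the entry length in Python. The exact bytes counted by the production limit (full URL versus query string) may differ from this sketch, and the capsule below is hypothetical:

import json
import urllib.parse

MAX_ENTRY_BYTES = 1014  # entries longer than this are truncated

# Hypothetical event capsule, as in the "Send events" sketch above.
capsule = {"schema": "Edit", "event": {"action": "saveSuccess", "summary": "x" * 1200}}

entry = urllib.parse.quote(json.dumps(capsule))
if len(entry) > MAX_ENTRY_BYTES:
    print("Warning: entry is %d bytes and would be truncated" % len(entry))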

Monitoring

You can use various tools to monitor operational metrics, read more in this dedicated page:

https://wikitech.wikimedia.org/wiki/Analytics/EventLogging/Monitoring

Testing

The EventLogging extension can easily be tested in Vagrant, as described on mediawiki.org at Extension:EventLogging. The server side of EventLogging (the consumer of events) does not have a Vagrant setup for testing, but it can be tested in the Beta Cluster:

https://wikitech.wikimedia.org/wiki/Analytics/EventLogging/TestingOnBetaCluster

How do I ...?

Visit the EventLogging how-to page. It contains dev-ops tips and tricks for EventLogging, such as deploying, troubleshooting and restarting. Please add any step-by-step guides for EventLogging dev-ops tasks there.

https://wikitech.wikimedia.org/wiki/Analytics/EventLogging/How_to

Oncall guide

Here's a list of routine tasks to do when oncall for EventLogging.

https://wikitech.wikimedia.org/wiki/Analytics/EventLogging/Oncall

Incidents

Here's a list of all related incidents and their post-mortems. To add a new page to this generated list, use the "EventLogging/Incident_documentation" category.

For all the incidents (including ones not related to EventLogging) see: Incident documentation.

See also