Analytics/Data/Webrequest

[Diagram: wmf.webrequest and wmf_raw.webrequest within the Wikimedia Foundation's pageview data pipeline (Hive/Hadoop is the rounded box at the bottom)]

As of 2015-01, there are two 'webrequest' tables available in Hive. The wmf_raw.webrequest table holds JSON data imported directly from Kafka. The wmf.webrequest table holds the 'refined' webrequest data.

wmf.webrequest

The wmf.webrequest table holds the 'refined' webrequest data. It contains the same data as wmf_raw.webrequest, plus additional information added during the refinement phase. It is stored in the Parquet format, a column-based storage format. When querying wmf.webrequest, Hive does not have to parse JSON, and it only reads data from disk for the columns you explicitly use in your query. This should improve the performance of most Hive queries. See Data Format Experiments#Results for more info.
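
For example, selecting only the columns you need lets Hive skip the rest of the data on disk entirely (a minimal sketch; the partition values are illustrative placeholders):

SELECT
    uri_host,
    uri_path,
    http_status
FROM
    wmf.webrequest
WHERE
    webrequest_source = 'text'
    AND year = 2015 AND month = 1 AND day = 12 AND hour = 0
LIMIT 10;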

Current Schema

$ hive --database wmf

hive (wmf)> describe webrequest;
OK
col_name	data_type	comment
hostname            	string              	Source node hostname
sequence            	bigint              	Per host sequence number assigned by varnish-kafka
dt                  	string              	Timestamp at cache in ISO 8601
time_firstbyte      	double              	Time to first byte  
ip                  	string              	IP of packet at cache
cache_status        	string              	Cache status        
http_status         	string              	HTTP status of response
response_size       	bigint              	Response size       
http_method         	string              	HTTP method of request
uri_host            	string              	Host of request     
uri_path            	string              	Path of request     
uri_query           	string              	Query of request    
content_type        	string              	Content-Type header of response
referer             	string              	Referer header of request
x_forwarded_for     	string              	X-Forwarded-For header of request
user_agent          	string              	User-Agent header of request
accept_language     	string              	Accept-Language header of request
x_analytics         	string              	X-Analytics header of response
range               	string              	Range header of response
is_pageview         	boolean             	Indicates if this record was marked as a pageview during refinement
record_version      	string              	Keeps track of changes in the table content definition - https://wikitech.wikimedia.org/wiki/Analytics/Data/Webrequest
client_ip           	string              	Client IP computed during refinement using ip and x_forwarded_for
geocoded_data       	map<string,string>  	Geocoded data computed during refinement using the computed client ip and the MaxMind database
x_cache             	string              	X-Cache header of response
user_agent_map      	map<string,string>  	User-agent map with browser_family, browser_major, device_family, os_family, os_major, os_minor, wmf_app_version keys and associated values.
x_analytics_map     	map<string,string>  	X_analytics map view of the x_analytics field
ts                  	timestamp           	Unix timestamp in milliseconds extracted from dt
access_method       	string              	Method used to access the site (mobile app|mobile web|desktop)
agent_type          	string              	Categorises the agent making the webrequest as either user, spider or bot (tagged as bot if the user-agent explicitly contains 'WikimediaBot'; automata tagging to be added).
is_zero             	boolean             	Indicates if the webrequest is accessed through a zero provider
referer_class       	string              	Indicates if a referer is internal, external or unknown.
normalized_host     	struct<project_class:string,project:string,qualifiers:array<string>,tld:string>	struct containing project_class (such as wikipedia or wikidata for instance), project (such as en or commons), qualifiers (a list of in-between values, such as m and/or zero) and tld (org most often)
pageview_info       	map<string,string>  	map containing project, language_variant and page_title values only when is_pageview = TRUE.
page_id             	int                 	MediaWiki page_id for this page title. For redirects this could be the page_id of the redirect or the page_id of the target. This may not always be set, even if the page is actually a pageview.
webrequest_source   	string              	Source cluster      
year                	int                 	Unpadded year of request
month               	int                 	Unpadded month of request
day                 	int                 	Unpadded day of request
hour                	int                 	Unpadded hour of request
	 	 
# Partition Information	 	 
# col_name            	data_type           	comment             
	 	 
webrequest_source   	string              	Source cluster      
year                	int                 	Unpadded year of request
month               	int                 	Unpadded month of request
day                 	int                 	Unpadded day of request
hour                	int                 	Unpadded hour of request
Example query

Notice the webrequest_source, year, month, day, and hour fields. These are Hive partitions, and are explicit mappings to hourly imports in HDFS. You must include at least one partition predicate in the where clause of your queries (even if it is just year > 0). Partitions allow you to reduce the amount of data that Hive must parse and process before it returns you results. For example, if you are only interested in data during a particular day, you could add where year = 2014 and month = 1 and day = 12. This will instruct Hive to only process data for partitions that match that partition predicate. You may use partition fields as you would any normal field, even though the field values are not actually stored in the data files.
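
Putting this together, a query like the following reads only a single day of the text source (a sketch only; the date matches the example above and is purely illustrative):

SELECT
    uri_host,
    http_status,
    COUNT(*) AS requests
FROM
    wmf.webrequest
WHERE
    webrequest_source = 'text'
    AND year = 2014 AND month = 1 AND day = 12
GROUP BY
    uri_host,
    http_status
LIMIT 100;

Map fields are accessed with bracket syntax and struct fields with dot notation, e.g. user_agent_map['browser_family'] or normalized_host.project, for data recent enough to contain those fields (see the changelog below).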

Changes and known problems since 2015-03-04

Each entry below lists: date from, date until (when applicable), task(s), record_version, and details.
2015-01-01 * Create table with data from 'text', 'mobile' and 'upload' request sources.
2015-01-26 * Add 'misc' request source.
2015-02-12 * Add 'bits' request source.
2015-02-27 task T90725, task T89401 0.0.1 Add record_version, client_ip and geocoded_data fields to the schema. An error is raised when trying to access those fields for data before the release date.
2015-03-30 task T91749, task T91793, task T89396 0.0.2 Add x_cache, user_agent_map and x_analytics_map fields to the schema. An error is raised when trying to access those fields for data before the release date. Note: user_agent_map and x_analytics_map are meant to replace the original user_agent and x_analytics fields, but field replacement breaks schema backward compatibility. We will deploy this change once we are confident that no more backward-incompatible changes will be needed soon.
2015-04-10 task T95178, task T94584 0.0.3 Add ts, access_method, agent_type and is_zero fields to the schema. An error is raised when trying to access those fields for data before the release date. Note: agent_type only contains spider and user values for the moment. A detailed analysis of user_agent and usage patterns should be undertaken to provide new/refined values.
Warning: the agent_type field is bugged in this version; this is fixed in 0.0.4.
2015-05-07 0.0.4 Add referer_class field to the schema, with possible values internal, external or unknown. An error is raised when trying to access this field for data before the release date. The agent_type bug is corrected, and some improvements are made in the computation of the is_pageview and access_method fields.
2015-06-11 task T99932, task T99918, task T96044 0.0.5 Add normalized_host and pageview_info fields to the schema, and update user_agent_map to add wmf_app_version. normalized_host is created by splitting the uri_host into a struct with project_class (wikipedia or wikidata, for instance), project (en or commons, for instance), qualifiers (an array of in-between qualifiers, like m or zero), and tld (org, most of the time). pageview_info is set only when is_pageview is true, and is a map with keys project (en.wikipedia or commons.wikimedia, for instance), language_variant (zh-hant, for instance), and page_title when possible.
2015-08-13 task T107764, task T108833, task T108717 0.0.6 Correct a bug in the use of webrequest uri hosts to determine whether a request is a pageview. The change affects 0.2% of pageviews by removing false positives. The 'bits' webrequest_source partition and associated automatic jobs have been removed (all data that was in bits is now in the text partition), and a new 'maps' webrequest_source partition has been created (data for this partition starts on 2015-08-11T00:00:00).
2015-08-03, 2015-08-10 task T106581 Two separate Kafka upgrade attempts caused significant data loss. A bug in Kafka caused data not to be compressed properly. Once the bug was fixed, we spent a lot of time patching and repairing what data we could. Several hourly webrequest partitions had significant loss and could not be restored. This loss was caused either by Kafka being too overloaded to accept produce requests, or by a recovery attempt in which a single broker for which no replicas were in sync was restarted.
[Figure: Webrequest loss, August 2015. Graphs: http://debugging.wmflabs.org/#refinery-graphs-tab. See also: Incident_documentation/20150803-Kafka, Incident_documentation/20150810-Kafka]
2015-08-27 task T109256, task T108866 0.0.7 Correct two bugs:
  • In is_pageview definition: outreach.wikimedia.org and donate.wikimedia.org are not pageviews anymore.
  • In page_title extraction (for pageview_info['page_title']): consistently convert spaces into underscores (only special cases were correctly treated before).
2015-08-31 0.0.8 arbcom-*.wikimedia.org are not pageviews anymore.
2015-09-16 task T109383, task T106134, task T108598 0.0.9 Update ua-parser to an up-to-date version (better user-agent classification), improve spider tagging by enhancing the regular expression used in addition to ua-parser, and update the pageview definition to prevent requests having 'preview' set in the x_analytics header from being counted as pageviews or appPageviews.
2015-11-19 task T119054 0.0.10 Improve spider tagging: user agents matching .*http.* are now flagged as spider.
2015-12-01 task T116023 0.0.11 Add the MediaWiki page_id for this hit when available. For redirects this could be the page_id of the redirect or the page_id of the target. This may not always be set, even if the page is actually a pageview. Upgrade the pageview definition to better filter uninteresting paths.
2015-12-15 0.0.12 There was an irrecoverable raw_webrequest data loss for the hour 17 UTC of: 25.6% (misc), 19.5% (mobile), 19.1% (text), 39.1% (upload). The loss was due to the enabling of IPSec, which encrypts varnishkafka traffic between caches in remote datacenters and the Kafka brokers in eqiad. For a period of about 40 minutes, no webrequest logs from remote datacenters were successfully produced to Kafka.
2015-12-18 0.0.12 There was another irrecoverable raw_webrequest data loss between 21:54 and 22:15. A puppet change was merged in which an important firewall rule dealing with IPSec was lost. This kept all varnishkafkas in remote datacenters from producing to Kafka during this time.

wmf_raw.webrequest

You most likely do not want to query the raw table (your queries will be slow); instead, use the refined wmf.webrequest table described above, which is stored in a format that is much friendlier for querying and compression.

Load job (import)

Raw Webrequest logs are currently imported using Camus from Kafka. A Hive table with hourly partitions is mapped on top of this data. Webrequest logs are Snappy compressed SequenceFiles in JSON format. Using SequenceFiles makes it possible to use MapReduce jobs with compressed data.
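
Because the hourly imports are exposed as partitions, the raw data for a single hour can be addressed directly (a sketch; the partition values are placeholders):

SELECT
    COUNT(*) AS raw_records
FROM
    wmf_raw.webrequest
WHERE
    webrequest_source = 'text'
    AND year = 2015 AND month = 1 AND day = 12 AND hour = 0;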

In addition to the core data, two tables are created with detailed statistics about the hourly partition loaded.

  • wmf_raw.webrequest_sequence_stats -- Contains details on duplicate / missing data for each varnish-kafka source.
  • wmf_raw.webrequest_sequence_stats_hourly -- Contains hourly aggregated loss and duplicate percentages for each webrequest source.

The statistics in those two tables are computed using a 'sequence' number: a counter that each varnish-kafka process increments for every line it outputs. This sequence number makes it possible to track duplicates and missing data using simple arithmetic (max, min, count distinct). For details, look at the code for webrequest_sequence_stats and webrequest_sequence_stats_hourly.
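
The idea behind the arithmetic can be sketched as follows (a simplified illustration, not the production job; the partition values are placeholders, and the sketch assumes the per-host counter does not reset within the hour):

SELECT
    hostname,
    MAX(sequence) - MIN(sequence) + 1    AS expected_count,   -- lines the host should have produced in this hour
    COUNT(*)                             AS actual_count,     -- lines that actually arrived
    COUNT(*) - COUNT(DISTINCT sequence)  AS duplicate_count   -- lines seen more than once
FROM
    wmf_raw.webrequest
WHERE
    webrequest_source = 'text'
    AND year = 2015 AND month = 1 AND day = 12 AND hour = 0
GROUP BY
    hostname;

Missing lines for a host are then expected_count minus the distinct sequence count, and the hourly loss and duplicate percentages are derived from those per-host totals.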

Checking statistics

A typical query for checking high-level statistics for load jobs is:

SELECT
    year,
    month,
    day,
    hour,
    webrequest_source,
    percent_lost,
    percent_duplicate
FROM
    wmf_raw.webrequest_sequence_stats_hourly
WHERE
    -- Update the restrictions to your need.
    year = XXXX
    AND month = XX
    AND day = XX
    AND hour = XX
    AND webrequest_source IN (XXX)
ORDER BY
    year, month, day, hour, webrequest_source
LIMIT 1000;