Using BigQuery with Reddit Data

Using BigQuery with Reddit data is a lot of fun and easy to do, so let’s get started. You’ll want to start by setting up a BigQuery project if you don’t already have one. Google gives 1TB (one terabyte) of free data-processing each month via BigQuery. Each time you run a query, BQ will tell you how much data that particular query processed.

Set up a BigQuery Project

1. Go to the BigQuery console.

2. Create a new project if you don’t have one already.

3. This may redirect you to the project creation page — if so, create a new project there.

4. Once you have created a new project, you can return to the BigQuery console by selecting the icon in the upper left and choosing BigQuery. If that doesn’t work, try opening the BigQuery console again after you create the project and it should take you to the screen where you can begin running queries.


If everything works, you should see this in the upper left-hand part of your screen.


Interacting with BigQuery

Using BQ is very easy if you’ve ever used SQL before, but in case you haven’t, here’s one of the simplest SQL statements for getting data.

SELECT * FROM [pushshift:rt_reddit.comments@-60000-] LIMIT 1;

A database is composed of tables. Each table is composed of fields (columns) and records (rows). The above SQL query uses something called a table decorator in BQ. It restricts the amount of data processed and is always a good thing to use to keep a query from getting too large. Remember, BigQuery has databases that are well into the terabytes and running a query on very large tables can consume massive amounts of data processing. These queries can be very expensive (from a data processing standpoint).

The SQL query above requests one record (row) from the pushshift:rt_reddit.comments table using the latest 60 seconds’ worth of cached data (the table decorator part). Table decorators use milliseconds, so remember to multiply the number of seconds by 1,000. For instance, if you wanted to show all Reddit comments that mentioned “Bernie” (case-sensitive) over the past hour, it would look like this:

SELECT * FROM [pushshift:rt_reddit.comments@-3600000-] WHERE REGEXP_MATCH(body,r'Bernie')

Not everyone will capitalize Bernie, so if we wanted to do a case-insensitive search, we would want to use LOWER(body) instead of body. This makes all comments processed lower-case. The SQL query would now look like this:

SELECT * FROM [pushshift:rt_reddit.comments@-3600000-] WHERE REGEXP_MATCH(LOWER(body),r'bernie')

Make sure you keep everything you are searching for lower-case, too! Otherwise, you’ll get back 0 records. This would NOT work: REGEXP_MATCH(LOWER(body),r'Bernie')

We’ve extracted all comments with “bernie” or “Bernie”, but what if you want to see which subreddits contain the most comments mentioning this keyword? This is how you would do it:

SELECT count(*), subreddit FROM [pushshift:rt_reddit.comments@-3600000-] 
GROUP BY subreddit
ORDER BY 1 DESC

This is grouping the results by subreddit. What’s the ORDER BY 1 all about? With BQ, you can reference a field by its position, so ORDER BY 1 just means order by count(*), since count(*) is the first field. We could have also written it like this:

SELECT count(*) count, subreddit FROM [pushshift:rt_reddit.comments@-3600000-] 
GROUP BY subreddit
ORDER BY count DESC

In this example, we’re giving the field count(*) an alias called “count”. We can then reference that alias in the ORDER BY clause. This is usually a little clearer, especially when your SQL queries start getting very large.

Using the real-time comment and submission tables

In the previous examples, we ran some basic queries against the publicly available real-time BQ Reddit tables. There are two tables — one for comments and one for submissions. Let’s take a closer look at each:

Below are the fields (columns) for the comments table


Below are the fields (columns) for the submissions table


These are the fields available for the comments and submissions table. Using this as a reference, you should be able to start creating more advanced SQL queries in BQ. Let’s do some powerful and interesting queries using BigQuery!

Some interesting Queries to get you started

Top 10 Reddit posts from the previous 10 minutes of comment activity

Average Cost: 125 MB

SELECT * FROM (SELECT count(*) total_comments, link_id FROM [pushshift:rt_reddit.comments@-600000-]
GROUP BY link_id) c
JOIN EACH [pushshift:rt_reddit.submissions@-50000000-] s ON s.id = c.link_id
ORDER BY c.total_comments DESC
LIMIT 10

This query will show the top 10 Reddit posts (submissions) using the past ten minutes of comment activity. Each comment made on Reddit belongs to a subreddit and a submission, so we can group comments by submission and then join each group’s link_id to the id of the submission. Each comment has a link_id, which is the id of the submission it belongs to.

When you see something like:

SELECT * FROM (SELECT ...... ) t1

This is basically nesting select statements. The second select creates what is known as a “derived table.” You can then treat that derived table, or “t1”, as if it were a real table. This is how you can easily create more complicated queries without getting in over your head too easily. Let’s do a few more advanced queries.

Top 10 Reddit posts that are only a half hour old (from comment activity)

Average Cost: 10 MB

SELECT * FROM (SELECT count(*) total_comments, link_id FROM [pushshift:rt_reddit.comments@-1800000-]
GROUP BY link_id) c
JOIN EACH [pushshift:rt_reddit.submissions@-1800000-] s ON s.id = c.link_id
ORDER BY c.total_comments DESC
LIMIT 10

Extracting Significant Subreddit Words (using /r/politics)

Average Cost: 3 GB

SELECT word, s_count, s_ratio, g_count, g_ratio, s_to_g_ratio, weight FROM (
SELECT s.word word, s.c s_count, ROUND(s.ratio,4) s_ratio, g.c g_count, ROUND(g.ratio,4) g_ratio,ROUND(s.ratio/g.ratio,2) s_to_g_ratio, ROUND(s.ratio/g.ratio,2) * s.c weight  FROM 
(SELECT c, word, ssum, (c/ssum)*100 ratio FROM
(SELECT c,word, sum(c) over () as ssum FROM 
(SELECT count(*) c, word FROM 
(SELECT REGEXP_EXTRACT(word,r'([a-z0-9\-\']*)') word FROM 
(SELECT SPLIT(lower(body),' ') word FROM [pushshift:rt_reddit.comments@-604800000-] WHERE lower(subreddit)="politics"))
GROUP BY word))) s
JOIN EACH
(SELECT c, word, gsum, (c/gsum)*100 ratio FROM 
(SELECT c,word, sum(c) over () as gsum FROM 
(SELECT count(*) c, word FROM 
(SELECT REGEXP_EXTRACT(word,r'([a-z0-9\-\']*)') word FROM 
(SELECT SPLIT(lower(body),' ') word FROM [pushshift:rt_reddit.comments@-604800000-]))
GROUP BY word))) g
ON g.word = s.word
WHERE s.word NOT IN ('gt','lt','amp'))
WHERE s_to_g_ratio > 5
ORDER BY s_ratio DESC;

Enhancing Reddit’s API and Search

Monitoring Reddit Activity

Below are some live, real-time graphs showing the current comment volume on Reddit. They are driven by a back-end API that I am developing for other developers to increase transparency on Reddit and to provide useful real-time analytics. In the next several weeks, I will be putting up a separate guide that fully documents the new API, which includes analytics, search capabilities and facets (rollups) of the search data. Let’s dive into the first API endpoint, which shows Reddit activity either globally or for more specific slices, including search term activity. (What do I mean by search term activity? Read on!)

API endpoint that fetches the data for the graphs above: 

The display above shows real-time comment activity for Reddit. pushshift.io is ingesting data using Reddit’s API and indexing it in real time. Sphinx search is used on the back end to provide real-time search of comments submitted to Reddit.

The activity API call returns an array of arrays. Each array within the container array is a two-element array containing an epoch time in milliseconds and the number of comments posted during that time. For all API calls, the returned data is held as the value of the data key. Here is a very basic illustration of how the data is returned.

{
  "metadata": {
    "results": 2,
    "time": 0.002
  },
  "data": [
    [1437016644000, 16],
    [1437016645000, 11]
  ],
  "parameters": {
    "unit": "second",
    "count": 2
  }
}

This endpoint supports several very useful parameters:

unit (required): One of the predefined units (second, minute, hour) or an integer representing a span of time in seconds. For instance, if the unit parameter is set to “second”, each bar in the graph represents one second of activity.

count (optional): The number of unit blocks to return. For instance, if unit were set to 360 (each value representing six minutes of activity) and count were set to 100, the total range of time returned would be 100 six-minute blocks, or the previous ten hours of ingest activity for Reddit.

q (optional): Shows activity for a specific search term. For instance, if you wanted to see a graph of how often the search term “Pao” was mentioned on Reddit, you would set the “q” parameter to “pao”. Search is case-insensitive.

event (optional): One of two values, “t1” (comments) or “t3” (submissions). If not present, it defaults to “t1”. Use this parameter to narrow activity down to comments or submissions.

subreddit (coming soon): Will allow you to narrow results to a specific subreddit.

submission (coming soon): Will allow you to narrow results to a specific submission.

start (coming soon): The beginning date in epoch time. Useful for creating graphs from past activity.

Here is an example of using the real-time activity monitor to see all comments that mention “nintendo” over the past 400 hours in four-hour units. The API call for this would set q to “nintendo”, unit to 14400 (four hours in seconds), and count to 100. Make sure you URL encode your requests to the API!
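To make that concrete, here is a minimal Python sketch of such a request. The endpoint URL below is a placeholder (the original post linked the actual activity endpoint, which is not reproduced here); the q, unit, count and event parameters are the ones documented above.

import requests  # third-party HTTP library

# Hypothetical endpoint URL -- substitute the real activity endpoint here.
ACTIVITY_URL = "https://api.pushshift.io/reddit/activity"

params = {
    "q": "nintendo",   # search term (case-insensitive)
    "unit": 4 * 3600,  # four-hour blocks, expressed in seconds
    "count": 100,      # 100 blocks = the previous 400 hours
    "event": "t1",     # t1 = comments, t3 = submissions
}

# requests URL-encodes the query string for us.
response = requests.get(ACTIVITY_URL, params=params)
response.raise_for_status()

payload = response.json()
print("results:", payload["metadata"]["results"])

# "data" is an array of [epoch_ms, comment_count] pairs.
for epoch_ms, count in payload["data"]:
    print(epoch_ms, count)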

More information on this new API endpoint will be added soon. Check back!


Using Redis as a Twitter Ingest Buffer

Why use Redis?


Using Redis is a great way to buffer ingest operations in the event that there is a database issue or some other piece of the process fails. Redis is fast. Extremely fast. Up to hundreds of thousands of SET and GET operations per second fast. That’s just using one Redis server (Redis is single-threaded, so to distribute the load, multiple Redis servers can run simultaneously — one per core). For most raw data ingest, Redis won’t even break a sweat against data feeds as large as Twitter’s firehose service.

To put into perspective just how popular Twitter is, the service averages around 6,000 tweets per second (TPS). On August 3, 2013, Twitter users in Japan set a record of 143,199 tweets in a single second. That’s a tremendous amount of data. However, given the proper configuration, including a recent multi-core Xeon processor (E5-16xx v3 or E5-25xx v3) and high-quality memory, one server running Redis could have handled that load. A 10 Gbps network adapter would probably be needed in such a situation, though, since that many tweets likely amount to hundreds of megabytes per second even with compression. That heavy a load from Twitter’s streaming API is very atypical, however.

The Basic Setup

The first part of our ingest process is getting the data from Twitter. Twitter has a few public streams that provide approximately 1% of the bandwidth of their full firehose stream. Back in 2010, Twitter partnered with Gnip, and Gnip is now one of the few authorized companies allowed to resell Twitter data to consumers. How are we ingesting up to 5% of the full stream? It appears to be a quirk (possibly a bug) in Twitter’s filtering methods: by following the top 5,000 Twitter users at once, we receive far more than Twitter’s sample stream.

Getting back to the process: we connect to the Twitter stream using a Perl script that establishes a compressed connection and receives tweets in JSON format. Each tweet is decompressed from the zlib stream and then re-compressed using the LZ4 compression scheme, which is very fast. Each compressed tweet is then pushed onto a Redis list (using RPUSH) and later retrieved (hopefully very quickly) by another script that uses the Redis LRANGE command to request a block of tweets at once. Once those tweets have been successfully processed, indexed and stored, the script sends an LTRIM command to Redis to remove the tweets it just processed. This works very well using a single-threaded approach and could be scaled up to multiple threads with some more advanced methods. I may cover those methods in a future article if there is interest.
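As a rough illustration of this buffering pattern, here is a simplified Python sketch (the production ingest is a Perl script; the list name, batch size and helper function here are made up for the example):

import json
import lz4.frame   # pip install lz4
import redis       # pip install redis

r = redis.Redis(host="localhost", port=6379)
QUEUE = "tweet_buffer"   # arbitrary list name for this sketch
BATCH = 500              # how many tweets the consumer pulls at once

def producer(raw_tweet_json: str) -> None:
    """Called for every tweet received from the stream:
    compress it and append it to the Redis list."""
    r.rpush(QUEUE, lz4.frame.compress(raw_tweet_json.encode("utf-8")))

def consumer() -> None:
    """Pull a block of tweets, process them, then trim the list."""
    block = r.lrange(QUEUE, 0, BATCH - 1)
    if not block:
        return
    for compressed in block:
        tweet = json.loads(lz4.frame.decompress(compressed))
        index_and_store(tweet)            # placeholder for real processing
    # Only after successful processing do we drop what we consumed.
    r.ltrim(QUEUE, len(block), -1)

def index_and_store(tweet: dict) -> None:
    pass  # indexing/storage would happen here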

Dealing with Gremlins

There are always multiple ways for a system to fail. One of the first things that should be discussed at the beginning of a project is which types of failures are acceptable and which are showstoppers. For our purposes, we can tolerate losing some tweets. There are a few ways that could happen: the connection to Twitter could drop (network issues), Twitter itself could have a major failure (unlikely given the amount of redundancy they have built into their network), or there could be a hardware or power failure. Most of those failure points are rare events. The most tweets I have seen ingested in one second by our system was close to 1,500. Running stress tests, we can handle approximately 5,000 tweets per second before the Redis buffer begins to grow faster than we can pull data from it. However, we’re only using a single-threaded approach at the moment. We could probably push that up to 20,000+ tweets per second with a multi-threaded solution. These approaches utilize only one server.

[Live dashboard widgets shown here report the mean/average, standard deviation, and coefficient of variance for the monitored ingest streams.]

Tracked Users

These are the top Twitter users being tracked out of the total of 5,000.


HTTPS — why your site should be using it

Why use a TLS Certificate?

A TLS (Transport Layer Security) certificate enables a secure channel between your website and the end-user. There are a lot of reasons to use one. In August 2014, Google announced that they would be using HTTPS as a ranking factor. This simply means that sites served over HTTPS have their rankings adjusted favorably when Google indexes them. TLS is an evolutionary step forward from SSL: TLS 1.0 was essentially an upgraded SSL 3.0, and since then TLS has become the mainstream method for securing a site with HTTPS.

Besides a better ranking score from Google, offering HTTPS to your end-users is a great value-add for your site in general.  In fact, there is no reason not to enable HTTPS on all of your endpoints, including a RESTful API if your site has one.  The computational expense for enabling HTTPS is negligible with modern processors.

Why pushshift.io uses HTTPS only

This website offers many different API services for consumers of big data. There are many RESTful API endpoints provided by this website that serve data from Reddit — some of which require authentication or API keys to use. Keeping everything over HTTPS benefits the consumers of pushshift.io’s API by providing privacy and security. Forcing HTTPS connections adds only about 1% CPU utilization overall.

If you examine our certificate, you will see that it uses SHA2 and is publicly auditable. If you would like to get started with enabling TLS for your website, read on!

TLS — The Basics

Enabling HTTPS for your website begins with choosing a certificate provider. There are many to choose from, with prices ranging from free to thousands of dollars. A basic TLS certificate is domain and subdomain specific. In other words, the certificate applies only to one web address, such as www.pushshift.io. I’ve used DigiCert in the past and have had very positive experiences with them. Although they generally cost more than other providers such as Comodo, DigiCert has a great response time if you should ever have an issue with your certificate.

Once you choose a certificate provider, you will need to generate a CSR (Certificate Signing Request) for the certificate issuer before they can create a new certificate for your organization.  There are three main pieces to the puzzle — your private key (which you never give to anyone), the CSR which is signed by your private key and the certificate that is generated for you by an issuer using the CSR you provide to them.

If you are using a Linux based system, you can easily create a new key and CSR all in one go. When creating a new key and CSR for pushshift.io, the command used was:

openssl req -new -newkey rsa:2048 -nodes -out pushshift_io.csr -keyout pushshift_io.key -subj "/C=US/ST=Maryland/L=Linthicum/O=Jason Michael Baumgartner/CN=pushshift.io"

Always keep your private key safe. If you lose your private key or suspect that it has been copied, you will want to contact your certificate issuer immediately and rekey your certificate!

Once you submit the CSR, the issuer will verify your identity and ownership of the domain. They will then issue you a new certificate that you can use within your web server configuration to begin serving pages over HTTPS. Keep in mind that the entire process from start to finish generally requires a couple of hours (most of that time spent waiting for the provider to issue your new certificate). pushshift.io uses Nginx as its web server. This is what the part of our configuration that deals with enabling and forcing HTTPS looks like:

server {
        listen 80;
        rewrite     ^   https://$server_name$request_uri? permanent;
}

server {
        listen 443;
        ssl     on;
        ssl_session_cache    shared:SSL:10m;
        ssl_session_timeout  10m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_certificate    /etc/ssl/private/;
        ssl_certificate_key    /etc/ssl/private/;

        ... a bunch of other configuration stuff ...
}

This basic SSL configuration does a few things.  First, all connections over port 80 (HTTP) are rewritten and forced to go over port 443 (HTTPS).  The ssl_protocols directive forces TLS only connections and also prevents the POODLE vulnerability.  The POODLE vulnerability was an attack vector aimed at SSL v3.0 and basically was the death knell of SSL.  The ssl_ciphers directive is an additional configuration option to fine-tune the types of protocols allowed when negotiating a new connection.
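As an optional sanity check from the client side (not part of the original setup), a short Python sketch like the one below can confirm which protocol a server actually negotiates; with the configuration above you should see a TLS version, never SSLv3.

import socket
import ssl

HOST = "pushshift.io"  # replace with your own domain

# create_default_context() already refuses SSLv2/SSLv3 and verifies certificates.
context = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls_sock:
        print("negotiated protocol:", tls_sock.version())   # e.g. "TLSv1.2"
        print("cipher suite:", tls_sock.cipher()[0])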

Once the configuration is in place, you can reload Nginx by typing “nginx -s reload”, which reloads the configuration without taking the server offline. If everything works, you should get a very high score when checking your site with Qualys SSL Labs.

Qualys SSL Labs offers a nice TLS certificate check.

Using SSD and LVM for Big Data

When I began the task of archiving all Reddit comments, I realized that my 250 GB SSD drive would not be enough. Not even close. Storing such a large amount of data would require the big guns — storage measured in terabytes. The database has many different tables (and will be covered in more detail in a future blog posting). Many of the tables are basically index tables that store associations including comment author, creation time, the comment score, etc. However, the largest table in use is one called “com_json.” This table is simplistic in design, containing only two columns: comment_id (INT64 or BIGINT) for the primary key, and comment_json (TEXT), which stores a JSON string containing the comment itself along with all of its metadata (score, creation time, link id, etc.).

For every 100 million comments, approximately 25 gigabytes of storage is used in this table alone.  There are approximately 800 million comments in this table with a projected total of 1.8 billion once the initial data scrape is completed.  From that point forward, the table will be fed data at an average rate of 25 comments per second (based on the current average of comments made to Reddit per second).
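As a rough back-of-the-envelope sketch using only the figures above (it ignores the index tables and any per-row overhead), the projected storage for com_json works out to roughly half a terabyte:

# Rough storage projection based on the figures quoted above.
bytes_per_comment = 25e9 / 100e6        # ~25 GB per 100 million comments -> ~250 bytes each

total_comments = 1.8e9                  # projected size of com_json after the full scrape
projected_table_gb = total_comments * bytes_per_comment / 1e9
print(f"projected com_json size: ~{projected_table_gb:.0f} GB")     # ~450 GB

comments_per_second = 25                # ongoing ingest rate
daily_growth_gb = comments_per_second * 86400 * bytes_per_comment / 1e9
print(f"daily growth: ~{daily_growth_gb:.2f} GB/day")                # ~0.5 GB per day
print(f"yearly growth: ~{daily_growth_gb * 365:.0f} GB/year")        # ~200 GB per year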

Introducing the Samsung 850 EVO 1 TB SSD Drive

Samsung 850 EVO SSD

Samsung makes a lot of high-quality electronic devices.  The 850 EVO SSD drive is no exception.  This particular drive, besides storing slightly less than a terabyte of data, has read/write speeds of 520 MB/s.  More importantly for the database, this drive also has an extremely fast IOPS speed of ~ 95,000.

Why is this number important?  A database stores data in such a way that retrieval isn’t always sequential.  For the com_json table, an API call may request individual records that are located randomly throughout the data file.  The distinction between sequential and random reads matters far less for SSDs; platter drives, however, benefit when data is laid out in sequential order.  Getting back to the relevance of IOPS: if an API call requests 1,000 comments, those comments may be distributed in essentially random locations in the table data file.  How quickly the SSD can retrieve that data is directly related to its IOPS rating.  It doesn’t necessarily mean that it could retrieve 95,000 comments per second, though.  There is also page caching and disk buffering in the OS itself which influences retrieval speeds.  A page read on this particular SSD is 8 KB, which means that a full comment object fits within a single page read about 90% of the time.  It also depends on the page boundaries and exactly where the database rows are located within the table data file.
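As a rough sanity check based on the numbers above (a sketch, not a benchmark), the IOPS figure puts a ceiling on how many randomly located comments could be fetched per second:

# Upper-bound estimate for random comment retrieval, using the figures above.
iops = 95_000            # random-read IOPS quoted for the 850 EVO
one_page_fraction = 0.90 # ~90% of comment objects fit within a single 8 KB page

# Comments that span a page boundary need two page reads.
avg_pages_per_comment = one_page_fraction * 1 + (1 - one_page_fraction) * 2  # = 1.1

max_comments_per_second = iops / avg_pages_per_comment
print(f"theoretical ceiling: ~{max_comments_per_second:,.0f} comments/second")  # ~86,000

Index lookups, OS overhead and database work push real-world numbers below that ceiling, which is consistent with the benchmark figure below.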

Benchmark testing has shown that this drive will support approximately 50,000 comment object retrievals per second — which is fantastic compared to older platter drives.

SSD drives perform three main activities: reads, writes and erases.  Read and write operations are aligned to page boundaries, while erase operations are performed on entire blocks.  The block size for the 850 EVO is 1,536 KB (192 pages per block).

Linux and LVM Volumes

LVM, or Logical Volume Manager, is an abstraction layer created by Heinz Mauelshagen that makes it easy to extend disk capacity.  It allows block devices (like the Samsung 850 EVO) to be combined into volume groups.  Volume groups are then partitioned into logical volumes that can span multiple disks.  Logical volumes can also be set up with striping or mirroring enabled (RAID 0 or RAID 1, respectively).  For my database, I used RAID 0 striping across two 1 TB drives to give a total storage capacity of 1.7 TB.

Raid 0?  Are you crazy?  That means if either drive fails, you will lose your database!  If you mentioned this, you would indeed be correct!  Fortunately, I sat down and did the math to figure out the MTBF (Mean time between failures) for this particular SSD.  My strategy is to keep a real-time backup of the database’s raw data (the com_json table) and if a failure were to occur, I would have to rebuild all indexes from the backup.  The MTBF for this drive is 1.5 million hours.  For two drives, the math works out to 750,000 hours, or over 85 years, between failures of the entire logical volume.  This makes a two drive RAID 0 striped array perfectly acceptable for my use case considering that other backups are made in real time.
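For reference, the arithmetic behind that estimate looks like this (a sketch that treats drive failures as independent and takes the manufacturer’s MTBF figure at face value):

# MTBF estimate for a two-drive RAID 0 array (either drive failing loses the array).
drive_mtbf_hours = 1_500_000

# With two independent drives, the combined failure rate doubles,
# so the expected time between array failures is halved.
array_mtbf_hours = drive_mtbf_hours / 2
print(f"array MTBF: {array_mtbf_hours:,.0f} hours "
      f"(~{array_mtbf_hours / (24 * 365):.0f} years)")   # ~750,000 hours, ~85 years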

Setting up an LVM volume with Linux is a simple process.  Using Ubuntu for example, all of the software tools needed are included in a standard server or desktop install.  A decent guide to the process is located here.

In closing, the API tools I have created for researchers would not be possible with standard platter drives, given the immense size of the database and its main table, com_json.  Although it could theoretically work with platter drives, it would require an immense amount of RAM to buffer disk pages, and the API itself would probably become extremely slow due to the random record retrieval performed by many of the API calls in service.

Learn about Big Data and Social Media Ingest and Analysis