Show top datasets across publishers, with a drop-down to select the publisher. Browser version numbers are filtered on download, so the filtered version appears in the CSV too (for privacy). single_popular_dataset now copes when there is not much data, and can return the figures so DGU can reskin it in its own repo. Notes about usage stats centralised in notes.html.

--- a/.gitignore
+++ b/.gitignore
@@ -15,6 +15,10 @@
 develop-eggs
 .installed.cfg
 
+# Private info
+credentials.json
+token.dat
+
 # Installer logs
 pip-log.txt
 

--- a/README.rst
+++ b/README.rst
@@ -26,16 +26,13 @@
 1. Activate you CKAN python environment and install this extension's software::
 
     $ pyenv/bin/activate
-    $ pip install -e  git+https://github.com/okfn/ckanext-ga-report.git#egg=ckanext-ga-report
+    $ pip install -e  git+https://github.com/datagovuk/ckanext-ga-report.git#egg=ckanext-ga-report
 
 2. Ensure you development.ini (or similar) contains the info about your Google Analytics account and configuration::
 
       googleanalytics.id = UA-1010101-1
-      googleanalytics.username = googleaccount@gmail.com
-      googleanalytics.password = googlepassword
+      googleanalytics.account = Account name (e.g. data.gov.uk, see top level item at https://www.google.com/analytics)
       ga-report.period = monthly
-
-   Note that your password will be readable by system administrators on your server. Rather than use sensitive account details, it is suggested you give access to the GA account to a new Google account that you create just for this purpose.
 
 3. Set up this extension's database tables using a paster command. (Ensure your CKAN pyenv is still activated, run the command from ``src/ckanext-ga-report``, alter the ``--config`` option to point to your site config file)::
 
@@ -45,13 +42,59 @@
 
     ckan.plugins = ga-report
 
+Troubleshooting
+----------------
+
+* ``(ProgrammingError) relation "ga_url" does not exist``
+  This means that the ``paster initdb`` step has not been run successfully. Refer to the installation instructions for this extension.
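+  For example (adjust the ``--config`` path to point to your site config file, as in the installation steps)::
+
+      $ paster initdb --config=../ckan/development.ini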
+
+
+Authorization
+--------------
+
+Before you can access the data, you need to set up OAuth details by following the `instructions <https://developers.google.com/analytics/resources/tutorials/hello-analytics-api>`_. The outcome is a file called credentials.json, which should look like credentials.json.template with the relevant fields completed. The steps are repeated below for convenience, followed by an illustrative example of the finished file:
+
+1. Visit the `Google APIs Console <https://code.google.com/apis/console>`_
+
+2. Sign-in and create a project or use an existing project.
+
+3. In the `Services pane <https://code.google.com/apis/console#:services>`_ , activate Analytics API for your project. If prompted, read and accept the terms of service.
+
+4. Go to the `API Access pane <https://code.google.com/apis/console/#:access>`_
+
+5. Click Create an OAuth 2.0 client ID....
+
+6. Fill out the Branding Information fields and click Next.
+
+7. In Client ID Settings, set Application type to Installed application.
+
+8. Click Create client ID.
+
+9. The details you need below are the Client ID, Client secret, and Redirect URIs.
+
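+For illustration only, if your credentials.json.template follows the standard Google client-secrets layout for installed applications, a completed credentials.json looks roughly like this (all values are placeholders; the template in this repository is the authoritative reference)::
+
+    {
+        "installed": {
+            "client_id": "your-client-id.apps.googleusercontent.com",
+            "client_secret": "your-client-secret",
+            "redirect_uris": ["urn:ietf:wg:oauth:2.0:oob", "http://localhost"],
+            "auth_uri": "https://accounts.google.com/o/oauth2/auth",
+            "token_uri": "https://accounts.google.com/o/oauth2/token"
+        }
+    }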
+
+Once you have set up your credentials.json file, you can generate an OAuth token file with the
+following command. After you finish granting permission in the browser, your OAuth token is
+stored in a file called token.dat::
+
+    $ paster getauthtoken --config=../ckan/development.ini
+
 
 Tutorial
 --------
 
-Download some GA data and store it in CKAN's db. (Ensure your CKAN pyenv is still activated, run the command from ``src/ckanext-ga-report``, alter the ``--config`` option to point to your site config file)::
+Download some GA data and store it in CKAN's database. (Ensure your CKAN pyenv is still activated, run the command from ``src/ckanext-ga-report``, and alter the ``--config`` option to point to your site config file.) Specify the name of your auth token file from the previous step (token.dat by default)::
 
-    $ paster loadanalytics latest --config=../ckan/development.ini
+    $ paster loadanalytics token.dat latest --config=../ckan/development.ini
+
+The value after the token file specifies how much data to retrieve. It can be one of the following (an example invocation follows the list):
+
+* **all**         - data for all time (since 2010)
+
+* **latest**      - (default) just the 'latest' data
+
+* **YYYY-MM**     - just data for the specified month
+
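+For example, to load the data for one specific month (a hypothetical invocation; adjust the paths to your setup)::
+
+    $ paster loadanalytics token.dat 2012-10 --config=../ckan/development.ini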
 
 
 Software Licence

--- a/ckanext/ga_report/command.py
+++ b/ckanext/ga_report/command.py
@@ -1,7 +1,10 @@
 import logging
+import datetime
 
 from ckan.lib.cli import CkanCommand
-# No other CKAN imports allowed until _load_config is run, or logging is disabled
+# No other CKAN imports allowed until _load_config is run,
+# or logging is disabled
+
 
 class InitDB(CkanCommand):
     """Initialise the extension's database tables
@@ -23,36 +26,85 @@
         ga_model.init_tables()
         log.info("DB tables are setup")
 
+
+class GetAuthToken(CkanCommand):
+    """ Get's the Google auth token
+
+    Usage: paster getauthtoken <credentials_file>
+
+    Where <credentials_file> is the file name containing the details
+    for the service (obtained from https://code.google.com/apis/console).
+    By default this is set to credentials.json
+    """
+    summary = __doc__.split('\n')[0]
+    usage = __doc__
+    max_args = 1
+    min_args = 0
+
+    def command(self):
+        """
+        In this case we don't want a valid service, but rather just to
+        force the user through the auth flow. We allow this to complete to
+        act as a form of verification instead of just getting the token and
+        assuming it is correct.
+        """
+        from ga_auth import init_service
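+        # The resulting OAuth token is written to 'token.dat' in the current
+        # directory; pass that file to 'paster loadanalytics' afterwards.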
+        init_service('token.dat',
+                      self.args[0] if self.args
+                                   else 'credentials.json')
+
+
 class LoadAnalytics(CkanCommand):
     """Get data from Google Analytics API and save it
     in the ga_model
 
-    Usage: paster loadanalytics <time-period>
+    Usage: paster loadanalytics <tokenfile> <time-period>
 
-    Where <time-period> is:
+    Where <tokenfile> is the name of the auth token file from
+    the getauthtoken step.
+
+    And where <time-period> is:
         all         - data for all time
         latest      - (default) just the 'latest' data
-        YYYY-MM-DD  - just data for all time periods going
-                      back to (and including) this date
+        YYYY-MM     - just data for the specific month
     """
     summary = __doc__.split('\n')[0]
     usage = __doc__
-    max_args = 1
-    min_args = 0
+    max_args = 2
+    min_args = 1
+
+    def __init__(self, name):
+        super(LoadAnalytics, self).__init__(name)
+        self.parser.add_option('-d', '--delete-first',
+                               action='store_true',
+                               default=False,
+                               dest='delete_first',
+                               help='Delete data for the period first')
 
     def command(self):
         self._load_config()
 
         from download_analytics import DownloadAnalytics
-        downloader = DownloadAnalytics()
-        
-        time_period = self.args[0] if self.args else 'latest'
+        from ga_auth import (init_service, get_profile_id)
+
+        try:
+            svc = init_service(self.args[0], None)
+        except TypeError:
+            print ('Have you correctly run the getauthtoken task and '
+                   'specified the correct token file?')
+            return
+
+        downloader = DownloadAnalytics(svc, profile_id=get_profile_id(svc),
+                                       delete_first=self.options.delete_first)
+
+        time_period = self.args[1] if self.args and len(self.args) > 1 \
+            else 'latest'
         if time_period == 'all':
             downloader.all_()
         elif time_period == 'latest':
             downloader.latest()
         else:
-            since_date = datetime.datetime.strptime(time_period, '%Y-%m-%d')
-            downloader.since_date(since_date)
+            # The month to use
+            for_date = datetime.datetime.strptime(time_period, '%Y-%m')
+            downloader.specific_month(for_date)
 
-

--- a/ckanext/ga_report/controller.py
+++ b/ckanext/ga_report/controller.py
@@ -1,10 +1,395 @@
+import re
+import csv
+import sys
 import logging
-from ckan.lib.base import BaseController, c, render
-import report_model
+import operator
+import collections
+from ckan.lib.base import (BaseController, c, g, render, request, response, abort)
+
+import sqlalchemy
+from sqlalchemy import func, cast, Integer
+import ckan.model as model
+from ga_model import GA_Url, GA_Stat, GA_ReferralStat
 
 log = logging.getLogger('ckanext.ga-report')
 
+
+def _get_month_name(strdate):
+    import calendar
+    from time import strptime
+    d = strptime(strdate, '%Y-%m')
+    return '%s %s' % (calendar.month_name[d.tm_mon], d.tm_year)
+
+
+def _month_details(cls):
+    months = []
+    vals = model.Session.query(cls.period_name).distinct().all()
+    for m in vals:
+        months.append( (m[0], _get_month_name(m[0])))
+    return sorted(months, key=operator.itemgetter(0), reverse=True)
+
+
 class GaReport(BaseController):
+
+    def csv(self, month):
+        import csv
+
+        q = model.Session.query(GA_Stat)
+        if month != 'all':
+            q = q.filter(GA_Stat.period_name==month)
+        entries = q.order_by('GA_Stat.period_name, GA_Stat.stat_name, GA_Stat.key').all()
+
+        response.headers['Content-Type'] = "text/csv; charset=utf-8"
+        response.headers['Content-Disposition'] = str('attachment; filename=stats_%s.csv' % (month,))
+
+        writer = csv.writer(response)
+        writer.writerow(["Period", "Statistic", "Key", "Value"])
+
+        for entry in entries:
+            writer.writerow([entry.period_name.encode('utf-8'),
+                             entry.stat_name.encode('utf-8'),
+                             entry.key.encode('utf-8'),
+                             entry.value.encode('utf-8')])
+
     def index(self):
-        return render('index.html')
-
+
+        # Get the month details by fetching distinct values and determining the
+        # month names from the values.
+        c.months = _month_details(GA_Stat)
+
+        # Work out which month to show, based on query params of the first item
+        c.month_desc = 'all months'
+        c.month = request.params.get('month', '')
+        if c.month:
+            c.month_desc = ''.join([m[1] for m in c.months if m[0]==c.month])
+
+        q = model.Session.query(GA_Stat).\
+            filter(GA_Stat.stat_name=='Totals')
+        if c.month:
+            q = q.filter(GA_Stat.period_name==c.month)
+        entries = q.order_by('ga_stat.key').all()
+
+        def clean_key(key, val):
+            if key in ['Average time on site', 'Pages per visit', 'New visits']:
+                val =  "%.2f" % round(float(val), 2)
+                if key == 'Average time on site':
+                    mins, secs = divmod(float(val), 60)
+                    hours, mins = divmod(mins, 60)
+                    val = '%02d:%02d:%02d (%s seconds) ' % (hours, mins, secs, val)
+                if key == 'New visits':
+                    val = "%s%%" % val
+            if key in ['Bounces', 'Total page views', 'Total visits']:
+                val = int(val)
+
+            return key, val
+
+        c.global_totals = []
+        if c.month:
+            for e in entries:
+                key, val = clean_key(e.key, e.value)
+                c.global_totals.append((key, val))
+        else:
+            d = collections.defaultdict(list)
+            for e in entries:
+                d[e.key].append(float(e.value))
+            for k, v in d.iteritems():
+                if k in ['Bounces', 'Total page views', 'Total visits']:
+                    v = sum(v)
+                else:
+                    v = float(sum(v))/len(v)
+                key, val = clean_key(k,v)
+                c.global_totals.append((key, val))
+            c.global_totals = sorted(c.global_totals, key=operator.itemgetter(0))
+
+        keys = {
+            'Browser versions': 'browser_versions',
+            'Browsers': 'browsers',
+            'Operating Systems versions': 'os_versions',
+            'Operating Systems': 'os',
+            'Social sources': 'social_networks',
+            'Languages': 'languages',
+            'Country': 'country'
+        }
+
+        def shorten_name(name, length=60):
+            return (name[:length] + '..') if len(name) > length else name
+
+        def fill_out_url(url):
+            import urlparse
+            return urlparse.urljoin(g.site_url, url)
+
+        c.social_referrer_totals, c.social_referrers = [], []
+        q = model.Session.query(GA_ReferralStat)
+        q = q.filter(GA_ReferralStat.period_name==c.month) if c.month else q
+        q = q.order_by('ga_referrer.count::int desc')
+        for entry in q.all():
+            c.social_referrers.append((shorten_name(entry.url), fill_out_url(entry.url),
+                                       entry.source,entry.count))
+
+        q = model.Session.query(GA_ReferralStat.url,
+                                func.sum(GA_ReferralStat.count).label('count'))
+        q = q.filter(GA_ReferralStat.period_name==c.month) if c.month else q
+        q = q.order_by('count desc').group_by(GA_ReferralStat.url)
+        for entry in q.all():
+            c.social_referrer_totals.append((shorten_name(entry[0]), fill_out_url(entry[0]),'',
+                                            entry[1]))
+
+
+        browser_version_re = re.compile(r"(.*)\((.*)\)")
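+        # Assumes 'Browser versions' keys are stored as "Name (version)",
+        # e.g. "Firefox (16.0.1)"; clean_field() below trims the version to
+        # at most major.minor (or a coarser bucket for Safari/Android).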
+        for k, v in keys.iteritems():
+
+            def clean_field(key):
+                if k != 'Browser versions':
+                    return key
+                m = browser_version_re.match(key)
+                browser = m.groups()[0].strip()
+                ver = m.groups()[1]
+                parts = ver.split('.')
+                if len(parts) > 1:
+                    if parts[1][0] == '0':
+                        ver = parts[0]
+                    else:
+                        ver = "%s.%s" % (parts[0],parts[1])
+                if browser in ['Safari','Android Browser']:  # Special case complex version nums
+                    ver = parts[0]
+                    if len(ver) > 2:
+                        ver = "%s%sX" % (ver[0], ver[1])
+
+                return "%s (%s)" % (browser, ver,)
+
+            q = model.Session.query(GA_Stat).\
+                filter(GA_Stat.stat_name==k)
+            if c.month:
+                q = q.filter(GA_Stat.period_name==c.month).\
+                    order_by('ga_stat.value::int desc')
+
+            d = collections.defaultdict(int)
+            for e in q.all():
+                d[e.key] += int(e.value)
+            entries = []
+            for key, val in d.iteritems():
+                entries.append((key,val,))
+            entries = sorted(entries, key=operator.itemgetter(1), reverse=True)
+
+            def percent(num, total):
+                p = 100 * float(num)/float(total)
+                return "%.2f%%" % round(p, 2)
+
+            # Get the total for each set of values and then set the value as
+            # a percentage of the total
+            if k == 'Social sources':
+                total = sum([x for n,x in c.global_totals if n == 'Total visits'])
+            else:
+                total = sum([num for _,num in entries])
+            setattr(c, v, [(key, percent(num, total)) for key, num in entries])
+
+        return render('ga_report/site/index.html')
+
+
+class GaDatasetReport(BaseController):
+    """
+    Displays the pageview and visit count for datasets
+    with options to filter by publisher and time period.
+    """
+    def publisher_csv(self, month):
+        '''
+        Returns a CSV of each publisher with the total number of dataset
+        views & visits.
+        '''
+        c.month = month if not month == 'all' else ''
+        response.headers['Content-Type'] = "text/csv; charset=utf-8"
+        response.headers['Content-Disposition'] = str('attachment; filename=publishers_%s.csv' % (month,))
+
+        writer = csv.writer(response)
+        writer.writerow(["Publisher Title", "Publisher Name", "Views", "Visits", "Period Name"])
+
+        for publisher,view,visit in _get_top_publishers(None):
+            writer.writerow([publisher.title.encode('utf-8'),
+                             publisher.name.encode('utf-8'),
+                             view,
+                             visit,
+                             month])
+
+    def dataset_csv(self, id='all', month='all'):
+        '''
+        Returns a CSV with the number of views & visits for each dataset.
+
+        :param id: A Publisher ID or None if you want for all
+        :param month: The time period, or 'all'
+        '''
+        c.month = month if not month == 'all' else ''
+        c.publisher = None
+        c.publisher_name = 'all'
+        if id != 'all':
+            c.publisher = model.Group.get(id)
+            if not c.publisher:
+                abort(404, 'A publisher with that name could not be found')
+            c.publisher_name = c.publisher.name
+
+        packages = self._get_packages(c.publisher)
+        response.headers['Content-Type'] = "text/csv; charset=utf-8"
+        response.headers['Content-Disposition'] = \
+            str('attachment; filename=datasets_%s_%s.csv' % (c.publisher_name, month,))
+
+        writer = csv.writer(response)
+        writer.writerow(["Dataset Title", "Dataset Name", "Views", "Visits", "Period Name"])
+
+        for package,view,visit in packages:
+            writer.writerow([package.title.encode('utf-8'),
+                             package.name.encode('utf-8'),
+                             view,
+                             visit,
+                             month])
+
+    def publishers(self):
+        '''A list of publishers and the number of views/visits for each'''
+
+        # Get the month details by fetching distinct values and determining the
+        # month names from the values.
+        c.months = _month_details(GA_Url)
+
+        # Work out which month to show, based on query params of the first item
+        c.month = request.params.get('month', '')
+        c.month_desc = 'all months'
+        if c.month:
+            c.month_desc = ''.join([m[1] for m in c.months if m[0]==c.month])
+
+        c.top_publishers = _get_top_publishers()
+
+        return render('ga_report/publisher/index.html')
+
+    def _get_packages(self, publisher=None, count=-1):
+        '''Returns the datasets in order of visits'''
+        if count == -1:
+            count = sys.maxint
+
+        q = model.Session.query(GA_Url)\
+            .filter(GA_Url.url.like('/dataset/%'))
+        if publisher:
+            q = q.filter(GA_Url.department_id==publisher.name)
+        if c.month:
+            q = q.filter(GA_Url.period_name==c.month)
+        q = q.order_by('ga_url.visitors::int desc')
+
+        if c.month:
+            top_packages = []
+            for entry in q.limit(count):
+                package_name = entry.url[len('/dataset/'):]
+                p = model.Package.get(package_name)
+                if p:
+                    top_packages.append((p, entry.pageviews, entry.visitors))
+                else:
+                    log.warning('Could not find package "%s"', package_name)
+        else:
+            ds = {}
+            for entry in q:
+                if len(ds) >= count:
+                    break
+                package_name = entry.url[len('/dataset/'):]
+                p = model.Package.get(package_name)
+                if p:
+                    if not p in ds:
+                        ds[p] = {'views': 0, 'visits': 0}
+                    ds[p]['views'] = ds[p]['views'] + int(entry.pageviews)
+                    ds[p]['visits'] = ds[p]['visits'] + int(entry.visitors)
+                else:
+                    log.warning('Could not find package "%s"', package_name)
+
+            results = []
+            for k, v in ds.iteritems():
+                results.append((k,v['views'],v['visits']))
+
+            top_packages = sorted(results, key=operator.itemgetter(1), reverse=True)
+        return top_packages
+
+    def read(self):
+        '''
+        Lists the most popular datasets across all publishers
+        '''
+        return self.read_publisher(None)
+
+    def read_publisher(self, id):
+        '''
+        Lists the most popular datasets for a publisher (or across all publishers)
+        '''
+        count = 20
+
+        c.publishers = _get_publishers()
+
+        id = request.params.get('publisher', id)
+        if id and id != 'all':
+            c.publisher = model.Group.get(id)
+            if not c.publisher:
+                abort(404, 'A publisher with that name could not be found')
+            c.publisher_name = c.publisher.name
+        c.top_packages = []  # list of (package, views, visits) tuples
+
+        # Get the month details by fetching distinct values and determining the
+        # month names from the values.
+        c.months = _month_details(GA_Url)
+
+        # Work out which month to show, based on query params of the first item
+        c.month = request.params.get('month', '')
+        if not c.month:
+            c.month_desc = 'all months'
+        else:
+            c.month_desc = ''.join([m[1] for m in c.months if m[0]==c.month])
+
+        c.publisher_page_views = 0
+        q = model.Session.query(GA_Url).\
+            filter(GA_Url.url=='/publisher/%s' % c.publisher_name)
+        if c.month:
+            entry = q.filter(GA_Url.period_name==c.month).first()
+            c.publisher_page_views = entry.pageviews if entry else 0
+        else:
+            for e in q.all():
+                c.publisher_page_views = c.publisher_page_views  + int(e.pageviews)
+
+        c.top_packages = self._get_packages(c.publisher, 20)
+
+        return render('ga_report/publisher/read.html')
+
+def _get_top_publishers(limit=20):
+    '''
+    Returns a list of the top 20 publishers by dataset visits.
+    (The number to show can be varied with 'limit')
+    '''
+    connection = model.Session.connection()
+    q = """
+        select department_id, sum(pageviews::int) views, sum(visitors::int) visits
+        from ga_url
+        where department_id <> ''"""
+    if c.month:
+        q = q + """
+                and period_name=%s
+        """
+    q = q + """
+            group by department_id order by visits desc
+        """
+    if limit:
+        q = q + " limit %s;" % (limit)
+
+    # Add this back (before and period_name =%s) if you want to ignore publisher
+    # homepage views
+    # and not url like '/publisher/%%'
+
+    top_publishers = []
+    res = connection.execute(q, c.month) if c.month else connection.execute(q)
+
+    for row in res:
+        g = model.Group.get(row[0])
+        if g:
+            top_publishers.append((g, row[1], row[2]))
+    return top_publishers
+
+def _get_publishers():
+    '''
+    Returns a list of all publishers. Each item is a tuple:
+      (name, title)
+    '''
+    publishers = []
+    for pub in model.Session.query(model.Group).\
+               filter(model.Group.type=='publisher').\
+               filter(model.Group.state=='active').\
+               order_by(model.Group.name):
+        publishers.append((pub.name, pub.title))
+    return publishers
+

--- a/ckanext/ga_report/download_analytics.py
+++ b/ckanext/ga_report/download_analytics.py
@@ -1,23 +1,40 @@
+import os
 import logging
 import datetime
-
+import collections
 from pylons import config
 
 import ga_model
-from ga_client import GA
+
+#from ga_client import GA
 
 log = logging.getLogger('ckanext.ga-report')
 
 FORMAT_MONTH = '%Y-%m'
+MIN_VIEWS = 50
+MIN_VISITS = 20
 
 class DownloadAnalytics(object):
     '''Downloads and stores analytics info'''
-    def __init__(self):
+
+    def __init__(self, service=None, profile_id=None, delete_first=False):
         self.period = config['ga-report.period']
-    
-    def all_(self):
-        pass
-    
+        self.service = service
+        self.profile_id = profile_id
+        self.delete_first = delete_first
+
+    def specific_month(self, date):
+        import calendar
+
+        first_of_this_month = datetime.datetime(date.year, date.month, 1)
+        _, last_day_of_month = calendar.monthrange(int(date.year), int(date.month))
+        last_of_this_month =  datetime.datetime(date.year, date.month, last_day_of_month)
+        periods = ((date.strftime(FORMAT_MONTH),
+                    last_day_of_month,
+                    first_of_this_month, last_of_this_month),)
+        self.download_and_store(periods)
+
+
     def latest(self):
         if self.period == 'monthly':
             # from first of this month to today
@@ -31,13 +48,13 @@
         self.download_and_store(periods)
 
 
-    def since_date(self, since_date):
+    def for_date(self, for_date):
-        assert isinstance(since_date, datetime.datetime)
+        assert isinstance(for_date, datetime.datetime)
         periods = [] # (period_name, period_complete_day, start_date, end_date)
         if self.period == 'monthly':
             first_of_the_months_until_now = []
-            year = since_date.year
-            month = since_date.month
+            year = for_date.year
+            month = for_date.month
             now = datetime.datetime.now()
             first_of_this_month = datetime.datetime(now.year, now.month, 1)
             while True:
@@ -49,8 +66,8 @@
                     break
                 elif first_of_the_month < first_of_this_month:
                     in_the_next_month = first_of_the_month + datetime.timedelta(40)
-                    last_of_the_month == datetime.datetime(in_the_next_month.year,
-                                                           in_the_next_month.month, a)\
+                    last_of_the_month = datetime.datetime(in_the_next_month.year,
+                                                           in_the_next_month.month, 1)\
                                                            - datetime.timedelta(1)
                     periods.append((now.strftime(FORMAT_MONTH), 0,
                                     first_of_the_month, last_of_the_month))
@@ -71,46 +88,294 @@
             return period_name + ' (up to %ith)' % period_complete_day
         else:
             return period_name
-        
+
 
     def download_and_store(self, periods):
         for period_name, period_complete_day, start_date, end_date in periods:
+            if self.delete_first:
+                log.info('Deleting existing Analytics for period "%s"',
+                         period_name)
+                ga_model.delete(period_name)
             log.info('Downloading Analytics for period "%s" (%s - %s)',
                      self.get_full_period_name(period_name, period_complete_day),
                      start_date.strftime('%Y %m %d'),
                      end_date.strftime('%Y %m %d'))
-            data = self.download(start_date, end_date)
-            log.info('Storing Analytics for period "%s"',
+            data = self.download(start_date, end_date, '~/dataset/[a-z0-9-_]+')
+            log.info('Storing Dataset Analytics for period "%s"',
                      self.get_full_period_name(period_name, period_complete_day))
-            self.store(period_name, period_complete_day, data)
-
-    @classmethod
-    def download(cls, start_date, end_date):
+            self.store(period_name, period_complete_day, data, )
+
+            data = self.download(start_date, end_date, '~/publisher/[a-z0-9-_]+')
+            log.info('Storing Publisher Analytics for period "%s"',
+                     self.get_full_period_name(period_name, period_complete_day))
+            self.store(period_name, period_complete_day, data,)
+
+            ga_model.update_publisher_stats(period_name) # about 30 seconds.
+            self.sitewide_stats( period_name )
+
+            self.update_social_info(period_name, start_date, end_date)
+
+    def update_social_info(self, period_name, start_date, end_date):
+        start_date = start_date.strftime('%Y-%m-%d')
+        end_date = end_date.strftime('%Y-%m-%d')
+        query = 'ga:hasSocialSourceReferral=~Yes$'
+        metrics = 'ga:entrances'
+        sort = '-ga:entrances'
+
+        # Supported query params at
+        # https://developers.google.com/analytics/devguides/reporting/core/v3/reference
+        results = self.service.data().ga().get(
+                                 ids='ga:' + self.profile_id,
+                                 filters=query,
+                                 start_date=start_date,
+                                 metrics=metrics,
+                                 sort=sort,
+                                 dimensions="ga:landingPagePath,ga:socialNetwork",
+                                 max_results=10000,
+                                 end_date=end_date).execute()
+        data = collections.defaultdict(list)
+        rows = results.get('rows',[])
+        for row in rows:
+            from ga_model import _normalize_url
+            data[_normalize_url(row[0])].append( (row[1], int(row[2]),) )
+        ga_model.update_social(period_name, data)
+
+
+    def download(self, start_date, end_date, path='~/dataset/[a-z0-9-_]+'):
         '''Get data from GA for a given time period'''
         start_date = start_date.strftime('%Y-%m-%d')
         end_date = end_date.strftime('%Y-%m-%d')
-        # url
-        #query = 'ga:pagePath=~^%s,ga:pagePath=~^%s' % \
-        #        (PACKAGE_URL, self.resource_url_tag)
-        query = 'ga:pagePath=~^/dataset/'
-        metrics = 'ga:uniquePageviews'
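+        # 'path' starts with '~', so the filter becomes a GA regex match,
+        # e.g. 'ga:pagePath=~/dataset/[a-z0-9-_]+$'.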
+        query = 'ga:pagePath=%s$' % path
+        metrics = 'ga:uniquePageviews, ga:visitors'
         sort = '-ga:uniquePageviews'
-        for entry in GA.ga_query(query_filter=query,
-                                 from_date=start_date,
+
+        # Supported query params at
+        # https://developers.google.com/analytics/devguides/reporting/core/v3/reference
+        results = self.service.data().ga().get(
+                                 ids='ga:' + self.profile_id,
+                                 filters=query,
+                                 start_date=start_date,
                                  metrics=metrics,
                                  sort=sort,
-                                 to_date=end_date):
-            print entry
-            import pdb; pdb.set_trace()
-            for dim in entry.dimension:
-                if dim.name == "ga:pagePath":
-                    package = dim.value
-                    count = entry.get_metric(
-                        'ga:uniquePageviews').value or 0
-                    packages[package] = int(count)
-        return packages
+                                 dimensions="ga:pagePath",
+                                 max_results=10000,
+                                 end_date=end_date).execute()
+
+        if os.getenv('DEBUG'):
+            import pprint
+            pprint.pprint(results)
+            print 'Total results: %s' % results.get('totalResults')
+
+        packages = []
+        for entry in results.get('rows'):
+            (loc,pageviews,visits) = entry
+            packages.append( ('http