Tokens for users’ emails in various marketing automation/email distribution systems

In my job I help our customers build URLs for use in whatever tool they use to send emails to their customers and prospects, so I keep a running list of the token each system uses as the template parameter for the recipient's email address in a URL (so they know who is opening the link).

I thought I’d share the running list I have collected:

System            Template Code
HubSpot           {{}}
Constant Contact  $Subscriber.Email$
Marketo           {{lead.Email Address}}
Pardot            %%user_email%%
Salesforce        {!Contact.Email}
Eloqua            <span class=eloquaemail>EmailAddress</span>
ExactTarget       %%emailaddr%%
BuzzBuilder       %%emailaddress%%
ActiveCampaign    %EMAIL%
MailChimp         *|EMAIL|*
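
For example, a link in a Marketo email might look like the following (the domain and path are placeholders for illustration):

    https://example.com/whitepaper?email={{lead.Email Address}}

When the email is sent, the system replaces the token with each recipient's address, so the landing page knows who clicked through.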

Resolving problems with PIP after Upgrading to OS X El Capitan

After I upgraded my Mac to El Capitan, I had problems installing new packages. I was getting access-denied errors when some packages tried to upgrade (and hence remove) existing packages.

For those not using virtualenvs, I had packages installed in the default Python site-packages directory (/Library/Python/2.7/site-packages in my case). This was causing problems because El Capitan includes a new feature called System Integrity Protection (also called "rootless") that prevents you (even as root via sudo) from modifying files in a number of system directories, which seemed to be what was affecting this.

Below are the steps I took to resolve the issue; they should serve as a general outline for resolving it yourself:

  1. Capture a list of all the packages you have installed: pip freeze > some-file-to-keep-results
  2. Disable System Integrity Protection: reboot into recovery mode (hold Command+R at startup), launch a terminal, run csrutil disable, and reboot back into normal mode.
  3. Uninstall all packages from pip: pip freeze | xargs sudo pip uninstall -y, or uninstall the packages manually.
  4. Ensure that all the packages in the system site-packages directory (/Library/Python/2.7/site-packages) are gone; remove any remaining packages manually.
  5. Re-enable System Integrity Protection using the same procedure as step 2, but with the csrutil enable command.
  6. Once rebooted into normal mode again, install a version of Python that isn't the one that comes with OS X. brew install python will do that if you have the Homebrew package manager installed. This is a better setup for Python development anyway.
  7. Install pip manually by downloading get-pip.py and running it with python. You can also install pip via Homebrew, but there are some reasons to do it the manual way.
  8. Finally, and this might not be required in your case: pip still wasn't available from the shell, so I needed to manually create a command to invoke it. I created a script named pip in /usr/local/bin that invokes the pip package:
    #!/bin/bash
    # Invoke the pip package installed under Homebrew's Python
    python /usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pip/ "$@"

    Then I made the script executable with chmod uga+x pip.

  9. After pip was back in place and working, I re-installed the packages I previously had with pip install -r some-file-to-keep-results.

    That’s it. Hopefully it’s at least that easy for you.
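
For reference, here is the whole flow condensed into shell commands (a sketch of the steps above; the package-list filename and paths are placeholders to adjust for your setup):

    pip freeze > ~/pip-packages.txt                 # 1. save the package list
    # 2. reboot into recovery mode, run csrutil disable, reboot
    pip freeze | xargs sudo pip uninstall -y        # 3. remove all packages
    ls /Library/Python/2.7/site-packages            # 4. verify nothing is left
    # 5. reboot into recovery mode, run csrutil enable, reboot
    brew install python                             # 6. install a non-system Python
    python get-pip.py                               # 7. after downloading get-pip.py
    pip install -r ~/pip-packages.txt               # 9. restore your packages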

Executing script blocks in HTML injected via JavaScript

With all the challenges of XSS, your goal is most often to prevent unintentional script execution. Ironically, getting dynamically injected scripts to run when you want them to can be as hard as preventing the ones you don't want.

My problem came up while building plugins for Docalytics documents. Essentially, we allow widgets between the pages of an HTML5 document viewer: the owner of a document can define HTML and JavaScript to be placed between pages of the document to support things like video, surveys, etc.

The entire viewer is written in JavaScript, so these plugins are read by the viewer JS and created dynamically as needed. This meant that if the HTML created by the document owner included script tags, they needed to be run at the right time.

Depending on your scenario, this might not be too hard. jQuery takes care of it for you when you add HTML via its methods, such as $(...).html(...): jQuery parses the HTML itself, identifies script blocks, and executes them via eval(...). The problem comes in for scripts with a src attribute rather than inline scripts.

jQuery loads script tags with a src attribute via AJAX and then executes them. This is fine if the script is located on your servers, but in my case the scripts were hosted on third-party sites, and those servers weren't set up for cross-domain requests.

My final solution was based on this StackOverflow question: I injected the HTML using the raw DOM APIs, then ran a helper function on that node to go back and execute the scripts. My modified version of the StackOverflow answer is below; it handles the case of src attributes on the scripts.

function execute_child_scripts_on_element($element) {
    function nodeName(elem, name) {
        return elem.nodeName && elem.nodeName.toUpperCase() === name.toUpperCase();
    }

    function evalScript(elem) {
        var data = (elem.text || elem.textContent || elem.innerHTML || ""),
            head = document.getElementsByTagName("head")[0] || document.documentElement,
            script = document.createElement("script");

        script.type = "text/javascript";

        if (elem.src) {
            // Let the browser load the script itself; this avoids the
            // AJAX-based loading (and its cross-domain problems).
            script.src = elem.src;
            script.async = !!elem.async;
        } else {
            try {
                // doesn't work on ie...
                script.appendChild(document.createTextNode(data));
            } catch (e) {
                // IE has funky script nodes
                script.text = data;
            }
        }

        head.insertBefore(script, head.firstChild);
    }

    // main section of function
    var scripts = [],
        children_nodes = $element[0].childNodes,
        child, script, i;

    // Collect the element's child script nodes that contain JavaScript.
    for (i = 0; children_nodes[i]; i++) {
        child = children_nodes[i];
        if (nodeName(child, "script") &&
            (!child.type || child.type.toLowerCase() === "text/javascript")) {
            scripts.push(child);
        }
    }

    // Remove each original (inert) script node and re-create it so it runs.
    for (i = 0; scripts[i]; i++) {
        script = scripts[i];
        if (script.parentNode) {
            script.parentNode.removeChild(script);
        }
        evalScript(script);
    }
}
RESTful App Engine

Continuing with my trend of posting presentations I gave a while ago: last year at Twin Cities DevFest I gave a presentation about building RESTful JSON services on Google App Engine.

The presentation is designed to explain the ideas of REST and covers the following topics:

  • How REST differs from RPC-style APIs
  • The pros and cons of JSON versus XML
  • What HTTP verbs are appropriate for which operations, including PATCH, which is seen less often in the wild
  • What HTTP status codes should be used for which scenarios
  • Tools to use when developing RESTful APIs
  • Python code examples implementing the same API in plain webapp2, Google Cloud Endpoints, and webapp2 + Pytracts
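
To give a flavor of the webapp2 examples, here is a minimal sketch (my own illustration, not code from the slides; the resource name and route are hypothetical):

    import json
    import webapp2

    class ContactHandler(webapp2.RequestHandler):
        """A REST-style resource at /contacts/<contact_id>."""

        def get(self, contact_id):
            # 200 OK with a JSON representation of the resource
            self.response.headers["Content-Type"] = "application/json"
            self.response.write(json.dumps({"id": contact_id, "email": "someone@example.com"}))

        def delete(self, contact_id):
            # 204 No Content: success, nothing to return
            self.response.set_status(204)

    app = webapp2.WSGIApplication([
        webapp2.Route(r"/contacts/<contact_id>", ContactHandler),
    ])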

In the talk I introduce my JSON serialization library Pytracts (which was called ProtoPy at the time of the presentation).

Slides are here and video of the presentation is here. I'm planning on recording a screencast of the presentation so that the audio quality is a bit better.

Data Migration on the App Engine Datastore

I gave a presentation a couple years ago at the Twin Cities DevFest conference and I’ve been meaning to post the slides.

The gist of the talk is that with web frameworks like Rails and Django, data migration is a feature of the data-model tools; with the App Engine Datastore (now Cloud Datastore), you have to do the work yourself. In the talk I give Python examples of how to update NDB models and how to use deferred tasks and mapper/mapreduce jobs to update existing entities.
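
As a rough illustration of the deferred-task approach (a sketch with a hypothetical model and property, not code from the slides):

    from google.appengine.ext import deferred, ndb

    class User(ndb.Model):
        email = ndb.StringProperty()
        email_lower = ndb.StringProperty()  # new property being backfilled

    BATCH_SIZE = 100

    def migrate_users(cursor=None):
        # Process one page of entities per task so no single request times out.
        users, next_cursor, more = User.query().fetch_page(BATCH_SIZE, start_cursor=cursor)
        for user in users:
            user.email_lower = (user.email or "").lower()
        ndb.put_multi(users)
        if more:
            # Chain the next batch as a new deferred task.
            deferred.defer(migrate_users, cursor=next_cursor)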

The slides are here:

I’m hoping to record myself giving the presentation soon.

Setting up port forwarding on Mac OS X El Capitan for Google App Engine local development

I’m going to preface this post with the fact that I’m not an expert with pf, the tool I’m using here to do this. I’ve just hacked together something that works from other tutorials I’ve found online.

By default the App Engine local development server runs on port 8080, which is fine, but our app has some domain regex rules that are hard to test when the URL isn’t similar to how it’s deployed in production. To make things more realistic, I edited my /etc/hosts file to give me “real” domains for my local dev environment. That solves part of the issue, but the other part is getting things running on the right port. The first 1024 ports on *nix are restricted, so directly running the development app server on port 80 would be a pain, so I set up port forwarding instead.

Those tutorials got me going in the right direction, but didn’t quite work for me. Here are my steps.

First, create a new rules file in pf.anchors:

sudo vim /etc/pf.anchors/local-appengine

Paste the following into the file and save it (just change 8080 if you are using a different port):

rdr pass on lo0 inet proto tcp from any to any port 80 -> port 8080

Now edit /etc/pf.conf, which should look like this when you start:

# Default PF configuration file.
# This file contains the main ruleset, which gets automatically loaded
# at startup.  PF will not be automatically enabled, however.  Instead,
# each component which utilizes PF is responsible for enabling and disabling
# PF via -E and -X as documented in pfctl(8).  That will ensure that PF
# is disabled only when the last enable reference is released.
# Care must be taken to ensure that the main ruleset does not get flushed,
# as the nested anchors rely on the anchor point defined here. In addition,
# to the anchors loaded by this file, some system services would dynamically 
# insert anchors into the main ruleset. These anchors will be added only when
# the system service is used and would removed on termination of the service.
# See pf.conf(5) for syntax.

# com.apple anchor point
scrub-anchor "com.apple/*"
nat-anchor "com.apple/*"
rdr-anchor "com.apple/*"
dummynet-anchor "com.apple/*"
anchor "com.apple/*"
load anchor "com.apple" from "/etc/pf.anchors/com.apple"

Update the non-comment part of the file to look like this:

scrub-anchor "com.apple/*"
nat-anchor "com.apple/*"
rdr-anchor "com.apple/*"
rdr-anchor "forwarding"
dummynet-anchor "com.apple/*"
anchor "com.apple/*"
anchor "forwarding"
load anchor "com.apple" from "/etc/pf.anchors/com.apple"
load anchor "forwarding" from "/etc/pf.anchors/local-appengine"

Note that you are just adding the following lines:

rdr-anchor "forwarding"
anchor "forwarding"
load anchor "forwarding" from "/etc/pf.anchors/local-appengine"

but the order of commands in the file matters, so it has to look roughly like the above.

Finally, enable port forwarding from bash with the following command:

sudo pfctl -ef /etc/pf.conf
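
To confirm the redirect rule actually loaded, you can list the rules in the anchor (optional; "forwarding" is the anchor name used above):

sudo pfctl -a forwarding -s nat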

You can disable it with the following command:

sudo pfctl -df /etc/pf.conf

Google App Engine feature request form

I was referred to the feature request form for App Engine as part of a support ticket and hadn’t seen a link to it previously. It may be useful to others, though I think it’s App Engine-specific, not general to all products on Google Cloud.

Cleaning out old data from Google App Engine map reduce

If you’re on Google App Engine and you’re looking for a way to do some work over a large set of data in the datastore, there’s a good chance you’ll turn to App Engine Mapreduce. Unfortunately, the UI for this tool leaves something (much) to be desired.

The control screen looks something like this after you’ve run a few jobs, especially if you’re running pipelines that have a lot of sub-pipelines. All of this is a pain to clean up: you have to click Cleanup next to each entry, and it even annoyingly prompts you with a confirmation dialog for each one.

To resolve this issue, you can just delete the data in the datastore directly. Below is a code snippet which you can run through some sort of endpoint to delete the old data:

from google.appengine.ext import ndb

def do_cleanup():
    # Expando stand-ins for the mapreduce/pipeline models, so we don't
    # need to import those libraries here.
    class _AE_Barrier_Index(ndb.Expando):
        pass

    class _AE_MR_MapreduceState(ndb.Expando):
        pass

    class _AE_MR_ShardState(ndb.Expando):
        pass

    class _AE_MR_TaskPayload(ndb.Expando):
        pass

    class _AE_Pipeline_Record(ndb.Expando):
        pass

    class _AE_Pipeline_Slot(ndb.Expando):
        pass

    class _AE_Pipeline_Status(ndb.Expando):
        pass

    class _AE_MR_MapreduceControl(ndb.Expando):
        pass

    class _AE_Pipeline_Barrier(ndb.Expando):
        pass

    to_delete_entities = [
        _AE_Barrier_Index,
        _AE_MR_MapreduceState,
        _AE_MR_ShardState,
        _AE_MR_TaskPayload,
        _AE_Pipeline_Record,
        _AE_Pipeline_Slot,
        _AE_Pipeline_Status,
        _AE_MR_MapreduceControl,
        _AE_Pipeline_Barrier,
    ]

    for cls in to_delete_entities:
        # keys_only avoids fetching entity payloads we're about to delete
        for k in cls.query().fetch(keys_only=True):
            k.delete()
The function defines Expando versions of the models the mapreduce library uses, so that you don’t have to worry about crazy imports, and then just goes through and deletes all the entities of each type.
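
If you want to run it through an endpoint, a minimal handler might look like this (a sketch; the route is made up, and in practice you’d restrict it to admins):

    import webapp2

    class MapreduceCleanupHandler(webapp2.RequestHandler):
        def get(self):
            do_cleanup()
            self.response.write("mapreduce data deleted")

    app = webapp2.WSGIApplication([
        ("/admin/cleanup_mapreduce", MapreduceCleanupHandler),
    ])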

Unknown Publisher when Installing ClickOnce VSTO Outlook plugin signed with SHA256 Certificate

I just spent the last day fighting this issue, so I thought I’d post the problem and solution for anyone else who is fighting with it.

Docalytics is building an Outlook plugin to track attachments in sales emails using VSTO (Visual Studio Tools for Office), and we are using ClickOnce for the deployment so that we can get automatic updates. Everything was going swimmingly until I tried to test the installation. When running a copy of the installer locally, the publisher was listed as “Unknown Publisher” even though I was signing the ClickOnce manifests with a certificate from a trusted authority (COMODO RSA Code Signing CA). When trying to install it from the web, it also behaved like the manifests weren’t signed, giving me errors like the following:

Customization URI: 
Exception: Customized functionality in this application will not work because the certificate used to sign the deployment manifest for Docalytics for Outlook or its location is not trusted. Contact your administrator for further assistance.

************** Exception Text **************
System.Security.SecurityException: Customized functionality in this application will not work because the certificate used to sign the deployment manifest for Docalytics for Outlook or its location is not trusted. Contact your administrator for further assistance.
   at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInTrustEvaluator.VerifyTrustPromptKeyInternal(ClickOnceTrustPromptKeyValue promptKeyValue, DeploymentSignatureInformation signatureInformation, String productName)
   at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInTrustEvaluator.VerifyTrustUsingPromptKey(Uri manifest, DeploymentSignatureInformation signatureInformation, String productName)
   at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInDeploymentManager.VerifySecurity(ActivationContext context, Uri manifest, AddInInstallationStatus installState)
   at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInDeploymentManager.InstallAddIn()
The Zone of the assembly that failed was:

This error was taken from the event log, but a similar (if not identical) error was in the details of the failed installation dialog.

The problem turned out to be a bug in the VSTO runtime that would classify packages signed with SHA256RSA as coming from an unknown publisher, even if the publisher was verified. The issue was resolved in VSTO runtime version 10.0.50325; however, even though I had a later version of the runtime installed on my development box (10.0.50903), I still needed to take the corrective action described in this post from Microsoft describing the issue and this other post describing the resolution in more detail. Special thanks to this StackOverflow question for helping me get to the bottom of the issue.

Library conflict between Datejs and D3

I’ve had fun over the past few days tracking down a problem where D3 Transitions weren’t working correctly. Everything looked right and I was pulling my hair out trying to figure out why the transition didn’t get invoked. Copying the code in question to a separate page (in isolation) showed that the transitions worked fine, so I figured it must be a conflict with something else on the page.

After a couple hours of deleting things from the page (it’s tough to pull things off because of the tree of dependencies), I figured out the problem was Datejs. A little googling confirmed it. What made this challenging was that there weren’t any errors from the conflict. It just didn’t work.

I’m not clear on what the cause of the problem is (I had already lost enough time), but I ended up switching everything to moment.js. Datejs looks like it’s been dead since 2008 anyway.