The following section includes additional components of the project which I either excluded from the main case study for the sake of concision, or am including here for clarification.

The Page-By: A Comparative Analysis Method

In the course of working on this project, with support from my team, I developed the Page-By: a method for the comparison of user flows in similar product categories. The Page-By method is useful in the background research phase of a UX project, when gaining insight into how competitors structure comparable experiences may be of value.

The goal of the method is to deliver a comparison of how a user would go "page-by-page" to achieve similar tasks on different platforms (e.g. build a crowdfunding campaign). The method is designed to provide comparisons at at least two levels of detail, with the high-detail data easily reducible to a lower-detail, more presentable form.

The method is roughly based on the spreadsheet-based competitive analysis method as described by Jaime Levy in her book UX Strategy. But rather than mapping the market with the aim of discovering competitive opportunities, this method is more focused on discovering differences and similarities in user flows, with the ultimate goal of identifying opportunities for emulation or improvement in the user's experience of the product.

The following is a very quick step-by-step guide to the method.


  1. COMPETITORS

    Identify the relevant competitors (Jaime Levy's book is again instructive in this regard).

  2. TASK

    Explicitly state the Task you want the user to accomplish, including the start and end points. In our case, this went something like:

    Try to formulate the task such that it can be completed with minimal branching in the flow - what's sometimes called the "happy path." This might not always be possible.

  3. SET UP

    Set up your spreadsheet in the following way (adjusting as necessary for your own project).



    The section titles for the rows will come from the section names of the flow that you are analyzing. Briefly look at several platforms to see how they divide and name their sections, then abstract these into generic section titles that can be used across all the platforms you are analyzing. These may have to be adjusted as you delve deeper into the analysis. URL sub-directories and top-of-page progress indicators are often a good hint of how the various platforms name their sections.

    It would also help tremendously to give the rows of each section a distinct background color.

    Screenshot of the spreadsheet setup.
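    The setup described above can be sketched in code. This is a minimal, hypothetical example: the section titles and file name are placeholders, not the ones from our project, and the columns anticipate the headings defined in the next step.

```python
import csv

# Generic section titles abstracted from the platforms under analysis
# (hypothetical names -- substitute the ones you derive from your own survey).
SECTIONS = ["Sign Up", "Profile Setup", "Campaign Basics", "Story", "Review & Launch"]

# One column per attribute recorded for each page of the flow.
COLUMNS = ["Page Count", "Page Title", "Description", "Content",
           "Task", "Fields", "CTA", "Notes"]

with open("pageby_platform_a.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Section"] + COLUMNS)
    # Seed one empty row per section; more rows are added as pages are audited.
    for section in SECTIONS:
        writer.writerow([section] + [""] * len(COLUMNS))
```

    In a spreadsheet app the equivalent is simply one worksheet per platform, with the section rows color-coded as suggested above.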

  4. START

    For each platform, start on the first page (as specified in the Task statement) and document it according to the column headings.

    PAGE COUNT: Keep track of how many pages you've visited. Pop-ups or warnings that require a click-through to continue also count.
    PAGE TITLE: How the page is named by the platform. Multiple pages might have the same name; the latter portion of the URL can also help in figuring out the name of the page.
    DESCRIPTION: Title aside, what is this page? What information is it trying to convey, and why might it be important? Grabbing a piece of the platform's own copy, usually from the top of the page, is sometimes a quick way to "describe" the page.
    CONTENT: Describe the content you see. Is it a block of text telling you something? What is it telling you? Are there radio buttons? Fields? Images?
    TASK: Is the task that the user is expected to accomplish on this page clear? If so, name it explicitly.
    FIELDS: If there are text fields on the page requiring user input, list the prompts for those fields.
    CTA (call to action): How does the user advance to the next page? What is the wording on the button, if any?
    NOTES: Any other remarks, comments, or observations you might have about the page. Since this exercise is likely done at the start of the project, where information collection should be broad, feel free to include thoughts on design, usability, flow, ideas worth adopting, and fulfilled or unfulfilled expectations.

    Again, you will likely need to adjust, limit, or augment these categories as your project demands.
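    One way to keep these categories consistent across a team is to treat each audited page as a record. Below is a minimal sketch mirroring the column headings above; the example entry is entirely hypothetical, not a page from any real platform.

```python
from dataclasses import dataclass, field

@dataclass
class PageEntry:
    """One audited page in a platform's flow. Field names mirror the
    spreadsheet columns; adjust, limit, or augment as your project demands."""
    page_count: int                              # running count, incl. click-through pop-ups
    page_title: str                              # name given by the platform (or URL slug)
    description: str                             # what the page is / why it matters
    content: str                                 # text, radio buttons, fields, images, etc.
    task: str = ""                               # task the user is expected to accomplish
    fields: list = field(default_factory=list)   # prompts for any text fields
    cta: str = ""                                # wording of the button that advances the flow
    notes: str = ""                              # design/usability observations

# Hypothetical example entry from a crowdfunding sign-up flow:
entry = PageEntry(
    page_count=1,
    page_title="Create Account",
    description="Entry point; collects credentials before campaign setup.",
    content="Short form with three text fields and a submit button.",
    task="Register an account",
    fields=["Email", "Password", "Confirm password"],
    cta="Sign up",
)
```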


    Following the "happy path" through the task, complete this survey for each page of the flow you are analyzing.

    With the exercise complete for all relevant competitors (consider splitting up the work by competitor among your teammates), you will now have a very comprehensive overview of how other platforms have taken users through a flow that's similar to the one you're developing.


    However, the depth of this data makes comparison between platforms somewhat unwieldy. To make the data more manageable for comparison, we can momentarily mask its qualitative aspects and consider only the quantitative: specifically, the number of pages in each section and in each flow.
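    The quantitative reduction can be sketched as a simple tally. The platform names, section names, and page counts below are hypothetical placeholders, but the idea -- one mark per page, grouped by section -- is the same one the printed side-by-side layout makes visible.

```python
# Page counts per section for each platform (hypothetical numbers).
page_counts = {
    "Platform A": {"Sign Up": 2, "Profile Setup": 4, "Campaign Basics": 3},
    "Platform B": {"Sign Up": 1, "Profile Setup": 5, "Campaign Basics": 2},
}

sections = ["Sign Up", "Profile Setup", "Campaign Basics"]

# Quick text "bar chart": one mark per page, per section, e.g.
# Platform A (9 pages)  Sign Up: ## | Profile Setup: #### | Campaign Basics: ###
for platform, counts in page_counts.items():
    total = sum(counts.values())
    bars = " | ".join(f"{s}: " + "#" * counts[s] for s in sections)
    print(f"{platform} ({total} pages)  {bars}")
```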

    A very quick and hands-on way to get a visual comparison of several different flows is to print each spreadsheet out, trim it to the edge, and literally lay the printouts side by side. (Before you print, make sure all the row heights in your documents are set the same, and that you print each document at the same scale.)

    If pressed for time, a photograph of this layout, as seen above, may be sufficiently presentable to make a certain point to your teammates or stakeholders.


    But if time permits, it is also possible to present this comparative quantitative data in much more compelling, and ultimately more insightful ways.

    Platform comparisons developed with the Page-By method can be succinctly presented with graphics like the one above. While comparing flows strictly by the number of pages is in many ways limiting, it may also be useful as a quick comparative snapshot with deeper data available from the method's collection phase.


A limitation of this method is worth noting. While the graphic presents a quickly discernible comparison between the user experience of different platforms, the page-level detail it leaves out may be highly relevant to the user experience. In other words, simply comparing the number of pages it takes to get through the Profile Setup flow, for example, may not give an accurate impression of the designs. This limitation, which might be especially misleading with more polished graphic presentations, should always be disclosed. The graphic is bound to be an oversimplification that, while useful as a snapshot, doesn't tell the whole story.

With that in mind, next steps in refining this method would include finding ways to re-incorporate some amount of qualitative data into the quantitatively driven graphic layout. It's easy enough to imagine an interactive version of this presentation, for example, where clicking on one of the blocks representing a page leads to an actual screenshot of the page, or to some pertinent insights into how that page contributes to the overall experience. This data would be captured from the initial audit of the competitor platforms, as described above.
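One minimal way to prototype that interactive idea is to generate a static HTML strip in which each page-block links to its screenshot, with the audit notes surfaced as a tooltip. Everything below is a hypothetical sketch: the file names, titles, and notes are placeholders, not artifacts from the project.

```python
# Hypothetical audit data: one dict per page, captured during the survey step.
pages = [
    {"title": "Create Account", "shot": "shots/a_01.png", "note": "Three fields, low friction."},
    {"title": "Profile Photo",  "shot": "shots/a_02.png", "note": "Optional step; skippable."},
]

# One clickable block per page; the screenshot opens on click,
# and the audit note appears as a hover tooltip.
blocks = "\n".join(
    f'<a href="{p["shot"]}" title="{p["note"]}"><div class="page-block">{p["title"]}</div></a>'
    for p in pages
)
html = f"<html><body><div class='flow'>\n{blocks}\n</div></body></html>"

with open("pageby_flow.html", "w") as f:
    f.write(html)
```

A real version would add styling so the blocks line up section by section, mirroring the printed layout, but even this skeleton keeps the qualitative data one click away from the quantitative overview.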

Ultimately, in our own use of this method in the eduDAO project, we found it useful in helping us quickly grasp and analyze UX conventions in a specific product type. Additionally, because of the way the spreadsheet is laid out, abstracting these analyses into easily comparable graphic layouts is straightforward and almost immediately presentable at low or high fidelity.

Additional Interview Excerpts

On time expectation:

"Some way of letting us know what type of refinement they're looking for."

On submitting a proposal prior to the public-facing campaign:

"I would prefer a short initial application,... and [later] it would be great to have the ability to craft it and make it ours."

On presenting students' need-based status:

"'These schools are the least likely to be able to afford a robust journalism program.' That, we will not shy away from saying, either to the board or to the public."

On crowdfunding as a development strategy:

"It was a helpful stepping stone."

Attention Decrement Hypothesis

The attention decrement hypothesis suggests that when people are presented with an ordered series of information items (for example, a list of adjectives describing a colleague), they are more likely to be influenced by information that comes earlier in the series than by that which comes later.

This phenomenon is commonly referred to as the primacy effect. With attention understood to be a limited cognitive resource, the primacy effect is likely caused by the gradual decrease in the availability of this resource during the information processing task. Hence, "attention decrement."

The implication here is that when left to our own devices, we will find information presented to us earlier more meaningful than that presented later.

Of course, attention is not completely beyond our control, but is actually something we can, to some extent, direct (think of what happens when you tell yourself to "focus!").

This premise is further bolstered by the fact that when people are explicitly asked to pay attention throughout an information series, they are then more likely to be influenced by later items - a phenomenon known as the recency effect.

While these outcomes (collectively known as serial-position effects) might seem divergent, what they ultimately point to is that the order in which information is presented, and the kind of attention directed to that order, both have significant implications for how we make meaning out of information series.

And that has a lot of implications for UX.

For further reading, look up "cognitive load" and the "attention decrement hypothesis."