I’m (finally!) scoring the last of the webcomics for my Sueness and Popularity project.
But I’ve just realised a major flaw in my experiment plan: I haven’t specified which characters I will score as “Fan Characters & Newcomers”. The test has a separate section for these characters, and I know from my first scoring project that the choice of section makes a big difference to a character’s score.
This is fine for fan characters, since it’s usually fairly obvious when a webcomic is based on an existing story. But for “newcomers”, it’s a difficult judgement call. As a reader, I can’t reliably tell if a character is a new one the author just made up, or if they were planning to introduce them all along.
I could pick a rule: for instance, “any character who doesn’t show up in the first storyline is a newcomer”. But for something that could have such a big impact on my results, I don’t want to rely on such an arbitrary choice.
Alternatively, I could ignore the original-character/newcomer distinction, and just score every character on the “Original Fiction” scale. This avoids arbitrary cut-offs, but it also means I’m not applying the test properly.
Neither choice really makes sense, so I’m going to re-plan my experiment to try both:
To do this, I need to record more than is in my original data collection rules (and go back and collect the extra data).
Per Comic (Fanfic)
As well as the existing comic details, I’ll record which comics are fanfic.
I’m counting comics as “fanfic” if they are set in, or cross over with, a story written by someone else. Even one crossover in an otherwise non-fanfic comic counts — if the fanfic questions on the test apply at all, I want to collect that data.
Per Character (“Arc”)
Furthermore, for each character (fan or original), I’ll record the page and arc they first appear in (where an “arc” is a storyline after which the comic could have ended).
That’s a bit of a fuzzy definition. To clarify:
- If the comic has marked chapter (or similar) divisions, those are the arcs.
- Failing that, any storyline divided into “parts” counts as an arc (like GoGoGo’s “The Tournament Finals”).
- If the comic has no chapters, and no ongoing plot (like Manitowiki), each strip is its own arc.
- If none of those rules apply, I’ll guess based on where the natural divisions (updates, pages, etc.) and climactic moments are.
Sometimes, the author will say when they invented a particular character (e.g. Jasper in The Glass Scientists was a late addition). If that changes things, I’ll write in “virtual” arcs for the older characters, and add a comment in the data file to explain this.
For each character, I’ll collect two scores on the test: a “baseline” score where I pretend they’re from an original work, and a “per-instructions” score where I use the questions for fanfic and newcomers.
To answer those questions, I need to know what counts as “canon”. For true fanfic, this is easy: any story the comic crosses over with or otherwise uses bits of.
For “newcomers”, I’ll count all the arcs before their first appearance as canon, so they only score points for interfering with characters who were around in earlier storylines.
If they’re not fanfic, and were there from the first arc, they get scored in the “Original Fiction Characters” section — so their baseline and per-instructions scores will be the same.
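As a concrete sketch of those rules, assuming each character’s record already notes whether their comic is fanfic and which arc they first appear in (the function and field names here are my own invention, not part of the test):

```python
# Sketch of the section-picking and canon rules described above.

def pick_section(is_fanfic_comic: bool, first_arc: int) -> str:
    """Decide which section of the test a character is scored under."""
    if is_fanfic_comic:
        return "Fan Characters & Newcomers"   # true fan character
    if first_arc > 1:
        return "Fan Characters & Newcomers"   # newcomer: joined after the first arc
    return "Original Fiction Characters"      # there from the start

def canon_arcs(first_arc: int) -> list:
    """For a newcomer, every arc before their first appearance counts as canon."""
    return list(range(1, first_arc))

print(pick_section(False, 3))   # a newcomer in a non-fanfic comic
print(canon_arcs(3))            # arcs 1 and 2 are their "canon"
```

A character with `first_arc == 1` in a non-fanfic comic falls through to the original-fiction section, so their baseline and per-instructions scores coincide, as described above.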
Data Analysis (and multiple comparisons)
I’ll still use a linear regression (on highest character score versus views per month), as described in my original post. However, I’m now going to run it twice: once with the baseline scores, and again taking fanfic and newcomers into account.
That means I can’t use a p-value of 0.05 any more. The 1-in-20 chance of getting a “yes” result at random is per comparison, and I’m now doing two comparisons — so the overall chance is more like 1 in 10 (1 − 0.95² ≈ 0.0975). (See this article on multiple comparisons.)
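That inflation is easy to check directly, assuming the two comparisons were independent:

```python
# Chance of at least one false positive across m independent comparisons,
# each tested at significance level alpha.
alpha = 0.05
m = 2
familywise = 1 - (1 - alpha) ** m
print(round(familywise, 4))  # 0.0975 — roughly 1 in 10
```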
To fix that, I need to use a lower p-value.
How low is low enough? There are a few ways to calculate that.
In this particular case, the two sets of scores are correlated: a very Sue-ish fanfic character will probably score high as an original character too. At the very least, both scores will use the same answers for the “All Characters” section.
This means I can use Hochberg’s step-up method (a close relative of the Holm–Bonferroni correction, valid when the tests are positively correlated), which bases the cut-off on how many low p-values I get.
As I understand it:
- If both p-values are below 0.05, then 0.05 is good enough. Since the odds of that happening by chance for two independent tests are only 1 in 400 (0.05²), this is at least as strong evidence as getting below 0.05 with just one test. (It might not be any stronger — for instance, if there were no fan or new characters in my sample, both tests would be the same.)
- If one p-value is higher than 0.05 (nothing proven), I need stronger evidence: the other value must be 0.025 (1 in 40) or lower. Since there are two chances to get a low value at random, I need to be twice as certain as for a single test.
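The two-bullet rule above can be sketched as a tiny step-up check (the helper name and the default alpha are my own):

```python
# Step-up significance check for exactly two p-values,
# implementing the two-case rule described above.
def significant(p_values, alpha=0.05):
    """Return, for each of the two p-values, whether it counts as significant."""
    p_low, p_high = sorted(p_values)
    if p_high <= alpha:
        # Both below 0.05: 0.05 is good enough for both.
        return [True, True]
    # Otherwise, only a p-value at or below alpha/2 (0.025) counts.
    return [p <= alpha / 2 for p in p_values]

print(significant([0.04, 0.03]))   # both count
print(significant([0.20, 0.02]))   # only the second counts
print(significant([0.20, 0.04]))   # neither counts: 0.04 > 0.025
```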
That’s all for now. I’ve got some data collection to catch up on!