
olive20xx
@olive20xx

Hi there, I'm Olive. I worked in Games User Research for ~6 years. Not a tremendously long career in absolute time, but it was mostly agency work, which tracks like dog years.

I don't have the energy to start a newsletter or whatever, but I would be overjoyed to answer any questions you might have. I'd love for my experience to benefit others, especially folks who don't typically have access to this kind of expertise.

Some examples of things I can help with

  • how to properly gather feedback on social media
  • common pitfalls to watch out for when playtesting
  • how to get the most out of the feedback you collect
  • writing good survey questions and avoiding response bias
  • when research is useful, and what kind of testing to do for the problem you're facing
  • probably a million other things I'm not thinking of

Disclaimers

This is not an advertisement for a paid service. I work in software now, and I'm a hobbyist gamedev myself. I'm extremely not-interested in squeezing broke indies for a few bucks.

My only caveat is that I'd prefer to answer questions publicly so they can help others.

If you truly believe you need a private answer, please mention it in your ask along with a way to contact you, and we can discuss it. I won't answer publicly without your permission, but I also reserve the right to not answer a private request at all.

My experience

Like I said, I mainly worked at agencies running many, many projects for big corpo clients like EA, Sony, Microsoft, Blizzard, Riot, 2K, etc. I also spent almost a year at [big mobile game company] before I got fired cuz I couldn't bring myself to care about their evil little Skinner boxes.

I ran playtests, concept tests, and usability tests, and I'm happy to explain the different methodologies behind them. I did plenty of quantitative analysis (i.e., spreadsheet magic with 1000s of survey responses), but my specialty was qualitative work (i.e., talking to people face-to-face, asking the fun "why" questions).



in reply to @olive20xx's post:

I don't know if I have any specific questions at this point, but I was wondering if you had any thoughts about testing larger aspects of a game, like in-game economies or overarching game loops. A lot of the testing I've done over my career has focused on things at a micro level rather than a macro one, stuff like "can players figure out this puzzle" or "is this level too hard," and larger-scale systems often don't get as much attention. It's easier to see problems that pop up after playing for 10 minutes than problems that pop up after playing for 10 hours, so how do you make sure you're not missing anything in those cases?

Right now we're working on an open-world action game, and our testing so far has either been small internal tests to make sure things are working as they should, or things like testing the tutorial and onboarding with new players while showing the game at local events. We're planning to give some people access to the game and let them go off on their own, but what I was trying to get at before is that I'm not entirely sure how to handle these longer playtests. I suppose we'd have a survey at the end, but I'm worried I'll miss something now that I can't peek over their shoulder as they play.

Gotcha. That makes sense.

It sounds like you're doing exploratory research. You don't have any specific problems in mind, you just want to see what comes up, is that right?

For a longer playtest, I would definitely try to get incremental feedback. This could mean a survey after each play session, or a weekly 1-on-1 interview, or even feedback tools in the build itself. It really depends on your team's capacity, the questions you have, the timeline of the test, and the relationship you have with your testers.

Without any more info, my loose plan would be to issue regular surveys (scheduled, or after each play session if you can trust your testers to self-start). Surveys should have a way to ID the section(s) of the game they were playing, one or two numerical ratings, and an open-ended question to capture any highlights or lowlights. Keep it as short and simple as possible so that a) you're not sifting through a ton of noise and b) testers are more likely to actually follow through.

The bigger the testing cohort, the more I'd lean on the ratings. They're easier to read at a glance, and trends become meaningful with a larger sample size.

If you have the capacity, you could keep an eye on the survey results as they come in and follow up when something interesting surfaces. (Let players know if you'll be doing this!) This way you can capture detailed feedback while the experience is fresh in their minds. A live conversation is usually ideal, but text is fine too.

With less capacity, you can save your follow-ups for a closing interview. It won't be as fresh, but people generally remember the things they've written about.

Hm, I know I said I'd prefer to answer questions in public, but I'm seeing that this might clash with the kind of specificity needed to talk research design. (Not to mention these comment boxes get pretty cramped!) If you'd like to dig deeper, feel free to shoot me an email. It's my username at gmail.

Thanks for the advice! We'll definitely look into collecting feedback incrementally; checking in regularly sounds like it'd give us a much better big picture than trying to piece things together after the fact like I was imagining. This was super helpful! I'll follow up at your email if I have more questions.