Once you have results for your free classification task, you'll need to code which stimuli participants grouped together. We recommend having around 30 participants or more for your eventual analysis. If you use the format pictured below, you will then be able to use an R script that creates similarity matrices from your data. Make sure you label your columns this way so that the R script will work correctly:

Subject: The participant's ID
Version: The version of the task that the participant did
Context: The context for that slide
Group: Which group on that slide you are coding
Tokens: Which tokens were grouped together in that group

Let's use the following participant's results as a model. Here is their first slide:

This participant's ID is B50. They did Version A of our task. This is slide 1, and in Version A the context on the first slide is "ata". They made 5 groups of stimuli on this slide, so let's choose a group at random and code it, for instance the group at the left containing 19, 14, 23, 20, and 24. Since this is the first group we're coding on this slide, we can label it group 1 under Group and put the token numbers under Tokens. For the token numbers, we want to separate them with commas and no spaces. It doesn't matter what order the token numbers are in within that cell. So now we have:

If we code the rest of the groups on this slide, we have:

Here is B50's second slide:

And below is the coding for this slide added to the spreadsheet. Since B50 made 4 groups on this slide, we only have 1-4 under Group:

Here is B50's slide 3:

And the coding for this slide added to the spreadsheet:

This participant's results are now fully coded! Do this for all your participants and you'll be ready to analyze your data. You'll need to save this spreadsheet as a tab-separated text file for use with the R script that creates similarity matrices.
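To give you a sense of what happens downstream, here is a minimal sketch (not the actual script from our lab) of how an R script could read a coding file in this format and build a similarity matrix for one context by counting how often each pair of tokens was grouped together. The file name "coding.txt" is an assumption for illustration; the columns are the ones described above.

```r
# Minimal sketch, not the lab's actual script: build a similarity matrix
# from a tab-separated coding file with the columns Subject, Version,
# Context, Group, Tokens (token numbers comma-separated, e.g. "19,14,23,20,24").
coding <- read.delim("coding.txt", stringsAsFactors = FALSE)  # hypothetical file name

# Infer how many tokens there are from the coding itself
n_tokens <- max(as.integer(unlist(strsplit(coding$Tokens, ","))))
sim <- matrix(0, n_tokens, n_tokens)

# Count co-occurrences for one context at a time, e.g. the "ata" slides
ata <- subset(coding, Context == "ata")

for (i in seq_len(nrow(ata))) {
  tokens <- as.integer(strsplit(ata$Tokens[i], ",")[[1]])
  if (length(tokens) < 2) next  # a one-token group contributes no pairs
  # Every pair of tokens placed in the same group counts as one co-occurrence
  for (pair in combn(tokens, 2, simplify = FALSE)) {
    sim[pair[1], pair[2]] <- sim[pair[1], pair[2]] + 1
    sim[pair[2], pair[1]] <- sim[pair[2], pair[1]] + 1
  }
}

# sim[19, 14] now holds how many times tokens 19 and 14 were grouped
# together in the "ata" context across participants
```

However you build it, the key point is that the Tokens cell is the unit of analysis: every pair of token numbers that shares a cell counts as one vote for those two stimuli being similar.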
In this post I'll discuss how to create a free classification task, also known as a free sort task, which we applied to non-native perception in Daidone, Kruger, and Lidster (2015). This task is useful for determining the perceptual similarity of non-native sounds and for examining which acoustic, phonological, or indexical dimensions of the stimuli matter to listeners. It can be used to examine segmental or suprasegmental phenomena and to predict their discriminability (check out our slides from New Sounds 2019). Here is an example of what our Finnish length free classification task looks like in PowerPoint. Each number on the slide is a sound file; participants click on it to listen and then group together the ones that sound similar to them.
Once you have the sound file containing all of your stimuli, you'll need to segment it into smaller individual files, one for each stimulus. You can do this using the free acoustic analysis software Praat, available at praat.org.
Once you open Praat, you'll see that both a "Praat Objects" window and a "Praat Picture" window appear at startup. You won't be using the Praat Picture window, so you can close it. Before we begin cutting a sound file, let's just see what sounds look like in Praat. In the top menu, go to "Open" --> "Read from file" and choose your sound file. It should now appear highlighted in the Objects window. Click on "View & Edit" in the right-hand menu to see your sound file:
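If you'd rather script this step than cut each stimulus out by hand in the editor, here is a minimal sketch of one way to do it in R with the tuneR package. This is not part of the Praat workflow described above: the master recording name, the segments.csv file, and its columns are assumptions for illustration, and you would still need to measure the start and end times of each stimulus first (for example, by noting them down in Praat).

```r
# A scripted alternative to hand-cutting in Praat's editor (a sketch, with
# assumed file names): split one long recording into one file per stimulus.
library(tuneR)

# Hypothetical table: segments.csv with columns token, start, end (seconds)
segments <- read.csv("segments.csv", stringsAsFactors = FALSE)

for (i in seq_len(nrow(segments))) {
  # Read just the stretch of the master recording between start and end
  clip <- readWave("all_stimuli.wav",
                   from  = segments$start[i],
                   to    = segments$end[i],
                   units = "seconds")
  # Save each stimulus as its own numbered file, e.g. token19.wav
  writeWave(clip, paste0("token", segments$token[i], ".wav"))
}
```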
Once you've chosen a perception task, it's time to make stimuli for it.
How many stimuli do I need? The answer to this question isn't simple. You'll need to strike a balance between getting a sufficient amount of data and how long you can reasonably expect people to sit and do your experiment. In our lab, we generally have to recruit participants with extra credit, the promise of snacks, and desperate pleas, so any experiment over an hour or an hour and 15 minutes is unlikely to have many people sign up. If you can pay people, they'll be more willing to do a longer experiment, but that means more money you'll have to shell out for each person. Since your experiment is likely to be made up of two or more tasks, such as both discrimination and lexical decision plus a background questionnaire, each task in itself shouldn't be longer than about 25 minutes, if possible. Shorter tasks will also prevent participants' attention from wandering too much, which means more reliable data. A 20-minute AXB or oddity task is already very boring even with a break, and with difficult contrasts it can also be mentally taxing and demoralizing. I know some psychology experiments have participants doing one repetitive task for an hour (how?!), but if you don't want participants to constantly time out on trials because they are falling asleep or trying to surreptitiously check their phones, keep it shorter.

Welcome to my blog! I've decided to use this space as a how-to for creating and running perception experiments, both as a way to organize my thoughts and as a way to help you, random person on the internet. I'm writing this for an audience (assuming you exist) that has some knowledge of L2 phonology, but no practical experience running experiments.
So let's get started! First of all, if you're excited to start a perception experiment, as we all should be, you have a research question in mind that you want answered. This research question will determine what kind of task you should use, as different types of tasks examine different levels of processing. In this post I'll outline common types of research questions along with the task(s) appropriate for each.