<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:dc="http://purl.org/dc/elements/1.1/" >

<channel><title><![CDATA[Danielle Daidone - Blog]]></title><link><![CDATA[https://www.ddaidone.com/blog]]></link><description><![CDATA[Blog]]></description><pubDate>Mon, 13 Apr 2026 22:36:35 -0400</pubDate><generator>Weebly</generator><item><title><![CDATA[Analyzing free classification results: Visualization of MDS analyses]]></title><link><![CDATA[https://www.ddaidone.com/blog/analyzing-free-classification-results-visualization-of-mds-analyses]]></link><comments><![CDATA[https://www.ddaidone.com/blog/analyzing-free-classification-results-visualization-of-mds-analyses#comments]]></comments><pubDate>Mon, 13 Apr 2026 15:07:05 GMT</pubDate><category><![CDATA[Uncategorized]]></category><guid isPermaLink="false">https://www.ddaidone.com/blog/analyzing-free-classification-results-visualization-of-mds-analyses</guid><description><![CDATA[The long-awaited script for MDS visualization is finally here!&nbsp; It took so long in part because I'm not great at math, and rotating the dimension scores is matrix multiplication.&nbsp; Thankfully, with Ryan Lidster's help on that part, you can now plot your MDS output and rotate the points to your heart's content.The R Markdown file is available here, and an example of its output is available in this pdf.&nbsp; As you can see, MDS_visualization.Rmd takes the output files created by MDS_Anal [...] ]]></description><content:encoded><![CDATA[<div class="paragraph">The long-awaited script for MDS visualization is finally here!&nbsp; It took so long in part because I'm not great at math, and rotating the dimension scores is matrix multiplication.&nbsp; Thankfully, with Ryan Lidster's help on that part, you can now plot your MDS output and rotate the points to your heart's content.<br /><br />The R Markdown file is available <a href="https://www.ddaidone.com/uploads/1/0/5/2/105292729/mds_visualization.rmd" target="_blank">here</a>, and an example of its output is available in <a href="https://www.ddaidone.com/uploads/1/0/5/2/105292729/mds_visualization_example.pdf" target="_blank">this pdf</a>.&nbsp; As you can see, <a href="https://www.ddaidone.com/uploads/1/0/5/2/105292729/mds_visualization.rmd" target="_blank">MDS_visualization.Rmd</a> takes the output files created by <a href="https://www.ddaidone.com/uploads/1/0/5/2/105292729/mds_analysis_new.rmd" target="_blank">MDS_Analysis_new.Rmd</a> as input, such as MDS_3D_AllContexts.txt.&nbsp; It works with both 2D and 3D solutions, but pay attention to which one you choose, because there are different chunks to run for 2D versus 3D solutions.&nbsp;<br /><br />After reading in your file, the script creates new columns from the row names.&nbsp; You want a column for each way you want to group your dimension scores in the figure.&nbsp; In this example, we want each length template to have the same color (one color for CVCV, another for CVVCV, etc.) and each speaker to have the same shape (square for speaker N, circle for speaker L, etc.).&nbsp;&nbsp;Since our row names are typically condition_speaker, the script separates the two based on the underscore delimiter.&nbsp; If you've used another way of naming your rows, the&nbsp;<a href="https://stringr.tidyverse.org/" target="_blank"><em>stringr&nbsp;</em>package</a> has a lot of options for string manipulation.&nbsp; Also, since the script pulls the labels from the "condition" column, you'll want to make sure your condition names are the ones you want to have in your figure (e.g., IPA symbols).&nbsp; The script has an example of using <em>stringr</em> to find and replace condition names.</div>
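<div class="paragraph">As a rough sketch of that step (not the script itself), here is how row names like "CVCV_N" might be split into condition and speaker columns; the file name and the exact replacement are illustrative assumptions:</div>  <pre><code># Read in a dimension-scores file and split row names like "CVCV_N"
# into condition and speaker columns (underscore delimiter assumed)
library(tidyverse)

MDS_points <- read.table("MDS_3D_AllContexts.txt", header = TRUE)
MDS_points <- MDS_points %>%
  rownames_to_column("stimulus") %>%
  separate(stimulus, into = c("condition", "speaker"), sep = "_")

# Find and replace condition names, e.g. to show friendlier labels
MDS_points$condition <- str_replace(MDS_points$condition, "CVVCV", "CV:CV")
</code></pre>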
<div class="paragraph">The script then uses the <a href="https://www.sthda.com/english/wiki/ggplot2-scatter-plots-quick-start-guide-r-software-and-data-visualization" target="_blank"><em>ggplot2</em> package</a> to create the scatterplots.&nbsp; This package is very versatile, and you can change essentially any part of the plot to look how you want (title, axes, colors, sizes, labels, etc.).&nbsp; For example, because dimension scores are not inherently meaningful on their own, I've removed the numerical tick marks from the axes.&nbsp; Here are our first plots.&nbsp; Since this is a 3D solution, we have both Dimension 1 by Dimension 2 and Dimension 1 by Dimension 3 (the second plot is basically the first plot viewed from above).</div>  <div><div class="wsite-multicol"><div class="wsite-multicol-table-wrap" style="margin:0 -15px;"> <table class="wsite-multicol-table"> <tbody class="wsite-multicol-tbody"> <tr class="wsite-multicol-tr"> <td class="wsite-multicol-col" style="width:50%; padding:0 15px;"> <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/mds-1by2_orig.png" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div> </td><td class="wsite-multicol-col" style="width:50%; padding:0 15px;"> <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/mds-1by3_orig.png" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div> </td></tr> </tbody> </table> </div></div></div>
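<div class="paragraph">A minimal <em>ggplot2</em> sketch of a plot like the ones above (the data frame and D1/D2 column names follow the sketch earlier in this post):</div>  <pre><code># Dimension 1 x Dimension 2, colored by condition, shaped by speaker
library(ggplot2)

ggplot(MDS_points, aes(x = D1, y = D2, color = condition, shape = speaker)) +
  geom_point(size = 3) +
  labs(x = "Dimension 1", y = "Dimension 2") +
  theme_classic() +
  # dimension scores aren't meaningful on their own, so hide the ticks
  theme(axis.text = element_blank(), axis.ticks = element_blank())
</code></pre>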
<div class="paragraph">If you don't like the orientation of the coordinates, this is where matrix rotation comes into play.&nbsp; Think of a 2D solution as dots on a flat piece of paper: if you spin the paper around, you see them from different angles, but the distances between the dots don't change.&nbsp; We can do the same in three dimensions, like turning a cube all around in your hand.&nbsp; The only important thing is that we don't change the distances between points, since figuring out distances is the purpose of an MDS analysis.&nbsp;&nbsp;<br /><br />In order to do rotation, we can spin the coordinates around the x-axis, y-axis, or z-axis.&nbsp; Here is a great visual example of rotating around each of these axes from&nbsp;<a href="https://jsfiddle.net/greggman/Lqg93y1v/" target="_blank">jsfiddle.net/greggman/Lqg93y1v/</a>.&nbsp;</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/published/rotation.gif?1776117739" alt="Picture" style="width:529px;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">We tell the script to do this by inputting how much of a rotation we want in radians.&nbsp; Below is a handy guide for converting angles to radians from <a href="https://byjus.com/maths/relation-between-degree-and-radian/" target="_blank">this site</a>.</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/published/radians.png?1776118105" alt="Picture" style="width:402px;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">You can put&nbsp;&pi; into R as "pi", so&nbsp;pi/2 is a 90&ordm; rotation, pi is a 180&ordm; rotation, and (3 * pi)/2 is a 270&ordm; rotation.&nbsp; Let's say we want to rotate our solution 90&ordm; around the z-axis.&nbsp; Below is what this would look like.&nbsp; As you can see in the first plot, the points have moved counterclockwise a fourth of the way around.&nbsp; If you look at the updated dimension scores in the new dataframe MDS_points_rotated, you'll notice that because we're rotating around the z-axis, none of the D3 scores have changed.&nbsp; In the second plot below, this means that all of the points are at the same height as in the original plot.</div>  <div><div class="wsite-multicol"><div class="wsite-multicol-table-wrap" style="margin:0 -15px;"> <table class="wsite-multicol-table"> <tbody class="wsite-multicol-tbody"> <tr class="wsite-multicol-tr"> <td class="wsite-multicol-col" style="width:50%; padding:0 15px;"> <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/mds-rotated-1by2_orig.png" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div> </td><td class="wsite-multicol-col" style="width:50%; padding:0 15px;"> <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/mds-rotated-1by3_orig.png" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div> </td></tr> </tbody> </table> </div></div></div>
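<div class="paragraph">For the curious, the rotation itself is ordinary matrix multiplication.&nbsp; A sketch of the 90&ordm; (pi/2 radian) z-axis rotation above, using the column names assumed earlier; note that distances between points are preserved and D3 is untouched:</div>  <pre><code># 3x3 rotation matrix around the z-axis
theta <- pi / 2
Rz <- matrix(c(cos(theta), -sin(theta), 0,
               sin(theta),  cos(theta), 0,
               0,           0,          1),
             nrow = 3, byrow = TRUE)

# Apply the rotation to every stimulus's (D1, D2, D3) coordinates
MDS_points_rotated <- MDS_points
MDS_points_rotated[, c("D1", "D2", "D3")] <-
  as.matrix(MDS_points[, c("D1", "D2", "D3")]) %*% t(Rz)
</code></pre>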
<div class="paragraph">If you want to rotate around more than one axis, change the input of the relevant rotation chunk to the output of the previous rotation (MDS_points to MDS_points_rotated).&nbsp; Here is what it would look like if we also rotated the already rotated solution 90&ordm; around the y-axis.</div>  <div><div class="wsite-multicol"><div class="wsite-multicol-table-wrap" style="margin:0 -15px;"> <table class="wsite-multicol-table"> <tbody class="wsite-multicol-tbody"> <tr class="wsite-multicol-tr"> <td class="wsite-multicol-col" style="width:50%; padding:0 15px;"> <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/mds-rotated-1by2new_orig.png" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div> </td><td class="wsite-multicol-col" style="width:50%; padding:0 15px;"> <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/mds-rotated-1by3new_orig.png" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div> </td></tr> </tbody> </table> </div></div></div>  <div class="paragraph">Use the rotations to help you understand how listeners were grouping the stimuli.&nbsp; In this case, the original dimension scores are clearer, since for Dimension 1 at least, we can also see a divide, with points with short V1 on the left and those with long V1 on the right.&nbsp; Perhaps we would want to rotate -45&ordm; around the z-axis to make this distinction sharper.&nbsp; You can also think about what makes most sense visually for your audience, such as putting /i/ in the top left corner in a study looking at vowels.&nbsp; <br /><br />Once you're happy with your rotation, be sure to save the image(s) with the <em>ggsave</em> chunk at the bottom of the script.&nbsp; There is also a chunk for saving the new dimension scores, which you can use later if you want to correlate acoustic/phonological properties with the dimension scores to find out what cues listeners were using to group the stimuli.</div>]]></content:encoded></item><item><title><![CDATA[Analyzing free classification results: Multi-dimensional scaling (MDS) analyses]]></title><link><![CDATA[https://www.ddaidone.com/blog/analyzing-free-classification-results-multi-dimensional-scaling-mds-analyses]]></link><comments><![CDATA[https://www.ddaidone.com/blog/analyzing-free-classification-results-multi-dimensional-scaling-mds-analyses#comments]]></comments><pubDate>Thu, 03 Aug 2023 04:00:00 GMT</pubDate><category><![CDATA[Free classification]]></category><guid isPermaLink="false">https://www.ddaidone.com/blog/analyzing-free-classification-results-multi-dimensional-scaling-mds-analyses</guid><description><![CDATA[[Edited on 4/13/26 with additional information from my new and improved MDS script that also outputs Euclidean distances!&nbsp; Make sure to download the newest version of the script: MDS_Analysis_new.Rmd]Multi-dimensional scaling (MDS) is a way of determining the placement of each stimulus in space so that the perceptual distances between the stimuli are recreated as closely as possible, with stimuli that were judged to be more similar placed closer together and stimuli judged to be less simila [...] 
]]></description><content:encoded><![CDATA[<div class="paragraph"><span>[Edited on 4/13/26 with additional information from my new and improved MDS script that also outputs Euclidean distances!&nbsp; Make sure to download the newest version of the script: <a href="https://www.ddaidone.com/uploads/1/0/5/2/105292729/mds_analysis_new.rmd">MDS_Analysis_new.Rmd</a>]<br /><br />Multi-dimensional scaling (MDS) is a way of determining the placement of each stimulus in space so that the perceptual distances between the stimuli are recreated as closely as possible, with stimuli that were judged to be more similar placed closer together and stimuli judged to be less similar placed further apart.&nbsp; If you're not familiar with MDS, you may find the explanation on pp. 1107-1108 of our&nbsp;</span><u><a href="https://doi.org/10.1017/S0272263123000050" target="_blank">2023 SSLA article on free classification</a></u><span>&nbsp;to be useful.&nbsp;</span><br /><br /><span>You can perform MDS analyses on any of the matrix outputs from the&nbsp;</span><a href="https://www.ddaidone.com/uploads/1/0/5/2/105292729/create_fc_similarity_matrices.rmd" target="_blank">create_FC_similarity_matrices.Rmd</a><span>&nbsp;file from the previous blog post using&nbsp;</span><u><a href="https://www.ddaidone.com/uploads/1/0/5/2/105292729/mds_analysis_new.rmd" target="_blank"><font color="#2a2a2a">this R Markdown file&nbsp;[updated 4/13/26]</font></a></u><span>.&nbsp; A pdf example using this script to analyze our Finnish length data can be viewed&nbsp;</span><u><font color="#2a2a2a"><a href="https://www.ddaidone.com/uploads/1/0/5/2/105292729/mds_analysis_new.pdf" target="_blank">here&nbsp;[updated 4/13/26]</a></font></u><span>.</span><br /><br /><span>In order to decide the appropriate number of dimensions for your data, it's a trade-off between minimizing model misfit (stress) and maximizing the amount of variation explained (R-squared, "R2"), as well as the interpretability of the solution.&nbsp; As we say in our SSLA article (pp. 1112-1113):</span></div>  <blockquote><em>Because higher stress in MDS indicates greater model misfit, Clopper (2008, p. 578) recommends looking for the &ldquo;elbow&rdquo; in the stress plot to find the number of dimensions beyond which stress does not considerably&nbsp;decrease, whereas Fox et al. (1995, p. 2544) recommend looking for the number of dimensions beyond which R2 does not considerably increase, provided that this number of dimensions is interpretable based on the relevant theory. Clopper (2008) also states that a stress value of less than 0.1 for the matrix is considered evidence of &ldquo;good fit,&rdquo; although she acknowledges that this is rarely achieved in speech perception data.&nbsp;</em></blockquote>  <div class="paragraph"><span>Unfortunately, the R package for MDS that I used in the R Markdown file doesn't give R2 values (if this is important, you can obtain these with an MDS analysis in SPSS); however, we can look at the stress amounts and the plot produced by the script.&nbsp; In the stress plot for our Finnish length data, there is no clear elbow in the plot, but rather a gradual decrease in model misfit.&nbsp; We decided on a 3D solution, but a 2D solution would also be appropriate.&nbsp; A 1D solution has clearly too much stress, while a 4D or 5D solution would be very difficult to interpret and visualize.</span></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/stressplot_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>
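<div class="paragraph">For orientation, here is a hedged sketch of this kind of stress-by-dimensions analysis in R, using the <em>smacof</em> package (not necessarily the package my R Markdown file uses); the input file name is an illustrative assumption:</div>  <pre><code># Run nonmetric MDS at 1-5 dimensions on a dissimilarity matrix
# and collect the stress values for an elbow plot like the one above
library(smacof)

dissim <- as.dist(as.matrix(read.table("your_dissimilarity_matrix.txt",
                                       header = TRUE)))

stress_by_dim <- sapply(1:5, function(k) {
  mds(dissim, ndim = k, type = "ordinal")$stress
})

plot(1:5, stress_by_dim, type = "b",
     xlab = "Number of dimensions", ylab = "Stress")
</code></pre>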
<div class="paragraph">This script will save text files of the dimension scores for 1, 2, 3, 4, and 5 dimensional solutions.&nbsp; The dimension scores are the points in space of each stimulus.&nbsp; In the image below, we see the 3-dimensional solution of our Finnish length data, combined across contexts.&nbsp; You can think of D1, D2, and D3 as (x, y, z) coordinates.&nbsp; For example, in row 7, we see that the average CVCCVV token for speaker N (averaged across&nbsp;<em>pata, tiki,&nbsp;</em>and&nbsp;<em>kupu&nbsp;</em>contexts) is placed by the MDS solution at x = 0.00527, y = -0.59382, z = -0.41867.&nbsp;&nbsp;</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/3d-scores_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">When Dimension 1 x Dimension 2 (i.e. x and y coordinates) and Dimension 1 x Dimension 3 (i.e. x and z coordinates) are plotted, we see where CVCCVV_N is placed relative to other stimuli.&nbsp; You can think of the second graph (Dim1 x Dim3) as viewing the first graph from above.&nbsp;</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/published/3d-solution-plot.jpg?1691096341" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/3d-solution-plot2_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">In these plots, you can somewhat see that the stimuli are grouping together mainly by vowel length rather than consonant length (e.g. Dim 1 has mostly short V1 on the left and long V1 on the right), but it's still difficult to interpret.&nbsp; For this reason, we recommend rotating the solution (i.e. moving all of the points in a certain way by a specific amount) to provide a better visualization.&nbsp; This doesn't change the position of the points <em>relative to each other</em>, which is the important part in an MDS solution.&nbsp; My colleague Ryan Lidster did this in Excel, but since that was rather clunky, I'm working with him on creating an R script to aid in rotating MDS solutions and plotting them.&nbsp; Stay tuned!&nbsp; [Edit 4/13/26: We finally have a script for visualization that includes rotation!&nbsp; See the next blog entry.]<br /><br />Added 4/13/26:<br />Finally, this new version of the script outputs matrices of Euclidean distances between stimuli.&nbsp; These are useful if you want to compare MDS distances for contrasts to their accuracy in a discrimination task, as we did in our 2023 paper.&nbsp; I'll have more information about this in a later blog entry.<br /></div>
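<div class="paragraph">In the meantime, distances like these are easy to compute yourself from the dimension scores with <em>dist()</em>; a sketch, assuming the D1/D2/D3 columns shown in the screenshot above:</div>  <pre><code># Stimulus-by-stimulus Euclidean distances from the 3D dimension scores
scores <- read.table("MDS_3D_AllContexts.txt", header = TRUE)
euclid <- dist(scores[, c("D1", "D2", "D3")], method = "euclidean")
round(as.matrix(euclid), 3)  # full distance matrix, rounded for display
</code></pre>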
]]></content:encoded></item><item><title><![CDATA[Analyzing free classification results: Using an R script to obtain (dis)similarity matrices]]></title><link><![CDATA[https://www.ddaidone.com/blog/analyzing-free-classification-results-using-an-r-script-to-obtain-dissimilarity-matrices]]></link><comments><![CDATA[https://www.ddaidone.com/blog/analyzing-free-classification-results-using-an-r-script-to-obtain-dissimilarity-matrices#comments]]></comments><pubDate>Sun, 12 Feb 2023 00:52:27 GMT</pubDate><category><![CDATA[Free classification]]></category><guid isPermaLink="false">https://www.ddaidone.com/blog/analyzing-free-classification-results-using-an-r-script-to-obtain-dissimilarity-matrices</guid><description><![CDATA[To analyze your data with the R script below, you'll need 2 files:&nbsp;Your Lookup Matrix file, which details which stimulus corresponds to which number on which slide.&nbsp; How to create this file is explained in the&nbsp;blog post here.&nbsp; This file should be saved as a tab-separated text file.&nbsp; The example Lookup Matrix from our Finnish length experiment is available here.Your Coded FC Results&nbsp;file, which codes the results from each participant's free classification PowerPoint  [...] 
]]></description><content:encoded><![CDATA[<div class="paragraph">To analyze your data with the R script below, you'll need 2 files:&nbsp;<ul><li>Your Lookup Matrix file, which details which stimulus corresponds to which number on which slide.&nbsp; How to create this file is explained in the&nbsp;blog post <u><a href="https://www.ddaidone.com/blog/creating-a-free-classification-task" target="_blank">here</a></u>.&nbsp; This file should be saved as a tab-separated text file.&nbsp; The example Lookup Matrix from our Finnish length experiment is available <u><a href="https://www.ddaidone.com/uploads/1/0/5/2/105292729/fnl_lookup_matrix.txt" target="_blank">here</a></u>.</li><li>Your Coded FC Results&nbsp;file, which codes the results from each participant's free classification PowerPoint task.&nbsp; How to create this file is explained in the blog post <u><a href="https://www.ddaidone.com/blog/coding-a-free-classification-task" target="_blank">here</a></u>.&nbsp;&nbsp;<span>This should also be saved&nbsp;as a tab-separated text file.&nbsp; Make sure your stimuli are in the same order across contexts, since the R script outputs results in alphabetical order, and the combined contexts results will be inaccurate if the orders are different across contexts.&nbsp; The example Coded FC Results file from our Finnish length experiment is available <u><a href="https://www.ddaidone.com/uploads/1/0/5/2/105292729/fnl_ae_fc.txt" target="_blank">here</a></u>.</span></li></ul> You'll use&nbsp;<u><a href="https://www.ddaidone.com/uploads/1/0/5/2/105292729/create_fc_similarity_matrices.rmd" target="_blank">this R Markdown file</a></u> to analyze your results (if you don't have R and RStudio, download those first).&nbsp; The comments in the script show what you should get if you analyze the example files above.&nbsp; Don't forget to set your working directory to the file path where your files are located and to change the file names to match your Lookup Matrix and Coded FC Results files.<br /><br />The R code will create various similarity and dissimilarity matrices (by counts, by percentages, by contexts individually and combined, etc.) that can be used to visualize your results and analyze them with multi-dimensional scaling.&nbsp; These will be saved to your working directory as text files with the name as specified in each code block.&nbsp; Note that when you open these files, the headers will be one column off, since they'll start at the very left.&nbsp; Let's look at one file as an example.&nbsp; Below we have a screenshot of the tab-separated text file for similarity in percentages for Context 1 (in this case "upu") with all speakers combined.</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/published/outputmatrixtext.jpg?1687746918" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">As you can see, the header cVccV_upu at the top left should be the header for the first column of numbers.&nbsp; If you want to make a table with these results, I recommend opening this file in Excel and moving all of the headers one column to the right.&nbsp; (Note that you do NOT need to do this to use the R script for multi-dimensional scaling described in the following blog post.)&nbsp; The file will now look like this:</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/published/outputmatrixexcel.jpg?1687747068" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>
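<div class="paragraph">The offset is the classic R <em>write.table</em> pattern: the row names form an extra first column with no header of their own.&nbsp; If you're reading the file back into R rather than Excel, <em>read.table</em> detects the short header row and lines everything up automatically; the file name and stimulus codes below are illustrative assumptions:</div>  <pre><code># read.table() treats the first column as row names whenever the header
# row has one fewer field than the data rows, so no manual shifting needed
sim <- read.table("similarity_percent_context1.txt", header = TRUE)
sim["cVccV_upu", "cVVccV_upu"]  # look up one similarity value (codes illustrative)
</code></pre>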
<div class="paragraph">This file shows us that, for example, [kuppu] tokens and [kuuppu] tokens were grouped together 19.7% of the time (column B, row 6).&nbsp; Since these are the similarities for all speakers combined, the numbers along the diagonal show how often the sound files of the same stimulus spoken by different speakers were grouped together (not very often; English speakers are bad at length).<br /><span><br />The R code can handle up to 4 different contexts.&nbsp; If you have more than 4 contexts, have more than 2 versions (i.e. Version A and Version B for counterbalanced order of presentation), or find any errors with the code, let me know at daidoned AT uncw.edu and I can modify/fix the script.</span><br /></div>]]></content:encoded></item><item><title><![CDATA[Coding a free classification task]]></title><link><![CDATA[https://www.ddaidone.com/blog/coding-a-free-classification-task]]></link><comments><![CDATA[https://www.ddaidone.com/blog/coding-a-free-classification-task#comments]]></comments><pubDate>Sat, 12 Mar 2022 05:00:00 GMT</pubDate><category><![CDATA[Free classification]]></category><guid isPermaLink="false">https://www.ddaidone.com/blog/coding-a-free-classification-task</guid><description><![CDATA[Once you have results for your free classification task, you'll need to code what stimuli participants grouped together.&nbsp; We recommend having around 30 participants or more for your eventual analysis.&nbsp; If you use the format pictured below, you will then be able to use an R script that creates similarity matrices from your data.         Make sure you label your columns this way so that the R script will work correctly.Subject: The participant's ID&nbsp;Version: The version of the task t [...] 
]]></description><content:encoded><![CDATA[<div class="paragraph">Once you have results for your free classification task, you'll need to code what stimuli participants grouped together.&nbsp; We recommend having around 30 participants or more for your eventual analysis.&nbsp; If you use the format pictured below, you will then be able to use an R script that creates similarity matrices from your data.</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/published/codeddata.jpg?1647060435" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">Make sure you label your columns this way so that the R script will work correctly.<br />Subject: The participant's ID&nbsp;<br />Version: The version of the task that the participant did<ul><li><span>For example, in our Finnish length task we had two versions for the different orders of slides that we counterbalanced.&nbsp; In version A, the "ata" context (pata, paata, patta, etc.) was on slide 1, the "iki" context (tiki, tiiki, tikki, etc.) was on slide 2, and the "upu" context (kupu, kuupu, kuppu, etc.) was on slide 3.&nbsp; In version B, the "upu" context was first, the "iki" context was second, and the "ata" context was third.&nbsp; Be sure to use the letters "A" and "B" for your coding instead of numbers.&nbsp; If you only have one version, label them&nbsp;all version A.&nbsp; You'll still need this column for the R code.&nbsp; If you have more than 2 versions, the R code won't work.&nbsp; Send me an email and I can modify the script for you.</span></li></ul> <span>Slide: The slide that you are coding<br />Context: The context for that slide<br />Group: Which group on that slide you are coding<br />Tokens: Which tokens were grouped together</span>&nbsp;in that group</div>
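<div class="paragraph">As a quick sanity check once your spreadsheet is saved as a tab-separated text file (see the end of this post), you can read it back into R and split the Tokens column; a sketch using the example Coded FC Results file (fnl_ae_fc.txt) from our Finnish length experiment:</div>  <pre><code># Read the coded results and split the comma-separated token numbers
coded <- read.table("fnl_ae_fc.txt", header = TRUE, sep = "\t",
                    stringsAsFactors = FALSE)
strsplit(coded$Tokens[1], ",")  # e.g. "19,14,23,20,24" -> "19" "14" "23" ...
</code></pre>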
<div class="paragraph"><span>Let's use the following participant's results as a model.&nbsp; Here is their first slide:</span></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/b50-slide1_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">This participant's ID is B50.&nbsp; They did version A of our task.&nbsp; This is slide 1, and in Version A, the context on the first slide is "ata".&nbsp; They made 5 groups of stimuli on this slide, so let's choose a group at random and code it, for instance the group at the left containing 19, 14, 23, 20, and 24.&nbsp; Since this is the first group we're coding on this slide, we can label it group 1 under Group and put in the token numbers under Tokens.&nbsp; For the token numbers, we want to separate them with commas and no spaces.&nbsp; It doesn't matter what order the token numbers are in within that cell.&nbsp; So now we have:</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/b50-slide1-row1_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">If we code the rest of the groups on this slide, we have:</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/b50-slide1-coded_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">Here is B50's second slide:</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/b50-slide2-2_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">And below is the coding for this slide added to the spreadsheet.&nbsp; Since B50 made 4 groups on this slide, we only have 1-4 under Group:</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/b50-slide2-coded_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">Here is B50's slide 3:</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/b50-slide3-2_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph"><span>And the coding for this slide added to the spreadsheet:</span></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/b50-slide3-coded_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">This participant's results are now fully coded!&nbsp; Do this for all your participants and you'll be ready to analyze your data.&nbsp; You'll need to save this spreadsheet as a tab-separated text file for use with the R script to create similarity matrices.</div>]]></content:encoded></item><item><title><![CDATA[Creating a free classification task]]></title><link><![CDATA[https://www.ddaidone.com/blog/creating-a-free-classification-task]]></link><comments><![CDATA[https://www.ddaidone.com/blog/creating-a-free-classification-task#comments]]></comments><pubDate>Wed, 02 Mar 2022 05:00:00 GMT</pubDate><category><![CDATA[Free classification]]></category><guid isPermaLink="false">https://www.ddaidone.com/blog/creating-a-free-classification-task</guid><description><![CDATA[In this post I'll discuss how to create a free classification task, also known as a free sort task, which we apply to non-native perception in&nbsp;Daidone, Kruger, and Lidster (2015)&#65279;.&nbsp; This task is useful for determining the perceptual similarity of non-native sounds and examining what acoustic, phonological, or indexical dimensions of the stimuli matter for listeners.&nbsp; It 
can be used to examine segmental or suprasegmental phenomena and can be used to predict their discriminab [...] ]]></description><content:encoded><![CDATA[<div class="paragraph">In this post I'll discuss how to create a free classification task, also known as a free sort task, which we apply to non-native perception in&nbsp;<a href="https://www.ddaidone.com/uploads/1/0/5/2/105292729/icphs_2015_daidone_kruger_lidster_final_version.pdf" target="_blank">Daidone, Kruger, and Lidster (2015)</a><span>&#65279;</span>.&nbsp; This task is useful for determining the perceptual similarity of non-native sounds and examining what acoustic, phonological, or indexical dimensions of the stimuli matter for listeners.&nbsp; It can be used to examine segmental or suprasegmental phenomena and can be used to predict their discriminability (check out <a href="https://www.ddaidone.com/uploads/1/0/5/2/105292729/new_sounds_2019_overall_presentation.pptx" target="_blank">our slides from New Sounds 2019</a>).&nbsp; Here is an example of what our Finnish length free classification task looks like in PowerPoint.&nbsp; The numbers on the slide are sound files that participants click on and listen to and then group by which seem similar to them.</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/fc-finnish_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div>  <!--BLOG_SUMMARY_END--></div>  <div class="paragraph">To choose the stimuli for a free classification task, I recommend including all related sounds if possible, such as all the vowels in that language.&nbsp; If not, when it comes time to do a multi-dimensional scaling analysis and correlations with properties of the stimuli, the results may be difficult to interpret because listeners don't need to use all of the relevant dimensions to group the stimuli.&nbsp; For example, if you only include front vowels in your task, it may appear in your analysis that F2, or vowel backness, is not a relevant dimension for grouping vowels, when&nbsp;actually&nbsp;it is an artifact of the stimuli you picked.&nbsp;<br /><br />Once you decide what sounds you want to examine, you will need to determine what phonetic contexts you want to use.&nbsp; I recommend two or three different contexts, since perception may differ between them, and you'll have more data points.&nbsp; In our German vowel experiment, we used both an alveolar context&nbsp;(/&#643;tVt/) and velar context (/skVk/), and in our Finnish length experiment we used three contexts (<em>pata, tiki, kupu).&nbsp;&nbsp;</em>The stimuli for each context will be on a separate slide.<br /><br />Next, you'll need to figure out how many speakers you should record, which will determine how many stimuli you have per slide.&nbsp; I think that 30 stimuli per slide is the upper limit, since it becomes increasingly difficult to compare all the sound files to each other the more you have.&nbsp; For example, in our German experiment, since we looked at 14 different vowels, we only had 2 speakers, such that each slide contained 28 stimuli.&nbsp; For our Finnish length experiment, we looked at 8 possible length templates (CVCV, CVVCV, CVVCCV, etc.), and thus we chose to have 3 different speakers, for a total of 24 stimuli on each slide.&nbsp; Keep in mind that if you have few stimuli to group per 
slide, you'll have fewer data points for your analysis.&nbsp;<br /><br />When you have your list of stimuli for each slide, you'll have to decide which number each stimulus will have.&nbsp; These should be randomized so that participants can't use the numbers to group the stimuli.&nbsp; If you want to use our R script for analyzing results, you should create a file like the one pictured below to record which numbers and slides correspond to which stimuli.&nbsp; This should then be saved as a tab-separated text file.&nbsp; We have called this file a "Lookup Matrix" and our example one for the Finnish length experiment is available <u><a href="https://www.ddaidone.com/uploads/1/0/5/2/105292729/fnl_lookup_matrix.txt" target="_blank">here</a></u>.&nbsp; <em>Make sure your codes will be in the same alphabetical order across contexts.</em>&nbsp; For example, here we have our code as CVCCV_ata_M instead of patta_M, since patta_M would alphabetically be after paata_M, but kuppu_M would alphabetically be before kuupu_M.&nbsp; When the R script calculates similarity, the codes are put in alphabetical order, and the contexts combined matrices are calculated by adding the matrices for each context.&nbsp; Thus, all of the "ata" stimuli need to be in the same order as all of the "iki" and "upu" stimuli.</div>
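<div class="paragraph">Since the icon numbers need to be randomized anyway, you can generate the assignment in R; a tiny sketch (the codes here are illustrative):</div>  <pre><code># Assign each stimulus on a slide a random icon number, reproducibly
set.seed(42)
codes <- c("CVCV_ata_L", "CVCV_ata_M", "CVCV_ata_N", "CVVCV_ata_L")
data.frame(Code = sort(codes), IconNumber = sample(length(codes)))
</code></pre>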
<div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/fnl-lookup2_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph"><span>We have two versions of the task, Version A with the "ata" context first and Version B with the "upu" context first.&nbsp; Only the order of slides differs, rather than what is on the slides themselves.&nbsp; ASlide and BSlide refer to which slide contains which context within each version.&nbsp; In this case, the "ata" context is on slide 1 for Version A but on slide 3 for Version B.&nbsp; IconNumber refers to the random number that each stimulus is given.&nbsp; For Finnish length, Condition refers to the length template of each stimulus (e.g. stimulus 1 is [patta]).&nbsp; For your own task, this may be the target vowel or consonant for that stimulus.&nbsp; Code is the columns Condition, Context, and Speaker concatenated together.<br /><br /></span>Once you have the sound files and their corresponding random numbers ready, you&rsquo;ll need to insert all the sound files into PowerPoint and change their images to the appropriate number icon.&nbsp; Insert each sound file and change its image one by one to lessen the possibility of mixing up the numbers.&nbsp; A sample PowerPoint with grid can be downloaded <u><a href="https://www.ddaidone.com/uploads/1/0/5/2/105292729/example_free_classification_grid.pptx" target="_blank">here</a></u>.&nbsp; Number images 1-28 are available <u><a href="https://www.ddaidone.com/uploads/1/0/5/2/105292729/number_images.zip" target="_blank">here</a></u>.&nbsp; The following instructions work with Office 365.<br /><br />To insert a sound file into PowerPoint:<br />Go to &ldquo;Insert&rdquo; --&gt; &ldquo;Audio&rdquo; --&gt; &ldquo;Audio on my PC&rdquo;</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/published/capture1.jpg?1646276462" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">After you&rsquo;ve chosen a sound file, it will show up with a speaker icon.&nbsp; You will need to change this icon to the picture of a number by clicking on the icon, clicking on "Audio Format" in the top ribbon, and then clicking on &ldquo;Change Picture&rdquo; and choosing the appropriate image file.</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/capture2_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">Now that you have a number icon for the sound file, you can add a border to it by clicking on "Picture Border".&nbsp; Make sure the color is black and the width is 0.75 pt.</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/capture3_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">To adjust the size of the images, I recommend specifying the height and width to ensure uniformity across the images rather than manually adjusting the size of each by eyeballing it.
&nbsp;Images that are 0.46&rdquo; high x 0.61&rdquo; wide fit nicely into the rectangles of the grid.&nbsp;&nbsp;You can specify the size of the image on the right side of the ribbon.&nbsp; Follow these steps for the sound files and numbers for all your slides and you're ready to go!</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/capture4_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">For a participant to complete a free classification task takes about 10-20 minutes, depending on how many stimuli they have to group and how obsessively they listen to everything.&nbsp; You can remind participants that there is no right answer and that they can make groups of any size as long as there are at least two sound files in a group.&nbsp; They can also arrange groups however they want as long as the divisions between groups are clear.&nbsp; Where they put the groups on the grid, or even if they use the grid, does not matter for the analysis.&nbsp; Once a participant finishes, we always check to make sure their groups are clearly delineated and that they don't have any single sound files without a group.&nbsp; The PowerPoint file is then saved with the participant name for later coding.&nbsp; Here are some examples of how participants have completed a free classification slide.</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/capture5_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/capture6_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/capture7_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">Additional tips:<ul><li>You can counterbalance which slide participants see first, such that half the participants get one context first (e.g. velar) (Version A)&nbsp;and the other half get the other context first (e.g. alveolar) (Version B).&nbsp; We've never found a difference by&nbsp;slide order, so I wouldn't say this is strictly necessary, but it's nice to show that presentation order doesn't matter.&nbsp; Our R script for analysis assumes there are two versions.&nbsp;</li><li>The correspondences between numbers and sound files should differ between the slides so that participants don't catch on and simply choose to group the same numbers as before.&nbsp;</li><li>You can change the background color so that it differs for each slide (e.g.
first slide is white, second is light green).&nbsp; This makes it visually easier to see what slide participants are on if you're testing them in person.&nbsp; &nbsp;</li><li>You may find it useful to also give participants a practice slide with 6-8 sound files with very clear differences to make sure they are following instructions.</li></ul></div>]]></content:encoded></item><item><title><![CDATA[Cutting sound files]]></title><link><![CDATA[https://www.ddaidone.com/blog/cutting-sound-files]]></link><comments><![CDATA[https://www.ddaidone.com/blog/cutting-sound-files#comments]]></comments><pubDate>Wed, 06 Feb 2019 05:00:00 GMT</pubDate><category><![CDATA[Creating Perception Tasks]]></category><guid isPermaLink="false">https://www.ddaidone.com/blog/cutting-sound-files</guid><description><![CDATA[Once you have the sound file containing all of your stimuli, you'll need to segment it into smaller, individual files for each stimulus.&nbsp; You can do this using the free acoustic analysis software Praat, available at praat.org.Once you open Praat, you'll see that both a "Praat Objects" window and a "Praat Picture" window appear at start up.&nbsp; You won't be using the Praat picture window, so you can close that.Before we begin cutting a sound file, let's just see what sounds look like in Pr [...] ]]></description><content:encoded><![CDATA[<div class="paragraph">Once you have the sound file containing all of your stimuli, you'll need to segment it into smaller, individual files for each stimulus.&nbsp; You can do this using the free acoustic analysis software Praat, available at <a href="http://www.praat.org" target="_blank">praat.org</a>.<br /><br />Once you open Praat, you'll see that both a "Praat Objects" window and a "Praat Picture" window appear at start up.&nbsp; You won't be using the Praat picture window, so you can close that.<br /><br />Before we begin cutting a sound file, let's just see what sounds look like in Praat.&nbsp; In the top menu, go to "Open" --&gt; "Read from file" and choose your sound file.&nbsp; It should now appear highlighted in the Objects window.&nbsp; Click on "View &amp; Edit" on the right-hand menu to see your sound file:</div>  <div>  <!--BLOG_SUMMARY_END--></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/view-and-edit_orig.png" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph"><span>Your sound file will look something like this:</span></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/sound-file-all_orig.png" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">What you'll notice is that it is zoomed out all the way in order to fit the entire file.&nbsp; If you want to zoom in to see individual words, you'll need to use the controls at the bottom left of the window.&nbsp; The "in" button zooms in around wherever your cursor is, in this case the middle of the sound file.&nbsp; The "sel" button zooms in to a selection.&nbsp; You can drag your cursor across the sound to highlight a region, then press "sel".</div>  <div><div class="wsite-image 
wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/cursor-selection_1_orig.png" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">You may have to zoom in a few times in order to get to individual words.&nbsp; Once you do, you'll see something like this:</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/individual-word_orig.png" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">Now you can see two different representations of the sound.&nbsp; The top shows the sound wave ("waveform") and the bottom shows the energy at different frequencies ("spectrogram").&nbsp; The darker the spectrogram is, the more energy there is at that frequency.&nbsp; The blue line is pitch, which you can toggle off and on through the "Pitch" menu at the top.&nbsp; You can also toggle on and off other analysis tools, like formant tracking and glottal pulses, but you won't need to worry about those for cutting sound files.&nbsp; If you want to listen to just the part you zoomed in to, click on the bottom bar labeled "Visible part" or press Tab on the keyboard to play the selection.&nbsp; If you click on the bar labeled "Total duration", it will play the entire sound file.&nbsp; You can stop a sound file from playing by pressing Esc on the keyboard.<br /><br />Now that you're familiar with basic controls for Praat, follow the instructions in the attached pdf prepared by my colleague Ryan Lidster.&nbsp; It explains how to mark individual stimuli with intervals in a TextGrid and then pull out those words as individual sound files (the instructions use a PC; I'm not sure how well everything corresponds on a Mac).</div>  <div><div style="margin: 10px 0 0 -10px"> <a title="Download file: instructions-for-extracting-sound-files-from-praat.pdf" href="https://www.ddaidone.com/uploads/1/0/5/2/105292729/instructions-for-extracting-sound-files-from-praat.pdf"><img src="//www.weebly.com/weebly/images/file_icons/pdf.png" width="36" height="36" style="float: left; position: relative; left: 0px; top: 0px; margin: 0 15px 15px 0; border: 0;" /></a><div style="float: left; text-align: left; position: relative;"><table style="font-size: 12px; font-family: tahoma; line-height: .9;"><tr><td colspan="2"><b> instructions-for-extracting-sound-files-from-praat.pdf</b></td></tr><tr style="display: none;"><td>File Size:  </td><td>1381 kb</td></tr><tr style="display: none;"><td>File Type:  </td><td> pdf</td></tr></table><a title="Download file: instructions-for-extracting-sound-files-from-praat.pdf" href="https://www.ddaidone.com/uploads/1/0/5/2/105292729/instructions-for-extracting-sound-files-from-praat.pdf" style="font-weight: bold;">Download File</a></div> </div>  <hr style="clear: both; width: 100%; visibility: hidden"></hr></div>  <div class="paragraph">Here is a summary of the steps you'll need to take:<br /><br />1. 
For each stimulus, highlight close to where the sound file begins and ends, about 30-50 ms before and after, though it doesn't need to be exact.&nbsp; In general, you don't want the amount of silence before and after your stimuli to vary greatly, because this will affect your interstimulus interval (ISI) later.&nbsp; [Note: If you're doing an experiment that requires exact timing of the stimuli, like priming or eyetracking, you will need to be more precise in your cutting, especially at the beginning of the stimulus.&nbsp; One possible thing you could do is put your left boundary exactly at the start of the word, then use the Praat script "<a href="https://www.ddaidone.com/uploads/1/0/5/2/105292729/move_left_boundary_left_for_labeled_intervals___zero_cross.txt" target="_blank">move_left_boundary_left_for_labeled_intervals_&amp;_zero_cross.praat</a>" to change the left boundary to, for example, exactly 30 ms before the start of the stimulus.&nbsp; The script will also zero cross both boundaries.]<br /><br />2. Before you hit enter to make the boundaries, hit Ctrl&nbsp;+ comma and Ctrl&nbsp;+ period (or Command + comma and Command + period with a Mac) to make sure the boundaries are at zero crossings (i.e. where the sound wave at the top crosses over zero) so you don't get clicking noises (if you've already hit enter, you can move the boundaries to the nearest zero crossing by going to "boundary" in the top menu).<br /><br />3. You should have a spreadsheet in which you've prepared the file names for each stimulus.&nbsp; Depending on the type of experiment, it can get confusing to name your files as simply the (non)word that they are, because then you won't have information about the condition or contrast they're being used in, which could make analyses more difficult.&nbsp; <span>Instead, it's often useful to label your files something like condition_sound_speaker.&nbsp; For example, in our study on Finnish length perception, we labeled our files like "ata1.L", meaning the /ata/ context, with the first length template (i.e. all short segments; we had a code for each template, such as "2" if the first vowel was long, "3" if the first consonant was long, etc.), and "L" for the speaker. [Edit 4/26/22: I recommend using underscores instead of periods in your file names.&nbsp; It turns out some Praat scripts don't play well with periods.]&nbsp; Whatever you choose, make sure it's laid out very clearly somewhere.&nbsp;<br /><br />4.&nbsp;</span>I find it useful to highlight my stimuli in this spreadsheet as I go along putting the labels in the TextGrid.&nbsp; I use green for a good token, yellow for a somewhat questionable token, and red if there was something wrong with it and I didn't even mark that word.&nbsp; It's always a good idea to keep track of your progress and take note of any problems you encounter.<br /><br />5. Save your TextGrid often because Praat may crash.&nbsp; While you're working in the sound file, go to "File" in the top menu --&gt; "Save TextGrid as text file".<br /><br />6. 
Once all the labels are in, run the script "<a href="https://www.ddaidone.com/uploads/1/0/5/2/105292729/save_labeled_intervals_to_wav_sound_files.txt" target="_blank">save_labeled_intervals_to_wav_sound_files.praat</a>" by going to the Praat Objects window --&gt; "Praat" on the top menu --&gt; "Open Praat Script" (if you're using a Mac, go to "Open"--&gt;"Read from file").&nbsp; Make sure the sound file is open as a long sound file ("Open" --&gt; "Open long sound file") and the TextGrid is also highlighted.&nbsp; If you access the script from my website, it will open in the .txt format in the browser instead of downloading as .praat.&nbsp; You can copy the text of the script, then go to the&nbsp;<span>Praat Objects window --&gt; "Praat" --&gt; "New Praat Script", and paste it in.</span><br /><br />7. When you run the script ("Run" in the top menu in the open script --&gt; "Run"), don't forget to put in the folder where you want all the sound files to go (including a backward [PC] or forward [Mac] slash at the end of the file path).&nbsp; At this point you can add a prefix or a suffix to all your files, such as labeling them with the speaker.<br /><br />You should now have sound files to use in your experiment!</div>]]></content:encoded></item><item><title><![CDATA[Tips for recording stimuli]]></title><link><![CDATA[https://www.ddaidone.com/blog/tips-for-recording-stimuli]]></link><comments><![CDATA[https://www.ddaidone.com/blog/tips-for-recording-stimuli#comments]]></comments><pubDate>Thu, 03 Aug 2017 19:59:34 GMT</pubDate><category><![CDATA[Creating Perception Tasks]]></category><guid isPermaLink="false">https://www.ddaidone.com/blog/tips-for-recording-stimuli</guid><description><![CDATA[The recording list:Make sure it's easy to read for the speakers&nbsp;(i.e. the font is 12pt or bigger).&nbsp; I like to use 3 double-spaced columns per page.&nbsp; You could also number the words.&nbsp;&nbsp;If the list is long (more than 2 pages), number the pages to avoid confusion and add titles for each part of the list (stimuli for AX, stimuli for lexical decision).&nbsp; If they read the titles, this could help you later when cutting the sound files.Oftentimes the last word of a list is sa [...] ]]></description><content:encoded><![CDATA[<div class="paragraph"><span>The recording list:</span><ul><li><span>Make sure it's easy to read for the speakers&nbsp;(i.e. the font is 12pt or bigger).&nbsp; I like to use 3 double-spaced columns per page.&nbsp; You could also number the words.&nbsp;&nbsp;</span></li></ul><ul><li><span>If the list is long (more than 2 pages), number the pages to avoid confusion and add titles for each part of the list (stimuli for AX, stimuli for lexical decision).&nbsp; If they read the titles, this could help you later when cutting the sound files.</span></li></ul><ul><li><span>Oftentimes the last word of a list is said with different intonation (a final fall); repeat this word earlier in the list or at least have it be a filler.</span></li></ul><ul><li><span>Print the recording list one sided, or make sure they don't turn the page while speaking, since this will be audible. &nbsp; &nbsp;</span></li></ul></div>  <div>  <!--BLOG_SUMMARY_END--></div>  <div class="paragraph">The recording space:<ul><li><span>Use a sound booth if possible, otherwise pick the quietest place you possibly can.</span></li></ul><ul><li><span>Avoid large spaces with echoes.</span></li></ul><ul><li><span>Avoid&nbsp;air conditioners, fridges, heaters, etc. 
running in the background, since these will often create an audible hum in the recording.</span></li></ul><ul><li><span>Place the mic about 2 inches from the speaker's mouth and slightly to the side to prevent large spikes in amplitude from their breath hitting the mic.</span></li></ul> The recording style:<ul><li><span>Model how you want the words read, or play a small part of a previous recording if you want to match the speed/style of speaking.&nbsp;&nbsp;</span></li></ul><ul><li><span>You need clear pauses between each word.&nbsp; I cannot stress enough how important this is for cutting the stimuli later.&nbsp; The stimuli are basically worthless if there is no pause between them because there will be audible coarticulation, making segmenting the words into individual sound files very difficult and causing the words to sound weirdly chopped off when you try.</span></li></ul><ul><li><span>Have them read with falling intonation on each word so that when the stimuli are cut, they don't sound like questions. This is difficult for speakers since it is natural to read with list intonation, which has rising intonation on each word.&nbsp; If they're really bad at it, you may need to have them repeat after you for each word (making sure they are pausing sufficiently and not overlapping your speech).&nbsp; Another technique is to have them say each word a few times as in a list, so that the final iteration of the word has falling intonation. &nbsp;You&nbsp;</span>can also avoid list intonation by putting stimuli in a sentence. This can help create&nbsp;more natural-sounding stimuli, but recording takes longer and cutting stimuli is more time-consuming.&nbsp; If you do embed words in a context, place&nbsp;the stimuli between stop consonants or repeat them&nbsp;after a sentence&nbsp;(e.g.&nbsp;<span>"Say X&nbsp;</span><span>again. X.")</span>&nbsp;for ease of cutting later.&nbsp; We've found that sentence-final&nbsp;tokens are clearest for perception experiments (if clear segments are&nbsp;your intention).&nbsp; By repeating the word at the end of a sentence, it is easier&nbsp;for the speaker to produce it surrounded by a pause and&nbsp;with falling intonation. &nbsp;</li></ul><ul><li><span>If the speaker&nbsp;uses creaky voice, call attention to it and have them try to lessen it.&nbsp; I've noticed people tend to do it more toward the end of the recording when they're bored and tired, so giving them breaks and water might help.&nbsp;&nbsp;</span></li></ul><ul><li><span>Watch out for speakers' tendency to speed up over time. &nbsp;</span></li></ul> <span>Making the recording:</span><ul><li><span>Have them practice part of the list and listen to/look at the recording to check the mic levels, background noise, reading speed, pauses, etc.</span></li></ul><ul><li><span>Record with a 3-to-1 ratio of&nbsp;recordings to the number of stimuli you need.&nbsp; In other words, if you need one good token of a word, record the list 3 times.
&nbsp;It may be necessary to record even more times and coach the speaker further if you need particular phonetic properties, such as final released [t] in English or dialectal variants like [h] for /s/ in Spanish.&nbsp;I recommend having the speaker read&nbsp;the list multiple times rather than reading each word multiple times in a row because people tend to say a word the same (possibly erroneous) way when repeating it.</span></li></ul></div>]]></content:encoded></item><item><title><![CDATA[Creating stimuli]]></title><link><![CDATA[https://www.ddaidone.com/blog/creating-stimuli]]></link><comments><![CDATA[https://www.ddaidone.com/blog/creating-stimuli#comments]]></comments><pubDate>Tue, 11 Jul 2017 04:00:00 GMT</pubDate><category><![CDATA[Creating Perception Tasks]]></category><guid isPermaLink="false">https://www.ddaidone.com/blog/creating-stimuli</guid><description><![CDATA[Once you've chosen a perception task, it's time to make stimuli for it. &nbsp;How many stimuli do I need?The answer to this question isn't simple. &nbsp;You'll need to strike a balance between getting a sufficient amount of data and how long you can reasonably expect people to sit and do your experiment. &nbsp;In our lab, we generally have to recruit participants with extra credit, the promise of snacks, and desperate pleas, so any experiment over an hour or an hour and 15 minutes is unlikely to [...] ]]></description><content:encoded><![CDATA[<div class="paragraph"><span>Once you've chosen a perception task, it's time to make stimuli for it. &nbsp;</span><br /><br /><em>How many stimuli do I need?</em><br />The answer to this question isn't simple. &nbsp;You'll need to strike a balance between getting a sufficient amount of data and how long you can reasonably expect people to sit and do your experiment. &nbsp;In our lab, we generally have to recruit participants with extra credit, the promise of snacks, and desperate pleas, so any experiment over an hour or an hour and 15 minutes is unlikely to have many people sign up. &nbsp;If you can pay people they'll be more willing to do a longer experiment, but that means more money you'll have to shell out for each person. &nbsp;Since your experiment is likely to be made up of two or more tasks, such as both discrimination and lexical decision plus a background questionnaire, each task in itself shouldn't be longer than about 25 minutes, if possible. &nbsp;Shorter tasks will also prevent participants' attention from wandering too much, which means more reliable data. &nbsp;A 20-minute AXB or oddity task is already very boring even with a break, and with difficult contrasts it can also be mentally taxing and demoralizing. &nbsp;I know some psychology experiments have participants doing one repetitive task for an hour (how?!), but if you don't want participants to constantly time out on trials because they are falling asleep or trying to surreptitiously check their phones, keep it shorter.&nbsp;&#8203;<br />&#8203;</div>  <div>  <!--BLOG_SUMMARY_END--></div>  <div class="paragraph"><span>&#8203;For most kinds of tasks, to calculate how long it will take you'll need to take into account the number of trials and how long each trial lasts. &nbsp;</span>When figuring out how many trials you need, keep in mind that f<span>or AX, you should have an equal number of same and different trials, and with lexical decision you should have an equal number of word and nonword trials. 
&nbsp;For ABX-like tasks you'll need to balance the order of stimuli so that all six possible orders are present: ABA, ABB, AAB, BAB, BAA, BBA. &nbsp;In other words, if your contrast is [e] vs. [i], in one trial the order will be [kek], [kik], [kek] (ABA), in another [kek], [kik], [kik] (ABB), etc. &nbsp;This ensures that X is equally likely to be A or B and that the order of presentation does not influence the results. &nbsp;Oddity tasks should also have the six possible orders of stimuli, plus all-the-same trials (AAA, BBB). &nbsp;You can either balance the number of same vs. odd-one-out trials or balance how often each button is the correct answer (first sound is different, second sound is different, third sound is different, or all the same). &nbsp;I prefer the second option, since participants are likely to hear difficult contrasts as same trials anyway. &nbsp;Note that the types of trials don't have to be perfectly balanced; for example, you could do 12 different trials (4, 4, and 4 in each position) and 8 same trials. &nbsp;Just make sure they aren't too disparate. &nbsp;<br /><br />For the number of conditions, don't forget that you need a control condition to show that your task is working. &nbsp;In our experiments, we've tested up to 10 contrasts in a discrimination task. &nbsp;I think this is near the upper limit, since having more data points per condition is always better, especially if you plan to do individual-level analyses, and for each condition you add, the fewer trials per condition you'll be able to fit in. &nbsp;Around 16-20 trials per condition is a good number, which may be split into a couple different phonetic contexts. &nbsp;Here's an example from our latest oddity task:</span><br /><br /><span>10 contrasts (e.g. /u/ vs. /y/) x 10 trials per contrast (2 AAA, 2 BBB, AAB, ABA, ABB, BAA, BBA, BAB) x 2 contexts ([tVhVt], [kVhVk]) = 200 trials<br /><br />It's helpful if you map out your trial setup in Excel, like this:<br />&#8203;</span></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/stimuli-setup_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph"><span>You'll also need practice trials so that participants can learn how to do the task. &nbsp;About 8 or 10 is enough. &nbsp;For some tasks you'll need filler trials as well, particularly if you are only testing a small number of conditions. &nbsp;The point is that you don't want participants to figure out what you're testing and start employing an explicit strategy for completing the task.</span><br /><br /><span>In order to calculate how long each trial will take, you add the length of the sound files in each trial + interstimulus intervals (pauses between stimuli) + intertrial interval (pause between trials). &nbsp;For example, in our oddity task each word lasts about 600-750 ms, so let's use 700 ms as an estimate. &nbsp;With an interstimulus interval (ISI) of 400 ms and an intertrial interval (ITI) of 500 ms, that means each trial takes 700 ms for the first stimulus + 400 ms pause + 700 ms for the second stimulus + 400 ms pause + 700 ms for the third stimulus + 500 ms after the trial. &nbsp;In other words, 700 ms x 3 + 400 ms x 2 + 500 ms = 3400 ms, or 3.4 seconds per trial.</span>
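<br /><br /><span>If you'd rather script this than build it in Excel, here's a rough sketch in R that lays out the full trial set and does the same timing arithmetic (the contrast labels are placeholders for your own):</span><br /><pre>
# The ten oddity trial types from the design above
orders &lt;- c("AAB", "ABA", "ABB", "BAA", "BAB", "BBA",  # odd-one-out trials
            "AAA", "AAA", "BBB", "BBB")                # all-the-same trials

# Cross trial types with contrasts and contexts
trials &lt;- expand.grid(order    = orders,
                      contrast = paste0("contrast", 1:10),  # placeholder labels
                      context  = c("tVhVt", "kVhVk"))
nrow(trials)                          # 10 x 10 x 2 = 200 trials

# Timing: three stimuli, two ISIs, one ITI per trial (all in ms)
stim &lt;- 700; isi &lt;- 400; iti &lt;- 500
trial_ms &lt;- 3 * stim + 2 * isi + iti  # 3400 ms, i.e. 3.4 s per trial
nrow(trials) * trial_ms / 1000 / 60   # about 11.3 minutes of trials total
</pre>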
<br /><br /><span>Now you can calculate how long all the trials will take combined. &nbsp;3.4 s x 200 trials = 680 s, or 11.33 min. &nbsp;With the instructions, practice trials, and a break, that's about 15 minutes for the oddity task, which is totally doable.</span><br /><br /><em>Words or nonwords?</em><br /><span>If your task is examining discrimination, similarity, or categorization, it's best to use nonwords. By using nonwords, you won't need to worry about lexical frequency effects, such as the fact that people respond faster to more frequent words. Also, if you're testing learners and a control group of native speakers, the lexical knowledge of each group will likely vary, possibly affecting your results. &nbsp;For a lexical decision task you'll obviously need words, but you'll also need nonwords both as near-words to test lexical knowledge and as fillers.</span><br /><br /><span>Tips for making nonwords:</span><ul><li>While I personally haven't tried it,&nbsp;<a href="http://crr.ugent.be/programs-data/wuggy/general-information-and-overview-of-operation" target="_blank">Wuggy</a>&nbsp;seems like a useful program for creating nonwords based on words the user inputs. &nbsp;It can generate nonwords in Basque, Dutch, English, French, Serbian, and Spanish.</li><li>Choose phonotactically plausible nonwords. &nbsp;Using phonotactically infrequent patterns will be unrepresentative of the language you're testing, not to mention make it difficult for the speakers to produce usable stimuli. &nbsp;If the filler nonwords in a lexical decision task are phonotactically implausible, it will be obvious they are nonwords, possibly making participants more likely to incorrectly accept the&nbsp;near-words in the experiment.</li><li>Control the phonetic contexts surrounding the target segment, since context will affect perception. &nbsp;For example, American English listeners have more difficulty discriminating [y] and [u] in an alveolar context ([tyt] vs. [tut]) than a velar context ([kyk] vs. [kuk]).&nbsp;&nbsp;If you're testing discrimination, perceptual similarity, or categorization, you should pick perhaps two or three contexts for your stimuli.&nbsp;&nbsp;</li><li>Check to make sure your nonwords are truly nonwords, especially if you're a non-native speaker. &nbsp;In my work on Spanish, I like&nbsp;to check both WordReference.com and the dictionary of the Real Academia Espa&ntilde;ola, since although the RAE has more words, WordReference includes&nbsp;more slang.&nbsp; Seeing if Microsoft Word underlines all&nbsp;your stimuli can also be helpful, because if something is not underlined, then it's&nbsp;a real word.&nbsp; Be sure to check that all possible alternate spellings are also nonwords, since how the stimuli are pronounced is what is important. &nbsp;Once I get my final list, I have my speakers make sure there aren't any real words I missed. &nbsp;</li><li>Nonwords should not be words in the L1. &nbsp;This is especially important for lexical decision tasks.</li></ul><br /><span>Tips for choosing real words:</span><ul><li><a href="http://www.bcbl.eu/databases/espal/" target="_blank">EsPal</a>&nbsp;for Spanish and the&nbsp;<a href="http://elexicon.wustl.edu/default.asp" target="_blank">English Lexicon Project</a>&nbsp;for English allow you to input lexical properties like number of syllables and they output words that fit the criteria.
&nbsp;If you need minimal pairs, such as for a lexical decision task with repetition priming,&nbsp;<a href="http://www.minpairs.talktalk.net/" target="_blank">this page</a>&nbsp;is useful for English, though it is for British English so be careful if you work on American English.&nbsp;</li><li>Avoid cognates&nbsp;unless they are&nbsp;the focus of your study, since participants often react differently to them.&nbsp;If it isn't possible to avoid cognates, you may want to balance the number of cognates and non-cognates and check later&nbsp;if cognate status affected the results.</li><li>Control&nbsp;lexical frequency. &nbsp;Frequency from subtitles is the best predictor of participants' reaction times in lexical decision tasks, particularly for shorter words. &nbsp;Words should all be within a certain range, e.g. log frequency of at least 1.5,&nbsp;both for similar reaction times across stimuli and to ensure that learners will know the words.&nbsp; It's important to also have learners do a word familiarity questionnaire at the end of the experiment to verify that they knew the words, as&nbsp;frequency in a&nbsp;native-speaker&nbsp;corpus is not always an accurate predictor of what words learners know.</li></ul><br /><span>Tips for both words and nonwords:</span><ul><li>Avoid difficult segments unless they are the focus of your study. &nbsp;For example, if you are looking at the lexical representations&nbsp;of English vowels by Japanese listeners, but are (cruelly!) including a bunch of /l/ and /r/ words as stimuli, responses to these words are highly&nbsp;likely to be influenced by the presence of the liquids and not necessarily&nbsp;the test vowels.</li></ul></div>]]></content:encoded></item><item><title><![CDATA[Choosing a perception task]]></title><link><![CDATA[https://www.ddaidone.com/blog/choosing-a-perception-task]]></link><comments><![CDATA[https://www.ddaidone.com/blog/choosing-a-perception-task#comments]]></comments><pubDate>Fri, 09 Jun 2017 22:15:47 GMT</pubDate><category><![CDATA[Creating Perception Tasks]]></category><guid isPermaLink="false">https://www.ddaidone.com/blog/choosing-a-perception-task</guid><description><![CDATA[Welcome to my blog!&nbsp; I've decided to use this space as a how-to for creating and running perception experiments, both as a way to organize my thoughts and as a way to help you, random person on the internet.&nbsp; I'm writing this for an audience (assuming you exist) that has some knowledge of L2 phonology, but no practical experience running experiments.&nbsp;&nbsp;So let's get started!&nbsp; First of all, if you're excited to start a perception experiment, as we all should be, you have a  [...] ]]></description><content:encoded><![CDATA[<div class="paragraph"><span style="color:rgb(0, 0, 0)">Welcome to my blog!&nbsp; I've decided to use this space as a how-to for creating and running perception experiments, both as a way to organize my thoughts and as a way to help you, random person on the internet.&nbsp; I'm writing this for an audience (assuming you exist) that has some knowledge of L2 phonology, but no practical experience running experiments.&nbsp;&nbsp;</span><br /><br /><span style="color:rgb(0, 0, 0)">So let's get started!&nbsp; First of all, if you're excited to start a perception experiment, as we all should be, you have a research question in mind that you want answered.
&nbsp;This research question will determine what kind of task you should use, as different types of tasks examine different levels of processing.&nbsp; In this post I'll outline common types of research questions along with their corresponding appropriate task(s).<br />&#8203;</span><br /></div>  <div>  <!--BLOG_SUMMARY_END--></div>  <div class="paragraph"><font color="#000000"><strong>1. What sounds in the L1 are closest to these non-native/L2 sounds?</strong><br /><br /><em>Perceptual Assimilation:</em></font><ul><li><font color="#000000">Participants listen to a non-native/L2 stimulus and choose what L1 category is closest to this sound.&nbsp; Participants then judge&nbsp;on a scale&nbsp;how similar or different the stimulus was to the L1 category they chose.&nbsp; Categories may be represented in different ways.&nbsp; Often orthography is used on each button, either with the relevant categories written out&nbsp;alone, such as&nbsp;<em>a e i o u</em>&nbsp;&nbsp;for Spanish vowels (Escudero &amp; Vasiliev, 2011), or with keywords if the orthography is not transparent, such as&nbsp;<em>heed</em>&nbsp;to represent English /i/ (Tyler et al., 2014).&nbsp; IPA symbols have also been used (Strange et al., 2005). &nbsp;Here is&nbsp;an example of a perceptual assimilation experiment in Praat:</font><font color="#000000">&#8203;&#8203;</font></li></ul></div>  <span class='imgPusher' style='float:left;height:0px'></span><span style='display: table;width:auto;position:relative;float:left;max-width:100%;;clear:left;margin-top:0px;*margin-top:0px'><a><img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/editor/pa-screenshot.png?250" style="margin-top: 5px; margin-bottom: 10px; margin-left: 0px; margin-right: 10px; border-width:1px;padding:3px; max-width:100%" alt="Picture" class="galleryImageBorder wsite-image" /></a><span style="display: table-caption; caption-side: bottom; font-size: 90%; margin-top: -10px; margin-bottom: 10px; text-align: center;" class="wsite-caption"></span></span> <div class="paragraph" style="display:block;"><em>&#8203;</em><br /><font color="#000000">Additional information:</font><br /><font color="#000000">Studies that examine the relationship of L1 sounds to non-native or L2 sounds are typically carried out under the theoretical framework of the Perceptual Assimilation Model (PAM) (Best, 1995) or its adaptation to L2 learning (PAM-L2; Best &amp; Tyler, 2007).&nbsp; Researchers often use the relationship of L1 to non-native/L2 categories to make predictions about how accurately various contrasts will be discriminated (e.g. Tyler et al., 2014).&nbsp; We've found that perceptual similarity of non-native/L2 sounds to each other, rather than to L1 sounds, is a better predictor of discriminability.&nbsp; This can be computed either indirectly through perceptual assimilation overlap scores (how often two non-native sounds were perceived as the same set of L1 categories; see Levy, 2009, for more about this analysis) or through the results of perceptual similarity tasks.&nbsp;</font><br /><br /><strong><font color="#000000">2. What non-native/L2 sounds are perceived as similar to each other?</font></strong><br /><br /><em><font color="#000000">Similarity Judgment:</font></em><ul><li><font color="#000000">Participants hear two stimuli and rate their (dis)similarity on a scale. &nbsp;The size of the scale varies by study. &nbsp;For example,&nbsp;</font><span style="color:rgb(0, 0, 0)">Iverson et al. 
(2003) used a&nbsp;</span><font color="#000000">7-point scale, while&nbsp;</font><span style="color:rgb(0, 0, 0)">Fox, Flege, &amp; Munro (1995) used a&nbsp;</span><font color="#000000">9-point scale.</font>&#8203;</li></ul><br /><em><font color="#000000">Free Classification (aka Free Sort):</font></em><ul><li><font color="#000000">Participants are visually presented with all sound files at once.&nbsp; These are labeled with numbers (Daidone, Kruger, &amp; Lidster, 2015) or initials of the talkers (Clopper, 2008).&nbsp; They are free to click on each one and listen as many times as needed to make groups of similar-sounding stimuli. &nbsp;Here is an example in PowerPoint:</font></li></ul></div> <hr style="width:100%;clear:both;visibility:hidden;"></hr>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/fc_orig.png" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph"><font color="#000000">Additional information:</font><br /><font color="#000000">These kinds of research questions most closely align with the Speech Learning Model (SLM) (Flege, 1995). &nbsp;Similarity judgment has been used both with natural stimuli (</font><span style="color:rgb(0, 0, 0)">Fox, Flege, &amp; Munro, 1995) and&nbsp;</span><font color="#000000">with synthetic stimuli on a continuum in order to examine how the perceptual space is warped by the L1 (Iverson et al., 2003).&nbsp;&nbsp;Free classification or similarity judgment tasks have the advantage of not imposing a predetermined number of categories or category labels on the participant.&nbsp; A multi-dimensional scaling (MDS) analysis allows the researcher to investigate how many dimensions the participants are using to determine the similarity of stimuli. </font><span style="color:rgb(0, 0, 0)">Dimension scores can then be correlated with acoustic properties (e.g. F1, roundedness) to determine what these dimensions represent</span><font color="#000000"> (Daidone, Kruger, &amp; Lidster, 2015; Fox, Flege, &amp; Munro, 1995).&nbsp;</font>
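<br /><br /><font color="#000000">To make that concrete, here's a bare-bones sketch of a classical MDS analysis in base R.&nbsp; It assumes you've already built a dissimilarity matrix from your data; "diss" and "f1" below are hypothetical stand-ins (a square matrix with stimulus names as row names, where each cell is the proportion of participants who did not group two stimuli together, and a vector of F1 values in the same stimulus order):</font><br /><pre>
# Classical (metric) MDS on a dissimilarity matrix
fit &lt;- cmdscale(as.dist(diss), k = 2)  # k = number of dimensions to extract

# Plot the stimuli in the recovered perceptual space
plot(fit, type = "n", xlab = "Dimension 1", ylab = "Dimension 2")
text(fit, labels = rownames(diss))

# Correlate a dimension with an acoustic property to interpret it
cor(fit[, 1], f1)
</pre>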
<br /><br /><strong><font color="#000000">3. Can listeners discriminate this non-native/L2 contrast?</font></strong><br /><br /><em>&#8203;</em><font color="#000000"><em>AX: </em></font><ul><li><font color="#000000">Participants hear two sounds in a row and judge them as 'same' or 'different'.</font></li></ul><br /><font color="#000000"><em>ABX (AXB, XAB): </em></font><ul><li><font color="#000000">Participants hear three sounds in a row and decide whether the last stimulus (X) was the same as the first stimulus (A) or the second stimulus (B).&nbsp; In the AXB and XAB variations of this task, the critical stimulus X is presented at different positions within the triad, i.e. second and first, respectively. &nbsp;Below is an example of an AXB task through a web browser using jsPsych. &nbsp;We had participants use a mouse, and the robots raised their arms when the cursor hovered over them. &nbsp;</font><span style="color:rgb(0, 0, 0)">The second robot was grayed out and did not react to the cursor in order to emphasize that the second stimulus was not a possible choice.&nbsp;</span><font color="#000000">Other studies use a keyboard response in order to get more accurate reaction times, though it's not particularly meaningful for AXB since participants can often make a decision after hearing the second stimulus.</font></li></ul></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/axb_orig.png" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph"><font color="#000000"><em>Oddity:&nbsp;</em></font><ul><li><font color="#000000">Participants hear three sounds in a row and decide which stimulus was different, i.e. the odd one out.&nbsp; Typically participants also have the option to say "all the same" if they don't perceive any of the stimuli as different. &nbsp;In our task, we used an X as the option for "all the same."</font>&#8203;</li></ul></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:left"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/oddity_1_orig.png" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph"><font color="#000000"><em>Oddball:&nbsp;</em></font><ul><li><font color="#000000">Participants hear a sequence of repeating stimuli which change to a new category after a certain number of trials.&nbsp; Participants indicate when they heard a change in the category of the stimuli.&nbsp; This technique originated with studies of infants, in which a change in category is indicated by a head turn, rather than by a button press as with adults.</font></li></ul><br /><font color="#000000"><em>Sequence recall:</em></font><ul><li><font color="#000000">Participants are trained to associate two nonwords with two different keys. &nbsp;For example, in Dupoux et al. (2008) /'numi/ was matched to key 1 and /nu'mi/ was matched to key 2.&nbsp; Once participants are familiarized with the keys, they hear a sequence of these nonwords and then have to type in the sequence they heard. &nbsp;These sequences are followed by the word "okay" to prevent listeners from using echoic memory to complete the task. &nbsp;Sequences start with two stimuli and increase incrementally until six stimuli in a row are presented. &nbsp;</font></li></ul><br /><span>&#8203;&#8203;&#8203;</span><font color="#000000">Additional information:</font><br /><font color="#000000">Different discrimination tasks allow for different processing strategies, and tasks can be modified to increase or decrease the amount of cognitive load. &nbsp;</font><span style="color:rgb(0, 0, 0)">Strange and Shafer (2008) have a nice summary of perception tasks and the effects of different experiment conditions on performance.&nbsp;</span>&nbsp;<span style="color:rgb(0, 0, 0)">An increase in cognitive demand reduces the possibility that participants can use acoustic cues to complete the task. &nbsp;Physically different stimuli, multiple talkers, and embedding the stimuli in a context are all ways to force participants to process the contrast phonetically or phonologically, rather than searching for an acoustic match. &nbsp;Cognitive demand can also be increased through a higher memory load.
&nbsp;For example, at the higher sequence lengths in a sequence recall task, even the native speakers are not at ceiling. &nbsp;The effect of interstimulus interval (ISI) is somewhat mixed in the literature, with some studies finding a difference in performance at different ISIs (e.g. Werker &amp; Tees, 1984) and others not (e.g. Tyler &amp; Fenwick, 2012). &nbsp;Overall, it seems that both an extremely short ISI (0 ms) and an extremely long ISI (1500 ms) make the task more difficult. &nbsp;In my experience, an ISI of 1000 ms is already very boring, and participants are more likely to let their attention wander. &nbsp;</span><br /><br /><font color="#000000">In general, AX tasks are easiest for participants, with difficulty increasing for AXB and oddity, followed by sequence recall, particularly at higher sequence lengths. &nbsp;For example, in a series of studies Dupoux and colleagues found that French learners of Spanish could discriminate a stress contrast in an AX task with a single talker, but performed worse in an ABX task with multiple talkers. &nbsp;In a sequence recall task, the performance of French participants was negatively affected by an increased sequence length, as well as by the amount of phonetic variability present in the stimuli. &nbsp;In our research, we've found that the results of an AXB task with a 1000 ms ISI and an oddity task with a 400 ms ISI were very highly correlated. &nbsp;Given this, I suggest using an oddity task since 1) the task is more intuitive, so participants are less confused by it and 2) chance level is much lower (25% for oddity vs. 50% for AXB), so a higher variability in scores is possible with oddity.<br /><br />Results of discrimination tasks are typically reported in terms of accuracy or d' (d-prime), or sometimes the non-parametric A'. &nbsp;d' is often a better metric than accuracy because it is not affected by a participant's bias to answer one way or the other, e.g. a participant who responds 'same' to all trials.</font>
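<br /><br /><font color="#000000">If you want to compute d' yourself, here's a minimal sketch in R for an AX task, using made-up counts and a standard log-linear correction so that hit or false alarm rates of 0 or 1 don't produce infinite values:</font><br /><pre>
# Made-up counts: 40 'different' trials and 40 'same' trials
hits   &lt;- 38; misses &lt;- 2   # responses to the different trials
fas    &lt;- 10; crs    &lt;- 30  # responses to the same trials

# Log-linear correction: add 0.5 to each cell before computing rates
hit_rate &lt;- (hits + 0.5) / (hits + misses + 1)
fa_rate  &lt;- (fas + 0.5) / (fas + crs + 1)

# d' = z(hit rate) - z(false alarm rate)
dprime &lt;- qnorm(hit_rate) - qnorm(fa_rate)
dprime  # about 2.2 here; 0 would mean no sensitivity to the contrast
</pre>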
<br />&#8203;<br /><strong><font color="#000000">4. Can L2 learners identify these L2 sounds?</font></strong><br /><br /><font color="#000000"><em>Identification:</em></font><ul><li><font color="#000000">Participants listen to a stimulus and pick what&nbsp;they heard from various options presented to them. &nbsp;These options may be in the form of single segments or (non)words. &nbsp;Identification tasks have been paired with training sessions in which&nbsp;participants learn to identify a non-native/L2 contrast and later are tested on their accuracy in identifying the sounds in this contrast (e.g. Bradlow et al., 1999).&nbsp;&nbsp;This task has also been used with eyetracking in the visual world paradigm, in which pictures (or orthographic representations)&nbsp;of a target word, a possible competitor, and distractors&nbsp;are displayed on the screen while participants listen to the target stimulus&nbsp;(Weber &amp; Cutler, 2004). &nbsp;By tracking participants' gaze as they listen to the stimuli, it is possible to gain insight into lexical activation and competition.&nbsp;A variation&nbsp;of the identification&nbsp;task with synthetic stimuli on a continuum plus goodness ratings (e.g.&nbsp;<span style="color:rgb(0, 0, 0)">Iverson &amp; Kuhl, 1995)</span>&nbsp;has been used to evaluate the Native Language Magnet Model (NLM) (Kuhl, 1993; Kuhl et al., 2008) for L1 phonology.&nbsp;Identification tasks are&nbsp;not well suited for the investigation of non-native perception, since participants won't have labels for non-native sounds.&nbsp;This task is very useful, however, for examining whether learners associate certain variants with their respective phonemes. &nbsp;For example, Schmidt (2011) examined whether learners had acquired [h] as a possible variant of coda /s/ in Spanish by playing nonwords like [bahpe] and providing different possible orthographic representations to choose from.&nbsp;&nbsp;The following is a screenshot of the identification task in Praat from Schmidt (2011, p. 207):</font></li></ul></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.ddaidone.com/uploads/1/0/5/2/105292729/id_orig.png" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph"><br /><strong><font color="#000000">&#8203;5. Can L2 learners represent this contrast in their mental lexicon?</font></strong><br /><br /><font color="#000000"><em>Lexical decision:</em></font><ul><li><font color="#000000">Participants hear a stimulus and decide if it is a word or nonword. &nbsp;If participants do not have a contrast between two L2 sounds in their lexical representations, they will frequently accept nonwords as words. &nbsp;For example, in Daidone and Darcy (2014), English-speaking learners of Spanish often accepted the trill as a possible realization of the tap, e.g. they indicated that&nbsp;<em>quierro</em>&nbsp;/kjero/&nbsp;was a word, when the actual word is&nbsp;<em>quiero</em>&nbsp;/kje&#638;o/. &nbsp;Lexical decision tasks are often paired with priming in order to investigate lexical activation and competition. &nbsp;Most of these lexical decision tasks are cross-modal, such that participants listen to a (non)word then respond to a visual, orthographic target. If participants are faster at responding to the word after a prime that contains a different sound, e.g. L1 Dutch speakers responding&nbsp;faster to&nbsp;<em>groove&nbsp;</em>after hearing the nonword&nbsp;<em>groof</em>, they have not encoded the relevant contrast in their mental lexicon, in this case final consonant voicing (Broersma &amp; Cutler, 2008).&nbsp;These primes can also be fragment primes, i.e. only&nbsp;the beginning of&nbsp;a word, such as&nbsp;<em>daf&nbsp;</em>from <em>daffodil&nbsp;</em>followed by the visual&nbsp;target <em>deaf </em>in order to investigate whether Dutch learners of English have encoded this vowel contrast (Broersma, 2012).&nbsp; A variation of this task uses repetition priming. &nbsp;If learners do not have a contrast between words in a minimal pair, hearing one word of the minimal pair will result in a faster reaction time when the second word appears in the task (e.g. Pallier, Colom&eacute;, &amp; Sebasti&aacute;n-Gall&eacute;s, 2001). &nbsp;It is important to use the&nbsp;keyboard or a response box for accurate reaction times for this task.</font></li></ul><br /><em><font color="#000000">Word Learning:</font></em><ul><li><font color="#000000">Participants learn a small group of nonwords that contain the relevant contrast, e.g. Japanese minimal pairs like <em>keto&nbsp;</em>and <em>ketto&nbsp;</em>with singleton and geminate consonants (Hayes-Harb &amp; Masuda, 2008).
&nbsp;In the word-learning phase, participants learn to associate these words with a picture of their meaning. &nbsp;This is followed by practice in which participants must indicate whether each word-picture pair they are given is correct; at this stage, they do not have to be sensitive to the test contrast. &nbsp;Participants who fail to reach a certain criterion (e.g. 90% accuracy) must repeat this phase until they pass. &nbsp;The test phase is similar to the practice phase, but no feedback is given and word-picture pairs that require sensitivity to the test contrast are presented, e.g. hearing&nbsp;<em>keto&nbsp;</em>but seeing the picture for&nbsp;<em>ketto</em>. &nbsp;This type of task is commonly used to evaluate the effect of different orthographic representations on participants'&nbsp;ability to encode contrasts in their mental lexicon (e.g. Showalter &amp; Hayes-Harb, 2013).</font></li></ul> <font color="#000000">&#8203;</font><br /><br /><font color="#000000">I hope this summary of perception tasks has been helpful! &nbsp;I also recommend checking out <a href="https://people.ucsc.edu/~gmcguir1/experiment_designs.pdf" target="_blank">"A Brief Primer on Experimental Designs for Speech Perception Research"</a> by Grant McGuire at UC Santa Cruz. &nbsp;As you're deciding on a perception task, keep in mind that researchers frequently pair tasks together to look at different levels of processing. &nbsp;A study may examine, for example, both participants' ability to discriminate a contrast and their ability to represent this contrast in lexical representations. Previous research has found that discrimination is easier than identification, which in turn is easier than any task tapping a lexical level of processing (D&iacute;az et al., 2012; Ingram &amp; Park, 1998; Sebasti&aacute;n-Gall&eacute;s &amp; Baus, 2005).</font><br /><br /><br /></div>  <div class="paragraph"><span style="color:rgb(0, 0, 0)">References<br /><br />&#8203;Best, C. T. (1995).&nbsp;A&nbsp;direct&nbsp;realist&nbsp;view of&nbsp;cross-language&nbsp;speech&nbsp;perception.&nbsp; In&nbsp;W. Strange (Ed.),&nbsp;</span><em style="color:rgb(0, 0, 0)">Speech perception and linguistic experience: Issues in cross-language research</em><span style="color:rgb(0, 0, 0)">&nbsp;(pp.&nbsp;171-204).&nbsp;Timonium, MD: York Press.</span><br /><br /><font color="#000000">Best, C., &amp; Tyler, M. (2007). Nonnative and second-language speech perception: Commonalities and complementarities. In O.-S. Bohn &amp; M. Munro (Eds.),&nbsp;<em>Language experience in second language speech learning: In honor of James Emil Flege</em>&nbsp;(pp. 13-34). Philadelphia: John Benjamins.<br /><br />Bradlow, A. R., Akahane-Yamada, R., Pisoni, D. B., &amp; Tohkura, Y. I. (1999). Training Japanese listeners to identify English /r/ and /l/: Long-term retention of learning in perception and production. <em>Attention, Perception, &amp; Psychophysics</em>, <em>61</em>(5), 977-985.</font><br /><br /><font color="#000000">Broersma, M. (2012). Increased lexical activation and reduced competition in second-language listening. <em>Language and Cognitive Processes</em>, <em>27</em>(7-8), 1205-1224.</font><br /><br /><font color="#000000">Broersma, M., &amp; Cutler, A. (2008). Phantom word activation in L2. <em>System</em>, <em>36</em>(1), 22-34.</font><br /><br /><font color="#000000">Clopper, C. G. (2008).
Auditory free classification: Methods and analysis.&nbsp;<em>Behavior Research Methods</em>,&nbsp;<em>40</em>(2), 575-581.</font><br /><br /><font color="#000000">Daidone, D. &amp; Darcy, I. (2014).&nbsp; Quierro comprar una guitara: Lexical encoding of the tap and trill by L2 learners of Spanish.&nbsp; In R. T. Miller, K. I. Martin, C. M. Eddington, A. Henery, N. Marcos Miguel, A. M. Tseng, &hellip;D. Walter (Eds.), <em>Selected Proceedings of the 2012 Second Language Research Forum </em>(pp. 39-50). Somerville, MA: Cascadilla Proceedings Project.</font><br /><br /><font color="#2a2a2a">Daidone, D., Kruger, F., &amp; Lidster, R. (2015). Perceptual assimilation and free classification of German vowels by American English listeners. In The Scottish Consortium for ICPhS 2015 (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences. Glasgow, UK: Glasgow University.</font><br /><br /><font color="#000000">D&iacute;az, B., Mitterer, H., Broersma, M., &amp; Sebasti&aacute;n-Gall&eacute;s, N. (2012). Individual differences in late bilinguals' L2 phonological processes: From acoustic-phonetic analysis to lexical access. <em>Learning and Individual Differences, 22</em>(6), 680-689.&nbsp;</font><br /><br /><font color="#2a2a2a">Dupoux, E., Sebasti&aacute;n-Gall&eacute;s, N., Navarrete, E., &amp; Peperkamp, S. (2008). Persistent stress &lsquo;deafness&rsquo;: The case of French learners of Spanish. <em>Cognition</em>, <em>106</em>(2), 682-706.&nbsp;</font><br /><br /><font color="#000000">Escudero, P., &amp; Vasiliev, P. (2011). Cross-language acoustic similarity predicts perceptual assimilation of Canadian English and Canadian French vowels.&nbsp;</font><em style="color:rgb(0, 0, 0)">The Journal of the Acoustical Society of America</em><font color="#000000">,&nbsp;</font><em style="color:rgb(0, 0, 0)">130</em><font color="#000000">(5), EL277-EL283.</font><br /><br /><font color="#000000">Flege, J. E. (1995). Second language speech learning: Theory, findings, and problems. In W. Strange (Ed.),&nbsp;</font><em style="color:rgb(0, 0, 0)">Speech perception and linguistic experience: Issues in cross-language research</em><font color="#000000">&nbsp;(pp. 233-277). Timonium, MD: York Press.</font><br /><br /><font color="#000000">Fox, R. A., Flege, J. E., &amp; Munro, M. J. (1995). The perception of English and Spanish vowels by native English and Spanish listeners: A multidimensional scaling analysis. <em>The Journal of the Acoustical Society of America</em>, <em>97</em>(4), 2540-2551.</font><br />&#8203;<br /><font color="#000000">Hayes-Harb, R., &amp; Masuda, K. (2008). Development of the ability to lexically encode novel second language phonemic contrasts. <em>Second Language Research</em>, <em>24</em>(1), 5-33.</font><br /><br /><font color="#000000">Ingram, J. C., &amp; Park, S. G. (1998). Language, context, and speaker effects in the identification and discrimination of English /r/ and /l/ by Japanese and Korean listeners. <em>The Journal of the Acoustical Society of America</em>, <em>103</em>(2), 1161-1174.</font><br /><br /><font color="#000000">Iverson, P., &amp; Kuhl, P. K. (1995). Mapping the perceptual magnet effect for speech using signal detection theory and multidimensional scaling.&nbsp;</font><em style="color:rgb(0, 0, 0)">The Journal of the Acoustical Society of America</em><font color="#000000">,&nbsp;</font><em style="color:rgb(0, 0, 0)">97</em><font color="#000000">(1), 553-562.<br /><br />Iverson, P., Kuhl, P. K., Akahane-Yamada, R., Diesch, E., Tohkura, Y. 
I., Kettermann, A., &amp; Siebert, C. (2003). A perceptual interference account of acquisition difficulties for non-native phonemes.&nbsp;<em>Cognition</em>,&nbsp;<em>87</em>(1), B47-B57.</font><br /><br /><font color="#2a2a2a">Kuhl, P. K. (1993). Innate predispositions and the effects of experience in speech perception: The native language magnet theory. In </font><em style="color:rgb(42, 42, 42)">Developmental neurocognition: Speech and face processing in the first year of life</em><font color="#2a2a2a"> (pp. 259-274). Springer: Netherlands.</font><br /><br /><font color="#000000">Kuhl, P. K., Conboy, B. T., Coffey-Corina, S., Padden, D., Rivera-Gaxiola, M., &amp; Nelson, T. (2008). Phonetic learning as a pathway to language: New data and native language magnet theory expanded (NLM-e).&nbsp;</font><em style="color:rgb(0, 0, 0)">Philosophical Transactions of the Royal Society B: Biological Sciences, 363</em><font color="#000000">(1493), 979-1000.</font><br /><br /><font color="#000000">Levy, E. S. (2009). On the assimilation-discrimination relationship in American English adults' French vowel learning.&nbsp;<em>The Journal of the Acoustical Society of America</em>,&nbsp;<em>126</em>(5), 2670-2682.<br /><br />Pallier, C., Colom&eacute;, A., &amp; Sebasti&aacute;n-Gall&eacute;s, N. (2001). The influence of native-language phonology on lexical access: Exemplar-based versus abstract lexical entries. <em>Psychological Science</em>, <em>12</em>(6), 445-449.<br /><br />Schmidt, L. B. (2011). <em>Acquisition of dialectal variation in a second language: L2 perception of aspiration of Spanish /s/ </em>(Unpublished doctoral dissertation). Indiana University, Bloomington, Indiana.&nbsp;</font><br /><br /><font color="#000000">Sebasti&aacute;n-Gall&eacute;s, N., &amp; Baus, C. (2005). On the relationship between perception and production in L2 categories. In A. Cutler (Ed.), <em>Twenty-first century psycholinguistics: Four cornerstones</em> (pp. 279-292). Mahwah, NJ: Lawrence Erlbaum Associates.</font><br /><br /><font color="#000000">Showalter, C. E., &amp; Hayes-Harb, R. (2013). Unfamiliar orthographic information and second language word learning: A novel lexicon study. <em>Second Language Research</em>, <em>29</em>(2), 185-200.</font><br /><br /><font color="#000000">Strange, W., Bohn, O. S., Nishi, K., &amp; Trent, S. A. (2005). Contextual variation in the acoustic and perceptual similarity of North German and American English vowels. <em>The Journal of the Acoustical Society of America</em>, <em>118</em>(3), 1751-1762.</font><br /><br /><font color="#000000">Strange, W., &amp; Shafer, V. L. (2008). Speech perception in second language learners: The re-education of selective perception.&nbsp;In J. G. Hansen Edwards &amp; M. L. Zampini (Eds.),&nbsp;</font><em style="color:rgb(0, 0, 0)">Phonology and second language acquisition</em><font color="#000000">&nbsp;(pp.&nbsp;153-192).&nbsp;Philadelphia, PA: John Benjamins.</font><br /><br /><font color="#2a2a2a">Tyler, M. D., &amp; Fenwick, S. (2012). Perceptual assimilation of Arabic voiceless fricatives by English monolinguals. In <em>INTERSPEECH</em> <em>2012&nbsp;</em>(pp. 911-914).</font><br /><br /><font color="#000000">Tyler, M. D., Best, C. T., Faber, A., &amp; Levitt, A. G. (2014). Perceptual assimilation and </font><font color="#2a2a2a">discrimination of non-native vowel contrasts.&nbsp;<em>Phonetica</em>,&nbsp;<em>71</em>(1), 4-21.</font><br /><br /><font color="#000000">Weber, A., &amp; Cutler, A. (2004). 
Lexical competition in non-native spoken-word recognition. <em>Journal of Memory and Language</em>, <em>50</em>(1), 1-25.</font><br /><br /><font color="#2a2a2a">Werker, J. F., &amp; Tees, R. C. (1984). Phonemic and phonetic factors in adult cross&#8208;language speech perception. </font><em style="color:rgb(42, 42, 42)">The Journal of the Acoustical Society of America</em><font color="#2a2a2a">, </font><em style="color:rgb(42, 42, 42)">75</em><font color="#2a2a2a">(6), 1866-1878.</font></div>]]></content:encoded></item></channel></rss>