CoDAS
https://codas.org.br/article/doi/10.1590/2317-1782/20202018324
Original Article

Investigation of the neural discrimination of acoustic characteristics of speech sounds in normal-hearing individuals through Frequency-following Response (FFR)

Caroline Nunes Rocha-Muniz, Eliane Schochat

Abstract

Purpose: To evaluate how the auditory pathways encode and discriminate the plosive syllables [ga], [da], and [ba] using the auditory evoked Frequency-following Response (FFR) in children with typical development. Methods: Twenty children aged 6-12 years were evaluated using the FFR for the [ga], [da], and [ba] stimuli. The stimuli were composed of six formants and differed in the F2 and F3 transitions (transient portion); the remaining formants were identical across the three syllables (sustained portion). The latencies of the 16 waves corresponding to the transient portion of the stimulus (<70 ms) and of the 21 waves corresponding to the sustained portion (90-160 ms) were analyzed in the neural responses obtained for each syllable. Results: The transient-portion latencies differed among the three syllables, indicating that the acoustic characteristics of these syllables are distinguished in their neural representations. In addition, the transient-portion latencies increased progressively in the order [ga] < [da] < [ba], whereas no significant differences were observed in the sustained portion. Conclusion: The FFR proved to be an efficient tool for investigating the subcortical discrimination of acoustic differences between speech sounds, since it demonstrated different electrophysiological responses for the three evoked syllables. Latency changes were observed in the transient portion (consonants) but not in the sustained portion (vowel) for the three stimuli. These results indicate the neural ability to distinguish the acoustic characteristics of the [ga], [da], and [ba] stimuli.
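
The analysis summarized above compares the latency of each response wave across the three stimuli within the same group of children, separately for the transient (<70 ms) and sustained (90-160 ms) portions. The sketch below is only an illustration of that repeated-measures comparison on simulated data; the latency values, the wave choices, and the use of a Friedman test are assumptions made for the example, not the authors' actual analysis pipeline.

import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
n_children = 20  # same sample size as in the study

def simulate_latencies(base_ms, shift_ms):
    # One simulated latency value (ms) per child for one wave and one stimulus.
    return base_ms + shift_ms + rng.normal(0.0, 0.3, n_children)

# Hypothetical transient-portion wave: latency shifts with the consonant (ga < da < ba).
transient = {
    "ga": simulate_latencies(30.0, 0.0),
    "da": simulate_latencies(30.0, 0.4),
    "ba": simulate_latencies(30.0, 0.8),
}

# Hypothetical sustained-portion wave: no stimulus-dependent shift (the vowel is identical).
sustained = {
    "ga": simulate_latencies(100.0, 0.0),
    "da": simulate_latencies(100.0, 0.0),
    "ba": simulate_latencies(100.0, 0.0),
}

for name, waves in (("transient", transient), ("sustained", sustained)):
    # Repeated-measures comparison of the same wave's latency across the three syllables.
    stat, p = friedmanchisquare(waves["ga"], waves["da"], waves["ba"])
    print(f"{name} wave: Friedman chi2 = {stat:.2f}, p = {p:.4f}")

In this toy setup, only the transient wave should show a significant stimulus effect, mirroring the pattern reported for the real FFR recordings.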

Keywords

Audiology; Electrophysiology; Auditory Pathways; Auditory Perception; Speech Perception

Submitted:
10/05/2019

Accepted:
12/04/2020
