Journal of Cognitive Neuroscience
-
Whether the cortical processing of nociceptive input relies on the activity of nociceptive-specific neurons or whether it relies on the activity of neurons also involved in processing nonnociceptive sensory input remains a matter of debate. Here, we combined EEG "frequency tagging" of steady-state evoked potentials (SS-EPs) with an intermodal selective attention paradigm to test whether the cortical processing of nociceptive input relies on nociceptive-specific neuronal populations that can be selectively modulated by top-down attention. ⋯ We found that selectively attending to nociceptive or vibrotactile somatosensory input indistinctly enhances the magnitude of nociceptive and vibrotactile SS-EPs, whereas selectively attending to nociceptive or visual input independently enhances the magnitude of the SS-EP elicited by the attended sensory input. This differential effect indicates that the processing of nociceptive input involves neuronal populations also involved in the processing of touch, but distinct from the neuronal populations involved in vision.
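The frequency-tagging logic behind SS-EPs can be illustrated with a toy simulation: a stimulus modulated at a fixed frequency drives a periodic cortical response at that same frequency, which appears as a peak in the amplitude spectrum of the EEG. This is only a minimal sketch of the principle; the sampling rate, tagging frequency, signal amplitude, and noise level below are arbitrary assumptions, not values from the study.

```python
import numpy as np

# Simulated "EEG" for frequency tagging: a steady-state response at the
# tagging frequency embedded in Gaussian noise. All parameters are
# illustrative assumptions, not taken from the study.
fs = 250.0                          # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)        # 10 s of simulated recording
tag_freq = 7.0                      # assumed stimulus modulation frequency (Hz)

rng = np.random.default_rng(0)
signal = 2.0 * np.sin(2 * np.pi * tag_freq * t) + rng.normal(0, 1, t.size)

# Amplitude spectrum; the steady-state response shows up at tag_freq.
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

peak_freq = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
# The spectral peak falls at the tagging frequency, so responses to
# concurrent stimuli tagged at different frequencies can be separated.
```

Because each stimulus is tagged at its own frequency, attention-related changes in the response to each stimulus can be read out independently from the corresponding spectral peaks.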
-
Behavioral inhibition and performance monitoring are critical cognitive functions supported by distributed neural networks including the pFC. We examined neurophysiological correlates of motor response inhibition and action monitoring in patients with focal orbitofrontal cortex (OFC) lesions (n = 12) after resection of a primary intracranial tumor or contusion following traumatic brain injury. Healthy participants served as controls (n = 14). ⋯ This effect was particularly evident in patients whose lesion extended to the subgenual cingulate cortex. In summary, although response inhibition was not impaired, the diminished stop N2 and ERN support a critical role of the OFC in action monitoring. Moreover, the increased stop P3, error positivity, and post-error beta response indicate that OFC injury affected the evaluation of action outcomes and support the notion that the OFC is relevant for processing abstract reinforcers, such as performing correctly in the task.
-
The aim of the current study was to shed further light on the control processes that shape semantic access and selection during speech production. These processes have been linked to differential cortical activation in the left inferior frontal gyrus (IFG) and the left middle temporal gyrus (MTG); however, the specific function of these regions has not yet been fully elucidated. We applied transcranial direct current stimulation to the left IFG and the left MTG (or sham stimulation) while participants named pictures in the presence of associatively related, categorically related, or unrelated distractor words. ⋯ Associative facilitation occurred for faster responses, whereas associative interference resulted in slower responses under MTG stimulation. This reduction of the associative facilitation effect under transcranial direct current stimulation may be caused by an unspecific overactivation in the lexicon or by the promotion of competition among associatively related representations. Taken together, the results suggest that the MTG is especially involved in the processes underlying associative facilitation and that semantic interference and associative facilitation are linked to differential activation in the brain.
-
Our ability to make decisions is predicated upon our knowledge of the outcomes of the actions available to us. Reinforcement learning theory posits that actions followed by a reward or punishment acquire value through the computation of prediction errors: discrepancies between the predicted and the actual reward. A multitude of neuroimaging studies have demonstrated that rewards and punishments evoke neural responses that appear to reflect reinforcement learning prediction errors [e.g., Krigolson, O. ⋯ Furthermore, we demonstrate the existence of a reward positivity at choice presentation, a previously unreported ERP component with a timing and topography similar to those of the feedback error-related negativity, which increased in amplitude with learning. The pattern of results we observed mirrored the output of a computational model that we implemented to compute reward prediction errors and the changes in the amplitude of these prediction errors at the time of choice presentation and reward delivery. Our results provide further evidence that the computations underlying human learning and decision-making follow reinforcement learning principles.
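The prediction-error computation described above can be sketched with a standard Rescorla-Wagner-style update: the prediction error is the difference between the actual and predicted reward, and the value estimate moves toward the reward by a fraction of that error. This is a generic textbook sketch, not the study's model; the learning rate and reward values are illustrative assumptions.

```python
def rpe_update(value, reward, alpha=0.1):
    """Return the reward prediction error and the updated value estimate.

    alpha is an assumed learning rate, not a parameter from the study.
    """
    delta = reward - value          # prediction error: actual minus predicted
    return delta, value + alpha * delta

value = 0.0
for trial in range(100):            # repeated pairings with a fixed reward
    delta, value = rpe_update(value, reward=1.0)
# As the value estimate converges on the delivered reward, the prediction
# error shrinks toward zero, mirroring the learning-related amplitude
# changes the abstract describes.
```

In this scheme, early trials produce large prediction errors and late trials produce small ones, which is the pattern a learning-sensitive ERP component would track.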