Bilingual interactive activation plus (BIA+) is a model for understanding the process of bilingual language comprehension and consists of two interactive subsystems: the word identification subsystem and the task/decision subsystem.[1] It is the successor of the Bilingual Interactive Activation (BIA) model,[2] which was updated in 2002 to include phonological and semantic lexical representations, revise the role of language nodes, and specify the purely bottom-up nature of bilingual language processing.
The BIA+ is one of many models defined on the basis of data from psycholinguistic and behavioral studies, which investigate how bilinguals' languages interact during listening, reading, and speaking. BIA+ is now also supported by neuroimaging data linking it to more neurally inspired models, which place a greater focus on the brain areas and mechanisms involved in these tasks.
The two basic tools in these studies are the event-related potential (ERP), which has high temporal resolution but low spatial resolution, and functional magnetic resonance imaging (fMRI), which typically has high spatial resolution but low temporal resolution. When used together, however, these two methods can generate a more complete picture of the time course and interactivity of bilingual language processing according to the BIA+ model.[1] These methods must nonetheless be interpreted carefully, as overlapping activation areas in the brain do not imply that there is no functional separation between the two languages at the neuronal or higher-order level.[3]
According to the BIA+ model shown in the figure, during word identification the visual input activates sublexical orthographic representations, which simultaneously activate both whole-word orthographic lexical representations and sublexical phonological representations. Both whole-word orthographic and phonological representations then activate semantic representations and language nodes, which indicate membership in a particular language. All of this information is then used by the task/decision subsystem to carry out the remainder of the task at hand. The two subsystems are further described by the assumptions associated with them below.
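The cascade just described can be sketched as a toy pipeline. The three-word lexicon, the bigram coding of sublexical orthography, and the language tags below are hypothetical illustrations, not part of the model's specification; note how candidates from both languages become active in parallel:

```python
def identify_word(letters):
    """Toy sketch of the BIA+ word-identification cascade."""
    # Stage 1: visual input activates sublexical orthography,
    # represented here crudely as the set of letter bigrams.
    sublexical_orth = {letters[i:i + 2] for i in range(len(letters) - 1)}

    # Stage 2: whole-word orthographic representations sharing those
    # bigrams activate in parallel (sublexical phonology is omitted).
    lexicon = {"ball": "en", "fall": "en", "bala": "es"}  # word -> language node
    candidates = {w for w in lexicon
                  if sublexical_orth & {w[i:i + 2] for i in range(len(w) - 1)}}

    # Stage 3: semantics and language nodes activate for every candidate;
    # the language tag marks membership but does not gate activation.
    return {w: lexicon[w] for w in candidates}

print(identify_word("ball"))  # words from both languages are co-activated
```

Because the language node only labels each candidate, the English input "ball" leaves the Spanish neighbor "bala" active as well, consistent with language-nonselective access.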
The integrated lexicon assumption describes the interactivity of the visual representation of words or word parts (orthography), the phonological or auditory component of language processing, and the semantic representations of word meaning.[4] This assumption was tested with orthographic neighbors: words of the same length that differ by only one letter (e.g. BALL and FALL). The number of target- and nontarget-language neighbors influenced target word processing in both the primary language (L1) and the secondary language (L2).[5] This cross-language neighborhood effect is thought to reflect co-activation of words regardless of the language they belong to; that is, lexical access that is language-nonselective. Both target and nontarget languages can be automatically and unconsciously activated even in a purely monolingual mode. This does not imply, however, that there may not be features unique to one language (e.g. the use of different alphabets) or that, at the semantic level, there are no shared features.
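The neighborhood manipulation above can be made concrete with a short sketch that counts same-length words differing from a target in exactly one letter; the word list is a hypothetical illustration, with "bala" (Spanish) standing in as a cross-language neighbor:

```python
def orthographic_neighbors(target, lexicon):
    """Words of the same length that differ from target in exactly one letter."""
    return [w for w in lexicon
            if len(w) == len(target)
            and sum(a != b for a, b in zip(w, target)) == 1]

# Mixed English/Spanish word list; "bala" is a cross-language neighbor.
lexicon = ["ball", "fall", "tall", "bell", "bala", "bolt"]
print(orthographic_neighbors("ball", lexicon))  # -> ['fall', 'tall', 'bell', 'bala']
```

Under the integrated lexicon assumption, both the within-language neighbors and "bala" would contribute to the neighborhood effect on processing BALL.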
This assumption states that language nodes/tags exist to provide a representation of language membership based on information from the upstream orthographic and phonological word identification processes. According to the BIA+ model, these tags have no effect on the activation levels of word representations.[1] The activation of these nodes is postlexical: their existence enables bilingual individuals to avoid excessive interference from the nontarget language while they process one of their languages.
Parallel access assumes that lexical access is nonselective and that candidate words from both languages are activated in the bilingual brain when exposed to the same stimulus. For example, test subjects reading in their second language have been found to unconsciously translate into their primary language.[6] N400 stimulus response measurements show that semantic priming effects occur in both languages and that an individual cannot consciously restrict attention to only one language, even when told to ignore the other.[7] This language-nonselective lexical access has been shown during semantic activation across languages, but also at the orthographic and phonological levels.
The temporal delay assumption is based on the principle of resting-level activation, which reflects the bilingual's frequency of word use: high-frequency words correspond to high resting-level activation, and rarely used words correspond to low resting-level activation. A high resting level is one that is less negative, or closer to zero, the point of activation, and therefore requires less input to become activated. Because the less commonly used words of L2 have a lower resting-level activation, L1 is likely to be activated before L2, as shown by N400 ERP patterns.[8] This resting-level activation of words also reflects the proficiency of bilinguals and their frequency of usage of the two languages. When a bilingual's proficiency is lower in L2 than in L1, the activation of L2 lexical representations will be further delayed, as more extensive or higher-level brain activation is necessary for language control.[4] Both low- and high-proficiency bilinguals show parallel activation of word representations; however, the less proficient language, L2, becomes active more slowly, contributing to the temporal delay assumption.
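A minimal sketch of this assumption, with arbitrary illustrative units for resting level and input strength (the specific numbers are not drawn from the BIA+ literature):

```python
def cycles_to_threshold(resting_level, input_per_cycle=1, threshold=0):
    """Count input cycles until activation rises from its resting level to threshold."""
    activation = resting_level
    cycles = 0
    while activation < threshold:
        activation += input_per_cycle
        cycles += 1
    return cycles

# A less negative resting level (higher word frequency) reaches
# the activation point (zero) in fewer cycles.
l1_cycles = cycles_to_threshold(-2)   # frequent L1 word, resting level near zero
l2_cycles = cycles_to_threshold(-6)   # rarer L2 word, more negative resting level
print(l1_cycles, l2_cycles)           # the L2 word lags behind the L1 word
```

The gap between the two counts is the toy analogue of the temporal delay: both words are on the way to activation in parallel, but the lower-frequency L2 word arrives later.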
The locations of many of the word identification processing tasks have been determined with fMRI studies. Word retrieval is localized in Broca's area of the prefrontal cortex,[9] whereas storage of information is localized in the inferior temporal lobes. Globally, the same brain areas have been shown to be activated across L1 and L2 in highly proficient bilinguals. Some subtle differences between L1 and L2 activations emerge, though, when testing less proficient bilinguals.
The task/decision subsystem of the BIA+ model determines which actions must be executed for the task at hand based on the relevant information that becomes available after word identification processing.[1] This subsystem involves many of the executive processes including monitoring and control associated with the prefrontal cortex.
Action plans that meet the task at hand are executed by the task/decision system on the basis of activation information from the word identification subsystem.[7] Studies testing bilinguals with homographs showed that conflict between target- and nontarget-language readings of a homograph still produced a difference in activation relative to a control word, implying that bilinguals are not able to regulate activation in the word identification system.[10] Therefore, the action plans of the task/decision system have no direct influence on activation in the word identification subsystem.
The neural correlates of the task/decision subsystem consist of multiple components that map onto different areas of the prefrontal cortex responsible for executive control functions. For example, the general executive functions of language switching have been found to activate the anterior cingulate cortex and dorsolateral prefrontal cortex.[11][12] Translation, on the other hand, requires controlled actions on language representations and has been associated with the left basal ganglia.[12][13] The left caudate nucleus has been associated with control of the in-use language,[14] and the left mid-prefrontal cortex is responsible for monitoring interference and suppressing competing responses between languages.[13][15]
According to the BIA+ model, when a bilingual with English as their primary language and Spanish as their secondary language translates the word advertencia from Spanish to English, several steps occur. The bilingual uses orthographic and phonological cues to differentiate this word from the similar English word advertisement. At this point, however, the bilingual automatically derives the semantic meaning of the word, not only the correct Spanish meaning of advertencia, which is warning, but also the Spanish translation of advertisement, which is publicidad.
This information would then be stored in the bilingual’s working memory and used in the task/decision system to determine which of the two translations best fits the task at hand. Since the original instructions were to translate from Spanish to English, the bilingual would choose the correct translation of advertencia to be warning and not advertisement.
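This decision step can be caricatured as filtering the co-activated readings by the task's source language; the two-entry set of readings and the tagging scheme below are hypothetical illustrations of the worked example above:

```python
# Co-activated readings after word identification, tagged by source language.
activated = {
    "advertencia (es)": "warning",      # correct Spanish -> English translation
    "advertisement (en)": "publicidad",  # co-activated English false friend
}

def decide(task_source_lang, activated_readings):
    """Keep only readings whose source word matches the task's source language."""
    return {word: translation
            for word, translation in activated_readings.items()
            if word.endswith(f"({task_source_lang})")}

print(decide("es", activated))  # -> {'advertencia (es)': 'warning'}
```

Crucially, in BIA+ this selection happens only in the task/decision subsystem: the rejected reading was still activated upstream, and the decision filter cannot reach back and suppress it there.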
While the BIA+ model shares several similarities with its predecessor, the BIA model, a few distinct differences exist between the two. First and most notable is the purely bottom-up nature of the BIA+ model, which assumes that information from the task/decision subsystem cannot influence the word identification subsystem, whereas the BIA model assumes that the two systems can fully interact.
Second is that the language membership nodes of the BIA+ model do not affect the activation levels of the word identification system, whereas they play an inhibitory role in the BIA model.
Finally, participant expectations could potentially affect the task/decision system in the BIA+ model, whereas the BIA model assumes that expectations have no strong effect on the activation state of words.[1]
The BIA+ model has been supported by many quantitative neuroimaging studies, but more research needs to be completed in order to strengthen the model's standing among accepted models of bilingual language processing. In the task/decision system, the task components are well-defined (e.g. translation, language switching), but the decision components involved in executing these tasks are underspecified. The relationships among the components of this subsystem need further exploration in order to be fully understood.
Scientists are also considering the use of magnetoencephalography (MEG) in future studies. This technology would link spatial activation patterns with the temporal course of the brain response more accurately than combining the separate, more limited response data from ERP and fMRI.
Not only have studies suggested that the executive functioning involved in bilingualism extends beyond the language system, but bilinguals have also been shown to be faster processors who display fewer conflict effects than monolinguals in attentional tasks.[16] This research implies that there may be spillover effects of learning a second language on other areas of cognitive function that could be explored.
One future direction for theories of bilingual word recognition is the investigation of developmental aspects of bilingual lexical access.[17] Most studies have examined highly proficient bilinguals; few have looked at low-proficiency bilinguals or L2 learners. This new direction could yield many educational applications.