Children today encounter and utilize technology constantly, both at home and in school. Television, DVDs, video games, the internet, cell phones and PDAs all now play a formative role in many children’s development. Given that the term “technology” subsumes a large variety of somewhat independent items, it may not be surprising that current research indicates causes both for optimism and for concern, depending upon the content of the technology, the context in which the technology immerses the user, and the developmental stage of the user. Furthermore, because the field is still, relative to other natural sciences, in its infancy, results can be surprising. For example, video games often dismissed as rather mindless turn out to produce widespread enhancement of various abilities, acting, we will argue, as exemplary learning tools. Counter-intuitive outcomes like these, besides being practically relevant, challenge and eventually lead to refinement of theories concerning fundamental principles of brain plasticity and learning. Thus, technology offers us a range of tools, not just for promoting certain behaviors, but also for studying the neural bases of learning and development.
It is Monday morning at 7:58 AM when John enters the building. Immediately a dossier is uploaded to his iPad, complete with a schedule, maps to relevant locations, and background information for the various tasks he will need to complete. As he reads that his first assignment begins in 2 minutes in the physics lab two floors above, his walk becomes a light jog… In this story John is not a spy, but instead an average 8th grader sometime in the near future. In the physics lab, he will have to complete computer-administered problem sets on Newton’s laws and work with a team to build a video game that incorporates the principles he has learned. While this scenario may seem far-fetched, pilot programs such as the School of One1 or the Quest to Learn2 have already embarked on this journey, exploring how technology may be best harnessed for teaching.
Beyond these limited and controlled settings, though, a far larger experiment of nature is unfolding before our eyes. While there are certainly innate or genetic limitations to our various capabilities, an enormous part of “who we are” is shaped by our experiences, experiences that today are defined by the pervasive influence of technology. This fact is particularly relevant in the case of children, both because children are at the forefront of the technological revolution3 and because the developing brain is more malleable in response to experience than is the adult brain4. The central question for researchers is therefore not whether technology is affecting cognitive development; that is a given. The question is instead: how is technology affecting cognitive development? Are the changes for the better or for the worse? How can we harness technology to effect more changes for the better? How do we limit technology’s ability to effect changes for the worse? However, before we can begin, we must first admit that the overarching question “How is technology affecting cognitive development?” is poorly posed. “Technology” is not a single unique entity and thus is unlikely to have a single unique effect. One can no more ask, “How is technology affecting cognitive development?” than one can ask, “How is food affecting physical development?” As with food, the effects of technology will depend critically on what type of technology is consumed, how much of it is consumed, and for how long.
Persistent, but not transient effects
Technology use is associated both with transient changes in arousal and mood and with long-term changes in behavior and brain function. In the same way that one cannot lump together the short-term effects of consuming a single caffeinated soda with the lasting effects of consuming multiple such sodas daily for years, we need to distinguish between the temporary and the long-term effects of technology consumption. Transient changes are likely to be shared across all experiences that similarly affect mood and arousal, rather than being specific to any one type of experience. One example is the so-called “Mozart Effect”, the finding that listening to an up-tempo piece of music composed by Mozart temporarily enhances performance on some IQ tests. Subsequent research demonstrated that the “Mozart Effect” is not specific to pieces by Mozart, or even to classical music, but instead is observed after any experience that leads to a comparable temporary increase in arousal and mood. Anyone who has played, or even watched another individual play, many of today’s video games understands technology’s ability to manipulate mood and arousal. Yet, as the Mozart Effect illustrates, any temporary effect of technology use, albeit important, is unlikely to be specific to technology per se. Furthermore, because changes in mood and arousal quickly diminish and eventually disappear following the cessation of the experience, so too do the associated changes in behavior. Because our interest is in sustained behavioral outcomes, the remainder of this review will focus on the long-term effects of technology use, where changes induced by technology are visible days, months or even years afterwards.
In the same way that there is no single effect of “eating food,” there is also no single effect of “watching television” or “playing video games”. Different foods contain different chemical components and thus lead to different physiological effects; different kinds of media have different content, task requirements, and attentional demands, and thus lead to different behavioral effects. Even products that seem on the surface to be extremely similar, for instance the children’s television shows ‘Dora the Explorer’ and ‘Teletubbies’, can lead to markedly different effects (e.g., exposure to ‘Dora the Explorer’ is associated with an increase in vocabulary and expressive language skills in two-year-olds, while exposure to ‘Teletubbies’ is associated with a decrease in both measures8). Furthermore, again as with food, the actual consequence of exposure to a given form of technology can confound “common sense” predictions. Technology specifically developed for the purpose of enhancing cognitive abilities, such as infant-directed media like the Baby Einstein collection or various “brain games” designed for adults, may lead to no effects, or worse, to unanticipated negative effects. Meanwhile, technological applications that on the surface seem rather mindless (such as action video games) can result in improvements in a number of basic attentional, motor, and visual skills. Thus, although content clearly matters, the disconnect that can occur between predicted and actual outcomes is a clarion call for more theoretically-driven work in this emerging field.
Causes for optimism and concern
While a strictly dichotomous classification into “good” and “bad” makes for nice headlines (e.g., “Coffee: Science Says It’s Good for You!”), such a scheme ignores the fact that human experience is intrinsically multi-dimensional; almost all experiences are “good” in some ways and “bad” in others. Not surprisingly then, technology has been linked with both positive and negative effects13,14. Here we consider the behavioral and cognitive effects of technology use separated by the intent of the technology: we first examine the effects of “educational” technology, followed by the effects of “entertainment” technology. As we will see, some products designed to benefit cognitive development actually hinder it, while some products designed purely for entertainment lead to long-lasting benefits.
Lessons from 60 years of television
Television first entered our households more than 60 years ago, and for nearly as long individuals have sought to harness the medium for the betterment of children. Because the introduction of television in the 1950s did not occur simultaneously throughout America, but was instead geographically localized, researchers were able to follow preschoolers who had access to television and compare them to preschoolers from matching demographics who happened to live in an area where television was introduced later. Preschoolers whose family owned a television set showed an overall positive, albeit small, effect years later on their adolescent test scores as compared to those who did not view television as preschoolers15. Although suggestive, this positive outcome could be due to the stimulating effect of introducing a new experience into the life of preschoolers rather than to the specific technology per se. Of greater interest is the research that has compared and refined television programs intended specifically for young children. And indeed, although the literature is certainly mixed, exposure during the preschool years (2.5 to 5 years) to certain educational media has been linked to many positive effects16. For instance, a number of shows over the years have been developed in an attempt to promote language literacy and early mathematical skills in children. ‘Sesame Street’, which premiered in 1969, has been repeatedly associated with positive outcomes such as school readiness, vocabulary size and numeracy skills17–19. Relatively newer programs such as ‘Blue’s Clues’, ‘Dora the Explorer’, and ‘Clifford the Big Red Dog’ have also been correlated with positive outcomes such as greater vocabulary and higher expressive language skills8. While these studies are typically correlational in nature (i.e., cross-sectional or longitudinal designs), a recent randomized controlled trial in preschoolers, the Ready to Learn Initiative, compared a literacy curriculum that included television shows such as ‘Sesame Street’ to a science curriculum with more science-based television shows20. After ten weeks, the students in the literacy group showed increased literacy skills as compared to those in the science group, indicating a direct causal link between the media activities in the literacy curriculum and improvements in literacy.
However, it is not the case that all television and media intended for children has positive effects. For example, time spent watching the children’s television show ‘Teletubbies’ has been linked with a reduction in language skills8. Such contrasts in outcome, between ‘Sesame Street’, ‘Blue’s Clues’, ‘Clifford the Big Red Dog’, and ‘Dora the Explorer’ on one hand and ‘Teletubbies’ on the other, are theoretically important, as they allow us to ask which characteristics lead to beneficial outcomes and which lead to negative ones. In the case of promoting early literacy, the use of child-directed speech, elicitation of responses, object labeling and/or a coherent storybook-like framework throughout the show appears positively related to vocabulary acquisition and better language expression8. Thus, to be effective, early intervention programs must not only engage the young viewer, but also elicit direct participation from the child, provide a strong language model, avoid overloading the child with distracting stimulation, and include a well-articulated narrative structure. In addition, effective educational shows exemplify how to resolve social conflicts and productively manage disagreements and frustration. This social teaching may be as important to child development as academic content, because anti-social behavior has been linked to poor academic outcomes. The advances in our understanding of the content and structures that best foster learning in young children have only been made possible by strong partnerships between content producers and scientific researchers, first formed in the early days of public broadcasting.
Unfortunately, the economics of television, and media at large, has shifted since those early days, creating an ever-widening gap between the entertainment industry and educational media, severely diminishing the ability of those seeking to create educational media to leverage the knowledge and infrastructure possessed by the entertainment industry.
Formal and informal access to media
A recurrent concern about television viewing is the passive mode it enforces upon the user. The best television shows (given the goal of enhancing cognitive development) foster active participation by viewers, for example by asking the child to repeat, point or answer questions along with the lead character. Given the importance of active participation, it is no surprise that personal computers, and the interactive opportunities they afford, have recently captured the attention of policy makers and educators as a tool for learning23. The data are still relatively scarce, but again a positive trend is emerging24. Computer access in informal settings outside of school improves school-readiness and enhances academic achievement in young children as well as older ones–27. In one such study conducted in the U.S., home computer ownership was associated with a seven percent higher probability of graduating from high school, even after controlling for a number of confounding factors such as parental and home characteristics27. The impact of home computer use on social and emotional skills is more mixed: while some studies report no effect, others document both positive and negative effects26,28,29.
Current theories suggest that technology in informal settings may have positive effects because the activities it displaces are presumed to be of low educational value, such as hanging around with friends, playing sports, or watching entertainment television shows. This time displacement hypothesis contends that technology use has no intrinsic value per se, but instead has value only with respect to the activities it displaces31. Such a hypothesis leads to the prediction that technology in school settings, which displaces already rich academic content, may not produce more learning than what human teachers currently facilitate32 (and could even produce less). Consistent with this view, technology use in the K-12 school setting has led to mixed outcomes. An instructional computer program known as Fast ForWord, designed to train language skills, did not lead to widespread gains in either language acquisition or reading skills when introduced in U.S. grades 3–633, and in one of the most comprehensive studies of its kind, conducted by the U.S. Department of Education, various types of reading software were not associated with enhanced literacy in 1st and 4th graders34. The case of mathematics software seems more hopeful: although some studies report no effect34, many others indicate an increase in mathematics test scores35–37.
All parties agree that more research on this topic is needed, but two caveats come to mind. First, it seems urgent to run randomized, controlled studies in which the control group does not simply follow the standard math or literacy curriculum. Introduction of new media into a school curriculum may stimulate students just because of the novelty of the experience and the resulting “I am special” feeling it may engender. However, once the media becomes the norm, such an effect would vanish. Studies therefore need to establish that it is the content of the media that triggers the increase in knowledge. Second, while a key goal of the educational system is certainly to teach the basics of literacy and mathematics, it also aims to prepare students for the workforce in a 21st century economy. Given this, introducing technology in schools becomes not just a passing fad, but an educational necessity. This seems all the more urgent because a child in a family with low socio-economic status is more likely to lack technology access and thus is more likely to be “left behind”38.
Finally, it is striking that most, if not all, of the studies that address the impact of technology on academic achievement do so using standardized tests developed in the 20th century. Whether these tests are valid tools to evaluate how well our educational system prepares children for the demands of the 21st century economy remains largely unaddressed. Indeed, this may prove to be a significant challenge, as digital literacy is likely to become a key determinant of productivity and creativity.
While exposure to educational media is increasingly prevalent in the early 21st century, the preponderance of exposure to technology comes from entertainment media. This content, rather than being driven by the goal of improving human development, is driven exclusively by what sells, and what sells may not be what is good for us! Current research indicates that children may be wired, but as a result they may also be more violent, addicted, and distracted.
Perhaps the number one concern regarding the influence of technology among the general public is the potential for media to increase behavioral aggression and violent conduct. Children are often exposed to violent media, whether through television or video games (60% of TV programs contained violence in 1997, and this number is unlikely to be lower now; 94% of games rated as appropriate for teenagers contain some violence42). Because young children develop beliefs about social norms and acceptable behavior based on the content of their experiences, any activity that promotes violence is likely to be a risk factor for violent behavior in adulthood and is worthy of careful scientific examination. Meta-analyses, combining data from hundreds of individual studies, confirm an association between exposure to violence in media and antisocial tendencies such as aggression–47 (note that aggression in this literature does not exclusively refer to aggressive or violent actions, but also includes aggressive thoughts or violent feelings). Because long-term intervention studies are unethical, the best studies in this domain are arguably of the longitudinal variety, in which a group of children is followed for several months or years, with researchers quantifying how their aggressive behavior evolves as a function of exposure to violent media. The effect size in these longitudinal studies, while statistically significant, is small compared to other public health effects, accounting for less than 1% of the variance when confounding factors like gender are controlled for (whether these effects are large enough to be practically relevant is a matter of intense current debate–51). Thus, while exposure to violent media in childhood should be of concern, it should not overshadow other known causes of aggressive behavior such as abusive home environments, substance abuse, and poor performance in school52.
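The “less than 1% of the variance” figure can be made concrete: for a simple bivariate relationship, the share of outcome variance explained is the square of the correlation coefficient. A minimal sketch in Python; the r values below are illustrative placeholders, not numbers taken from the cited meta-analyses:

```python
def variance_explained(r: float) -> float:
    """Fraction of outcome variance accounted for by a predictor
    correlated with it at r (simple bivariate case: r squared)."""
    return r ** 2

# Illustrative only: a residual correlation of r = 0.09 between
# violent-media exposure and later aggression would explain under
# 1% of the variance in aggressive behavior.
for r in (0.09, 0.3, 0.5):
    print(f"r = {r}: {variance_explained(r):.1%} of variance explained")
```

This is why a statistically significant correlation can still be practically small: more than 99% of the variance in the outcome is carried by other factors.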
A second growing concern is the potential for some forms of technology to be addictive. Anecdotal examples of technology addiction regularly hit the headlines: a 28-year-old collapsed on his game console in an internet cafe after playing the game Starcraft for 50 hours in a row, with only short pauses for basic needs53; a couple starved their 3-month-old baby girl to death after becoming obsessed with caring for a virtual girl in the role-playing on-line game Prius54. While incidents of this severity are isolated, the general phenomenon appears to be much more widespread. Recent surveys indicate that about 2% of youth can be described as having an internet addiction, with 10–20% exhibiting at-risk internet use.
Actual scientific research on the topic has been somewhat hindered by the lack of firmly established diagnostic standards. The American Medical Association does not currently recognize video-game or internet addiction as a psychiatric disorder (see arguments for and against such recognition). However, there does appear to be an emerging scientific consensus that internet use and video game play have the potential to become pathological, with researchers adopting and/or adapting the criteria for pathological gambling. It is important to note that “pathological” means more than simply spending a substantial amount of time playing video games or using the internet; rather, it implies an actual reduction in the ability of the individual to function normally in society. Thus, while some individuals may be able to invest large amounts of time in technology use without becoming pathological users, others may exhibit pathological signs with relatively lighter use. Professional gamers, for example, may spend several hours a day training to perfect their skills without their behavior becoming pathological; such a deliberate choice to practice a skill over engaging in other activities is a key determinant of expertise65, be it in chess, music or, in this case, video game play.
A key issue for future research concerns the neural pathways involved in pathological use of technology. The fronto-striatal pathway, which has been strongly implicated in both drug addiction and behavioral disorders such as pathological gambling, is also activated by interaction with certain types of media technology, video games in particular. Unfortunately, little is known about how these pathways mature or how their development is affected by technology use. Such research seems urgently needed given how disruptive technology use may be to some children’s ability to function normally in society.
We watch television while playing games on our laptops; we take part in meetings while checking email on our phones; we browse the web while instant messaging with friends; and frankly, some of us have probably done all of these things simultaneously. Technology allows an incredible amount of information and potential stimulation to be constantly, and concurrently, accessible. However, there may be a behavioral cost to such multi-tasking in the form of attentional difficulties. For instance, in a recent study Ophir and colleagues asked more than 250 Stanford University students about their use of different media forms, from print media to video games or web surfing. Those who reported high concurrent usage of several types of media were less able to filter out distracting information in their environment, more likely to be distracted by irrelevant information in memory, and less efficient when required to quickly switch from one task to another. Other studies have also linked time spent using technology with negative effects such as teacher-reported problems paying attention in class, and deficits in attention, visual memory, imagination, and sleep.
Is Technology to Blame?
Although there are clearly a number of potentially negative effects associated with technology use, the interpretation of these studies is not as straightforward as it appears at first glance. For example, most of these studies tabulate only total hours spent using technology rather than classifying technology use as a function of content type. As content clearly matters, the results from such reports are inherently noisy and thus provide unreliable data. Second, the vast majority of the work is correlational in nature and as we know, correlation per se cannot be used to infer causation. Technology use, in particular, is highly correlated with other factors that are strong predictors of poor behavioral outcomes making it difficult to disentangle the true causes of the observations. For instance, children who watch the most television also tend to live in lower income homes and tend to have mothers with lower levels of education, both of which are strong predictors of a variety of diminished capabilities. In one large study of 800 infants, average daily television exposure was strongly correlated with lower language skills at 3 years of age when such factors were not considered, but when these (and many additional factors, some as detailed as the length of breast feeding) were controlled for, no relationship between television exposure and language development was observed. Furthermore, children who have attentional problems may very well be attracted to technology because of the constant variety of activities it permits. Accordingly, the strength of the relationship between technology use and attention disorders is significantly reduced after controlling for whether the child suffered from attentional problems at the start of the study. Although researchers nearly always attempt to statistically control for known confounding variables, the possibility of additional lurking variables always remains. 
Controlled intervention studies would avoid these potential pitfalls and could demonstrate a clear causal relationship between technology use and behavioral outcomes. Although there are clear ethical concerns in running large-scale randomized interventions when the predicted result is a long-term negative behavioral effect, such studies are not beyond our reach and are, we would argue, critical to society. One possible route is to select parents who plan to introduce a new technology into their homes and ask half of them to delay the introduction by a few months, allowing researchers to compare children with and without access to the technology. A recent study by Weis and Cerankosky followed a similar logic to test the hypothesis that video game console ownership negatively affects academic performance. A large group of parents who were planning to purchase video game consoles for their children were promised a console in exchange for their children’s participation in the study. The children were then split into two groups: the researchers provided consoles to one group immediately, while the other group did not receive their consoles for four months. Over the course of those four months, the children who received consoles showed significant reductions in reading and writing skills (more than one-half of a standard deviation in the case of writing) as compared to the control group of peers who had not yet received consoles. Teachers also tended to rate the children who received their consoles immediately as having greater learning difficulties, although no attentional problems were observed. We would note for future studies that, given the distinctly negative hypothesized effect of introducing the technology in this case, there are definite ethical concerns about researchers actually providing the technology of interest and failing to inform the parents of the true hypothesis being tested, both of which were true of this study.
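“More than one-half of a standard deviation” refers to a standardized effect size, commonly reported as Cohen’s d: the difference in group means divided by the pooled standard deviation. A short sketch with invented scores, purely to show the computation; these are not data from the Weis and Cerankosky study:

```python
import statistics as stats

def cohens_d(group_a, group_b):
    """Mean difference expressed in units of the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stats.variance(group_a)
                  + (nb - 1) * stats.variance(group_b)) / (na + nb - 2)
    return (stats.mean(group_a) - stats.mean(group_b)) / pooled_var ** 0.5

# Hypothetical writing scores after four months (illustrative only):
waitlist_control = [100, 94, 106, 92, 104, 98, 108, 102]  # no console yet
console_group    = [97, 91, 103, 89, 101, 95, 105, 99]    # console at home

print(round(cohens_d(waitlist_control, console_group), 2))  # ≈ 0.53
```

A d of about 0.5 means the average child in the control group outscored the average console owner by half the typical spread of scores, a gap large enough to be educationally meaningful.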
A more ethical design might instead encourage a subset of parents who are planning to introduce a technology with a predicted negative effect to delay doing so, while not intervening with a corresponding group; in that case the intervention itself carries a predicted positive effect.
Defying common sense - “good” things can be bad and “bad” things can be good
When good turns bad
The past decade has seen an explosion in the popularity of “baby DVDs”, media designed to enhance the cognitive capabilities of infants and toddlers. Forty percent of parents believe that child-friendly programming may benefit their infant or toddler, and some estimates suggest that roughly one in three U.S. infants has been exposed to baby DVDs. However, this boom now appears to be a case of marketing and parents’ common-sense beliefs outpacing actual science. At best, current research suggests that these DVDs produce no changes in cognitive development; for instance, babies exposed to DVDs designed to teach new words, such as BabyWordsworth (The Baby Einstein Company, Glendale, California), show no evidence of specific word learning. More worrisome, some studies actually report negative effects. For example, in a recent cross-sectional study, Zimmerman and colleagues surveyed over one thousand parents of 2- to 24-month-old children. The parents were asked questions about general demographics and their child’s television and DVD viewing habits, and were asked to complete a measure of language development. A large negative association between viewing baby DVDs (e.g. ‘Baby Einstein’ or ‘Brainy Baby’) and language development score was found for the youngest children (8–16 months); in other words, each hour of daily viewing/listening in this group was associated with a significant decrement in the pace of language development. Furthermore, the size of the decrement was not minor: while daily reading with a parent is associated with a 7-point increase in language score, each hour of daily baby DVD viewing was associated with a 17-point decrease. What is the reason for this? Babies learn an enormous amount from real-world experience as they watch their parents or caregivers interact with the world or with them; yet when the same material is delivered through audio-visual media, much less is learned82.
Although videos are capable of attracting babies’ attention83, this alone is not necessarily sufficient to induce learning. A key determinant of whether learning occurs may be the ability of the infant to appreciate the symbolic nature of the video84. Very young children may not always be able to link objects, persons and events in a video to reality. Therefore, young learners may not reach a maturational state at which they can truly learn from media until their pre-school years. Research on technology and brain development may benefit from more systematically addressing the cognitive state of the learner, especially when it comes to the boundary between video content, reality and fantasy.
When bad turns out good
Although entertainment media is typically designed for entertainment purposes only, some forms of this technology have effects far beyond simple amusement. For instance, action video games, in which avatars run about elaborate landscapes while eliminating enemies with well-placed shots, are often thought of as rather mindless by parents. However, a burgeoning literature indicates that playing action video games is associated with a number of enhancements in vision, attention, cognition, and motor control (for a review). For instance, action video game experience heightens the ability to see small details in cluttered scenes and to perceive dim signals, such as would be present when driving in fog86. Avid players display enhanced top-down control of attention and choose among different options more rapidly. They also exhibit better visual short-term memory, and can more flexibly switch from one task to another.
Furthermore, these enhancements have been found to have real-world applications. On the medical front, action games have been harnessed for the rehabilitation of patients with amblyopia, a developmental deficit of vision, and are being considered as a treatment for attentional problems in children94. Playing games, especially in a virtual reality environment, also appears to increase pain tolerance in both controls and patients. On the job-training front, laparoscopic surgeons who are habitual video game players have been observed to be better surgeons than their more experienced peers, both in terms of speed of execution and reliability during surgery. Video game play also appears to be useful training for pilots98. Following this trend, in 2009 the Royal Air Force stopped requiring that only trained pilots control unmanned drone flight missions and opened its doors to less experienced young gamers, after studies indicated that the best drone pilots were often young video game players99. This is not to say that all aspects of behavior change for the better as a result of action video game play, but this abridged list already indicates far more benefit than one would immediately suspect while watching an average 14-year-old blast monsters.
One of the strong suits of the action video game literature is that, in contrast to much of the literature discussed earlier, a direct causal relationship has been established between action game experience and the behavioral outcomes. The impact of action game play has been causally related to improved performance by having non-game players play action games for an extended period of time (e.g., 50 total hours spaced over 6 weeks) in a controlled laboratory environment. In addition to this experimental intervention group, these studies always include a control group of subjects, drawn from the same participant pool as the experimental group, who are instead required to play non-action games. These non-action games are also commercially available entertainment titles, selected in part to be as enticing and stimulating as the action games. All participants undergo visual, attentional, or cognitive tests before and after their respective video game training. Importantly, the post-tests take place at least 24 hours after the final session of video game play to ensure that any effects cannot be attributed to temporary changes in mood or arousal. Clear enhancements are noted in those who underwent action game training as compared to control game training. Furthermore, these effects last much longer than a few days after the final training session; enhancements are still noted anywhere from 6 months to 2 or more years later. While a strong causal link has been observed between action game experience and improvements in perceptual, attentional, and cognitive skills, it should be noted that these studies have been carried out exclusively in young adults (18–30 years old), as it is ethically questionable to train children on action games, which tend to contain significant amounts of violence.
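The logic of these intervention designs can be illustrated with a minimal analysis sketch. All numbers and variable names here are hypothetical, for illustration only; the actual studies rely on full psychophysical test batteries and inferential statistics.

```python
def gain_scores(pre, post):
    """Per-subject improvement from pre-test to post-test."""
    return [b - a for a, b in zip(pre, post)]

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical pre/post accuracy scores, for illustration only.
action_pre   = [0.62, 0.58, 0.65, 0.60, 0.63]
action_post  = [0.74, 0.70, 0.76, 0.71, 0.75]
control_pre  = [0.61, 0.59, 0.64, 0.62, 0.60]
control_post = [0.63, 0.60, 0.65, 0.62, 0.62]

action_gain  = mean(gain_scores(action_pre, action_post))
control_gain = mean(gain_scores(control_pre, control_post))

# The key comparison is gain vs. gain rather than raw post-test scores:
# both groups played an engaging commercial game, so generic practice,
# motivation, and arousal effects are shared between them, and only the
# action-specific benefit survives in the difference of gains.
training_effect = action_gain - control_gain
print(f"action gain {action_gain:.3f}, control gain {control_gain:.3f}, "
      f"difference {training_effect:.3f}")
```

Because the control group plays an equally enticing game and post-tests are delayed by at least 24 hours, a reliable positive difference in gains can be attributed to the action training itself rather than to placebo-like effects.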
However, despite the lack of training studies in children, we know that children who report playing action games show significantly greater attentional skills than those who do not [104]. On some measures of attention, such as the temporal dynamics of attention, 7–10-year-old action gamers perform at adult levels, a significant deviation from age-related norms.
Action video game training is of substantial theoretical interest because the improvements in performance that occur as a result of such training also transfer to tasks beyond the training regimen itself. In other words, playing an action game results in behavioral changes in non-gaming environments. This stands in stark contrast with most other training regimens, in which learning is highly specific to the exact task, stimuli, and environment used during training [105]. A possible mechanism for such wide transfer after action video game play is that this activity teaches the player how to swiftly adapt to current task demands. Action game players may dynamically retune connectivity across and within different brain areas to augment information processing, and may thus be in a position to make more informed decisions. This was recently confirmed experimentally in the case of perceptually driven decisions. According to this view, action video game experience promotes an essential feature of human cognition: "learning to learn". This proposal is appealing because it readily captures why the effects of action game play transfer so widely. It will fall to future work, however, to assess whether this "learning to learn" benefit also holds when information must be retrieved from internal representations rather than from the external environment, such as when one thinks or solves problems.
The contrast between the widespread benefits observed after playing action video games and the limited value of training on "mini brain games" suggests that we may need to drastically rethink how educational games are structured. While action game developers intuitively value emotional content, arousing experiences and richly structured scenarios, educational games have until now, for the most part, shied away from these attractive features of the video game medium. Instead, educational games have mostly exploited the interactivity and the repetitive, practice-makes-perfect nature that computer-based games afford, often reducing the experience to automated flashcards. Only very recently has the richness that the video game medium has to offer been considered an integral part of the learning experience [108]. In such rich environments, however, only a fine line separates a stimulating and successful medium from an overloading experience, making the development of such games challenging [109]. Dimension M (from Tabula Digita), an action-packed video game geared toward teaching linear algebra to 7th and 8th graders, represents one such first attempt, and early results appear promising. In a recent intervention study, its introduction into high school mathematics classes led to significant benefits on benchmark mathematics tests [37]. Yet a gap remains between the entertainment industry and such "Serious Game" initiatives. Theoretical work suggests that when the concepts to be learned are experienced across multiple contexts and domains, learning is more likely to transfer to new tasks or situations beyond those experienced during training [110]. The highly complex architecture of action games, afforded by sophisticated game engines, ensures a variety of emotional, cognitive and attentional states as the player progresses through the game, which should foster learning and its transfer to new situations.
In an elegant evaluation of this claim, Gentile and Gentile have shown that this is indeed the case with the violent content action games typically contain [112]. Action games, thanks to their rich structure, efficiently teach aggression. Replacing violent content with educational content is not out of reach, but it will require a degree of sophistication in game design, and financial means, that may call for a coherent, multidisciplinary "Big Science" approach rather than the proliferation of small, fragmented and often uncoordinated endeavors.
Understanding wired brains
Much of what we know about technology and child development has been driven by advances in education and the behavioral sciences. Yet understanding how the brain is altered by technology use is essential to furthering this emerging field. Granted, no one will be surprised to learn that the visual cortex is activated when one watches a video, or that the motor cortex is challenged when one plays an action game. Of greater interest is understanding how technology impacts ongoing brain function and changes brain organization over time. This calls for an array of studies, given the need to separately address different types of technology and content, as well as different users. A recent seminal study by Brem and colleagues compared the impact of playing a grapheme-to-phoneme game versus a mathematics game in 6–7 year olds on the maturation of the visual word form area (VWFA), a brain area important in mediating literacy. As assessed by functional magnetic resonance imaging, the group trained with the grapheme-to-phoneme game showed greater maturation of the VWFA than the control group, suggesting direct involvement of the VWFA in the acquisition of reading skills. In a similar vein, Rueda et al. compared the impact of playing simple games aimed at training attention versus watching popular children's videos in 4 and 6 year olds. Event-related potentials revealed more adult-like markers of the executive attention network after attention-game training than after watching videos. A working hypothesis is that the attention-game training allowed the brain system mediating conflict resolution to become more efficient, as it would during typical development. It is worth noting that in both this study and the Brem study, experimental trainees demonstrated significant brain changes from pre- to post-test compared to the control group, but no significant group difference in behavioral improvement.
Thus, brain-imaging studies may provide a more sensitive assay of the effects of technology than do behavioral studies. Brain imaging can also be used to document whose brain may benefit most from technology. In a recent study, structural brain scans of young adults were acquired before they learned to play a first-generation computer game, Space Fortress (University of Illinois). Individuals with an initially larger caudate nucleus and putamen, two basal ganglia nuclei involved in the control of movement, reinforcement learning and reward, were most likely to learn efficiently. In contrast, the size of the hippocampus, a key structure in memory and learning for declarative knowledge, was not predictive of learning. Thus, a computer game like Space Fortress requires cognitive and motor control skills that are best predicted by structures regulating habit formation and reward processing, rather than content learning.
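The individual-differences logic here, baseline brain structure predicting subsequent learning, amounts to correlating regional volumes with learning gains across subjects. A minimal sketch follows, with hypothetical illustrative numbers (not data from the Space Fortress study):

```python
def pearson_r(xs, ys):
    """Pearson correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical normalized baseline volumes and learning gains, chosen so
# that striatal volume tracks the gains while hippocampal volume does not,
# mirroring the pattern described in the text.
striatum    = [0.8, 1.1, 0.9, 1.3, 1.0, 1.2]
hippocampus = [1.0, 1.1, 0.9, 1.0, 1.2, 0.8]
gain        = [0.30, 0.55, 0.35, 0.70, 0.45, 0.60]

print(f"striatum r = {pearson_r(striatum, gain):.2f}")
print(f"hippocampus r = {pearson_r(hippocampus, gain):.2f}")
```

A strong baseline-volume correlation for one structure, alongside a near-zero correlation for another, is what licenses the conclusion that the game taxes the first structure's functions rather than the second's.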
Another fruitful line of research will be to investigate which events during technology use enhance learning and brain plasticity. Only recently have we acquired the means to follow brain activity in real time as participants interact with technology. Thanks to these developments, we are in a position to isolate, from a continuous media stream, key events hypothesized to foster learning and brain plasticity (such as rewarding or salient events). Then, by injecting content alongside these events, learning can be directly assessed. This approach builds on an ever-growing literature documenting the critical role of neuromodulators in the control of learning and brain plasticity. Events that are arousing, and thus likely to trigger a release of acetylcholine, are prime targets for such a manipulation. Acetylcholine is hypothesized to facilitate the retuning of existing connectivity in an experience-dependent manner, allowing better behavioral inference from the learned experience. Dopamine, a neuromodulator implicated in executive functions and the control of attention, also promotes brain plasticity. Its concurrent release during an auditory tone discrimination task increased the cortical area and the receptive field selectivity devoted to the learned tones in rats. This facilitatory effect was obtained by stimulating the ventral tegmental area, the origin of the dopaminergic cell bodies, which is a key player not only in motivation and reward but also in drug addiction. Unfortunately, only mixed reports exist about neuromodulator release during technology use.
Future research should capitalize on all of the tools at our disposal, from traditional neuroscience techniques such as PET and fMRI to the wealth of new tools becoming available, including cameras that monitor facial emotions, smart controllers that record galvanic skin response and heart rate, and helmets fitted with electrodes that assess brain state, so as to adapt the media experience in real time according to the user's current state.
Finally, the possibility of developing an animal model of young, wired learners is not as far-fetched as it may seem. Using a new virtual reality system in which a mouse interacts with a virtual maze through a spherical treadmill, Harvey et al. have characterized the intracellular dynamics of hippocampal coding in awake, behaving animals. Adapting such a virtual navigation system to study decision making and learning in fast-paced, mouse-enticing environments will certainly require new developments, but appears within reach.
The past half-century has seen a dramatic increase in the amount of technology available to and used by children - a fact that has clearly shaped the way children learn, develop, and behave. Given the multi-faceted nature of technology, it is perhaps unsurprising that the story of its impact on child development is extremely complex and multi-sided. Some forms of technology have no effect on the form of behavior they were designed to transform, while others have effects that reach far beyond their intended outcomes. All of this is indicative of a field that is still emerging. What we do know is that, in technology, we have a set of tools that have the capability to drastically modify human behavior. What remains, which is not trivial, is to determine how to purposefully direct this capability to produce desired outcomes. In this endeavor it will be key for the field, which to this point has been largely behavioral in nature, to partner with neuroscience. For instance, given the goal of predicting behavioral outcomes, it would likely be of substantial benefit to describe forms of technology quantitatively in terms of the neural processing they demand, rather than describe them qualitatively based upon surface characteristics. Such collaboration would also benefit neuroscientific theories of learning, as it offers an opportunity to “reverse engineer” the learning problem – starting with a tool that strongly promotes learning and determining how and why it works, rather than starting with low-level principles of neural learning and building tools that may or may not produce the desired outcomes.
We are thankful to T. Jacques as well as L. Takeuchi and G. Cayton-Hodges from the Joan Ganz Cooney Center at Sesame Workshop for their help in literature searches. We also thank T. Jacques for invaluable help with manuscript preparation. This work was funded by EY016880, the James S. McDonnell Foundation and the Office of Naval Research (MURI Program) to DB.
Human identity, the idea that defines each and every one of us, could be facing an unprecedented crisis.
It is a crisis that would threaten long-held notions of who we are, what we do and how we behave.
It goes right to the heart - or the head - of us all. This crisis could reshape how we interact with each other, alter what makes us happy, and modify our capacity for reaching our full potential as individuals.
And it's caused by one simple fact: the human brain, that most sensitive of organs, is under threat from the modern world.
PROFESSOR SUSAN GREENFIELD
Unless we wake up to the damage that the gadget-filled, pharmaceutically-enhanced 21st century is doing to our brains, we could be sleepwalking towards a future in which neuro-chip technology blurs the line between living and non-living machines, and between our bodies and the outside world.
It would be a world where such devices could enhance our muscle power, or our senses, beyond the norm, and where we all take a daily cocktail of drugs to control our moods and performance.
Already, an electronic chip is being developed that could allow a paralysed patient to move a robotic limb just by thinking about it. As for drug manipulated moods, they're already with us - although so far only to a medically prescribed extent.
Increasing numbers of people already take Prozac for depression, Paxil as an antidote for shyness, and give Ritalin to children to improve their concentration. But what if there were still more pills to enhance or 'correct' a range of other specific mental functions?
What would such aspirations to be 'perfect' or 'better' do to our notions of identity, and what would it do to those who could not get their hands on the pills? Would some finally have become more equal than others, as George Orwell always feared?
Of course, there are benefits from technical progress - but there are great dangers as well, and I believe that we are seeing some of those today.
I'm a neuroscientist and my day-to-day research at Oxford University strives for an ever greater understanding - and therefore maybe, one day, a cure - for Alzheimer's disease.
But one vital fact I have learnt is that the brain is not the unchanging organ that we might imagine. It not only goes on developing, changing and, in some tragic cases, eventually deteriorating with age, it is also substantially shaped by what we do to it and by the experience of daily life. When I say 'shaped', I'm not talking figuratively or metaphorically; I'm talking literally. At a microcellular level, the infinitely complex network of nerve cells that make up the constituent parts of the brain actually change in response to certain experiences and stimuli.
The brain, in other words, is malleable - not just in early childhood but right up to early adulthood, and, in certain instances, beyond. The surrounding environment has a huge impact both on the way our brains develop and how that brain is transformed into a unique human mind.
Of course, there's nothing new about that: human brains have been changing, adapting and developing in response to outside stimuli for centuries.
What prompted me to write my book is that the pace of change in the outside environment and in the development of new technologies has increased dramatically. This will affect our brains over the next 100 years in ways we might never have imagined.
Our brains are under the influence of an ever- expanding world of new technology: multichannel television, video games, MP3 players, the internet, wireless networks, Bluetooth links - the list goes on and on.
But our modern brains are also having to adapt to other 21st-century intrusions, some of which, such as prescribed drugs like Ritalin and Prozac, are supposed to be of benefit, and some of which, such as widely available illegal drugs like cannabis and heroin, are not.
Electronic devices and pharmaceutical drugs all have an impact on the micro-cellular structure and complex biochemistry of our brains. And that, in turn, affects our personality, our behaviour and our characteristics. In short, the modern world could well be altering our human identity.
Three hundred years ago, our notions of human identity were vastly simpler: we were defined by the family we were born into and our position within that family. Social advancement was nigh on impossible and the concept of 'individuality' took a back seat.
That only arrived with the Industrial Revolution, which for the first time offered rewards for initiative, ingenuity and ambition. Suddenly, people had their own life stories - ones which could be shaped by their own thoughts and actions. For the first time, individuals had a real sense of self.
But with our brains now under such widespread attack from the modern world, there's a danger that that cherished sense of self could be diminished or even lost.
Anyone who doubts the malleability of the adult brain should consider a startling piece of research conducted at Harvard Medical School. There, a group of adult volunteers, none of whom could previously play the piano, were split into three groups.
The first group were taken into a room with a piano and given intensive piano practice for five days. The second group were taken into an identical room with an identical piano - but had nothing to do with the instrument at all.
And the third group were taken into an identical room with an identical piano and were then told that for the next five days they had to just imagine they were practising piano exercises.
The resultant brain scans were extraordinary. Not surprisingly, the brains of those who simply sat in the same room as the piano hadn't changed at all.
Equally unsurprising was the fact that those who had performed the piano exercises saw marked structural changes in the area of the brain associated with finger movement.
But what was truly astonishing was that the group who had merely imagined doing the piano exercises saw changes in brain structure that were almost as pronounced as those of the group who had actually practised. 'The power of imagination' is not a metaphor, it seems; it's real, and has a physical basis in your brain.
Alas, no neuroscientist can explain how the sort of changes that the Harvard experimenters reported at the micro-cellular level translate into changes in character, personality or behaviour. But we don't need to know that to realise that changes in brain structure and our higher thoughts and feelings are incontrovertibly linked.
What worries me is that if something as innocuous as imagining a piano lesson can bring about a visible physical change in brain structure, and therefore some presumably minor change in the way the aspiring player performs, what changes might long stints playing violent computer games bring about? That eternal teenage protest of 'it's only a game, Mum' certainly begins to ring alarmingly hollow.
Already, it's pretty clear that the screen-based, two-dimensional world that so many teenagers - and a growing number of adults - choose to inhabit is producing changes in behaviour. Attention spans are shorter, personal communication skills are reduced and there's a marked reduction in the ability to think abstractly.
This games-driven generation interpret the world through screen-shaped eyes. It's almost as if something hasn't really happened until it's been posted on Facebook, Bebo or YouTube.
Add that to the huge amount of personal information now stored on the internet - births, marriages, telephone numbers, credit ratings, holiday pictures - and it's sometimes difficult to know where the boundaries of our individuality actually lie. Only one thing is certain: those boundaries are weakening.
And they could weaken further still if, and when, neurochip technology becomes more widely available. These tiny devices will take advantage of the discovery that nerve cells and silicon chips can happily co-exist, allowing an interface between the electronic world and the human body. One of my colleagues recently suggested that someone could be fitted with a cochlear implant (a device that converts sound waves into electronic impulses and enables the deaf to hear) and a skull-mounted micro-chip that converts brain waves into words (a prototype is under research).
Then, if both devices were connected to a wireless network, we really would have arrived at the point which science fiction writers have been getting excited about for years. Mind reading!
He was joking, but for how long the gag remains funny is far from clear.
Today's technology is already producing a marked shift in the way we think and behave, particularly among the young.
I mustn't, however, be too censorious, because what I'm talking about is pleasure. For some, pleasure means wine, women and song; for others, more recently, sex, drugs and rock 'n' roll; and for millions today, endless hours at the computer console.
But whatever your particular variety of pleasure (and energetic sport needs to be added to the list), it's long been accepted that 'pure' pleasure - that is to say, activity during which you truly 'let yourself go' - was part of the diverse portfolio of normal human life. Until now, that is.
Now, coinciding with the moment when technology and pharmaceutical companies are finding ever more ways to have a direct influence on the human brain, pleasure is becoming the sole be-all and end-all of many lives, especially among the young.
We could be raising a hedonistic generation who live only in the thrill of the computer-generated moment, and are in distinct danger of detaching themselves from what the rest of us would consider the real world.
This is a trend that worries me profoundly. For as any alcoholic or drug addict will tell you, nobody can be trapped in the moment of pleasure forever. Sooner or later, you have to come down.
I'm certainly not saying all video games are addictive (as yet, there is not enough research to back that up), and I genuinely welcome the new generation of 'brain-training' computer games aimed at keeping the little grey cells active for longer.
As my Alzheimer's research has shown me, when it comes to higher brain function, it's clear that there is some truth in the adage 'use it or lose it'.
However, playing certain games can mimic addiction, and the heaviest users of these games might soon begin to do a pretty good impersonation of an addict.
Throw in circumstantial evidence that links a sharp rise in diagnoses of Attention Deficit Hyperactivity Disorder and the associated three-fold increase in Ritalin prescriptions over the past ten years with the boom in computer games and you have an immensely worrying scenario.
But we mustn't be too pessimistic about the future. It may sound frighteningly Orwellian, but there may be some potential advantages to be gained from our growing understanding of the human brain's tremendous plasticity. What if we could create an environment that would allow the brain to develop in a way that was seen to be of universal benefit?
I'm not convinced that scientists will ever find a way of manipulating the brain to make us all much cleverer (it would probably be cheaper and far more effective to manipulate the education system). And nor do I believe that we can somehow be made much happier - not, at least, without somehow anaesthetising ourselves against the sadness and misery that is part and parcel of the human condition.
When someone I love dies, I still want to be able to cry.
But I do, paradoxically, see potential in one particular direction. I think it possible that we might one day be able to harness outside stimuli in such a way that creativity - surely the ultimate expression of individuality - is actually boosted rather than diminished.
I am optimistic about, and excited by, what future research will reveal about the workings of the human brain, and the extraordinary process by which it is translated into a uniquely individual mind.
But I'm also concerned that we seem to be so oblivious to the dangers that are already upon us.
Well, that debate must start now. Identity, the very essence of what it is to be human, is open to change - both good and bad. Our children, and certainly our grandchildren, will not thank us if we put off discussion much longer.
• Adapted from ID: The Quest For Identity In The 21st Century by Susan Greenfield, to be published by Sceptre on May 15 at £16.99.