
Debate Article

Teaching and Learning in Dermatology: From Gutenberg to Zuckerberg by Way of Von Hebra

Jonathan L. Rees

Grant Chair of Dermatology, University of Edinburgh, Lauriston Building, Edinburgh, United Kingdom

The World Wide Web (www) and other internet-based technologies offer enormous potential for enhancing teaching in dermatology. There is also the possibility that, if these technologies are adopted uncritically – either through ignorance of how people learn, or because they are viewed primarily as ways to reduce institutional costs – they might diminish learning, thereby reducing the value that undergraduate students receive from medical schools. I review the history of recent technological change with a focus on what value such technologies bring to both student and institution. After summarising some of the core principles underpinning successful learning, and modern theories of medical expertise, I critically discuss some of the ways the Web and allied technologies might enhance the learning of dermatology. Key words: e-learning; teaching; cognitive psychology; dermatology; learning.

(Accepted June 6, 2012.)

Acta Derm Venereol 2012; 92: XX–XX.

Jonathan Rees, Grant Chair of Dermatology, University of Edinburgh, Rm 4.018 Dermatology, Lauriston Building, Lauriston Place, Edinburgh EH3 9HA, United Kingdom. E-mail: reestheskin@me.com

In 1989 a young British physicist, Tim Berners-Lee, working at the international physics research institute CERN near Geneva, wrote a proposal for a hypertext-based scheme that would allow data to be exchanged between computers across the whole internet (1, 2). In early 1991 he released what we now know as the World Wide Web (www, the ‘Web’) onto the internet. Two key features of this single revolutionary event are often forgotten: he did not need to get anybody’s permission to release it, and at most only a handful of individuals were involved prior to launch. This was no Human Genome Project (1).

Only four years later, writing in the journal Science in 1995, Eli Noam, a professor of finance and economics at Columbia University, imagined what the birth of the Web might mean for the University (3). The article’s title signalled its content: ‘Electronics and the Dim Future of the University’. Noam presciently argued that this new method of disseminating information would undermine the dominant financial models of higher education. Universities, he argued, were traditionally involved in three activities: (i) the acquisition of new knowledge; (ii) the preservation of knowledge; and (iii) the transfer of this knowledge to others. All of these activities, especially the preservation and transfer of knowledge, would be changed irrevocably by the ability of the Web to allow low-cost distribution of information. Noam’s paper was, however, considerably more nuanced than many that came after. He pointed out that, whereas the discovery of new knowledge and the transmission of this knowledge would indeed become ever more important in modern society, the issue for the continued existence of Universities was whether the present economic foundations of higher education could be maintained given the changes in information distribution afforded by the Web. To survive in this newly disrupted world, Universities had to add value beyond what could now be achieved by merely pointing a browser at a Web page.

In 2001, just 10 years after the first Web page appeared, MIT President Charles Vest hatched plans to release on the Web almost all the material from all of the 2,000 courses taught on the MIT campus (4). The course materials were visible to all with access to the Web, but no course credits or certification were available – the benefits and kudos of an MIT education were still confined to those on campus. Ten years later, in 2011, exactly two decades after the public birth of the Web, Stanford University made courses available specifically for an online audience. These free online courses were no longer merely passive collections of videos and lecture slides, but contained assignments, and these assignments were marked (albeit by computer) (5, 6). Within months over 100,000 persons had registered for a single course in computing science. In 2012, Stanford and MIT began discussing how credits were to be offered to those who successfully completed parts of these courses (7).

As we enter the second decade of the 21st century, 20 years after the first Web page, and 15 years after Noam had published his essay on the future of the University in the age of the internet (3), Science (the leading US journal that had published Eli Noam’s essay) published more papers on education than on skin disease or skin biology. As well as a series of articles on science education in a section labelled “Education Forum” (9, 10), a number of primary research articles were published. These included studies on how to improve learning in large-enrolment classes (by the physics Nobel Laureate Carl Wieman and colleagues) (11, 12), a paper on how retrieval practice influences long-term learning (13), a paper showing that writing down thoughts before an exam can improve performance in that exam (14), and a paper showing that graduate students’ teaching experiences can improve their research skills (15).

Why it matters

The events described above may seem distant to those of us who teach clinical dermatology to the next generation of doctors. Is it not grossly fanciful to imagine that these technological developments have anything to do with how we should (or will) teach undergraduate medical students and train future specialists? Replace campus-based and time-honoured bedside teaching with distance learning from providers situated halfway across the world? Surely this is just eccentric techno-utopianism? And why is a prestigious research journal like Science suddenly so concerned with undergraduate learning, rather than filling its scarce pages with yet more cell biology and genetics?

In what follows, I will argue that, far-fetched though it may seem, the threat that Noam highlighted is real, and that to respond to it we need to consider three related issues. First, we need to be aware of the history of previous tectonic shifts in communication and education. Second, technology will inevitably force upon us some long overdue soul-searching about how we teach clinical dermatology – or, more importantly, how students learn it. We will be forced to ask questions that have received little attention. Are we really very good at what we do? Could we do it better and at lower unit cost? What evidence do we possess to justify current teaching patterns? Third, whatever we now think, as Noam correctly pointed out (3), the Web radically changes the financial model for higher education. As the cost of distributing (some) materials approaches zero, from a student’s learning perspective, value can now only come from something else (4). But what is this something else? If you believe that teaching dermatology is just about delivering 10 lectures to two hundred students at a time, followed by brief clinical exposures to clinicians with patients, then perhaps a little techno-utopian shock therapy is indeed necessary.

Gutenberg meets Von Hebra

The best example we have of the effects of a radical new means of communication (such as the Web) is what happened to the world when, in 1455 in Mainz, Johannes Gutenberg invented printing with moveable type (16). John Naughton, in a book whose title I have borrowed for this article (2), asks us to imagine a Gedankenexperiment. You are a medieval pollster standing on the bridge at Mainz just twenty years after Gutenberg’s invention (as we are just over twenty years after the birth of the Web), asking pedestrians the following questions:

On a scale of 1 to 5… how likely is it that Herr Gutenberg’s invention will:

  • Undermine the authority of the Catholic Church?
  • Trigger a Protestant reformation?
  • Enable the rise of modern science?
  • Create entirely new social classes and professions?

Of course, we cannot help but smile at this juxtaposition of technology and history, but that is only because we now know the answers. To the inhabitants of Mainz, a mere two decades on, the ramifications of the invention of moveable type were unknown. But here we are, just two decades after the birth of the Web.

From the perspective of a dermatologist there is, however, a more telling example of the effect of technological change. In my office I have a copy of Daniel Turner’s ‘De Morbis Cutaneis’, printed in 1714 using technology similar to Gutenberg’s (17). I confess I have not delved into it very far, and that is because it contains only a single image – ironically, a black and white print of the author! There are no pictures of skin disease. The then available technology meant that the only economically viable way to produce a book was to make it almost entirely text-based. Note the link between technology and economics. If we now jump forward to the mid 19th century, things were very different. Von Hebra’s magisterial ‘Atlas der Hautkrankheiten’ (8) married the then available technology with the skills of physician-artists, allowing dissemination of knowledge in a way that could previously have been obtained only by travel. Before this moment, anybody from the UK wishing to learn dermatology would have been advised to visit Vienna or Paris (as an aside, based on my own experience and practice, I would still recommend such visits). Technology had not only made the world smaller but radically democratised the available expertise. If, in the early 1980s, we wanted to see the cutaneous manifestations of the newly described syndrome AIDS, something that was then outside the clinical experience of most clinicians, we did not visit San Francisco; we visited the library. Now, of course, we do not even need to visit the library; we just search online. Today you do not need a personal introduction to visit the clinic of the master: you can simply search the Web and find thousands upon thousands of pictures of skin disease. Many of these images are free, and you do not have to attend a medical school to view them.

I know how to teach!

If technology does not meet a need it will wither – at least in the long term – and the history of genuine technological advance in teaching and learning contains more failure than success. A standard quip is that the last genuinely transformative educational technology was either the blackboard or the bus that takes children to school. Sceptics will read the last lines of the previous paragraph and (rightly) retort that teaching is a lot more than just having access to (or control over) resources. The Web may, for instance, allow access to thousands of images, but there is much, much more to learning, and if we accept that good teaching exists, then surely it cannot be provided for free. After all, even professors have to eat. This is of course the kernel of the value proposition that any University now has to offer: where once materials were scarce and costly, there is now an abundance of cheap material, so the University has to offer more. But more of what?

Ever since Galileo, history has suggested that nihilism about the influence of technological change on society is frequently misplaced. So imagine we were talking about clinical medicine: rehearse in your mind the sorts of arguments you would muster if the introduction of new therapies for psoriasis were denied on the grounds that current treatments “worked OK”. Indeed, current treatments might work, but to deny outright that improvement might be possible would seem perverse. We live in a time when all qualifying doctors know more medicine than William Osler: why would we imagine we might not be able to do even better? As individuals we (mostly) assume we are good at teaching, and in general many academics appear sceptical that teaching delivery can be greatly improved upon or optimised. We are suspicious of attempts to systematise teaching, or of the need to assess its efficacy except by anecdote. Whereas we might accept that teaching styles evolve (“usually because of unproven educational fads”), the idea that we need experimental and analytical science to sort out how to teach well seems a little far-fetched. After all these years, surely we know how to teach. The following vignettes suggest otherwise.

Some cognitive delusions of those who do not want to examine their teaching efficacy

  • “The old ways are best! Look at me!” (an n of 1 study with no control and based on subjective recall – why do we bother with experimental science?)
  • “I am an expert dermatologist, of course I can teach beginners!” Imagine the converse: I am an expert educationalist, of course I can practise as a dermatologist… or a brain surgeon for that matter.
  • “The feedback on my teaching is good.” So is the feedback on the political rulers of a country just north of South Korea.
  • “All our students say they feel confident about their skills in dermatology.” Most of the inhabitants of a country just north of South Korea also say they feel materially well off.

“It worked for me”. Upon hearing arguments similar to those made above, colleagues referring to their own experiences will say, ‘Well, it worked for me when I was a student. I learned my dermatology from Dr Baggins and he was very good’ (18). But imagine you were talking about a therapy rather than teaching. Are you really saying that because you took drug X and the outcome was good, drug X caused that particular outcome? Is this how you would expect your students to assess whether a drug works? And the very people who make this argument are usually those who have followed in the master’s shoes – what about the silent majority who chose other specialties (18)?

“I am an expert, therefore I know how to teach my subject”. One of the things we know about domain expertise is that experts see and organise the world differently from novices, and that experts in a particular domain may be less, rather than more, able to see the world the way a beginner does (19, 20). To the expert, Wickham’s striae are self-evident – if a student has 20/20 vision, why can’t he see them? The Nobel Laureate Carl Wieman refers to this as the ‘curse of knowledge’ (20). If you want to know what causes difficulty for beginners you have to acquire expertise in teaching beginners. What beginners find difficult is not always self-evident, and subject experts are quite capable of being ignorant of the minds and problems of the subject novice.

“Well, I have only dabbled in dermatology, but I have a Masters in Medical Education, so I can teach it”. This is, of course, almost the converse of the previous error. One of the overriding principles of research into learning is that of content specificity (21). Dermatologists do not possess generic visual skills, nor do radiologists or pathologists (22). What each specialist possesses is expertise within a domain (23). Insights into learning and teaching are valid insofar as we can show that they improve learning within a particular domain, but if the teacher is unable to diagnose lichen planus then his role in teaching others how to do so is, to understate it, severely curtailed. As a report from the National Academy of Sciences highlighted, one of the most popular and dangerous myths about teaching is that it is a generic skill and that a good teacher can teach any subject (19). You may wish to involve young trainees in clinical teaching for all sorts of good reasons, but remember that just as good teaching has a long-term positive role, bad teaching may act to inhibit future learning, such that even more effort is required than if the first teaching session had never taken place (as with drugs, it is possible to do more harm than good). The teacher requires not just knowledge of pedagogy in general, but knowledge of how to teach in a particular clinical domain. The implication is that teaching requires three areas of expertise and specialist knowledge: pedagogy in general, clinical competence in the relevant clinical area, and knowledge of how to teach in that particular clinical domain (19).

“I must teach well because the feedback is positive”. It is difficult to attend any meeting or undertake any course without being chastised for not filling in the seemingly mandatory ‘feedback’. In fact, although such feedback is not totally worthless (‘did the lecturer turn up?’), it is often of questionable value, serving an administrative role rather than being a vehicle for promoting learning. If we want to promote learning we need to measure learning outcomes (18). Students’ perceptions of what they need to know, how well they think they have learned a topic, and how clinically competent they think they are, are not reliable or valid measures of learning or competence. Eric Mazur, the Harvard physicist and promoter of ‘peer instruction’, recalls how painful it was to discover the dissociation between how students graded his lectures (terrific) and what they had learned (poor) (24–26). Having felt satisfied with his abilities and his course on the basis of feedback, he had to unpick all that he had previously believed about learning, and devise outcome measures that tested what the students had really learned. As Clark Glymour has argued, such faculty evaluation feedback may actually drive out serious measures of learning and act to lower teaching standards (27, 28). Sadly, most Universities and professional organisations embrace such measures because they are cheap, because they do not challenge what they already do, and because they think such feedback makes them appear ‘in touch’ and ‘sympathetic’ to their students.

“My students say they feel confident seeing patients with skin disease, so I must be getting it right”. Any reassurance you gain from this line of argument needs dispelling quickly (29–31). Students do not intuitively know what knowledge they need to possess, and students and doctors are often poor judges of their own clinical competence. Most doctors and students think they are better than average, and if you want to judge how good your students are you need some measures of outcome – and of course, so do they. The idea that ‘self-reflection’, or some measure of the ability of a student or doctor to be a reflective practitioner, provides a worthwhile measure of competence is mistaken. For instance, we have recently shown that students express certitude about their ability to diagnose skin cancer in the absence of any objective evidence of competence – a finding that should cause great concern, particularly if you are a patient with a potential skin cancer (32).

“You learn medicine by being an apprentice”. John Burton, in the preface to his (rightly) celebrated and witty undergraduate textbook (33), pointed out that there were no colour images in the book (because of cost) but that, in any case, colour plates were of limited value. To acquire expertise, he explained, you needed to spend a lot of time in the clinic seeing patients with an acknowledged master who could discuss the differential diagnosis with you. But of course you can see far more images on the Web than you can in clinic, and the apprentice model he described assumes that both teacher and student have enough time to provide or undergo such training. For undergraduate, and increasingly postgraduate, medicine this is almost certainly not the case. Most medical schools have hundreds of students, who are attached to dermatology for extremely short periods of time. The lecture was a scalable solution to how one teacher could ‘instruct’ many students in the absence of written texts, or as a supplement to them, but we still have no off-the-shelf solution to how we can scale the apprentice system without greatly increasing costs. Ask yourself a pair of simple questions: how many melanomas do students see when they are attached to dermatology, and how many do you think they need to see to become competent at spotting suspicious features in apparently benign naevi? Is your teaching in line with your expectations (32)?

“They were fine when they finished the attachment, but their performance seems to have deteriorated since then. I blame the subsequent psychiatry attachment for confusing them.” Most medical schools have carousel structures for attachments such as dermatology. It does not require profound insights into cognitive psychology to know that student performance deteriorates after they finish any single attachment (32). The reasons are not hard to fathom. The specialist knowledge they have accumulated is not put to use, the students have no need to ‘retrieve’ or consolidate the learned information, and consequently what knowledge they have acquired is gradually lost. This is a particular problem for subjects such as dermatology because few other non-specialists have any knowledge of it, and those that should, such as primary care physicians (in the UK at least), have received virtually no formal tuition themselves, and consequently are not in a position to teach it. We have known ever since the work of Ebbinghaus in 1885 about the importance of review for student learning (34), and today people write papers heavy with mathematical notation on how best to optimise student learning in the light of Ebbinghaus’s work (35). There is seemingly a clear conflict between the economics of course delivery and individual learning.
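Ebbinghaus’s point is easy to make concrete. The following is a minimal sketch, not drawn from any of the cited papers: the exponential decay, the retention threshold and the assumption that each successful review doubles a ‘stability’ parameter are all invented simplifications. It shows why optimal review intervals widen over time, and why a single block attachment with no subsequent retrieval leads to steady loss.

```python
import math

def retention(days_since_review: float, stability: float) -> float:
    # Exponential forgetting curve: recall probability decays with time.
    return math.exp(-days_since_review / stability)

def schedule_reviews(threshold=0.7, stability=5.0, growth=2.0, horizon=120):
    # Review whenever predicted retention dips below the threshold; each
    # review is assumed to make the memory trace decay more slowly.
    reviews, last = [], 0
    for day in range(horizon + 1):
        if retention(day - last, stability) < threshold:
            reviews.append(day)
            last = day
            stability *= growth  # the spacing effect, crudely modelled
    return reviews

print(schedule_reviews())  # [2, 6, 14, 29, 58, 116] - intervals widen
```

Under these toy assumptions, a handful of widely spaced reviews maintains retention in a way that a single end-of-attachment burst of study never can.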

A variant of this problem is the belief that ‘brief interventions’ will change clinical behaviour (36). For instance, studies are reported showing that exposing a group of GPs to a seminar on skin cancer improves factual knowledge in the short term (37). Well, of course, it would be very surprising if it didn’t. The crucial questions are whether the change is long term, how any benefit is to be maintained, and what other aspect of learning this particular intervention replaces. It is not difficult to show that almost any teaching intervention improves outcomes when the control is ‘no intervention’, but in practice any intervention will come at the expense of some other intervention, either in the same domain or in another domain of medicine. ‘No control’ is not a control.

What we really do know about learning

The previous section may have given the impression that we know little about student learning or how to maximise it. This is far from the truth (19, 22, 38, 39). In practice, however, rather than relying on secure knowledge that has been subject to experimental scrutiny, individual teachers and institutions often cling to what has been termed ‘folk pedagogy’ – think of it as the educational counterpart of ‘folk medicine’, or what your grandmother might have told you about how to treat pemphigus (18, 40). There is a widespread professional reluctance to examine teaching performance analytically and to admit that much of it is done very badly. This is not just an issue for individual teachers, however, but reflects institutional biases too. Derek Bok, one of Harvard’s most successful Presidents, recently pointed out that few Universities take teaching seriously or make serious attempts to improve student learning (41). Examining student learning critically is uncomfortable for many, if not most, institutions.

Proven strategies to improve student learning

  • Use more than one sensory channel (e.g. pictorial and auditory).
  • Mixed practice facilitates transfer better than practice on only one disease (i.e. compare and contrast; test on images of multiple different diagnostic groups).
  • Do not duplicate or clutter material (do not read out verbatim the same text that is on a slide).
  • High-quality simulations or figures are not necessarily better than low-quality ones (2D may be better than 3D representations, and line drawings better than video).
  • Distributed (over time) learning and revision is preferable to once-only intense periods of learning. Tested recall is often better than re-study of the already presented material.
  • Content matters! Generic visual diagnostic skills are easily overestimated – students need exposure to multiple examples of all the various rashes you expect them to know about.

In order to support my belief that we do indeed know a lot about how to improve learning – even though this knowledge is frequently ignored – I summarise below some key findings about the cognition of (medical) learning and clinical practice. Following this section I return to the role that technology may or may not play in dermatology teaching and learning.

Learning and cognitive load theory

Modern theories of educational instruction emphasise that learning strategies must take account of the information-processing abilities of the human brain (42–44). Key amongst these limitations is the central role of working memory, which can only hold material for a short time and is severely limited in the number of units (‘chunks’) of information it can hold at any one time (45). Whereas sensory memory can hold a large amount of information for a very short period (< 1 second), and long-term memory may be apparently unlimited, learning requires transfer from sensory memory and integration of this new knowledge with prior knowledge within working memory. The limited capacity of working memory means that this process can be influenced for good or ill. For instance, extraneous material may place an unnecessary load on working memory, meaning that the core information is not processed in a meaningful way. Because information processing is thought to be dual-channel, use of both pictorial and auditory channels may increase the effective capacity of working memory. For instance, a diagram accompanied by speech is to be preferred to a diagram accompanied by both on-screen text and the same words spoken aloud (42). If the amount of information passed into working memory is too great, then the ability to process this information and ‘make sense’ of the input is disturbed. Think of a rapid-fire slide lecture in which there never seems to be a pause to integrate the sensory information with what is ‘going on’. In this instance, working memory is so busy trying to keep up with the flow of information that it is not possible to engage in meaningful learning. Some of the predictions of cognitive load theory run against what often seems to pass for best practice. For instance, people may learn better from black and white drawings than from colour photographs, and many video effects may divert processing time from working memory and consequently impair learning (43). A review of this topic is provided by Mayer in the context of multimedia learning (44).

Interventions based on cognitive load theory have large effect sizes – thinking with the clinician’s mindset, we would say that the interventions described above are far more effective than the majority of treatments employed by cardiologists or neurologists. Norman has summarised a number of other practical learning strategies that have been shown to improve learning, again with large effect sizes (39). For instance, we know that distributed practice, in which learning is spaced out over time, is accompanied by large improvements in learning. We know that varied and contrasting practice is also important – if you wish to teach students how to diagnose and treat common skin cancers, you may initially want to include only a single class of lesions, but later you would do better to broaden their exposure to more than one type of lesion. If the lecture has been on basal cell carcinomas, then seeing only pictures of classical nodular BCCs at the end of the lecture is unlikely to be as useful as providing contrasting images of different types of BCC, as well as, say, squamous cell carcinomas and a range of other lesions (a sketch of this sort of mixed practice follows below). Finally, as mentioned in the opening section, we know that periodic testing has greater beneficial effects on learning than other uses of self-study time, such as passively going over the learned material once again.
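To illustrate the difference between blocked and mixed practice, here is a minimal Python sketch; the image banks, identifiers and function names are all hypothetical inventions, standing in for what would in reality be curated, annotated clinical photographs.

```python
import random

# Hypothetical banks of annotated images, keyed by diagnosis.
banks = {
    "nodular BCC": ["bcc_n1", "bcc_n2", "bcc_n3"],
    "superficial BCC": ["bcc_s1", "bcc_s2"],
    "SCC": ["scc_1", "scc_2", "scc_3"],
    "benign naevus": ["nev_1", "nev_2", "nev_3"],
}

def blocked_practice(banks):
    # Massed practice: every example of one class before the next class.
    return [(cls, img) for cls, imgs in banks.items() for img in imgs]

def mixed_practice(banks, seed=1):
    # Interleaved practice: shuffle across classes so that successive items
    # force the learner to discriminate between diagnoses, not confirm one.
    items = blocked_practice(banks)
    random.Random(seed).shuffle(items)
    return items

print([cls for cls, _ in mixed_practice(banks)])
```

The design point is the shuffle itself: each successive image poses the question “which of the diagnoses I know is this?” rather than “is this yet another example of the diagnosis I was just told about?”.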

The basis of clinical expertise in dermatology: the important role of non-analytical reasoning

Views on the nature of expertise in medicine have changed considerably over the last twenty years (38). At one time it was imagined that experts possessed particular and general ‘critical thinking skills’ that enabled them to diagnose patients, and that these skills were absent in beginners. It was thought that experts were able to reason successfully from basic science to clinical diagnosis in a way that beginners could not, and that diagnostic reasoning progressed by some form of hypothetico-deductive process. Most of these beliefs are now thought to be in large part either false, or insufficient as models of medical diagnostic skill. In truth, and with the benefit of hindsight, their applicability to expertise in dermatology always seemed far-fetched.

More recent work, particularly in dermatology, has emphasised the role of non-analytical models of clinical reasoning (NAR) (23, 46, 47). Diagnosis here is viewed as a problem of categorisation, and the impetus for this work has come from the study of how humans are able to classify everyday objects such as cats, dogs and faces (48). Such abilities often seem effortless, are fast, and are frequently not the subject of conscious scrutiny. On the face of it there are strong parallels with how dermatologists work: diagnosis is often ‘Blick’ diagnosis – apparently not subject to conscious scrutiny – and the process is often difficult to convey to others in such a manner that they can emulate it.

There are various theories of how categorisation is achieved, including prototype theory and exemplar theory. In the former, to caricature it at least, a person has a single prototype for each diagnostic class, based on their prior experience (48). When a future case requires diagnosis, the new instance is compared with the properties of the various classes held in memory, and the class with the most properties in common is deemed the correct diagnosis. There is, as Norman says, a ‘feature-by-feature’ matching (47). Exemplar theory, by contrast, holds that an expert may hold in memory a large number of examples based on prior experience. When a new case is seen, the clinician matches this index case with a particular case or example held in memory, and the diagnosis is that of the class to which the matching example belongs. Neither theory provides an exact mechanism for how these cognitive tasks are implemented neurally, nor do they demand that the processes involved are conscious or open to conscious examination (48).
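Neither theory is a computational claim about the brain, but a toy sketch makes the contrast concrete. All of the feature values, case data and function names below are invented purely for illustration (real lesions are obviously not three numbers): a prototype classifier compares a new case with one averaged representation per class, whereas an exemplar classifier compares it with every stored case.

```python
import math

# Invented feature vectors for previously seen cases:
# (asymmetry, colour variation, border irregularity), each scaled 0-1.
seen_cases = {
    "melanoma": [(0.9, 0.8, 0.7), (0.8, 0.9, 0.9)],
    "benign naevus": [(0.2, 0.1, 0.2), (0.3, 0.2, 0.1)],
}

def prototype_diagnosis(case):
    # Prototype theory: one averaged representation per diagnostic class.
    prototypes = {cls: tuple(sum(f) / len(f) for f in zip(*examples))
                  for cls, examples in seen_cases.items()}
    return min(prototypes, key=lambda cls: math.dist(case, prototypes[cls]))

def exemplar_diagnosis(case):
    # Exemplar theory: the diagnosis is the class of the single most
    # similar previously seen case.
    cls, _ = min(((cls, ex) for cls, exs in seen_cases.items() for ex in exs),
                 key=lambda pair: math.dist(case, pair[1]))
    return cls

new_case = (0.7, 0.9, 0.6)
print(prototype_diagnosis(new_case), exemplar_diagnosis(new_case))
```

On these toy numbers the two theories agree; they come apart when a new case sits close to one stored exemplar but far from the class average, which is precisely the situation in which a large personal library of examples pays off.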

The idea of NAR has great implications for teaching and learning in dermatology, as well as in many other areas of medicine. For instance, work based on it has shown that accurate diagnosis may be better achieved if the diagnosis is made quickly rather than after conscious deliberation, something that runs against much current advice to students. Because diagnosis may not be subject to conscious deliberation, experts may well not be able to explain how they arrived at a diagnosis – they may be able to tell a convincing story, and in a teaching situation probably feel obliged to, but in reality the reasons they give for their expertise in a particular instance may be mistaken. The expert may indeed recognise a BCC or a melanoma correctly, but the reasons he verbalises for why he thinks this is the correct diagnosis are not necessarily the ones that allow him to make that diagnosis (23). NAR also calls into question many attempts to use rules to make a particular diagnosis. For instance, rules such as the ABCD criteria or various checklists for melanoma cannot be taken as literal accounts of how experts make a diagnosis (49). Indeed, given the nature of expert diagnosis, it is quite possible that the verbalisation of signs is biased – the diagnosis is already known to the expert, and his reporting of qualities such as colour variation and irregularity is likely the result, rather than the cause, of the process that has led to the correct diagnosis. Dreyfus and Dreyfus, in a now classic text on the nature of artificial intelligence, pointed out that the idea that expertise is accounted for by experts possessing ever more refined rules, which the beginner has to acquire and learn as they move from novice to expert, may be the reverse of what really happens (50). Beginners may start with simple rules, but with time, rather than using sophisticated rule-based strategies, they instead build up a large library of exemplars, which they then use to classify particular situations. The expert relies not on rules but has to hand a large library of personal examples to guide action. Certainly this view chimes with many of my own prejudices from teaching students, where rule-based strategies seem to function more in a social role, providing ‘something to say’ whilst students are guided through more and more images of skin disease that allow them to expand their own exemplar libraries.

The importance of deliberate practice, structured feedback and domain expertise

The danger of believing that acquiring medical expertise involves learning some special and transferable type of reasoning strategy, or that if only you learn the basic cell biology or physiology you can then be set free to become an expert, is that it misinforms how we teach and train doctors. Asking students to learn dermatology is not a way to develop higher-level visual reasoning skills. Learning dermatology is not a way to promote the reading of X-rays. Learning clinical diagnostic skills in dermatology may not even promote learning in dermatopathology, except where some of the factual knowledge is common. Expertise is in large part about possessing more knowledge and skill in a particular domain.

There is a growing literature on diverse forms of expertise, covering mental skills such as chess, motor skills such as physical sports, and activities that involve both, such as painting and surgery (for reviews see Ericsson (21, 51–54)). There seem to be some commonalities across many of these domains. Acquiring expertise requires continued practice and exposure, but also structured and preferably immediate feedback (52). Here we return to the master and apprentice model, where the master criticises and comments on the work of the novice. Practice is not just about ‘seeing patients’ but about seeing patients in an environment that is structured around the learner’s weaknesses. The acquisition of musical expertise provides a clear example. The novice improves by practice, of course, but this practice is not just attempting to play any old concert piece; rather, the novice progresses through exercises chosen to focus on particular skills more intensely than can be achieved by performing any particular repertoire piece. Eric Cantona honed his footballing skills by long practice against a wall or an empty goal, not just by playing games. Along the way, the novice requires long periods of intense practice, but also receives detailed feedback on his individual performance. Finally, the skill needs to be sustained – if you don’t use it you are in danger of losing it. Andrés Segovia – or for that matter Eddie Van Halen or Steve Vai – practised not just by playing concerts each day, but by working through exercises that focused on individual elements of skill. They then put the individual elements together on the concert platform. There is a lesson here. It is almost an article of faith for many that trainees learn (just) by seeing lots of patients. Of course, the converse would seem absurd – it is hard to develop expertise if you never see patients. On the other hand, structured non-clinic learning may promote some aspects of learning in ways that are superior to, and more efficient than, learning based purely on seeing patients on a day-to-day basis.

How exactly will the Web and allied technology enhance learning?

Readers will have detected two threads in this essay. The first was an account of recent changes in our ability to communicate and disseminate material – the Web. The second was an overview, from ‘ten thousand feet’, of some recent work on the cognition of learning and medical expertise. I now need to bring these two strands together, and the nature of this juxtaposition is critical for my purpose. If the new virtual world the Web offers is to sustain and change dermatological education, it will be on the basis that it is better than the alternatives. In the context of this review, this means that the technology improves learning, or delivers the same learning at lower unit cost. It is this demand, and indeed empirical question, that links the two threads of this review together. It is not just about copying the status quo – making the online world a copy of the real world – but a question of the extent to which the online world can promote student learning, given what we know about how students learn. Of course, predicting the transformative powers of technology is difficult. Think of the example I quoted earlier of the pollster on the bridge at Mainz enquiring about people’s opinions of the transformative potential of Gutenberg’s technology. Who would have thought, just a few years ago, that 1 in 10 humans on this planet would communicate with each other using a network designed to rank the attractiveness of members of the opposite sex (Facebook), or signal using a limit of 140 characters (Twitter)? This uncertainty notwithstanding, it is surely possible to create a value framework for how technology might enhance learning, and a useful place to start is with the question of whether the new technology is merely propping up the status quo or enhancing student learning.

Much traditional teaching at medical schools uses the lecture format to pass knowledge from teacher to student. The lecture format dates from before the printed text and, in part, was a technological solution to the absence of cheap textual material that students could buy and read in private (4). Lectures are largely a one-way medium, with the students fairly passive (if indeed awake), but one capable of being scaled up to deal with hundreds of students. The Web obviously allows this scaling almost infinitely, with little apparent diminution of value. For the many medical schools in the UK that do not employ full-time academics in dermatology, it may be sensible to tap into lectures produced elsewhere that are available on the Web. Indeed, for many domains of knowledge, freely available Web lectures, such as those from the major US research universities, are likely of much better quality than those given face-to-face locally in many UK Universities.

There is, however, nothing sacrosanct about lectures as a medium of education, and using online lectures produced elsewhere may or may not be sensible. Debates still rage about the value of lectures, but one of the core findings from multimedia research is that the medium itself often matters very little (43). What does matter is how well the content obeys the sorts of learning principles I have outlined elsewhere in this essay – cognitive load theory, mixed practice, reinforcement and so on; whether it is provided via a computer screen or a paper book is less important. So placing lectures on the Web is attractive, but in one sense this is the virtual world aping the real world, when the invention of the now traditional real-world format was a solution to a problem (the absence of books) that we no longer face. Virtual lectures comprising simple video formats are far less interactive than live lectures that, for instance, make use of clickers and other forms of interaction. Live lectures, being unique events, may command student attention in a way that is lost once students know they can watch the lecture at their leisure. Ironically, many online lectures follow the format of the traditional one-hour lecture, something that might make sense in the real world but is unnecessary in a virtual time frame, where shorter durations may favour learning and attention. And, sadly, the same tired PowerPoint bullet-point-ridden slides seem as common online as in the real world. Of course, online asynchronous lectures allow students to watch the same material again and again, and to segment the material as they wish, but this advantage comes at the cost of a loss of intimacy. And what does the Web offer beyond the lecture?

Earlier, I pointed out how the development of cheap colour prints democratised dermatological education. In the context of dermatology, the Web seems to offer major possibilities. Just as there is currently a lively debate about the value of commercial journal publishers versus academic-led repositories such as arXiv, the publishing of medical textbooks is largely in the hands of commercial publishers, whose interests are not necessarily congruent with those of learners. The most pertinent issue for dermatology is the ability of Web-based material (or iBooks) to allow publication of texts with thousands of images at very low cost. For dermatology this may be a major fillip to learning and clinical skills. Given the role of NAR in dermatological diagnosis, the ability to expose students in a structured way to large volumes of relevant and well-annotated images may be a real boon to student learning. For instance, at my own institution, despite considerable investment of time and resources, and with students receiving more than the average undergraduate exposure for the UK, clinical exposure to skin cancers was at such a low level that meaningful clinical skills were unlikely to develop (32). In practice, our students use the free resource provided by the New Zealand dermatology society (http://dermnetnz.org/), or freely available online textbooks that contain more clinical photographs than standard undergraduate dermatology texts (note a conflict of interest: the current author wrote one of these online texts). The ability to disseminate high-quality photographs at next to zero cost may benefit not just undergraduates but postgraduates too. When your trainees finish their specialist training, how many cases of melanoma have they seen? How many patients with Merkel cell carcinoma? It is hard to imagine that such virtual cases could not be used as a basis for improving clinical skills, just as Hebra’s Atlas did in his own day. There are, however, downsides. Many images on the Web are of poor quality, and the diagnosis attached to some images is wrong, even on sites that are supposedly ‘certified’ as high quality (such as the English NHS).

Online lectures and online atlases are of course virtual copies of what we can do in the real world. Whereas the former seem to this author of minor value, the latter may well be transformative, in that atlases of thousands of images – if well annotated and indexed – offer a simulated learning experience that is almost qualitatively different from that available to most medical students today. The Web, however, offers opportunities to do things that are simply not practical outside a one-to-one master-pupil apprenticeship. One of the striking aspects of Web services such as Facebook and Twitter is the social nature of the enquiry and information sharing they encourage. It is possible to find out what your friends and colleagues are doing, respond to it and interact with it. In education this means using Web-based interaction to learn from your peers, to react asynchronously to textual and image material, and to find out how you are doing in relation to your colleagues or peers. Interactive and automatically marked quizzes and games (‘gamification’) also allow teachers to assess better what students have learned and which topics they are finding difficult (a minimal sketch follows below). We may not be able to provide one-to-one tuition, but a web of learners offers enormous opportunities for self and peer assessment and goal sharing. We are also no longer tied to the linear series of lectures or chapters in a book. Different media can be tied together, linked using hypertext, and the learners themselves allowed to navigate at their own speed through knowledge domains. I doubt we have even scratched the surface of how online material can be used to promote learning in dermatology.
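What ‘automatically marked’ might mean in practice is easily sketched. The item bank, identifiers and class below are hypothetical and purely illustrative; the point is that immediate corrective feedback for the learner, and aggregate error counts for the teacher, fall out of the same few lines.

```python
from collections import defaultdict

# Hypothetical item bank: image identifier -> correct diagnosis.
item_bank = {
    "img_017": "lichen planus",
    "img_042": "psoriasis",
    "img_108": "basal cell carcinoma",
}

class ImageQuiz:
    """Auto-marked quiz: immediate feedback per answer, plus per-item
    error counts so a teacher can see what a cohort finds difficult."""

    def __init__(self, bank):
        self.bank = bank
        self.errors = defaultdict(int)

    def mark(self, item, answer):
        correct = self.bank[item]
        if answer.strip().lower() == correct:
            return True, "Correct."
        self.errors[item] += 1
        return False, f"Not quite - this is {correct}."

quiz = ImageQuiz(item_bank)
print(quiz.mark("img_042", "Psoriasis"))   # (True, 'Correct.')
print(quiz.mark("img_017", "eczema"))      # immediate corrective feedback
print(dict(quiz.errors))                   # {'img_017': 1}
```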

Some caveats: disruption is uncomfortable, especially for institutions

In the introduction to this essay I quoted Eli Noam’s seminal essay, in which he drew attention to the economics of Universities and how the Web would challenge traditional financial models (3). Many of the opportunities the Web offers will promote learning if they are linked to sound pedagogical principles. The danger is that, rather than the goal being to enhance learning, they are used simply to cut costs, without enhancing learning or student value (4). So, for instance, if the cost of materials continues to drop (whether online lectures, books or atlases), then students are no longer dependent on a particular institution for that material. The value for a particular student at a particular institution has to come from something else. This might be seminars, one-to-one clinic tuition, or a particular ‘esprit’ only available in the physical milieu of that institution: for clinical medicine this will obviously include talking to and examining real patients. What is no longer tenable (ignoring for the moment the issues of certification and degree-awarding powers) is for a medical school to justify its value to an individual student as it could once have done, when course material was geographically local and restricted. Many institutions will be reluctant to embrace this change, particularly given the almost worldwide financial crisis in higher education, thinking instead that the new technology simply allows them to charge the same (whether fees to individual students or the state) while reducing their overhead costs (4, 55).

The online world also allows the promulgation of material and approaches that (I hope) would fail if they were classroom-based. There are, of course, the obvious examples of images that are clearly annotated incorrectly or are of very poor quality, or textual material that has not been edited well or is presented badly. However, there are other problems. Some of the e-learning packages the present author has seen seem to lack any sense of pedagogical logic, and often appear merely to be a way of satisfying bureaucratic goals – “we know all staff completed the training package because an electronic trail exists” – rather than promoting real learning. Or, in an attempt to solve the age-old timetabling problem of undergraduate medicine, instead of reducing the syllabus or concentrating on extracting learning value from each unit, the curriculum solution is all too often just to pile on virtual lectures that students are supposed to watch in their ‘spare time’ (this allows the institution to tick the administrative box that the subject was covered, and of course assumes that students, unlike staff, have lots of free time…).

Finally, whilst I do believe the new technologies have much to offer, there is still too widespread a tendency to imagine that just because something involves a computer it must be better. For instance, modern technologies allow richer audiovisual presentation, including 3D visualisation for anatomy and skin lesions (56), and richer animation in lectures. Surely that will promote learning? Well, the evidence suggests that it may not (57). 2D figures may actually be superior to 3D images, and videos in lectures may inhibit deep learning (43). It may be novel, but just because it is branded ‘e-learning’ does not mean it is useful. As Alan Kay, one of the legendary computing scientists who passed through Xerox PARC, and a long-time researcher into how computers can enhance human learning, pointed out only a few years ago, the computer revolution hasn’t happened – yet (58).

REFERENCES

*The title for this review owes considerably to: Naughton J. From Gutenberg to Zuckerberg: What you really need to know about the internet. London: Quercus; 2012 (2).

1. Naughton J. A brief history of the future: the origins of the Internet. London: Phoenix; 2000.
2. Naughton J. From Gutenberg to Zuckerberg: What you really need to know about the internet. London: Quercus; 2012.
3. Noam EM. Electronics and the dim future of the University. Science 1995; 270: 247–249.
4. DeMillo RA. Abelard to Apple: The fate of American colleges and universities in the twenty-first century. Cambridge, Mass: MIT Press; 2011.
5. Draycott M. Disruptive technologies in higher education: adapt or get left behind. Higher Education Network, Guardian Professional. http://www.guardian.co.uk/higher-education-network/blog/2012/mar/21/disruptive-technology-in-he?CMP=. Accessed April 17, 2012.
6. Lewin T. MOOCs, large courses open to all, topple campus walls. New York Times. http://www.nytimes.com/2012/03/05/education/moocs-large-courses-open-to-all-topple-campus-walls.html?pagewanted=all. Accessed April 17, 2012.
7. Parry M. MIT will offer certificates to outside students who take its online courses. The Chronicle of Higher Education. http://chronicle.com/article/MIT-Will-Offer-Certificates-to/130121/. Accessed April 17, 2012.
8. Hebra F, Elfinger A, Heitzmann C. Atlas der Hautkrankheiten. Vienna: K.K. Akademie der Wissenschaften; 1876.
9. Anderson WA, Banerjee U, Drennan CL, Elgin SC, Epstein IR, Handelsman J, et al. Science education. Changing the culture of science education at research universities. Science 2011; 331: 152–153.
10. DeHaan RL. Science education. Teaching creative science thinking. Science 2011; 334: 1499–1500.
11. Smith MK, Wood WB, Adams WK, Wieman C, Knight JK, Guild N, Su TT. Why peer discussion improves student performance on in-class concept questions. Science 2009; 323: 122–124.
12. Deslauriers L, Schelew E, Wieman C. Improved learning in a large-enrollment physics class. Science 2011; 332: 862–864.
13. Karpicke JD, Blunt JR. Retrieval practice produces more learning than elaborative studying with concept mapping. Science 2011; 331: 772–775.
14. Ramirez G, Beilock SL. Writing about testing worries boosts exam performance in the classroom. Science 2011; 331: 211–213.
15. Feldon DF, Peugh J, Timmerman BE, Maher MA, Hurst M, Strickland D, et al. Graduate students’ teaching experiences improve their methodological research skills. Science 2011; 333: 1037–1039.
16. Man J. The Gutenberg revolution. London: Bantam Books; 2009.
17. Turner D. De Morbis Cutaneis: A treatise of diseases incident to the skin. In two parts. With a short appendix... By Daniel Turner. London: printed for R. Bonwicke, W. Freeman, Tim. Goodwin, J. Walthoe, M. Wotton [and 5 others]; 1714.
18. Lister R. In: Hamilton S, Hamilton M, eds. Proceedings of the tenth conference on Australasian computing education, Volume 78. Darlinghurst, Australia: Australian Computer Society; 2008, p. 3–17.
19. Bransford J, Brown AL, Cocking R. How people learn: Brain, mind, experience, and school. Washington, DC: National Academies Press; 2000.
20. Wieman CE. The back page: the “curse of knowledge”, or why intuition about teaching often fails. American Physical Society News 2007; 16: 8.
21. Ericsson KA, Charness N, Feltovich PJ, Hoffman RR, eds. The Cambridge handbook of expertise and expert performance. Cambridge; New York: Cambridge University Press; 2006.
22. Norman G. Fifty years of medical education research: waves of migration. Med Educ 2011; 45: 785–791.
23. Norman G. Building on experience – the development of clinical reasoning. N Engl J Med 2006; 355: 2251–2252.
24. Mazur E. Education. Farewell, lecture? Science 2009; 323: 50–51.
25. Mazur E. Peer instruction. Upper Saddle River, NJ: Prentice Hall; 1997.
26. Crouch CH, Mazur E. Peer instruction: Ten years of experience and results. Am J Phys 2001; 69: 970.
27. Glymour C. Why the university should abolish faculty course evaluations. Carnegie Mellon University Department of Philosophy: Paper 358. http://repository.cmu.edu/philosophy/358. Accessed March 13, 2012.
28. Glymour CN. Galileo in Pittsburgh. Cambridge, Mass: Harvard University Press; 2010.
29. Eva KW, Cunnington JPW, Reiter HI, Keane DR, Norman GR. How can I know what I don’t know? Poor self assessment in a well-defined domain. Adv Health Sci Educ Theory Pract 2004; 9: 211–224.
30. Davis DA, Mazmanian PE, Fordis M, Van Harrison R, Thorpe KE, Perrier L. Accuracy of physician self-assessment compared with observed measures of competence: a systematic review. JAMA 2006; 296: 1094–1102.
31. Chiang YZ, Tan KT, Chiang YN, Burge SM, Griffiths CE, Verbov JL. Evaluation of educational methods in dermatology and confidence levels: a national survey of UK medical students. Int J Dermatol 2011; 50: 198–202.
32. Aldridge RB, Maxwell S, Rees JL. Dermatology undergraduate skin cancer training: a disconnect between recommendations, clinical exposure and competence. BMC Med Educ 2012; 12: 27.
33. Burton J. Essentials of dermatology. Edinburgh: Churchill Livingstone; 1980.
34. Custers EJ. Long-term retention of basic science knowledge: a review study. Adv Health Sci Educ Theory Pract 2010; 15: 109–128.
35. Novikoff TP, Kleinberg JM, Strogatz SH. Education of a model student. Proc Natl Acad Sci USA 2012; 109: 1868–1873.
36. Cliff S, Bedlow AJ, Melia J, Moss S, Harland CC. Impact of skin cancer education on medical students’ diagnostic skills. Clin Exp Dermatol 2003; 28: 214–217.
37. Bedlow AJ, Cliff S, Melia J, Moss SM, Seyan R, Harland CC. Impact of skin cancer education on general practitioners’ diagnostic skills. Clin Exp Dermatol 2000; 25: 115–118.
38. Norman G. Research in clinical reasoning: past history and current trends. Med Educ 2005; 39: 418–427.
39. Norman G. Chaos, complexity and complicatedness: lessons from rocket science. Med Educ 2011; 45: 549–559.
40. Hestenes D. Wherefore a science of teaching? The Physics Teacher 1979; 17: 235–242.
41. Bok DC. Universities in the marketplace: The commercialization of higher education. Princeton, NJ: Princeton University Press; 2003.
42. Mayer RE. Applying the science of learning to medical education. Med Educ 2010; 44: 543–549.
43. Colvin Clark R, Mayer RE. e-Learning and the science of instruction: proven guidelines for consumers and designers of multimedia learning. San Francisco: John Wiley & Sons; 2008.
44. Mayer RE, ed. The Cambridge handbook of multimedia learning. Cambridge, UK: Cambridge University Press; 2005.
45. Sweller J. In: Mayer RE, ed. The Cambridge handbook of multimedia learning. Cambridge, UK; New York: Cambridge University Press; 2005, p. 19–30.
46. Allen SW, Norman GR, Brooks LR. Experimental studies of learning dermatologic diagnosis: The impact of examples. Teach Learn Med 1992; 4: 35–44.
47. Norman G, Young M, Brooks L. Non-analytical models of clinical reasoning: the role of experience. Med Educ 2007; 41: 1140–1145.
48. Murphy GL. The big book of concepts. Cambridge, Mass: The MIT Press; 2002.
49. Aldridge RB, Zanotto M, Ballerini L, Fisher RB, Rees JL. Novice identification of melanoma: not quite as straightforward as the ABCDs. Acta Derm Venereol 2011; 91: 125–130.
50. Dreyfus HL, Dreyfus SE. Mind over machine: The power of human intuition and expertise in the era of the computer. New York: The Free Press, MacMillan; 1988.
51. Ericsson KA, Krampe R, Tesch-Römer C. The role of deliberate practice in the acquisition of expert performance. Psychological Review 1993; 100: 363–406.
52. Ericsson KA. Deliberate practice and acquisition of expert performance: a general overview. Acad Emerg Med 2008; 15: 988–994.
53. Ericsson KA, Prietula MJ, Cokely ET. The making of an expert. Harvard Business Review 2007; 85: 114.
54. Ericsson KA. An expert-performance perspective of research on medical expertise: the study of clinical performance. Med Educ 2007; 41: 1124–1130.
55. Christensen CM, Eyring HJ. The innovative university: changing the DNA of higher education from the inside out. San Francisco: Jossey-Bass; 2011.
56. Aldridge RB, Li X, Ballerini L, Fisher RB, Rees JL. Teaching dermatology using 3-dimensional virtual reality. Arch Dermatol 2010; 146: 1184–1185.
57. Norman G. Anatomical mysteries. Adv Health Sci Educ Theory Pract 2010; 15: 149–151.
58. Kay AC. The computer revolution hasn’t happened yet. OOPSLA 1997. http://blog.moryton.net/2007/12/computer-revolution-hasnt-happened-yet.html. Accessed April 17, 2012.