Friday, September 25, 2009

Screening test mandatory for foreign medical graduates

Medical graduates with foreign degrees will not be able to practise in India till they have cleared a screening test conducted by the Medical Council of India, the Supreme Court has ruled.

The screening test will also be mandatory for those students who have got MBBS degrees from a country with which India has a reciprocity agreement. At present, certain medical qualifications of the UK, Australia, Canada, Italy, Japan, New Zealand, South Africa, Ireland, Nepal, Pakistan and Bangladesh are covered under the reciprocity clause. From now on, an Indian student who gets a medical degree from a foreign country covered under the reciprocity clause and wants to practise in India can do so only after clearing the MCI's screening test.

The worst affected would be Indian students who had made a beeline for medical degrees from colleges in Nepal after the MCI had refused to recognise medical degrees from institutes in erstwhile USSR countries, which had liberal admission criteria.

Students had gone in droves to get admission to medical colleges in Nepal, with which India has a reciprocity clause, and approached the SC after the MCI said they were required to appear in the screening test.

Dismissing their plea against the screening test, a Bench comprising Chief Justice K G Balakrishnan and Justices P Sathasivam and J M Panchal said:

"Appellants have to appear in the screening test conducted by the National Board of Examination in terms of the Screening Test Regulations made by the MCI. It was noticed that such students also included those who did not fulfil the minimum eligibility requirements for admission to medical courses in India. Serious aberrations were noticed in the standards of medical education in some foreign countries, which were not on par with standards of medical education available in India," the SC said, justifying its ruling.

It was therefore felt necessary by Parliament to make a provision to enable the MCI to conduct a screening test to satisfy the regulatory body about the adequacy of knowledge and skills acquired by citizens of India who obtained medical qualifications from universities or medical institutions outside India.

MCI aims to bring back 5000 NRI doctors in 5 years

Amendments in the Medical Council of India (MCI) regulations will open the floodgates for hundreds of non-resident Indian (NRI) doctors to come back to their roots. MCI has eased the cross-over rules and has set a target of bringing back 5,000 Indian doctors, including teachers, settled in the US, UK, Canada, Australia and New Zealand.

MCI has removed the main bottleneck by recognising postgraduate and other degrees from these specific countries, where health facilities are supposedly among the best in the world and education is delivered in English. Returning doctors have the choice of teaching in a private or government college as well as working in a private or government hospital. They can also set up their own medical colleges and hospitals. Indian doctors in these countries are the richest segment even among NRIs.

Apart from accepting foreign degrees, the MCI has made special provision for foreign experience to be counted. For example, a professor of medicine at a US university, with the number of years of experience required to become one in India, can be hired as a professor by any medical college in India. This could bring about a huge change not only in the cities but also in the countryside, if the returning doctors really do go back to their roots. The MCI also sees the possibility of groups of NRI doctors coming back and pooling their resources to build hospitals and medical colleges.


Tomorrow's Doctors...
...are going to be quite similar to yesterday's doctors, apparently. According to the GMC, Medical Schools should now be focussing on giving students meaningful clinical experience, making sure that medical students are ready to become junior doctors. Which is what we've always thought, right?

But it is encouraging to see the GMC trying to take the lead in guiding medical schools towards promoting useful clinical experience rather than increasing PBL, training sessions, communication skills and simulations (all of which are valuable educational tools as an adjunct to clinical teaching, but have perhaps been over-represented).

In my brief run-through of the new Tomorrow's Doctors, I can't say I found much that substantively addresses two related issues, though:

1. It's all very well incorporating clinical experience into the first few years of medical school, but this experience is of limited value when basic knowledge is so poor (as you'd expect early in undergrad training). Students need a good grounding in relevant medical science (i.e. you don't need to know the Krebs cycle inside-out, but a good knowledge of pharmacology is essential). For example, I spent 5 minutes teaching a medical student (not final year, but not first year either) about a lumbar spine x-ray. It took longer than I expected because instead of concentrating on the osteoporotic crush fractures, we had to spend some time working out what the calcified tube-thing anterior to the spine was (hint: it sounds a bit like "Ray Liotta"). I didn't use that clue, though.

2. Dumping groups of medical students on wards doesn't equal clinical experience. All the checkboxes, DOPS etc in the world will not ensure that the student isn't spending most of his/her time wandering aimlessly behind an uninterested ward-round, chatting to the other students because no one has the time or interest to actively teach. My ward was short-staffed earlier this week, leaving a house officer for one team and an SHO for the other. This situation is manageable, but not ideal. Enter 5 medical students. You can imagine what kind of educational experience they got that day. Perhaps the advent of Student Assistantships will make the students more responsible and useful on the ward, which would undoubtedly improve the educational yield from their 'ward time'.

Once I've had a chance to have a proper read I may need to eat those words. We'll see.

The 11th Reason Doctors order unnecessary tests
I liked this list of reasons why doctors order tests. It's based on medical practice in the US, but most of the reasons apply to doctors in the UK too. I'd go so far as to add another - temporizing. It's really an extension of reason 1, with a bit of 2, 3 and, to some extent, 5 as well.

Time can be an excellent way of finding out what the natural history of a disease process is, of gaining new information, etc, so ordering a few tests while watchfully observing your patient is often reasonable or even very good practice. However, there's definitely a trap that many doctors fall into when they have a patient in want of a diagnosis or definitive plan who doesn't readily fit into a disease paradigm: they'll keep on ordering tests until they get bored. The problem with this sequential over-testing is that it allows the doctor to stop thinking. All you need to do is fire off a few tests, then you don't need to think until they all come back negative. What to do? Order another test that takes a few days! And again, and again…

Although this could result in the diagnosis coming to light, either by eventually finding the 'right' test, or by the disease revealing itself more clearly (or just resolving), the unfortunate side effect is that instead of being watchful and considering possible diagnoses over time, the doctor disengages his brain for all but the 30 seconds it takes to think up another few tests. So while he thinks he's exemplifying the considerate, watchful doctor, he becomes the exact opposite, sometimes for weeks on end.

However, I'd add just a tiny critique of Dr Rangel's underlying rationale for condemning over-testing. Not that I disagree with him - the behaviours he describes are absolutely not good medicine and should all be avoided. But why are they not good? In criticizing the lazy physician who can't be bothered to formulate a diagnosis using clinical skills, he says:

"It takes time to listen to and sort through a patient’s symptoms and to do a proper and directed physical exam. But if you have 55 patients to see today and you want to make it home on time then you can just order a GIANT MRI SCAN of EVERYTHING that’s all but guaranteed to detect any and every abnormality. Wrong. That’s not practicing medicine. That’s the cookie cutter approach. My dog can do that."


Yes, that's not very impressive doctoring. But the problem with the 'cookie cutter' approach is not that it's intellectually lazy, although it is. It's that it doesn't work - it has a terrible signal-to-noise ratio, and it exposes patients to risks from the original investigation and from subsequent investigations or procedures relating to incidentalomas. However, if we had some amazing new body scan that could accurately predict the natural history and effects of every 'abnormality', at £1 per scan, then ordering a GIANT WIZZBANG SCAN of EVERYTHING might be very good for patients, even though any lazy idiot could order the scan. I'd be out of a job, but people would probably be healthier.
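To put rough numbers on that signal-to-noise problem, here's a toy Bayes calculation - all figures invented for illustration, not from any real scanner. Even a very good untargeted test applied to a low-prevalence population produces mostly false positives:

```python
# Toy illustration of why untargeted "scan everything" testing has a
# terrible signal-to-noise ratio. All numbers are invented.

sensitivity = 0.95  # P(positive scan | disease present) - assumed
specificity = 0.95  # P(negative scan | disease absent) - assumed
prevalence = 0.01   # P(disease) in an unselected population - assumed

# Positive predictive value via Bayes' theorem:
true_pos = sensitivity * prevalence
false_pos = (1 - specificity) * (1 - prevalence)
ppv = true_pos / (true_pos + false_pos)

print(f"P(disease | positive scan) = {ppv:.1%}")  # ~16.1%
```

In other words, roughly five out of six 'abnormalities' found would be noise. Clinical assessment works by raising the pre-test probability before anyone orders anything, which is what rescues the numbers.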

Despite what a few mail-order scanning companies would like to tell you, that scan doesn't exist, and is very unlikely to any time soon, so those of us who use clinical skill and judgement can rest safe in our paycheques. But it's important to remember the point of our jobs: being a 'good doctor' (which includes using investigations judiciously) improves the health and lives of our patients. It's not an end in itself.

As a medical teacher, I can't teach my students / juniors about every situation where they should or shouldn't order a particular test. But if I can teach them an underlying thought process or behaviour pattern for approaching diagnostic situations - with the outcome for the patient paramount - then I shouldn't need to tell them how to avoid each of the 10 bad reasons for ordering tests. They should be able to work that out for themselves.

A Nursing student writes...
I recently received an email from a charming nursing student who read my blog, and wanted to know a little more about a presentation I'd uploaded to slideshare - on NICE and healthcare rationing. Primarily she wanted to reference it in an essay for her nursing degree on a similar topic. Now, of course I was very flattered, and yes, I do think my opinions are sensible and backed up by evidence, but I'm clearly not an expert on the ethics, law or economics of healthcare rationing. So I advised her to go to my references and look at the primary sources.

Because I'm a doctor, I'm contractually obliged to unthinkingly underestimate nurses; in fact, she'd already gone to the primary sources. But she still thought it was appropriate to reference my presentation, since she felt it had influenced her thinking:

"In some ways it's a grey area as I could solely reference primary sources and the Tutor would be unlikely to question it. But I am definitely borrowing the odd point from your presentation, so best to do the right thing"

TBH I don't think I would have been quite so honourable. I read a lot in articles, blogs, twitter feeds, on the TV, and from friends and colleagues. Sometimes I hear ideas or opinions I like or that persuade me to change my thinking. Some of it is conscious, much unconscious. So, when it comes to writing scholarly work, I tend to reference the primary sources that are at least published if not peer-reviewed too. Even if a blog article or online presentation influenced my thinking, I think I wouldn't reference it unless I was quoting it.


Is this reasonable, or am I being a snob about referencing sources that I don't think of as traditionally 'authoritative'? Would I feel better about referencing an article or book chapter by someone rather than the same person's blog? I think I probably would. And what about sites like wikipedia, which has the advantage of being 'peer reviewed' in some sense?


The debate about referencing wikipedia in scholarly work still has some distance to run, I think. For now, the rule seems to be that you can use wikipedia to learn but shouldn't rely on it as authoritative - and therefore shouldn't reference it directly. I think there's a lot to be said for wikipedia generally, especially if you understand how it works and how to look at the evolution of an article and its related discussions. But no matter how good wikipedia, my slideshare presentations or my blog waffling may be, if I'm still sceptical about sticking them in the reference section of my essays, it'll be some time before these kinds of resources are widely accepted as reasonable reference points for academic work.

Perhaps this is a shame, but perhaps a conservative attitude to this new medium is wise until there's a widespread and deeper appreciation of how it works, how it can be used and what it adds.


Finally, my web-savvy nursing-student reader signed off with another interesting point. Having reviewed many of the primary sources I'd mentioned in my talk, she did pause for thought at the end of the assignment, reflecting...


"Oh well, I still can't give a patient a urinary catheter, but I read Aristotle today..."


What hath I wrought?

Bait for the MedWeb2Skeptic
Gah, @amcunningham beat me to a proper look at this paper on web2 use in medical education. To be fair, I was on night shift at the time, so wasn't really in the right frame of mind to write anything longer than 140 characters. Still, I'm feeling quite chuffed that I got in there early with the critique, even if it was a little... concise.

Anyway, there isn't a massive amount to add to Anne-Marie's skewering of this survey-based paper on use of Web2 tools in medical/nursing education - she rightly critiques the low response rate, confusion & conflation of web2 / social media tools, and the authors' rather bold conclusions (subsequently echoed around the twittersphere).

The authors do acknowledge one of the paper's weaknesses when they state:

"...given the small sample size, it is difficult to predict whether the use of Web 2.0 tools portends a growing trend in education or merely represents a passing fad"

But although they note the small sample size, they still accept their findings as significant, albeit perhaps transient. To be honest, on the strength of this paper, the future of web2 use in medical education is not 'difficult to predict'; it's completely outwith any of the conclusions that could possibly be drawn from the data.

But just a few more points...

1. A survey of web2 usage by medical/nursing institutions, run as a fairly open-access questionnaire with a very poor response rate, means that any conclusions must be interpreted with a degree of caution. But it's not just the low response rate that sounds a note of caution. One also has to question why those particular people bothered to respond (selection bias). It's easy to hypothesize that survey recipients who'd never heard of Moodle etc would just delete the email, while those who were evangelical about using wikis and youtube would reply in their droves. So the sample biases itself (see the sketch after this list).

2. I think there are two other ways to do this kind of research: either spend some time identifying IT/education leads at medical schools and send them a better-designed survey asking about overall web2 tool use in the medical school, or survey a large number of medical students from several medical schools with a very short survey asking what tools they actually use on a regular basis.

3. As Anne-Marie mentioned, the qualitative data isn't reported. My guess is that there wasn't very much of it. The question is too broad and vague: "please briefly describe how these tools are used in your institution". This makes it difficult to answer (so most respondents probably don't bother) and unlikely to identify any common themes, as the responses given are likely to be highly heterogeneous. If you've ever tried to get useful qualitative responses from questionnaires, you learn this lesson pretty quickly. I did, and I was doing an MSF in my spare time.
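To see how strong that self-selection effect can be, here's a quick simulation - the response rates are invented for illustration, not estimates of the actual paper's bias:

```python
# Toy simulation of self-selection bias in an open-access survey.
# All parameters are invented for illustration.
import random

random.seed(42)
N = 10_000                 # institutions receiving the survey
true_usage = 0.20          # actual proportion using web2 tools (assumed)
p_respond_user = 0.80      # web2 evangelists reply in their droves
p_respond_nonuser = 0.10   # everyone else deletes the email

responses = []
for _ in range(N):
    uses_web2 = random.random() < true_usage
    if random.random() < (p_respond_user if uses_web2 else p_respond_nonuser):
        responses.append(uses_web2)

observed = sum(responses) / len(responses)
print(f"True usage: {true_usage:.0%}; survey estimate: {observed:.0%}")
# True usage: 20%; survey estimate: ~67%. The sample biases itself.
```

With those (made-up) response rates, a 20% true usage rate shows up in the survey as roughly two-thirds - a reminder that a low response rate isn't just imprecise, it can be systematically wrong.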

So, having kicked the corpse a bit, what's the real issue here? I think it's this: apart from generating headlines, what use is this kind of research anyway? So 45% of medical/nursing schools use web2 tools. Big whoop. Who uses them? What for? How? How often? And most importantly, why? If a web2 tool can deliver a better educational outcome (or an equivalent one more cheaply, easily or quickly) than a conventional teaching method, that's a good thing. But just using web2 education tools isn't important - it's what you do with them that counts.

Teaching Feedback - 'The Intimidator'
In 7 years as a doctor I think I've filled in a bazillion (approx) work-based assessments for junior doctors (most with contemporaneous structured feedback, some rather pointlessly a week or so later via email). I've handed in a few multi-source-feedback questionnaires, and I've probably completed 0.3 bazillion post-lecture feedback forms. Feedback is everywhere in medicine now, and if it's done well it's incredibly useful. If it's done poorly, it's a total waste of time.

Most of the feedback I've received relates to my skills as a doctor; very little has commented on my skills as an educator. And if you don't count the aggregated scores from near-useless lecture feedback forms, I've received almost no feedback about my teaching. In fact, I really don't count those forms - the quantitative questions are so vague they're only useful for comparing yourself to the other speakers in a putative best-speaker competition. They provide no specific information that can inform self-improvement.

Recently, for the MSc in Geriatric Medicine (Teaching/Communication Module) I'm working towards, I completed an assignment on devising a multi-source feedback survey on one aspect of my teaching skills. The process, results and reflection were delivered by means of PowerPoint slides. This is it...



Notes:
1. Now, for those of you who don't know me, I'm not the kind of person who thinks of himself as intimidating. I'm a 5'7" geriatrics reg, ex-computer game reviewer, briefly a stand-up comedian. Not that these things define me or negate the possibility that I'm a scary, dastardly figure. But it's not something that's really come up very often, and frankly it's quite the opposite of my self-image, which is why I decided to explore the issue with my MSF. It seems I can be intimidating, to a few juniors. In fact this shouldn't be such a surprise, really: I've got just over 2 years until I'm a consultant, for many of them I'm 2-3 grades up in the professional hierarchy, I'm the teacher, and I've (usually) got more knowledge than them... What do I do about it, though?

2. I don't actually think I'm Pete 'Maverick' Mitchell in Top Gun. But we do share a surname. And a nickname. Not really. But doing an MSF on yourself, about an aspect of your professional identity you're quite proud of, is quite a challenge to your self-image. That's what I was discussing in these slides.

3. Yes, the PPT slides are a bit wordy. But words mean points mean prizes (for the MSc markers).

4. HT to @nlafferty, who worked on the original DREEM, and pointed me towards the PHEEM (more relevant to F1s generally but less about teaching style, so I ended up using the DREEM as inspiration). The people you meet on Twitter...

Why are Junior Doctors no cleverer than I was?
Amongst doctors in training there seems to be little appreciation of the benefits of on-line learning. As a source of information (primarily via google and wikipedia), all but the most luddite appreciate some of the benefits, although the ones most often praised are immediacy and accessibility. Accuracy less so - and not because most doctors know how accurate their information sources are and so give them less weight, or learn how to assess, compare and cross-reference relevant data, but because unfortunately many don't seem to care. That's fine when you need a two-line summary of a condition in a patient's medical history, but not good enough when on-line information is the backbone of your learning and referencing. Confession - I can't remember the last time I opened a traditional medical textbook to look something up.

The old-fashioned method of trusting a few reputable names (Davidson's, Harrison's, The Lancet, NEJM, Cochrane, the AHA, or even specialized online efforts such as Medscape or UpToDate) isn't going to fly when there is such a huge amount of information available, going far beyond the scope of any of these august institutions. Frankly, appealing to authority rather than assessing sources, data and methodology yourself has never really been good enough, even before the intertubes. Not to devalue these organs (all worthy in their own right, and they still regularly form the backbone of my referencing), but their depth and breadth are already dwarfed by the rest of what's out there on the tubes.

So, we need to teach young doctors how to obtain, interpret and evaluate data from more sources than can ever be pre-emptively approved. They also need to know how to integrate this new learning into their pre-existing knowledge to form new understanding and improve practice. That is, in order to learn and improve practice, they need to self-apply a constructive hierarchy of learning: from finding new information and understanding it, through using and evaluating that knowledge academically, to applying it to their patients (creativity).


(Simplified version of the Revised Bloom's taxonomy (Anderson & Krathwohl, 2001))
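For anyone reading without the figure, the six levels of the revised taxonomy run as follows, lowest to highest - the clinical glosses in the comments are my own reading, not Anderson & Krathwohl's wording:

```python
# The six cognitive levels of the revised Bloom's taxonomy
# (Anderson & Krathwohl, 2001), ordered lowest to highest.
# Comments map each level onto junior-doctor learning (my gloss).
BLOOM_LEVELS = [
    "Remember",    # find and recall the new information
    "Understand",  # make sense of it
    "Apply",       # use it in a familiar clinical situation
    "Analyse",     # break it down, relate it to existing knowledge
    "Evaluate",    # judge its quality and relevance
    "Create",      # synthesize something new, e.g. a management plan
]
```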

I've talked before about how medical students are exposed to a huge volume of experience but seem to lack the skills or opportunity to assimilate it usefully. The same can be said of junior doctors, only substituting 'teaching' for 'experience'. When I was a junior doctor I got one hour of organized teaching a week at lunchtime, plus the occasional attendance at grand round. I was often too busy to make either. Currently, the juniors in my hospital get an afternoon of teaching (an hour of Grand Round and 2.5-3h of specific F1/F2/CMT tutorials). It's bleep-free and their wards are covered by on-call staff. So, 2-4x the amount of teaching, and they usually get to it. But knowledge and practice don't seem to be any better (and I am aware of the 'when I was a house officer' fallacy - I don't think they're any worse than I was). But why no better?

Often the methods used in hospital teaching programs try to jump over the intervening stages of learning, firing knowledge at the bemused faces of junior doctors via PowerPoint and expecting that to magically enhance their practice. I've even heard consultants bemoaning the fact that "They were taught this last week!". Not well enough, it would seem. Further, doing this kind of thing for 3 hours is utterly pointless. Even if they remember a few points from the first PowerPoint, they've forgotten them by the end of the third one, and are also apocalyptically bored.

The Plan

So, junior doctors have access to a huge amount of information, but don't know how to use it. They're also given a large amount of teaching time, a lot of which is wasted. I think there's an opportunity here, and I'm currently planning to change some of the junior doctor training at my next Manchester hospital placement to demonstrate it. Details are a bit sketchy at the minute, but if things work out, I'll be setting up a (probably Wetpaint-based) VLE / wiki to assist with the delivery of either the CMT or Foundation curriculum.

Face-to-face teaching will remain the backbone of the program, but with a 30-minute introductory lecture rather than 3 hours of PPT-punishment. Then, case discussions (PBL style), followed by wiki-based knowledge sharing, evaluation and synthesis. I'm aware that contributions outside class time are substantially lower than during it, so I'd plan for the juniors to do the majority of the work there and then. Also, since I'm a believer in evaluation-driven learning (but sceptical of how accurately exam scores reflect real skills), I'd expect to use their contributions as a marker of the learning process. So instead of just checking at an appraisal that the doctor has signed in to 70% of teaching sessions, I'd be able to give an indication of exactly how much the doctor has participated - this could even be used as an official learning objective by the educational supervisor.
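As a sketch of what that 'indication of participation' might look like: assuming, hypothetically, that the wiki's edit history can be exported as a CSV with author, page and timestamp columns (I haven't checked what Wetpaint actually offers), a tally per doctor is a few lines of code:

```python
# Minimal sketch: tally wiki contributions per junior doctor from an
# edit-history export. The CSV format (author,page,timestamp) is a
# hypothetical assumption, not a documented export feature.
import csv
from collections import Counter

def participation_report(history_csv: str) -> Counter:
    counts: Counter = Counter()
    with open(history_csv, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["author"]] += 1
    return counts

if __name__ == "__main__":
    report = participation_report("wiki_history.csv")
    for doctor, edits in report.most_common():
        print(f"{doctor}: {edits} contributions")
```

Raw edit counts are a crude proxy, of course - the educational supervisor would still want to look at what the contributions actually say - but it's a far richer signal than a sign-in sheet.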

So, that's the idea. I expect it will change, due to practical constraints, and also because I'm learning about the process of delivering this kind of connectivist program. But for me to be learning alongside those that I'm teaching is really exciting.

Personal Case Report: Visual Hallucinations post-op
Interesting case last week - post-op, Mr C was told by his surgeon that he'd had a myocardial infarction during recovery. A week or so later, his memory had turned this MI into a stroke. On a medical review of the frail surgical patients, Mr C happily told me he was getting over his stroke OK, but was troubled by odd hallucinations.


It's not uncommon for the elderly to experience confusion, fluctuating consciousness and hallucinations during acute illness - this usually represents delirium, a global, reversible brain dysfunction typically caused by infection, metabolic disturbance, drugs (prescribed, illicit and socially acceptable) or drug withdrawal.
But my patient didn't quite fit the pattern. His memory wasn't great, but this didn't appear to have changed recently. He was quite alert. And his hallucinations, when he went into them in more depth, were almost exclusively appearing to the left of his bed, usually fleeting images of people, disappearing beyond the field of view as soon as they appeared. To his left was a window, through which was the nurses' station. Such fleeting mirages on a hospital ward are usually called 'nurses'. Ha ha.
This kind of hallucination sounded a little like those often described in Parkinson's disease - patients describe feeling that there is someone standing just outside their visual field ('presence' hallucinations), or seeing animals or dark shapes flitting out of sight. But the left-sided phenomenon was odd.
A screening neurological examination - tone and power in the limbs, a brief check of facial power and eye movements - was normal. Crude checking of visual fields, however, was grossly abnormal. This man had no vision to the left of the midline.
More accurate visual field examination followed - demonstrating a left homonymous hemianopia - the loss of the left half of each eye's visual field.
For med students / junior doctors - where do you think this man's stroke was? Answer by Bamford classification (TACS, PACS, POCS or LACS), anatomically, or both.
Then look at the CT images:

The CT scan demonstrates a right-sided infarct in the occipital lobe - a stroke at the back of the brain on the right, which fits with the clinical picture of left-sided visual loss (the visual pathways from the eyes cross over in the middle of the brain, so each occipital lobe processes the opposite side of the visual field). There is a small amount of haemorrhage within the infarct, but outside the acute period (the first few hours) this doesn't affect management significantly.
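For the students, the crossing logic can be boiled down to a one-liner - a toy mnemonic, not a clinical tool:

```python
# Toy illustration of the crossing logic: each occipital lobe processes
# the OPPOSITE half of the visual field, for both eyes. So a unilateral
# occipital infarct gives a contralateral homonymous hemianopia.

def field_defect(occipital_lesion_side: str) -> str:
    opposite = {"right": "left", "left": "right"}[occipital_lesion_side]
    return f"{opposite} homonymous hemianopia"

print(field_defect("right"))  # left homonymous hemianopia - Mr C's deficit
```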


Learning points? Clinical examination is still useful. Listen to your patient to guide your examination. Don't assume an elderly patient is confused just because they're describing odd phenomena.
Further, I'm now wondering whether the visual hallucinations are similar to those of Charles Bonnet syndrome, or whether this is some kind of excitatory effect from the small amount of haemorrhage into the infarct.

SpR GIM Training Day
Sort of an experimental one, this, especially since I sent it from my iPhone. It turns out I'm trying to do several different things with this blog (but then it is my blog). I think that's OK, since I can use posts such as this both as my personal reflection and as examples of different ways to use a blog. Anyway, today was regional GIM SpR teaching on cardiology. Overall, lots of knowledge and expertise, but most speakers were too keen to show that off rather than educate effectively. The take-home messages required some distillation...

AF - Ablation? Main indication is symptoms (and drug failure). Amiodarone: little use for rate control; OK for rhythm control but poisonous.


Cardiac MR - Cost in Manchester only 3x echo. ?Now should be 1st line for myocardial perfusion.


Interventions for LVF:
Medical Rx
Beta blockers most important; some observational data that early (acute) beta blockade is lifesaving. Optimal rx may soon include adding ivabradine to a beta-blocker.

Biventricular pacing (resync) for NYHA 3 or 4 despite medical mx, EF <35% + LBBB

ICDs for 2ndary prevention, primary prevention post-MI, CCF, VT on tape and stimulation.

VADs = mini bypass machines. Weak evidence base. Big stroke risk.
Used for:
1. Buy time before transplant (4-6/52)
2. Bridge to recovery (longer)
3. Long term (not on NHS) if not fit for transplant.

Transplant for the otherwise fit. 20% mortality at 1yr.


Aortic Stenosis: once symptomatic, survival is months and QoL poor.

Transcutaneous AV replacement:
For severe symptomatic AS with excessive surgical risk + survival > 1yr. No good survival data yet.

How to Fix British Undergrad Medical Education? Two suggestions: Copy the US, or use Google Wave.
So, is Google Wave going to fix medical education in this country? Quick answer: of course not. Silly question. Of course, the question is really there to draw you in and make you think. What am I really asking? Something like... "How can we use advanced forms of Social Media to improve the undergraduate medical education experience?"

But why fix? Is it broken? Well, not really broken, but there are issues. And what's this got to do with the US? Well, foremost among the problems I see with current medical undergrads is the lack of time spent on wards / clinical responsibility. When I was working at MUSC in the US, I worked with some excellent medical students who were knowledgeable, personable and interested. Even though I was only there for 3 months, and some of their attachments were even shorter than that, I got to know them well and saw them improve noticeably in that short period.

I even friended some of them on facebook.

Back home, I can't tell you the names of any of the medical students who have been attached to my firm in the last 3 years. Is this because my memory is terminally shoddy? On this occasion, no - it's because I've not seen any student often enough to get a real idea of who they were, what they knew, or what they needed to learn.

Particularly in district general hospitals, students seem to spend one or two half-day sessions on the ward, following the consultant ward round, occasionally getting a grilling, then disappearing off to some unspecified teaching. At first I thought they were all just going home (I distinctly remember using the phrase 'studying in the library' as a euphemism for going straight home, possibly via the pub - it didn't look like this when I was at med school, though), but they really do have timetables with various educational commitments, all over the hospital.

Great that they're being exposed to so much, but dismal that they spend most of the time as hospital tourists, superficially glancing over a variety of procedures, clinics, rounds and lectures, then forgetting all about most of it.

How do we get students to make the most of their time in a hospital?

So option 1: Get them on the ward, get to know them, give them some responsibility (the US model). I like it. But with so much to see and learn, are we depriving them of the huge variety of experience they currently have access to?

Option 2: Use social media. Google Wave is the hot topic right now (and it does look pretty freaking cool), since it mashes up the best of email, blogging, IM, and wikis/online documents. Is this how we'll get students to actually reflect on and synthesize their experiences?

Perhaps a single session in the cath lab or at a falls clinic would become substantially more meaningful if medical students could discuss, reflect, share relevant resources, and contribute to some kind of assessable group output for each type of experience they've had in their hospital attachment? (Because with British medical students, if there's no assessed end product, several would probably say f**k it and go to the pub. I know I would have.) This does raise the question of how you assess collaborative work, of course.

Or why not do both? A bit of the old-school with some of the new wave? Sounds like a good idea to me.