
The hard task of assessing “soft” skills: Notes from a regional seminar

On the 12th of September, the Western Cape SAAHE regional committee hosted a half-day seminar on assessing “soft” skills. Forty participants spent a few hours discussing different approaches to assessing things like professionalism, interpersonal skills and graduate attributes. This post presents the notes taken during the seminar discussion, with some additional information added after the fact, together with links to external sources. We share it here in the hope that others may find it useful.


The point was made that these should perhaps not be called soft skills, given that “soft” has the unintended connotation of being “unimportant” or “easy”. Should they rather be referred to as: Complex skills? Intrinsic roles (e.g. as described in competency frameworks)? Graduate attributes? Critical cross-field outcomes?


Tools that can be used in the context of a course / module / block

  • Assessment of Physiotherapy Practice (APP) tool
    • This is a Physio-specific tool that is used in the clinical practice module at SU, addressing the development and assessment of soft skills. Students tend to score low when they start using it, then get better over time. Principles to think about here are progress testing and the testing effect (for learning theory and for learning skills). For a nice systematic review of a range of clinical practice assessment tools, including the APP, see O’Connor, A., McGarr, O., Cantillon, P., McCurtin, A., & Clifford, A. (2017). Clinical Performance Assessment Tools in Physiotherapy Practice Education: A Systematic Review.
  • Feedback from patients in Linguistics
    • Has the added advantage of boosting students’ confidence but we also need to bear in mind that patients may need to be prepared in advance. In the experience of some, patients tend to be overly complimentary about student performance (e.g. “everything was perfect”).
    • Scope of the undertaking: no indication of how many students
  • Longitudinal matrix
    • The UWC Occupational Therapy department developed a matrix spanning the four years of the undergraduate programme, considering the following: graduate attributes and core competencies; weighting across the four years; and Bloom’s taxonomy as a guide for levels, e.g. knowing about communication in the first year, with application and integration in the fourth year (a minimal sketch of such a matrix follows this list).
    • Scope of the undertaking: About 50 per year level
  • Module handovers
    • At the end of every term or semester, lecturers meet to hand over what was covered and what still needs to be covered in upcoming modules, to bridge gaps in learning, clarify module expectations, and flag what students struggle with. Individual students who are struggling are raised in regular staff meetings.
    • Scope of the undertaking: 50 students
  • Create opportunities for students to develop but then don’t actually assess
    • It is questionable to what degree these things can be taught. These opportunities might consist solely of: 1) do something; 2) get feedback on what you did; 3) reflect.
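
To make the longitudinal matrix idea above concrete, here is a minimal sketch in Python of how such a structure could be represented. The competency names, Bloom-informed level descriptions and year weightings are illustrative assumptions, not the actual UWC matrix.

```python
# A minimal sketch of a longitudinal competency matrix: each graduate
# attribute is mapped to a Bloom-informed level and a weight per year
# of study. All names, levels and weights are illustrative assumptions.
MATRIX = {
    "communication": {
        1: ("know about", 0.10),          # (expected level, weight)
        2: ("apply in simulation", 0.20),
        3: ("apply with patients", 0.30),
        4: ("integrate across settings", 0.40),
    },
    "professionalism": {
        1: ("describe expectations", 0.10),
        2: ("demonstrate in class", 0.20),
        3: ("demonstrate on placement", 0.30),
        4: ("model for juniors", 0.40),
    },
}

def expected_level(competency: str, year: int) -> str:
    """Look up what a student should demonstrate at a given year level."""
    level, _weight = MATRIX[competency][year]
    return level

print(expected_level("communication", 1))  # know about
print(expected_level("communication", 4))  # integrate across settings
```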

Practicalities of deploying approaches

  • Designing and managing a distributed curriculum
    • Nursing at UWC has interpersonal skills sessions for first-year students, e.g. death and dying; washing a baby – observation of how the student washes the baby, makes eye contact with the baby, etc.
      • Sessions run three times a week, with four scenarios
      • Simulated patients are used, facilitated by mentors
      • This fizzles out in later years
      • Scope of the undertaking: over 1000 students in the programme; 300–350 in the first year
    • UCT has two courses that run in the first year and incorporate soft skills: Becoming a Professional and Becoming a Health Professional (all disciplines are working together in these courses)
      • A challenge is whether the principles addressed here are followed through in later years, particularly in the clinical years
      • In addition, there is some evidence that isolated modules like this aren’t very effective at developing what is essentially an integrated skill that must be used across domains
  • Understanding fully what you are trying to help students to learn
    • For example, are communication skills about mastering isiXhosa as a language, or about understanding culturally appropriate communication, e.g. the role of eye contact and body position in communication?
      We need to understand the role of cultural humility (see here and here for more information) in the context of understanding and assessing soft skills – ensuring that students recognise that difference of any sort has an impact, and that they moderate themselves and their behaviour accordingly.
      • Role modelling is very important in this regard
      • Are we assessing things like “punctuality” and “dress code” under the broad name of “professionalism”?
  • Symbolism is important – soft skills should be communicated as being integral rather than peripheral
    • Soft skills shouldn’t be “put on the side as a different mark” on a rubric
    • All aspects of the curriculum should be integrated; this signals that clinical expertise is necessary but not sufficient
    • The assessment system should be designed in such a way that a student cannot pass on clinical reasoning alone while having poor communication skills or empathy (see below)
    • Using soft skills tools that do not contribute to decision making can be a two-edged sword: if Nursing students must log 4000 clinical hours, that requirement itself shapes what students attend to; the student focuses so much on the technique that they forget there is a patient in the bed, and rote behaviours are emphasised, rather than soft skills, in order to demonstrate the technique (think here about the quote adapted from Rees & Knight, 2007: “Do we want [health care workers] who are professional, or will we settle for [health care workers] who can act in a professional manner?”)
    • It is important to consider the instrument and purpose of the assessment; if the student doesn’t understand that communication is part of the manual technique, they will not focus on or bother with communication skills
  • We could consider using longitudinally collected narrative data obtained from clinicians, patients, peers, etc.
    • We currently compartmentalise information about students because we have modular systems within universities and institutions
      There is little or no longitudinal collation and consideration of data
    • Consider narrative, qualitative approaches to collecting assessment data: maybe each clinician writes a short paragraph about the student’s behaviour, e.g. “the student rolled their eyes a bit every time I spoke to them” – this says something about a student’s professionalism without having to assign a number to a definition of what “professional” means.
      • If a concern is raised once in four years, or if three people out of 25 have a problem with an aspect of a student’s behaviour, maybe it’s not an issue; but if 18 out of 25 note issues and make qualitative, open-ended claims about the student’s behaviour, that is a different story (a minimal sketch of this kind of collation follows this list)
    • An example of a Physiotherapy school in New Zealand collating narrative feedback alongside quantitative data was discussed in some depth in a podcast you can listen to here.
    • The individual preceptor or tutor isn’t burdened with making pass-fail decisions about a student – all they have to do is write a short narrative where they don’t even have to define the behaviour i.e. is this about “communication” or “professionalism”? All they need to do is describe an observed behaviour.
      • lowering the stakes for the preceptors
    • This helps us to evaluate the aggregate picture over time rather than make decisions based on single snapshots of performance
    • A programmatic approach to assessment suggests applying the criteria used to illustrate rigour in qualitative research: trustworthiness, credibility, dependability and confirmability
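
As a rough illustration of how longitudinal narrative data might be collated, here is a minimal sketch in Python. The `Observation` structure, the `concern` flag and the 3-of-25 vs 18-of-25 heuristic are illustrative assumptions, not a validated instrument.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Observation:
    student_id: str
    observer_id: str
    narrative: str  # e.g. "the student rolled their eyes every time I spoke"
    concern: bool   # whether the observer flags this note as a concern

def collate(observations: list[Observation]) -> dict[str, dict]:
    """Group observations by student and count how many distinct observers
    raised a concern, so a panel reviews patterns over time rather than
    single snapshots."""
    by_student = defaultdict(list)
    for obs in observations:
        by_student[obs.student_id].append(obs)

    summary = {}
    for student, notes in by_student.items():
        observers = {n.observer_id for n in notes}
        concerned = {n.observer_id for n in notes if n.concern}
        summary[student] = {
            "observers": len(observers),
            "observers_with_concerns": len(concerned),
            "narratives": [n.narrative for n in notes if n.concern],
        }
    return summary

# Heuristic from the discussion: 3 concerned observers out of 25 may be
# noise; 18 out of 25 is a pattern worth a panel's attention.
```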

How do we use the information?

  • Only make decisions with high-resolution data
    • We can’t make defensible decisions about soft skills using tiny amounts of data collected on a single clinical placement
    • Cees van der Vleuten uses the idea of the pixel resolution of the “picture” we have of a student. He suggests that we gather and collate small numbers of pixels over time to generate a high-resolution picture of what is going on.
  • This takes pressure off any one clinician as being “responsible for” a student’s failure
  • Helps avoid the problem of “failure to fail”
    • “If I have to make a defensible decision, I would rather make no decision.” Then the student reaches the final year and everyone agrees the student shouldn’t be there
  • Each clinician / tutor / preceptor just contributes their 3 or 7 or 4 pixels worth of data towards building the composite picture, without the pressure of pass-fail decision making attached to the data collection
  • These small pixels don’t form the basis for any single decision but are rather summed over the duration of the programme; so, rather than “professionalism” counting for 5% of a single assessment task, you have many tasks where those marks are added up over time.
  • Separate data collection from decision making
    • This kind of principle could have systemic implications
    • For all of the principles we mention, there are implications for programme design, i.e. it’s hard to fit these ideas into our traditional curriculum models
  • Use collective decision making
    • Anonymise the observations; give them to a panel of five clinicians to make a judgement about competent / not competent, e.g. is this someone you would hire? No? Not competent.
    • Is this defensible?
      • If you have 60–70 observations from 23 preceptors over 2 years, it will be (again, consider the qualitative criteria for rigour mentioned above)
      • Once such a system is in place, there are bound to be legal challenges, but we cannot let that dictate what is sound practice; our current system faces legal challenges anyway – it is not as though it is perfect
    • Avoid spending too much time on students who are clear fails or clear passes (i.e. spending hours “agreeing vigorously”); rather focus energy on borderline students, e.g. those within 5%, 1 SEM (standard error of measurement), or some such margin either side of the cut score (a minimal sketch of this triage follows this list)
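
As a concrete sketch of the pixel metaphor and the borderline-band triage above, here is a minimal Python example. The 0–100 scale, the cut score of 50 and the band of 5 points (standing in for “5% or 1 SEM”) are illustrative assumptions, as is the use of a simple mean for aggregation.

```python
from statistics import mean

def composite_score(task_scores: list[float]) -> float:
    """Aggregate the many small 'pixels' of data into one high-resolution
    picture (a simple mean here; weighting is a local design choice)."""
    return mean(task_scores)

def triage(score: float, cut: float = 50.0, band: float = 5.0) -> str:
    """Spend panel energy on borderline students, not on clear passes
    or clear fails."""
    if score >= cut + band:
        return "clear pass"
    if score <= cut - band:
        return "clear fail"
    return "borderline: refer to panel"

print(triage(composite_score([62, 58, 71, 66])))  # clear pass
print(triage(composite_score([48, 52, 51, 49])))  # borderline: refer to panel
```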

Incorporating this into assessment programmes and bureaucratic systems

  • This should be incorporated into rules of progression or for the award of a qualification rather than used at module level
  • We need to circumvent the tyranny of an assessment bureaucracy that demands a numerical mark to enter against a credit-bearing course code on the university’s computer system
  • Options to consider
    • Make a “competent” judgement (don’t use numerical scores) for each of a series of soft skills a prerequisite / DP requirement for access to the next year of study or to the exit exams
    • make a “competent” judgement for each of a series of soft skills a subminimum requirement for the award of a qualification (irrespective of scores in other domains)
    • This would ensure that there are meaningful consequences for not being competent in the soft skills the programme values, and that a student who scores 80% overall but is very unprofessional or a very poor communicator does not qualify
    • Convert letter grades (what the assessor gives, e.g. A = Excellent, B = Good, C = Borderline, D = Weak) to number grades (what the system needs captured, e.g. 80%, 70%, 60%, etc.); a minimal sketch of such a mapping follows this list
  • This would require some planning about what kind of remediation to put in place
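
As a sketch of the letter-to-number conversion option above, here is a minimal mapping in Python. The D = 50% value is an assumption extrapolated from the 80/70/60 pattern in the notes; actual values would follow institutional rules.

```python
# Map the assessor's qualitative judgement to the numeric mark the
# university system demands. D = 50 is an assumed value (not given in
# the seminar notes); the others follow the 80/70/60 example above.
LETTER_TO_MARK = {
    "A": 80,  # Excellent
    "B": 70,  # Good
    "C": 60,  # Borderline
    "D": 50,  # Weak (assumed)
}

def to_system_mark(letter_grade: str) -> int:
    """Translate a letter judgement into the numeric field the bureaucracy
    requires, without asking assessors to invent numbers themselves."""
    try:
        return LETTER_TO_MARK[letter_grade.strip().upper()]
    except KeyError:
        raise ValueError(f"Unknown grade: {letter_grade!r}")

print(to_system_mark("B"))  # 70
```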
