Born to Be…
Integrating Assessments into One’s Being – which Assessments speak the truth?
After attending conferences on Industrial Organisational Psychology, I often feel overwhelmed by the magnitude of information I was unable to take in, simply because there still appears to be so much to learn. That being said, this year my interest alternately dwindled and piqued: some of the information felt new, and other information felt…well…old. I also try to remain active on research platforms so that I can grasp new ventures into research frontiers before they become too complex for me to understand off the bat.
One such presentation that I felt really hit the mark was related to the development of an assessment tool – more specifically, the statistical techniques implemented to build this new tool. Some of the research I’ve been following (researchgate.net is an incredibly good website for stalking your favourite researchers and topics of interest) has been surfacing old articles that I have rarely investigated, mainly because they are on a subject that seems to make little sense (or draw little notice) to those of a qualitative inclination. Research statistics seems to be taking quite a knock to the ego now that more popular psychology techniques are bringing out mass-market assessments that proclaim to “do it all”.
What do these pop-psych tests do to the realm of IOP? Unquestionably, they erode confidence in any other assessment that is not as widely marketed and branded as they are. Above and beyond that, however, I find that they often produce outcomes synonymous with horoscopes: anyone can read the results and find some semblance of relatable truth. I often get a kick out of reading my horoscope, simply because I can almost always justify its contents against my daily life – and that is definitely a writing trait I would love to incorporate into those long psychometric integrations that seemingly contradict each other. Jibes aside, these accumulating pop-psych tests do seem to have some good coming out of them: they are creating exposure and a shift in the “common man’s” mindset. In generating more business for themselves, they are in turn spreading the idea that psychometric tools are useful and multi-purpose, thereby opening the door for psychometric assessments with depth and statistically complex foundations.
Now, before the companies producing these tools jump down my throat about the generalisation that pop-psych has no hard facts supporting it, I will concede that there is emphasis placed on “formulae”. A part of the conference included a coaching panel discussion about the approach to coaching, and how IOP fits in as the masses slowly but surely appropriate this technique as the be-all and end-all intervention. What they said in that session is that there are “rhymes” that are producing “reason” – particular formulae, so to speak, that can be applied notwithstanding context. And somehow these formulaic approaches are creating outcomes. In the same way, pop-psych tools also apply a formula and produce an outcome that can be leveraged, if you buy in to the ingredients that make it.
So what’s the deal? Why is this an issue? If it’s producing an outcome, then surely we should be happy about this? My argument relates to the difference between a methodology and a formula. Simply put, a methodology allows for variables outside the strict delimitations of its guidelines. In fact, a good methodology advocates for these interruptions and changes. In this way, a methodology can create an almost limitless number of outcomes, circumstantial to the situation at hand, which in turn accommodates the specifics of organisations’ differences. A formula uses only the ingredients necessary for its intentions. You will always get the outcomes you are looking for using a formula if you follow the recipe and put in what it wants to get out. It does not necessarily consider or factor in the extraneous variables.
This creates the difficulty that psychologists in the workplace often experience with pop-psych tools. While they look flashy, have a hard-selling factor, and can be used at almost any level of the organisation…how true are they in representing what we do not want to see? No one wants to read their horoscope and see how horrible they are…they want to see the good stuff. This puts extra emphasis on psychologists who use pop-psych tools to ensure that they know how to tell the tale that the formula represents.
The strength of a statistically well-researched and validated tool is that the user should not have to expend as much brain-power to make sense of its results, or to stretch them so that they holistically embrace a person’s being. Instead, it should more easily pinpoint areas. After all, an assessment is merely you telling the story of yourself using the words the researcher provides – and then representing this in a fashion that can be used more meaningfully going forward. It builds the house using the bricks, but allows the person to choose how many rooms and what colour. A good assessment might even have windows and doors included – meaning that its applicability is more broad-based whilst not losing the strength of its validity across contexts.
Every person wants to feel special. Every person wants to feel unique. While we hate the idea that we are constrained to the confines of our genes, we also want to feel that purposeful belonging – that we were born to be something. While an assessment should be used to help us describe and differentiate one person from another, it should also be used to make a person feel good about themselves. That they are a thing. That they have a level of cognition. That they have a personality that is distinctly different from others’. That they have emotional reasoning and intelligence that gives rise to particular kinds of interactions. That they have a risk profile that allows them to be human. That they have strengths that they can embed within their work to realise better outcomes. In essence, that they were born to be something and have a purpose.
In this way, I feel that pop-psych tools are maybe a step above the rest. In aiming to sell themselves, they have answered a deep-set insecurity that plagues humankind. They have managed to almost ensure a happy outcome. Instead of “good” and “bad” traits, the likes of which organisations are always trying to discern and discard, pop-psych tools have opened up a plethora of conversations about how awesome it is to be who you are. They have managed to create a platform where a person is not scared to answer questions about themselves. They have encouraged some level of self-reflection. They have started the ball rolling in inner discourse about “who am I?”, “why am I here?”, and “where to after this?”.
I love statistical verification. I love seeing the numbers play out into a myriad of conclusions, producing something and guiding our human intuition towards future predictability. And while this is fantastic, it is not always a selling point when the number of statistical verification processes exponentially increases the price of the tool. Nonetheless, I often put emphasis in training sessions on understanding and knowing what the numbers mean – because this helps professionals apply the output of the tool appropriately, given the variables considered in the tool’s foundation. That being said… I also love feeling special. I also love feeling as if, for once, a low number does not diminish me as a being. I also love feeling, essentially, no matter what the outcome, that I was born to be…
* * *
Wonder White Rabbit hopping off
* * *