It is Digital Mental Health Week #digiMHweek, and there have been several articles about the use of Artificial Intelligence bots and social media, either for delivering mental health support or for analysing social media posts for signs of suicidal intent and going as far as alerting the authorities – so this is my take on the subject.
Delivering support – here in the UK there are many instances of CBT being delivered via the internet. When I went through First Step, the workbooks weren’t worked through with the practitioner. In a typical session I would complete the standard questionnaire about how I felt, built around inherently negative questions rather than positive statements. There would then be a brief discussion based on my current score, and I would be given a chapter and workbook to look up on the website.
Hardly an innovative use of the internet, but I can see that it can deliver support to a lot of people at a relatively low cost. Interestingly, the website was Australian rather than a UK-developed model, but I guess CBT is universal, so why reinvent the wheel?
Monitoring Wellbeing or Suicidal Intent – a lot more controversial, as pointed out in the excellent article by Mark Brown @MarkOneInFour – mentalhealthtoday-facebook-robots
Since there is no opt-in and, more controversially, no opportunity to opt out, there are shades of the Big Brother approach of George Orwell’s 1984, whereby everybody is monitored 24/7 whenever they are on social media. I realise that we are already monitored 24/7, generally for ads and location- or preference-driven content, but this would involve an AI bot analysing posts and making a decision that could ultimately lead to the emergency services being called.
So my thoughts/issues with this are as follows.
How good is the AI? Previous instances of AI learning bots have been problematic at best. Do you remember Microsoft’s AI bot Tay? It was withdrawn after a couple of days because, through Twitter interactions, it had learnt to swear like a trooper and was posting racist, misogynistic and homophobic remarks. Hopefully things have moved on, but you can be sure there will be instances of false calls, a crying-wolf effect that will undermine the trust, and therefore the response, of the emergency services.
There are many people, myself included, who have blogged about suicidal intent, how they have coped, the strategies they use and so on. These posts are nearly always shared via social media; would an AI bot be able to differentiate between the two styles of post?
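To make that worry concrete, here is a tiny, entirely hypothetical sketch (the keyword list and both posts are invented by me) of the kind of naive keyword screen a monitoring bot might start from. A post reflecting on past suicidal feelings and a post expressing current intent trip exactly the same filter:

```python
# Hypothetical illustration only: a naive keyword screen of the sort a
# monitoring bot might start from. The keyword list and posts are invented.
CRISIS_KEYWORDS = {"suicidal", "end it all", "can't go on", "overdose"}

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any crisis keyword."""
    text = post.lower()
    return any(keyword in text for keyword in CRISIS_KEYWORDS)

recovery_blog = ("Five years ago I was suicidal and thought I couldn't carry on. "
                 "Here are the strategies that got me through it.")
crisis_post = "I feel suicidal tonight and I can't go on."

# Both posts contain the word "suicidal", so both are flagged identically,
# even though only one of them signals current intent.
print(naive_flag(recovery_blog))  # True
print(naive_flag(crisis_post))    # True
```

Anything cleverer than this would need to understand context and tense, which is exactly where the false calls creep in.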
If we were to go down this route, then maybe a starting point could be that:
a – you need to opt in
b – you nominate an online social media buddy from your online friends, who would receive an alert and could make an initial check on the posts
c – there would be a clear escalation path: if you have opted in, then your buddy could be given contact details for the CMHT or Crisis teams (the sketch below illustrates this flow)
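For what it is worth, the shape of that a/b/c path could look something like the following sketch. It is only an illustration with made-up names; the Participant record, the notify() helper and the buddy_concerned flag are all assumptions on my part, and the print statement stands in for whatever alert channel would actually be used.

```python
# Rough sketch of the opt-in escalation path suggested above.
# All names and structures are hypothetical; a real service would need
# clinical input, proper consent records and auditing.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Participant:
    name: str
    opted_in: bool                               # (a) explicit opt-in
    buddy_contact: Optional[str] = None          # (b) nominated social media buddy
    crisis_team_contact: Optional[str] = None    # (c) CMHT / crisis team details

def notify(contact: str, message: str) -> None:
    """Stand-in for whatever alert channel (DM, email, SMS) would actually be used."""
    print(f"ALERT to {contact}: {message}")

def handle_flagged_post(person: Participant, post: str, buddy_concerned: bool) -> None:
    """Route a post the bot has flagged along the opt-in -> buddy -> crisis-team path."""
    if not person.opted_in:
        return  # (a) no opt-in means no monitoring at all
    if person.buddy_contact:
        # (b) the nominated buddy gets the first, human look at the post
        notify(person.buddy_contact, f"Please check in on {person.name}: '{post}'")
    if buddy_concerned and person.crisis_team_contact:
        # (c) escalation happens only if the human buddy confirms the concern
        notify(person.crisis_team_contact, f"Buddy escalation for {person.name}")

# Example: a flagged post goes to the buddy first, not straight to emergency services.
me = Participant("Alex", opted_in=True, buddy_contact="@trusted_friend",
                 crisis_team_contact="local CMHT duty line")
handle_flagged_post(me, "I can't go on tonight", buddy_concerned=False)
```

The point of the design is simply that a human who knows you sits between the bot and the blue lights.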
My other concern is more practical. I can see the benefits of technology-delivered support, especially here in Cumbria where rural isolation is a big issue. However, we also have areas of digital isolation in Cumbria, where broadband speeds are problematic or connections simply don’t exist.
More important, for me, is the issue of depersonalisation and withdrawal. We know that a key indicator of an oncoming episode of mental ill-health is often withdrawal from contact with friends, family and society. By using the technological approach, would we risk reinforcing these behaviours? From my own experience, my first stage of withdrawal is from social media, email and the internet in general.
We have also seen various evidence-based studies indicating that young people are at risk of losing the social and communication skills we use for face-to-face communication.
We say that it is good to talk, but are we in danger of limiting the options to communicate, especially as cuts to funding mean a continuous search to deliver more for less?
I am not a Luddite, and I am not saying that we should not use technology, but in the last few years we have seen a number of high-profile attempts to use technology that have been quickly withdrawn when they did not work as intended.
To make this work, the developers need to get closer to the mental health community at all levels. Twitter, Facebook et al. undoubtedly have bona fide geniuses with great initial ideas based on technology, but those ideas need to be married with the expertise found among potential users and the skilled community that delivers the support at the moment.
Thank you for reading