
Trustworthy by design

How do we approach trustworthiness by design at Lua Health?
September 20, 2023

This is a guest post authored by Nathan (Nate) Kinch who collaborated with the team in the area of trust by design.

Mental health matters. It matters to you. It matters to me. It matters to everyone. Fortunately, given it’s 2023, that can be said without much controversy or pushback.

I could wax poetic about the history of philosophy exploring ‘the good life’, I could discuss the ‘meaning crisis’, or better yet, the ‘meta crisis’. But that’s not what this post is about. This is about sharing deeply practical work that has the potential to help millions of people live healthier, more emotionally adept lives. This is about enhancing the quality of time we spend at work, which may then translate into other aspects of life.

Think of this as a super concise ‘case study’ that helps highlight why we chose to collaborate.

Onwards!

The problem and opportunity

Half of Millennials and 75% of Gen Zers have left jobs citing mental health as ‘the’ reason, both voluntarily and involuntarily [1].

83% of US workers currently suffer from work-related stress [2].

These problems are largely avoidable (I recognise this is a can of worms. I’ll stake the claim and move on for now). As a result, companies are making investments in support of employee well-being.

One of the most effective ways to do this is through well-being benefits. These benefits are often scientifically backed methods, such as mindfulness courses, resilience training or cognitive behavioural therapy (CBT). Unfortunately, there is a massive gap between those who could benefit from them (40% of the workforce) and those who actually use them (3% of the workforce).

Recognising this, a team of researchers from the NLP unit at the University of Galway has developed Lua. Lua is a digital well-being tool designed to help employees understand whether, and what kind of, support they might need to prevent or treat mental ill health. The technology has been peer reviewed and tested, with promising results [3,4].
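(For context: the PHQ-4 referenced in [3] is a standard four-item screening instrument, two anxiety items plus two depression items, each scored 0 to 3. The little Python sketch below shows the instrument’s published scoring convention only; it is purely illustrative and says nothing about how Lua’s passive, NLP-based approach works under the hood.)

```python
# Standard PHQ-4 scoring: four items rated 0-3 ("not at all" to "nearly every day").
# Items 1-2 form the anxiety subscale (GAD-2), items 3-4 the depression subscale (PHQ-2).
# A subscale score of 3 or more is the conventional screening threshold.
# Illustrative only; this is the published instrument, not Lua's internal model.

def score_phq4(item_scores: list[int]) -> dict:
    if len(item_scores) != 4 or any(s not in range(4) for s in item_scores):
        raise ValueError("PHQ-4 expects four item scores, each between 0 and 3")
    anxiety = sum(item_scores[:2])
    depression = sum(item_scores[2:])
    return {
        "total": anxiety + depression,        # ranges from 0 to 12
        "anxiety_flag": anxiety >= 3,         # GAD-2 screening threshold
        "depression_flag": depression >= 3,   # PHQ-2 screening threshold
    }

print(score_phq4([1, 2, 0, 1]))
# {'total': 4, 'anxiety_flag': True, 'depression_flag': False}
```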

In short, we reckon Lua has the potential to massively benefit workplaces - and the people within them - all around the world.

Preamble: A little about trust

Trust is the belief in another's trustworthiness*. Although it's complex, to keep things practical you might like to think of trust as a 'mental process' and/or ‘mental state’. This process encourages one party to assess what they know, what remains unknown ('risk' or the spectrum of uncertainty) and the potential benefits of a decision and action. In short, this mental process asks, "do I have good reason to believe in the trustworthiness of this other party?"

Many factors influence how this assessment is made: one’s upbringing, the context of the relationship, the information that is or isn’t available, the perception of uncertainty, the balance of possible consequences (positive versus negative), and even ‘harder’ factors such as one’s DNA.

A big part of this mental process is assessing the intent of the other party. Another is clarifying the ethics and integrity of the other party. Yet another is exploring the competence of the other party as it relates to the likelihood that they will consistently deliver on their value promises.

If the assessment is favourable, the trust state or trust judgment is likely to be positive. This enables one party to positively engage with the other party, even without a full picture of the current reality or potential futures (this calls out an important difference between trust and assurance, and helps highlight the primary role trust plays sociologically: enabling positive action when verification or full assurance isn’t possible).

An important note before moving on: This idea that trust is the belief in another's trustworthiness, and that this belief is driven by largely 'deliberate' processes, is an oversimplified picture. It's very likely that trust is heavily influenced by far more 'automatic' brain processes. This is demonstrated especially well in certain neuroscience experiments, such as this 2014 work from NYU researchers.

In my work I often use a model from TIGTech.org that refers to 7 qualities of trustworthiness. These are the qualities that seem to most significantly impact the belief one party has in another party’s trustworthiness.

Given this, and with much context left unexplored, it’s best to think of how you can design products, services and organisations that are worthy of trust. Trust is something given to you (you can at best partially influence this). Being trustworthy is something you aspire to and work deliberately to make a reality (you can quite directly influence this).

A simple rule of thumb: trustworthiness > trust, so focus on the former.

The project

I was very kindly hired by Fionn and Mihael (the folks leading this project from the NLP Unit at The University of Galway). This, as is often the case, came off the back of a recommendation (from a senior team member at what was formerly Telefonica Alpha Health, now Koa Health).

I was asked to lead a project that would help enhance the belief that Lua's end users would have in the trustworthiness of the chatbot itself. It’s noteworthy that this has to be situated in a much broader context, which forces consideration of factors such as the existing relationship between a group of people employed by an organisation and the organisation itself (you can think of this as the belief employees have in the trustworthiness of their employer; you might be surprised by how often this belief is quite negative…), amongst other things.

This meant:

  1. Familiarising myself with the two studies the team had already published
  2. Re-engaging with the literature on human-chatbot interaction (I’ve been involved in a number of chatbot projects in the past, but this was the first in quite some time), exploring topics such as tone, the extent to which the chatbot is anthropomorphised, the speed with which such relationships can develop, and so on
  3. Looking directly to the peer-reviewed literature for similar services (digital well-being tools, chat-based mental health services, etc.) that we might learn from

This helped us frame some hypotheses that informed our approach to the conversational structure, including the decision tree or ‘conditional logic’.
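To make that ‘conditional logic’ idea a little more concrete, here’s a minimal, purely illustrative Python sketch of a branching conversational flow. None of the node names, messages or branches below come from Lua’s actual conversation design; they’re stand-ins to show the structure.

```python
# A hypothetical, branching conversational flow expressed as a simple lookup table.
# Each node has a message and a mapping from user choices to the next node.
# Node names and copy are illustrative only; they are not Lua's real flow.

FLOW = {
    "welcome": {
        "message": "Hi, I'm here to check in on how work is feeling lately.",
        "options": {"Sure, let's go": "check_in", "Not right now": "opt_out"},
    },
    "check_in": {
        "message": "Over the past week, how often have you felt overwhelmed?",
        "options": {"Rarely": "positive_close", "Often": "offer_support"},
    },
    "offer_support": {
        "message": "Would you like to see the support options your workplace provides?",
        "options": {"Yes please": "show_benefits", "Maybe later": "gentle_close"},
    },
    "opt_out": {"message": "No problem. You can come back any time.", "options": {}},
    "positive_close": {"message": "Great to hear. Check in again whenever you like.", "options": {}},
    "show_benefits": {"message": "Here are the well-being benefits available to you...", "options": {}},
    "gentle_close": {"message": "Okay. I'll be here if you change your mind.", "options": {}},
}


def run(flow, start="welcome"):
    """Walk the flow from `start`, asking the user to pick a branch until a leaf node is reached."""
    node = flow[start]
    while node["options"]:
        print(node["message"])
        choice = input(f"Choose one of {list(node['options'])}: ").strip()
        if choice not in node["options"]:
            print("Sorry, I didn't catch that.")
            continue
        node = flow[node["options"][choice]]
    print(node["message"])


if __name__ == "__main__":
    run(FLOW)
```

The real flow is far richer than this, but the principle is the same: each response determines which node the conversation moves to next.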

We then began a process of building upon existing work and attempting to define a conversational structure that thoughtfully embedded the most prominent insights from our literature review.

We combined this with a very clear approach to ‘designing’ for the qualities of trustworthiness and brought it together with our Data Trust by Design practice (an approach to product and service design that considers the trustworthiness of each and every macro and micro interaction; it is principles-based and practices-driven. I’ve published about this extensively before, so will skip over the specifics for now).

I then worked with Bjorn from Muteo (Bjorn was our Lead Designer when I was CEO of Greater Than X) to begin designing some of the more visual disclosures. This drew on our Better Disclosure Toolkit (here’s a useful reference point) and involved evolving the conversational flows, defining various interactions and producing layered visual disclosures that could help enhance comprehension of certain activities.

Eventually we ended up with something like this (we were collaborating asynchronously using Figma).


We explored numerous design patterns. We explored different sentence structures and conversational branches. We interrogated each and every word. We conducted a readability assessment (targeting a reading level below 7th grade). We did all of the stuff you might expect, and maybe a little more.
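As a rough illustration of that readability check, here’s a small Python sketch using the open-source textstat package. The sample copy and the 7th-grade threshold below are illustrative assumptions, not Lua’s actual strings:

```python
# Rough sketch of a readability gate using the textstat package.
# Any copy that scores at or above the target grade gets flagged for revision.
import textstat

copy_blocks = {
    "welcome": "Hi, I'm Lua. I can help you check in on how you're feeling at work.",
    "data_notice": "Your answers stay private. You choose what, if anything, is shared.",
}

TARGET_GRADE = 7.0  # aim for below a 7th-grade reading level

for name, text in copy_blocks.items():
    grade = textstat.flesch_kincaid_grade(text)
    verdict = "OK" if grade < TARGET_GRADE else "REVISE"
    print(f"{name}: grade {grade:.1f} -> {verdict}")
```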

We then recruited a small cohort of participants and ran a hybrid user research program.

We drew inspiration from a lot of our past work on Social Preferability Experiments to do this.


Basically, each research participant interacted with a prototype that felt real, engaging with it unimpeded. This unimpeded usability testing was then followed up with a series of structured interview questions. These questions balanced a simple Likert-scale scoring method with (word-count constrained) open-ended answers.
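For illustration only, here’s a minimal sketch of how responses in that format could be tallied: Likert scores aggregated per question, plus a simple word-count guard on the open-ended answers. The question wording, scores and 50-word limit are all hypothetical:

```python
# Tally Likert scores per question and flag over-length open-ended comments.
# Field names, example responses and the word limit are illustrative assumptions.
from collections import defaultdict
from statistics import mean

WORD_LIMIT = 50  # assumed cap on open-ended answers

responses = [
    {"q": "I felt in control of my data", "score": 4, "comment": "The consent step was clear."},
    {"q": "I felt in control of my data", "score": 5, "comment": "Liked that I could say no."},
    {"q": "Lua's tone felt appropriate", "score": 3, "comment": "A little formal at times."},
]

scores = defaultdict(list)
for r in responses:
    if len(r["comment"].split()) > WORD_LIMIT:
        print(f"Comment over {WORD_LIMIT} words, flag for review: {r['q']}")
    scores[r["q"]].append(r["score"])

for question, values in scores.items():
    print(f"{question}: mean {mean(values):.1f} (n={len(values)})")
```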

We did all of this online due to constraints imposed by the pandemic.

Through this approach we developed a reasonable understanding of how people were interacting with Lua. We learned about their expectations, desires, frustrations and a few delightful surprises.

We explored specific questions about the propensity participants would have to explicitly consent to certain activities that add new layers of value to Lua's core functions (note that I offered no specific advice as to the lawful basis that would be relied upon for Lua’s various data processing activities. That advice will come from a professional and result from a formal Data Protection Impact Assessment).

We learned where certain experience elements enhanced or eroded the belief that research participants had in Lua's trustworthiness.

This informed a series of iterations, including some of the work we’ve done on the ‘program’ that supports the adoption and inclusive integration of Lua within the workplace (still an ongoing activity).

What’s next?

Well, I’m going to stay on board as an advisor in the hope that we can effectively implement Lua into a number of workplaces.



References

  1. Greenwood, K., & Anas, J. (2021). "It's a New Era for Mental Health at Work." Harvard Business Review.
  2. The American Institute of Stress. "Workplace Stress." https://www.stress.org/workplace-stress
  3. Delahunty, F., Johansson, R., & Arcan, M. (2019). "Passive Diagnosis incorporating the PHQ-4 for Depression and Anxiety." Proceedings of the Fourth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task.
  4. Delahunty, F., Wood, I. D., & Arcan, M. (2018). "First Insights on a Passive Major Depressive Disorder Prediction System with Incorporated Conversational Chatbot." In AICS (pp. 327-338).


If you're interested in learning more about trust, you can subscribe to Nate's newsletter, "Trustworthy by design".

Written by: 

Nathan (Nate) Kinch

Sociotechnology Ethicist, Social Entrepreneur, Action Researcher, Speaker and Writer
