make a chatbot about it
– So yeah, the problem is that there's all these services, but it's completely opaque from the outside. They get a referral but then it's really unclear what's going to happen next, or even what the options are.
– So can't the providers explain this to people?
– Sure, but, like, you only see them once you've gotten through the waiting list. And in any case, they only really have a lot of visibility into their part of the system... and they're under a lot of pressure to see people quickly, so it's hard for them to get explain all these details.
– Ah, I see - like we've talked about before, I guess this comes down to slack - giving people enough time and space to do things outside of what their described job is, to be less "efficient" but to solve the problems that they see, even if it's not something that's explicitly rewarded.
– Yeah. But I have some space to think about these things, I'm a little outside the system because I'm studying it.
– So what are you thinking about now?
– Well, I think there's something about peer support groups - something about embedding this knowledge in a community. But to know about the peer support groups, you need to be signposted to them
– It's a catch-22.
– Yeah, exactly.
– What if – and I know they wouldn't go for this, I know it's not practical, but I do think it would solve the problem – what if you just put all the people who are referred into WhatsApp groups with each other? Or, maybe only the people who are referred in a month, or in a week, I don't know how many people you get coming through. I mean, obviously with permission. And you don't run the groups, you just let them talk to each other. I reckon they'd start asking each other questions and then they'd start demanding answers from you, and then you'd solve the signposting issue pretty quickly
– I mean, even just the safeguarding risks... there's no way...
– I know, I know. And of course doing this would just place more stress on the system, when they do start demanding things.
– Yeah. Anyway, right now I'm trying to understand what these services are, and how people move through. And then I'm trying to make some resources. We did get this one jpeg approved, which is a kind of flowchart that shows some of the services – so there's some positive movement.
– Just one jpeg?
– Yeah, like I say, it's a lot of work to get things approved. But! We have been talking about making a chatbot for this.
– But I thought you said none of this was particularly documented? How will the chatbot know the answers?
– We do have all these policy documents, like what each service is supposed to be doing. They're written in technical language, they're like 22 pages long. They're not the right kind of thing to put out for service users.
– Well, if I was referred, I'd read them. Or, like, at least skim them.
– Sure, but... anyway, we're thinking we'll feed those into the chatbot, and then it can explain things to people. It won't be perfect, but at least it's something.
– It just feels silly to have to go through a chatbot for this, they're so inaccurate. I mean, if you just released the documents, all it would take would be one motivated user to go through the boring documents and make a cranky blog and then you'd have some summaries and explanations out there.
– No, we'd never get approval for that. These are internal documents!
– You know that people can get chatbots to regurgitate the documents they've been fed? Putting them in a chatbot is pretty much the same as just releasing the documents directly.
– Oh, I guess so. But, still, I think we'd be fine to get approval to put a chatbot out, there's a lot of enthusiasm for AI right now.
– So, what you're saying is that the main function of the chatbot is to provide a kind of excuse for getting things through approvals.
– Yeah
– I guess that's a pretty useful function.
Member discussion