Let’s talk about digital identity with Rachel O’Connell, Founder & CEO, and Nicky Hickman, Product Innovation Manager, at TrustElevate.
In episode 70, Nicky Hickman and Rachel O’Connell of TrustElevate discuss children’s digital identity – why this is so important, what challenges are currently being faced and what solutions need to be put in place to help protect children within the digital landscape.
[Transcript below]
“There is a clear and present need for regulatory drivers to enhance children’s safety online to ensure the companies are held accountable and are transparent in terms of the measures that they take to keep kids safe online. And critical and central to that is digital identity.”
Nicky Hickman is a freelance product & innovation manager based in the UK with international experience in APAC, Europe and Africa. With a background in telecoms, she has worked in digital identity and personal data markets for ~20 years, researching, designing and delivering multi-channel, large-scale CIAM services and strategies for clients including Vodafone, O2, GSMA, Barclays, Sky and Verizon. In the last 5 years she has been a contributor to open-source communities at the Sovrin Foundation, where she served as a Trustee and Chair of the Identity for All Council, and at the Trust over IP Foundation, where she is a co-chair of the Human Experience Working Group. Nicky is also an active researcher and is an industry contributor and guest lecturer at the University of Jyväskylä’s Blockchain & Digital Identity Start-Up Lab in Finland.
Find Nicky on LinkedIn.
Dr Rachel O’Connell is a leading expert on online child safety. Her PhD examined paedophile activity online and the implications for investigative strategies. Rachel set up the first UK Internet safety centre in 2000 and was Chief Security Officer for Bebo, a social networking platform, from 2006 to 2010. Rachel is the founder of TrustElevate and author of a technical standard published by the British Standards Institution that describes how to verify the age band a person belongs to in a privacy-preserving, secure manner.
Find Rachel on LinkedIn.
We’ll be continuing this conversation on Twitter using #LTADI – join us @ubisecure!
Go to our YouTube channel to watch the video for this episode.
Podcast transcript
Let’s Talk About Digital Identity, the podcast connecting identity and business. I am your host, Oscar Santolalla.
Oscar Santolalla: Hello and welcome to this new episode. Today, we’ll talk about enabling and protecting children’s digital identity. And for that, we have two guests who are working together on this very important issue.
Let me introduce my first guest, it’s Nicky Hickman. She is a Freelance Product and Innovation Manager based in the UK with international experience in the Asia Pacific, Europe and Africa. With a background in telecoms, she has worked with digital identity and personal data markets for 20 years researching, designing and delivering multichannel, large-scale CIAM services and strategies for clients including Vodafone, O2, the GSMA, Barclays, Sky, and Verizon.
All of Nicky’s recent work focuses on using digital identity to promote socio-economic inclusion and impact against the United Nations Sustainable Development Goals, with an underlying commercial business model that is sustainable for the long term. For the last year, Nicky has focused on youth and child identity through work with the UNICEF YOMA programme, and with TrustElevate as a Product and Innovation Manager.
Our second guest is Dr. Rachel O’Connell. She is a leading expert on online child safety. Her PhD examined paedophile activity online and the implications for investigative strategies. Rachel set up the first UK Internet Safety Centre in 2000. She was Chief Security Officer for Bebo, a social networking platform, between 2006 and 2010. Rachel is the founder of TrustElevate. She’s the author of a technical standard published by the British Standards Institution that describes how to verify the age band a person belongs to, in a privacy-preserving and secure manner.
Hello, Rachel. Hello, Nicky, welcome.
Dr. Rachel O’Connell: Hello.
Nicky Hickman: Hello, Oscar.
Oscar: It’s a pleasure being with you. We definitely want to hear about this, and it’s the first time we’re touching specifically on the topic of children’s digital identity. So, let’s start talking about digital identity. First, I would like to hear, as always, a bit more about our guests, especially what your journey into this world of digital identity has been.
Rachel: Absolutely. So, Oscar, for me, as you mentioned, my PhD was analysing paedophile activity on the internet, and that was way back in the late 1990s. And since then, I’ve just been a firm advocate of the very simple premise that platforms need to know the age bands of their users so that they can better protect them. And that’s been the driving force throughout my career, as I set up the first UK Internet Safety Centre and ran 19 of them across Europe. We developed programmes of education to keep kids safe online and to equip parents, teachers, social workers, anybody responsible for the care of children and young people. We wanted to equip them with the tools, the knowledge, and the skills to navigate the internet safely.
But another focus was also looking at how we could leverage the technology intelligently to enhance children’s safety online. And I’ve worked in, and chaired, a number of cross-industry groups. For example, we developed the Safer Social Networking Principles, which were guidelines for social media companies to comply with, but it was during the era of self-regulation. So even though companies signed up to make under-16s’ profiles private by default, and we had a signing event, et cetera, a few weeks later those companies reverted to making those profiles public.
So, there is a clear and present need for regulatory drivers to enhance children’s safety online to ensure the companies are held accountable and are transparent in terms of the measures that they take to keep kids safe online. And critical and central to that is digital identity and specifically checking a particular attribute of a child which is their age, to understand their age.
Oscar: Thank you, Rachel. And Nicky, what was your journey?
Nicky: Quite different from Rachel. I started working with digital identity in the telecoms industry, working on large-scale, multichannel digital identity capabilities for consumers around the time that the mobile internet was just starting. So, there weren’t really CIAM solutions at the time, we were using enterprise IAM solutions. I then did some work with various programmes associated with government-type digital identity, which became the GOV.UK Verify solution, and I worked with Verizon and Barclays on various rounds of that.
And then I’ve also done quite a lot of work on the other side of identity: attribute sharing in advertising, media and banking markets. In the last five years, I’ve really been focusing my identity time on the self-sovereign, decentralised identity community. And I lead some working groups at the Trust over IP Foundation at the moment, focused on human experience and understanding harms associated with digital identity, particularly in decentralised ecosystems. So, I’ve come from much more of a commercial focus than Rachel, perhaps, and a kind of large-scale login angle for many years.
Oscar: Yeah, thank you. It’s super interesting how your journeys brought you to now work together at TrustElevate, solving the problem we’re going to discuss today. So, the first question is: why do children need a digital identity?
Rachel: It’s absolutely critical. One in three users online are children and young people, yet the services that have been developed, social media, gaming, et cetera, have been designed with adults in mind. And when platforms don’t know the ages of their users reliably, that means children can’t exercise their rights online, and they can’t benefit from either the protections that should be afforded to them or the opportunities. Knowing the age bands of your users, rather than restricting them, actually creates safer environments for them to have fun and enjoy themselves, to exercise their rights, and to connect with kids in the same age bands.
And right now, the business models that underpin social media, the systems and processes are driven by a commercial imperative around the attention economy, trying to keep your eyeballs on that screen for as long as possible so that you could see the maximum number of ads. And that business model puts children at risk and definitely needs to be, and is currently being, reviewed by lawmakers around the world.
For example, the easiest way to explain this is that, for example, if you’re an adult and you’re really interested in cooking and comedy, if you go on TikTok, the algorithms detect that and just serve you that content, which is fine. If you’re an adult with a sexual interest in children, the algorithm similarly detects that you are interested in looking at content that’s produced by 6, 7, 8-year-olds, and will connect you seamlessly with those children. And when those children are live streaming, their audience are adults, who ask them to engage in specific, sometimes sexually explicit acts. And so, we’ve seen over the last year, a quadrupling in the prevalence of child sexual abuse material online because these guys are asking these kids to engage in specific sex acts.
So, the business models that underpin the operation of social media platforms actively put children at risk. Having a digital identity, and specifically having one aspect of your digital identity, i.e. your age band, known to the platforms should result in a massive reduction in kids’ exposure to risks and harms.
Another example: if you’re a young girl or a young boy, and you might be worried about your weight, and you go on YouTube and look at a video about, I don’t know, a diet smoothie, the algorithm similarly detects that. And then you’re inundated with a tsunami of content that relates to eating and disordered eating behaviour, because the attention economy wants to keep you engaged. So, the content becomes more and more engaging, but in some instances it’s very, very harmful. So, by knowing the age band, that one part of your digital identity, you can mitigate the risk.
The third kind of risk to be aware of is that huge amounts of data about children have been collected by companies so that they can serve ads. So, what you’re seeing is corporate surveillance of children from a very, very young age, in the absence of their knowledge or indeed the knowledge of their parents. So those are the sorts of issues that need to be combatted.
And a final one is kid influencers. Many parents have children who like to be kid influencer stars, so they run unboxing videos: they get new toys and film the unboxing of those toys. And these children have huge followings. Some of them started out as young as four years of age.
So, a French MEP introduced employment law to protect these kids, and to make sure not only that the platforms look out for the well-being of these children, but also that their parents do. So, there is a plethora of examples; those are just some top-level ones of how digital identity could have a positive impact in enabling those kids to exercise their rights online and to be protected.
Nicky: Maybe I could add to that. I thought this was an interesting question, and it needs to be supplemented by some other questions, because in many respects, children already have multiple identities online. In many cases, before they’re even born, their parents are sharing images of them. And the platforms are tracking the fact that they’re coming into the world, because they can derive it from their parents’ activity; they can estimate a birthday. We all know the amazing things you can do with analytics. So, they’ve got loads of identities. And that suggests there’s a need, at least from the platforms, if you like, and in many ways from the parents, for them to be recognised online.
The questions I would ask are more: what type of ID is it? Who controls that ID? And when do they get an ID that they control, that they are aware of, and that they’re able to understand, so they can start building their learning and understanding of what that means in terms of their interactions and the exercise of those rights that Rachel was talking about? So, my answer to the question is, they already have one. What they need is a control point that rests with their parents and, as with other rights and responsibilities as you grow up, is gradually transitioned to the control of the child.
Oscar: Yeah, indeed, I agree with your comment, Nicky. And the way that Rachel explained it, this problem is huge, definitely huge. There are many reasons why children need a digital identity in a controlled way. And one of the main aspects that comes to mind when we think of digital identity for children, especially underage children, is verifying someone’s age. So, why should companies verify someone’s age when they are using an online service?
Rachel: Essentially for the reasons outlined earlier. Once they know a person’s age, they can then tailor the service accordingly. And increasingly, we’re seeing a raft of regulations, so these are regulatory requirements. There are drivers such as the General Data Protection Regulation, Article 8 of which requires companies that are going to process the data of anybody under the age of 16 to obtain parental consent. Now there is scope for a derogation, so some countries like Germany and Ireland have stuck with 16, but the UK and Malta, for example, have gone for 13. And this reflects the Children’s Online Privacy Protection Act in the US, which started to be discussed in 1998, right? So, this is a consistent issue, and has been, to date, the unsolvable problem online.
But now, we’re seeing a plethora of age verification and parental consent projects. And certainly, that’s what TrustElevate is: we’re a child age verification and parental consent provider, so we can actually meet this requirement. That’s also echoed in the Digital Services Act, and a whole range of internet safety and online safety bills, which first originated in Australia and the most recent of which is being discussed currently in the UK. All of these take at their core the notion that companies should exercise a duty of care towards children and young people.
And the simple analogy is in the real world. If you go to an amusement park that has really big roller coaster rides and smaller ones for kids, and you’re below a certain height, you’re not allowed to go on the big, huge roller coasters; you have to be confined to the smaller ones, which are still a huge amount of fun, particularly if you’re a kid. So, it’s about trying to bring those health and safety requirements into the online digital playgrounds.
The question is, why should companies? They should do it because it’s the right thing to do. But now there are regulatory drivers that insist that they should. And similarly, a lot is happening at the Annual General Meetings of Meta, Alphabet and Amazon: their shareholders are increasingly calling for greater transparency and accountability in relation to not only children’s rights but human rights impacts around the world. They are saying to companies, you have a responsibility; we have seen the harms that can result from the unfettered spread of misinformation and disinformation online.
So, there is increasing pressure on companies to know the age bands of their users and also to conduct impact assessments to understand the features of their products, and how the deep learning, machine learning, algorithms and recommendation engines are actually impacting society and human rights in general.
Oscar: And those regulations you have mentioned, how present are they across the globe? What would you say: are they in many countries or just a few of them right now?
Rachel: Oh, absolutely. So, the General Data Protection Regulation has an extraterritoriality clause in it, which means it applies globally, just as COPPA, the Children’s Online Privacy Protection Act, applies globally. But what we’re also seeing, in South Africa for example and in various countries around the world, is that the GDPR, the General Data Protection Regulation, is being replicated.
When I worked within the industry, one of the responses that we had to lawmakers was, “Hey, listen, we’re offering a free service. There are different legislations in different jurisdictions, which would mean we’d have to have compliance teams in each country, and that’s overly burdensome.” But now there’s almost a harmonisation of data protection regulation. And increasingly, you can see a harmonisation in relation to the centrality of human rights, the need to conduct impact assessments, and the need for regulatory oversight of these companies in terms of their degree of adherence and alignment to those requirements. So, it’s happening globally. And, frankly, it’s about time.
Oscar: Indeed. And how can you verify the age of a user?
Rachel: At TrustElevate, for example, we have developed a way to do this, and we’ve got a process patent pending on it. We gather absolutely minimal data points: the parent’s first name, last name and mobile number, and the child’s first name, last name and date of birth. We’ve connected to a number of authoritative data sources, government and non-government, and we check for a match. We hash the data on the user’s device and check it against those data sources. So, we don’t hold any data; it’s a zero data, zero knowledge model. And this is quite ground-breaking. It comes from what drives us: our purpose is to enhance child safety online, not to use children’s data in any way that is harmful. So that’s, in essence, how TrustElevate does this. We don’t require biometrics or any document scanning.
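To make the matching model Rachel describes a little more concrete, here is a minimal illustrative sketch in Python. It is not TrustElevate’s patented process; the field choices, normalisation and shared key are assumptions for illustration only. The idea is simply that digests computed on the device are compared with digests derived from an authoritative record, so no raw personal data needs to change hands.

```python
import hashlib
import hmac

def normalise(value: str) -> str:
    """Lower-case and collapse whitespace so identical details hash identically."""
    return " ".join(value.lower().split())

def hash_record(key: bytes, *fields: str) -> str:
    """Combine personal data fields into one opaque HMAC-SHA256 digest.

    Only this digest would leave the device; the raw names, mobile number
    and date of birth are never transmitted or stored by the verifier.
    """
    payload = "|".join(normalise(f) for f in fields).encode("utf-8")
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

# Hypothetical per-check key agreed between device and verifier (illustration only).
SHARED_KEY = b"per-check-key"

# Digest computed on the parent's device from the minimal data points.
device_digest = hash_record(
    SHARED_KEY,
    "Jane", "Doe", "+44 7700 900123",   # parent: first name, last name, mobile
    "Sam", "Doe", "2014-06-01",         # child: first name, last name, date of birth
)

# Digest computed over the corresponding authoritative record.
authoritative_digest = hash_record(
    SHARED_KEY,
    "Jane", "Doe", "+44 7700 900123",
    "Sam", "Doe", "2014-06-01",
)

# Constant-time comparison gives a yes/no match with no raw data exchanged.
print("Match:", hmac.compare_digest(device_digest, authoritative_digest))
```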
Other services that are out there scan the faces of children and estimate their age, which can be useful, certainly in the context of 18-plus. If you want to go to a remote checkout, or a checkout that isn’t handled by a human being, to buy some wine, it can be handy to have age estimation. But age estimation has its drawbacks for younger children, because it happens in the absence of the consent of the parents, or indeed of the child, who is too young to give consent.
Other approaches use various data sources and, again, estimate the age. Different sectors are going to require different levels of assurance in relation to how confident you can be in the verification. There is a plethora of approaches developing, because this is a new marketplace. And many of these, all of these actually, are subject to regulatory oversight.
Nicky: And maybe I could add a couple of points on that. In terms of the regulatory, compliance and indeed commercial drivers for businesses, one aspect that’s really caught my interest in the last year or so is the new requirement in the EU Directive on corporate sustainability due diligence against environmental, social and governance factors.
This is important for businesses operating in any market where you’re using digital identity, because the requirement moves from an optional or reporting requirement to due diligence. And obviously, identity data is going to be really fundamental in these growing ESG data markets, and also in how that translates in terms of investment in capital markets.
So, adherence to human rights and supporting that in your product and your business at every level, for me, this is really significant because it moves it out of the CSR department and into the core of your business. So, there’s lots of thinking and opportunity for identity and attribute verification services to happen there.
Also, I think one of the things that really surprised me when I first started working with Rachel concerns this requirement to know the age of your users, or their age band, for example to adhere to the Children’s Code, or Age Appropriate Design Code, and so forth. What surprised me was that platforms say they don’t know, that they can’t know. Because my experience in data analytics is that absolutely they know; otherwise, they can’t sell the advertising. It’s one of the core attributes.
So, I mean, you know, we’ve seen evidence from Meta: they know the harms that are being done specifically to teenage girls. So, they often know a great deal more than many of the bearer providers like the telcos. So, I really struggle with this, because the tech is not hard. There are many solutions, as Rachel pointed out. Obviously, many of them are very disrespectful of that initial consent piece, which is so fundamental with children, scanning lots of faces, you know, we’re all saying “we don’t want that, we don’t need that.” And obviously, from a document-based ID&V perspective, children don’t have utility bills, so they’re automatically thin files. So, it’s very difficult to use the documentation approach.
So really, it comes down to this interrogation of data sources, which the platforms have, to at least some level of assurance. We rely on the regulators, the policy makers, and the moral compass of the people making these decisions, thinking about all the children in their lives, to do the right thing and recognise the age bands of users. Obviously, verification requires a solution like TrustElevate or one of the others. But I’m pretty certain that the methods platforms like Meta or Google already have are more accurate, I would suggest, than the plus or minus three years that you get from facial scanning.
Oscar: Yeah, most likely that’s the case, at least for the big platforms. Big platforms should have that capability, I agree with that. The technical difficulty, yeah, it sounds like an excuse. One related topic that I’ve been talking about recently is guardianship. Could any of you tell us a bit about guardianship?
Nicky: Sure, I’ll take that one. Guardianship is obviously a legal term in many jurisdictions. But what we mean by guardianship in a more general sense kind of started about three years ago, when we were doing the Sovrin Foundation Governance Framework. We were thinking about what was necessary for self-sovereign identity and for identity for all, so that anyone could have a digital identity via the Sovrin Network. And we started thinking about how you cater for people who, for whatever reason (digital exclusion, being a child, being an older person living with dementia, being a refugee, or many such exclusionary examples), may need someone to act on their behalf.
And we wrote a white paper about guardianship, which is freely available from the Sovrin Foundation website. That has been further extended and explored, and it looks at how you could use things like verifiable credentials and other governance constructs to enable guardianship. This would be represented so that, obviously, the parent is the guardian of a child and is able to exercise rights and responsibilities with respect to their dependants. And they are also gradually able to transfer those rights and responsibilities, because there’s no cut-off date when you cease to be a child and suddenly become an adult.
And all of us require guardianship, or are guardians, at some point in our lives. So, by fragmenting things out, if you like, not having one single digital identity, a decentralised solution is able to represent those relationships using various forms of digital identity. Obviously, it can be done in a more classic, centralised model. But consider the number of different contexts and jurisdictions in which you might want to assert your identity, and the need for data minimisation, particularly when it comes to children. We looked into that and, to my mind, if you’re not designing guardianship into your identity architecture and set of services, then you’re going to have to go back and start again. It’s so fundamental, in my mind, to a good architecture that is respectful of human rights and is focused on human beings.
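As a rough illustration of the verifiable-credential approach Nicky mentions, the sketch below shows what a guardianship credential might look like, loosely following the W3C Verifiable Credentials data model. The field names, example DIDs, permissions and the “GuardianshipCredential” type are hypothetical assumptions, not a published Sovrin or Trust over IP specification, and the cryptographic proof an issuer would attach is omitted.

```python
import json

# Hypothetical guardianship credential, loosely modelled on the W3C
# Verifiable Credentials data model. Identifiers and claim names are
# illustrative assumptions, not a published specification.
guardianship_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "GuardianshipCredential"],
    "issuer": "did:example:registry-authority",        # e.g. a civil registry or bank
    "issuanceDate": "2022-01-15T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:parent-1234",                # the guardian (parent)
        "dependent": "did:example:child-5678",          # the child under guardianship
        "relationship": "parent",
        # Rights the guardian may exercise on the child's behalf.
        "permissions": ["consent_to_data_processing", "authorise_payment"],
        # Guardianship is not open-ended: rights transition gradually to the
        # child, so the credential carries a review date rather than lasting forever.
        "reviewDate": "2026-06-01",
    },
    # A real credential would include a cryptographic proof from the issuer;
    # it is omitted here for brevity.
}

print(json.dumps(guardianship_credential, indent=2))
```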
Oscar: Yes, you’ve been describing in a very interesting way the challenges and the solutions that have been introduced, both from the technical side and from the regulatory side. But we also know that the identity landscape is evolving all the time. So the challenges we are describing now, or that were the main challenges a few years ago, will keep changing. How does this change of challenges affect the solutions or the proposals for verifying children’s identity?
Rachel: We are seeing increasingly, as Nicky just really eloquently pointed out, the responsibility that companies have in relation to understanding guardianship and building that into their frameworks. And increasingly across the EU, for example in the eIDAS framework and its architecture, and in mobile bank eID and its use, there is a growing focus on children and young people. So, there’s a project at the moment called euCONSENT, which is looking at exactly this issue of guardianship and parental responsibility.
And the key thing to keep in mind, Oscar, is that kids as young as two, three or four are online; they’re on YouTube videos. So, at that very young age, there is such a critical need for parental oversight, and for parents to be able to exercise children’s rights on their behalf in relation to what happens to their data and what kind of content they’re exposed to. And increasingly, we’re seeing VR in education. It’s huge. This technology is so incredibly exciting: children can put on these headsets and be transported to, I don’t know, Mount Etna if they’re studying volcanoes. There are incredible, incredible opportunities here. And we want children to be able to benefit from them and not have to deal with the huge negative issues that they currently have to deal with on an everyday basis.
For example, at Bebo I was responsible for the abuse management system, so I saw first hand the issues that kids have to deal with in relation to eating disorders, suicidal ideation, self-harm, and adults with a sexual interest approaching them. So, these future metaverse and VR-enabled environments need to be designed with safety at their core. Safety-by-design principles need to be implemented. Core to that is understanding age bands, and critical to that are the supporting identity architectures and trust frameworks that can enable it at a global level. It is also critical for those to be interoperable.
Nicky: I see some interesting trends in the landscape at the moment, which are going to throw up some new challenges and opportunities for children’s identity. So, I think the first is obviously, there’s lots of talk about Web 3.0 and it’s definitely happening in all sorts of different ways. There are new types of asset and one of those types of asset is our digital identity.
What’s interesting for me is a couple of trends from the world of DeFi and crypto and stuff like that. The first is the falling importance of legal identity, because a lot of it is wild west, and there aren’t the legal frameworks in place, although they’re coming. And there’s obviously a kind of anarchic strand in there. The world of Bitcoin is designed not to be controllable or regulated by central authorities and governments in particular, which obviously have that strong link with legal identity.
So, what you’re seeing emerge are two other types of clustering of attributes and identity. The first is ability to pay. Now, most of our identity systems are there to manage risk and liability. Because children are not legally liable, they’re of little interest to centrally orchestrated identity systems that control only the risks to that central authority or paying stakeholders. So, as we recognise more types of value exchange above and beyond payments in fiat, for example, I think that ability to pay will be calculated and managed in a different way. And I think that provides a challenge for children because, obviously, one of their rights is financial inclusion, and there are some challenges around privacy, different types of control points, and again the interplay of liabilities in complex and decentralised transactions. So, identity that’s more about the ability to pay, and more about the association of attributes with different types of value.
And then the second one that’s emerging quite strongly is reputation-based, social reputation. In the world of crypto, et cetera, they just care about the consequences. There’s no law, and they’re not necessarily going to be fined, because there’s no central authority or judiciary to impose that law, even though we’ve got things like DAOs. At the same time, what you’re seeing is a kind of self-regulation. The worst thing that can happen to you is that your reputation in these networks takes a beating, and no one will do business with you or trade with you. And so, you’ve got things like KILT developing, which are based on reputation scoring. So, I see a lot more of that kind of identity emerging, which might have nothing to do with what we would classically call legal identity or foundational identity.
Rachel: Yes, and in terms of that ability to pay as well, in the more regulated spaces we’re seeing the Payment Services Directive, which enables bank-to-bank transactions, and the growth of super apps, such as WeChat in China, which basically enable you to do pretty much everything from within the app. So that ability for kids to make bank-to-bank transactions brings in secure customer authentication requirements.
For example, a child’s bank account, until they’re about 11 or 12, is a sub-account of a parent’s account. So, in the more regulated payment environments, there is a need, and certainly that’s a driver for the evolving digital identity landscape, to ensure that the legal requirements for secure customer authentication of the parent are met, and that the authorisation of that transaction can happen.
And also, at TrustElevate for example, we enable digital onboarding of child and teen bank accounts, which is new, because ordinarily you have to go into a bank with your child and have somebody look at your identity documents and basically eyeball you and the child and say, “How likely is it that this chap is that child’s parent?”
So, we’re seeing the evolving digital identity landscape streamlining those processes and making them seamless, and that should result in a greater proportion of children having access to bank accounts. In combination with PSD2, we can see a change happening in the levels of financial inclusion and the opportunities for financial education for children and young people. So, it’s a change that will have an impact across business sectors and enhance children’s rights in relation to payments and their engagements online.
Oscar: Yeah, definitely, that’s a great solution that TrustElevate has already shaped. Indeed, hearing both of you, I can see many new challenges growing as we speak, and some that will become more relevant in the years to come. So, could you briefly summarise how we overcome these new challenges?
Rachel: I think part and parcel of it, and it’s really, really important to understand this, as Nicky brought up earlier, is that of course companies probably know, within a reasonable number of years, the ages of their users. But the thing is, they’ve had 20 years to recognise that and to behave accordingly, and they haven’t done so.
But there’s also that notion of, for example, in a nightclub, you have a bouncer who’s not going to let you get into the nightclub if you’re 12 years old, right? So that’s what age verification does. It’s a bouncer at the door, rather than the systems that currently are in place, it’s like let everybody in, and then we’ll use AI and tracking and surveillance to pick out the ones that are below age, and then kick them out. And in the absence of a reliable age verification system, you can kick them out, but they can just create a new identity and login again. So, if you remove all your cookies, you could just set up another account. It’s a little bit more challenging with mobile phones these days.
So, we need to set a standard. And that’s what the PAS 1296 Age Checking Code of Practice did: it set a standard, and it’s becoming an ISO standard. And once we have those standards in place, and the legislation in place, and drivers such as shareholders holding companies to account, then we can have greater clarity in relation to what is expected of companies, and where the lines are drawn in relation to what is acceptable and what is illegal, or in noncompliance with the legislation.
So those are the challenges. It’s ensuring that those regulatory drivers have enough of an impact on the way that companies think about their duty of care towards children and young people, and then ensuring that regulators work together. One of the challenges to date has been that regulators have operated in silos. The data protection regulator was not necessarily talking to the Financial Conduct Authority, the financial regulator, and they weren’t necessarily speaking to the Competition and Markets Authority.
What we’re seeing over the last few years is a recognition that those regulators need to be speaking to each other, and need to have a way to enforce the regulation and a way to communicate with industry to help them understand the impact of a raft of regulation on different aspects of their businesses. And once they understand that, actually, if I’m going to be selling products and I want to enable bank-to-bank transactions, there is a clear return on investment in knowing the identity of my users, so that I can enable that to happen, right?
So, getting companies to understand the impact of those regulations on different business verticals, and horizontally, having the ability to know the key identity attributes of their users, it actually makes good business sense. And I think once companies grasp that, and also understand that for end users, trust is a massive differentiator, right? People have seen the negative impact of the unfettered spread of misinformation, disinformation, the harms on children, so they’re expecting more from companies.
And also, we’ve had a generation, a couple of generations of kids going through who are now adults. And so, there’s almost a mobilisation of consumers to say, “Listen, we expect more from you and we demand more from you, as companies.” So, I think the winners in going forward will be those companies that recognise that actually it is valuable and meaningful, and there’s a massive return on investment for them, if they behave responsibly, and comply with the regulations, and exercise a duty of care towards children and young people.
Oscar: Thank you. Final question for you, for all business leaders, listening to us now, what is the one actionable idea that they should write on their agendas today?
Rachel: I think they must recognise that they need to know the ages of their users and design their services accordingly. They need to understand the harms and work with their teams. Very often, and I’ve seen this myself working with product developers, it’s like, “Whoa, dude, we could create this amazing product, and then we’ll be able to serve ads.” Increasingly, legislation and statutory obligations require you to think about the age bands of your users, and to think about the various known risks and harms that are associated with particular product features and behaviour modification, like endless scroll and loot boxes. And there are known mitigations.
So, make sure that your product developers, data scientists, and commercial teams are engaged in the process of conducting a Child Rights Impact Assessment, so that they’re understanding, really thinking through, and then documenting: “we’ve considered these known risks and known harms, we’ve put in place these mitigations, so we’ve diversified our product offering depending upon the ages of our users”. And then you build consumer trust. So, thinking through those regulatory requirements and the end user, and exercising that duty of care, is certainly something that they should have on their agenda.
Nicky: And I have just one simple thing that you could do: create a youth council in your organisation that’s made up of young people and that works with children themselves. Because as adults, we haven’t grown up with the world that we see today. We’re too busy creating the wonder machines and trying to figure out the latest software and so forth, but they’re living this experience. And as with all great products and services, if you talk to the people you’re designing the service for and co-create with them, then all sorts of magic can happen. So, create a youth council for your organisation.
Oscar: Yeah, great idea. Thank you for that. And thank you for this super interesting conversation. Finally, would you please tell us, if someone would like to get in touch with you or learn more about the work you are doing, what are the best ways to do that?
Rachel: You can come to [email protected], or to me directly [email protected].
Nicky: Yeah. And you can contact me at [email protected] or via LinkedIn.
Oscar: Perfect. Well, thank you very much Nicky and Rachel for the super interesting interview. And congratulations for all the work, the amazing work you are doing. So, thanks a lot and all the best.
Nicky: OK. Thanks, Oscar.
Rachel: Thank you. Thank you, Oscar.
Nicky: Bye.
Thanks for listening to this episode of Let’s Talk About Digital Identity produced by Ubisecure. Stay up to date with episodes at ubisecure.com/podcast or join us on Twitter @ubisecure and use the #LTADI. Until next time.
[End of transcript]