Emerging technologies can play a role in preventing domestic violence with checks and balances

By Joanna Knox, November 25, 2022
How far should emerging technologies such as AI and machine learning be used to uncover the warning signs of domestic violence? This was the question I recently explored in a keynote address to the 2022 Technology Safety Summit. Hosted by WESNET, this fourth Summit brought together national and international experts on technology safety and violence against women.
A 2020 study by the Centre for Economic Performance in the UK found that machine learning systems that analyse information such as criminal records, calls to the police and reported incidents of violence can identify the risk of repeat incidents more accurately than the standardised questionnaires used by police forces. Closer to home, Queensland has trialled AI as a risk-assessment tool to predict and prevent domestic violence, screening data from police records to identify ‘high risk of high harm’ repeat offenders.
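To give a flavour of how such a risk model works, here is a minimal sketch in Python. The feature names, synthetic data and choice of gradient boosting are all my own invention for illustration; the CEP study and the Queensland trial use real police data and their own modelling choices.

```python
# Minimal, illustrative sketch of a repeat-incident risk model.
# All features and data below are synthetic inventions, not real records.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical structured features drawn from police-style records.
X = np.column_stack([
    rng.poisson(2, n),       # prior calls to police
    rng.poisson(1, n),       # prior reported incidents of violence
    rng.integers(0, 36, n),  # months since last incident
    rng.integers(0, 2, n),   # criminal-record flag
])

# Synthetic label: repeat incident within 12 months (toy relationship only).
logits = 0.5 * X[:, 0] + 0.8 * X[:, 1] - 0.05 * X[:, 2] + 0.7 * X[:, 3] - 2.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# A risk score per case, for a human assessor to weigh, not act on blindly.
risk = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, risk), 3))
```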
In another recent example, natural language processing and machine learning methods were used to identify tech-facilitated abuse cases by assessing five years of unstructured text data from the independent UK crime-fighting charity Crime Stoppers. Seven hundred instances of potential tech-facilitated abuse were identified from over 430,000 reports; human review was then needed to isolate the 14 actual cases among those 700.
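The two-stage pattern in that study, machine triage followed by human confirmation, can be sketched in a few lines of Python. The training snippets and labels below are invented placeholders, not real reports, and the study’s actual NLP methods were considerably more sophisticated.

```python
# Illustrative two-stage triage: a text classifier surfaces candidate
# reports, and a human reviewer makes the final determination.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented placeholder reports; 1 = potential tech-facilitated abuse.
train_texts = [
    "he installed a tracking app on my phone without telling me",
    "she keeps logging into my email and reading my messages",
    "my car was broken into last night on the high street",
    "reporting graffiti on the wall of the community centre",
]
train_labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

incoming = [
    "someone has put a gps tracker on her car",
    "lost wallet at the market on saturday",
]
for text, score in zip(incoming, clf.predict_proba(incoming)[:, 1]):
    # High-scoring reports are queued for human review rather than acted on.
    print(f"{score:.2f}  {text}")
```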
Meanwhile, the Commonwealth Bank has used AI and machine learning to detect abusive behaviour in transaction descriptions. In a three-month period, CBA detected 229 unique senders of potentially serious abuse, each of which was then manually reviewed to determine severity and the appropriate action required from the Bank.
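This is, of course, not CBA’s actual system, but the shape of the pipeline it describes, automated detection of unique senders followed by manual severity review, might look something like the hypothetical sketch below. The terms and transactions are invented.

```python
# Hypothetical first-pass screen over transaction descriptions.
# Flagged senders go to a manual review queue, as in the process above.
from collections import defaultdict

ABUSE_TERMS = {"worthless", "you can't hide", "watching you"}  # invented placeholders

transactions = [
    {"sender": "A123", "description": "Rent for March"},
    {"sender": "B456", "description": "you can't hide from me"},
    {"sender": "B456", "description": "I am watching you"},
]

review_queue = defaultdict(list)
for tx in transactions:
    description = tx["description"].lower()
    if any(term in description for term in ABUSE_TERMS):
        review_queue[tx["sender"]].append(tx["description"])

# Unique senders flagged for manual review of severity and action.
for sender, messages in review_queue.items():
    print(sender, "->", messages)
```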
It is well known that technology itself can be used to facilitate abuse.
A recent paper by Bridget Harris and Delanie Woodlock looked at women’s experiences of technology-facilitated domestic violence in regional, rural and remote areas. They make the point that perpetrators are increasingly using technology as part of their control and intimidation tactics, such as sending or posting abusive messages or communications; stalking and monitoring movements or communications; and impersonating or stealing another person’s identity.
They propose that technological abuse be classified as ‘digital coercive control’.
Coercive control is a particularly insidious form of domestic abuse. It typically involves isolating the victim from friends, family and other forms of support; manipulating the victim to create dependency; micro-managing day-to-day activities; and controlling finances. It can even extend to trying to get the victim’s mobile service disconnected.
Our society is unlikely to tolerate widespread monitoring of texts, movements and other communications, but how far should we use AI and machine learning to search for this behaviour? For example, what about publicly posted information? What about metadata such as the volume and timing of texts and calls, or searching for signs of unauthorised access? All of this could be collated to identify potential warning signs of ‘digital coercive control’, and all of it is certainly possible with today’s technology.
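To make the metadata idea concrete, here is a purely hypothetical sketch that scores the volume and late-night timing of contact between two numbers as a possible warning sign. The thresholds and events are invented, and any real system would demand consent, careful design and human oversight.

```python
# Purely hypothetical: heuristic scoring of communication metadata only
# (counts and timestamps), never the content of texts or calls.
from datetime import datetime

# Invented events: (timestamp, kind) from one number to another in one day.
events = [
    (datetime(2022, 11, 20, 2, 14), "sms"),
    (datetime(2022, 11, 20, 2, 31), "call"),
    (datetime(2022, 11, 20, 3, 5), "sms"),
    (datetime(2022, 11, 20, 14, 0), "sms"),
]

daily_volume = len(events)
late_night = sum(1 for ts, _ in events if ts.hour < 6)

# Toy thresholds for illustration only, not calibrated to anything real.
if daily_volume > 3 and late_night / daily_volume > 0.5:
    print("pattern flagged for a closer, human-reviewed look")
```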
Our approach to questions about the use of AI and machine learning is informed by our Responsible AI Policy. The Policy is there to ensure that we fully understand any potential positive and negative impacts, and possible unintended consequences, that our AI systems can have on our customers, our people, and the community in which we operate, and to seek to make those impacts positive, fair, and sustainable.
This is closely aligned with the Federal Government’s Framework for Ethical AI, which is now also being used by other corporates in Australia. At Telstra we have a group called the Risk Council for AI and Data (RCAID), where we review potential uses of AI and machine learning and ensure our AI Policy is enacted across the company.
We also have a Telstra Data and AI Council: a group of executives, one representing each of our businesses, whose purpose is to understand what each business is doing and to ensure they are supported by our corporate functions in legal, cyber security and reputation oversight when it comes to how we use data and AI.
It would of course be far better if we could help potential victims see the warning signs of abuse, as opposed to providing a technology safety net after the fact.
This is not simply a task for Telstra to pick up, as it involves answering bigger questions as a community about privacy, data security, and the broader ethics of AI and machine learning.
The point for us to consider is that our society has the technology within its grasp to uncover the warning signs of domestic violence even in its most difficult-to-discern form, coercive control, precisely because technology is so prominently used to exert that abuse.
This is perhaps the ethical challenge of our times, because it involves a level of monitoring, with all the implications that carries for individual privacy.
AI alone is not going to solve the complex challenge of preventing domestic violence. But there is an opportunity for industry, government and the not-for-profit sector to work together to see whether and where AI can play a role in connecting people facing potential domestic violence with help earlier.
AI will need to move beyond simply informing better business decisions to helping the community and society for the common good.
Whatever the solution, it will need to be properly tested, operate in a transparent manner, and always leave the final decision subject to a rigorous degree of human oversight.
Telstra is certainly ready to help work through these issues.
For Telstra customers, our customer support team is trained to identify affected customers and can refer you to our SAFE team for further help. The SAFE team is specially trained to help victim-survivors stay safely connected to their Telstra services and can be reached between 8am and 6pm on weekdays on 1800 452 566.
Since 2014 we have partnered with WESNET, the peak body for specialist women’s domestic and family violence services. Telstra has donated over 34,000 smartphones with $30 of pre-paid credit to WESNET to give to victim-survivors impacted by domestic and family violence.
WESNET provides the phones through its network of specially trained frontline agencies across the country through our joint Safe Connections program.
Top image: Scenes from around UN Headquarters during the 63rd session of Commission on the Status of Women on 18 March 2019. Image: UN Women/Amanda Voisard
Group Owner for Product Excellence and Incubation – Product & Technology, Telstra
Joanna Knox is the Group Owner for Product Excellence and Incubation in the Product & Technology team at Telstra. Product Incubation includes the Telstra Labs, where we validate, explore, and incubate new product solutions for and with customers. Product Excellence is all about modernising our product lifecycle management disciplines, including Product Architecture, to support our T25 Digital Leadership goals. Previously, Joanna was Telstra’s Chief Risk Officer from 2017 to 2021. In this core governance role for Telstra, Joanna drove programs to uplift our risk and compliance effectiveness and to implement an agile@scale operating model for risk. Joanna led the Crisis Management team and resilience risk management, including our responses to natural disasters and COVID-19. Prior to joining Telstra, Joanna was a management consultant with Bain & Company for 10 years. Joanna holds a PhD in Neuroscience and Physiotherapy, a Master’s degree in Anatomy and a Bachelor of Physiotherapy.
