Online Harm - and why it matters

By Eva Hartshorn-Sanders
Director of Hartsheba Limited and
Convenor of the International Action Hub for the National Council of Women of New Zealand

Throughout my career I have worked to understand and address social justice issues, most recently as Head of Policy at the international NGO the Center for Countering Digital Hate (CCDH), which produces cutting-edge research and actively disrupts emerging and toxic forms of online harm.

Real World Consequences

Everything we value is affected by what happens online, with real-world consequences. Take, for example, the basic premise embedded in Article 1 of the Universal Declaration of Human Rights:

“All human beings are born free and equal in dignity and rights.”

Every second, this basic human right is undermined and frustrated by the proliferation of hate and abuse online; by content that promotes violence and extremism; by toxic content promoting negative self-image, eating disorders, self-harm and suicide being shared with and recommended to young and vulnerable users; and by disinformation that undermines national security and elections and prevents humanity from effectively dealing with big issues like climate change and COVID-19.

This harm does not stay online. It has a real and profound impact on people’s wellbeing, contributing to the dehumanisation of others, social unrest, damage to public health, environmental degradation, violence and extremist terrorist attacks.

The violent extremist attack on the US Capitol on January 6 was driven by a conspiracy theory spread online by extreme actors, and by Big Tech companies’ lack of action on election disinformation: the claim that the US presidential election had been stolen, which was categorically unproven and thrown out of every US court.

I’m sure many of you saw how the COVID-19 period inflated and amplified conspiracy theories and disinformation, and fed an increasingly polarised and extreme discourse online.

Misinformation in New Zealand

The Classification Office in New Zealand ran the first nationally representative survey on misinformation during this period. Its high-level findings were that:

  • 82% of New Zealanders are somewhat or very concerned about the spread of misinformation in NZ; 
  • 79% of New Zealanders get news or information from social media;
  • The majority (57%) believed they had come across misinformation in the past six months, and 21% said they noticed this daily or weekly;
  • Most New Zealanders (81%) think misinformation is becoming more common over time, while just 4% think it is becoming less common; and 
  • Most New Zealanders (90%) think misinformation is influencing people’s views about public health, and 75% tended to think false information about COVID-19 is an urgent and serious threat.

International Influence

Alongside domestic issues, what happens overseas influences what happens here in New Zealand, and vice versa.

The Christchurch terrorist’s livestream video of the attack went viral and has been linked with subsequent terrorist attacks in Europe and the United States. At the same time, the Chinese-owned app TikTok is recommending and amplifying eating disorder, self-harm and suicide content to 13-year-old users in Australia, Canada, the EU, the US and the UK within seconds of them joining the platform.

Meanwhile, misogynist, racist and anti-LGBTQ+ hate is being shared virally and globally across all platforms, leading to offline violence and crime, such as the tragic shooting in Colorado in the US.

Why is this happening and what can be done?

There are a number of key reasons why this is happening, and these also point to how we can start to address online harm.

Number 1: A Lack of Transparency Online

Currently, there is a lack of transparency over what is being shared and the impact it has on individuals and communities. Big Tech companies are built on a business model that extracts personal data and converts it into targeted advertising revenue (what Shoshana Zuboff has dubbed “surveillance capitalism”).

As soon as you join a social media platform like Facebook, Instagram, YouTube or TikTok, its algorithms start recommending content to you. Every time you pause on, share, comment on or otherwise engage with a post, the company adds that data to your personal dossier and then, through its algorithms and processes, targets content to you based on your profile.

Social media companies take money from advertisers who want their content shared and promoted in a targeted way. They have also developed an information ecosystem that rewards the promotion of high-engagement content. Engagement is data - regardless of whether it is a cute cat video or a livestream of a man spouting abusive, misogynist content about women, such as Andrew Tate.

Content that is popular makes money for Big Tech companies. There is no friction to pause and ask: is it legal, is it harmful, is it factual?

Meanwhile, search engines like Google are curating and seamlessly promoting paid posts at the top of search results and in maps, distorting the information environment.

Google ads are sold to advertisers who are often unaware of where their advertisements are shown and what their advertising dollars are funding. Google is the first source of information for many, and it has incredible power over what it chooses to show (or omit). This has a real-life impact on people’s individual and collective decisions, and can shape public knowledge and understanding of an issue or person, as well as access to core services.

For example, Google Maps and Search are directing women seeking abortions to anti-abortion fake clinics, and Google Search has been found to direct vulnerable users to the incelosphere and to misinformation about climate change.

In the majority of cases, and despite company PR about trust and safety, when users complain about content because, for example, it is abusive and breaches the company’s terms and conditions, they are met with silence. The worst kind of gaslighting.

The problem is aggravated by the fact that information about what is happening on platforms is tightly held by the platforms themselves.

In short, these companies are not showing and sharing the full picture of what is happening online and who is being hurt. The incentives to change this behaviour are not in the system, and IP and profit are placed ahead of public safety and human rights. Voluntary agreements between governments and tech companies can only go so far; express legislative requirements are needed.

As stated, social media and search companies have a significant impact on humanity.  Despite this, they largely sit outside regulation globally.  This is changing and we’ve seen significant advances in Australia, the UK, and the EU. 

Amongst other governments, I’ve worked with the Biden administration on online harm issues, and I feel confident that, despite the gridlock, Congress will advance some controls on Big Tech this session. All of the above countries have transparency requirements high on the agenda.

There are three key areas where transparency is required: 

  • Transparency of Algorithms
  • Transparency of Economics
  • Transparency of Rules Enforcement

These areas are explained further in CCDH’s STAR Framework, which I wrote last year in consultation with colleagues. API access is not automatic, but it should be. Currently, companies are withholding internal research that shows the degree of harm and intervention, as we saw through Frances Haugen’s testimony regarding Facebook and Instagram. Independent research should be supported through both funding and API access, and access should be available to researchers and the public by default rather than by the grace and favour of the company.

Number 2: Failure to Design Safe Products and Services

The starting point with any product or service should be safety by design.  Online products and services are no exception. 

Companies and individuals should not be making money from products and services that are hurting people, damaging human rights or undermining our elections.  They have a duty of care - to both adults and children - to ensure that their products and services are safe before inflicting them on a trusting public.  

Safety by design means taking a preventative approach to the development of products and services: for example, risk assessments (like those proposed in the UK’s Online Safety Bill), testing, and other incentives in the system that push companies to consider the impact of their services and ensure they are safe, such as liability (criminal and/or civil) for failing to meet a safe industry standard.

In the long run, a safety-by-design approach is beneficial for companies too, as they can avoid the expense and work involved in trying to deal with individual cases of harm from a product that is not fit for purpose.

You can read more about safety by design in the STAR Framework. The Australian eSafety Commissioner has developed supporting guidance for companies.   

Number 3: Collective Failure to Enforce Big Tech Terms and Conditions (and a Lack of Accountability)

Self regulation has palpably failed. 

Despite all companies having terms and conditions (albeit of varying quality), the big problem is enforcement. Changes, when they happen at all, are frequently the result of high-profile public research and campaigning. But even when this does trigger change, it is frequently not sustained.

The only way we can expect to see significant and sustained change is with accountability for transparency and safety through an independent media regulator.

Even in areas where Big Tech has publicly committed to resourcing safety interventions, there is a failure to act on reports made through the companies’ own reporting systems. For example, the findings in this study on anti-Muslim hate released last year show that social media companies are failing to act on anti-Muslim hate 89% of the time, despite promises made by these companies after the Christchurch mosque terrorist attack.

Number 4: Lack of Responsibility Within Companies

Trust and safety teams are frequently deprioritised within company structures, as growth, engagement and market share are prioritised.  

As seen in Congressional testimony from ex-tech company staffers last year, this is reflected in areas like promotion, bonus schemes, project management and company metrics. Many staff in these roles care deeply about the issues happening online, but their concerns become marginalised within the company.

Civil society groups that engage with companies in good faith have also been disillusioned by this process. The trend for many of these companies is to fire the teams doing this work - with Elon Musk’s toxic Twitter leading the charge - with a corresponding increase in hate speech and disinformation.

Licensing Regime for New Zealand?

In New Zealand, we have regulatory regimes for situations where people are in positions of trust and confidence, to avoid abuse. We see this, for example, with lawyers, accountants, the sale of alcohol, the promotion of gambling, and real estate.

Frequently, regulatory regimes have a licensing component: people in positions of trust may need to complete a test or qualification, be subject to an enforceable code of ethics and conduct, have minimum systems in place, and even pass a “good character and fit and proper” test. We also have corporate manslaughter laws for situations where both the company and individuals may be held accountable.

Further thought should be given to whether we need a licensing regime for company heads and people in positions of power and influence within a company in order for that company to operate in New Zealand. This could sit alongside legislative requirements for all companies, which could be differentiated according to the size and risk profile of the company.

The UK Government is progressing codes through legislation and regulations, which is a stronger approach than leaving it up to industry to develop its own. Regulations would (and should) be subject to consultation with the community - which would include tech companies but also extend, for example, to charities, academics, iwi/Māori and community groups.

It’s time to find solutions

Transparency International’s working definition of corruption is the abuse of entrusted power for private gain. This concept of private gain is not limited to financial gain but extends to control over political power. And it is not limited to politicians and governments: it extends to private sector actors that wield disproportionate power and set the course for our democracies, human rights, social inclusion and the way we receive information about important issues like COVID-19 and climate change.

It’s time for New Zealand to take a closer look at the role of Big Tech companies in amplifying and profiting from this harm, and to ensure our laws are fit for purpose. This article sets out some of the elements that should be included in a regulatory regime for online harm, including a duty of care model based on safety by design, transparency and accountability.

The Department of Internal Affairs has recently released its discussion document for public consultation, and I encourage people to read it and consider where they want New Zealand’s laws to go. I know Transparency International New Zealand will make an informed submission.

About me and events I’m involved with

My current role is Director of Hartsheba Limited and I'm the Convenor of the International Action Hub for the National Council of Women of New Zealand (NCWNZ).  

NCWNZ is running a series of events this year on online harm. The next one, organised by the Influence and Decision-making Hub, is about countering misogyny and is scheduled for 12pm on 16 June. You can register for it here.

Save the date for a political panel event on Feminist Foreign Policy to be held on 19 July in Wellington.
