Can We Have, or Do We Even Need, Ethical Technology?

Updated: 25 February 2021

Many of us have woken up to a new year, after a dreadful 2020, wondering about our lives, who we are, and how we impact the planet. This sense of doing harm is not just down to Covid-19 running rampant across the human species; a climate of harmful behavior has been percolating in tech circles for many years.

In life, it often takes a major event to make you sit up and take notice. Changing a behavior often requires multiple events, each staggering in its impact, before human society acts. Nowhere is this truer than in technology. We humans love our gadgets and tech toys, but do they love us back?

The question is: can technology be ethical, or are we doomed to be spied upon and misused by the very tools designed to make our lives better and easier?


The Road to EthTech

In the last ten years or so, the idea that we own our own data has arisen. The information that helps each of us create a digital presence and perform digital tasks, from engaging with social media to paying for goods on Amazon, has become a commodity.

Personal data is the stuff upon which digital wars are fought. Its ownership is part of the battle for online control, with companies like Google and Facebook vying for that control. Data became valuable as soon as there was plenty of it, the storage to hold it, and smart analytics technology to make real use of it. Since 2010, there has been exponential growth in the data captured and consumed.

This surge is broadly in line with internet usage: according to the UN, by the end of 2019, 51% of the world’s population, or around 4 billion people, was using the internet. But the importance of data as a valuable commodity took root long before this. In 2006, Clive Humby made the now famous statement that “data is the new oil”. Whether or not that is hyperbole, the fact remains that data is the stuff of the internet, and it may well end up as the stuff of the individual’s nightmare.

The Data Boxing Ring

When something is valuable, the gloves come off. The resulting attitudes towards personal data started the ethical debate for real. Data is exploitable because it represents us as individuals, and exploited it has become. Data has not only driven unethical behavior among the tech giants; it is also at the root of cybercrime. Massive data-gathering exercises have driven the money-making ventures of big tech.

Google made the statement “don’t be evil” part of its company code of conduct (CoC) back in 2000, and the words still appear as a closing statement in its latest CoC. However, the company became a poster child for unethical data use. In 2012, a spoof video called the “GMail Man” made fun of the fact that Google Mail looked for keywords in emails that were then used to target ads at GMail users: Google, effectively, took ownership of personal communications to create marketing data.

This step was lucrative and opened the door to a now common type of privacy misuse, with many companies following the same path. In 2014, Google was sued over illegal email scanning. Two years later, Google stopped scanning emails, but this was not an earnest move; Google was still arguing against the ruling in 2019.

Google is not unusual in using our data to oil its corporate wheels. The argument is that “you can’t have something for nothing” and that data is the payment for GMail. However, ethical technology must transcend this on/off privacy switch. Our data carries the digital form of ourselves, and it should be up to the individual to determine how it is used.

This ideology of consent is written into laws like the EU’s GDPR and California’s CCPA. But consent, regulation, and individual choice are complex and nuanced when it comes to using data online. Choice and consent are not the same thing if the individual offered the choice has, in reality, no real choice at all. As an example, take a young woman with a 3-month-old baby. The woman is temporarily homeless, living in a friend’s home with her baby by her side.

She has little money and is borrowing from friends. One day she browses on a friend’s laptop and comes across an offer of a few dollars if she fills in an online form; the form asks many deeply personal questions. She feels uncomfortable but needs money to buy milk for her baby, so she ‘chooses’ to take up the offer. The ethics behind this are dubious; in this situation, do you really have a choice?

The road to ethical technology (EthTech) is set…

Can Technology Be Ethical?

Ethical technology is about preventing or reducing the harm that technology can do to society and to the individual. As technology has become increasingly intrinsic to our lives and wound up in an intricate web of personal data, harm prevention is no longer a simple fix. Technologies that hold great promise, such as those based on artificial intelligence (AI), are already raising ethical red flags.

Issues range from illicit data collection and misuse to baked-in racism. These things may seem like ‘so-whatery’, but they can have devastating effects. Take the example of ‘racist AI algorithms’. The UN reported on an African American man arrested for shoplifting. The man was handcuffed in front of his family and taken into custody. AI-enabled facial recognition had been used to identify known shoplifters, and it picked the man out.

However, this particular tool had not learned to differentiate between Black faces because the training images were, in the main, of white faces. The man was subsequently released, but not before suffering the trauma of arrest.
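To make the mechanism behind this failure concrete, here is a minimal, purely illustrative sketch in Python (assuming numpy and scikit-learn are available; it has no connection to the actual facial recognition product involved). It trains a toy classifier on data in which one group is heavily under-represented and shows that accuracy drops for that group. All group names, features, and sample sizes are invented for demonstration.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(seed=42)

def make_group(n, shift):
    # Synthetic samples for one demographic group; 'shift' moves the group's
    # feature distribution so that the two groups genuinely differ.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X[:, 0] + rng.normal(scale=0.5, size=n) > shift).astype(int)
    return X, y

# Training data: 95% group A, 5% group B (the imbalance is the point).
X_a, y_a = make_group(1900, shift=0.0)
X_b, y_b = make_group(100, shift=2.0)
model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

# Evaluate on equally sized, held-out test sets for each group.
X_a_test, y_a_test = make_group(500, shift=0.0)
X_b_test, y_b_test = make_group(500, shift=2.0)
print("Accuracy, well-represented group A:",
      accuracy_score(y_a_test, model.predict(X_a_test)))
print("Accuracy, under-represented group B:",
      accuracy_score(y_b_test, model.predict(X_b_test)))

Run as written, the model scores noticeably worse on group B than on group A: the same failure mode, in miniature, as a recognition system trained mainly on white faces.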

Ethical technology is possible. It is a case of being aware of the issues and understanding how poor design has far-reaching impacts. Organizations are starting to appear that work to police the ethics of tech and to create advisories on tech design and use. These organizations include:

  • Me2B Alliance: Working on creating the standards that technology companies need to abide by to build with ‘ethics-by-design’ as a core remit.
  • AllTechIsHuman: Building a community of diverse voices across the technology industry to develop the structures needed to create responsible tech.

Creating ethical tech, or ‘EthTech’, will ultimately benefit everyone as trust and relationships come to be seen as part of the behavioral ecology of technology. A 2020 Cisco survey, “From Privacy to Profit: Achieving Positive Returns on Privacy Investments”, concurs, concluding that “investment in privacy results in large returns”.

Is EthTech Achievable?

There are copious examples of unethical tech running amok with our liberty, far too many to list here, but recent years have given us the Facebook-Cambridge Analytica scandal and TikTok’s record fine for putting the safety and privacy of children’s data at risk. Biased AI algorithms supply plenty more.

The sheer number of data misuses and associated falsehoods is changing user behavior. People want respect, and with respect comes loyalty. This is not a new idea, of course, but technology seems to have forgotten it, and the result has been devastating: knock-on effects that end in data breaches and scams, as well as a growing distrust of technology in general.

A 2019 eMarketer survey into consumer attitudes towards online marketing found that over half of respondents were “concerned” about how tech and social media companies used personal data for commercial purposes. Three-quarters of those surveyed used ad blockers, and only 10% felt comfortable with their data being used for targeted ads. You get what you give in the world of online marketing and technology.

In terms of business and tech ethics, an annual study of digital business carried out by MIT Sloan Management Review and Deloitte found that only 35% of respondents said their organization’s leaders spend enough time thinking about the impact of their digital initiatives on society.

People want tech to be ethical, and now is the time to take on that challenge. The discipline of Values in Design (ViD) looks at how socio-technical design principles can be used to make better tech that works for all. It comes down to recognizing that technology is not just a tool; it can be a representation of ourselves. And, as such, it must be designed to do no harm.

We have a choice now. We can continue down a path to a dystopian future where we have no privacy and where ethics are for those wealthy enough to afford them. Or we can choose to create safe technology, for use by all, designed to eliminate harm. I predict that Ethical by Design is the next big thing.

Written by: Susan Morrow

Cybersecurity specialist

Susan is an ex-chemist who transitioned to the IT-security sector in the early 1990s, where she became a founder of a cybersecurity start-up. Since then, she has built a knowledge base across diverse areas including encryption, digital rights management, digital signatures, privacy, and online identity.

Susan has been involved in identity projects addressing government, enterprise, and consumer needs. She has helped design and commercialize award-winning software solutions used by organizations of all sizes, worldwide.

Currently, Susan works as Head of R&D at Avoco Identity, which specializes in data orchestration solutions to facilitate the identity ecosystem.

Susan is also a tech writer and has a regular blog about digital identity at CSOOnline:

https://www.csoonline.com/author/Susan-Morrow/

In 2020, Susan was shortlisted for a top 100 Women in Tech award and included in the long list of the most influential women in technology in the UK by Computer Weekly magazine.

Her mantra is design for a digital life, not just digital identity.
