Many of us have woken up to a new year after a dreadful 2020, wondering about our lives, who we are, and how we impact the planet. This feeling of harm is not just a product of the pandemic rampaging across the human species. A climate of harmful behavior has been percolating in tech circles for many years.
In life, it often takes a major event to make you sit up and take notice. Changing behavior usually requires multiple occasions, staggering in their impact, before human society shifts. This is nowhere truer than in the area of technology. We humans love our gadgets and tech toys, but do they love us back?
The question is…can technology be ethical, or are we doomed to be spied upon and misused by the tools designed to make our lives better and easier?
The idea that we own our data has arisen in the last ten years. The information that helps us create a digital presence and perform digital tasks, from engaging with social media to paying for goods on Amazon, has become a commodity.
Personal data is the stuff upon which digital wars are fought. Its ownership is part of the battle for online control, with companies like Google and Facebook vying for that control. Data became valuable as soon as there was plenty of it, the storage to hold it, and smart data analytics technology capable of putting it to use. Since 2010, there has been exponential growth in the volume of data captured and consumed.
This surge is broadly in line with internet usage: according to the UN, by the end of 2019, 51% of the world’s population, or around 4 billion people, were using the internet. But the importance of data as a valuable commodity was taking root long before this. In 2006, Clive Humby made the now-famous statement that “data is the new oil”. Hyperbolic or not, the fact remains that data is the stuff of the internet and may well end up as the stuff of individuals’ nightmares.
When something is valuable, the gloves come off. The resulting attitudes toward personal data started the ethical debate in earnest. Data is exploitable because it represents us as individuals, and exploited it has been. Data has not only driven unethical behavior among tech giants; it is also at the root of cybercrime. Massive data-gathering exercises have powered the money-making ventures of the big tech companies.
Google made the statement “don’t be evil” part of its company code of conduct (CoC) back in 2000, and the words still appear as a closing statement in its latest CoC. However, the company became a poster child for unethical data use. In 2012, a spoof video called the “Gmail Man” made fun of the fact that Google Mail looked for keywords in emails, which were then used to target ads at Gmail users: Google, effectively, took ownership of personal communications to create marketing data.
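To make that mechanism concrete, here is a deliberately simplified sketch of keyword-driven ad targeting. This is not Google’s actual system; the categories, keywords, and function name are invented purely to show how scanning private text can yield marketing signals.

```python
# Hypothetical illustration of keyword-based ad targeting.
# The categories and keywords below are invented for illustration only.

AD_CATEGORIES = {
    "travel": {"flight", "hotel", "itinerary", "boarding"},
    "baby": {"diapers", "stroller", "formula"},
    "finance": {"mortgage", "loan", "refinance"},
}

def infer_ad_categories(email_body: str) -> set[str]:
    """Return the ad categories whose keywords appear in the email text."""
    words = {w.strip(".,!?").lower() for w in email_body.split()}
    return {cat for cat, keywords in AD_CATEGORIES.items() if words & keywords}

if __name__ == "__main__":
    sample = "Your flight is confirmed. The hotel sent your itinerary."
    print(infer_ad_categories(sample))  # {'travel'}
```

Even a toy version like this makes the point: the email’s author never asked to become a marketing profile, yet a few lines of scanning turn private correspondence into one.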
This step was lucrative and opened the door to a now-common type of privacy misuse, with many companies following suit. In 2014, Google faced a lawsuit over illegal email scanning. Two years later, Google stopped scanning emails. But this was not a wholehearted move; Google was still arguing against the ruling in 2019.
Google is not unusual in using our data to oil its corporate wheels. The argument is that “you can’t have something for nothing”, and data is the payment for Gmail. However, ethical technology must transcend this on/off privacy switch. Our data carries the digital form of ourselves, and it should be up to the individual to determine its use.
This consent ideology is written into laws like the EU’s GDPR and the U.S. CCPA. But consent, regulation, and individual choice become complex and nuanced when data is used online. Choice and consent mean little if the individual offered the choice has no real alternative. As an example, take a young woman who has a 3-month-old baby. The woman is temporarily homeless, living in a friend’s home with her baby by her side.
She has little money and borrows from friends. Browsing on a friend’s laptop one day, she comes across an offer of a few dollars if she fills in an online form – a form that asks many deeply personal questions. She feels uncomfortable but needs money to buy milk for her baby, so she ‘chooses’ to take up the offer. The ethics behind this are dubious; do you really have a choice in a situation like this?
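Regulation can at least require that consent be recorded and tied to a specific purpose, even if it cannot fix the power imbalance in a story like the one above. As a rough illustration of how that consent ideology shows up in software, here is a minimal sketch of purpose-specific consent gating; the record structure, purposes, and function names are assumptions for illustration, not any regulator’s specification.

```python
# Minimal sketch of purpose-specific consent checks, loosely inspired by
# GDPR/CCPA ideas. The ConsentRecord structure and purposes are invented.

from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    user_id: str
    granted_purposes: set = field(default_factory=set)  # e.g. {"service-delivery"}

    def allows(self, purpose: str) -> bool:
        return purpose in self.granted_purposes

def process_personal_data(record: ConsentRecord, purpose: str, data: dict) -> dict:
    """Process personal data only if the user consented to this specific purpose."""
    if not record.allows(purpose):
        raise PermissionError(f"No consent for purpose: {purpose}")
    # ... actual processing would go here ...
    return {"user": record.user_id, "purpose": purpose, "fields": list(data)}

if __name__ == "__main__":
    consent = ConsentRecord("user-42", {"service-delivery"})
    print(process_personal_data(consent, "service-delivery", {"email": "..."}))
    try:
        process_personal_data(consent, "ad-targeting", {"email": "..."})
    except PermissionError as err:
        print(err)  # No consent for purpose: ad-targeting
```

The design choice worth noting is that consent is checked per purpose at the point of processing, not treated as a single on/off switch granted at sign-up.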
The road to ethical technology (EthTech) is set…
Ethical technology is about preventing or reducing the harm technology can do to society and the individual. As technology has become increasingly intrinsic to our lives and bound up in an intricate web of personal data, this harm prevention is no longer a simple fix. Technologies that hold great promise, such as those based on artificial intelligence (AI), are already raising ethical red flags.
Issues range from illicit data collection and misuse to baked-in racism. These issues may invite a ‘so what?’ reaction, but their impact can be devastating. Take the example of ‘racist AI algorithms.’ The UN reported on an African American man arrested for shoplifting. The man was handcuffed in front of his family and taken into custody. AI-enabled facial recognition had been used to identify known shoplifters and had picked the man out.
However, this particular tool had not learned to differentiate between Black faces because the training images were, in the main, of white faces. The man was subsequently released, but not before the trauma of arrest.
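One practical safeguard this incident points to is evaluating a model’s accuracy per demographic group rather than relying on a single overall score. Below is a minimal sketch of such a disaggregated check; the groups, predictions, and labels are invented for illustration and stand in for whatever evaluation data a real system would use.

```python
# Illustrative per-group accuracy check: the kind of disaggregated
# evaluation that can surface bias hidden by one overall score.
# The group names, predictions, and true labels below are invented.

from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted_id, true_id) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, truth in records:
        total[group] += 1
        correct[group] += int(predicted == truth)
    return {group: correct[group] / total[group] for group in total}

if __name__ == "__main__":
    results = [
        ("group_a", "p1", "p1"), ("group_a", "p2", "p2"), ("group_a", "p3", "p3"),
        ("group_b", "p4", "p9"), ("group_b", "p5", "p5"), ("group_b", "p6", "p8"),
    ]
    print(accuracy_by_group(results))  # {'group_a': 1.0, 'group_b': 0.333...}
```

A model that scores well on average can still fail badly for one group; breaking the numbers out this way makes that failure visible before the system is deployed.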
Ethical technology is possible. It is a case of being aware of the issues and understanding how poor design has far-reaching impacts. Organizations are starting to emerge whose work is to police tech ethics and issue advisories on technology design and use.
Creating ethical tech, or ‘EthTech’, will ultimately benefit everyone as trust and relationships come to be seen as part of the behavioral ecology of technology. A 2020 Cisco survey, “From Privacy to Profit: Achieving Positive Returns on Privacy Investments”, concurs, concluding that “investment in privacy results in large returns.”
There are copious examples of unethical tech running amok with our liberty – too many to list here, but recent years offer the Facebook–Cambridge Analytica scandal and TikTok’s record fine for putting the safety and privacy of children’s data at risk. Examples of biased AI algorithms are equally plentiful.
The sheer number of data misuses and associated falsehoods is changing user behavior. People want respect, and with respect comes loyalty – this is not a new idea, but technology seems to have forgotten it. The result has been devastating, with knock-on effects that end in data breaches and scams, as well as growing distrust of technology in general.
A 2019 eMarketer survey into consumer attitudes towards online marketing found that over half of respondents were “concerned” about how tech and social media companies use personal data for commercial purposes. Three-quarters of those surveyed used ad blockers, and only 10% felt comfortable with their data being used for targeted ads: you get what you give in online marketing and technology.
In terms of business and tech ethics, an annual study of digital business carried out by MIT Sloan Management Review and Deloitte found that only 35% of respondents said their organization’s leaders spend enough time on the impact of their digital initiatives on society.
People want tech to be ethical, and now is the time to take on that challenge. The discipline of Values in Design (ViD) looks at how socio-technical design principles can be used to make better tech that works for all. It comes down to recognizing that technology is not just a tool; it can be a representation of ourselves. And as such, it must be designed not to harm.
We have a choice now. We can continue down the path to a dystopian future where we have no privacy and where ethics are for those wealthy enough to afford them. Or we can choose to create safe technology, designed to eliminate harm, for use by all. I predict that Ethical Design is the next big thing.