Technology ethics. What is that? What kind of ethical dilemmas does technology bring about? How does technology influence our morals? What are some examples of this, what impact does it have on society, and what are the risks?
By the way, I wrote a separate article about human enhancement ethics.
And, also related: What is the impact of technology on society?
Ethics of technology
What is the ethics of technology? A lot of people talk about technology ethics, but what does it mean? What are the main issues in technology ethics? And what ideas and theories can help us think about these topics?
I made this video as a sort of summary on my YouTube channel:
Summary
This is a summary of this article:
- With the progress of science and technology, ethics becomes more important.
- Technology and humans are intertwined. Humans shape technology, and technology shapes humans.
- The design of a technological product or service implies certain moral choices.
- It is not always possible to predict the effect of a technology or scientific discovery.
- A solution is our mindset: a balance between being progressive and conservative.
Overview
This is an overview of the article:
- Definition of technology ethics
- Consequences of technology
- Morality of technology (especially design)
- Ethical issues in technology
- Issues in biotechnology (like artificial brains)
- Solutions
- My conclusion
Lastly, you can hire me (for a webinar or consultancy) and dive into the reading list with extra resources, like books, articles, and links.
Definition of technology ethics
What is ethics, exactly? According to the Centre for Ethics and Health, the definition is as follows: ‘Ethics is consciously thinking about taking the right actions’. One of the primary roles of ethics, in relation to technology, is to make sure that technology doesn’t enter our lives in undesirable ways.
It’s becoming more and more important to reflect on the impact of technology. Futurologist Gerd Leonhard: ‘One of the dangers is that technological progress could overpower human values. Technology does not have ethics. And a society without ethics is doomed.’ You can watch my interview with Gerd Leonhard down below.
A society without ethics is doomed
Gerd Leonhard, author
According to professor Peter-Paul Verbeek, this view is a bit too simplistic. Verbeek (professor of philosophy at the University of Twente) specializes in the relationship between humans and technology, and in how ethics ties into it. In 2017, I started to develop an interest in the ethical aspects and consequences of technology, and I have regularly drawn on Verbeek’s books and ideas, both for my presentations and for this article.
What are the consequences of technology?
One of the most important things to realize is that technology has always influenced humans and has always played a role in our lives. Technology often functions as an intermediary between the user and their surroundings.
Just some examples: using binoculars to see better from far away, using a smartphone to communicate with others, or using earplugs to protect your ears from excessive noise.
Technology co-shapes reality.
Bruno Latour, philosopher
The French sociologist and philosopher Bruno Latour argues that ‘technology is a mediator that actively co-shapes reality.’ Take a technology like the internet. If you look for information on Google, then Google’s search results also shape the reality that you experience. So, humans shape technology and vice versa.
Technology does not have ethics
Technology in and of itself doesn’t have ethics. Perhaps it sounds a bit cynical: technology, such as artificial intelligence, only uses ethics, norms and values in order to learn more about humans.
I believe technology will continue to play an important role regarding our norms and values, as it’s becoming ever more all-encompassing and invasive. Take virtual reality for instance; that’s a great example of how technology is playing a bigger and bigger role in co-shaping our reality.
In his books, Verbeek argues that there’s no use in simply thinking of ethics as ‘protecting’ the boundaries between humans and technology. That’s because humans and technology have always been intertwined. There’s no way to prohibit technology or stop it from playing a part in our lives. It’s inevitable, like language, oxygen and gravity.
What is the morality of technology?
In a way, those who design a technology also shape its morality. Anyone can act as the designer of a technology: companies, governments, or individual innovators. It’s important to find a delicate balance between being completely free of morals and being condescending towards the users of these designs. After all, every technological design implicitly or explicitly embodies ideas about what a good and just life looks like.
Take Google Glass, an experimental pair of glasses by Google that shows the user extra information. When I’m around others, can they see that I’m using these glasses? That’s one example where Google could choose to build morality into its design: by adding an LED light on the front, which lets others see whether the glasses are on.
Design and morality
One of the problems is that the designers of technologies aren’t always able to accurately predict how their technology will eventually be used, and what kind of consequences this could have. To name a few examples:
- Telephone
- Typewriter
- Car
#1 The first prototype of the telephone was developed as a hearing aid for the hard of hearing. Another example: SMS (texting) was intended to help technicians communicate with each other.
#2 The typewriter was invented as a machine that could help visually impaired people write.
#3 This one really took me by surprise. The first cars were used for sports and medical purposes (!?). Patients would sit in the back of the car and be driven around. The idea was that they could inhale thin air, which would be good for their lungs.
Adversarial Design
A fun example of how technology and ethics can blend together is so-called ‘adversarial design’. This refers to creating technology designs that provoke users and inspire them to be more conscious of the technologies they use, instead of looking the other way.
A few examples provided by Carl DiSalvo: a browser extension that converts Amazon prices into the amount of oil they are worth, an umbrella with electric lights that keep surveillance cameras from recognizing you, and the Natural Fuse. The latter is a system that lets you monitor your energy consumption by reading it off a plant: if you consume too much energy, thus emitting CO2, your plant dies.
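The conversion behind a price-to-oil browser extension is simple arithmetic. Here is a minimal sketch in Python; the oil price is an assumed, illustrative value, not live market data:

```python
# Illustrative sketch of the price-to-oil conversion behind such a
# browser extension. The barrel price is an assumed example value.

BARREL_PRICE_USD = 80.0      # assumed price of one barrel of crude oil
LITERS_PER_BARREL = 158.99   # standard volume of one oil barrel

def price_to_oil_liters(price_usd: float) -> float:
    """Convert a product price in USD into the equivalent volume of crude oil."""
    barrels = price_usd / BARREL_PRICE_USD
    return barrels * LITERS_PER_BARREL

print(f"A $40 product is worth about {price_to_oil_liters(40.0):.1f} liters of oil")
```

The point of the design is not precision but confrontation: the same number, expressed in oil instead of dollars, nudges the user to reflect.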
This part is about ethical issues caused by the introduction of certain technologies.
Ethical issues in technology (5 examples)
Apart from the fact that a technology can be used for purposes other than those intended by its designer, it can also have an (unexpected) impact on other, seemingly unrelated aspects of life. A few examples:
- Contraceptive pill
- Low-energy lightbulbs
- Deep Brain Stimulation (DBS)
#1 The introduction of the contraceptive pill also led to a breakthrough in the social acceptance of homosexuality. Because of the pill, society started to think of sex and reproduction as two separate things.
#2 When low-energy lightbulbs were introduced, energy consumption was expected to go down. However, people started using these low-energy bulbs in far more places, which led to an overall increase in energy consumption.
#3 Deep Brain Stimulation (DBS) is a technology that can relieve certain symptoms in Parkinson’s patients through an electrode in the brain. One medical journal described a case in which a patient who underwent this treatment didn’t just get better, but also developed a completely different personality. He started purchasing expensive things, cheated on his partner, and his friends hardly recognized him anymore.
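The lightbulb example (#2) is a classic rebound effect, and a bit of back-of-the-envelope arithmetic makes it concrete. All numbers below are assumed, purely for illustration:

```python
# Back-of-the-envelope illustration of the rebound effect with
# low-energy bulbs. All numbers are assumed example values.

INCANDESCENT_W = 60   # watts drawn by one old incandescent bulb
LED_W = 9             # watts drawn by one low-energy bulb

old_bulbs = 10        # bulbs in use before the switch
new_bulbs = 80        # bulbs after the switch: cheap light ends up everywhere

old_total = old_bulbs * INCANDESCENT_W   # 600 W in total before
new_total = new_bulbs * LED_W            # 720 W in total after

print(f"Before: {old_total} W, after: {new_total} W")
# Each bulb uses less, yet total consumption went up.
```

Per bulb the savings are real; it is the change in behavior (many more bulbs) that erases them.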
Impact on society
The examples I mentioned before show the kind of impact technology can have, both on individuals and on society. Sometimes it takes a while before society adjusts to a new technology; this was also the case with driving (#4) and smartphones (#5).
#4 In the first few years after cars appeared on the roads, the number of traffic deaths rose rapidly. This continued until car manufacturers started developing technologies that made driving safer, such as seat belts, airbags, and ABS. In addition, governments introduced laws and regulations (about drunk driving, for instance) and changed the infrastructure (by introducing roundabouts, for example).
An interesting lesson is that car manufacturers and governments only introduced these safeguards and regulations after Ralph Nader published his book Unsafe at Any Speed.
#5 In the first few years after the cellphone was introduced, everyone left their ringtone on. After a while, this was no longer socially acceptable, and putting the phone on silent became the new normal.
And yet…
These examples are all super interesting, but the technology tsunami that is about to hit us now (genetic modification with CRISPR/Cas9, artificial intelligence, neurotechnology) will change our lives and our society even more rapidly, directly and invasively.
This part zooms in on biotechnology-related examples: ectogenesis, organoids and human enhancement.
Ectogenesis
One interesting example that has caused a fair bit of controversy, is ectogenesis. This word refers to the development of a technology that allows a fetus to develop into a baby, outside a woman’s body. According to futurist Gerd Leonhard, this might become possible within the next 20 years.
This method would be less invasive for the mother, more efficient, and most likely cheaper as well. But should we decide to do it, just based on those rational arguments? How would this affect the emotional maturity of the child, or the bond between a mother and her child?
Growing brains outside of the body
Another example that raises a lot of ethical questions is the creation of mini organs, also known as organoids. With this technique, small organs are grown in a laboratory. That way, researchers can test whether certain medicines are effective for a patient; it is also a potential substitute for animal testing.
Several scientists published an article in Nature to inform the general public about a particular type of mini organs: the brain. Mini brains are currently being used by scientists to study disorders such as autism and schizophrenia.
Consciousness of mini brain?
Jeantine Lunshof (MIT Media Lab in Boston) was interviewed about this for a Dutch magazine. She stated: ‘People need to know about this before we start applying these techniques in all types of ways. Cultivating brain-like structures raises a good deal of ethical questions. Would those brains also be able to think? Would they experience consciousness? Could this constitute a way to create back-up brains?’
Cultivating mini-brains raises a fair deal of ethical questions.
Jeantine Lunshof (MIT Media Lab)
And science just keeps progressing. One example is a brain organoid of a girl with a genetic defect: scientists were able to study the defect thanks to these new techniques, although the organoid of course wasn’t able to ‘think’ the way the girl can. Another example: a group of scientists kept the brains of decapitated pigs alive for 36 hours.
Jeantine Lunshof aptly stated: ‘After a while, these types of technologies might change our definitions of life and death, and consciousness.’
Video bio-ethics
And here is my video about bio-ethics:
Human Enhancement Ethics
Human Enhancement concerns the use of science and technology to improve, increase and change human functions. For example: a boost in intelligence, strength, power or compassion.
As you might imagine, there are a host of issues and dilemmas in this domain. Read this article if you want to know more:
I also read the book The Ethics of Human Enhancement: Understanding the Debate. In this video, published on my YouTube channel, I share what I learned from it:
This part zooms in on potential solutions.
Solutions
In his books, Verbeek refers to ancient Greece to point us in the right direction for our ethical dilemmas. According to him, there’s a reason why the word ‘hybris’ (hubris: overconfidence) and the word ‘hybrid’ (human + technology) are so similar. The ancient Greeks realized that the use of technology doesn’t come without risks. One of those risks is that people can become reckless and power-hungry.
One of the solutions that I personally like goes back to the title of one of his books: On Icarus’ Wings. In the Greek myth it refers to, Daedalus made wings of feathers and wax for his son, to help him escape the island of Crete. He warned his son: ‘Don’t fly too low, or your wings will get wet in the sea. Don’t fly too close to the sun, or the wax will melt and you will lose your feathers.’
That’s how we could look at the role of technology as well. Don’t become overconfident and recklessly try out everything you can, but don’t be too conservative either, because then you’d halt progress that could be used in beautiful ways. Such as curing diseases (with neurotechnology), solving food scarcity (with genetic modification) or further advancing humanity (with artificial intelligence).
Coding ethics
Kevin Kelly, author and technologist, believes that technology, and particularly artificial intelligence, will force humans to reflect more on ethics. That’s because we’re forced to code these kinds of ethical questions into deep learning and machine learning systems.
Entrepreneur and futurist Nell Watson is working on this as well. I met her at the Brave New World Conference 2017 in Leiden. Watson is currently working on a database of ethical dilemmas, to help train artificial intelligence systems. According to her, the way we teach computers to distinguish pictures of cats from pictures of other animals, can also be used to teach artificial intelligence what is and isn’t ethical.
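The idea of training a model on a database of labeled dilemmas, much like teaching it to recognize cat pictures, can be sketched in miniature. The following toy bag-of-words classifier is my own illustration of supervised labeling, not Nell Watson’s actual system, and the ‘database’ is hypothetical:

```python
# Toy illustration of supervised classification over labeled examples,
# in the spirit of training a system on a database of ethical dilemmas.
# This is a minimal sketch with made-up data, not a real ethics engine.
from collections import Counter

# A hypothetical, hand-labeled "database" of dilemmas (assumed data).
training_data = [
    ("return the lost wallet to its owner", "ethical"),
    ("donate part of your income to charity", "ethical"),
    ("help a stranger carry their groceries", "ethical"),
    ("steal the lost wallet you found", "unethical"),
    ("lie to a customer to close the sale", "unethical"),
    ("dump chemical waste in the river", "unethical"),
]

def train(data):
    """Count how often each word appears per label (a naive word-count model)."""
    counts = {"ethical": Counter(), "unethical": Counter()}
    for text, label in data:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the label whose training texts share the most word occurrences."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

model = train(training_data)
print(classify(model, "lie to the customer"))  # prints "unethical"
```

Real systems use far richer models and vastly more data, but the principle is the same: the machine only ‘learns’ whatever morality is encoded in the labeled examples we feed it, which is exactly why curating that database is an ethical act in itself.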
What is my conclusion?
It’s important that we keep experimenting and then discussing the consequences. Not just scientists, but also psychologists, anthropologists, sociologists, and experts from other fields. Professor Gary Marcus (New York University) shared a similar message at the World AI Summit 2017 in Amsterdam.
Our discussion focused specifically on artificial intelligence, but I think this goes for all technologies out there. We shouldn’t just rely on experts to give their opinion, but on everyone who wants to contribute to the discussion.
Because in the end, that’s what technology ethics is about: how do we increase our quality of life through the use of technology? Just to reiterate: technology is a part of us, whether we like it or not.
No quick fix
In a world full of neurotechnology, genetic modification, artificial intelligence, biohacking, human enhancement, and human augmentation, we’re only going to become more intertwined with technology: we’re going to modify ourselves and upgrade our bodies with hardware and wetware. We’ll certainly make mistakes while we’re trying to find our way in this, but I’m convinced that ultimately, we’re going to be better off for it as humans.
But we do have to be smart about it. Or even better: wise. And no, there is no technical quick fix for that.
You can hire me for more knowledge and insights about this topic.
Hire me!
Please contact me if you want to invite me to give a lecture, presentation, or webinar at your company, at your congress, symposium, or meeting.
Here you can find additional resources, like videos, websites and books.
Panel
At TBX 2022 I was in a panel with Jason Hart (Rapid7) and Stefan Buijsman (TU Delft). The panel was moderated by Monique van Dusseldorp.
Reading list
This is a specific type of ethics:
- What are the ethics of human enhancement?
These are relevant books:
What are your thoughts on technology ethics? Leave a comment!