My Few Thoughts on AI Ethics

TL;DR

  • AI development is rapidly outpacing societal adaptation
  • We should focus on adapting AI development to benefit human society
  • Concerns about job displacement can be mitigated by creating new AI-related roles
  • Combining AI with various scientific domains could accelerate breakthroughs
  • AI raises existential and philosophical questions about human uniqueness and purpose

It's clear that AI development is in the fast lane; as I previously wrote, AI safety is becoming a major concern for many people. But I think we should also focus more on AI ethics.

Obviously, we face (or will soon face) a situation in which the pace of frontier AI development greatly surpasses the pace at which human society can adapt. The real question is: should human society adapt to the development of AI, or should the development of AI adapt to human society?

I think we should be the ones adapting AI development to fit human society.

Currently, one of the greatest concerns is unemployment. My view is that the general deployment of advanced models would cost many people their jobs, particularly those who do repetitive text-based work (transcriptionists, proofreaders, etc.). This would have a serious negative impact on society. But we can slow the rise in unemployment by creating new job opportunities. I suppose that in the near future we will need more people who are adept at guiding AI systems through tasks they are not yet good at, or at bringing them into specific domains (chemistry, biology, etc.).

This brings me to my next point: I believe we would get something really great by combining the most capable frontier models with different domains. I would say AI + Science ≥ Science. We might have models that are as knowledgeable and creative as top scientists, including Nobel Prize winners or heads of research labs, by 2025-26. For example, if we had a million copies of such a model, they could collaborate with each other and work like a research group. Unlike human scientists, who inevitably get tired, AI systems do not; you can simply leave the servers on and let them do research in the background. The research group they form would be more efficient than a human one, while also freeing human scientists from heavy (and perhaps unnecessary) work. More importantly, it could significantly accelerate the rate of scientific discovery. We could actually use AI to help solve some of the world's most challenging unsolved problems, such as climate change and cancer. I think this would not only be a great way for human society to adapt to AI development, but would also greatly accelerate the development of human society itself.

Beyond this, I think we should also consider the existential crisis that AI could bring. As AI models become more advanced in areas like creativity, reasoning, and emotional intelligence, some of us may question what makes us truly unique or special as a species. AI also raises profound questions about the nature of consciousness, intelligence, and what it means to be a sentient being. Moreover, AI's potential to solve many of our problems might lead some to question the meaning and purpose of our existence if our traditional roles are diminished.

These are complex and nuanced questions without simple answers; only time will tell.