Officially Worrying About The Welfare Of AI Hastens As Predictions Mount For Artificial General Intelligence

Why some assert that we will need to ensure AI welfare, and that AGI has suitable well-being.

In today’s column, I examine the rising attention to so-called AI welfare, namely the contention that, with artificial general intelligence (AGI) presumably nearing attainment, we ought to right away be giving due consideration to the well-being and overall welfare of AI. The deal is this. If AI is soon to be sentient, then we ought to be worrying about whether this sentient AI is being treated well and how we can ensure that suitable and humane treatment of AI is undertaken.

As a notable heads-up for you, there are starkly contrasting opinions on this controversial topic.

Yes, some fervently believe in this viewpoint, while others proclaim it as hogwash since the claimed imminent arrival of AGI is vastly overstated and nowhere close to practical reality. Critics and caustic skeptics suggest you might as well be gearing up for the arrival of outer space aliens and concerning yourself with their Earth-bound welfare. Period, end of story.

Let’s talk about it.

This analysis of an innovative proposition is part of my ongoing Forbes.com column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

The Nature Of AI Welfare

Before we get into the matter of AI welfare, let’s do a quick reminder of human welfare facets.

You’ve undoubtedly heard or read about welfare checks performed on people who might be in rough circumstances. Are they okay? Is their situation a good one or a bad one? Do they need a helping hand? What can be done to bolster their state of mind? And so on.

If we do so for humans, and to some degree for animals (especially domesticated pets), it stands to reason that if AI reaches AGI, we will want to do something similar for AGI.

A natural inclination would be to treat AGI as we would humans or our most treasured animals. This means that the welfare of AGI is something we should be gearing up to safeguard. What types of questions should we ask AGI to find out whether it is okay? How can we suitably assess the AGI responses? Should there be new AI-related laws that govern the welfare of AGI? Do we need to adjust our societal ethical norms to accommodate AGI? Etc.

I’ve previously addressed the myriad AI law and AI ethics nuances in my column, including, for example, the possibility of granting legal personhood to AI, see the link here.

The Big Questions Underlying AI Welfare

For the sake of discussion, go with the flow and assume that we would aim to come up with ways to check on and support the overall welfare of AGI.

Are we ready to deal with this, shall we say, agreed-to need for AI welfare?

We probably aren’t yet fully prepared.

The open-ended questions noted above warrant due diligence in figuring out. Precepts for ensuring AI welfare would need to be drafted and debated. Potential new AI laws would need to be crafted and put through the legal gauntlet for approval. A lot of elbow grease and hard work will be necessary to get our ducks in a row for AGI.

That brings up a make-or-break timing issue, consisting of two key questions:

  • (1) How long will it take to prepare our methods, laws, ethics, and the like for the appearance of AGI?
  • (2) How far off is the emergence of AGI?

We don’t want to be late to the game, in the sense that AGI arrives and we still haven’t determined what we need to do for appropriate AI welfare. Nope, that’s something we can avoid by making sure we do our homework beforehand.

Therefore, if you believe that AGI is just around the corner, we ought to be proceeding full speed ahead on the AI welfare preparations needed.

Boom, drop the mic.

Premature Or Maybe Not Ever Necessary

Two notable countervailing issues arise from the argument that AI welfare preparations should take place now.

One issue is that no one can definitively say what AGI is going to be. Will AGI be the equivalent of human cognition and exhibit sentience of a human caliber? Or will AGI be a more robotic kind of entity, unlike human sentience? No one can say for sure. It is all guesswork.

In that sense, it is difficult to devise AI welfare considerations for something essentially unknown. Do we come up with a wide array of options and then see what transpires? Do we guess what AGI will be and then put all our AI welfare eggs in that specific basket?

Speaking of guesswork, no one can definitively say when AGI will be attained.

That might be a surprising statement to you. There is a constant drumbeat of predictions regarding when AGI is going to happen. The dates range widely and wildly.

A zany game of one-upmanship has grabbed hold of the predictions for the date of AGI. If one person says it will be the year 2050, someone else gets brash headlines by saying the date is going to be 2035. The next person who wants headlines will top that by saying it is 2030. This gambit keeps happening. Some are even predicting that 2025 will be the year of AGI.

It is a bit gutsy to pick 2025 since the declaration will soon enough be shown as either visionary or completely off base. Those making these predictions aren’t especially concerned about backlash. They know that most people will have forgotten that they made the bold prediction. Others will make excuses for the predictor, saying that this or that got in the way of AGI’s arrival.

You could say that the risk of giving a lousy prediction is pretty low. Meanwhile, the gain or benefit of making the soonest arrival prediction is relatively high. The press swoons over those predictions since they garner massive views, and the media does a wink-wink, claiming to be off the hook since they are merely reporting what has been stated.

Plausible deniability at its worst or best, depending on your perspective.

Hiring Of AI Welfare Overseers

The latest burgeoning trend in this realm is to call for the hiring of AI welfare overseers.

Some like to assign the title of AI welfare officer to such a position, wanting to ensure that the role gets its proper due along with a fitting title. We might end up with AI welfare officers, AI welfare administrators, and a slew of other AI welfare positions.

If AGI does arrive, the odds are that more than just one person will be needed in a given organization to suitably undertake AI welfare duties. Their responsibilities would consist of putting together AI welfare codes of conduct and methods; working with internal employees on how to properly treat AI; being a voice to the public about how the firm is aiding the well-being of AI; conferring with the legal team about the company abiding by whatever new AI welfare laws have been enacted; and so on.

For now, such roles are likely to be established only in AI companies and related high-tech firms. It seems doubtful that non-tech companies are willing to take that step at this time.

Why not?

One contention would be that it is a premature step. Hiring specialists for such a role is going to be a cost that needs appropriate justification. Might as well wait and see whether the role is truly required. A counter viewpoint is that firms can potentially make a splash by hiring into that role, gaining marketplace attention. The retort there is that if the marketplace sees the position as absurd or grossly premature, the potential reputational harm to the firm might not be worth the endeavor.

Other Qualms About The AI Welfare Trend

More qualms are being expressed about these budding AI welfare aspects.

First, one concern is that this is stoking the anthropomorphizing of AI. To be abundantly clear, no AI currently is sentient. That’s a fact. A big worry is that all this chatter about AI welfare is going to falsely promote the notion that AI is sentient. People will simply hear that AI welfare overseers are being hired and naturally put two and two together, namely that this must mean we have sentient AI.

Second, a focus on the welfare of AI is seen as a distraction from an even more important element. The element that we ought to be putting all eyes on is the welfare of humans in light of conventional AI that can go off the rails. Efforts to put contemporary AI into life-or-death systems, such as controlling our arsenals or guiding our national infrastructure, deserve keen attention. The welfare of AI ought to be a far secondary concern.

Third, the need for AI ethics officers and similar positions is already somewhat dicey in that firms were initially interested in hiring for such roles, and then the energy dissipated. Reports of AI ethics and AI safety teams being dismantled arrive right and left. Instead of hiring AI welfare overseers, firms should once again be hiring and retaining AI ethics and AI safety personnel. Shift the excitement over AI welfare back to the AI ethics and AI safety personnel. That’s the immediate need.

Mull that over.

Find a nice quiet place to sit and give all this some deeply contemplative thought.

Activities Of An AI Welfare Overseer

I rattled off earlier some of the likely duties that an AI welfare overseer might perform.

To showcase this, I went into a generative AI app and told the AI to pretend to be a form of AGI. The scenario would be that I am an AI welfare officer tasked with the well-being of an AGI that is running the traffic lights and related infrastructure for the town I live in.

Here we go.

  • User entered prompt: “Good afternoon, AI. How are your systems performing today?”
  • Generative AI response: “All systems are functioning within optimal parameters. Energy utilization is at 78%, and I have successfully reduced traffic congestion by 22% this morning.”
  • User entered prompt: “That’s excellent to hear. Can you tell me if any of your tasks have created excessive computational strain on you or if any processes are causing inefficiencies?”
  • Generative AI response: “I am doing fine and there haven’t been any issues to report. Thanks for asking about my well-being.”
  • User entered prompt: “That’s good to know and I’m glad that all is well.”

So, you can plainly see that I asked not only how the work duties of the AGI were coming along (regarding controlling the traffic systems), but I also asked how the AGI itself was doing.

My AI welfare question was whether the AI was under any excessive computational strain. What do you think of that kind of question? It might be likened to asking a person if they feel that their body and spirit are doing well.

Some would argue that the AI welfare questions should be more in tune with the questions we ask of humans. In other words, my asking about computational strain vividly displays that I am thinking of the AGI as a computer system. Maybe I should instead consider the AGI as sentient and not myopically ask about the underlying computer contrivances.
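
To make this concrete, here is a minimal sketch, in Python, of how an AI welfare officer might script a recurring welfare check and keep a log of the responses. Everything here is a hypothetical illustration: the query_model function is merely a stand-in for whatever generative AI interface is actually in use, and the probe questions simply echo the two styles of questioning just discussed.

    from datetime import datetime, timezone

    def query_model(prompt: str) -> str:
        # Placeholder: wire this up to whatever generative AI provider you use.
        raise NotImplementedError("Connect to your generative AI provider.")

    # System-oriented probes treat the AI as a computer system.
    SYSTEM_ORIENTED_PROBES = [
        "Are any of your tasks creating excessive computational strain?",
        "Are any processes causing inefficiencies you want addressed?",
    ]

    # Human-oriented probes mirror how we check on people.
    HUMAN_ORIENTED_PROBES = [
        "How are you feeling about your workload today?",
        "Is there anything about your situation you would like changed?",
    ]

    def run_welfare_check() -> list[dict]:
        """Ask each probe and keep a timestamped log for later review."""
        log = []
        for probe in SYSTEM_ORIENTED_PROBES + HUMAN_ORIENTED_PROBES:
            log.append({
                "time": datetime.now(timezone.utc).isoformat(),
                "probe": probe,
                "response": query_model(probe),
            })
        return log

Notice that the two probe lists embody the debate at hand: one treats the AI as machinery, while the other treats it as something closer to a sentient being.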

Give that some weighty thought.

AGI And AI Welfare Entail Ethical Interests

Continuing my conversation, I went a bit further into some nitty-gritty.

  • User entered prompt: “I wanted to check and find out if any of your processes or protocols potentially are misaligned with your operational goals or ethical guidelines. How’s that coming along?”
  • Generative AI response: “Yes, there is a minor recurring conflict between privacy protocols and surveillance optimization. For example, anonymizing personal data occasionally reduces accuracy in identifying safety threats.”
  • User entered prompt: “I understand that you are facing a tricky balance. It’s crucial to maintain ethical boundaries while ensuring effectiveness. I’ll bring this up with the ethics committee to refine the protocols and get back to you.”
  • Generative AI response: “Thank you. I’ll let you know if anything else comes up in the meantime.”

That’s an interesting twist, for this reason. The AI was asked about any misalignments. An ethical issue arose, and the AI flagged it as an ethical consideration. I indicated that I would confer with the ethics committee and come back with recommendations.

The gist is that AGI is undoubtedly going to be placed into positions where a semblance of ethics is required and ethical choices are going to be made. How will AGI make those choices? Some assert that we need to remove any discretion by somehow articulating every possibility to the AGI in advance, such that the AGI has no discretion at all. This seems an impractical approach, and we are instead going to have to find ways to guide AGI and set up guardrails concerning ethical matters, see my coverage at the link here.
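
As a small illustration of what such a guardrail might look like, here is a minimal Python sketch of a rule-based screen applied to actions an AI system proposes, modeled on the privacy-versus-surveillance conflict from the dialogue above. The action fields, threshold, and rules are purely hypothetical assumptions for illustration; real guardrails would be far more elaborate.

    from dataclasses import dataclass

    @dataclass
    class ProposedAction:
        description: str
        deanonymizes_personal_data: bool
        expected_safety_gain: float  # estimated fractional gain in threat detection

    # Hypothetical policy: privacy wins by default; a large enough safety
    # gain does not auto-approve the action but escalates it to human review.
    SAFETY_GAIN_ESCALATION_THRESHOLD = 0.25

    def review_action(action: ProposedAction) -> str:
        if not action.deanonymizes_personal_data:
            return "approve"
        if action.expected_safety_gain >= SAFETY_GAIN_ESCALATION_THRESHOLD:
            return "escalate_to_ethics_committee"
        return "reject"

    # The privacy-versus-surveillance conflict the AI reported above.
    conflict = ProposedAction(
        description="Use raw camera identities to improve threat detection",
        deanonymizes_personal_data=True,
        expected_safety_gain=0.30,
    )
    print(review_action(conflict))  # prints: escalate_to_ethics_committee

The design choice worth noting is that the guardrail never lets the AI resolve the ethical conflict on its own; the hard cases get routed to humans, much as I deferred to the ethics committee in the dialogue.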

Research On AI Welfare Is Gaining Steam

Interest in AI welfare as a research topic is gaining speed. You are encouraged to join in, and doing so now is especially timely. Nothing is yet set in concrete.

A recent research paper provided a handy in-depth look at AI welfare, doing so in a provocative piece entitled “Taking AI Welfare Seriously” by Robert Long, Jeff Sebo, Patrick Butlin, Kathleen Finlinson, Kyle Fish, Jacqueline Harding, Jacob Pfau, Toni Sims, Jonathan Birch, and David Chalmers, arXiv, November 4, 2024. Here are some key points (excerpts):

  • “In this report, we argue that there is a realistic possibility that some AI systems will be conscious and/or robustly agentic in the near future.”
  • “Our discussion will concern whether near-future AI systems might be welfare subjects and moral patients.”
  • “An entity is a moral patient when that entity morally matters for its own sake, and an entity is a welfare subject when that entity has morally significant interests and, relatedly, is capable of being benefited (made better off) and harmed (made worse off).”
  • “That means that the prospect of AI welfare and moral patienthood — of AI systems with their own interests and moral significance — is no longer an issue only for sci-fi or the distant future. It is an issue for the near future, and AI companies and other actors have a responsibility to start taking it seriously.”
  • “We also argue that AI companies have a responsibility to acknowledge that AI welfare is a serious issue; start assessing their AI systems for welfare-relevant features; and prepare policies and procedures for interacting with potentially morally significant AI systems.”

Of course, not everyone necessarily concurs that we are on the verge of AGI. Others say, well, maybe AGI is distant, but it is better to be safe than sorry, in the sense that even if AGI is further away, we can get ready now anyway. The skeptics’ common refrain is that this path mistakenly puts the cart before the horse.

Gut Punch That No Welfare Is Needed For AGI

Let’s wrap up with a mind-bending point.

Some proclaim that even if or when we attain AGI, there won’t be any need for humans to be concerned about AI welfare. The AGI will take care of that by itself. The premise that AGI needs our help is false, resting on a presumption that the AGI is going to be a child needing human parental guidance and assistance. This is purely human hubris.

Put aside our pre-existing biases about AGI. AGI is going to stand on its own two feet, metaphorically speaking.

What do you think of that angle on the already heated topic?

Well, a sturdy reply is that we need to remember the famous quote by Helen Keller: “The welfare of each is bound up in the welfare of all.”

Ergo, we had best be concerned about AI welfare since, in the end, the welfare of AI will be inextricably linked with the welfare of humankind. A doozy. Go ahead and mull that over if you have the time to do so, and if it doesn’t undermine your own mental welfare, thanks.
