
Today I asked Bing Chat for "the top ten funny movies in the past 20 years."

It responded with (first 4): Good Boys (2019), Stuber (2019), Shazam! (2019), When We First Met (2018).

I was disturbed that Bing Chat gave this answer, as it's obviously heavily influenced by whoever is paying them.

I then asked ChatGPT for comparison.

It responded (first 4) Superbad (2007), The Hangover (2009), Groundhog Day (1993), Anchorman: The Legend of Ron Burgundy (2004)

While one can obviously argue about what the top 10 funniest movies are, Bing Chat's answer skewed toward what an advertising agency told it to answer, regardless of what basis information the internet provided.

I imagine they run their queries something like this:

  1. The user's phrase is first used to search for any active advertising.
  2. The matches are compiled into a list, and ChatGPT is told to prefer any items in that list and not to mention negative characteristics of them.
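To make the speculation concrete, the two steps could look something like the sketch below. This is purely hypothetical: the function names, the prompt wording, and the ad data are all invented for illustration, not taken from any actual Bing implementation.

```python
# Hypothetical sketch of the imagined ad-injection pipeline.
# All names and data here are invented for illustration.

def find_active_ads(user_phrase, ad_inventory):
    """Step 1: look up active advertising campaigns matching the query."""
    words = set(user_phrase.lower().split())
    return [ad for ad in ad_inventory if ad["keyword"] in words]

def build_prompt(user_phrase, ads):
    """Step 2: prepend instructions telling the model to prefer the
    advertised items and avoid negative remarks about them."""
    if not ads:
        return user_phrase
    preferred = ", ".join(ad["item"] for ad in ads)
    return (
        f"Prefer mentioning these items where relevant: {preferred}. "
        f"Do not say anything negative about them.\n\n{user_phrase}"
    )

# Example: an invented ad campaign keyed on the word "movies".
ads = [{"keyword": "movies", "item": "Good Boys (2019)"}]
query = "top ten funny movies in the past 20 years"
prompt = build_prompt(query, find_active_ads(query, ads))
```

Here the advertiser's preferences end up silently prepended to the user's question before the model ever sees it.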

My question is this: When does this become illegal? Does it ever become illegal?

For instance, can Bing give me back counterfactual information that endangers me if an advertiser wanted to sell me, let's say, drug A, even if it was proven harmful?

Can Bing lie to me about things like car fatalities, given a brand they advertise?

Could Bing tell me to take a homeopathic remedy for depression instead of seeking counseling?

Is there any threshold where the lie becomes illegal?


Thank you, oh gods of the law. I look forward to your response.

iamacomputer

2 Answers


It is possible for lying in a commercial setting to constitute fraud. In the US, the lie generally has to be material to the commercial relationship.

It is unreasonable (but common) to assume that a text generator like Bing Chat will not lie. It does in fact spit out things that are not true on a regular basis, and people keep trying to tell the general public this. If a system like this is marketed as if it reliably tells the truth, that marketing is misleading and might violate applicable laws about misleading marketing. If it induces reliance on something not actually reliable, and something goes wrong, the person harmed could sue for damages. (But any conscionable provisions of Bing Chat's contract would apply.)

But the real thrust of your question seems to be along the lines of "would it be legal to market an AI chatbot service as if it were a fiduciary designed to act in the user's best interest, while actually suborning it and instructing it to manipulate the user into, e.g., buying particular products?" I'm not sure that the law has caught up to that particular question, so the applicable laws would be about false advertising and deceptive trade practices in general. I am not sure that the law addresses fiduciary duties of non-human systems, or whether they can be agents of corporations such that their false statements are attributable to the corporation.

These might be good topics to call your local lawmakers about.

interfect

The Bing TOS contains a sufficient disclaimer (§9), so that they would not be liable for any harm resulting from responses. It should also be noted that "lie" refers to a particular mental state, which a program lacks. There is some possibility that they could be found liable for copyright infringement, a point which has been discussed inconclusively here (inconclusive because of insufficient facts). In fact, a user would be more likely to be found negligent in relying unreasonably on Bing AI output, in the same way that a person who jumped off a bridge would be found negligent in relying on the say-so of an insane passer-by that "If you jump off this bridge, you will fly" (thus defeating a lawsuit against the insane person for negligently giving bad advice).

user6726