Why Are Consumers Losing Trust in Generative AI?


As interest in generative artificial intelligence (AI) picks up pace, a new report reveals that consumers are increasingly concerned about how companies use the technology.

2023 has brought a boom in generative AI tools such as ChatGPT, Bard, Midjourney, and many more. However, the trust gap has widened as companies explore generative AI to scale up their businesses.

Salesforce: Distrust Rising

Salesforce released its “State of the Connected Customer” report, which collected data from over 14,300 business buyers and consumers.

As the generative AI hype grows, the Salesforce survey shows that only 13% of consumers completely trust companies to use AI ethically, while 10% completely distrust companies’ use of the technology.


Only 13% of consumers completely trust companies to use AI ethically. Source: Salesforce survey

Need for Human Involvement

Furthermore, the survey reveals that consumers are concerned about data security risks, the unethical use of AI, and bias. Over 89% of consumers believe it is important to know whether they are communicating with an AI or a human.

Meanwhile, 80% of consumers say it is important for a human to stay in the loop to validate the output generated by an AI tool.

In June, US Senators proposed bipartisan AI bills that would make it mandatory to keep humans in the driver’s seat for critical decisions. The proposals would also require the government to properly disclose when it uses AI to interact with the public.

Consumers’ opinions on how companies using AI can maintain their trust. Source: Salesforce

Amid the growing trust gap, Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce, shared her insights on prioritizing consumers’ trust. She said:

“It’s always been important to collect quality data and ensure transparency and consent in the collection process. But it’s not just about taking data in. It’s also about what happens to that data once we have it. 

Companies may need data as much as ever, but the best thing they can do to protect customers is to build methodologies that prioritize keeping that data — and their customers’ trust — safe.”

In July, executives from AI giants like Google and OpenAI committed at the White House to safe and transparent AI development.

Beyond the US, the UK is working to publish an AI regulation white paper, and in November it will host the world’s first global summit on AI regulation.
