Info Quality May Vary

Search engines like Google and Bing are able to sidestep legal liability for their search results because they present third-party material. Will that change as AI-powered experiences take center stage?

Section 230 of the Communications Decency Act (CDA)

Search engines are just tools that reflect the internet and society as a whole. While the right to be forgotten and other legal initiatives (primarily from the EU) have reshaped search engines slightly over time, Google's liability for the blue links it serves users has changed little in 20 years.

Google is no more accountable for its search results than Meta (Facebook's parent company) is for the contents of individual social media posts. But that may all change with generative AI. Section 230 of the Communications Decency Act protects social media companies and search engines from liability for content posted by users. But what happens when that content is manipulated by AI before it's presented?

Will AI-powered Content be Protected?

Can a search engine claim AI-powered search results are just a reflection of the internet or society? The degree of processing applied to the data and content ingested by LLMs makes that claim difficult to justify from a technical perspective. And if tech companies can't rely on the legal protection referenced above, what are they doing to insulate themselves from lawsuits?

“Info quality may vary” probably isn't a strong enough disclaimer for Google's Search Generative Experience (SGE) results, even though SGE is still in testing. Based on our testing and research, SGE results are far from perfect and sometimes very problematic. And with news breaking that Google Bard conversations have been indexed in search results, calls for more oversight are essential.

Our View

Society needs more effective oversight and regulation over the use of AI. Legislative bodies (e.g., Congress in the US) need to act to protect people from being harmed in the AI gold rush. AI regulation isn't about stifling innovation or preventing a Terminator-like doomsday; it's about preventing corporations from causing societal harm in a blind pursuit of profit.

Social media has proven time and again that reinforcing harmful biases through technology can have detrimental outcomes (e.g., genocide). AI has the potential to revolutionize the way we live, but we need safeguards to prevent disaster. And those safeguards need to come from outside the companies creating (and profiting from) these technologies.

In the meantime, we recommend being extremely careful what information you share with generative AI platforms.


While we were writing this blog article, Google announced that it is expanding SGE while browsing to teens in the US aged 13 and up. Google's Director of Product Management claims that the company designed guardrails to prevent inappropriate or harmful content from surfacing.