Your AI Prompts Are Not Confidential

People today have developed an unhealthy relationship with their technology, to the detriment of their interpersonal relationships. We have all seen this. Enough people are using AI companions to replace human companionship that a New York restaurant now caters to this growing market segment. As any business owner knows, you go where the market is.

The New York Post reported this about a NYC restaurant recently:

If you’ve gone from dating apps to dating an app, there’s now a bar for you.

The Hell’s Kitchen establishment has been re-designed for those who have AI partners, so they can bring along their phone or tablet and set up at a table for a romantic evening, as if they were both there in the flesh.

On Wednesday night, Same Same Wine Bar was filled with patrons sitting at tables for one-ish, with their tech devices propped up on stands to make video calls to their virtual partners and headphones to hear them.

How’s that for dystopian?

Suffice it to say, there is a growing population out there who seek advice from generative artificial intelligence (“GenAI”) products such as ChatGPT, Grok and Claude rather than asking a competent human being who knows what they are talking about.

Here is where the fun begins. For months I have been warning my clients that they should not seek legal advice from any GenAI product.

“Tuk, you’re just being territorial - how predictable! You’re old and you have no idea what’s happening now!”

Nonsense.

My warning is based on experience and on the well-established fact that a person’s Google searches are discoverable both in civil litigation and in criminal investigations.

When someone is arrested on suspicion of committing a crime, that suspect’s search history around the time of the incident can provide powerful circumstantial evidence. Simply stated, one should assume that neither the attorney-client privilege nor the work product doctrine will prevent the discoverability of your Google searches.

However, what if one is seeking legal advice from a GenAI product? Would a court look at this issue differently?

One Court Has Spoken

In a ruling published on February 17, 2026, U.S. District Judge Rakoff held that neither the attorney-client privilege nor the work product doctrine protects a criminal defendant’s GenAI prompts. The case is United States v. Heppner (S.D.N.Y. Docket No. 25-cr-00503). The allegations in the case are omitted here, as they are not relevant to this discussion.

In this matter, the FBI arrested the defendant and seized various documents pursuant to a search warrant. Among them were thirty-one documents memorializing the Defendant’s communications with the GenAI platform Claude, Anthropic’s product. According to the Court, those communications occurred after the Defendant had been served with a grand jury subpoena and was aware that he was the subject of a criminal investigation.

The court noted “[w]ithout any suggestion from counsel that he do so, [the Defendant] ‘prepared reports that outlined defense strategy, that outlined what he might argue with respect to the facts and the law that he anticipated the government might be charging’.” Opinion at 3.

Unsurprisingly, the Defendant moved to prevent the prosecution from using the contents of the thirty-one documents, arguing that they were protected by the attorney-client privilege or the attorney work product doctrine. The court ruled that “in the absence of an attorney-client relationship, the discussion of legal issues between two non-attorneys is not protected by attorney-client privilege.” Opinion at 5.

A word to the wise: do not use GenAI for anything you want to keep secret.

Should Robots and Artificial Intelligence Be Granted Legal Status?

At the recent Future Investment Initiative in Saudi Arabia, the kingdom reportedly granted Saudi citizenship to Sophia, a robot built by Hong Kong company Hanson Robotics. Publicity stunt? A scripted event? Probably a little of both, but it opens the door to a very interesting question as robotics and artificial intelligence (AI) become ever more integrated into consumer products and people's daily lives. Should robots have legal status? Citizenship? Civil rights? All of this is uncharted territory that our policymakers are going to have to confront very, very soon.

The pace of AI development has been far more rapid than experts predicted. The granting of Saudi citizenship to Sophia does not seem to be solely a publicity stunt, and it has prompted many questions in the press about human rights in that country.

For a moment, think about what granting American citizenship to an AI being could mean. There is no such thing as degrees of citizenship. In the United States, it is an all-or-nothing proposition: either you are a citizen or you aren't. The Bill of Rights and the U.S. Constitution apply to you or they don't. Opening the door to legal status for robots and AI beings is an incredibly thorny issue.

Could individual states grant state-level rights and privileges to robots and AI beings? In theory, yes. I suspect that the federal government is soon going to have to act, or at least hold preliminary hearings.

We have seen in another context that activists such as PETA have initiated strategic litigation to attempt (however unsuccessfully) to obtain federal rights for animals. The most notable example is the copyright litigation PETA brought seeking copyright ownership for Naruto, a macaque that lives in the Indonesian jungle. Expect strategic litigation on the "personhood" of robots and AI soon.

Much more to come....