Civil litigation almost always involves providing your sensitive information to the other parties and lawyers in the case. It can be uncomfortable, however, to share things like financial records or your text messages with the same people who are suing you.
To deal with this, the parties in a case will often enter into a stipulated protective order—a document that simultaneously facilitates the disclosure of private information and ensures that it remains protected. Stipulated protective orders usually have the same structure: each party can mark as “Confidential” any document that contains sensitive information; the parties must not publicly disclose information they receive that has been marked “Confidential”; and specific requirements must be satisfied before confidential information is submitted to the Court.
But what if the opposing party or lawyer wants to upload your confidential information to a GenAI platform for review and analysis? Would that be an improper disclosure? Can you stop it?
On March 30, 2026, a federal court addressed this issue for the first time. A judge in the District of Colorado issued an order in Morgan v. V2X, Inc., granting a defendant’s request to amend the case’s stipulated protective order and to bar the plaintiff from “upload[ing], input[ting], or submit[ting] Confidential Information into any mainstream AI tool like standard ChatGPT, Claude, Gemini, or similar platforms.” Morgan v. V2X, Inc., No. 25-cv-01991, Dkt. No. 65 (D. Colo. Mar. 30, 2026).
The Case Background. The facts in the Morgan case are fairly common in employment cases: the plaintiff claims he was subjected to a hostile work environment and eventually terminated due to his race. Id. at 2. The plaintiff filed suit and, during discovery, the court learned that both parties “appear to be using AI in connection with their litigation work.” Id. at 3.
A dispute arose about “how AI should or should not be used in connection with” information that had been designated as “Confidential” under the stipulated protective order. Id. at 3. The defendant asked the court to (1) compel the plaintiff to disclose what AI tool he had been using and (2) amend the case’s stipulated protective order to address the use of AI. Id. at 4.
The Morgan Decision. After recognizing that GenAI is a tool whose use can reflect a lawyer’s protected mental impressions, the Morgan court ordered the plaintiff to disclose which GenAI platform he had put the defendant’s confidential information into, because the platform’s name itself was not protected. Id. at 10-1
The court then amended the case’s stipulated protective order because, as written, it did not adequately address the sensitivity of confidential information. The court made clear that it was not limiting either party’s ability to use GenAI in ways that do not involve uploading confidential information; the parties remained free to use it for research and analysis. But when it came to reviewing confidential information, the protective order failed to take into consideration “the real risks associated with mainstream tools that persistently collect and store data and could compromise confidentiality.” Id. at 15.
As a result, the court amended the stipulated protective order to prohibit the parties from uploading confidential information to a GenAI platform unless “the AI provider is contractually prohibited from: (1) storing or using inputs to train or improve its model; and (2) disclosing inputs to any third party except where such disclosure is essential to facilitating delivery of the service.” Id. at 14.
The court acknowledged that this “will (at least for now) bar the parties from using most, if not all, mainstream low-to-no-cost AI to process Confidential Information,” which would especially disadvantage pro se litigants, like the plaintiff in the case. Id. at 15. But this limitation was necessary to ensure the confidentiality of the information in the case. “What this Order requires is that Confidential Information not be entrusted to platforms that lack the contractual safeguards described above, regardless of the sophistication or apparent trustworthiness of the tool.” Id. at 15.
Takeaways. The Morgan decision is consistent with the purpose of stipulated protective orders and how courts regard GenAI. Stipulated protective orders need to ensure that confidential information exchanged in the case remains protected. They serve little purpose if a party can upload private information to a website whose operator can do with it whatever it pleases. A plaintiff would understandably be upset if his former employer uploaded his tax returns and medical records onto a tech company’s website and had no way to limit how the tech company used that information.
Unless the GenAI platform is legally or contractually prohibited from using your inputs and data for training purposes or to serve other users, everything you share with platforms like ChatGPT and Claude can be regarded as unprotected, non-privileged, and not confidential. It is no different from a casual conversation with a stranger in the checkout line at the supermarket. That is effectively the conclusion the Morgan court reached. Users of GenAI should, therefore, closely review the relevant terms of service and consider paying for an enterprise-tier account that offers additional protections.
For more information on artificial intelligence (AI), please contact Jason Kelly, Esq., AIGP, CIPP/US/E at jason@annagueymccann.com.
