
Welcome to March!

March is one of my favorite months: birthdays, anniversaries, spring—everything starts feeling lively again after the cold and gloom of winter.

I started drafting this edition a day before the Claude vs. White House debacle. I had planned to kick us off with another hot take, a potentially new diagnosis, a new tool overview, and a new product I’ve been quietly working on.

But I’m not going to gatekeep something so urgent.

Since this is the first week of the month, this edition is free for all subscribers! Today’s edition gives you the tools to migrate away from OpenAI, should you choose to do so.

Want one every week? You can upgrade to a monthly or annual subscription below.

Thanks so much for being here!

Note to readers: Moving forward, some links in these weekly newsletters may contain affiliate links to help support my work.

Hot Take & AI Headline of the Week

Say goodbye to ChatGPT

Last week, Anthropic made headlines for drawing a firm line in its work with the US Government by refusing to permit its AI to be used for domestic surveillance or fully autonomous weapons systems. The government's response was to designate Anthropic as a supply chain risk to national security (Feb. 27, 2026).

To put that in context: supply chain risk designations have historically been reserved for foreign adversaries. Applying one to a domestic company for declining to enable mass surveillance or autonomous lethal systems is, to put it plainly, an extraordinary escalation, and one that is likely to be contested in court for years to come.

The vacuum didn't stay empty for long. With Anthropic sidelined from key government contracts, OpenAI stepped in to fulfill the remaining requirements. Make of that what you will. The strangest part is that OpenAI’s contract includes the very stipulations Anthropic was defending. So is this just a political move? Decommissioning one AI system and integrating a replacement can take a year or more, so what is this really about?

What happened next may have been predictable: Anthropic saw a significant surge in new users. When a company puts its ethics on the line publicly, people notice—and in this case, many chose to vote with their accounts.

In a recent survey here, a number of you asked about migrating away from ChatGPT. This week, I've got two step-by-step guides to help you do exactly that: one for Gemini, and one for Claude.

My initial take was very strongly worded and biased, so this rewrite is brought to you by Claude.

Your Resources this Week

ChatGPT » Claude

  • Overview

  • What transfers, what doesn’t

  • Step 1: Exporting data

  • Step 2: Transferring data

  • Step 3: Transferring memory and personalizations

  • Step 4: Handling conversation history

  • Step 5: Rebuilding custom assistants

  • Step 6: Re-uploading files

  • Step 7: Learn about Claude

ChatGPT » Gemini

  • Overview

  • What transfers, what doesn’t

  • Step 1: Exporting data

  • Step 2: Transferring data

  • Step 3: Transferring memory and personalizations

  • Step 4: Handling conversation history

  • Step 5: Rebuilding custom assistants

  • Step 6: Re-uploading files

  • Step 7: Learn about Gemini

Stop Drowning In AI Information Overload

Your inbox is flooded with newsletters. Your feed is chaos. Somewhere in that noise are the insights that could transform your work—but who has time to find them?

The Deep View solves this. We read everything, analyze what matters, and deliver only the intelligence you need. No duplicate stories, no filler content, no wasted time. Just the essential AI developments that impact your industry, explained clearly and concisely.

Replace hours of scattered reading with five focused minutes. While others scramble to keep up, you'll stay ahead of developments that matter. 600,000+ professionals at top companies have already made this switch.

New Tool of the Month: Claude

Since Claude is publicly available (and the new fan favorite), it made sense to cover this one next.

What it is:

Claude is an LLM developed by Anthropic that focuses heavily on reasoning, safety, and long-form analysis. Compared to many generative AI tools that prioritize speed and quick outputs, Claude is designed to handle:

  • complex prompts

  • longer documents

  • nuanced writing and synthesis

  • structured analysis

Think of Claude less as a rapid drafting assistant and more as a thinking partner for complex material.

What it’s good for:

Claude performs particularly well when tasks require depth rather than speed, such as:

  • summarizing long research papers or policy documents

  • synthesizing multiple ideas into structured arguments

  • refining long-form writing (articles, lectures, reports)

  • reviewing large documents and identifying key themes

  • analyzing complex prompts that require reasoning

For clinicians, educators, and researchers, this makes Claude especially useful for knowledge work and critical thinking tasks.

Where it falls short:

Claude is not always the best choice for:

  • quick brainstorming or rapid short outputs

  • highly structured formatting tasks

  • visual or presentation-heavy outputs

  • tools that require extensive integrations or plugins

It tends to prioritize thoughtful responses over speed, which can feel slower compared to other AI tools (especially if you’re hopping over from ChatGPT).

When NOT to use it:

Claude is not the best tool when you need:

  • fast, lightweight drafting (use Gemini instead)

  • visual content or presentation creation (try NanoBanana, NotebookLM, or Gamma)

  • real-time data retrieval or research browsing (remember Perplexity?)

  • workflow integrations across multiple apps (try local Gemini or Copilot integrations)

Claude shines when the task requires thinking through complexity, not just generating content quickly.

Tired of news that feels like noise?

Every day, 4.5 million readers turn to 1440 for their factual news fix. We sift through 100+ sources to bring you a complete summary of politics, global events, business, and culture — all in a brief 5-minute email. No spin. No slant. Just clarity.

Prompting Tip & Use Case Example

Claude prefers thinking > output

Claude performs best when prompts are structured like a thinking exercise, not just a request for output.

Unlike many AI tools that prioritize quick responses, Claude tends to produce stronger results when you ask it to analyze, reason, and organize information step-by-step (like the migration guides above!).

This week’s prompting tip:

Ask Claude to explain its reasoning before giving conclusions.

Instead of immediately asking for a final answer, guide the model through the analytical process. This produces more transparent and thoughtful outputs.

Use Case Example:

Scenario: An OT educator is reviewing a newly published research paper on a rehabilitation approach they’re unfamiliar with and wants to understand the implications before discussing it with students.

Goal: Quickly break down the paper, understand the strength of evidence, and identify what actually matters for clinical practice.

Analyze the following research article.

First, summarize the study design, sample population, and key findings.
Then, evaluate the strength of the evidence by identifying:

  • study limitations

  • potential biases

  • whether the conclusions are fully supported by data

Finally, explain what aspects of this research may or may not translate into occupational therapy practice.

Prompt Example for Claude

Why this works:

  • It forces the AI to walk through the reasoning process rather than jump to conclusions.

  • It separates evidence analysis from clinical interpretation.

  • It reduces the risk of overconfident summaries.

  • It aligns the output with how clinicians are trained to evaluate research.

Claude tends to perform especially well when prompts are framed like structured critical thinking, not just information retrieval.

Ethics Corner

Times like these show us exactly why ethics in AI matter.

The recent clash between AI platforms and U.S. government agencies over security safeguards isn’t just a tech story.

It’s a governance story.

When powerful AI systems are being considered for use in national security, policymaking, healthcare, or public infrastructure, the conversation quickly shifts from “What can this tool do?” to “What protections exist if something goes wrong?”

That’s the difference between capability and accountability.

This week’s ethical consideration:

Security safeguards and system accountability

AI tools are becoming increasingly capable of analyzing sensitive information, drafting policy recommendations, and influencing decision-making at scale.

Without clear guardrails, the risks include:

  • exposure of sensitive data

  • misuse of generated outputs

  • overreliance on systems without human verification

  • unclear responsibility when errors occur

Think about this:

If a powerful AI tool is influencing decisions in healthcare, government, or public policy, we need to ask:

  • Who built it?

  • Who audits it?

  • Who controls access to the data it processes?

  • Who is accountable if something fails?

Ethics in AI isn’t about slowing innovation.
It’s about making sure the systems we build—and the systems we rely on—are worthy of our trust.

A Note from Me

If this is your first article, thank you so much for being here! As I build this, I want to make sure it’s helpful to you. Please comment or email with any feedback or suggestions!

With a new month, so many changes, and your feedback on previous newsletters, I’m giving this new template a try! What do you think?

powered by Beehiiv.
Create your own with 20% off for 3 months here, or see the ad below for 30% off!

Why is everyone launching a newsletter?

Because it’s how creators turn attention into an owned audience, and an audience into a real, compounding business.

The smartest creators aren’t chasing followers. They’re building lists. And they’re building them on beehiiv, where growth, monetization, and ownership are built in from day one.

If you’re serious about turning what you know into something you own, there’s no better place to start. Find out why the fastest-growing newsletters choose beehiiv.

And for a limited time, take advantage of 30% off your first 3 months with code GROW30.
