Happy February! Welcome to my many new subscribers this week. The first edition of each month is free to all subscribers. If you find this helpful and would like to receive a weekly article, you can subscribe to a paid tier and help support my work!

—Pooja

🔥 Hot Take of the Week

We’re watching a widening gap between basic AI literacy and AI development, and it’s leaving a strange hole in the middle.

  • Group A: people using 1-2 AI tools daily to get through work/life

  • Group B: people building the systems that power those tools

  • And it feels like… there’s nothing in between?

The “middle” is the missing role: Applied AI Literacy.

Not “power users,” not “engineers,” but clinicians (OTPs included) who can evaluate, verify, translate, and implement AI outputs responsibly in real workflows.

If AI platforms are influencing healthcare decisions and expectations, we need to be in the training rooms, because the middle is where safety, context, and accountability get built.

Companies like Mercor are actively hiring subject matter experts to do exactly that.

Do you agree or disagree with this hot take?


📰 AI News of the Week

One curated update from technology, healthcare, or regulation, translated into what it actually means for your daily practice or studies.

This week’s headline:
What’s happening?

A large randomized trial found that AI-supported mammography screening can improve cancer detection performance compared with standard screening workflows, reporting changes in measures like sensitivity, specificity, and interval cancer outcomes.

Why this matters:

This is a real-world signal that AI is moving from “cool demo” to a key clinical workflow component, not replacing clinicians, but reshaping how detection, triage, and follow-up may occur.

High-level data takeaways:
  • Randomized trial scale: 100,000+ participants in Sweden

  • AI-supported screening showed higher sensitivity for invasive cancer vs standard screening in the reported analysis.

  • Outcomes reported include interval cancer rate and diagnostic performance measures (sensitivity and specificity)—key indicators for screening effectiveness.

  • The big implementation question isn’t, “Does it work?” It’s how it changes workflow, staffing, oversight, and patient follow-up.

What to pay attention to:
  • Workflow design: where AI sits (first read, second read, triage, adjudication) matters more than the AI itself.

  • False reassurance vs false alarm: performance stats don’t automatically translate into patient experience without careful pathway design.

  • Equity + access: who benefits first depends on rollout sites, resourcing, and follow-up infrastructure (not just the algorithm).

Why this matters to OT:

Even when AI is used “upstream” (radiology/screening), the downstream impact hits OT through:

  • earlier diagnosis → earlier rehab needs

  • shifts in treatment timing → shifts in discharge planning + caregiver readiness

  • new patient expectations driven by AI-supported results

Bottom line: AI is reshaping the care timeline, and OT lives inside that timeline.

My goal is to keep these article reviews as non-biased as possible. If you find something to be biased, please feel free to write a comment or send me a message. I’m always willing to learn and improve.

—Pooja

🧰 Tool of the Month: Perplexity AI

A beginner-friendly look at one AI tool: what it’s good for, where it falls short, and when not to use it.

Last month, we dove into ChatGPT - what it is, what it’s good for, and where it falls short.


This month, we’re diving into Perplexity.

What it is:

Perplexity is an AI search + answer engine that synthesizes information from the web and provides citations you can click to verify.

What it’s good for:

Evidence-informed practice when you need a fast on-ramp.

When you provide clinical, educational, or professional context, it can:

  • quickly orient you to an unfamiliar diagnosis

  • pull a starting set of sources (clinical guidelines, review papers, reputable organizations)

  • generate a “map” of what to read next, with citations

Perplexity’s Pro Search is positioned for deeper research workflows, including academic papers and databases.

Where it falls short:

  • Citations ≠ quality. A cited source can still be low-quality, outdated, or not clinically applicable.

  • Synthesis can blur nuance. It may combine findings across heterogeneous populations or settings unless you constrain it (remember my review on context last week!).

  • Paywall + access limitations. It may cite sources you can’t fully access, or summarize them imperfectly.

(And yes, this is why this week’s ethics corner below is on verification.)

⚡ A Use Case Example

A real OT-relevant example of the tool in action (e.g., summarizing a complex neuro-rehab study or drafting a patient-friendly home program).

Scenario:

An OT in their first 2-3 years of practice encounters a rare diagnosis and isn’t sure how to approach assessment and intervention.

Example Prompt:

“Help me get evidence-informed OT starting points for [diagnosis] in adults.
1. Give me a brief clinical overview (2-3 bullets).
2. List 5-8 high-quality sources (guidelines, systematic reviews, RCTs, reputable clinical organizations) with citations.
3. Extract OT-relevant implications: likely functional impacts, common precautions or contraindications, and assessment domains to prioritize.
4. End with ‘what I should read first’ in order.”

Why this works:
  • You’re using Perplexity for what it’s best at: sourcing + orientation with citations.

  • You’re constraining scope (adult, OT lens, domains).

  • You’re producing a reading plan, not outsourcing your clinical judgment.

💬 Prompting 101

One simple strategy to talk to AI so you get better, more tailored results.

This week’s tip:

Tell the AI what “good” looks like for you.

Try this instead of that:

Instead of:

“Give me interventions for [diagnosis].”

Try:

“Give me OT intervention ideas for [diagnosis] with these constraints: outpatient setting, adult, mild cognitive impairment, low-cost home program. Success criteria: must be occupation-based, measurable, and include any potential contraindications and/or precautions.”

This is the bridge from “prompting” to workflow-ready outputs.

⚖️ Ethics Corner

A brief reflection on privacy, bias, and professional responsibility. Because in OT, literacy without ethics is just risk.

This week’s ethical consideration:

Verification of sources/citations
Perplexity makes it easier to see sources, but the responsibility to verify is still yours.

Think about this:
If you act on an AI summary without opening the citations, you may:

  • apply findings from the wrong population (peds vs. adults)

  • miss contraindications

  • inherit bias from low-quality sources

  • carry forward outdated recommendations

Rule of thumb:
If it will influence patient education, treatment planning, documentation, or billing justification, open at least 2-3 primary sources before you trust the synthesis.

The risk isn’t that AI gives wrong information; it’s that it gives context-free information that feels right.

🔁 Looking Ahead

I start each week with a hot take. These aren’t random, and I’m not pulling them out of thin air. I’ve been in this space since ChatGPT launched in 2022, and I’ve spent the past year learning and teaching generative AI - dos, don’ts, and best practices. My hot takes develop from countless experiences with others as I navigate this industry.

Then I share a headline. There are tens to hundreds of new AI headlines every single day. I try to sift through them and find one that’s relevant or adjacent to occupational therapy, and I try to break it down in a way that’s easy to understand and relevant.

I select one tool to focus on each month, for two reasons: so I don’t overwhelm anyone, and so I have more time to share nuance. January was ChatGPT. February is Perplexity.

The rest of the newsletter follows the same format each week: a prompting tip, a use case example, and an ethical consideration.

If you’d like to see anything else here, my inbox is open!


a note from me:

If this is your first article, thank you so much for being here! As I build this, I want to make sure it’s helpful to you. Please comment or email with any feedback or suggestions!

— Pooja

What are your initial thoughts on this layout/format?

Please share! I'd love to build something you'd love!
