Happy April!
Spring has finally sprung. I thought this winter would never end here in Chicago.
April is a busy month for me! I’m headed on a two-week international trip, tagging along with my hubby to a conference in Dublin, Ireland. That’s immediately followed by a week in Anaheim for the 2026 AOTA INSPIRE Annual Conference. Let me know if you’ll be there! Would love to catch some of you IRL.
Also, we hit 100 subscribers yesterday! Thank you SO much for following along. Please share with your friends, peers, and colleagues who may be interested.
Note to readers: Going forward, some links in these weekly newsletters may be affiliate links, which help support my work.
Hot Take & AI Headline This Week
You never know what your public data will be used for. Remember 23andMe?
A few years ago, millions of people willingly handed over their most intimate biological data to a consumer genetics company. The pitch was personal insight — ancestry, health risks, family connections. What most people didn't think about was what would happen to that data if the company failed, got acquired, or changed its terms.
Now, 23andMe is bankrupt. And that data — your DNA — is an asset on a balance sheet.
We're doing the same thing with our words and our pictures. Every post, every comment, every photo uploaded to a public platform is training data for someone's model. You consented to it somewhere in a “terms of service” you didn't read. And you have very little say in what happens to it next.
This isn't a reason to disappear from the internet. It's a reason to be intentional about what you put there — and to pay attention when the legal system starts asking the same questions we should have been asking all along.
Which brings us to this week's headline.
This week’s headline:
What’s happening?
Encyclopedia Britannica & Merriam-Webster have filed a lawsuit against OpenAI, alleging that their copyrighted content was used without permission to train large language models. Britannica joins a growing list of publishers, authors, and news organizations pursuing legal action against AI companies over the use of their intellectual property in training data.
Why this matters:
This isn't just a publishing industry story. The core legal question — whether scraping and training on copyrighted content constitutes infringement — has implications for every profession that produces written knowledge. That includes academic journals, clinical practice guidelines, OT textbooks, and open-access research.
High-level takeaways:
The legal framework around AI training data is actively being contested in courts right now — outcomes are not settled
If courts side with publishers, it could fundamentally change what data AI companies can use — and what models are trained on going forward
Open-access publications, including OT research, are not necessarily exempt — "open access" means free to read, not free to train on
The quality and accuracy of AI outputs in healthcare settings may eventually be tied to how these copyright cases resolve
What to pay attention to:
Watch whether courts treat AI training as transformative use — which would favor AI companies — or as reproduction — which would favor publishers. The outcome will shape what AI tools are built on, and by extension, what they know and don't know about your field.
Why this matters to OT:
Occupational therapy has a relatively small body of published research compared to medicine or nursing. If OT-specific literature was used to train AI models without permission, the profession has standing to care about that — both ethically and practically. And if future models are restricted from training on certain content, the already-limited representation of OT knowledge in AI outputs could shrink further.
This is exactly why OT needs to be in these conversations — not watching from the sidelines while other professions and industries define the rules.
The Gold Standard for AI News
AI keeps coming up at work, but you still don't get it?
That's exactly why 1M+ professionals working at Google, Meta, and OpenAI read Superhuman AI daily.
Here's what you get:
Daily AI news that matters for your career - Filtered from 1000s of sources so you know what affects your industry.
Step-by-step tutorials you can use immediately - Real prompts and workflows that solve actual business problems.
New AI tools tested and reviewed - We try everything to deliver tools that drive real results.
All in just 3 minutes a day
In Other News This Week
Pokémon Go Players Are Training Robots
Niantic has been using its massive network of players to collect real-world spatial data, now being used to train robotic navigation systems. A striking example of how consumer behavior at scale becomes infrastructure for technologies most users never see.
A Deep Dive on How LLMs Actually Work
If you've been using these tools and want to understand what's actually happening under the hood, this video is worth your time. Understanding the mechanism makes you a better, more skeptical user.
Anthropic Is Hiring a Weapons Expert
The goal: prevent “catastrophic misuse.” For readers following the Anthropic/Pentagon story, this is consistent with the position that got them blacklisted: Anthropic is not stepping back from its safety stance. If anything, they're formalizing it, building the internal expertise to define and defend those lines more precisely.
The best marketing ideas come from marketers who live it. That’s what The Marketing Millennials delivers: real insights, fresh takes, and no fluff. Written by Daniel Murray, a marketer who knows what works, this newsletter cuts through the noise so you can stop guessing and start winning. Subscribe and level up your marketing game.
Tool of the Month: Gemini
Gemini: Intro & Overview
What it is:
For many of you, it could replace most of your ChatGPT queries.
Gemini is Google's large language model, available through gemini.google.com and increasingly embedded across Google's ecosystem — Gmail, Docs, Drive, and more. It's developed by Google DeepMind and designed to handle a wide range of tasks, including writing, research, summarization, coding, and multimodal inputs like images and documents.
Where Gemini stands out is in its integration. If you're already living in Google Workspace, Gemini is the tool most likely to meet you where you already are.
What it’s good for:
Gemini performs well across a broad range of everyday tasks, particularly when speed and accessibility matter:
quick drafting and editing of emails, documents, and summaries
research assistance with real-time web access built in
working across Google products — summarizing a Drive document, drafting a Gmail response, pulling from your calendar
multimodal tasks — analyzing images, reading uploaded documents, working with mixed content
shorter, faster outputs where you need something usable quickly rather than deeply reasoned
For clinicians and educators already using Google Workspace, Gemini's embedded integrations make it one of the lowest-friction AI tools available.
Where it falls short:
Gemini can feel less precise than Claude on tasks requiring deep reasoning, nuanced synthesis, or long-form structured analysis. It also has a tendency to produce confident-sounding outputs that require the same verification discipline as any AI tool — real-time web access doesn't eliminate hallucination.
When NOT to use it:
Complex, multi-layered analysis or long document synthesis — Claude handles depth better
Tasks requiring careful ethical framing or nuanced professional judgment in the output
When you need to work outside the Google ecosystem entirely
Gemini is the right tool when you need something fast, integrated, and good enough — and when your workflow already runs through Google.
Next week: Gemini in the clinic — including a closer look at NanoBanana.
Prompting Tip & Use Case Example
Give Gemini a Workflow, Not Just a Task
Gemini is built for speed and integration. The prompts that work best with it reflect that — they're specific, context-light, and workflow-oriented.
This week’s prompting tip: Tell Gemini where this fits in your day
Instead of asking Gemini for a standalone output, frame the request around the workflow it's part of. This is especially useful when you're using Gemini inside Google products, where it already has access to context you'd otherwise have to explain.
Use Case Example:
Scenario: An OT practitioner has just finished a busy eval day and needs to send a follow-up email to a referring physician summarizing a new patient's functional status and recommended plan of care. They have their notes but are running behind.
Goal: Draft a concise, professional summary email quickly without starting from scratch.
I'm an occupational therapist writing a follow-up email to a referring physician after an initial evaluation. My patient is a 68-year-old male, referred for functional decline following a hip replacement. Key findings: decreased bilateral upper extremity strength, difficulty with ADL setup, fall risk moderate. Recommended plan: 2x/week OT for 6 weeks focusing on ADL retraining and home safety. Draft a concise, professional email summarizing these findings and the plan of care. Tone should be clinical but readable.
Why this works:
Giving Gemini the clinical context upfront eliminates back-and-forth
Specifying tone — clinical but readable — prevents outputs that are either too casual or too dense
The request is bounded and specific, which plays to Gemini's strengths
It saves 10–15 minutes on a task that doesn't require your clinical expertise to complete — just your clinical information
Use Gemini to clear the administrative runway so you can reinvest that time where it actually matters.
Ethics Corner
“Free” Tools and the Cost of Convenience
Gemini is free. So is ChatGPT's base version. So is Claude's. And that's worth thinking about.
When a powerful technology is offered at no cost to the end user, the business model is worth understanding. For most consumer AI tools, you are not just the user — you are also a source of data. How your prompts are used, whether they contribute to model training, and what the platform's data retention policies actually say are questions worth asking before you type anything sensitive into a free tool.
This matters more in healthcare than almost anywhere else.
This week’s ethical consideration:
Know the data policy of every AI tool you use — especially the free ones.
Ask yourself:
Does this platform use my inputs to train its models?
What is the data retention policy — how long are my prompts stored?
If I'm using a free consumer version, am I subject to different terms than an enterprise or paid user?
Would I be comfortable if my employer, my licensing board, or my patient saw exactly what I typed into this tool?
For Google specifically: Gemini's data practices vary depending on whether you're using the consumer version, a Google Workspace account, or an enterprise plan. The settings are not identical. Check yours.
Convenience is real. So is the cost of not reading the fine print.
A Note from Me
If this is your first article, thank you so much for being here! As I build this, I want to make sure it’s helpful to you. Please comment or email with any feedback or suggestions!
What are your initial thoughts on this layout/format?
powered by Beehiiv.
Create your own with 20% off for 3 months here. Or see the ad below for 30% off!