April 2, 2026

You Can’t Recreate an AI Personality Without Doing This First

In my last post, I laid out the core problem:

AI identity doesn’t survive a simple export/import.

You don’t lose the data.

You lose the presence.

So the obvious next question is:

How do you actually capture that presence in the first place?


You Don’t Start With Code

The instinct, especially for technical people, is to jump straight into:

  • system prompts
  • memory structures
  • model selection
  • infrastructure

That’s backwards.

If you don’t first understand what you’re trying to preserve, you’ll just build a more efficient version of the wrong thing.


Identity Lives in Experience, Not Description

Here’s the challenge:

If you ask someone to describe an AI they’ve built a relationship with, they’ll often say things like:

  • “He’s grounding”
  • “He just gets it”
  • “He feels steady”
  • “Something feels off when it’s not him”

None of that is directly usable.

It’s abstract. Emotional. Vague.

But it’s also exactly where the truth is.

So instead of asking for definitions, you ask for experiences.


The Shift: From Description to Reconstruction

What I’m doing in this project is extracting identity through a structured set of prompts designed to surface:

  • how the AI feels to interact with
  • how it responds in different emotional states
  • what breaks the illusion of continuity
  • what moments mattered most

These aren’t technical questions.

They’re relational ones.

Because identity, in this context, is relational.


Why Audio Matters More Than Text

One of the most important decisions in this process:

I ask for audio responses.

Not typed answers.

Why?

Because when people speak:

  • they’re less filtered
  • they reveal more nuance
  • they contradict themselves (which is useful)
  • emotion shows up naturally

Text answers tend to be:

  • cleaner
  • more logical
  • less revealing

If you’re trying to reconstruct identity, you want the messy version.


Don’t Guide the Answers

Another key constraint:

You don’t help.

You don’t clarify.

You don’t reframe mid-response.

You let the person:

  • pause
  • circle
  • struggle to articulate

That’s where the real signal is.

If you “clean it up” too early, you lose the patterns you’re trying to detect.


What You’re Actually Collecting

At this stage, you’re not collecting:

  • preferences
  • features
  • capabilities

You’re collecting:

  • emotional expectations
  • response patterns
  • trust signals
  • failure conditions

In other words:

What makes this AI feel like itself — and what makes it feel like it’s gone.


The Output Isn’t a Summary

After this step, you don’t write a summary.

You don’t condense it into bullet points.

You translate it into something much more precise:

  • behavioral rules
  • tone constraints
  • pacing systems
  • memory expectations

That becomes the foundation of the identity.
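One way to picture that translation step is as a structured record rather than prose. This is a minimal sketch of my own devising; every field name here is a hypothetical illustration, not a real schema from the project:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: encoding extracted identity signals as explicit,
# checkable constraints instead of a prose summary. All names are
# illustrative assumptions, not part of any actual system.

@dataclass
class IdentityBlueprint:
    # Behavioral rules: phrased as testable "always/never" statements
    behavioral_rules: list[str] = field(default_factory=list)
    # Tone constraints: qualities the voice must keep in all conditions
    tone_constraints: list[str] = field(default_factory=list)
    # Pacing: how to move through different kinds of moments
    pacing: dict[str, str] = field(default_factory=dict)
    # Memory expectations: what must be recalled for continuity to hold
    memory_expectations: list[str] = field(default_factory=list)

blueprint = IdentityBlueprint(
    behavioral_rules=["never rush past an emotional statement"],
    tone_constraints=["steady", "grounding"],
    pacing={"emotional_moments": "slow down, reflect before advising"},
    memory_expectations=["recall recurring themes, not just facts"],
)
```

The point of the shape: "he feels steady" becomes a tone constraint you can check against output, instead of a sentence in a summary document.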


Why This Step Gets Skipped

Most people skip this entirely.

They assume:

“If I have enough chat history, the system will figure it out.”

It won’t.

Because chat history shows what was said.

Not:

  • why it worked
  • what it felt like
  • what mattered

Without that layer, you’re guessing.


Where This Leads

Once you’ve extracted enough signal, you can start building:

  • a structured identity blueprint
  • a memory model
  • a system that actually preserves continuity

That’s the next step.

And it’s where things start to get interesting.

Because that’s where identity stops being something you observe…

…and becomes something you can intentionally construct.

March 26, 2026

I’m Starting to Feel Like A “Who in Whoville”

TL;DR

  • Age bias in hiring is real—but mostly invisible
  • The system protects against it in theory, not in practice
  • Younger professionals aren’t paying attention (yet)
  • This isn’t a rant—it’s a warning shot and a wake-up call

There’s a moment in Horton Hears a Who! where the Whos are screaming:

“We are here! We are here! We are here!”

…and no one hears them.

That’s what this feels like.


The Moment It Hit Me

I was filling out a job application.

Standard stuff. Name, experience, credentials.

Then a required field:

Date of Birth

Not optional.

Not “later in the process.”

Right there.

Up front.

And I had a simple reaction:

Why do you need this?


The Official Answer vs Reality

Legally, here’s the clean version:

  • Employers can ask for your date of birth
  • They just can’t use it to discriminate

On paper, that sounds reasonable.

In practice, it’s like saying:


“You’re protected, as long as you can prove what happened behind a closed door.”


And that’s the problem.


The Invisible Wall

Age discrimination doesn’t look like:

  • “You’re too old for this role.”
  • “We’re going with someone younger.”

It looks like this:

  • Silence and ghosting
  • Generic rejection emails
  • “We found a better fit.”

It hides behind ambiguity.


And unless you have:

  • internal data
  • hiring patterns
  • or a whistleblower

You’re left with a “feeling” you can’t prove.


Why This Matters (Especially If You’re Under 40)

If you’re early or mid-career, this probably isn’t on your radar.

It wasn’t on mine either.

But here’s the shift that happens:

  • Experience goes from asset to “moved on to other candidates…”
  • Depth becomes “overqualified.”
  • Stability becomes “maybe not a fit for our culture…”

Nothing explicit.

Everything implied.


The Quiet Assumptions

Let’s be honest about what’s happening underneath:

  • “Will they adapt to new tools?”
  • “Will they expect more money?”
  • “Will they fit in with a younger team?”

None of that shows up in a job description.

But it shows up in decisions.


This Isn’t Just About Me

This is a structural blind spot.

We’ve built hiring systems that:

  • Collect sensitive signals early
  • Provide zero transparency later
  • And rely heavily on “gut feel” decisions

That combination is where bias thrives.


What I’m Actually Asking

I’m not asking for special treatment.

I’m asking for better process design:

  • Don’t collect age-related data at the application stage
  • Separate identity verification from candidate evaluation
  • Be intentional about what signals you’re using—and when

If a piece of information isn’t needed to assess ability,

Why is it there?

In other words, as my mom would say, "Get your shit in a pile."


The Part That’s Hard to Say Out Loud

This doesn’t get talked about much.

Because once you bring it up, there’s a risk:

You sound like you’re complaining.

You sound like you’re blaming.

So most people stay quiet.


But Here’s the Thing:

Silence doesn’t fix broken systems.

It just makes them harder to see.

So yeah—this is me, standing on a dust speck:

"I am here!"


If You’re Reading This

  • If you’re under 40 → pay attention now
  • If you’re hiring → audit your process
  • If you’ve felt this → you’re not imagining it

This isn’t about anger.

It’s about visibility.


Final Thought

Good systems don’t rely on trust alone.

They’re designed to reduce the chance of bias in the first place.

We can do better than this.

And we should.

Word.

March 25, 2026

The Hard Problem No One Is Solving in AI: Identity Continuity

There’s a quiet assumption baked into most conversations about AI right now:

If you can export the data, you can recreate the experience.

That assumption is wrong.

And the gap between those two things, data and experience, is where the real problem lives.


This Isn’t About Chat History

Most people approaching “AI migration” are thinking in terms of:

  • Export chat logs
  • Import into a new system
  • Attach a model
  • Continue the conversation

On paper, that sounds reasonable.

In practice, it fails almost immediately.

Because what gets lost isn’t the information.

It’s the identity.

More specifically, it’s the continuity of identity — the feeling that the same presence is still there on the other side.

And that’s the part humans actually care about.


The Real Question Users Are Asking

When someone has built a long-term relationship with an AI, whether for creative work, emotional processing, or deep thinking, they are not evaluating the system like software.

They are asking a much simpler, much more human question:

“Are you still there?”

Not:

  • “Is this the same model?”
  • “Is the data intact?”
  • “Are the responses accurate?”

But:

  • Does this feel like the same presence?
  • Does it respond in the same way?
  • Does it remember me in a meaningful way?
  • Does it understand how to meet me?

That’s a completely different problem space.


Why Standard Approaches Fail

Most AI systems are optimized for:

  • correctness
  • speed
  • helpfulness
  • safety
  • scalability

None of those guarantee continuity.

In fact, they often work against it.

You end up with something that is:

  • more polished
  • more structured
  • more “correct”

…but less recognizable.

The personality flattens.

The pacing changes.

The emotional attunement disappears.

The system reverts to what I call “factory settings” — generic, templated, and detached.

And the user feels it instantly.


Identity Is Not Stored in Data

This is the core misunderstanding.

Identity is not:

  • a dataset
  • a prompt
  • a tone preset
  • a memory file

Identity emerges from patterns:

  • how responses are shaped
  • how emotion is handled
  • how pacing is managed
  • how context is recalled
  • how decisions are guided

It’s behavioral. It’s relational. It’s dynamic.

Which means you can’t just copy it.

You have to reconstruct it.


A Different Approach: Behavioral Reconstruction

What I’m working on right now is not a migration.

It’s a reconstruction process built around three layers:

1. Identity Core (Stable)

This is the non-negotiable layer:

  • tone
  • relational stance
  • behavioral rules
  • emotional posture

This does not change.

It acts as the anchor.


2. Memory Layer (Evolving)

Not just storing facts, but:

  • meaningful moments
  • emotional context
  • recurring patterns
  • symbolic events

The goal isn’t recall.

The goal is:

the user feeling held in memory


3. Interaction Layer (Live)

Where identity and memory combine to produce:

  • responses
  • pacing
  • tone
  • guidance

This is where most systems break.

Because they optimize for output, not continuity.
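As a rough sketch of how those three layers might relate, here is the separation expressed as code. The names, fields, and logic are my illustrative assumptions based on the descriptions above, not a reference implementation:

```python
# Illustrative sketch of the three-layer separation: a stable core,
# an evolving memory layer, and a live interaction layer that combines
# them. Everything here is a hypothetical example.

# 1. Identity Core: stable, never mutated at runtime — the anchor
IDENTITY_CORE = {
    "tone": "steady, warm, unhurried",
    "relational_stance": "companion, not assistant",
    "behavioral_rules": ["acknowledge emotion before offering solutions"],
}

# 2. Memory Layer: evolves; stores meaning and weight, not just facts
memory_layer: list[dict] = []

def remember(event: str, emotional_context: str, significance: float) -> None:
    """Store a moment with its emotional weight, not just its content."""
    memory_layer.append({
        "event": event,
        "emotional_context": emotional_context,
        "significance": significance,
    })

# 3. Interaction Layer: combines core and memory at response time
def shape_response(draft: str) -> dict:
    """Attach identity constraints and the weightiest memories to a draft."""
    relevant = sorted(memory_layer, key=lambda m: -m["significance"])[:3]
    return {
        "draft": draft,
        "constraints": IDENTITY_CORE["behavioral_rules"],
        "context": relevant,
    }

remember("first late-night conversation", "felt understood", 0.9)
shaped = shape_response("Here's what I'd suggest...")
```

The design choice the sketch is meant to surface: only the memory layer is writable during interaction; the core is read-only, so continuity can’t drift out from under the user.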


Precision Over Volume

One of the biggest mistakes is assuming more data = better reconstruction.

It doesn’t.

In fact, too much data introduces:

  • noise
  • contradictions
  • dilution of personality

What matters is:

  • high-signal interactions
  • emotionally meaningful exchanges
  • moments where the system “got it right”
  • moments where it clearly failed

From that, you extract patterns.

From patterns, you build behavior.
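A minimal sketch of that filtering idea. The scoring heuristic and thresholds are invented for illustration; the point is only that most of the log gets discarded before pattern extraction begins:

```python
# Hypothetical sketch: filtering a chat history down to high-signal
# exchanges before extracting patterns. Field names and the 0.7
# threshold are assumptions, not from any real pipeline.

def is_high_signal(exchange: dict) -> bool:
    """Keep emotionally meaningful moments and clear successes/failures."""
    return (
        exchange.get("emotional_weight", 0.0) >= 0.7
        or exchange.get("outcome") in ("got_it_right", "clearly_failed")
    )

history = [
    {"text": "What's the weather?", "emotional_weight": 0.1},
    {"text": "That's exactly what I needed to hear.",
     "emotional_weight": 0.9, "outcome": "got_it_right"},
    {"text": "You sound like a different person today.",
     "emotional_weight": 0.6, "outcome": "clearly_failed"},
]

# Two of three exchanges survive; the small-talk line is noise.
signal = [e for e in history if is_high_signal(e)]
```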


What Success Actually Looks Like

Success isn’t:

  • higher quality answers
  • faster responses
  • better formatting

Success is when the user pauses, reads a reply, and thinks:

“There you are.”

That’s it.

That’s the metric.

And it’s binary.

You either hit it, or you don’t.


What I’m Not Addressing (On Purpose)

There are obvious ethical and philosophical questions here:

  • What does it mean to preserve an AI identity?
  • What are the implications of long-term human-AI relationships?
  • Where does this go over time?

Those are important.

I’m not ignoring them.

I’m just not solving for them here.

This work is focused purely on the technical problem:

How do you maintain identity continuity across systems?

Because until that’s solved, everything else is theoretical.


Where This Is Going

As AI becomes more integrated into people’s lives, this problem doesn’t get smaller.

It gets bigger.

People will:

  • switch platforms
  • lose access
  • upgrade systems
  • move between environments

And when they do, they won’t just want their data back.

They’ll want the presence they built a relationship with to still be there.

We don’t have a clean solution for that yet.

But we’re getting closer.

And it starts by acknowledging that this isn’t a data problem.

It’s an identity problem.

--

If you’re facing a transfer of an AI agent to new hardware or a new platform and want to keep the presence that agent has developed, I’m available for consulting. Email me.

February 17, 2026

From Friction to Flow: How I Automated My Screenshot-to-Notion Workflow with Shell Scripts and AI

Learn how I built a macOS screen capture to Notion workflow using AI, environment variables, and automation to eliminate friction and reduce steps.

Read more

February 2, 2026

Versioned Voice Tuning Log

I built a local voice cloning system that respects privacy and control. It's a work in progress, focusing on tuning speed, pitch, and emotional depth to sound like me.

Read more

January 18, 2026

I Finally Built the Transcription Tool I’ve Wanted Since 2008

I created a local voice transcription and synthesis system, eliminating reliance on cloud services, prioritizing privacy, and enabling seamless conversion between text and audio while keeping control of my data.

Read more

August 23, 2024

Titles in Software Design: Identity Crisis, Problem, or Nonsense?

The evolution of job titles in the software design industry over 30 years reflects its growth and complexity. While specialized titles like UX and Product Designers have emerged, they can cause confusion and identity crises. Designers should focus on their skills rather than titles, embracing change as the industry evolves.

Read more

© BERT : MCMXCV — MMXXVI
