Financial Planning and AI: What It Can Do, and What It Can't Do... Yet
A few weeks ago, a post by AI entrepreneur Matt Shumer started circulating in business and finance circles under the title “Something Big Is Happening.” It’s written for people outside the technology industry, and it makes an unsettling argument: that AI is improving faster than most people realize, that the people building it are more alarmed than they publicly admit, and that the disruption already underway in software engineering is coming for nearly every field that involves cognitive work. If you haven’t read it, I’d recommend it. Shumer is neither a doom-monger nor a utopian. He’s someone who watched his own job change dramatically in the span of only a few months.
Around the same time, The Atlantic ran a cover story on what AI may do to the labor market, and The New Yorker published a long profile of Anthropic — the company that builds Claude, one of the leading AI models — that grappled with some genuinely strange questions about what these systems actually are.
What none of those pieces can offer is the view from inside a financial planning practice. I’ve been using AI tools in my work for a few years now. The experience has been instructive, sometimes frustrating, and more recently, a little startling. So in addition to pointing you toward other people’s writing, I want to tell you what I’ve actually seen.
One more thing before I continue: this post was drafted with AI assistance. I’ll explain what that means in practice shortly, but it seemed right to say so up front.
How I’ve Used AI
My first real experiment with AI in a professional context was a few years ago, working with a marketing consultant on copy for my website. We used an early ChatGPT model to brainstorm different ways to phrase things — trying to find language that would resonate with a prospective client and improve visibility in search results. A lot of what it produced was awkwardly phrased. None of it was close to production-ready. But it did speed up the brainstorming process, which was useful enough.
Where it became genuinely useful was meeting notes. If you’ve met with me in the last couple of years, you’ve probably seen me set my phone on the table or turn on recording at the start of a Zoom call. AI is phenomenal at transcribing conversations, synthesizing core topics, pulling out action items, and documenting data points and decisions. It lets me be more present in the meeting rather than splitting my attention between the client and my notepad. It also means I don’t miss things. This kind of work — sitting in on a meeting to take notes — used to be a job for an administrative assistant or junior advisor. AI does it better, and instantly.
As the models have improved, I’ve also used them to help with writing: editing blog posts, adjusting the tone of an email, and organizing my thinking on a topic before I start writing. There’s an irony here worth acknowledging: AI has also flooded the world with mediocre marketing copy and regurgitated technical content. The barrier to producing words has dropped to nearly zero, and you can feel it. If anything, it’s made me more selective about what’s worth reading.
More recently, I’ve experimented with using AI agents for data entry — reading financial documents and extracting key figures into a usable structure. A year ago, this technology was unreliable; I’d spend more time checking its work than if I’d just done it myself. The current generation of tools is different. I’d describe it as scary good. It still requires a second check, but I rarely find errors now. That’s a meaningful change.
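To make the "second check" concrete, here is a minimal sketch of the kind of reconciliation I mean: rather than trusting extracted figures blindly, flag anything that fails a basic cross-check. The field names and tolerance here are hypothetical, purely for illustration — not the actual tools I use.

```python
# A simple sanity check on AI-extracted figures: flag anything that
# doesn't reconcile, rather than trusting the extraction blindly.
# Field names and the tolerance are hypothetical, for illustration only.

def reconcile(extracted: dict, tolerance: float = 0.01) -> list[str]:
    """Return warnings for extracted figures that fail basic cross-checks."""
    warnings = []
    line_items = extracted.get("line_items", [])
    reported_total = extracted.get("reported_total")

    # Check that the individual line items sum to the document's stated total.
    computed_total = sum(item["amount"] for item in line_items)
    if reported_total is not None and abs(computed_total - reported_total) > tolerance:
        warnings.append(
            f"Line items sum to {computed_total:,.2f}, "
            f"but the document reports {reported_total:,.2f}"
        )

    # Flag negative amounts that aren't marked as deductions.
    for item in line_items:
        if item["amount"] < 0 and not item.get("is_deduction", False):
            warnings.append(f"Unexpected negative amount: {item['label']}")

    return warnings
```

A check like this won't catch every extraction error, but it turns "review everything" into "review what doesn't reconcile," which is most of the time savings.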
I’ve even used it to write some custom computer code. I took one programming class in college, twenty-plus years ago, but with that thin foundation and a lot of persistence, I was actually able to produce some useful internal tools. I described the objective, the AI wrote the code, I tested the results, and I used the AI to troubleshoot, expand, and improve it. This also gave me direct experience with what developers call “technical debt.” The AI can produce a working prototype very quickly, but designing software that operates efficiently at scale and handles edge cases is orders of magnitude more complex. The original prototype code can work flawlessly in a narrow environment, but design flaws and inefficiencies compound, and shortcuts come back to haunt you.
And then there’s the area where AI has been both most useful and most dangerous: research. I regularly use Claude or ChatGPT as a starting point when working through a specific technical question about the tax code, Medicare rules, or estate planning. It’s genuinely helpful for getting oriented quickly. It’s also where I’ve seen AI be confidently, completely wrong.
Confident and Wrong
The hardest AI errors to catch are not the ones that look wrong. They’re the ones that look right.
I once posed a question about 529 plan usage for certain expenses, and the AI told me something was permitted that wasn’t. The answer was fluent, well-organized, and incorrect. Without the background to recognize the error, a client following that advice could have ended up on the wrong side of an audit.
In another situation — I’ll own this one — I relied on an AI research response when advising an accountant about whether a particular filing was required. The AI had been adamant. When I reached out to colleagues and other accountants, it became clear the AI was simply wrong. No harm was done, but it cost me an apology to the accountant in question. A humbling reminder that fluency is not the same as accuracy.
The pattern I’ve observed is that AI performs well on common scenarios and poorly on edge cases — which is precisely where the stakes tend to be highest. Common scenarios are well-represented in training data. Edge cases, by definition, are not.
The Robo-Advisor Parallel
There have been some recent headlines about AI-driven financial planning tools that generate their own recommendations. I’ve tried a few. The results are instructive.
This has happened before, in a different form. When robo-advisors arrived around 2010 with promises of replacing the human advisor, they did something genuinely valuable: they put investors into well-diversified, low-cost portfolios with very little friction. That was (and still is) better than a lot of what advisors were delivering at the time.
What actually happened is that they raised the bar. Advisors who were primarily doing portfolio construction lost ground. Advisors doing harder work — financial planning, tax coordination, behavioral coaching, estate guidance — found that the robo-advisor threat mostly cleared away the competition that wasn’t adding much value anyway.
AI-driven financial plans feel like a similar moment. The technical recommendations on common scenarios are often pretty good. If what you need is to be told that your savings rate is too low and you should max out your 401(k), an AI tool may be sufficient. Where it can lead you astray is with nuanced questions — tax planning edge cases, Medicare enrollment timing, estate structure, Roth conversion strategy — where the model may not have enough context or experience to recognize what it doesn't know.
My honest view is that AI may make basic financial planning more accessible, and that it will raise the bar for what a human advisor needs to deliver to justify the relationship. That’s probably a good thing for clients, especially those with lower incomes who are currently underserved. Advisors who are primarily charging for tasks that AI can now handle competently will face real pressure. Advisors doing deeper planning work are in a different position.
The Pipeline Problem
The concern that gets the most attention is AI displacing experienced professionals. For now, I think that's mostly wrong. The same qualities that made a great advisor, lawyer, or accountant valuable before — judgment, creativity, communication, the ability to navigate ambiguity — are exactly what AI is worst at. For experienced professionals, these tools are more likely to be a productivity multiplier than a threat.
The more interesting problem is at the other end of the pyramid. The entry-level work AI is already replacing — data entry, transcription, first-draft research, basic document review — is also how you develop judgment in the first place. You learn to read a financial statement by reading hundreds of them. You develop the instinct to catch a confident AI error by having made similar mistakes yourself, early on, when the stakes were low and someone more experienced caught it.
If those entry-level reps disappear, where does the next generation of experienced advisors come from? I don't think anyone has a good answer to that yet.
What’s Actually Happening Out There
Shumer's core argument is that AI is moving faster than most people realize, and that the people closest to it are the most unsettled. The examples that stick with me come from former skeptics. Terence Tao, one of the most decorated mathematicians alive, now uses AI as a copilot to check proofs. Linus Torvalds, who founded Linux and once dismissed AI as "90% marketing and 10% reality," is now using it for his own projects and says it codes better than he does. When people with that level of expertise change their minds, it's worth paying attention.
The counterargument is that we've navigated technological disruptions before. That's true, and there are reasonable grounds for optimism that we will do so again. But previous disruptions displaced specific, bounded skills and left retraining paths open. A factory worker could become an office worker. A travel agent could move into logistics. AI improves across all cognitive domains simultaneously, so whatever you retrain for, it's improving at that too. The retraining path is less obvious this time, and the pace of change is less forgiving.
I'm not predicting collapse. But the growing pains will be real, and they won't be distributed equally.
How Can I Start Using AI?
If your experience with AI so far is limited to treating it like a Google search, try pushing it further. My wife and I regularly tell ChatGPT what's in our fridge and ask for a recipe — it hasn't let us down yet. My 11-year-old used it to build his own video game. The barrier to making things is lower than it has ever been.
Ask it to explain something you've been meaning to understand, then push with follow-up questions. Use it to draft a difficult email or think through a decision. If you have real expertise in something, test it — ask about edge cases and see how it handles nuance. Describe an app you've always wished existed and see what it builds.
That said, match your skepticism to the stakes. A failed recipe costs you dinner. A wrong answer on a legal question can cost considerably more, and those are precisely the areas where AI sounds most confident and makes the most consequential errors or omissions. Use it as a starting point, not a final answer, for anything that matters.
For your kids or younger colleagues: encourage them to start building with these tools now, toward things they actually care about. The people who thrive in disruptions are those who engage and stay adaptable. That has always been true.
Where This Leaves Financial Planning
I’ve watched this play out before. When robo-advisors arrived around 2010, promising to replace human advisors, the advisors who adapted and expanded their services and expertise came out stronger. I expect something similar here, on a larger scale and at a faster pace.
What I’ve come to think is that the technical side of financial planning has never been the hardest part. Running the projections, modeling the Roth conversion, calculating the Medicare surcharge — those are solvable problems. What’s harder is the conversation that has to happen before any of that: What are you actually trying to do with your time and your resources? What does a good retirement look like for you specifically? What are you afraid of, and is that fear calibrated to reality? AI can run the numbers, but I haven’t seen it have that conversation.
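To show just how mechanical the mechanical side is, here is a toy sketch of what "running the numbers" on a Roth conversion amounts to. The tax brackets below are hypothetical round numbers for illustration, not actual IRS figures, and a real projection would layer in state tax, Medicare surcharges, and more.

```python
# Toy sketch of the mechanical side of a Roth conversion decision.
# The brackets are HYPOTHETICAL round numbers, not current IRS figures.

HYPOTHETICAL_BRACKETS = [
    (0, 0.10),        # (bracket floor, marginal rate)
    (20_000, 0.12),
    (90_000, 0.22),
    (190_000, 0.24),
]

def tax_owed(taxable_income: float) -> float:
    """Progressive tax on income under the hypothetical brackets."""
    owed = 0.0
    for i, (floor, rate) in enumerate(HYPOTHETICAL_BRACKETS):
        ceiling = (HYPOTHETICAL_BRACKETS[i + 1][0]
                   if i + 1 < len(HYPOTHETICAL_BRACKETS) else float("inf"))
        if taxable_income > floor:
            owed += (min(taxable_income, ceiling) - floor) * rate
    return owed

def marginal_cost_of_conversion(base_income: float, conversion: float) -> float:
    """Extra tax triggered by converting `conversion` dollars this year."""
    return tax_owed(base_income + conversion) - tax_owed(base_income)
```

The arithmetic is the easy part; a machine does it instantly. Deciding how much to convert, in which years, in light of what the money is ultimately for — that's the conversation the code can't have.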
That’s where my energy is going: getting better at the parts that are hardest to automate. Which, as it turns out, are also the parts that matter most.