The short version: most AI training for SMEs fails because it teaches tools instead of habits, ignores the team's actual workflows, and stops at the workshop. If you want your team to be genuinely productive with ChatGPT, Claude or an automation platform, you need to teach prompting before tools, build sessions around real work, and plan for at least one follow-up after the initial workshop. Anything less and you'll spend a day being entertained and a month going back to the old way.
Why does AI training fail at most SMEs?
Generic AI training fails because it answers the wrong question. Most courses spend their time on what ChatGPT or Claude can do - which is impressive, broad and almost completely irrelevant to whether your specific team will use it. The question that matters isn't "what can these tools do?" but "what does my team actually do all day, and which 20% of that does AI take a chunk out of?"
The pattern we see in UK SMEs is consistent. A leader books a half-day workshop. The trainer demos clever prompts. People nod. The session ends. A week later, two enthusiasts are still using ChatGPT for emails and everyone else has gone back to their old workflow because no one connected the demos to anything they actually had on their plate.
The fix isn't more polish on the demo. It's tailoring: bring the team's real work into the session, prompt against their real documents, build automations on their real tools, and finish with a list of three things each person is going to use next week. This is the difference between "we did some AI training" and "AI is now part of how we work".
What should you actually teach?
In order: prompting, then the tools, then workflows. Most courses do the reverse and lose people in the first hour.
Prompting comes first because everything else depends on it. If your team can't write a prompt that gets useful output, no amount of ChatGPT plugins or Zapier integrations will save them. Prompting isn't magic - it's a small set of habits: be specific, give context, show examples, ask for the format you want, iterate when the first attempt is wrong. An hour on this is genuinely high-leverage.
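Those habits can be made concrete. The sketch below is illustrative only - the function name and structure are ours, not a prescribed API - but it shows how "be specific, give context, show an example, ask for the format you want" turns into a reusable template rather than a blank chat box:

```python
def build_prompt(task, context, example=None, output_format=None):
    """Assemble a prompt that is specific, gives context,
    shows an example, and asks for an explicit output format."""
    parts = [f"Task: {task}", f"Context: {context}"]
    if example:
        # Showing one good example usually beats describing the output in prose.
        parts.append(f"Example of good output:\n{example}")
    if output_format:
        parts.append(f"Respond as: {output_format}")
    return "\n\n".join(parts)

prompt = build_prompt(
    task="Draft a follow-up email to a client who missed our call",
    context=("We're a UK accountancy firm; the client is a long-standing "
             "retail customer; tone should be warm but professional"),
    example="Hi Sam, sorry we missed each other earlier...",
    output_format="three short paragraphs, under 120 words",
)
print(prompt)
```

The fifth habit - iterate when the first attempt is wrong - happens in the conversation itself, but starting from a structured prompt like this makes each iteration a small tweak rather than a rewrite.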
Tools come second. Once people can prompt, they can use ChatGPT and Claude as drop-in tools for drafting, summarising, analysing and reformatting. Most knowledge workers spend a non-trivial chunk of their week on those four things, and AI handles them well. Show people where their existing work overlaps with these capabilities, and they'll find their own use cases.
Workflows come last. Once your team is fluent with AI on individual tasks, the next step is connecting it into the systems they already use - their CRM, their inbox, their spreadsheets - via a tool like n8n, Zapier or Make. This is where genuine time savings live. But it's also where untrained teams burn weeks: trying to build complex workflows before they can reliably write the simple prompts those workflows depend on.
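Whatever the platform, these automations share one shape: trigger, AI step, system update. Here's a hedged sketch of that shape in plain Python - the `summarise()` stub stands in for a real ChatGPT or Claude API call, and the email fields are invented for illustration. In n8n, Zapier or Make the same three steps become a trigger node, an AI node and an action node:

```python
import csv
import io

def summarise(text):
    # Placeholder for an LLM call; here we just truncate the body.
    return text[:60] + ("..." if len(text) > 60 else "")

def run_pipeline(emails):
    """Turn a batch of inbound emails into one-line CRM notes (CSV)."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["sender", "summary"])
    for email in emails:
        # Trigger (new email) -> AI step (summarise) -> action (append a row).
        writer.writerow([email["from"], summarise(email["body"])])
    return out.getvalue()

notes = run_pipeline([
    {"from": "client@example.com",
     "body": "Hi, just checking where we are with the Q3 VAT return "
             "and whether you need anything else from us."},
])
print(notes)
```

The point of the sketch isn't the code - it's that the AI step is one small, well-prompted piece in the middle, which is why prompting fluency has to come before workflow building.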
ChatGPT and Claude: what's the practical difference for a team?
For most SME use cases, the practical difference is smaller than the marketing makes it sound. Both ChatGPT and Claude are general-purpose large language models, both can draft, summarise, analyse and reformat, and both are good enough at the kinds of tasks most teams will throw at them. If you're standardising on one, the choice matters less than people think.
That said, there are honest differences. Claude tends to be better at long-document analysis, at following nuanced instructions, and at producing output that doesn't read like a press release. It also handles tone well, which matters for client-facing drafts. ChatGPT has a wider plugin ecosystem, the deepest set of third-party integrations, and the better-known brand - which sometimes matters when you're trying to get a sceptical team on board. Both have business plans with sensible data-handling defaults.
Pick one and standardise. Switching cost between them is low, but having half the team on each creates needless inconsistency and makes training harder. For most UK SMEs we work with, either is fine - the productivity gain comes from using one of them consistently, not from picking the "right" one.
When does n8n, Zapier or Make come into the picture?
Workflow automation tools come into the picture as soon as your team is comfortable using AI on individual tasks and starts asking "could we just do this automatically?" That's the signal that they've understood the value, and the next step is connecting AI to the systems where the work actually lives.
We covered the strengths of each platform in detail in our comparison of n8n, Zapier and Make; the short version for training purposes is this. Zapier is the easiest to start with and the right call for non-technical teams who want to build simple workflows themselves. Make rewards a slightly steeper learning curve with better handling of branching and complex logic - good for teams that have someone willing to invest a few hours. n8n is the most powerful but the most demanding; for self-hosted setups or AI-heavy workflows, it's the right tool, but it's not where you'd start an entirely non-technical team.
The practical advice: train your team on whichever platform they'll actually use. If they have Zapier accounts already, train on Zapier. If they're standardised on Make, train on Make. Generic "automation 101" content that doesn't touch the tools they have is as useless as generic AI training.
How long does it take to make a team productive?
Faster than people expect, but not in a single workshop. Realistically: a half-day session gets people unblocked. The next two weeks of using what they learned in real work is where the actual learning happens. A follow-up session a few weeks later cements habits and unblocks the things people got stuck on. Total elapsed time to "this is now part of how we work" is typically four to six weeks.
The biggest variable is whether anyone cares enough to keep using the tools after the workshop. We've seen teams of ten get to genuine productivity in three weeks because the lead was using AI publicly and visibly, asking the team to share wins and stuck points. We've also seen teams attend the same workshop and use almost nothing because no one created the space to apply it. The training itself is rarely the bottleneck.
What does a useful workshop look like?
A useful workshop has four properties.
It's tailored. The trainer has spent at least an hour ahead of the session understanding your tools, your workflows, and the use cases that matter. Worked examples are built around the team's actual work, not generic "imagine you're an estate agent" scenarios.
It's hands-on. People are typing into ChatGPT or Claude or building actual workflows in n8n, not watching a demo. A reasonable rule of thumb: if more than 40% of the time is the trainer talking, the session is too lecture-heavy.
It ends with something concrete. Each person leaves with a written list of three to five things they're going to do in the next two weeks. Vague "I'll use AI more" intentions don't survive Monday morning.
It has a follow-up. Either a return session in three to four weeks, a Slack channel where the trainer is accessible for questions, or both. Without follow-up, retention drops sharply and the workshop becomes a pleasant memory rather than a behaviour change.
Should you train internally or bring someone in?
Bring someone in if: you don't have anyone internally who's already fluent with the tools, you want training tailored to your specific workflows, you want the workshop to have authority (people listen differently to an external expert than to a colleague), or you're rolling out something across multiple teams and want consistency.
Train internally if: you have one or two genuinely fluent users on the team, the use cases are simple, and you have someone with the time and inclination to put materials together. The risk is that internal training tends to drift toward "this is how I use it" rather than "this is how to use it", which is fine for one team but doesn't scale.
Most UK SMEs end up with a hybrid: bring in an external trainer for the initial rollout and to set the standard, then have an internal champion own the ongoing reinforcement. That's also how we like to structure things at Smooth Vector Solutions - workshops designed so that someone on your team can run the next round themselves once we've left the room. If you'd rather keep it lean and outsource the whole thing, our training engagements are scoped that way too.
Either way, the goal is the same: a team that uses AI tools confidently as part of their normal work, rather than one that nodded through a workshop and went back to the old way on Monday.