Anchor Group Podcast: Episode 22

How Most Businesses Get AI Wrong | Anchor Group Podcast: Episode 22

Podcast Transcript

Caleb (00:00)
In episode 22 of the Anchor Group podcast, I invite Stephen onto this conversation, who is a fractional CTO and CIO. We actually started our conversation talking about technology stacks, business process analysis, and customer pain points, but it quickly evolved into AI agents, how companies can leverage them, different AI use cases, the NetSuite AI connector, and scenarios where we’ve already started applying AI in real business cases.

Sometimes it’s a terrible idea and best to avoid it entirely because it doesn’t fit the use case. People often jump straight to AI as the solution for everything when it’s not. But there are real use cases that make it very valuable for different companies. Stay tuned and you’ll see exactly what we’re talking about in this episode.

Caleb (00:47)
Tell me a little bit about what you’re doing, your company, and some of the things you specialize in so we can learn more about you.

Stephen (00:53)
I started my own consulting company, Blue Helm Technology, at the end of 2023. I’ve been in tech for 30-plus years. I stopped counting at 30, so now it’s just “30-plus.” My background is split. I graduated with a computer information systems degree and have bounced between software development and traditional IT—network infrastructure—up through management and into consulting.

For me, it’s about diving in and really understanding a business. I like to do discovery from a business standpoint: how do you make your money, how does data flow through your system? Then I look at processes and procedures and figure out if there’s a good way to automate. Everyone wants AI now, but often they’re not set up at all for AI.

Caleb (02:06)
When you help people with process flow maps and understanding their pain points, what technology stack do you gravitate toward?

Stephen (02:15)
I split that two ways. One is the process I use for discovery. For that, I go back to my school roots and start with input, process, output: what are we trying to achieve, do we have the inputs to reach those outputs, and what process do we use to get there?

From a stack standpoint, I stick with Microsoft because I work a lot in the SMB market. Many of the people I work with are on a Microsoft stack, so it just makes sense to take advantage of it.

Caleb (03:00)
So you’re doing everything across the board from 365 to Dynamics, maybe Great Plains in the past, and ERP as well?

Stephen (03:14)
Usually when I get to a company with an ERP, it’s already in place. It’s less about shifting them to something new and more about taking advantage of what they currently have. Occasionally I’ll find a client running something ancient, and then it’s time to either upgrade to the newest version of the platform or look at what they need the system to do and decide if a shift is necessary.

Caleb (03:52)
When you’re looking at business processes, it can involve a range of software and point solutions, not just ERP. My style is similar to yours—I do a deep dive business discussion, but I don’t want to waste too much time in conversation. Normally I can have a one-hour meeting, map out their business process and pain points, make safe assumptions through experience, and then present it back. It’s usually 95% accurate, and then we tweak it.

From there, we talk about solving pain points and their existing processes. I map tech stacks to specific products. Even though it’s attractive to use the same suite for everything—and there are benefits—you still have to navigate how enterprise software licensing works, which is a whole different beast.

If you’re used to paying $15 per user a month, enterprise licensing is a different game. We don’t work with on-prem ERP anymore—it’s all cloud-based. Helping companies transition from on-prem to cloud requires educating them to see things differently than they’ve ever experienced. I really enjoy that. Starting with business processes, I often agree that sticking with what they have and fixing it is the best option.

Stephen (05:55)
It also depends on why and when you’re brought in. Sometimes it’s for a specific pain point, so you target that. Other times, it’s because AI is the hot button. They saw a presentation and think they need an agent. But often, they don’t. What they need is someone to look at their processes and suggest improvements—which may or may not involve AI.

Caleb (06:31)
And often there’s existing software that already solves those processes—they just don’t realize it.

Stephen (06:37)
Exactly. And it may not even need a technical solution.

Caleb (06:43)
Sometimes it’s just changing the way you do things further up the line. I’ll give you an example. I work with a lot of e-commerce companies—mostly manufacturers and wholesalers in the B2B space. It’s more of a customer portal than traditional e-commerce.

If I look at what people need, I’ve had cases where sales reps complain it takes too long to enter orders. They’ll say they need a CPQ solution—a configure, price, quote tool—to speed things up. But often, that’s jumping to a conclusion without thinking further up the chain.

Since I’m frequently in the B2B e-commerce space, I’ll ask: do you want to keep taking phone calls and entering orders directly, or do you want unlimited orders via a customer portal? Sometimes they worry about losing personal touch, which is fair, but solvable.

If speed is the issue, CPQ could help. But sometimes a simple automation fixes it, like automatically populating 20 fields every time you create an order. With a little more discovery, we can drill deeper, propose multiple approaches, and pick the best one. It’s the same with AI and agents—you don’t always need the agent, but you do need to evaluate the real need.
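The field-defaulting fix Caleb describes needs no AI at all. A minimal sketch in Python, with entirely hypothetical field names, shows the idea: pull defaults from a stored customer profile and let anything the rep types explicitly win.

```python
# Hypothetical sketch: pre-populate order fields from a customer profile
# instead of having a sales rep key them in on every order.

CUSTOMER_DEFAULTS = {
    "ACME-001": {
        "shipping_method": "Ground",
        "payment_terms": "Net 30",
        "warehouse": "East",
        "sales_rep": "jdoe",
    },
}

def new_order(customer_id: str, **overrides) -> dict:
    """Start an order pre-populated with the customer's defaults."""
    order = {"customer_id": customer_id}
    order.update(CUSTOMER_DEFAULTS.get(customer_id, {}))
    order.update(overrides)  # values the rep sets explicitly take precedence
    return order

order = new_order("ACME-001", shipping_method="Overnight")
```

Twenty defaulted fields is just a bigger profile dict; the pattern doesn't change.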

Stephen (08:41)
A lot of times it’s just straight automation that helps, and companies get a huge lift from that. But they’re often sold on the idea that they need an agent because in their mind that’s the solution.

Caleb (09:11)
NetSuite’s been doing some interesting things in the AI space lately. Have you looked into that at all? They’re taking a different approach. We’ll see how the market responds. Most software companies are building their own agents, which I’ve never questioned, but NetSuite took another path.

Stephen (09:18)
Not from the NetSuite side, no.

Caleb (09:38)
They decided to create an AI-agnostic connector. It started with a handful of compatible options, where a non-technical user or general admin can connect to the AI of their choice and interact with NetSuite.

I’ve seen examples where you can upload a CSV file and have the AI map it in, create items, determine fields, match NetSuite IDs, and handle the CSV import through AI. That kind of automated mapping is impressive.
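The mapping itself is handled by NetSuite's connector and the AI behind it; purely as an illustration of the header-matching step, here is a stdlib sketch (the ERP field names are hypothetical) that pairs incoming CSV headers with ERP fields by text similarity and leaves the leftovers for a human or an LLM to resolve.

```python
# Illustrative sketch only (not NetSuite's connector): match incoming CSV
# headers to ERP field names by similarity; unmatched headers return None
# and get escalated for review.
from difflib import get_close_matches

ERP_FIELDS = ["item_name", "display_name", "purchase_price", "quantity_on_hand"]

def map_headers(csv_headers):
    mapping = {}
    for header in csv_headers:
        normalized = header.strip().lower().replace(" ", "_")
        candidates = get_close_matches(normalized, ERP_FIELDS, n=1, cutoff=0.6)
        mapping[header] = candidates[0] if candidates else None  # None => review
    return mapping
```

An LLM-based mapper does the same job with fuzzier inputs; the triage shape (auto-map what's confident, escalate the rest) stays the same.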

There are also scenarios where the AI is placing orders and running analytics. Taking that agnostic approach means you can pick whichever AI wins the race, since it’s too early to know who that will be. I like that flexibility.

Stephen (10:37)
Of course, the other part of me asks: how is the cost structured around that usage? Are you able to point it to something like Ollama to use an internally hosted LLM? That would help with cost. And then, where’s your data going?

Caleb (10:52)
Good point. The data in this case is directly with the ERP, but there could be implications if you accidentally create a bunch of transactions in NetSuite and impact your transaction volume.

Stephen (11:08)
But how does the LLM get the data from the CSV to process it?

Caleb (11:21)
You’re paying for that transaction somehow, usually tied to licensing and usage volume.

Stephen (11:30)
Right—so where’s the data going? The LLM you’re connected to—where is it being processed?

Caleb (11:39)
If you were to take an agnostic approach and connect easily with an ERP, how would you structure that AI?

Stephen (11:52)
I’d host it internally. That way you know your data is staying internal. But then the challenge is how you handle it per tenant, per client. You’d need to control the LLM and its data exposure by tenant.

Caleb (12:23)
So, what guardrails would you put up?

Stephen (12:25)
It depends. Do you actually take a CSV and push it out to the LLM for processing or not?

Caleb (12:35)
So restrictions on what you even allow in the first place. Rules and permissions related to the user connecting with NetSuite.

Stephen (12:38)
Exactly. That’s typically the approach. Microsoft does this. Their Copilot uses OpenAI’s models on the backend, but under agreements that the data won’t be used for training.

Caleb (12:52)
So the main issues are pricing—transaction storage, API calls, tokens—and then data security.

Stephen (13:15)
Right. LLMs process on a token basis, similar to API calls.

Caleb (13:24)
And data security is usually the biggest concern, followed by cost.

Stephen (13:38)
Data security matters on two levels. First, is my data being used to train the LLM? You don’t want that unless you’re hosting internally. Second, role-based access. As a user, should I even have access to certain data?

Caleb (13:50)
Unless it’s hosted by you—then you do want it. Otherwise, if AI is connected via an admin role, anyone could ask, “What’s our profit margin?” or “What does this employee make?” Restricting it at the user level is key.

Stephen (14:19)
Exactly.

Caleb (14:33)
So you’d either connect AI with each user individually, or control it through hosting.

Stephen (14:47)
Yes. Going back to Microsoft, this is why I often steer SMBs to them. Copilot follows the permission set by user as long as the agent is tied to the user’s credentials. You can also set it up with its own credential, but then you have to manage its access separately.

Stephen (15:16)
There are multiple ways to do it. But in general, people come in saying they want to start paying for the business version of Copilot, the M365 Copilot. My first question is, do you have your permission sets in place—like SharePoint restrictions on who can access what? Because if not, once you set up Copilot, everyone could gain access.

Caleb (15:38)
Yeah, they’d be getting access to the entire database. What would be some good business use cases? I’ve been thinking about this over the last 24 hours—different applications and creative use cases.

Caleb (16:06)
For your use case, if you were to use a Microsoft product, are there a couple of departments and business processes where you see AI being applied creatively?

Stephen (16:16)
That’s the running question right now. A lot of demos are set up around HR—pointing an agent at HR documentation so employees never have to involve HR again. But in real life, that doesn’t work. You might point it to three documents and get good answers, but HR has policies and nuances that go beyond that.

Caleb (16:35)
And how would the AI know which documents are outdated or not applicable anymore?

Stephen (16:49)
Exactly. There’s a lot that goes into it. For SMBs, I often find Power Automate or Logic Apps provide a better lift than building out a fancy one-off agent. The smarter approach is to do discovery, identify areas where multiple team members are involved, and then see if automation or an agent would deliver ROI. For example, five salespeople versus one HR person—automation is likely a better investment on the sales side.

Caleb (17:45)
A couple ideas come to mind for me. AI works really well for mapping data. It’s great at finding text descriptions and similarities between text, which makes it valuable in mapping exercises.

Integrations are a perfect use case. In our space—Celigo, MineCloud, and others—integrations can be finicky and points of failure. The more they leverage AI, the easier it becomes to tweak and adjust integrations. That’s an awesome use case for AI in business processes and admin work.

Another example: purchase orders from a large customer like a big-box retailer. They’ll send POs via email, and your team has to key them in as sales orders. We use a solution that extracts information from the PDF, maps it, and creates the sales order automatically. Hard coding works if the layout is always consistent, but if you’re getting lots of different variations, AI is very effective at handling the variability.

Stephen (19:42)
Right. We’ve done something similar with drawings in PDF format. The AI extracts the text and fills out the form. We still keep a human in the loop, but instead of manually pulling everything out, they’re just vetting it. If something looks off, they correct it. With enough corrections, it becomes a training opportunity to improve recognition.

Stephen (20:29)
I’m also involved in another company, IR Game, which runs incident response tabletop exercises online instead of in person. We use AI heavily to build scenarios—business email compromise, cloud compromise, and so on. We’ve built prompts where we can say: this industry, this type of scenario, generate based on these templates. That’s another fun use case.

Stephen (21:19)
That’s a little less on the automating process side and more about using the LLM’s ability to predict what’s next and match. It’s a fun use case.

Caleb (21:29)
Yeah, that’s a good one. I think anything with text description is really strong. I had a client in the medical space who would get quote requests with 100–200 line items from a hospital. They’d have to quote on all those products and compare pricing to see if it made sense to switch vendors.

The challenge was matching the descriptions they received to their own item descriptions in the ERP. It’s rarely a one-to-one match. So we built a custom NetSuite application tied to AI and their item base. It could take all their ERP item descriptions, compare them against the CSV from the hospital, and find the best match.

Stephen (22:21)
Mm-hmm.

Caleb (22:24)
If it didn’t find an exact match but came up with a few similar ones, the user could choose the correct option. That input would then train the model further. So it still required human intervention, but it reduced the process from four hours down to 15–30 minutes. It was much easier to handle at higher volumes.
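The application Caleb describes was a custom AI-backed NetSuite build; as a toy illustration of the ranking idea only, plain stdlib string similarity can already score ERP descriptions against an incoming line item and surface the best candidates for a user to confirm.

```python
# Toy illustration of description matching (the real solution was an
# AI-backed custom app; this just demonstrates the candidate-ranking idea).
from difflib import SequenceMatcher

def rank_matches(incoming, erp_descriptions, top_n=3):
    """Return up to top_n ERP descriptions as (score, description) pairs."""
    scored = [
        (SequenceMatcher(None, incoming.lower(), desc.lower()).ratio(), desc)
        for desc in erp_descriptions
    ]
    scored.sort(reverse=True)
    return scored[:top_n]

catalog = [
    "Nitrile Exam Gloves, Large, Box of 100",
    "Latex Exam Gloves, Medium, Box of 100",
    "Surgical Mask, Level 3, Box of 50",
]
results = rank_matches("GLOVES NITRILE LG 100/BX", catalog)
```

An embedding- or LLM-based matcher handles abbreviations and reorderings far better, but the workflow is identical: rank, show the top few, let the user pick.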

For me, that’s where AI shines: efficiency with text-to-text comparisons. It’s not pure automation—I find fixed automations more reliable—but AI helps with decision-making and streamlining.

Stephen (22:55)
Exactly. And when most people say “AI” today, they’re really talking about LLMs. But AI is broader than that. At their core, LLMs are just predicting the next word based on context. If you understand that, you realize you can get more accurate results by shrinking down the context.

That’s where building an agent with sub-agents—or an orchestrating agent—makes a big difference. A higher-level agent decides what it’s working on and passes it to a sub-agent with refined context. The sub-agent produces more accurate results, which then feed back up to the orchestrator.
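The orchestrator-and-sub-agent pattern Stephen describes can be reduced to a routing table. In this sketch the handlers are plain functions standing in for real agents; the point is only that each one runs with a narrower context than the orchestrator.

```python
# Sketch of orchestrator/sub-agent routing: the top-level agent classifies
# the task and hands it to the handler with the most specific context.
# Handlers here are stand-ins for real model-backed agents.

def procurement_agent(task: str) -> str:
    return f"[procurement context] handling: {task}"

def receiving_agent(task: str) -> str:
    return f"[item-receipt context] handling: {task}"

ROUTES = {
    "purchase_order": procurement_agent,
    "item_receipt": receiving_agent,
}

def orchestrate(task_type: str, task: str) -> str:
    """Route a task to the sub-agent with refined context, or fall back."""
    handler = ROUTES.get(task_type)
    if handler is None:
        return f"[general context] handling: {task}"  # broad-context fallback
    return handler(task)
```

Nesting goes the same way: the procurement handler can itself route "cabling" purchases to a still-narrower sub-agent.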

Caleb (24:02)
Interesting. Let me try to make an example. Say you have a company in the electronics components space. The orchestration agent knows this context: it’s a wholesaler in electronics.

If the system is handling a purchase order, the orchestration agent could delegate that task to a procurement sub-agent specialized in that industry. Within procurement, there could be sub-agents for different tasks—one focused on purchase orders, another on item receipts, and so on.

Stephen (25:07)
Exactly. Or you might even say, this is a purchase for cabling. That would route to a sub-agent focused specifically on cabling.

Caleb (25:18)
That would definitely improve accuracy and get closer to full automation.

Stephen (25:26)
Right. Think about normal ChatGPT. If you shift context mid-conversation, it often doesn’t catch on and still answers from the original context. The best way to handle that is to start a new chat with a clear new context.

If you have an agent that’s too broad, it struggles with context. But if you tell an orchestration agent, “If it’s procurement, route here,” or “If it’s cabling, route there,” then you narrow context. That little bit of focus helps deliver cleaner, more accurate answers.

Caleb (26:20)
Especially if you’re getting inaccurate results. Refining context reduces that. That makes a lot of sense.

Stephen (26:32)
Exactly. On your earlier example, one thing I like to do is have the system provide a confidence percentage or ranking. For example: “This matches at 80%.” Anything with lower confidence—say, 20%—gets flagged for human review.
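The confidence-gating Stephen describes is a one-function triage step. A minimal sketch, with an illustrative threshold:

```python
# Sketch of confidence-based triage: auto-accept high-confidence matches,
# queue everything else for human review. The threshold is illustrative.

REVIEW_THRESHOLD = 0.8

def triage(matches):
    """Split (score, item) pairs into auto-accepted and needs-review lists."""
    accepted = [m for m in matches if m[0] >= REVIEW_THRESHOLD]
    review = [m for m in matches if m[0] < REVIEW_THRESHOLD]
    return accepted, review
```

Tuning the threshold against real correction data is where the human-in-the-loop feedback pays off.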

Caleb (26:47)
That’s true. In one of my use cases, what really made it effective was knowing where to put focus.

Stephen (27:12)
Exactly. The fun part is when you can have AI analyze the outputs from AI itself and see what it thinks. There’s a lot of interesting possibilities.

Caleb (27:19)
Right. We’ve been able to get everything to 90% or better by looking at outliers and fine-tuning further. It’s been fun exploring AI concepts and different use cases.

Recently, we did a website migration from our old CMS—which we’d used since day one at Anchor Group—to a hosted headless platform with much more control. One part of the process involved schema markup. Schema tells Google crawlers what exists on a page with specific attributes. The crawler compares what it sees with the schema markup, and if they match, it has high confidence in the content. If not, rankings drop.

ChatGPT worked really well during that migration. Normally I’d use a tool to create schema markup outputs, like when inserting a video into a landing page or FAQ. With ChatGPT, I could give it the URL, tell it what schema type I wanted, and have it generate the markup. If no reviews were detected, it wouldn’t output review snippets. It basically acted like a bot, producing cleaner schema markup.
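The video schema Caleb mentions is just JSON-LD using schema.org's VideoObject type. A minimal generator (the values below are hypothetical, but the property names are schema.org's) might look like:

```python
# Minimal JSON-LD generator for a schema.org VideoObject.
# Property names come from schema.org; the values here are hypothetical.
import json

def video_schema(name, description, content_url, upload_date):
    return {
        "@context": "https://schema.org",
        "@type": "VideoObject",
        "name": name,
        "description": description,
        "contentUrl": content_url,
        "uploadDate": upload_date,
    }

markup = json.dumps(video_schema(
    "Anchor Group Podcast: Episode 22",
    "How most businesses get AI wrong.",
    "https://example.com/episode-22.mp4",  # hypothetical URL
    "2024-01-01",                          # hypothetical date
), indent=2)
```

The generated JSON goes into a `<script type="application/ld+json">` tag on the page; the same pattern covers FAQ and review snippets, each with its own schema.org type.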

Stephen (29:42)
That’s a whole other podcast. But right now, I’ve read that up to 50% of all search results are generated through LLMs. That changes the SEO landscape.

Caleb (29:47)
Definitely. There’s a marketing use case there. I track leads from it.

Stephen (30:11)
SEO strategies now need to focus on what LLMs will consume.

Caleb (30:14)
Exactly. With SEO, I’ve noticed AI responds well to pairing brand names with keywords. For example, “Anchor Group” plus the keyword. As a reader, it sounds arrogant, but bots love it. SEO has always been a dance between serving humans and serving bots, and AI makes that balance even trickier.

Stephen (31:10)
They like good summaries. Prompting strategies matter too—how you stack prompts makes a big difference in results.

Caleb (31:27)
I’ve had a lot of fun thinking this through, especially with NetSuite’s new AI connector releases. You’ve inspired me to explore hosting more locally for data security and pricing. I suspect they’ll start connecting to those options. We’ll see where it goes.

Stephen (32:16)
If you want to experiment, I’ve set up Docker containers for n8n, Ollama, and Open WebUI. That lets me run workflows against a local LLM.

Caleb (32:42)
That’s fun.

Stephen (32:45)
You can also connect out to OpenAI’s API, but local hosting requires a good video card. For me, running locally without GPU took 28–30 seconds. Switching it to GPU brought it down to about a second. Huge difference.
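A local stack like Stephen's can be driven through Ollama's REST API, which listens on localhost:11434 by default. A sketch, assuming Ollama is running and a model such as llama3 has already been pulled:

```python
# Sketch of calling a locally hosted LLM through Ollama's REST API.
# Assumes Ollama is running on its default port with a model pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    # stream=False asks Ollama for one JSON response instead of chunks
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# generate("llama3", "Summarize why GPU inference is faster.")  # needs Ollama running
```

Because the endpoint shape matches hosted APIs closely, workflows built against it (in n8n or otherwise) can later be pointed at OpenAI's API with minimal changes.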

Caleb (33:18)
That’s a really cool idea. Stephen, can you tell people a little about your services and how they can find you?

Stephen (33:38)
Sure. My main offering is fractional CTO and CIO work. On the CTO side, I help product and software development teams be more efficient and enhance their products. On the CIO side, I focus on networking, automation, and building technology roadmaps. My goal is to shift IT from being seen as an expense to being an enabler.

Caleb (34:19)
And how would people find you?

Stephen (34:24)
I’m at bluehelmtech.com, or on LinkedIn at linkedin.com/in/stephenkellogg—don’t forget the second “g.” I’m also involved in IR Game at irgame.ai, and I’m the interim executive director at Cyber Rise, a nonprofit.

Caleb (35:01)
Awesome. I’m glad people can reach out. Thanks so much, Stephen.

Stephen (35:06)
Thanks for having me. It was a good, fun conversation.


Where to Listen to the Podcast

Find more episodes of the Anchor Group podcast!

Oracle NetSuite Alliance Partner, BigCommerce Certified Partner

As both a BigCommerce Certified Partner and an Oracle NetSuite Alliance Partner, Anchor Group is ready to handle BigCommerce and NetSuite projects alike! Whether you already have one platform and are looking to integrate the other, are considering a full-scale implementation of both platforms, or simply need support with ongoing customizations, our team is ready to help answer any questions you might have! Get in touch!
