LLMs for the Rest of Us

For lead generation, LLMs can create nuanced emails tailored to various categories of recipients. “The LLM should be given some context so that it connects effectively,” noted Monti. “Otherwise, people dismiss the message and move on because it does not apply to them.” In a sophisticated workflow described by Monti, an AI assistant could retrieve a news item about a company, assess its industry, and generate a customized email positioning a solution to a problem common in that industry and relevant to that company. Other SMB use cases for LLMs include content creation, such as press releases and blog posts; document summarization; and generating training materials from company content.
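
The workflow Monti describes can be sketched as a short pipeline: fetch a news item, infer the industry, then draft the email. The sketch below is illustrative only; fetch_company_news() and llm_complete() are hypothetical stand-ins for whatever news source and LLM API an organization actually uses.

```python
# Illustrative sketch of the lead-generation workflow described above.
# fetch_company_news() and llm_complete() are hypothetical placeholders,
# not calls to any specific product or service.

def fetch_company_news(company: str) -> str:
    """Placeholder: return a recent news snippet about the company."""
    return f"{company} announced an expansion of its regional delivery network."

def llm_complete(prompt: str) -> str:
    """Placeholder: send the prompt to your chosen LLM and return its reply."""
    return f"[model output for: {prompt[:60]}...]"

def draft_outreach_email(company: str, our_solution: str) -> str:
    news = fetch_company_news(company)
    # Step 1: have the model classify the company's industry from the news item.
    industry = llm_complete(
        f"In one or two words, name the industry of the company in this news item:\n{news}"
    )
    # Step 2: draft an email that ties the solution to that industry and that news.
    return llm_complete(
        f"Write a short, specific outreach email to {company} (industry: {industry}). "
        f"Reference this news item:\n{news}\n"
        f"Explain how {our_solution} addresses a problem common in that industry. "
        f"Keep it under 150 words."
    )

print(draft_outreach_email("Acme Logistics", "our route-optimization service"))
```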

Despite the potential for improved efficiency, many small organizations are not using LLMs. One significant barrier to implementing LLM-based solutions is simply that people do not yet fully understand what they can do and are resistant to change, observed Monti. “On the other hand, there is a lot of hype about AI, with every company wanting to claim their product is AI-enabled, so some of the hesitation is legitimate.” Monti is a strong believer in keeping the human in the loop. “LLMs can generate good emails, but they still need to be checked,” he asserted.

Choosing a Model

The good news is that there are many models from which to select. That is also the bad news. Fortunately, there are several ways to narrow the choice. “Most of the models are trained on similar foundational knowledge, but differentiated on how they are fine-tuned,” said Mac Carlton, product manager for AI at SAS. “They might be designed for conversation or for writing code. Some are good in specific languages.” One way to gauge a model’s capabilities is to consult a publicly available benchmarking resource. “The specific benchmark may not exactly reflect your real-world situation,” Carlton explained, “but the rankings will be good indicators.”
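
To make that selection process concrete, here is a minimal sketch of shortlisting candidates by published benchmark scores before running your own tests. The model names and numbers are placeholders, not real leaderboard data.

```python
# Illustrative shortlisting from public benchmark rankings. Scores and model
# names are placeholders; substitute figures from an actual leaderboard, using
# benchmarks that resemble your intended use case.

candidates = {
    "model-a": {"coding": 71.2, "multilingual": 55.0, "chat": 82.1},
    "model-b": {"coding": 64.8, "multilingual": 78.3, "chat": 79.5},
    "model-c": {"coding": 58.1, "multilingual": 61.7, "chat": 85.4},
}

def shortlist(task: str, top_n: int = 2) -> list[str]:
    """Rank candidates by the benchmark closest to the intended task."""
    ranked = sorted(candidates, key=lambda m: candidates[m][task], reverse=True)
    return ranked[:top_n]

print(shortlist("coding"))  # rankings are indicators, not guarantees;
                            # confirm with a quick test on your own data
```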

Another consideration is size. “Being large, LLMs are more generally intelligent and have a broader range of ability, but are also more expensive and slower to respond,” continued Carlton. Some vendors provide multiple sizes of LLMs. Carlton suggests starting with the smallest language model that will do the job, both to control cost and to reduce latency.
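
That advice maps naturally onto a simple evaluation loop: try candidates from smallest to largest and stop at the first one that clears your quality bar. The model names, scores, and the 0.9 threshold below are assumptions for illustration.

```python
# Sketch of "start with the smallest language model that will do the job".
# Model names and scores are placeholders; evaluate() would normally run a
# small test set drawn from your real task through each model.

MODELS_BY_SIZE = ["small-model", "medium-model", "large-model"]  # hypothetical

PLACEHOLDER_SCORES = {"small-model": 0.78, "medium-model": 0.92, "large-model": 0.95}

def evaluate(model_name: str) -> float:
    """Placeholder: score the model on your own test prompts (0 to 1)."""
    return PLACEHOLDER_SCORES[model_name]

def pick_model(quality_bar: float = 0.9) -> str:
    for model in MODELS_BY_SIZE:            # cheapest and fastest first
        if evaluate(model) >= quality_bar:
            return model                    # good enough: lower cost and latency
    return MODELS_BY_SIZE[-1]               # otherwise fall back to the largest

print(pick_model())  # -> "medium-model" with the placeholder scores above
```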

There is also a big distinction between closed and open source models. “Users may want to bring an open source LLM into their environment,” Carlton commented, “rather than using a cloud-based product.” There are trade-offs: an on-prem model typically requires the user to maintain it and is generally smaller and less capable, although more customizable. Cloud LLMs are easier to get up and running but are more expensive and, in some cases, less customizable. However, in certain regulatory environments, a cloud deployment is not acceptable.
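
Many popular self-hosting tools expose an OpenAI-compatible chat endpoint, so switching between an on-prem open source model and a cloud service can come down to a URL and a key. The endpoint URLs, model name, and key handling in this sketch are placeholders, not any particular vendor’s API.

```python
# Sketch: the same chat request sent either to a locally hosted open source
# model or to a cloud provider. URLs, model name, and API key are placeholders.
import os
import requests

LOCAL_URL = "http://localhost:8000/v1/chat/completions"    # assumed local server
CLOUD_URL = "https://api.example.com/v1/chat/completions"  # placeholder provider

def chat(prompt: str, use_local: bool = True) -> str:
    url = LOCAL_URL if use_local else CLOUD_URL
    headers = {}
    if not use_local:
        # Cloud providers typically require an API key; local servers often do not.
        headers["Authorization"] = f"Bearer {os.environ['LLM_API_KEY']}"
    resp = requests.post(
        url,
        headers=headers,
        json={
            "model": "your-model-name",  # placeholder
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```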

Although SAS products are enterprise-scale, the company has had an SMB division for many years. Its primary platform, Viya, can be scaled to customers’ needs, and SAS Viya Copilot is provided with any implementation of Viya. Viya’s Model Manager product allows testing multiple LLMs for any given use case to see which works best and controls the versioning of prompts. Viya also offers an Information Catalog that governs prompts. “Managing prompts is an underrated aspect of working with LLMs,” noted Carlton. “Building a prototype is relatively easy, but bringing it into production requires a lot more control and governance.”
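
The point about prompt governance is vendor-neutral, so the sketch below deliberately avoids any product-specific API (it is not how SAS Viya does it); it only illustrates keeping an auditable, versioned record of the prompts that reach production.

```python
# Minimal, vendor-neutral sketch of prompt versioning and governance.
# Not SAS Viya's API; it only shows the idea of tracking which prompt
# version is in production and who registered it.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PromptVersion:
    version: int
    template: str
    author: str
    created: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class PromptRegistry:
    def __init__(self):
        self._versions: dict[str, list[PromptVersion]] = {}

    def register(self, name: str, template: str, author: str) -> PromptVersion:
        """Add a new version of a named prompt and keep the full history."""
        history = self._versions.setdefault(name, [])
        pv = PromptVersion(version=len(history) + 1, template=template, author=author)
        history.append(pv)
        return pv

    def current(self, name: str) -> PromptVersion:
        """Return the latest registered version of a prompt."""
        return self._versions[name][-1]

registry = PromptRegistry()
registry.register("outreach_email", "Write a short email to {company}...", "jdoe")
print(registry.current("outreach_email").version)  # -> 1
```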
