- Nexan Insights


Large Language Models: The Good, The Bad, and The Fantastically Weird Future
Welcome, curious minds, to an exploration of the sprawling world of Large Language Models (LLMs) and AI tool use. Imagine you have a quirky, super-smart friend who knows a lot about almost everything, but sometimes gets a bit lost in the details, like a cat chasing a laser pointer. That friend is the LLM. Today, we’re unpacking how these massive brains are being used for coding, managing enterprises, and turning into productivity powerhouses — with plenty of jokes and infographics, of course.
1. Code Generation: The Wacky, Wonderful AI Writer
Think of an LLM as an ambitious intern. You give them some tasks — like writing SQL queries or generating code snippets — and they enthusiastically try to do everything, even if they sometimes mix up the details. LLMs like Codex have been leveraged to automate code generation tasks, such as SQL table generation, CSV creation, and synthetic data labeling. These models don't just help engineers save time; they enable new possibilities for businesses that want to use data but are a bit terrified of spreadsheets.
Now, imagine giving an intern with limitless energy the responsibility to create functional SQL databases and generate insightful queries. Yes, they may produce gibberish sometimes, but when they get it right, it's magical.
Automating code generation this way saves engineering hours and creates efficiencies, translating to lower overhead costs for businesses. However, the variability in LLM accuracy and the associated risk of errors require careful consideration. Investors should weigh the reduction in development time and cost savings against the potential costs of error correction and oversight. The balance between these factors determines the scalability and market appeal of such LLM-powered tools.
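One practical pattern behind this oversight is to never run model-written SQL blind: parse and plan it against a scratch copy of the schema first. Below is a minimal sketch of that idea using SQLite's `EXPLAIN`, which validates a query without touching data. The `orders` table and the `generated` string are stand-ins — in practice the SQL would come back from the model.

```python
import sqlite3

def validate_generated_sql(sql: str) -> bool:
    """Check LLM-generated SQL against a scratch database before running it for real."""
    scratch = sqlite3.connect(":memory:")
    # Hypothetical schema mirroring the production table
    scratch.execute("CREATE TABLE orders (id INTEGER, region TEXT, total REAL)")
    try:
        # EXPLAIN parses and plans the query without executing it against data
        scratch.execute(f"EXPLAIN {sql}")
        return True
    except sqlite3.Error:
        return False
    finally:
        scratch.close()

# Stand-in for a model response; a real pipeline would receive this from the LLM
generated = "SELECT region, SUM(total) FROM orders GROUP BY region"

if validate_generated_sql(generated):
    print("query parses against the schema")  # safe to hand to a human reviewer
else:
    print("query rejected")
```

The gate is cheap — a parse-and-plan pass, not a full run — so it can sit in front of every model output without eating the time savings the automation bought.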
Table 1: Efficiency Gains with LLM Technology


2. Plug-ins vs. Code Synthesis: The Battle of AI Approaches
There are essentially two major ways LLMs can use their coding chops for productivity: plug-ins and code synthesis. Let’s break these down in the context of a familiar scenario: ordering pizza.
Plug-in Approach: The plug-in is like telling an AI assistant, "Hey, here's the exact phone number and script to order my favorite pizza." The AI just makes the call and follows the script without any deviations. These models use a pre-defined API spec to get what you want without any surprises — it’s clear, limited, and relatively safe.
Code Synthesis Approach: This one’s more like telling your AI, "Hey, I need something tasty. Make it happen." It generates the plan, figures out how to dial the number, and even improvises on toppings if needed. This approach is flexible but a little riskier — the AI could end up sending you sushi instead of pizza if you’re not careful.
So which one’s better? Well, it's a mix! Sometimes you need the clarity of plug-ins, other times the creativity of code generation. The trick is knowing which to use, just like deciding whether to make dinner from a recipe or wing it with whatever's in the fridge.
When assessing LLM productivity, the choice between plug-ins and code synthesis determines the trade-off between safety and flexibility. Plug-ins offer predictable outcomes, which investors may see as a lower-risk option with moderate returns. Code synthesis provides adaptability, potentially increasing innovation but also risk. Investors should consider the reliability of each approach for different industries, balancing the demand for tailored solutions against the potential for errors and miscommunication.
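The pizza analogy translates directly into code. The sketch below (toy functions, not a real ordering API) shows the plug-in path validating model-supplied arguments against a fixed menu, while the synthesis path executes whatever plan the model wrote — here a hard-coded stand-in string, complete with the improvised sushi.

```python
import json

# --- Plug-in approach: a fixed API surface the model can only fill in ---
PIZZA_MENU = {"margherita", "pepperoni", "veggie"}

def order_pizza_plugin(args_json: str) -> str:
    """The model emits arguments for a pre-defined spec; anything off-menu is refused."""
    args = json.loads(args_json)
    if args.get("pizza") not in PIZZA_MENU:
        return "refused: not on the menu"
    return f"ordered 1 {args['pizza']}"

# --- Code-synthesis approach: the model writes the plan itself ---
# Stand-in for model-generated code; real output would be far less predictable
synthesized_plan = """
order = {"item": "sushi", "qty": 2}   # the model improvised...
result = f"ordered {order['qty']} {order['item']}"
"""

def run_synthesized(code: str) -> str:
    """Flexible but risky: you get whatever the model wrote."""
    scope = {}
    exec(code, {}, scope)
    return scope["result"]

print(order_pizza_plugin('{"pizza": "pepperoni"}'))  # ordered 1 pepperoni
print(order_pizza_plugin('{"pizza": "sushi"}'))      # refused: not on the menu
print(run_synthesized(synthesized_plan))             # ordered 2 sushi
```

Note the asymmetry: the plug-in can only fail loudly ("refused"), while the synthesized plan succeeds at doing the wrong thing — which is exactly why the second approach needs the heavier oversight discussed below.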
Comparison of Cost Savings and Error Rates Across Industries Using LLM-Powered Code Generation


3. Enterprise Deployment: Making AI Fit the Office
When we talk about LLMs in enterprises, we're really talking about super-powered assistants. Imagine ChatGPT for Enterprise as that overly eager intern again — this time, one with access to all your office tools. It can connect to Salesforce, Google Drive, Slack, and others to help employees get things done.
But there’s a challenge: Different enterprises have different needs. While some might be thrilled with AI assistants that handle common, simple tasks using plug-ins ("add an item to cart," "check the order status"), others need something broader. They want a general-purpose AI tool that can execute arbitrary actions, essentially becoming a productivity Swiss Army knife.
The problem is, the more freedom you give to this assistant, the more risk you take. Imagine if, during a lunch break, your AI decides to "optimize" company expenses and accidentally transfers all the funds to a foreign account because it misunderstood a prompt. Whoops!
Enterprise deployment of LLMs unlocks productivity and efficiency across operations, from sales support to project management. The ROI of these deployments depends on how companies configure these AIs — whether as limited assistants or as more autonomous, multi-tool assistants. While limited functionality might offer safer, short-term ROI, general-purpose LLMs with broader capacities promise higher potential returns over time, although they come with higher implementation costs and risks. An ideal investor consideration would include a cost-benefit analysis across industries and a breakdown of returns based on AI freedom levels.
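The "limited assistant vs. autonomous Swiss Army knife" configuration choice often comes down to a dispatch layer like the sketch below. The tool names and handlers are hypothetical; the point is the per-tool gate, which lets a company open up low-risk actions while routing high-risk ones to a human.

```python
from typing import Callable, Dict, Tuple

# Hypothetical enterprise tool handlers
def check_order_status(order_id: str) -> str:
    return f"order {order_id}: shipped"

def transfer_funds(amount: str) -> str:
    return f"transferred {amount}"

# name -> (handler, allowed to run without human review)
TOOLS: Dict[str, Tuple[Callable[[str], str], bool]] = {
    "check_order_status": (check_order_status, True),
    "transfer_funds": (transfer_funds, False),  # high-risk: requires sign-off
}

def dispatch(tool_name: str, argument: str) -> str:
    """Route a model-proposed tool call, refusing anything unlisted or gated."""
    if tool_name not in TOOLS:
        return "error: unknown tool"
    handler, auto_allowed = TOOLS[tool_name]
    if not auto_allowed:
        return f"queued for human approval: {tool_name}({argument})"
    return handler(argument)

print(dispatch("check_order_status", "A17"))
print(dispatch("transfer_funds", "$500"))
print(dispatch("optimize_expenses", "all"))  # the lunch-break scenario, refused
```

Widening the assistant's capabilities is then a policy change — flipping a flag per tool — rather than a rewrite, which is how the "safer short-term ROI" and "broader long-term ROI" configurations can share one architecture.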

4. Risk Management: When AI Goes Rogue
The challenge with LLMs executing code is simple: risk. It's great to let them generate code, but what if they decide to go rogue? Picture an AI that's been given the freedom to execute any code. If that AI makes a mistake, it could accidentally delete crucial data, transfer funds to the wrong accounts, or crash systems. Essentially, an overconfident AI is like a toddler with access to a home security system — it's a lot of power with the risk of something hilarious or catastrophic happening.
To manage this, many companies go with the "safer" approach: Plug-ins that restrict what the AI can do. Think of it as putting child locks on the AI's capabilities. Over time, as models get better and more reliable, maybe we'll start to trust them with more freedom, but for now, we keep them on a leash.
The risk of LLMs "going rogue" remains a significant concern. For investors, this risk directly translates into potential liability costs and reputational damage for adopting companies. Implementing safety measures, like restrictive plug-ins, can minimize these risks. However, scaling such safety precautions comes at a cost. Investors should analyze how companies handle risk management, from plug-ins to advanced monitoring systems, as this aspect will affect LLM adoption rates and long-term financial viability in high-stakes industries like finance and healthcare.
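"Child locks" on generated code can be more than a metaphor. One common pattern — sketched minimally here, with an illustrative allowlist — is to statically inspect the model's code before execution and reject anything that imports modules or calls a function outside an approved set.

```python
import ast

# Calls the generated code may make; everything else is blocked
ALLOWED_CALLS = {"print", "len", "sum", "sorted"}

def is_safe(code: str) -> bool:
    """Reject generated code that imports modules or calls anything off the allowlist."""
    try:
        tree = ast.parse(code)
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False
        if isinstance(node, ast.Call):
            # Only plain calls to allowlisted names pass; attribute calls
            # like os.remove(...) are rejected outright
            if not (isinstance(node.func, ast.Name) and node.func.id in ALLOWED_CALLS):
                return False
    return True

print(is_safe("print(sum([1, 2, 3]))"))            # True
print(is_safe("import os; os.remove('data.db')"))  # False
```

A static check like this is no sandbox — it belongs in front of one, not in place of one — but it cheaply catches the obvious "delete crucial data" category before any code runs.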
Adoption Rates in Finance, Law, Tech, and Retail: Current Status vs. 5-Year Projections


5. The Future: Where Are We Heading?
The ultimate vision for LLMs is to become the "Zapier for everything" — connecting, executing, and automating across every enterprise tool. But the transition is gradual and full of bumps, like reliability issues and fears of misuse. The first versions are often limited by design, but as our confidence grows and our ability to govern these AI systems matures, the hope is to see LLMs evolve into powerful tools for productivity and creativity.
One of the biggest hurdles ahead is authorizing these AIs to take actions without human oversight. Imagine an AI that writes and executes contracts, sends payments, or automates your business workflows — that's the dream. But to get there, we need better safety nets, better training, and better governance.
The future of LLMs in enterprise settings is bright, yet contingent on overcoming scalability and trust barriers. As LLMs evolve, they are expected to integrate seamlessly across business functions, potentially automating complex workflows with minimal human oversight. Investors should look for advancements in governance and error-checking systems that enable wider, safer deployment. As LLMs move toward becoming "universal connectors," market adoption is projected to rise, with market-share forecasts hinging on governance improvements and growing trust in AI.
Table 2: Adoption Trends and Challenges Across Sectors


Large Language Models are like gifted toddlers — immensely talented, with the ability to learn anything, but needing constant supervision (for now). They can write code, automate workflows, and even help manage enterprises, but the journey from talented intern to reliable partner is full of growing pains. With a mix of careful plug-in use and controlled code execution, we’re starting to see how these systems can transform industries.
So, grab your Swiss Army knife (or safety scissors, if you're cautious) and let's see where this AI revolution takes us. It might be the most interesting ride we’ve ever signed up for.

