The Rise of Generative AI
Where We Are, Who Wins, and What's Next


Imagine waking up in 2020 to find that the generative AI landscape just got a rocket booster strapped to its back. A few years later, we're speeding ahead on a rollercoaster where every new twist looks like it will fling us off the track, but instead we end up laughing at how smooth the ride is. Yeah, that's generative AI for you. Let's unpack this wild ride: the sudden breakthroughs, the competing players who want a seat at the table, and the staggering implications this tech has for business.
1. Generative AI Before and After the "2020 Leap"
Okay, so let's rewind to the pre-2020 era. We had GANs (Generative Adversarial Networks) doing their thing: generating new images by training two neural networks to duel each other until the generator could reliably fool the discriminator. Sounds cool, right? But there were limits. GANs were notoriously tricky to train (mode collapse, unstable gradients) and hard to steer toward anything as specific as a text prompt, so they never quite pushed generative capabilities to a new level.
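If you've never seen that duel written down, here's a minimal sketch of the idea in PyTorch: a generator fakes images from random noise while a discriminator tries to call its bluff. The layer sizes and hyperparameters are placeholder assumptions, just enough to show the adversarial loop, not something you'd train for real.

```python
# Minimal GAN "duel" sketch in PyTorch (illustrative sizes, not a real recipe).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

# Generator: noise -> fake image. Discriminator: image -> real/fake logit.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):
    batch = real_images.size(0)

    # 1) Train the discriminator: real images should score 1, fakes 0.
    fakes = G(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(D(real_images), torch.ones(batch, 1)) + bce(D(fakes), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator: it "wins" when the discriminator calls its fakes real.
    g_loss = bce(D(G(torch.randn(batch, latent_dim))), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Stand-in batch of "real" images, scaled to [-1, 1].
print(train_step(torch.rand(32, img_dim) * 2 - 1))
```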
Enter diffusion models, which around 2020 suddenly made this whole thing work in a way nobody quite expected. It's like watching an awkward teenager suddenly become the school basketball star. These models brought a new kind of flexibility that shifted the trajectory entirely: from "Hey, we can generate some okay-ish images" to "Whoa, this model is painting Captain America riding a unicorn in photorealistic detail."
With diffusion models, text-to-image generators (like DALL·E and Stable Diffusion) could flourish. They turned weird, obscure ideas into visuals that looked professional. What was the secret sauce? Well, instead of two networks fighting each other like in GANs, diffusion models start from pure noise and gradually refine it, step by step, like an artist staring at a chaotic paint blob until it resembles the Mona Lisa, only this artist runs at hyperspeed.
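Here's a toy sketch of that noise-to-image refinement loop. The denoiser below is a stand-in function rather than a trained network (in the real thing it's a large U-Net predicting the noise at each step), and the step count and scaling factors are made-up illustrative values.

```python
# Toy reverse-diffusion loop: start from pure noise, repeatedly strip a bit
# of predicted noise away, and re-inject a little randomness between steps.
import torch

def denoiser(x, t):
    # Stand-in for a trained noise-prediction network eps_theta(x, t).
    return 0.1 * x  # pretend a fixed fraction of x is "noise"

num_steps = 50
x = torch.randn(1, 3, 64, 64)               # step 0: pure Gaussian noise
for t in reversed(range(num_steps)):
    predicted_noise = denoiser(x, t)         # model guesses what part of x is noise
    x = x - predicted_noise                  # refine: remove a little of it
    if t > 0:
        x = x + 0.01 * torch.randn_like(x)   # stochastic sampling step

print(x.shape)  # the refined, image-shaped sample
```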
Performance Comparison Between GANs (Pre-2020) and Diffusion Models (Post-2020) Across Realism, Flexibility, and Efficiency

Evolution of Generative Models: The Shift from GANs to Diffusion Breakthroughs

2. Who Are the Big Players, and Why Are They Winning?
Think about the generative AI game as a giant poker tournament. You’ve got Google, OpenAI, NVIDIA, and a few other players who have the massive bankrolls to stay in the game—because generative AI is costly. They’ve got the compute power, access to data, and the AI talent that keeps them at the front of the line.
OpenAI's Codex powers GitHub Copilot, and it's still roughly 18 months ahead of its open-source competitors in terms of tech. But that doesn't mean smaller players aren't making waves. Stable Diffusion, for instance, came out of nowhere and caught everyone by surprise, like the kid at the table who suddenly pulls a royal flush.
The big guys’ edge? It comes down to three things: Compute, Data, and Know-how. When you have compute costs running into millions, only a few have the deep pockets to foot that bill. It’s no wonder Microsoft invested $1 billion into OpenAI—just to get their foot in the door. But then there’s Google with its Flan-T5, and NVIDIA with the compute clout to power anything they dream up.
However, open-source models are like public libraries—accessible, effective, but perhaps a bit behind in terms of the newest releases. That’s not a bad thing for a lot of businesses, because an 18-month-old AI model is still good enough for most practical use cases.
Table 1: Open-Source Models

Generative AI Leaders: The Competitive Edge of Google, OpenAI, and NVIDIA

3. Differentiation in the AI Application Layer
Let’s talk about applications. If you’re building on top of the same models, how do you stand out? The answer—differentiation in UI/UX and brand. Companies like Jasper simply took GPT-3 and built an incredibly user-friendly and popular copywriting interface on top of it. Simple as that. It's like using the same pizza dough but crafting a topping combo that nobody has tried before (and it turns out, people love pineapple on everything).
But what happens when giants like Google or Microsoft decide to get serious? Well, it might seem like they could just walk in and dominate, but red tape slows them down—like getting permission to put pineapple on a pizza in Italy. Bureaucracy and risk-aversion make them less nimble compared to startups.
The Hierarchy of Generative AI: From Foundation Models to User Experience

4. The Future: More Data Types, General Computer Interfaces, and Compression
Generative AI isn’t just about making pretty pictures. The next big milestone is expanding to handle new types of data—like video, audio, and database records. Imagine a model that can understand your CRM data, your videos, and even flip those into actionable insights. That’s the kind of thing AI companies are aiming at now—the all-in-one solution where data types blur into each other.
Then there's the holy grail—a general computer interface that listens to your natural language instructions and just does the thing. For example, "Hey computer, reduce carbon emissions in my CAD designs." The computer writes code, runs simulations, and delivers a new design—just like asking an assistant to handle a to-do list, except this assistant has a PhD in physics.
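Nobody ships that assistant today, but a rough sketch of the loop such an interface might run looks something like the following. Everything here is hypothetical scaffolding: generate_code() stands in for a Codex-style model, and a real system would sandbox execution and verify results before trusting them.

```python
# Natural-language instruction -> model-drafted script -> execute -> report back.
import subprocess
import tempfile

def generate_code(instruction: str) -> str:
    # Placeholder for an LLM call that turns the instruction into a runnable script.
    return 'print("Simulated CAD optimization for:", ' + repr(instruction) + ')'

def run_instruction(instruction: str) -> str:
    script = generate_code(instruction)
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(script)
        path = f.name
    # Run the generated script and capture its output for the user.
    result = subprocess.run(["python", path], capture_output=True, text=True, timeout=30)
    return result.stdout or result.stderr

print(run_instruction("reduce carbon emissions in my CAD designs"))
```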
And what about compression technology? This one’s like a magician’s trick—NVIDIA has already figured out how to compress video calls 1,000x by reconstructing facial motion based on a still image and audio. Imagine applying this to streaming services—sending ultra-compressed data that your TV upscales back to 4K. It's like sending a postcard of a painting and having a super-talented artist recreate the original right in front of you.
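The bandwidth math is the fun part. Here's a back-of-the-envelope sketch, assuming the sender ships one full reference frame and then only a tiny per-frame motion signal (modeled here as facial keypoints; the exact payload is an assumption, and the numbers are illustrative rather than NVIDIA's actual figures), showing how a three-digit compression ratio falls out of simple arithmetic.

```python
# Keyframe-plus-keypoints compression, back of the envelope.
RAW_FRAME_BYTES = 1280 * 720 * 3      # one uncompressed 720p RGB frame
KEYPOINTS_PER_FRAME = 68              # a common facial-landmark count (assumed)
BYTES_PER_KEYPOINT = 2 * 4            # x, y as 32-bit floats

def bytes_sent(num_frames: int) -> int:
    """One full reference frame, then just keypoints for every later frame."""
    return RAW_FRAME_BYTES + (num_frames - 1) * KEYPOINTS_PER_FRAME * BYTES_PER_KEYPOINT

frames = 30 * 60                      # one minute of video at 30 fps
raw = frames * RAW_FRAME_BYTES
neural = bytes_sent(frames)
print(f"raw: {raw/1e6:.1f} MB, keypoint stream: {neural/1e6:.2f} MB, "
      f"ratio: {raw/neural:.0f}x")
```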
The Next Frontier: Expanding Capabilities and Applications of Generative AI

5. Open-Source vs. Commercial Models: A Game of Leapfrog
Commercial vs. open-source—who wins? In generative AI, it's a game of leapfrog. OpenAI and Google are consistently ahead with cutting-edge models like GPT-4 or PaLM, but open-source models aren’t that far behind. And when the progress plateaus—as it inevitably will—those 18-month-old models become almost indistinguishable from the newest shiny commercial ones.
If there’s a big enough technological plateau, open source might take over—kind of like how everyone gets the same iPhone features eventually, even though the new one costs 4x more. And in the world of AI training, once the incremental improvements drop off, it’s going to be about customization and efficiency rather than having the absolute newest model.
The Competitive Leapfrog: Open-Source vs. Commercial AI Models

The landscape of generative AI is being shaped by compute power, proprietary data, and the know-how of an elite few, but there’s a lot more going on beneath the surface. It's about how effectively we can adapt these models to new, complex use cases and make them work for businesses. With newer types of data, clever ways of compressing information, and a computer interface that feels more like speaking to an engineer buddy, the possibilities are endless.
The open-source players, the proprietary giants, and even startups—everyone’s got a piece of the puzzle. Just remember: every time a new model is launched, the game changes, but the players stay the same.
The Expanding Generative AI Horizon: From Text to Full Automation

