How to Talk Tech: What is a Large Language Model (LLM)?

If you’re at all interested in tech, you’ve heard about large language models (or LLMs for short). In fact, you’re probably interacting with several LLMs every day, as they power many of the tools we use for everyday tasks.

But what is an LLM, exactly? How does it work, what can it do, and what can’t it do? Here’s a brief overview of what LLMs are, how they are trained and used, and why they’re such a big deal.

Like Autocorrect, but Not Quite

Let’s start by taking a step back. Language models—of regular size—are machine learning models that can predict and generate language for a specific purpose, like the autocorrect in your phone. Autocorrect can (usually) identify errors based on a set of rules, and predict what you want to write based on what you’ve written before. Other language models you may have used are OCR (the super handy technology that can pick up text from images), handwriting recognition, and speech recognition.
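To make the idea concrete, here's a toy sketch in Python (with a made-up three-sentence "typing history") of the core trick behind these predictive models: count which word tends to follow which, then suggest the most common continuation.

```python
from collections import Counter, defaultdict

# A tiny, made-up "typing history" standing in for the data a keyboard learns from.
history = "see you soon . see you later . talk to you soon ."

# Count how often each word follows each other word (a simple bigram model).
follows = defaultdict(Counter)
words = history.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def suggest(word: str) -> str:
    """Suggest the most likely next word, like a phone keyboard's predictive bar."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "?"

print(suggest("you"))   # prints "soon", the word that followed "you" most often
print(suggest("talk"))  # prints "to"
```

An LLM runs on the same predict-the-next-word principle, just with billions of learned parameters and vastly more text instead of a simple word-pair count.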

While useful, these technologies are usually limited to one function, or just a couple of functions, and can make many mistakes. Large language models take that limited functionality much further: they can handle seemingly endless language-related tasks with remarkable fluency. The difference lies mostly in the amount of data each kind of model was trained on. While autocorrect, OCR, and the other language models above were trained on modest quantities of data and fed a limited set of rules, LLMs were trained on staggeringly massive amounts of data, unlocking capabilities never seen before.

For example, OpenAI’s GPT-3 (and later GPT-3.5), the model behind the chatbot ChatGPT, has about 175 billion parameters, making the model itself roughly 800GB in size. This allows it to do incredible things, like write an essay, code an app, or create a recipe out of the three items in my fridge. But most importantly, it can do these things at a level that imitates or even surpasses what a human can do, and it accomplishes what would normally take hours of work in just a few seconds.
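If you want to poke at a model like this programmatically, the sketch below shows roughly what that looks like with OpenAI's official Python client (the `openai` package, version 1.x); the model name and prompt are just examples, and the exact client details may change over time.

```python
from openai import OpenAI

client = OpenAI()  # expects your API key in the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the same model family that powers the free ChatGPT
    messages=[
        {
            "role": "user",
            "content": "Write a quick dinner recipe using only eggs, spinach, and feta.",
        }
    ],
)

print(response.choices[0].message.content)  # the generated recipe
```

The chat interface does essentially the same thing on your behalf, which is why a one-line prompt can come back as a full essay, a working code snippet, or tonight's dinner plan.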

ChatGPT was launched on November 30, 2022. It was the pioneer that truly kicked off the LLM chatbot race; after its launch, competitors like Google had to scramble to put forward a passable alternative. Arguably, its biggest advantage is that it can handle very large prompts and give longer outputs than its competitors. It’s also excellent at translation, and has an incredible command of language. On the other hand, the free version of ChatGPT doesn’t have access to the internet, and its training data cuts off in September 2021, so it’s not in the loop on current affairs. This can also be an advantage, as ChatGPT is excellent at giving the kind of concise answers you want when you’re not in the mood for an internet treasure hunt.

Getting Futuristic

Even though LLM technology is still emerging, the latest-generation LLMs like GPT-4 are improving vastly upon their predecessors. While this is immensely useful for us as users, it also means we can easily be fooled into thinking the machine is alive. A lot of confusion springs up online when people ask ChatGPT questions about its own thoughts and feelings, and then take the output as proof that the bots are sentient and have aspirations of their own. Those characteristics are beyond what an LLM can do, and belong in the realm of science fiction, at least so far.

Actual sentience in AI would mean that an artificial general intelligence (AGI) exists: a system that, like us, has thoughts of its own rather than merely a prediction of what a human would say in that exact scenario. So far, AGI is a purely hypothetical concept that belongs to movies and speculation, as no technology today comes close to that achievement. For a taste of how a true AGI might work, you need look no further than Hollywood; the movie “Her” (2013), starring Joaquin Phoenix and Scarlett Johansson, is an excellent example.

AI-Supercharged Search Engines

Bing Chat, by Microsoft, is also built on OpenAI’s GPT models, but it has access to GPT-4 (and it’s free to use, though supported by advertisements). Bard, on the other hand, is powered by Google’s own PaLM 2 LLM. In some areas it lags behind GPT in capability, but in others it’s currently a lot stronger. On math problems, for example, ChatGPT has been shown to suffer from what is called “drift”, in which the model’s performance on a task actually gets worse over time. In fact, one recent study found that ChatGPT went from answering a set of math questions correctly 98% of the time to just 2% of the time over the course of a few months.

Where Bing and Bard excel is at the one thing ChatGPT can’t do: they work primarily as AI-powered search engines, searching the internet to gather information after every prompt. This means they can answer questions about current events and gather facts about niche topics that ChatGPT may not have enough training data on.
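Neither Bing's nor Bard's internals are public, but the general "search first, then answer" pattern looks roughly like the sketch below. Here `web_search` is a hypothetical stand-in for whatever search backend you plug in, and the OpenAI client is used only as an example model API.

```python
from openai import OpenAI

client = OpenAI()

def web_search(query: str) -> list[str]:
    """Hypothetical search backend: returns short text snippets for a query."""
    raise NotImplementedError("plug in the search API of your choice here")

def grounded_answer(question: str) -> str:
    # 1. Gather fresh information from the web for this specific question.
    snippets = web_search(question)
    # 2. Ask the model to answer using only those snippets, citing them.
    prompt = (
        "Answer the question using only the sources below, and cite them.\n\n"
        + "\n\n".join(snippets)
        + f"\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

Because the model only sees what the search step just fetched, yesterday's news is fair game, even if the underlying training data is years old.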

One of the biggest pros of using LLMs in search is that the user can have an ongoing “conversation” with the search engine, since the conversation has persistence. So instead of just searching for “Hiking trails in San Francisco,” you can say “Could you give me a list of 20 popular hiking trails in the San Francisco area,” then, based on the answers, follow up with “Tell me more about your first suggestion” and “How do I get there from where I am now?” and so on.
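Under the hood, that "persistence" is usually nothing more than the chat resending the entire conversation so far with every new prompt. Here's a minimal sketch, again using OpenAI's Python client purely as an example:

```python
from openai import OpenAI

client = OpenAI()
conversation = []  # every turn so far, resent with each new prompt

def ask(prompt: str) -> str:
    conversation.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=conversation,  # the earlier turns act as the model's "memory"
    )
    answer = response.choices[0].message.content
    conversation.append({"role": "assistant", "content": answer})
    return answer

ask("Could you give me a list of 20 popular hiking trails in the San Francisco area?")
ask("Tell me more about your first suggestion.")  # resolves because the history is included
```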

Finally, each of these bots’ biggest draw is the ecosystem it’s integrated into: if you’re a heavy user of Microsoft tools, you’ll find Bing already built into them, and the same goes for Bard if you’re already working with the Google suite.

One last thing you need to know about these tools is that all LLM-powered chatbots are prone to “hallucinations”, that is, stating made-up facts in an extremely convincing way. Writer Satyen Bordoloi shared an amusing example recently:

“When I asked ChatGPT ‘what is the world record for crossing the English Channel entirely on foot’, it replied: ‘The world record for crossing the English Channel entirely on foot is held by Christof Wandratsch of Germany, who completed the crossing in 14 hours and 51 minutes on August 14, 2020.’”

Unless Christof was able to walk on water, that wouldn’t make a lot of sense.

To help counter this shortcoming, Bing and Bard embed citations in their answers, making them easier to fact-check.

Harness the Power of LLMs with Plan A

Intrigued, but not sure where to start?

Plan A Technologies has a stellar team of AI and LLM experts who can help you apply these innovations to your organization. Our AI specialists can work with you to figure out exactly how to integrate AI into your business, build world-class AI-powered software for you from scratch, or add AI capabilities to your existing software. Just get in touch.
