What happened with AI last year, and what it means for 2025

The tech industry has never been accused of moving slowly. The exponential explosion of AI tools in 2024, though, set a new standard for fast-moving: the final months of 2024 alone brought more change than the previous few years combined. If you have not been actively paying attention to AI, now is the time to start.

I have been intently watching the AI space for over a year. I started from a place of great skepticism, unwilling to internalize the hype until I could see real results. I can now say with confidence that, when applied to the right problem with the right expectations, AI can deliver significant advancements in any industry.

In 2024, not only did the large language models get more powerful and extensible, but tools were built on top of them to solve real business problems. Because of this, skepticism about AI has shifted to cautious optimism. Spurred by the Fortune 500’s investments and early results, companies of every shape and size are starting to harness the power of AI for efficiency and productivity gains.

Let’s review what happened in the fourth quarter of 2024 as a microcosm of the year in AI.

New Foundational Models in the AI Space

A foundational large language model (LLM) is one that other AI tools can be built from. The major foundational LLMs have been ChatGPT, Claude, Llama, and Gemini, operated by OpenAI & Microsoft, Anthropic, Meta, and Google, respectively.

In 2024, additional key players entered the space to create their own foundational models. 

Amazon

Amazon has been pumping investment into Anthropic, as its operations are huge consumers of AI to drive efficiency. With its own internal foundational LLM, Amazon removes the need to share operational data with an external party. Further, as it did with its AWS business, it can monetize its own AI services built on its own models. Amazon Nova launched in early December.

xAI

In May of 2024, xAI secured funding to begin creating and training its own foundational models. Founder Elon Musk was a co-founder of OpenAI. The company announced in June that it would build the world’s largest supercomputer, and it was operational by December.

Nvidia

In October, AI chip-maker Nvidia announced its own LLM, named Nemotron, to compete directly with OpenAI and Google — organizations that rely on its chips to train and power their own LLMs.

Rumors of more to come

Apple Intelligence rolled out slowly in 2024 and relies on OpenAI’s models for some requests. Industry insiders think it is natural to expect Apple to create its own LLM and position it as a privacy-first, on-device service.

Foundational Model Advancements

While some companies are starting to create their own models, the major players have released advanced tools that can use a range of inputs to create a multitude of outputs: 

Multimodal Processing

AI models can now process and understand multiple types of data together, such as images, text, and audio. This allows for more complex interactions with AI tools. 
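
For readers who want to see what this looks like in practice, here is a minimal sketch of a single multimodal request using OpenAI’s Python SDK, combining a text question and an image in one message. The model name and image URL are illustrative placeholders, not a recommendation.

```python
# A minimal multimodal request: one message that combines text and an image.
# Assumes the official OpenAI Python SDK (`pip install openai`) and an API key
# in the OPENAI_API_KEY environment variable. The image URL is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # a multimodal model that accepts both text and images
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What product is shown in this photo?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```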

Google’s NotebookLM was a big hit this year for its ability to use a range of data as sources, from Google Docs to PDFs to web links for text, audio, and video. The tool essentially allows the creation of small, custom retrieval-augmented generation (RAG) databases to query and chat with.
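
To make the RAG pattern less abstract, here is a toy sketch of what tools in this category do behind the scenes: embed a few documents, retrieve the one most relevant to a question, and hand it to a model as context. It assumes OpenAI’s Python SDK, and the model names are illustrative choices, not what NotebookLM actually uses.

```python
# Toy retrieval-augmented generation (RAG): embed documents, find the closest
# match to a question, and include that document as context in the prompt.
# Assumes the OpenAI Python SDK and numpy; model names are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()

docs = [
    "Our Q4 sales grew 12% year over year, driven by the new product line.",
    "The support team resolved 94% of tickets within one business day.",
]

def embed(text: str) -> np.ndarray:
    result = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(result.data[0].embedding)

doc_vectors = [embed(d) for d in docs]

question = "How fast does support respond?"
q_vec = embed(question)

# Cosine similarity picks the document most relevant to the question.
scores = [q_vec @ d / (np.linalg.norm(q_vec) * np.linalg.norm(d)) for d in doc_vectors]
best = docs[int(np.argmax(scores))]

answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": f"Context: {best}\n\nQuestion: {question}"}],
)
print(answer.choices[0].message.content)
```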

Advanced Reasoning

OpenAI’s o1 reasoning model (pronounced “Oh One”) uses step-by-step “chain of thought” processing to solve complex problems, including math, coding, and scientific tasks. This has led to AI tools that can draw conclusions, make inferences, and form judgments based on information, logic, and experience. The queries take longer but are more accurate and provide more depth.
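
As an illustration, calling a reasoning model looks much like calling any chat model; the step-by-step “thinking” happens server-side, which is why responses take longer. This sketch assumes OpenAI’s Python SDK, and the model name reflects what was available at the time of writing.

```python
# Calling a reasoning model uses the same API shape as a chat model; the model
# works through intermediate steps server-side, so responses take longer.
# Assumes the OpenAI Python SDK; the model name may change over time.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        {
            "role": "user",
            "content": "A train leaves at 2:15pm and averages 48 mph. "
                       "How far has it traveled by 5:00pm? Show your reasoning.",
        }
    ],
)

print(response.choices[0].message.content)
```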

Google’s Deep Research is a similar product that was released to Gemini users in December.

Enhanced Voice Interaction

More and more AI tools can engage in natural and context-aware voice interactions — think Siri, but way more useful. This includes handling complex queries, understanding different tones and styles, and even mimicking personalities such as Santa Claus.

Vision Capabilities

AI can now “see” and interpret the world through cameras and visual data. This includes the ability to analyze images, identify objects, and understand visual information in real time. Examples include Meta’s DINOv2, OpenAI’s GPT-4o, and Google’s PaliGemma.

AI can also interact with screen displays on devices, allowing for a new level of awareness of sensory input. OpenAI’s desktop app for Mac and Windows is contextually aware of which apps are available and in focus. Microsoft’s Copilot Vision integrates with the Edge browser to analyze web pages as users browse. Google’s Project Mariner prototype allows Gemini to understand screen context and interact with applications.

While still early, and not without security and privacy concerns, this technology will lead to more advancements in “agentic AI,” which will continue to grow in 2025.

Agentic Capabilities

AI models are moving towards the ability to take actions on behalf of users. No longer confined to chat interfaces alone, these new “Agents” will perform tasks autonomously once trained and set in motion.
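
To ground the idea, here is a deliberately simplified sketch of the loop at the heart of most agent frameworks: a model chooses an action, the program executes it, and the observation is fed back until the goal is met. The model and tool here are stubs invented purely for illustration.

```python
# A deliberately simplified agent loop: the "model" chooses an action, the
# program executes it, and the observation is fed back until the goal is met.
# Real agent frameworks replace `fake_model` with an LLM call and add guardrails.

def fake_model(history: list[str]) -> str:
    """Stand-in for an LLM that plans the next action from the history."""
    if not any("result:" in h for h in history):
        return "lookup_order_status"   # first, gather information
    return "done"                      # then decide the task is complete

def lookup_order_status() -> str:
    """A hypothetical tool the agent is allowed to call."""
    return "Order #1234 shipped on Tuesday."

TOOLS = {"lookup_order_status": lookup_order_status}

history = ["goal: tell the customer where their order is"]
while True:
    action = fake_model(history)
    if action == "done":
        break
    observation = TOOLS[action]()             # execute the chosen tool
    history.append(f"result: {observation}")  # feed the result back

print(history[-1])  # -> "result: Order #1234 shipped on Tuesday."
```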

Note: Enterprise leader Salesforce launched Agentforce in September 2024. Despite the name, these are not autonomous agents in the same sense. Custom agents must be trained by humans and given instructions, parameters, prompts, and success criteria. Right now, these agents are more like interns that need management and feedback.

Specialization

2024 also saw an increase in models designed for specific domains and tasks. With reinforcement fine-tuning, companies are creating specialized tools for legal, healthcare, finance, stocks, and sports applications.

Examples include Sierra, which offers a purpose-trained customer service platform, and LinkedIn’s agents that act as hiring assistants.

What this all means for 2025

It’s clear that AI models and tools will continue to advance, and businesses that embrace AI will be in a better position to thrive. To be successful, businesses need an experimental mindset of continuous learning and adaptation: 

  • Focus on AI Literacy — Ensure your team understands AI and its capabilities. Start with use cases that add value immediately.
  • Prioritize Data Quality — AI models need high-quality, relevant data to be effective. Start cleaning and preparing your internal data before implementing AI at scale.
  • Combine AI and Human Expertise — Use AI to augment human capabilities, not replace them. Think of AI as a junior employee who will require input, alignment, and reinforcement.
  • Experiment and Iterate — Be willing to try new approaches and adapt based on results. Include measurement in your plans — collect data before and after to benchmark progress. 
  • Embrace Ethical AI — Implement policies to ensure AI is used responsibly and ethically. Investigate ways the company can offset carbon and support cleaner energy, as AI tools require more electricity than non-AI tools. Understand hallucinations, as well as the newer and more complex problem of “scheming” in reasoning models.
  • Prepare for Change — Understand that technology is constantly evolving, and business models will need to adapt.

While the models will continue to get better into 2025, don’t wait to explore AI. Even if the existing models never improve, they are powerful enough to drive significant gains in business. Now is the time to implement AI in your business. Choose a model that makes sense and is low-friction — if your organization uses Microsoft products, for example, start with a trial of the AI add-ons for its office tools. Start accumulating experience with the tools at hand, and then expand to multiple models to evaluate more complex AI options that may have greater business impact. It almost doesn’t matter which you choose, as long as you get started.

Oomph has started to experiment with AI ourselves, and Drupal has made exciting announcements about integrating AI tools into the authoring experience. If you would like more information, please reach out for a chat.


ARTICLE AUTHOR

J. Hogue

Director, Design & User Experience

I have over 20 years of experience in design and user experience. As Director of Design & UX, I lead a team of digital platform experts with strategic thinking, cutting-edge UX practices, and visual design. I am passionate about solving complex business problems by asking smart questions, probing assumptions, and envisioning an entire ecosystem to map ideal future states and the next steps to get there. I love to use psychology, authentic content, and fantastically unique visuals to deliver impact, authority, and trust. I have been a business owner and real-estate developer, so I know what it is like to run a business and communicate a value proposition to customers. I find that honest and open communication, a willingness to ask questions, and empathy toward individual points of view are the keys to successful creative solutions.

I live and work in Providence, RI, and love this post-industrial city so much that I maintain ArtInRuins.com, a documentation project about the history and evolution of the local built environment. I help to raise two amazing girls alongside my equally strong and creative wife and partner.