When companies merge, successfully combining digital assets like websites, intranets, apps, and other platforms takes more than just squishing things together. Poorly merged digital properties can diminish brand equity, squander years of SEO value, and even drive away customers or employees — ultimately tanking the value the merger was supposed to create.
The challenge is that you’re bringing together two end-user communities with different experiences and expectations. And it’s easy to assume the bigger or faster-growing company has the better digital platform, even when there’s a lot you could learn from the smaller company’s practices.
That’s why we recommend a collaborative, UX-centered approach to combining digital properties, to ensure you’re leveraging the best of both worlds. In this article, we’ll share a sample of UX analyses that can help set up a new combined platform for success.
First, let’s talk about leveraging the right mindset.
A Different Approach
Typically, when combining digital platforms, companies tend to take a top-down approach, meaning there’s a hierarchy of decision-making based on which platform is believed to be better. But the calculus can change a lot when those decisions are made from the end users’ point of view instead.
From a practical standpoint, these companies are usually trying to create efficiencies and add new competencies while carefully messaging the benefits of the merger for their customers. They focus on things like branding, SEO, and consolidating social media — all of which are important, and none of which truly shapes the platform user’s experience.
To be fair, before the merger, both companies were likely focused on creating the best possible user experience for their customers. Now that they’re joining forces, each brings a unique set of learnings and techniques to the table. Which raises the question: what if your new partner handles some aspects of UX better than you do?
Working collaboratively through in-depth Acquisition Analysis gives you an opportunity to extract the best from all digital properties, as either company’s platforms may have features, functionality, or content that does a better job of meeting business goals. How do you know which elements will be more successful? By auditing both platforms with tools like the ones we’ll talk about next.
When merging, don’t assume the bigger company should swallow the smaller and all its digital assets. There might be many things that the smaller company is doing better.
Conducting UX Audits
To preserve SEO value and cull the best-performing content for the new platform, many companies conduct content and SEO audits, often using free or paid tools. These usually involve flagging duplicate content, comparing performance metrics, and using R.O.T. (redundant/outdated/trivial) analyses.
What many organizations miss, however, is the opportunity to conduct UX and customer audits while directly comparing digital platforms. These can provide invaluable insights about the mental models and behaviors of users.
At a minimum, we recommend comparing both platforms using Nielsen Norman Group’s 10 usability heuristics. Setting the standard for user interface design for almost three decades, these guidelines give you a great baseline for identifying which parts of each platform are the most user-friendly. You can also compare heatmaps and scrollmaps to assess which platform does a better job of engaging users in ways that matter to your business goals.
Here are some other examples of UX analyses we conduct for clients when merging digital platforms:
Five second test
With existing customers or representatives of your target audience, ask users to view a page for five seconds and then answer a few questions about it. You’re looking for gut feelings here, as first impressions can tell you a lot about a page’s effectiveness.
Questions might include:
- What does the site tell you about the company’s personality?
- What’s something you think you could do on that website?
- Did anything stand out as new or surprising to you?
This test should be done for multiple pages on a website, not just the homepage. It’s especially valuable for product or service pages, where you can assess whether specific features are easily visible and accessible.
Customer interview comparison
For this assessment, enlist 5 to 10 customers for each business. Have the customers of Company A use Company B’s platform and vice versa, asking them to explain the value each company offers. You can also ask users what’s missing when they use the other company’s website. What’s different and better (or worse) than before? The answers can help you determine which brand and functional elements are essential to the user experience for each platform.
This test can also provide insights about the impact of elements you may not have previously considered, like the quality of photography or the order in which information is presented. These elements can set expectations and affect how people use the platform, all of which contributes to building users’ trust.
For a more in-depth analysis of user engagement and preferences, try gathering a combination of quantitative and qualitative data.
Site map analysis
Given that the merging companies are likely in the same or similar industries, there will probably be overlap between the site maps for each company’s website. But there will also be elements that are unique to each site. To successfully blend the information architecture of both properties, you’ll need to determine which elements work best for your target audience.
In addition to comparing analytics for the different websites to see which elements are most effective, here are a few other research methods we recommend:
Cohort analysis
Looking at other websites in your industry, examine their site structures and the language they use (e.g. “Find a doctor” vs. “Find a provider”). This reflects visitors’ expectations of what information they’ll get and where they can find it. You can also identify areas where you should deviate from the norm, including language that’s more authentic and unique to your brand.
Card sort
Card sorting helps you understand how to structure content in a way that makes sense to your users, so they can find what they’re looking for. Participants group labeled notecards according to criteria they would naturally use. For example, if you have a car rental site, you could ask users to organize different vehicle models into groups that make sense to them. While your company might use terms like “family car” or “executive sedan,” your customers might have completely different perceptions.
Tree testing
Tree testing helps you evaluate a proposed site structure by asking users to find items based solely on the menu structure and terminology. Using an online interface (Treejack is a popular one) that displays only navigation links without layout or design, users are asked to complete a series of 10–15 tasks. This can show you how easy it is for site visitors to find and use information. This test is often used after card sorting sessions to confirm that the findings from the card sorting exercise are correct.
Use Information, Not Intuition
As we said, just because a larger company acquires a smaller one doesn’t mean the larger company’s digital properties have nothing to learn from the smaller company’s. Better practices could exist in either place, and it would be a shame to lose any unique value the smaller company’s platform might offer.
With so many robust tools available for UX analysis, there’s no reason not to gather the crucial data that will help you decide which features of each platform will best achieve your business goals. When combining digital properties, the “1 + 1 = 3” trope only works if you truly glean the best of both worlds.
Need help laying the groundwork for merging separate digital platforms? Our strategic UX experts can craft a set of research exercises to help your team make the best possible decisions. Contact us today to learn more.
Is your digital platform still on Drupal 7? By now, hopefully, you’ve heard that this revered content management system is approaching its end of life. In November 2023, official Drupal 7 support from the Drupal community will end, including bug fixes, critical security updates, and other enhancements.
If you’re not already planning for a transition from Drupal 7, it’s time to start.
With an estimated 12 million websites hacked or infected at any given time, Drupal 7’s end of life carries significant security implications for your platform. In addition, any plug-ins or modules that power your site won’t be supported, and site maintenance will depend entirely on your internal resources.
Let’s take a brief look at your options for transitioning off Drupal 7, along with five crucial planning considerations.
What Are Your Options?
In a nutshell, you have two choices: upgrade to Drupal 9, or migrate to a completely new CMS.
With Drupal 9’s advanced features and functionalities, migrating from Drupal 7 to 9 involves much more than applying an update to your existing platform. You’ll need to migrate all of your data to a brand-new Drupal 9 site, with a whole new theme system and platform requirements.
Drupal 9 also requires a different developer skill set. If your developers have been on Drupal 7 for a long time, you’ll need to factor a learning curve into your schedule and budget, not only for the migration but also for ongoing development work and maintenance after the upgrade.
As an alternative, you could take the opportunity to build a new platform with a completely different CMS. This might make sense if you’ve experienced a lot of pain points with Drupal 7, although it’s worth investigating whether those problems have been addressed in Drupal 9.
Drupal 10
What of Drupal 10, you ask? Drupal 10 is slated to be released in December 2022, but it will take some time for community-contributed modules to be updated to support the new version. Once they’re ready, updating from Drupal 9 to Drupal 10 should be a breeze.

Preparing for the Transition
Whether you decide to migrate to Drupal 9 or a new CMS, your planning should include the following five elements:
Content Audit
Do a thorough content inventory and audit, examine user analytics, and revisit your content strategy, so you can identify which content is adding real value to your business (and which isn’t).
Some questions to ask:
- Does your current content support your strategy and business goals?
- How much of the content is still relevant for your platform?
- Does the content effectively engage your target audience?
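Analytics and a spreadsheet will answer most of these questions, but you can kick-start the inventory with a quick query against the Drupal 7 database itself. Here’s a minimal sketch using Drupal 7’s database API; the two-year staleness threshold is our assumption, so adjust it to whatever “outdated” means for your content strategy.

```php
<?php
// Run with: drush php-script content_audit.php (on a bootstrapped D7 site).
// Counts published nodes per content type that haven't been edited in two
// years: prime candidates for the R.O.T. (redundant/outdated/trivial) pile.
$threshold = strtotime('-2 years');

$result = db_query(
  'SELECT type, COUNT(*) AS total
   FROM {node}
   WHERE status = 1 AND changed < :threshold
   GROUP BY type
   ORDER BY total DESC',
  array(':threshold' => $threshold)
);

foreach ($result as $row) {
  print $row->type . ': ' . $row->total . PHP_EOL;
}
```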
Another thing to consider is your overall content architecture: Is it a Frankenstein that needs refinement or revision? Hold that thought; in just a bit, we’ll cover some factors in choosing a CMS.
Design Evaluation
As digital experiences have evolved, so have user expectations. Chances are, your Drupal 7 site is starting to show its age. Be sure to budget time for design effort, even if you would prefer to keep your current design.
Drupal 7 site themes can’t be moved to a new CMS, or a different version of Drupal, without a lot of development work. Even if you want to keep the existing design, you’ll have to retheme the entire site, because you can’t apply your old theme to a new backend.
You’ll also want to consider the impact of a new design and architecture on your existing content, as there’s a good chance that even using an existing theme will require some content development.
Integrations
What integrations does your current site have, or need? How relevant and secure are your existing features and modules? Be sure to identify any modules that have been customized or are not yet compatible with Drupal 9, as they’ll likely require development work if you want to keep them.
Are your current vendors working well for you, or is it time to try something new? There are more microservices available than ever, offering specialized services that provide immense value with low development costs. Plus, much of the functionality that required contributed modules in D7 is now part of the D9 core.
CMS Selection & Architecture
Building the next iteration of your platform requires more than just CMS selection. It’s about defining an architecture that balances cost with flexibility. Did your Drupal 7 site feel rigid and inflexible? What features do you wish you had – more layout flexibility, different integrations, more workflow tools, better reporting, a faster user interface?
For content being brought forward, will migrating be difficult? Can it be automated? Should it be? If you’re not upgrading to Drupal 9, you could take the opportunity to switch to a headless CMS product, where the content repository is independent of the platform’s front end. In that case, you’d likely be paying monthly fees for a product but have no maintenance to manage. On the flip side, many headless CMS platforms (Contentful, for example) don’t allow for customization.
Budget Considerations
There are three major areas to consider in planning your budget: hosting costs, feature enhancements, and ongoing maintenance and support. All of these are affected by the size and scope of your migration and the nature of your internal development team.
Key things to consider:
- Will the newly proposed architecture be supported by existing hosting? Will you need one vendor or several (if you’re using microservices)?
- If you take advantage of low or no-cost options for hosting, you could put the cost savings toward microservices for feature enhancements.
- Given Drupal 9’s platform requirements, you’ll need to factor in the development costs of upgrading your platform infrastructure.
- Will your new platform need regular updates? How often might you add feature enhancements and performance optimizations? And do you have an internal dev team to handle the work?
Look on the Bright Side
Whatever path you choose, transitioning from Drupal 7 to a new CMS is no small task. But there’s a bright side, if you view it as an opportunity. A lot has changed on the web since Drupal 7 was launched! This is a great time to explore how you can update and enhance your platform user experience.
Re-platforming is a complex process, and it’s one that we love guiding clients through. By partnering with an experienced Drupal development firm, most migrations can be planned and implemented more quickly, with better outcomes for your brand and your audience.
From organizing the project, to vendor and service selection, to budget management, we’ve helped a number of organizations bring their platforms into the modern web. To see an example, check out this design update for the American Veterinary Medical Association.
Need help figuring out the best next step for your Drupal 7 platform? Contact us today for a free consultation.
While the terminology was first spotlighted by IBM back in 2014, the concept of a composable business has recently gained much traction, thanks in large part to the global pandemic. Today, organizations are combining more agile business models with flexible digital architecture, to adapt to the ever-evolving needs of their company and their customers.
Here’s a high-level look at building a composable business.
What is a Composable Business?
The term “composable” encompasses a mindset, technology, and processes that enable organizations to innovate and adapt quickly to changing business needs.
A composable business is like a collection of interchangeable building blocks (think: Lego) that can be added, rearranged, and jettisoned as needed. Compare that with an inflexible, monolithic organization that’s slow and difficult to evolve (think: cinderblock). By assembling and reassembling various elements, composable businesses can respond quickly to market shifts.
Gartner offers four principles of composable business:
- Discovery: React faster by sensing when change is happening.
- Modularity: Achieve greater agility with interchangeable components.
- Orchestration: Mix and match business functions to respond to changing needs.
- Autonomy: Create greater resilience via independent business units.
These four principles shape the business architecture and technology that support composability. From structural capabilities to digital applications, composable businesses rely on tools for today and tomorrow.
So, how do you get there?
Start With a Composable Mindset…
A composable mindset involves thinking about what could happen in the future, predicting what your business may need, and designing a flexible architecture to meet those needs. Essentially, it’s about embracing a modular philosophy and preparing for multiple possible futures.
Where do you begin? Research by Gartner suggests the first step in transitioning to a composable enterprise is to define a longer-term vision of composability for your business. Ask forward-thinking questions, such as:
- How will the markets we operate in evolve over the next 3-5 years?
- How will the competitive landscape change in that time?
- How are the needs and expectations of our customers changing?
- What new business models or new markets might we pursue?
- What product, service, or process innovations would help us outpace competitors?
These kinds of questions provide insights into the market forces that will impact your business, helping you prepare for multiple futures. But you also need to adopt a modular philosophy, thinking about all the assets in your organization — every bit of data, every process, every application — as the building blocks of your composable business.
…Then Leverage Composable Technology
A long-term vision helps create purpose and structure for a composable business. Technology provides the tools that bring it to life. Composable technology begets sustainable business architectures that are ready to address the challenges of the future, not the past.
For many organizations, the shift to composability means evolving from an inflexible, monolithic digital architecture to a modular application portfolio. The portfolio is made up of packaged business capabilities, or PBCs, which form the foundation of composable technology.
The ABCs of PBCs
PBCs are software components that provide specific business capabilities. Although similar in some respects to microservices, PBCs address more than technological needs. A specific application may leverage a microservice to provide a feature; when that feature represents a business capability that extends beyond the application at hand, it’s a PBC.
Because PBCs can be curated, assembled, and reassembled as needed, you can adapt your technology practically at the pace of business change. You can also experiment with different services, shed things that aren’t working, and plug in new options without disrupting your entire ecosystem.
When building an application portfolio with PBCs, the key is to identify the capabilities your business needs to be flexible and resilient. What are the foundational elements of your long-term vision? Your target architecture should drive the business outcomes that support your strategic goals.
Build or Buy?
PBCs can either be developed internally or sourced from third parties. Vendors may include traditional packaged-software vendors and nontraditional parties, such as global service integrators or financial services companies.
When deciding whether to build or buy a PBC, consider whether your target capability is unique to your business. For example, a CMS is something many businesses need, and thus it’s a readily available PBC that can be more cost-effective to buy. But if, through vendor selection, you find that your particular needs are unique, you may want to invest in building your own.
Real-World Example
While building a new member retention platform for a large health insurer, we discovered a need to quickly look up member status during the onboarding process. Because the company had a unique way of identifying members, the lookup required custom software.
Although initially conceived in the context of the platform being created, a composable mindset led to the development of a standalone, API-first service — a true PBC providing member lookup capability to applications across the organization, and waiting to serve the applications of the future.
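To make that concrete, here’s a minimal sketch of what the public face of such a PBC might look like. Everything in it is hypothetical (the route, the parameter name, and the hard-coded data standing in for the insurer’s backend), but it shows the shape: a small, self-contained, API-first service that any application in the organization can call.

```php
<?php
// member-lookup.php: hypothetical sketch of an API-first PBC endpoint.
// Run locally with: php -S localhost:8080 member-lookup.php
// A real service would sit behind authentication and query the insurer's
// systems of record; a hard-coded array stands in for that here.
header('Content-Type: application/json');

$members = [
  'A-1001' => ['status' => 'active', 'plan' => 'gold'],
  'A-1002' => ['status' => 'lapsed', 'plan' => 'silver'],
];

$id = $_GET['member_id'] ?? '';

if (!isset($members[$id])) {
  http_response_code(404);
  echo json_encode(['error' => 'Member not found']);
  exit;
}

// Because the capability lives behind a stable HTTP contract, the
// onboarding platform, a call-center tool, or any future application
// can consume it without knowing how member status is derived.
echo json_encode(['member_id' => $id] + $members[$id]);
```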
A Final Word
Disruption is here to stay. While you can’t predict every major shift, innovation, or crisis that will impact your organization, you can (almost) future-proof your business with a composable approach.
Start with the mindset, lay out a roadmap, and then design a step-by-step program for digital transformation. The beauty of an API-led approach is that you can slowly but surely transform your technology, piece by piece.
If you’re interested in exploring a shift to composability, we’d love to help. Contact us today to talk about your options.
Why are microservices growing in popularity for enterprise-level platforms? For many organizations, a microservice architecture provides a faster and more flexible way to leverage technology to meet evolving business needs. For some leaders, microservices better reflect how they want to structure their teams and processes.
But are microservices the best fit for you?
We’re hearing this question more and more from platform owners across multiple industries as software monoliths become increasingly impractical in today’s fast-paced competitive landscape. However, while microservices offer the agility and flexibility that many organizations are looking for, they’re not right for everyone.
In this article, we’ll cover key factors in deciding whether microservices architecture is the right choice for your platform.
What’s the Difference Between Microservices and Monoliths?
Microservices architecture emerged roughly a decade ago to address the primary limitations of monolithic applications: scale, flexibility, and speed.
Microservices are small, separately deployable software units that together form a single, larger application. Specific functions are carried out by individual services. For example, if your platform allows users to log in to an account, search for products, and pay online, those functions could be delivered as separate microservices and served up through one user interface (UI).
In monolithic architecture, all of the functions and UI are interconnected in a single, self-contained application. All code is traditionally written in one language and housed in a single codebase, and all functions rely on shared data libraries.
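As a toy illustration of the difference, the sketch below fronts three separately deployed services with one thin entry point. The routes and service URLs are invented for the example (real platforms would use an API gateway or a front-end app rather than a hand-rolled proxy), but it shows the architectural idea: each capability lives at its own address and can be swapped or scaled on its own.

```php
<?php
// gateway.php: toy example of one UI fronting separate microservices.
// Run with: php -S localhost:8080 gateway.php
// The service hosts below are placeholders; each one would be built,
// deployed, and scaled independently.
$services = [
  '/login'  => 'http://auth-service.internal',
  '/search' => 'http://catalog-service.internal',
  '/pay'    => 'http://payment-service.internal',
];

$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

if (!isset($services[$path])) {
  http_response_code(404);
  exit('Unknown route');
}

// Proxy the request to whichever service owns this capability. In a
// monolith, all three functions would live in this single codebase.
echo file_get_contents($services[$path] . $path);
```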

Essentially, with most off-the-shelf monoliths, you get what you get. It may do everything, but not be particularly great at anything. With microservices, by contrast, you can build or cherry-pick optimal applications from the best a given industry has to offer.
Because of their modular nature, microservices make it easier to deploy new functions, scale individual services, and isolate and fix problems. On the other hand, with less complexity and fewer moving parts, monoliths can be cheaper and easier to develop and manage.
So which one is better? As with most things technological, it depends on many factors. Let’s take a look at the benefits and drawbacks of microservices.
Advantages of Microservices Architecture
Companies that embrace microservices see it as a cleaner, faster, and more efficient approach to meeting business needs, such as managing a growing user base, expanding feature sets, and deploying solutions quickly. In fact, there are a number of ways in which microservices beat out monoliths for speed, scale, and agility.
Shorter time to market
Large monolithic applications can take a long time to develop and deploy, anywhere from months to years. That could leave you lagging behind your competitors’ product releases or struggling to respond quickly to user feedback.
By leveraging third-party microservices rather than building your own applications from scratch, you can drastically reduce time to market. And, because the services are compartmentalized, they can be built and deployed independently by smaller, dedicated teams working simultaneously. You also have greater flexibility in finding the right tools for the job: you can choose the best of breed for each service, regardless of technology stack.
Lastly, microservices facilitate the minimum viable product approach. Instead of deploying everything on your wishlist at once, you can roll out core services first and then release subsequent services later.
Faster feature releases
Any changes or updates to monoliths require redeploying the entire application. The bigger a monolith gets, the more time and effort is required for things like updates and new releases.
By contrast, because microservices are independently managed, dedicated teams can iterate at their own pace without disrupting others or taking down the entire system. This means you can deploy new features rapidly and continuously, with little to no risk of impacting other areas of the platform.
This added agility also lets you prioritize and manage feature requests from a business perspective, not a technology perspective. Technology shouldn’t prevent you from making changes that increase user engagement or drive revenue—it should enable those changes.
Affordable scalability
If you need to scale just one service in a monolithic architecture, you’ll have to scale and redeploy the entire application. This can get expensive, and you may not be able to scale in time to satisfy rising demand.
Microservices architecture offers not only greater speed and flexibility, but also potential savings in hosting costs, because you can independently scale any individual service that’s under load. You can also configure a single service to add capacity automatically until demand is met, and then scale back to normal.
More support for growth
With microservices architecture, you’re not limited to a UI that’s tethered to your back end. For growing organizations that are continually thinking ahead, this is one of the greatest benefits of microservices architecture.
In the past, websites and mobile apps had completely separate codebases, and launching a mobile app meant developing a whole new application. Today, you just need to develop a mobile UI and connect it to the same service as your website UI. Make updates to the service, and it works across everything.
You have complete control over the UI — what it looks like, how it functions for the customer, etc… You can also test and deploy upgrades without disrupting other services. And, as new forms of data access and usage emerge, you have readily available services that you can use for whatever application suits your needs. Digital signage, voice commands for Alexa… and whatever comes next.

Optimal programming options
Since monolithic applications are tightly coupled and developed with a single stack, all components typically share one programming language and framework. This means any future changes or additions are limited to the choices you make early on, which could cause delays or quality issues in future releases.
Because microservices are loosely coupled and independently deployed, it’s easier to manage diverse datasets and processing requirements. Developers can choose whatever language and storage solution is best suited for each service, without having to coordinate major development efforts with other teams.
Greater resilience
For complex platforms, fault tolerance and isolation are crucial advantages of microservices architecture. There’s less risk of system failure, and it’s easier and faster to fix problems.
In monolithic applications, even just one bug affecting one tiny part of a single feature can cause problems in an unrelated area—or crash the entire application. Any time you make a change to a monolithic application, it introduces risk. With microservices, if one service fails, it’s unlikely to bring others down with it. You’ll have reduced functionality in a specific capacity, not the whole system.
Microservices also make it easier to locate and isolate issues, because you can limit the search to a single software module. In monoliths, by contrast, given the possible chain of faults, it’s hard to isolate the root cause of problems or predict the outcome of any changes to the codebase.
Monoliths thus make it difficult and time-consuming to recover from failures, especially since, once an issue has been isolated and resolved, you still have to rebuild and redeploy the entire application. Since microservices allow developers to fix problems or roll back buggy updates in just one service, you’ll see a shorter time to resolution.
Faster onboarding
With smaller, independent code bases, microservices make it faster and easier to onboard new team members. Unlike with monoliths, new developers don’t have to understand how every service works or all the interdependencies at play in the system.
This means you won’t have to scour the internet looking for candidates who can code in the only language you’re using, or spend time training them in all the details of your codebase. Chances are, you’ll find new hires more easily and put them to work faster.
Easier updates
As consumer expectations for digital experiences evolve over time, applications need to be updated or upgraded to meet them. Large monolithic applications are generally difficult, and expensive, to upgrade from one version to the next.
Because third-party app owners build and pay for their own updates, with microservices there’s no need to maintain or enhance every tool in your system. For instance, you get to let Stripe perfect its payment processing service while you leverage the new features. You don’t have to pay for future improvements, and you don’t need anyone on staff to be an expert in payment processing and security.
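As a rough sketch of what that delegation looks like in practice, here’s a payment handoff using Stripe’s official PHP library (stripe/stripe-php). The key and amount are placeholders; the point is how little of the payment domain your own code has to own.

```php
<?php
// Delegate payment processing to Stripe rather than building, securing,
// and maintaining your own.
require 'vendor/autoload.php'; // composer require stripe/stripe-php

\Stripe\Stripe::setApiKey('sk_test_placeholder');

// One call creates a $25.00 payment intent; card networks, fraud
// screening, and PCI scope stay on Stripe's side of the contract.
$intent = \Stripe\PaymentIntent::create([
  'amount'   => 2500,  // in cents
  'currency' => 'usd',
]);

// The client secret is handed to your front end to confirm the payment.
echo $intent->client_secret;
```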
Disadvantages of Microservices Architecture
Do microservices win in every circumstance? Absolutely not. Monoliths can be a more cost-effective, less complicated, and less risky solution for many applications. Below are a few potential downsides of microservices.
Extra complexity
With more moving parts than monolithic applications, microservices may require additional effort, planning, and automation to ensure smooth deployment. Individual services must cooperate to create a working application, but the inherent separation between teams could make it difficult to create a cohesive end product.
Development teams may have to handle multiple programming languages and frameworks. And, with each service having its own database and data storage system, data consistency could be a challenge.
Also, leveraging numerous third-party services creates more network connections, and with them more opportunities for latency and connectivity issues in your architecture.
Difficulty in monitoring
Given the complexity of microservices architecture and the interdependencies that may exist among applications, it’s more challenging to test and monitor the entire system. Each microservice requires individualized testing and monitoring.
You could build automated testing scripts to ensure individual applications are always up and running, but this adds time and complexity to system maintenance.
Added external risks
There are always risks when using third-party applications, in terms of both performance and security. The more microservices you employ, the more possible points of failure exist that you don’t directly control.
In addition, with multiple independent containers, you’re exposing more of your system to potential attackers. Those distributed services need to talk to one another, and a high number of inter-service network communications can create opportunities for outside entities to access your system.
On the upside, the containerized nature of microservices architecture prevents security threats in one service from compromising other system components. As we noted in the advantages section above, it’s also easier to track down the root cause of a security issue.
Potential culture changes
Microservices architecture usually works best in organizations that employ a DevOps-first approach, where independent clusters of development and operations teams work together across the lifecycle of an individual service. This structure can make teams more productive and agile in bringing solutions to market. But, at an organizational level, it requires a broader skill set for developing, deploying, and monitoring each individual application.
A DevOps-first culture also means decentralizing decision-making power, shifting it from project teams to a shared responsibility among teams and DevOps engineers. The goal is to ensure that a given microservice meets a solution’s technical requirements and can be supported in the architecture in terms of security, stability, auditing, etc…
3 Paths Toward Microservices Transformation
In general, there are three different approaches to developing a microservices architecture:
1. Deconstruct a monolith
This kind of approach is most common for large enterprise applications, and it can be a massive undertaking. Take Airbnb, for instance: several years ago, the company migrated from a monolith architecture to a service-oriented architecture incorporating microservices. Features such as search, reservations, messaging, and checkout were broken down into one or more individual services, enabling each service to be built, deployed, and scaled independently.
In most cases, it’s not just the monolith that becomes decentralized. Organizations will often break up their development group, creating smaller, independent teams that are responsible for developing, testing, and deploying individual applications.
2. Leverage PBCs
Packaged Business Capabilities, or PBCs, are essentially autonomous collections of microservices that deliver a specific business capability. This approach is often used to create best-of-breed solutions, where many services are third-party tools that talk to each other via APIs.
PBCs can stand alone or serve as the building blocks of larger app suites. Keep in mind, adding multiple microservices or packaged services can drive up costs as the complexity of integration increases.
3. Combine both types
Small monoliths can be a cost-effective solution for simple applications with limited feature sets. If that applies to your business, you may want to build a custom app with a monolithic architecture.
However, there are likely some services, such as payment processing, that you don’t want to have to build yourself. In that case, it often makes sense to build a monolith and incorporate a microservice for any features that would be too costly or complex to tackle in-house.
A Few Words of Caution
Even though they’re called “microservices”, be careful not to get too small. If you break services down into many tiny applications, you may end up creating an overly complex application with excessive overhead. Lots of micro-micro services can easily become too much to maintain over time, with too many teams and people managing different pieces of an application.
Given the added complexity and potential costs of microservices, for smaller platforms with only one UI it may be best to start with a monolithic application and slowly add microservices as you need them. Start at a high level and zoom in over time, looking for specific functions you can optimize to make you stand out.
Lastly, choose your third-party services with care. It’s not just about the features; you also need to consider what the costs might look like if you need to scale a particular service.
Final Thoughts: Micro or Mono?
Still trying to decide which architecture is right for your platform? Here are some of the most common scenarios we encounter with clients:
- If time to market is the most important consideration, then leveraging third-party microservices is usually the fastest way to build out a platform or deliver new features.
- If some aspect of what you’re doing is custom, then consider starting with a monolith and either building custom services or using third-party services where they suit a particular need.
- If you don’t have a ton of money, and you need to get something up quick and dirty, then consider starting with a monolith and splitting it up later.
Here at Oomph, we understand that enterprise-level software is an enormous investment and a fundamental part of your business. Your choice of architecture can impact everything from overhead to operations. That’s why we take the time to understand your business goals, today and down the road, to help you choose the best fit for your needs.
We’d love to hear more about your vision for a digital platform. Contact us today to talk about how we can help.
How we leveraged Drupal’s native APIs to push notifications to the many department websites for the State.
RI.gov is a custom Drupal distribution that was built with the sole purpose of running hundreds of department websites for the state of Rhode Island. The platform leverages a design system for flexible page building, custom authoring permissions, and a series of custom tools to make authoring and distributing content across multiple sites more efficient.
The Challenge
The platform had many business requirements, and one stated that a global notification needed to be published to all department sites in near real-time. These notifications would communicate important department information on all related sites. Further, these notifications needed to be ingested by the individual websites as local content to enable indexing them for search.
The hierarchy of the departments and their sites added a layer of complexity to this requirement. A department needs to create notifications that broadcast only to subsidiary sites, not the entire network. For example, the Department of Health might need to create a health department specific notification that would get pushed to the Covid site, the RIHavens site, and the RIDelivers sites — but not to an unrelated department, like DEM.

Exploration
Aggregator
Our first idea was to utilize Drupal’s built-in Aggregator module and pull notifications from the hub. A proof of concept showed that while it worked well for pulling content from the hub site, it had a few problems:
- It relied heavily on the local site’s cron job to pull updates, which led to timing issues — content did not arrive in near real-time. Due to server limitations, we could not run cron as often as would have been necessary.
- We would need to maintain two entity types: one for global notifications and a second for local site notifications. Keeping local and global notifications as the same entity type would allow easier maintenance of this subsystem.
Feeds
Another thought was to utilize the Feeds module to pull content from the hub into the local sites. This was a better solution than the Aggregator because the nodes would be created locally and could be indexed for local searching. Unfortunately, Feeds relied on cron as well.
Our Solution
JSON API
We created a suite of custom modules that centered around moving data between the network sites using Drupal’s JSON API. The API was used to register new sites to the main hub when they came online. It was also used to pass content entities from the main hub down to all sites within the network and from the network sites back to the hub.
Notifications
In order to share content between all of the sites, we needed to ensure that the data structure was identical on all sites in the network. We started by creating a new notification content type that had a title field, a body field, and a boolean checkbox indicating whether the notification should be considered global. Then, we packaged the configuration for this content type using the Features module.
By requiring our new notification feature module in the installation profile, we ensured that all sites would have the required data structure whenever a new site was created. Features also allowed us to ensure that any changes to the notification data model could be applied to all sites in the future, maintaining the consistency we needed.
Network Domain Entity
In order for the main hub, ri.gov, to communicate with all sites in the network, we needed a way to know what Drupal sites existed. To do this, we created a custom configuration entity that stored the URL of each site within the network. Using this domain entity, we were able to query all known sites and pass the global notification nodes created on ri.gov to each one using the JSON API.
Queue API
To ensure that the notification nodes were posted to all the sites without timeouts, we decided to utilize Drupal’s Queue API. Once the notification content was created on the ri.gov hub, we queried the known domain entities and created a queue item for each site; cron then posts the notification node to each site’s JSON API endpoint. We used cron here to give us some assurance that a post to many websites wouldn’t time out and fail.
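Here’s a simplified sketch of how that queueing might look in Drupal 8/9 code. The module, field, and entity names are illustrative, not the actual RI.gov source.

```php
<?php

use Drupal\node\NodeInterface;

/**
 * Implements hook_ENTITY_TYPE_insert() for node entities.
 *
 * Queues one push per known site when a global notification is created.
 */
function mymodule_node_insert(NodeInterface $node) {
  if ($node->bundle() !== 'notification' || !$node->get('field_global')->value) {
    return;
  }
  $queue = \Drupal::queue('mymodule_notification_push');
  $domains = \Drupal::entityTypeManager()
    ->getStorage('network_domain')
    ->loadMultiple();
  foreach ($domains as $domain) {
    // One item per site, processed on cron, so pushing to hundreds of
    // sites can never time out the request that saved the node.
    $queue->createItem([
      'nid' => $node->id(),
      'endpoint' => $domain->get('url') . '/jsonapi/node/notification',
    ]);
  }
}
```

A matching QueueWorker plugin then claims each item during cron and POSTs the serialized node to the stored endpoint using Drupal’s HTTP client.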
Batch API
To allow for time sensitive notifications to be pushed immediately, we created a custom batch operation that reads all of the queued notifications and pushes them out one at a time. If any errors are encountered, the notification is re-queued at the end of the stack and the process continues until all notifications have been posted to the network sites.
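In condensed form, the batch build looked something like this sketch (names are again illustrative, and error handling is trimmed):

```php
<?php
// Drain the notification queue immediately instead of waiting for cron.
$queue = \Drupal::queue('mymodule_notification_push');

$operations = [];
while ($item = $queue->claimItem()) {
  // Each operation posts one notification; the callback deletes the item
  // on success or re-queues it at the end of the stack on failure.
  $operations[] = ['mymodule_push_notification', [$item]];
}

batch_set([
  'title' => t('Pushing notifications to network sites'),
  'operations' => $operations,
]);
```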

New site registrations
In order to ensure that new sites receive notifications from the hub, we needed a site registration process. Whenever a new site is spun up, a custom module is installed that calls out to the hub using JSON API and registers the site by creating a new network domain entity with its endpoint URL. The hub then knows about the new site and can push any new notifications to it in the future.

The installation process also queries the hub for any existing notifications: using the JSON API, it gets a list of all notification nodes from the hub and adds them to its local queue for creation. The local site then uses cron to query the hub for the details of each notification node and create it locally. This ensures that when a new site comes online, it has an up-to-date list of all the important notifications from the hub.
Authentication
Passing this data between sites is one challenge; doing it securely adds another layer of complexity. All of the requests between sites are authenticated using the Simple OAuth module. When a new site is created, an installation process creates a dedicated user in the local database to own all notification nodes created through the syndication process. The installation process also creates the appropriate Simple OAuth consumers, which allows authenticated connections to be made between the sites.
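A hedged sketch of one site-to-site request: first exchange credentials for a bearer token at Simple OAuth’s /oauth/token endpoint, then use it on the JSON API call that creates the node remotely. The domain, client credentials, and grant type here are our assumptions, not necessarily the production configuration.

```php
<?php
// Sketch: authenticate to a peer site, then push a notification node to it.
$client = \Drupal::httpClient();
$site = 'https://department.example.ri.gov'; // placeholder peer site

// 1. Exchange the consumer credentials for a bearer token.
$response = $client->post($site . '/oauth/token', [
  'form_params' => [
    'grant_type' => 'client_credentials',
    'client_id' => 'SYNDICATION_CLIENT_ID',
    'client_secret' => 'SYNDICATION_SECRET',
  ],
]);
$token = json_decode((string) $response->getBody(), TRUE);

// 2. Create the notification remotely via JSON API.
$client->post($site . '/jsonapi/node/notification', [
  'headers' => [
    'Authorization' => 'Bearer ' . $token['access_token'],
    'Content-Type' => 'application/vnd.api+json',
  ],
  'body' => json_encode([
    'data' => [
      'type' => 'node--notification',
      'attributes' => ['title' => 'Example notification'],
    ],
  ]),
]);
```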
Department sites
Once all of the groundwork was in place, we were able, with minimal effort, to let department sites act as hubs for their own subsidiary sites. Thus, the Department of Health can create notifications that go only to its subsidiary sites, keeping them separate from adjacent departments.
Translations
The entire process also works with translations. After a notification is created in the default language, it gets queued and sent to the subsidiary sites. Then, a content author can create a translation of that same node and the translation will get queued and posted to the network of sites in the same manner as the original. All content and translations can be managed at the hub site, which will trickle down to the subsidiary sites.
Moving in the opposite direction
With all of the authorization, queues, batches, and APIs in place, the next challenge was making this entire system work with a Press Release content type. This presented two new challenges:
- Instead of moving content from the top down, we needed to move from the bottom up. Press release nodes get created on the affiliate sites and would need to be replicated on the hub site.
- Press release nodes were more complex than the notification nodes. The content type included media references, taxonomy term references and, toughest of all, paragraph references.
Solving the first challenge was pretty simple: we reused the custom publishing module and instructed the Queue API to send the press release nodes to the hub sites.
Getting this working with a complex entity like the press release node meant pushing not only the press release node itself, but also every entity that the initial node referenced. For it all to work, the entities needed to be created in reverse order.
Once a press release node was created or updated, we used the EntityInterface referencedEntities() method to recursively drill into all of the entities referenced by the press release node. In some cases, this meant getting paragraph entities that were nested two, three, even four levels deep inside other paragraphs. Once we reached the bottom of the referenced entity pile, we began queuing those entities from the bottom up, so the paragraph nested four levels deep was the first to get sent and the actual node was the last.
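Stripped to its core, the traversal is a short recursive function. This sketch is ours, not the production code, and a real version would also guard against circular references and filter to the entity types you actually syndicate.

```php
<?php

use Drupal\Core\Entity\EntityInterface;

/**
 * Collects everything an entity references, deepest dependencies first.
 */
function mymodule_collect_dependencies(EntityInterface $entity, array &$ordered = []) {
  foreach ($entity->referencedEntities() as $referenced) {
    // Recurse before collecting, so a paragraph nested four levels deep
    // lands at the front of the list and is pushed before its parents.
    mymodule_collect_dependencies($referenced, $ordered);
    $ordered[] = $referenced;
  }
  return $ordered;
}

// $press_release is a loaded node; queue its dependencies bottom-up,
// then the node itself last.
$to_push = mymodule_collect_dependencies($press_release);
$to_push[] = $press_release;
```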

Conclusion
Drupal’s powerful suite of APIs gave us all the tools necessary to build a platform that allows the State of Rhode Island to easily keep its citizens informed of important information, while giving its editing team the ease of a create-once, publish-everywhere workflow.
You’ve decided to decouple, you’re building your stack, and the options are limitless – oh, the freedom of escaping the LAMP stack and the boundaries of the conventional CMS! Utilities that were once lumped together into one unmoveable bundle can now be separately selected, or not selected at all. It is indeed refreshing to pick and choose the individual services best fitted to your project. But now you have to choose.
One of those choices is your backend content storage. Even though decoupling means breaking free from monolithic architecture, certain concepts persist: content modeling, field types, and the content editor experience.
I recently evaluated four headless CMS options: Contentful, Cosmic, Dato, and Prismic. Prior to that, I had no experience with any of them. Fortunately they all offer a free plan to test and trial their software. For simpler projects, that may be all you need. But if not, each CMS offers multiple tiers of features and support, and costs vary widely depending on your requirements.
I was tasked with finding a CMS and plan that met the following specs:
- 10 separate content editors
- Multiple editor roles (admin, contributor, etc…)
- Draft/scheduled publishing, with some sort of editorial workflow, even as simple as limiting a role to draft-only status
Although this doesn’t seem like a big ask for any CMS, these requirements eliminated the free plans for all four services, so cost became a factor.
Along with cost, I focused my evaluation on the editor experience, modeling options, integration potential, and other features. While I found lots of similarities between the four, each had something a little different to offer.
It’s worth mentioning that development is active on all four CMSs. New features and improvements were added just within the span of time it took to write this article. So keep in mind that current limitations could be resolved in a future update.
Contentful
Contentful’s Team package is currently priced at $489 per month, making it the most expensive of the four. This package includes 10 content editors and 2 separate roles. There is no editorial workflow without paying extra, but scheduled publishing is included.
Terminology
A site is a “space” and content types are “content types.”
What I love
The media library. Media of many different types and sources – from images to videos to documents and more – can be easily organized and filtered. Each asset has a freeform name and description field for searching and filtering. And since you can provide your own asset name, you’re not stuck with image_8456_blah.jpeg or whatever nonsense title your asset had when you uploaded it. Additionally, image dimensions are shown on the list view, which is a quick, helpful reference.
Video description
- Upload images from many sources: Contentful supports the addition of media from sources such as Google Photos and Facebook, not just the files on your computer.
- Give your files meaningful names and add searchable descriptions.
RUNNER UP
Dato’s Media Area offers similar filtering and a searchable notes field.
What I like
Commenting. Every piece of content has an admin comments area for notes or questions, with a threaded Reply feature.
My Views. My Views is an option in the content navigation panel. With a single click, you can display only content that you created or edited – very convenient when working with multiple editors and a large volume of content.
What could be better
Price. Contentful is expensive if your project needs don’t allow you to use the free/community plan. You do get a lot of features for the paid plans, but there’s a big jump between the free plan and the first tier paid plan.
Cosmic
Cosmic ranks second most pricey for our requirements at $299 per month for the Pro Package. This package includes 10 editors and 4 predefined roles. It has draft/scheduled publishing, and individual editor accounts can be limited to draft status only.
Terminology
A site is a “bucket” and content types are “object types.”
What I love
Developer Tools. Developer Tools is a handy button you can click at the object or object type level to view your REST endpoint and response. It also shows other ways (GraphQL, CLI, etc.) to connect to a resource, using real code that is specific to your bucket and objects.
Video description
- Find the Developer Tools button on all objects or a single object.
- Browse the different methods to connect to your resource and return data.
RUNNER UP
Dato has an API Explorer for writing and running GraphQL queries.
The Slack Community. The Cosmic Slack community offers a convenient way to get technical support – in some cases, even down to lines-of-code level support – with quick response times.
What I like
View as editor. This is a toggle button in the navigation panel to hide developer features – even if your account is assigned the developer or admin role – allowing you to view the CMS as the editor role sees it. This is useful for documenting an editor’s process or troubleshooting their workflow.
Extensions. Cosmic provides several plug-and-play extensions, including importers for Contentful and WordPress content, as well as Algolia Search, Stripe, and more. I tested the Algolia extension; it took only minutes to set up and immediately began syncing content to Algolia indexes.¹ You can also write your own extensions and upload them to your account.
What could be better
Price/price structure. I found Cosmic’s pricing structure to be the most confusing, with extra monthly charges for common features like localization, backups, versioning, and webhooks. It’s hard to know what you’ll actually pay per month until you add up all the extras. And once you do, you may be close to the cost of Contentful’s lower tier.
Content model changes. Changing the content model after you’ve created or imported a lot of content is tricky. Content model changes don’t flow down to existing content without a manual process of unlocking, editing and re-publishing each piece of content, which can be very inefficient and confusing.
Dato
Dato’s Professional package is priced at €99 (about $120) per month, making it the second least pricey for our requirements. It includes 10 content editors and 15 roles, with configurable options to limit publishing rights.
Terminology
A site is a “project” and content types are “models.”
What I love
Tree-like collections. Dato lets you organize and display records in a hierarchical structure with visual nesting. The other CMSs give you roundabout ways to accomplish this, usually requiring extra fields. But Dato lets you do it without altering the content model. And creating hierarchy is as simple as dragging and dropping one record under another, making things like taxonomy a breeze to build.
Video description
- Organize the records in your model in a hierarchical structure by dragging and dropping.
- Dato lets you easily visualize the nested relationships.
RUNNER UP
No other CMS in this comparison offers hierarchical organizing quite like Dato, but Cosmic provides a parent field type, and Prismic has a documented strategy for creating hierarchical relationships.
What I like
Maintenance Mode. You can temporarily disable writes on your project and display a warning message to logged in editors. If you need to prevent editors from adding/editing content — for instance, during content model changes — this is a useful feature.
What could be better
Field types. Out of the box, Dato doesn’t provide field types for dropdowns or checkboxes. There’s a plugin available that transforms a JSON field into a multiselect, but it’s presented as a list of toggles/booleans rather than a true multiselect. And managing that field means working with JSON, which isn’t a great experience for content editors.
Dato is also missing a simple repeater field for adding one or more of something. I created repeater-like functionality using the Modular Content field type, but this feels overly complicated, especially when every other CMS in my comparison implements either a Repeater field type (Cosmic, Prismic) or a multi-value field setting (Contentful).
Prismic
Prismic ranks least pricey, at $100/mo for the Medium Package. This package includes 25 content editors, 3 predefined roles, draft/scheduled publishing and an editorial workflow.
Terminology
A site is a “repository”, and content types are “custom types.”
What I love
Field types. Prismic gives you 16 unique field types for modeling your content, but it’s not the number of types that I love; it’s the particular combination of options: the dedicated Title type for headings, the Media link type, the Embed type, the Color Picker. Plus, the UI is so intuitive, content editors know exactly what they’re expected to do when populating a field.
Take the Title type for example. If you need a heading field in the other CMSs, you’d probably use a plain text or rich text field type. Using rich text almost guarantees you’ll get unwanted stuff (paragraph tags, in particular) wrapped around whatever heading the editor enters. Using plain text doesn’t let the editor pick which heading they want. Prismic’s Title type field solves both of these problems.
Video description
- Prismic gives you a unique combination of field type options, some that you won’t find elsewhere.
- Use the Title type to create designated heading fields, and select or deselect the heading tags you want to include.
RUNNER UP
This is a tough one, but I’m leaning toward Contentful. What they lack in the number of available field types, they make up for in Appearance settings that allow you to render a field type to the editor in different formats.
Price. Unlimited documents, custom types, API calls and locales are included in the Medium package for a reasonable price. Additionally, Prismic has more packages and support tiers than any of the others, with one paid plan as low as $7/mo.
What I like
Slices. Slices are an interesting addition to back-end content modeling, because they’re essentially components: things you build on the front end. Prismic lets you create custom components, or use their predefined ones — blockquotes, a list of articles, an image gallery, etc. I admit I didn’t test how these components render on the front end, but Slices deserve further exploration.
What could be better
Integration options/plugins. Although Webhooks are included in all of Prismic’s plans, there doesn’t seem to be any development of plugins or ways to quickly extend functionality. Every other CMS in this comparison offers simple, click-to-install extensions and integrations to common services.
A note on Front-end Frameworks
A headless CMS, by simple definition, is a content storage container. It does not provide the markup that your website visitors will see and use. Therefore, your project planning will include choosing and testing a front-end system or framework, such as Gatsby JS. It’s important to find out early in the process what obstacles, if any, exist in connecting your chosen CMS to your chosen front end.
At Oomph, we’ve successfully used both Contentful and Cosmic with a Gatsby front-end. However, Gatsby plugins exist for Prismic and Dato as well.
Summary
As with any decoupled service, your headless CMS choice will be determined by your project’s distinct requirements. Make sure to build into your project plan enough time to experiment with any CMS options you’re considering. If you haven’t worked with a particular CMS yet, give yourself a full day to explore, build a sample content model, add some content and media, and test the connection to your front-end.
Does a clear winner emerge from this comparison? I don’t think so. They each succeed and stand out in different ways. Use this article to kickstart your own evaluation, and see what works for you!
- At the time of this writing, there are some field types that the extension doesn't pass from Cosmic to Algolia.
If you live in an area with a lot of freight or commuter trains, you may have noticed that trains often have more than one engine powering the cars. Sometimes there's an engine in front and one in back, or, in the case of long freight lines, an engine in the middle. This is known as "distributed power," and it's actually a recent engineering strategy. Evenly distributed power allows trains to carry more, and carry it more efficiently.1
When it comes to your website, the same engineering can apply. If the Content Management System (CMS) is the only source of power, it may not have enough oomph to load pages quickly and concurrently for many users. Not only that, but a single source of power can slow down innovation and delivery across today's multi-channel digital ecosystems.
One of the benefits of decoupled platform architecture is that power is distributed more evenly across the endpoints. Decoupled means that the authoring system and the rendering system for site visitors are not the same. Instead of one CMS powering both content authoring and page rendering, two systems handle each task discretely.
Digital properties are ever growing and evolving. While evaluating how to grow your own system, it’s important to know the difference between coupled and decoupled CMS architectures. Selecting the best structure for your organization will ensure you not only get what you want, but what is best for your entire team — editors, developers, designers, and marketers alike.
What is a traditional CMS architecture?
In a traditional, or coupled, CMS, the architecture tightly links the back-end content administration experience to the front-end user experience.
Content such as basic pages, news, and blog articles is created, managed, and stored along with all media assets through the CMS's back-end administration screens. The back end is also where site developers create and store customized applications and design templates for use by the front end of the site.
Essentially, the two sides of the CMS are bound within the same system: it stores content created by authenticated users and is also directly responsible for delivering that content to the browser and end users (the front end).
From a technical standpoint, a traditional CMS platform is comprised of:
- A private database-driven CMS in which content editors create and maintain content for the site, generally through the CMS administration interfaces we're used to (think WordPress or Drupal authoring interfaces)
- An application where engineers create and apply design schemas. Extra permissions and features within the CMS give developers more options to extend the application and control the front-end output
- A public front end that displays published content on HTML pages
What is a decoupled CMS architecture?
Decoupled CMS architecture separates, or decouples, the back-end and front-end management of a website into two different systems — one for content creation and storage, and another for consuming content and presenting it to the user.
In a decoupled CMS, these two systems are housed separately and work independently of each other. Once content is created and edited in the back end, this front-end-agnostic approach takes advantage of flexible and fast web services and APIs to deliver the raw content to any front-end system on any device or channel. It's even possible for one authoring system to deliver content to more than one front end (e.g., an article is published in the back end and pushed out to a website as well as a mobile app).
From a technical standpoint, a decoupled CMS platform is comprised of:
- A private database-driven CMS in which content editors create and maintain content for the site, generally through the same CMS administration interfaces we’re used to — though it doesn’t have to be2
- The CMS provides a way for the front-end application to consume the data. A web-service API — usually RESTful and returning a mashup-friendly format such as JSON — is the most common way (see the sketch after this list)
- Popular front-end frameworks such as React, VueJS, or GatsbyJS deliver the public visitor experience via a Javascript application rendering the output of the API into HTML
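To make the pattern concrete, here's a minimal TypeScript sketch of a front end consuming a JSON API and producing its own markup. The endpoint URL and response shape are invented for illustration, not any specific CMS's actual API.

```ts
// A minimal sketch of the decoupled pattern: the front end fetches raw
// content as JSON from the CMS's web-service API, then renders its own
// HTML. The endpoint and response shape are hypothetical.
interface Article {
  title: string;
  body: string;
}

async function renderArticle(id: string): Promise<string> {
  // 1. Consume: ask the headless CMS for raw content only.
  const response = await fetch(`https://cms.example.com/api/articles/${id}`);
  const article = (await response.json()) as Article;

  // 2. Present: the front end alone decides what the markup looks like.
  // (A real implementation would escape the content before embedding it.)
  return `<article><h1>${article.title}</h1><p>${article.body}</p></article>`;
}
```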
Benefits of decoupled
By moving the responsibility for the user experience completely into the browser, the decoupled model provides a number of benefits:
Push the envelope
Shifting the end-user experience out of the conventions and structures of the back-end allows UX Engineers and front-end masterminds to push the boundaries of the experience. Decoupled development gives front-end specialists full control using their native tools.
This is largely because traditional back-end platforms have been focused on the flexibility of authoring content and less so on the experience of public visitors. Too often the programming experience slows engineers down and makes it more difficult to deliver an experience that “wows” your users.
Need for speed
Traditional CMS structures are bogged down by “out-of-the-box” features that many sites don’t use, causing unnecessary bloat. Decoupled CMS structures allow your web development team to choose only what code they need and remove what they don’t. This leaner codebase can result in faster content delivery times and can allow the authoring site to load more quickly for your editors.
Made to order
Not only can decoupled architecture be faster, but it can allow for richer interactions. The front-end system can be focused on delivering a truly interactive experience in the form of in-browser applications, potentially delivering content without a visitor reloading the page.
The back end becomes the system of record and "state machine," while back-and-forth interaction happens in the browser in real time.
Security Guard
Decoupling the back end from the front end is more secure. Because the front end does not expose its connection to the authoring system, the ecosystem is less vulnerable to attackers. Further, depending on how the front-end communication is set up, the back end going offline may not interrupt the front-end experience.
In it for the long haul
Decoupled architectures integrate easily with new technology and innovations and allow for flexibility with future technologies. More and more, this is the way digital platform development is moving. Lean, back-end-only or "flat file" content management systems have entered the market — like Contentful and Cosmic — while hosting companies are addressing the needs of decoupled architecture as well.
The best of both worlds
Decoupled architecture allows the best decisions for two very different sets of users. Content editors and authors can continue to use some of the same CMSs they're familiar with. These CMSs have great power and flexibility for content modeling and authoring workflows, and they will continue to be useful and powerful tools. At the same time, front-end developers can get the power and flexibility they need from a completely different system. And your customers can get the amazing user experiences they've come to expect.
The New Age of Content Management Systems
Today's CMS revolution is driving up demand for more flexible, scalable, customizable content management systems that deliver the experience businesses want and customers expect. Separating the front and back ends can enable organizations to speed up page load times, iterate on new ideas and features faster, and deliver experiences that "wow" your audience.
- Great article on the distributed power of trains: Why is there an engine in the middle of that train?
- Non-monolithic CMSs have been hitting the market lately, and include products like Contentful, CosmicJS, and Prismic, among others.
The Challenge
Execute on a digital platform strategy for a global private equity firm to create a centralized employee destination to support onboarding, create interpersonal connections between offices, and drive employee satisfaction.
The key components would be:
- An employee directory complete with photos, bios, roles, and organizational structure
- News, events, and other communications, easily available and organized per location as well as across all locations
- The firm's investment portfolio, shared through a dashboard view with all pertinent information, including the team involved
These components, and the expected tactical assets that an intranet provides, would help the firm deepen connections with and among employees at the firm, accelerate onboarding, and increase knowledge sharing.
The Approach
Supporting Multiple Intentions: Browsing vs. Working
An effective employee engagement platform, or intranet, needs to support two distinct modes — task mode and explore mode. In task mode, employees have access to intuitive navigation, quick page loading, and dynamic search or filtering while performing daily tasks. They get what they need fast and proceed with their day.
At the same time, a platform must also encourage and enable employees to explore company knowledge, receive company-wide communications, and connect with others. For this firm, the bulk of content available in explore mode revolves around the firm’s culture, with a special focus on philanthropic initiatives and recognition of key successes.
Both modes benefit from intuitive searching and filtering capabilities for team members, news, events, FAQs, and portfolio content. News and events can be browsed in a personalized way — what is happening at my location — or a global way — what is happening across the company. For every interaction within the platform, we considered the user's mode, and it influenced nearly every design decision.
From a technical standpoint, the private equity firm needed to support security by hosting the intranet on its own network. This, plus the need to completely customize the experience for close alignment with the brand, meant that no off-the-shelf intranet solution would work. We went with Drupal 8 to make this intranet scalable, secure, and tailor-made for an optimal employee experience.
The Results
The platform deployment came at a time when it was most needed, playing a crucial role for the firm during a global pandemic that kept employees at home. What was originally designed as a platform to deepen employee connections between offices quickly became the firm’s hub for connecting employees within an office. As many businesses are, the firm is actively re-evaluating its approach to the traditional office model, and the early success of the new platform indicates that it is likely to play an even larger role in the future.
THE BRIEF
Transform the Experience
The core Earthwatch experience happens outdoors in the form of an expedition — usually for about a week and far away from technology in locations like the Amazon Basin, Uganda, or the Great Barrier Reef. But before this in-person experience happens, an expedition volunteer encounters a dizzying array of digital touchpoints that can sow confusion and lead to distrust. Earthwatch needed “Experience Transformation.”
SURVEY THE LANDSCAPE
Starting with a deep strategy and research engagement, Oomph left no stone unturned in cataloging users and their journeys through a decade’s worth of websites and custom applications. We were able to conduct multiple interview sessions with engaged advocates of the organization. Through these interviews, the Earthwatch staff learned how to conduct more interviews themselves and listen to their constituents to internalize what they find wonderful about the experience as well as what they find daunting.
CREATE THE MAP
With a high-level service blueprint in place, Oomph then set out to transform the digital experiences most essential to the organization: the discovery and booking journey for individuals and the discovery, research, and inquiry journey for corporate sustainability programs.
The solution took shape as an overhaul and consolidation of Earthwatch’s public-facing websites.

THE RESULTS
The Journey Before the Journey
A fresh design approach that introduces new colors, beautiful illustrations, and captivating photography.

Expedition discovery, research, and booking were transformed into a modern e-commerce shopping experience.
Corporate social responsibility content architecture was overhauled with trust-building case studies and testimonials to drive an increase in inquiries.

IN THEIR WORDS
The Oomph team far surpassed our (already high!) expectations. As a nonprofit, we had a tight budget and knew it would be a massive undertaking to overhaul our 7-year-old site while simultaneously launching an organizational rebrand. Oomph helped to guide us through the entire process, providing the right level of objective, data-driven expertise to ensure we were implementing user experience and design best practices. They listened closely to our needs and helped to make the website highly visual and engaging while streamlining the user journey. Thanks to their meticulous project management and time tracking, we successfully launched the site on time and exactly on budget.
ALIX MORRIS MHS, MS, Director of Communications, Earthwatch
THE BRIEF
The American Veterinary Medical Association (AVMA) advocates on behalf of 91,000+ members — mostly doctors but some veterinary support staff as well. With roots as far back as 1863, their mission is to advance the science and practice of veterinary medicine and improve animal and human health. They are the most widely recognized member organization in the field.

Make the Brand Shine
The AVMA website is the main communications vehicle for the organization. But the framework was very out of date — the site was not mobile-friendly, and some pages were downright broken. The brand was strong, but the delivery on screen was weak, and the tools reflected poorly on the organization.
Our goals were to:
IMPROVE THE SITE MAP
Content bloat over the years created a site tree in dire need of pruning.
IMPROVE SEARCH
When a site has so much content to offer, search can be the quickest way for a motivated user to find relevant information. Our goal was to make search more powerful while keeping it clear and easy to use.
COMMUNICATE THE VALUE OF MEMBERSHIP
The resources and benefits that come with membership were not clearly illustrated, and while members were renewing regularly, they were not using the site as a resource as often as they could.
STRENGTHEN THE BRAND
If the site was easier to navigate and search, if it had a clear value proposition for existing and prospective members, and if the visual design were modern and device-friendly, the brand would be stronger.

THE APPROACH
Put Members First
Oomph embarked on an extensive research and discovery phase which included:
- A competitor analysis of 5 groups in direct competition and 5 similar membership-driven organizations
- An online survey of the existing audience
- Content and SEO audits
- Several in-person workshops with stakeholder groups, including attendance at their annual convention to conduct on-the-spot surveys
- Phone interviews with volunteers, members, and additional stakeholders
With a deep bed of research and personal anecdotes, we began to architect the new site. Communication remained constant as well, with numerous marketing, communications, and IT team check-ins along the way:
- An extensive card sort exercise for information architecture improvements — 200+ cards sorted by 6 groups from throughout the organization
- A new information architecture and audience testing
- Content modeling and content wireframe exercises
- A brand color accessibility audit
- Over a dozen wireframes
- Three style tiles (mood boards) with revisions and refinements
- Wireframe user testing
- A set of deep-dive technical audits
- Several full design mockups with flexible component architecture

Several rounds of style tiles explored a new set of typefaces to support a modern refresh of the brand. Our ideas included darkening colored typography to meet WCAG thresholds, adding more colored tints for design variability, and designing a set of components that could be used to create marketing pages using Drupal’s Layout Builder system.
THE RESULTS
The design update brought the main brand vehicle fully into the modern web. Large headlines and images, chunks of color, and a clearer hierarchy of information make each page's purpose shine. A mega-menu system breaks complex navigation into digestible parts, with icons and color to help differentiate important sections. The important yearly convention pages got a facelift as well, with their own sub-navigation system.

FINAL THOUGHTS
Supporting Animals & Humans Alike
For a working veterinary doctor, AVMA membership is an important way to keep in touch with the wider community while learning about the latest policy changes, health updates, and events. The general public can more easily find information about common pet health problems, topical issues around animal well-being during natural disasters, and food and toy recalls. The goal of supporting members first while providing value more broadly to prospective members and non-members alike has coalesced into this updated digital property.
We look forward to supporting animal health and human safety as we continue to support and improve the site over the next year.