Many organizations today, large and small, have a digital asset problem. Companies are amassing huge libraries of images, videos, audio recordings, documents, and other files — while relying on shared folders and email to move them around the organization. As asset libraries explode, digital asset management (DAM) is crucial for keeping things accessible and up to date, so teams can spend more time getting work done and less time hunting for files.

First Things First: DAM isn’t Dropbox

Some folks still equate DAM with basic digital storage solutions, like Dropbox or Google Drive. While those are great for simple sharing needs, they’re essentially just file cabinets in the cloud.

DAM technology is purpose-built to optimize the way you store, maintain, and distribute digital assets. A DAM platform not only streamlines day-to-day content work; it also systematizes the processes and guidelines that govern content quality and use.

Today’s DAMs offer sophisticated functionality and a host of benefits, from faster search to tighter version control and streamlined collaboration.

Is it time for your business to invest in a DAM? Let’s see if you recognize the pain points below:

The 5 Signs You Need a DAM

There are some things you can’t afford not to invest in if they significantly impact your team’s creativity and productivity and your business’s bottom line. Here are some of the most common signs it’s time to invest in a DAM:

It takes more than a few seconds to find what you need.

As your digital asset library grows, it’s harder to keep sifting through it all to find things — especially if you’re deciphering other people’s folder systems. If you don’t know the exact name of an asset or the folder it’s in, you’re often looking for a needle in a haystack.

Using a DAM, you can tag assets with identifying attributes (titles, keywords, etc.) and then quickly search the entire database for the ones that meet your criteria. DAMs also offer AI- and machine-learning–based tagging, which automatically adds tags based on the content of an image or document. Voila! A searchable database with less manual labor.
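To make that concrete, here’s a toy sketch in Python of how tagged metadata turns a pile of files into a searchable index. It’s purely illustrative and not modeled on any particular DAM’s API:

```python
# Toy illustration of tag-based asset search; not any specific DAM's API.
from dataclasses import dataclass, field

@dataclass
class Asset:
    title: str
    path: str
    tags: set = field(default_factory=set)

LIBRARY = [
    Asset("Spring campaign hero", "/img/spring-hero.jpg", {"campaign", "outdoor", "photo"}),
    Asset("Logo (dark)", "/img/logo-dark.svg", {"brand", "logo", "vector"}),
]

def search(required_tags: set) -> list:
    # Return every asset that carries all of the requested tags.
    return [asset for asset in LIBRARY if required_tags <= asset.tags]

print([a.title for a in search({"brand"})])  # ['Logo (dark)']
```

AI-assisted tagging simply automates filling in those `tags` sets, so the search side works the same with far less manual labor.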

You have multiple versions of documents — in multiple places.

Many of our clients, including universities, healthcare systems, libraries, and nonprofits, have large collections of policy documents. These files often live on public websites, intranets, and elsewhere, with the intent that staff can pull them up as needed.

Problem is, if there’s a policy change, you need to be sure that anywhere a document is accessed, it’s the most current version. And you can’t just delete old files on a website, because any previous links to them will go up in smoke.

DAMs are excellent at managing document updates and variations, making it easy to find and replace old versions. They can also perform in-place file swaps without breaking the connections to the pieces of content that refer to a particular file.

You’re still managing assets by email.

With multiple team members or departments relying on the same pool of digital assets for a variety of use cases, some poor souls will spend hours every day answering email requests, managing edits, and transferring files. The more assets and channels you’re dealing with, the more unwieldy this gets.

DAMs facilitate collaboration by providing a single, centralized platform where team members can assign tasks, track changes, and configure permissions and approval processes. As a result, content creators know they’re using the most up-to-date, fully approved assets.

Your website doubles as a dump bin.

If your website is the source of assets for your entire organization, it can be a roadblock for other departments that need to use those assets in other places. They need to know how to find assets, download copies, and obtain sizes or formats that differ from the web-based versions… and there may or may not be a web team to assist.

What’s more, some web hosting providers offer limited storage space. If you have a large and growing digital library, you’ll hit those limits in no time.

A DAM provides a high-capacity, centralized location where staff can easily access current, approved digital assets in various sizes and formats.

You’re duplicating assets you already have.

How many times have you had different teams purchase assets like stock photography and audio tracks, when they could have shared the files instead? Or, maybe your storage folders are overrun with duplicates. Instead of relying on teams to communicate whenever they create or use an asset, you could simplify things with a DAM.

Storing and tagging all your assets, in various sizes and formats, in a DAM enables your teams to:

  • Check whether an asset already exists before buying or recreating it
  • Share one approved version of each file across teams, channels, and campaigns

When Should You Implement a DAM?

You can implement a DAM whether you have an existing website or you’re building a new one. DAM technology easily complements platform builds or redesigns, helping to make websites and intranets even more powerful. Organizing all of your assets in a DAM before launching a web project also makes it easier to migrate them to your new platform and helps ensure that nothing gets lost.

Plus, we’ve seen companies cling to old websites when too many departments are still using assets that are hosted on the site. Moving your assets out of your website and into a DAM frees you up to move on.

If you’re curious about your options for a DAM platform, there are a number of solutions on the market. Our partner Acquia offers an excellent DAM platform with an impressive range of functions for organizing, accessing, publishing, and repurposing assets, automating manual processes, and monitoring content metrics.

Other candidates to consider include Adobe Experience Manager Assets, Bynder, Picturepark, Canto, Cloudinary, Brandfolder, and MediaValet.

Given the number of DAMs on the market, choosing the right solution is a process. We’re happy to share our experience in DAM use and implementation, to help you find the best one for your needs. Just get in touch with any questions you have.

With all the hype swirling around technology buzzwords like blockchain, cryptocurrencies, and non-fungible tokens (NFTs), it can be hard to understand their real utility for your business. But as practical applications continue to emerge, more business leaders are starting to see adopting blockchain as a business priority (PDF) — and, potentially, a competitive advantage.

To help you make sense of it, we’ve compiled a high-level look at blockchain technology and its sister concept, Web3, along with some ideas for how this technology could be relevant to your business right now.

First, What are Blockchain & Web3?

Web3

“Web3” has become a catch-all term that refers to a decentralized online ecosystem where platforms and apps are owned not by a central gatekeeper, but by the users who help develop and maintain those services. To many, Web3 is the next iteration of the internet.

The first version of the internet, Web 1.0, was largely made of individual, static web pages created by the few people who understood the technology. We are currently in the midst of Web 2.0, which provides a platform with tools for non-technical people to create their own content, essentially democratizing authorship. With it, we saw the rise of social media platforms, shopping giants like Amazon, and work tools like Office and G Suite. This also meant that much of our individual data — our posts, reviews, and photos — has been centralized into these behemoth systems.

The promise of Web3 is the opposite approach: decentralized content, with much greater control over what you create and in what ways your data is associated with your activity.

Blockchain

Web3 achieves its goals of decentralization via blockchain, a digital ledger that exists only on the internet. This ledger uses a complex cryptography system to create encrypted “blocks” of data, ensuring that all the transactions that are written to it are verifiable and unalterable.

This ledger is open to the world to access. It’s not hosted on a single server owned by one company, but instead across a vast network of computers. The technology keeps all transactions up to date everywhere at once, and maintenance fees are paid by those that access the data.

What makes the blockchain so valuable across sectors is that it helps reduce risk, deter fraud, and provide scalable transparency. As a chronological, decentralized, single source of truth, the blockchain creates trust in data. At its most basic level, this open ledger makes it possible for me to verify that you conducted a certain transaction on a certain date (like a digital receipt). But it can do much more than that.
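For the technically curious, here’s a minimal Python sketch of the hash-chaining idea behind that ledger. Each block’s hash covers its contents plus the previous block’s hash, so altering any earlier entry breaks every link after it. This illustrates the concept only; it’s nowhere near production cryptography:

```python
# Minimal sketch of hash chaining; illustrative, not production cryptography.
import hashlib
import json

def make_block(data: dict, prev_hash: str) -> dict:
    block = {"data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block({"note": "genesis"}, prev_hash="0" * 64)
receipt = make_block({"from": "you", "to": "me", "what": "payment"}, genesis["hash"])

# Changing the genesis data would change its hash, which would no longer
# match receipt["prev_hash"]: the tampering is immediately detectable.
```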

Let’s take a look at some of the ways you can put the blockchain to use.

What Can Blockchain Do for Healthcare?

We’ve already started to see innovation around Web3 in the healthcare space, much of it focused on patient health records. Given the increasing fragmentation of healthcare, the strict privacy regulations in this space, and the high risk of data breaches with current systems, the healthcare industry is a good candidate for blockchain use and the security benefits of decentralization.

Use case: patient records

To imagine how the blockchain can change healthcare, let’s consider a typical patient who sees their general practitioner and requests a visit with a specialist. To ensure continuity of care, the patient’s health records must be sent from the primary doctor to the specialist’s office quickly and securely. Since most patients don’t have copies of all their medical records, they’re relying on their doctor’s office to transfer this sensitive data.

With the blockchain, patients can own their own medical records and control who has access to them. Doctors can add detailed entries to the digital ledger, which can be shared with other medical professionals as needed. Patients can also revoke access to anyone at any time.

Improving access to patient information across providers is crucial for the healthcare industry, given that medical errors are the third leading cause of death in the U.S. Current medications and their side effects could be part of the blockchain ledger, to help reduce complications. In addition, smart contracts that automatically seek out potential conflicts between medications could be added to medical records.
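As a thought experiment, the core of such a conflict-checking smart contract could reduce to logic like this sketch; the medication names and interaction list here are invented for illustration:

```python
# Hypothetical medication-conflict check; drug pairs below are illustrative.
KNOWN_CONFLICTS = {
    frozenset({"warfarin", "ibuprofen"}),
    frozenset({"lisinopril", "potassium"}),
}

def conflicts_with_ledger(ledger_meds: list, new_med: str) -> list:
    # Flag any known interaction between a new prescription and
    # the medications already recorded on the patient's ledger.
    return [med for med in ledger_meds
            if frozenset({med, new_med}) in KNOWN_CONFLICTS]

print(conflicts_with_ledger(["warfarin", "metformin"], "ibuprofen"))  # ['warfarin']
```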

This example barely scratches the surface; luckily, there are many companies already in this space, figuring out how to store patient data via blockchain in accordance with current regulations.

What Can Blockchain Do for Education?

You may not immediately think of educational applications as benefitting from blockchain technology. But paper records are one area that’s ripe for disruption, since moving to a digital format would make communication between institutions much more efficient. Here’s an example.

Use case: student transcripts

Having transcripts stored on a blockchain would make it easier for students to transfer their educational history from one school to another. It would also ensure that educational institutions, or even employers, could easily verify that history. In fact, MIT has been issuing digital, blockchain-stored diplomas since 2017.

Beyond transcripts, the Open Badge Passport issues digital badges that recognize learning, skills, and achievements by scraping information off the blockchain about individuals’ extracurricular activities. This allows students and others to demonstrate soft-skill talents that are valuable to have but not typically recorded by a degree.

What Can Blockchain Do for Ecommerce?

Product registration has long been a way for companies to gain access to buyers’ contact info (more than a way for customers to protect their investment, as advertised). Using blockchain, product registries can serve a greater range of purposes, offering value to both consumers and companies. That’s because when it comes to provenance — the place of origin or earliest known history of something — a blockchain is a perfect public record-keeping tool.

Use case: product registries

For high-ticket items that are sold on the internet, like jewelry, designer clothing, or rare books, a blockchain entry can be used to prove, and verifiably transfer, an item’s ownership. This ownership history could not only bolster the secondary market for verified products but also make it harder for counterfeiters to pass off knock-offs as the real thing.

Imagine this approach for high-ticket items like autos. Supporting vehicle transactions that take place online or offline, a blockchain could store vehicle maintenance and crash reports. This trove of information could boost the resale value of a vehicle, because potential buyers can access the vehicle’s entire maintenance history. If the data is connected to onboard sensors, it might even include engine efficiency and tire wear.

What Can Blockchain Do for Non-Profits?

In this example, we combine blockchain, NFTs, and smart contracts to create a unique approach to a fundraising classic. A quick primer on NFTs: they’re blockchain-based tokens that represent digital media like music, art, and videos, and they can verify the authenticity, past history, and sole ownership of a digital item. Smart contracts are preset functions that fire on a blockchain when specific conditions are met.

Use case: silent auctions with NFTs

Non-profits could take a new approach to fundraising with NFTs. Let’s say an artist creates (as a donation) a series of NFTs that are auctioned off to the highest bidder, with ownership of the artwork transferred via a blockchain. Or museums could sell digital representations of their collection, potentially fueling new derivative artwork. Artists could remix classic works into new art, providing additional promotional and fundraising opportunities. The classic silent auction gets upgraded.

But let’s take it a step further. Smart contracts on those NFTs could perpetually pay a royalty back to the original artist or the organization as a portion of any future sales of the artwork. That means one year’s fundraiser could reap monetary benefits for many years to come. The contract could take whatever form the artist and non-profit negotiate: the initial artwork is a donation but future resale royalties pay the artist, or the artist and organization split future proceeds, and so on.
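Here’s a toy model of that royalty logic in Python; the percentages and names are placeholders, not any particular NFT standard:

```python
# Toy model of a perpetual-royalty split on resale; percentages are placeholders.
def settle_resale(sale_price: float, artist_share: float = 0.10,
                  org_share: float = 0.05) -> dict:
    # On every future sale, preset portions flow back to the artist
    # and the non-profit; the seller keeps the remainder.
    artist_cut = sale_price * artist_share
    org_cut = sale_price * org_share
    return {
        "artist": artist_cut,
        "non_profit": org_cut,
        "seller": sale_price - artist_cut - org_cut,
    }

print(settle_resale(2000.0))  # {'artist': 200.0, 'non_profit': 100.0, 'seller': 1700.0}
```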


What Can Blockchain Do for You?

While blockchain technology was first conceived as a mechanism to support Bitcoin, today it offers a wealth of uses across industry sectors. This kind of advanced technology may seem accessible only to companies that can afford expensive developers, but incorporating blockchain into many business operations often costs less than you think. Plus, new vendors are emerging all the time to provide blockchain technology for a broad range of applications.

Blockchain can not only help overcome challenges such as consumer privacy concerns and sensitive data management, it can also help organizations seize new opportunities for growth. Think about your company’s biggest challenge or goal, and there might be a way blockchain technology can address it.

Interested in looking at ways to incorporate blockchain into your company’s digital assets? Let’s talk.


THE BRIEF

Never Stopping, Always Evolving

Leica Geosystems was founded on cutting-edge technology and continues to push the envelope with revolutionary products. Founded by Heinrich Wild, the company made its first rangefinder in 1921. Fast forward to the 21st century, and Leica Geosystems is the leading manufacturer of precision laser technology used for measurements in architecture, construction, historic preservation, and DIY home remodeling projects.

Oomph and Leica collaborated on an initial project in 2014 and have completed multiple projects since. We transitioned the site into a brand new codebase with Drupal 8. With this conversion, Oomph smoothed out the Leica team’s pain points related to a multisite architecture. We created a tightly integrated single site that can still serve multiple countries, languages, and currencies.


THE CHALLENGE

Feeling the Pain-points with Multisite

Leica’s e-commerce store is active in multiple countries and languages. Managing content in a Drupal multisite environment meant managing a separate site for each country, so product, content, and price changes were difficult to keep in sync. It was Oomph’s challenge to make content and product management easier for the Leica team as well as support the ability to create new country sites on demand. Leica’s new e-commerce site needed to support:

MULTIPLE COUNTRIES AND A GLOBAL OPTION

SIX LANGUAGES

MANY 3RD-PARTY INTEGRATIONS

The pain points of the previous multisite architecture all stemmed from each country being a silo:

  • No Single Sign On (SSO): Multiple admin log-ins to remember
  • Repetitive updates: Running Drupal’s update script on every site and testing was a lengthy process
  • Multiple stores: Multiple product lists, product features, and prices
  • Multiple sites to translate: Each country site had to be sent out individually for translation into its own language

THE APPROACH

Creating a Singularity with Drupal 8, Domain Access, & Drupal Commerce

A move to Drupal 8 in combination with some smart choices in module support and customization simplified many aspects of the Leica team’s workflow, including:

  • Configuration management: Drupal 8 introduced configuration management in core, so point-and-click admin configuration can be exported from one environment and imported into another, keeping multiple environments in sync and saving configuration in our code repository
  • One Database to Rule Them All: Admins have a single site to log into and do their work, and developers have one site to update, patch, and configure
  • One Commerce Install, Multiple stores: There is one Drupal Commerce 2.x install with multiple stores with one set of products. Each product has the ability to be assigned to multiple stores, and price lists per country control product pricing
  • One Page in Multiple Countries and Multiple Languages: The new single-site model gives a piece of content one place to live; authors control which countries the content is available in, and each piece of content is translated into all available languages just once
  • Future proof: With a smooth upgrade path into Drupal 9 in 2020, the Drupal 8 site gives Leica more longevity in the Drupal ecosystem

LEARN VS. SHOP

Supporting Visitor Intention with Two Different Modes

While the technical challenges were being worked out, the user experience and design had to reflect a cutting-edge company. With the launch of their revolutionary product, the BLK 360, in 2018, Leica positioned itself as the Apple of the geospatial measurement community — sleek, cool, cutting-edge and easy to use. While many companies want to look as good as Apple, few of them actually have the content and product to back it up.

The navigation for the site went through many rounds of feedback and testing before deciding on something radically simple — Learn or Shop. A customer on the website is either in an exploratory state of mind — browsing, comparing, reviewing pricing and specifications — or they are ready to buy. We made it very clear which part of the website was for which.

This allowed us to talk directly to the customer in two very different ways. On the Learn side, the pages educate and convince. They give the customer information about the product, reviews, articles, sample data files, and the like. The content is big, sleek, and leverages video and other embedded content, like VR, to educate.

On the Shop side, the pages are unapologetically transactional: give the visitor the right information to support a purchase, and clearly deliver specs and options like software and warranties, without any marketing. We could assume the customer was there to purchase, not to be convinced, so the page content could concentrate on order completion. The entire checkout process was simplified as much as possible to reduce friction. We studied the buying habits and patterns of Leica’s user base over the past few site iterations to inform our choices about where to simplify and where to offer options.


THE RESULTS

More Nimble Together

The willingness of the Drupal community to support the needs of this project cannot be overlooked. Oomph has been able to leverage our team’s commitment to open source contributions to get other developers to add features to the modules they support. Without the give and take of the community and our commitment to give back, many modifications and customizations for this project would have been much more difficult. The team at Centarro, maintainers of the Commerce module, were fantastic to work with, and we thank them.

We look forward to continuing to support Leica Geosystems and their product line worldwide. With a smooth upgrade path to Drupal 9 in 2020, the site is ready for the next big upgrade.

When companies merge, successfully combining digital assets like websites, intranets, apps, and other platforms takes more than just squishing things together. Poorly merged digital properties can diminish brand equity, squander years of SEO value, and even drive away customers or employees — ultimately tanking the value the merger was supposed to create.

The challenge is that you’re bringing together two end-user communities with different experiences and expectations. And it’s easy to assume the bigger or faster-growing company has the better digital platform, even when there’s a lot you could learn from the smaller company’s practices.

That’s why we recommend a collaborative, UX-centered approach to combining digital properties, to ensure you’re leveraging the best of both worlds. In this article, we’ll share a sample of UX analyses that can help set up a new combined platform for success.

First, let’s talk about leveraging the right mindset.

A Different Approach

Typically, when combining digital platforms, companies tend to take a top-down approach, meaning there’s a hierarchy of decision-making based on which platform is believed to be better. But the calculus can change a lot when those decisions are made from the end users’ point of view instead.

From a practical standpoint, these companies are usually trying to create efficiencies and add new competencies while carefully messaging the benefits of the merger for their customers. They focus on things like branding, SEO, and consolidating social media — all of which are important, and none of which truly shapes the platform user’s experience.

To be fair, before the merger, both companies were likely focused on trying to create the best possible user experience for their customers. Now that they’re joining forces, each brings a unique set of learnings and techniques to the table. Which raises the question: what if your new partner handles some aspects of UX better than you do?

Working collaboratively through in-depth Acquisition Analysis gives you an opportunity to extract the best from all digital properties, as either company’s platforms may have features, functionality, or content that does a better job of meeting business goals. How do you know which elements will be more successful? By auditing both platforms with tools like the ones we’ll talk about next.


When merging, don’t assume the bigger company should swallow the smaller and all its digital assets. There might be many things that the smaller company is doing better.


Conducting UX Audits

To preserve SEO value and cull the best-performing content for the new platform, many companies conduct content and SEO audits, often using free or paid tools. These usually involve flagging duplicate content, comparing performance metrics, and using R.O.T. (redundant/outdated/trivial) analyses.

What many organizations miss, however, is the opportunity to conduct UX and customer audits while directly comparing digital platforms. These can provide invaluable insights about the mental models and behaviors of users.

At a minimum, we recommend comparing both platforms using Nielsen Norman Group’s 10 usability heuristics. Setting the standard for user interface design for almost three decades, these guidelines give you a great baseline for identifying which parts of each platform are the most user-friendly. You can also compare heatmaps and scrollmaps to assess which platform does a better job of engaging users in ways that matter to your business goals.

Here are some other examples of UX analyses we conduct for clients when merging digital platforms:

Five second test

With existing customers or representatives of your target audience, ask users to view a page for five seconds and then answer a few questions about it. You’re looking for gut feelings here, as first impressions can tell you a lot about a page’s effectiveness.

Questions might include:

  • What do you remember about the page?
  • What company or product was it for?
  • What could you do on that page?

This test should be done for multiple pages on a website, not just the homepage. It’s especially valuable for product or service pages, where you can assess whether specific features are easily visible and accessible.

Customer interview comparison

For this assessment, enlist 5 to 10 customers for each business. Have the customers of Company A use Company B’s platform and vice versa, asking them to explain the value each company offers. You can also ask users what’s missing when they use the other company’s website. What’s different and better (or worse) than before? The answers can help you determine which brand and functional elements are essential to the user experience for each platform.

This test can also provide insights about the impact of elements you may not have previously considered, like the quality of photography or the order in which information is presented. These elements can set expectations and affect how people use the platform, all of which contributes to building users’ trust.

For a more in-depth analysis of user engagement and preferences, try gathering a combination of quantitative and qualitative data.

Site map analysis

Given that the merging companies are likely in the same or similar industries, there will probably be overlap between the site maps for each company’s website. But there will also be elements that are unique to each site. To successfully blend the information architecture of both properties, you’ll need to determine which elements work best for your target audience.

In addition to comparing analytics for the different websites to see which elements are most effective, here are a few other research methods we recommend:

Cohort analysis

Looking at other websites in your industry, examine their site structures and the language they use (e.g. “Find a doctor” vs. “Find a provider”). This reflects visitors’ expectations of what information they’ll get and where they can find it. You can also identify areas where you should deviate from the norm, including language that’s more authentic and unique to your brand.

Card sort

Card sorting helps you understand how to structure content in a way that makes sense to your users, so they can find what they’re looking for. Participants group labeled notecards according to criteria they would naturally use. For example, if you have a car rental site, you could ask users to organize different vehicle models into groups that make sense to them. While your company might use terms like “family car” or “executive sedan,” your customers might have completely different perceptions.

Tree testing

Tree testing helps you evaluate a proposed site structure by asking users to find items based solely on the menu structure and terminology. Using an online interface (Treejack is a popular one) that displays only navigation links without layout or design, users are asked to complete a series of 10–15 tasks. This can show you how easy it is for site visitors to find and use information. This test is often used after card sorting sessions to confirm that the findings from the card sorting exercise are correct.

Use Information, Not Intuition

Like we said, just because a larger company acquires a smaller one doesn’t mean its digital properties have nothing to learn from the other’s. Better practices could exist in either place, and it would be a shame to lose any unique value the smaller company’s platform might offer.

With so many robust tools available for UX analysis, there’s no reason not to gather the crucial data that will help you decide which features of each platform will best achieve your business goals. When combining digital properties, the “1 + 1 = 3” trope only works if you truly glean the best of both worlds.

Need help laying the groundwork for merging separate digital platforms? Our strategic UX experts can craft a set of research exercises to help your team make the best possible decisions. Contact us today to learn more.


THE BRIEF

The Virtual Lab School (VLS) supports military educators with training and enrichment around educational practices from birth through age 12. Their curriculum was developed by a partnership between Ohio State University and the U.S. Department of Defense to assist direct-care providers, curriculum specialists, management personnel, and home-based care providers. Because of the distributed nature of educators around the world, courses and certifications are offered virtually through the VLS website.

Comprehensive Platform Assessment

The existing online learning platform had a deep level of complexity under the surface. For a student educator taking a certification course, the site tracks progress through the curriculum. Training leaders, meanwhile, need to see how their students are progressing, assign additional coursework, and assist a student educator through a particular certification.

Learning platforms in general are complex, and this one is no different. Add to this an intertwined set of military-style administration privileges and it produces a complex tree of layers and permutations.

The focus of the platform assessment phase was to catalog features of the largely undocumented legacy system, uncover complexity that could be simplified, and most importantly identify opportunities for efficiencies.


THE RESULTS

Personalized Online Learning Experience

Enrollment and Administration Portal

Administrators and instructors leverage an enrollment portal to manage the onboarding of new students and view progress on coursework and certifications.

Course Material Delivery

Students experience the course material through a combination of reading, video, and offline coursework downloads for completion and submission.

Learning Assessments & Grading

Students are tested through online assessments, where grading and suggestions are delivered in real time, and through offline assignments submitted for review by instructors.

Progress Pathways

A personalized student dashboard is the window into progress, allowing students to see which courses have been started, how much is left to complete, and the status of their certifications.

Certification

Completed coursework and assessments lead students to a point of certification resulting in a printable Certificate of Completion.


FINAL THOUGHTS

Faster and More Secure than Ever Before

When building for speed and scalability, fully leveraging Drupal’s advanced caching system is a major way to support those goals. The system design leverages query- and render-caching to support a high level of performance while also supporting personalization down to the individual level. This is accomplished with computed fields and auto-placeholdering utilizing lazy builders.

The result is an application that is quicker to load, more secure, and able to support hundreds more concurrent users.


Is your digital platform still on Drupal 7? By now, hopefully, you’ve heard that this revered content management system is approaching its end of life. In November 2023, official Drupal 7 support from the Drupal community will end, including bug fixes, critical security updates, and other enhancements.

If you’re not already planning for a transition from Drupal 7, it’s time to start.

With nearly 12 million websites currently hacked or infected, Drupal 7’s end of life carries significant security implications for your platform. In addition, any plug-ins or modules that power your site won’t be supported, and site maintenance will depend entirely on your internal resources.

Let’s take a brief look at your options for transitioning off Drupal 7, along with five crucial planning considerations.

What Are Your Options?

In a nutshell, you have two choices: upgrade to Drupal 9, or migrate to a completely new CMS.

With Drupal 9’s advanced features and functionalities, migrating from Drupal 7 to 9 involves much more than applying an update to your existing platform. You’ll need to migrate all of your data to a brand-new Drupal 9 site, with a whole new theme system and platform requirements.

Drupal 9 also requires a different developer skill set. If your developers have been on Drupal 7 for a long time, you’ll need to factor a learning curve into your schedule and budget, not only for the migration but also for ongoing development work and maintenance after the upgrade.

As an alternative, you could take the opportunity to build a new platform with a completely different CMS. This might make sense if you’ve experienced a lot of pain points with Drupal 7, although it’s worth investigating whether those problems have been addressed in Drupal 9.

Drupal 10

What of Drupal 10, you ask? Drupal 10 is slated to be released in December 2022, but it will take some time for community contributed modules to be updated to support the new version. Once ready, updating from Drupal 9 to Drupal 10 should be a breeze.

Download our “Drupal 10 Readiness” PDF guide

Preparing for the Transition

Whether you decide to migrate to Drupal 9 or a new CMS, your planning should include the following five elements:

Content Audit

Do a thorough content inventory and audit, examine user analytics, and revisit your content strategy, so you can identify which content is adding real value to your business (and which isn’t).

Some questions to ask:

  • Which content still supports your business goals?
  • Which pages actually get traffic and engagement?
  • What’s outdated, redundant, or trivial?

Another thing to consider is your overall content architecture: Is it a Frankenstein that needs refinement or revision? Hold that thought; in just a bit, we’ll cover some factors in choosing a CMS.

Design Evaluation

As digital experiences have evolved, so have user expectations. Chances are, your Drupal 7 site is starting to show its age. Be sure to budget time for design effort, even if you would prefer to keep your current design.

Drupal 7 site themes can’t be moved to a new CMS, or a different version of Drupal, without a lot of development work. Even if you want to keep the existing design, you’ll have to retheme the entire site, because you can’t apply your old theme to a new backend.

You’ll also want to consider the impact of a new design and architecture on your existing content, as there’s a good chance that even using an existing theme will require some content development.

Integrations

What integrations does your current site have, or need? How relevant and secure are your existing features and modules? Be sure to identify any modules that have been customized or are not yet compatible with Drupal 9, as they’ll likely require development work if you want to keep them.

Are your current vendors working well for you, or is it time to try something new? There are more microservices available than ever, offering specialized services that provide immense value with low development costs. Plus, a lot of functionalities from contributed modules in D7 are now a part of the D9 core.

CMS Selection & Architecture

Building the next iteration of your platform requires more than just CMS selection. It’s about defining an architecture that balances cost with flexibility. Did your Drupal 7 site feel rigid and inflexible? What features do you wish you had – more layout flexibility, different integrations, more workflow tools, better reporting, a faster user interface?

For content being brought forward, will migrating be difficult? Can it be automated? Should it be? If you’re not upgrading to Drupal 9, you could take the opportunity to switch to a headless CMS product, where the content repository is independent of the platform’s front end. In that case, you’d likely be paying for a product with monthly fees but no maintenance burden. On the flip side, many headless CMS products (Contentful, for example) limit how much you can customize the platform itself.
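To illustrate the headless pattern, here’s a minimal Python sketch that pulls published entries from Contentful’s Content Delivery API; the space ID, token, and content type are placeholders you’d swap for your own:

```python
# Sketch of a front end pulling content from a headless CMS over HTTP.
# Space ID, token, and content type below are placeholders.
import requests

SPACE_ID = "your_space_id"
DELIVERY_TOKEN = "your_delivery_api_token"

resp = requests.get(
    f"https://cdn.contentful.com/spaces/{SPACE_ID}/environments/master/entries",
    params={"access_token": DELIVERY_TOKEN, "content_type": "article"},
)
resp.raise_for_status()

for entry in resp.json().get("items", []):
    # Field names depend entirely on your content model.
    print(entry["fields"].get("title"))
```

The point of the pattern: any front end — website, app, kiosk — can consume the same content over the same API.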

Budget Considerations

There are three major areas to consider in planning your budget: hosting costs, feature enhancements, and ongoing maintenance and support. All of these are affected by the size and scope of your migration and the nature of your internal development team.

Key things to consider:

  • How will hosting costs change with a new platform or provider?
  • Which feature enhancements are must-haves at launch, and which can wait?
  • Who will handle ongoing maintenance and support: your internal team or a partner?

Look on the Bright Side

Whatever path you choose, transitioning from Drupal 7 to a new CMS is no small task. But there’s a bright side, if you view it as an opportunity. A lot has changed on the web since Drupal 7 was launched! This is a great time to explore how you can update and enhance your platform user experience.

Re-platforming is a complex process, and it’s one that we love guiding clients through. By partnering with an experienced Drupal development firm, most migrations can be planned and implemented more quickly, with better outcomes for your brand and your audience.

From organizing the project, to vendor and service selection, to budget management, we’ve helped a number of organizations bring their platforms into the modern web. To see an example, check out this design update for the American Veterinary Medical Association.

Need help figuring out the best next step for your Drupal 7 platform? Contact us today for a free consultation.


The Challenge

After six successful years of operating the platforms with steady growth year over year, COVID-19 and the rise of the at-home economy fundamentally changed the consumer relationship with healthcare.

This change resulted in more people than ever turning to digital platforms from the brands they trust for advice and information on the pandemic, mental health, wellness, at-home fitness, and at-home nutrition. This created a massive opportunity for increased brand engagement, but also a risk that members could turn to other sources. While we worked to shift the experience toward at-home health strategies, a deep look at the system architecture was in order to ensure performance at a scale we had not experienced before.

The Approach

While the legacy data center hosting model easily supported planned year-over-year growth, it was not up to the task of handling the uncertainty that came with the pandemic. To smoothly handle spikes in traffic without the additional fixed costs of scaling the data center, we researched and vetted a number of cloud providers. AWS was selected as the cloud solution, with support from HIPAA/HITRUST managed services provider Cloudticity. In partnership with our client and Cloudticity, our platform team planned and executed this important transition in three short months.

The Results

Performance, Security, Autonomy, and Agility

The transition to the cloud resulted in performance improvements during both normal and peak periods. Partner Cloudticity brings advanced threat monitoring and hardened security. AWS and the add-ons available provide auto-scaling as well as granular system access and flexibility that gives our engineering team more power and autonomy.

And finally, the cloud environment provides increased agility in responding to disaster recovery activities. While the transition to the cloud started with COVID, the platform now has a strong foundation for success far into the future.

While the terminology was first spotlighted by IBM back in 2014, the concept of a composable business has recently gained much traction, thanks in large part to the global pandemic. Today, organizations are combining more agile business models with flexible digital architecture, to adapt to the ever-evolving needs of their company and their customers.

Here’s a high-level look at building a composable business.

What is a Composable Business?

The term “composable” encompasses a mindset, technology, and processes that enable organizations to innovate and adapt quickly to changing business needs.

A composable business is like a collection of interchangeable building blocks (think: Lego) that can be added, rearranged, and jettisoned as needed. Compare that with an inflexible, monolithic organization that’s slow and difficult to evolve (think: cinderblock). By assembling and reassembling various elements, composable businesses can respond quickly to market shifts.

Gartner offers four principles of composable business:

  • More speed through discovery
  • Greater agility through modularity
  • Better leadership through orchestration
  • Resilience through autonomy

These four principles shape the business architecture and technology that support composability. From structural capabilities to digital applications, composable businesses rely on tools for today and tomorrow.

So, how do you get there?

Start With a Composable Mindset…

A composable mindset involves thinking about what could happen in the future, predicting what your business may need, and designing a flexible architecture to meet those needs. Essentially, it’s about embracing a modular philosophy and preparing for multiple possible futures.

Where do you begin? Research by Gartner suggests the first step in transitioning to a composable enterprise is to define a longer-term vision of composability for your business. Ask forward-thinking questions, such as:

  • How might customer expectations shift over the next three to five years?
  • What market forces or disruptions could reshape our industry?
  • Which capabilities would we need in order to respond quickly?

These kinds of questions provide insights into the market forces that will impact your business, helping you prepare for multiple futures. But you also need to adopt a modular philosophy, thinking about all the assets in your organization — every bit of data, every process, every application — as the building blocks of your composable business.

…Then Leverage Composable Technology

A long-term vision creates purpose and structure for a composable business; technology provides the tools that bring it to life. Composable technology begets sustainable business architectures, ready to address the challenges of the future rather than the past.

For many organizations, the shift to composability means evolving from an inflexible, monolithic digital architecture to a modular application portfolio. The portfolio is made up of packaged business capabilities, or PBCs, which form the foundation of composable technology.

The ABCs of PBCs

PBCs are software components that provide specific business capabilities. Although similar in some respects to microservices, PBCs address more than technological needs. While a specific application may leverage a microservice to provide a feature, when that feature represents a business capability beyond just the application at hand, it is a PBC.

Because PBCs can be curated, assembled, and reassembled as needed, you can adapt your technology practically at the pace of business change. You can also experiment with different services, shed things that aren’t working, and plug in new options without disrupting your entire ecosystem.

When building an application portfolio with PBCs, the key is to identify the capabilities your business needs to be flexible and resilient. What are the foundational elements of your long-term vision? Your target architecture should drive the business outcomes that support your strategic goals.

Build or Buy?

PBCs can either be developed internally or sourced from third parties. Vendors may include traditional packaged-software vendors and nontraditional parties, such as global service integrators or financial services companies.

When deciding whether to build or buy a PBC, consider whether your target capability is unique to your business. For example, a CMS is something many businesses need, and thus it’s a readily available PBC that can be more cost-effective to buy. But if, through vendor selection, you find that your particular needs are unique, you may want to invest in building your own.

Real-World Example

While building a new member retention platform for a large health insurer, we discovered a need to quickly look up member status during the onboarding process. Because the company had a unique way of identifying members, it required building custom software.

Although initially conceived in the context of the platform being created, a composable mindset led to the development of a standalone, API-first service — a true PBC providing member lookup capability to applications across the organization, and waiting to serve the applications of the future.
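A stripped-down sketch of that kind of API-first lookup service might look like the following; the route, data, and field names are hypothetical stand-ins, not the client’s actual implementation:

```python
# Hypothetical sketch of an API-first member-lookup PBC (all names invented).
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Stand-in for the insurer's member registry; in production, a real data store.
MEMBERS = {"A123": {"status": "active", "plan": "gold"}}

@app.route("/members/<member_id>/status")
def member_status(member_id):
    member = MEMBERS.get(member_id)
    if member is None:
        abort(404)
    # Any application in the organization can call this one endpoint.
    return jsonify({"member_id": member_id, "status": member["status"]})

if __name__ == "__main__":
    app.run(port=5000)
```

Because the capability lives behind its own API rather than inside one application, future systems can reuse it without reimplementation.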

A Final Word

Disruption is here to stay. While you can’t predict every major shift, innovation, or crisis that will impact your organization, you can (almost) future-proof your business with a composable approach.

Start with the mindset, lay out a roadmap, and then design a step-by-step program for digital transformation. The beauty of an API-led approach is that you can slowly but surely transform your technology, piece by piece.

If you’re interested in exploring a shift to composability, we’d love to help. Contact us today to talk about your options.

Why are microservices growing in popularity for enterprise-level platforms? For many organizations, a microservice architecture provides a faster and more flexible way to leverage technology to meet evolving business needs. For some leaders, microservices better reflect how they want to structure their teams and processes.

But are microservices the best fit for you?

We’re hearing this question more and more from platform owners across multiple industries as software monoliths become increasingly impractical in today’s fast-paced competitive landscape. However, while microservices offer the agility and flexibility that many organizations are looking for, they’re not right for everyone.

In this article, we’ll cover key factors in deciding whether microservices architecture is the right choice for your platform.

What’s the Difference Between Microservices and Monoliths?

Microservices architecture emerged roughly a decade ago to address the primary limitations of monolithic applications: scale, flexibility, and speed.

Microservices are small, separately deployable, software units that together form a single, larger application. Specific functions are carried out by individual services. For example, if your platform allows users to log in to an account, search for products, and pay online, those functions could be delivered as separate microservices and served up through one user interface (UI).
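As a sketch of that separation, here’s how a thin UI layer might compose independently deployed services in Python; the service URLs and response shapes are hypothetical:

```python
# Illustrative only: a thin UI layer composing separately deployed services.
# Service URLs and response shapes are hypothetical.
import requests

ACCOUNT_SVC = "http://accounts.internal"   # login and profile
SEARCH_SVC = "http://search.internal"      # product search
# Payment would be a third, similarly independent service.

def product_page(session_token: str, query: str) -> dict:
    # Each call hits an independent service; any one of them can be
    # scaled, fixed, or redeployed without touching the others.
    user = requests.get(f"{ACCOUNT_SVC}/me",
                        headers={"Authorization": session_token}).json()
    results = requests.get(f"{SEARCH_SVC}/products",
                           params={"q": query}).json()
    return {"user": user, "results": results}
```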

In monolithic architecture, all of the functions and UI are interconnected in a single, self-contained application. All code is traditionally written in one language and housed in a single codebase, and all functions rely on shared data libraries.

Essentially, with most off-the-shelf monoliths, you get what you get. It may do everything, but not be particularly great at anything. With microservices, by contrast, you can build or cherry-pick optimal applications from the best a given industry has to offer.

Because of their modular nature, microservices make it easier to deploy new functions, scale individual services, and isolate and fix problems. On the other hand, with less complexity and fewer moving parts, monoliths can be cheaper and easier to develop and manage.

So which one is better? As with most things technological, it depends on many factors. Let’s take a look at the benefits and drawbacks of microservices.

Advantages of Microservices Architecture

Companies that embrace microservices see it as a cleaner, faster, and more efficient approach to meeting business needs, such as managing a growing user base, expanding feature sets, and deploying solutions quickly. In fact, there are a number of ways in which microservices beat out monoliths for speed, scale, and agility.

Shorter time to market

Large monolithic applications can take a long time to develop and deploy, anywhere from months to years. That could leave you lagging behind your competitors’ product releases or struggling to respond quickly to user feedback.

By leveraging third-party microservices rather than building your own applications from scratch, you can drastically reduce time to market. And, because the services are compartmentalized, they can be built and deployed independently by smaller, dedicated teams working simultaneously. You also have greater flexibility in finding the right tools for the job: you can choose the best of breed for each service, regardless of technology stack.

Lastly, microservices facilitate the minimum viable product approach. Instead of deploying everything on your wishlist at once, you can roll out core services first and then release subsequent services later.

Faster feature releases

Any changes or updates to monoliths require redeploying the entire application. The bigger a monolith gets, the more time and effort is required for things like updates and new releases.

By contrast, because microservices are independently managed, dedicated teams can iterate at their own pace without disrupting others or taking down the entire system. This means you can deploy new features rapidly and continuously, with little to no risk of impacting other areas of the platform.

This added agility also lets you prioritize and manage feature requests from a business perspective, not a technology perspective. Technology shouldn’t prevent you from making changes that increase user engagement or drive revenue—it should enable those changes.

Affordable scalability

If you need to scale just one service in a monolithic architecture, you’ll have to scale and redeploy the entire application. This can get expensive, and you may not be able to scale in time to satisfy rising demand.

Microservices architecture offers not only greater speed and flexibility, but also potential savings in hosting costs, because you can independently scale any individual service that’s under load. You can also configure a single service to add capacity automatically as demand rises, and then scale back to normal capacity.

More support for growth


With microservices architecture, you’re not limited to a UI that’s tethered to your back end. For growing organizations that are continually thinking ahead, this is one of the greatest benefits of microservices architecture.


In the past, websites and mobile apps had completely separate codebases, and launching a mobile app meant developing a whole new application. Today, you just need to develop a mobile UI and connect it to the same service as your website UI. Make updates to the service, and it works across everything.

You have complete control over the UI — what it looks like, how it functions for the customer, etc… You can also test and deploy upgrades without disrupting other services. And, as new forms of data access and usage emerge, you have readily available services that you can use for whatever application suits your needs. Digital signage, voice commands for Alexa… and whatever comes next.

Optimal programming options

Since monolithic applications are tightly coupled and developed with a single stack, all components typically share one programming language and framework. This means any future changes or additions are limited to the choices you make early on, which could cause delays or quality issues in future releases.

Because microservices are loosely coupled and independently deployed, it’s easier to manage diverse datasets and processing requirements. Developers can choose whatever language and storage solution is best suited for each service, without having to coordinate major development efforts with other teams.

Greater resilience

For complex platforms, fault tolerance and isolation are crucial advantages of microservices architecture. There’s less risk of system failure, and it’s easier and faster to fix problems.

In monolithic applications, even just one bug affecting one tiny part of a single feature can cause problems in an unrelated area—or crash the entire application. Any time you make a change to a monolithic application, it introduces risk. With microservices, if one service fails, it’s unlikely to bring others down with it. You’ll have reduced functionality in a specific capacity, not the whole system.

Microservices also make it easier to locate and isolate issues, because you can limit the search to a single software module. In monoliths, by contrast, the possible chain of faults makes it hard to isolate the root cause of problems or predict the outcome of any changes to the codebase.

Monoliths thus make it difficult and time-consuming to recover from failures, especially since, once an issue has been isolated and resolved, you still have to rebuild and redeploy the entire application. Since microservices allow developers to fix problems or roll back buggy updates in just one service, you’ll see a shorter time to resolution.

Faster onboarding

With smaller, independent code bases, microservices make it faster and easier to onboard new team members. Unlike with monoliths, new developers don’t have to understand how every service works or all the interdependencies at play in the system.

This means you won’t have to scour the internet looking for candidates who can code in the only language you’re using, or spend time training them in all the details of your codebase. Chances are, you’ll find new hires more easily and put them to work faster.

Easier updates

As consumer expectations for digital experiences evolve over time, applications need to be updated or upgraded to meet them. Large monolithic applications are generally difficult, and expensive, to upgrade from one version to the next.

Because third-party app owners build and pay for their own updates, with microservices there’s no need to maintain or enhance every tool in your system. For instance, you get to let Stripe perfect its payment processing service while you leverage the new features. You don’t have to pay for future improvements, and you don’t need anyone on staff to be an expert in payment processing and security.
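For example, a checkout flow might lean on Stripe’s payment service rather than homegrown card handling. A minimal sketch using Stripe’s Python library, with a placeholder API key and amount:

```python
# Sketch: delegating payment to a third-party service (Stripe's Python library).
# The API key is a placeholder; amounts are in cents.
import stripe

stripe.api_key = "sk_test_..."  # placeholder test key

def start_checkout(amount_cents: int, currency: str = "usd") -> str:
    # Stripe owns the hard parts: card data handling, PCI compliance,
    # fraud screening, and all future improvements to them.
    intent = stripe.PaymentIntent.create(amount=amount_cents, currency=currency)
    return intent.client_secret  # the front end uses this to confirm payment

client_secret = start_checkout(4999)
```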

Disadvantages of Microservices Architecture

Do microservices win in every circumstance? Absolutely not. Monoliths can be a more cost-effective, less complicated, and less risky solution for many applications. Below are a few potential downsides of microservices.

Extra complexity

With more moving parts than monolithic applications, microservices may require additional effort, planning, and automation to ensure smooth deployment. Individual services must cooperate to create a working application, but the inherent separation between teams could make it difficult to create a cohesive end product.

Development teams may have to handle multiple programming languages and frameworks. And, with each service having its own database and data storage system, data consistency could be a challenge.

Also, leveraging numerous third-party services creates more network connections, and with them more opportunities for latency and connectivity issues in your architecture.

Difficulty in monitoring

Given the complexity of microservices architecture and the interdependencies that may exist among applications, it’s more challenging to test and monitor the entire system. Each microservice requires individualized testing and monitoring.

You could build automated testing scripts to ensure individual applications are always up and running, but this adds time and complexity to system maintenance.

Added external risks

There are always risks when using third-party applications, in terms of both performance and security. The more microservices you employ, the more possible points of failure exist that you don’t directly control.

In addition, with multiple independent containers, you’re exposing more of your system to potential attackers. Those distributed services need to talk to one another, and a high number of inter-service network communications can create opportunities for outside entities to access your system.

On the upside, the containerized nature of microservices architecture prevents security threats in one service from compromising other system components. As we noted in the advantages section above, it’s also easier to track down the root cause of a security issue.

Potential culture changes

Microservices architecture usually works best in organizations that employ a DevOps-first approach, where independent clusters of development and operations teams work together across the lifecycle of an individual service. This structure can make teams more productive and agile in bringing solutions to market. But, at an organizational level, it requires a broader skill set for developing, deploying, and monitoring each individual application.

A DevOps-first culture also means decentralizing decision-making power, shifting it from project teams to a shared responsibility among teams and DevOps engineers. The goal is to ensure that a given microservice meets a solution’s technical requirements and can be supported in the architecture in terms of security, stability, auditing, and so on.

3 Paths Toward Microservices Transformation

In general, there are three different approaches to developing a microservices architecture:

1. Deconstruct a monolith

This kind of approach is most common for large enterprise applications, and it can be a massive undertaking. Take Airbnb, for instance: several years ago, the company migrated from a monolith architecture to a service-oriented architecture incorporating microservices. Features such as search, reservations, messaging, and checkout were broken down into one or more individual services, enabling each service to be built, deployed, and scaled independently.

In most cases, it’s not just the monolith that becomes decentralized. Organizations will often break up their development group, creating smaller, independent teams that are responsible for developing, testing, and deploying individual applications.

2. Leverage PBCs

Packaged Business Capabilities, or PBCs, are essentially autonomous collections of microservices that deliver a specific business capability. This approach is often used to create best-of-breed solutions, where many services are third-party tools that talk to each other via APIs.

PBCs can stand alone or serve as the building blocks of larger app suites. Keep in mind, adding multiple microservices or packaged services can drive up costs as the complexity of integration increases.

3. Combine both types

Small monoliths can be a cost-effective solution for simple applications with limited feature sets. If that applies to your business, you may want to build a custom app with a monolithic architecture.

However, there are likely some services, such as payment processing, that you don’t want to have to build yourself. In that case, it often makes sense to build a monolith and incorporate a microservice for any features that would be too costly or complex to tackle in-house.

A Few Words of Caution

Even though they’re called “microservices”, be careful not to get too small. If you break services down into many tiny applications, you may end up creating an overly complex application with excessive overhead. Lots of micro-micro services can easily become too much to maintain over time, with too many teams and people managing different pieces of an application.

Given the added complexity and potential costs of microservices, for smaller platforms with only one UI it may be best to start with a monolithic application and slowly add microservices as you need them. Start at a high level and zoom in over time, looking for specific functions you can optimize to help you stand out.

Lastly, choose your third-party services with care. It’s not just about the features; you also need to consider what the costs might look like if you need to scale a particular service.

Final Thoughts: Micro or Mono?

Still trying to decide which architecture is right for your platform? Here are some of the most common scenarios we encounter with clients:

  1. If time to market is the most important consideration, then leveraging third-party microservices is usually the fastest way to build out a platform or deliver new features.
  2. If some aspect of what you’re doing is custom, then consider starting with a monolith and either building custom services or using third-party services where they suit a particular need.
  3. If you don’t have a ton of money, and you need to get something up quick and dirty, then consider starting with a monolith and splitting it up later.

Here at Oomph, we understand that enterprise-level software is an enormous investment and a fundamental part of your business. Your choice of architecture can impact everything from overhead to operations. That’s why we take the time to understand your business goals, today and down the road, to help you choose the best fit for your needs.

We’d love to hear more about your vision for a digital platform. Contact us today to talk about how we can help.

How we leveraged Drupal’s native APIs to push notifications to the many department websites for the State.

RI.gov is a custom Drupal distribution that was built with the sole purpose of running hundreds of department websites for the state of Rhode Island. The platform leverages a design system for flexible page building, custom authoring permissions, and a series of custom tools to make authoring and distributing content across multiple sites more efficient.


The Challenge

The platform had many business requirements, and one stated that a global notification needed to be published to all department sites in near real-time. These notifications would communicate important department information on all related sites. Further, these notifications needed to be ingested by the individual websites as local content to enable indexing them for search.

The hierarchy of the departments and their sites added a layer of complexity to this requirement. A department needs to create notifications that broadcast only to subsidiary sites, not the entire network. For example, the Department of Health might need to create a health-department-specific notification that would get pushed to the Covid site, the RIHavens site, and the RIDelivers sites — but not to an unrelated department, like DEM.

A visualization of the hierarchical structure of notifications and the way in which the system needed to work

Exploration

Aggregator

Our first idea was to utilize the built-in Drupal Aggregator module to pull notifications from the hub. A proof of concept showed that while it worked well for pulling content from the hub site, the approach had a few problems:

  1. It relied heavily on the local site’s cron job to pull updates, which led to timing issues in getting the content — it was not in near real-time. Due to server limitations, we could not run cron as often as would have been necessary.
  2. We would have needed to maintain two entity types: one for global notifications and a second for local site notifications. Keeping local and global notifications as the same entity type makes this subsystem easier to maintain.

Feeds

Another thought was to utilize the Feeds module to pull content from the hub into the local sites. This was a better solution than the Aggregator because the nodes would be created locally and could be indexed for local searching. Unfortunately, Feeds relied on cron as well.

Our Solution

JSON API

We created a suite of custom modules centered around moving data between the network sites using Drupal’s JSON API. The API was used to register new sites with the main hub when they came online. It was also used to pass content entities from the main hub down to all sites within the network, and from the network sites back to the hub.

Notifications

In order to share content between all of the sites, we needed to ensure that the data structure was identical on all sites in the network. We started by creating a new notification content type that had a title field, a body field, and a boolean checkbox indicating whether the notification should be considered global. Then, we packaged the configuration for this content type using the Features module.

By requiring our new notification feature module in the installation profile, we ensured that all sites would have the required data structure whenever a new site was created. Features also allowed us to ensure that any changes to the notification data model could be applied to all sites in the future, maintaining the consistency we needed.
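For illustration, creating one of these notifications programmatically might look like the sketch below; the field machine names are assumptions, not the project’s actual configuration:

```php
<?php
use Drupal\node\Entity\Node;

// Sketch: a notification node using the shared data structure.
// 'field_global' is an assumed machine name for the boolean checkbox.
$notification = Node::create([
  'type' => 'notification',
  'title' => 'Office closure',
  'body' => [
    'value' => 'All offices close at noon today.',
    'format' => 'basic_html',
  ],
  'field_global' => TRUE,
]);
$notification->save();
```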

Network Domain Entity

In order for the main hub, ri.gov, to communicate with all sites in the network, we needed a way to know what Drupal sites existed. To do this, we created a custom configuration entity that stored the URL of sites within the network. Using this domain entity, we were able to query all known sites and pass the global notification nodes created on ri.gov to each of them using the JSON API.
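A rough sketch of that dispatch step follows; the entity and queue machine names are assumptions, and the queue itself is described in the next section:

```php
<?php
// Sketch: load every registered site and queue the notification for
// delivery. Assume $notification is the node that was just saved
// (e.g., from hook_ENTITY_TYPE_insert()).
$domains = \Drupal::entityTypeManager()
  ->getStorage('network_domain')
  ->loadMultiple();

$queue = \Drupal::queue('notification_publisher');
foreach ($domains as $domain) {
  $queue->createItem([
    'endpoint' => $domain->get('url') . '/jsonapi/node/notification',
    'nid' => $notification->id(),
  ]);
}
```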

Queue API

To ensure that the notification nodes were posted to all the sites without timeouts, we decided to utilize Drupal’s Queue API. Once the notification content was created on the ri.gov hub, we queried the known domain entities and created a queue item that would use cron to actually post the notification node to each site’s JSON API endpoint. We decided to use cron in this instance to give us some assurance that a post to many websites wouldn’t time out and fail.
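A cron-driven queue worker for that delivery step might look roughly like this; the module name, plugin ID, and payload shape are assumptions, and dependency injection is omitted for brevity:

```php
<?php

namespace Drupal\notification_publisher\Plugin\QueueWorker;

use Drupal\Core\Queue\QueueWorkerBase;

/**
 * Posts queued notifications to remote JSON:API endpoints during cron.
 *
 * @QueueWorker(
 *   id = "notification_publisher",
 *   title = @Translation("Notification publisher"),
 *   cron = {"time" = 30}
 * )
 */
class NotificationPublisher extends QueueWorkerBase {

  public function processItem($data) {
    $node = \Drupal::entityTypeManager()->getStorage('node')->load($data['nid']);
    \Drupal::httpClient()->post($data['endpoint'], [
      'headers' => [
        'Content-Type' => 'application/vnd.api+json',
        // The OAuth bearer token is covered in the Authentication section.
      ],
      'json' => [
        'data' => [
          'type' => 'node--notification',
          'attributes' => [
            'title' => $node->label(),
            'body' => ['value' => $node->get('body')->value],
          ],
        ],
      ],
    ]);
  }

}
```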

Batch API

To allow time-sensitive notifications to be pushed immediately, we created a custom batch operation that reads all of the queued notifications and pushes them out one at a time. If any errors are encountered, the notification is re-queued at the end of the stack, and the process continues until all notifications have been posted to the network sites.

A visualization of the batch process we created to handle queueing updates and pushing them out to the sites that needed them
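The batch callback can reuse the same queue worker, claiming items until the queue is empty. A sketch, again with assumed names:

```php
<?php
// Sketch: one batch operation that drains the queue an item at a time.
function notification_publisher_batch_push(array &$context) {
  $queue = \Drupal::queue('notification_publisher');
  if ($item = $queue->claimItem()) {
    try {
      // Reuse the same logic the cron queue worker runs.
      \Drupal::service('plugin.manager.queue_worker')
        ->createInstance('notification_publisher')
        ->processItem($item->data);
      $queue->deleteItem($item);
    }
    catch (\Exception $e) {
      // Release the item so it can be retried on a later pass.
      $queue->releaseItem($item);
    }
  }
  // The Batch API re-invokes this callback until finished >= 1.
  $context['finished'] = $queue->numberOfItems() ? 0 : 1;
}

// Kick off the batch, e.g., from a form submit handler.
batch_set([
  'title' => t('Pushing notifications'),
  'operations' => [['notification_publisher_batch_push', []]],
]);
```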

New site registrations

In order to ensure that new sites receive notifications from the hub, we needed a site registration process. Whenever a new site is spun up, a custom module is installed that calls out to the hub using JSON API and registers itself by creating a new network domain entity with its endpoint URL. This lets the hub know about the new site so it can push any new notifications to it in the future.

A visualization of the way in which new satellite sites ping the home base “hub” site and become registered feed destinations
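A minimal sketch of that self-registration is below. Core JSON:API doesn’t allow creating config entities, so the hub-side route here is a stand-in for whatever endpoint the custom module suite exposes:

```php
<?php
// Sketch: during install, the new site announces itself to the hub.
// The '/api/register-site' route is hypothetical.
function site_registration_install() {
  \Drupal::httpClient()->post('https://www.ri.gov/api/register-site', [
    'json' => [
      'url' => \Drupal::request()->getSchemeAndHttpHost(),
      'label' => \Drupal::config('system.site')->get('name'),
    ],
  ]);
}
```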

The installation process also queries the hub for existing notifications: using the JSON API, it gets a list of all notification nodes from the hub and adds them to its local queue for creation. Then, the local site uses cron to query the hub for the details of each notification node and create it locally. This ensures that when a new site comes online, it has an up-to-date list of all the important notifications from the hub.
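That initial sync might look something like this sketch (endpoint and queue names assumed):

```php
<?php
// Sketch: fetch the hub's notification list and queue each one for
// local creation; cron fetches the full details later.
$response = \Drupal::httpClient()->get('https://www.ri.gov/jsonapi/node/notification', [
  'headers' => ['Accept' => 'application/vnd.api+json'],
]);
$payload = json_decode((string) $response->getBody(), TRUE);

$queue = \Drupal::queue('notification_importer');
foreach ($payload['data'] as $resource) {
  $queue->createItem(['uuid' => $resource['id']]);
}
```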

Authentication

Passing this data between sites is one challenge; doing it securely adds another layer of complexity. All of the requests between the sites authenticate with each other using the Simple OAuth module. When a new site is created, an installation process creates a dedicated user in the local database that will own all notification nodes created through the syndication process. The installation process also creates the appropriate Simple OAuth consumers, which allows authenticated connections to be made between the sites.
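For illustration, fetching a bearer token from Simple OAuth’s /oauth/token endpoint with a client-credentials grant could look like this; the consumer values are placeholders created during installation:

```php
<?php
// Sketch: authenticate against a remote network site before pushing
// content. $domain, $consumer_id, and $consumer_secret are placeholders.
$response = \Drupal::httpClient()->post($domain->get('url') . '/oauth/token', [
  'form_params' => [
    'grant_type' => 'client_credentials',
    'client_id' => $consumer_id,
    'client_secret' => $consumer_secret,
  ],
]);
$token = json_decode((string) $response->getBody(), TRUE)['access_token'];

// Subsequent JSON:API requests carry the token.
$headers = ['Authorization' => 'Bearer ' . $token];
```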

Department sites

Once all of the groundwork was in place, with minimal effort, we were able to allow for department sites to act as hubs for their own department sites. Thus, the Department of Health can create notifications that only go to subsidiary sites, keeping them separate from adjacent departments.

Translations

The entire process also works with translations. After a notification is created in the default language, it gets queued and sent to the subsidiary sites. Then, a content author can create a translation of that same node, and the translation will get queued and posted to the network of sites in the same manner as the original. All content and translations can be managed at the hub site and trickle down to the subsidiary sites.
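A short sketch of queueing each translation alongside the original (queue name assumed):

```php
<?php
// Sketch: queue every translation of a notification for syndication.
// Assume $notification is the node being syndicated.
$queue = \Drupal::queue('notification_publisher');
foreach ($notification->getTranslationLanguages() as $langcode => $language) {
  $translation = $notification->getTranslation($langcode);
  $queue->createItem([
    'nid' => $translation->id(),
    'langcode' => $langcode,
  ]);
}
```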

Moving in the opposite direction

With all of the authorization, queues, batches, and APIs in place, the next challenge was making this entire system work with a Press Release content type. This presented two new challenges that we needed to overcome:

  1. Instead of moving content from the top down, we needed to move from the bottom up. Press release nodes get created on the affiliate sites and would need to be replicated on the hub site.
  2. Press release nodes were more complex than the notification nodes. These content types included media references, taxonomy term references and toughest of all, paragraph references.

Solving the first challenge was pretty simple – we reused the custom publishing module and instructed the Queue API to send the press release nodes to the hub sites.

Getting this working with a complex entity like the press release node meant that we needed to not only push the press release node, but we also needed to push all entities that the initial node referenced. In order for it all to work, the entities needed to be created in reverse order.

Once a press release node was created or updated, we used the EntityInterface referencedEntities() method to recursively drill into all of the entities that were referenced by the press release node. In some cases, this meant getting paragraph entities that were nested two, three, even four levels deep inside of other paragraphs. Once we reached the bottom of the referenced entity pile, we began queuing those entities from the bottom up. So, the paragraph that was nested four levels deep was the first to get sent, and the actual node was the last.
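A simplified sketch of that traversal follows; a real implementation would also need to filter out entities that shouldn’t syndicate (users, config, and so on) and guard against circular references:

```php
<?php
use Drupal\Core\Entity\EntityInterface;

/**
 * Sketch: depth-first collection of referenced entities, deepest first.
 */
function press_release_collect_references(EntityInterface $entity, array &$stack = []) {
  foreach ($entity->referencedEntities() as $referenced) {
    // Recurse first so the most deeply nested entities land earliest.
    press_release_collect_references($referenced, $stack);
    $stack[] = $referenced;
  }
  return $stack;
}

// Queue the deepest paragraphs first and the press release node last.
// Assume $press_release is the node that was created or updated.
$to_queue = press_release_collect_references($press_release);
$to_queue[] = $press_release;
```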

A sample visualization of a node collection, like a press release, and all of the entities within it that need to be queued and communicated to our hub’s JSON API endpoint


Conclusion

Drupal’s powerful suite of APIs gave us all the tools necessary to build a platform that allows the State of Rhode Island to easily keep its citizens informed of important information, while giving its editing team the ease of a create-once, publish-everywhere workflow.

You’ve decided to decouple, you’re building your stack, and the options are limitless – oh, the freedom of escaping the LAMP stack and the boundaries of the conventional CMS! Utilities that were once lumped together into one immovable bundle can now be selected separately, or not selected at all. It is indeed refreshing to pick and choose the individual services best fitted to your project. But now you have to choose.

One of those choices is your backend content storage. Even though decoupling means breaking free from monolithic architecture, certain concepts persist: content modeling, field types, and the content editor experience.

I recently evaluated four headless CMS options: Contentful, Cosmic, Dato, and Prismic. Prior to that, I had no experience with any of them. Fortunately, they all offer a free plan to test and trial their software. For simpler projects, that may be all you need. But if not, each CMS offers multiple tiers of features and support, and costs vary widely depending on your requirements.

I was tasked with finding a CMS and plan that met the following specs:

Although this doesn’t seem like a big ask for any CMS, these requirements eliminated the free plans for all four services, so cost became a factor.

Along with cost, I focused my evaluation on the editor experience, modeling options, integration potential, and other features. While I found lots of similarities between the four, each had something a little different to offer.

It’s worth mentioning that development is active on all four CMSs. New features and improvements were added just within the span of time it took to write this article. So keep in mind that current limitations could be resolved in a future update.

Contentful

Contentful’s Team package is currently priced at $489 per month, making it the most expensive of the four. This package includes 10 content editors and 2 separate roles. There is no editorial workflow without paying extra, but scheduled publishing is included.

Terminology

A site is a “space” and content types are “content types.”

What I love

The media library. Media of many different types and sources – from images to videos to documents and more – can be easily organized and filtered. Each asset has a freeform name and description field for searching and filtering. And since you can provide your own asset name, you’re not stuck with image_8456_blah.jpeg or whatever nonsense title your asset had when you uploaded it. Additionally, image dimensions are shown on the list view, which is a quick, helpful reference.


RUNNER UP

Dato’s Media Area offers similar filtering and a searchable notes field.

What I like

Commenting. Every piece of content has an admin comments area for notes or questions, with a threaded Reply feature.

My Views. My Views is an option in the content navigation panel. With a single click, you can display only content that you created or edited – very convenient when working with multiple editors and a large volume of content.

What could be better

Price. Contentful is expensive if your project needs don’t allow you to use the free/community plan. You do get a lot of features with the paid plans, but there’s a big jump between the free plan and the first paid tier.

Cosmic

Cosmic ranks second most pricey for our requirements at $299 per month for the Pro Package. This package includes 10 editors and 4 predefined roles. It has draft/scheduled publishing, and individual editor accounts can be limited to draft status only.

Terminology

A site is a “bucket” and content types are “object types.”

What I love

Developer Tools. Developer Tools is a handy button you can click at the object or object type level to view your REST endpoint and response. It also shows other ways (GraphQL, CLI, etc.) to connect to a resource, using real code that is specific to your bucket and objects.


RUNNER UP

Dato has an API Explorer for writing and running GraphQL queries.

The Slack Community. The Cosmic Slack community offers a convenient way to get technical support – in some cases, even down to lines-of-code level support – with quick response times.

What I like

View as editor. This is a toggle button in the navigation panel to hide developer features – even if your account is assigned the developer or admin role – allowing you to view the CMS as the editor role sees it. This is useful for documenting an editor’s process or troubleshooting their workflow.

Extensions. Cosmic provides several plug-and-play extensions, including importers for Contentful and WordPress content, as well as Algolia Search, Stripe, and more. I tested the Algolia extension, and it took only minutes to set up and immediately began syncing content to Algolia indexes*. You can also write your own extensions and upload them to your account.

What could be better

Price/price structure. I found Cosmic’s pricing structure to be the most confusing, with extra monthly charges for common features like localization, backups, versioning, and webhooks. It’s hard to know what you’ll actually pay per month until you add up all the extras. And once you do, you may be close to the cost of Contentful’s lower tier.

Content model changes. Changing the content model after you’ve created or imported a lot of content is tricky. Content model changes don’t flow down to existing content without a manual process of unlocking, editing, and re-publishing each piece of content, which can be very inefficient and confusing.

Dato

Dato’s Professional package is priced at €99 (about $120) per month, making it the second least pricey for our requirements. It includes 10 content editors and 15 roles, with configurable options to limit publishing rights.

Terminology

A site is a “project” and content types are “models.”

What I love

Tree-like collections. Dato lets you organize and display records in a hierarchical structure with visual nesting. The other CMSs give you roundabout ways to accomplish this, usually requiring extra fields. But Dato lets you do it without altering the content model. And creating hierarchy is as simple as dragging and dropping one record under another, making things like taxonomy a breeze to build.


RUNNER UP

No other CMS in this comparison offers hierarchical organizing quite like Dato, but Cosmic provides a parent field type, and Prismic has a documented strategy for creating hierarchical relationships.

What I like

Maintenance Mode. You can temporarily disable writes on your project and display a warning message to logged-in editors. If you need to prevent editors from adding/editing content — for instance, during content model changes — this is a useful feature.

What could be better

Field types. Out of the box, Dato doesn’t provide field types for dropdowns or checkboxes. There’s a plugin available that transforms a JSON field into a multiselect, but it’s presented as a list of toggles/booleans rather than a true multiselect. And managing that field means working with JSON, which isn’t a great experience for content editors.

Dato is also missing a simple repeater field for adding one or more of something. I created repeater-like functionality using the Modular Content field type, but this feels overly complicated, especially when every other CMS in my comparison implements either a Repeater field type (Cosmic, Prismic) or a multi-value field setting (Contentful).

Prismic

Prismic ranks least pricey, at $100/mo for the Medium Package. This package includes 25 content editors, 3 predefined roles, draft/scheduled publishing and an editorial workflow.

Terminology

A site is a “repository”, and content types are “custom types.”

What I love

Field types. Prismic gives you 16 unique field types for modeling your content, but it’s not the number of types that I love; it’s the particular combination of options: the dedicated Title type for headings, the Media link type, the Embed type, the Color Picker. Plus, the UI is so intuitive, content editors know exactly what they’re expected to do when populating a field.

Take the Title type for example. If you need a heading field in the other CMSs, you’d probably use a plain text or rich text field type. Using rich text almost guarantees you’ll get unwanted stuff (paragraph tags, in particular) wrapped around whatever heading the editor enters. Using plain text doesn’t let the editor pick which heading they want. Prismic’s Title type field solves both of these problems.


RUNNER UP

This is a tough one, but I’m leaning toward Contentful. What they lack in the number of available field types, they make up for in Appearance settings that allow you to render a field type to the editor in different formats.

Price. Unlimited documents, custom types, API calls and locales are included in the Medium package for a reasonable price. Additionally, Prismic has more packages and support tiers than any of the others, with one paid plan as low as $7/mo.

What I like

Slices. Slices are an interesting addition to back-end content modeling because they’re essentially components: things you build on the front end. Prismic lets you create custom components or use their predefined ones — blockquotes, a list of articles, an image gallery, and so on. I admit I didn’t test how these components render on the front end, but Slices deserve further exploration.

What could be better

Integration options/plugins. Although webhooks are included in all of Prismic’s plans, there doesn’t seem to be much development of plugins or other ways to quickly extend functionality. Every other CMS in this comparison offers simple, click-to-install extensions and integrations with common services.


A note on Front-end Frameworks

A headless CMS, by simple definition, is a content storage container. It does not provide the markup that your website visitors will see and use. Therefore, your project planning should include choosing and testing a front-end system or framework, such as Gatsby JS. It’s important to find out early in the process what obstacles, if any, exist in connecting your chosen CMS to your chosen front end.

At Oomph, we’ve successfully used both Contentful and Cosmic with a Gatsby front-end. However, Gatsby plugins exist for Prismic and Dato as well.

Summary

As with any decoupled service, your headless CMS choice will be determined by your project’s distinct requirements. Make sure to build into your project plan enough time to experiment with any CMS options you’re considering. If you haven’t worked with a particular CMS yet, give yourself a full day to explore, build a sample content model, add some content and media, and test the connection to your front-end.

Does a clear winner emerge from this comparison? I don’t think so. They each succeed and stand out in different ways. Use this article to kickstart your own evaluation, and see what works for you!


* At the time of this writing, there are some field types that the extension doesn’t pass from Cosmic to Algolia.