While the terminology was first spotlighted by IBM back in 2014, the concept of a composable business has recently gained much traction, thanks in large part to the global pandemic. Today, organizations are combining more agile business models with flexible digital architecture to adapt to the ever-evolving needs of their company and their customers.
Here’s a high-level look at building a composable business.
What is a Composable Business?
The term “composable” encompasses a mindset, technology, and processes that enable organizations to innovate and adapt quickly to changing business needs.
A composable business is like a collection of interchangeable building blocks (think: Lego) that can be added, rearranged, and jettisoned as needed. Compare that with an inflexible, monolithic organization that’s slow and difficult to evolve (think: cinderblock). By assembling and reassembling various elements, composable businesses can respond quickly to market shifts.
Gartner offers four principles of composable business:
- Discovery: React faster by sensing when change is happening.
- Modularity: Achieve greater agility with interchangeable components.
- Orchestration: Mix and match business functions to respond to changing needs.
- Autonomy: Create greater resilience via independent business units.
These four principles shape the business architecture and technology that support composability. From structural capabilities to digital applications, composable businesses rely on tools for today and tomorrow.
So, how do you get there?
Start With a Composable Mindset…
A composable mindset involves thinking about what could happen in the future, predicting what your business may need, and designing a flexible architecture to meet those needs. Essentially, it’s about embracing a modular philosophy and preparing for multiple possible futures.
Where do you begin? Research by Gartner suggests the first step in transitioning to a composable enterprise is to define a longer-term vision of composability for your business. Ask forward-thinking questions, such as:
- How will the markets we operate in evolve over the next 3-5 years?
- How will the competitive landscape change in that time?
- How are the needs and expectations of our customers changing?
- What new business models or new markets might we pursue?
- What product, service, or process innovations would help us outpace competitors?
These kinds of questions provide insights into the market forces that will impact your business, helping you prepare for multiple futures. But you also need to adopt a modular philosophy, thinking about all the assets in your organization — every bit of data, every process, every application — as the building blocks of your composable business.
…Then Leverage Composable Technology
A long-term vision helps create purpose and structure for a composable business; technology provides the tools that bring it to life. Composable technology begets sustainable business architectures that are ready to address the challenges of the future, not the past.
For many organizations, the shift to composability means evolving from an inflexible, monolithic digital architecture to a modular application portfolio. The portfolio is made up of packaged business capabilities, or PBCs, which form the foundation of composable technology.
The ABCs of PBCs
PBCs are software components that provide specific business capabilities. Although similar in some respects to microservices, PBCs address more than technological needs. While a specific application may leverage a microservice to provide a feature, when that feature represents a business capability beyond just the application at hand, it is a PBC.
Because PBCs can be curated, assembled, and reassembled as needed, you can adapt your technology practically at the pace of business change. You can also experiment with different services, shed things that aren’t working, and plug in new options without disrupting your entire ecosystem.
When building an application portfolio with PBCs, the key is to identify the capabilities your business needs to be flexible and resilient. What are the foundational elements of your long-term vision? Your target architecture should drive the business outcomes that support your strategic goals.
Build or Buy?
PBCs can either be developed internally or sourced from third parties. Vendors may include traditional packaged-software vendors and nontraditional parties, such as global service integrators or financial services companies.
When deciding whether to build or buy a PBC, consider whether your target capability is unique to your business. For example, a CMS is something many businesses need, and thus it’s a readily available PBC that can be more cost-effective to buy. But if, through vendor selection, you find that your particular needs are unique, you may want to invest in building your own.
Real-World Example
While building a new member retention platform for a large health insurer, we discovered a need to quickly look up member status during the onboarding process. Because the company had a unique way of identifying members, it required building custom software.
Although initially conceived in the context of the platform being created, a composable mindset led to the development of a standalone, API-first service — a true PBC providing member lookup capability to applications across the organization, and waiting to serve the applications of the future.
A Final Word
Disruption is here to stay. While you can’t predict every major shift, innovation, or crisis that will impact your organization, you can (almost) future-proof your business with a composable approach.
Start with the mindset, lay out a roadmap, and then design a step-by-step program for digital transformation. The beauty of an API-led approach is that you can slowly but surely transform your technology, piece by piece.
If you’re interested in exploring a shift to composability, we’d love to help. Contact us today to talk about your options.
Why are microservices growing in popularity for enterprise-level platforms? For many organizations, a microservice architecture provides a faster and more flexible way to leverage technology to meet evolving business needs. For some leaders, microservices better reflect how they want to structure their teams and processes.
But are microservices the best fit for you?
We’re hearing this question more and more from platform owners across multiple industries as software monoliths become increasingly impractical in today’s fast-paced competitive landscape. However, while microservices offer the agility and flexibility that many organizations are looking for, they’re not right for everyone.
In this article, we’ll cover key factors in deciding whether microservices architecture is the right choice for your platform.
What’s the Difference Between Microservices and Monoliths?
Microservices architecture emerged roughly a decade ago to address the primary limitations of monolithic applications: scale, flexibility, and speed.
Microservices are small, separately deployable software units that together form a single, larger application. Specific functions are carried out by individual services. For example, if your platform allows users to log in to an account, search for products, and pay online, those functions could be delivered as separate microservices and served up through one user interface (UI).
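To make the pattern concrete, here is a minimal, hypothetical sketch of one UI composing three independent services. Each function stands in for a separately deployed microservice (in a real system these would be network calls over HTTP or gRPC); all names and data are invented for illustration.

```python
def auth_service(username: str) -> dict:
    """Stand-in for a login microservice (token value is fake)."""
    return {"user": username, "token": "abc123"}

def search_service(query: str) -> list:
    """Stand-in for a product-search microservice with a toy catalog."""
    catalog = ["red shirt", "blue shirt", "red hat"]
    return [item for item in catalog if query in item]

def payment_service(amount_cents: int) -> dict:
    """Stand-in for a payment microservice."""
    return {"status": "paid", "amount_cents": amount_cents}

def ui_checkout(username: str, query: str, amount_cents: int) -> dict:
    """The single UI layer: it orchestrates calls to each service,
    while each service can be built, deployed, and scaled on its own."""
    session = auth_service(username)
    results = search_service(query)
    receipt = payment_service(amount_cents)
    return {"session": session, "results": results, "receipt": receipt}
```

The point of the sketch is the shape, not the code: the UI depends only on each service’s contract, so any one service can be replaced or scaled without touching the others.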
In monolithic architecture, all of the functions and UI are interconnected in a single, self-contained application. All code is traditionally written in one language and housed in a single codebase, and all functions rely on shared data libraries.
Essentially, with most off-the-shelf monoliths, you get what you get. It may do everything, but not be particularly great at anything. With microservices, by contrast, you can build or cherry-pick optimal applications from the best a given industry has to offer.
Because of their modular nature, microservices make it easier to deploy new functions, scale individual services, and isolate and fix problems. On the other hand, with less complexity and fewer moving parts, monoliths can be cheaper and easier to develop and manage.
So which one is better? As with most things technological, it depends on many factors. Let’s take a look at the benefits and drawbacks of microservices.
Advantages of Microservices Architecture
Companies that embrace microservices see it as a cleaner, faster, and more efficient approach to meeting business needs, such as managing a growing user base, expanding feature sets, and deploying solutions quickly. In fact, there are a number of ways in which microservices beat out monoliths for speed, scale, and agility.
Shorter time to market
Large monolithic applications can take a long time to develop and deploy, anywhere from months to years. That could leave you lagging behind your competitors’ product releases or struggling to respond quickly to user feedback.
By leveraging third-party microservices rather than building your own applications from scratch, you can drastically reduce time to market. And, because the services are compartmentalized, they can be built and deployed independently by smaller, dedicated teams working simultaneously. You also have greater flexibility in finding the right tools for the job: you can choose the best of breed for each service, regardless of technology stack.
Lastly, microservices facilitate the minimum viable product approach. Instead of deploying everything on your wishlist at once, you can roll out core services first and then release subsequent services later.
Faster feature releases
Any changes or updates to monoliths require redeploying the entire application. The bigger a monolith gets, the more time and effort are required for things like updates and new releases.
By contrast, because microservices are independently managed, dedicated teams can iterate at their own pace without disrupting others or taking down the entire system. This means you can deploy new features rapidly and continuously, with little to no risk of impacting other areas of the platform.
This added agility also lets you prioritize and manage feature requests from a business perspective, not a technology perspective. Technology shouldn’t prevent you from making changes that increase user engagement or drive revenue—it should enable those changes.
Affordable scalability
If you need to scale just one service in a monolithic architecture, you’ll have to scale and redeploy the entire application. This can get expensive, and you may not be able to scale in time to satisfy rising demand.
Microservices architecture offers not only greater speed and flexibility, but also potential savings in hosting costs, because you can independently scale any individual service that’s under load. You can also configure a single service to add capacity automatically until demand is met, and then scale back to normal.
More support for growth
With microservices architecture, you’re not limited to a UI that’s tethered to your back end. For growing organizations that are continually thinking ahead, this is one of the greatest benefits of microservices architecture.
In the past, websites and mobile apps had completely separate codebases, and launching a mobile app meant developing a whole new application. Today, you just need to develop a mobile UI and connect it to the same service as your website UI. Make updates to the service, and it works across everything.
You have complete control over the UI — what it looks like, how it functions for the customer, and so on. You can also test and deploy upgrades without disrupting other services. And, as new forms of data access and usage emerge, you have readily available services that you can use for whatever application suits your needs: digital signage, voice commands for Alexa, and whatever comes next.
Optimal programming options
Since monolithic applications are tightly coupled and developed with a single stack, all components typically share one programming language and framework. This means any future changes or additions are limited to the choices you make early on, which could cause delays or quality issues in future releases.
Because microservices are loosely coupled and independently deployed, it’s easier to manage diverse datasets and processing requirements. Developers can choose whatever language and storage solution is best suited for each service, without having to coordinate major development efforts with other teams.
Greater resilience
For complex platforms, fault tolerance and isolation are crucial advantages of microservices architecture. There’s less risk of system failure, and it’s easier and faster to fix problems.
In monolithic applications, even just one bug affecting one tiny part of a single feature can cause problems in an unrelated area—or crash the entire application. Any time you make a change to a monolithic application, it introduces risk. With microservices, if one service fails, it’s unlikely to bring others down with it. You’ll have reduced functionality in a specific capacity, not the whole system.
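That “reduced functionality, not a whole-system outage” behavior is usually implemented as graceful degradation at the calling side. The sketch below is a hypothetical illustration, not a production pattern library: if an optional downstream service fails, the page renders without it rather than crashing.

```python
def recommendations_service(user_id: str) -> list:
    """Stand-in for a downstream service that happens to be down."""
    raise ConnectionError("recommendations service is unreachable")

def product_service(product_id: str) -> dict:
    """Stand-in for a healthy, unrelated service."""
    return {"id": product_id, "name": "Widget", "price_cents": 499}

def render_product_page(user_id: str, product_id: str) -> dict:
    """Core content still renders when an optional service fails."""
    page = {"product": product_service(product_id)}
    try:
        page["recommendations"] = recommendations_service(user_id)
    except ConnectionError:
        # Degrade gracefully: reduced functionality in one capacity,
        # not a failure of the whole system.
        page["recommendations"] = []
    return page
```

Real systems typically layer timeouts, retries, and circuit breakers on top of this basic idea, but the isolation principle is the same.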
Microservices also make it easier to locate and isolate issues, because you can limit the search to a single software module. Whereas in monoliths, given the possible chain of faults, it’s hard to isolate the root cause of problems or predict the outcome of any changes to the codebase.
Monoliths thus make it difficult and time-consuming to recover from failures, especially since, once an issue has been isolated and resolved, you still have to rebuild and redeploy the entire application. Since microservices allow developers to fix problems or roll back buggy updates in just one service, you’ll see a shorter time to resolution.
Faster onboarding
With smaller, independent code bases, microservices make it faster and easier to onboard new team members. Unlike with monoliths, new developers don’t have to understand how every service works or all the interdependencies at play in the system.
This means you won’t have to scour the internet looking for candidates who can code in the only language you’re using, or spend time training them in all the details of your codebase. Chances are, you’ll find new hires more easily and put them to work faster.
Easier updates
As consumer expectations for digital experiences evolve over time, applications need to be updated or upgraded to meet them. Large monolithic applications are generally difficult, and expensive, to upgrade from one version to the next.
Because third-party vendors build and pay for their own updates, microservices spare you from maintaining or enhancing every tool in your system. For instance, you can let Stripe perfect its payment processing service while you leverage the new features. You don’t have to pay for future improvements, and you don’t need anyone on staff to be an expert in payment processing and security.
Disadvantages of Microservices Architecture
Do microservices win in every circumstance? Absolutely not. Monoliths can be a more cost-effective, less complicated, and less risky solution for many applications. Below are a few potential downsides of microservices.
Extra complexity
With more moving parts than monolithic applications, microservices may require additional effort, planning, and automation to ensure smooth deployment. Individual services must cooperate to create a working application, but the inherent separation between teams could make it difficult to create a cohesive end product.
Development teams may have to handle multiple programming languages and frameworks. And, with each service having its own database and data storage system, data consistency could be a challenge.
Also, leveraging numerous third-party services creates more network connections, as well as more opportunities for latency and connectivity issues in your architecture.
Difficulty in monitoring
Given the complexity of microservices architecture and the interdependencies that may exist among applications, it’s more challenging to test and monitor the entire system. Each microservice requires individualized testing and monitoring.
You could build automated testing scripts to ensure individual applications are always up and running, but this adds time and complexity to system maintenance.
Added external risks
There are always risks when using third-party applications, in terms of both performance and security. The more microservices you employ, the more possible points of failure exist that you don’t directly control.
In addition, with multiple independent containers, you’re exposing more of your system to potential attackers. Those distributed services need to talk to one another, and a high number of inter-service network communications can create opportunities for outside entities to access your system.
On the upside, the containerized nature of microservices architecture prevents security threats in one service from compromising other system components. As we noted in the advantages section above, it’s also easier to track down the root cause of a security issue.
Potential culture changes
Microservices architecture usually works best in organizations that employ a DevOps-first approach, where independent clusters of development and operations teams work together across the lifecycle of an individual service. This structure can make teams more productive and agile in bringing solutions to market. But, at an organizational level, it requires a broader skill set for developing, deploying, and monitoring each individual application.
A DevOps-first culture also means decentralizing decision-making power, shifting it from project teams to a shared responsibility among teams and DevOps engineers. The goal is to ensure that a given microservice meets a solution’s technical requirements and can be supported in the architecture in terms of security, stability, auditing, and so on.
3 Paths Toward Microservices Transformation
In general, there are three different approaches to developing a microservices architecture:
1. Deconstruct a monolith
This kind of approach is most common for large enterprise applications, and it can be a massive undertaking. Take Airbnb, for instance: several years ago, the company migrated from a monolith architecture to a service-oriented architecture incorporating microservices. Features such as search, reservations, messaging, and checkout were broken down into one or more individual services, enabling each service to be built, deployed, and scaled independently.
In most cases, it’s not just the monolith that becomes decentralized. Organizations will often break up their development group, creating smaller, independent teams that are responsible for developing, testing, and deploying individual applications.
2. Leverage PBCs
Packaged Business Capabilities, or PBCs, are essentially autonomous collections of microservices that deliver a specific business capability. This approach is often used to create best-of-breed solutions, where many services are third-party tools that talk to each other via APIs.
PBCs can stand alone or serve as the building blocks of larger app suites. Keep in mind, adding multiple microservices or packaged services can drive up costs as the complexity of integration increases.
3. Combine both types
Small monoliths can be a cost-effective solution for simple applications with limited feature sets. If that applies to your business, you may want to build a custom app with a monolithic architecture.
However, there are likely some services, such as payment processing, that you don’t want to have to build yourself. In that case, it often makes sense to build a monolith and incorporate a microservice for any features that would be too costly or complex to tackle in-house.
A Few Words of Caution
Even though they’re called “microservices”, be careful not to get too small. If you break services down into many tiny applications, you may end up creating an overly complex application with excessive overhead. Lots of micro-micro services can easily become too much to maintain over time, with too many teams and people managing different pieces of an application.
Given the added complexity and potential costs of microservices, for smaller platforms with only one UI it may be best to start with a monolithic application and slowly add microservices as you need them. Start at a high level and zoom in over time, looking for specific functions you can optimize to make you stand out.
Lastly, choose your third-party services with care. It’s not just about the features; you also need to consider what the costs might look like if you need to scale a particular service.
Final Thoughts: Micro or Mono?
Still trying to decide which architecture is right for your platform? Here are some of the most common scenarios we encounter with clients:
- If time to market is the most important consideration, then leveraging third-party microservices is usually the fastest way to build out a platform or deliver new features.
- If some aspect of what you’re doing is custom, then consider starting with a monolith and either building custom services or using third-party services for the areas that suit a particular need.
- If you don’t have a ton of money, and you need to get something up quick and dirty, then consider starting with a monolith and splitting it up later.
Here at Oomph, we understand that enterprise-level software is an enormous investment and a fundamental part of your business. Your choice of architecture can impact everything from overhead to operations. That’s why we take the time to understand your business goals, today and down the road, to help you choose the best fit for your needs.
We’d love to hear more about your vision for a digital platform. Contact us today to talk about how we can help.
How we leveraged Drupal’s native APIs to push notifications to the many department websites for the State.
RI.gov is a custom Drupal distribution that was built with the sole purpose of running hundreds of department websites for the state of Rhode Island. The platform leverages a design system for flexible page building, custom authoring permissions, and a series of custom tools to make authoring and distributing content across multiple sites more efficient.
The Challenge
The platform had many business requirements, and one stated that a global notification needed to be published to all department sites in near real-time. These notifications would communicate important department information on all related sites. Further, these notifications needed to be ingested by the individual websites as local content to enable indexing them for search.
The hierarchy of the departments and their sites added a layer of complexity to this requirement. A department needs to create notifications that broadcast only to subsidiary sites, not the entire network. For example, the Department of Health might need to create a health-department-specific notification that would get pushed to the Covid site, the RIHavens site, and the RIDelivers sites — but not to an unrelated department, like DEM.
Exploration
Aggregator
Our first idea was to utilize the built-in Drupal Aggregator module and pull notifications from the hub. A proof of concept showed that while it worked well for pulling content from the hub site, it had a few problems:
- It relied heavily on the local site’s cron job to pull updates, which led to timing issues in getting the content — it was not in near real-time. Due to server limitations, we could not run cron as often as would be necessary.
- We would also need to maintain two entity types: one for global notifications and a second for local site notifications. Keeping local and global notifications as the same entity type allowed for easier maintenance of this subsystem.
Feeds
Another thought was to utilize the Feeds module to pull content from the hub into the local sites. This was a better solution than the Aggregator because the nodes would be created locally and could be indexed for local searching. Unfortunately, Feeds relied on cron as well.
Our Solution
JSON API
We created a suite of custom modules that centered around moving data between the network sites using Drupal’s JSON API. The API was used to register new sites to the main hub when they came online. It was also used to pass content entities from the main hub down to all sites within the network and from the network sites back to the hub.
Notifications
In order to share content between all of the sites, we needed to ensure that the data structure was identical on all sites in the network. We started by creating a new notification content type that had a title field, a body field, and a boolean checkbox indicating whether the notification should be considered global. Then, we packaged the configuration for this content type using the Features module.
By requiring our new notification feature module in the installation profile, we ensured that all sites would have the required data structure whenever a new site was created. Features also allowed us to ensure that any changes to the notification data model could be applied to all sites in the future, maintaining the consistency we needed.
Network Domain Entity
In order for the main hub, ri.gov, to communicate with all sites in the network, we needed a way to know what Drupal sites existed. To do this, we created a custom configuration entity that stored the URL of each site within the network. Using this domain entity, we were able to query all known sites and pass the global notification nodes created on ri.gov to each of them using the JSON API.
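Drupal implements all of this in PHP, but the shape of the data is easy to show in a short, language-agnostic sketch. The payload below follows the general JSON:API document format for a node; the `field_global` machine name and the endpoint path are assumptions based on the content type described above, not the project’s actual configuration.

```python
def build_notification_payload(title: str, body: str, is_global: bool) -> dict:
    """Serialize a notification as a JSON:API resource document."""
    return {
        "data": {
            "type": "node--notification",
            "attributes": {
                "title": title,
                "body": {"value": body, "format": "basic_html"},
                "field_global": is_global,  # assumed field machine name
            },
        }
    }

def target_urls(domains: list) -> list:
    """One JSON:API endpoint per registered network domain entity."""
    return [f"{d.rstrip('/')}/jsonapi/node/notification" for d in domains]
```

The hub would POST this document to each URL returned by `target_urls()`, one request per registered site.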
Queue API
To ensure that the notification nodes were posted to all the sites without timeouts, we decided to utilize Drupal’s Queue API. Once the notification content was created on the ri.gov hub, we queried the known domain entities and created a queue item that would use cron to actually post the notification node to each site’s JSON API endpoint. We decided to use cron in this instance to give us some assurance that a post to many websites wouldn’t timeout and fail.
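The queue-per-site pattern is simple to illustrate. This Python sketch mirrors the logic only (Drupal’s Queue API does the real work in PHP): one queue item is created per known site, and a cron-triggered worker drains the queue. The `post` callback here is a stand-in for the actual HTTP request to each site’s JSON:API endpoint.

```python
from collections import deque

def enqueue_notification(queue: deque, notification_id: int, sites: list) -> None:
    """Create one queue item per registered site for a new notification."""
    for site in sites:
        queue.append({"notification": notification_id, "site": site})

def process_queue(queue: deque, post) -> int:
    """Drain the queue, as a cron run would, posting each item.

    Returns the number of items processed."""
    processed = 0
    while queue:
        item = queue.popleft()
        post(item["site"], item["notification"])
        processed += 1
    return processed
```

Splitting one notification into many small queue items is what keeps a single cron run from timing out: each run can stop after a time budget and pick up where it left off.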
Batch API
To allow for time sensitive notifications to be pushed immediately, we created a custom batch operation that reads all of the queued notifications and pushes them out one at a time. If any errors are encountered, the notification is re-queued at the end of the stack and the process continues until all notifications have been posted to the network sites.
New site registrations
In order to ensure that new sites receive notifications from the hub, we needed a site registration process. Whenever a new site is spun up, a custom module is installed that calls out to the hub using JSON API and registers itself by creating a new network domain entity with its endpoint URL. This allows the hub to learn of the new site and push any new notifications to it in the future.
The installation process also queries the hub for any existing notifications, using the JSON API to get a list of all notification nodes and add them to its local queue for creation. Then, the local site uses cron to query the hub and get the details of each notification node to create it locally. This ensures that when a new site comes online, it has an up-to-date list of all the important notifications from the hub.
Authentication
Passing this data between sites is one challenge; doing it securely adds another layer of complexity. All of the requests between sites authenticate with each other using the Simple OAuth module. When a new site is created, an installation process creates a dedicated user in the local database that owns all notification nodes created by the syndication process. The installation process also creates the appropriate Simple OAuth consumers, which allows authenticated connections to be made between the sites.
Department sites
Once all of the groundwork was in place, with minimal effort, we were able to allow for department sites to act as hubs for their own department sites. Thus, the Department of Health can create notifications that only go to subsidiary sites, keeping them separate from adjacent departments.
Translations
The entire process also works with translations. After a notification is created in the default language, it gets queued and sent to the subsidiary sites. Then, a content author can create a translation of that same node and the translation will get queued and posted to the network of sites in the same manner as the original. All content and translations can be managed at the hub site, which will trickle down to the subsidiary sites.
Moving in the opposite direction
With all of the authorization, queues, batches, and APIs in place, the next challenge was making this entire system work with a Press Release content type. This presented two new challenges that we needed to overcome:
- Instead of moving content from the top down, we needed to move from the bottom up. Press release nodes get created on the affiliate sites and would need to be replicated on the hub site.
- Press release nodes were more complex than the notification nodes. These content types included media references, taxonomy term references, and, toughest of all, paragraph references.
Solving the first challenge was pretty simple: we reused the custom publishing module and instructed the Queue API to send the press release nodes to the hub site.
Getting this working with a complex entity like the press release node meant that we needed to not only push the press release node, but we also needed to push all entities that the initial node referenced. In order for it all to work, the entities needed to be created in reverse order.
Once a press release node was created or updated, we used the EntityInterface::referencedEntities() method to recursively drill into all of the entities referenced by the press release node. In some cases, this meant getting paragraph entities that were nested two, three, even four levels deep inside of other paragraphs. Once we reached the bottom of the referenced entity pile, we began queuing those entities from the bottom up. So, the paragraph that was nested four levels deep was the first to get sent, and the actual node was the last.
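That bottom-up ordering is a post-order traversal of the reference graph. Drupal does this in PHP; the Python sketch below illustrates the ordering logic only. `Entity` is a stand-in for a Drupal content entity, and its `refs` list mimics what `EntityInterface::referencedEntities()` returns.

```python
class Entity:
    """Minimal stand-in for a Drupal content entity."""
    def __init__(self, name, refs=None):
        self.name = name
        self.refs = refs or []  # what referencedEntities() would return

def queue_order(entity, seen=None):
    """Post-order traversal: deepest references are queued first,
    and the node that references them is queued last."""
    if seen is None:
        seen = set()
    order = []
    for ref in entity.refs:
        if ref.name not in seen:  # skip entities already queued
            seen.add(ref.name)
            order.extend(queue_order(ref, seen))
    order.append(entity.name)
    return order
```

Because each referenced entity arrives at the hub before anything that points to it, every reference can be resolved at creation time on the receiving side.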
Conclusion
Drupal’s powerful suite of APIs gave us all the tools necessary to build a platform that allows the State of Rhode Island to easily keep its citizens informed of important information, while giving its editing team the ease of a create-once, publish-everywhere workflow.
You’ve decided to decouple, you’re building your stack, and the options are limitless – oh, the freedom of escaping the LAMP stack and the boundaries of the conventional CMS! Utilities that were once lumped together into one immovable bundle can now be selected separately, or not selected at all. It is indeed refreshing to pick and choose the individual services best fitted to your project. But now you have to choose.
One of those choices is your backend content storage. Even though decoupling means breaking free from monolithic architecture, certain concepts persist: content modeling, field types, and the content editor experience.
I recently evaluated four headless CMS options: Contentful, Cosmic, Dato, and Prismic. Prior to that, I had no experience with any of them. Fortunately, they all offer a free plan to test and trial their software. For simpler projects, that may be all you need. But if not, each CMS offers multiple tiers of features and support, and costs vary widely depending on your requirements.
I was tasked with finding a CMS and plan that met the following specs:
- 10 separate content editors
- Multiple editor roles (admin, contributor, etc.)
- Draft/scheduled publishing, with some sort of editorial workflow, even as simple as limiting a role to draft-only status
Although this doesn’t seem like a big ask for any CMS, these requirements eliminated the free plans for all four services, so cost became a factor.
Along with cost, I focused my evaluation on the editor experience, modeling options, integration potential, and other features. While I found lots of similarities between the four, each had something a little different to offer.
It’s worth mentioning that development is active on all four CMSs. New features and improvements were added just within the span of time it took to write this article. So keep in mind that current limitations could be resolved in a future update.
Contentful
Contentful’s Team package is currently priced at $489 per month, making it the most expensive of the four. This package includes 10 content editors and 2 separate roles. There is no editorial workflow without paying extra, but scheduled publishing is included.
Terminology
A site is a “space” and content types are “content types.”
What I love
The media library. Media of many different types and sources – from images to videos to documents and more – can be easily organized and filtered. Each asset has a freeform name and description field for searching and filtering. And since you can provide your own asset name, you’re not stuck with image_8456_blah.jpeg or whatever nonsense title your asset had when you uploaded it. Additionally, image dimensions are shown on the list view, which is a quick, helpful reference.
Video description
- Upload images from many sources: Contentful supports the addition of media from sources such as Google Photos and Facebook, not just the files on your computer.
- Give your files meaningful names and add searchable descriptions.
RUNNER UP
Dato’s Media Area offers similar filtering and a searchable notes field.
What I like
Commenting. Every piece of content has an admin comments area for notes or questions, with a threaded Reply feature.
My Views. My Views is an option in the content navigation panel. With a single click, you can display only content that you created or edited – very convenient when working with multiple editors and a large volume of content.
What could be better
Price. Contentful is expensive if your project needs don’t allow you to use the free/community plan. You do get a lot of features for the paid plans, but there’s a big jump between the free plan and the first tier paid plan.
Cosmic
Cosmic ranks second most pricey for our requirements at $299 per month for the Pro Package. This package includes 10 editors and 4 predefined roles. It has draft/scheduled publishing, and individual editor accounts can be limited to draft status only.
Terminology
A site is a “bucket” and content types are “object types.”
What I love
Developer Tools. Developer Tools is a handy button you can click at the object or object type level to view your REST endpoint and response. It also shows other ways (GraphQL, CLI, etc.) to connect to a resource, using real code that is specific to your bucket and objects.
Video description
- Find the Developer Tools button on all objects or a single object.
- Browse the different methods to connect to your resource and return data.
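Developer Tools essentially hands you ready-made requests like the one sketched below. This is a hedged illustration of fetching objects from Cosmic's REST API; the API version in the path, the bucket slug, object type, and read key are all placeholders, so check the exact request shape your own Developer Tools panel shows:

```javascript
// Build a REST URL for a bucket's objects. All values are placeholders;
// copy the real endpoint from Cosmic's Developer Tools panel.
function cosmicObjectsUrl(bucketSlug, objectType, readKey) {
  const query = encodeURIComponent(JSON.stringify({ type: objectType }));
  return `https://api.cosmicjs.com/v2/buckets/${bucketSlug}/objects` +
    `?query=${query}&read_key=${readKey}`;
}

// Usage (requires network access and a real bucket/read key):
// const res = await fetch(cosmicObjectsUrl('my-bucket', 'posts', 'MY_READ_KEY'));
// const { objects } = await res.json();

console.log(cosmicObjectsUrl('my-bucket', 'posts', 'KEY'));
```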
RUNNER UP
Dato has an API Explorer for writing and running GraphQL queries.
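The same kind of query you would run in Dato's API Explorer can be sent to its GraphQL endpoint from code. A minimal sketch, with the caveat that the `allArticles` model is hypothetical; the endpoint and bearer-token header follow DatoCMS's documented pattern, but verify them against your own project:

```javascript
// Build the fetch options for a DatoCMS GraphQL request.
// `allArticles` is a hypothetical model name; substitute your own.
function datoRequest(apiToken, query) {
  return {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ query }),
  };
}

const query = '{ allArticles { title slug } }';
const options = datoRequest('MY_API_TOKEN', query);

// Usage (requires network access and a real API token):
// const res = await fetch('https://graphql.datocms.com/', options);
// const { data } = await res.json();
```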
The Slack Community. The Cosmic Slack community offers a convenient way to get technical support – in some cases, even down to lines-of-code level support – with quick response times.
What I like
View as editor. This is a toggle button in the navigation panel to hide developer features – even if your account is assigned the developer or admin role – allowing you to view the CMS as the editor role sees it. This is useful for documenting an editor’s process or troubleshooting their workflow.
Extensions. Cosmic provides several plug-and-play extensions, including importers for Contentful and WordPress content, as well as Algolia Search, Stripe, and more. I tested the Algolia extension; it took only minutes to set up and immediately began syncing content to Algolia indexes.¹ You can also write your own extensions and upload them to your account.
What could be better
Price/price structure. I found Cosmic’s pricing structure to be the most confusing, with extra monthly charges for common features like localization, backups, versioning, and webhooks. It’s hard to know what you’ll actually pay per month until you add up all the extras. And once you do, you may be close to the cost of Contentful’s lower tier.
Content model changes. Changing the content model after you’ve created or imported a lot of content is tricky. Content model changes don’t flow down to existing content without a manual process of unlocking, editing and re-publishing each piece of content, which can be very inefficient and confusing.
Dato
Dato’s Professional package is priced at €99 (about $120) per month, making it the second least pricey for our requirements. It includes 10 content editors and 15 roles, with configurable options to limit publishing rights.
Terminology
A site is a “project” and content types are “models.”
What I love
Tree-like collections. Dato lets you organize and display records in a hierarchical structure with visual nesting. The other CMSs give you roundabout ways to accomplish this, usually requiring extra fields. But Dato lets you do it without altering the content model. And creating hierarchy is as simple as dragging and dropping one record under another, making things like taxonomy a breeze to build.
Video description
- Organize the records in your model in a hierarchical structure by dragging and dropping.
- Dato lets you easily visualize the nested relationships.
RUNNER UP
No other CMS in this comparison offers hierarchical organizing quite like Dato, but Cosmic provides a parent field type, and Prismic has a documented strategy for creating hierarchical relationships.
What I like
Maintenance Mode. You can temporarily disable writes on your project and display a warning message to logged-in editors. If you need to prevent editors from adding or editing content — for instance, during content model changes — this is a useful feature.
What could be better
Field types. Out of the box, Dato doesn’t provide field types for dropdowns or checkboxes. There’s a plugin available that transforms a JSON field into a multiselect, but it’s presented as a list of toggles/booleans rather than a true multiselect. And managing that field means working with JSON, which isn’t a great experience for content editors.
Dato is also missing a simple repeater field for adding one or more of something. I created repeater-like functionality using the Modular Content field type, but this feels overly complicated, especially when every other CMS in my comparison implements either a Repeater field type (Cosmic, Prismic) or a multi-value field setting (Contentful).
Prismic
Prismic ranks least pricey, at $100/mo for the Medium Package. This package includes 25 content editors, 3 predefined roles, draft/scheduled publishing and an editorial workflow.
Terminology
A site is a “repository”, and content types are “custom types.”
What I love
Field types. Prismic gives you 16 unique field types for modeling your content, but it’s not the number of types that I love; it’s the particular combination of options: the dedicated Title type for headings, the Media link type, the Embed type, the Color Picker. Plus, the UI is so intuitive, content editors know exactly what they’re expected to do when populating a field.
Take the Title type for example. If you need a heading field in the other CMSs, you’d probably use a plain text or rich text field type. Using rich text almost guarantees you’ll get unwanted stuff (paragraph tags, in particular) wrapped around whatever heading the editor enters. Using plain text doesn’t let the editor pick which heading they want. Prismic’s Title type field solves both of these problems.
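Because the Title type stores the chosen heading level as structured data rather than markup, rendering it is trivial and there is no stray wrapper tag to strip out. A hedged sketch of turning a Prismic-style title field into clean HTML; the field shape below mirrors Prismic's structured text format, but verify it against your repository's actual API responses:

```javascript
// A Prismic-style title field arrives as structured data, e.g.
// [{ type: 'heading2', text: 'Our Story', spans: [] }]
function renderTitle(field) {
  const [block] = field;
  // 'heading2' → 'h2', 'heading3' → 'h3', and so on.
  const tag = block.type.replace('heading', 'h');
  return `<${tag}>${block.text}</${tag}>`;
}

console.log(renderTitle([{ type: 'heading2', text: 'Our Story', spans: [] }]));
// → <h2>Our Story</h2>
```

No paragraph tags sneak in, and the editor never has to know which heading tag the design calls for.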
Video description
- Prismic gives you a unique combination of field type options, some that you won’t find elsewhere.
- Use the Title type to create designated heading fields, and select or deselect the heading tags you want to include.
RUNNER UP
This is a tough one, but I’m leaning toward Contentful. What it lacks in the number of available field types, it makes up for in Appearance settings that allow you to render a field type to the editor in different formats.
Price. Unlimited documents, custom types, API calls and locales are included in the Medium package for a reasonable price. Additionally, Prismic has more packages and support tiers than any of the others, with one paid plan as low as $7/mo.
What I like
Slices. Slices are an interesting addition to back-end content modeling, because they’re essentially components: things you build on the front end. Prismic lets you create custom components, or use their predefined ones — blockquotes, a list of articles, an image gallery, etc. I admit I didn’t test how these components render on the front end, but Slices deserve further exploration.
What could be better
Integration options/plugins. Although Webhooks are included in all of Prismic’s plans, there doesn’t seem to be any development of plugins or ways to quickly extend functionality. Every other CMS in this comparison offers simple, click-to-install extensions and integrations to common services.
A note on Front-end Frameworks
A headless CMS, by simple definition, is a content storage container. It does not provide the markup that your website visitors will see and use. Therefore, your project planning will include choosing and testing a front-end system or framework, such as Gatsby. It’s important to find out early in the process what obstacles, if any, exist in connecting your chosen CMS to your chosen front end.
At Oomph, we’ve successfully used both Contentful and Cosmic with a Gatsby front-end. However, Gatsby plugins exist for Prismic and Dato as well.
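Wiring a headless CMS into Gatsby is usually just a matter of installing its source plugin. A sketch of a gatsby-config.js entry for Contentful; the plugin name and option keys follow gatsby-source-contentful's documented options, while the space ID and token are placeholders you would supply via environment variables:

```javascript
// gatsby-config.js — source content from Contentful at build time.
// CONTENTFUL_SPACE_ID and CONTENTFUL_ACCESS_TOKEN are placeholders;
// keep the real values in environment variables, not in the repo.
const config = {
  plugins: [
    {
      resolve: 'gatsby-source-contentful',
      options: {
        spaceId: process.env.CONTENTFUL_SPACE_ID,
        accessToken: process.env.CONTENTFUL_ACCESS_TOKEN,
      },
    },
  ],
};

module.exports = config;
```

Prismic and Dato follow the same pattern with their own source plugins and credentials.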
Summary
As with any decoupled service, your headless CMS choice will be determined by your project’s distinct requirements. Make sure to build into your project plan enough time to experiment with any CMS options you’re considering. If you haven’t worked with a particular CMS yet, give yourself a full day to explore, build a sample content model, add some content and media, and test the connection to your front-end.
Does a clear winner emerge from this comparison? I don’t think so. They each succeed and stand out in different ways. Use this article to kickstart your own evaluation, and see what works for you!
¹ At the time of this writing, there are some field types that the extension doesn’t pass from Cosmic to Algolia.
If you live in an area with a lot of freight or commuter trains, you may have noticed that trains often have more than one engine powering the cars. Sometimes there is an engine in front and one in back, or in the case of long freight lines, an engine in the middle. This is known as “distributed power,” a fairly recent engineering strategy. Evenly distributed power allows trains to carry more, and carry it more efficiently.¹
When it comes to your website, the same engineering can apply. If the Content Management System (CMS) is the only source of power, it may not have enough oomph to load pages quickly and concurrently for many users. Not only that, but a single source of power may slow down innovation and delivery to multiple sources in today’s multi-channel digital ecosystems.
One of the benefits of decoupled platform architecture is that power is distributed more evenly across the endpoints. Decoupled means that the authoring system and the rendering system for site visitors are not the same. Instead of one CMS powering content authoring and page rendering, two systems handle each task discretely.
Digital properties are ever growing and evolving. While evaluating how to grow your own system, it’s important to know the difference between coupled and decoupled CMS architectures. Selecting the best structure for your organization will ensure you not only get what you want, but what is best for your entire team — editors, developers, designers, and marketers alike.
Bombardier Zefiro vector graphic designed for Vexels
What is a traditional CMS architecture?
In a traditional, or coupled, CMS, the architecture tightly links the back-end content administration experience to the front-end user experience.
Content creation such as basic pages, news, or blog articles are created, managed, and stored along with all media assets through the CMS’s back end administration screens. The back end is also where site developers create and store customized applications and design templates for use by the front-end of the site.
Essentially, the two sides of the CMS are bound within the same system, storing content created by authenticated users and then also being directly responsible for delivering content to the browser and end users (front end).
From a technical standpoint, a traditional CMS platform is comprised of:
- A private database-driven CMS in which content editors create and maintain content for the site, generally through some CMS administration interfaces we’re used to (think WordPress or Drupal authoring interfaces)
- An application where engineers create and apply design schemas. Extra permissions and features within the CMS give developers more options to extend the application and control the front end output
- A public front end that displays published content on HTML pages
What is a decoupled CMS architecture?
Decoupled CMS architecture separates, or decouples, the back-end and front-end management of a website into two different systems — one for content creation and storage, and another for consuming content and presenting it to the user.
In a decoupled CMS, these two systems are housed separately and work independently of each other. Once content is created and edited in the back end, this front-end-agnostic approach takes advantage of flexible and fast web services and APIs to deliver the raw content to any front-end system on any device or channel. It is even possible for an authoring system to deliver content to more than one front end (i.e., an article is published in the back end and pushed out to a website as well as a mobile app).
From a technical standpoint, a decoupled CMS platform is comprised of:
- A private database-driven CMS in which content editors create and maintain content for the site, generally through the same CMS administration interfaces we’re used to — though it doesn’t have to be²
- The CMS provides a way for the front-end application to consume the data. A web-service API — usually in a RESTful manner and in a mashup-friendly format such as JSON — is the most common way
- Popular front-end frameworks such as React, VueJS, or GatsbyJS deliver the public visitor experience via a JavaScript application rendering the output of the API into HTML
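In practice, the front-end application simply fetches JSON from the CMS's API and renders markup from it. A deliberately minimal sketch; the endpoint and field names are hypothetical, and a real application would use a framework's data layer rather than string concatenation:

```javascript
// Render a list of articles from a headless API's JSON into HTML.
// The endpoint and the `title` field below are illustrative only.
function renderArticles(articles) {
  const items = articles.map((a) => `<li>${a.title}</li>`).join('');
  return `<ul>${items}</ul>`;
}

// Usage in the browser (requires a real decoupled back-end):
// const res = await fetch('https://cms.example.com/api/articles');
// const articles = await res.json();
// document.querySelector('#app').innerHTML = renderArticles(articles);

console.log(renderArticles([{ title: 'Hello' }, { title: 'World' }]));
// → <ul><li>Hello</li><li>World</li></ul>
```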
Benefits of decoupled
By moving the responsibility for the user experience completely into the browser, the decoupled model provides a number of benefits:
Push the envelope
Shifting the end-user experience out of the conventions and structures of the back-end allows UX Engineers and front-end masterminds to push the boundaries of the experience. Decoupled development gives front-end specialists full control using their native tools.
This is largely because traditional back-end platforms have been focused on the flexibility of authoring content and less so on the experience of public visitors. Too often the programming experience slows engineers down and makes it more difficult to deliver an experience that “wows” your users.
Need for speed
Traditional CMS structures are bogged down by “out-of-the-box” features that many sites don’t use, causing unnecessary bloat. Decoupled CMS structures allow your web development team to choose only what code they need and remove what they don’t. This leaner codebase can result in faster content delivery times and can allow the authoring site to load more quickly for your editors.
Made to order
Not only can decoupled architecture be faster, but it can also allow for richer interactions. The front-end system can focus on delivering a truly interactive experience in the form of in-browser applications, potentially delivering content without the visitor reloading the page.
The back-end becomes the system of record and “state machine”, but back-and-forth interaction will happen in the browser and in real-time.
Security Guard
Decoupling the back-end from the front-end is more secure. Since the front-end does not expose its connection to the authoring system, it makes the ecosystem less vulnerable to hackers. Further, depending on how the front-end communication is set up, if the back-end goes offline, it may not interrupt the front-end experience.
In it for the long haul
Decoupled architectures integrate easily with new technology and innovations and allow for flexibility with future technologies. More and more, this is the way that digital platform development is moving. Lean back-end only or “flat file” content management systems have entered the market — like Contentful and Cosmic — while server hosting companies are dealing with the needs of decoupled architecture as well.
The best of both worlds
Decoupled architecture allows the best decisions for two very different sets of users. Content editors and authors can continue to use some of the same CMSs they have been familiar with. These CMSs have great power and flexibility for content modeling and authoring workflows, and will continue to be useful and powerful tools. At the same time, front-end developers can get the power and flexibility they need from a completely different system. And your customers can get the amazing user experiences they have come to expect.
The New Age of Content Management Systems
Today’s modern CMS revolution is driving up demand for more flexible, scalable, customizable content management systems that deliver the experience businesses want and customers expect. Separating the front- and back-ends can enable organizations to quicken page load times, iterate new ideas and features faster, and deliver experiences that “wow” your audience.
1. Great article on the distributed power of trains: Why is there an engine in the middle of that train?
2. Non-monolithic CMSs have been hitting the market lately, and include products like Contentful, CosmicJS, and Prismic, among others.
The Challenge
Execute on a digital platform strategy for a global private equity firm to create a centralized employee destination to support onboarding, create interpersonal connections between offices, and drive employee satisfaction.
The key components would be an employee directory complete with photos, bios, roles, and organizational structure; news, events, and other communications made easily available and organized per location as well as across all locations; and the firm’s investment portfolio shared through a dashboard view with all pertinent information, including the team involved.
These components, and the expected tactical assets that an intranet provides, would help the firm deepen connections with and among employees at the firm, accelerate onboarding, and increase knowledge sharing.
The Approach
Supporting Multiple Intentions: Browsing vs. Working
An effective employee engagement platform, or intranet, needs to support two distinct modes — task mode and explore mode. In task mode, employees have access to intuitive navigation, quick page loading, and dynamic search or filtering while performing daily tasks. They get what they need fast and proceed with their day.
At the same time, a platform must also encourage and enable employees to explore company knowledge, receive company-wide communications, and connect with others. For this firm, the bulk of content available in explore mode revolves around the firm’s culture, with a special focus on philanthropic initiatives and recognition of key successes.
Both modes benefit from intuitive searching and filtering capabilities for team members, news, events, FAQs, and portfolio content. News and events can be browsed in a personalized way — what is happening at my location — or a global way — what is happening across the company. These two modes were considered for every interaction within the platform, and they influenced nearly all design decisions.
From a technical standpoint, the private equity firm needed to support security by hosting the intranet on their own network. This and the need to completely customize the experience for close alignment with their brand meant that no off-the-shelf pre-built intranet solution would work. We went with Drupal 8 to make this intranet scalable, secure, and tailor-made to an optimal employee experience.
The Results
The platform deployment came at a time when it was most needed, playing a crucial role for the firm during a global pandemic that kept employees at home. What was originally designed as a platform to deepen employee connections between offices quickly became the firm’s hub for connecting employees within an office. As many businesses are, the firm is actively re-evaluating its approach to the traditional office model, and the early success of the new platform indicates that it is likely to play an even larger role in the future.
THE BRIEF
Transform the Experience
The core Earthwatch experience happens outdoors in the form of an expedition — usually for about a week and far away from technology in locations like the Amazon Basin, Uganda, or the Great Barrier Reef. But before this in-person experience happens, an expedition volunteer encounters a dizzying array of digital touchpoints that can sow confusion and lead to distrust. Earthwatch needed “Experience Transformation.”
SURVEY THE LANDSCAPE
Starting with a deep strategy and research engagement, Oomph left no stone unturned in cataloging users and their journeys through a decade’s worth of websites and custom applications. We were able to conduct multiple interview sessions with engaged advocates of the organization. Through these interviews, the Earthwatch staff learned how to conduct more interviews themselves and listen to their constituents to internalize what they find wonderful about the experience as well as what they find daunting.
CREATE THE MAP
With a high-level service blueprint in place, Oomph then set out to transform the digital experiences most essential to the organization: the discovery and booking journey for individuals and the discovery, research, and inquiry journey for corporate sustainability programs.
The solution took shape as an overhaul and consolidation of Earthwatch’s public-facing websites.
THE RESULTS
The Journey Before the Journey
A fresh design approach that introduces new colors, beautiful illustrations, and captivating photography.
Expedition discovery, research, and booking was transformed into a modern e-commerce shopping experience.
Corporate social responsibility content architecture was overhauled with trust-building case studies and testimonials to drive an increase in inquiries.
IN THEIR WORDS
The Oomph team far surpassed our (already high!) expectations. As a nonprofit, we had a tight budget and knew it would be a massive undertaking to overhaul our 7-year-old site while simultaneously launching an organizational rebrand. Oomph helped to guide us through the entire process, providing the right level of objective, data-driven expertise to ensure we were implementing user experience and design best practices. They listened closely to our needs and helped to make the website highly visual and engaging while streamlining the user journey. Thanks to their meticulous project management and time tracking, we successfully launched the site on time and exactly on budget.
ALIX MORRIS MHS, MS, Director of Communications, Earthwatch
THE BRIEF
The American Veterinary Medical Association (AVMA) advocates on behalf of 91,000+ members — mostly doctors but some veterinary support staff as well. With roots as far back as 1863, their mission is to advance the science and practice of veterinary medicine and improve animal and human health. They are the most widely recognized member organization in the field.
Make the Brand Shine
The AVMA website is the main communications vehicle for the organization. But the framework was very out of date — the site was not mobile-friendly and some pages were downright broken. The brand was strong, but the delivery on screen was weak, and the tools reflected poorly on the organization.
Our goals were to:
IMPROVE THE SITE MAP
Content bloat over the years created a site tree that was in bad need of pruning.
IMPROVE SEARCH
When a site has so much content to offer, search can be the quickest way for a motivated user to find relevant information. Our goal was to make search more powerful while maintaining clarity of use.
COMMUNICATE THE VALUE OF MEMBERSHIP
Resources and benefits that come with membership were not clearly illustrated and while members were renewing regularly, they were not interacting with the site as a resource as often as they could.
STRENGTHEN THE BRAND
If the site was easier to navigate and search, if it had a clear value proposition for existing and prospective members, and if the visual design were modern and device-friendly, the brand would be stronger.
THE APPROACH
Put Members First
Oomph embarked on an extensive research and discovery phase which included:
- A competitor analysis of 5 organizations in direct competition and 5 similar membership-driven organizations
- An online survey for the existing audience
- Content and SEO audits
- Several in-person workshops with stakeholder groups, including attendance at their annual convention to conduct on-the-spot surveys
- Phone interviews with volunteers, members, and additional stakeholders
With a deep bed of research and personal anecdotes, we began to architect the new site. Communication was high as well, with numerous marketing, communications, and IT team check-ins along the way:
- An extensive card sort exercise for information architecture improvements — 200+ cards sorted by 6 groups from throughout the organization
- A new information architecture and audience testing
- Content modeling and content wireframe exercises
- A brand color accessibility audit
- Over a dozen wireframes
- Three style tiles (mood boards) with revisions and refinements
- Wireframe user testing
- A set of deep-dive technical audits
- Several full design mockups with flexible component architecture
Several rounds of style tiles explored a new set of typefaces to support a modern refresh of the brand. Our ideas included darkening colored typography to meet WCAG thresholds, adding more colored tints for design variability, and designing a set of components that could be used to create marketing pages using Drupal’s Layout Builder system.
THE RESULTS
The design update brought the main brand vehicle fully into the modern web. Large headlines and images, chunks of color, and a clearer hierarchy of information make each page’s purpose shine. A mega-menu system breaks complex navigation into digestible parts, with icons and color to help differentiate important sections. The important yearly convention pages got a facelift as well, with their own sub-navigation system.
FINAL THOUGHTS
Supporting Animals & Humans Alike
Membership to the AVMA for a working veterinary doctor is an important way to keep in touch with the wider community while also learning about the latest policy changes, health updates, and events. The general public can more easily find information about common pet health problems, topical issues around animal well-being during natural disasters, and food and toy recalls. The goal of supporting members first while more broadly providing value to prospective members and non-members alike has coalesced into this updated digital property.
We look forward to supporting animal health and human safety as we continue to support and improve the site over the next year.
The authoring experience is core to any content management system. Very few web content admins prefer to work in HTML, so they use a What-You-See-Is-What-You-Get editor, nicknamed a WYSIWYG (pronounced “whizzy wig”). There are many free and paid WYSIWYG solutions out there, but the big two that have been around for 10 years or more and have been adopted into widely available open source projects are CKEditor and TinyMCE. Drupal and WordPress long ago each picked a recommended editor: WordPress uses TinyMCE and Drupal uses CKEditor[1].
The power of a WYSIWYG like CKEditor is in its ability to be customized. Drupal makes it easy to customize the authoring experience for any user role and in any configuration that a site needs. Super Admins can have access to a fully featured “Full HTML” version of the editor while your content authors have access to a “Basic HTML” version that locks out certain kinds of code that may do harm to a website.
Oomph customizes CKEditor for each custom Drupal (or WordPress) site we build. As a best practice, though, we like to start from the same place. We’d like to share our “default” CKEditor set up as well as the steps that you need to take to customize CKEditor yourself.
Customize CKEditor Text Formats by User Roles
Drupal allows multiple CKEditor configurations, and each can be available per user role — as mentioned previously. To understand the ways in which the editor can be customized, we first need to understand the user roles and default configurations.
User Roles
Drupal ships with three main user roles built in — Administrator, Authenticated User, and Anonymous User. More official documentation about User Roles is available from drupal.org.
An Anonymous user is someone who can’t log in — they can only view content on the front end of the site. Calling them a “user” is a bit of a misnomer, but their actions are tracked under the user ID of zero — therefore, Drupal still considers them a user.
An Authenticated user is someone who can log in, but they can do very little. A new Drupal installation gives this user only a few permissions — they can view media, view published content, use shortcuts, and use the Basic HTML text format.
Finally, there is the Administrator — the first user created when a new site is installed. By default, this account has permissions to do everything.
Many more roles can be created and permissioned, of course, but these are the ones that come with a default Drupal install. We usually create a new “Content Editor” user role for our clients, giving authors on the site permissions to create and edit content.
Text Formats and Editors
CKEditor is included in Drupal core, so it comes pre-installed. A default installation of CKEditor ships with three text formats — Full HTML, Basic HTML, and Plain text.
These defaults are very handy, and they map nicely to the user roles described above: Plain text for Anonymous users with no ability to create content, Basic HTML for Authenticated users who might author some content, and Full HTML for Administrators who need every element HTML provides.
The Plain text format is there when there is no other format available to a user — there is no WYSIWYG at all, so the <textarea> form element is naked of any formatting embellishments.
It is recommended to keep the Plain text editor plain and edit the format as little as necessary, if at all. When starting a new project, we edit the Basic and Full HTML formats to customize them to our liking.
Basic HTML
The Basic HTML editor comes with a small set of options by default — all the controls that you might expect from a rich text web editor, like heading formats, lists, blockquotes, alignment, bold, italic, and others. These options are a little disorganized, in our opinion[2], but since this is Drupal, we can customize it easily.
Out of the box, the Basic HTML format looks like this:
In the Toolbar Configuration area, admins can move “Available buttons” from the top row to the “Active Toolbar” below, and arrange them however they wish. We like to follow this grouping of button options:
- Block formatting: All of the HTML block-level elements — the Format drop down, Blockquote, Unordered list, and Ordered list
- Inline styles: Bold, Italic, Superscript, and Subscript. We skip Strikethrough because it is seldom needed, and Underline because visitors could mistake underlined text for a link
- Alignment: Left, Center, and Right. We don’t use Justify because it always looks bad (again, in our opinion)
- Linking: Add a link, Remove a link
- Insert: Anything that gets inserted into content — Horizontal rule, Paste plain text, Paste from Word (read an aside about this feature later in the article). For most sites, Basic HTML does not include image insertion from the Media Library
- Tools: Options for looking at the content differently — Full screen, Source view, and Remove content formatting
After the changes are made and saved, the Basic HTML text format looks like this:
Much better. From here we will probably customize it further as additional modules or custom features add buttons that we decide to turn on for content authors.
One more thing to look at before finishing the Basic HTML text format: if the “Limit allowed HTML tags and correct faulty HTML” filter is enabled (it should be the first checkbox right under the Toolbar configuration), there will also be a Filter Settings area at the bottom of the admin page where the allowed HTML is displayed:
The default allowed HTML for Basic is:
<a href hreflang> <em> <strong> <cite> <blockquote cite> <code> <ul type> <ol start type> <li> <dl> <dt> <dd> <h2 id> <h3 id> <h4 id> <h5 id> <h6 id> <p> <br> <span> <img src alt height width data-entity-type data-entity-uuid data-align data-caption>
We edit ours slightly as follows to be more restrictive:
<a href hreflang target rel> <em> <strong> <cite> <blockquote cite> <code> <ul> <ol start> <li> <dl> <dt> <dd> <h1> <h2> <h3> <h4> <h5> <h6> <p class=""> <br> <img src alt height width> <hr> <sup> <sub> <span lang dir>
Limiting the allowed code for the Basic HTML text format is a good idea. Authors may think that code copied from somewhere on the web is fine because it helps them do this one thing, but more often than not it introduces display issues, and at worst, something malicious.
Full HTML
For Full HTML, the same ideas apply but with a few more options. Again, the default Drupal configuration for the Full HTML text format is this:
It is not very different from the previous text format — a little more robust — and we improve it in our own way.
In addition to the same order and grouping as Basic HTML, we:
- Add Indent and Outdent to the Block formatting group. These don’t map to a semantic HTML element; they affect an entire block element by adding a class to it
- Add the Styles dropdown menu, which enables custom classes for the project. We also add the Language formatting to the Inline group. On a per-project basis, we may decide to give these options to the Basic HTML text format
- Add Media to the Insert grouping and remove the paste options[3]
After adding Styles, Media, and Language, we get additional plugin settings below the Toolbar Configuration. For Media, edit those settings as you see fit; we try to keep uploads small and force Drupal to compress images uploaded straight from a camera. For Languages, depending on the site, you might want to enable the full set of language codes rather than the default six official languages of the United Nations.
Custom Theme Classes in CKEditor
To leverage the power of custom author styles, the Styles dropdown plugin setting is super important. This is how you get theme CSS classes into the editor! The list should take the format of [HTML element to apply the class to].[name of class]|[Label to display to user]
A subset of the styles that we add are as follows, and might look familiar to people that use Bootstrap utility classes:
p.lead|Paragraph Lead
ul.list-unstyled|List unstyled
span.display-1|Display1
span.display-2|Display2
span.display-3|Display3
span.h1|Header 1
span.h2|Header 2
span.h3|Header 3
span.h4|Header 4
span.h5|Header 5
span.h6|Header 6
span.text-small|Text size small
span.text-lowercase|Text Lower case
span.text-uppercase|TEXT ALL CAPS
span.text-abbr|Abbreviations
span.font-weight-light|Text light weight
span.font-weight-normal|Text normal weight
span.font-weight-bold|Text bold weight
Drupal does not allow the CSS * selector, which would have let a class apply to any HTML element — the rule can’t look like *.class-name|A Universal Style, for instance. That’s too bad, so we do the next best thing and apply most of our custom styles to <span> elements.
These settings allow an author to mix visual heading styles without changing the semantics, and we find it works pretty well. Say, for example, someone is designing the content for their page and understands that an article with an H1 title needs subheads that are H2s, and between H2s you should only use an H3, and so on — they understand the semantic structure of the page. But visually, maybe they want the H2s to look like H3s and the H3s to look like H4s. That can be accomplished with the way we have structured our class naming and applied it to <span> tags. The resulting HTML might look like this:
<h1>An introduction to CKEditor</h1>
<h2 class="h3">What is CKEditor?</h2>
<h2 class="h3">Customizing CKEditor</h2>
<h3 class="h4">Basic HTML</h3>
<h3 class="h4">Full HTML</h3>
<h2 class="h3">Getting theme CSS into the Editor</h2>
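For the pattern above to render as intended, the theme’s CSS has to style the utility classes the same way it styles the real heading elements. A minimal sketch of what those rules might look like — the selectors mirror our class list, but the actual values in a real theme (Bootstrap’s, for example) are more complete:

```css
/* Hypothetical theme rules backing the Styles dropdown classes.
   Each visual heading style targets both the element and the class,
   so <h2 class="h3"> keeps H2 semantics but renders like an H3. */
h3, .h3 { font-size: 1.75rem; font-weight: 500; }
h4, .h4 { font-size: 1.5rem;  font-weight: 500; }

p.lead          { font-size: 1.25rem; font-weight: 300; }
.text-uppercase { text-transform: uppercase; }
.text-lowercase { text-transform: lowercase; }
.font-weight-bold { font-weight: 700; }
```

Because the same rule covers both the element and the class, authors can restyle a heading visually without ever touching the document outline.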
We get the semantics needed for good SEO and great accessibility, and the author gets the page to look the way they want.
Matching the preview in CKEditor with your Site Theme
To finalize the customization and to give the author a much more complete experience, we add some code to our site’s theme files that injects the custom visual theme into the CKEditor preview pane. The authors, therefore, will get a much better sense of how their content will look because they will see the site’s fonts, colors, and typography styles.
We go from this:
To this:
By just adding a little bit of code to the theme’s info file, or themes/custom/yourtheme/yourtheme.info.yml:
ckeditor_stylesheets:
- assets/styles/main.css
You can use a separate CSS file specifically for CKEditor if you wish, but to keep our CSS DRY, it makes sense to use the same file as the rest of the site — it’s already loaded and cached, after all. When authors apply one of the new custom styles from the Styles dropdown menu, they will see it update live in the editor window before they save and view the content.
And that’s it! Your Drupal 8 project has a customized admin experience with CKEditor sharing the same visual styles as your front end.
Cleaning Text Pasted from Word
Now for a tangent into the world of pasting content from a Microsoft Word document.
Clients are going to cut and paste text from Word documents; you just can’t stop them. Luckily, CKEditor has a robust scrubber that will remove the junk from this code and maintain the most important styling like bold, italic, and headers (even tables if your editor allows them).
The way it works is pretty transparent, too. We keep the button in place for folks who might have used it before, but as of CKEditor 4, anything pasted into the editor from the clipboard gets scrubbed. When the editor detects junky content from Word on the clipboard, a little notification pops up to let the user know that it sees what you are doing (shame, shame) but will clean it for you.
If you press Cancel, the content is pasted without being scrubbed; if you press OK, it is cleaned first. Either way, the paste happens, and the allowed-tags portion of the editor configuration kicks in and does its job (which may remove some of the code from Word, but probably not all).
Test it yourself with the sample Word document on this page: ckeditor.com/docs/ckeditor4/latest/features/pastefromword.html
Just one Gotcha
But there is a pretty big catch to all of this. It might seem obvious, but it needs to be stated — don’t expect Paste from Word to work unless “Limit allowed HTML tags and correct faulty HTML” is turned on. If you are using the Full HTML text format, and the format allows any and all HTML code, Paste from Word will do nothing!
We had a scenario in which the client used the Full HTML editor because they needed access to Drupal Tokens and a few custom pieces that are rather advanced. When they pasted content from Word, though, they were getting all of the code that Word exports and the visual experience was not what they expected. When we took a look and saw the source code, we didn’t understand at first why the Paste from Word filter was not working.
What we (at Oomph) should have done was give them these advanced features in the Basic HTML editor, with “Limit allowed HTML tags and correct faulty HTML” turned on and perhaps a more complex and lengthy list of allowed HTML. This would have been a little more work but it would have saved time in the long run.
Sidebar to the Sidebar: Why is content from Word so bad?
You may be wondering: why does this matter? Microsoft Word is publishing software that 83% of the business world uses — how can it be that bad? Well, Word was created for the world of printed documents, not for managing content on the web. On the web and in the projects we create, a visual theme should control the look of all the content, while content pasted from Word tries to force its own visual styles over the styles of the custom theme. On top of all that, the code is terribly bloated.
Here is a simple example of a single three-word headline:
<h1 align="center" style="margin:12pt 0in; -webkit-text-stroke-width:0px; text-align:center"><span style="font-size:22pt"><span style="line-height:31.386667251586914px"><span style="break-after:avoid-page"><span style="font-family:"Times New Roman""><span style="font-weight:normal"><span style="caret-color:#000000"><span style="color:#000000"><span style="font-style:normal"><span style="font-variant-caps:normal"><span style="letter-spacing:normal"><span style="orphans:auto"><span style="text-transform:none"><span style="white-space:normal"><span style="widows:auto"><span style="word-spacing:0px"><span style="-webkit-text-size-adjust:auto"><span style="text-decoration:none"><span style="font-family:Georgia"><span style="color:black"> Recognition of Achievement</span></span></span></span></span></span></span></span></span></span></span></span></span></span></span></span></span></span></span></h1>
All this should actually be is:
<h1>Recognition of Achievement</h1>
In performance speak, that’s 922 bytes (one byte is one character) when it should be only 35 — an increase of roughly 2,500%!
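The arithmetic is easy to check — a quick sketch, with the byte counts taken from the example above:

```python
# Compare the size of the Word-exported markup to the clean equivalent.
clean = "<h1>Recognition of Achievement</h1>"
clean_bytes = len(clean)   # 35 bytes, one per ASCII character
word_bytes = 922           # size of the pasted-from-Word markup shown above
increase = (word_bytes - clean_bytes) / clean_bytes * 100
print(f"{word_bytes} bytes vs {clean_bytes} bytes: {increase:.0f}% larger")
# → 922 bytes vs 35 bytes: 2534% larger
```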
All of these inline <span> tags with inline styles will override the site’s custom theme, making the content of this page look inconsistent. Some of these styles do nothing on the web at all — break-after:avoid-page, orphans:auto, and widows:auto are print styles, and the units pt and in are also for print. Other tags are inefficiently nested — font-family and color are declared twice (the innermost CSS rule wins, by the way). If you want to get really geeky, we discovered that the properties caret-color and font-variant-caps are not even supported by Microsoft’s own web browser.
So yes, content cut and pasted from Word without any cleanup really is that bad.
What We Learned
- Custom site themes deserve a custom author experience as well
- Drupal 8 makes it easy to tailor the permissions, HTML options, and visual theme for the core of most authoring experiences, the WYSIWYG editor
- Pasting content from Word is terrible and bloated and will try to ruin the nice theme that you may have put in place, so make sure to enable “Limit allowed HTML tags and correct faulty HTML”
And knowing is half the battle… or maybe just one tenth.
Footnotes
- Drupal moved CKEditor into core in version 8. Previously, developers needed to pick their preferred editor and install it as a module. Return ⤴
- Why is Blockquote grouped with Media, like an image? No text alignment? “Formatting” and “Block Formatting” as labels? C’mon Drupal, we can do better. Return ⤴
- Removing the paste option is important as you can read in another section. Since this Text Format allows all HTML, using the paste tools would do nothing to strip content of bad HTML and styles. Return ⤴