A recent conversation between a content operations specialist and a Digital Asset Management (DAM) specialist prompted the question: why can’t you just use a DAM to store content? Why would you need a separate authoring environment for content? Good question, right? When should you put content into a DAM, and when should you use some type of Content Management System (CMS) instead? Before that question can be answered, we need to look at what types of content are being included in the premise at hand, as well as which systems.
First, let’s agree on terms
Before we go any further, we do need to clarify a few basic terms around what types of content assets a CMS actually manages. Note that because of naming conventions across disciplines, some of the descriptions have been modified to have more relevance to practitioners in both the digital asset and text-based content communities.
Digital assets
Digital assets are a type of content, typically of formats such as images, audio, video, or graphics. A digital asset can also refer to a text-based object such as a document or a PDF. A digital asset, sometimes called a Binary Large Object (BLOB), is delivered in its final form and used “as is” by any system that publishes it.
Text-based content assets
The term “content” often refers to text-based assets, but these deserve a more specific term. Text-based assets differ from BLOBs because they have more complexity to them. A user interface (UI) label can be used on its own, or used as part of a UI string, and either the label or the string can be incorporated into a larger chunk, such as a topic or an article. Any of these can be pulled into one or more systems for publishing, or the content can be part of a BLOB, such as a non-readable PDF, and then published in the same way as a digital asset.
Metadata
Is metadata content? It could be, but not always. Some metadata is text-based, such as meta descriptions or alt text, though some may exist in data form. Metadata is used by the various systems involved to selectively route, filter, and publish digital assets or text-based content. The three types of metadata – structural, descriptive, and administrative – are used for purposes such as personalisation, multichannel and omnichannel publishing, and for establishing an order or sequence for display to the content audience.
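To make the routing and filtering role of metadata concrete, here is a minimal sketch in Python. The asset records, field names, and channel values are invented for illustration; the point is simply that descriptive metadata, not the asset itself, is what lets a downstream system pick the right assets for a channel:

```python
# Hypothetical asset records: each carries descriptive metadata
# that downstream systems can use to route and filter content.
assets = [
    {"id": "img-001", "type": "image", "channels": ["web", "app"], "locale": "en"},
    {"id": "vid-014", "type": "video", "channels": ["web"], "locale": "en"},
    {"id": "img-007", "type": "image", "channels": ["app"], "locale": "fr"},
]

def select_for_channel(assets, channel, locale="en"):
    """Return only the assets tagged for the requested channel and locale."""
    return [a for a in assets
            if channel in a["channels"] and a["locale"] == locale]

print([a["id"] for a in select_for_channel(assets, "app")])
# → ['img-001']
```

The same selection logic, driven by richer metadata, is what underpins personalisation and omnichannel publishing at scale.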
Data
Data is in a category of its own. It’s not content because, without context, it is of limited use to a content consumer. Reference data can be used, combining data with text-based content, to enrich the context that increases human comprehension. For example, the physical dimensions of a piece of hardware are data points, but when what those data points mean is explained in a data sheet, the data sheet itself is content.
Second, let’s talk content management systems
Next, let’s look at the types of content management systems that are likely to come into play. In the bigger picture, almost any type of system that ingests, stores, and delivers content could come under the umbrella of a CMS. There are several types of systems that could co-exist within a content ecosystem that handle different content types. They all fit the description of a CMS, so in this article, we will avoid the generic term and assign different names to each system type.
Digital Experience Platforms
A DXP is the new name for the old concept called a [Web] Content Management System (WCMS). Despite decades of saying that this “manages” content, this type of system focuses on content delivery. The publication-ready content – a finalised photo, video, audio, graphic, or text-based asset – is moved, manually or through an API, into the system, where the DXP decides when and how to display the asset, and to whom. The DXP may include other post-publication functions such as performance analytics.
Digital Asset Management
A DAM system, much like a WCMS, focuses on the delivery side, but stores and manages digital assets. There are, however, a couple of fundamental differences between a DAM and a WCMS.
- A content creator, such as a photographer, videographer, podcaster, or illustrator, does not do the actual work in the DAM, but in a tool with specialty functions. A photographer may do their photo editing in Adobe Photoshop; a videographer may use Final Cut Pro; an illustrator may work in Adobe Illustrator; a podcaster may edit in Audacity. Once there is a publication-ready version, the assets are moved into the DAM. (The drafts may be moved into the DAM for storage, but the work on the files themselves is not done in the DAM.)
- The DAM likely sits between the work environment and the DXP. In other words, when it’s time to use the publication-ready assets, they are pulled by the DXP, which then routes the assets to be displayed in the appropriate place, at the appropriate time, to the appropriate audience.
Product Information Management
A Product Information Management (PIM) system – also known as Product Data Management (PDM), Product Resource Management (PRM), or Product Catalogue Management (PCM) – stores product information, such as product name, product description, images, and data attributes such as dimensions, weight, colours, and so on. There are many functions associated with a PIM, but for the purpose of this article, think of a PIM as being in the same category as a DAM; the PIM is the single source of truth for product information, and helps the DXP to publish the right information to the right audiences at the right times.
Content Operations Management
Content Operations Management (COM) is a class of systems that help manage text-based content during the creation and approval process, as well as communicate with a DAM to connect to digital assets. Content creators are notorious for repurposing whatever tools are at hand, even when the long-term effects on content may be detrimental. The bulk of authors and editors will use bog-standard tools such as word processors or spreadsheets, not because they are COM tools – these are tools meant for casual business use, not for production-level content – but because they are part of the standard office package. In other words: they use these sub-par tools because they’re there.
On the other hand, there are built-for-purpose tools that allow for more sophisticated manipulation of content, such as a Component Content Management System (CCMS), a Content Operations Platform (COP), a Help Authoring Tool (HAT), or a Learning Management System (LMS). And when there are multiple language variants, add in a Translation Management System (TMS). We should note that these tools not only have a full range of features meant for production efficiency, they usually also have some form of workflow function for navigating and documenting the review and approval processes, as well as robust version control that will satisfy the most stringent regulatory requirements.
We can draw a parallel between COM tools and the tools used by the professionals creating digital assets. For example, a photographer works in Creative Cloud, then moves the assets to the DAM. The text-based content creator works in a COM tool, then moves the content to a repository. Sometimes a DXP will have an authoring tool built in; these tools are generally considered “authoring lite” because of their limitations. (That premise is an entire discussion on its own, but let’s not get distracted.)
Connecting the tools that process content
Here’s a simplified visualisation of the way these tools might connect:
- Work environments. These COM tools (including PIMs) are where the actual management of content happens. These will differ by the type of content being created. Photographers will have photo manipulation software. Videographers will have video manipulation software. Podcasters will have audio manipulation software. Authors will have software that manipulates text-based content in sophisticated ways.
- Enrichment environment. This is a catch-all term for “things that you do to content to make it better” and covers enriched semantics such as metadata, or content quality optimisation through a quality checker. These functions are often integrated with the COM tools.
- Processing environment. This is where some sort of compilation or build process takes place. The way this works in text-based content processes is discussed a little later in this article.
- Delivered environment. This is one or more repositories where publication-ready content gets stored. For text-based content, this could be a content delivery platform with specialised functions to boost the effectiveness of the DXP. For digital assets, this could be a DAM, again with specialised functions specific to audio, visual, or video processing.
- Publishing environments. A publishing environment is an umbrella term for one of many environments that pull content on an as-needed basis to publish it for use by a desired audience. These vary according to the complexity of publishing needs. Generally, there will be a DXP that feeds a website. Or an app. Or a voice assistant. Or an artificial intelligence model. Or a wearable. Or a piece of equipment with an LED display. The list goes on, including content pulled by a different system to be incorporated into a larger body of content.
To recap, work environments for different types of content get enriched with metadata. The content gets processed from its draft states and delivered, in publication-ready form, into a repository where the content is stored, ready to be pulled by whatever systems need it.
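As a toy model, the flow just described might be sketched like this in Python. All of the function and field names here are invented for illustration; real systems would do each step through their own APIs:

```python
# A simplified model of a content item moving through the environments
# described above: work -> enrichment -> processing -> delivery.

def author(draft_text):
    """Work environment: create the draft."""
    return {"body": draft_text, "state": "draft", "metadata": {}}

def enrich(item, **metadata):
    """Enrichment environment: attach metadata such as categories or alt text."""
    item["metadata"].update(metadata)
    return item

def process(item):
    """Processing environment: the 'build' step that makes the item publication-ready."""
    item["state"] = "publication-ready"
    return item

def deliver(repository, item):
    """Delivered environment: store the finished item so publishers can pull it."""
    repository.append(item)
    return repository

repo = []  # stands in for the delivery repository (a DAM, for rich media)
item = process(enrich(author("How to reset your device"), category="support"))
deliver(repo, item)
print(repo[0]["state"])
# → publication-ready
```

The publishing environments then pull from `repo` on demand; nothing upstream of delivery is their concern.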
In this scenario, a DAM is in the category of “Delivered environment”, where publication-ready rich media content, such as videos, audio, photos, and illustrations, await being called by the publishing environment. Some text-based copy does get stored in the DAM; the metadata and asset descriptions are two such examples. (Another potentially distracting topic that really needs its own discussion.) However, this is the exception to the rule when it comes to effectively managing content.
When content needs more than a DAM
So after looking at the different content systems, let’s return to our question. Does it make sense to manage content in a DAM? Let’s break this down by how much manipulation the content demands during management – authoring, editing, reviews and approvals, versioning, and language variants.
When content professionals – copywriters, content designers, technical communicators, marketing communicators, documentalists, and so on – work on content, the level of content manipulation they need can vary wildly.
A DAM can likely handle the simplest need – content directly related to a digital asset, such as the asset name or a short description. Think of a painting that has a name, a short description, and some metadata attributes – perhaps size, medium used, painting style, artist name, historical period. All of this is stable information.
The next level of manipulation is for long-form content, such as an article for a static publication on a website. The author may split out the title into a separate form field, add a summary for an aggregation page or a Search Engine Results Page (SERP), and some metadata that routes the article into the correct category. This requires basic functionality that could be done inside a DXP, though the interface is likely lacking. And when that happens, authors will find a workaround, usually with whatever tools are at hand. In this case, a DAM is not the logical replacement, as its content “work environment” functions are likely less than that of a DXP.
The next level of manipulation is for content that gets written in components and often has a high re-use ratio. An example would be the content of a company that produces software that gets used by multiple clients or has multiple freemium levels. There could be different names that need to be changed, or different feature sets for different clients, or all sorts of different combinations of content that get produced for users, customer support, YouTube videos, and so on.
This scenario is a very common use case. Content gets created in one of two ways:
- Copy and paste. The author copies the source content into one document, deletes what is not needed, and modifies the text for that particular variant. Each time the content needs updating – anyone who has worked in such an environment knows how often the product team will change its mind about product names, feature sets, or other product minutiae – the author opens each of the documents and updates them separately. If a mistake is found partway through the editing process, the author goes back through each document and makes the corrections. (In one project, instructions for three types of credit card machines resulted in a dozen documents, and it took weeks to synchronise the content.) As the author works through the content, they have to hold lots of details in their short-term memory, and the likelihood of error is high, especially after an interruption.
- Transclusions. The other way is to work in a production-grade authoring environment – not word processing tools, spreadsheets, or other tools developed for casual business use – where all common content can be transcluded, or re-used by reference, so that if an error needs correcting, it can be corrected a single time in the source component. Any variant using that source component will automatically have the error corrected. An extension of that principle is that any volatile information, such as product names, feature names, and so on, can be referenced in the source component. As the author assembles the variants – let’s say that the Basic version of a product has a slightly different set of features than the Pro version – they assemble a virtual set of content for each version of the product, with the variables still intact. There’s no need for the author to do anything else except indicate, when they generate the “build” to process all the variants, which variants they want to deliver. Want to deliver both variants at once? Done in minutes. (In one recent example, a government ministry generates nine different variants of its cybersecurity policies in under two minutes.)
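The transclusion principle can be sketched in a few lines of Python. This is a deliberately simplified model – the component names, variants, and `{placeholder}` variables are all invented – but it shows the core idea: shared components are stored once, each variant resolves its references at build time, and a correction to a source component reaches every variant on the next build:

```python
# Shared components, stored once. The {placeholders} hold volatile details
# such as product names, so those can change without editing the prose.
components = {
    "intro": "Welcome to {product}. This guide covers {edition} features.",
    "export": "{product} lets you export reports as PDF.",
}

# Each variant lists which components it transcludes and its variable values.
variants = {
    "basic": {"parts": ["intro"], "vars": {"product": "Acme", "edition": "Basic"}},
    "pro": {"parts": ["intro", "export"], "vars": {"product": "Acme", "edition": "Pro"}},
}

def build(variant):
    """Assemble a variant by reference: resolve components, then variables."""
    v = variants[variant]
    return "\n".join(components[p].format(**v["vars"]) for p in v["parts"])

print(build("basic"))
# → Welcome to Acme. This guide covers Basic features.
```

Fix a typo once in `components["intro"]` and both the Basic and Pro outputs pick it up automatically – the single-sourcing behaviour that copy-and-paste workflows can never guarantee.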
There is no DAM on the market, at least to my knowledge, that can rival the basic word processor, let alone handle the type of production efficiency that a COM system is designed to handle. That’s because a DAM’s strength is not semantic structures, editing productivity, or any of the other needs of the editorial process. Add a localisation process to the mix, where content needs to be exported from the content ecosystem into a Translation Management System (TMS), and you’ve now introduced a level of complexity that would require so much custom development that it would make more sense to use purpose-built tools rather than try to adapt a tool never meant for that purpose.
We can take a lesson from DXP systems that wanted to become COM systems: when the features needed by authors are missing, authors will simply revert to the tools with the closest feature set. Recognising that a DAM is a delivery environment, where publication-ready content goes to be stored, we should resist the temptation to treat it as a working environment for content. That goes for photo, video, audio, image and, yes, text-based content, too.