The Content Strategy Experts - Scriptorium
21 minutes | 14 days ago
DITA for small teams (podcast)
In episode 93 of The Content Strategy Experts podcast, Gretyl Kinsey and Sarah O’Keefe talk about how to determine whether DITA XML is a good fit for smaller content requirements. “Scalability or anticipated scale is actually a good reason to implement DITA for a small team.” –Sarah O’Keefe Related links: Localization, scalability, and consistency Twitter handles: @gretylkinsey @sarahokeefe Transcript: Gretyl Kinsey: Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about how to determine whether DITA XML is a good fit for smaller content requirements. Hello, and welcome. I’m Gretyl Kinsey. Sarah O’Keefe: Hi, I’m Sarah O’Keefe. GK: And we’re going to be talking about small DITA in this podcast. So just to set the scene, what do we mean when we talk about small in this context? SO: So when we talk about small DITA or small DITA requirements, it could be a variety of things, but basically, a smaller company, a limited number of content creators, and/or a small content set. So instead of tens of thousands of pages translated into 50 languages, we’re talking about 2,000 or 3,000 pages in four languages, or 500 pages. GK: Right. And sometimes, in terms of the actual content production people, maybe it’s just one writer, maybe it’s a small team of two, or three, or five. And maybe it’s also something like you have a fair number of contributors who are part time, but you only have maybe one or two people who actually gather all of that content and put it together. So the total operation for that content production is pretty small scale. SO: Following up on that, I think we all know what we mean by a big group, a big implementation. So it’s almost helpful to look at small DITA as being not large. Not tens of thousands of pages, not 50 writers, not a ton of languages, not a ton of scale.
So it’s one of these environments where you don’t have the slam dunk business requirement, because you have so much stuff. GK: Yeah, absolutely. We know that DITA is typically a good fit for a larger team, like you’ve said, because it really saves a lot of cost from the single-sourcing angle. So for example, the larger the content set you have, the more potential you probably have for reuse across that content set. And that means that you can save a lot more from establishing that single source of truth in DITA, whereas when you’ve got a smaller content set, you may not have that much reuse, or what you have may not justify the cost of that setup. SO: Yeah. I mean, it’s really common, I think, to see organizations that have a small chunk of content with actually zero reuse. So if you look at it from the do-you-have-a-business-case-for-DITA point of view, for reuse the answer is absolutely not, because there isn’t any. GK: When we look at some of the other factors that typically work for these larger groups too, another one is localization. And in a lot of ways, that one stems from reuse, because the more languages that you translate into, the more times that you have to pay, and if you are using copies rather than true reuse, that makes your costs go up. But if you have a smaller team, or maybe you’re delivering to a smaller market, and you don’t have a lot of reuse, or a lot of localization, or maybe any localization, then again, it becomes a little bit difficult to justify something like DITA. Whereas when you do have a lot of localization, especially on top of a lot of reuse, then that does justify it for a larger team. SO: Right, exactly. Because the more content you have, the more time and money you’re going to save by automation. I mean, you automate once, and then you automate across 10, or 15, or 20 languages, and you automate across all your deliverables. And all that stuff adds up.
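To make the reuse discussion concrete: in DITA, a shared chunk of content lives in one file, and every other topic pulls it in by reference (conref) rather than by copy-paste. A minimal sketch, with file names and IDs invented for illustration:

```xml
<!-- shared.dita: the single source of truth for a reused warning -->
<topic id="shared">
  <title>Shared content</title>
  <body>
    <note id="power-warning" type="warning">Disconnect power before
    servicing the unit.</note>
  </body>
</topic>

<!-- Any other topic reuses the warning by reference instead of copying it -->
<note conref="shared.dita#shared/power-warning"/>
```

When the shared note changes, every topic that conrefs it picks up the change at the next publish, and each language pays for the translation of that note only once.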
When you start talking about a team of 20 people, and they spend 10, or 15, or 20% of their time producing all these different channels or deliverables, and you automate that away, that’s a huge gain in productivity. When you have one person working through those things with limited or no localization, then the value of that is just not there. So far, we’re doing an excellent job of convincing everybody that if you have a small team, you probably don’t need DITA. GK: Well, it really depends. There are some circumstances where it can be a good fit, and maybe there are some where it’s not such a good fit. It really depends on your specific situation. When it comes to determining that fit, it may or may not always work out, because you may see that the cost of standing up the DITA environment outweighs the benefits, like we just talked about. If you have a use case for DITA, whether it is reuse, localization, or more automation in your publishing, but you don’t have enough content to justify it, then your management may just look at your setup and say, “Well, yes, DITA could get you all these things that you’re asking for and make things easier, but it’s never going to recoup the cost of the initial stand up.” SO: So what are those things? I mean, what does it look like to have a small DITA group, or a group for whom a small DITA implementation makes sense? GK: So one example is if your content is high-value, and what I mean by that is that the content is something that really is worth a lot to a lot of different people. Maybe it needs to undergo some sort of digital transformation process where it can be delivered to the right channels, and it can be remixed and reused and repurposed in all sorts of different ways, because there’s a lot of demand for that content. So even if there’s not much of it there, that content still has so much value that the benefits you would get out of putting it into DITA outweigh those costs.
SO: So we’ve seen this I think with content that is regulatory, not necessarily regulated, but in fact the regulations themselves, which are then distributed to lots of different people. So if your content is in fact the standard that says, this is how you should be doing things in XYZ industry or in XYZ organization, that may be a candidate. The other place that we’ve seen this is in high-value educational content, which might not be a ton of pages per se, but it’s the standard curriculum, or it’s reference material that explains to you how to do a particular thing, or how to get a particular certification. GK: Absolutely. And when it comes to that content, that’s where the value of it and the need for it for that audience really make the difference. And I will also add that this is an example where with some of the clients we’ve worked with that have this use case, some of them do actually have a larger content set, but they’ve got a smaller team working on it. And so that’s where it really becomes this question of, is DITA a good fit? And they get a lot of those benefits out of having a pretty decent volume of content, and then they can use the small team as almost more of a justification, because they can say, well, it makes these two or three or five writers’ lives easier if they don’t have to do so much manual wrangling of that content, and they really can get the efficiency out of producing that high-value content. SO: Right. I mean, it’s probably worth noting that when we talk about high-value content, and we’ve described a couple of content types, the reason that it makes sense to put that into DITA is because it enables you to label the content in useful ways. So you can have an element called regulation, or you can have a specific table that’s repeated over and over again that has elements that describe what’s in the table, so you can provide these labels that give you information about what’s going on in the content.
And because of the remixing that you’re talking about, having specific labels instead of just heading or paragraph allows you to then capture what’s going on in the content, and then remix it downstream and do what you need to do with it. SO: So the content is high-value in the sense that it gets remixed, repurposed, distributed, used by a lot of people, and it’s worth putting into DITA, because that allows you to give it those labels that make that remixing and repurposing better. GK: Absolutely. And that gets into another use case that we’ve seen where maybe a smaller content set or a smaller team can benefit from DITA, which is that the customers are demanding a type of content delivery that is not possible with your current setup. So if you’re in some sort of a desktop publishing-based setup, or you’re not digitally delivering your content but there’s a need for that, then that’s another area where you can evaluate and say, does it help to have DITA? And in particular, when customers start demanding personalization, they want custom content that’s delivered to them based on maybe the products that they’ve bought, or the services that they have decided to use from your company. If you get that semantic tagging in there, and you have everything structured, then that delivery becomes possible. SO: And I think it’s worth noting here that we still today see a steady flow of customers who tell us things like, “Well, we’re authoring in some desktop publishing tool and we’re producing PDF for our content. We really need to put this on our website, not as a PDF but actually as some sort of web HTML, for the first time, and we’ve never done it.” There are enormous numbers of organizations out there that are still in that boat. So for those of you that are listening to this thinking, but everybody’s on the web, the answer is that, in fact, lots and lots of people actually are not just yet. GK: Right.
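The semantic labeling described here is what DITA specialization provides. Out of the box, DITA only gives you generic elements like paragraphs and sections; the regulation-specific element names below are hypothetical, invented to illustrate what a specialized content model might look like:

```xml
<!-- Generic markup: the tags say nothing about what the content means -->
<section>
  <title>4.2 Inspection</title>
  <p>Annual inspection is required for all pressure vessels.</p>
</section>

<!-- Specialized markup (hypothetical element names): the labels themselves
     identify the regulation, so downstream tools can query and remix it -->
<regulation id="reg-4-2">
  <reg-number>4.2</reg-number>
  <reg-text>Annual inspection is required for all pressure vessels.</reg-text>
</regulation>
```

With labels like these, a delivery pipeline can pull, say, every reg-number and reg-text pair into a summary table without any manual copy-paste.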
And that’s something that surprised me a little bit, how frequently we still do see that. And I think that’s especially the case with some of these smaller teams, because they just don’t have the resources to make that jump, to make that digital transformation. So I think that’s where it really gets down to this point of looking at, is DITA a good fit? Because it can make that leap over to digital delivery possible. SO: Yeah. I mean, we’re still seeing, it’s not super common, but we’re still seeing legacy content in PageMaker, in QuarkXPress, in Interleaf, and I’m going to stop there before I dig myself further. Those people are out there, and you’re not alone. GK: Absolutely. So one other use case, too, that can help if you have a small content set, or a small team, is looking across the organization at a broader level. Are there multiple departments at your company that maybe need to share content, but they are limited in their ability to do so? Maybe they’re working in very distinct silos. Because if you look at it that way, even if each team is small on an individual basis, when you put all of that content together, then it starts to add up, and you maybe start to have more of a use case for something like DITA to save you costs on the entire content set when you put it together, and to also get benefits like reuse that you couldn’t have before. SO: Yeah. I mean, we always start with the tech comm group as the default. And that’s, I mean, 100% where DITA lives to begin with. But the other groups that we see here are technical training groups, who probably are reusing content from tech comm.
And also, increasingly, the sort of technical marketing or maybe sales enablement groups that are producing pretty scary white papers and other kinds of marketing materials that are not just a short product description or a one-page data sheet, but rather a longer (not long-long, but longer) form document that really could benefit from this. And in addition to the content reuse and sharing that you’re talking about, there’s also value in sharing the channels, the delivery channels. So if I have to build out a delivery channel for HTML, or for PDF, or for a portal, or whatever else, it’s really handy to be able to share that with those two or three or five other departments, so that we can all take advantage of that infrastructure, instead of having to build it two or three or five times for all your different silos. GK: Yeah, absolutely. And this is another case that goes back to what we were just talking about: you’d be surprised how many teams are still out there that are still entrenched in desktop publishing, and haven’t gone digital. We see a lot of cases along similar lines with these different departments when we go in and ask, how are you sharing content right now? Because you might have the training group that needs to reference something in the technical manuals, you might have a marketing team that’s using some of the training materials in their presentations. And when you ask them, how are you sharing, they just say, “We’re going to their published documents, and copying and pasting into our own systems,” which of course gets everything out of sync, has a lot of issues with version control, and introduces a lot of inaccuracies. And so there are still many, many cases where this is the only way that they can make those connections and use that content.
And once you open those doors, and have everyone working together with DITA as the basic framework, then it really gets rid of all of those barriers and those silos, and allows for production to be much more efficient across the board. SO: So what about a component content management system, a CCMS? We’ve been talking about DITA in small teams, but is a CCMS a requirement as part of this? GK: It’s actually not. And I think a lot of people misunderstand that as well. They think that if you go the route of a DITA setup, you have to have something like a CCMS as part of that to get the workflow benefits that come from it, for things like reviews and approvals, and the sort of end-to-end authoring-to-review-to-publishing pipeline. But you actually can work in DITA without a CCMS. And we’ve seen several examples of these smaller teams doing this as a cost saving measure. So they might use something else, maybe something like Git for version control, and then they’re just working in DITA in some sort of a web editor, and using the DITA Open Toolkit for publishing. And they don’t have it all connected in a CCMS, but they have enough coordination, because it’s a small team, that everyone is able to communicate about the process. And they don’t necessarily need the overhead of a CCMS to make things work. SO: Yeah. I mean, to be clear, you do get additional functionality from the CCMS. It’s just that in a small team, you can do sort of an 80-20 solution: you can get 80% of the functionality with source control, and that last 20% would be nice to have. And if you have a big team, you’re going to need it. I mean, I think we should be careful with this one. All of our CCMS partners are going to yell at us. But the bigger your team is, the more value you get out of a CCMS. If your team is smaller, there is still value there. It’s just that the overall value is smaller, because your team is smaller. And so you can look at that.
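A no-CCMS toolchain like the one described here can be as simple as plain Git plus the DITA Open Toolkit’s command-line interface. A sketch of what that might look like; the repository layout and map name are placeholders:

```shell
# Version control: plain Git instead of a CCMS
git add topics/ maps/
git commit -m "Update installation topics"

# Publishing: the DITA Open Toolkit builds deliverables from the same source
dita --input=maps/user-guide.ditamap --format=html5 --output=out/html
dita --input=maps/user-guide.ditamap --format=pdf --output=out/pdf
```

Review and approval then happen through whatever the team already uses, pull requests for example, rather than CCMS workflow features.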
But certainly there are lots of instances where the smaller teams need a CCMS, and of course, they’re going to scale it appropriately. They’re not going to spend huge amounts of money on a CCMS, but there are some out there that can be very reasonable. GK: And we’ve seen a few cases too where a smaller team will start in one CCMS that is designed for a team of that size; they have different kinds of levels and plans for different sizes of teams or different amounts of content. And they can start small and work their way up. Sometimes they do that within a single CCMS and upgrade their plans. Other times, they might change from one CCMS to another, depending on how they grow and scale over time. And we’ve also seen some companies use the homegrown approach of managing their version control and their authoring and publishing themselves, temporarily, maybe for a few months or a few years, while they get to the point where they truly do need a CCMS. And they have that stopgap period. So there are a lot of different ways that you can approach things. And it’s all very flexible, because it can change over time. And hopefully it will change as you grow. SO: I mean, that might be… Scalability is a really interesting point. Because one option, or one strategy that you might use is that you look at your company, your organization, and you say, we’re going to grow. We’re a hot startup, we’re in the space. We’re going to grow, we have to scale. And in that case, you might take a hard look at implementing DITA now while you have a small team. You may or may not have a really great justification for it with your team of two or three or five. But you know that you’re going to be 20 in a year or in a year-and-a-half. And at that point, you’re going to be working at lightspeed. And actually taking that pause and doing a big implementation is going to be problematic. SO: So scalability or anticipated scale is actually a pretty good reason to implement DITA for a small team.
Like, we’re this big now, we know we’re getting bigger, we know what’s going to happen, so we’re going to build this out now while we have a small content set; it’s going to be relatively easier to do it now instead of waiting for that challenge to snowball, and then having to really do a big conversion process. GK: Yeah, absolutely. And I think that just gets into the idea of future-proofing your content strategy, and building that in as part of it. And when we come in and do an assessment, which is sort of how we would help make that determination of whether DITA is a good fit for a smaller team or not, that’s a big part of it. We look at what are the problems you’re trying to solve right now, versus what are your goals for the future and the things that you anticipate happening in the next year, in the next five years, 10 years down the road. And we try to figure out how we can make a plan, or help you come up with a plan, that takes that into account. And that’s where that scalability piece is really, really important. Because if you anticipate that need early enough, you really can save a lot of cost and effort and headaches when it comes to actually getting DITA in place. SO: Yeah, it’s much easier and cheaper to do this when you don’t have a ton of existing content in some other format. GK: Yeah, absolutely. So is there anything else that you want to talk about when it comes to advice for small teams using DITA? SO: We haven’t talked about conditionals really, and variant content. That would be another thing to just sort of keep in the back of your mind. If you have content variants, DITA’s pretty good at that. And it’s one of the things that it handles well that can be problematic elsewhere. GK: Yeah. Conditionals really play into the personalization angle. We’ve seen that with some of the small teams that we’ve worked with, where that’s been a necessary part of making their personalization happen.
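The conditional (variant) handling mentioned here works through profiling attributes in the topics plus a DITAVAL filter file applied at publish time. The attribute values and file name below are invented for illustration:

```xml
<!-- In the topic: flag each variant of the content -->
<p audience="admin">Configure the LDAP connection before first login.</p>
<p audience="end-user">Sign in with the credentials your administrator
provided.</p>

<!-- admin-guide.ditaval: exclude the end-user variant from the admin build -->
<val>
  <prop att="audience" val="end-user" action="exclude"/>
</val>
```

Passing the filter at build time (for example, dita --input=maps/user-guide.ditamap --format=html5 --filter=admin-guide.ditaval) produces the admin variant from the same single source.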
So that’s definitely a big thing to keep in the back of your mind along with all the other things that we’ve talked about. GK: So with that, I think we can go ahead and wrap things up. So thank you so much, Sarah. SO: Thank you. GK: And thank you for listening to The Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit Scriptorium.com, or check the show notes for relevant links. The post DITA for small teams (podcast) appeared first on Scriptorium.
13 minutes | a month ago
How to align your content strategy with your company’s needs (podcast)
In episode 92 of The Content Strategy Experts podcast, Elizabeth Patterson and Alan Pringle share how you get started with a content strategy project and what you can do if you really don’t have a solid grasp on your needs. “It’s about opening yourself up to getting feedback from someone who’s done this stuff before, and may come up with some solutions that you didn’t necessarily consider in your own thinking.” –Alan Pringle Related links: Before you begin a content project Twitter handles: @alanpringle @PattersonScript Transcript: Elizabeth Patterson: Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode we’re going to talk about what options you have when you know you need a content strategy but can’t get a handle on your needs. EP: Hi, I’m Elizabeth Patterson. Alan Pringle: I’m Alan Pringle. EP: Today we’re going to discuss how you get started with a content strategy project and what you can do if you really don’t have a solid grasp on your needs. I’ll kind of start things off and just share that when we have introductory meetings with potential clients, there’s often a problem or a pain point that they express to us, but there can be a disconnect between understanding what you need to do and what you want to do in order to fix that. Alan, I want to ask you this question. Why do you think it’s common that we see that disconnect? AP: To me, it’s very similar to when you go to the doctor, for example. You have got some pain or ache and you can’t quite figure out what’s going on, because guess what? 
You’re not medically trained, so you go to the doctor and he or she looks at you and says, “Based on these symptoms, here are the systems in your body that could be contributing to that problem.” Again, because you don’t have medical training, the doctor may come back with some suggestions that you would never have thought of, because guess what? You’re not a doctor. That’s kind of how I see it. You have an issue, a pain or ache, and in this case it’s content related, if you’re talking about content strategy projects, and you go to an expert and say, “We’ve got this going on, how can we fix this and make it better?” EP: And that’s a very good point. I think also there’s a bias there, and it can relate to this doctor analogy too. If you take really good care of your health but you’re having some sort of issue, you might not really think clearly about some other causes, and going to the doctor would help you. It’s the same thing with a company. You might be biased because you’re inside that organization and you’re not thinking about it as thoroughly as you should. AP: Right. That gets into the whole third party thing. EP: Absolutely. AP: It’s like you go to a friend for advice. If you have got relationship problems or whatever, or you’re buying a house for the first time, what do a lot of people usually do? They go and talk to a friend who’s been through something similar to get their input on it, because they’ve been there. Again, it’s about kind of opening yourself up and your mind to getting feedback from someone who’s done this stuff before, and will probably come up with some solutions that you didn’t necessarily consider in your own thinking. EP: Right. That really pulls us into the next question, which is what you can do. One of the responses to that is to look into doing some sort of discovery project with a third party. Could you speak a little bit to what a discovery project is? AP: Sure.
When clients come to us they usually say, “We know we have this content related problem.” What we do is say, “Okay, let’s take a look at that.” It becomes part of a bigger engagement essentially, because what we need to do is back up a little bit from that pain point. We need to figure out what the big overall business goals for the company are, and then we can say, “Okay, this pain point is likely happening because it’s not aligning with this particular requirement.” People usually come to us when something’s wrong or broken, just like you go to the doctor. It’s not usually, “I feel great. I’m going to the doctor.” It doesn’t work like that generally. Something’s wrong. They come to us. What we want to do is take a look at what’s broken, take a look at the big overarching business goals and how that content problem ties into them, and what you can do to better align that content pain point with the business goals of the company and fix that problem. EP: Right. When we do discovery projects, there can be differences depending on the type of project that it is. But overall, discovery projects are very similar. We’re identifying gaps. We are identifying tool possibilities. We’re putting together a map of solutions for your content project. They all look very similar in that sense. AP: Right. There’s an overarching kind of theme or goal to these things. I’m glad you mentioned tools, because there’s often this temptation when you’ve got some kind of problem: oh, I’m going to get a piece of software that will magically fix that. It doesn’t usually work like that. AP: That’s why I think it helps to have someone come in to evaluate and articulate what the requirements are, to help you build that list so you can pick the right tools. There needs to be this conversation.
There needs to be a lot of back and forth among different people in your organization, whether they are directly or indirectly affected by content, in the case of a content strategy project, of course. AP: A lot of what we’re talking about applies to business in general, but of course our focus is more on content, because you want to get the big picture and get those requirements laid out, and then find the tools and solutions that fit those requirements and make those goals come to life and work. If you don’t do enough discovery and you don’t put a lot of thought into the process, and you just pay attention to marketing, or what you heard another company did, that gets into dangerous territory where you may not get return on your investment when you buy a tool and it doesn’t turn out as you anticipated. EP: Absolutely. You’re talking about how important it is to talk to all of these people so that you can pick the right tool. Oftentimes stakeholder interviews are part of a discovery project, so the third party goes in and they talk to the different stakeholders that are going to be affected by that tool, find out what their pain points are, and then that helps to identify a tool that’s going to really solve the problems or fit your solution. AP: Right. I think it’s also worth pointing out, too, that these discovery projects are just about looking at tool options and suggesting the possibilities; then later, what we generally do is have a phase where we configure and implement the tools. AP: This is also very helpful from a business point of view. If you go into a relationship with a consultant or a vendor or whomever, it’s probably good from a business point of view to separate out the discovery part from the configuration and implementation part, because what if you get into the discovery part and you discover that you and that consultant are not syncing? It happens sometimes. You can’t quite sync, so that relationship probably shouldn’t continue.
AP: From a strictly business point of view, it’s a good idea to separate the discovery out from the configuration and the implementation. It is also very good from a budget point of view to say, “In this fiscal year, we’re going to lay out the discovery work and come up with a roadmap, which is a result of your discovery project.” Then later, in the next fiscal year, is when we’ll start buying the tools and implementing them, say, over the next two fiscal years or something like that. EP: Oftentimes that roadmap and the results from your discovery project are what help you, or can help you, to get buy-in from upper management to actually get the funding that you need for that tool or the implementation phase. AP: Well, and really from my point of view, too, when you mention upper management, they need to be part of the discussions from the get-go. They need to be part of those interviews, because they need to articulate what they see as the requirements. They also need to hear about what the issues are in the content world. Because they’re the ones, A, like you mentioned, who have the money, and B, they’re the ones that also have the vision of how to reconcile those things. It points to a thorough communication system that you have to set up when you’re doing these interviews. You don’t just pick the people that are immediately affected by content: the creators, the people who read it, edit it, review it, and distribute it. You also need to talk to your executives to find out what they expect from the content and how it aligns with their vision. AP: You need to talk to your IT people, for example, because they’re likely controlling some web servers that your web content lives on. They may be controlling tools. They’re the ones that do inventory of tools and decide, yeah, we’ve already got a tool that does this. Why are we going to get another one? It’s not a situation where you need to be in a vacuum.
A discovery process is talking to people who are directly and indirectly impacted by decisions. And you have to include those people who have the bigger overarching vision, for lack of a better word, for where your company is headed. EP: Right. We’re talking here about discovery as a way to solve a problem. Are there any other reasons that a company might consider a discovery project? AP: Well, we tend to be focusing on the negative things, and we really have done that in this podcast. Oh, it’s a pain point. It’s the bad things. But you also, in this process, have to look at the things that are working and figure out a way to either translate those or move those over into your new process or whatever new systems you’re going to recommend, to be sure those things are handled right. You’ve got to look at the good and the bad, but the bad is what usually brings people to us. But we also have to recognize, as consultants and as the people who help run these discovery programs, that you have to have a really good ear and listen, and find out about the things that people like and that in some way need to be kept as things move forward with improving whatever it is that is indeed broken. EP: Right. Another reason to consider a discovery project, and this isn’t a problem, but there might be a merger, and bringing in a third party to help with that merger, bringing content from two different companies together and making a plan, can be very helpful because there’s a lot to unpack there. AP: Right. There’s something to be said, especially for a third party in this case, because you’re going to often have two basically completely duplicative systems that are pretty much doing the same thing around content. From an IT point of view, keeping both is probably not ideal in the long-term, at least usually it’s not. It is not a hard and fast rule here.
AP: It’s a good idea to have someone come in and take a look who has had experience helping other people with mergers, like you’ve mentioned. Is it likely you’re going to have someone on your staff that has gone through that? If you do, that’s great. Use that resource. But a lot of times you won’t, and that again points to, let’s talk to a third party who recognizes the issues and the challenges surrounding our merger and content, and how they can help us figure out how to integrate things better. EP: Right. Alan, well, thank you. That was really helpful. Thank you for joining me today. AP: Sure. Enjoyed it. EP: With that, I think we’re going to wrap up. Thank you all for listening to The Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links. The post How to align your content strategy with your company’s needs (podcast) appeared first on Scriptorium.
18 minutes | 2 months ago
The misuse of metadata (podcast)
In episode 89 of The Content Strategy Experts podcast, Gretyl Kinsey and Bill Swallow talk about strategies for avoiding the misuse of metadata in DITA XML-based content. “The more you fine-tune how your content model needs to operate, the easier it’s going to be to move it forward over time. The more you start taking shortcuts and using metadata for purposes other than what it was intended for, the more problems you’re going to have.” – Bill Swallow Related links: Making metadata in DITA work for you Tips for developing a taxonomy in DITA Twitter handles: @gretylkinsey @billswallow Transcript: Gretyl Kinsey: Welcome to the Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. In this episode, we talk about strategies for avoiding the misuse of metadata in DITA XML-based content. GK: Hello and welcome, I’m Gretyl Kinsey. Bill Swallow: I’m Bill Swallow. GK: And we are going to be talking about all the different ways that we have seen metadata get misused in DITA XML. Before we dive into that subject, Bill, what is metadata in the context of DITA? And how is it used, just for anyone who may be unfamiliar? BS: In the context of DITA, metadata is a series of elements and attributes that are applied to your DITA content in order to give it some meaningful purpose. A lot of times we see it as profiling metadata, so being able to set, for example, an audience on a topic to say, “This is only for beginners.” This way, when you publish your output, you can turn on or off your beginner audience content and produce either a beginner guide or a more advanced guide without the beginner information in there. Metadata also allows you to do more interesting things with your content. One example that we see with metadata in the standard DITA implementation is around notes and warnings and cautions.
They’re all the same root element of note, but you can use a type attribute to set whether it is a note, a caution, a tip, a warning, a danger flag or what have you. That’s an example of how metadata can influence the type of content that you have in DITA. GK: Yeah, absolutely. We essentially describe it as data about data. It’s all of the information in your DITA content that does not actually get printed or electronically distributed, I guess you could say, on the page. It’s everything that’s making it run behind the scenes, but it’s not actually part of your published output. It can influence the way the content is produced, as Bill described, and the way it’s sorted, searched, and delivered. But it’s not something that you actually see, like the words on the page. And I think that can cause a lot of the confusion that leads to misuse, because when people come into DITA from a desktop publishing mindset, the way that metadata works there is quite a bit different. Getting into the mindset of the proper use of metadata in DITA is a really big shift. I want to talk about some of the examples of misuse that we have commonly seen and fixed across a lot of different cases. BS: I think probably one of the most common examples we’ve seen with regard to metadata misuse has been around using generic metadata buckets for very specific purposes. And one of these is the outputclass attribute, which a lot of people end up using as formatting instruction within DITA. That kind of breaks the rules of DITA itself, because you generally go to DITA so you can separate your content from its formatting. But here, we often see outputclass equals red or outputclass equals 16 points, where they’re adding instruction that wasn’t built into the transform itself in order to tell the transform how to render a piece of content. It blows my mind, but it is one of the most common things we’ve seen. GK: Yeah.
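To make both points concrete, here is a minimal DITA sketch (element content and attribute values are invented for illustration): first the note element, where a type attribute carries the semantic metadata, then the kind of outputclass formatting misuse just described, alongside a more semantic alternative.

```xml
<!-- Metadata shaping content type: one root element, typed by attribute. -->
<note type="warning">Disconnect power before opening the case.</note>

<!-- The misuse: formatting instructions smuggled into outputclass. -->
<p outputclass="red 16pt">Do not open the case while powered on.</p>

<!-- A more sustainable pattern: name the meaning, not the look, and let
     the transform decide how a "safety-callout" should render. -->
<p outputclass="safety-callout">Do not open the case while powered on.</p>
```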
Again, that comes from the mindset of working in something like desktop publishing, where you do get to control all the little bits and pieces of the formatting at an individual level. When you go into something like DITA, where your formatting is separated from the content and is automated, it can be really, really difficult to get your mind past that shift. And so then we see a lot of instances where people go, “Oh, I’m really limited. I can’t make this one piece of text bold anymore or I can’t turn this green anymore.” They just find the workaround of putting an outputclass on it. And what happens over time is that that misuse of the outputclass attribute ends up completely defeating the purpose of having automated formatting, because you’ve put all these overrides everywhere. And if you actually had a more legitimate use of outputclass, then that’s kind of ruined too by the fact that you have misused it in all of these places. BS: Another bit of metadata misuse that we’ve seen is using one metadata element or attribute for another purpose, a purpose it wasn’t designed for. One example could be that you might be setting audience to, let’s say, a different country, which kind of makes sense if you want to be able to filter on certain types of content for certain geographies, but really that level of metadata should be held in the xml:lang attribute, where you’re describing which language and which country this content is aimed toward. If you’re labeling something for a German audience, regardless of whether it’s in English or in German, you really should be using the xml:lang attribute as opposed to profiling it for a German audience. Now, there are some differences that you can get into as to whether you want to include or omit certain types of information for a particular audience, but in general, you have to be clear about which elements and which attributes you’re going to use for which specific purpose. GK: Yeah.
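As a rough sketch of the distinction Bill describes (attribute values and wording are invented for illustration): audience carries profiling metadata you can filter on at publish time, while language and locale belong in xml:lang.

```xml
<!-- Profiling metadata: filterable at publish time, e.g. with a DITAVAL
     file that includes or excludes the "beginner" audience. -->
<section audience="beginner">
  <p>Start with the guided setup wizard.</p>
</section>

<!-- Locale belongs in xml:lang, not in audience: this marks content
     as German-language for the German market. -->
<p xml:lang="de-DE">Dieser Hinweis gilt nur für den deutschen Markt.</p>
```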
And audience is a really interesting one, because I’ve seen a few different cases where, if the audience that a company is delivering for is really complex and there are a lot of different ways that they need to parcel out the content for different chunks of that audience, the default metadata for audience in DITA just isn’t enough for them. And some of the workarounds I’ve seen are that they’ll use audience for one facet of their total audience, and then they’ll go in and pick another metadata element or attribute for another facet of their audience, when it’s really not designed for that. And so that points to the fact that if you’ve got complexity that’s not really built in, you need to start looking at a more effective way to handle it than just shoving it into a metadata element where it doesn’t actually fit and where it’s not designed for it. GK: Because then what happens down the road is that if they later need to use that other metadata element for its intended purpose, it’s already taken up with however they’ve described it for that piece of their audience. And then they have to do a lot of reworking. It really is important to think about this. And I know we’ve talked in some of our other podcasts about planning out a taxonomy and thinking about your metadata as a whole before you go in and just start assigning it to DITA elements and attributes. BS: Right. And then even if you’re not misusing an attribute, a lot of times we do see cases where othermeta is just used throughout an entire content set, where you’re essentially defining custom metadata, which is good, but you’re doing it in a very generic way that usually involves a lot of user error, because everything is hand-typed at that point. GK: Yeah.
I’ve seen a lot of instances where that’s just used as a catch-all, a place to shove anything that doesn’t fit into all of the other existing metadata categories. And then what happens later, when you need to organize that better, is that everything has just been shoved into othermeta, and there’s not really an easy way to parse that back out and define it without doing a lot of work. It really is, like we said, helpful to plan this out ahead of time, think about all the different metadata that you’re going to need, and figure out where and how it fits into DITA’s metadata structure. BS: Absolutely. The more you fine-tune exactly how your content model needs to operate, the easier it’s going to be to move it forward over time. The more you start taking shortcuts at the beginning and use metadata for purposes other than what it was intended for, the more problems you’re going to have unwinding that as your content set grows and as your publication breadth grows. You’re just going to run into problem after problem after problem, so it’s best to do it the right way. Rather than shoehorning a bunch of metadata into random elements and attributes and using othermeta and outputclass and all these other things wherever you want to, do you want to talk about what might be a better approach? GK: Sure. Of course, one is specialization, and that is the ability that DITA has to create custom elements and attributes based on the structure of existing ones. This is absolutely something that you can and should do with metadata if you have that need. And this is really, I think, one of the more common areas where we see specialization among our clients. A lot of times the actual topic and map structure is fine, but there are metadata requirements that they have that just do not fit within what’s available by default in DITA.
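A hypothetical sketch of the othermeta problem just mentioned (topic id, names, and values are all invented): because both the name and content attributes are free-typed strings, near-duplicate variants accumulate and nothing validates them.

```xml
<topic id="install-widgetpro">
  <title>Installing WidgetPro</title>
  <prolog>
    <metadata>
      <!-- Free-typed name/content pairs: nothing stops the drift below,
           so filtering or search on "product-line" silently misses the
           second entry. -->
      <othermeta name="product-line" content="WidgetPro"/>
      <othermeta name="Produt-Line" content="widget pro"/>
    </metadata>
  </prolog>
  <body>
    <p>Run the installer and follow the prompts.</p>
  </body>
</topic>
```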
Coming up with some sort of a taxonomy before you start putting everything into those default DITA elements and attributes can help you see where, okay, maybe we might need a specialized element or set of elements or attributes for a specific type of metadata. And that really gives you a roadmap for how that might work. GK: And I think one thing to look out for is if you do start with a set metadata structure, and then things change over time and you start noticing a pattern, that maybe you are using othermeta a lot, or outputclass a lot, for the same kind of thing over and over because it just doesn’t fit. That can oftentimes be a little red flag that, hey, maybe we need to go back and take another look at this and think about specialization for that kind of information. BS: Right. A lot of people are really hesitant to look at specialization because it means customizing the DITA model and doing some very high-tech and difficult things. But from a metadata perspective, it really is the best way to get in there and make the model work for your content and for your needs. And the beauty of the specialization approach is that once you’ve implemented it, it carries forward. You can update DITA from version to version and your specialized content will still work. Your specialized elements and attributes will just work. It’s not divorced from the model. You’re not dealing with some FrankenDITA thing that’s never going to be able to be updated again. It’s really the ideal approach to wrestling with metadata and making sure that you have the right buckets for the right types of data about your data. GK: Yeah, absolutely. I want to talk about another feature that can easily be misused but can also be really helpful if it’s used correctly, and that is subject scheme. A subject scheme is basically a special type of map that’s available in DITA that allows you to bind specific values or sets of values to attributes.
And this can kind of work sometimes as an alternative to specialization if you don’t really have a compelling enough case for specialization yet, but you still need some sort of a custom set of values for your attributes. GK: And some examples we’ve seen are again, if we go back to the example we talked about for audience and you want to define a list of different pieces of your audience that is kind of more complex than maybe something like beginner, intermediate, advanced, then you can set up that list in your subject scheme, you can set up hierarchical lists of values and it really just makes it a lot easier for your writers to avoid mistyping something because they have a pick list that comes from that subject scheme. But again, it’s also something that can really easily be misused. BS: Absolutely. And the other piece about a subject scheme is that you can set it up so that you do have that finite list of values and you only have those values available. And that really allows you to only provide the values that you have handling built in for. If you are experimenting with different types of metadata and you don’t necessarily want them in a production mode, you can exclude those from some of the lists, depending on the authors that are working on it. You might have an experimental batch of authors that are working on the next latest and greatest batch of content, but for a lot of the content, that’s more in a maintenance mode or is using the existing publishing workflows that you have established. BS: You can limit the metadata values to just what’s in the subject scheme, that’s all they have available so they don’t accidentally create something or mistype something that is not going to be handled. Because usually the publishing instruction will basically say, “I don’t understand this value. 
I’m just going to throw it away and just go with the content that’s there.” And that could be very detrimental if you’re publishing something that has metadata applied to it that says, “Do not publish this in this scenario,” in which case you get content that you didn’t intend in your output. GK: Yeah. One of the other ways I’ve seen people misuse subject scheme is that when they start to approach it as a kind of stopgap between having no specialization and eventually getting into specialization for their metadata, over time it starts to become really unwieldy. They’re trying to shove too much complexity into some of those lists of values and really trying to make it a substitute for specialization when it’s not. And I think that’s another one of those things to look out for as a red flag: just like in your content, if you find that you are using outputclass excessively for the same thing, or you’re shoving too much into othermeta, if you get into a subject scheme and you realize that it’s not actually helping with the complexity of everything that you need to capture, then that’s another one of those red flags that says, “Hey, we should look at specialization.” BS: And likewise it hearkens right back to your taxonomy as well, because at the point where you’re using subject scheme, it should be reflecting what’s in your taxonomy. If there are additional things that you need to add that aren’t available in your pick list for the subject scheme, chances are they’re also missing from your taxonomy, which means you have some more thinking to do on exactly how you are categorizing your content. GK: Absolutely. We’ve talked about taxonomy as a way to make sure that you avoid that misuse of metadata.
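For reference, a minimal subject scheme map along the lines discussed (key names are invented for illustration): it binds a controlled list of values to the audience attribute, so authors pick from that list instead of free-typing values the publishing pipeline won’t recognize.

```xml
<!-- A subject scheme map constraining the audience attribute. -->
<subjectScheme>
  <!-- The controlled vocabulary: one parent subject with its values. -->
  <subjectdef keys="users">
    <subjectdef keys="beginner"/>
    <subjectdef keys="intermediate"/>
    <subjectdef keys="advanced"/>
  </subjectdef>
  <!-- The binding: audience may only take values from "users". -->
  <enumerationdef>
    <attributedef name="audience"/>
    <subjectdef keyref="users"/>
  </enumerationdef>
</subjectScheme>
```

With this in place, a conforming editor can offer beginner/intermediate/advanced as a pick list, and a value like "begginer" is flagged rather than silently dropped at publish time.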
Another one I wanted to bring up is not just defining a taxonomy, but also defining your formatting and your presentation needs as much as you can upfront, because that’s also going to play a role in where you might need some custom elements or attributes that can drive things a lot better than just using outputclass all over the place. BS: That’s a good point. And I think the final thing that we want to mention is that you want to future-proof your content model as much as possible, so that these needs are either expected or at least your model can grow as others’ expectations of that content model grow. Having specific metadata that’s specialized for your exact content is going to make it a lot easier to introduce new values and to constrain against specific values for that metadata, and having that mature taxonomy model will also help you in that regard. GK: Yeah. If you think about future-proofing and about planning your taxonomy, your publishing needs, and your distribution around that, then that will really help shape the way that you think about your metadata use and make sure that you allow for the growth and scaling that should happen in your company if you’re going in a successful direction. Before you just start going into DITA and building the metadata, it really requires that level of forethought to make sure that you’re not going to misuse anything. GK: And we’re going to wrap things up there. Thank you so much, Bill. BS: Thank you. GK: And thank you for listening to the Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links. The post The misuse of metadata (podcast) appeared first on Scriptorium.
27 minutes | 3 months ago
Understanding information architecture
In episode 88 of The Content Strategy Experts podcast, Alan Pringle and special guest Amber Swope of DITA Strategies talk about information architecture. “Information architecture is a role, not necessarily a position, but by ignoring it, you end up without the discipline and the consistency that really enables great customer experiences.” – Amber Swope Related links: DITA Strategies Information architecture in DITA XML (podcast) Twitter handles: @alanpringle @DITAStrategies Transcript: Alan Pringle: Welcome to the Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode we talk about information architecture with special guest, Amber Swope, of DITA Strategies. Hey everybody, I’m Alan Pringle, and today we have a guest on the podcast, Amber Swope, of DITA Strategies. Hey there, Amber. Amber Swope: Hi there, Alan. AP: So, let’s start with the basics here. Give me your definition of information architecture. AS: Well, as you know there is no common definition of information architecture- AP: No. AS: … So rather than getting frustrated by that, I chose that as an opportunity to own the version of it I want to have. So, I go with the Samantha Bailey definition, which starts with: information architecture is the art and science of organizing information so that it is findable, manageable, and useful. I really like that definition because it acknowledges that there’s art and science to this practice. AP: Also, there’s not a whole lot of jargon in that definition, I appreciate that a lot too. AS: Yeah. And particularly the science part is obvious to see. So for instance, if you’re using an open standard like DITA, you could have five IAs, give them the same challenge, tell them which version of DITA they’re going to use, and they would probably come up with solutions that are 80% consistent.
But that 20%, that art is where different information architects will bring to bear their experience and potentially give you something slightly different which is why it’s always great to have more than one information architect on a project. AP: Sure. And there is absolutely an element of judgment call to it as you have said, it is not just a straightforward everybody’s going to do the same exact thing. There is no book basically that tells everyone how to do it exactly the same. AS: And I also take this a step further and make a delineation between management information architecture and delivery information architecture. And I found that most information that is available for information architects is dedicated to delivery information architecture. That is the architecture, the structure, the metadata, et cetera, that is required to deliver information on a specific platform. So a mobile app, or a website, a portal, a working environment. AS: And then there’s what I tend to do which is management information architecture. And the difference is that I’m tasked with creating an information architecture that can support omni-channel publishing to any of those platforms. I tend to work with companies that are trying to have a single source of truth that they manage in DITA but that serves different platforms, and each one of those platforms will have its own architecture because that architecture supports the display, usability, findability, et cetera, of that information. AP: I think that is a very important distinction to make and it hearkens back to something that started in the late 1990s, the idea of single-sourcing, where you basically you have a source that is then output into a bunch of different formats and you’re not writing specifically for one format type. AS: And that’s particularly powerful. 
And when you think about content that is to support learning, if you have content and you want to send it out to an LMS, you’re not going to structure it just for the LMS in the management architecture but the LMS, that experience is so important to learners that that architecture needs to be fully developed. And when you work in these larger projects the biggest challenge is first getting folks to acknowledge that they actually need information architecture as a separate discipline, and then next understanding that they need more than one. And understanding what those roles are and communicating which direction the requirements are going. And the reality is they’re both going both ways, and that leaves a lot of opportunity for some great collaboration but also an opportunity for some miscommunication. AP: Sure. And I think this makes me want to ask the question, when should a group of content creators, a company, a department, whoever, when should they be thinking about information architecture? What’s kind of an inflection point where you say, “We really need to buckle down and think about this seriously?” AS: Well, when I speak to a group of information developers or tech writers or whatever label you want to use for people who are creating content, I asked them, “How many of you are information architects?” And very rarely does anyone raise their hand. Then I ask, “Do you control the table of contents? Do you put in keywords or index words?” And everyone raises their hands. So, everyone’s doing information architecture as they create content in these organizations. I think the question really is when do you need to acknowledge it as a separate discipline? And I would say as soon as you have more than one deliverable. Because if you look at high-tech companies, one of the classic questions is, well, where does the troubleshooting information go? Does it go in the user guide? Does it go in a surfs guide? Where does that go? 
AS: Well, that’s an architecture question. And if you have guidelines that indicate that user guides have this information, getting started guides have this information, administration guides have this other information, and it’s okay to have the same information in more than one deliverable or it’s not, that’s architecture. And I feel that it’s a disservice to not acknowledge that everyone’s doing it already. It’s a role, not necessarily a position, but by ignoring it you then end up not having the discipline and the consistency that really enables great customer experiences. AP: I think that’s a very great point you just made. As people are creating content they are adding intelligence to it. They are categorizing that information oftentimes without even realizing that they’re doing it. AS: And if you have more than one author then you have different people’s ideas and opinions and judgment calls. And I would argue that many of the style guides that teams have that allow folks to be more consistent, actually, most of the time incorporate a lot of the architecture. And that it might be helpful for teams to look at that information with a critical eye and say, if it’s about what information goes where, and what’s the structure of a specific deliverable, maybe it’s worth calling out into a separate section of the guide and acknowledging that this really is different than the words that you choose or how you format something. AP: With IA, is it generally a project by itself or is there some trigger, some bigger corporate initiative that may make that happen or put attention toward it? AS: I would love there to be projects where someone calls me up and says, “We just really want them have a great IA.” That never happens. Folks call up and say, “We are having this type of a business challenge. 
We understand that baking the structure of the content into the DTDs or baking it into the CMS structure, or completely ignoring the structures, the metadata that we need is causing us pain.” And then because IA is around the structure of the content and the metadata, and particularly if you’re working in XML, you don’t give people the raw XML, you always process it or render it with a transform. So it’s always going to be bound to additional work, you’re not going to go and change the IA and then not be able to generate the content. The simple answer is it’s always been part of a bigger project. AS: When I look at projects like this I see the business question, what are we trying to achieve? And then I think of three dials or areas where we can control and make accommodations and improvements, architecture, technology, and process. And most challenges require some work in all three areas. What can the architecture do to give you more consistent, well-structured, more powerful content? What’s the technology that’s required to perhaps present that information in a better way to meet the user need? And then process, well, what processes need to change in order for us to produce the right content in a timely fashion? AP: Those are really good ways to break it down. But if I’m talking to a C-level person, an executive, the person who has the money in their hand, how do you communicate to them about the importance of IA because I’m pretty sure telling them we’re going to get spiffy new tools is not the way to win that argument? AS: Well, and it’s a challenge because everybody wants a simple answer, particularly in the U.S. where we get judged quarterly by our success or our failures. Most of these projects to make significant change or improvement take longer than a quarter, so the whole budget question is always difficult. The first challenge I think is for them to take a business challenge and understand when content is involved at all. AP: Yes. 
AS: Because once we say, “Oh, content is part of the solution,” then it immediately is, well, it’s not just the words but it’s also the structure, and at that point we can introduce the discussion around architecture, the role of architecture. And I’m actually working on a book with a coauthor about this exact challenge, is how should management understand when a business challenge involves content and when simply buying new software won’t be the answer because there’s always going to be some sales person out there offering them some sexy new software and telling them that it’ll fix everything. AP: Indeed. And it usually doesn’t, says the narrator. AS: Well, I would say it always doesn’t because the idea of buying software without understanding the inputs and outputs of it and the role of the people using it, that process, that’s how you end up with shelf-ware. AP: Exactly. And I’ve seen that happen so many times I can’t even tell you, I’m sure you have too. AS: Oh, yeah. AP: Yeah. So there’s got to be some process here, some way to consider to map this out, especially to get that buy-in for the vision and then to implement it. Is there a loose process? Now, I realize this is a huge, huge leading question that we could talk about for hour upon hour. But is there’s some kind of a loose outline about how these projects go? AS: Definitely. As with any challenge we want to start off with what the definition of success really is. Because we don’t want to make change, we want to make improvement and how will we know when we’re done if we don’t know what the goal is? And if you’re in a larger project, in larger organizations, a lot of times they’ll have a content strategist and the content strategist is usually the person who defines what success is. 
They work with the management team, understand what the challenge is and say, “Oh, okay, let’s talk about what success looks like from the contents point of view.” AS: Then of course we want to understand the current status so we do some assessment to understand why is the current content, whether it’s its structure, its delivery, the actual words, it’s in the wrong language, where is the content falling short and what are we currently working with? For a lot of teams one of the biggest challenges is that they have multiple instances of the same content. And so it’s easy enough to write, but then when you go to update it’s like Pokemon, you have to go catch them all and you never do because you might be new, you might be busy or you might not even have access to the repository that has that fifth instance of that content. And that is why we really want to get to a single source of content in a management architecture so that when people need to update the content they simply do it once. AS: After we know what we have, we want to look at the future state, what should the future state look like and understand taking the idea of success and making it concrete in a way that we can then start building toward. And then once we have this from the architectural point of view, I’m going to start looking at the deliverables, really looking at them and saying, “Okay, what’s this deliverable type? What’s its purpose? How should it be delivered? Is it for one or more audiences?” And when I mean deliverable, I mean a manual, an article, a course. If you’re a mobile app that has glossary quizzes, what is that thing that the end user consumes. AS: And based upon the purpose and who it’s for then we can start looking at, well, what kind of content needs to be here to meet that purpose? And once we have the idea of what success looks like for the deliverable then we can look at the content types. In some organizations the content types are super basic. 
They have concept, task, reference, glossary, maybe some troubleshooting. If you’re in education, you’re going to have learning objectives, questions, overviews, and summaries. And the more specific your industry is, I have found, the more content types you potentially have. AP: Yeah. We’ve had that experience as well, I agree with that. AS: When I say content type, I’m not necessarily referring to an official topic type. For instance, you don’t have to specialize to get a content type if the content structure is the same but you really want to identify the purpose of the content so that you can empower a more nuanced delivery. So for instance, you might have glossary terms using the glossary structure in DITA, but maybe you want to include that this is a vocabulary word versus this… And it’s for a specific industry, or you want to say, “Oh no, this is a chemistry formula.” That’s a very specific purpose. And you don’t have to specialize, you can use the base topic, but then you are identifying for downstream systems what the content type really is. AS: And when we know that, then we can look at its structure. Our goal is to make it super clear to the people creating the content what purpose they’re writing for, because creating smaller, modular, structured content is still a new concept to a lot of content authors. And even though it started way back with information mapping and has been used through multiple systems, including DITA, the idea that you would write and store pieces of content for different purposes is still a big change for lots of authors. AP: It is, and if it were not, we wouldn’t be employed, quite frankly. AS: Indeed. And so once we have some idea about what the structure should be, then we’re going to do some proofs of concept, try out some lightweight mock-ups, and understand how things come together.
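The glossary example above might look something like this in DITA (a hedged sketch; the id and the content-type value are invented, and the profiling-attribute convention is one possible approach, not a standard): an ordinary glossary entry whose purpose is flagged for downstream systems through a profiling attribute rather than a specialized topic type.

```xml
<!-- An unspecialized glossary entry; otherprops tells downstream
     systems its purpose ("chemistry-formula") without changing the
     element structure. The grouped-value syntax is illustrative. -->
<glossentry id="molarity" otherprops="content-type(chemistry-formula)">
  <glossterm>Molarity</glossterm>
  <glossdef>The number of moles of solute per litre of solution.</glossdef>
</glossentry>
```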
And what I typically do is I start with what they do now and replicate it, and then we start thinking about the art of the possible. Because we’re not being brought in to ask them to recreate what they have now; they have a business challenge that what they have now doesn’t meet. Understanding what that change is, what that delta is, is really important from a structural point of view, because we can’t help the authors make that journey to the new format and the new structure unless we fully understand it. So I’m a big fan of recreating what they have, doing a proof of concept, and understanding with the stakeholders why what they have doesn’t work, because me telling them that usually is not enough. AP: No, but it does help that you’re a third-party voice coming in there. And I’m actually very glad that you brought up proofs of concept, because there is always, on these kinds of projects, a chicken-and-egg challenge with the tools and technologies. If you’re doing the information architecture and laying that all out, at that point you often don’t have the tools that will do the transformation, or the tools that they’ll be using for authoring. So how do you balance that lack of tools and doing these proofs of concept? How do you handle that chasm, for lack of a better word? AS: Well, I start with what DITA gives us for free. Because it’s an open standard, we have the Open Toolkit, and a lot of the authoring vendors provide multiple transforms. And so I go with what I have available. Because it’s a proof of concept I don’t want to invest development time if I can help it, and I see what I can get. So for instance, if I’m trying to show people how they can get different types of associations that can be represented as links in the output, I’ll just use an HTML5 transform, just to show them, hey, this is what you get.
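A minimal sketch of the kind of association described here (the file names are hypothetical) is a relationship table in a DITA map. The DITA Open Toolkit’s HTML5 transform turns it into generated related links, with link text pulled from each topic’s title rather than typed by hand:

```xml
<map>
  <title>Pump maintenance</title>
  <topicref href="concepts/pump-overview.dita"/>
  <topicref href="tasks/replace-seal.dita"/>
  <!-- The reltable associates the two topics; the HTML5 transform
       generates "Related information" links in both directions. -->
  <reltable>
    <relrow>
      <relcell><topicref href="concepts/pump-overview.dita"/></relcell>
      <relcell><topicref href="tasks/replace-seal.dita"/></relcell>
    </relrow>
  </reltable>
</map>
```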
That’s particularly useful when you’re trying to explain to people why they will no longer have to manually manage and type the link text for all their links, particularly if they’re hierarchical. AP: Yeah. And I know, having worked with you on some past projects, you’ve often not even touched DITA tools to do a proof of concept, for example, doing mock-ups of a table as it stands now and then doing a future-state table using Excel or Word to show the differences and what’s possible without actually having to touch DITA. Because you have that DITA knowledge, you can translate it in a way that’s very visual and help people understand without them, or really even you, having to touch the DITA code. I think that’s also very helpful. AS: Well, that’s the thing, and this is the chicken-and-egg part of it that you mentioned, Alan: I’m trying to help folks understand what they can do with their content, and they shouldn’t have to know DITA in order to be able to communicate their needs to me. AP: Absolutely. Really, it is a situation where they need to bring their expertise, and that is with the current state and with how process flow works and how information flow works. And it has to be combined with your expertise on DITA, or whatever other model that may be; in your case it is usually DITA. You’ve got to find a way to bring those two things together and have them sync for these projects to work, at least that’s my point of view. AS: And I’m a big fan of using diagrams, because first of all I’m very graphically oriented, I love a good picture. And second, it allows me to help folks see past their words to see their structure. And the first version of the diagrams I do has no DITA in it. For instance, if I were doing a diagram of a glossary unit, I wouldn’t even need to say the word topic, I’d say a glossary unit. It’s like, okay, we start off with the obvious: we have a term and a definition.
Do we need abbreviations or some other alternate form? Okay, let’s talk about the alternate forms that you want. Do you need usage notes? Think, for instance, of the folks that did a mobile app that tested glossaries; they’re basically digital flashcards. AS: We had to say, “Oh, we need pronunciation here as well.” And that has nothing to do with the DITA elements I would use to support it; it’s helping the client communicate to me what success looks like, and like I said, I love using diagrams. I usually have two sets. I have a set for the structure, and then I have a set that I create that says, “Oh, here’s the DITA element,” and potentially the attributes I’m going to use to create and structure the information to meet the structure that they told me they needed. AP: Yeah. And it’s almost baby steps, starting out simple and then adding another layer on top of it, and that makes a great deal of sense to me. AS: And I use the same ones over and over. I actually have a toolkit that I sell; it is the toolkit that I use with my clients. So not everybody has the opportunity to bring in a consultant, but if you want the tools that I use, you can get them. AP: Absolutely. And before we wrap up, is there any one stumbling block that you can think of that really stands out based on your past experience, where you can give a simple piece of advice to get around it when you’re working on an IA project? AS: I think the biggest one is recognizing, first, that architecture is a separate discipline. And the second part of that is that you may have more than one architecture.
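The glossary unit sketched in the conversation above, with a term, a definition, an alternate form, and pronunciation, might map onto DITA’s base glossary markup roughly like this. The term is a made-up example, and carrying pronunciation in `<glossUsage>` is one possible convention, since base DITA has no dedicated pronunciation element:

```xml
<glossentry id="api">
  <glossterm>application programming interface</glossterm>
  <glossdef>A set of rules that lets one program request
    services from another.</glossdef>
  <glossBody>
    <!-- Alternate form: the acronym for the term -->
    <glossAlt>
      <glossAcronym>API</glossAcronym>
    </glossAlt>
    <!-- Hypothetical convention: pronunciation carried in a
         usage note so a flashcard app can surface it -->
    <glossUsage>Pronounced letter by letter: A-P-I.</glossUsage>
  </glossBody>
</glossentry>
```

This matches Amber’s two-diagram approach: first the plain-language structure (term, definition, alternate form, pronunciation), then the DITA elements that realize it.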
Most companies I see have multiple ways that they are producing their content now, and if we want to get to a management architecture we have to look at the input into that architecture and say, “How do we harmonize?” And I like the word harmonize because it allows me to express that we’re not making everything exactly the same, whereas if I said something like normalize it would imply massive change. AS: No, harmonize: I want everything to work together in one repository, or repositories that all have the same structure, and then we can look at the downstream implications. So for instance, if you’re doing a chatbot, and you also have self-guided troubleshooting, and you have basic user manuals and FAQs, we should be able to structure the content in the management IA and empower it with the correct metadata so that you can deliver that content in the way it needs to be delivered. Because for each of those platforms, which could be radically different if you think about the difference between an FAQ and a chatbot, that delivery IA is radically different. AS: And most folks have been thinking about it from the idea that, oh, we’re just going to push it out and it’s going to magically work on that platform. They need to understand that they will need two different IAs and take the effort to trace back from the delivery platform to the management IA, to understand when metadata specifically gets assigned to which units. Is it one unit, is it a group of units, is it based on a map? Whatever that is, and recognize that there might be times when metadata never makes it back to the source, that it may need to be managed in different places. And so this idea about metadata being used to power the content needs to be discussed in the context of multiple architectures.
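As one hedged illustration of assigning delivery metadata at the unit level, a DITA topic’s prolog can carry metadata that downstream platforms use to decide where the unit belongs. The property names and values here are invented for the example, not a standard vocabulary:

```xml
<task id="reset-password">
  <title>Reset your password</title>
  <prolog>
    <metadata>
      <!-- Hypothetical delivery-channel taxonomy: this unit feeds
           the chatbot, the FAQ, and the help center -->
      <othermeta name="delivery-channel" content="chatbot faq help-center"/>
      <othermeta name="troubleshooting-entry-point" content="login-failure"/>
    </metadata>
  </prolog>
  <taskbody>
    <steps>
      <step><cmd>Select Forgot password on the sign-in screen.</cmd></step>
    </steps>
  </taskbody>
</task>
```

Whether such metadata lives in the source, as here, or only in a delivery system is exactly the management-IA versus delivery-IA question raised above.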
AS: And I find that that’s an evolving conversation that once we talk about it that way a light bulb goes on for people, but I wasn’t having that conversation two years ago with people. And I should have been, but it just became clear to me over the last couple of years that that is where I can really help folks understand how they can power their content in new and better ways. And maybe even using their existing content that they have and they just add a new delivery channel, whether or not they have to actually go back and touch their source, or whether there’s an opportunity to power it from a different place. AP: That’s really good advice, I appreciate that. And I’ll be sure to include your website in the show notes so people can find you and continue this conversation with you. And with that, Amber, I want to thank you for your time, this has been a great conversation. AS: Well, thank you, Alan. You’ve given me an opportunity to talk about one of my favorite subjects, I love talking about architecture. AP: Well, we’re glad to do it, thanks again. Thank you for listening to the Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links. The post Understanding information architecture appeared first on Scriptorium.
25 minutes | 3 months ago
Finding the value when selling structure (podcast)
In episode 87 of The Content Strategy Experts podcast, Sarah O’Keefe and special guest Nenad Furtula of Bluestream talk about finding the value when selling structure. Why do so many tech pubs departments fail to get support for structured content, and what can we potentially do to change that? Related links: Bluestream Content Solutions Steps to structured content (podcast, part 1) Steps to structured content (podcast, part 2) Twitter handles: @sarahokeefe Transcript: Sarah O’Keefe: Welcome to the Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. In this episode, we talk about finding the value when selling structure with special guest Nenad Furtula of Bluestream. Why do so many tech pubs departments fail to get support for structured content and what can we potentially do to change that? Hi everyone. I’m Sarah O’Keefe from Scriptorium and I’m here with Nenad. Nenad, you’re over there in sunny Canada? Nenad Furtula: Thank you, Sarah. Always good to hear your voice and talk to you. I’m located in Vancouver, British Columbia. SO: Nenad, tell us a little about yourself and Bluestream. What brings you to the structured content conversation? NF: Of course. Yeah. My role at Bluestream is, I guess, one of the two managing partners, and I also manage all the business development and marketing activities when it comes to Bluestream. Bluestream has been around since 1997. We initially were an XML database company and shortly after that transitioned into content management. We’ve been doing content management for a very long time.
Around the time that we got into this business, about 2005, DITA came about, and so what we’ve done is we built a product called XDocs, which is a component content management system. For the past, I guess, 15 years, I’ve been promoting the value proposition of our flagship product. SO: Right. And you and I have had many conversations, at many conferences, with many drinks, about the industry, and it’s always interesting to hear what you think about it. So today I wanted to ask you specifically about what I think is your, and perhaps my, number one business problem: why is it so hard to sell structured content at the executive level? When we go in and we’re selling to potential clients, why is that so hard with the execs? NF: I guess the famous line is, to catch a gopher you have to think like a gopher; Bill Murray from Caddyshack. If you think about roles in an organization, executives have a role, and predominantly they are concerned with growth, business growth, and returning shareholder value, and sometimes stakeholder value as well, right? Classical product documentation, when we talk about structure for example, is generally seen as a low-level cost center. It is necessary when you have to release a product, but it’s not really at the forefront of the business thought. It does not generate revenue and it does not necessarily improve your organization’s image like marketing does, right? So that is a problem. SO: And it doesn’t bring in new leads, right? NF: Exactly, exactly. It’s a cost center, that’s the problem, right? It’s not a priority and it’s also a low-level cost center, meaning that the expenses are not overbearing, essentially, right? Just to give an example: when you’re talking about, say, Salesforce, they write that check every year, no problem, right? Whereas when you’re talking about a component content management system, that becomes a bit of an issue.
SO: Basically, we have to show that this type of content, product and technical content, does in fact add value to the business, or I guess maybe more accurately, that doing it better adds value to the business, right? NF: Well, we have to show where it adds value. I think that’s key, and we have to think about how it does that. Right? I think that I am guilty of this, as I think many of us were in the beginning. Let’s roll back 10 years or so: we were really focused on content reuse and how great it is for the documentation lifecycle, how it improves the processes and reduces publishing times. But executives don’t think about things like that. Right? The focus in the last five years, really, I think, has been to sell the value of information and also show where it brings the most value to the organization. SO: Yeah. I’ve been warning people that focusing on cost avoidance is pretty much a straight train to the land of commoditization. NF: Right. SO: Which we don’t want, actually. NF: Right. SO: What about DITA? Is there a different argument there or is it the same? NF: Well, it’s sort of worse, right? If you think about it, because, well, that’s the problem. Again, going back 10 years, we were telling the world how great DITA is, right? I think it frightens some people too, because it is great, it’s a wonderful standard. At the very essence of it, it’s just a technology that helps you deliver structured content, right? Executives care even less about it. But where I’ve found the DITA argument, in particular the standard argument, helpful is when you’re trying to mitigate risk. Right? Because the question inevitably comes up: we’re bringing in this new tool, and are we going to be vendor bound, right? Here we say, well look, when you’re going with something like DITA, which is a standard and not a proprietary schema, and there are a bunch of those out there, you are essentially mitigating risk. To me, that’s the most valid argument.
You can talk about there being a community, thought share and all that wonderful stuff, but it really comes down to: can I switch vendors, should I need to? And yes, you can, because you’re working with a standard. SO: Right, so you have risk mitigation, and then I’ve talked about it a little bit as an enabling layer, in that there are things that you want to be able to do with your structured content, and the people who built out DITA originally thought pretty carefully about what those things might be. There’s a lot of stuff in there that’s useful if you have the typical kind of structured content. Okay. We know that we can do some cost avoidance, some lower expenses, but we don’t really want to focus on that too much. What other kinds of value propositions do we have then? NF: Well, when it comes to the value proposition, it depends on the organization and it depends on the industry. We’ll get into the industry later on in this call, but the true value proposition in my mind has to show an ROI, right? We get asked for this all the time. In particular, in my line of work, when I’m working with procurement, when I’m working with technical documentation managers trying to sell the value proposition internally, it’s all about the ROI. The number one point when it comes to ROI is: is this going to have an impact on my revenue, right? If you can show that structure, or going toward structure, is going to impact your revenue, you have a pretty good argument. That’s a good starting point, right? And not everybody can show that. Not every industry is capable of showing that. NF: Now of course, the second point, as you mentioned, is the impact on expenses and reducing expenses. It is about lowering translation costs and making these departments more productive, if you would, right? That’s a big one.
A third point, pardon me, which is really difficult to quantify, is that through documentation you can enhance the end user’s experience with your product. Okay. That’s a very interesting point to make, because we’re no longer shipping 500-page PDFs. We’re shipping help centers that give you an answer to your question, right? Talk about enhancing the experience with the product: the answer you were looking for, it’s there, right? But then, like I said, it’s much more difficult to quantify. SO: Yeah. I think you’re right. We’ve run into some other things related to what you’re talking about. I don’t know where you put this, but regulatory compliance: making it easier to deliver the right content that your regulatory body requires, and doing it correctly the first time, means fewer holdups in your regulatory experience, right? Fewer calls from the regulators saying, “Hey, you didn’t do this,” or, “Hey, we’re not going to approve your product unless you give us X, Y, and Z.” You give them exactly what is required, accurately, the first time. You talked about risk earlier in a technology context. We talk a lot about risk mitigation as a value proposition: if you have a transparent, traceable, et cetera kind of process, you can reduce the number of mistakes you make in your content, right? And if you do make a mistake, you can fix it and be confident that it’ll get fixed everywhere, which potentially reduces your exposure from a product liability point of view, right? If you ship a possibly dangerous product, dangerous if used incorrectly, and you don’t provide good instructions, you’ve got some exposure there, so that’s a concern. NF: I agree. I agree. SO: Yeah. NF: That was the preamble to the question. The answer was: it really depends on the industry. SO: Mm-hmm (affirmative). NF: That’s what we’ve seen. I’m sure you’ve seen the same thing. Adoption of structure from these regulatory-driven industries was much quicker, right? Pharma, they jumped on this early on.
Medical device manufacturers, we’ve seen them adopt structure for that very reason early on. SO: Right. To your point, risk mitigation. Yep. NF: Risk mitigation. Exactly. I think that should have been a fourth point is industries driven by regulation. They just have to do it. SO: Well and I guess they recognize the value, right? Because they know what the consequences are if they don’t do it right. For a lot of other people, the consequences are kind of squishy. NF: They are. They are. SO: I did want to ask you about cost centers because you mentioned them and I sort of twitched because we have seen a pattern, especially recently, where you have a technical publication or information development group that actually charges their services back to the in-house business units. If I’m tech pubs or whatever they’re called, then every time I produce a document for a particular product line, I charge back my time or the team’s time, to that business unit. Oh, we spent 30 hours, we spent 100 hours on your document so you owe us 100 hours times our internal magic bogus rate. SO: What they’ve run into is that if they layer in something like structured content, or let’s say they’re sharing content, and so I write content for business unit A, but then I actually use that content again for business unit B. Well, business unit B pays eight minutes and business unit A pays three hours because that’s what it took me to write that piece of content, but then I reused it. If I have better efficiency, I charge back fewer hours which means the team gets less budget the following year and there’s no provision really to fund the infrastructure, to fund the build of structured content or the maintenance of the style sheets or anything like that. And this may be an unanswerable question, but I’m looking at this and saying, this doesn’t work. This cost center approach doesn’t work. NF: Well it sort of works for certain organizations and not for others. 
We’ve certainly seen this, especially in large organizations, but we have yet to run into, or I have yet to run into, a case where the technical documentation department has become so efficient that they are getting their budgets cut. That’s just my experience. I personally haven’t seen it. The other reason is that the demand for information is growing as well. There is more information, there are more product lines. Maybe that’s why I haven’t necessarily seen it myself, but certainly it could be a problem. SO: Yeah. It’s not common, but we’ve seen it a few times, and we keep saying, well, you have to account for the shared infrastructure somehow. I think the challenge is that when you move to structure, there’s more shared infrastructure and less hourly billing back, and that’s what you want, because more reuse equals lower translation costs and all the rest of it. You mentioned different industries have different arguments for structure, and we touched on regulatory and risk management and what that looks like. What are some of the other examples where a different industry or a different vertical might care about different things when they’re looking at structured content? NF: Yeah. I’m actually glad that we went through regulatory first, because of the two examples that I had in mind: I’m comparing, say, a classic software vendor to, say, someone like a heavy equipment manufacturer, right? Their arguments for structure are going to be different, we’ve found anyway. When you’re producing software manuals, and say you have a software product like we do and you need user manuals and such, basically it is a straight-up cost to the business to develop that. Okay? For example, just to start with, ignore the fact that your processes are going to be better while using structure and you’re going to be more efficient and all that, and your localization costs are going to be lower.
NF: What you really need to do is focus on information flow and figure out which recipients of that information get the most value. In the example of a software company, quite often we see these delivery platforms emerging, and that’s the argument, right? The argument is: we need to go to structure so that we can have a better delivery mechanism for our documentation, so that, for example, we can reduce the burden on our support organization. Okay? And voila, here is your delivery platform, right? What’s interesting about that argument is we’ve seen a lot of software companies actually sell structure successfully to management, and of course now they’re working around the fact that they can’t get money, get budget, for a tool like a CCMS. NF: But then, it’ll be much easier for them to sell a delivery platform because it’s outward facing. Right. And the whole argument there, from an information flow perspective, is: hey, look at my end user. They’re interacting with this documentation, with our product. Again, you’re enhancing the end user’s experience with the product and you’re reducing the burden on support. Right? And that works very well for, say, a software manufacturer or software vendor, for example. Whereas if we take a look at someone like heavy equipment manufacturing, and Bluestream has really niched into that vertical quite a bit over the years, they have a completely different requirement, and their requirement is much more sophisticated when it comes to delivering information for the use of this equipment. Right? NF: Well, first of all, the equipment has a long lifespan, right? And this equipment needs to be serviced, and a fair portion of a company’s revenue is associated with servicing that equipment, as well as selling spare parts, if you would, right?
So when you look at that information flow, when you think about who the recipients of this information that really matter are, well, they are the service personnel, either third-party or internal, who have to service these machines for many, many years. And of course, they have to sell parts. And so those aftermarket parts, I should say, and that service become a big part of the revenue, right? The company’s revenue story. And so when you go into a situation like that, what you’re going to talk about is increasing the sale of spare parts, and that has all the attention of management. NF: So I’ll give you an example. We’re dealing with a very large train manufacturer; they’re actually worldwide. And I was looking at the business case that they presented, and we’ve been dealing with this customer for about four years, but I remember the case they presented to management: 95% of the business case was focused on increasing the sale of spare parts, whereas 5% of the business case focused on basically increasing the productivity of some 70-plus technical writers. Okay. And that says it all, right? Where’s the focus? Well, the focus is in fulfillment in that particular case. Right? So it is very different from what we see in the example that I gave earlier, like the software industry. Right? And so the focus has to really be adjusted to the industry that you’re selling into, or the industry that you’re in, essentially. SO: Yeah, that’s interesting. And I think we’ve seen that as well: on the software side, with some exceptions, but in general, the focus is on cost savings and also on velocity, time to market. Because software gets distributed electronically. This sounds dumb, but some of us are old enough to remember the literal: we have a contract and our client is required to get this piece of software by close of business on December 1st.
And if you miss the 5:00 PM FedEx pickup at the office, that means you have a 9:00 PM cutoff at the airport. And if you miss that, you’re putting somebody on a plane at 6:00 AM to fly them to California, holding a CD in their lap, so that they can walk into the business and deliver the software on time. Right? SO: That’s how it worked in the olden days. And now you obviously distribute via a patch or an electronic download or whatever, and that entire shipping process went away. It took content a long time to catch up to that distribution mechanism. Eventually, we had PDF and we could distribute electronically. But at first it was kind of a big problem. And so software is interested in speed, velocity, time to market, cost savings. And then, as you said, manufacturing really has more of a two-part sale, right? You sell the core product, but then there’s this long lifespan of maintenance and updates and service and spare parts. It’s just a much, much different chain. We’re also seeing an awful lot of companies getting into fleet management and service management. NF: That’s right. SO: So they actually go from being a product company, like a manufacturing company, to being also a software company, because they’ve got the database of all of the equipment that they’ve sold you. Think of airlines: when is this plane due for maintenance? Keeping track of that is actually a service. So now this distinction between product and service is starting to blend. NF: Well, you know who defined that initially? Believe it or not, it was Xerox. Xerox has been a big partner of ours for many years, and everybody thought Xerox was about copiers, right? Yeah, sure, they sold copiers, but the bulk of their revenue came from servicing those copiers. Xerox actually is not a products company, it’s a services company. Right? So it’s true.
SO: Then what about organizations where the content is in fact the product? NF: Yeah. We see a lot of folks generating learning content, training content in particular. We have a number of customers in those fields and, interestingly enough, they caught on to XML early on. Okay? And when I say early on, probably about 15 to 20 years ago. Right? And so they’ve paid attention to this stuff and they, for the most part, built their own systems. That’s what we’re seeing. A lot of proprietary systems, a lot of proprietary XML. And so for them, getting into structure was much easier, is much easier I should say. And for them, embracing something like DITA makes sense. The challenge of course becomes: how easy is it to use? How easy is it to author? NF: And this is where DITA maybe did a disservice to some of us, because of how it’s been presented: it’s so powerful and yet so complex. In fact, I just had a conversation last week with someone who said, “Gosh, we can’t do this. It’s too complicated. We’re going to go with something different.” Right? And so anyhow, I know there’s a discussion around Lightweight DITA and all that wonderful stuff. But those organizations who sell content as their primary business, they’re embracing this and they really are coming on board. The other one is insurance. Sarah, we’re seeing insurance companies. I mean, structure makes a lot of sense for insurance. We’re seeing airlines embrace this. Of course, that’s a regulated industry. We really are seeing an uptake in structured content. There’s no question about it. The last few years have been, in my mind, changing. SO: Yeah, which sounds like some good news for all of us. So, well, thank you. I appreciate this, because I think there’s a lot of food for thought in here, and obviously you’ve not just thought about this, but had to think about this in the course of your business.
And I think it’s helpful to me to chew through all these things and contemplate what they’re like. I’m going to, with that, wrap this one up. Thank you, Nenad. I appreciate it as always. NF: And thank you, Sarah. Thank you for having us on. Like I said, this topic is near and dear to us and should anyone want to discuss further, I’m sure they can reach us at www.bluestream.com. SO: Yep. And we will drop that in the show notes, along with some other contact information so that you know where to find everybody. Thank you to our audience for listening to the Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for the relevant links. The post Finding the value when selling structure (podcast) appeared first on Scriptorium.
15 minutes | 4 months ago
Steps to structured content (podcast, part 2)
In episode 86 of The Content Strategy Experts podcast, Gretyl Kinsey and Bill Swallow continue their discussion about the steps to structure, how to move from unstructured content to structure, and what each level of maturity looks like. “Step five is when you’re thinking even your structure is structured. You’re really thinking about how to take this to the highest possible level, how to get the most out of your automation, and how to make sure that the way you’re delivering your content is maximum efficiency.” – Gretyl Kinsey Related links: Steps to structured content (podcast, part 1) The challenge of digital transformation Twitter handles: @gretylkinsey @billswallow Transcript: GK: Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. In this episode, we talk about the steps to structure, how to move from unstructured content to structure, and what each level of maturity looks like. This is part two of a two-part podcast. Hello, and welcome. I’m Gretyl Kinsey. BS: And I’m Bill Swallow. GK: And today we’re continuing our discussion about the steps to structure. We previously covered steps one and two, which are the unstructured phases, and step three, which is getting to structure. Now there’s step four, which is customized or specialized structure. So could you tell us a little bit about what that means compared to the baseline structure? BS: Sure. So once you have everything in your structured format, chances are you’re going to start finding little bits of difference or dissonance between the type of content that you’re producing and what the structure will allow you to use.
You may say, “Well, we have this very specific type of paragraph, or this very specific block of content that doesn’t really fit into the structure in its own native form.” We want to be able to handle it, and we want to call it something unique. We want to be able to structure it uniquely as well, yet still use it within the framework of everything else we’re doing. So this act of specialization or customization is kind of the next step, because now you’re looking at the structure and saying, “This is great, but we can do more with this.” So you’re fine-tuning and tailoring things a bit more so that you can label your content more appropriately for your needs and handle that content specifically for its intended uses. GK: Yeah. Absolutely. And I think this is an area where we start to really see a lot more work on the kind of metadata and taxonomy side of things because that’s when you start thinking, “Okay. Now that everything actually is structured, now we can think about how this content needs to be organized, how it needs to be sorted and filtered, how both our authors and our customers need to be able to search for the particular information that they need within this content set, how we might need to do something like personalized delivery.” So once you kind of have that foundation laid down with just the basics of structure, that’s where you really start to think about: Okay, how do we want to customize our metadata? And how do we want to build out some sort of a taxonomy that we can support with metadata so that the content is not just tagged in structure, but it’s also organized? And there is information about the content itself being captured in a way that makes it a lot more flexible. BS: Right.
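For readers who have not seen what DITA specialization looks like in practice, here is a minimal, hypothetical sketch. The element name labProcedure and its content are invented for illustration; a full specialization also requires DTD or schema work, which this instance markup does not show. In DITA, a specialized element carries a class attribute recording its ancestry, so generic processors can fall back to the base type:

```xml
<!-- Hypothetical specialized element "labProcedure", derived from the
     standard DITA task topic. The class attribute records the full
     specialization ancestry, so any tool that only understands the
     base task type can still process this content as a task. -->
<labProcedure id="calibrate-sensor"
              class="- topic/topic task/task labProcedure/labProcedure ">
  <title>Calibrate the sensor</title>
</labProcedure>
```

This is the mechanism behind “call it something unique yet still use it within the framework of everything else”: the unique name is local, while the ancestry keeps the content interoperable.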
And what’s really driving a lot of this is not only the different types of content that a company might produce, but it’s also starting to hit that personalization note with people and being able to drive content dynamically to them that is of their immediate interest, rather than generic content that might be suitable for any audience. GK: Yeah. So this is where, if you’ve got that structure in place and you’ve started to do those customizations, you can do some kind of dynamic delivery. So your users might sign into a portal, and it can pick up information about that user based on their login, and then feed them the content that they need without them having to dig through and search for it. So that really takes your use of content to a higher level than you were able to reach before, even though this is still a structured step; it’s just really enhancing it and taking it to the next level. BS: Yep. And that next level beyond that, or that next step, would be step five. Once you have everything in step four done, which is all of your customization, step five is building upon that even further and implementing a lot more additional dynamic capabilities for your content. GK: Yeah. So step five, we’re kind of thinking of this as even your structure is structured, so you’re really thinking about how to take this to the highest possible level, how to get the most out of your automation, how to really make sure that the way you’re delivering your content is maximum efficiency. And this is what I think of as the differentiating factor between simply moving to structure versus true digital transformation of content. I know that’s something that we’ve talked about in some of our other webcasts, podcasts, and posts; this idea of digital transformation has been an industry discussion as well.
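One concrete DITA mechanism behind this kind of audience-specific delivery is conditional processing: content is tagged with metadata attributes, and a DITAVAL filter file decides what each audience sees at publish time. A minimal sketch, with illustrative attribute values:

```xml
<!-- In the topic: two paragraphs flagged for different audiences -->
<p audience="admin">Administrators: configure the service account first.</p>
<p audience="end-user">Sign in with your usual account to get started.</p>
```

```xml
<!-- DITAVAL filter applied at publish time to build the end-user output -->
<val>
  <prop att="audience" val="admin" action="exclude"/>
  <prop att="audience" val="end-user" action="include"/>
</val>
```

Swapping in a different DITAVAL file produces a different deliverable from the same source, which is what makes portal-style personalized delivery feasible without duplicating content.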
But this is where we tend to think of truly transformed content: content that is a lot more personalized, where you’re really making the most out of your automation and your efficiencies. And the content itself is not just one single digital delivery, but something that a user can customize, mix and match; it can be really, truly personalized. GK: So this is where you’re really, really looking at: What is the most that we can do with structured content beyond even steps three and four? How can we continue to take it to the next level and make sure that it keeps on scaling as the company grows? BS: Yep. Step five tends to be incredibly specific from implementation to implementation, so one company will be doing things one way in a structured environment. Another company might be using the same exact underlying structured framework, but be organizing their content and doing completely different things with it. Essentially every case that we’re seeing of a company that is at, or looking to move to, step five in its structured progress is a unique engagement. It’s a unique way of looking at content based on what that company specifically wants to do with its content. GK: Yeah. And this is really where, if you’ve got most of your content problems solved with structure, but then you just have a few of these edge cases and unique requirements where some additional customization would really take it to that next level, that’s what we consider for that step five. And as you said, it is unique from company to company. But it’s something that’s also important to consider when you’re still in steps three or four: thinking about what your future requirements might be, and making sure that you don’t lock yourself out of that.
So let’s say you just get to step three, you’ve just moved to structure, and you sort of know what your five-year plan is; maybe not necessarily specifically, but you have some ideas of things you want to be able to do with content in the future. It’s always important to keep that on your roadmap and keep an eye on it, because you don’t want to discover, when you do get to that maturity point of being at step five, that you’ve done something with your structure earlier that then requires a massive amount of cleanup or lots of tedious fixes here and there to get to that point. GK: And I know we’ve talked about this on at least one of our other episodes: how it’s really important to plan, to be very careful, and to spend a lot of time on that planning. I think especially when you’re going from step three to step four, and you’re thinking more about your metadata and your taxonomy, that has a lot of implications when you get to something like step five as well, when you’re really maximizing your content potential and your efficiency. Just make sure that when you are building those structures, and when you’re thinking about a taxonomy and how you want to organize your content, you don’t lock yourself out of those future requirements. BS: Yeah. You always want to keep some options open there because things will continue to shift and change, especially as your requirements change, or if you acquire another company, or are acquired by another company. You want that nimbleness still built in, and room for improvement or room for change still available, and not just nail everything down and call it done. GK: Absolutely. So on that note, what are some tips for moving to structure? If you are at maybe a step one or a step two, how do you eventually get all the way to step five or close to it? And how do you do that as efficiently as possible?
BS: The first step is to wrap your head around the strategy for your content: where it’s going to go, how you’re going to author content, what your future state looks like. It’s a lot of the things that we’ve been talking about, not just in this episode but in many of our podcasts: building that content strategy that gets you from where you are to where you want to be. Make sure that you have some kind of roadmap or framework for each of those steps that you want to take, so that you understand the scope of work that is going to be required to move from one step to the next, and have some criteria so that you can measure what done looks like, and whether you’ve accomplished the things that you wanted to get done in that stage. So not just: Are you done, but is it working? GK: Yeah. And I think it’s also really important, when you are coming up with that strategy, to build in some kind of backup or contingency plans for when things don’t always go the way you think they will. And that’s why it’s really important to look further out toward the future. So if you’re at a step one or two right now, go ahead and make your ideal plan for step five, but know that there’s going to have to be some flexibility in how you might get there. So you may want to have a few backup options of things that you would achieve in steps three and four before you get to that ultimate goal. GK: Another tip that I want to bring up is just, like we said, when you go from that second step to the third, where you are cutting over from unstructured to structured, it’s really important to come up with a conversion strategy, because that’s where you are going to be getting all of your content out of one format into another and migrating it into whatever kind of tools or systems are going to be managing that content.
And that’s why we really emphasize having a step two and not just skipping from step one to step three, because that really helps improve that conversion strategy. And things to think about at that stage are, one, how much content cleanup has to get done on the front end versus the back end, so pre- versus post-conversion. GK: And what can you do to minimize the amount of human intervention or manual cleanup that you’ll have to do? Because the more content you have, the more time it’s going to take to convert everything, and so the better off you’ll be if you can automate it. And that’s why having as clean a content set as possible really helps with that conversion strategy. So before you convert everything, it’s really important to think about what’s highest priority, what state your content is in, and what kind of cleanup you’re going to have to do on either end of that conversion. BS: Yep. And once you get to that step three, you no longer really have a conversion path that you need to worry about, but you need an exit strategy going forward, no matter whether you’re at step three or step five. You need to have an exit strategy for your content if you do need to change tools again, so keep a lot of that in mind when you’re selecting things. It’s not necessarily that one tool is going to be bad and another one is going to be better from an exit strategy point of view. But you need to understand how these new tools and new systems work with your content, so that if you do need to move from tool A to tool B, you know how you can export the content, and what certain handling capabilities from the old tool need to be redone, or somehow otherwise implemented, in tool B. And have that in mind going forward. Moving to structure generally allows you to have some degree of portability with your content. But again, your mileage may vary depending on the tool choices you make and the types of structure you’re looking at.
GK: Like you said, having an exit strategy is so important because, as we’ve mentioned in a lot of our other discussions in this episode, things do change. When you go through all of these updates to your content process over time, things change. And when you are moving through those structured steps, going from step three to step five, a lot of your decisions are going to be driven by the changes that happen in your organization, and the new requirements and the new demands that you’re going to face over time. So you have to think about how your processes need to scale up to meet all of the changes that are going to come, and use that as your guide. Update the roadmap that you come up with at the beginning as you get new information, and constantly keep your eye on that so that you can ultimately move through from step three, to four, to five over time, based on what’s happening at your company. BS: And also keep in mind, when jumping from any given step to the next, you need to make sure that you have a clear understanding of the benefits that you are going to get by making these improvements in order to get buy-in, not only from the people who have the money that you will need to purchase new tools or to provide training to your team, but also to get your team to buy into the idea of: we’re going to work differently, and this is why it’s going to help you going forward. GK: Yeah. That benefit is important because you don’t want to just move from, let’s say, step four to step five without a good reason for it. You have to be able to explain: here’s why we’re doing this, and here is how it’s going to improve content production going forward. And with that, I think we can go ahead and wrap up. So thank you so much, Bill. BS: Thank you. GK: And thank you for listening to The Content Strategy Experts Podcast, brought to you by Scriptorium.
For more information, visit scriptorium.com or check the show notes for relevant links. The post Steps to structured content (podcast, part 2) appeared first on Scriptorium.
17 minutes | 4 months ago
Steps to structured content (podcast, part 1)
In episode 85 of The Content Strategy Experts podcast, Gretyl Kinsey and Bill Swallow talk about the steps to structure, how to move from unstructured content to structure, and what each level of maturity looks like. “It’s important to keep in mind when you move from step two to step three that your authoring tools may change. The writers might have gotten used to working with one set of tools in steps one and two. But as you move to structure, the tools that you’re using for unstructured content may not support the underlying framework for the structure that you’re moving forward with.” – Bill Swallow Related links: Moving to structured content: Expectations vs. reality (podcast) The challenge of digital transformation Twitter handles: @gretylkinsey @billswallow Transcript: Gretyl Kinsey: Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about the steps to structure, how to move from unstructured content to structure, and what each level of maturity looks like. This is part one of a two-part podcast. GK: Hello and welcome everyone. I am Gretyl Kinsey. Bill Swallow: And I’m Bill Swallow. GK: Today we’re going to be talking about structured content and all the different steps it takes to get there. Let’s just go ahead and dive right into what is the first step or the baseline when we’re talking about moving from unstructured content to structured. BS: Well, I guess the very first step that you’re on is that you have content. GK: Yes. BS: Congratulations. You have content. It exists. It’s probably written well. It is probably being authored by a bunch of different people or authored by people using a variety of different tools. 
Basically there’s no general rhyme or reason as to how the content is being produced, but it looks good, it serves its purpose, it’s published, it’s out there, people are reading it, but there’s generally no underlying structure. You might be using Microsoft Word and various other tools, no actual templates involved; all the formatting is kind of ad hoc and all hand produced. GK: Yes. I think this is what we consider the baseline or the bare minimum when it comes to content. It’s there, it’s well-written, it’s usable, and you have it and it’s working, but you’re not really able to leverage it necessarily and do a lot more sophisticated things with it, and so you may have some limitations if you’re at step one. For example, with how you publish your content: if everything is very manual in the process of creating it, then that’s probably true on the publishing side as well. So you’re not really getting mass automation there. You may also be limited in your ability to share content across maybe different departments, different types of documents. A lot of times when we see companies who are in what we would consider step one, they tend to be in silos with unstructured content, and so you’ve got different types of unstructured content all over the place and none of it is really connected or working together. BS: Right. With regard to being able to share the content, there’s also that issue of copy-paste that we end up seeing a lot. This happens a lot in this first step: if you need to share content or you need to reproduce the same content in multiple formats, or in multiple documents, there’s a copy and paste going on, which then just adds to the whole snowballing effect of trying to manage your content. If you need to make an update, you then have to find where you’ve copied and pasted that information throughout all of the documents and deliverables that you’ve produced.
GK: Yeah, and sometimes this can really have a snowball effect. If you do have different departments that produce content, let’s say maybe you don’t have as much of a problem with separate silos, but you do have this issue where there’s no connectivity. So let’s say you’ve got some folks over in training and they need to reference information from the official technical documentation in their training materials, and they go over and don’t necessarily grab the latest and greatest version, but they copy and paste some of the documentation from somewhere, and then that gets into the training materials. And so there’s not really any sense of version control. There’s not really an enterprise-level sense of how the content is being used and maintained. That can really become a big maintenance issue over time as you need to grow and scale. BS: Yep. With regard to growing and scaling, and with regard to leaving this first step behind, what is, I guess, the next level that we’re going to: step two? GK: Step two is when you’re using templates and a consistent style in your content. This is where, for example, if you are working in something like Microsoft Word, FrameMaker, or InDesign, you actually have templates set up. So you’re not just ad hoc creating different styles all over the place. You’ve got something that can not necessarily enforce that style, but at least give you a guideline to work within and some parameters to use as a starting point. That can really help improve the consistency of your content and can make sure that everything follows a more, not exactly structured, but approaching-structured pattern. This is something that I tend to refer to as implied structure because it’s not actual enforced structure on your content yet, but it’s kind of that intermediate step to getting there.
BS: With that implied structure, there’s also usually a style guide that goes along with it, which further helps people follow the same structural composition when they’re authoring. So it’s not just the templates that are in place, so that they always use heading one for the first-level heading in a document or use a particular note style if they need to produce a note in their documentation; there is also a style guide that says, this is how the content should be arranged. Not only going through and saying these are what all the different styles afford, but this is generally how you approach building documentation. This is the type of content that you want to put in this type of section in whatever you’re writing. GK: Yeah, absolutely. I’ve seen some company style guides also address things like branding consistency. So if you do have a lot of different departments creating content, there is something that says here is the logo you always use, here is the official way that you refer to the company, here is the official list of product names, that sort of thing, so that there’s not an inconsistency there that just makes your company look unprofessional. We also see it sometimes if your company is doing any kind of localization: if your content is being translated and there are particular things to avoid, or ways that you want to phrase things that help make translation easier, that can be included in a style guide as well. BS: Yep. Really, it also comes down to that level of organization of content within the documents you’re putting together. So if you’re putting together training materials or some kind of repair guide or something that’s very procedural, you generally want to have a section that says, “Okay, we’re going to start with a heading. We’re going to introduce the topic using these types of paragraphs.
Then we’re going to break into a subheading and perhaps give a list of all the parts required, if you’re doing some maintenance, or all of the things you need in order to complete a particular procedure. And then jump into the procedure itself, perhaps with another heading or with some other section delineation there.” BS: That way the style guide allows the writers to understand, when they’re going to write something for a particular audience, that they have this structure in place that they can follow. Again, it’s an implied structure. There are no set rules enforcing it, other than the style guide and whoever enforces that coming down on the writers and saying, “No, you must do it this way.” It at least gives you a starting point to be able to start making your content look and feel the same, regardless of who’s authoring it, regardless of what tools they’re using to do it. GK: I think that’s a really important foundation to get in place before you move on to step three. What is step three in this process? BS: Step three is actually using structure. So it’s being able to identify that there is a need for this level of consistency and this level of rules, and adopting a framework that builds those controls in. So by structure, we’re talking something like XML or DITA, which is a flavor of XML, or SGML, that’s an old-school one that’s still around to some degree, but it’s essentially a technological framework that says, “Here are all of the types of content that you have, and this is how they all play together. This is where they’re allowed. This is where they’re not allowed, and this is how they all flow together as well.” GK: Yeah. So this going from step two to step three is really the break point between unstructured content and structured content, or between that implied structure we talked about and an actual structure.
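As a concrete illustration of that “technological framework,” here is a minimal DITA concept topic (the topic id and text are invented). Unlike a Word template, the DTD actually enforces the rules: a validating editor will refuse markup that the content model does not allow.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
<concept id="widget-overview">
  <title>Widget overview</title>
  <conbody>
    <p>The widget connects the frame to the housing.</p>
    <!-- The DTD defines where each element is allowed; for example,
         a second <title> inside <conbody> would fail validation. -->
  </conbody>
</concept>
```

This is the difference between implied structure (a style guide someone must enforce) and actual structure (rules the tooling enforces automatically).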
I think that’s why it’s so important that if you are going to move out of your unstructured content and get into true structure, you do have that intermediary step from step one to step two, because if you try to go straight from step one into step three, it’s probably not going to be a very clean migration over into structure. So if you’ve already laid that groundwork and you have that implied structure in place in step two, it puts you in a much better position to go on to step three. BS: Yep. Not only do you have the content aligned so that you can convert it to some kind of structured format, it makes that conversion process a lot easier. If you have step two in place and you have these solid templates that you use, and you have this consistent writing format that you’re using, you can automate that process to some degree, if not completely, to get it to a structured format. But it’s also important not to skip step two because you want your authors to be able to acclimate to writing in a structured format. If they’re used to doing whatever they would like as long as the end product looks good and reads well, they’re not going to come around to the idea of authoring in a structured environment very willingly. GK: Yeah. This is, I think, the biggest challenge that we do see when a company goes from unstructured content to structured content: that big mind shift that has to happen. That’s why I think it’s important to have that step two, so that people get accustomed to working in something that’s like a structure, even if it’s not a programmatically enforced actual structure, so that the mind shift does not have to be as big, because that is where you see a lot of resistance to change that can just really get in the way of your progress. BS: Yep. It’s important to keep in mind when you move from step two to step three that your tools may change, your authoring tools.
The writers might have gotten used to working with one set of tools in steps one and two, where they were unstructured but perhaps following a style guide, perhaps using different templates, but as you move to structure, the tools that you’re using for unstructured content may not support the underlying framework for the structure that you’re moving forward with. BS: Often we see a little bit of reluctance among the authors to move toward structure because the tool set is going to change. What they’ve been accustomed to using, perhaps for many years, they need to abandon, and they need to adopt a new tool with a new user interface, with a new underlying file format that they are just not accustomed to. Things may look a little strange, especially when you’re moving to structure using something like XML, which doesn’t necessarily have formatting applied to the content itself. They’re not accustomed to seeing a different representation of what they’re authoring than what will be delivered to other people. So what they’re authoring in, and what it looks like to them, is not what it’s going to look like to the person who’s reading the finished, produced deliverable. BS: That’s a little jarring for some people. A lot of care needs to go into making sure your team is aware of these changes and that they have the training and the support necessary to make that leap. GK: Yeah, absolutely. I think it’s a really intimidating thing because suddenly you’re going from, like you said, something where you can actually see what the finished product will look like as you’re working, to something where you really have no idea. If you are moving to structure for the purpose of automating your publishing processes, for example, then you’re going to have one tool for authoring, you’re going to most likely have some other tool or suite of tools for content management, and then another tool or suite of tools for publishing, and all of those pieces are separate.
So if you are used to everything being in one tool together, where you write everything, you review everything, and then you just export it directly to publish from that same tool, and then suddenly you’re in this very different framework, it is a shift in not only the tools themselves, but how you work. GK: It’s really, really important to make sure that nobody feels like their concerns fall by the wayside or that they’re getting left behind, but that instead they are supported, because really there are a lot of benefits to this. I think that’s the main thing: convincing people that their lives will be so much easier if they’re not dealing with all of those problems we talked about earlier, with copying and pasting, not knowing where your content lives, and not knowing what version is up to date. Going to structure can really help fix all of that. But it is that big change, and you have to get people over that hurdle. BS: Right. If they’re accustomed to producing multiple different types of deliverables, for example, a PDF and some HTML from a particular content source, it’s going to make their lives a lot easier on the publishing side, because that can be done automatically. At that point, you’re really removing the writer from the process of publishing, and their job is to make sure the content is structured appropriately and written correctly. At that point, automation takes it to the publishing stage. GK: Yeah. Another thing that you get at this stage that I think is important to call out is that you get to leverage smart reuse. So instead of copying and pasting information, instead of finding workarounds to share it, you can actually have a single source of content that gets used in multiple places. That again is another shift in mindset, right? But that’s also a major benefit that you get out of going to structure. That’s something, again, that should be a major part of training for writers.
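The smart reuse Gretyl describes is typically done in DITA with the conref (content reference) mechanism: instead of pasting a copy, a topic points at the single source. A minimal sketch, with invented file and id names:

```xml
<!-- Single source, in a shared "warehouse" topic
     (file: warnings.dita, topic id: warnings) -->
<note id="hot-surface" type="warning">
  Allow the unit to cool completely before servicing.
</note>

<!-- Any other topic reuses it by reference; an update to the source
     flows to every deliverable at the next publish -->
<note conref="warnings.dita#warnings/hot-surface"/>
```

Because every deliverable resolves the reference at publish time, the copy-paste-then-hunt-for-updates problem described earlier in this episode goes away.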
GK: On a lot of the client projects I’ve worked on, we end up doing a split where we start with basic structured authoring training, and then we usually do a separate training session or series of sessions specifically on reuse for each company because each organization is going to have its own reuse strategy and its own reuse requirements. Being able to leverage that finally is a really powerful thing, and it’s important to have that as part of the training that you do to support the authors. BS: And of course, once you hit the structured stage, there’s nowhere else to go. Step three is the final step, right? GK: Oh no. There’s much more. We will be covering that in part two of this podcast. For now, thank you so much, Bill. BS: Thank you. GK: Thank you for listening to The Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links. The post Steps to structured content (podcast, part 1) appeared first on Scriptorium.
27 minutes | 5 months ago
The personalization paradox (podcast)
In episode 84 of The Content Strategy Experts podcast, Sarah O’Keefe talks with Val Swisher of Content Rules about why companies fail and how to succeed at delivering personalized experiences at scale. “It all has to be completely standardized in order to be successful. There have to be small, individual, standardized chunks of content that are devoid of format that can be mixed and matched. Then the output can be personalized to the person who asked for it and sent to them at that moment in time.” —Val Swisher Related links: Preorder The Personalization Paradox Twitter handles: @sarahokeefe @valswisher Transcript: Sarah O’Keefe: Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk with special guest Val Swisher of Content Rules about why companies fail, which seems terrifying, and also how to succeed at delivering personalized experiences at scale. And I imagine you’re going to tell us those two things are related, like do one to avoid the other. SO: So, hi, my name is Sarah O’Keefe and I’m here with my special guest Val Swisher, who is the CEO of Content Rules. Val and I have some common affinities for a variety of causes and some needlework and some other fun stuff like that. And we run similar businesses so we actually talk quite often. Today, we’re going to attempt to distill that into a useful podcast for you. So wish us luck. Val, hi. Val Swisher: Hey Sarah, how are you? SO: I’m good. How are you doing over there? VS: I am doing just fine here. SO: Excellent. VS: It’s a new day. SO: It is a new day. For context, we are recording this on November 9th? VS: 9th. SO: 9th in 2020 so you can take that away for whatever you want. But Val, tell us a little bit about Content Rules and what you do over there. VS: Well, we do a lot of similar things to Scriptorium, don’t we? 
Since you and I are in similar businesses. So as you said, I’m the CEO of Content Rules, and I started the company in 1994, and we do a variety of things that are all related to content. So we develop content with contract writers and editors and course developers and all those kinds of folks. We do a lot of content strategy work, helping customers move from an unstructured environment to a structured environment, or helping customers with their global content strategy and how to go global, what they need to do to go global. And we also help customers optimize their content using special software that will allow them to program in their style guides and terminology, and make sure that their content is as good as it can be. So those are all of our different service lines. SO: Yeah. And you’re right, there is a good bit of overlap. Although the funny thing I think is that we don’t actually see that much customer overlap, which is probably why we manage to get along, which is helpful. VS: Undoubtedly. It is interesting though. SO: Do you have a book you’re working on? VS: I do. I do. I am working on my fourth book. This book is titled The Personalization Paradox: Why Companies Fail and How to Succeed at Delivering Personalized Experiences at Scale. SO: Okay, so let’s start with failure. It’s 2020 so I feel like that’s where we need to start. Why are companies failing at this? VS: Okay. So there are a few reasons that companies are failing. The first thing is that they focus on the wrong place. Companies have spent years focusing on the delivery of personalized experiences, the delivery mechanisms of content. They’ve spent a lot of time, a lot of money, all around how they’re going to deliver content. And that’s the wrong place to start; they need to start with the content. And when you start at the end, rather than the beginning, you’re kind of setting yourself up to fail. So that’s one reason.
SO: So you’re saying they should start at the very beginning, and that’s a very good place to start? VS: Indeed. I could break into song right now. SO: Okay. VS: Yes. So starting with the content is the most important thing you can do. It doesn’t matter what delivery mechanisms you have; if you don’t have your content set up to deliver personalized experiences, it’s not going to work. So that’s the first problem. VS: The second reason companies fail is that they are into the new, great, bright, shiny object. So they keep buying tools, and they don’t think about it before they buy tools. They’re just like, “Oh, let’s buy this tool. This’ll do it. Oh, let’s buy that tool. That’ll do it.” And I have a new saying: if you take the same crappy content and put it into your new expensive tool, you will end up with expensive crappy content. SO: That seems accurate. VS: So once again, we are starting at the content. And then the third reason is the same old silos that we’ve always had. I mean, we’ve been talking about silos for decades and decades, and if you really want to deliver personalized experiences at scale, you’re really going to need to play well with each other. This silo thing gets more and more difficult. So those are the reasons. SO: So those are the three. Okay, so what are we talking about here? When we talk about personalization, what does that mean? What is a personalized experience? VS: So personalization is when we deliver the right content to the right person at the right time on the right device in the language of their choice. Some people refer to it as the Amazon experience. When I log into Amazon, boy, they know me really well. They show me everything I want to buy right now. They’re like in my brain, “Oh, Val, she likes shoes, we’re going to show her these boots,” this sort of thing.
VS: More and more, we’re coming to expect the content that we receive from a company to match what we need, rather than having to go hunt for it. In fact, I was talking to someone over the weekend about this, and he was telling me how frustrating it is when he goes out to a particular financial site that actually has all his information. Rather than just showing him what he needs, even though they know what funds he has and all of that, they make him search for stuff nonstop. And he’s like, “They know all about me, why am I putting this information in? Why can’t they just show me what I need?” SO: That would be nice. I actually saw an example of this that I thought was fantastic, and it was a credit card company, believe it or not. And this was so stunning because they did the right thing. My mind was blown. So what happened was, and this was of course in the before times, I had bought a plane ticket because I was going somewhere, and I went on to the credit card website to do something and was looking at my list of transactions, and there was the charge for the airline, right? And underneath it, it said, essentially, “Hey, you’re traveling overseas. Would you like to set up a travel alert?” And I thought, well, that’s pretty good. SO: Now, I’ve since seen a different version of this, where I actually got an email that said, “Hey, we noticed you bought a plane ticket, so we automatically set the travel alert for the place you’re going,” which was actually even better. But I was stunned because it was so unusual. Normally you have to dig through 18,000 menus to find the travel notification. Okay, in the olden days, children, we used to do this thing called getting on airplanes, and we would go places. We would leave our house and go to this big building, and then we would get on the small tube in the sky and go places, yes. So, anyway, sorry, bad example right now. So personalization really just means deliver reasonable information, right?
I mean, is it fair to say, you’re not really talking about, it doesn’t have to be that personalized, it doesn’t have to be, “Hey Val, here’s your stuff.” VS: It’s a really, really good point. It’s very interesting you should even talk about that. When we’re figuring out how to talk to the customer, we need to be super careful about how we do that. It is so contrived when it says “Dear blank,” and then they use the wrong name, or it says “Dear [first name]” because something’s screwed up. It really just means, give me what I need. Honestly, I don’t care if you know my name, as long as you give me what I need. VS: We’ve been working on making it easy to find content for hundreds and hundreds of years, literally. In fact, I was doing some research, and back in the first century there was a man named Pliny the Elder, not to be confused with the beer from the Russian River Brewing Company called Pliny the Elder. There was a guy called Pliny the Elder, and he wrote a 37-volume work, it was like an encyclopedia at the time, of the natural world. And book one was an index to the other 36 books. This was in the first century. VS: So we’ve tried everything as we’ve gotten more and more technologically advanced. We have the card catalog for libraries, and we’ve had indexes and tables of contents and lists of figures and lists of tables, navigation on a website, navigation in any type of app or training or whatever. VS: We’re at the point where people don’t want to have to pull that information. All of those ways of searching, ways of finding content, are ways of pulling that content. The onus is on the person looking for the information. We don’t want that anymore. We want it pushed to us automatically: just push what I need right now. I don’t have time to look in book one to see that what I want is in book 28. We’re out of that kind of time. The expectations are really different. So it’s not new. SO: No, but we seem to be sort of bad at this. I mean, there’s the creepy version, right?
Or there’s the failure mode, “Dear [first name],” which is terrible. And then, I mean, you mentioned Amazon, but my experience with Amazon is like, “You bought a washing machine, you’re obviously starting a laundromat. Let me sell you some more washing machines,” right? They seem to have kind of lost the chain there between somebody bought a washing machine, maybe I should sell them detergent. And so they’re not quite there yet, but you buy these big appliances and they immediately assume you want more like that. So there’s something not quite right with that algorithm. But setting aside that example and thinking more about the business content that you and I mostly deal with, why are people so bad at delivering relevant content? VS: Well, again, I think it’s because they’re focused on the wrong things. For a very long time, we were focused on trying to figure out enough information about you that we could go get the content for you. And even 10 or 12 years ago, there were companies focused on that problem: how are we going to get enough information about you so that we know what to target our ads at, so we know what to advertise to you? And now it would be: we know what content to deliver. VS: That problem has been solved. I mean, big data is here. We have more of a problem with controlling all the information they know about us than with gathering it. We know that, we see it every day. It’s creepy, and on the one hand, it’s uncomfortable. On the other hand, if you want only the content that you want to see delivered to you, then I’ve got to know a whole bunch of stuff about you. So it’s time to start focusing on the ways that we create, manage, publish, and deliver the content. SO: And so you talked about content, and there are certainly tools that can help with this, but they won’t help unless you do the content first. What about the silo issue? What are the problems there? What are the failures there? VS: Where do you begin?
I mean, I once saw you give this fantastic presentation at a conference where you brought up a manual, and it had nothing to do with the marketing content. This happens all the time: you see a company’s marketing messages and examples and illustrations and positioning and terminology, the way they talk about the product, and then you move over to the knowledge base, or you move over to training courses or technical documentation, and we have four different descriptions of the same widget when really we need to be sharing one description of the widget. So the more content that we each make in our own silo, the worse the problem is, because now we have too much content. It is all kind of, sort of the same, but not really. We cannot reuse it across silos, and we’re restricted in terms of what we can deliver. We can only deliver that which we create. It’s expensive, it’s inefficient, it’s often inconsistent. There’s nothing good about it. So silos get more and more exacerbated when we try to deliver personalized experiences at scale. Same problems, just maybe exponentiated a tad. SO: So what does that mean? I mean, are we talking about one monster piece of software to rule them all? VS: Well, I would say we actually need to step back from the software and really focus on the content, because how people store and manage and publish the content is definitely a challenge to solve, but we need to teach people how to create the content.
And you know this as well as I do: the only way to deliver a personalized experience at scale is to write your content in very small units. Call it a component, call it a chunk, call it a topic, call it whatever you want, but it’s a very small unit that’s self-contained, that can be mixed and matched with other small units, devoid of format so the format comes in at the end. And you have this library of searchable, tagged, findable units of content that at the point of delivery can be mixed and matched so that an output is built, a format is applied, and publishing happens on the right device at the right time, etc. SO: Yep. And I’m totally there with you, but all the non-tech writers just ran screaming from the room. VS: I know they did. I know they did. They ran, they’re hyperventilating, but it gets worse for them, actually. SO: Tell us more. VS: It does, sorry. SO: It’s 2020. Tell us more. VS: Well, so here’s the paradox. The paradox is that in order to be successful with this, in order to be successful mixing and matching these little components so that they create a thing that’s specific for you or specific for Tom or Sally or whoever, each one of those components needs to be standardized at every level. The terminology needs to be standardized, the grammar needs to be standardized, the style needs to be standardized, the tone of voice needs to be standardized. It all needs to be standardized to create an experience that is not disjointed, one that at best kind of reads funny or looks funny because we’re not calling a widget a widget, we’re calling it 20 different things, and at worst completely confuses the person you’re delivering it to. VS: It all has to be completely standardized in order to be successful with this.
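The self-contained, format-free unit described here maps naturally onto a DITA topic. As a purely illustrative sketch (the id, title, and wording are invented, not from the podcast), such a chunk might look like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
<!-- A single standardized chunk: semantic structure only, no fonts or
     layout. Formatting is applied at publish time, so the same topic
     can be mixed and matched into any output. -->
<concept id="widget-overview">
  <title>Widget overview</title>
  <shortdesc>The widget connects your account to the reporting
    dashboard.</shortdesc>
  <conbody>
    <p>Use the widget to monitor account activity in real time.</p>
  </conbody>
</concept>
```

Because the chunk carries no formatting, a publishing pipeline can assemble it into a PDF, a help center article, or a personalized delivery without anyone rewriting it.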
So they have got to be small, individual, standardized chunks of content, devoid of format, that can be mixed and matched so that at the point of publishing, the output is personalized to the person who asked for it and sent to them at that moment in time. So yes, everybody’s now screaming, “You’ve taken away my creativity, danger Will Robinson! Creativity, creativity.” SO: And I’m really sad right now that this video will not be captured on the podcast. Excellent robot impersonation. VS: You can see me with my hands like a robot. Yes, sir. SO: Okay. So having covered all the 2020 buzzwords, COVID, travel, etc. What about artificial intelligence? Is that going to help us with this mess? VS: So it will. It’s going to fundamentally change the way all of this happens. With today’s technology, we have some constraints. One of the constraints is that we have to tag each piece of content with enough appropriate metadata that systems can locate each chunk of content that needs to be delivered for your personalized experience. So that’s the first constraint that AI is going to pretty much mitigate. When AI engines become ubiquitous, the cognitive system sets up its own matrices. We don’t tell an AI system, “Here are your tags”; it sets that up itself. We train it with a whole bunch of information, “Here are the things that go together,” and then it continues to figure it out on its own. So the locating of the content is going to be much easier. VS: Also, AI systems can look through any kind of content. It doesn’t have to be structured content. It can look through emails and social posts and all kinds of other content in order to grab what it is you need at that moment in time. And it does it really fast, and it learns over time what’s correct and what’s not correct. So in the whole process of locating that information and grabbing it, the accuracy percentage goes up, right?
The longer it goes on, the more likely it is to be accurate. So that’s one way. VS: The second way is that right now, we are constrained by output types. We really do have to define the output type. In the AI world, we won’t need to; it will just send you information. It will be able to know on the fly, “Oh, this is what you need, I’m going to take all these different pieces and I’m just going to send it to you.” We won’t need to define in advance what it’s going to look like. It will be able to do that on its own. We’re not there yet, we’re definitely a few years away minimum, probably… I mean, you and I have plenty of customers that aren’t even at the point of being in structure yet, right? They’re just getting there. So I think there will be companies that can leapfrog right to it once AI systems are all over the place, but for now we are constrained, and AI will take those constraints away. SO: So that’ll be fun and hopefully not at all troubling. All right, so it sounds as though we’re going to need this book. So is it out yet? Where can we get it? When can we get it? VS: Any minute now. So the book is not out yet. It’s November 9th. It was supposed to be out at the end of October, but it’s 2020 and nothing happened on time in 2020. It will be out in the very beginning of 2021. You’ll be able to get it on Amazon, or you’ll be able to order it from XML Press. And again, the title is The Personalization Paradox: Why Companies Fail and How to Succeed at Delivering Personalized Experiences at Scale. And I should mention that I do have a coauthor; her name is Regina Lynn Preciado. Regina and I have worked together for, we got to 15 years and it just got blurry beyond that because we’re old. We’ve worked together for a very, very long time. She’s a phenomenal content strategist, and I’m really happy to have collaborated with her on the book. SO: Awesome.
So we’ll add all of that information to the show notes, and with any luck XML Press or Amazon or somebody has a preorder page up. VS: XML Press does, and the Content Rules website also does. SO: Okay, great. So we’ll add some version of those. And I think with that, Val, thank you so much, I’m going to wrap this up. This has been the most fun I’ve had today by a long shot, actually. VS: Oh, goodie. That’s ’cause you like my robot impersonation. Danger, danger. SO: The robot was very helpful. So thanks again, and hopefully I will see you in person at some point in 2021 and not just on a screen, because I’m kind of over the screen thing, but we’re lucky that we get to work at home, but… VS: We are. And thank you so much for inviting me on, and it’s always fun to talk to you. SO: You too. So with that, thank you for listening to The Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links. The post The personalization paradox (podcast) appeared first on Scriptorium.
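A side note on the tagging constraint Val raises in this episode: in DITA, the metadata that lets a delivery system locate the right chunk for the right person typically lives in a topic’s <prolog> and in filtering attributes such as audience. The fragment below is a hypothetical sketch (the id, keywords, and audience values are invented):

```xml
<!-- The audience attribute and <prolog> metadata make this chunk
     findable and filterable; a delivery engine can select it for a
     particular reader at publish time. All values are hypothetical. -->
<task id="setup-travel-alert" audience="retail-customer">
  <title>Set up a travel alert</title>
  <prolog>
    <metadata>
      <audience type="user" experiencelevel="novice"/>
      <keywords><keyword>travel alert</keyword></keywords>
    </metadata>
  </prolog>
  <taskbody>
    <steps>
      <step><cmd>Open your account settings.</cmd></step>
      <step><cmd>Enter your travel dates and destination.</cmd></step>
    </steps>
  </taskbody>
</task>
```

At build time, a DITAVAL filter file can then include or exclude chunks by those attribute values, which is the pre-AI version of the content location Val describes.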
34 minutes | 9 months ago
The true cost of quick fixes (podcast, part 2)
In episode 79 of The Content Strategy Experts podcast, Gretyl Kinsey and Bill Swallow continue their discussion and talk about solutions to quick fixes. “A big part of your content strategy should be how requests come in, how the timelines are built, and what you’re responding to and how you’re responding to them in the first place.” —Bill Swallow Related links: The true cost of quick fixes (podcast, part 1) Twitter handles: @gretylkinsey @billswallow Transcript: Gretyl Kinsey: Welcome to the Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we’ll be continuing our discussion on quick fixes, this time focusing on solutions. How can you undo quick fixes or, better yet, avoid them in the first place? This is part two of a two-part podcast. Hello and welcome everyone. I’m Gretyl Kinsey. Bill Swallow: Hi, and I’m Bill Swallow. GK: And today we’re going to be revisiting our previous discussion on quick fixes, but this time with a bit more of a positive spin. Just to recap a little bit from last time, what we mean when we talk about quick fixes is taking a one-off or band-aid approach to your content strategy: you do some sort of a workaround to get content out the door, usually on a tight deadline or under a constrained budget, and that can later cascade into lots of problems down the road if you have done a quick fix instead of planning and doing things the right way. And where I want to start things off today, talking about how you can undo or avoid quick fixes: if your company decided to use a quick fix in the past, what are some reasons that you might need to change that now?
BS: Well, I think one of the first things that you should be looking at is the amount of time your team is spending on overall tasks, to see exactly how much time is being spent fighting with, or otherwise futzing with, their content development tools. Are they going in and constantly having to reformat things? Are they constantly having to retag things? Are they fighting with the tool to get it to work the way they need it to? And looking at these types of things to figure out: do I have a problem with quick fixes? Did we implement things correctly? Are we using the tool the way we should be using the tool, and is the tool right in the first place? GK: Yeah, absolutely. And I think this kind of touches on the flip side of the scenario that we talked about in the previous episode, where we mentioned things like template abuse and tag abuse, and people going outside those parameters that you have defined in your structure or in your template and doing these one-off quick fixes for formatting. So if you realize that you’re spending a whole lot of time on those kinds of things, then suddenly that’s not really a quick fix. That’s a very time-consuming fix when you put all of those little individual quick fixes together. So if you realize that you’ve got a lot of writers doing that, then that can lead to something like a limitation down the road. If you realize, for example, “Hey, we really need to streamline the templates that we have, or we need to introduce a new template or a new publishing output that is a lot more sleek and efficient than what we’ve already got,” and you’ve got writers all over the place breaking the existing templates, then suddenly they’re imposing an unnecessary limitation on the tools that you have. BS: Yep. And we’ve been hearing a lot over the past several years about companies going through digital transformations and being able to essentially modernize their entire content set.
And I don’t just mean putting it online, because that’s not what digital transformation is all about. Yes, it’s a component. But one of the things that a lot of these companies are struggling with is that they’re looking to move to a more digital foothold for their content and where they need their content to go. And they’re taking a look at their entire legacy content set, and they’re finding out that they have millions of different Word files that are all using different formatting, different templates, if they’re using templates at all, and several different content tools in play. They might have Word. They might have FrameMaker. They might have InDesign for some more highly designed outputs that they were producing. BS: They might have both RoboHelp and Flare in the mix because there were two different divisions of the company at the time, and each one decided on its own tools to use, and they have different styles and templates and even different approaches to how they develop the content in the first place. So you start seeing all of these things where you have all of these different documents using a wide variety of conventions, and suddenly you need to be able to standardize this stuff so that you can start doing more intelligent things with your content. And it makes it incredibly difficult to take that leap if everything’s a mess at the starting gate. GK: Yeah, of course. Absolutely. And that is a massive problem that I’ve seen in probably the majority of the projects I’ve worked on here at Scriptorium. Especially when it’s factors outside of the company’s overall control: if there has been something like a merger in the past, and you’ve had lots of disparate teams that suddenly are working together and they’ve all had their own processes, then any of those teams who have employed a quick-fix solution, that’s going to be multiplied when you’ve got all these different teams and all of their past histories of quick fixes working together.
That’s when it becomes really important to look at what all these different teams are doing and streamline their processes and come up with a content strategy that brings everything together as it should be. GK: And I think that gets into the issue, not only of streamlining, but of scalability as well. If you need to scale your processes to a larger target audience, a larger market, or, as you mentioned earlier, Bill, if you need to undergo a digital transformation and you need to deliver more intelligent content, content that is not only available online but that is interactive or personalized, then if you are hindered by all of these one-off quick fixes that people have taken, it can be almost impossible to scale. And that’s when you’re looking at maybe a complete content overhaul. BS: Yeah, and I do remember one client a while ago who decided, after looking at all the numbers and taking into account all the different documents they had in play, that they needed to go ahead and rebrand: they renamed their company and had a new logo, new look, and new feel to all their content. They did a lot of upfront analysis and came to the conclusion that it would be a lot easier to just fix it all, to basically press the pause button, fix it all, move it to… In this case, they moved to DITA, but move it to a single content format and then apply all of their branding changes using automated formatting. It was a lot cheaper and took a lot less time to do that than it would have to go into every single document and update it by hand. And that speaks volumes. GK: And I’ve seen a few clients take a similar, but maybe not quite as quick, approach where if they couldn’t press the pause button on everything, they at least did that one department at a time. So start in one place with DITA and then pull the next department in when they were ready, and so on and so forth.
So kind of depending on the size of your company, your budget, your deadlines for different products and different content that comes from different departments, that approach in phases, or with a small starting point that expands outward, might be a good idea to make it manageable as well. But it really all depends on how interconnected things are when you start, how interconnected they need to be by the end, and how that all interacts with your product release schedule. BS: And another consideration there is, if you happen to be merging teams or bringing on new teams, or if your team is growing and you’re bringing on new hires, it is very difficult for someone to figure out not only a new job or a new role, but also to figure out how to produce things when everything is formatted differently, when everything uses a different convention, when you have to know all these little details about how a particular deliverable comes together, because nothing is consistent and everything is done ad hoc. It becomes very difficult to get new people up and running in that environment. GK: Yeah. And that gets into some of the things we talked about on the previous episode with training: a lack of training or a lack of documented knowledge can lead to this problem of these one-off quick fixes just growing and growing. And then that perpetuates itself, so that any time a new hire comes on, it is very difficult to keep them trained if it was a lack of training that led to people making these mistakes before. So that’s where it becomes really imperative, when you bring on new teams, whether it’s from a merger or whether it’s just expanding and hiring, that you get all of your content systems streamlined and aligned across the organization and provide adequate training and ongoing training to prevent those ad hoc solutions that people were using before.
BS: That’s great, and brings up another question here, which is types of approa
23 minutes | a year ago
Getting started with DITA (podcast, part 1)
In episode 71 of The Content Strategy Experts Podcast, Gretyl Kinsey and Barbara Green of ACS Technologies talk about getting started with DITA. “We ran the conversion and got the content in DITA. It wasn’t structured the way it would be if you had started writing in DITA from the beginning. If I ever had another project, I would know to really take that into consideration.” —Barbara Green Related links: Managing DITA projects: Five keys to success Twitter handles: @gretylkinsey Transcript: Gretyl Kinsey: Welcome to The Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997 Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about getting started with DITA and taking the next steps forward with special guest Barbara Green of ACS Technologies. This is part one of a two-part podcast. GK: Hello and welcome to the podcast. I’m Gretyl Kinsey. Barbara Green: And I’m Barbara Green. GK: And today we’re going to talk about a case study of the project that Scriptorium did with ACS Technologies, started a few years ago and still ongoing, about getting the company started with DITA. So the first thing that I want to ask you, Barbara, is to just give us a brief overview of the company. Tell us what ACS Technologies does. BG: Okay, well, ACS Technologies has been in the business for about 40 years. We develop software solutions primarily for faith-based organizations, and our corporate offices are in Florence, South Carolina, but we have distributed teams throughout the country and offices in Greenville and Phoenix as well. GK: All right, perfect. And when it came to moving into DITA, what were some of the reasons that you wanted to start looking into changing the way that you were developing content? What were the business drivers behind this decision?
BG: Well, we were developing our flagship product at the time, which is called Realm, and it began to grow more complex even though we were still in the early phases. It wasn’t developed as a core product with modules that plugged in depending on the features our customers wanted; instead, features were turned off and on based on packages or experiences that customers required. BG: And so I guess about three, three and a half years ago, I realized we couldn’t keep documenting the way we were doing it. In the early stages of that development, writers could add notes here and there to help customers find their paths. But we knew this was not the user experience that we wanted to create, and we also knew that the product offering was growing more complex and personalization was on the horizon. We also spent many hours formatting content. BG: So, right away we had several problems that were identified. We needed to target custom content, we needed to integrate content within the product, and we needed better findability for sure; search was a struggle. We had multiple output types, and while we had tried very hard to move just to online, many of our customers still requested PDFs. We also were seeing content reused across various departments more and more, and we really could not prove our value because we lacked a cohesive set of content metrics. GK: Yeah, and I remember when Scriptorium went in and helped assess all of these issues, the root cause of all of those was the fact that all of the content was being offered in a Wiki. So, all of the Realm help content was stuck in this silo that made it really difficult to achieve all of those things, especially things like search and reuse and personalization. And so I remember back when we were initially talking about this, that we were looking at all of these problems and DITA seemed like absolutely the logical solution to help solve all of them over time. BG: Yes, it did.
When we had software products that were more modular in orientation, the Wiki worked okay. I’ll say that years ago the Wiki got us online where our help had not been online. So it had value at the time. But we really outgrew it very fast. GK: Right. And I think that’s something that we’ve seen in a lot of different organizations, where some solution that helps you at one time isn’t scalable in the way that DITA is. And so making that transition makes a lot of sense. So I want to get into talking some about how we actually went about getting everything up and running with DITA and the strategy that we put in place to make that happen. GK: And the approach that Scriptorium ended up taking with ACS Technologies was what we call a phased approach. So this was determined by a lot of different things, including timelines, schedules, and budgets, and it also gave us the opportunity to start small at a pilot level and then expand outward. GK: So, we set up the content strategy in phases where each one built off of the previous one, and we have pretty much stuck to those phases. The timeline of those has gotten a little off track from what we’d initially planned. Some phases happened more slowly, others have happened more quickly. But we initially outlined these phases and then just started tackling the plan in that order. And so I wanted to talk to you about how that’s gone, and we can get into a little bit what those phases involved and how that played out in reality versus what we had initially planned. BG: Right. Yes, I think no one is more surprised than I that we’ve made it through four of the five phases. GK: Yes. BG: It’s like a dream come true, right? GK: Yeah, absolutely. And so, we really just started with phase one. The big push there was just getting the content out of the Wiki and into DITA, and so that involved a process of conversion. And so I wanted to just get your take on how that process went and what kinds of things you wish you had known in hindsight. BG: Yeah.
So, I guess the conversion itself, after running several test iterations, went very well considering the product we were converting from. The Wiki that we used puts a lot of junk code in the background. So, Lord bless the developer that had to write that for us. One of the big surprises there that we found is every time we had uploaded an image, there was a version of that image in the database. So, that was a lot of fun to try to figure out. GK: Oh, wow. BG: Yeah. But we ran the conversion and got the content in DITA. Now, it wasn’t structured the way it would be if you had started writing in DITA from the beginning, and if I ever had another project, I would know to really take that into consideration. We’ve talked about it; we don’t feel we made a bad decision converting content, but we have sat around the water cooler, so to speak, and talked about, “Hmm, would it have been easier to just start over?” Because we didn’t have a very large set of content at that point. GK: Yeah. And that’s something that I think all companies have to take into consideration. Is it easier to rewrite or restructure or reorganize your content on the front end before you convert, or to do it after you convert? And it’s a difficult question, especially when you’ve got such a small set of content. The good thing about that is it doesn’t take as much time either way, compared to if you had hundreds of thousands of topics. But it’s still a big thing to consider, to try to make sure that you take whatever approach is going to be the least amount of stress and time and effort for the people that have to do that work. BG: Right. And the driving factor for us to convert, too, was that we had been given a timeline, and so we felt like if we didn’t convert, there was no way we could meet that timeline. I guess one of my lessons just personally, as an information manager at the time, is push back on timelines. GK: Yeah.
And I think that as content strategists here at Scriptorium, that’s an important thing too: to be realistic about timelines. We see that a lot, where you’ll have executive pressure to get something done by a certain date, but then you often have to compromise. Do you get it done by this date, even if it’s not done quite as well, or in the same way you would have done it with unlimited time? You have to find that sweet spot between taking the right amount of time to do something correctly and still meeting your deadlines and schedule without getting behind. And that’s always the challenge that companies face with something like this. GK: But as we know, we did get through that phase. Phase two was basically an interim phase of using the content in DITA, managing it under source control with Git, and starting to deliver HTML output, particularly a couple of different variants for different customers. The main goal of that phase was to stay in it until we reached a critical point of needing a component content management system to manage things like workflow and, especially, publishing all these different content variants. GK: And I think this was the phase that we stayed in a little longer than we had initially planned. We had planned for it to be maybe six months to a year, and it ended up going on longer than we thought. So, I wanted to get your perspective on that phase of the project and how things went. BG: Yeah, so it did go on longer than we thought it would. It also went on probably longer than it should have from a technical standpoint, but again, we’ve gotten through it. One of the lessons learned, and we’ll talk about this more later, is making sure that you have development resources in place. BG: O
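The customer-specific HTML variants mentioned in phase two are typically produced in DITA with conditional attributes plus a DITAVAL filter file. The episode doesn’t describe ACS Technologies’ exact setup, so this is a hypothetical sketch:

```xml
<!-- basic.ditaval (hypothetical): when building the "basic" customer
     variant, exclude any element flagged audience="enterprise" -->
<val>
  <prop att="audience" val="enterprise" action="exclude"/>
</val>
```

In the topics, variant-specific content carries the attribute, for example `<p audience="enterprise">...</p>`, and the filter is applied at build time; with the DITA Open Toolkit that looks like `dita --input=guide.ditamap --format=html5 --filter=basic.ditaval` (file names here are illustrative).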
LearningDITA Live 2020 highlights (podcast)
In episode 70 of The Content Strategy Experts Podcast, Elizabeth Patterson shares some highlights from LearningDITA Live 2020. “Structured content is a way to strategically optimize your content so it frees your content from the format. Because it’s free of the format, it also frees it from tools.” —Bernard Aschwanden Related links: LearningDITA Live 2020 recordings Twitter handles: @PattersonScript @simonbate @aschwanden4stc @easyDITA @OctavianNadolu @cwhandrews @gretylkinsey @Center4infoDev @kheathScript @Turbonomic @cdybdahl @JackMolisani Transcript: Elizabeth Patterson: Welcome to The Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In episode 70, we take a look at some of the highlights from LearningDITA Live 2020. Hi, I’m Elizabeth Patterson, and today I’m going to share with you some bits and pieces from the sessions at LearningDITA Live 2020. We’re going to start with a highlight from Simon Bate of Scriptorium. Simon’s session was Introduction to DITA. In this clip, Simon talks about some of the reasons you might want to use DITA and some of the benefits it offers. Simon Bate: Let’s talk about some of the reasons you might want to use DITA and some of the benefits it offers. Being an open standard makes DITA flexible. DITA can be used with a variety of different tools for authoring, editing, formatting, and storing content. If your business goals or your tools change, being in DITA makes it easier to move to a different toolset while keeping the same source content. The elements used to mark up your DITA content give your content semantic value. That is, every piece of content in DITA is tagged with an element. Semantic value means that the tag surrounding each piece of content has meaning, such as paragraph, step, hazard statement, and so on.
Both your authors and your authoring and publishing tools know what kind of information is contained in each tag. Tagging your content with semantic value also helps with search and filtering. SB: Topic-based content is easier to reuse. Individual topics can be used and reused in any order in any number of different documents. In DITA, topics are organized in maps, which are much like a table of contents. The map allows you to specify the order and hierarchy of your topics. A reusable topic addresses a single idea or question. It contains enough information to stand on its own, and it doesn’t assume any context about what information comes before or after it. And separating your content from the formatting makes it easy to produce multiple outputs. If you’re producing more than one type of output, for example PDF, HTML, and EPUB, using DITA allows you to apply different formatting to the same set of source files, which means all your outputs will be consistent with each other. EP: This session was particularly beneficial for those who were relatively new to DITA, so if you’re interested in watching this recording or you missed it, you can find the link to the complete recordings in the show notes. In this next highlight, Bernard Aschwanden from Publishing Smarter presents 10 ways DITA can help drive a unified content strategy. In the following clip, he talks to us about what a unified content strategy is. Bernard Aschwanden: A unified content strategy is really the methodical and purposeful management of information assets across all of the divisions of your enterprise. This really has to be done in a way that breaks down the silos and makes that information easy to find and easy to use. Structured content is really best as a way to strategically optimize your content so it frees your content from the format, and because it’s free of the format, it also frees it from tools, like, for example, Microsoft Word.
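Simon’s earlier points about semantic tagging and maps can be illustrated with a minimal, hypothetical DITA topic and map (the file names and content are invented for illustration):

```xml
<!-- install-overview.dita: a minimal concept topic; every element is semantic -->
<concept id="install-overview">
  <title>Installation overview</title>
  <shortdesc>How the installer prepares your system.</shortdesc>
  <conbody>
    <p>The installer copies files and registers services.</p>
  </conbody>
</concept>
```

```xml
<!-- guide.ditamap: the map defines order and hierarchy, like a table of contents -->
<map>
  <title>Administration Guide</title>
  <topicref href="install-overview.dita">
    <topicref href="install-steps.dita"/>
  </topicref>
</map>
```

The same map and topics can then be published to PDF, HTML, or EPUB, with formatting applied per output rather than stored in the source.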
There is no proprietary format that you have to think about. In turn, this enables intelligent reuse, multichannel publishing, efficiencies in translation, and all sorts of enhancements when you’re delivering content to your end users. BA: Structure also gives you markup for the content through a semantic set of elements and metadata that, in turn, can lead to high levels of consistency and help lower maintenance efforts. Structured content really is powered by XML; behind the scenes, XML is what gives structured content all of its so-called superpowers. There is an XML standard that’s really taken off in TechComm, and that’s DITA. And I would say one thing that technical communicators know inside and out is content. Whether it’s the planning, the creation, the review, or the delivery, we are the experts in the room. It’s probably safe to say that we’re the experts inside the enterprise. EP: Jarod Sickler of Jorsek presented DITA CCMS: What is it? Why do you need one? How to pick the right one. In this next clip, Jarod explains what a DITA CCMS is. Jarod Sickler: This is an important point. A DITA CCMS is a CCMS that’s specifically designed to handle all aspects of DITA content and its semantic markup. It’s designed to effectively manage components both as independent resources, if you think of a particular phrase or a word, and as parts of larger content objects. A CCMS has to simultaneously manage a component independently and as a part of a whole. EP: If you are considering a DITA CCMS, this presentation would be really beneficial for you. Jarod goes into a lot more detail about what your CCMS should offer, so be sure to check for the link to the recordings. Next, we’re going to hear an excerpt from Octavian Nadolu of Syncro Soft. He presented custom business rules for DITA projects.
In this clip, he gives some examples of specific business rules that he uses in a particular project. Octavian Nadolu: For DITA, I added a few business rules that we have, for example, in our documentation project, such as: titles must be uppercase. I want all the titles in my DITA project to be uppercase, and I can add that type of rule. Or: a short description should not exceed 50 characters; otherwise, it is not a short description anymore. Or I want to avoid empty elements in my project, or I want to avoid semicolons at the end of list items, so you either add a period or no punctuation at all. Then, a list should have more than one list item; otherwise, why would you have a list? And another rule would be to have output classes on code blocks; otherwise, the output would not look the way you want when it’s published. EP: Charlie Andrews from Ovitas presented centralized metadata management simplifies DITA implementation. In this next clip, Charlie explains what metadata is. Charlie Andrews: What is metadata? Metadata describes and categorizes content; it’s information about information: how that content lives, how it relates to your organization, your products, and your customers, and then how it may be used within a component content management system to manage your content. And we’ll talk about that a little bit. I think what’s important to this particular bullet item is that it’s about your organization, not just products and not just content; metadata describes a lot of things even outside the content management system. You can have taxonomies that describe the corporate relationships of all of your products, or all of your marketing, all of your sales, and all of the different areas of support and things like that. All of those things are metadata and can be gathered together in support of the types of things that you’re trying to publish, right.
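Returning to Octavian’s examples above: business rules like these are commonly expressed in Schematron, which is the rule language supported by tools such as Oxygen XML Editor. The episode doesn’t show the actual rule files, so this is a hypothetical sketch of two of the rules he describes:

```xml
<!-- Hypothetical Schematron rules for a DITA project (XPath 2.0 binding) -->
<sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron"
            queryBinding="xslt2">
  <sch:pattern>
    <!-- A short description should not exceed 50 characters -->
    <sch:rule context="shortdesc">
      <sch:assert test="string-length(normalize-space(.)) &lt;= 50">
        A short description should not exceed 50 characters.
      </sch:assert>
    </sch:rule>
    <!-- Avoid semicolons at the end of list items -->
    <sch:rule context="li">
      <sch:assert test="not(ends-with(normalize-space(.), ';'))">
        Do not end a list item with a semicolon.
      </sch:assert>
    </sch:rule>
  </sch:pattern>
</sch:schema>
```

Each assert fires a message when its test fails, so authors see the violation while editing rather than after publishing.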
The major categories of metadata, and it’s pretty much agreed that there are three of them, are descriptive, structural, and administrative metadata. Charlie Andrews: For those of you not familiar with this, descriptive metadata is the things you would normally think of: who wrote something, what’s its title, what’s the date it was done, what product, family, model, or part number, and things like that. It describes the particular piece of content that you’re looking at. Structural metadata basically describes the relationships between different DITA content topics or chunks and how they should be assembled or packaged. Structural metadata may be used in DITA maps and things like that. Administrative metadata may include things like geo-market or location: what version, what files, what rights management, what permissions does a piece of information have? EP: Gretyl Kinsey of Scriptorium presented Unsung heroes of DITA. In this highlight, she talks about the powers of relationship tables. Gretyl Kinsey: Here are the rel table’s powers. Rel tables establish relationships among content, such as specifying how the DITA topics in a map relate to one another and defining associations among DITA topics and non-DITA resources. The most common use of rel tables is giving you the structure you need to generate a list of related links in your output. We’ve seen a lot of companies set up links among topics using embedded inline cross-references in DITA, and while that is a valid way to set up your linking, there are issues around reuse with that approach, which we’re going to get into in more detail later on. GK: There are also some issues around link management with inline cross-references. Basically, the more inline links you have, the more difficult it can be to keep track of them all and manage them all, especially when you do have a lot of reuse. But rel tables give you a way to create a network of links among rel
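The rel table structure Gretyl describes lives in the map rather than in the topics. A minimal, hypothetical example (topic names invented for illustration):

```xml
<!-- A reltable in a DITA map: topics in the same row become related links
     in the output, with no inline cross-references in the topics themselves -->
<map>
  <title>Product Guide</title>
  <topicref href="configuring.dita"/>
  <topicref href="troubleshooting.dita"/>
  <reltable>
    <relrow>
      <relcell><topicref href="configuring.dita"/></relcell>
      <relcell><topicref href="troubleshooting.dita"/></relcell>
    </relrow>
  </reltable>
</map>
```

Because the links are defined centrally in the map, reusing a topic in another map brings no stale inline links along with it; each map defines its own relationships.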