Business Intelligence (BI) has long been a backbone of many businesses. It is essentially a set of processes that ingests data of various types and formats from multiple sources and derives meaningful information from it; this information can then be used to understand a variety of business metrics and support important business decisions.
While there are numerous tools and technologies for Business Intelligence, one of the most popular is Tableau. It enables data discovery and processing from multiple data sources such as MS Excel, MS SQL Server, Oracle, Google Analytics, and Salesforce, and helps analyze, visualize, forecast, and predict based on a variety of factors without much help from technical/developer teams.
We recently worked on a requirement where a Tableau dashboard – with data extracted from a data lake, manipulated, and rendered along with a few filter and sort parameters for user friendliness – needed to be shown on an existing SDL Tridion DXA web page.
The simplest way to do this is to use the Embed Code that Tableau generates for its dashboard; you can then manage it through the flexible content management provided by SDL Tridion and the extensible DXA platform provided by SDL.
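As a rough illustration of what gets stored and rendered (the view URL and query parameters here are assumptions – in practice you copy the real values from the dashboard's Share > Embed Code dialog), the iframe markup managed in Tridion might be produced like this:

```python
def tableau_embed_html(view_url: str, width: int = 1000, height: int = 800) -> str:
    """Wrap a Tableau view URL in iframe markup that can be stored in a
    Tridion component field and rendered by a DXA view.

    The URL parameters below are illustrative; copy the actual embed code
    or link from Tableau's Share dialog for a real dashboard.
    """
    return (
        f'<iframe src="{view_url}?:showVizHome=no&:embed=true" '
        f'width="{width}" height="{height}" frameborder="0"></iframe>'
    )

# Hypothetical example view URL
html = tableau_embed_html("https://public.tableau.com/views/SampleDashboard/Overview")
print(html)
```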
The screenshots below show the sample dashboard we embedded in our SDL Tridion DXA site, as well as how to get the HTML embed code or an iframe link to embed it in a web page:
In continuation of my previous POST, this post shows a pictorial representation of the solution discussed earlier, as well as the advantages we have seen from isolating the custom deployer logic out of the deployer extension and into Azure Functions.
The diagram below depicts the serverless architecture and solution described in the previous post:
This serverless architecture provides many advantages, a few of which are listed below:
- One of the big advantages is the scalability and flexibility of the solution – any new feature can be added easily by introducing a new Service Bus + Azure Function combination without impacting other functionality, and the solution can be scaled out easily because there is no real server in the picture, just the serverless implementation.
- Azure Functions make scalability a non-issue and bring cost effectiveness, as you pay for the actual execution time of the function rather than for a server – the real advantage of serverless 🙂
- Implementation and deployment in the custom deployer extension is a one-time effort; further changes rarely require any change or deployment in the deployer extension, and a new feature has even less chance of touching it – this means low dependency on Tridion skills and on deployments on the content delivery side.
- As most of the logic lives outside Tridion, upgrading the custom features is relatively simple and straightforward.
- Changes in third-party APIs – such as Akamai cache clearing or SOLR/Elasticsearch APIs – won't trigger massive testing, impact analysis, or deployment requirements.
- Less dependency on Tridion technology, as anyone with Azure Functions skills (in any language) can add or change features and functionality.
One of the things we have been working on over the last few months is generalizing and modularizing the deployer extension so as to remove dependencies on Tridion as much as possible (the concept can be applied to a storage extension as well).
The main intentions and advantages of this are as follows:
- Removing dependencies on a Tridion developer for any custom changes before, during, or after the publishing process (custom changes such as clearing a cache, indexing to a search engine, sending notifications, etc.)
- Making upgrades less painful by pulling the custom logic out of the custom deployer module
- In the case of an SDL Cloud deployment, less or no dependency on SDL Support
- Improved go-to-market timeline
Our solution utilizes a serverless architecture in MS Azure, briefly described below:
Part – I
- The deployer extension was written in a generic way to fetch all information – component presentations, metadata, etc. – and convert it into JSON format.
- The deployer extension then interacts with an Azure Service Bus message queue asynchronously, passing all the information in JSON format to the queue for processing.
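A minimal sketch of the kind of JSON message the deployer extension sends (the field names are illustrative assumptions, not the exact schema; the actual extension is written in Java):

```python
import json

def build_publish_payload(item_id: str, item_type: str,
                          metadata: dict, presentations: list) -> str:
    """Assemble a published item's data into the JSON message pushed onto
    the Service Bus queue. Field names here are hypothetical, chosen only
    to illustrate the shape of the message."""
    payload = {
        "itemId": item_id,              # e.g. "tcm:5-123"
        "itemType": item_type,          # e.g. "ComponentPresentation"
        "metadata": metadata,           # custom metadata fields
        "presentations": presentations  # rendered component presentations
    }
    return json.dumps(payload)

msg = build_publish_payload(
    "tcm:5-123", "ComponentPresentation",
    {"title": "Sample"}, [{"templateId": "tcm:5-456-32"}]
)
```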
Part – II
- On the Azure portal, a Service Bus message queue is configured to be invoked from the deployer extension and to accept content in JSON format.
- An Azure Function is written which listens to the Service Bus message queue mentioned in step (1) above and accepts the published content in JSON format.
- This Azure Function then applies all the business logic, manipulates the JSON to extract the information necessary for processing, and sends it on to another, feature-specific Service Bus message queue.
- For every custom functionality there is a dedicated Service Bus message queue, and each of these queues is listened to by a specific Azure Function implementing that functionality, consuming the published content sent to the queue in JSON format.
- These Azure Functions then take care of the individual functionalities, offloading the deployer extension from any real logic.
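The routing step above can be sketched as follows (the queue names and feature keys are assumptions for illustration; a real Azure Function would receive the message via a Service Bus trigger and forward it with the Service Bus SDK):

```python
import json

# Illustrative mapping from custom functionality to its Service Bus queue;
# these names are hypothetical, not the real configuration.
FEATURE_QUEUES = {
    "search": "queue-solr-indexing",
    "cache": "queue-akamai-purge",
    "notify": "queue-notifications",
}

def route_message(raw_message: str) -> list:
    """Sketch of the first Azure Function: inspect the published content
    and decide which feature-specific queues should receive it. Returns
    (queue_name, payload) pairs instead of actually sending them."""
    content = json.loads(raw_message)
    targets = []
    for feature in content.get("features", []):
        queue = FEATURE_QUEUES.get(feature)
        if queue:
            targets.append((queue, json.dumps(content)))
    return targets

routes = route_message('{"itemId": "tcm:5-123", "features": ["search", "cache"]}')
```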
A detailed pictorial representation of this implementation, and the advantages we gained from it, will be described in the next blog in the series.
Referring to my earlier post – Restructuring-the-tridion-blueprinting-and-content-demotion – another issue while demoting is that you might still have a handful of items which could not be demoted due to the variety of dependencies they may have. A few examples of such dependencies are:
- Item is checked out
- The item is referenced by another item in a publication which has no relationship with the target demote publication
- The item is published from a publication which is not a child of the target demote publication
- The item is added into a bundle
- A cyclic reference to items within the same publication from which you are trying to demote
- The items are localized in a publication which is higher in the blueprinting hierarchy than the target demote publication
While most of these demote blockers require manual resolution, to aid the process of identifying unresolved dependencies, we at Content Bloom created a Core Service script which essentially extends the “Where Used” feature to provide a detailed matrix of all items within a folder, on a single HTML page or as JSON (so as to enable further custom processing if needed).
As a third step (refer to the post referenced above, as well as This Post and This Post, if you need the background of this blog series), if unresolved dependencies remain in this massive demote exercise, we execute this script to get a report of all unresolved dependencies for the items which fail to demote.
This script, given the folder's TCM URI, processes each item inside the folder recursively and generates a JSON of all dependencies of each item: where used, what it uses, where localized, where published, whether checked out, etc.
Another small program takes this JSON and converts it into HTML for better visualization.
This table can be used to identify the dependencies, resolve them manually, or take corrective actions for the demote operation.
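The JSON-to-HTML conversion can be sketched like this (the report field names such as "whereUsed" and "checkedOut" are illustrative assumptions about the script's output, not its exact schema):

```python
import json

def dependencies_to_html(report_json: str) -> str:
    """Turn the dependency report JSON into a simple HTML table for
    visual inspection of demote blockers."""
    items = json.loads(report_json)
    rows = []
    for item in items:
        rows.append(
            "<tr><td>{id}</td><td>{used}</td><td>{uses}</td><td>{out}</td></tr>".format(
                id=item["itemId"],
                used=", ".join(item.get("whereUsed", [])),
                uses=", ".join(item.get("uses", [])),
                out="yes" if item.get("checkedOut") else "no",
            )
        )
    header = ("<tr><th>Item</th><th>Where Used</th>"
              "<th>Uses</th><th>Checked Out</th></tr>")
    return "<table>" + header + "".join(rows) + "</table>"

report = '[{"itemId": "tcm:5-123", "whereUsed": ["tcm:5-200"], "uses": [], "checkedOut": true}]'
html = dependencies_to_html(report)
```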
The screenshots below explain it a bit better:
Referring to my earlier post – Restructuring-the-tridion-blueprinting-and-content-demotion – another issue while demoting is that you can't simply select the root folder and demote everything inside it. You actually need to go to the deepest level of the hierarchy and start with the leaves of the content tree, which makes it an absolute pain to visit each and every folder and perform the demote. Even then, the demote might still fail if an item in the folder has a dependency on another item in the hierarchy.
To resolve this, we at Content Bloom created another Core Service script which accepts the TCM URI of the folder we want to demote, along with all its items, and the publication URI of the target publication to which we want to demote.
As a second step (refer to the post referenced above, as well as This Post, if you need the background of this blog series) in this massive demote exercise, we execute this script to start the actual demote process. The script starts almost blind, traversing each and every item in the folder and attempting to demote it in cycles. Individual attempts may fail or succeed, but each cycle resolves dependencies between items in the same folder, ultimately reaching a stage where either all items are demoted or just a handful are left with dependencies that need manual intervention – check-outs, localized versions, cyclic references, etc.
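The cyclic strategy above is essentially a fixed-point loop: keep retrying until a full pass makes no progress. A minimal sketch (with `try_demote` standing in for the actual Core Service demote call, and a toy dependency set for illustration):

```python
def demote_until_stable(items, try_demote):
    """Repeatedly attempt to demote every remaining item until a full pass
    makes no progress; what is left needs manual intervention. `try_demote`
    is a stand-in for the Tridion Core Service demote call and returns True
    on success."""
    remaining = list(items)
    while remaining:
        still_failing = [item for item in remaining if not try_demote(item)]
        if len(still_failing) == len(remaining):
            break  # no progress this cycle: the rest are genuinely blocked
        remaining = still_failing
    return remaining

# Toy example: "b" depends on "a", so it succeeds only after "a" is demoted;
# "c" depends on an item that never demotes, so it stays blocked.
demoted = set()
deps = {"a": [], "b": ["a"], "c": ["missing"]}

def try_demote(item):
    if all(d in demoted for d in deps[item]):
        demoted.add(item)
        return True
    return False

blocked = demote_until_stable(["b", "a", "c"], try_demote)
```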
A few screenshots of the script in action are shown below for a better understanding of its operation:
Referring to my earlier post – Restructuring-the-tridion-blueprinting-and-content-demotion – one of the issues while demoting is that you will lose any custom permissions you may have on your folders or structure groups once an item is successfully demoted.
To keep track of which folders have custom permissions, and to provide pre- and post-demote reports of custom permissions on all demoted folders, we at Content Bloom created a Core Service script.
As a first step in this massive demote operation, we executed this script to gather a report of all custom permissions on every folder in the hierarchy.
When the demote process is completed, we executed this script again on the same folder in the new publication where the folder has been demoted.
The two reports are then compared to find missing custom permissions, which can then be set up again on each folder in the new publication where the folders have been demoted.
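The comparison step can be sketched as a simple set difference per folder (the report shape – folder path mapped to (trustee, right) entries – is an illustrative assumption about the CSV the script produces, not its exact format):

```python
def missing_permissions(before: dict, after: dict) -> dict:
    """Compare pre- and post-demote permission reports and list the custom
    permissions lost on each folder. Each report maps a folder path to a
    set of (trustee, right) entries."""
    missing = {}
    for folder, perms in before.items():
        lost = perms - after.get(folder, set())
        if lost:
            missing[folder] = lost
    return missing

# Hypothetical folder path and permission entries
pre = {"/020 Global Master/Content": {("Editors", "Write"), ("Admins", "Delete")}}
post = {"/020 Global Master/Content": {("Admins", "Delete")}}
lost = missing_permissions(pre, post)
```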
Below are a few screenshots showing the script in action, as well as the report generated in the form of a CSV:
Recent improvements (DXA, headless, etc.), content modeling demands, and more aggressive content change requirements trigger a change in existing Tridion blueprinting hierarchies: from the traditional wide, isolated architecture to one that is lean, interchangeable, and less site/publication dependent.
After completing two such blueprinting transformations, we came across a rather next-level setup, one that clearly had been implemented without much forethought: every piece of content for all websites/verticals was created in 020 Global Master, even though not a single piece of content was shared across websites/verticals. This created huge confusion for content authors and was quite error prone; authors (mostly system admins) ended up updating the wrong content, unexpectedly breaking functionality on another site.
The solution proposed was to minimize the number of publications by reducing the levels of publications vertically and expanding horizontally, demoting content from 020 Global Master to the individual Site Masters.
The main challenge in this task was the massive number of demotes of already published components and categories, as well as a number of localized versions of items in child publications.
Moreover, since there is a hierarchy of folders, we cannot simply choose the root folder and start demoting: the demote has to start from the leaves of the content tree. So we have to traverse all folders and demote them individually – a big limitation of the existing demote process. Refer to the capture below from SDL Tridion Docs:
The following are a few learnings we gathered while performing this big content restructuring overhaul; appropriate measures must be taken for each while demoting:
- Whenever content is demoted, the custom permissions and security applied to it are lost and only inherited permissions apply
- Demoting content deletes all previous/old versions of the item
- The demote of an item will not work if:
- It has been published from a publication which is not a child of the publication to which the content is demoted
- It is in use in any way (component link, RTF field link, etc.) by another item in a publication which is not a child of the publication to which the content is demoted
- It has been localized in a publication which is higher in the blueprinting hierarchy than the publication to which the content is demoted
- The item is checked-out
- You cannot select a folder to demote all of its content at once
- Despite all measures, there is a good chance of scenarios which block the demotion process – for example, cyclic references between components through component links
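Because demote must start at the leaves of the content tree, a post-order traversal of the folder hierarchy yields the order in which folders have to be processed. A minimal sketch (the folder names are purely illustrative):

```python
def demote_order(tree: dict, root: str) -> list:
    """Return folders in post-order: every folder appears only after all of
    its descendants, which is the order a demote script must follow.
    `tree` maps each folder to its list of subfolders."""
    order = []

    def visit(folder):
        for child in tree.get(folder, []):
            visit(child)
        order.append(folder)  # a folder only after all its descendants

    visit(root)
    return order

folders = {
    "Building Blocks": ["Content", "Design"],
    "Content": ["Articles"],
}
order = demote_order(folders, "Building Blocks")
```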
In subsequent posts I will be discussing how we dealt with the above scenarios to achieve a rather smooth demotion of a huge number of Tridion items for multiple websites with zero downtime. Stay tuned!
The SDL Tridion DX Developer Summit 2019 – the India chapter – happened on 29th and 30th March 2019 in Bengaluru, India, and this time it was much bigger and better. We got a pretty good welcome; the room was vibrant with big LED screens, photographers, and, most importantly, recordings of the sessions.
This time in the India chapter of the Tridion DX Developer Summit we saw some new faces from SDL and SDL leadership: a keynote from Jim Saunders (Chief Product Officer), the Tridion DX roadmap from Ivo van de Lagemaat (Tridion DX Product Owner), and a nice talk on global enablement and the SDL ecosystem from Wilhard Rass (VP, Professional Services) together gave a pretty good idea of what's happening in the Tridion world and where the product is heading.
The event spanned two full days of 16 extensive sessions, all in one room. More than 70 people attended, with 14 presenters – two days of learning and a fun extravaganza. Some of the most interesting sessions – on connector frameworks, GraphQL, unified extensions, SDL Cloud, Tridion Docs and the Tridion Docs API, outscaled publishing, and customized DXA forms – were real learning opportunities for the participants.
After the extended learning came the fun part; Bengaluru is very much a party place, especially on a Friday night. During the sessions we also had people play dumb charades – the expressions from Wilhard are really something to look at:
Disclaimer: This is not a recommended approach to perform an upgrade. The SDL recommendation is to do Content Manager First.
In this approach, the Content Delivery module, along with the application, is upgraded first to the latest version while keeping the Content Manager and other add-ons intact.
The old Content Manager then publishes to the new, upgraded Content Delivery.
The sequence of diagrams below illustrates the five-step approach:
- Additional effort to set up both the Publishing Target and Topology Manager
- Risk of failures and downtime
- Repeated quality analysis
SDL has released the latest version of their WCMS – SDL Tridion Sites 9 – and with this, many existing customers who were eagerly awaiting its launch will look forward to upgrading their existing Tridion implementations. This blog series will cover various aspects of upgrading from an earlier version of SDL Tridion to a newer one.
Why there is a need to upgrade
The first question that might come to mind: why should I upgrade? Or, what are the advantages of upgrading to the newer version?
The following are the advantages I can see after upgrading to a newer version:
- Low total cost of ownership
- High return on investments
- More flexibility and manageability
- Easier development
- Advanced technologies
- Enhanced support
The diagram below shows this in a snapshot:
Typical Upgrade Process
An upgrade process may differ from business to business and environment to environment; however, a typical upgrade process covering the essential elements is described below:
There can be different scenarios and strategies for an SDL Tridion upgrade; the most common strategies are listed below:
- Content Delivery First
- Content Manager First
- Parallel Upgrade
I will be describing them in individual blogs, stay tuned…