Controlling System Load

A CMS is a complex system designed to solve several tasks at the same time. In contrast to an e-mail server, for example, whose only task is to accept e-mail messages from senders and deliver them to the recipients, CMS Fiona serves numerous editors, exports content in the background, provides its search engine with up-to-date content, and delivers (possibly dynamically created) content to the website visitors – to mention just the most important tasks.

Depending on the extent of these tasks, and on the degree to which their execution overlaps, the computer system on which they run is put under more or less strain. During normal operation of the CMS, each of these tasks uses different resources to a different and constantly changing extent.

Bottlenecks emerge if a resource (RAM, processor power, data throughput) is not available in the amount required to process the tasks in the desired period of time. This might sound trivial; determining which resource is missing and finding a remedy, however, is not. The following sections are intended to help you with this.

Editorial Work

Editorial work with the CMS, i.e. with the Content Navigator, normally does not cause considerable load on the system, compared to the export or indexing of large amounts of data, for example. In particular, the load cannot be estimated from the number of editors alone. It rather depends on the kind of work the editors are doing and how they use the Content Navigator:

  • If many or large documents are uploaded, large amounts of data need to be transferred via the GUI to the Content Navigator which then processes and stores it. This causes heavy network load and requires a lot of memory.
  • If the tree view is used and several folders are open at the same time, a large amount of file data needs to be queried and transferred. For each file in the hierarchy, the GUI requests its name and status information (such as the file type, version information, release status, to mention just a few) from the Content Manager for displaying it in the hierarchy. Fetching this data from the database consumes a lot of RAM or network bandwidth, depending on how the database is linked into the system. Transferring this data from the Content Management Server to the GUI and from the GUI to the client computer also consumes network bandwidth.
  • Intense use of the preview causes load if many calculations are done in the layouts (templates). Every preview is an export of a file; the computational power required for this is proportional to the complexity of the templates involved.
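The tree-view cost described above can be approximated with a toy count. This is purely illustrative – `status_requests` and the folder names are assumptions for the example, not part of the Fiona API:

```python
# Toy model (not a Fiona API): estimate how many individual name/status
# lookups the GUI triggers when several folders are open in the tree view.
# Each open folder causes one status request per file it contains.

def status_requests(open_folders):
    """`open_folders` maps a folder name to the number of files in it."""
    return sum(open_folders.values())

# Example: three open folders with 200, 50, and 1200 files each
requests = status_requests({"press": 200, "products": 50, "archive": 1200})
print(requests)  # 1450 lookups, each also fetching type and release status
```

The point of the model: the query volume grows with the total number of files visible in open folders, not with the number of editors.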

Export

The export is a process in which data from several sources is processed to generate the result – normally a web document. The more difficult it is to determine this data, the more computational power is required for the export. The processor load is caused mainly by the queries and instructions in the layout files:

  • File lists are mainly used to determine the contents of folders. In most cases, they are filtered for particular file types. This means that, for example, resource files contained in a CMS folder are checked individually for inclusion into the web document being created. Then, several items (title, path, URL, abstract, and others) are determined and exported for each of these files. For this, numerous small queries are sent to the database.
  • Each access to the content of a file other than the one being exported causes a so-called dependency of the exported file on that other file. If the content of the other file changes, the dependent file needs to be exported again during the next export run. An extraordinarily large number of new exports is caused by changes to layout files because all the files that are generated using a layout depend on it. Each export of a file also causes the file to be reindexed by the Search Server.
  • Often, Tcl procedures (so-called systemExecute procedures, formatters, or callbacks) are called during the export. These procedures help to calculate data not part of the content itself, or to format field values or links. Each call of such a Tcl procedure consumes computing time. In particular, the manipulation of links by the link callback can considerably slow down the export because this callback is called for each link contained in the content of the file being exported.
  • The load caused by the export is also significantly influenced by the export frequency. Short export intervals in conjunction with rapidly changing content, weak hardware, or low network data throughput can quickly lead to an overloaded system and the slowdowns associated with it.
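The dependency mechanism described above can be sketched as follows. The class and method names are invented for this illustration and are not the Fiona implementation; the sketch only shows why a change to a widely used layout file fans out into many re-exports:

```python
# Illustrative sketch (assumed names, not the Fiona implementation):
# during export, every access to another file's content registers a
# dependency; changing a file then marks all its dependents for re-export.
from collections import defaultdict

class ExportTracker:
    def __init__(self):
        # accessed file -> set of files whose export read it
        self.dependents = defaultdict(set)
        self.dirty = set()  # files that must be exported again

    def record_access(self, exported_file, accessed_file):
        """Called whenever the export of `exported_file` reads `accessed_file`."""
        self.dependents[accessed_file].add(exported_file)

    def file_changed(self, changed_file):
        """Mark the changed file and everything depending on it as dirty."""
        queue = [changed_file]
        while queue:
            f = queue.pop()
            if f in self.dirty:
                continue
            self.dirty.add(f)
            queue.extend(self.dependents[f])

tracker = ExportTracker()
# Two pages are generated using the same layout file:
tracker.record_access("a.html", "master.layout")
tracker.record_access("b.html", "master.layout")
tracker.file_changed("master.layout")
print(sorted(tracker.dirty))  # ['a.html', 'b.html', 'master.layout']
```

With hundreds of pages built from one layout, a single layout change marks all of them dirty, which explains the export (and reindexing) spikes after template work.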

If the Template Engine performing the export and the Editorial System run on the same computer, the response time of the system will be longer if the load is high. As a consequence, the editors will have to wait longer for GUI or preview pages to be fully displayed.
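The overload caused by too short export intervals can be made concrete with a toy calculation. The function name and numbers are assumptions for the example, not product parameters:

```python
# Toy calculation (not part of the product): pending export work
# accumulates whenever a full export run takes longer than the
# configured interval between exports.

def backlog_minutes(cycles, export_duration, export_interval):
    """Minutes of unfinished export work queued after `cycles` runs."""
    per_cycle = max(0, export_duration - export_interval)
    return cycles * per_cycle

# A 20-minute export started every 15 minutes falls 5 minutes behind
# per cycle; after 12 cycles (3 hours) the backlog is a full hour.
print(backlog_minutes(12, 20, 15))  # 60
```

As long as the interval exceeds the export duration, the backlog stays at zero; this is the rationale behind the recommendation below to decrease the export frequency.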

Live System

The live system does not require much computing power if mainly static content is delivered to the visitors. However, generating content dynamically using PHP or Java, for example, or intense use of portlets, requires computing power. This also applies to encrypting web pages and to large numbers of search requests.

The network bandwidth requirement of the live system mainly depends on the number of incoming requests and the amount of data to be delivered. If the demand is high, the bandwidth available for other tasks such as transferring the exported pages to the live server decreases.

Recommendations

  • Optimize your layouts with respect to the issues mentioned above.
  • Minimize the number of systemExecute and formatter calls and avoid link callbacks.
  • Increase the time between exports, i.e. decrease the export frequency.
  • If possible, run exports at times when nobody is working with the editorial system and live web pages are requested least often.
  • Operate the Template Engine and the Search Server on a different computer than the editorial system.