During normal operations, node replication flows from the source server to the target server. In some situations, however, data can flow from the target server back to the source server. For example, with the "recover damaged files from a replication server" feature introduced in TSM v7.1.1, a file that is marked as damaged on the source server is replicated back from the target. That much is clear. But what happens when a volume on the target server is set to destroyed? The short answer is: nothing. Destroyed volumes are ignored by the node replication processes. Or so it seems.
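As a minimal sketch of the scenario above, the volume on the target server would be marked destroyed with UPDATE VOLUME before replication runs again (the volume and node names are made-up examples, not from the original post):

```
/* On the TARGET server: mark a (hypothetical) volume as destroyed */
update volume /tsmstg/00000123.bfs access=destroyed

/* On the SOURCE server: run replication again for an example node */
replicate node NODE1
```

According to the post, the replication process simply skips files that reside only on such destroyed target volumes, rather than re-sending them.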
After installing and configuring the TSM for SAN for Linux software (that is, the Storage Agent for Linux), you can start the Storage Agent manually in the foreground with the dsmsta command, to check whether your configuration works and to bring the Storage Agent online. Eventually, though, you will probably want to run the Storage Agent as a daemon that starts automatically at system restart. Instructions for that are of course in the Storage Agent User's Guide, but there is a problem with the Storage Agent daemon after this configuration. It is a minor problem, but I describe it here so that others can find a workaround. All of this was tested on v7.1.1 (software and documentation).
From Appendix A (“Automating the storage agent startup”) of the Storage Agent Installation and User’s Guide:
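On a distribution that uses systemd, the same automated startup could be sketched with a small unit file; this is an assumption-laden illustration (the unit name, installation path, and instance user are hypothetical and not from the guide, which describes an init-script approach):

```
# /etc/systemd/system/dsmsta.service  -- hypothetical unit, paths are assumptions
[Unit]
Description=TSM Storage Agent (dsmsta)
After=network.target

[Service]
Type=simple
# dsmsta reads dsmsta.opt and the device configuration from its working directory
WorkingDirectory=/opt/tivoli/tsm/StorageAgent/bin
ExecStart=/opt/tivoli/tsm/StorageAgent/bin/dsmsta
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After `systemctl daemon-reload` and `systemctl enable dsmsta`, the Storage Agent would come up at boot much like the init-script method the appendix describes.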
Back in TSM v6.2 and earlier, backing up the TSM database (DB) required thinking about just two things: the device class to use and the type of backup you wanted. You can still do exactly that, but since TSM v6.3 additional methods have been introduced to handle the increased TSM DB backup size. Typically, the TSM DB grows noticeably when you use TSM native deduplication (client-side or server-side): all that metadata about chunks, pointers, dereferenced chunks, and so on eventually needs to be stored somewhere. The bigger TSM DB in turn increases the space requirements for the database backups (DBBs).
Two of the methods introduced:
Introduced in v6.3.0, and named "Multistream database backup and restore processing". The NUMStreams parameter specifies the number of parallel data movement streams to use when you back up the DB. The default is 1, the maximum is 4. This will not reduce the space requirements for the DBBs, but it can shorten the overall time a DBB takes. Typically, you should only use this if your TSM DB is big enough: multistream backups will not save you much time on a small TSM DB, while the concurrent data streams leave you with more volumes that are not fully utilized. As always, there is a tradeoff to consider.
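A multistream DBB would be started from the administrative command line roughly like this (the device class name is a made-up example; the BACKUP DB parameters shown are the ones discussed above):

```
/* Full DB backup using four parallel streams to a hypothetical device class */
backup db devclass=LTOCLASS type=full numstreams=4
```

With NUMStreams=4 you can end up with up to four partially filled volumes instead of one well-filled volume, which is the space-versus-time tradeoff mentioned above.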
The 12th edition of the well-known European TSM Symposium was announced a while ago. This time the theme is "Tivoli Storage Manager: Promising Future".
From the TSM Symposium 2015 website:
The TSM Symposia cover a wide variety of Tivoli Storage Manager (TSM) related topics, with a particular focus this time on benefiting from innovation: new and changed functionality expected to come to TSM over the next couple of years. The well-established TSM Symposium is hosted by Guide Share Europe and the University of Cologne. It will take place from Tuesday 22 September 2015 to Friday 25 September 2015 in the Westin Bellevue Hotel in Dresden, Germany.
It will be two years since the last symposium in Berlin, and there will be plenty of TSM-related topics to talk about.