The no-query restore (NQR for the remainder of this article) has been around forever. I won't go into detail about which version introduced which client- or server-side changes (if any), but it has been in the product since ADSM v3.
What I have noticed over the years is that most people (customers and business partners) simply refer to NQR as "a faster restore". When asked for details, most of them know about the multi-session capability, but that's about it. In this article I'll try to explain the differences between a classic restore and an NQR restore, since I was unable to find more than a basic explanation of this subject. If this sounds too basic to you, which I can imagine, feel free to skip this article.
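Whether the client runs an NQR or a classic restore depends largely on how the restore command is phrased. As a rough, hedged illustration (the filespec and paths are made-up examples): restoring an entire file space with an unrestricted wildcard is eligible for NQR, while adding a restrictive option such as -inactive forces the classic query-based restore:

```
# Eligible for no-query restore: whole file space, unrestricted wildcard,
# no restrictive options
dsmc restore "/home/*" -subdir=yes

# Forces a classic (standard query) restore: a restrictive option is present
dsmc restore "/home/*" -subdir=yes -inactive
```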
First, let's talk about the "standard query restore", aka "classic restore", called SQR for the remainder of this article.
During normal operations, node replication data flows from the source server to the target server. If an error occurs, data might also flow from the target server back to the source server. For example, with TSM v7.1.1's "recover damaged files from a replication server" feature, when a file on the source is marked as damaged, it will be replicated back from the target. That's all clear. But what happens when a volume on the target server is set to destroyed? The short answer is... nothing. Destroyed volumes are ignored by the node replication processes. Or so it seems.
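If you want to reproduce this behaviour yourself, the relevant administrative commands look roughly like this (the volume and node names are made up for the example):

```
# On the target server: mark a storage pool volume as destroyed
update volume /tsmtgt/00000123.bfs access=destroyed

# On the source server: run replication with damaged-file recovery enabled
replicate node NODE1 recoverdamaged=yes
```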
After installing and configuring TSM for SAN for Linux (the Storage Agent for Linux), you can start the Storage Agent manually in the foreground with the dsmsta command, to check that your configuration works and to bring the Storage Agent online. Eventually, however, you will probably want to run the Storage Agent as a daemon and start it automatically at system restart. Instructions for that are of course in the Storage Agent User's Guide, but there is a problem with using the Storage Agent daemon after this configuration. It is a minor problem, but I describe it here so that people can find a workaround. This was all tested on v7.1.1 (software and documentation).
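For reference, a foreground start for the configuration check looks like this (the installation path is the default on Linux; adjust it if your instance lives elsewhere):

```
# Start the storage agent in the foreground, from its installation
# directory so that dsmsta.opt and the device configuration file are found
cd /opt/tivoli/tsm/StorageAgent/bin
./dsmsta
```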
From Appendix A (“Automating the storage agent startup”) of the Storage Agent Installation and User’s Guide:
Back in TSM v6.2 and earlier versions, backing up the TSM database (DB) required thinking about just two things: the device class to use and the type of backup you wanted. You can still do it this way, but since TSM v6.3 additional methods have been introduced to handle the increased TSM DB backup size. Typically, the size of the TSM DB increases noticeably when you use TSM native deduplication (client-side or server-side): all that metadata about chunks, pointers, dereferenced chunks, and so on eventually needs to be stored somewhere. The increased TSM DB size also increases the space requirements for the database backups (DBBs).
Two of the methods introduced:
Introduced in v6.3.0 and named "Multistream database backup and restore processing". The NUMStreams parameter specifies the number of parallel data movement streams to use when you back up the DB. The default is 1, the maximum is 4. This will not reduce the space requirements for the DBBs, but it might reduce the overall time a DBB takes. Typically, you should only use this if your TSM DB is big enough. Meaning: multistream backups will not save you much time with a small TSM DB, while the concurrent data streams will leave you with volumes that are not fully utilized. As always, there is a tradeoff to consider.
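A multistream DBB is just the classic command with the extra parameter; a sketch (again, the device class name is an example):

```
# Full database backup using four parallel data movement streams
backup db devclass=LTOCLASS type=full numstreams=4
```

Note that with a sequential device class, each stream writes to its own volume, which is exactly where the partially filled volumes mentioned above come from.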