This article describes several methods to restore TSM (Spectrum Protect) client data between nodes (SET ACCESS, FROMNODE, VIRTUALNODENAME, ASNODENAME). This is convenient when, for example, machine A crashes and you need to restore machine A's data on machine B. That's just one example; there are other use cases. Since you found this article, you probably have a good reason of your own.
The information provided in this article is not new – it is actually very old. VIRTUALNODENAME is used for restoring or retrieving (some or all) files to another workstation. FROMNODE is used for restoring or retrieving files from another client node; access needs to be granted first, and the authorization rules can be very specific. ASNODENAME allows an agent node to back up/restore and archive/retrieve data on behalf of a target node. There is some overlap between these methods, so pick the one that best fits your scenario.
Let’s assume these two nodes: HANNIBAL and FACE. In all methods, HANNIBAL backs up a test file and FACE needs a way to restore that file on its own system. Replace their names with the names of your machines.
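To give a feel for how the three methods differ on the command line, here is a minimal sketch using the two example nodes. The file path /tmp/testfile is a made-up example, and the exact option spelling should be checked against your client version:

```
# Method 1: SET ACCESS + FROMNODE
# On HANNIBAL, grant FACE access to the backed-up file:
dsmc set access backup "/tmp/testfile" FACE
# On FACE, restore it from HANNIBAL's filespace:
dsmc restore -fromnode=HANNIBAL "/tmp/testfile" /tmp/

# Method 2: VIRTUALNODENAME
# On FACE, act as node HANNIBAL (you will be prompted
# for HANNIBAL's node password):
dsmc restore "/tmp/testfile" /tmp/ -virtualnodename=HANNIBAL

# Method 3: ASNODENAME (proxy node)
# First, on the TSM server, define the proxy relationship:
#   grant proxynode target=HANNIBAL agent=FACE
# Then, on FACE:
dsmc restore "/tmp/testfile" /tmp/ -asnodename=HANNIBAL
```

Note the practical difference: VIRTUALNODENAME requires knowing the other node's password, while ASNODENAME relies on a server-side proxy definition and SET ACCESS relies on an authorization rule set by the owning node.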
The no-query restore (NQR for the remainder of this article) has been around forever. Let’s not go into detail about which version introduced it or the changes made per release (if any) on the client or server side, but it has been in there since ADSM v3.
What I have noticed over the years is that the majority of people (customers and business partners) simply refer to NQR as “a faster restore”. When asked for details, most of them know about the multi-session capability, but that’s about it. In this article I’ll try to explain the differences between a classic restore and an NQR, as I was unable to find more than a basic explanation of this subject. If this sounds too basic to you, which I can imagine, feel free to skip this article.
First, let’s talk about the “standard query restore”, also known as the “classic restore” – called SQR for the remainder of this article.
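As a quick illustration of when each kind of restore kicks in – to the best of my knowledge of the client behavior, so verify against your client documentation:

```
# An unrestricted wildcard restore like this is normally
# eligible for a no-query restore (NQR):
dsmc restore "/home/*" -subdir=yes

# Adding options that require the client to build and filter
# the file list first (e.g. -pick, -inactive, -fromdate)
# forces a standard/classic query restore (SQR):
dsmc restore "/home/*" -subdir=yes -pick
dsmc restore "/home/*" -subdir=yes -inactive
```

In the NQR case the client sends the restore specification to the server and the server immediately starts streaming data; in the SQR case the client first queries the server for the full list of candidate files and then requests them.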
During normal operations, node replication data flows from the source server to the target server. In some cases it flows from the target server to the source server: for example, with TSM v7.1.1’s “recover damaged files from a replication server” feature, when a file on the source is marked as damaged, it is replicated back from the target. That’s all clear. But what happens when a volume on the target server is set to destroyed? The short answer is… nothing. Destroyed volumes are ignored by the node replication processes. Or so it seems. Continue reading →
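For reference, this is roughly the scenario described above, sketched with administrative commands (the volume name is a placeholder; adapt to your environment):

```
# On the target server, mark a volume as destroyed:
update volume /tsmstg/vol001 access=destroyed

# On the source server, run replication for the node:
replicate node HANNIBAL

# The replication process does not attempt to re-send the
# files whose only target copy sits on the destroyed volume.
```

This is what makes the behavior surprising: you might expect replication to notice the destroyed volume and repopulate it from the source, but it does not.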
After installing and configuring the TSM for SAN for Linux (i.e. Storage Agent for Linux) software, you can start the Storage Agent manually in the foreground with the dsmsta command to check whether your configuration works and to bring the Storage Agent online. Eventually, though, you will want to run the Storage Agent as a daemon and start it automatically at system restart. Instructions for that are of course in the Storage Agent User’s Guide, but there is a problem with using the Storage Agent daemon after this configuration. It is a minor problem, but I describe it here so that people can find a workaround. This was all tested on v7.1.1 (software and documentation).
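The manual foreground start looks like this (the installation path is the usual default on Linux; adjust if you installed elsewhere):

```
# Default Storage Agent installation directory on Linux:
cd /opt/tivoli/tsm/StorageAgent/bin

# Start the Storage Agent in the foreground; startup messages
# and any configuration errors appear directly on the console:
./dsmsta
```

Running it this way first is a good sanity check: if dsmsta comes up cleanly in the foreground, you know any later daemon-startup trouble is in the init integration, not in your dsmsta.opt or devconfig settings.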
From Appendix A (“Automating the storage agent startup”) of the Storage Agent Installation and User’s Guide: