Backup
For some time there have been plenty of examples of backing up Palo Alto Firewalls with curl commands (extracting the files using the XML API); however, that may not sit well with Windows administrators who would rather use PowerShell. As such I’ve put together the BackupPANNGFWConfig repo on GitHub, which contains scripts to obtain the API keys needed and then perform the backups for a series of firewalls.
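For context, the underlying XML API call that the scripts wrap looks roughly like the PowerShell sketch below; the hostname, API key and output path are placeholders rather than anything taken from the repo.

# Force TLS 1.2 and export the running configuration over the PAN-OS XML API
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$fwHost = 'firewall01.example.com'                          # placeholder hostname
$apiKey = '<API key generated with a type=keygen request>'  # placeholder key
$uri    = "https://$fwHost/api/?type=export&category=configuration&key=$apiKey"
Invoke-WebRequest -Uri $uri -OutFile "C:\Backups\$fwHost-config.xml"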
To get the scripts, drop by the link below; for the configuration, see the screenshot sequences in this post. You will need a basic understanding of Palo Alto Firewalls, PowerShell and Windows Server to work through these steps.
Super important note: this script is configured to use a TLS 1.2 connection to the firewall and will only allow connections to a firewall with a trusted security certificate. If you open the web management interface of the firewalls from the server that you are running the script from, you should see the ‘secure’ padlock icon in the address bar.
https://github.com/jamesfed/BackupPANNGFWConfig
With the scripts all configured you will then want to set up a scheduled task on the server to take these backups on a regular basis, as in the sketch below.
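A minimal sketch of registering such a task in PowerShell, assuming the script lives at C:\Scripts\BackupPANNGFWConfig.ps1 (the path, task name and 2am schedule are illustrative, not from the repo):

# Run the backup script every night at 2am via Task Scheduler
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\BackupPANNGFWConfig.ps1"'
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
Register-ScheduledTask -TaskName 'Backup PAN NGFW Config' -Action $action -Trigger $trigger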
As some readers may know, I currently work in Higher Education, and while all of the business data is trivial to back up, providing any level of backup service to students and academics is significantly harder. The challenges include the myriad of operating systems in use (Windows/OSX/Linux), the fact that the devices being backed up are inherently ‘untrusted’ (i.e. owned by the individual) and that they are often on networks (be it eduroam/public/home) that have no direct connectivity back to the internal trusted network.
Most enterprise-class backup systems just aren’t suited to this kind of environment: they either cannot be securely published through a firewall or carry exorbitant licensing costs for the number of devices to be protected (a few file servers vs 500+ student-owned laptops).
One solution to this issue cropped up at a recent trade show where Synology were demonstrating their DiskStation Manager NAS software, which sets itself apart from the traditional enterprise backup solutions with…
- Support for up to 16,000 users on high-end models (and 2,048 on the kind of model that we would consider using) with no extra licensing costs; users can have storage quotas set either by group or per user
- Secure remote access (simply publish a single port which can be protected by HTTPS for encryption in transit)
- Home grown backup clients for modern versions of Windows, OSX/macOS and Linux
- On the point of OSX/macOS, the Synology backup client does not rely on Time Machine and so avoids the need to be on the same network as your backup device
- The Btrfs file system, which automatically detects (and fixes) corrupted files through metadata, along with extensive snapshot support
- Up to 32 recovery points and real-time file protection (when connected to the DiskStation)
So time for some screenshots! Below we have the initial setup of DiskStation Manager and the installation of the client on a Windows PC.
Then restoring a file that has been deleted on the Windows PC; note that you can restore either individual files or entire folders to a point in time.
The same but for OSX…
So that’s all of the good; the only downside we have found thus far is that while shared drives can be protected with encryption, it is not possible to protect each individual home area (per user) with a unique encryption key, which opens up issues with data privacy. However, if you consider the following scenario…
- A business needs to provide backup to remote workers
- Those remote workers do not connect to the trusted network often
- Perhaps they don’t like VPNs/DirectAccess (which rules out using Offline Files)
- and those remote workers do not use a commercial ‘cloud’ service to protect their data
- Perhaps trusting a 3rd party to host the data is not an option
- The remote workers use OSX/macOS
…then using a Synology DiskStation should be a serious consideration for that business.
I’ve recently been testing our disaster recovery abilities, particularly restoring servers from the bare metal recovery feature of System Centre Data Protection Manager 2012.
When restoring one of our servers (which is a virtual machine) I was getting the error message below just before the drive data starts to copy over.
The system image restore failed.
Error details: Element not found. (0x80070490)
As it turns out, this error message relates to the restore program not detecting the required number of hard drives attached to the VM that I am restoring the data to.
The fix is simple: assign the extra drives required. The slide show below goes into this in a little more detail.
On further thought I remembered that this particular VM was originally a physical machine, hence the extra drive: the tiny partition (usually 100-500MB depending on the OS) that Windows creates during a first-time install, used for the bootloader and BitLocker. Either way, your server won’t work without it and neither will the restore.
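If the restore target is a Hyper-V VM, a quick way to give it the extra drive it expects is a small additional virtual disk. The sketch below assumes Hyper-V; the VM name, path and 500MB size are placeholders, and the size should match the original partition.

# Create a small dynamic disk and attach it to the VM being restored to
New-VHD -Path 'D:\VMs\RestoredServer-extra.vhdx' -SizeBytes 500MB -Dynamic
Add-VMHardDiskDrive -VMName 'RestoredServer' -Path 'D:\VMs\RestoredServer-extra.vhdx'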
If you are running System Centre Data Protection Manager (2010/2012) on Windows Server 2008 or 2008 R2, you may or may not know that when you run a Bare Metal/System State backup you are actually using the built-in Windows Server Backup feature to keep your data safe.
One thing that baffled me was why the backups on some servers were so large, right up until I figured out that they were backing up drives other than the C: (operating system) drive. As it turns out, when you perform a bare metal backup Windows looks at not only the OS drive but any other drives that might be relevant in a disaster recovery scenario.
So how do we find out which drives are included in the backup? The simple method is to go to the command line and enter this command:
wbadmin.exe start backup -allcritical -backuptarget:C:\test
Don’t worry, it won’t actually perform the backup, but it will give you a little list of which drives are included, as you can see below.
While setting up our new backup server (System Centre Data Protection Manager 2012) one of the issues we came across was that it was failing data synchronizations with an error message like this one:
Type: Synchronization
Status: Failed
Description: Changes for Volume C:\ on <servername><domainname> cannot be applied to \\?\Volume{4fac41a1-0f58-11dc-8993-806d6172696f}\ProgramData\Sophos\AutoUpdate\Cache\savxp\. (ID 112 Details: Cannot create a file when that file already exists (0x800700B7))
End time: 11/06/2012 14:29:16
Start time: 11/06/2012 14:28:14
Time elapsed: 00:01:01
Data transferred: 0 MB
Cluster node –
Source details: C:\
Protection group: <servername>
The simple solution here is to exclude the Sophos AutoUpdate folder from the DPM backup; it’s quite a pain if you have to do it for a whole lot of servers, but there’s not much else that can be done!
The screenshots below go into a little more detail.