
[Feedback] Need better support for large backup sets

I manage a development web server hosting roughly 20 websites in development at any given time. I use Cloud Backup to run daily backups of the entire vhost directory (/var/www/vhosts), currently 8.3 GB.

About once a month, most recently just now, I receive an email alert telling me that a backup was missed. If I ignore the warning and wait for it to finish the next day, it simply continues to fail, and it keeps failing until I get a Racker to look at the problem.

As it has been explained to me, this is due to how the system works: if a backup set is too large, the job times out before the backup can finish. The next day, it still has the previous day's backup in the queue, plus cleanup, plus the current day's backup, which it again can't complete within the time restrictions.

Cloud Backup is great, but adjustments need to be made so it can handle long-running backups without timing out or failing. 8.3 GB is not a lot of data, and it's pretty frustrating to have to deal with this over and over again and have nothing done to resolve it.
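In the meantime, the only stopgap I can think of is splitting the nightly job into one archive per site, so no single backup set is large enough to hit the timeout. A rough sketch, assuming a standard per-site layout under /var/www/vhosts (the function name and paths here are my own placeholders, not anything from Cloud Backup):

```shell
# backup_vhosts SRC DEST
# Tars each subdirectory of SRC into its own dated archive under DEST,
# so every site is a small, independently retryable backup set.
backup_vhosts() {
    src=$1
    dest=$2
    mkdir -p "$dest"
    for site in "$src"/*/; do
        name=$(basename "$site")
        # -C changes into SRC so the archive holds just the site directory.
        tar -czf "$dest/$name-$(date +%F).tar.gz" -C "$src" "$name"
    done
}
```

Run from cron (e.g. `backup_vhosts /var/www/vhosts /var/backups/vhosts`), then point the backup agent at the small archives instead of the whole tree. It's a workaround, though, not a fix for the underlying timeout behavior.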