With apologies to Edsger Dijkstra...
Usually when people talk about virtual machine snapshotting, they mean snapshotting both the server and any filesystems it's directly connected to. Although this is more complex than snapshotting just the virtual machine, it isn't that hard.
This works in some very narrow technical sense for a few cases, but it involves loss of data in every case. If you take a checkpoint every 30 minutes (or every 5, or whatever), then all the updates made during that interval are lost when you restore the snapshot and its storage to a consistent (but old) state. This means that all the checks you deposited during that time, or the bonuses your boss put you in for, or the books you ordered, or whatever, are lost - lost to the point that they probably have to be restored manually, to the tune of great customer dissatisfaction.
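The mechanics of that loss can be sketched in a few lines. This is a hypothetical toy model, not any real snapshot API: transactions committed after the last checkpoint simply vanish when the machine and its storage are rolled back.

```python
# Toy model of checkpoint/restore data loss. All names are illustrative.

class Server:
    def __init__(self):
        self.committed = []      # transactions the clients believe are durable
        self.checkpoint = []     # state captured at the last snapshot

    def commit(self, txn):
        self.committed.append(txn)

    def take_checkpoint(self):
        self.checkpoint = list(self.committed)

    def restore(self):
        # Roll the whole machine (and its storage) back to the snapshot;
        # everything committed since then is gone.
        lost = self.committed[len(self.checkpoint):]
        self.committed = list(self.checkpoint)
        return lost

s = Server()
s.commit("deposit check #1")
s.take_checkpoint()
s.commit("deposit check #2")   # committed *after* the checkpoint
s.commit("award bonus")
lost = s.restore()
print(lost)                    # → ['deposit check #2', 'award bonus']
```

Note that both lost transactions were fully committed from the clients' point of view - which is exactly why they'd have to be reconstructed by hand.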
In addition, if this application has connections - as a client or as a server - to other servers or clients, then although the application and its immediately mounted storage are now consistent, the wider world is not. Unless you take simultaneous snapshots of this virtual machine and everything it connects with (some of which may be outside your enterprise), and then restore that entire world to the same older state, there are likely to be many client/server connections which will no longer work - because the client and server are in mutually inconsistent states.
The worst case of this is if you have a Service Oriented Architecture, where any given server is only a small part of the overall service - every service has connections to something else all the time, and to make matters worse, the clients and/or servers are often outside your own enterprise.
And, of course, don't forget that you lost transactions in the process too. So, a reboot interval of 1 to 3 minutes sounds really good by comparison, because all you'll lose in that case is transactions that were not yet committed - many fewer than the number lost by rolling back to the previous checkpoint.
As an example of a common special case where this obviously doesn't work, imagine that the server in question is a file server. So, you restore the virtual machine and all its storage (the file server) to some older state. Now all the connected applications which _thought_ they had committed some particular piece of work (a spreadsheet, a database transaction) have just had all that work undone. And, depending on the file server protocol and the application, bad things will happen: certainly loss of data, and probably some of the applications will create corrupt data, since updates they thought they'd made are now gone - unbeknownst to them. This corrupt data can cause any number of problems - inability to make further updates and cascading application crashes are all possibilities.
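The file-server scenario above can be sketched as a toy interaction (hypothetical names, not a real file-server protocol): the client gets an acknowledgment that its write committed, the server is then restored to an older snapshot, and the client's "known" data silently no longer exists.

```python
# Toy model: a file server restored to an older snapshot out from under
# its clients. Names and the ack protocol are illustrative only.

class FileServer:
    def __init__(self):
        self.files = {}
        self.snapshot = {}

    def take_snapshot(self):
        self.snapshot = dict(self.files)

    def write(self, name, data):
        self.files[name] = data
        return "committed"       # the client takes this as durable

    def restore(self):
        # Roll the server's storage back to the snapshot.
        self.files = dict(self.snapshot)

server = FileServer()
server.take_snapshot()

ack = server.write("ledger.xls", "balance=100")
assert ack == "committed"        # the client now *believes* the data is safe

server.restore()                 # VM + storage rolled back to the snapshot

# The client, unaware of the rollback, reads back a file it "knows" it
# wrote - and finds nothing there.
base = server.files.get("ledger.xls")
print(base)                      # → None: committed work has vanished
```

Any further client updates built on that acknowledged-but-vanished write are exactly the corrupt data the paragraph above describes.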
Or what if it's a client of a file server? The file server is a separate machine (possibly virtual, possibly real, possibly an appliance). Then you can't put the file server's storage back to a known state without restoring all of its clients back to the same consistent state - and if you somehow did, then _all_ of them would now suffer data loss.
Not a very pretty picture.
There are some few cases where you can isolate the application from the "real world" and snapshot the whole "mini-enterprise" in a synchronous way. Those are mostly limited to large scale scientific applications. Given how hard it is to make them more available in any other way, this is a good thing. But, its a practice with narrow applicability. After reading the paragraphs above, perhaps you can see why...