A file geodatabase is essentially a database without the RDBMS. That buys better performance, but it also gives up the ACID properties of an RDBMS (along with the logging and backup-and-restore procedures, unfortunately).
File corruption can happen. The best way to insulate yourself is to make regular backups. It also wouldn't hurt to run some sort of validation procedure that visits every row in every table, since a backup won't help if corruption is introduced and propagates into all your backups. (I had a client lose a 9 TB RDBMS instance when a mirroring backup process failed to detect filesystem corruption and overwrote the last good backup with random garbage; two weeks later they had to restart a 15-week load procedure from CD and DVD media.)
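A minimal sketch of what such a validation pass might look like: walk every row of every table so that corruption surfaces as a read error now, rather than silently ending up in your backups. The `get_rows` callables here are stand-ins (an assumption, not a real API); with a file geodatabase you would plug in something like `arcpy.da.SearchCursor` iterators instead.

```python
def validate_tables(tables):
    """Visit every row in every table, forcing a full read.

    tables: dict mapping a table name to a zero-argument callable
            that returns an iterable of rows (hypothetical hookup;
            in practice this would wrap your geodatabase cursors).
    Returns a dict mapping each table name to either
    ('ok', row_count) or ('error', message).
    """
    results = {}
    for name, get_rows in tables.items():
        count = 0
        try:
            for _row in get_rows():
                count += 1  # touching the row is what forces the read
            results[name] = ('ok', count)
        except Exception as exc:  # a corrupt page typically raises here
            results[name] = ('error', str(exc))
    return results
```

Run it from a scheduled task right before each backup; if any table comes back with an error, stop the backup rotation before the bad copy overwrites a good one.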
- V