We get data from external agencies, which is currently a manual process. We are working on automating this using Python. The first script looks in the download folder for shapefiles and tables and updates our SQL Server database, which is the main data source serving data to the public. This part is working well.
The second part takes the same data and updates an existing file geodatabase, which is used extensively by internal users and is sometimes even faster to work from than a SQL Server connection. I have a portion of code that deletes the data from the existing shapefiles which basically leads to an empty table, and a second portion that appends data to that table.
My question is: do I have to worry about schema locks, since all I am doing is erasing the table rows and appending data to the table, leaving the schema unchanged?
I would think not, but the best answer will come from making some copies and testing it out. Just to double-check: when you say "deletes the data from the existing shapefiles which basically leads to an empty table," you are purging the file or enterprise geodatabase table, right? Or are the shapefiles themselves being emptied? Especially if the former, use Truncate Table to purge the table of records; that is what it is designed for.
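A minimal sketch of that advice (paths and table names are hypothetical). TruncateTable removes all rows much faster than a row-by-row delete, and neither it nor Append changes the schema; checking TestSchemaLock first is a cheap safety net against another process holding the table:

```python
# Hedged sketch of a truncate-then-append refresh. Paths are
# hypothetical; the import is guarded because arcpy only exists
# inside an ArcGIS Python environment.
try:
    import arcpy
except ImportError:
    arcpy = None

def refresh(target: str, source: str) -> bool:
    """Purge target, then load every row from source.

    Returns False (leaving the data alone) if another process holds
    a lock that would block an exclusive edit on target.
    """
    if arcpy is None:
        raise RuntimeError("arcpy is required to run this sketch")
    if not arcpy.TestSchemaLock(target):
        return False  # someone else is holding the table
    arcpy.management.TruncateTable(target)
    # "NO_TEST" skips field mapping; schemas are assumed identical
    arcpy.management.Append(source, target, "NO_TEST")
    return True

if __name__ == "__main__" and arcpy is not None:
    refresh(r"C:\gis\internal.gdb\parcels",    # hypothetical
            r"C:\gis\downloads\parcels.shp")   # hypothetical
```

Note that TruncateTable does not work on versioned data, so this pattern fits a plain file geodatabase target well.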
Since you are going from an enterprise (SDE) geodatabase to a file geodatabase, have you thought about using one-way replication?
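For reference, a rough sketch of how that one-way replica might be set up and synchronized with arcpy (connection file, dataset, and replica names are all hypothetical, and the SDE data must meet the replication prerequisites, such as GlobalIDs and versioning or archiving):

```python
# Hedged sketch of one-way replication from SDE to a file gdb.
# All names below are hypothetical; arcpy only exists inside an
# ArcGIS Python environment, so the import is guarded.
try:
    import arcpy
except ImportError:
    arcpy = None

SDE = r"C:\connections\prod.sde"   # hypothetical connection file
FGDB = r"C:\gis\internal.gdb"      # hypothetical target file gdb
REPLICA = "InternalOneWay"         # hypothetical replica name

def create_replica_once() -> None:
    """Run once to create the parent/child replica pair."""
    arcpy.management.CreateReplica(
        SDE + r"\prod.DBO.parcels",  # hypothetical dataset
        "ONE_WAY_REPLICA",
        FGDB,
        REPLICA,
    )

def sync_nightly() -> None:
    """Run on a schedule: push SDE edits down to the file gdb."""
    arcpy.management.SynchronizeChanges(
        SDE, REPLICA, FGDB, "FROM_GEODATABASE1_TO_2")

if __name__ == "__main__" and arcpy is not None:
    sync_nightly()
```

The appeal over truncate-and-append is that only the changed rows move, instead of reloading every table on every run.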
I have a similar process where I basically replace my file geodatabase, which is used for ArcGIS Server and user access. Because I do more than a simple replace, I actually do all my work on a temp file geodatabase and then swap it in for my working one. I occasionally get file locks (not schema locks) and am thinking the one-way replica is the way to go. Just throwing that out there in case you have not considered it yet.
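The temp-then-swap pattern above can be sketched without arcpy at all, since a file geodatabase is just a directory on disk: build the new copy in a scratch location, then swap directory names so readers only ever see a complete geodatabase (paths hypothetical; the rename is exactly where a reader's file lock would make the swap fail):

```python
import os
import shutil
import tempfile

def swap_in(new_gdb: str, live_gdb: str) -> None:
    """Replace live_gdb with new_gdb, briefly keeping the old copy
    as *.old so a failed rename can be rolled back by hand.

    os.rename on the live gdb will fail if another process holds a
    file lock inside it, the failure mode described above.
    """
    old = live_gdb + ".old"
    if os.path.exists(old):
        shutil.rmtree(old)        # clear leftovers from a prior run
    if os.path.exists(live_gdb):
        os.rename(live_gdb, old)  # fails here if a reader holds a lock
    os.rename(new_gdb, live_gdb)
    shutil.rmtree(old, ignore_errors=True)

# tiny demo with plain directories standing in for geodatabases
if __name__ == "__main__":
    root = tempfile.mkdtemp()
    live = os.path.join(root, "working.gdb")
    temp = os.path.join(root, "temp.gdb")
    os.mkdir(live)
    os.mkdir(temp)
    open(os.path.join(temp, "marker"), "w").close()
    swap_in(temp, live)
    print(os.path.exists(os.path.join(live, "marker")))  # → True
```

Keeping the old copy until the swap succeeds means a locked file leaves you with the previous working gdb intact rather than a half-replaced one.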