I currently have a Python script that uploads CSVs pulled from a database to AGOL. These CSV items are then used to update hosted feature layers on ArcGIS Online via another Python script (an AGOL Notebook).
The notebook script updates the hosted feature layers (using the FeatureLayerCollection.manager.overwrite method) with any new values from the uploaded CSVs. The issue I'm seeing is that if a CSV has more records than the target feature layer, those extra records do not get added as expected. The opposite is also true when a CSV has fewer records: the record count in the target hosted feature layer stays the same.
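For reference, here's a trimmed-down sketch of what the overwrite step in my notebook does (the item IDs are placeholders):

```python
from arcgis.gis import GIS
from arcgis.features import FeatureLayerCollection

gis = GIS("home")  # running inside an AGOL Notebook

# Placeholder item IDs -- swap in your own
csv_item = gis.content.get("CSV_ITEM_ID")
target_item = gis.content.get("FEATURE_LAYER_ITEM_ID")

# Overwrite the hosted feature layer with the freshly uploaded CSV
flc = FeatureLayerCollection.fromitem(target_item)
result = flc.manager.overwrite(csv_item.download())
print(result)
```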
Has anyone experienced this issue when using notebooks on AGOL?
You would have to append records. overwrite only works on existing records, not non-existent ones.
We didn't use that exact process, but we once tested a script that was similar enough, i.e., pulling data from a database and bulk-uploading lots of layers using overwrite. I don't know that I ever saw the precise problem you're describing, but I can tell you that we hit plenty of other problems and soon abandoned the script entirely.
It ought to be able to handle more or fewer rows, as the documentation states, but you have to be careful when overwriting a FeatureLayerCollection, because you can change virtually everything about it. And because AGOL's interpretation of a CSV's fields is not always the best, you may end up overwriting a table with different field types than it had before, which is sure to cascade problems down to any map or app that references the layer.
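One way to check what AGOL will infer before you overwrite anything is to run the analyze call on the CSV item and inspect the publish parameters. A quick sketch (the item ID is a placeholder):

```python
from arcgis.gis import GIS

gis = GIS("home")
csv_item = gis.content.get("CSV_ITEM_ID")  # placeholder item ID

# analyze() returns the publish parameters AGOL would use for this CSV,
# including the field names and types it inferred
analyzed = gis.content.analyze(item=csv_item, file_type="csv")
print(analyzed["publishParameters"])
```

If the inferred types don't match the existing layer's schema, that's your warning sign before an overwrite.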
As @DanPatterson notes, append is really what you want here, provided your input CSVs have a unique identifier column to differentiate adds from updates. The key benefit is that an append won't change field types out from under you.
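Something like this, assuming the target layer supports append and has a unique index on the matching field (the item IDs and field name are placeholders):

```python
from arcgis.gis import GIS

gis = GIS("home")

csv_item = gis.content.get("CSV_ITEM_ID")       # placeholder item ID
target_item = gis.content.get("LAYER_ITEM_ID")  # placeholder item ID
layer = target_item.layers[0]

# AGOL needs publish parameters to know how to read the CSV's fields
analyzed = gis.content.analyze(item=csv_item, file_type="csv")

# upsert=True inserts rows that don't exist yet and updates the ones that do,
# matched on your unique ID column ("row_id" is a placeholder); the target
# layer needs a unique index on that field for the match to work
layer.append(
    item_id=csv_item.id,
    upload_format="csv",
    source_info=analyzed["publishParameters"],
    upsert=True,
    upsert_matching_field="row_id",
)
```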
If you don't have a unique identifier to use with append, I'd suggest reading the CSVs into a dataframe first using arcgis.features.GeoAccessor.from_table (or from_xy, if the data is spatial), then truncating the target layer and appending the new records from the dataframe.
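A rough sketch of that truncate-and-reload approach for a spatial CSV (the filename, coordinate columns, and item ID are placeholders):

```python
import pandas as pd
from arcgis.gis import GIS
from arcgis.features import GeoAccessor

gis = GIS("home")
target_item = gis.content.get("LAYER_ITEM_ID")  # placeholder item ID
layer = target_item.layers[0]

# Build a spatially enabled dataframe from the CSV
# ("lon"/"lat" are placeholders for your coordinate columns)
df = pd.read_csv("your_data.csv")
sdf = GeoAccessor.from_xy(df, x_column="lon", y_column="lat", sr=4326)

# Remove every existing record, then load the fresh set
layer.manager.truncate()
layer.edit_features(adds=sdf.spatial.to_featureset())
```

For large CSVs you'd probably want to chunk the adds into batches, since edit_features sends everything in a single request.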