The regression line in cross validation excludes some of the extreme values when fitting the line, which is why it differs from the line Excel computes. Sorry, I forgot to mention this earlier. From the help:
"This procedure first fits a standard linear regression line to the scatterplot. Next, any points that are more than two standard deviations above or below the regression line are removed, and a new regression equation is calculated. This procedure ensures that a few outliers will not corrupt the entire regression equation."
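The procedure in the quote can be sketched in a few lines. This is a hypothetical standalone version for illustration only; Geostatistical Analyst's internal implementation may differ in its details.

```python
import numpy as np

def trimmed_regression(x, y, n_std=2.0):
    """Fit a line, drop points more than n_std standard deviations
    above or below it, then refit on the remaining points."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    # First pass: ordinary least-squares line through all points.
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    # Keep only points within n_std standard deviations of the line.
    keep = np.abs(residuals) <= n_std * residuals.std()
    # Second pass: regression equation recalculated on trimmed data.
    return np.polyfit(x[keep], y[keep], 1)

# Example: one wild outlier barely moves the trimmed line.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(0, 0.1, 50)
y[10] += 30  # inject an outlier
slope, intercept = trimmed_regression(x, y)
```

With the outlier removed on the second pass, the fitted slope and intercept stay close to the true values (2 and 1), which is exactly why this line won't match a plain Excel trendline through all the points.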
Regarding whether to do validation or cross validation, you have a few choices. Validation is the most statistically defensible methodology because it validates against data that was completely withheld from model fitting, but it means some of your data never contributes to the model. Cross validation, on the other hand, uses all of the data to build the model but then validates against that same data, so there is a bit of double-dipping. The double-dipping usually isn't a problem because no individual point should have too much influence on the fitted model.
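The difference in how the two workflows use the data can be shown with a toy example. The inverse-distance-weighted predictor below is just a simple stand-in for a geostatistical model, and all names and data are made up for illustration.

```python
import numpy as np

def idw_predict(train_x, train_y, query_x, power=2.0):
    """Predict at query_x as an inverse-distance-weighted
    average of the training values (stand-in for kriging)."""
    d = np.abs(train_x[:, None] - query_x[None, :])
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return (w * train_y[:, None]).sum(axis=0) / w.sum(axis=0)

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 40)
y = np.sin(x) + rng.normal(0, 0.05, 40)

# Validation: fit on 70% of the data, score on the withheld 30%.
# The withheld points never touch the model.
idx = rng.permutation(len(x))
train, test = idx[:28], idx[28:]
val_errors = y[test] - idw_predict(x[train], y[train], x[test])

# Cross validation: each point is predicted from all the others,
# so every point participates in both fitting and scoring.
cv_errors = np.array([
    y[i] - idw_predict(np.delete(x, i), np.delete(y, i), x[i:i+1])[0]
    for i in range(len(x))
])
rmse_val = np.sqrt(np.mean(val_errors ** 2))
rmse_cv = np.sqrt(np.mean(cv_errors ** 2))
```

The two RMSE values are usually similar here, which reflects the point above: because no single point dominates the model, reusing the data in cross validation doesn't badly bias the error estimate.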
The third option is to use a validation workflow to decide the parameters of the model and then apply that model to the entire dataset. To do this, perform the full validation workflow, then run the Create Geostatistical Layer tool, providing the geostatistical layer used for validation as the model source and the entire dataset as the input data. This applies the parameters of the validation model to all of the data.
If, as you say, you're going to choose your model by its cross validation statistics, then I would probably just do cross validation and skip the full validation workflow. But it's up to you.